Joseph G. Davis

Born: October 8, 1953, Mysore, India
Nationality: Australian
Alma mater: University of Pittsburgh; Indian Institute of Management Ahmedabad; Calicut University
Known for: Crowdsourcing, human computation, knowledge management, ontologies, service computing
Awards: National Science Talent Scholar (India); IBM Faculty Research Award
Fields: Information systems, service computing, Semantic Web, knowledge management
Institutions: University of Sydney; University of Pittsburgh

Joseph G. Davis (born 1953) is an Indian-born Australian information systems researcher, Professor of Information Systems and Services, and Director of the Knowledge Discovery and Management Research Group (KDMRG) [1] at the University of Sydney in Sydney, Australia. He is known for his work on decision support systems, ontologies, semantic technologies, and technological and organizational approaches to discovering and sharing knowledge in organizations.


Biography

Davis completed his early education at St. Joseph's Boys' Higher Secondary School, Kozhikode. He completed his PhD at the University of Pittsburgh in 1986 under the supervision of William R. King. Prior to this, he earned a master's degree at the Indian Institute of Management, Ahmedabad (IIMA), India, and a Bachelor of Science (BSc) in Mathematics and Statistics at St. Joseph's College, Devagiri, Calicut University, as a National Science Talent Scholar.

He worked for four years in industry middle-management roles in India before starting his PhD research. He previously served in the Information Systems departments at Indiana University Bloomington, United States; the University of Auckland, New Zealand; and the University of Wollongong, Australia. Davis has held visiting professorships or visiting researcher positions at the University of Pittsburgh and Carnegie Mellon University in Pittsburgh, Syracuse University, Moscow State University, IBM Research Laboratories in Bangalore, India, and Newcastle University, Newcastle upon Tyne, UK.

At the School of Information Technologies, University of Sydney, Davis was instrumental in launching the Knowledge Discovery and Management Research Group, starting the Master of Information Technology Management (MITM) [2] course, and revising the Information Systems major at the undergraduate level. He served as Associate Dean (International) in the Faculty of Engineering and Information Technologies at the University of Sydney from 2010 to 2013 and as Associate Head of the School of Information Technologies from 2002 to 2007. He is also the theme leader for Service Computing at the Centre for Distributed and High Performance Computing [3] at the University of Sydney.

Research

Davis's research interests and contributions span knowledge management (including ontologies and knowledge graphs), service computing, and crowdsourcing/human computation. His research has been funded by the Australian Research Council, the Carnegie Bosch Institute, and the Cooperative Research Centre (CRC) for Smart Services, among others. He was a National Science Talent Scholar in Mathematics in India and received the IBM Faculty Research Award in 2008. [4]

Davis has published two books and over one hundred refereed research papers in these and related areas.

The research performed by his lab, KDMRG, spans knowledge discovery and management, ontologies, data mining, service computing and service systems, crowdsourcing and human computation, and Linked Open Data.

Publications

Books
Selected Research Papers [5]

Related Research Articles

Knowledge representation and reasoning is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can utilize to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets.

Semantic network

A semantic network, or frame network, is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields. A semantic network may be instantiated as, for example, a graph database or a concept map.
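A graph of this kind can be sketched as a small data structure: vertices are concepts and each labeled edge records a semantic relation. The concept and relation names below are illustrative, not drawn from any particular knowledge base.

```python
# A minimal semantic network: a directed graph whose vertices are
# concepts and whose labeled edges are semantic relations.
from collections import defaultdict

class SemanticNetwork:
    def __init__(self):
        # maps a source concept to a list of (relation, target) pairs
        self.edges = defaultdict(list)

    def relate(self, source, relation, target):
        self.edges[source].append((relation, target))

    def relations_of(self, concept):
        return list(self.edges[concept])

net = SemanticNetwork()
net.relate("canary", "is_a", "bird")
net.relate("bird", "is_a", "animal")
net.relate("bird", "has_part", "wings")
net.relate("canary", "has_color", "yellow")

print(net.relations_of("bird"))
# [('is_a', 'animal'), ('has_part', 'wings')]
```

A concept map or graph database instantiation would store the same vertex/edge structure, only with persistence and query support on top.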

The Semantic Web is an extension of the World Wide Web through standards set by the World Wide Web Consortium (W3C). The goal of the Semantic Web is to make Internet data machine-readable.

In computer science and information science, an ontology encompasses a representation, formal naming and definition of the categories, properties and relations between the concepts, data and entities that substantiate one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of concepts and categories that represent the subject.

Semantic similarity is a metric defined over a set of documents or terms, where the idea of distance between items is based on the likeness of their meaning or semantic content as opposed to lexicographical similarity. These are mathematical tools used to estimate the strength of the semantic relationship between units of language, concepts or instances, through a numerical description obtained according to the comparison of information supporting their meaning or describing their nature. The term semantic similarity is often confused with semantic relatedness. Semantic relatedness includes any relation between two terms, while semantic similarity only includes "is a" relations. For example, "car" is similar to "bus", but is also related to "road" and "driving".
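The car/bus versus car/road distinction can be made concrete with a toy path-based measure over an "is a" taxonomy: two terms score higher the closer their lowest common ancestor is. The four-node taxonomy here is illustrative, not a real lexical resource.

```python
# A toy path-based semantic similarity over a tiny "is a" taxonomy.
taxonomy = {            # child -> parent ("is a" edges); illustrative
    "car": "vehicle",
    "bus": "vehicle",
    "vehicle": "artifact",
    "road": "artifact",
}

def ancestors(term):
    """Return [term, parent, grandparent, ...] up to the root."""
    chain = [term]
    while chain[-1] in taxonomy:
        chain.append(taxonomy[chain[-1]])
    return chain

def path_similarity(a, b):
    """1 / (1 + number of 'is a' edges via the lowest common ancestor)."""
    chain_a, chain_b = ancestors(a), ancestors(b)
    common = [t for t in chain_a if t in chain_b]
    if not common:
        return 0.0
    lca = common[0]
    dist = chain_a.index(lca) + chain_b.index(lca)
    return 1.0 / (1.0 + dist)

# "car" and "bus" meet at the close ancestor "vehicle" (similar),
# while "car" and "road" only meet at "artifact" (merely related).
assert path_similarity("car", "bus") > path_similarity("car", "road")
```

Semantic *relatedness* measures would instead consider any connecting relation (e.g. "car"–"driving"), not only the taxonomic "is a" edges used here.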

Vasant G. Honavar is an Indian-born American computer scientist, educator, and researcher in artificial intelligence, machine learning, big data, data science, causality, knowledge representation, bioinformatics, and health informatics.

Carole Goble

Carole Anne Goble is a British academic who is Professor of Computer Science at the University of Manchester. She is principal investigator (PI) of the myGrid, BioCatalogue and myExperiment projects and co-leads the Information Management Group (IMG) with Norman Paton.

Terminology extraction is a subtask of information extraction. The goal of terminology extraction is to automatically extract relevant terms from a given corpus.
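A very simple instance of this idea ranks frequent words and word pairs after stopword filtering and proposes the top-ranked items as candidate terms. The stopword list and corpus below are illustrative; real extractors add linguistic filters such as part-of-speech patterns.

```python
# A minimal frequency-based terminology extraction sketch.
from collections import Counter
import re

STOPWORDS = {"the", "a", "of", "is", "and", "to", "in", "for"}

def extract_terms(corpus, top_n=3):
    tokens = [t for t in re.findall(r"[a-z]+", corpus.lower())
              if t not in STOPWORDS]
    unigrams = Counter(tokens)
    # Note: bigrams are formed after stopword removal, so they may
    # span a removed stopword; a real extractor would be stricter.
    bigrams = Counter(zip(tokens, tokens[1:]))
    candidates = unigrams.copy()
    candidates.update({" ".join(b): c for b, c in bigrams.items()})
    return [term for term, _ in candidates.most_common(top_n)]

corpus = ("The ontology describes the domain. Ontology learning builds "
          "the ontology from text. Domain terms anchor the ontology.")
print(extract_terms(corpus))
```

On this corpus the top candidates are "ontology" and "domain", the two most frequent non-stopword tokens.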

Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between the concepts that these terms represent from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process.

Semantic analytics, also termed semantic relatedness, is the use of ontologies to analyze content in web resources. This field of research combines text analytics and Semantic Web technologies like RDF. Semantic analytics measures the relatedness of different ontological concepts.

Amit Sheth is a computer scientist at University of South Carolina in Columbia, South Carolina. He is the founding Director of the Artificial Intelligence Institute, and a Professor of Computer Science and Engineering. From 2007 to June 2019, he was the Lexis Nexis Ohio Eminent Scholar, director of the Ohio Center of Excellence in Knowledge-enabled Computing, and a Professor of Computer Science at Wright State University. Sheth's work has been cited by over 48,800 publications. He has an h-index of 106, which puts him among the top 100 computer scientists with the highest h-index. Prior to founding the Kno.e.sis Center, he served as the director of the Large Scale Distributed Information Systems Lab at the University of Georgia in Athens, Georgia.

A concept search is an automated information retrieval method that is used to search electronically stored unstructured text for information that is conceptually similar to the information provided in a search query. In other words, the ideas expressed in the information retrieved in response to a concept search query are relevant to the ideas contained in the text of the query.
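One toy way to realize this is to map words to shared concepts before matching, so that a query can retrieve a document that uses none of its literal words. The word-to-concept table here is a hand-made illustration; production systems derive such mappings from thesauri, ontologies, or statistical models.

```python
# A toy concept search: words are mapped to concepts via a small
# (illustrative) synonym table, and documents are ranked by overlap
# with the query's concepts rather than its literal words.
CONCEPTS = {
    "car": "vehicle", "automobile": "vehicle", "bus": "vehicle",
    "doctor": "medicine", "physician": "medicine", "clinic": "medicine",
}

def concepts_of(text):
    return {CONCEPTS.get(w, w) for w in text.lower().split()}

def concept_search(query, docs):
    q = concepts_of(query)
    scored = [(len(q & concepts_of(d)), d) for d in docs]
    scored.sort(key=lambda s: -s[0])
    return [d for score, d in scored if score > 0]

docs = ["the physician saw a patient", "an automobile needs fuel"]
print(concept_search("car repair", docs))
# the automobile document matches via the shared "vehicle" concept
```

A keyword search for "car" would miss the "automobile" document entirely; the concept mapping is what makes the conceptually similar document retrievable.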

Ontology engineering

In computer science, information science and systems engineering, ontology engineering is a field which studies the methods and methodologies for building ontologies, which are formal representations of a set of concepts within a domain and the relationships between those concepts. In a broader sense, this field also includes a knowledge construction of the domain using formal ontology representations such as OWL/RDF. A large-scale representation of abstract concepts such as actions, time, physical objects and beliefs would be an example of ontological engineering. Ontology engineering is one of the areas of applied ontology, and can be seen as an application of philosophical ontology. Core ideas and objectives of ontology engineering are also central in conceptual modeling.

The Semantic Sensor Web (SSW) is a marriage of sensor and Semantic Web technologies. The encoding of sensor descriptions and sensor observation data with Semantic Web languages enables more expressive representation, advanced access, and formal analysis of sensor resources. The SSW annotates sensor data with spatial, temporal, and thematic semantic metadata. This technique builds on current standardization efforts within the Open Geospatial Consortium's Sensor Web Enablement (SWE) and extends them with Semantic Web technologies to provide enhanced descriptions and access to sensor data.

Knowledge extraction is the creation of knowledge from structured and unstructured sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction (NLP) and ETL, the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge or the generation of a schema based on the source data.
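The "facilitates inferencing" requirement can be illustrated by flattening structured records into (subject, predicate, object) triples and then applying a simple transitive rule over them. All record fields and predicate names below are illustrative.

```python
# A minimal knowledge extraction sketch: structured records become
# (subject, predicate, object) triples, and a transitive rule over
# "part_of" shows how the result supports simple inferencing.
records = [
    {"name": "Sydney", "type": "city", "part_of": "Australia"},
    {"name": "Australia", "type": "country", "part_of": "Oceania"},
]

def to_triples(records):
    """Flatten each record into (subject, predicate, object) triples."""
    triples = set()
    for record in records:
        subject = record["name"]
        for predicate, obj in record.items():
            if predicate != "name":
                triples.add((subject, predicate, obj))
    return triples

def transitive_closure(triples, predicate):
    """Infer (a, p, c) whenever (a, p, b) and (b, p, c) already hold."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        for a, p, b in list(facts):
            for b2, p2, c in list(facts):
                if p == p2 == predicate and b == b2 and (a, p, c) not in facts:
                    facts.add((a, p, c))
                    changed = True
    return facts

facts = transitive_closure(to_triples(records), "part_of")
print(("Sydney", "part_of", "Oceania") in facts)
# True: inferred from Sydney -> Australia -> Oceania
```

A plain ETL pipeline would stop at the relational rows; the extra step here, deriving a fact not present in the source, is what distinguishes knowledge extraction.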

iPlant Collaborative

The iPlant Collaborative, recently renamed Cyverse, is a virtual organization created by a cooperative agreement funded by the US National Science Foundation (NSF) to create cyberinfrastructure for the plant sciences (botany). The NSF compared cyberinfrastructure to physical infrastructure, "... the distributed computer, information and communication technologies combined with the personnel and integrating components that provide a long-term platform to empower the modern scientific research endeavor". In September 2013 it was announced that the National Science Foundation had renewed iPlant's funding for a second 5-year term with an expansion of scope to all non-human life science research.

Flora-2 is an open source semantic rule-based system for knowledge representation and reasoning. The language of the system is derived from F-logic, HiLog, and Transaction logic. Being based on F-logic and HiLog implies that object-oriented syntax and higher-order representation are the major features of the system. Flora-2 also supports a form of defeasible reasoning called Logic Programming with Defaults and Argumentation Theories (LPDA). Applications include intelligent agents, Semantic Web, knowledge-bases networking, ontology management, integration of information, security policy analysis, automated database normalization, and more.

The Open Semantic Framework (OSF) is an integrated software stack using semantic technologies for knowledge management. It has a layered architecture that combines existing open source software with additional open source components developed specifically to provide a complete Web application framework. OSF is made available under the Apache 2 license.

The Computer Science Ontology (CSO) is an automatically generated taxonomy of research topics in the field of Computer Science. It was produced by the Open University in collaboration with Springer Nature by running an information extraction system over a large corpus of scientific articles. Several branches were manually improved by domain experts. The current version includes about 14K research topics and 160K semantic relationships.

Ontotext GraphDB

Ontotext GraphDB is a graph database and knowledge discovery tool compliant with RDF and SPARQL and available as a high-availability cluster. Ontotext GraphDB is used in various European research projects.

References

  1. Knowledge Discovery and Management Research Group (KDMRG) at the University of Sydney
  2. "Master of Information Technology Management (MITM), the University of Sydney". Archived from the original on 2017-09-11. Retrieved 2013-07-15.
  3. The Centre for Distributed and High Performance Computing at the University of Sydney
  4. 2008 Faculty Award recipients
  5. Joseph G. Davis: University of Sydney Google Scholar profile.