Computational informatics

Computational informatics is a subfield of informatics that emphasizes issues in the design of computing solutions rather than the underlying infrastructure. Computational informatics can also be interpreted as the use of computational methods in the information sciences.

Development

From a historical viewpoint, medical informatics scientists (also known as medical informaticians) began using artificial intelligence and Bayesian statistical methods in diagnosis and medical decision making as early as the 1970s. An example is the MYCIN system developed at Stanford University. The field has since evolved to use a wide range of computational methods and to interact with many scientific and other disciplinary domains.

Education

Several universities offer graduate programs in this area. One example is the Penn State College of Information Sciences and Technology. Another is the Hamburg University of Technology, which offers consecutive bachelor's and master's programs with an emphasis on computational techniques. Some programs target specific domains. For instance, the Biomedical Informatics Program at Stanford University focuses on technologies and methods for understanding biomedical data and improving health care.

In Tunisia, the University of Manouba offers a master's program called Intelligent and Decisional Informatics [1] that aims to cover all aspects of computational informatics.

Related Research Articles

Computer science: Study of the foundations and applications of computation

Computer science is the study of algorithmic processes, computational machines and computation itself. As a discipline, computer science spans a range of topics from theoretical studies of algorithms, computation and information to the practical issues of implementing computational systems in hardware and software.

Natural language processing: Field of computer science and linguistics

Natural language processing (NLP) is a subfield of linguistics, computer science, and artificial intelligence concerned with the interactions between computers and human language, in particular how to program computers to process and analyze large amounts of natural language data. The goal is a computer capable of "understanding" the contents of documents, including the contextual nuances of the language within them. The technology can then accurately extract information and insights contained in the documents as well as categorize and organize the documents themselves.
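
As a small, concrete illustration of processing natural language data, the following Python sketch (with an invented sample text) tokenizes a document and counts word frequencies, one of the most basic NLP operations:

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# Invented sample document, purely for illustration.
document = (
    "Natural language processing lets computers process natural "
    "language data. Processing language at scale requires robust tools."
)

tokens = tokenize(document)
frequencies = Counter(tokens)

# The most common tokens give a crude picture of what the text is about.
print(frequencies.most_common(5))
```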

In computer science and information science, an ontology encompasses a representation, formal naming and definition of the categories, properties and relations between the concepts, data and entities that substantiate one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of concepts and categories that represent the subject.
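
A minimal sketch of this idea in Python follows; the toy concepts and is-a relations are invented for illustration, and real ontologies use far richer formalisms such as OWL:

```python
# Hypothetical toy ontology: each concept maps to its parent (is-a) concept.
IS_A = {
    "mycin": "expert_system",
    "expert_system": "software",
    "bayesian_network": "graphical_model",
    "graphical_model": "model",
    "software": "artifact",
}

def ancestors(concept):
    """Walk the is-a chain upward, yielding every broader concept."""
    while concept in IS_A:
        concept = IS_A[concept]
        yield concept

def is_a(concept, category):
    """True if `concept` is (transitively) a kind of `category`."""
    return category in ancestors(concept)

print(is_a("mycin", "artifact"))             # True
print(is_a("bayesian_network", "software"))  # False
```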

Computational archaeology describes computer-based analytical methods for the study of long-term human behaviour and behavioural evolution. As with other sub-disciplines that have prefixed 'computational' to their name, the term is reserved for methods that could not realistically be performed without the aid of a computer.

A Bayesian network is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
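
The disease-and-symptom example can be worked through directly with Bayes' rule. The following Python sketch uses invented probabilities for a single disease and a single symptom, the smallest possible network of this kind:

```python
# A minimal two-node Bayesian network (Disease -> Symptom).
# All probabilities below are invented for illustration.
P_DISEASE = 0.01                      # prior P(disease present)
P_SYMPTOM_GIVEN = {True: 0.9,         # P(symptom | disease present)
                   False: 0.05}       # P(symptom | disease absent)

def p_disease_given_symptom():
    """Posterior P(disease | symptom observed), by Bayes' rule."""
    joint_present = P_DISEASE * P_SYMPTOM_GIVEN[True]
    joint_absent = (1 - P_DISEASE) * P_SYMPTOM_GIVEN[False]
    return joint_present / (joint_present + joint_absent)

# Observing the symptom raises the disease probability from 1% to ~15%.
print(f"P(disease | symptom) = {p_disease_given_symptom():.3f}")
```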

Machine learning: Study of algorithms that improve automatically through experience

Machine learning (ML) is the study of computer algorithms that improve automatically through experience and by the use of data. It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or infeasible to develop conventional algorithms to perform the needed tasks.
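
To make "improving with experience" concrete, here is a minimal Python sketch of one of the simplest such algorithms, a perceptron; the training data and learning rate are invented for illustration:

```python
# A minimal perceptron: a model whose weights improve with experience
# (repeated passes over labelled training data). Data are invented.
training_data = [  # (feature vector, label in {0, 1})
    ((2.0, 1.0), 1),
    ((1.5, 2.0), 1),
    ((-1.0, -0.5), 0),
    ((-2.0, 1.0), 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):                    # training epochs
    for features, label in training_data:
        activation = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if activation > 0 else 0
        error = label - prediction     # 0 when the model is already right
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, features)]
        bias += learning_rate * error

print(weights, bias)
print("prediction for (1.0, 1.0):",
      1 if sum(w * x for w, x in zip(weights, (1.0, 1.0))) + bias > 0 else 0)
```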

Text mining, also referred to as text data mining and roughly equivalent to text analytics, is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources." Written resources may include websites, books, emails, reviews, and articles. High-quality information is typically obtained by discerning patterns and trends through means such as statistical pattern learning. According to Hotho et al. (2005), text mining can be viewed from three perspectives: information extraction, data mining, and a knowledge discovery in databases (KDD) process. Text mining usually involves structuring the input text, deriving patterns within the structured data, and finally evaluating and interpreting the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling.
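
One classic way to surface such patterns is term weighting. The following Python sketch ranks the terms of one document by tf-idf over a tiny invented three-document corpus:

```python
import math
import re

# Three invented mini-documents, purely for illustration.
docs = [
    "gene expression in tumor cells",
    "tumor suppressor gene mutations",
    "weather patterns and rainfall cells",
]

tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]

def tf_idf(term, doc_tokens):
    """Weight a term highly if frequent here but rare across the corpus."""
    tf = doc_tokens.count(term) / len(doc_tokens)
    df = sum(1 for toks in tokenized if term in toks)
    idf = math.log(len(tokenized) / df)
    return tf * idf

# Rank the terms of the first document by tf-idf.
scores = {t: tf_idf(t, tokenized[0]) for t in set(tokenized[0])}
for term, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{term:12s} {score:.3f}")
```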

Health informatics: Applications of information processing concepts and machinery in medicine

Health informatics, also called healthcare informatics or biomedical informatics, is the field of science and engineering that applies informatics to medicine. The health domain provides an extremely wide variety of problems that can be tackled using computational techniques.

Computer Science & Engineering (CSE) is an academic program at many universities that comprises the scientific and engineering aspects of computing. CSE is also a term often used in Europe to translate the names of engineering informatics academic programs.

Biomedical text mining refers to the methods and study of how text mining may be applied to texts and literature of the biomedical and molecular biology domains. As a field of research, biomedical text mining incorporates ideas from natural language processing, bioinformatics, medical informatics and computational linguistics. The strategies developed through studies in this field are frequently applied to the biomedical and molecular biology literature available through services such as PubMed.
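
As a deliberately crude illustration, the following Python sketch extracts candidate gene symbols from an invented abstract using a single regular expression; production systems rely on curated lexicons and trained taggers instead:

```python
import re

# Invented abstract text; gene symbols follow the common all-caps pattern.
abstract = (
    "We show that BRCA1 and TP53 expression is altered in tumor samples, "
    "while EGFR remains unchanged."
)

# A crude pattern for human gene symbols: an uppercase letter followed by
# 1-7 uppercase letters or digits. It will produce false positives on
# other all-caps tokens in real text.
gene_pattern = re.compile(r"\b[A-Z][A-Z0-9]{1,7}\b")

mentions = gene_pattern.findall(abstract)
print(mentions)   # ['BRCA1', 'TP53', 'EGFR']
```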

Statistical relational learning (SRL) is a subdiscipline of artificial intelligence and machine learning that is concerned with domain models that exhibit both uncertainty and complex, relational structure. Note that SRL is sometimes called Relational Machine Learning (RML) in the literature. Typically, the knowledge representation formalisms developed in SRL use first-order logic to describe relational properties of a domain in a general manner and draw upon probabilistic graphical models to model the uncertainty; some also build upon the methods of inductive logic programming. Significant contributions to the field have been made since the late 1990s.
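
A minimal sketch of the flavor of such models follows, in the style of a Markov logic network: one weighted first-order rule, two people, and brute-force enumeration over the single unknown fact. The names, the rule, and the weight are all invented for illustration:

```python
import math
from itertools import product

# Evidence: alice smokes, and alice and bob are friends.
# Unknown: does bob smoke?
# Weighted first-order rule: friends(x, y) & smokes(x) -> smokes(y)
RULE_WEIGHT = 1.5

def satisfied_groundings(smokes, friends):
    """Count satisfied groundings of the rule over all person pairs."""
    people = list(smokes)
    return sum(
        1
        for x, y in product(people, repeat=2)
        if x != y and not (friends.get((x, y)) and smokes[x] and not smokes[y])
    )

friends = {("alice", "bob"): True, ("bob", "alice"): True}

# Enumerate the two possible worlds (bob smokes / does not smoke);
# each world's weight is exp(rule weight * satisfied groundings).
weights = {}
for bob_smokes in (False, True):
    smokes = {"alice": True, "bob": bob_smokes}
    weights[bob_smokes] = math.exp(
        RULE_WEIGHT * satisfied_groundings(smokes, friends))

z = sum(weights.values())
print(f"P(smokes(bob) | evidence) = {weights[True] / z:.3f}")  # ~0.818
```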

Informatics is the study of computational systems, especially those for data storage and retrieval. According to ACM Europe and Informatics Europe, informatics is synonymous with computer science and computing as a profession, in which the central notion is transformation of information. In other countries, the term "informatics" is used with a different meaning in the context of library science.

Jun'ichi Tsujii is a Japanese computer scientist specializing in natural language processing and text mining, particularly in the fields of biology and bioinformatics.

The following outline is provided as an overview of and topical guide to formal science:

Jason H. Moore is a translational bioinformatics scientist, biomedical informatician, and human geneticist. He is the Edward Rose Professor of Informatics and Director of the Institute for Biomedical Informatics at the Perelman School of Medicine at the University of Pennsylvania, where he is also Senior Associate Dean for Informatics and Director of the Division of Informatics in the Department of Biostatistics, Epidemiology, and Informatics.

The following outline is provided as an overview of and topical guide to natural language processing:

This glossary of artificial intelligence is a list of definitions of terms and concepts relevant to the study of artificial intelligence, its sub-disciplines, and related fields. Related glossaries include Glossary of computer science, Glossary of robotics, and Glossary of machine vision.

Outline of machine learning: Overview of and topical guide to machine learning

The following outline is provided as an overview of and topical guide to machine learning. Machine learning is a subfield of soft computing within computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence. In 1959, Arthur Samuel defined machine learning as a "field of study that gives computers the ability to learn without being explicitly programmed". Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. Such algorithms operate by building a model from an example training set of input observations in order to make data-driven predictions or decisions expressed as outputs, rather than following strictly static program instructions.

Yuval Shahar

Yuval Shahar, M.D., Ph.D., is an Israeli professor, physician, researcher, and computer scientist.

References

  1. "MR IDIAG (Informatique décisionnelle et intelligence appliquée à la gestion)" . Retrieved 2016-06-15.