Context model

A context model (or context modeling) defines how context data are structured and maintained, and it plays a key role in supporting efficient context management. [1] It aims to produce a formal or semi-formal description of the context information present in a context-aware system. In other words, the context is the environment surrounding the system, and a model provides the mathematical interface and a behavioral description of that environment.

It is used to represent the reusable context information of the components; the top-level classes consist of operating system, component container, hardware requirement, and software requirement.
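
A minimal sketch of how such reusable component context information might be structured is given below. The class and field names (ComponentContext, OperatingSystem, and so on) are hypothetical, chosen only to mirror the top-level classes named above, and are not taken from any particular framework.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: hypothetical names mirroring the top-level classes
# mentioned above (operating system, component container, hardware
# requirement, software requirement).

@dataclass
class OperatingSystem:
    name: str       # e.g. "Linux"
    version: str    # e.g. "6.8"

@dataclass
class ComponentContainer:
    name: str       # e.g. "Docker"
    version: str

@dataclass
class HardwareRequirement:
    min_cpu_cores: int
    min_memory_mb: int

@dataclass
class SoftwareRequirement:
    package: str
    min_version: str

@dataclass
class ComponentContext:
    """Reusable context information attached to a software component."""
    operating_system: OperatingSystem
    container: ComponentContainer
    hardware: HardwareRequirement
    software: List[SoftwareRequirement] = field(default_factory=list)

# Example: the context a component declares so a context-aware system can
# check whether the component fits the current environment.
ctx = ComponentContext(
    operating_system=OperatingSystem("Linux", "6.8"),
    container=ComponentContainer("Docker", "27.0"),
    hardware=HardwareRequirement(min_cpu_cores=2, min_memory_mb=4096),
    software=[SoftwareRequirement("python", "3.11")],
)
```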

A key role of a context model is to simplify and add structure to the task of developing context-aware applications. [2] [3]

Examples of context models

The Unified Modeling Language, as used in systems engineering, defines a context model as the physical scope of the system being designed, which can include the user as well as the environment and other actors. A system context diagram represents the context graphically.

Several examples of context models occur in other domains.

Related Research Articles

Knowledge representation and reasoning is the field of artificial intelligence (AI) dedicated to representing information about the world in a form that a computer system can use to solve complex tasks such as diagnosing a medical condition or having a dialog in a natural language. Knowledge representation incorporates findings from psychology about how humans solve problems and represent knowledge in order to design formalisms that will make complex systems easier to design and build. Knowledge representation and reasoning also incorporates findings from logic to automate various kinds of reasoning, such as the application of rules or the relations of sets and subsets.

<span class="mw-page-title-main">Systemic functional grammar</span> Primary tenets

Systemic functional grammar (SFG) is a form of grammatical description originated by Michael Halliday. It is part of a social semiotic approach to language called systemic functional linguistics. In these two terms, systemic refers to the view of language as "a network of systems, or interrelated sets of options for making meaning"; functional refers to Halliday's view that language is as it is because of what it has evolved to do. Thus, what he refers to as the multidimensional architecture of language "reflects the multidimensional nature of human experience and interpersonal relations."

<span class="mw-page-title-main">WordNet</span> Computational lexicon of English

WordNet is a lexical database that links words into semantic relations, including synonyms, hyponyms, and meronyms. The synonyms are grouped into synsets with short definitions and usage examples. It can thus be seen as a combination and extension of a dictionary and thesaurus. While it is accessible to human users via a web browser, its primary use is in automatic text analysis and artificial intelligence applications. It was first created in English; the English WordNet database and software tools have been released under a BSD-style license and are freely available for download from the WordNet website. There are now WordNets in more than 200 languages.
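
For illustration, the sketch below queries WordNet through NLTK's corpus reader, assuming NLTK and its WordNet data are installed; it shows synset lookup and one semantic relation (hypernymy), and is just one common way to access the database programmatically.

```python
# Minimal sketch using NLTK's WordNet reader; assumes `pip install nltk`.
import nltk
nltk.download("wordnet", quiet=True)

from nltk.corpus import wordnet as wn

# Each sense of "car" is a synset: a group of synonymous lemmas with a gloss.
for synset in wn.synsets("car"):
    print(synset.name(), "-", synset.definition())
    print("  lemmas:", [lemma.name() for lemma in synset.lemmas()])

# Semantic relations link synsets to one another, e.g. hypernyms ("is a").
car = wn.synset("car.n.01")
print("hypernyms of car.n.01:", [s.name() for s in car.hypernyms()])
```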

In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology.

Word-sense disambiguation (WSD) is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious and automatic, but it can come to conscious attention when ambiguity impairs the clarity of communication, given the pervasive polysemy in natural language. In computational linguistics, it is an open problem that affects other tasks, such as discourse analysis, improving the relevance of search engines, anaphora resolution, coherence, and inference.
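
As a small illustration, NLTK includes a simplified implementation of the Lesk algorithm, one classic dictionary-based WSD heuristic. The sketch below assumes NLTK and its WordNet data are installed; it is a weak baseline, not representative of state-of-the-art disambiguation.

```python
# Simplified Lesk picks the WordNet sense whose gloss overlaps most with the
# surrounding context words.  It is a crude heuristic and may return an
# unintuitive sense; the point here is only to show the task's shape.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.wsd import lesk

context1 = "I went to the bank to deposit my money".split()
sense1 = lesk(context1, "bank")
print(sense1, "-", sense1.definition())

context2 = "We sat on the bank of the river and watched the water".split()
sense2 = lesk(context2, "bank")
print(sense2, "-", sense2.definition())
```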

Parsing, syntax analysis, or syntactic analysis is the process of analyzing a string of symbols, either in natural language, computer languages or data structures, conforming to the rules of a formal grammar. The term parsing comes from Latin pars (orationis), meaning part.

Tree-adjoining grammar (TAG) is a grammar formalism defined by Aravind Joshi. Tree-adjoining grammars are somewhat similar to context-free grammars, but the elementary unit of rewriting is the tree rather than the symbol. Whereas context-free grammars have rules for rewriting symbols as strings of other symbols, tree-adjoining grammars have rules for rewriting the nodes of trees as other trees.
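
To make the contrast concrete, the sketch below parses a sentence with a toy context-free grammar using NLTK's chart parser; the grammar and sentence are invented for illustration. A tree-adjoining grammar parser would instead combine elementary trees by substitution and adjunction rather than rewrite symbols.

```python
# Parsing a sentence with a toy context-free grammar via NLTK's chart parser.
import nltk

grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Det N
VP -> V NP
Det -> 'the' | 'a'
N -> 'dog' | 'cat'
V -> 'chased'
""")

parser = nltk.ChartParser(grammar)
tokens = "the dog chased a cat".split()

# Print every parse tree licensed by the grammar for this token sequence.
for tree in parser.parse(tokens):
    print(tree)
```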

Semantic similarity is a metric defined over a set of documents or terms, where the idea of distance between items is based on the likeness of their meaning or semantic content as opposed to lexicographical similarity. These are mathematical tools used to estimate the strength of the semantic relationship between units of language, concepts or instances, through a numerical description obtained according to the comparison of information supporting their meaning or describing their nature. The term semantic similarity is often confused with semantic relatedness. Semantic relatedness includes any relation between two terms, while semantic similarity only includes "is a" relations. For example, "car" is similar to "bus", but is also related to "road" and "driving".
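
As a rough illustration of the distinction, the sketch below uses WordNet path similarity, a simple taxonomy-based "is a" measure, via NLTK (assuming NLTK and its WordNet data are installed); many other similarity and relatedness measures exist.

```python
# Path similarity scores closeness in WordNet's noun hierarchy; it is
# typically higher for car/bus ("is a" similarity) than for car/road,
# which are related but not similar in the sense described above.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

car = wn.synset("car.n.01")
bus = wn.synset("bus.n.01")
road = wn.synset("road.n.01")

print("car ~ bus :", car.path_similarity(bus))
print("car ~ road:", car.path_similarity(road))
```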

In linguistics, linguistic competence is the system of unconscious knowledge that one has when one knows a language. It is distinguished from linguistic performance, which includes all other factors that allow one to use one's language in practice.

<span class="mw-page-title-main">Syntax (programming languages)</span> Set of rules defining correctly structured programs

In computer science, the syntax of a computer language is the rules that define the combinations of symbols that are considered to be correctly structured statements or expressions in that language. This applies both to programming languages, where the document represents source code, and to markup languages, where the document represents data.

Speech segmentation is the process of identifying the boundaries between words, syllables, or phonemes in spoken natural languages. The term applies both to the mental processes used by humans, and to artificial processes of natural language processing.

In philosophy, the term formal ontology refers to an ontology defined by axioms in a formal language, with the goal of providing an unbiased view of reality that can help the modeler of domain- or application-specific ontologies avoid possibly erroneous ontological assumptions encountered in modeling large-scale ontologies.

Contextual design (CD) is a user-centered design process developed by Hugh Beyer and Karen Holtzblatt. It incorporates ethnographic methods for gathering data relevant to the product via field studies, rationalizing workflows, and designing human–computer interfaces. In practice, this means that researchers aggregate data from customers in the field, where people live and work, and apply these findings to a final product. Contextual design can be seen as an alternative to engineering- and feature-driven models of creating new systems.

<span class="mw-page-title-main">Biolinguistics</span> Study of the biology and evolution of language

Biolinguistics can be defined as the study of the biology and evolution of language. It is highly interdisciplinary, drawing on fields such as biology, linguistics, psychology, anthropology, mathematics, and neurolinguistics to explain the formation of language. It is important because it seeks to yield a framework by which we can understand the fundamentals of the faculty of language. The field was first introduced by Massimo Piattelli-Palmarini, professor of Linguistics and Cognitive Science at the University of Arizona, in 1971 at an international meeting at the Massachusetts Institute of Technology (MIT). Biolinguistics, also called the biolinguistic enterprise or the biolinguistic approach, is believed to have its origins in Noam Chomsky's and Eric Lenneberg's work on language acquisition that began in the 1950s as a reaction to the then-dominant behaviorist paradigm. Fundamentally, biolinguistics challenges the view of human language acquisition as a behavior based on stimulus-response interactions and associations. Chomsky and Lenneberg argued against it by appealing to the innate knowledge of language. In the 1960s, Chomsky proposed the Language Acquisition Device (LAD), a hypothetical tool for language acquisition that only humans are born with. Similarly, Lenneberg (1967) formulated the Critical Period Hypothesis, the main idea of which is that language acquisition is biologically constrained. These works were regarded as pioneering in the shaping of biolinguistic thought, marking the beginning of a paradigm change in the study of language.

DOGMA, short for Developing Ontology-Grounded Methods and Applications, is the name of a research project in progress at Vrije Universiteit Brussel's STARLab (Semantics Technology and Applications Research Laboratory). It is an internally funded project concerned with the more general aspects of extracting, storing, representing, and browsing information.

In semiotics, linguistics, sociology and anthropology, context refers to those objects or entities which surround a focal event, in these disciplines typically a communicative event, of some kind. Context is "a frame that surrounds the event and provides resources for its appropriate interpretation". It is thus a relative concept, only definable with respect to some focal event within a frame, not independently of that frame.

Lexical Markup Framework (LMF) is the International Organization for Standardization ISO/TC 37 standard for language resource management in natural language processing (NLP) and machine-readable dictionary (MRD) lexicons. Its scope is the standardization of principles and methods relating to language resources in the context of multilingual communication.

<span class="mw-page-title-main">Jan Dietz</span> Dutch computer scientist

Jean Leonardus Gerardus (Jan) Dietz is a Dutch information systems researcher and Professor Emeritus of Information Systems Design at the Delft University of Technology, known for the development of the Design & Engineering Methodology for Organisations and for his work on enterprise engineering.

Knowledge extraction is the creation of knowledge from structured and unstructured sources. The resulting knowledge needs to be in a machine-readable and machine-interpretable format and must represent knowledge in a manner that facilitates inferencing. Although it is methodically similar to information extraction (NLP) and ETL, the main criterion is that the extraction result goes beyond the creation of structured information or the transformation into a relational schema. It requires either the reuse of existing formal knowledge or the generation of a schema based on the source data.

The following outline is provided as an overview of and topical guide to natural-language processing:

References

  1. Nicolas Guelfi; Anthony Savidis (2006). Rapid Integration of Software Engineering Techniques. Springer. p. 131. ISBN 3-540-34063-7.
  2. Abdelsalam Helal; Mounir Mokhtari; Bessam Abdulrazak (2008). The Engineering Handbook of Smart Technology for Aging, Disability and Independence. Wiley. p. 592. ISBN 978-0-471-71155-1.
  3. Trullemans, Sandra; Van Holsbeeke, Lars; Signer, Beat (2017). "The Context Modelling Toolkit: A Unified Multi-Layered Context Modelling Approach". Proceedings of the ACM on Human-Computer Interaction (PACMHCI). 1 (1). ACM: 7:1–7:16.
  4. Klein, Dan; Manning, Christopher D. (2002). "A Generative Constituent-Context Model for Improved Grammar Induction". Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. pp. 128–135.
  5. Delcher, A.; Harmon, D.; Kasif, S.; White, O.; Salzberg, S. L. (1999). "Improved microbial gene identification with GLIMMER". Nucleic Acids Research. 27 (23): 4636–4641. doi:10.1093/nar/27.23.4636. PMC 148753. PMID 10556321.
  6. Wang, Xiao Hang; Zhang, D. Qing; Gu, Tao; Pung, Hung Keng (2004). "Ontology based context modeling and reasoning using OWL". Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops. IEEE: 18–22. CiteSeerX 10.1.1.3.9626.
  7. Gu, Tao; Wang, Xiao Hang; Pung, Hung Keng; Zhang, Da Qing (2004). "An ontology-based context model in intelligent environments" (PDF). Proceedings of the Communication Networks and Distributed Systems Modeling and Simulation Conference. 2004: 270–275.
  8. Component, Context, and Manufacturing Model Library – 2 (C2M2L-2), Broad Agency Announcement, DARPA-BAA-12-30, February 24, 2012.