In information science, formal concept analysis (FCA) is a principled way of deriving a concept hierarchy or formal ontology from a collection of objects and their properties. Each concept in the hierarchy represents the objects sharing some set of properties; and each sub-concept in the hierarchy represents a subset of the objects (as well as a superset of the properties) in the concepts above it. The term was introduced by Rudolf Wille in 1981, and builds on the mathematical theory of lattices and ordered sets that was developed by Garrett Birkhoff and others in the 1930s.
Formal concept analysis finds practical application in fields including data mining, text mining, machine learning, knowledge management, semantic web, software development, chemistry and biology.
The original motivation of formal concept analysis was the search for real-world meaning of mathematical order theory. One such possibility of very general nature is that data tables can be transformed into algebraic structures called complete lattices, and that these can be utilized for data visualization and interpretation. A data table that represents a heterogeneous relation between objects and attributes, tabulating pairs of the form "object g has attribute m", is considered as a basic data type. It is referred to as a formal context. In this theory, a formal concept is defined to be a pair (A, B), where A is a set of objects (called the extent) and B is a set of attributes (the intent) such that the extent A consists of all objects that share the attributes in B, and dually the intent B consists of all attributes shared by the objects in A.
In this way, formal concept analysis formalizes the semantic notions of extension and intension.
The formal concepts of any formal context can—as explained below—be ordered in a hierarchy, called more formally the context's "concept lattice". The concept lattice can be graphically visualized as a "line diagram", which may then be helpful for understanding the data. Often, however, these lattices get too large for visualization. Then the mathematical theory of formal concept analysis may be helpful, e.g., for decomposing the lattice into smaller pieces without information loss, or for embedding it into another structure which is easier to interpret.
The theory in its present form goes back to the early 1980s and a research group led by Rudolf Wille, Bernhard Ganter and Peter Burmeister at the Technische Universität Darmstadt. Its basic mathematical definitions, however, were already introduced in the 1930s by Garrett Birkhoff as part of general lattice theory. Earlier approaches to the same idea arose from various French research groups, but the Darmstadt group normalised the field and systematically worked out both its mathematical theory and its philosophical foundations. The latter refer in particular to Charles S. Peirce, but also to the Port-Royal Logic.
In his article "Restructuring Lattice Theory" (1982), [1] which initiated formal concept analysis as a mathematical discipline, Wille starts from a discontent with the lattice theory of the time and with pure mathematics in general: the production of theoretical results—often achieved by "elaborate mental gymnastics"—was impressive, but the connections between neighboring domains, and even between parts of a single theory, were getting weaker.
Restructuring lattice theory is an attempt to reinvigorate connections with our general culture by interpreting the theory as concretely as possible, and in this way to promote better communication between lattice theorists and potential users of lattice theory
— Rudolf Wille, [1]
This aim traces back to the educationalist Hartmut von Hentig, who in 1972 pleaded for a restructuring of the sciences with a view to better teaching and in order to make the sciences mutually available and more generally (i.e., also without specialized knowledge) open to criticism. [2] Hence, by its origins, formal concept analysis aims at interdisciplinarity and democratic control of research. [3]
It corrects the starting point of lattice theory during the development of formal logic in the 19th century. Then—and later in model theory—a concept, as a unary predicate, had been reduced to its extent. Now the philosophy of concepts should again become less abstract by also considering the intent. Hence, formal concept analysis is oriented towards the categories of extension and intension from linguistics and classical conceptual logic. [4]
Formal concept analysis aims at the clarity of concepts, in the sense of Charles S. Peirce's pragmatic maxim, by unfolding observable, elementary properties of the subsumed objects. [3] In his late philosophy, Peirce assumed that logical thinking aims at perceiving reality, through the triad of concept, judgment and conclusion. Mathematics, as an abstraction of logic, develops patterns of possible realities and may therefore support rational communication. Against this background, Wille defines:
The aim and meaning of Formal Concept Analysis as mathematical theory of concepts and concept hierarchies is to support the rational communication of humans by mathematically developing appropriate conceptual structures which can be logically activated.
— Rudolf Wille, [5]
The data in the example is taken from a semantic field study, where different kinds of bodies of water were systematically categorized by their attributes. [6] For the purpose here it has been simplified.
The data table represents a formal context; the line diagram next to it shows its concept lattice. Formal definitions follow below.
[Formal context "bodies of water": a cross table recording which of the attributes temporary, running, natural, stagnant, constant and maritime each of the objects canal, channel, lagoon, lake, maar, puddle, pond, pool, reservoir, river, rivulet, runnel, sea, stream, tarn, torrent and trickle possesses.]
The above line diagram consists of circles, connecting line segments, and labels. Circles represent formal concepts. The line segments allow one to read off the subconcept–superconcept hierarchy. Each object and attribute name is used as a label exactly once in the diagram, with objects below and attributes above concept circles. This is done in such a way that an attribute can be reached from an object via an ascending path if and only if the object has the attribute.
For example, in the diagram shown, the object reservoir has the attributes stagnant and constant, but not the attributes temporary, running, natural, maritime. Accordingly, puddle has exactly the characteristics temporary, stagnant and natural.
Both the original formal context and its formal concepts can be reconstructed from the labelled diagram. The extent of a concept consists of those objects from which an ascending path leads to the circle representing the concept. The intent consists of those attributes to which there is an ascending path from that concept circle. In this diagram, the concept immediately to the left of the label reservoir has the intent stagnant and natural and the extent puddle, maar, lake, pond, tarn, pool, lagoon, and sea.
A formal context is a triple K = (G, M, I), where G is a set of objects, M is a set of attributes, and I ⊆ G × M is a binary relation called incidence that expresses which objects have which attributes. [4] For subsets A ⊆ G of objects and subsets B ⊆ M of attributes, one defines two derivation operators as follows:
A′ = {m ∈ M | (g, m) ∈ I for every g ∈ A}, the set of attributes common to all objects in A, and dually
B′ = {g ∈ G | (g, m) ∈ I for every m ∈ B}, the set of objects that have all attributes in B.
Applying either derivation operator and then the other constitutes two closure operators: A ↦ A″ = (A′)′ for A ⊆ G, and B ↦ B″ = (B′)′ for B ⊆ M. Both maps are extensive, monotone and idempotent.
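As a concrete illustration, the following Python sketch implements the two derivation operators and the resulting closure on a small hypothetical fragment of the bodies-of-water data (the dict-based representation and the helper names are choices made for this example, not part of the theory):

```python
# A formal context as a dict mapping each object to its set of attributes.
# Hypothetical three-object fragment, for illustration only.
CONTEXT = {
    "puddle":    {"temporary", "stagnant", "natural"},
    "reservoir": {"stagnant", "constant"},
    "river":     {"running", "natural", "constant"},
}

def intent(objects):
    """A′: the attributes common to all objects in A."""
    attrs = set().union(*CONTEXT.values())  # the intent of the empty set is all of M
    for g in objects:
        attrs &= CONTEXT[g]
    return attrs

def extent(attributes):
    """B′: the objects having every attribute in B."""
    return {g for g, row in CONTEXT.items() if attributes <= row}

A = {"puddle", "reservoir"}
print(intent(A))           # {'stagnant'}
print(extent(intent(A)))   # A″ = {'puddle', 'reservoir'}: here A is already closed
```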
The derivation operators define a Galois connection between sets of objects and of attributes. This is why in French a concept lattice is sometimes called a treillis de Galois (Galois lattice).
With these derivation operators, Wille gave an elegant definition of a formal concept: a pair (A, B) is a formal concept of a context (G, M, I) provided that A ⊆ G, B ⊆ M, A′ = B, and B′ = A.
Equivalently and more intuitively, (A, B) is a formal concept precisely when: every object in A has every attribute in B; for every object in G that is not in A, there is an attribute in B that the object does not have; and for every attribute in M that is not in B, there is an object in A that does not have that attribute.
For computing purposes, a formal context may be naturally represented as a (0,1)-matrix K in which the rows correspond to the objects, the columns correspond to the attributes, and each entry k_{i,j} equals 1 if object i has attribute j, and 0 otherwise. In this matrix representation, each formal concept corresponds to a maximal submatrix (not necessarily contiguous) all of whose entries equal 1. It is however misleading to consider a formal context as boolean, because the negated incidence ("object g does not have attribute m") is not concept forming in the same way as defined above. For this reason, the values 1 and 0 or TRUE and FALSE are usually avoided when representing formal contexts, and a symbol like × is used to express incidence.
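To make the matrix view concrete, here is a minimal NumPy sketch (reusing the hypothetical three-object fragment from above) that checks that a formal concept selects an all-ones submatrix:

```python
import numpy as np

objs  = ["puddle", "reservoir", "river"]
attrs = ["temporary", "running", "natural", "stagnant", "constant"]
rows  = {"puddle":    {"temporary", "stagnant", "natural"},
         "reservoir": {"stagnant", "constant"},
         "river":     {"running", "natural", "constant"}}

# Incidence matrix: K[i, j] = 1 iff object i has attribute j.
K = np.array([[1 if a in rows[g] else 0 for a in attrs] for g in objs])

A, B = [0, 1], [3]            # extent {puddle, reservoir}, intent {stagnant}
assert K[np.ix_(A, B)].all()  # every entry of the selected submatrix is 1
```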
The concepts (Ai, Bi) of a context K can be (partially) ordered by the inclusion of extents, or, equivalently, by the dual inclusion of intents. An order ≤ on the concepts is defined as follows: for any two concepts (A1, B1) and (A2, B2) of K, we say that (A1, B1) ≤ (A2, B2) precisely when A1 ⊆ A2. Equivalently, (A1, B1) ≤ (A2, B2) whenever B1 ⊇ B2.
In this order, every set of formal concepts has a greatest common subconcept, or meet. Its extent consists of those objects that are common to all extents of the set. Dually, every set of formal concepts has a least common superconcept, the intent of which comprises all attributes which all objects of that set of concepts have.
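In formulas: for any family of formal concepts (Aᵢ, Bᵢ), the meet is (⋂ᵢ Aᵢ, (⋂ᵢ Aᵢ)′) and, dually, the join is ((⋂ᵢ Bᵢ)′, ⋂ᵢ Bᵢ).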
These meet and join operations satisfy the axioms defining a lattice, in fact a complete lattice. Conversely, it can be shown that every complete lattice is the concept lattice of some formal context (up to isomorphism).
Real-world data is often given in the form of an object-attribute table, where the attributes have "values". Formal concept analysis handles such data by transforming them into the basic type of a ("one-valued") formal context. The method is called conceptual scaling.
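As a sketch of the simplest case, nominal scaling replaces a many-valued attribute by one binary attribute per value (the table and the names below are hypothetical, chosen only for this example):

```python
def nominal_scale(table, attribute):
    """Replace one many-valued attribute by binary attributes 'attribute=value'."""
    return {obj: {f"{attribute}={row[attribute]}"} for obj, row in table.items()}

water = {"puddle": {"permanence": "temporary"},
         "lake":   {"permanence": "constant"}}
print(nominal_scale(water, "permanence"))
# {'puddle': {'permanence=temporary'}, 'lake': {'permanence=constant'}}
```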
The negation of an attribute m is an attribute ¬m, the extent of which is just the complement of the extent of m, i.e., with (¬m)′ = G \ m′. It is in general not assumed that negated attributes are available for concept formation. But pairs of attributes which are negations of each other often naturally occur, for example in contexts derived from conceptual scaling.
For possible negations of formal concepts see the section concept algebras below.
An implication A → B relates two sets A and B of attributes and expresses that every object possessing each attribute from A also has each attribute from B. When (G, M, I) is a formal context and A, B are subsets of the set M of attributes (i.e., A, B ⊆ M), then the implication A → B is valid if A′ ⊆ B′. For each finite formal context, the set of all valid implications has a canonical basis, [7] an irredundant set of implications from which all valid implications can be derived by natural inference (the Armstrong rules). This is used in attribute exploration, a knowledge acquisition method based on implications. [8]
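The validity condition translates directly into code; here is a minimal sketch on the hypothetical mini-context used above (A → B is valid iff A′ ⊆ B′, or equivalently iff B ⊆ A″):

```python
CONTEXT = {"puddle":    {"temporary", "stagnant", "natural"},
           "reservoir": {"stagnant", "constant"},
           "river":     {"running", "natural", "constant"}}

def extent(B):   # B′: objects having every attribute in B
    return {g for g, row in CONTEXT.items() if B <= row}

def intent(A):   # A′: attributes common to all objects in A
    return set.intersection(*(CONTEXT[g] for g in A)) if A else set().union(*CONTEXT.values())

def implication_holds(A, B):
    """A → B is valid iff B ⊆ A″ (equivalently, A′ ⊆ B′)."""
    return B <= intent(extent(A))

print(implication_holds({"stagnant", "natural"}, {"temporary"}))  # True in this tiny context
```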
Formal concept analysis has elaborate mathematical foundations, [4] making the field versatile. As a basic example we mention the arrow relations, which are simple and easy to compute, but very useful. They are defined as follows: For g ∈ G and m ∈ M let
g ↗ m :⟺ (g, m) ∉ I, and (h, m) ∈ I for every object h with g′ ⊆ h′ and g′ ≠ h′,
and dually
g ↙ m :⟺ (g, m) ∉ I, and (g, n) ∈ I for every attribute n with m′ ⊆ n′ and m′ ≠ n′.
Since only non-incident object-attribute pairs can be related, these relations can conveniently be recorded in the table representing a formal context. Many lattice properties can be read off from the arrow relations, including distributivity and several of its generalizations. They also reveal structural information and can be used for determining, e.g., the congruence relations of the lattice.
Temporal concept analysis (TCA) is an extension of formal concept analysis (FCA) aiming at a conceptual description of temporal phenomena. It provides animations in concept lattices obtained from data about changing objects. It offers a general way of understanding change of concrete or abstract objects in continuous, discrete or hybrid space and time. TCA applies conceptual scaling to temporal databases. [14]
In the simplest case, TCA considers objects that change in time like a particle in physics, which, at each time, is at exactly one place. That happens for temporal data in which the attributes 'temporal object' and 'time' together form a key of the database. Then the state (of a temporal object at a time in a view) is formalized as a certain object concept of the formal context describing the chosen view. In this simple case, a typical visualization of a temporal system is a line diagram of the concept lattice of the view, into which the trajectories of temporal objects are embedded. [15]
TCA generalizes the case above by considering temporal databases with an arbitrary key. This leads to the notion of distributed objects, which at any given time may be at several places at once, as for example a high-pressure zone on a weather map. The notions of 'temporal objects', 'time' and 'place' are represented as formal concepts in scales. A state is formalized as a set of object concepts. This leads to a conceptual interpretation of the ideas of particles and waves in physics. [16]
There are a number of simple and fast algorithms for generating formal concepts and for constructing and navigating concept lattices. For a survey, see Kuznetsov and Obiedkov [17] or the book by Ganter and Obiedkov, [8] where also some pseudo-code can be found. Since the number of formal concepts may be exponential in the size of the formal context, the complexity of the algorithms usually is given with respect to the output size. Concept lattices with a few million elements can be handled without problems.
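For illustration only, here is a brute-force enumeration over the hypothetical mini-context used above; it closes every subset of objects and is exponential in the number of objects, which is exactly the blow-up that the algorithms surveyed in the references avoid:

```python
from itertools import combinations

CONTEXT = {"puddle":    {"temporary", "stagnant", "natural"},
           "reservoir": {"stagnant", "constant"},
           "river":     {"running", "natural", "constant"}}

def extent(B):
    return {g for g, row in CONTEXT.items() if B <= row}

def intent(A):
    return set.intersection(*(CONTEXT[g] for g in A)) if A else set().union(*CONTEXT.values())

def all_concepts():
    """Close every subset of objects; each pair (B′, B) with B = A′ is a formal concept."""
    concepts = set()
    for r in range(len(CONTEXT) + 1):
        for combo in combinations(sorted(CONTEXT), r):
            B = intent(set(combo))
            concepts.add((frozenset(extent(B)), frozenset(B)))
    return concepts

for A, B in sorted(all_concepts(), key=lambda c: len(c[0])):
    print(sorted(A), "|", sorted(B))
```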
Many FCA software applications are available today. [18] The main purpose of these tools varies from formal context creation to formal concept mining and generating the concept lattice of a given formal context together with its implications and association rules. Most of these tools are academic open-source applications.
A formal context can naturally be interpreted as a bipartite graph. The formal concepts then correspond to the maximal bicliques in that graph. The mathematical and algorithmic results of formal concept analysis may thus be used for the theory of maximal bicliques. The notion of bipartite dimension (of the complemented bipartite graph) translates [4] to that of Ferrers dimension (of the formal context) and of order dimension (of the concept lattice) and has applications e.g. for Boolean matrix factorization. [25]
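A small sketch of this correspondence on the same hypothetical mini-context: each incident object-attribute pair becomes an edge of the bipartite graph, and a formal concept (A, B) spans a biclique:

```python
CONTEXT = {"puddle":    {"temporary", "stagnant", "natural"},
           "reservoir": {"stagnant", "constant"},
           "river":     {"running", "natural", "constant"}}

# Edge set of the bipartite graph: objects on one side, attributes on the other.
edges = {(g, m) for g, row in CONTEXT.items() for m in row}

A, B = {"puddle", "reservoir"}, {"stagnant"}
assert all((g, m) in edges for g in A for m in B)  # (A, B) spans a complete bipartite subgraph
```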
Given a numerical object-attribute data table, the goal of biclustering is to group together some objects having similar values of some attributes. For example, in gene expression data, it is known that genes (objects) may share a common behavior only for a subset of biological situations (attributes): one should accordingly produce local patterns to characterize biological processes, and these patterns should be allowed to overlap, since a gene may be involved in several processes. The same remark applies to recommender systems, where one is interested in local patterns characterizing groups of users that strongly share almost the same tastes for a subset of items. [26]
A bicluster in a binary object-attribute data-table is a pair (A,B) consisting of an inclusion-maximal set of objects A and an inclusion-maximal set of attributes B such that almost all objects from A have almost all attributes from B and vice versa.
Of course, formal concepts can be considered as "rigid" biclusters in which all objects have all attributes and vice versa. Hence, it is not surprising that some bicluster definitions coming from practice [27] are just definitions of a formal concept. [28] Relaxed FCA-based versions of biclustering and triclustering include OA-biclustering [29] and OAC-triclustering [30] (here O stands for object, A for attribute, C for condition); to generate patterns, these methods apply the prime operators only once, to a single entity (e.g., an object) or to a pair of entities (e.g., an attribute-condition pair), respectively.
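The following sketch conveys the idea behind OA-biclustering as described above: the prime operators are applied once each to an incident seed pair (g, m), giving the candidate bicluster (m′, g′), which is kept when it is dense enough (the threshold value and the mini-context are arbitrary choices for this illustration):

```python
CONTEXT = {"puddle":    {"temporary", "stagnant", "natural"},
           "reservoir": {"stagnant", "constant"},
           "river":     {"running", "natural", "constant"}}

def oa_bicluster(g, m, min_density=0.5):
    """Candidate bicluster (m′, g′) seeded by an incident pair (g, m)."""
    A = {h for h, row in CONTEXT.items() if m in row}  # m′: objects having m
    B = set(CONTEXT[g])                                # g′: attributes of g
    ones = sum(1 for h in A for n in B if n in CONTEXT[h])
    density = ones / (len(A) * len(B))
    return (A, B, density) if density >= min_density else None

print(oa_bicluster("puddle", "stagnant"))  # density 4/6 ≈ 0.67 here
```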
A bicluster of similar values in a numerical object-attribute data table is usually defined [31] [32] [33] as a pair consisting of an inclusion-maximal set of objects and an inclusion-maximal set of attributes with similar values for those objects. Such a pair can be represented as an inclusion-maximal rectangle in the numerical table, modulo row and column permutations. In [28] it was shown that biclusters of similar values correspond to triconcepts of a triadic context in which the third dimension is given by a scale that represents numerical attribute values by binary attributes.
This fact can be generalized to the n-dimensional case, in which n-dimensional clusters of similar values in n-dimensional data are represented by (n+1)-dimensional concepts. This reduction allows one to use standard definitions and algorithms from multidimensional concept analysis [33] [10] for computing multidimensional clusters.
In the theory of knowledge spaces it is assumed that in any knowledge space the family of knowledge states is union-closed. The complements of knowledge states therefore form a closure system and may be represented as the extents of some formal context.
Formal concept analysis can be used as a qualitative method for data analysis. Since the beginnings of FCA in the early 1980s, the FCA research group at TU Darmstadt has gained experience from more than 200 projects using FCA (as of 2005), [34] in fields including medicine and cell biology, [35] [36] genetics, [37] [38] ecology, [39] software engineering, [40] ontology, [41] information and library sciences, [42] [43] [44] office administration, [45] law, [46] [47] linguistics, [48] and political science. [49]
Many more examples are described, e.g., in Formal Concept Analysis: Foundations and Applications [34] and in papers at regular conferences such as the International Conference on Formal Concept Analysis (ICFCA), [50] Concept Lattices and their Applications (CLA) [51] and the International Conference on Conceptual Structures (ICCS). [52]