FOSD origami

Feature-oriented programming or feature-oriented software development (FOSD) is a general paradigm for program synthesis in software product lines. The feature-oriented programming page is recommended reading; it explains how an FOSD model of a domain is a tuple consisting of 0-ary functions (called values) and 1-ary (unary) functions (called features). This page discusses multidimensional generalizations of FOSD models, which are important for compact specifications of complex programs.
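
A minimal sketch of this idea in Python (the feature names and program representation here are hypothetical, chosen only to make composition visible):

    # Hypothetical sketch: a program is modeled as a set of artifact
    # names, so the effect of composing features is easy to observe.

    def base():                    # value: 0-ary function
        return {"core"}

    def logging(program):          # feature: unary function
        return program | {"logging"}

    def caching(program):          # feature: unary function
        return program | {"caching"}

    # A product of the product line is an expression over the model:
    product = caching(logging(base()))
    print(product)                 # {'core', 'logging', 'caching'} (order may vary)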

Origami

A fundamental generalization of metamodels is origami. The essential idea is that a program's design need not be represented by a single expression; multiple expressions can be used. [1] [2] [3] This involves the use of multiple orthogonal GenVoca models.

Example: Let T be a tool model with features P (parse), H (harvest), D (doclet), and J (translate to Java). P is a value and the rest are unary functions. A tool T1 that parses a file written in a Java dialect and translates it to pure Java is modeled by T1 = J•P. A javadoc-like tool T2 that parses a file in a Java dialect, harvests comments, and translates the harvested comments into an HTML page is T2 = D•H•P. Tools T1 and T2 are among the products of the product line of T.
A language model L describes a family (product line) of Java dialects. It includes the features B (Java 1.4), G (generics), and S (state machines). B is a value, and the rest are unary functions. A dialect of Java L1 that has generics (i.e., Java 1.5) is L1 = G•B, and a dialect of Java L2 that has language support for state machines is L2 = S•B. Dialects L1 and L2 are among the products of the product line of L.
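
The two models can be sketched directly as code. In the following Python sketch (the program representation is a placeholder, not from the article), the equations T1 = J•P, T2 = D•H•P, L1 = G•B, and L2 = S•B can be evaluated literally:

    # Placeholder sketch of the T (tool) and L (language) models.
    # Programs are lists of feature names; real features would be
    # code transformations.

    P = ["parse"]                              # value of model T
    H = lambda tool: tool + ["harvest"]        # unary functions of T
    D = lambda tool: tool + ["doclet"]
    J = lambda tool: tool + ["to-java"]

    B = ["java-1.4"]                           # value of model L
    G = lambda lang: lang + ["generics"]       # unary functions of L
    S = lambda lang: lang + ["state-machines"]

    T1 = J(P)          # T1 = J•P
    T2 = D(H(P))       # T2 = D•H•P
    L1 = G(B)          # L1 = G•B
    L2 = S(B)          # L2 = S•B
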
Describing a javadoc-like tool E for the dialect of Java with state machines requires two expressions: one that defines the tool functionality of E (using the T model) and one that defines its Java dialect (using the L model):

  E = D•H•P    -- tool equation
  E = S•B      -- language equation

Models L and T are orthogonal GenVoca models: one expresses the feature-based structure of the E tool, and the other the feature-based structure of its input language. Note that models T and L are abstract in the following sense: the implementation of any feature of T depends on the tool's dialect (expressed by L), and (symmetrically) the implementation of any feature of L depends on the tool's functionality (expressed by T). So the only way to implement E is by knowing both the T and L equations.

Let U=[U1,U2,...,Un] be a GenVoca model of n features, and W=[W1,W2,...,Wm] be a GenVoca model of m features. The relationship between two orthogonal models U and W is a matrix UW, called an Origami matrix, where each row corresponds to a feature in U and each column corresponds to a feature in W. Entry UWij is a function that implements the combination of features Ui and Wj.

Note: UW is the tensor product of U and W (i.e., UW=U×W).
Example. Recall models T=[P,H,D,J] and L=[B,G,S]. The Origami matrix TL is:

         B    G    S
    P    PB   PG   PS
    H    HB   HG   HS
    D    DB   DG   DS
    J    JB   JG   JS

where PB is a value that implements a parser for Java, PG is a unary function that extends a Java parser to parse generics, and PS is a unary function that extends a Java parser to parse state machine specifications. HB is a unary function that implements a harvester of comments on Java code, HG is a unary function that implements a harvester of comments on generic code, HS is a unary function that implements a harvester of comments on state machine specifications, and so on.
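
One plausible representation of such a matrix in code is a mapping from (row, column) feature pairs to entry implementations. The following Python sketch uses placeholder strings for the entries, since the real entries are code transformations:

    # Sketch: the Origami matrix TL as a dictionary keyed by
    # (T-feature, L-feature) pairs. Entries are placeholder names;
    # in practice each entry is a value or a unary function.

    T_features = ["P", "H", "D", "J"]          # rows
    L_features = ["B", "G", "S"]               # columns

    TL = {(t, l): t + l for t in T_features for l in L_features}

    print(TL[("P", "B")])   # 'PB' -- the Java parser value
    print(TL[("P", "S")])   # 'PS' -- adds state-machine parsing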

To see how multiple equations are used to synthesize a program, again consider models U and W. A program F is described by two equations, one per model. We can write an equation for F in two different ways: referencing features by name or by their index position, such as:

  F = Ui • ... • Uk    -- U expression of F (a composition of selected U features)
  F = Wj • ... • Wl    -- W expression of F (a composition of selected W features)

The UW model defines how models U and W are implemented. Synthesizing program F involves projecting away the unneeded rows and columns of UW and aggregating the remaining entries (a.k.a. tensor contraction):

  F = •j ( •i UWij )

where •i denotes folding (composing) the retained entries along index i.
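
A minimal operational sketch of this step (the matrix representation and entry semantics are assumed, not from the article):

    # Assumed sketch of contraction: fold the projected matrix into one
    # program by applying every retained entry, walking the U dimension
    # in the inner loop and the W dimension in the outer loop. Entries
    # are treated uniformly as unary functions applied to a start value;
    # in a real matrix the base corner entry is the value.

    def contract(matrix, rows, cols, start):
        program = start
        for j in cols:                    # W dimension (outer)
            for i in rows:                # U dimension (inner)
                program = matrix[(i, j)](program)
        return program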

A fundamental property of Origami matrices, called orthogonality, is that the order in which dimensions are contracted does not matter: in the above equation, summing across the U dimension (index i) first or the W dimension (index j) first yields the same result. Of course, orthogonality is a property that must be verified. Efficient (linear) algorithms have been developed to verify that Origami matrices (or tensors/n-dimensional arrays) are orthogonal. [4] The significance of orthogonality is view consistency: aggregating (contracting) along a particular dimension offers a 'view' of a program, and different views should be consistent. If one repairs the program's code in one view (or proves properties about a program in one view), the correctness of those repairs or properties should hold in all views.
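
To illustrate the property (this is not the published verification algorithm of [4]), the sketch below reuses contract() from above and compares it with a variant that walks the dimensions in the opposite order. The set-union entries used here commute, so the assertion holds by construction:

    # Variant of contract() with the U dimension in the outer loop.
    def contract_u_outer(matrix, rows, cols, start):
        program = start
        for i in rows:                    # U dimension (outer)
            for j in cols:                # W dimension (inner)
                program = matrix[(i, j)](program)
        return program

    def make_entry(i, j):
        return lambda program: program | {(i, j)}   # commutative effect

    entries = {(i, j): make_entry(i, j)
               for i in range(2) for j in range(2)}
    assert contract(entries, range(2), range(2), set()) == \
           contract_u_outer(entries, range(2), range(2), set())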

In general, a product of a product line may be represented by n expressions, drawn from n orthogonal and abstract GenVoca models G1 ... Gn. The Origami matrix (or cube or tensor) is an n-dimensional array A = G1×G2×...×Gn, where entry A[i1,...,in] implements the combination of the i1-th feature of G1 through the in-th feature of Gn.

A product H of this product line is formed by eliminating unnecessary rows, columns, etc. from A, and aggregating (contracting) the n-cube into a scalar:

  H = •in ( ... ( •i1 A[i1,...,in] ) ... )

contracting one dimension at a time (in any order, by orthogonality).

Example. Recall program E and model T=[P,H,D,J]: E = D•H•P = T2•T1•T0. Similarly, E's representation in model L=[B,G,S] is E = S•B = L2•L0. Synthesizing E given the Origami matrix TL means deleting row J and column G and evaluating the following expression:

  E = (TL22 • TL12 • TL02) • (TL20 • TL10 • TL00) = (DS•HS•PS) • (DB•HB•PB)
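
Continuing the placeholder sketch from above (list-valued programs, hypothetical entry bodies), this evaluation can be written out directly:

    # Projected TL entries: row J and column G are deleted.
    PB = ["PB"]                            # value: base Java parser
    HB = lambda p: p + ["HB"]              # remaining entries are
    DB = lambda p: p + ["DB"]              # unary functions
    PS = lambda p: p + ["PS"]
    HS = lambda p: p + ["HS"]
    DS = lambda p: p + ["DS"]

    # (DS•HS•PS)•(DB•HB•PB): build the javadoc tool for base Java,
    # then extend it with state-machine support.
    E = DS(HS(PS(DB(HB(PB)))))
    print(E)    # ['PB', 'HB', 'DB', 'PS', 'HS', 'DS']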

Applications

Several product-line applications have been developed using Origami; see the references for examples.

References

  1. "Generating Product-Lines of Product-Families" (PDF).
  2. "Refinements and Multi-Dimensional Separation of Concerns" (PDF).
  3. "Evaluating Support for Features in Advanced Modularization Technologies" (PDF).
  4. "Design and Analysis of Multidimensional Program Structures" (PDF).