PROSE modeling language

PROSE[1][2][3][4] was the mathematical 4GL virtual machine that established the holistic modeling paradigm known as Synthetic Calculus[5][6][7] (also known as MetaCalculus). A successor to the SLANG[8]/CUE[9] simulation and optimization language developed at TRW Systems, it was introduced in 1974 on Control Data supercomputers. It was the first commercial language[10][11][12][13] to employ automatic differentiation (AD), which was optimized to loop in the instruction stack of the CDC 6600 CPU.

Although PROSE was a rich block-structured procedural language, its focus was the blending of simultaneous-variable mathematical systems such as:

  • implicit non-linear equation systems,
  • ordinary differential-equation systems, and
  • multidimensional optimization.

Each of these kinds of system model was distinct, and operator templates added to the procedural syntax automated their formulation and solution. These automated system problems were considered "holistic" because their unknowns were simultaneous: they could not be reduced in formulation and solved piecewise or by algebraic manipulation (e.g. substitution), but had to be solved as wholes. Wholeness also pertained to algorithmic determinacy, or mathematical "closure", which made solution convergence possible and, in principle, certain unless corrupted by numerical instability.

Holarchies of Differential Propagation

Since these holistic problem models could be independently automated and solved due to this closure, they could be blended into higher wholes by nesting one inside of another, in the manner of subroutines. Users could regard them as if they were ordinary subroutines.

Yet semantically, this mathematical blending was considerably more complex than the mechanics of subroutines, because an iterative solution engine was attached to each problem model by its calling operator template above it in the program hierarchy. In its numerical solution process, this engine would take control and call the problem-model subroutine iteratively, not returning to the calling template until its system problem was solved. During some or all of the iterative model-subroutine calls, the engine would invoke automatic differentiation of the formulas in the model holarchy with respect to the model's input unknowns (arguments) defined in the calling template. Additional semantic mechanisms accommodated the ubiquitous nesting of these holistic models.
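
The following Python sketch (an editorial illustration, not PROSE code) shows the kind of dual-number arithmetic that forward-mode automatic differentiation uses: each model formula is evaluated once, and the derivative with respect to an input unknown rides along with the value. PROSE implemented this arithmetic inside its virtual machine rather than through a class such as the hypothetical Dual below.

    # Minimal forward-mode automatic differentiation with dual numbers.
    # The class and model are illustrative only.
    class Dual:
        """Value together with its derivative with respect to one input unknown."""
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value + other.value, self.deriv + other.deriv)
        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.value * other.value,
                        self.deriv * other.value + self.value * other.deriv)
        __rmul__ = __mul__

    def model(x):
        # A model formula written once; it runs on floats or Dual numbers alike.
        return 3.0 * x * x + 2.0 * x + 1.0

    x = Dual(2.0, 1.0)        # seed dx/dx = 1
    y = model(x)
    print(y.value, y.deriv)   # 17.0 and dy/dx = 6*x + 2 = 14.0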

Differentiation of Prediction Processes

If the nested solution was a prediction (e.g. numerical integration), then its solution algorithm, in addition to the model formulas, would also be automatically differentiated. Since this differentiation propagated (via the chain rule) throughout the integration from initial conditions to boundary conditions, derivatives of the boundary conditions with respect to the initial conditions (so-called Fréchet derivatives) were obtained. This enabled the routine solution of boundary-value problems by iterative "shooting" methods using Newton-method engines. At the same time, this propagated differentiation could also be performed with respect to arbitrary parameters of the differential equations, to further shape the integrated functions. These parameters could be solved for as unknowns of any nest in the holarchy above the integration process, a significant convenience in overall problem formulation.
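
A minimal sketch of such a shooting solution, assuming an invented boundary-value problem (y'' = 6*y*y, y(0) = 1, y(1) = 1/4, exact slope y'(0) = -2) and using a complex-step derivative as a stand-in for PROSE's differentiation arithmetic propagated through the integrator:

    # Shooting method: the unknown initial slope s is found by Newton iteration,
    # with the derivative of the boundary value y(1) with respect to s carried
    # through every integration step (illustrative Python, not PROSE).
    def rates(y, v):
        # rates of the first-order system: y' = v, v' = 6*y*y
        return v, 6.0 * y * y

    def integrate(s):
        # fixed-step RK4 integration from t = 0 to t = 1 with y(0) = 1, y'(0) = s
        y, v, h = 1.0, s, 1.0 / 200
        for _ in range(200):
            k1y, k1v = rates(y, v)
            k2y, k2v = rates(y + h/2*k1y, v + h/2*k1v)
            k3y, k3v = rates(y + h/2*k2y, v + h/2*k2v)
            k4y, k4v = rates(y + h*k3y, v + h*k3v)
            y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
            v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        return y                              # boundary value y(1)

    target, s = 0.25, -3.0                    # boundary condition, initial guess
    for _ in range(20):
        eps = 1e-30
        # derivative of y(1) with respect to s, propagated through every RK4 step
        dyds = integrate(complex(s, eps)).imag / eps
        residual = integrate(s) - target
        if abs(residual) < 1e-12:
            break
        s -= residual / dyds                  # Newton update of the unknown slope
    print("y'(0) =", s)                       # converges to the exact value -2

The Newton iteration converges because the sensitivity of the boundary value to the initial slope is evaluated through the integration algorithm itself, which is the effect the text describes.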

Differentiation of Search Processes

If the inner nested problem was a search, and the outer problem was also a search (e.g. optimization), then the partial derivatives produced with respect to the inner-search unknowns had to be converted into partial derivatives of the outer search via a differential-geometry coordinate transformation. This was itself an iterative process involving higher-order differentiation and sometimes different independent variables.
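
In modern notation this transformation amounts to the chain rule combined with the implicit-function theorem; the following schematic statement is an editorial gloss, not the notation of the PROSE manuals. If the inner search drives residuals g(x, p) to zero, implicitly defining its unknowns x as functions of the outer unknowns p, then for any downstream quantity f the outer engine needs

    df/dp = ∂f/∂p + (∂f/∂x)·(dx/dp),    where    dx/dp = −(∂g/∂x)⁻¹ (∂g/∂p).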

Yet these extended and iterative differential-arithmetic processes were totally hidden from the user, and were hardly more significant in the modeling task than if only ordinary subroutines and their calls were involved. They were iterative, and the number and kind of iterations were indefinite, because a whole sub-problem was being solved that was also part of a higher problem. It was therefore natural to call each problem nest a "holon", as this dynamic entity fitted the theory of Arthur Koestler, who coined that term. The term was not used in the original PROSE documentation, because in those years Koestler's theory was new and somewhat controversial; it was adopted later, after Ken Wilber had endorsed Koestler's holon concepts.

Automation Operator Templates

The complete modeling paradigm consisted of only three classes of holons, as distinguished by their operator templates below.

Optimization

FIND simultaneous-unknowns IN model-subroutine BY solver-engine
    [HOLDING inequality-constraint-variables]
    [MATCHING equality-constraint-variables]
    TO MAXIMIZE|MINIMIZE objective-variable

Correlation

FIND simultaneous-unknowns IN model-subroutine BY solver-engine
    TO MATCH equality-constraint-variables

Simulation

INITIATE solver-engine FOR model-subroutine EQUATIONS rate-variables/level-variables
    OF independent-variable STEP increment-variable TO limit-variable
INTEGRATE model-subroutine BY solver-engine

These three operator templates created dynamic holons encapsulating an equations model subroutine hierarchy which could contain other nested holons, because the model subroutines could contain any of the operator templates encapsulating sub-problems. Each holon in the holarchy had a solver algorithm engine, which could be interchanged with others in its holon class. The extended arithmetic of automatic differentiation and its ability to dynamically differentiate numerical integration gave rise to the unique mode of holarchy modeling illustrated in Figure 1.
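
As a rough modern analogue of this nesting (illustrative Python using SciPy, not PROSE; the invented problem and the derivative-free SciPy solvers stand in for PROSE's derivative-propagating engines), an optimization holon can wrap a correlation holon that in turn wraps a simulation holon:

    # Three nested "holons": optimization around correlation around simulation.
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq, minimize_scalar

    def simulate(p, s):
        """Innermost holon: integrate y' = v, v' = -p*y from t = 0 to 1."""
        sol = solve_ivp(lambda t, yv: [yv[1], -p * yv[0]], (0.0, 1.0), [0.0, s],
                        rtol=1e-8, atol=1e-10)
        return sol.y[:, -1]                   # final state [y(1), v(1)]

    def correlate(p):
        """Middle holon: find the initial slope s so that y(1) matches 1."""
        s = brentq(lambda s: simulate(p, s)[0] - 1.0, 0.0, 10.0)
        return s, simulate(p, s)[1]           # solved slope and resulting v(1)

    def objective(p):
        """Outer holon: choose p so that the final slope v(1) vanishes."""
        _, v_end = correlate(p)
        return v_end ** 2

    best = minimize_scalar(objective, bounds=(0.5, 4.0), method='bounded')
    print("optimal p =", best.x)              # analytically (pi/2)**2, about 2.467

Each outer call treats the entire inner solution process as a single evaluation, which is the sense in which the holons nest like subroutines.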

Figure 1. Calculus of Variations Application

This example problem was originally a FORTRAN application from a RAND report about an algorithm used for optimization of boundary-value problem applications. This report, also published as a textbook,[14] described quasilinearization, an alternative to "dynamic programming", invented by the same author, Richard Bellman. The FORTRAN program in Appendix Two of the textbook contains more than five times as much code as the 25-line PROSE program, which is entirely embedded in the white boxes (visible syntax) of Figure 1. More significant in this modeling-versus-programming discussion is that the FORTRAN program contains 14 DO loops, whereas the PROSE program contains no loops. Another point about program simplification is that dynamic memory management could be taken for granted by the user: at the return from a holon to its calling operator template, the holon was destroyed and its memory was freed for other use.

This application is actually trivial in the amount of code required to state the problem. That is why the PROSE program is so small. All of its iterative solution methodology is under the hood in the solver engines (ellipses in Figure 1). Models seldom need loops. This is why spreadsheets, which are modeling tools, don't even have them.

This example problem provides a full encapsulation of the holon paradigm in a single application. All three of its holon types are employed: optimization searching at the highest level of the holarchy, correlation searching (a restricted subset of optimization searching) as the middle holon, and system dynamics simulation as the innermost holon. Another PROSE program with this same anatomy is illustrated in Figure 2. This is a somewhat larger application of the optimization of a cantilevered wing structure to maximize lift, subject to structure and weight constraints. In this case there are ten coordinate dimensions of the optimization unknowns that are being searched by the outer holon solver.

Figure 2. Wing Design Optimization Problem

Each of the two outer holons has a hidden coordinate system of unknowns that its search engine solves for. These engines require partial derivatives of all downstream variables dependent upon those unknowns, which are evaluated by automatic-differentiation arithmetic. The derivatives of the outer coordinate system must be computed from the derivatives of the inner coordinate system after the inner search engine has converged (found a local solution); this is where a differential-geometry coordinate transformation is applied. The wing problem of Figure 2 has more downstream subprograms, not shown, including an integral quadrature function.

Because these subprograms include the numerical integration of the system-dynamics (differential equations) model, the automatic-differentiation arithmetic includes differentiation of the integration algorithm of the simulation engine (and of the quadrature solver) to evaluate derivatives of the boundary conditions (end points of the integrated curves) with respect to the initial conditions. This calculation is not possible via formal symbolic differentiation, nor is it feasible with finite-difference approximation; only automatic differentiation, with its exact chain-rule propagation, is feasible.

Automated Holon Architecture

Figure 3. Generalized Holon Architecture

Figure 3 shows the generalized architecture of a holon in profile: the visible modeling syntax and the invisible semantic architecture with its characteristic five-step iteration process. A holon is a calculus problem-solving unit, mathematically associated with a coordinate system dynamically created by the operator template. Its operator is a solver engine, either a numerical predictor in the case of simulation or a search engine in the case of correlation and optimization. Its operand is a model procedure (which may itself be a holarchy of subordinate holons).

In essence a holon is a metaphoric computation container, like a spreadsheet, but one that allows procedural looping like an ordinary algebraic language. Its purpose is to frame algebraic formulas that represent higher mathematics (e.g. differential equations are algebraic formulas in which some of the variables are rates).

Figures 4–7 show how the holon classes of simulation, correlation, and optimization reflect this architecture, separating modeling (science equations) from the algorithmic solver engines that embody the art of numerical-approximation mathematics.

Figure 4. Simulation Holon
Figure 5. Correlation Holon
Figure 6. Unconstrained Optimization Holon
Figure 7. Constrained Optimization Holon

Holons are formula-system solution processes

As mentioned above, a holon is a computation container, like a spreadsheet, that encapsulates a set of input algebraic formulas. But unlike a spreadsheet, these formulas are parts of an irreducible whole that can only be solved together as a unit, through a succession of approximations (iterations). A spreadsheet, which involves only a single pass of formula calculations, may therefore be thought of as a "degenerate" or "reduced" holon.

A holon model elevates an encapsulated system of algebraic formulas to a higher problem archetype relating simultaneous unknowns to a definable solution condition other than a single pass through the set of formulas. Iterative calculus "under the hood" is required to "converge" multiple-pass approximations to the solution condition.
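
The contrast can be made concrete with a small sketch (illustrative Python, not PROSE): two coupled formulas that no single left-to-right pass can satisfy, driven to their simultaneous solution by successive Newton approximations.

    # Two simultaneous equations solved together by Newton iteration.
    # The equations are invented for illustration.
    def residuals(x, y):
        # f1 = x^2 + y^2 - 4 = 0 and f2 = x*y - 1 = 0 must vanish together
        return x * x + y * y - 4.0, x * y - 1.0

    def jacobian(x, y):
        # analytic partial derivatives of (f1, f2) with respect to (x, y)
        return (2.0 * x, 2.0 * y), (y, x)

    x, y = 2.0, 1.0                      # initial guess
    for _ in range(25):
        f1, f2 = residuals(x, y)
        if abs(f1) + abs(f2) < 1e-12:
            break
        (a, b), (c, d) = jacobian(x, y)
        det = a * d - b * c
        dx = (f1 * d - f2 * b) / det     # solve the 2x2 Newton system by Cramer's rule
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
    print(x, y)                          # converges to about (1.932, 0.518)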

Metaphoric Problem Archetypes

Each holon automates one of the three system-problem archetypes that have emerged from higher mathematics, each with a distinct class of solution methods applicable as interchangeable operators. These methods operate on the input formulas and their calculations to guide the successive approximations to the holon solution. These problem archetypes easily precipitate out of the formula collections that represent modeling of natural phenomena, and can be used as building blocks to synthesize whole computation programs as holarchies of sequential or nested holons. Used together as an alphabet, these archetypal problems become a topology of higher-math modeling within an algebraic programming language containing special semantic "glue" methodologies that propagate calculus "influence" through the collections of holons.

As the holons combine to form greater wholes via alphabetic combination, such holarchies also tend to become problem archetypes, which often precipitate out of natural phenomena modeling. Examples are boundary-value problems, solved by the combination of correlation and simulation holons.

PROSE Pantheon

PROSE introduced a pantheon of interchangeable solvers named for mythological figures, in the three engine categories:

Optimization

  • HERA – an advanced version of Newton's second-order gradient method with special logic to recognize and avoid unwanted extrema during its search process;
  • HERCULES – a special constrained-optimization solver for linear, integer, and mixed-integer problems;
  • JOVE – a sequential unconstrained optimization technique applying Newton's second-order gradient search;
  • JUPITER – a moving exterior truncations penalty-function method applying a Davidon-Fletcher-Powell (DFP) variable-metric search;
  • THOR – a "sectionally linearized" linear-programming technique; and
  • ZEUS – a sequential unconstrained optimization technique applying a Davidon-Fletcher-Powell (DFP) variable-metric search.

Correlation

  • AJAX – a damped Newton-Raphson and Newton-Gauss pseudo-inverse root finder; and
  • MARS – a damped Newton-Raphson and Newton-Householder pseudo-inverse root finder.

System-Dynamics Simulation

  • ATHENA – multi-order Runge-Kutta with differential propagation and optional limiting of any output dependent variables;
  • GEMINI – the self-starting rational-function extrapolation technique of Gragg, Bulirsch, and Stoer, with or without differential propagation according to context;
  • ISIS – Runge-Kutta-Gill with differential propagation;
  • JANISIS – ISIS or JANUS, depending on differential or non-differential-propagating contexts;
  • JANUS – Adams-Moulton predictor-corrector for non-differential-propagating contexts;
  • MERCURY – Gear's rate/state differentiating, stiff, step-size-optimizing method for non-differential-propagating contexts;
  • MERLIN – the self-starting rational-function extrapolation technique of Gragg, Bulirsch, and Stoer, with differential propagation;
  • MINERVA – multi-order Runge-Kutta without differential propagation and with optional limiting of any output dependent variables;
  • NEPTUNE – the self-starting rational-function extrapolation technique of Gragg, Bulirsch, and Stoer, without differential propagation; and
  • PEGASUS – a special 5th-order Runge-Kutta technique known as Sarafyan embedding, in which a 4th-order result is obtained at the same time, plus optional limiting of any output dependent variables, for non-differential-propagating contexts.

Nesting Contexts

These solvers applied different numerical methods in the three engine categories, depending upon the nesting context in which they were applied. Some simulation solvers (JANUS, MERCURY, MINERVA, MERLIN, and PEGASUS) could not be nested in the automatic-differentiation contexts of correlation and optimization because they were not overloaded for automatic-differentiation arithmetic. Thus hybrid versions, JANISIS (ISIS or JANUS) and GEMINI (MERLIN or NEPTUNE), were introduced, which would work efficiently in automatic-differentiation mode or in ordinary arithmetic mode (differentiation internally turned off). This greatly sped up the iterative searches of solvers such as AJAX, MARS, JOVE, ZEUS, and JUPITER, which called their models many more times in non-differentiation mode, when various kinds of non-derivative search sub-steps were applied.

References

  1. PROSE – A General Purpose Higher Level Language, Procedure Manual, Control Data Corp. Pub No. 840003000 Rev. B (Jan 1977)
  2. PROSE – A General Purpose Higher Level Language, Calculus Operations Manual, Control Data Corp. Pub. No 840003200 Rev B (Jan. 1977)
  3. PROSE – A General Purpose Higher Level Language, Calculus Applications Guide, Control Data Corp. Pub No. 84000170 Rev. A (Jan 1977)
  4. PROSE – A General Purpose Higher Level Language, Time Sharing System Guide, Control Data Corp. Pub. No 84000160 Rev A (Jan. 1977)
  5. J.M. Thames, The Evolution of Synthetic Calculus: A Mathematical Technology for Advanced Architecture, in Proc. of the International Workshop on High-Level Language Computer Architecture, University of Maryland, 1982
  6. B. Krinsky and J. Thames, The Structure of Synthetic Calculus, A Programming Paradigm of Mathematical Design, in Proc. of the International Workshop on High Level Computer Architecture, University of Maryland, 1984
  7. J.M. Thames, Synthetic Calculus – A Paradigm of Mathematical Program Synthesis, in A. Griewank and G.F. Corliss, eds., Automatic Differentiation of Algorithms: Theory, Implementations, and Applications, SIAM, Philadelphia (1991)
  8. J.M. Thames, “SLANG—A Problem-Solving Language of Continuous Model Simulation and Optimization", ACM National Conference, San Francisco, 1969.
  9. J.D. McCully, “The Q Approach to Problem Solving”, Proceedings of the Fall Joint Computer Conference, 1969.
  10. R.N. Nilsen and W.J. Karplus, "Continuous-System Simulation Languages: State of the Art Survey" Annales de l'Association Internationale pour le Calcul analogique - No 1, Jan, 1974, p. 20
  11. J.M. Thames, Computing in calculus, Research/Development, (1975), pp. 24–30
  12. F.W. Pfeiffer, Some Advances Related to Nonlinear Programming, ACM Sigmap Bulletin, Issue 28, Jan 1980, pp. 15-21
  13. F.W. Pfeiffer, Automatic differentiation in PROSE, ACM SIGNUM Newsletter, 22 (1987), pp. 1–8
  14. R.E. Bellman and R.E. Kalaba, Quasilinearization and Nonlinear Boundary-Value Problems, The RAND Corporation, American Elsevier Publishing Co., New York, 1965, p. 125, p. 168