Flow-based programming

In computer programming, flow-based programming (FBP) is a programming paradigm that defines applications as networks of black box processes, which exchange data across predefined connections by message passing, where the connections are specified externally to the processes. These black box processes can be reconnected endlessly to form different applications without having to be changed internally. FBP is thus naturally component-oriented.

FBP is a particular form of dataflow programming based on bounded buffers, information packets with defined lifetimes, named ports, and separate definition of connections.

Introduction

Flow-based programming defines applications using the metaphor of a "data factory". It views an application not as a single, sequential process, which starts at a point in time, and then does one thing at a time until it is finished, but as a network of asynchronous processes communicating by means of streams of structured data chunks, called "information packets" (IPs). In this view, the focus is on the application data and the transformations applied to it to produce the desired outputs. The network is defined externally to the processes, as a list of connections which is interpreted by a piece of software, usually called the "scheduler".

The processes communicate by means of fixed-capacity connections. A connection is attached to a process by means of a port, which has a name agreed upon between the process code and the network definition. More than one process can execute the same piece of code. At any point in time, a given IP can only be "owned" by a single process, or be in transit between two processes. Ports may either be simple, or array-type, as used e.g. for the input port of the Collate component described below. It is the combination of ports with asynchronous processes that allows many long-running primitive functions of data processing, such as Sort, Merge, Summarize, etc., to be supported in the form of software black boxes.
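
To make these ideas concrete, here is a minimal sketch in Go (chosen for its buffered channels and goroutines; it is not based on any particular FBP runtime, and all names are illustrative) of a component written against named ports, with the fixed-capacity connections supplied externally:

package main

import (
	"fmt"
	"strings"
)

// An information packet; here simply a string payload.
type IP = string

// Ports maps the port names agreed between the component code and the
// network definition onto the connections (buffered channels) assigned
// to this process.
type Ports map[string]chan IP

// Upcase is a black-box component: it knows only its own port names
// ("IN" and "OUT"), not which processes it is connected to.
func Upcase(p Ports) {
	out := p["OUT"]
	for ip := range p["IN"] { // receive until the upstream end closes the connection
		out <- strings.ToUpper(ip) // send; blocks if the connection is full
	}
	close(out) // propagate end of stream
}

func main() {
	// The "network definition": two fixed-capacity connections and one process.
	in := make(chan IP, 5)  // bounded buffer of capacity 5
	out := make(chan IP, 5) // bounded buffer of capacity 5

	go Upcase(Ports{"IN": in, "OUT": out})

	go func() {
		for _, s := range []string{"alpha", "beta", "gamma"} {
			in <- s
		}
		close(in)
	}()

	for ip := range out {
		fmt.Println(ip)
	}
}

The component refers only to its own port names; the capacities of the connections and the wiring between processes are decided entirely by the part of the program playing the role of the network definition.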

Because FBP processes can continue executing as long as they have data to work on and somewhere to put their output, FBP applications generally run in less elapsed time than conventional programs, and make optimal use of all the processors on a machine, with no special programming required to achieve this. [1]

The network definition is usually diagrammatic, and is converted into a connection list in some lower-level language or notation. FBP is often a visual programming language at this level. More complex network definitions have a hierarchical structure, being built up from subnets with "sticky" connections. Many other flow-based languages/runtimes are built around more traditional programming languages; the most notable[citation needed] example is RaftLib, which uses C++ iostream-like operators to specify the flow graph.

FBP has much in common with the Linda [2] language in that it is, in Gelernter and Carriero's terminology, a "coordination language": [3] it is essentially language-independent. Indeed, given a scheduler written in a sufficiently low-level language, components written in different languages can be linked together in a single network. FBP thus lends itself to the concept of domain-specific languages or "mini-languages".

FBP exhibits "data coupling", described in the article on coupling as the loosest type of coupling between components. The concept of loose coupling is in turn related to that of service-oriented architectures, and FBP fits a number of the criteria for such an architecture, albeit at a more fine-grained level than most examples of this architecture.

FBP promotes a high-level, functional style of specification that simplifies reasoning about system behavior. An example of this is the distributed data flow model for constructively specifying and analyzing the semantics of distributed multi-party protocols.

History

Flow-based programming was invented by J. Paul Morrison in the early 1970s, and initially implemented in software for a Canadian bank. [4] FBP at its inception was strongly influenced by some IBM simulation languages of the period, in particular GPSS, but its roots go all the way back to Conway's seminal paper on what he called coroutines. [5]

FBP has undergone a number of name changes over the years: the original implementation was called AMPS (Advanced Modular Processing System). One large application in Canada went live in 1975, and, as of 2013, has been in continuous production use, running daily, for almost 40 years. Because IBM considered the ideas behind FBP "too much like a law of nature" to be patentable, they instead put the basic concepts of FBP into the public domain by means of a Technical Disclosure Bulletin, "Data Responsive Modular, Interleaved Task Programming System", [6] in 1971. [4] An article describing its concepts and experience using it was published in 1978 in the IBM Systems Journal under the name DSLM. [7] A second implementation was done as a joint project of IBM Canada and IBM Japan, under the name "Data Flow Development Manager" (DFDM), and was briefly marketed in Japan in the late '80s under the name "Data Flow Programming Manager".

Generally the concepts were referred to within IBM as "Data Flow", but this term was felt to be too general, and eventually the name "Flow-Based Programming" was adopted.

From the early '80s to 1993, J. Paul Morrison and IBM architect Wayne Stevens refined and promoted the concepts behind FBP. Stevens wrote several articles describing and supporting the FBP concept, and included material about it in several of his books. [8] [9] [non-primary source needed] [10] [non-primary source needed] In 1994 Morrison published a book describing FBP, providing empirical evidence that FBP led to reduced development times. [11]

Concepts

The following diagram shows the major entities of an FBP diagram (apart from the Information Packets). Such a diagram can be converted directly into a list of connections, which can then be executed by an appropriate engine (software or hardware).

Diagram: Simple FBP diagram

A, B and C are processes executing code components. O1, O2, and the two INs are ports connecting the connections M and N to their respective processes. It is permitted for processes B and C to be executing the same code, so each process must have its own set of working storage, control blocks, etc. Whether or not they do share code, B and C are free to use the same port names, as port names only have meaning within the components referencing them (and at the network level, of course).

M and N are what are often referred to as "bounded buffers", and have a fixed capacity in terms of the number of IPs that they can hold at any point in time.
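
The diagram can be expressed as a hedged Go sketch (the names A, B, C, O1, O2, IN, M and N come from the diagram; everything else is illustrative). In this sketch B and C execute the same component code, each with its own working storage, and M and N are buffered channels of fixed capacity:

package main

import (
	"fmt"
	"sync"
)

type IP = string
type Ports map[string]chan IP

// A generates IPs and splits them across its two output ports O1 and O2.
func A(p Ports) {
	for i := 1; i <= 6; i++ {
		ip := fmt.Sprintf("ip-%d", i)
		if i%2 == 0 {
			p["O1"] <- ip
		} else {
			p["O2"] <- ip
		}
	}
	close(p["O1"])
	close(p["O2"])
}

// Consume is the single piece of code executed by both process B and
// process C. The port name IN has meaning only inside this component;
// each process gets its own working storage (the local variable count).
func Consume(name string, p Ports) {
	count := 0
	for ip := range p["IN"] {
		count++
		fmt.Printf("%s received %s (total %d)\n", name, ip, count)
	}
}

func main() {
	// Connections M and N: bounded buffers with a fixed capacity of 3 IPs.
	M := make(chan IP, 3)
	N := make(chan IP, 3)

	var wg sync.WaitGroup
	wg.Add(3)
	go func() { defer wg.Done(); A(Ports{"O1": M, "O2": N}) }()
	go func() { defer wg.Done(); Consume("B", Ports{"IN": M}) }()
	go func() { defer wg.Done(); Consume("C", Ports{"IN": N}) }()
	wg.Wait()
}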

The concept of ports is what allows the same component to be used at more than one place in the network. In combination with a parametrization ability, called Initial Information Packets (IIPs), ports provide FBP with a component reuse ability, making FBP a component-based architecture. FBP thus exhibits what Raoul de Campo and Nate Edwards of IBM Research have termed configurable modularity.

Information Packets or IPs are allocated in what might be called "IP space" (just as Linda's tuples are allocated in "tuple space"), and have a well-defined lifetime until they are disposed of and their space is reclaimed - in FBP this must be an explicit action on the part of an owning process. IPs traveling across a given connection (actually it is their "handles" that travel) constitute a "stream", which is generated and consumed asynchronously - this concept thus has similarities to the lazy cons concept described in the 1976 article by Friedman and Wise. [12]

IPs are usually structured chunks of data - some IPs, however, may not contain any real data, but are used simply as signals. An example of this is "bracket IPs", which can be used to group data IPs into sequential patterns within a stream, called "substreams". Substreams may in turn be nested. IPs may also be chained together to form "IP trees", which travel through the network as single objects.
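
One illustrative way to model the different kinds of IP in code (not taken from any specific FBP implementation) is a tagged structure in which a Kind field distinguishes data IPs from bracket IPs and a Children field allows IPs to be chained into trees:

package main

import "fmt"

// Kind distinguishes ordinary data IPs from the signal-only bracket IPs
// that open and close substreams.
type Kind int

const (
	Data Kind = iota
	OpenBracket
	CloseBracket
)

// IP is a structured chunk of data; Children lets IPs be chained into
// "IP trees" that travel through the network as single objects.
type IP struct {
	Kind     Kind
	Payload  any
	Children []*IP
}

func main() {
	// A small substream: ( "m1" "d11" "d12" )
	sub := []*IP{
		{Kind: OpenBracket},
		{Kind: Data, Payload: "m1"},
		{Kind: Data, Payload: "d11"},
		{Kind: Data, Payload: "d12"},
		{Kind: CloseBracket},
	}
	for _, ip := range sub {
		fmt.Printf("%+v\n", ip)
	}
}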

The system of connections and processes described above can be "ramified" to any size. During the development of an application, monitoring processes may be added between pairs of processes, processes may be "exploded" to subnets, or simulations of processes may be replaced by the real process logic. FBP therefore lends itself to rapid prototyping.

This is really an assembly line image of data processing: the IPs travelling through a network of processes may be thought of as widgets travelling from station to station in an assembly line. "Machines" may easily be reconnected, taken off line for repair, replaced, and so on. Oddly enough, this image is very similar to that of unit record equipment that was used to process data before the days of computers, except that decks of cards had to be hand-carried from one machine to another.

Implementations of FBP may be non-preemptive or preemptive - the earlier implementations tended to be non-preemptive (mainframe and C language), whereas the latest Java implementation (see below) uses the Java Thread class and is preemptive.

Examples

The telegram problem

FBP components often form complementary pairs. This example uses two such pairs. The problem seems very simple when described in words, but it is surprisingly difficult to solve using conventional procedural logic. The task, called the "telegram problem" and originally described by Peter Naur, is to write a program which accepts lines of text and generates output lines containing as many words as possible, where the number of characters in each line does not exceed a certain length. The words may not be split, and we assume no word is longer than the size of the output lines. This is analogous to the word-wrapping problem in text editors. [13]

In conventional logic, the programmer rapidly discovers that neither the input nor the output structures can be used to drive the call hierarchy of control flow. In FBP, on the other hand, the problem description itself suggests a solution:

Here is a natural solution in FBP (there is no single "correct" solution in FBP, but this one fits the problem well):

Diagram: Peter Naur's telegram problem

where DC and RC stand for "DeCompose" and "ReCompose", respectively.

As mentioned above, Initial Information Packets (IIPs) can be used to specify parametric information such as the desired output record length (required by the rightmost two components), or file names. IIPs are data chunks associated with a port in the network definition which become "normal" IPs when a "receive" is issued for the relevant port.
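
A hedged Go sketch of this network is shown below. The component names DC and RC follow the diagram; the channel-based wiring, and the delivery of the maximum line length as a plain parameter standing in for an IIP, are illustrative simplifications:

package main

import (
	"fmt"
	"strings"
)

type IP = string

// DC (DeCompose): splits each incoming line into one IP per word.
func DC(in <-chan IP, out chan<- IP) {
	for line := range in {
		for _, w := range strings.Fields(line) {
			out <- w
		}
	}
	close(out)
}

// RC (ReCompose): packs word IPs into output lines of at most maxLen
// characters. maxLen arrives as an Initial Information Packet (IIP),
// modelled here simply as a parameter.
func RC(maxLen int, in <-chan IP, out chan<- IP) {
	line := ""
	for w := range in {
		switch {
		case line == "":
			line = w
		case len(line)+1+len(w) <= maxLen:
			line += " " + w
		default:
			out <- line
			line = w
		}
	}
	if line != "" {
		out <- line
	}
	close(out)
}

func main() {
	lines := make(chan IP, 10)  // connection: reader -> DC
	words := make(chan IP, 10)  // connection: DC -> RC
	output := make(chan IP, 10) // connection: RC -> writer

	go func() {
		lines <- "this is a stream of text that must be"
		lines <- "re-flowed into lines of limited length"
		close(lines)
	}()
	go DC(lines, words)
	go RC(20, words, output) // IIP: maximum output line length of 20

	for l := range output {
		fmt.Println(l)
	}
}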

Batch update

This type of program involves passing a file of "details" (changes, adds and deletes) against a "master file", and producing (at least) an updated master file, and one or more reports. Update programs are generally quite hard to code using synchronous, procedural code, as two (sometimes more) input streams have to be kept synchronized, even though there may be masters without corresponding details, or vice versa.

Canonical "batch update" structure FBP - Standard Update.png
Canonical "batch update" structure

In FBP, a reusable component (Collate), based on the unit record idea of a Collator, makes writing this type of application much easier as Collate merges the two streams and inserts bracket IPs to indicate grouping levels, significantly simplifying the downstream logic. Suppose that one stream ("masters" in this case) consists of IPs with key values of 1, 2 and 3, and the second stream IPs ("details") have key values of 11, 12, 21, 31, 32, 33 and 41, where the first digit corresponds to the master key values. Using bracket characters to represent "bracket" IPs, the collated output stream will be as follows:

( m1 d11 d12 ) ( m2 d21 ) ( m3 d31 d32 d33 ) ( d41 )

As there was no master with a value of 4, the last group consists of a single detail (plus brackets).

The structure of the above stream can be described succinctly using a BNF-like notation such as

{ ( [m] d* ) }*

Collate is a reusable black box which only needs to know where the control fields are in its incoming IPs (even this is not strictly necessary as transformer processes can be inserted upstream to place the control fields in standard locations), and can in fact be generalized to any number of input streams, and any depth of bracket nesting. Collate uses an array-type port for input, allowing a variable number of input streams.
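
The following Go sketch shows a deliberately simplified Collate for exactly two key-ordered input streams (a real Collate uses an array-type input port and supports arbitrary bracket nesting; the field layout and wiring here are illustrative):

package main

import "fmt"

// An IP carrying the control (key) field that Collate needs to know about.
type IP struct {
	Key  int
	Text string
}

// Collate merges a key-ordered master stream and a key-ordered detail
// stream, inserting "(" and ")" bracket IPs around each key group.
func Collate(masters, details <-chan IP, out chan<- IP) {
	m, mok := <-masters
	d, dok := <-details
	for mok || dok {
		// The next group is the smallest key present on either input.
		key := 0
		switch {
		case mok && (!dok || m.Key <= d.Key):
			key = m.Key
		default:
			key = d.Key
		}
		out <- IP{Text: "("}
		if mok && m.Key == key {
			out <- m
			m, mok = <-masters
		}
		for dok && d.Key == key {
			out <- d
			d, dok = <-details
		}
		out <- IP{Text: ")"}
	}
	close(out)
}

func main() {
	masters := make(chan IP, 5)
	details := make(chan IP, 10)
	out := make(chan IP, 20)

	go func() {
		for _, k := range []int{1, 2, 3} {
			masters <- IP{Key: k, Text: fmt.Sprintf("m%d", k)}
		}
		close(masters)
	}()
	go func() {
		for _, k := range []int{11, 12, 21, 31, 32, 33, 41} {
			details <- IP{Key: k / 10, Text: fmt.Sprintf("d%d", k)}
		}
		close(details)
	}()
	go Collate(masters, details, out)

	for ip := range out {
		fmt.Print(ip.Text, " ")
	}
	fmt.Println()
	// Prints: ( m1 d11 d12 ) ( m2 d21 ) ( m3 d31 d32 d33 ) ( d41 )
}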

Multiplexing processes

Flow-based programming supports process multiplexing in a very natural way. Since components are read-only, any number of instances of a given component ("processes") can run asynchronously with each other.

Diagram: Example of multiplexing

When computers usually had a single processor, this was useful when a lot of I/O was going on; now that machines usually have multiple processors, it is becoming useful when processes are CPU-intensive as well. The diagram in this section shows a single "Load Balancer" process distributing data between three processes, labeled S1, S2 and S3, respectively, which are instances of a single component, and which in turn feed into a single process on a "first-come, first-served" basis.
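
A Go sketch of this arrangement follows (the instance names S1 to S3 come from the diagram; routing each IP to the instance with the fewest IPs waiting is one simple load-balancing rule among several, and the rest of the wiring is illustrative):

package main

import (
	"fmt"
	"sync"
)

type IP = string

// worker is one instance (process) of the same read-only component code.
func worker(name string, in <-chan IP, out chan<- IP, wg *sync.WaitGroup) {
	defer wg.Done()
	for ip := range in {
		out <- fmt.Sprintf("%s processed %s", name, ip)
	}
}

func main() {
	const capacity = 4
	input := make(chan IP, capacity)
	merged := make(chan IP, capacity) // downstream process reads first-come, first-served

	// Three instances (S1, S2, S3) of the same component.
	inputs := []chan IP{
		make(chan IP, capacity),
		make(chan IP, capacity),
		make(chan IP, capacity),
	}
	var workers sync.WaitGroup
	workers.Add(len(inputs))
	for i, ch := range inputs {
		go worker(fmt.Sprintf("S%d", i+1), ch, merged, &workers)
	}

	// Load balancer: route each IP to the instance with the fewest IPs
	// currently waiting on its input connection.
	go func() {
		for ip := range input {
			target := inputs[0]
			for _, ch := range inputs[1:] {
				if len(ch) < len(target) {
					target = ch
				}
			}
			target <- ip
		}
		for _, ch := range inputs {
			close(ch)
		}
	}()

	// Close the merged connection once every worker has finished.
	go func() { workers.Wait(); close(merged) }()

	// Feed some IPs in and print whatever comes out, in completion order.
	go func() {
		for i := 1; i <= 9; i++ {
			input <- fmt.Sprintf("request-%d", i)
		}
		close(input)
	}()

	for result := range merged {
		fmt.Println(result)
	}
}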

Simple interactive network

Diagram: Schematic of general interactive application

In this general schematic, requests (transactions) coming from users enter the diagram at the upper left, and responses are returned at the lower left. The "back ends" (on the right side) communicate with systems at other sites, e.g. using CORBA, MQSeries, etc. The cross-connections represent requests that do not need to go to the back ends, or requests that have to cycle through the network more than once before being returned to the user.

As different requests may use different back-ends, and may require differing amounts of time for the back-ends (if used) to process them, provision must be made to relate returned data to the appropriate requesting transactions, e.g. by using hash tables or caches.

The above diagram is schematic in the sense that the final application may contain many more processes: processes may be inserted between other processes to manage caches, display connection traffic, monitor throughput, etc. Also the blocks in the diagram may represent "subnets" - small networks with one or more open connections.

Comparison with other paradigms and methodologies

Jackson Structured Programming (JSP) and Jackson System Development (JSD)

This methodology assumes that a program must be structured as a single procedural hierarchy of subroutines. Its starting point is to describe the application as a set of "main lines", based on the input and output data structures. One of these "main lines" is then chosen to drive the whole program, and the others are required to be "inverted" to turn them into subroutines (hence the name "Jackson inversion"). This sometimes results in what is called a "clash", requiring the program to be split into multiple programs or coroutines. When using FBP, this inversion process is not required, as every FBP component can be considered a separate "main line".

FBP and JSP share the concept of treating a program (or some components) as a parser of an input stream.

In Jackson's later work, Jackson System Development (JSD), the ideas were developed further. [14] [15]

In JSD the design is maintained as a network design until the final implementation stage. The model is then transformed into a set of sequential processes according to the number of available processors. Jackson discusses the possibility of directly executing the network model that exists prior to this step, in section 1.3 of his book (italics added):

The specification produced at the end of the System Timing step is, in principle, capable of direct execution. The necessary environment would contain a processor for each process, a device equivalent to an unbounded buffer for each data stream, and some input and output devices where the system is connected to the real world. Such an environment could, of course, be provided by suitable software running on a sufficiently powerful machine. Sometimes, such direct execution of the specification will be possible, and may even be a reasonable choice. [15]

FBP was recognized by M. A. Jackson as an approach that follows his method of "program decomposition into sequential processes communicating by a coroutine-like mechanism". [16]

Applicative programming

W.B. Ackerman defines an applicative language as one which does all of its processing by means of operators applied to values. [17] The earliest known applicative language was LISP.

An FBP component can be regarded as a function transforming its input stream(s) into its output stream(s). These functions are then combined to make more complex transformations, as shown here:

Diagram: Two functions feeding one

If we label streams, as shown, with lower case letters, then the above diagram can be represented succinctly as follows:

c = G(F(a),F(b));

Just as F can appear twice in the functional notation because it operates only on values and therefore has no side effects, in FBP two instances of a given component may run concurrently with each other, so FBP components must not have side effects either. Functional notation could clearly be used to represent at least a part of an FBP network.
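
The correspondence can be sketched in Go (F and G are arbitrary illustrative stream transformers): two goroutines run the same component F, each on its own stream, and a third runs G over their outputs, mirroring the way the expression c = G(F(a),F(b)) reuses F because it has no side effects:

package main

import "fmt"

// F is a side-effect-free stream transformer: here it doubles each value.
func F(in <-chan int, out chan<- int) {
	for v := range in {
		out <- 2 * v
	}
	close(out)
}

// G combines its two input streams pairwise: here it adds them.
func G(in1, in2 <-chan int, out chan<- int) {
	for v1 := range in1 {
		v2 := <-in2
		out <- v1 + v2
	}
	close(out)
}

// source turns a slice into a closed, pre-filled stream.
func source(values []int) <-chan int {
	ch := make(chan int, len(values))
	for _, v := range values {
		ch <- v
	}
	close(ch)
	return ch
}

func main() {
	a := source([]int{1, 2, 3})
	b := source([]int{10, 20, 30})
	fa := make(chan int, 3)
	fb := make(chan int, 3)
	c := make(chan int, 3)

	// Two concurrent instances of the same component F, plus one G:
	// the network form of c = G(F(a), F(b)).
	go F(a, fa)
	go F(b, fb)
	go G(fa, fb, c)

	for v := range c {
		fmt.Println(v) // 22, 44, 66
	}
}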

The question then arises whether FBP components can themselves be expressed using functional notation. W.H. Burge showed how stream expressions can be developed using a recursive, applicative style of programming, but this work was in terms of (streams of) atomic values. [18] In FBP, it is necessary to be able to describe and process structured data chunks (FBP IPs).

Furthermore, most applicative systems assume that all the data is available in memory at the same time, whereas FBP applications need to be able to process long-running streams of data while still using finite resources. Friedman and Wise suggested a way to do this by adding the concept of "lazy cons" to Burge's work. This removed the requirement that both of the arguments of "cons" be available at the same instant of time. "Lazy cons" does not actually build a stream until both of its arguments are realized - before that it simply records a "promise" to do this. This allows a stream to be dynamically realized from the front, but with an unrealized back end. The end of the stream stays unrealized until the very end of the process, while the beginning is an ever-lengthening sequence of items.
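
The "lazy cons" idea can be sketched in Go with a thunk standing in for the unrealized tail of a stream (an illustration only; Friedman and Wise worked in an applicative setting, and lazy languages build this mechanism in):

package main

import "fmt"

// Stream is a lazily realized list: Head is available now, while Tail is
// a "promise" that builds the rest of the stream only when it is forced.
type Stream struct {
	Head int
	Tail func() *Stream // nil means the stream ends here
}

// Cons records a promise instead of building the whole stream at once.
func Cons(head int, tail func() *Stream) *Stream {
	return &Stream{Head: head, Tail: tail}
}

// From produces the unbounded stream n, n+1, n+2, ... realized from the
// front, with the back end left unrealized.
func From(n int) *Stream {
	return Cons(n, func() *Stream { return From(n + 1) })
}

func main() {
	// Realize only the first five elements of an endless stream.
	s := From(1)
	for i := 0; i < 5; i++ {
		fmt.Println(s.Head)
		s = s.Tail() // force the promise for the next element
	}
}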

Linda

Many of the concepts in FBP seem to have been discovered independently in different systems over the years. Linda, mentioned above, is one such. The difference between the two techniques is illustrated by the Linda "school of piranhas" load balancing technique - in FBP, this requires an extra "load balancer" component which routes requests to the component in a list which has the smallest number of IPs waiting to be processed. Clearly FBP and Linda are closely related, and one could easily be used to simulate the other.

Object-oriented programming

An object in OOP can be described as a semi-autonomous unit comprising both information and behaviour. Objects communicate by means of "method calls", which are essentially subroutine calls, done indirectly via the class to which the receiving object belongs. The object's internal data can only be accessed by means of method calls, so this is a form of information hiding or "encapsulation". Encapsulation, however, predates OOP - David Parnas wrote one of the seminal articles on it in the early 70s [19] - and is a basic concept in computing. Encapsulation is the very essence of an FBP component, which may be thought of as a black box, performing some conversion of its input data into its output data. In FBP, part of the specification of a component is the data formats and stream structures that it can accept, and those it will generate. This constitutes a form of design by contract. In addition, the data in an IP can only be accessed directly by the currently owning process. Encapsulation can also be implemented at the network level, by having outer processes protect inner ones.

A paper by C. Ellis and S. Gibbs distinguishes between active objects and passive objects. [20] Passive objects comprise information and behaviour, as stated above, but they cannot determine the timing of this behaviour. Active objects on the other hand can do this. In their article Ellis and Gibbs state that active objects have much more potential for the development of maintainable systems than do passive objects. An FBP application can be viewed as a combination of these two types of object, where FBP processes would correspond to active objects, while IPs would correspond to passive objects.

Actor model

FBP views Carl Hewitt's actor as an asynchronous process with two ports: one for input messages and one for control signals. A control signal is emitted by the actor itself after each round of execution. The purpose of this signal is to prevent parallel execution of the actor's body, so that the fields of the actor object can be accessed without synchronization.
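
A rough Go sketch of this reading of an actor follows (illustrative only, not Hewitt's formulation or any specific actor library): the process consumes a control token before each round of its body and emits a new one afterwards, so only one round executes at a time and the actor's fields need no further synchronization:

package main

import (
	"fmt"
	"sync"
)

// actor runs as an FBP-style process with two input ports: one for
// messages and one for control signals. It emits a control signal to
// itself after each round, so at most one round of the body executes at
// a time and the counter field needs no locking.
func actor(messages <-chan string, control chan struct{}, wg *sync.WaitGroup) {
	defer wg.Done()
	counter := 0              // the actor's private field
	control <- struct{}{}     // initial control signal: ready for the first round
	for msg := range messages {
		<-control             // wait until the previous round has finished
		counter++
		fmt.Printf("round %d: %s\n", counter, msg)
		control <- struct{}{} // signal that this round is complete
	}
}

func main() {
	messages := make(chan string, 4)
	control := make(chan struct{}, 1) // capacity 1: at most one outstanding signal

	var wg sync.WaitGroup
	wg.Add(1)
	go actor(messages, control, &wg)

	for _, m := range []string{"hello", "world", "bye"} {
		messages <- m
	}
	close(messages)
	wg.Wait()
}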

See also

Dataflow
Dataflow programming
Data-flow diagram
Jackson structured programming
Visual programming language
Prograph
Pipeline (software)
Kahn process networks
Concurrent computing
Stream (computing)
Stream processing
Binary Modular Dataflow Machine
Object-oriented design
Structured analysis
Function model
VisualAp
Node graph architecture
Distributed Data Management Architecture
Node-RED

References

  1. "Flow-based Programming".
  2. Carriero, Nicholas; Gelernter, David (1989). "Linda in context". Communications of the ACM. 32 (4): 444–458. doi: 10.1145/63334.63337 . S2CID   5900105.
  3. Gelernter, David; Carriero, Nicholas (1992). "Coordination languages and their significance". Communications of the ACM. 35 (2): 97–107. doi:10.1145/129630.129635. S2CID   7748555.
  4. 1 2 Gabe Stein (August 2013). "How an Arcane Coding Method From 1970s Banking Software Could Save the Sanity of Web Developers Everywhere" . Retrieved 24 January 2016.
  5. Conway, Melvin E. (1963). "Design of a separable transition-diagram compiler". Communications of the ACM. 6 (7): 396–408. doi: 10.1145/366663.366704 . S2CID   10559786.
  6. J. Paul Morrison, Data Responsive Modular, Interleaved Task Programming System, IBM Technical Disclosure Bulletin, Vol. 13, No. 8, 2425-2426, January 1971
  7. Morrison, J. P. (1978). "Data Stream Linkage Mechanism". IBM Systems Journal. 17 (4): 383–408. doi:10.1147/sj.174.0383.
  8. Stevens, W. P. (1982). "How data flow can improve application development productivity". IBM Systems Journal. 21 (2): 162–178. doi:10.1147/sj.212.0162.
  9. W.P. Stevens, Using Data Flow for Application Development, Byte, June 1985
  10. W.P. Stevens, Software Design - Concepts and Methods, Practical Software Engineering Series, Ed. Allen Macro, Prentice Hall, 1990, ISBN   0-13-820242-7
  11. Johnston, Wesley M.; Hanna, J. R. Paul; Millar, Richard J. (2004). "Advances in dataflow programming languages". ACM Computing Surveys. 36 (1): 1–34. CiteSeerX   10.1.1.99.7265 . doi:10.1145/1013208.1013209. S2CID   5257722.
  12. D.P. Friedman and D.S. Wise, CONS should not evaluate its arguments, Automata, Languages and Programming, Edinburgh University Press, Edinburgh, 1976
  13. "Peter Naur's "Telegram Problem"". Archived from the original on 2014-09-06. Retrieved 2014-09-06.
  14. "Programming" by M. A. Jackson, published in Proceedings of Workshop on Software in High-Energy Physics, pages 1-12, CERN, Geneva, 4–6 October 1982
  15. 1 2 "A System development method Archived 2012-02-06 at the Wayback Machine " by M. A. Jackson, published in Tools and notions for program construction: An advanced course, Cambridge University Press, 1982
  16. "JSP In Perspective" Michael Jackson; JSP in Perspective; in Software Pioneers: Contributions to Software Engineering; Manfred Broy, Ernst Denert eds; Springer, 2002
  17. W.B. Ackerman, Data Flow Languages, Proceedings National Computer Conference, pp. 1087-1095, 1979
  18. W.H. Burge, Recursive Programming Techniques, Addison-Wesley, Reading, MA, 1975
  19. Parnas, D. L. (1972). "On the criteria to be used in decomposing systems into modules". Communications of the ACM. 15 (12): 1053–1058. doi: 10.1145/361598.361623 . S2CID   53856438.
  20. C. Ellis and S. Gibbs, Active Objects: Realities and Possibilities, in Object-Oriented Concepts, Databases, and Applications, eds. W. Kim and F.H. Lochovsky, ACM Press, Addison-Wesley, 1989