Data-flow analysis


Data-flow analysis is a technique for gathering information about the possible set of values calculated at various points in a computer program. A program's control-flow graph (CFG) is used to determine those parts of a program to which a particular value assigned to a variable might propagate. The information gathered is often used by compilers when optimizing a program. A canonical example of a data-flow analysis is reaching definitions.


A simple way to perform data-flow analysis of programs is to set up data-flow equations for each node of the control-flow graph and solve them by repeatedly calculating the output from the input locally at each node until the whole system stabilizes, i.e., it reaches a fixpoint. This general approach, also known as Kildall's method, was developed by Gary Kildall while teaching at the Naval Postgraduate School. [1] [2] [3] [4]

Basic principles

Data-flow analysis is the process of collecting information about the way the variables are defined and used in the program. It attempts to obtain particular information at each point in a procedure. Usually, it is enough to obtain this information at the boundaries of basic blocks, since from that it is easy to compute the information at points in the basic block. In forward flow analysis, the exit state of a block is a function of the block's entry state. This function is the composition of the effects of the statements in the block. The entry state of a block is a function of the exit states of its predecessors. This yields a set of data-flow equations:

For each block b:

$\mathrm{out}(b) = \mathrm{trans}_b(\mathrm{in}(b))$

$\mathrm{in}(b) = \mathrm{join}_{p \in \mathrm{pred}(b)}(\mathrm{out}(p))$

In this, $\mathrm{trans}_b$ is the transfer function of the block b. It works on the entry state $\mathrm{in}(b)$, yielding the exit state $\mathrm{out}(b)$. The join operation combines the exit states of the predecessors $p \in \mathrm{pred}(b)$ of b, yielding the entry state of b.

After solving this set of equations, the entry and/or exit states of the blocks can be used to derive properties of the program at the block boundaries. The transfer function of each statement separately can be applied to get information at a point inside a basic block.

Each particular type of data-flow analysis has its own specific transfer function and join operation. Some data-flow problems require backward flow analysis. This follows the same plan, except that the transfer function is applied to the exit state yielding the entry state, and the join operation works on the entry states of the successors to yield the exit state.

The entry point (in forward flow) plays an important role: Since it has no predecessors, its entry state is well defined at the start of the analysis. For instance, the set of local variables with known values is empty. If the control-flow graph does not contain cycles (there were no explicit or implicit loops in the procedure) solving the equations is straightforward. The control-flow graph can then be topologically sorted; running in the order of this sort, the entry states can be computed at the start of each block, since all predecessors of that block have already been processed, so their exit states are available. If the control-flow graph does contain cycles, a more advanced algorithm is required.

An iterative algorithm

The most common way of solving the data-flow equations is by using an iterative algorithm. It starts with an approximation of the in-state of each block. The out-states are then computed by applying the transfer functions on the in-states. From these, the in-states are updated by applying the join operations. The latter two steps are repeated until we reach the so-called fixpoint: the situation in which the in-states (and the out-states in consequence) do not change.

A basic algorithm for solving data-flow equations is the round-robin iterative algorithm:

for i ← 1 to N
initialize node i
while (sets are still changing)
for i ← 1 to N
recompute sets at node i
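
A minimal Python sketch of this round-robin scheme for a forward, union-based analysis follows. The CFG representation (preds, a map from each node to its predecessors) and the per-node transfer functions are illustrative assumptions, not part of the pseudocode above.

# Round-robin iterative solver sketch for a forward "may" analysis with set union as join.
# `nodes` is a list of CFG nodes, `preds` maps a node to its predecessors,
# `transfer` maps a node to a function from an input set to an output set.
def round_robin(nodes, preds, transfer):
    in_state = {n: set() for n in nodes}      # initialize node i
    out_state = {n: set() for n in nodes}
    changed = True
    while changed:                            # while (sets are still changing)
        changed = False
        for n in nodes:                       # for i <- 1 to N: recompute sets at node i
            new_in = set()
            for p in preds.get(n, []):
                new_in |= out_state[p]        # join: union of the predecessors' out-states
            new_out = transfer[n](new_in)
            if new_in != in_state[n] or new_out != out_state[n]:
                in_state[n], out_state[n] = new_in, new_out
                changed = True
    return in_state, out_state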

Convergence

To be usable, the iterative approach should actually reach a fixpoint. This can be guaranteed by imposing constraints on the combination of the value domain of the states, the transfer functions and the join operation.

The value domain should be a partial order with finite height (i.e., there are no infinite ascending chains $x_1 < x_2 < \cdots$). The combination of the transfer function and the join operation should be monotonic with respect to this partial order. Monotonicity ensures that on each iteration the value will either stay the same or will grow larger, while finite height ensures that it cannot grow indefinitely. Thus we will ultimately reach a situation where T(x) = x for all x, which is the fixpoint.
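
Stated compactly, with $\le$ denoting the domain's partial order and $f$ the combined update, these are the two standard requirements (included here only for concreteness):

\[
x \le y \;\Rightarrow\; f(x) \le f(y) \qquad \text{(monotonicity)}
\]
\[
\text{every chain } x_1 < x_2 < x_3 < \cdots \text{ is finite} \qquad \text{(finite height)}
\]

Together they guarantee that repeated application of $f$ stops changing the states after finitely many iterations, i.e., a fixpoint is reached.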

The work list approach

It is easy to improve on the algorithm above by noticing that the in-state of a block will not change if the out-states of its predecessors don't change. Therefore, we introduce a work list: a list of blocks that still need to be processed. Whenever the out-state of a block changes, we add its successors to the work list. In each iteration, a block is removed from the work list. Its out-state is computed. If the out-state changed, the block's successors are added to the work list. For efficiency, a block should not be in the work list more than once.

The algorithm is started by putting information-generating blocks in the work list. It terminates when the work list is empty.
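
A Python sketch of the forward worklist variant is shown below (same assumed CFG maps and union join as in the round-robin sketch; for simplicity it seeds the list with every block rather than only the information-generating ones).

from collections import deque

# Worklist solver sketch for a forward analysis with set union as join.
def worklist_forward(nodes, preds, succs, transfer):
    in_state = {n: set() for n in nodes}
    out_state = {n: set() for n in nodes}
    worklist = deque(nodes)
    on_list = set(nodes)                      # ensure a block is in the list at most once
    while worklist:
        n = worklist.popleft()
        on_list.discard(n)
        new_in = set()
        for p in preds.get(n, []):
            new_in |= out_state[p]            # join over the predecessors
        in_state[n] = new_in
        new_out = transfer[n](new_in)
        if new_out != out_state[n]:           # out-state changed: successors must be revisited
            out_state[n] = new_out
            for s in succs.get(n, []):
                if s not in on_list:
                    worklist.append(s)
                    on_list.add(s)
    return in_state, out_state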

Ordering

The efficiency of iteratively solving data-flow equations is influenced by the order in which the nodes are visited. [5] Furthermore, it depends on whether the data-flow equations are used for forward or backward data-flow analysis over the CFG. Intuitively, in a forward flow problem, it would be fastest if all predecessors of a block have been processed before the block itself, since then the iteration will use the latest information. In the absence of loops it is possible to order the blocks in such a way that the correct out-states are computed by processing each block only once.

In the following, a few iteration orders for solving data-flow equations are discussed (iteration order over a CFG is the analogue of tree traversal over a tree).
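
For forward problems the most common choice is reverse postorder, which visits a node before its successors whenever possible (back edges excepted) and thus approximates the ideal "all predecessors first" order. A small sketch of computing it follows; the succs adjacency map is an assumed representation.

# Compute reverse postorder of a CFG by depth-first search from the entry node.
def reverse_postorder(entry, succs):
    visited, postorder = set(), []

    def dfs(n):
        visited.add(n)
        for s in succs.get(n, []):
            if s not in visited:
                dfs(s)
        postorder.append(n)                   # a node is appended after all its successors

    dfs(entry)
    return list(reversed(postorder))          # roughly "predecessors before successors"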

Initialization

The initial value of the in-states is important to obtain correct and accurate results. If the results are used for compiler optimizations, they should provide conservative information, i.e. when applying the information, the program should not change semantics. The iteration of the fixpoint algorithm will take the values in the direction of the maximum element. Initializing all blocks with the maximum element is therefore not useful. At least one block starts in a state with a value less than the maximum. The details depend on the data-flow problem. If the minimum element represents totally conservative information, the results can be used safely even during the data-flow iteration. If it represents the most accurate information, fixpoint should be reached before the results can be applied.

Examples

The following are examples of properties of computer programs that can be calculated by data-flow analysis. Note that the properties calculated by data-flow analysis are typically only approximations of the real properties. This is because data-flow analysis operates on the syntactical structure of the CFG without simulating the exact control flow of the program. However, to still be useful in practice, a data-flow analysis algorithm is typically designed to calculate an over-approximation or an under-approximation, respectively, of the real program properties.

Forward analysis

The reaching definition analysis calculates for each program point the set of definitions that may potentially reach this program point.

1: if b == 4 then
2:    a = 5;
3: else
4:    a = 3;
5: endif
6:
7: if a < 4 then
8:    ...

The reaching definition of variable a at line 7 is the set of assignments a = 5 at line 2 and a = 3 at line 4.
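
As a sketch of how this can be computed as a forward gen/kill problem: the blocks, edges and definition labels d1/d2 below are an illustrative encoding of the snippet above, not part of the original text.

# Reaching definitions for the branch example, solved as a forward gen/kill analysis.
blocks = ["entry", "then", "else", "after"]    # entry: line 1, then: line 2, else: line 4, after: line 7
preds  = {"entry": [], "then": ["entry"], "else": ["entry"], "after": ["then", "else"]}
gen    = {"entry": set(), "then": {"d1: a = 5"}, "else": {"d2: a = 3"}, "after": set()}
kill   = {"entry": set(), "then": {"d2: a = 3"}, "else": {"d1: a = 5"}, "after": set()}

in_state  = {b: set() for b in blocks}
out_state = {b: set() for b in blocks}
changed = True
while changed:
    changed = False
    for b in blocks:
        new_in = set()
        for p in preds[b]:
            new_in |= out_state[p]             # join: union of predecessors' out-states
        new_out = gen[b] | (new_in - kill[b])  # out(b) = gen(b) union (in(b) minus kill(b))
        if new_in != in_state[b] or new_out != out_state[b]:
            in_state[b], out_state[b] = new_in, new_out
            changed = True

print(in_state["after"])                       # both definitions of a reach line 7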

Backward analysis

The live variable analysis calculates for each program point the variables that may be potentially read afterwards before their next write update. The result is typically used by dead code elimination to remove statements that assign to a variable whose value is not used afterwards.

The in-state of a block is the set of variables that are live at the start of the block; the out-state is the set of variables that are live at the end of the block. The out-state is computed as the union of the in-states of the block's successors. The transfer function is applied backwards over the block's statements: variables written by a statement are killed (removed from the set of live variables), and variables read by it are added to the set.

Initial code:
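
The original listing is not reproduced here; the following pseudocode is a reconstruction consistent with the blocks b1–b3 and the live sets in the table below.

b1: a = 3;
    b = 5;
    d = 4;
    x = 100;
    if a > b then
b2:    c = a + b;
       d = 2;
b3: endif
    c = 4;
    return b * d + c;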

Backward analysis:
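
The same code annotated with the in- and out-states computed by the backward analysis (the annotations are likewise a reconstruction matching the table below):

// in: {}
b1: a = 3;
    b = 5;
    d = 4;
    x = 100;             // x is never read later, so it is not live
    if a > b then
// out: {a, b, d}        // union of in(b2) = {a, b} and in(b3) = {b, d}

// in: {a, b}
b2:    c = a + b;        // this c is overwritten in b3 before being read
       d = 2;
// out: {b, d}

// in: {b, d}
b3: endif
    c = 4;
    return b * d + c;
// out: {}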

The in-state of b3 only contains b and d, since c has been written. The out-state of b1 is the union of the in-states of b2 and b3. The definition of c in b2 can be removed, since c is not live immediately after the statement.

Solving the data-flow equations starts with initializing all in-states and out-states to the empty set. The work list is initialized by inserting the exit point (b3) in the work list (typical for backward flow). Its computed in-state differs from the previous one, so its predecessors b1 and b2 are inserted and the process continues. The progress is summarized in the table below.

processing   out-state   old in-state   new in-state   work list
b3           {}          {}             {b, d}         (b1, b2)
b1           {b, d}      {}             {}             (b2)
b2           {b, d}      {}             {a, b}         (b1)
b1           {a, b, d}   {}             {}             ()

Note that b1 was entered in the list before b2, which forced processing b1 twice (b1 was re-entered as predecessor of b2). Inserting b2 before b1 would have allowed earlier completion.

Initializing with the empty set is an optimistic initialization: all variables start out as dead. Note that the out-states cannot shrink from one iteration to the next, although the out-state can be smaller than the in-state. This can be seen from the fact that after the first iteration the out-state can only change by a change of the in-state. Since the in-state starts as the empty set, it can only grow in further iterations.

Other approaches

Several modern compilers use static single-assignment form as the method for analysis of variable dependencies. [6]

In 2002, Markus Mohnen described a new method of data-flow analysis that does not require the explicit construction of a data-flow graph, [7] instead relying on abstract interpretation of the program and keeping a working set of program counters. At each conditional branch, both targets are added to the working set. Each path is followed for as many instructions as possible (until end of program or until it has looped with no changes), and then removed from the set and the next program counter retrieved.

A combination of control flow analysis and data flow analysis has shown to be useful and complementary in identifying cohesive source code regions implementing functionalities of a system (e.g., features, requirements or use cases). [8]

Special classes of problems

There are a variety of special classes of dataflow problems which have efficient or general solutions.

Bit vector problems

The examples above are problems in which the data-flow value is a set, e.g. the set of reaching definitions (using a bit for a definition position in the program), or the set of live variables. These sets can be represented efficiently as bit vectors, in which each bit represents set membership of one particular element. Using this representation, the join and transfer functions can be implemented as bitwise logical operations. The join operation is typically union or intersection, implemented by bitwise logical or and logical and. The transfer function for each block can be decomposed into so-called gen and kill sets.

As an example, in live-variable analysis, the join operation is union. The kill set is the set of variables that are written in a block, whereas the gen set is the set of variables that are read without being written first. The data-flow equations become

$\mathrm{out}(b) = \bigcup_{s \in \mathrm{succ}(b)} \mathrm{in}(s)$

$\mathrm{in}(b) = \big(\mathrm{out}(b) \setminus \mathrm{kill}(b)\big) \cup \mathrm{gen}(b)$

In logical operations, this reads as

out(b) = 0
for s in succ(b)
    out(b) = out(b) or in(s)
in(b) = (out(b) and not kill(b)) or gen(b)
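
A Python sketch of this encoding, using integers as bit vectors with one bit per variable; the variable numbering and the gen/kill data (taken from the b1/b2/b3 liveness example above) are illustrative assumptions.

# Live-variable analysis with integers as bit vectors: bit i stands for variable i.
variables = ["a", "b", "c", "d", "x"]
bit = {v: 1 << i for i, v in enumerate(variables)}

def as_set(bits):
    return {v for v in variables if bits & bit[v]}

# gen/kill masks per block, matching the b1/b2/b3 example above.
gen  = {"b1": 0,
        "b2": bit["a"] | bit["b"],
        "b3": bit["b"] | bit["d"]}
kill = {"b1": bit["a"] | bit["b"] | bit["d"] | bit["x"],
        "b2": bit["c"] | bit["d"],
        "b3": bit["c"]}
succ = {"b1": ["b2", "b3"], "b2": ["b3"], "b3": []}

in_bits  = {b: 0 for b in succ}
out_bits = {b: 0 for b in succ}
changed = True
while changed:
    changed = False
    for b in succ:
        out_b = 0
        for s in succ[b]:                     # out(b) = OR of the successors' in-states
            out_b |= in_bits[s]
        in_b = (out_b & ~kill[b]) | gen[b]    # in(b) = (out(b) AND NOT kill(b)) OR gen(b)
        if (in_b, out_b) != (in_bits[b], out_bits[b]):
            in_bits[b], out_bits[b] = in_b, out_b
            changed = True

print({b: as_set(in_bits[b]) for b in succ})  # in(b1) = {}, in(b2) = {a, b}, in(b3) = {b, d}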

Dataflow problems which have sets of data-flow values which can be represented as bit vectors are called bit vector problems, gen-kill problems, or locally separable problems. [9] Such problems have generic polynomial-time solutions. [10]

In addition to the reaching definitions and live variables problems mentioned above, the following problems are instances of bitvector problems: [10]

IFDS problems

Interprocedural, finite, distributive, subset problems or IFDS problems are another class of problem with a generic polynomial-time solution. [9] [11] Solutions to these problems provide context-sensitive and flow-sensitive dataflow analyses.

There are several implementations of IFDS-based dataflow analyses for popular programming languages, e.g. in the Soot [12] and WALA [13] frameworks for Java analysis.

Every bitvector problem is also an IFDS problem, but there are several significant IFDS problems that are not bitvector problems, including truly-live variables and possibly-uninitialized variables.

Sensitivities

Data-flow analysis is typically path-insensitive, though it is possible to define data-flow equations that yield a path-sensitive analysis.

List of data-flow analyses

See also


References

  1. Kildall, Gary Arlen (May 1972). Global expression optimization during compilation (Ph.D. dissertation). Seattle, Washington, USA: University of Washington, Computer Science Group. Thesis No. 20506, Technical Report No. 72-06-02.
  2. Kildall, Gary Arlen (1973-10-01). "A Unified Approach to Global Program Optimization" (PDF). Proceedings of the 1st Annual ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages (POPL). POPL '73. Boston, Massachusetts, USA: 194–206. doi:10.1145/512927.512945. hdl:10945/42162. S2CID 10219496. Archived (PDF) from the original on 2017-06-29. Retrieved 2006-11-20.
  3. Rüthing, Oliver; Knoop, Jens; Steffen, Bernhard (2003-07-31) [1999]. "Optimization: Detecting Equalities of Variables, Combining Efficiency with Precision". In Cortesi, Agostino; Filé, Gilberto (eds.). Static Analysis: 6th International Symposium, SAS'99, Venice, Italy, September 22–24, 1999, Proceedings. Lecture Notes in Computer Science. Vol. 1694 (illustrated ed.). Springer. pp. 232–247 [233]. ISBN 9783540664598. ISSN 0302-9743.
  4. Huitt, Robert; Eubanks, Gordon; Rolander, Thomas "Tom" Alan; Laws, David; Michel, Howard E.; Halla, Brian; Wharton, John Harrison; Berg, Brian; Su, Weilian; Kildall, Scott; Kampe, Bill (2014-04-25). Laws, David (ed.). "Legacy of Gary Kildall: The CP/M IEEE Milestone Dedication" (PDF) (video transcription). Pacific Grove, California, USA: Computer History Museum. CHM Reference number: X7170.2014. Retrieved 2020-01-19. "[…] Eubanks: […] Gary […] was an inventor, he was inventive, he did things. His Ph.D. thesis proved that global flow analysis converges. […] This is a fundamental idea in computer science. […] I took a […] summer course once from a guy named Dhamdhere […] they talked about optimization for like a week and then they put a slide up and said, 'Kildall's Method,' this is the real story. […] that's something that no one ever thinks about. […]"
  5. Cooper, Keith D.; Harvey, Timothy J.; Kennedy, Ken (2004-03-26) [November 2002]. "Iterative Data-Flow Analysis, Revisited" (PDF). PLDI 2003. ACM. TR04-432. Retrieved 2017-07-01.
  6. "Static Single Assignment (with relevant examples)". GeeksforGeeks. 2021-10-02. Retrieved 2023-08-16.
  7. Mohnen, Markus (2002). "A Graph-Free Approach to Data-Flow Analysis". Lecture Notes in Computer Science. Vol. 2304. pp. 185–213. doi:10.1007/3-540-45937-5_6. ISBN 978-3-540-43369-9.
  8. Kuang, Hongyu; Mäder, Patrick; Hu, Hao; Ghabi, Achraf; Huang, LiGuo; Lü, Jian; Egyed, Alexander (2015-11-01). "Can method data dependencies support the assessment of traceability between requirements and source code?". Journal of Software: Evolution and Process. 27 (11): 838–866. doi:10.1002/smr.1736. ISSN 2047-7481. S2CID 39846438.
  9. Reps, Thomas; Horwitz, Susan; Sagiv, Mooly (1995). "Precise interprocedural dataflow analysis via graph reachability". Proceedings of the 22nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages - POPL '95. New York, New York, USA: ACM Press. pp. 49–61. doi:10.1145/199448.199462. ISBN 0-89791-692-1. S2CID 5955667.
  10. Knoop, Jens; Steffen, Bernhard; Vollmer, Jürgen (1996-05-01). "Parallelism for free: efficient and optimal bitvector analyses for parallel programs". ACM Transactions on Programming Languages and Systems. 18 (3): 268–299. doi:10.1145/229542.229545. ISSN 0164-0925. S2CID 14123780.
  11. Naeem, Nomair A.; Lhoták, Ondřej; Rodriguez, Jonathan (2010). "Practical Extensions to the IFDS Algorithm". Compiler Construction. Lecture Notes in Computer Science. Vol. 6011. Berlin / Heidelberg, Germany: Springer Verlag. pp. 124–144. doi:10.1007/978-3-642-11970-5_8. ISBN 978-3-642-11969-9.
  12. Bodden, Eric (2012). "Inter-procedural data-flow analysis with IFDS/IDE and Soot". Proceedings of the ACM SIGPLAN International Workshop on State of the Art in Java Program Analysis. New York, New York, USA: ACM Press. pp. 3–8. doi:10.1145/2259051.2259052. ISBN 978-1-4503-1490-9. S2CID 3020481.
  13. Rapoport, Marianna; Lhoták, Ondřej; Tip, Frank (2015). "Precise Data Flow Analysis in the Presence of Correlated Method Calls". International Static Analysis Symposium. Lecture Notes in Computer Science. Vol. 9291. Berlin / Heidelberg, Germany: Springer Verlag. pp. 54–71. doi:10.1007/978-3-662-48288-9_4. ISBN 978-3-662-48287-2.

Further reading