Constant folding and constant propagation are related compiler optimizations used by many modern compilers. [1] An advanced form of constant propagation known as sparse conditional constant propagation can more accurately propagate constants and simultaneously remove dead code.
Constant folding is the process of recognizing and evaluating constant expressions at compile time rather than computing them at runtime. Terms in constant expressions are typically simple literals, such as the integer literal 2, but they may also be variables whose values are known at compile time. Consider the statement:
i = 320 * 200 * 32;
Most compilers would not actually generate two multiply instructions and a store for this statement. Instead, they identify constructs such as these and substitute the computed values at compile time (in this case, 2,048,000).
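As a sketch of how a compiler might implement such folding, the following hypothetical routine (the Expr type and fold function are illustrative, not taken from any real compiler) walks a small expression tree bottom-up and replaces every operation whose operands are both literals with a literal holding the computed value.

#include <iostream>
#include <memory>

// Hypothetical expression tree: a node is either an integer literal or a binary operation.
struct Expr {
    enum Kind { Literal, Add, Mul } kind = Literal;
    long value = 0;                   // meaningful when kind == Literal
    std::unique_ptr<Expr> lhs, rhs;   // meaningful otherwise
};

std::unique_ptr<Expr> literal(long v) {
    auto e = std::make_unique<Expr>();
    e->kind = Expr::Literal;
    e->value = v;
    return e;
}

std::unique_ptr<Expr> binary(Expr::Kind k, std::unique_ptr<Expr> l, std::unique_ptr<Expr> r) {
    auto e = std::make_unique<Expr>();
    e->kind = k;
    e->lhs = std::move(l);
    e->rhs = std::move(r);
    return e;
}

// Fold constant subexpressions bottom-up.
void fold(Expr &e) {
    if (e.kind == Expr::Literal) return;
    fold(*e.lhs);
    fold(*e.rhs);
    if (e.lhs->kind == Expr::Literal && e.rhs->kind == Expr::Literal) {
        long v = (e.kind == Expr::Add) ? e.lhs->value + e.rhs->value
                                       : e.lhs->value * e.rhs->value;
        e.kind = Expr::Literal;   // the operation node becomes a literal
        e.value = v;
        e.lhs.reset();
        e.rhs.reset();
    }
}

int main() {
    // i = 320 * 200 * 32;
    auto e = binary(Expr::Mul, binary(Expr::Mul, literal(320), literal(200)), literal(32));
    fold(*e);
    std::cout << e->value << "\n";   // prints 2048000
}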
Constant folding can make use of arithmetic identities. If x is numeric, the value of 0 * x is zero even if the compiler does not know the value of x (note that this is not valid for IEEE floats, since x could be Infinity or NaN; still, some environments that favor performance, such as GLSL shaders, allow this for constants, which can occasionally cause bugs).
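The following short program shows why that identity is unsafe for IEEE floating point: if x is Infinity or NaN, 0.0 * x is NaN rather than zero, so folding the product to zero would change the program's result.

#include <cmath>
#include <cstdio>

int main() {
    double inf = INFINITY;
    double nan = NAN;
    // For integers 0 * x is always 0, but for IEEE doubles these products are NaN:
    std::printf("0.0 * inf = %f\n", 0.0 * inf);   // prints nan (spelling may vary)
    std::printf("0.0 * nan = %f\n", 0.0 * nan);   // prints nan
}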
Constant folding may apply to more than just numbers. Concatenation of string literals and constant strings can be constant folded. Code such as "abc" + "def" may be replaced with "abcdef".
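The "abc" + "def" expression above is written in a language such as Java; a related compile-time mechanism in C and C++ is the joining of adjacent string literals during translation, so only the single combined literal appears in the compiled program.

#include <cstdio>

int main() {
    // Adjacent string literals are concatenated at compile time.
    const char *s = "abc" "def";
    std::puts(s);   // prints abcdef
}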
In implementing a cross compiler, care must be taken to ensure that the behaviour of the arithmetic operations on the host architecture matches that on the target architecture, as otherwise enabling constant folding will change the behaviour of the program. This is of particular importance in the case of floating point operations, whose precise implementation may vary widely.
Constant propagation is the process of substituting the values of known constants in expressions at compile time. Such constants include those defined above, as well as intrinsic functions applied to constant values. Consider the following pseudocode:
int x = 14;
int y = 7 - x / 2;
return y * (28 / x + 2);
Propagating x yields:
int x = 14;
int y = 7 - 14 / 2;
return y * (28 / 14 + 2);
Continuing to propagate yields the following (which would likely be further optimized by dead-code elimination of both x and y):
int x = 14;
int y = 0;
return 0;
Constant propagation is implemented in compilers using reaching definition analysis results. If all of a variable's reaching definitions are the same assignment, and that assignment assigns the same constant to the variable, then the variable always holds that constant value and can be replaced with the constant.
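The following sketch (the Instr type and propagateAndFold function are hypothetical) illustrates the idea on straight-line code only, where a single running map from variables to known constants stands in for full reaching-definitions analysis over a control-flow graph.

#include <cctype>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical straight-line IR: each instruction is  dst = lhs op rhs,
// where lhs and rhs are either decimal literals or variable names.
struct Instr {
    std::string dst, lhs, rhs;
    char op;   // '+', '-', '*' or '/'
};

bool isLiteral(const std::string &s) {
    return !s.empty() && (std::isdigit((unsigned char)s[0]) || s[0] == '-');
}

void propagateAndFold(std::vector<Instr> &code) {
    std::map<std::string, long> known;   // variable -> constant value
    for (auto &ins : code) {
        // Constant propagation: rewrite operands whose value is known.
        for (std::string *operand : {&ins.lhs, &ins.rhs}) {
            auto it = known.find(*operand);
            if (it != known.end()) *operand = std::to_string(it->second);
        }
        // Constant folding: if both operands are now literals, compute the result
        // and remember it so later uses of dst can be propagated as well.
        if (isLiteral(ins.lhs) && isLiteral(ins.rhs)) {
            long a = std::stol(ins.lhs), b = std::stol(ins.rhs), r = 0;
            switch (ins.op) {
                case '+': r = a + b; break;
                case '-': r = a - b; break;
                case '*': r = a * b; break;
                case '/': r = a / b; break;
            }
            known[ins.dst] = r;
        } else {
            known.erase(ins.dst);   // dst no longer holds a known constant
        }
    }
}

int main() {
    // x = 14; y = 7 - x / 2;   (written in three-address form)
    std::vector<Instr> code = {
        {"x", "14", "0", '+'},
        {"t", "x", "2", '/'},
        {"y", "7", "t", '-'},
    };
    propagateAndFold(code);
    for (const auto &ins : code)
        std::cout << ins.dst << " = " << ins.lhs << ' ' << ins.op << ' ' << ins.rhs << '\n';
    // prints: x = 14 + 0, t = 14 / 2, y = 7 - 7
}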
Constant propagation can also cause conditional branches to simplify to one or more unconditional statements, if the conditional expression can be evaluated to true or false at compile time to determine the only possible outcome.
Constant folding and propagation are typically used together to achieve many simplifications and reductions, and their interleaved, iterative application continues until those effects cease.
Consider the following unoptimized pseudocode, whose return value is not known until the code is analyzed:
int a = 30;
int b = 9 - (a / 5);
int c = b * 4;
if (c > 10) {
    c = c - 10;
}
return c * (60 / a);
Applying constant propagation once, followed by constant folding, yields:
int a = 30;
int b = 3;
int c = b * 4;
if (c > 10) {
    c = c - 10;
}
return c * 2;
Repeating both steps twice produces:
int a = 30;
int b = 3;
int c = 12;
if (true) {
    c = 2;
}
return c * 2;
Having replaced all uses of variables a and b with constants, the compiler's dead-code elimination applies to those variables, leaving:
int c = 12;
if (true) {
    c = 2;
}
return c * 2;
(Boolean constructs vary among languages and compilers, but details such as how true is represented and evaluated do not affect these optimization principles.)
Traditional constant propagation produces no further optimization; it does not restructure programs.
However, a similar optimization, sparse conditional constant propagation, goes further by selecting the appropriate conditional branch [2] and removing the always-true conditional test. Thus, variable c becomes redundant, and only an operation on a constant remains:
return 4;
If that pseudocode constitutes a function body, the compiler knows the function evaluates to the integer constant 4, allowing replacement of calls to the function with 4 and further increasing program efficiency.
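Concretely, if the worked example were the body of a C-like function, the whole sequence of optimizations reduces it as sketched below (the function name example is illustrative).

#include <cstdio>

// Before optimization: the body is the pseudocode from the example above.
int example() {
    int a = 30;
    int b = 9 - (a / 5);
    int c = b * 4;
    if (c > 10) {
        c = c - 10;
    }
    return c * (60 / a);
}

// After constant propagation, folding, branch selection, and dead-code
// elimination the body collapses to:  int example() { return 4; }
// Calls can then be folded too, e.g. example() + 1 becomes 5.

int main() {
    std::printf("%d\n", example());   // prints 4
}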
In programming language theory, lazy evaluation, or call-by-need, is an evaluation strategy which delays the evaluation of an expression until its value is needed and which also avoids repeated evaluations.
An optimizing compiler is a compiler designed to generate code that is optimized in aspects such as minimizing program execution time, memory usage, storage size, and power consumption. Optimization is generally implemented as a sequence of optimizing transformations—algorithms that transform code to produce semantically equivalent code optimized for some aspect.
Standard ML (SML) is a general-purpose, high-level, modular, functional programming language with compile-time type checking and type inference. It is popular for writing compilers, for programming language research, and for developing theorem provers.
In computing, inline expansion, or inlining, is a manual or compiler optimization that replaces a function call site with the body of the called function. Inline expansion is similar to macro expansion, but occurs during compilation, without changing the source code, while macro expansion occurs prior to compilation, and results in different text that is then processed by the compiler.
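A minimal illustration, with a hypothetical square function, of how inlining can expose further constant folding:

#include <cstdio>

static int square(int n) { return n * n; }

int main() {
    // An inlining compiler may replace the call with the function body:
    //     int y = square(5);   becomes   int y = 5 * 5;
    // which constant folding can then reduce to   int y = 25;
    int y = square(5);
    std::printf("%d\n", y);
}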
In computer programming, a parameter or a formal argument is a special kind of variable used in a subroutine to refer to one of the pieces of data provided as input to the subroutine. These pieces of data are the values of the arguments with which the subroutine is going to be called/invoked. An ordered list of parameters is usually included in the definition of a subroutine, so that, each time the subroutine is called, its arguments for that call are evaluated, and the resulting values can be assigned to the corresponding parameters.
The syntax of the C programming language is the set of rules governing writing of software in C. It is designed to allow for programs that are extremely terse, have a close relationship with the resulting object code, and yet provide relatively high-level data abstraction. C was the first widely successful high-level language for portable operating-system development.
In computer science, conditionals are programming language constructs that perform different computations or actions or return different values depending on the value of a Boolean expression, called a condition.
In compiler theory, dead-code elimination is a compiler optimization to remove dead code. Removing such code has several benefits: it shrinks program size, an important consideration in some contexts; it reduces resource usage, such as the number of bytes to be transferred; and it allows the running program to avoid executing irrelevant operations, which reduces its running time. It can also enable further optimizations by simplifying program structure. Dead code includes code that can never be executed and code that only affects dead variables, that is, variables irrelevant to the program.
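A minimal illustration of both kinds of dead code (the names are illustrative):

#include <cstdio>

int main() {
    int unused = 42;          // dead store: 'unused' is never read, so the
                              // variable and its assignment can be removed
    std::printf("done\n");
    return 0;
    std::printf("never\n");   // unreachable code after 'return', also removable
}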
In computer programming, the ternary conditional operator is a ternary operator that is part of the syntax for basic conditional expressions in several programming languages. It is commonly referred to as the conditional operator, conditional expression, ternary if, or inline if. An expression if a then b else c or a ? b : c evaluates to b if the value of a is true, and otherwise to c. One can read it aloud as "if a then b otherwise c". The form a ? b : c is the most common, but alternative syntaxes exist; for example, Raku uses the syntax a ?? b !! c to avoid confusion with the infix operators ? and !, whereas in Visual Basic .NET, it instead takes the form If(a, b, c).
Short-circuit evaluation, minimal evaluation, or McCarthy evaluation is the semantics of some Boolean operators in some programming languages in which the second argument is executed or evaluated only if the first argument does not suffice to determine the value of the expression: when the first argument of the AND function evaluates to false, the overall value must be false; and when the first argument of the OR function evaluates to true, the overall value must be true.
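A common use is guarding an expression that would otherwise be unsafe to evaluate; a minimal sketch:

#include <cstdio>

int main() {
    const int *p = nullptr;
    // The right operand of && is evaluated only if the left operand is true,
    // so the null check protects the dereference.
    if (p != nullptr && *p > 0) {
        std::printf("positive\n");
    } else {
        std::printf("null or non-positive\n");
    }
}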
In computer programming languages, a switch statement is a type of selection control mechanism used to allow the value of a variable or expression to change the control flow of program execution via search and map.
In computer science, sparse conditional constant propagation (SCCP) is an optimization frequently applied in compilers after conversion to static single assignment form (SSA). It propagates constants, replacing expressions whose values can be calculated at compile time with those values. Moreover, it can find more constant values, and thus more opportunities for improvement, than separately applying dead code elimination and constant propagation in any order or any number of repetitions.
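A small sketch of why the optimistic, SSA-based approach can find more constants (the function g and its variables are illustrative): a classical reaching-definitions propagator sees both assignments to x reaching the loop body and gives up, whereas SCCP assumes x is 1 until proven otherwise, evaluates the test to false, never marks the branch executable, and so proves the function always returns 1.

#include <cstdio>

int g(int n) {
    int x = 1;
    for (int i = 0; i < n; ++i) {
        if (x != 1) {   // SCCP optimistically treats x as the constant 1,
            x = 2;      // evaluates the test to false, and never marks this
        }               // assignment executable
    }
    return x;           // provably the constant 1
}

int main() {
    std::printf("%d\n", g(10));   // prints 1
}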
In compiler theory, copy propagation is the process of replacing the occurrences of targets of direct assignments with their values. A direct assignment is an instruction of the form x = y, which simply assigns the value of y to x.
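A minimal sketch with hypothetical names:

#include <cstdio>

int h(int x) {
    int y = x;        // direct assignment: y is a copy of x
    int z = y + 1;    // copy propagation rewrites this to  z = x + 1
    return z;         // y is then dead and can be eliminated
}

int main() { std::printf("%d\n", h(4)); }   // prints 5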
The computer programming languages C and Pascal have similar times of origin, influences, and purposes. Both were used to design their own compilers early in their lifetimes. The original Pascal definition appeared in 1969 and a first compiler in 1970. The first version of C appeared in 1972.
In computing, IIf is a function in several editions of the Visual Basic programming language and ColdFusion Markup Language (CFML), and on spreadsheets that returns the second or third parameter based on the evaluation of the first parameter. It is an example of a conditional expression, which is similar to a conditional statement.
C++11 is a version of a joint technical standard, ISO/IEC 14882, by the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), for the C++ programming language. C++11 replaced the prior version of the C++ standard, named C++03, and was later replaced by C++14. The name follows the tradition of naming language versions by the publication year of the specification, though it was formerly named C++0x because it was expected to be published before 2010.
In computing, compile-time function execution is the ability of a compiler, that would normally compile a function to machine code and execute it at run time, to execute the function at compile time. This is possible if the arguments to the function are known at compile time, and the function does not make any reference to or attempt to modify any global state.
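In C++, for instance, constexpr functions make compile-time execution explicit; a minimal sketch:

#include <cstdio>

// A constexpr function may be evaluated by the compiler when its
// arguments are themselves compile-time constants.
constexpr int factorial(int n) {
    return n <= 1 ? 1 : n * factorial(n - 1);
}

int main() {
    constexpr int f5 = factorial(5);   // forced to happen at compile time
    static_assert(f5 == 120, "computed during compilation");
    std::printf("%d\n", f5);
}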
This article compares a large number of programming languages by tabulating their data types, their expression, statement, and declaration syntax, and some common operating-system interfaces.
C++14 is a version of the ISO/IEC 14882 standard for the C++ programming language. It is intended to be a small extension over C++11, featuring mainly bug fixes and small improvements, and was replaced by C++17. Its approval was announced on August 18, 2014. C++14 was published as ISO/IEC 14882:2014 in December 2014.
Value numbering is a technique of determining when two computations in a program are equivalent and eliminating one of them with a semantics-preserving optimization.
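A minimal sketch of the effect of local value numbering, with hypothetical names:

#include <cstdio>

int area(int b, int c) {
    int x = b + c;    // the expression b + c receives a value number
    int y = b + c;    // same operator and operand value numbers, so this
                      // recomputation can be replaced with  y = x
    return x * y;
}

int main() { std::printf("%d\n", area(2, 3)); }   // prints 25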