Interprocedural optimization

Interprocedural optimization (IPO) is a collection of compiler techniques used in computer programming to improve performance in programs containing many frequently used functions of small or medium length. IPO differs from other compiler optimizations by analyzing the entire program as opposed to a single function or block of code.

IPO seeks to reduce or eliminate duplicate calculations and inefficient use of memory and to simplify iterative sequences such as loops. If a call to another routine occurs within a loop, IPO analysis may determine that it is best to inline that routine. Additionally, IPO may re-order the routines for better memory layout and locality.

IPO may also include typical compiler optimizations applied on a whole-program level, for example dead code elimination (DCE), which removes code that is never executed. IPO also tries to ensure better use of constants. Modern compilers offer IPO as an option at compile-time. The actual IPO process may occur at any step between the human-readable source code and producing a finished executable binary program.
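
For instance (a minimal sketch, not drawn from any particular compiler, with hypothetical names), whole-program analysis may reveal that a function is only ever called with a particular constant argument, allowing the constant to be propagated into the callee and the guarded code removed as dead:

/* Hypothetical illustration of interprocedural constant propagation plus
   dead code elimination: whole-program analysis can see that scale() is
   only ever called with factor == 2.                                       */

static int scale(int x, int factor) {
    if (factor == 0)          /* provably false once factor is known to be 2 */
        return 0;
    return x * factor;
}

int main(void) {
    return scale(21, 2);      /* the whole program may fold to "return 42"   */
}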

For languages that compile on a file-by-file basis, effective IPO across translation units (module files) requires knowledge of the "entry points" of the program so that a whole program optimization (WPO) can be run. In many cases, this is implemented as a link-time optimization (LTO) pass, because the whole program is visible to the linker.

Analysis

The objective of any optimization for speed is to have the program run as swiftly as possible; the problem is that it is not possible for a compiler to correctly analyze a program and determine what it will do, much less what the programmer intended for it to do. By contrast, human programmers start at the other end with a purpose, and attempt to produce a program that will achieve it, preferably without expending a lot of thought in the process.

For various reasons, including readability, programs are frequently broken up into a number of procedures that handle a few general cases. However, the generality of each procedure may result in wasted effort in specific usages. Interprocedural optimization represents an attempt at reducing this waste.

Suppose there is a procedure that evaluates F(x), and that F is a pure function, and the code requests the result of F(6) and then later, F(6) again. This second evaluation is almost certainly unnecessary: the result could instead have been saved and reused. This simple optimization is foiled the moment that the implementation of F(x) becomes impure; that is, its execution involves references to values other than the explicit argument 6 that may have changed between the invocations, or side effects such as printing a message to a log, counting the number of evaluations, accumulating the CPU time consumed, preparing internal tables so that subsequent invocations for related parameters will be facilitated, and so forth. Losing these side effects by skipping the second evaluation may or may not be acceptable.
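
The following C sketch (illustrative only; the function names are hypothetical) contrasts the two cases: a pure function whose repeated evaluation could safely be replaced by a saved result, and an impure one whose call-counting side effect would be lost if the second call were removed:

#include <stdio.h>

static int calls = 0;

/* Pure: the result depends only on the argument.                           */
static int f_pure(int x)   { return x * x + 1; }

/* Impure: the call counter is a side effect, so dropping a repeated
   evaluation would change observable behaviour.                            */
static int f_impure(int x) { calls++; return x * x + 1; }

int main(void) {
    int a = f_pure(6);
    int b = f_pure(6);     /* a compiler may reuse the value computed for a  */
    int c = f_impure(6);
    int d = f_impure(6);   /* removing this call would leave calls at 1      */
    printf("%d %d %d %d calls=%d\n", a, b, c, d, calls);
    return 0;
}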

More generally, aside from optimization, the second reason to use procedures is to avoid duplication of code that would produce the same results, or almost the same results, each time the procedure is performed. A general approach to optimization would therefore be to reverse this: some or all invocations of a certain procedure are replaced by the respective code, with the parameters appropriately substituted. The compiler will then try to optimize the result.

WPO and LTO

Whole program optimization (WPO) is the compiler optimization of a program using information about all the modules in the program. Normally, optimizations are performed on a per-module ("compiland") basis; this approach is easier to implement and test and less demanding of resources during compilation itself, but it does not allow certainty about the safety of a number of optimizations, such as aggressive inlining, and thus cannot perform them even when they would in fact yield efficiency gains without changing the semantics of the emitted object code.

Link-time optimization (LTO) is a type of program optimization performed by a compiler on a program at link time. Link-time optimization is relevant in programming languages that compile programs on a file-by-file basis and then link those files together (such as C and Fortran), rather than all at once (such as Java's just-in-time compilation (JIT)).

Traditionally, once all files have been compiled separately into object files, a compiler links (merges) the object files into a single file, the executable. However, in LTO as implemented by the GNU Compiler Collection (GCC) and LLVM, the compiler is able to dump its intermediate representation (IR), i.e. GIMPLE bytecode or LLVM bitcode respectively, so that all the compilation units that will make up a single executable can be optimized as a single module when the link finally happens. This expands the scope of interprocedural optimizations to encompass the whole program (or, rather, everything that is visible at link time). With link-time optimization, the compiler can apply various forms of interprocedural optimization to the whole program, allowing for deeper analysis, more optimization, and ultimately better program performance.
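
As a rough illustration of a typical LTO build (the file names are hypothetical; -flto is the flag documented by both GCC and Clang), two translation units can be compiled separately and still be optimized together at link time:

/* a.c -- one translation unit */
int helper(int x) { return x + 1; }

/* main.c -- another translation unit */
extern int helper(int x);
int main(void) { return helper(41); }

/* Typical build (GCC shown; Clang is analogous):
       gcc -O2 -flto -c a.c
       gcc -O2 -flto -c main.c
       gcc -O2 -flto a.o main.o -o prog
   Because the object files carry the compiler's IR, helper() can be
   inlined into main() at link time and the call folded to a constant.      */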

In practice, LTO does not always optimize the entire program: library functions, especially dynamically linked shared objects, are intentionally kept out to avoid excessive duplication and to allow for updating. Static linking does lend itself naturally to LTO, but it only works with library archives that contain IR objects as opposed to machine-code-only object files. [1] Due to performance concerns, not even the entire unit is always used directly: a program can be partitioned in a divide-and-conquer style of LTO such as GCC's WHOPR. [2] And of course, when the program being built is itself a library, the optimization keeps every externally available (exported) symbol, without trying too hard to remove them as part of DCE. [1]

A much more limited form of WPO is still possible without LTO, as exemplified by GCC's -fwhole-program switch. This mode makes GCC assume that the module being compiled contains the entry point of the entire program, so that every other function in it can be assumed not to be used externally and may be safely optimized away. Since it applies to only a single module, it cannot truly encompass the whole program. It can be combined with LTO in the one-big-module sense, which is useful when the linker does not communicate back to GCC about which entry points or symbols are used externally. [1]

Example

Program example;
    integer b;                      {A variable "global" to the procedure Silly.}
    Procedure Silly(a, x)
        if x < 0 then a := x + b else a := -6;
    End Silly;                      {Reference to b, not a parameter, makes Silly "impure" in general.}
    integer a, x;                   {These variables are visible to Silly only if parameters.}
    x := 7; b := 5;
    Silly(a, x); write(x);
    Silly(x, a); write(x);
    Silly(b, b); write(b);
End example;

If the parameters to Silly are passed by value, the actions of the procedure have no effect on the original variables, and since Silly does nothing to its environment (read from a file, write to a file, modify global variables such as b, etc.) its code plus all invocations may be optimized away entirely, leaving the value of a undefined (which doesn't matter) so that just the write statements remain, simply printing constant values.

If instead the parameters are passed by reference, then action on them within Silly does indeed affect the originals. This is usually done by passing the machine address of the parameters to the procedure so that the procedure's adjustments are to the original storage area. Thus in the case of pass by reference, procedure Silly does have an effect. Suppose that its invocations are expanded in place, with parameters identified by address: the code amounts to

x := 7; b := 5;
if x < 0 then a := x + b else a := -6; write(x);   {a is changed.}
if a < 0 then x := a + b else x := -6; write(x);   {Because the parameters are swapped.}
if b < 0 then b := b + b else b := -6; write(b);   {Two versions of variable b in Silly, plus the global usage.}

The compiler could then in this rather small example follow the constants along the logic (such as it is) and find that the predicates of the if-statements are constant and so...

x := 7; b := 5;
a := -6; write(7);     {b is not referenced, so this usage remains "pure".}
x := -1; write(-1);    {b is referenced...}
b := -6; write(-6);    {b is modified via its parameter manifestation.}

And since the assignments to a, b and x deliver nothing to the outside world (they do not appear in output statements, nor as input to subsequent calculations whose results in turn lead to output, else they too would be needless), there is no point in this code either, and so the result is

write(7); write(-1); write(-6);

A variant method for passing parameters that appear to be "by reference" is copy-in, copy-out, whereby the procedure works on a local copy of the parameters whose values are copied back to the originals on exit from the procedure. If the procedure has access to the same variable in more than one way, as in invocations such as Silly(a,a) or Silly(a,b) (where the global b is also referenced within Silly), discrepancies can arise. So, if the parameters were passed by copy-in, copy-out in left-to-right order, then Silly(b,b) would expand into

p1 := b; p2 := b;                               {Copy in. Local variables p1 and p2 are equal.}
if p2 < 0 then p1 := p2 + b else p1 := -6;      {Thus p1 may no longer equal p2.}
b := p1; b := p2;                               {Copy out. In left-to-right order, the value from p1 is overwritten.}

And in this case, copying the value of p1 (which has been changed) back to b is pointless, because it is immediately overwritten by the value of p2, which was not modified within the procedure and still holds the original value of b, and so the third statement becomes

write(5);    {Not -6}

Such differences in behavior are likely to cause puzzlement, exacerbated by questions as to the order in which the parameters are copied: will it be left to right on exit as well as on entry? These details are probably not carefully explained in the compiler manual, and if they are, they will likely be passed over as not relevant to the immediate task and long forgotten by the time a problem arises. If (as is likely) temporary values are provided via a stack storage scheme, then it is likely that the copy-back process will be in the reverse order of the copy-in, which in this example would mean that the value of p1 would be the last one returned to b instead.

The process of expanding a procedure in-line should not be regarded as a variant of textual replacement (as in macro expansion), because syntax errors may arise when parameters are modified and the particular invocation uses constants as parameters. It is important to be sure that any constants supplied as parameters do not have their values changed (constants can be held in memory just as variables are), lest subsequent usages of that constant (made via reference to its memory location) go awry. A common technique is therefore for the compiler to generate code that copies the constant's value into a temporary variable whose address is passed to the procedure; if that temporary is modified, no matter, since it is never copied back to the location of the constant.
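
The idea can be pictured in C, where the "by reference" passing has to be simulated with pointers (a purely illustrative sketch, since C itself passes scalars by value):

#include <stdio.h>

static int b = 5;                      /* the global referenced inside Silly */

/* The article's Silly(a, x) rendered in C, with "by reference" parameters
   simulated through pointers (hypothetical, for illustration only).        */
static void silly(int *a, int *x) {
    if (*x < 0) *a = *x + b; else *a = -6;
}

int main(void) {
    int a;
    /* Conceptually "Silly(a, 6)": the generated code copies the constant
       into a temporary and passes the temporary's address, so even if
       silly() modified *x, the literal 6 itself would stay intact.         */
    int tmp = 6;
    silly(&a, &tmp);
    printf("%d\n", a);                 /* prints -6, since 6 is not < 0      */
    return 0;
}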

Put another way, a carefully written test program can report on whether parameters are passed by value or by reference and, if copying is used, what sort of copy-in, copy-out scheme is employed. However, variation is endless: simple parameters might be passed by copy whereas large aggregates such as arrays might be passed by reference; simple constants such as zero might be generated by special machine codes (such as Clear or LoadZ), while more complex constants might be stored in memory tagged as read-only, with any attempt at modifying them resulting in immediate program termination, and so on.

In general

This example is extremely simple, although complications are already apparent. More likely, there will be many procedures with a variety of deducible or programmer-declared properties that may enable the compiler's optimizations to find some advantage. Any parameter to a procedure might be read only, written to, both read and written to, or ignored altogether, giving rise to opportunities such as constants not needing protection via temporary variables; but what happens in any given invocation may well depend on a complex web of considerations. Other procedures, especially function-like procedures, will have certain behaviours that in specific invocations may enable some work to be avoided: for instance, the Gamma function, if invoked with an integer parameter, could be converted to a calculation involving integer factorials.

Some computer languages enable (or even require) assertions as to the usage of parameters, and might further offer the opportunity to declare that variables have their values restricted to some set (for instance, 6 < x ≤ 28), thus providing further grist for the optimisation process to grind through and also providing worthwhile checks on the coherence of the source code to detect blunders. But this is never enough: only some variables can be given simple constraints, while others would require complex specifications. How might it be specified that variable P is to be a prime number, and if so, is the value 1 included or not? Complications are immediate: what are the valid ranges for a day-of-month D given that M is a month number? And are all violations worthy of immediate termination? Even if all that could be handled, what benefit might follow? And at what cost? Full specifications would amount to a re-statement of the program's function in another form, and quite aside from the time the compiler would consume in processing them, they would thus be subject to bugs. Instead, only simple specifications are allowed, with run-time range checking provided.

In cases where a program reads no input (as in the example), one could imagine the compiler's analysis being carried forward so that the result would be no more than a series of print statements, or possibly some loops expediently generating such values. Would it then recognise a program to generate prime numbers, and convert it to the best-known method for doing so, or present instead a reference to a library? Unlikely! In general, arbitrarily complex considerations arise (the Entscheidungsproblem) to preclude this, and there is no option but to run the code with limited improvements only.

History

For procedural languages like ALGOL, interprocedural analysis and optimization appear to have entered commercial practice in the early 1970s. IBM's PL/I Optimizing Compiler performed interprocedural analysis to understand the side effects of both procedure calls and exceptions (cast, in PL/I terms, as "on conditions"). [3] Interprocedural analysis also appears in papers by Frances Allen. [4] [5] Work on compilation of the APL programming language was necessarily interprocedural. [6] [7]

The techniques of interprocedural analysis and optimization were the subject of academic research in the 1980s and 1990s. They re-emerged into the commercial compiler world in the early 1990s with compilers from both Convex Computer Corporation (the "Application Compiler" for the Convex C4) and from Ardent (the compiler for the Ardent Titan). These compilers demonstrated that the technologies could be made sufficiently fast to be acceptable in a commercial compiler; subsequently interprocedural techniques have appeared in a number of commercial and non-commercial systems.

Flags and implementation

Unix-like

The GNU Compiler Collection has function inlining at all optimization levels. At -O1 this applies only to functions that are called just once (-finline-functions-called-once); at -O2 this constraint is relaxed (-finline-functions). By default this is a single-file-only behavior, but with link-time optimization (-flto) it becomes whole-program. [1] Clang's command-line interface is similar to that of GCC, with the exception that there is no -fwhole-program option. [8]

Object files produced by LTO contain a compiler-specific intermediate representation (IR) that is interpreted at link-time. To make sure this plays well with static libraries, newer GNU linkers have a "linker plugin" interface that allows the compiler to convert the object files into a machine code form when needed. This plugin also helps drive the LTO process in general. Alternatively, a "fat LTO" object can be produced to contain both machine code and the IR, but this takes more space. [1]

Since both GCC and LLVM (Clang) are able to produce an IR from a variety of programming languages, link-time IPO can happen even across language boundaries. This is most commonly demonstrated with C and C++, [9] but LLVM makes it possible for Rust and all other LLVM-based compilers too. [10]

Non-LTO options

GCC and Clang perform IPO by default at optimization level 2. However, the degree of optimization is limited when LTO is disabled, as IPO can only happen within an object file and non-static functions can never be eliminated. The latter problem has a non-LTO solution: the -fwhole-program switch can be used to assume that only main() is non-static, i.e. visible from the outside. [11]
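
A small sketch of the effect (the file and function names are hypothetical; -fwhole-program is as documented for GCC):

/* prog.c (hypothetical single-module program) */
int unused_helper(int x) {   /* non-static: ordinarily it must be kept,      */
    return x * 3;            /* since another object file might call it      */
}

int main(void) {
    return 0;
}

/* Without LTO, building with
       gcc -O2 -fwhole-program prog.c -o prog
   tells GCC to treat main() as the only externally visible function, so
   unused_helper() may be discarded even though it is not declared static.  */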

Another non-LTO technique is "function sections" (-ffunction-sections in GCC and Clang). By placing each function into its own section in the object file, the linker can perform dead code removal without an IR by removing unreferenced sections (using the linker option --gc-sections). [12] A similar option is available for variables (-fdata-sections), but it causes much worse code to be produced.
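
A sketch of how this is typically invoked (hypothetical file; GCC shown, Clang accepts the same options):

/* sections.c (hypothetical) */
int used(void)   { return 1; }
int unused(void) { return 2; }    /* never referenced by the final program   */

int main(void) { return used(); }

/* Each function is emitted into its own section, and the linker discards
   the unreferenced ones (typical invocation):
       gcc -O2 -ffunction-sections -Wl,--gc-sections sections.c -o prog
   No compiler IR is involved; the removal is performed by the linker.       */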

Other

The Intel C/C++ compilers allow whole-program IPO. The flag to enable interprocedural optimizations for a single file is -ip; the flag to enable interprocedural optimization across all files in the program is -ipo. [13] [14]

The MSVC compiler, integrated into Visual Studio, also supports interprocedural optimization on the whole program. [15]

A compiler-independent interface for enabling whole-program interprocedural optimizations is via the INTERPROCEDURAL_OPTIMIZATION property in CMake. [16]


References

  1. "Optimize Options". Using the GNU Compiler Collection (GCC). Link-time optimizations do not require the presence of the whole program to operate. If the program does not require any symbols to be exported, it is possible to combine -flto and -fwhole-program to allow the interprocedural optimizers to use more aggressive assumptions which may lead to improved optimization opportunities. Use of -fwhole-program is not needed when linker plugin is active (see -fuse-linker-plugin).
  2. "LTO Overview". GNU Compiler Collection (GCC) Internals.
  3. Thomas C. Spillman, "Exposing side effects in a PL/I optimizing compiler", in Proceedings of IFIPS 1971, North-Holland Publishing Company, pages 376-381.
  4. Frances E. Allen, "Interprocedural Data Flow Analysis", IFIPS Proceedings, 1974.
  5. Frances E. Allen, and Jack Schwartz, "Determining the Data Flow Relationships in a Collection of Procedures", IBM Research Report RC 4989, Aug. 1974.
  6. Philip Abrams, "An APL Machine", Stanford University Computer Science Department, Report STAN-CS-70-158, February, 1970.
  7. Terrence C. Miller, "Tentative Compilation: A Design for an APL Compiler", Ph.D. Thesis, Yale University, 1978.
  8. "Clang command line argument reference". Clang 11 documentation.
  9. Reinhart, Jonathan. "Can LTO for gcc or clang optimize across C and C++ methods". Stack Overflow.
  10. Woerister, Michael (19 September 2019). "Closing the gap: cross-language LTO between Rust and C/C++". LLVM Dev Blog.
  11. "Optimize Options". Using the GNU Compiler Collection (GCC).
  12. "Function sections". elinux.org.
  13. "Intel compiler 8 documentation". Archived from the original on 2006-09-21. Retrieved 2007-02-13.
  14. "Intel Visual Fortran Compiler 9.1, Standard and Professional Editions, for Windows*". Intel Software Network.
  15. "/GL (Whole Program Optimization)". Microsoft Docs. 2019-03-12. Retrieved 2020-01-26.
  16. "INTERPROCEDURAL_OPTIMIZATION". CMake 3.17.2 Documentation.