Object code optimizer

A binary optimizer takes the existing output from a compiler and produces a more efficient executable file with the same functionality.

An object code optimizer, sometimes also known as a post-pass optimizer or, for small sections of code, a peephole optimizer, forms part of a software compiler toolchain. It takes the output of the source-language compilation step (the object code or binary file) and attempts to replace identifiable sections of that code with more algorithmically efficient replacement code, usually aimed at improving execution speed.
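
For illustration, a minimal sketch of the peephole idea is shown below; the three-address style instructions and the rewrite rules are invented for the example and do not correspond to any particular compiler's output.

```python
# Minimal peephole-optimization sketch over a toy, hypothetical instruction set.
# Each instruction is a tuple: (opcode, operand1, operand2).

def peephole(instrs):
    """Scan a two-instruction window and apply simple rewrite rules."""
    out = []
    i = 0
    while i < len(instrs):
        cur = instrs[i]
        nxt = instrs[i + 1] if i + 1 < len(instrs) else None

        # Rule 1: a store immediately followed by a reload of the same
        # location into the same register is redundant; keep only the store.
        if nxt and cur[0] == "STORE" and nxt == ("LOAD", cur[1], cur[2]):
            out.append(cur)
            i += 2
            continue

        # Rule 2: strength reduction - multiply by 2 becomes an addition.
        if cur[0] == "MUL" and cur[2] == "2":
            out.append(("ADD", cur[1], cur[1]))
            i += 1
            continue

        out.append(cur)
        i += 1
    return out

program = [
    ("LOAD",  "R1", "A"),
    ("MUL",   "R1", "2"),
    ("STORE", "R1", "B"),
    ("LOAD",  "R1", "B"),   # redundant reload, removed by Rule 1
    ("ADD",   "R1", "C"),
]

print(peephole(program))
```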

Examples

An early example was the Optimizer developed by Capex Corporation in the 1970s as an add-on to the IBM COBOL compiler (see the Capex Corporation entry below).

Advantages

The main advantage of re-optimizing existing programs was that the stock of already compiled customer programs (object code) could be improved almost instantly with minimal effort, reducing CPU usage at a fixed cost (the price of the proprietary software). A disadvantage was that new releases of COBOL, for example, would require (charged) maintenance of the optimizer to cater for possible changes in the code generated by the compiler. However, since new releases of COBOL compilers frequently coincided with hardware upgrades, the faster hardware would usually more than compensate for the application programs reverting to their pre-optimized versions (until a supporting optimizer was released).

Other optimizers

Some binary optimizers do executable compression, which reduces the size of binary files using generic data compression techniques, reducing storage requirements and transfer and loading times, but not improving run-time performance. Actual consolidation of duplicate library modules would also reduce memory requirements.
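
A rough sketch of the compression idea follows, using a generic algorithm (zlib) rather than any particular packer; the file names are placeholders, and real packers also prepend a self-extracting stub so the operating system can still launch the file directly.

```python
# Sketch of executable compression with a generic data-compression algorithm.
import zlib

with open("a.out", "rb") as f:          # "a.out" is a placeholder path
    original = f.read()

packed = zlib.compress(original, level=9)
print(f"original: {len(original)} bytes, packed: {len(packed)} bytes")

with open("a.out.packed", "wb") as f:
    f.write(packed)

# Loading the packed program later costs an extra decompression pass,
# which is why compression helps storage and transfer but not run time.
with open("a.out.packed", "rb") as f:
    restored = zlib.decompress(f.read())
assert restored == original
```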

Some binary optimizers use run-time metrics (profiling) to improve performance dynamically, using techniques similar to those of just-in-time (JIT) compilers.
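
A loose sketch of that approach, with an invented block format, threshold, and "optimization", might count block executions and re-optimize only the blocks that become hot:

```python
# Loose sketch of profile-driven re-optimization: count how often each
# basic block executes and rewrite only the hot ones. The block contents,
# the interpreter loop, and the threshold are all hypothetical.
from collections import Counter

HOT_THRESHOLD = 1000
exec_counts = Counter()

def optimize_block(block):
    """Placeholder optimization: drop no-op instructions from a hot block."""
    return [ins for ins in block if ins != "NOP"]

def run_block(name, blocks):
    exec_counts[name] += 1
    if exec_counts[name] == HOT_THRESHOLD:
        blocks[name] = optimize_block(blocks[name])   # re-optimize in place
    for ins in blocks[name]:
        pass  # a real system would execute (or jump into) native code here

blocks = {"loop_body": ["LOAD", "NOP", "ADD", "NOP", "STORE"]}
for _ in range(1500):
    run_block("loop_body", blocks)

print(exec_counts["loop_body"], blocks["loop_body"])
```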

Recent developments

More recently developed "binary optimizers" for various platforms, some claiming novelty but essentially using the same (or similar) techniques described above, include IBM's Automatic Binary Optimizer for z/OS, dynamic binary optimizers such as COBRA and Dynimize, SOLAR (Software Optimization at Link-time And Run-time), and the post-link optimizer BOLT.

Related Research Articles

Linker (computing): computer program which combines multiple object files into a single file

In computing, a linker or link editor is a computer system program that takes one or more object files and combines them into a single executable file, library file, or another "object" file.

PL/I is a procedural, imperative computer programming language initially developed by IBM. It is designed for scientific, engineering, business and system programming. It has been in continuous use by academic, commercial and industrial organizations since it was introduced in the 1960s.

Library (computing): collection of resources used to develop a computer program

In computer science, a library is a collection of read-only resources that is leveraged during software development to implement a computer program.

In computer science, dynamic recompilation is a feature of some emulators and virtual machines, where the system may recompile some part of a program during execution. By compiling during execution, the system can tailor the generated code to reflect the program's run-time environment, and potentially produce more efficient code by exploiting information that is not available to a traditional static compiler.

In computer science, self-modifying code is code that alters its own instructions while it is executing – usually to reduce the instruction path length and improve performance or simply to reduce otherwise repetitively similar code, thus simplifying maintenance. The term is usually only applied to code where the self-modification is intentional, not in situations where code accidentally modifies itself due to an error such as a buffer overflow.

In computing, binary translation is a form of binary recompilation where sequences of instructions are translated from a source instruction set to the target instruction set. In some cases such as instruction set simulation, the target instruction set may be the same as the source instruction set, providing testing and debugging features such as instruction trace, conditional breakpoints and hot spot detection.
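
A toy sketch of the translation step, using two made-up instruction sets rather than real machine encodings, is shown below; real translators work on binary encodings and must also handle control flow, register mapping, and calling conventions.

```python
# Toy sketch of binary translation between two hypothetical instruction sets.
# One source opcode maps to one or more target opcodes.
TRANSLATION = {
    "ADD": ["ADD"],
    "SUB": ["SUB"],
    "MAC": ["MUL", "ADD"],   # source has multiply-accumulate, target does not
}

def translate(source_instrs):
    target = []
    for op in source_instrs:
        target.extend(TRANSLATION[op])
    return target

print(translate(["ADD", "MAC", "SUB"]))  # ['ADD', 'MUL', 'ADD', 'SUB']
```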

In computing, just-in-time (JIT) compilation is compilation during execution of a program rather than before execution. This may consist of source code translation but is more commonly bytecode translation to machine code, which is then executed directly. A system implementing a JIT compiler typically continuously analyses the code being executed and identifies parts of the code where the speedup gained from compilation or recompilation would outweigh the overhead of compiling that code.
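
A minimal sketch of the hot-spot idea follows, using an invented bytecode and threshold: the code is interpreted until it has run often enough to justify compiling it, here to an ordinary Python function.

```python
# Minimal JIT-style sketch: a tiny stack "bytecode" is interpreted until it
# becomes hot, then compiled once and executed directly afterwards.
JIT_THRESHOLD = 100   # invented threshold

def interpret(bytecode, x):
    stack = [x]
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

def compile_to_python(bytecode):
    """Translate the bytecode into a Python expression, compiled once."""
    stack = ["x"]
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(repr(arg))
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(f"({a} + {b})")
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(f"({a} * {b})")
    return eval(compile(f"lambda x: {stack.pop()}", "<jit>", "eval"))

class Function:
    def __init__(self, bytecode):
        self.bytecode, self.calls, self.compiled = bytecode, 0, None

    def __call__(self, x):
        self.calls += 1
        if self.compiled is None and self.calls >= JIT_THRESHOLD:
            self.compiled = compile_to_python(self.bytecode)  # hot: compile
        if self.compiled is not None:
            return self.compiled(x)
        return interpret(self.bytecode, x)

f = Function([("PUSH", 3), ("MUL", None), ("PUSH", 1), ("ADD", None)])  # 3*x + 1
print([f(i) for i in range(5)])
```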

In compiler theory, dead-code elimination is a compiler optimization that removes dead code. Removing such code has several benefits: it shrinks program size (an important consideration in some contexts), it reduces resource usage such as the number of bytes to be transferred, and it allows the running program to avoid executing irrelevant operations, which reduces its running time. It can also enable further optimizations by simplifying program structure. Dead code includes code that can never be executed, and code that only affects dead variables, that is, variables that are irrelevant to the program.
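
A small sketch of the dead-variable case on an invented three-address form, assuming straight-line code with no side effects:

```python
# Dead-code elimination sketch on a toy three-address form:
# (dest, op, src1, src2). A backward pass keeps an instruction only if its
# destination is live, i.e. used later in the block or needed afterwards.

def eliminate_dead_code(instrs, live_out):
    live = set(live_out)          # variables needed after the block
    kept = []
    for dest, op, a, b in reversed(instrs):
        if dest in live:
            kept.append((dest, op, a, b))
            live.discard(dest)
            live.update(v for v in (a, b) if v is not None)
        # otherwise the assignment is dead and is dropped
    return list(reversed(kept))

program = [
    ("t1", "add", "x", "y"),
    ("t2", "mul", "x", "x"),   # dead: t2 is never used
    ("z",  "add", "t1", "x"),
]
print(eliminate_dead_code(program, live_out={"z"}))
```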

A fat binary is a computer executable program or library which has been expanded with code native to multiple instruction sets which can consequently be run on multiple processor types. This results in a file larger than a normal one-architecture binary file, thus the name.

In computer programming, a runtime system or runtime environment is a sub-system that exists both in the computer where a program is created and in the computers where the program is intended to run. The name comes from the distinction between compile time and run time in compiled languages, which similarly separates the processes involved in creating a program (compilation) from those involved in executing it on the target machine.

In software engineering, profiling is a form of dynamic program analysis that measures, for example, the space (memory) or time complexity of a program, the usage of particular instructions, or the frequency and duration of function calls. Most commonly, profiling information serves to aid program optimization, and more specifically, performance engineering.
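
A minimal example using Python's standard-library profiler; the workload function is invented purely to have something to measure:

```python
# Minimal profiling example using the standard-library cProfile module.
import cProfile
import pstats

def slow_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def workload():
    for _ in range(50):
        slow_sum(10_000)

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Report call counts and cumulative time per function, most expensive first.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```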

Code morphing is an approach used in obfuscating software to protect applications from reverse engineering, analysis, modification, and cracking. The technology protects intermediate-level code, such as code compiled from Java and .NET languages, rather than binary object code. Code morphing breaks the protected code into several processor commands or small command snippets and replaces them with others while maintaining the same end result, thus obfuscating the code at the intermediate level.

Thread Level Speculation (TLS), also known as Speculative Multi-threading, or Speculative Parallelization, is a technique to speculatively execute a section of computer code that is anticipated to be executed later in parallel with the normal execution on a separate independent thread. Such a speculative thread may need to make assumptions about the values of input variables. If these prove to be invalid, then the portions of the speculative thread that rely on these input variables will need to be discarded and squashed. If the assumptions are correct the program can complete in a shorter time provided the thread was able to be scheduled efficiently.
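
A rough sketch of the control flow, with an invented two-stage computation and predicted value:

```python
# Rough sketch of thread-level speculation: a later stage is started on a
# worker thread using a predicted input; if the prediction turns out wrong,
# the speculative result is discarded and the stage is re-executed.
from concurrent.futures import ThreadPoolExecutor

def stage_one():
    """Produces the value the next stage depends on."""
    return sum(range(1_000_000)) % 7

def stage_two(x):
    """The later section of code that is executed speculatively."""
    return [i * x for i in range(10_000)]

PREDICTED_X = 0  # assumed value of the input variable

with ThreadPoolExecutor(max_workers=1) as pool:
    speculative = pool.submit(stage_two, PREDICTED_X)  # start early, in parallel
    actual_x = stage_one()                             # normal execution continues

    if actual_x == PREDICTED_X:
        result = speculative.result()    # speculation paid off
    else:
        _ = speculative.result()         # wrong guess: wait, then discard it
        result = stage_two(actual_x)     # re-execute with the real input

print(actual_x, len(result))
```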

Capex Corporation: American software company (1969–1982)

Capex Corporation was an American computer software company in existence from 1969 through 1982 and based in Phoenix, Arizona. It made a variety of software products, mostly system utilities for the IBM mainframe platform, and was known for its Optimizer add-on to the IBM COBOL compiler. Capex was acquired by Computer Associates in 1982.

In computer science, ahead-of-time compilation is the act of compiling an (often) higher-level programming language into an (often) lower-level language before execution of a program, usually at build-time, to reduce the amount of work needed to be performed at run time.

Eclipse OpenJ9 is a high performance, scalable, Java virtual machine (JVM) implementation that is fully compliant with the Java Virtual Machine Specification.

Profile-guided optimization, also known as profile-directed feedback (PDF) or feedback-directed optimization (FDO), is a compiler optimization technique in computer programming that uses profiling to improve program runtime performance.
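
A sketch of the two-phase workflow with invented message types: an instrumented training run collects frequencies, and the "recompiled" dispatcher then tests the most common case first.

```python
# Profile-guided optimization sketch: phase 1 records how often each case
# occurs; phase 2 orders the checks so the hot case costs one comparison.
from collections import Counter

CASES = {"common": "fast path", "rare_a": "slow path A", "rare_b": "slow path B"}
profile = Counter()

def handle_instrumented(msg_type):
    profile[msg_type] += 1                 # phase 1: record the branch taken
    return CASES[msg_type]

# Phase 1: training run on representative input (counts are invented).
for msg in ["common"] * 98 + ["rare_a", "rare_b"]:
    handle_instrumented(msg)

# Phase 2: "recompile" - order the checks by observed frequency.
ordered = [case for case, _ in profile.most_common()]

def handle_optimized(msg_type):
    for case in ordered:                   # hottest case is tested first
        if msg_type == case:
            return CASES[case]
    raise ValueError(msg_type)

print(ordered)                             # ['common', 'rare_a', 'rare_b']
print(handle_optimized("common"))          # 'fast path'
```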

History of compiler construction

In computing, a compiler is a computer program that transforms source code written in a programming language or computer language, into another computer language. The most common reason for transforming source code is to create an executable program.

Pin is a platform for creating analysis tools. A pin tool comprises instrumentation, analysis and callback routines. Instrumentation routines are called when code that has not yet been recompiled is about to be run, and enable the insertion of analysis routines. Analysis routines are called when the code associated with them is run. Callback routines are only called when specific conditions are met, or when a certain event has occurred. Pin provides an extensive application programming interface (API) for instrumentation at different abstraction levels, from one instruction to an entire binary module. It also supports callbacks for many events such as library loads, system calls, signals/exceptions and thread creation events.

IBM COBOL

IBM has offered the computer programming language COBOL on many platforms, starting with the IBM 1400 series and IBM 7000 series, continuing into the industry-dominant IBM System/360 and IBM System/370 mainframe systems, and then through IBM Power Systems (AIX), IBM Z, and x86 (Linux).

References

  1. "Archived copy" (PDF). Archived from the original (PDF) on 2010-07-11. Retrieved 2010-01-07.
  2. Evans, Michael (1982-12-01). "Software engineering for the Cobol environment". Communications of the ACM. 25 (12): 874–882. doi:10.1145/358728.358732. S2CID 17268690. Archived from the original on 2021-10-27. Retrieved 2021-10-27.
  3. "IBM Automatic Binary Optimizer for z/OS - Overview". www.ibm.com. 2015. Archived from the original on 2020-10-18. Retrieved 2020-05-15.
  4. "IBM Automatic Binary Optimizer for z/OS Trial Cloud Service". optimizer.ibm.com. 2020. Archived from the original on 2021-01-19. Retrieved 2021-10-27.
  5. "The Binary Code Optimizer". Archived from the original on 2010-07-22. Retrieved 2010-01-07.
  6. Duesterwald, E. (2005). "Design and Engineering of a Dynamic Binary Optimizer". Proceedings of the IEEE. 93 (2): 436–448. doi:10.1109/JPROC.2004.840302. S2CID 2217101.
  7. Xu, Chaohao; Li, Jianhui; Bao, Tao; Wang, Yun; Huang, Bo (2007-06-13). "Metadata driven memory optimizations in dynamic binary translator". Proceedings of the 3rd International Conference on Virtual Execution Environments (VEE '07). Association for Computing Machinery. pp. 148–157. doi:10.1145/1254810.1254831. ISBN 978-1-59593-630-1. S2CID 15234434. Archived from the original on 2021-10-27. Retrieved 2021-10-27 via ACM Digital Library.
  8. "Archived copy" (PDF). Archived (PDF) from the original on 2009-04-19. Retrieved 2010-01-07.
  9. Kim, Jinpyo; Hsu, Wei-Chung; Yew, Pen-Chung (2007). "COBRA: An Adaptive Runtime Binary Optimization Framework for Multithreaded Applications". 2007 International Conference on Parallel Processing (ICPP 2007). p. 25. doi:10.1109/ICPP.2007.23. ISBN 978-0-7695-2933-2. S2CID 15079211.
  10. "Archived copy" (PDF). Archived from the original (PDF) on 2010-09-11. Retrieved 2010-01-07.
  11. ""SOLAR" Software Optimization at Link-time And Run-time". Archived from the original on 2016-02-14.
  12. "Dynimize Product Overview". dynimize.com. Archived from the original on 2021-10-25. Retrieved 2021-04-26.
  13. Panchenko, Maksim; Auler, Rafael; Nell, Bill; Ottoni, Guilherme (2019-02-16). "BOLT: A Practical Binary Optimizer for Data Centers and Beyond". 2019 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). pp. 2–14. arXiv:1807.06735. doi:10.1109/CGO.2019.8661201. ISBN 978-1-7281-1436-1. S2CID 49869552.