In computing, an interpreter is software that executes source code without first compiling it to machine code. Interpreted languages differ from compiled languages, which involve the translation of source code into CPU-native executable code. Depending on the runtime environment, interpreters may first translate the source code to an intermediate format, such as bytecode. Hybrid runtime environments may also translate the bytecode into machine code via just-in-time compilation, as in the case of .NET and Java, instead of interpreting the bytecode directly.
Before the widespread adoption of interpreters, the execution of computer programs often relied on compilers, which translate source code into machine code. Early runtime environments for Lisp and BASIC could parse source code directly. Later, runtime environments were developed for languages such as Perl, Raku, Python, MATLAB, and Ruby that translated source code into an intermediate format before executing it, in order to improve runtime performance.
Code that runs in an interpreter can be run on any platform that has a compatible interpreter. The same code can be distributed to any such platform, instead of an executable having to be built for each platform. Although each programming language is usually associated with a particular runtime environment, a language can be used in different environments. Interpreters have been constructed for languages traditionally associated with compilation, such as ALGOL, Fortran, COBOL, C and C++.
In the early days of computing, compilers were more common than interpreters because hardware at the time could not support both the interpreter and the interpreted code, and the typical batch environment of the era limited the advantages of interpretation. [1]
Interpreters were used as early as 1952 to ease programming within the limitations of computers at the time (e.g. a shortage of program storage space, or no native support for floating point numbers). Interpreters were also used to translate between low-level machine languages, allowing code to be written for machines that were still under construction and tested on computers that already existed. [2] The first interpreted high-level language was Lisp. Lisp was first implemented by Steve Russell on an IBM 704 computer. Russell had read John McCarthy's paper, "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I", and realized (to McCarthy's surprise) that the Lisp eval function could be implemented in machine code. [3] The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions".
The development of editing interpreters was influenced by the need for interactive computing. In the 1960s, the introduction of time-sharing systems allowed multiple users to access a computer simultaneously, and editing interpreters became essential for managing and modifying code in real-time. The first editing interpreters were likely developed for mainframe computers, where they were used to create and modify programs on the fly. One of the earliest examples of an editing interpreter is the EDT (Editor and Debugger for the TECO) system, which was developed in the late 1960s for the PDP-1 computer. EDT allowed users to edit and debug programs using a combination of commands and macros, paving the way for modern text editors and interactive development environments.[ citation needed ]
Notable uses for interpreters include:
Interpretive overhead is the runtime cost of executing code via an interpreter instead of as native (compiled) code. Interpretation is slower because the interpreter must execute several machine-code instructions to accomplish what equivalent native code does directly. In particular, access to variables is slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run time rather than at compile time. [4] However, faster development (due to factors such as a shorter edit-run cycle) can outweigh the value of faster execution, especially during prototyping and testing, when code is edited and rerun frequently. [4] [5]
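To illustrate the variable-access cost in particular, the following self-contained C++ sketch models a hypothetical interpreter environment in which every access to a variable repeats a name-to-storage lookup through a hash map; the Environment class and its method names are illustrative inventions, not taken from any real interpreter. Compiled code would instead resolve each name once, at compile time, to a register or fixed memory location.

#include <iostream>
#include <string>
#include <unordered_map>

// A toy environment: identifiers are resolved by name at run time.
// (Illustrative only; real interpreters use a variety of strategies.)
class Environment {
public:
    void set(const std::string& name, int value) { vars[name] = value; }
    int get(const std::string& name) const { return vars.at(name); }
private:
    std::unordered_map<std::string, int> vars;
};

int main() {
    Environment env;
    env.set("x", 2);
    env.set("y", 40);
    // Each access repeats the name-to-storage lookup, which a compiler
    // would have resolved ahead of time to a direct memory access.
    int sum = env.get("x") + env.get("y");
    std::cout << sum << '\n';  // 42
}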
An interpreter may generate an intermediate representation (IR) of the program from source code in order to achieve goals such as fast runtime performance. A compiler may also generate an IR, but the compiler generates machine code for later execution, whereas the interpreter prepares the program for immediate execution. These differing goals lead to differing IR designs. Many BASIC interpreters replace keywords with single-byte tokens which can be used to find the instruction in a jump table. [4] A few interpreters, such as the PBASIC interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than a byte-oriented program memory structure, where command tokens occupy perhaps 5 bits, nominally "16-bit" constants are stored in a variable-length code requiring 3, 6, 10, or 18 bits, and address operands include a "bit offset". Many BASIC interpreters can store and read back their own tokenized internal representation.
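As a rough sketch of the token-plus-jump-table technique (using an invented token set rather than any actual BASIC dialect), a tokenized program can be executed by using each one-byte token as an index into a table of handler functions:

#include <array>
#include <cstdint>
#include <iostream>
#include <vector>

// Hypothetical one-byte tokens standing in for keywords (not a real BASIC encoding).
enum Token : std::uint8_t { TOK_PRINT = 0, TOK_BEEP = 1, TOK_END = 2 };

void doPrint() { std::cout << "PRINT\n"; }
void doBeep()  { std::cout << "BEEP\n"; }
void doEnd()   { std::cout << "END\n"; }

// The token value is used directly as an index into a jump table of handlers.
const std::array<void (*)(), 3> jumpTable{doPrint, doBeep, doEnd};

int main() {
    // Tokenized internal representation of the program: PRINT, BEEP, PRINT, END.
    std::vector<std::uint8_t> program{TOK_PRINT, TOK_BEEP, TOK_PRINT, TOK_END};
    for (std::uint8_t token : program) {
        jumpTable[token]();          // dispatch via the jump table
        if (token == TOK_END) break;
    }
}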
There are various compromises between the development speed when using an interpreter and the execution speed when using a compiler. Some systems (such as some Lisps) allow interpreted and compiled code to call each other and to share variables. This means that once a routine has been tested and debugged under the interpreter it can be compiled and thus benefit from faster execution while other routines are being developed.[ citation needed ]
Since the early stages of interpreting and compiling are similar, an interpreter might use the same lexical analyzer and parser as a compiler and then interpret the resulting abstract syntax tree.
An expression interpreter written in C++.
import std;

using std::runtime_error;
using std::unique_ptr;
using std::variant;

// data types for abstract syntax tree
enum class Kind : char { VAR, CONST, SUM, DIFF, MULT, DIV, PLUS, MINUS, NOT };

// forward declaration
class Node;

class Variable { public: int* memory; };
class Constant { public: int value; };
class UnaryOperation { public: unique_ptr<Node> right; };
class BinaryOperation { public: unique_ptr<Node> left; unique_ptr<Node> right; };

using Expression = variant<Variable, Constant, BinaryOperation, UnaryOperation>;

class Node {
public:
    Kind kind;
    Expression e;
};

// interpreter procedure: recursively evaluates an expression tree
[[nodiscard]] int executeIntExpression(const Node& n) {
    switch (n.kind) {
    case Kind::VAR:
        return *std::get<Variable>(n.e).memory;
    case Kind::CONST:
        return std::get<Constant>(n.e).value;
    case Kind::SUM:
    case Kind::DIFF:
    case Kind::MULT:
    case Kind::DIV: {
        const BinaryOperation& bin = std::get<BinaryOperation>(n.e);
        const int leftValue = executeIntExpression(*bin.left);
        const int rightValue = executeIntExpression(*bin.right);
        switch (n.kind) {
        case Kind::SUM:  return leftValue + rightValue;
        case Kind::DIFF: return leftValue - rightValue;
        case Kind::MULT: return leftValue * rightValue;
        case Kind::DIV:
            if (rightValue == 0) {
                throw runtime_error("Division by zero");
            }
            return leftValue / rightValue;
        default: std::unreachable();
        }
    }
    case Kind::PLUS:
    case Kind::MINUS:
    case Kind::NOT: {
        const UnaryOperation& un = std::get<UnaryOperation>(n.e);
        const int rightValue = executeIntExpression(*un.right);
        switch (n.kind) {
        case Kind::PLUS:  return +rightValue;
        case Kind::MINUS: return -rightValue;
        case Kind::NOT:   return !rightValue;
        default: std::unreachable();
        }
    }
    default:
        std::unreachable();
    }
}
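A brief usage sketch (not part of the original example) that builds and evaluates the abstract syntax tree for the expression x + 2 using the definitions above might look like this:

int main() {
    int x = 40;

    // leaf nodes: the variable x and the constant 2
    auto variable = std::make_unique<Node>(Node{Kind::VAR, Variable{&x}});
    auto constant = std::make_unique<Node>(Node{Kind::CONST, Constant{2}});

    // root node: the sum x + 2
    const Node sum{Kind::SUM, BinaryOperation{std::move(variable), std::move(constant)}};

    std::cout << executeIntExpression(sum) << '\n';  // prints 42
}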
Just-in-time (JIT) compilation is the process of converting an intermediate format (e.g., bytecode) to native code at runtime. Because this results in native code execution, it avoids much of the runtime cost of interpretation while retaining some of the benefits that led to the development of interpreters.
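A deliberately minimal illustration of the underlying idea, assuming a Linux x86-64 system that permits writable-and-executable memory mappings (real JIT compilers translate whole bytecode sequences, manage code caches, and respect stricter memory protections):

#include <cstdint>
#include <cstring>
#include <iostream>
#include <sys/mman.h>

// Sketch of a tiny "JIT": emit native x86-64 code for a function that
// returns a constant, then call it directly.
int main() {
    // mov eax, 42 ; ret
    std::uint8_t code[] = {0xB8, 0, 0, 0, 0, 0xC3};
    std::int32_t value = 42;
    std::memcpy(code + 1, &value, sizeof value);

    // Allocate a writable, executable page and copy the generated code into it.
    void* mem = mmap(nullptr, sizeof code, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return 1;
    std::memcpy(mem, code, sizeof code);

    // Call the freshly generated native code (platform-specific cast; sketch only).
    auto fn = reinterpret_cast<int (*)()>(mem);
    std::cout << fn() << '\n';  // prints 42

    munmap(mem, sizeof code);
}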