In the x86 computer architecture, HLT (halt) is an assembly language instruction which halts the central processing unit (CPU) until the next external interrupt is fired. [1] Interrupts are signals sent by hardware devices to the CPU alerting it that an event occurred to which it should react. For example, hardware timers send interrupts to the CPU at regular intervals.
Most operating systems execute a HLT instruction when there is no immediate work to be done, putting the processor into an idle state. In Windows NT, for example, this instruction is run in the "System Idle Process". On x86 processors, the opcode of HLT is 0xF4.
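As an illustration, the following C sketch shows the shape of such an idle loop, assuming a GCC-style compiler, x86 inline assembly, and a privileged (ring 0) bare-metal context; the function name idle_loop is hypothetical.

    /* Sketch of a kernel-style idle loop, not any particular kernel's
       implementation. HLT stops the CPU until the next external
       interrupt; executed in user mode it raises a fault. */
    void idle_loop(void)
    {
        for (;;) {
            /* "sti; hlt": STI's one-instruction interrupt shadow
               guarantees no interrupt can arrive between enabling
               interrupts and halting. */
            __asm__ volatile ("sti; hlt");
        }
    }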
On ARM processors, the similar instructions are WFI (Wait For Interrupt) and WFE (Wait For Event).
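A comparable sketch for ARM, under the same assumptions (GCC-style inline assembly, privileged context):

    /* WFI is a hint: the core may sleep until an interrupt arrives,
       but may also return early, so callers re-check their wake
       condition in a loop. */
    static inline void cpu_wait_for_interrupt(void)
    {
        __asm__ volatile ("wfi");
    }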
All x86 processors from the 8086 onward have had the HLT instruction, but it was not used by MS-DOS prior to version 6.0 [2] and was not specifically designed to reduce power consumption until the release of the Intel DX4 processor in 1994. MS-DOS 6.0 provided a POWER.EXE driver that could be installed in CONFIG.SYS; in Microsoft's tests it cut power consumption by about 5%. [3] Some of the first 100 MHz DX chips had a buggy HLT state, prompting the developers of Linux to implement a "no-hlt" option for use when running on those chips, [4] but this was fixed in later chips.
Intel has since introduced additional processor-yielding instructions. These include PAUSE (a spin-wait hint introduced with the Pentium 4, which reduces power use and avoids memory-order violations when leaving a spin loop) and the MONITOR/MWAIT pair (introduced with SSE3, which lets a core sleep until another core writes to a monitored memory range).
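A minimal sketch of PAUSE in a spin-wait loop, using the _mm_pause() intrinsic and a C11 atomic flag; the function name spin_lock is illustrative:

    #include <immintrin.h>   /* _mm_pause emits the PAUSE instruction */
    #include <stdatomic.h>

    /* Spin until the flag is clear; PAUSE throttles the loop, saving
       power without yielding the core to the operating system. */
    void spin_lock(atomic_flag *lock)
    {
        while (atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
            _mm_pause();
    }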
Almost every modern processor instruction set includes an instruction or sleep mode which halts the processor until more work needs to be done. In interrupt-driven processors, this instruction halts the CPU until an external interrupt is received. On most architectures, executing such an instruction allows the processor to significantly reduce its power usage and heat output, which is why it is commonly used instead of busy waiting for sleeping and idling. In most processors, halting (instead of looping) also reduces the latency of the next interrupt.
Since issuing the HLT instruction requires ring 0 access, it can only be run by privileged system software such as the kernel. Because of this, it is often best practice in application programming to use the application programming interface (API) provided for that purpose by the operating system when no more work can be done, such as Linux's sched_yield(). [5] This is referred to as "yielding" the processor. It allows the operating system's scheduler to decide whether other processes are runnable; if every process is sleeping or waiting, the kernel will normally execute a HLT instruction to cut power usage until the next hardware interrupt.
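A hedged user-space sketch of yielding on Linux:

    #include <sched.h>

    /* User code cannot execute HLT directly; instead it hands the CPU
       back to the scheduler. If nothing else is runnable, the kernel's
       idle thread ends up issuing HLT on the program's behalf. */
    int main(void)
    {
        sched_yield();
        return 0;
    }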
x86 is a family of complex instruction set computer (CISC) instruction set architectures initially developed by Intel based on the 8086 microprocessor and its 8-bit-external-bus variant, the 8088. The 8086 was introduced in 1978 as a fully 16-bit extension of Intel's 8-bit 8080 microprocessor, with memory segmentation as a solution for addressing more memory than a plain 16-bit address can cover. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386 and 80486. Colloquially, their names were "186", "286", "386" and "486".
Real mode, also called real address mode, is an operating mode of all x86-compatible CPUs. The mode gets its name from the fact that addresses in real mode always correspond to real locations in memory. Real mode is characterized by a 20-bit segmented memory address space and unlimited direct software access to all addressable memory, I/O addresses and peripheral hardware. Real mode provides no support for memory protection, multitasking, or code privilege levels.
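To make the 20-bit segmented addressing concrete, the calculation is physical address = segment × 16 + offset, as in this small C sketch:

    #include <stdint.h>
    #include <stdio.h>

    /* Real-mode address translation: a 16-bit segment shifted left by
       4 bits plus a 16-bit offset yields a 20-bit physical address. */
    static uint32_t real_mode_addr(uint16_t segment, uint16_t offset)
    {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void)
    {
        /* F000:FFF0 is the x86 reset vector, physical 0xFFFF0. */
        printf("0x%05X\n", real_mode_addr(0xF000, 0xFFF0));
        return 0;
    }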
In computing, a system call is the programmatic way in which a computer program requests a service from the operating system on which it is executed. This may include hardware-related services, creation and execution of new processes, and communication with integral kernel services such as process scheduling. System calls provide an essential interface between a process and the operating system.
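As a minimal Linux illustration, the same service can be requested through a libc wrapper or through the raw syscall(2) interface:

    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        /* Both calls ask the kernel to write to standard output; the
           wrapper hides the user-to-kernel mode transition. */
        write(1, "via libc wrapper\n", 17);
        syscall(SYS_write, 1, "via raw syscall\n", 16);
        return 0;
    }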
Hyper-threading is Intel's proprietary simultaneous multithreading (SMT) implementation used to improve parallelization of computations performed on x86 microprocessors. It was introduced on Xeon server processors in February 2002 and on Pentium 4 desktop processors in November 2002. Since then, Intel has included this technology in Itanium, Atom, and Core 'i' Series CPUs, among others.
x86 assembly language is the name for the family of assembly languages which provide some level of backward compatibility with CPUs back to the Intel 8008 microprocessor, which was launched in April 1972. It is used to produce object code for the x86 class of processors.
x86-64 is a 64-bit version of the x86 instruction set, first announced in 1999. It introduced two new modes of operation, 64-bit mode and compatibility mode, along with a new 4-level paging mode.
Memory-mapped I/O (MMIO) and port-mapped I/O (PMIO) are two complementary methods of performing input/output (I/O) between the central processing unit (CPU) and peripheral devices in a computer. An alternative approach is using dedicated I/O processors, commonly known as channels on mainframe computers, which execute their own instructions.
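A hedged sketch of port-mapped I/O on x86 Linux, which exposes the IN/OUT port address space to privileged user code via ioperm():

    #include <sys/io.h>   /* ioperm, outb: Linux, x86 only */
    #include <stdio.h>

    int main(void)
    {
        /* Port 0x80 is the traditional POST diagnostic port, a
           harmless demonstration target; requires root privileges. */
        if (ioperm(0x80, 1, 1) != 0) {
            perror("ioperm");
            return 1;
        }
        outb(0x42, 0x80);   /* PMIO uses dedicated OUT/IN instructions */
        return 0;
    }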
In computer engineering, Halt and Catch Fire, known by the assembly language mnemonic HCF, is an idiom referring to a computer machine code instruction that causes the computer's central processing unit (CPU) to cease meaningful operation, typically requiring a restart of the computer. It originally referred to a fictitious instruction in IBM System/360 computers, making a joke about its numerous non-obvious instruction mnemonics.
The x86 instruction set refers to the set of instructions that x86-compatible microprocessors support. The instructions are usually part of an executable program, often stored as a computer file and executed on the processor.
In the 80386 microprocessor and later, virtual 8086 mode allows the execution of real mode applications that are incapable of running directly in protected mode while the processor is running a protected mode operating system. It is a hardware virtualization technique that allowed multiple 8086 processors to be emulated by the 386 chip. It emerged from the painful experiences with the 80286 protected mode, which by itself was not suitable for running concurrent real-mode applications well. John Crawford developed the Virtual Mode (VM) bit in the flag register, paving the way to this environment.
In Windows NT operating systems, the System Idle Process contains one or more kernel threads which run when no other runnable thread can be scheduled on a CPU. In a multiprocessor system, there is one idle thread associated with each CPU core. For a system with hyperthreading enabled, there is an idle thread for each logical processor.
The Pentium F00F bug is a design flaw in the majority of Intel Pentium, Pentium MMX, and Pentium OverDrive processors. Discovered in 1997, it can result in the processor ceasing to function until the computer is physically rebooted. The bug has been circumvented through operating system updates.
Rosetta is a dynamic binary translator developed by Apple Inc. for macOS, an application compatibility layer between different instruction set architectures. It enables a transition to newer hardware, by automatically translating software. The name is a reference to the Rosetta Stone, the artifact which enabled translation of Egyptian hieroglyphs.
Enhanced SpeedStep is a series of dynamic frequency scaling technologies built into some of Intel's microprocessors that allow the clock speed of the processor to be dynamically changed by software. This allows the processor to meet the instantaneous performance needs of the operation being performed while minimizing power draw and heat generation. EIST was introduced in several Prescott 6xx-series processors in the first quarter of 2005, namely the Pentium 4 660. Intel Speed Shift Technology (SST) was introduced with Intel's Skylake processors.
System Management Mode is an operating mode of x86 central processor units (CPUs) in which all normal execution, including the operating system, is suspended. An alternate software system which usually resides in the computer's firmware, or a hardware-assisted debugger, is then executed with high privileges.
In computer science, hierarchical protection domains, often called protection rings, are mechanisms to protect data and functionality from faults and malicious behavior.
ntoskrnl.exe, also known as the kernel image, contains the kernel and executive layers of the Microsoft Windows NT kernel, and is responsible for hardware abstraction, process handling, and memory management. In addition to the kernel and executive layers, it contains the cache manager, security reference monitor, memory manager, scheduler (Dispatcher), and the blue screen of death.
The interrupt flag (IF) is a flag bit in the CPU's FLAGS register which determines whether or not the CPU will respond immediately to maskable hardware interrupts. If the flag is set to 1, maskable interrupts are enabled; if it is cleared to 0, they are disabled until interrupts are re-enabled. The interrupt flag does not affect the handling of non-maskable interrupts (NMIs) or software interrupts generated by the INT instruction.
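A ring-0 sketch of toggling the flag with the CLI and STI instructions (user-mode code would fault unless its I/O privilege level allows it):

    /* CLI clears IF, holding off maskable interrupts; STI sets it
       again. NMIs and the INT instruction are unaffected. */
    static inline void irq_disable(void) { __asm__ volatile ("cli" ::: "memory"); }
    static inline void irq_enable(void)  { __asm__ volatile ("sti" ::: "memory"); }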
Dynamic frequency scaling is a power management technique in computer architecture whereby the frequency of a microprocessor can be automatically adjusted "on the fly" depending on actual need, to conserve power and reduce the amount of heat generated by the chip. Dynamic frequency scaling helps preserve battery life on mobile devices and decreases cooling cost and noise in quiet computing settings, or can be useful as a protective measure for overheated systems.
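As one concrete example, Linux exposes its frequency-scaling policy through the cpufreq sysfs interface; the path below assumes a typical Linux system and may be absent on some configurations:

    #include <stdio.h>

    int main(void)
    {
        /* Read the active frequency-scaling governor for CPU 0. */
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "r");
        char governor[64];
        if (f && fgets(governor, sizeof governor, f))
            printf("governor: %s", governor);
        if (f)
            fclose(f);
        return 0;
    }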
A computer processor is described as idle when it is not being used by any program.