64-bit computing

Hex dump of the section table in a 64-bit Portable Executable file. A 64-bit word can be expressed as a sequence of 16 hexadecimal digits.

In computer architecture, 64-bit integers, memory addresses, or other data units [note 1] are those that are 64 bits wide. Also, 64-bit central processing units (CPUs) and arithmetic logic units (ALUs) are those that are based on processor registers, address buses, or data buses of that size. A computer that uses such a processor is a 64-bit computer.

From the software perspective, 64-bit computing means the use of machine code with 64-bit virtual memory addresses. However, not all 64-bit instruction sets support full 64-bit virtual memory addresses; x86-64 and AArch64, for example, support only 48 bits of virtual address, with the remaining 16 bits of the virtual address required to be all zeros (000...) or all ones (111...), and several 64-bit instruction sets support fewer than 64 bits of physical memory address.
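As an illustration of this rule, the following minimal C sketch (illustrative only, not taken from any specification) tests whether a 64-bit value is a valid address under a 48-bit canonical scheme, i.e. whether bits 63 through 47 are all zeros or all ones:

#include <stdbool.h>
#include <stdint.h>

/* Returns true if bits 63..47 of va are all zeros or all ones,
   i.e. va is "canonical" under a 48-bit virtual address scheme. */
static bool is_canonical_48bit(uint64_t va)
{
    uint64_t top = va >> 47;            /* the upper 17 bits (63..47) */
    return top == 0 || top == 0x1FFFF;  /* all zeros or all ones */
}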

The term 64-bit also describes a generation of computers in which 64-bit processors are the norm. 64 bits is a word size that defines certain classes of computer architecture, buses, memory, and CPUs and, by extension, the software that runs on them. 64-bit CPUs have been used in supercomputers since the 1970s (Cray-1, 1975) and in reduced instruction set computers (RISC) based workstations and servers since the early 1990s. In 2003, 64-bit CPUs were introduced to the mainstream PC market in the form of x86-64 processors and the PowerPC G5.

A 64-bit register can hold any of 2^64 (over 18 quintillion, or about 1.8×10^19) different values. The range of integer values that can be stored in 64 bits depends on the integer representation used. With the two most common representations, the range is 0 through 18,446,744,073,709,551,615 (equal to 2^64 − 1) for representation as an (unsigned) binary number, and −9,223,372,036,854,775,808 (−2^63) through 9,223,372,036,854,775,807 (2^63 − 1) for representation as two's complement. Hence, a processor with 64-bit memory addresses can directly access 2^64 bytes (16 exbibytes or EiB) of byte-addressable memory.
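These bounds can be confirmed directly from the fixed-width types in C's <stdint.h>; a minimal sketch, assuming a C99-or-later hosted implementation:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Unsigned 64-bit range: 0 through 2^64 - 1 */
    printf("unsigned 64-bit max: %" PRIu64 "\n", UINT64_MAX);  /* 18446744073709551615 */

    /* Signed (two's complement) 64-bit range: -2^63 through 2^63 - 1 */
    printf("signed 64-bit min:   %" PRId64 "\n", INT64_MIN);   /* -9223372036854775808 */
    printf("signed 64-bit max:   %" PRId64 "\n", INT64_MAX);   /*  9223372036854775807 */
    return 0;
}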

With no further qualification, a 64-bit computer architecture generally has integer and addressing registers that are 64 bits wide, allowing direct support for 64-bit data types and addresses. However, a CPU might have external data buses or address buses with different sizes from the registers, even larger (the 32-bit Pentium had a 64-bit data bus, for instance). [1]

Architectural implications

Processor registers are typically divided into several groups: integer, floating-point, single instruction, multiple data (SIMD), control, and often special registers for address arithmetic which may have various uses and names such as address, index, or base registers. However, in modern designs, these functions are often performed by more general purpose integer registers. In most processors, only integer or address registers can be used to address data in memory; the other types of registers cannot. The size of these registers therefore normally limits the amount of directly addressable memory, even if there are registers, such as floating-point registers, that are wider.

Most high performance 32-bit and 64-bit processors (some notable exceptions are older or embedded ARM architecture (ARM) and 32-bit MIPS architecture (MIPS) CPUs) have integrated floating point hardware, which is often, but not always, based on 64-bit units of data. For example, although the x86/x87 architecture has instructions able to load and store 64-bit (and 32-bit) floating-point values in memory, the internal floating-point data and register format is 80 bits wide, while the general-purpose registers are 32 bits wide. In contrast, the 64-bit Alpha family uses a 64-bit floating-point data and register format, and 64-bit integer registers.

History

Many computer instruction sets are designed so that a single integer register can store the memory address to any location in the computer's physical or virtual memory. Therefore, the total number of addresses to memory is often determined by the width of these registers. The IBM System/360 of the 1960s was an early 32-bit computer; it had 32-bit integer registers, although it only used the low order 24 bits of a word for addresses, resulting in a 16 MiB (16 × 1024^2 bytes) address space. 32-bit superminicomputers, such as the DEC VAX, became common in the 1970s, and 32-bit microprocessors, such as the Motorola 68000 family and the 32-bit members of the x86 family starting with the Intel 80386, appeared in the mid-1980s, making 32 bits something of a de facto consensus as a convenient register size.

A 32-bit address register meant that 2^32 addresses, or 4 GiB of random-access memory (RAM), could be referenced. When these architectures were devised, 4 GiB of memory was so far beyond the typical amounts (4 MiB) in installations that this was considered to be enough headroom for addressing. 4.29 billion addresses were considered an appropriate size to work with for another important reason: 4.29 billion integers are enough to assign unique references to most entities in applications like databases.

Some supercomputer architectures of the 1970s and 1980s, such as the Cray-1, [2] used registers up to 64 bits wide and supported 64-bit integer arithmetic, although they did not support 64-bit addressing. In the mid-1980s, development of the Intel i860 [3] began, culminating in a 1989 release; the i860 had 32-bit integer registers and 32-bit addressing, so it was not a fully 64-bit processor, although its graphics unit supported 64-bit integer arithmetic. [4] However, 32 bits remained the norm until the early 1990s, when continual reductions in the cost of memory led to installations with amounts of RAM approaching 4 GiB, and the use of virtual memory spaces exceeding the 4 GiB ceiling became desirable for handling certain types of problems. In response, MIPS and DEC developed 64-bit microprocessor architectures, initially for high-end workstation and server machines. By the mid-1990s, HAL Computer Systems, Sun Microsystems, IBM, Silicon Graphics, and Hewlett-Packard had developed 64-bit architectures for their workstation and server systems. A notable exception to this trend was IBM's mainframes, which then used 32-bit data and 31-bit address sizes; IBM did not ship 64-bit mainframe processors until 2000.

During the 1990s, several low-cost 64-bit microprocessors were used in consumer electronics and embedded applications. Notably, the Nintendo 64 [5] and the PlayStation 2 had 64-bit microprocessors before their introduction in personal computers. High-end printers, network equipment, and industrial computers also used 64-bit microprocessors, such as the Quantum Effect Devices R5000. [citation needed] 64-bit computing started to trickle down to the personal computer desktop from 2003 onward, when some models in Apple's Macintosh lines switched to PowerPC 970 processors (termed G5 by Apple) and Advanced Micro Devices (AMD) released its first 64-bit x86-64 processor. Physical memory eventually caught up with 32-bit limits: by 2023, laptop computers were commonly equipped with 16 GB of memory and servers with up to 64 GB, greatly exceeding the 4 GB address capacity of 32 bits.

64-bit data timeline

1961
IBM delivers the IBM 7030 Stretch supercomputer, which uses 64-bit data words and 32- or 64-bit instruction words.
1974
Control Data Corporation launches the CDC Star-100 vector supercomputer, which uses a 64-bit word architecture (prior CDC systems were based on a 60-bit architecture).
International Computers Limited launches the ICL 2900 Series with 32-bit, 64-bit, and 128-bit two's complement integers; 64-bit and 128-bit floating point; 32-bit, 64-bit, and 128-bit packed decimal and a 128-bit accumulator register. The architecture has survived through a succession of ICL and Fujitsu machines. The latest is the Fujitsu Supernova, which emulates the original environment on 64-bit Intel processors.
1976
Cray Research delivers the first Cray-1 supercomputer, which is based on a 64-bit word architecture and will form the basis for later Cray vector supercomputers.
1983
Elxsi launches the Elxsi 6400 parallel minisupercomputer. The Elxsi architecture has 64-bit data registers but a 32-bit address space.
1989
Intel introduces the Intel i860 reduced instruction set computer (RISC) processor. Marketed as a "64-Bit Microprocessor", it had essentially a 32-bit architecture, enhanced with a 3D graphics unit capable of 64-bit integer operations. [6]
1993
Atari introduces the Atari Jaguar video game console, which includes some 64-bit wide data paths in its architecture. [7]

64-bit address timeline

1991
MIPS Computer Systems produces the first 64-bit microprocessor, the R4000, which implements the MIPS III architecture, the third revision of its MIPS architecture. [8] The CPU is used in SGI graphics workstations starting with the IRIS Crimson. Kendall Square Research delivers its first KSR1 supercomputer, based on a proprietary 64-bit RISC processor architecture running OSF/1.
1992
Digital Equipment Corporation (DEC) introduces the pure 64-bit Alpha architecture, which grew out of the PRISM project. [9]
1994
Intel announces plans for the 64-bit IA-64 architecture (jointly developed with Hewlett-Packard) as a successor to its 32-bit IA-32 processors. A 1998 to 1999 launch date was targeted.
1995
Sun launches a 64-bit SPARC processor, the UltraSPARC. [10] Fujitsu-owned HAL Computer Systems launches workstations based on a 64-bit CPU, HAL's independently designed first-generation SPARC64. IBM releases the A10 and A30 microprocessors, the first 64-bit PowerPC AS processors. [11] IBM also releases a 64-bit AS/400 system upgrade, which can convert the operating system, database and applications.
1996
Nintendo introduces the Nintendo 64 video game console, built around a low-cost variant of the MIPS R4000. HP releases the first implementation of its 64-bit PA-RISC 2.0 architecture, the PA-8000. [12]
1998
IBM releases the POWER3 line of full-64-bit PowerPC/POWER processors. [13]
1999
Intel releases the instruction set for the IA-64 architecture. AMD publicly discloses its set of 64-bit extensions to IA-32, called x86-64 (later branded AMD64).
2000
IBM ships its first 64-bit z/Architecture mainframe, the zSeries z900. z/Architecture is a 64-bit version of the 32-bit ESA/390 architecture, a descendant of the 32-bit System/360 architecture.
2001
Intel ships its IA-64 processor line after repeated delays in getting to market. Now branded Itanium and targeting high-end servers, the line fails to meet sales expectations.
2003
AMD introduces its Opteron and Athlon 64 processor lines, based on its AMD64 architecture, the first x86-based 64-bit processor architecture. Apple also ships the 64-bit "G5" PowerPC 970 CPU produced by IBM. Intel maintains that its Itanium chips will remain its only 64-bit processors.
2004
Intel, reacting to the market success of AMD, admits it has been developing a clone of the AMD64 extensions named IA-32e (later renamed EM64T, then yet again renamed to Intel 64). Intel ships updated versions of its Xeon and Pentium 4 processor families supporting the new 64-bit instruction set.
VIA Technologies announces the Isaiah 64-bit processor. [14]
2006
Sony, IBM, and Toshiba begin manufacturing the 64-bit Cell processor for use in the PlayStation 3, servers, workstations, and other appliances. Intel releases the Core 2 Duo, the first mainstream x86-64 processor for its mobile, desktop, and workstation lines. Earlier 64-bit extension processor lines were not widely available in the consumer retail market (most 64-bit Pentium 4 and Pentium D chips were sold to OEMs); 64-bit Pentium 4, Pentium D, and Celeron parts did not reach mass production until late 2006 because of poor yields (most good-yield wafers were targeted at servers and mainframes, while the mainstream remained on the 130 nm 32-bit processor line until 2006), and they soon became low-end parts after the Core 2 debuted. AMD releases its first 64-bit mobile processor, manufactured on a 90 nm process.
2011
ARM Holdings announces ARMv8-A, the first 64-bit version of the ARM architecture family. [15]
2012
On 30 October 2012, ARM Holdings announces the Cortex-A53 and Cortex-A57, its first cores based on its 64-bit architecture. [16] [17]
2013
Apple announces the iPhone 5S, the first smartphone with a 64-bit processor, its A7 ARMv8-A-based system-on-a-chip, alongside the iPad Air and iPad Mini 2, the first tablets with 64-bit processors.
2014
Google announces the Nexus 9 tablet, the first Android device to run on the 64-bit Tegra K1 chip.
2015
Apple announces the iPod Touch (6th generation), the first iPod Touch to use a 64-bit processor, the A8 ARMv8-A-based system-on-a-chip, alongside the Apple TV (4th generation), the first Apple TV with a 64-bit processor.
2018
Apple announces the Apple Watch Series 4, the first Apple Watch to use a 64-bit processor, the S4 ARMv8-A-based system-on-a-chip.
2020
Synopsys announces the ARCv3 ISA, the first 64-bit version of the ARC ISA. [18]

64-bit operating system timeline

1985
Cray releases UNICOS, the first 64-bit implementation of the Unix operating system. [19]
1993
DEC releases the 64-bit DEC OSF/1 AXP Unix-like operating system (later renamed Tru64 UNIX) for its systems based on the Alpha architecture.
1994
Support for the R8000 processor is added by Silicon Graphics to the IRIX operating system in release 6.0.
1995
DEC releases OpenVMS 7.0, the first full 64-bit version of OpenVMS for Alpha. The first 64-bit Linux distribution for the Alpha architecture is released. [20]
1996
Support for the R4x00 processors in 64-bit mode is added by Silicon Graphics to the IRIX operating system in release 6.2.
1998
Sun releases Solaris 7, with full 64-bit UltraSPARC support.
2000
IBM releases z/OS, a 64-bit operating system descended from MVS, for the new zSeries 64-bit mainframes; 64-bit Linux on z Systems follows the CPU release almost immediately.
2001
Linux becomes the first OS kernel to fully support x86-64 (on a simulator, as no x86-64 processors had been released yet). [21]
2001
Microsoft releases Windows XP 64-Bit Edition for the Itanium's IA-64 architecture; it could run 32-bit applications through an execution layer.
2003
Apple releases its Mac OS X 10.3 "Panther" operating system which adds support for native 64-bit integer arithmetic on PowerPC 970 processors. [22] Several Linux distributions release with support for AMD64. FreeBSD releases with support for AMD64.
2005
On January 4, Microsoft discontinues Windows XP 64-Bit Edition, as no PCs with IA-64 processors had been available since the previous September, and announces that it is developing x86-64 versions of Windows to replace it. [23] On January 31, Sun releases Solaris 10 with support for AMD64 and EM64T processors. On April 29, Apple releases Mac OS X 10.4 "Tiger" which provides limited support for 64-bit command-line applications on machines with PowerPC 970 processors; later versions for Intel-based Macs supported 64-bit command-line applications on Macs with EM64T processors. On April 30, Microsoft releases Windows XP Professional x64 Edition and Windows Server 2003 x64 Edition for AMD64 and EM64T processors. [24]
2006
Microsoft releases Windows Vista, including a 64-bit version for AMD64/EM64T processors that retains 32-bit compatibility. In the 64-bit version, all Windows applications and components are 64-bit, although many also have their 32-bit versions included for compatibility with plug-ins.
2007
Apple releases Mac OS X 10.5 "Leopard", which fully supports 64-bit applications on machines with PowerPC 970 or EM64T processors.
2009
Microsoft releases Windows 7, which, like Windows Vista, includes a full 64-bit version for AMD64/Intel 64 processors; most new computers are loaded by default with a 64-bit version. Microsoft also releases Windows Server 2008 R2, which is the first 64-bit only server operating system. Apple releases Mac OS X 10.6, "Snow Leopard", which ships with a 64-bit kernel for AMD64/Intel64 processors, although only certain recent models of Apple computers will run the 64-bit kernel by default. Most applications bundled with Mac OS X 10.6 are now also 64-bit. [22]
2011
Apple releases Mac OS X 10.7, "Lion", which runs the 64-bit kernel by default on supported machines. Older machines that are unable to run the 64-bit kernel run the 32-bit kernel, but, as with earlier releases, can still run 64-bit applications; Lion does not support machines with 32-bit processors. Nearly all applications bundled with Mac OS X 10.7 are now also 64-bit, including iTunes.
2012
Microsoft releases Windows 8 which supports UEFI Class 3 (UEFI without CSM) and Secure Boot. [25]
2013
Apple releases iOS 7, which, on machines with AArch64 processors, has a 64-bit kernel that supports 64-bit applications.
2014
Google releases Android Lollipop, the first version of the Android operating system with support for 64-bit processors.
2017
Apple releases iOS 11, supporting only machines with AArch64 processors. It has a 64-bit kernel that only supports 64-bit applications. 32-bit applications are no longer compatible.
2018
Apple releases watchOS 5, the first watchOS version to bring 64-bit support.
2019
Apple releases macOS 10.15 "Catalina", dropping support for 32-bit Intel applications.
2021
Microsoft releases Windows 11 on October 5, which only supports 64-bit systems, dropping support for IA-32 systems.
2022
Google releases the Pixel 7, which drops support for non-64-bit applications. Apple releases watchOS 9, the first watchOS version to run exclusively on Apple Watch models with 64-bit processors (Apple Watch Series 4 or newer, Apple Watch SE (1st generation) or newer, and the newly introduced Apple Watch Ultra), dropping support for the Apple Watch Series 3, the last Apple Watch model with a 32-bit processor.

Limits of processors

In principle, a 64-bit microprocessor can address 16 EiB (16 × 1024^6 = 2^64 = 18,446,744,073,709,551,616 bytes, or about 18.4 exabytes) of memory. However, not all instruction sets, and not all processors implementing those instruction sets, support a full 64-bit virtual or physical address space.

The x86-64 architecture (as of 2016) allows 48 bits for virtual memory and, for any given processor, up to 52 bits for physical memory. [26] [27] These limits allow memory sizes of 256 TiB (256 × 1024^4 bytes) and 4 PiB (4 × 1024^5 bytes), respectively. A PC cannot currently contain 4 pebibytes of memory (due to the physical size of the memory chips), but AMD envisioned large servers, shared memory clusters, and other uses of physical address space that might approach this in the foreseeable future. Thus the 52-bit physical address provides ample room for expansion while not incurring the cost of implementing full 64-bit physical addresses. Similarly, the 48-bit virtual address space was designed to provide 65,536 (2^16) times the 32-bit limit of 4 GiB (4 × 1024^3 bytes), allowing room for later expansion and incurring no overhead of translating full 64-bit addresses.
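The sizes quoted above follow directly from the bit widths (2^48 bytes = 256 TiB and 2^52 bytes = 4 PiB); a small, purely illustrative C sketch of that arithmetic:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t virt = UINT64_C(1) << 48;  /* bytes addressable with 48 address bits */
    uint64_t phys = UINT64_C(1) << 52;  /* bytes addressable with 52 address bits */

    /* 1 TiB = 2^40 bytes, 1 PiB = 2^50 bytes */
    printf("48-bit virtual address space:  %llu TiB\n", (unsigned long long)(virt >> 40));  /* 256 */
    printf("52-bit physical address space: %llu PiB\n", (unsigned long long)(phys >> 50));  /* 4   */
    return 0;
}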

The Power ISA v3.0 allows 64 bits for an effective address, mapped to a segmented address of between 65 and 78 bits, for virtual memory, and, for any given processor, up to 60 bits for physical memory. [28]

The Oracle SPARC Architecture 2015 allows 64 bits for virtual memory and, for any given processor, between 40 and 56 bits for physical memory. [29]

The ARM AArch64 Virtual Memory System Architecture allows 48 bits for virtual memory and, for any given processor, from 32 to 48 bits for physical memory. [30]

The DEC Alpha specification requires a minimum of 43 bits of virtual memory address space (8 TiB) to be supported, and the hardware must check and trap if the remaining unsupported bits are not zero (to support compatibility on future processors). The Alpha 21064 supported 43 bits of virtual memory address space (8 TiB) and 34 bits of physical memory address space (16 GiB). The Alpha 21164 supported 43 bits of virtual memory address space (8 TiB) and 40 bits of physical memory address space (1 TiB). The Alpha 21264 supported a user-configurable 43 or 48 bits of virtual memory address space (8 TiB or 256 TiB) and 44 bits of physical memory address space (16 TiB).

64-bit applications

32-bit vs 64-bit

A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most operating systems must be extensively modified to take advantage of the new architecture, because that software has to manage the actual memory-addressing hardware. [31] Other software must also be ported to use the new abilities; older 32-bit software may be supported in one of three ways: the 64-bit instruction set may be a superset of the 32-bit instruction set, so that processors that support the 64-bit instruction set can also run code for the 32-bit instruction set; 32-bit software may run under software emulation; or the 64-bit processor may include an actual 32-bit processor core, as with some Itanium processors from Intel, which included an IA-32 processor core to run 32-bit x86 applications. The operating systems for those 64-bit architectures generally support both 32-bit and 64-bit applications. [32]

One significant exception to this is the IBM AS/400, software for which is compiled into a virtual instruction set architecture (ISA) called Technology Independent Machine Interface (TIMI); TIMI code is then translated to native machine code by low-level software before being executed. The translation software is all that must be rewritten to move the full OS and all software to a new platform, as when IBM transitioned the native instruction set for AS/400 from the older 32/48-bit IMPI to the newer 64-bit PowerPC-AS, codenamed Amazon. The IMPI instruction set was quite different from even 32-bit PowerPC, so this transition was even bigger than moving a given instruction set from 32 to 64 bits.

On 64-bit hardware with x86-64 architecture (AMD64), most 32-bit operating systems and applications can run with no compatibility issues. While the larger address space of 64-bit architectures makes working with large data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate on whether they or their 32-bit compatibility modes will be faster than comparably priced 32-bit systems for other tasks.

A compiled Java program can run on a 32- or 64-bit Java virtual machine with no modification. The lengths and precision of all the built-in types, such as char, short, int, long, float, and double, and the types that can be used as array indices, are specified by the standard and are not dependent on the underlying architecture. Java programs that run on a 64-bit Java virtual machine have access to a larger address space. [33]

Speed is not the only factor to consider in comparing 32-bit and 64-bit processors. Applications such as multi-tasking, stress testing, and clustering – for high-performance computing (HPC) – may be more suited to a 64-bit architecture when deployed appropriately. For this reason, 64-bit clusters have been widely deployed in large organizations, such as IBM, HP, and Microsoft.

Pros and cons

A common misconception is that 64-bit architectures are no better than 32-bit architectures unless the computer has more than 4 GiB of random-access memory. [34] This is not entirely true. For example, the x86-64 architecture doubles the number of general-purpose registers available compared with 32-bit x86, which reduces how often values must be moved between registers and memory, as the following example illustrates.

Example in C:
int a, b, c, d, e;
for (a = 0; a < 100; a++) { b = a; c = b; d = c; e = d; }
This code first declares five variables: a, b, c, d, and e, then loops 100 times. On each iteration it copies the value of a into b, b into c, c into d, and d into e, which has the same effect as setting all of the variables to the value of a.
If a processor can keep only two or three values or variables in registers, it would need to move some values between memory and registers to be able to process variables d and e also; this is a process that takes many CPU cycles. A processor that can hold all values and variables in registers can loop through them with no need to move data between registers and memory for each iteration. This behavior can easily be compared with virtual memory, although any effects are contingent on the compiler.

The main disadvantage of 64-bit architectures is that, relative to 32-bit architectures, the same data occupies more space in memory (due to longer pointers and possibly other types, and alignment padding). This increases the memory requirements of a given process and can have implications for efficient processor cache use. Maintaining a partial 32-bit model is one way to handle this, and is in general reasonably effective. For example, the z/OS operating system takes this approach, requiring program code to reside in 31-bit address spaces (the high order bit is not used in address calculation on the underlying hardware platform) while data objects can optionally reside in 64-bit regions. Not all such applications require a large address space or manipulate 64-bit data items, so these applications do not benefit from these features.

Software availability

x86-based 64-bit systems sometimes lack equivalents of software that is written for 32-bit architectures. The most severe problem in Microsoft Windows is incompatible device drivers for obsolete hardware. Most 32-bit application software can run on a 64-bit operating system in a compatibility mode, also termed an emulation mode, e.g., Microsoft WoW64 Technology for IA-64 and AMD64. The 64-bit Windows Native Mode [37] driver environment runs atop 64-bit NTDLL.DLL, which cannot call 32-bit Win32 subsystem code (this often affects devices whose actual hardware function is emulated in user-mode software, such as Winprinters). Because 64-bit drivers for most devices were unavailable until early 2007 (Vista x64), using a 64-bit version of Windows was considered a challenge. However, the trend has since moved toward 64-bit computing, especially as memory prices dropped and the use of more than 4 GiB of RAM increased. Most manufacturers started to provide both 32-bit and 64-bit drivers for new devices, so unavailability of 64-bit drivers ceased to be a problem. 64-bit drivers were not provided for many older devices, which consequently could not be used in 64-bit systems.

Driver compatibility was less of a problem with open-source drivers, as 32-bit ones could be modified for 64-bit use. Support for hardware made before early 2007 was problematic for open-source platforms, [citation needed] due to the relatively small number of users.

64-bit versions of Windows cannot run 16-bit software. However, most 32-bit applications will work well. To run 16-bit applications, 64-bit users must install a virtual machine running a 16- or 32-bit operating system, or use one of the alternatives to NTVDM. [38]

Mac OS X 10.4 "Tiger" and Mac OS X 10.5 "Leopard" had only a 32-bit kernel, but they can run 64-bit user-mode code on 64-bit processors. Mac OS X 10.6 "Snow Leopard" had both 32- and 64-bit kernels, and, on most Macs, used the 32-bit kernel even on 64-bit processors. This allowed those Macs to support 64-bit processes while still supporting 32-bit device drivers; although not 64-bit drivers and performance advantages that can come with them. Mac OS X 10.7 "Lion" ran with a 64-bit kernel on more Macs, and OS X 10.8 "Mountain Lion" and later macOS releases only have a 64-bit kernel. On systems with 64-bit processors, both the 32- and 64-bit macOS kernels can run 32-bit user-mode code, and all versions of macOS up to macOS Mojave (10.14) include 32-bit versions of libraries that 32-bit applications would use, so 32-bit user-mode software for macOS will run on those systems. The 32-bit versions of libraries have been removed by Apple in macOS Catalina (10.15).

Linux and most other Unix-like operating systems, and the C and C++ toolchains for them, have supported 64-bit processors for many years. Many applications and libraries for those platforms are open-source software, written in C and C++, so that if they are 64-bit-safe, they can be compiled into 64-bit versions. This source-based distribution model, with an emphasis on frequent releases, makes availability of application software for those operating systems less of an issue.

64-bit data models

In 32-bit programs, pointers and data types such as integers generally have the same length. This is not necessarily true on 64-bit machines. [39] [40] [41] Mixing data types in programming languages such as C and its descendants such as C++ and Objective-C may thus work on 32-bit implementations but not on 64-bit implementations.

In many programming environments for C and C-derived languages on 64-bit machines, int variables are still 32 bits wide, but long integers and pointers are 64 bits wide. These are described as having an LP64 data model, which is an abbreviation of "Long, Pointer, 64". [42] [43] Other models are the ILP64 data model in which all three data types are 64 bits wide, [44] [43] and even the SILP64 model where short integers are also 64 bits wide. [45] [46] However, in most cases the modifications required are relatively minor and straightforward, and many well-written programs can simply be recompiled for the new environment with no changes. Another alternative is the LLP64 model, which maintains compatibility with 32-bit code by leaving both int and long as 32-bit. [47] [43] LL refers to the long long integer type, which is at least 64 bits on all platforms, including 32-bit environments.
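A simple way to see which of these models a particular compiler targets is to print the widths of the relevant types; the following is a minimal, illustrative C sketch (the outputs noted in the comments are typical values for common platforms, not guarantees):

#include <stdio.h>

int main(void)
{
    /* Typical output, in bytes:
       LP64  (most 64-bit Unix-like systems): 2, 4, 8, 8, 8
       LLP64 (64-bit Windows):                2, 4, 4, 8, 8
       ILP32 (32-bit systems, x32 ABI):       2, 4, 4, 8, 4 */
    printf("short:     %zu\n", sizeof(short));
    printf("int:       %zu\n", sizeof(int));
    printf("long:      %zu\n", sizeof(long));
    printf("long long: %zu\n", sizeof(long long));
    printf("pointer:   %zu\n", sizeof(void *));
    return 0;
}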

There are also systems with 64-bit processors using an ILP32 data model, with the addition of 64-bit long long integers; this is also used on many platforms with 32-bit processors. This model reduces code size and the size of data structures containing pointers, at the cost of a much smaller address space, a good choice for some embedded systems. For instruction sets such as x86 and ARM in which the 64-bit version of the instruction set has more registers than does the 32-bit version, it provides access to the additional registers without the space penalty. It is common in 64-bit RISC machines,[ citation needed ] explored in x86 as x32 ABI, and has recently been used in the Apple Watch Series 4 and 5. [48] [49]

64-bit data models (type widths in bits)

Data model   short int   int   long int   long long   Pointer, size_t   Sample operating systems
ILP32        16          32    32         64          32                x32 and arm64ilp32 ABIs on Linux systems; MIPS N32 ABI
LLP64        16          32    32         64          64                Microsoft Windows (x86-64, IA-64, and ARM64) using Visual C++; and MinGW
LP64         16          32    64         64          64                Most Unix and Unix-like systems, e.g., Solaris, Linux, BSD, macOS; Windows when using Cygwin; z/OS
ILP64        16          64    64         64          64                HAL Computer Systems port of Solaris to the SPARC64
SILP64       64          64    64         64          64                Classic UNICOS [45] [46] (versus UNICOS/mp, etc.)

Many 64-bit platforms today use an LP64 model (including Solaris, AIX, HP-UX, Linux, macOS, BSD, and IBM z/OS). Microsoft Windows uses an LLP64 model. The disadvantage of the LP64 model is that storing a long into an int truncates. On the other hand, converting a pointer to a long will "work" in LP64. In the LLP64 model, the reverse is true. These are not problems which affect fully standard-compliant code, but code is often written with implicit assumptions about the widths of data types. C code should prefer (u)intptr_t instead of long when casting pointers into integer objects.
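The last point can be made concrete with a short C sketch (illustrative only; it assumes the optional but near-universal uintptr_t type from <stdint.h>):

#include <stdint.h>

/* Round-trips a pointer through an integer type.  Using uintptr_t is
   portable across LP64 and LLP64; casting to long instead would silently
   truncate the pointer on LLP64 systems, where long is only 32 bits wide. */
void *round_trip(void *p)
{
    uintptr_t bits = (uintptr_t)p;   /* wide enough to hold any object pointer */
    /* long bad = (long)p;              would truncate under LLP64 */
    return (void *)bits;
}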

A programming model is a choice made to suit a given compiler, and several can coexist on the same OS. However, the programming model chosen as the primary model for the OS application programming interface (API) typically dominates.

Another consideration is the data model used for device drivers. Drivers make up the majority of the operating system code in most modern operating systems[ citation needed ] (although many may not be loaded when the operating system is running). Many drivers use pointers heavily to manipulate data, and in some cases have to load pointers of a certain size into the hardware they support for direct memory access (DMA). As an example, a driver for a 32-bit PCI device asking the device to DMA data into upper areas of a 64-bit machine's memory could not satisfy requests from the operating system to load data from the device to memory above the 4 gibibyte barrier, because the pointers for those addresses would not fit into the DMA registers of the device. This problem is solved by having the OS take the memory restrictions of the device into account when generating requests to drivers for DMA, or by using an input–output memory management unit (IOMMU).
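Conceptually, the driver's constraint amounts to checking whether a buffer's physical address range lies entirely below the 4 GiB boundary. The following C sketch is purely illustrative (the helper name and logic are hypothetical, not any real driver API); real operating systems handle this through their own DMA-mapping interfaces or an IOMMU:

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical check: can a device limited to 32-bit DMA addressing reach
   the buffer [phys_addr, phys_addr + len)?  Only if the whole range lies
   below the 4 GiB boundary (2^32 bytes). */
static bool fits_32bit_dma(uint64_t phys_addr, uint64_t len)
{
    const uint64_t limit = UINT64_C(1) << 32;
    return len <= limit && phys_addr <= limit - len;
}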

Current 64-bit architectures

As of August 2023, 64-bit architectures for which processors are being manufactured include:

Most 64-bit architectures that are derived from a 32-bit architecture can execute code written for the 32-bit version natively, with no performance penalty. [citation needed] This kind of support is commonly called bi-arch support or more generally multi-arch support.

Notes

  1. such as floating-point numbers.

References

  1. Pentium Processor User's Manual Volume 1: Pentium Processor Data Book (PDF). Intel. 1993.
  2. "Cray-1 Computer System Hardware Reference Manual" (PDF). Cray Research. 1977. Retrieved October 8, 2013.
  3. Grimes, Jack; Kohn, Les; Bharadhwaj, Rajeev (July–August 1989). "The Intel i860 64-Bit Processor: A General-Purpose CPU with 3D Graphics Capabilities". IEEE Computer Graphics and Applications. 9 (4): 85–94. doi:10.1109/38.31467. S2CID 38831149. Retrieved 2010-11-19.
  4. "i860 Processor Family Programmer's Reference Manual" (PDF). Intel. 1991. Retrieved September 12, 2019.
  5. "NEC Offers Two High Cost Performance 64-bit RISC Microprocessors" (Press release). NEC. 1998-01-20. Retrieved 2011-01-09. Versions of the VR4300 processor are widely used in consumer and office automation applications, including the popular Nintendo 64TM video game and advanced laser printers such as the recently announced, award-winning Hewlett-Packard LaserJet 4000 printer family.
  6. "i860 64-Bit Microprocessor". Intel. 1989. Retrieved 30 November 2010.
  7. "Atari Jaguar History". AtariAge .
  8. Joe Heinrich (1994). MIPS R4000 Microprocessor User's Manual (2nd ed.). MIPS Technologies, Inc.
  9. Richard L. Sites (1992). "Alpha AXP Architecture". Digital Technical Journal. 4 (4). Digital Equipment Corporation.
  10. Gwennap, Linley (3 October 1994). "UltraSparc Unleashes SPARC Performance". Microprocessor Report. 8 (13). MicroDesign Resources.
  11. Bishop, J. W.; et al. (July 1996). "PowerPC AS A10 64-bit RISC microprocessor". IBM Journal of Research and Development. 40 (4). IBM Corporation: 495–505. doi:10.1147/rd.404.0495.
  12. Gwennap, Linley (14 November 1994). "PA-8000 Combines Complexity and Speed". Microprocessor Report. 8 (15). MicroDesign Resources.
  13. F. P. O'Connell; S. W. White (November 2000). "POWER3: The next generation of PowerPC processors". IBM Journal of Research and Development. 44 (6). IBM Corporation: 873–884. doi:10.1147/rd.446.0873.
  14. "VIA Unveils Details of Next-Generation Isaiah Processor Core". VIA Technologies, Inc. Archived from the original on 2007-10-11. Retrieved 2007-07-18.
  15. "ARMv8 Technology Preview" (PDF). October 31, 2011. Archived from the original (PDF) on November 11, 2011. Retrieved November 15, 2012.
  16. "ARM Launches Cortex-A50 Series, the World's Most Energy-Efficient 64-bit Processors" (Press release). ARM Holdings . Retrieved 2012-10-31.
  17. "ARM Keynote: ARM Cortex-A53 and ARM Cortex-A57 64bit ARMv8 processors launched". ARMdevices.net. 2012-10-31.
  18. "Synopsys Introduces New 64-bit ARC Processor IP". Archived from the original on 31 March 2022.
  19. Stefan Berka. "Unicos Operating System". www.operating-system.org. Archived from the original on 26 November 2010. Retrieved 2010-11-19.
  20. Jon "maddog" Hall (Jun 1, 2000). "My Life and Free Software". Linux Journal.
  21. Andi Kleen. Porting Linux to x86-64 (PDF). Ottawa Linux Symposium 2001. Status: The kernel, compiler, tool chain work. The kernel boots and work on simulator and is used for porting of userland and running programs
  22. John Siracusa (September 2009). "Mac OS X 10.6 Snow Leopard: the Ars Technica review". Ars Technica. p. 5. Archived from the original on 9 October 2009. Retrieved 2009-09-06.
  23. Joris Evers (5 January 2005). "Microsoft nixes Windows XP for Itanium". Computerworld. Archived from the original on 18 June 2013. Retrieved 17 October 2017.
  24. "Microsoft Raises the Speed Limit with the Availability of 64-Bit Editions of Windows Server 2003 and Windows XP Professional" (Press release). Microsoft. April 25, 2005. Retrieved September 10, 2015.
  25. "UEFI_on_Dell BizClient_Platforms" (PDF).
  26. "AMD64 Programmer's Manual Volume 2: System Programming" (PDF). Advanced Micro Devices. December 2016. p. 120.
  27. "Intel 64 and IA-32 Architectures Software Developer's Manual Volume 3A: System Programming Guide, Part 1" (PDF). Intel. September 2016. p. 4-2.
  28. "Power ISA Version 3.0". IBM. November 30, 2015. p. 983.
  29. "Oracle SPARC Architecture 2015 Draft D1.0.9". Oracle. p. 475.
  30. "ARM Architecture Reference Manual ARMv8, for ARMv8-A architecture profile". pp. D4-1723, D4-1724, D4-1731.
  31. Mashey, John (October 2006). "The Long Road to 64 Bits". ACM Queue. 4 (8): 85–94. doi: 10.1145/1165754.1165766 .
  32. "Windows 7: 64 bit vs 32 bit?". W7 Forums. Archived from the original on 5 April 2009. Retrieved 2009-04-05.
  33. "Frequently Asked Questions About the Java HotSpot VM". Sun Microsystems, Inc. Archived from the original on 10 May 2007. Retrieved 2007-05-03.
  34. "A description of the differences between 32-bit versions of Windows Vista and 64-bit versions of Windows Vista" . Retrieved 2011-10-14.
  35. Mark Russinovich (2008-07-21). "Pushing the Limits of Windows: Physical Memory" . Retrieved 2017-03-09.
  36. Chappell, Geoff (2009-01-27). "Licensed Memory in 32-Bit Windows Vista". geoffchappell.com. Retrieved 9 March 2017.
  37. "Inside Native Applications". Technet.microsoft.com. 2006-11-01. Archived from the original on 23 October 2010. Retrieved 2010-11-19.
  38. Lincoln Spector (August 12, 2013). "Run an old program on a new PC".
  39. Peter Seebach (2006). "Exploring 64-bit development on POWER5: How portable is your code, really?". IBM .
  40. Henry Spencer. "The Ten Commandments for C Programmers".
  41. "The Story of Thud and Blunder". Datacenterworks.com. Retrieved 2010-11-19.
  42. "ILP32 and LP64 data models and data type sizes". z/OS XL C/C++ Programming Guide.
  43. 1 2 3 "64-Bit Programming Models" . Retrieved 2020-06-05.
  44. "Using the ILP64 Interface vs. LP64 Interface". Intel. Retrieved Jun 24, 2020.
  45. 1 2 "Cray C/C++ Reference Manual". August 1998. Table 9-1. Cray Research systems data type mapping. Archived from the original on October 16, 2013. Retrieved October 15, 2013.
  46. 1 2 "Cray C and C++ Reference Manual (8.7) S-2179" . Retrieved Jun 24, 2020.
  47. "Abstract Data Models - Windows applications". May 30, 2018.
  48. "ILP32 for AArch64 Whitepaper". ARM Limited. June 9, 2015. Archived from the original on December 30, 2018. Retrieved October 9, 2018.
  49. "Apple devices in 2018". woachk, security researcher. October 6, 2018.