High memory


High memory is the part of physical memory in a computer which is not directly mapped by the page tables of its operating system kernel. The phrase is also sometimes used as shorthand for the High Memory Area, which is a different concept entirely.


Some operating system kernels, such as Linux, divide their virtual address space into two regions, devoting the larger to user space and the smaller to the kernel. On 32-bit x86 computers this commonly (though not necessarily, since the split is configurable) takes the form of a 3 GB/1 GB split of the 4 GB address space, so kernel virtual addresses run from 0xC0000000 to 0xFFFFFFFF. The lower 896 MB of that region, from 0xC0000000 to 0xF7FFFFFF, is directly mapped onto the lowest 896 MB of physical memory, and the remaining 128 MB, from 0xF8000000 to 0xFFFFFFFF, is used by the kernel to map high memory on demand. When in user mode, translations are effective only for the first region, thus protecting the kernel from user programs; when in kernel mode, translations are effective for both regions, giving the kernel an easy way to refer to the buffers of processes: it simply uses the process's own mappings.[1]
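Because low memory is mapped at a fixed offset, converting between kernel virtual and physical addresses in that region is plain arithmetic. The following user-space C sketch illustrates that arithmetic for the default 3 GB/1 GB split described above; the constants and helper names are illustrative stand-ins for the kernel's own PAGE_OFFSET and __va()/__pa() definitions, not the real ones.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative constants for the default 32-bit x86 split described above. */
    #define PAGE_OFFSET_DEMO 0xC0000000UL  /* start of kernel virtual addresses */
    #define LOWMEM_SIZE_DEMO 0x38000000UL  /* the 896 MB that is permanently mapped */

    /* Lowmem conversions are a fixed offset, analogous to the kernel's __va()/__pa(). */
    static uintptr_t lowmem_phys_to_virt(uintptr_t phys) { return phys + PAGE_OFFSET_DEMO; }
    static uintptr_t lowmem_virt_to_phys(uintptr_t virt) { return virt - PAGE_OFFSET_DEMO; }

    int main(void)
    {
        uintptr_t phys = 0x01000000UL;  /* a physical address 16 MB into RAM */
        uintptr_t virt = lowmem_phys_to_virt(phys);

        printf("phys 0x%08lx <-> virt 0x%08lx\n",
               (unsigned long)lowmem_virt_to_phys(virt), (unsigned long)virt);
        printf("direct mapping covers virt 0x%08lx-0x%08lx\n",
               (unsigned long)PAGE_OFFSET_DEMO,
               (unsigned long)(PAGE_OFFSET_DEMO + LOWMEM_SIZE_DEMO - 1)); /* ends at 0xF7FFFFFF */
        return 0;
    }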

However, when the kernel needs to refer to physical memory for which no user-space translation already exists, it has only the kernel portion of the address space (1 GB in the example above) to work with. On computers with large amounts of physical memory, this means there can be memory that the kernel cannot refer to directly; such memory is called high memory. When the kernel wishes to address high memory, it creates a mapping on the fly and destroys it when done, which incurs a performance penalty.
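In Linux, those on-the-fly mappings are created with the kmap() family of functions declared in <linux/highmem.h>. The kernel-code fragment below is a minimal sketch of the usual pattern; the helper function itself is hypothetical.

    #include <linux/highmem.h>  /* kmap(), kunmap() */
    #include <linux/string.h>   /* memset() */

    /* Hypothetical helper: zero a page that may reside in high memory.
     * For a lowmem page, kmap() simply returns its permanent mapping;
     * for a highmem page, it creates a temporary kernel mapping. */
    static void zero_any_page(struct page *page)
    {
            void *vaddr = kmap(page);     /* obtain a kernel virtual address */
            memset(vaddr, 0, PAGE_SIZE);  /* the page can now be touched directly */
            kunmap(page);                 /* release the temporary mapping */
    }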


Related Research Articles

Virtual memory

In computing, virtual memory, or virtual storage, is a memory management technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory".

Direct memory access (DMA) is a feature of computer systems that allows certain hardware subsystems to access main system memory independently of the central processing unit (CPU).

A modern computer operating system usually segregates virtual memory into kernel space and user space. Primarily, this separation serves to provide memory protection and hardware protection from malicious or errant software behaviour.

In computer architecture, 64-bit integers, memory addresses, or other data units are those that are 64 bits (8 octets) wide. Also, 64-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on processor registers, address buses, or data buses of that size. 64-bit microcomputers are computers in which 64-bit microprocessors are the norm. From the software perspective, 64-bit computing means the use of machine code with 64-bit virtual memory addresses. However, not all 64-bit instruction sets support full 64-bit virtual memory addresses; x86-64 and ARMv8, for example, support only 48 bits of virtual address, with the remaining 16 bits of the virtual address required to be all 0s or all 1s, and several 64-bit instruction sets support fewer than 64 bits of physical memory address.
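As a concrete illustration of that rule, the short C sketch below (not taken from any particular codebase) tests whether a 64-bit value is a canonical address on a CPU that implements 48 virtual-address bits, i.e. whether its upper bits are a sign extension of bit 47.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* An address is canonical for 48 implemented bits when bits 63..48 are
     * copies of bit 47, i.e. the top 17 bits are all zeros or all ones. */
    static bool is_canonical_48(uint64_t va)
    {
        uint64_t top = va >> 47;            /* bits 63..47 */
        return top == 0 || top == 0x1FFFF;  /* all clear or all set */
    }

    int main(void)
    {
        printf("%d\n", is_canonical_48(0x00007FFFFFFFFFFFULL)); /* 1: top of the lower half */
        printf("%d\n", is_canonical_48(0xFFFF800000000000ULL)); /* 1: bottom of the upper half */
        printf("%d\n", is_canonical_48(0x0000800000000000ULL)); /* 0: falls in the unused hole */
        return 0;
    }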

x86 memory segmentation refers to the implementation of memory segmentation in the Intel x86 computer instruction set architecture. Segmentation was introduced on the Intel 8086 in 1978 as a way to allow programs to address more than 64 KB (65,536 bytes) of memory. The Intel 80286 introduced a second version of segmentation in 1982 that added support for virtual memory and memory protection. At this point the original model was renamed real mode, and the new version was named protected mode. The x86-64 architecture, introduced in 2003, has largely dropped support for segmentation in 64-bit mode.

Memory management unit

A memory management unit (MMU), sometimes called paged memory management unit (PMMU), is a computer hardware unit through which all memory references pass, primarily to translate virtual memory addresses to physical addresses.

x86-64

x86-64 is a 64-bit version of the x86 instruction set, first released in 1999. It introduced two new modes of operation, 64-bit mode and compatibility mode, along with a new 4-level paging mode.

In computer operating systems, memory paging is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages. Paging is an important part of virtual memory implementations in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.

Memory-mapped I/O (MMIO) and port-mapped I/O (PMIO) are two complementary methods of performing input/output (I/O) between the central processing unit (CPU) and peripheral devices in a computer. An alternative approach is using dedicated I/O processors, commonly known as channels on mainframe computers, which execute their own instructions.

DOS memory management

In IBM PC compatible computing, DOS memory management refers to software and techniques employed to give applications access to more than 640 kibibytes (KiB) of "conventional memory". The 640 KiB limit was specific to the IBM PC and close compatibles; other machines running MS-DOS had different limits, for example the Apricot PC could have up to 768 KiB and the Sirius Victor 9000, 896 KiB. Memory management on the IBM family was made complex by the need to maintain backward compatibility to the original PC design and real-mode DOS, while allowing computer users to take advantage of large amounts of low-cost memory and new generations of processors. Since DOS has given way to Microsoft Windows and other 32-bit operating systems not restricted by the original arbitrary 640 KiB limit of the IBM PC, managing the memory of a personal computer no longer requires the user to manually manipulate internal settings and parameters of the system.

In computing, Physical Address Extension (PAE), sometimes referred to as Page Address Extension, is a memory management feature for the x86 architecture. PAE was first introduced by Intel in the Pentium Pro, and later by AMD in the Athlon processor. It defines a page table hierarchy of three levels (instead of two), with table entries of 64 bits each instead of 32, allowing these CPUs to directly access a physical address space larger than 4 gigabytes (2^32 bytes).
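To make the three-level structure concrete, the sketch below (with purely illustrative values) splits a 32-bit linear address into the indices PAE paging uses for 4 KB pages: a 2-bit page-directory-pointer index, two 9-bit table indices, and a 12-bit page offset.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t linear = 0xC0123456u;  /* an arbitrary example address */

        /* With 64-bit entries, each 4 KB table holds 512 entries, so the
         * directory and table indices shrink to 9 bits and a 2-bit index
         * into the page-directory-pointer table is added on top. */
        uint32_t pdpt_index = (linear >> 30) & 0x3;    /* 1 of 4 PDPT entries   */
        uint32_t pd_index   = (linear >> 21) & 0x1FF;  /* 1 of 512 PD entries   */
        uint32_t pt_index   = (linear >> 12) & 0x1FF;  /* 1 of 512 PT entries   */
        uint32_t offset     =  linear        & 0xFFF;  /* byte within 4 KB page */

        printf("PDPT %u, PD %u, PT %u, offset 0x%03X\n",
               pdpt_index, pd_index, pt_index, offset);
        return 0;
    }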

Conventional memory

In DOS memory management, conventional memory, also called base memory, is the first 640 kilobytes of the memory on IBM PC or compatible systems. It is the read-write memory directly addressable by the processor for use by the operating system and application programs. As memory prices rapidly declined, this design decision became a limitation in the use of large memory capacities until the introduction of operating systems and processors that made it irrelevant.

A translation lookaside buffer (TLB) is a memory cache that is used to reduce the time taken to access a user memory location. It is a part of the chip's memory-management unit (MMU). The TLB stores the recent translations of virtual memory to physical memory and can be called an address-translation cache. A TLB may reside between the CPU and the CPU cache, between CPU cache and the main memory or between the different levels of the multi-level cache. The majority of desktop, laptop, and server processors include one or more TLBs in the memory-management hardware, and it is nearly always present in any processor that utilizes paged or segmented virtual memory.

Virtual address space

In computing, a virtual address space (VAS) or address space is the set of ranges of virtual addresses that an operating system makes available to a process. The range of virtual addresses usually starts at a low address and can extend to the highest address allowed by the computer's instruction set architecture and supported by the operating system's pointer size implementation, which can be 4 bytes for 32-bit or 8 bytes for 64-bit OS versions. This provides several benefits, one of which is security through process isolation assuming each process is given a separate address space.

In computer science, a memory map is a structure of data that indicates how memory is laid out. The term "memory map" can have different meanings in different contexts.

In the x86-64 computer architecture, long mode is the mode where a 64-bit operating system can access 64-bit instructions and registers. 64-bit programs are run in a sub-mode called 64-bit mode, while 32-bit programs and 16-bit protected mode programs are executed in a sub-mode called compatibility mode. Real mode or virtual 8086 mode programs cannot be natively run in long mode.

Input–output memory management unit

In computing, an input–output memory management unit (IOMMU) is a memory management unit (MMU) that connects a direct-memory-access–capable (DMA-capable) I/O bus to the main memory. Like a traditional MMU, which translates CPU-visible virtual addresses to physical addresses, the IOMMU maps device-visible virtual addresses to physical addresses. Some units also provide memory protection from faulty or malicious devices.

Address Windowing Extensions (AWE) is a Microsoft Windows application programming interface that allows a 32-bit software application to access more physical memory than it has virtual address space, even in excess of the 4 GB limit. The process of mapping an application's virtual address space to physical memory under AWE is known as "windowing", and is similar to the overlay concept of other environments. AWE is beneficial to certain data-intensive applications, such as database management systems and scientific and engineering software, that need to manipulate very large data sets while minimizing paging.

In computing, the term 3 GB barrier refers to a limitation of some 32-bit operating systems running on x86 microprocessors. It prevents the operating systems from using all of 4 GiB (4 × 1024^3 bytes) of main memory. The exact barrier varies by motherboard and I/O device configuration, particularly the size of video RAM; it may be in the range of 2.75 GB to 3.5 GB. The barrier is not present with a 64-bit processor and 64-bit operating system, or with certain x86 hardware and an operating system such as Linux or certain versions of Windows Server and macOS that allow use of Physical Address Extension (PAE) mode on x86 to access more than 4 GiB of RAM.

Meltdown (security vulnerability)

Meltdown is a hardware vulnerability affecting Intel x86 microprocessors, IBM POWER processors, and some ARM-based microprocessors. It allows a rogue process to read all memory, even when it is not authorized to do so.

References

  1. Virtual Memory I: the problem