The separation of mechanism and policy [1] is a design principle in computer science. It states that mechanisms (those parts of a system implementation that control the authorization of operations and the allocation of resources) should not dictate (or overly restrict) the policies according to which decisions are made about which operations to authorize, and which resources to allocate.
While most commonly discussed in the context of security mechanisms (authentication and authorization), separation of mechanism and policy is applicable to a range of resource allocation problems (e.g. CPU scheduling, memory allocation, quality of service) as well as the design of software abstractions.
Per Brinch Hansen introduced the concept of separation of policy and mechanism in operating systems in the RC 4000 multiprogramming system. [2] Artsy and Livny, in a 1987 paper, discussed an approach for an operating system design having an "extreme separation of mechanism and policy". [3] [4] In a 2000 article, Chervenak et al. described the principles of mechanism neutrality and policy neutrality. [5]
The separation of mechanism and policy is the fundamental design approach that distinguishes a microkernel from a monolithic kernel. In a microkernel, the majority of operating system services are provided by user-level server processes. [6] It is important for an operating system to be flexible enough to provide mechanisms that support the broadest possible spectrum of real-world security policies. [7]
It is almost impossible to envision all of the different ways in which a system might be used by different types of users over the life of the product. This means that any hard-coded policies are likely to be inadequate or inappropriate for some (or perhaps even most) potential users. Decoupling the mechanism implementations from the policy specifications makes it possible for different applications to use the same mechanism implementations with different policies. This means that those mechanisms are likely to better meet the needs of a wider range of users, for a longer period of time.
If it is possible to enable new policies without changing the implementing mechanisms, the costs and risks of such policy changes can be greatly reduced. In the first instance, this can be accomplished merely by segregating mechanisms and their policies into distinct modules: by replacing the module that dictates a policy (e.g. the CPU scheduling policy) without changing the module that executes it (e.g. the scheduling mechanism), we can change the behaviour of the system. Further, where a wide or variable range of policies is anticipated depending on applications' needs, it makes sense to create some non-code means of specifying policies, i.e. policies are not hardcoded into executable code but can be specified as an independent description. For instance, file protection policies (e.g. Unix's user/group/other read/write/execute) might be parametrized. Alternatively, an implementing mechanism could be designed to include an interpreter for a new policy specification language. In both cases, the systems are usually accompanied by a deferred binding mechanism (e.g. late binding of configuration options via configuration files, or runtime programmability via APIs) that permits policy specifications to be incorporated into the system, or replaced by others, after it has been delivered to the customer.
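As an illustrative sketch (not drawn from any particular operating system; the Task fields, policy names, and configuration key are invented for this example), the following Python fragment separates a dispatching mechanism from the policy that decides which task runs next, with the policy selected by name as a deferred-binding configuration option:

    # Illustrative sketch only: a scheduling "mechanism" whose behaviour is set
    # by a separately specified "policy". Names (Task, fifo_policy, ...) are
    # hypothetical and not taken from any real kernel.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        priority: int          # larger value = more urgent (assumed convention)

    def fifo_policy(queue):
        """Policy: run tasks in arrival order."""
        return 0

    def priority_policy(queue):
        """Policy: run the highest-priority task first."""
        return max(range(len(queue)), key=lambda i: queue[i].priority)

    class Scheduler:
        """Mechanism: maintains the run queue and dispatches tasks.

        It never decides *which* task runs; it defers that to the policy
        function it was configured with (deferred binding).
        """
        def __init__(self, policy):
            self.policy = policy
            self.queue = []

        def submit(self, task):
            self.queue.append(task)

        def dispatch(self):
            index = self.policy(self.queue)   # policy decision
            return self.queue.pop(index)      # mechanism carries it out

    # The policy can be selected from configuration data rather than code,
    # e.g. a hypothetical line such as "scheduler_policy = priority" in a file.
    POLICIES = {"fifo": fifo_policy, "priority": priority_policy}
    scheduler = Scheduler(POLICIES["priority"])
    scheduler.submit(Task("editor", priority=1))
    scheduler.submit(Task("audio", priority=5))
    print(scheduler.dispatch().name)   # -> "audio" under the priority policy

Replacing the priority policy with the FIFO policy, or adding a new one, requires no change to the Scheduler mechanism itself.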
An everyday example of mechanism/policy separation is the use of card keys to gain access to locked doors. The mechanisms (magnetic card readers, remote controlled locks, connections to a security server) do not impose any limitations on entrance policy (which people should be allowed to enter which doors, at which times). These decisions are made by a centralized security server, which (in turn) probably makes its decisions by consulting a database of room access rules. Specific authorization decisions can be changed by updating a room access database. If the rule schema of that database proved too limiting, the entire security server could be replaced while leaving the fundamental mechanisms (readers, locks, and connections) unchanged.
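A hedged sketch of this arrangement (the rule format, identifiers, and function name are hypothetical) shows the policy living entirely in data that a fixed checking mechanism consults:

    # Illustrative sketch: the card readers and locks (mechanism) stay fixed,
    # while the room-access rules (policy) are just data that can be updated.
    # All identifiers and the rule format are hypothetical.
    ACCESS_RULES = {
        # (card_id, door_id) -> list of (open_hour, close_hour) windows
        ("card-1021", "lab-door"): [(8, 18)],
        ("card-1021", "server-room"): [],          # never allowed
        ("card-0007", "server-room"): [(0, 24)],   # always allowed
    }

    def may_enter(card_id, door_id, hour):
        """Mechanism-side check: consults the policy data, decides nothing itself."""
        windows = ACCESS_RULES.get((card_id, door_id), [])
        return any(start <= hour < end for start, end in windows)

    # Changing who may enter which door requires only a database update,
    # not new readers, locks, or keys.
    print(may_enter("card-1021", "lab-door", hour=9))     # True
    print(may_enter("card-1021", "server-room", hour=9))  # False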
Contrast this with issuing physical keys: if you want to change who can open a door, you have to issue new keys and change the lock. This intertwines the unlocking mechanisms with the access policies. For a hotel, this is significantly less effective than using key cards.
In computer science, a microkernel is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system (OS). These mechanisms include low-level address space management, thread management, and inter-process communication (IPC).
Mach is a kernel developed at Carnegie Mellon University by Richard Rashid and Avie Tevanian to support operating system research, primarily distributed and parallel computing. Mach is often considered one of the earliest examples of a microkernel. However, not all versions of Mach are microkernels. Mach's derivatives are the basis of the operating system kernel in GNU Hurd and of Apple's XNU kernel used in macOS, iOS, iPadOS, tvOS, and watchOS.
In distributed computing, a remote procedure call (RPC) occurs when a computer program causes a procedure (subroutine) to execute in a different address space, with the call written as if it were a normal (local) procedure call and without the programmer explicitly writing the details of the remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program or remote. This is a form of client–server interaction, typically implemented via a request–response message-passing system. In the object-oriented programming paradigm, RPCs are represented by remote method invocation (RMI). The RPC model implies a level of location transparency, namely that calling procedures are largely the same whether they are local or remote; they are usually not identical, however, so local calls can be distinguished from remote calls. Remote calls are usually orders of magnitude slower and less reliable than local calls, so distinguishing them is important.
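A minimal sketch of the request–response style, using Python's standard xmlrpc modules (the procedure name, address, and port are arbitrary choices for this example):

    # Minimal RPC sketch: the client-side call reads like a local call, but the
    # procedure executes in the server's address space.
    import threading
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy

    def add(a, b):
        # Executes in the server process.
        return a + b

    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(add, "add")
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # The call below is marshalled into a request/response message exchange.
    proxy = ServerProxy("http://127.0.0.1:8000")
    print(proxy.add(2, 3))   # -> 5, computed in the server process

    server.shutdown()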
The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system that lie outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance with the system's security policy.
In computer systems security, role-based access control (RBAC) or role-based security is an approach to restricting system access to authorized users, and to implementing mandatory access control (MAC) or discretionary access control (DAC).
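A toy sketch of the idea (role names, permissions, and user assignments are invented for this example): permissions attach to roles, and users acquire them only through role membership:

    # Illustrative RBAC sketch; all names and assignments are hypothetical.
    ROLE_PERMISSIONS = {
        "reader": {"read"},
        "editor": {"read", "write"},
        "admin":  {"read", "write", "manage_users"},
    }

    USER_ROLES = {
        "alice": {"editor"},
        "bob":   {"reader"},
    }

    def is_authorized(user, permission):
        """Access is granted through roles, never to users directly."""
        roles = USER_ROLES.get(user, set())
        return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

    print(is_authorized("alice", "write"))        # True: via the editor role
    print(is_authorized("bob", "manage_users"))   # False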
L4 is a family of second-generation microkernels, used to implement a variety of types of operating systems (OS), though mostly for Unix-like, Portable Operating System Interface (POSIX) compliant types.
Regnecentralen (RC) was the first Danish computer company, founded on October 12, 1955. Through the 1950s and 1960s, they designed a series of computers, originally for their own use, and later to be sold commercially. Descendants of these systems sold well into the 1980s. They also developed a series of high-speed paper tape machines, and produced Data General Nova machines under license.
Per Brinch Hansen was a Danish-American computer scientist known for his work in operating systems, concurrent programming and parallel and distributed computing.
Jochen Liedtke was a German computer scientist, noted for his work on microkernel operating systems, especially in creating the L4 microkernel family.
Secure by design, in software engineering, means that software products and capabilities have been designed to be foundationally secure.
The RC 4000 Multiprogramming System is a discontinued operating system developed for the RC 4000 minicomputer in 1969; its resident nucleus is commonly referred to as the Monitor.
In computer science, capability-based addressing is a scheme used by some computers to control access to memory as an efficient implementation of capability-based security. Under a capability-based addressing scheme, pointers are replaced by protected objects that specify both a location in memory and a set of access rights defining the operations that can be carried out on that memory. Capabilities can be created or modified only through privileged instructions, which may be executed only by the kernel or by some other privileged process authorised to do so. Thus, a kernel can limit the access of application code and other subsystems to the minimum necessary portions of memory, without the need to use separate address spaces (and therefore a context switch) when an access occurs.
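A toy sketch of the idea in Python (real systems enforce capabilities in hardware or in the kernel; the class, method, and right names here are hypothetical):

    # Toy sketch of a capability: a handle pairing a memory region with the
    # rights that may be exercised on it.
    class Capability:
        def __init__(self, memory, base, length, rights):
            self._memory = memory
            self._base = base
            self._length = length
            self._rights = frozenset(rights)   # e.g. {"read", "write"}

        def _check(self, right, offset):
            if right not in self._rights:
                raise PermissionError(f"capability lacks {right!r} right")
            if not 0 <= offset < self._length:
                raise IndexError("access outside the capability's region")

        def read(self, offset):
            self._check("read", offset)
            return self._memory[self._base + offset]

        def write(self, offset, value):
            self._check("write", offset)
            self._memory[self._base + offset] = value

    memory = bytearray(1024)
    # A read-only view of 16 bytes starting at offset 512.
    cap = Capability(memory, base=512, length=16, rights={"read"})
    print(cap.read(0))          # allowed: 0
    try:
        cap.write(0, 0xFF)      # denied: the capability grants no write right
    except PermissionError as err:
        print(err)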
Gernot Heiser is a Scientia Professor and the John Lions Chair for operating systems at UNSW Sydney, where he leads the Trustworthy Systems group (TS).
Hydra is an early, discontinued, capability-based, object-oriented microkernel designed to support a wide range of possible operating systems to run on it. Hydra was created as part of the C.mmp project at Carnegie Mellon University in 1971.
In computer science, the separation of protection and security is an application of the separation of mechanism and policy principle. The protection mechanism is supposed to be a component that implements the security policy. However, many frameworks consider both as security controls of varying types. For example, protection mechanisms would be considered technical controls, while a policy would be considered an administrative control.
The kernel is a computer program at the core of a computer's operating system and generally has complete control over everything in the system. The kernel is also responsible for preventing and mitigating conflicts between different processes. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components. A full kernel controls all hardware resources via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the utilization of common resources, e.g. CPU and cache usage, file systems, and network sockets. On most systems, the kernel is one of the first programs loaded on startup. It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit.
A distributed operating system is system software over a collection of independent, networked, communicating, and physically separate computational nodes. It handles jobs that are serviced by multiple CPUs. Each individual node holds a specific software subset of the global aggregate operating system. Each subset is a composite of two distinct service provisioners. The first is a ubiquitous minimal kernel, or microkernel, that directly controls that node's hardware. Second is a higher-level collection of system management components that coordinate the node's individual and collaborative activities. These components abstract microkernel functions and support user applications.
In information security, computer science, and other fields, the principle of least privilege (PoLP), also known as the principle of minimal privilege (PoMP) or the principle of least authority (PoLA), requires that in a particular abstraction layer of a computing environment, every module must be able to access only the information and resources that are necessary for its legitimate purpose.
A unikernel is a computer program statically linked with the operating system code on which it depends. Unikernels are built with a specialized compiler that identifies the operating system services that a program uses and links it with one or more library operating systems that provide them. Such a program requires no separate operating system and can run instead as the guest of a hypervisor.