Saltzer and Schroeder's design principles

Saltzer and Schroeder's design principles are a set of principles for the design of secure software systems, enumerated by Jerome Saltzer and Michael Schroeder in their 1975 article "The Protection of Information in Computer Systems" [1] and drawn from their experience with such systems.

The design principles

Saltzer and Schroeder enumerated eight design principles:

  1. Economy of mechanism: Keep the design as simple and small as possible.
  2. Fail-safe defaults: Base access decisions on permission rather than exclusion; the default should be lack of access.
  3. Complete mediation: Every access to every object must be checked for authority.
  4. Open design: The design should not be secret; its security should not depend on the ignorance of potential attackers.
  5. Separation of privilege: Where feasible, a protection mechanism that requires two keys to unlock it is more robust and flexible than one that allows access to the presenter of only a single key.
  6. Least privilege: Every program and every user of the system should operate using the least set of privileges necessary to complete the job.
  7. Least common mechanism: Minimize the amount of mechanism common to more than one user and depended on by all users.
  8. Psychological acceptability: The human interface should be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly.

They also noted two further principles that apply only imperfectly to computer systems: work factor (compare the cost of circumventing the mechanism with the resources of a potential attacker) and compromise recording (a reliable record that a compromise has occurred can sometimes substitute for mechanisms that prevent loss entirely).
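
As a concrete illustration (not from the paper; the access-control list and names below are hypothetical), fail-safe defaults and complete mediation might look like this in code: a single check, consulted on every access, that denies unless permission was explicitly granted.

```python
# Hypothetical ACL: (subject, object) -> set of granted rights.
ACL = {
    ("alice", "report.txt"): {"read", "write"},
    ("bob", "report.txt"): {"read"},
}

def check_access(subject, obj, right):
    """Fail-safe default: a missing entry means denial, not access."""
    return right in ACL.get((subject, obj), set())

def read_file(subject, obj):
    # Complete mediation: every access path runs through the same check.
    if not check_access(subject, obj, "read"):
        raise PermissionError(f"{subject} may not read {obj}")
    return f"<contents of {obj}>"

print(read_file("bob", "report.txt"))   # allowed: bob holds "read"
try:
    read_file("eve", "report.txt")      # denied: no entry, default is no
except PermissionError as err:
    print(err)
```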

Related Research Articles

Information security

Information security, sometimes shortened to InfoSec, is the practice of protecting information by mitigating information risks. It is part of information risk management. It typically involves preventing or reducing the probability of unauthorized or inappropriate access to data, or the unlawful use, disclosure, disruption, deletion, corruption, modification, inspection, recording, or devaluation of information. It also involves actions intended to reduce the adverse impacts of such incidents. Protected information may take any form, e.g., electronic or physical, tangible or intangible. Information security's primary focus is the balanced protection of the confidentiality, integrity, and availability of data while maintaining a focus on efficient policy implementation, all without hampering organizational productivity. This is largely achieved through a structured risk management process that involves identifying information and related assets, along with potential threats, vulnerabilities, and impacts; evaluating the risks; deciding how to treat them; selecting and implementing appropriate security controls; and monitoring and adjusting as needed.

Multics

Multics is an influential early time-sharing operating system based on the concept of a single-level memory. Nathan Gregory writes that Multics "has influenced all modern operating systems since, from microcomputers to mainframes."

Microkernel

In computer science, a microkernel is the near-minimum amount of software that can provide the mechanisms needed to implement an operating system (OS). These mechanisms include low-level address space management, thread management, and inter-process communication (IPC).
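
As a loose analogy (a Python sketch, not a real microkernel; the "file server" and its message protocol are invented for the example), the message-passing style these mechanisms enable can be illustrated by running a service in a separate process and reaching it only through IPC rather than direct calls.

```python
from multiprocessing import Process, Pipe

def file_server(conn):
    # Hypothetical user-space service: answers read requests by name.
    store = {"motd": "hello from the server"}
    while True:
        op, name = conn.recv()
        if op == "read":
            conn.send(store.get(name, ""))
        elif op == "quit":
            break

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    server = Process(target=file_server, args=(child_end,))
    server.start()
    parent_end.send(("read", "motd"))   # IPC in place of a direct call
    print(parent_end.recv())            # -> "hello from the server"
    parent_end.send(("quit", None))
    server.join()
```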

Password

A password, sometimes called a passcode, is secret data, typically a string of characters, usually used to confirm a user's identity. Traditionally, passwords were expected to be memorized, but the large number of password-protected services that a typical individual accesses can make memorization of unique passwords for each service impractical. Using the terminology of the NIST Digital Identity Guidelines, the secret is held by a party called the claimant while the party verifying the identity of the claimant is called the verifier. When the claimant successfully demonstrates knowledge of the password to the verifier through an established authentication protocol, the verifier is able to infer the claimant's identity.
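
A minimal sketch of the verifier's side of this exchange, assuming a salted hash is stored instead of the password itself (the PBKDF2 parameters here are illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def enroll(password):
    """Verifier: store only (salt, hash), never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, stored):
    """Check a claimant's password using a constant-time comparison."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("wrong guess", salt, stored))                   # False
```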

The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system that lie outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance with the system's security policy.

The GE 645 mainframe computer was a development of the GE 635 for use in the Multics project. It was the first computer to implement a configurable, hardware-protected memory system. The original CTSS was implemented on a modified IBM 7094 with two banks of memory and bank-switching between user and supervisor mode; programs running in the A-core memory bank had access to instructions that programs running in the B-core bank did not. The Multics operating system implemented multilevel security (MLS) on a GE 635 by running a simulator of the 645, starting on October 18, 1965, in the MIT Tech Center. This simulated environment was replaced by the first 645 hardware in 1967. The GECOS operating system was fully replaced by Multics in 1969, with the Multics supervisor separated by protection rings and "gates" allowing access from user mode. A later generation, the 645F, was not completed by the time the division was sold to Honeywell, and it became known as the Honeywell 6180. The original access control mechanism of the GE/Honeywell 645 was found inadequate for high-speed trapping of access instructions; the re-implementation in the 6180 solved those problems. The bulk of these computers running time-sharing on Multics were installed at the NSA and similar government sites. Because of the extreme security measures at those sites, the machines saw restricted use and had little influence on subsequent systems, apart from the protection ring.

Rootkit

A rootkit is a collection of computer software, typically malicious, designed to enable access to a computer or an area of its software that is not otherwise allowed, and it often masks its existence or the existence of other software. The term is a compound of "root" and "kit", and it carries negative connotations through its association with malware.

Jerome Howard "Jerry" Saltzer is an American computer scientist.

Vulnerability (computing)

Vulnerabilities are flaws in a computer system that weaken its overall security. They can be weaknesses in the hardware itself or in the software that runs on it. Vulnerabilities can be exploited by a threat actor, such as an attacker, to cross privilege boundaries within a computer system. To exploit a vulnerability, an attacker must have at least one applicable tool or technique that can connect to a system weakness. Viewed this way, a system's exploitable weaknesses collectively form its attack surface.

Secure by design

Secure by design, in software engineering, means that software products and capabilities have been designed to be foundationally secure.

In computing, privilege is defined as the delegation of authority to perform security-relevant functions on a computer system. A privilege allows a user to perform an action with security consequences. Examples of various privileges include the ability to create a new user, install software, or change kernel functions.
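
A common consequence of this definition is that programs should shed privileges they no longer need. The sketch below (POSIX-only; the UID and GID values are hypothetical) shows a process that starts as root and irrevocably drops to an unprivileged user:

```python
import os

def drop_privileges(uid=1000, gid=1000):
    """Irrevocably switch from root to an unprivileged user."""
    if os.getuid() != 0:
        return  # already unprivileged; nothing to drop
    os.setgroups([])  # clear supplementary groups first
    os.setgid(gid)    # set the group before the user, or setgid fails
    os.setuid(uid)

# ... perform privileged setup here (e.g. bind a low port), then:
drop_privileges()
# From here on, compromising this process yields only uid 1000's rights.
```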

Protection ring

In computer science, hierarchical protection domains, often called protection rings, are mechanisms to protect data and functionality from faults and malicious behavior.

Tamperproofing

Tamperproofing, conceptually, is a methodology used to hinder, deter or detect unauthorised access to a device or circumvention of a security system. Since any device or system can be foiled by a person with sufficient knowledge, equipment, and time, the term "tamperproof" is a misnomer unless some limitations on the tampering party's resources are explicit or assumed.

Database security concerns the use of a broad range of information security controls to protect databases against compromises of their confidentiality, integrity and availability. It involves various types or categories of controls, such as technical, procedural/administrative and physical.

Michael Schroeder is an American computer scientist. His areas of research include computer security, distributed systems, and operating systems, and he is perhaps best known as the co-inventor of the Needham–Schroeder protocol. In 2001 he co-founded the Microsoft Research Silicon Valley lab and was its assistant managing director until the lab was disbanded in 2014.

There are a number of security and safety features new to Windows Vista, most of which are not available in any prior Microsoft Windows operating system release.

The separation of mechanism and policy is a design principle in computer science. It states that mechanisms should not dictate the policies according to which decisions are made about which operations to authorize, and which resources to allocate.
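
One way to see the principle, as a small hedged sketch (the names and policies are invented for the example): the enforcement mechanism stays fixed while the policy is a swappable function supplied by the caller.

```python
# Mechanism: enforce whatever decision the supplied policy makes.
def guarded_open(user, resource, policy):
    if not policy(user, resource):
        raise PermissionError(f"{user} denied access to {resource}")
    return f"<handle for {resource}>"

# Two interchangeable policies using the same mechanism:
def allow_all(user, resource):
    return True

def owners_only(user, resource):
    return resource.startswith(user + "/")

print(guarded_open("alice", "alice/notes.txt", owners_only))  # allowed
print(guarded_open("alice", "bob/notes.txt", allow_all))      # allowed
```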

Kernel (operating system)

The kernel is a computer program at the core of a computer's operating system and generally has complete control over everything in the system. It is the portion of the operating system code that is always resident in memory and facilitates interactions between hardware and software components. A full kernel controls all hardware resources via device drivers, arbitrates conflicts between processes concerning such resources, and optimizes the utilization of common resources such as CPU and cache usage, file systems, and network sockets. On most systems, the kernel is one of the first programs loaded on startup. It handles the rest of startup as well as memory, peripherals, and input/output (I/O) requests from software, translating them into data-processing instructions for the central processing unit.

Mobile security

Mobile security, or mobile device security, is the protection of smartphones, tablets, and laptops from threats associated with wireless computing. It has become increasingly important in mobile computing. The security of personal and business information now stored on smartphones is of particular concern.

In information security, computer science, and other fields, the principle of least privilege (PoLP), also known as the principle of minimal privilege (PoMP) or the principle of least authority (PoLA), requires that in a particular abstraction layer of a computing environment, every module must be able to access only the information and resources that are necessary for its legitimate purpose.
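
A small illustration, assuming nothing beyond the standard library: rather than handing a helper a file path (and with it the ambient ability to open anything), pass it only the already-open, read-only stream it actually needs.

```python
import io

def word_count(readable):
    """Worker: granted only the ability to read this one stream."""
    return sum(len(line.split()) for line in readable)

# The caller passes exactly the capability required, nothing more:
print(word_count(io.StringIO("one two\nthree")))  # -> 3
```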

References

  1. Smith, R. E. (November 2012). "A Contemporary Look at Saltzer and Schroeder's 1975 Design Principles". IEEE Security & Privacy. 10 (6): 20–25. doi:10.1109/MSP.2012.85. ISSN 1540-7993. S2CID 13371996.