Trusted computing base

The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system that lie outside the TCB must not be able to misbehave in a way that would leak any more privileges than they are granted in accordance with the system's security policy.

The careful design and implementation of a system's trusted computing base is paramount to its overall security. Modern operating systems strive to reduce the size of the TCB so that an exhaustive examination of its code base (by means of manual or computer-assisted software audit or program verification) becomes feasible.

Definition and characterization

The term goes back to John Rushby,[1] who defined it as the combination of operating system kernel and trusted processes. The latter refers to processes that are allowed to violate the system's access-control rules. In the classic paper Authentication in Distributed Systems: Theory and Practice,[2] Lampson et al. define the TCB of a computer system as simply

a small amount of software and hardware that security depends on and that we distinguish from a much larger amount that can misbehave without affecting security.

Both definitions, while clear and convenient, are neither theoretically exact nor intended to be: a network server process under a UNIX-like operating system, for example, might fall victim to a security breach and compromise an important part of the system's security, yet it is not part of the operating system's TCB. The Orange Book, another classic computer security reference, therefore provides[3] a more formal definition of the TCB of a computer system, as

the totality of protection mechanisms within it, including hardware, firmware, and software, the combination of which is responsible for enforcing a computer security policy.

In other words, the trusted computing base (TCB) is a combination of hardware, software, and controls that work together to form a trusted base that enforces the system's security policy.

The Orange Book further explains that

[t]he ability of a trusted computing base to enforce correctly a unified security policy depends on the correctness of the mechanisms within the trusted computing base, the protection of those mechanisms to ensure their correctness, and the correct input of parameters related to the security policy.

In other words, a given piece of hardware or software is part of the TCB if and only if it has been designed to be part of the mechanism that provides security to the computer system. In operating systems, this typically consists of the kernel (or microkernel) and a select set of system utilities (for example, setuid programs and daemons in UNIX systems). In programming languages designed with built-in security features, such as Java and E, the TCB is formed of the language runtime and standard library.[4]
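
In UNIX systems, a setuid-root helper illustrates why such utilities fall inside the TCB: it runs with elevated privileges no matter who invokes it, so any flaw in its input validation undermines the security policy rather than just the program itself. The following minimal C sketch is purely illustrative; the helper, its spool directory, and its validation rule are hypothetical and not taken from any real system.

    /* Hypothetical setuid-root helper: installed with "chmod u+s", so it runs
     * with root privileges on behalf of unprivileged callers.  That is exactly
     * what places it inside the TCB. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(int argc, char *argv[])
    {
        /* Reject anything that could escape the intended directory.  A missing
         * check here becomes a hole in the TCB, not merely an application bug. */
        if (argc != 2 || strstr(argv[1], "..") != NULL || argv[1][0] == '/') {
            fprintf(stderr, "rejected\n");
            return EXIT_FAILURE;
        }

        char path[256];
        snprintf(path, sizeof path, "/var/spool/example/%s", argv[1]);

        FILE *f = fopen(path, "r");   /* succeeds even where the caller has no access */
        if (f == NULL) {
            perror("fopen");
            return EXIT_FAILURE;
        }
        /* ... act on the privileged resource ... */
        fclose(f);

        /* Drop the elevated privileges as soon as they are no longer needed. */
        if (setuid(getuid()) != 0) {
            perror("setuid");
            return EXIT_FAILURE;
        }
        return EXIT_SUCCESS;
    }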

Properties

Predicated upon the security policy

As a consequence of the above Orange Book definition, the boundaries of the TCB depend closely upon the specifics of how the security policy is fleshed out. In the network server example above, even though, say, a Web server that serves a multi-user application is not part of the operating system's TCB, it has the responsibility of performing access control so that users cannot usurp each other's identities and privileges. In this sense, it definitely is part of the TCB of the larger computer system that comprises the UNIX server, the users' browsers, and the Web application; in other words, breaking into the Web server through, e.g., a buffer overflow may not be regarded as a compromise of the operating system proper, but it certainly constitutes a damaging exploit on the Web application.
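
The kind of check such an application must perform itself can be sketched in a few lines of C. This is a deliberately simplified illustration under assumed names (a session, a document, and an owner-or-shared rule), not the design of any particular web application; the point is only that the operating system cannot distinguish one application user from another, so this piece of the policy is enforced entirely by application code.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    struct session  { char user[64]; };                 /* authenticated requester */
    struct document { char owner[64]; bool shared; };   /* resource with an owner  */

    /* Application-level access control: allowed only if the document is shared
     * or the requester owns it.  A bug here lets users usurp each other's data
     * even though the OS kernel itself is never touched. */
    static bool may_read(const struct session *s, const struct document *d)
    {
        return d->shared || strcmp(s->user, d->owner) == 0;
    }

    int main(void)
    {
        struct session  alice = { "alice" };
        struct document doc   = { "bob", false };

        printf("alice may read bob's document: %s\n",
               may_read(&alice, &doc) ? "yes" : "no");
        return 0;
    }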

This fundamental relativity of the boundary of the TCB is exemplified by the concept of the 'target of evaluation' ('TOE') in the Common Criteria security process: in the course of a Common Criteria security evaluation, one of the first decisions that must be made is the boundary of the audit in terms of the list of system components that will come under scrutiny.

A prerequisite to security

Systems that do not have a trusted computing base as part of their design do not provide security of their own: they are only secure insofar as security is provided to them by external means (e.g. a computer sitting in a locked room without a network connection may be considered secure depending on the policy, regardless of the software it runs). This is because, as David J. Farber et al. put it,[5] “in a computer system, the integrity of lower layers is typically treated as axiomatic by higher layers”. As far as computer security is concerned, reasoning about the security properties of a computer system requires being able to make sound assumptions about what it can, and more importantly, cannot do; however, barring any reason to believe otherwise, a computer is able to do everything that a general von Neumann machine can. This obviously includes operations that would be deemed contrary to all but the simplest security policies, such as divulging an email or password that should be kept secret; however, barring special provisions in the architecture of the system, there is no denying that the computer could be programmed to perform these undesirable tasks.

These special provisions that aim at preventing certain kinds of actions from being executed, in essence, constitute the trusted computing base. For this reason, the Orange Book (still a reference on the design of secure operating systems as of 2007) characterizes the various security assurance levels that it defines mainly in terms of the structure and security features of the TCB.

Software parts of the TCB need to protect themselves

As outlined by the aforementioned Orange Book, software portions of the trusted computing base need to protect themselves against tampering to be effective. This is due to the von Neumann architecture implemented by virtually all modern computers: since machine code can be processed as just another kind of data, it can be read and overwritten by any program. This can be prevented by special memory-management provisions that subsequently have to be treated as part of the TCB. Specifically, the trusted computing base must at least prevent its own software from being overwritten.
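
A user-space analogue conveys the idea. An operating-system kernel actually protects its own code through its page tables, set up early during boot, but the following hedged C sketch using POSIX mprotect shows the same principle: once a region holding security-critical code has been sealed read-only, ordinary stores to it fault instead of silently patching the trusted code.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);

        /* Stand-in for a region containing security-critical code. */
        unsigned char *region = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED) { perror("mmap"); return EXIT_FAILURE; }

        memset(region, 0xC3, pagesize);            /* "install" the code */

        /* Seal it: from now on the memory hardware rejects writes to this range. */
        if (mprotect(region, pagesize, PROT_READ) != 0) {
            perror("mprotect");
            return EXIT_FAILURE;
        }

        printf("region sealed read-only at %p\n", (void *)region);
        /* region[0] = 0x90;   would now raise SIGSEGV rather than modify the code */

        munmap(region, pagesize);
        return EXIT_SUCCESS;
    }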

In many modern CPUs, the protection of the memory that hosts the TCB is achieved by a specialized piece of hardware called the memory management unit (MMU), which the operating system programs to allow or deny a running program's access to specific ranges of the system memory. Of course, the operating system must also be able to prevent other programs from reprogramming the MMU; the hardware mechanism that enforces this restriction is supervisor mode. Compared to cruder approaches (such as storing the TCB in ROM, or, equivalently, using the Harvard architecture), this technique has the advantage of allowing security-critical software to be upgraded in the field, although allowing secure upgrades of the trusted computing base poses bootstrap problems of its own.[6]
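
A simplified model of the check performed on every memory access shows how the user/supervisor distinction keeps the TCB's memory out of reach of ordinary programs. The flag layout below is hypothetical; real page-table formats such as x86-64's differ in detail, so this is only a sketch of the idea.

    #include <stdbool.h>
    #include <stdio.h>

    struct pte {              /* simplified page-table entry */
        bool present;         /* page is mapped                      */
        bool writable;        /* stores are allowed                  */
        bool user;            /* accessible outside supervisor mode  */
    };

    enum mode { USER_MODE, SUPERVISOR_MODE };

    /* The MMU consults the entry, set up by the operating system, on every access. */
    static bool access_allowed(struct pte e, enum mode m, bool is_write)
    {
        if (!e.present)                 return false;
        if (m == USER_MODE && !e.user)  return false;   /* kernel-only page */
        if (is_write && !e.writable)    return false;   /* read-only page   */
        return true;
    }

    int main(void)
    {
        struct pte kernel_text = { .present = true, .writable = false, .user = false };

        /* A user program can neither read nor overwrite the kernel's code ... */
        printf("user write to kernel text:  %d\n",
               access_allowed(kernel_text, USER_MODE, true));
        /* ... while the kernel itself can still read and execute it. */
        printf("kernel read of kernel text: %d\n",
               access_allowed(kernel_text, SUPERVISOR_MODE, false));
        return 0;
    }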

Trusted vs. trustworthy

As stated above, trust in the trusted computing base is required to make any progress in ascertaining the security of the computer system. In other words, the trusted computing base is “trusted” first and foremost in the sense that it has to be trusted, and not necessarily that it is trustworthy. Real-world operating systems routinely have security-critical bugs discovered in them, which attests to the practical limits of such trust.[7]

The alternative is formal software verification, which uses mathematical proof techniques to show the absence of bugs. Researchers at NICTA and its spinout Open Kernel Labs performed such a formal verification of seL4, a member of the L4 microkernel family, proving functional correctness of the C implementation of the kernel.[8] This makes seL4 the first operating-system kernel to close the gap between trust and trustworthiness, assuming the mathematical proof is free from error.

TCB size

Due to the aforementioned need to apply costly techniques such as formal verification or manual review, the size of the TCB has immediate consequences for the economics of the TCB assurance process and for the trustworthiness of the resulting product (in terms of the mathematical expectation of the number of bugs not found during the verification or review). In order to reduce costs and security risks, the TCB should therefore be kept as small as possible. This is a key argument in the debate favoring microkernels over monolithic kernels.[9]
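
A back-of-the-envelope calculation makes the scaling concrete. The numbers below are hypothetical (defect densities and review efficacy vary widely in practice); the sketch only illustrates that, under a roughly constant defect density and a fixed fraction of defects missed by review, the expected number of undiscovered bugs grows linearly with the size of the TCB.

    #include <stdio.h>

    int main(void)
    {
        double defects_per_kloc = 2.0;      /* hypothetical defect density          */
        double miss_rate        = 0.10;     /* hypothetical fraction review misses  */

        double small_tcb_kloc   = 10.0;     /* order of magnitude of an L4-class microkernel   */
        double large_tcb_kloc   = 10000.0;  /* order of magnitude of a large monolithic kernel */

        printf("expected residual defects, small TCB: %.0f\n",
               small_tcb_kloc * defects_per_kloc * miss_rate);
        printf("expected residual defects, large TCB: %.0f\n",
               large_tcb_kloc * defects_per_kloc * miss_rate);
        return 0;
    }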

Examples

AIX implements the trusted computing base as an optional component in its install-time package management system.[10]

References

  1. Rushby, John (1981). "Design and Verification of Secure Systems". 8th ACM Symposium on Operating System Principles. Pacific Grove, California, US. pp. 12–21.
  2. B. Lampson, M. Abadi, M. Burrows and E. Wobber, Authentication in Distributed Systems: Theory and Practice, ACM Transactions on Computer Systems 1992, on page 6.
  3. Department of Defense trusted computer system evaluation criteria, DoD 5200.28-STD, 1985. In the glossary under entry Trusted Computing Base (TCB).
  4. M. Miller, C. Morningstar and B. Frantz, Capability-based Financial Instruments (An Ode to the Granovetter diagram), in paragraph Subjective Aggregation.
  5. W. Arbaugh, D. Farber and J. Smith, A Secure and Reliable Bootstrap Architecture, 1997, also known as the “aegis papers”.
  6. A Secure and Reliable Bootstrap Architecture, op. cit.
  7. Bruce Schneier, The security patch treadmill (2001)
  8. Klein, Gerwin; Elphinstone, Kevin; Heiser, Gernot; Andronick, June; Cock, David; Derrin, Philip; Elkaduwe, Dhammika; Engelhardt, Kai; Kolanski, Rafal; Norrish, Michael; Sewell, Thomas; Tuch, Harvey; Winwood, Simon (October 2009). "seL4: Formal verification of an OS kernel" (PDF). 22nd ACM Symposium on Operating System Principles. Big Sky, Montana, US. pp. 207–220.
  9. Andrew S. Tanenbaum, Tanenbaum-Torvalds debate, part II (12 May 2006)
  10. AIX 4.3 Elements of Security, August 2000, chapter 6.