Trusted system

In the security engineering subspecialty of computer science, a trusted system is one that is relied upon to a specified extent to enforce a specified security policy. This is equivalent to saying that a trusted system is one whose failure would break a security policy (if a policy exists that the system is trusted to enforce).

The word "trust" is critical, as it does not carry the meaning that might be expected in everyday usage. A trusted system is one that the user feels safe to use, and trusts to perform tasks without secretly executing harmful or unauthorized programs; trusted computing refers to whether programs can trust the platform to be unmodified from the expected, and whether or not those programs are innocent or malicious or whether they execute tasks that are undesired by the user.

A trusted system can also be seen as a level-based security system where protection is provided and handled according to different levels. This is commonly found in the military, where information is categorized as unclassified (U), confidential (C), secret (S), top secret (TS), and beyond. Such systems also enforce the policies of no read-up and no write-down.

Trusted systems in classified information

A subset of trusted systems ("Division B" and "Division A") implement mandatory access control (MAC) labels, and as such, it is often assumed that they can be used for processing classified information. However, this is generally untrue. There are four modes in which one can operate a multilevel secure system: multilevel, compartmented, dedicated, and system-high modes. The National Computer Security Center's "Yellow Book" specifies that B3 and A1 systems can only be used for processing a strict subset of security labels, and only when operated according to a particularly strict configuration.

Central to the concept of U.S. Department of Defense-style trusted systems is the notion of a "reference monitor", which is an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is (a) tamper-proof, (b) always invoked, and (c) small enough to be subject to independent analysis and testing, the completeness of which can be assured.

The U.S. National Security Agency's 1983 Trusted Computer System Evaluation Criteria (TCSEC), or "Orange Book", defined a set of "evaluation classes" that described the features and assurances that the user could expect from a trusted system.

The dedication of significant system engineering toward minimizing the complexity (not size, as often cited) of the trusted computing base (TCB) is key to the provision of the highest levels of assurance (B3 and A1). The TCB is defined as the combination of hardware, software, and firmware that is responsible for enforcing the system's security policy. An inherent engineering conflict would appear to arise in higher-assurance systems in that the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB and is, therefore, untrusted. Although this may lead the more technically naive to sophists' arguments about the nature of trust, the argument confuses the issue of "correctness" with that of "trustworthiness".

TCSEC has a precisely defined hierarchy of six evaluation classes; the highest of these, A1, is featurally identical to B3, differing only in documentation standards. In contrast, the more recently introduced Common Criteria (CC), which derive from a blend of technically mature standards from various NATO countries, provide a tenuous spectrum of seven "evaluation classes" that intermix features and assurances in a non-hierarchical manner, and lack the precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support, even encourage, an inter-mixture of security requirements culled from a variety of predefined "protection profiles". While a case can be made that even the seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (EAL7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.[citation needed]

The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, under the technical guidance and financial sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Army Electronic Systems Command (Fort Hanscom, MA), devised the Bell–LaPadula model, in which a trustworthy computer system is modeled in terms of objects (passive repositories or destinations for data, such as files, disks, or printers) and subjects (active entities that cause information to flow among objects, e.g. users, or system processes or threads operating on behalf of users). The entire operation of a computer system can indeed be regarded as a "history" (in the serializability-theoretic sense) of pieces of information flowing from object to object in response to subjects' requests for such flows. At the same time, Dorothy Denning at Purdue University was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. (A mathematical "lattice" is a partially ordered set, characterizable as a directed acyclic graph, in which the relationship between any two vertices is that one "dominates" the other, "is dominated by" it, or neither.) She defined a generalized notion of "labels" attached to entities, corresponding more or less to the full security markings one encounters on classified military documents, e.g. TOP SECRET WNINTEL TK DUMBO. Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report entitled Secure Computer System: Unified Exposition and Multics Interpretation. They stated that labels attached to objects represent the sensitivity of data contained within the object, while those attached to subjects represent the trustworthiness of the user executing the subject. (However, there can be a subtle semantic difference between the sensitivity of the data within the object and the sensitivity of the object itself.)
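
To illustrate the lattice structure, the dominance relation on such labels can be sketched as a partial order over a hierarchical level combined with a set of compartments; the level names, compartment markings, and code below are illustrative only and are not taken from the Bell–LaPadula report or Denning's dissertation.

```python
from dataclasses import dataclass

# Hierarchical sensitivity levels, lowest to highest (illustrative encoding).
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class Label:
    level: str                              # e.g. "TOP SECRET"
    compartments: frozenset = frozenset()   # e.g. {"TK", "WNINTEL"}

def dominates(a: Label, b: Label) -> bool:
    """a dominates b iff a's level is at least b's level and
    a's compartments include all of b's compartments."""
    return (LEVELS[a.level] >= LEVELS[b.level]
            and a.compartments >= b.compartments)

# Two labels at the same level with disjoint compartments are incomparable:
# neither dominates the other, which is why the set of labels forms a
# lattice (partial order) rather than a single chain.
x = Label("SECRET", frozenset({"TK"}))
y = Label("SECRET", frozenset({"WNINTEL"}))
assert not dominates(x, y) and not dominates(y, x)
assert dominates(Label("TOP SECRET", frozenset({"TK", "WNINTEL"})), x)
```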

The concepts are unified with two properties, the "simple security property" (a subject can only read from an object that it dominates ["is greater than" is a close, albeit mathematically imprecise, interpretation]) and the "confinement property", or "*-property" (a subject can only write to an object that dominates it). (These properties are loosely referred to as "no read-up" and "no write-down", respectively.) Jointly enforced, these properties ensure that information cannot flow "downhill" to a repository where insufficiently trustworthy recipients may discover it. By extension, assuming that the labels assigned to subjects are truly representative of their trustworthiness, the no read-up and no write-down rules rigidly enforced by the reference monitor are sufficient to constrain Trojan horses, one of the most general classes of attacks (the popularly reported worms and viruses are specializations of the Trojan horse concept).
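
A minimal sketch of how a reference monitor might apply these two rules, using bare integer levels (compartments omitted for brevity; the encoding and function names are illustrative):

```python
# Minimal sketch of the two Bell-LaPadula rules with integer levels only.
# 0 = UNCLASSIFIED ... 3 = TOP SECRET (illustrative encoding).

def may_read(subject_level: int, object_level: int) -> bool:
    # Simple security property ("no read-up"): the subject must
    # dominate the object it reads.
    return subject_level >= object_level

def may_write(subject_level: int, object_level: int) -> bool:
    # *-property ("no write-down"): the object written must dominate
    # the subject, so information cannot flow to a lower level.
    return object_level >= subject_level

# A SECRET (2) process may read CONFIDENTIAL (1) data but may not write
# it back to a CONFIDENTIAL file, even if subverted by a Trojan horse.
assert may_read(2, 1) and not may_write(2, 1)
assert may_write(2, 3) and not may_read(2, 3)
```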

The Bell–LaPadula model technically enforces only "confidentiality", or "secrecy", controls, i.e. it addresses the problem of the sensitivity of objects and the attendant trustworthiness of subjects not to disclose it inappropriately. The dual problem of "integrity" (i.e. the problem of accuracy, or even provenance, of objects) and the attendant trustworthiness of subjects not to modify or destroy it inappropriately, is addressed by mathematically affine models, the most important of which is named for its creator, K. J. Biba. Other integrity models include the Clark–Wilson model and Shockley and Schell's program integrity model, "The SeaView Model". [1]
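
The duality can be sketched by inverting the two Bell–LaPadula rules: under a Biba-style integrity policy a subject may neither read from a lower-integrity object nor write to a higher-integrity one. The integer encoding below is illustrative only.

```python
# Sketch of the Biba integrity rules, the dual of Bell-LaPadula:
# higher number = higher integrity (illustrative encoding).

def may_read_integrity(subject_level: int, object_level: int) -> bool:
    # "No read-down": a subject must not consume data of lower integrity.
    return object_level >= subject_level

def may_write_integrity(subject_level: int, object_level: int) -> bool:
    # "No write-up": a subject must not modify data of higher integrity.
    return subject_level >= object_level

# A low-integrity process (1) cannot corrupt a high-integrity object (3),
# and a high-integrity process (3) is not contaminated by reading from (1).
assert not may_write_integrity(1, 3)
assert not may_read_integrity(3, 1)
```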

An important feature of MACs is that they are entirely beyond the control of any user. The TCB automatically attaches labels to any subjects executed on behalf of users and to the files they access or modify. In contrast, an additional class of controls, termed discretionary access controls (DACs), are under the direct control of system users. Protection mechanisms such as permission bits (supported by UNIX since the late 1960s and, in a more flexible and powerful form, by Multics since earlier still) and access control lists (ACLs) are familiar examples of DACs.
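
By way of contrast, a simplified UNIX-style permission-bit check illustrates the discretionary case: the decision depends entirely on mode bits that the object's owner may change at will. This is a sketch, not any particular kernel's logic; real systems add supplementary groups, ACL entries, and privileged overrides.

```python
import stat

def may_read_dac(uid: int, gid: int, owner: int, group: int, mode: int) -> bool:
    """Simplified UNIX-style discretionary check: the owner chose the mode
    bits, so the control is discretionary rather than mandatory."""
    if uid == owner:
        return bool(mode & stat.S_IRUSR)   # owner read bit
    if gid == group:
        return bool(mode & stat.S_IRGRP)   # group read bit
    return bool(mode & stat.S_IROTH)       # world read bit

# A file with mode 0o640 is readable by its owner and group, but not by others.
assert may_read_dac(1000, 1000, 1000, 100, 0o640)
assert not may_read_dac(2000, 2000, 1000, 100, 0o640)
```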

The behavior of a trusted system is often characterized in terms of a mathematical model, which may be more or less rigorous depending upon applicable operational and administrative constraints. The model takes the form of a finite state machine (FSM) with state criteria, state transition constraints (a set of "operations" that correspond to state transitions), and a descriptive top-level specification (DTLS), which entails a user-perceptible interface such as an API, a set of system calls in UNIX, or system exits in mainframes. Each element of the aforementioned engenders one or more model operations.
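
A hedged sketch of this modeling style follows, with a toy invariant (no read-up) standing in for a real security policy; all subject, object, and operation names are chosen purely for illustration.

```python
# Toy finite-state-machine model: the state is the set of (subject, object)
# read accesses currently granted; an operation (state transition) is
# admitted only if the successor state still satisfies the security invariant.

from typing import FrozenSet, Tuple

State = FrozenSet[Tuple[str, str]]          # granted (subject, object) reads
CLEARANCE = {"alice": 3, "bob": 1}          # illustrative subject levels
CLASSIFICATION = {"plans": 3, "memo": 1}    # illustrative object levels

def secure(state: State) -> bool:
    # Invariant: every granted read satisfies no-read-up.
    return all(CLEARANCE[s] >= CLASSIFICATION[o] for s, o in state)

def grant_read(state: State, subject: str, obj: str) -> State:
    """One model 'operation': return the new state if it remains secure,
    otherwise refuse the transition and keep the old state."""
    successor = state | {(subject, obj)}
    return successor if secure(successor) else state

s0: State = frozenset()
s1 = grant_read(s0, "alice", "memo")     # allowed: clearance 3 >= 1
s2 = grant_read(s1, "bob", "plans")      # refused: 1 < 3, state unchanged
assert ("alice", "memo") in s1 and ("bob", "plans") not in s2
```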

Trusted systems in trusted computing

The Trusted Computing Group creates specifications that are meant to address particular requirements of trusted systems, including attestation of configuration and safe storage of sensitive information.

Trusted systems in policy analysis

In the context of national or homeland security, law enforcement, or social control policy, trusted systems provide conditional prediction about the behavior of people or objects prior to authorizing access to system resources. [2] For example, trusted systems include the use of "security envelopes" in national security and counterterrorism applications, "trusted computing" initiatives in technical systems security, and credit or identity scoring systems in financial and anti-fraud applications. In general, they include any system in which probabilistic threat or risk analysis is used to assess "trust" for decision-making before authorizing access or allocating resources against likely threats, or in which deviation analysis or systems surveillance is used to ensure that behavior within the system complies with expected or authorized parameters.

The widespread adoption of these authorization-based security strategies (where the default state is DEFAULT=DENY) for counterterrorism, anti-fraud, and other purposes is helping accelerate the ongoing transformation of modern societies from a notional Beccarian model of criminal justice based on accountability for deviant actions after they occur [3] to a Foucauldian model based on authorization, preemption, and general social compliance through ubiquitous preventative surveillance and control through system constraints. [4]

In this emergent model, "security" is geared not towards policing but towards risk management through surveillance, information exchange, auditing, communication, and classification. These developments have led to general concerns about individual privacy and civil liberty, and to a broader philosophical debate about appropriate social governance methodologies.

Trusted systems in information theory

Trusted systems in the context of information theory are based on the following definition:

"Trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel"

Ed Gerck [5]

In information theory, information has nothing to do with knowledge or meaning; it is simply that which is transferred from source to destination, using a communication channel. If, before transmission, the information is already available at the destination, then the information transferred is zero. Information received by a party is that which the party does not expect, as measured by the uncertainty of the party as to what the message will be.
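
In Shannon's terms, the information carried by a message is its self-information, determined by how improbable the message was at the destination. The following small illustration uses this standard measure; it is not drawn from Gerck's papers.

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information, in bits, of a message with prior probability p:
    the less expected the message, the more information it carries."""
    return -math.log2(p)

# A message the destination already knows (p = 1) transfers zero information;
# a one-in-eight surprise carries 3 bits.
assert self_information(1.0) == 0.0
assert self_information(1 / 8) == 3.0
```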

Likewise, trust, as defined by Gerck, has nothing to do with friendship, acquaintances, employee-employer relationships, loyalty, betrayal and other overly variable concepts. Trust is not taken in the purely subjective sense either, nor as a feeling or something purely personal or psychological; trust is understood as something potentially communicable. Further, this definition of trust is abstract, allowing different instances and observers in a trusted system to communicate based on a common idea of trust (otherwise communication would be isolated in domains), where all necessarily different subjective and intersubjective realizations of trust in each subsystem (man and machines) may coexist. [6]

Taken together in the model of information theory, "information is what you do not expect" and "trust is what you know". Linking both concepts, trust is seen as "qualified reliance on received information". In terms of trusted systems, an assertion of trust cannot be based on the record itself, but on information from other information channels. [7] The deepening of these questions leads to complex conceptions of trust, which have been thoroughly studied in the context of business relationships. [8] It also leads to conceptions of information where the "quality" of information integrates trust or trustworthiness in the structure of the information itself and of the information system(s) in which it is conceived; higher quality in terms of particular definitions of accuracy and precision means higher trustworthiness. [9]

An example of the calculus of trust is "If I connect two trusted systems, are they more or less trusted when taken together?". [6]

The IBM Federal Software Group [10] has suggested that "trust points" [5] provide the most useful definition of trust for application in an information technology environment, because it is related to other information theory concepts and provides a basis for measuring trust. In a network-centric enterprise services environment, such a notion of trust is considered [10] to be requisite for achieving the desired collaborative, service-oriented architecture vision.

Related Research Articles

The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system that lie outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance with the system's security policy.

Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies, including mandatory access controls (MAC).

In computer security, a covert channel is a type of attack that creates a capability to transfer information objects between processes that are not supposed to be allowed to communicate by the computer security policy. The term, coined in 1973 by Butler Lampson, is defined as channels "not intended for information transfer at all, such as the service program's effect on system load," to distinguish them from legitimate channels that are subjected to access controls by COMPUSEC.

The Bell–LaPadula Model (BLP) is a state machine model used for enforcing access control in government and military applications. It was developed by David Elliott Bell and Leonard J. LaPadula, subsequent to strong guidance from Roger R. Schell, to formalize the U.S. Department of Defense (DoD) multilevel security (MLS) policy. The model is a formal state transition model of computer security policy that describes a set of access control rules which use security labels on objects and clearances for subjects. Security labels range from the most sensitive, down to the least sensitive.

Rule-set-based access control (RSBAC) is an open source access control framework for current Linux kernels, which has been in stable production use since January 2000.

In computer security, mandatory access control (MAC) refers to a type of access control by which the operating system or database constrains the ability of a subject or initiator to access or generally perform some sort of operation on an object or target. In the case of operating systems, a subject is usually a process or thread; objects are constructs such as files, directories, TCP/UDP ports, shared memory segments, IO devices, etc. Subjects and objects each have a set of security attributes. Whenever a subject attempts to access an object, an authorization rule enforced by the operating system kernel examines these security attributes and decides whether the access can take place. Any operation by any subject on any object is tested against the set of authorization rules to determine if the operation is allowed. A database management system, in its access control mechanism, can also apply mandatory access control; in this case, the objects are tables, views, procedures, etc.

In computer security, discretionary access control (DAC) is a type of access control defined by the Trusted Computer System Evaluation Criteria (TCSEC) as a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission on to any other subject.

A computer security policy defines the goals and elements of an organization's computer systems. The definition can be highly formal or informal. Security policies are enforced by organizational policies or security mechanisms. A technical implementation defines whether a computer system is secure or insecure. These formal policy models can be categorized into the core security principles of Confidentiality, Integrity, and Availability. For example, the Bell–LaPadula model is a confidentiality policy model, whereas the Biba model is an integrity policy model.

Multilevel security or multiple levels of security (MLS) is the application of a computer system to process information with incompatible classifications, permit access by users with different security clearances and needs-to-know, and prevent users from obtaining access to information for which they lack authorization. There are two contexts for the use of multilevel security. One is to refer to a system that is adequate to protect itself from subversion and has robust mechanisms to separate information domains, that is, trustworthy. Another context is to refer to an application of a computer that will require the computer to be strong enough to protect itself from subversion and possess adequate mechanisms to separate information domains, that is, a system we must trust. This distinction is important because systems that need to be trusted are not necessarily trustworthy.

In operating systems architecture, a reference monitor concept defines a set of design requirements on a reference validation mechanism, which enforces an access control policy over subjects' ability to perform operations on objects on a system. The properties of a reference monitor are captured by the acronym NEAT: the mechanism must be Non-bypassable, Evaluable, Always invoked, and Tamper-proof.

The concept of type enforcement (TE), in the field of information technology, is an access control mechanism for regulating access in computer systems. Implementing TE gives priority to mandatory access control (MAC) over discretionary access control (DAC). Access clearance is first given to a subject accessing objects based on rules defined in an attached security context. A security context in a domain is defined by a domain security policy. In the Linux security module (LSM) in SELinux, the security context is an extended attribute. Type enforcement implementation is a prerequisite for MAC, and a first step before multilevel security (MLS) or its replacement multi categories security (MCS). It is a complement of role-based access control (RBAC).

Multi categories security (MCS) is an access control method in Security-Enhanced Linux that uses categories attached to objects (files) and granted to subjects at the operating system level. The implementation in Fedora Core 5 is advisory because there is nothing stopping a process from increasing its access. The eventual aim is to make MCS a hierarchical mandatory access control system. Currently, MCS controls access to files and to the ptrace or kill operations on processes. It has not yet been decided what level of control it should have over access to directories and other file system objects. It is still evolving.

Multiple single-level or multi-security level (MSL) is a means to separate different levels of data by using separate computers or virtual machines for each level. It aims to give some of the benefits of multilevel security without needing special changes to the OS or applications, but at the cost of needing extra hardware.

The XTS-400 is a multilevel secure computer operating system. It is a multiuser, multitasking system that uses multilevel scheduling in processing data and information. It works in networked environments and supports Gigabit Ethernet and both IPv4 and IPv6.

Solaris Trusted Extensions is a set of security extensions incorporated in the Solaris 10 operating system by Sun Microsystems, featuring a mandatory access control model. It succeeds Trusted Solaris, a family of security-evaluated operating systems based on earlier versions of Solaris.

A secure state is an information systems security term to describe where entities in a computer system are divided into subjects and objects, and it can be formally proven that each state transition preserves security by moving from one secure state to another secure state. Thereby it can be inductively proven that the system is secure. As defined in the Bell–LaPadula model, the secure state is built on the concept of a state machine with a set of allowable states in a system. The transition from one state to another state is defined by transition functions.

In information security, computational trust is the generation of trusted authorities or user trust through cryptography. In centralised systems, security is typically based on the authenticated identity of external parties. Rigid authentication mechanisms, such as public key infrastructures (PKIs) or Kerberos, have allowed this model to be extended to distributed systems within a few closely collaborating domains or within a single administrative domain. During recent years, computer science has moved from centralised systems to distributed computing. This evolution has several implications for security models, policies and mechanisms needed to protect users’ information and resources in an increasingly interconnected computing infrastructure.

Trusted Computer System Evaluation Criteria (TCSEC) is a United States Government Department of Defense (DoD) standard that sets basic requirements for assessing the effectiveness of computer security controls built into a computer system. The TCSEC was used to evaluate, classify, and select computer systems being considered for the processing, storage, and retrieval of sensitive or classified information.

In information security, computer science, and other fields, the principle of least privilege (PoLP), also known as the principle of minimal privilege (PoMP) or the principle of least authority (PoLA), requires that in a particular abstraction layer of a computing environment, every module must be able to access only the information and resources that are necessary for its legitimate purpose.

In computer security, general access control includes identification, authorization, authentication, access approval, and audit. A more narrow definition of access control would cover only access approval, whereby the system makes a decision to grant or reject an access request from an already authenticated subject, based on what the subject is authorized to access. Authentication and access control are often combined into a single operation, so that access is approved based on successful authentication, or based on an anonymous access token. Authentication methods and tokens include passwords, biometric scans, physical keys, electronic keys and devices, hidden paths, social barriers, and monitoring by humans and automated systems.

References

  1. Lunt, Teresa; Denning, Dorothy; Schell, Roger R.; Heckman, Mark; Shockley, William R. (1990). "The SeaView Security Model". IEEE Transactions on Software Engineering 16: 593–607. doi:10.1109/SECPRI.1988.8114.
  2. The concept of trusted systems described here is discussed in Taipale, K.A. (2005). The Trusted Systems Problem: Security Envelopes, Statistical Threat Analysis, and the Presumption of Innocence, Homeland Security - Trends and Controversies, IEEE Intelligent Systems, Vol. 20 No. 5, pp. 80-83 (Sept./Oct. 2005).
  3. Cesare Beccaria, On Crimes and Punishment (1764)
  4. Michel Foucault, Discipline and Punish (1975, Alan Sheridan, tr., 1977, 1995)
  5. Feghhi, J. and P. Williams (1998). "Trust Points", in Digital Certificates: Applied Internet Security. Addison-Wesley. ISBN 0-201-30980-7; Toward Real-World Models of Trust: Reliance on Received Information.
  6. Trust as Qualified Reliance on Information, Part I, The COOK Report on Internet, Volume X, No. 10, January 2002. ISSN 1071-6327.
  7. Gregory, John D. (1997). Electronic Legal Records: Pretty Good Authentication?
  8. Huemer, L. (1998). Trust in business relations: Economic logic or social interaction? Umeå: Boréa. ISBN   91-89140-02-8.
  9. Ivanov, K. (1972). Quality-control of information: On the concept of accuracy of information in data banks and in management information systems. The University of Stockholm and The Royal Institute of Technology.
  10. Daly, Christopher (2004). A Trust Framework for the DoD Network-Centric Enterprise Services (NCES) Environment. IBM Corp. (Request from the IEEE Computer Society's ISSAA; archived 2011-07-26 at the Wayback Machine.)