Trusted Computer System Evaluation Criteria (TCSEC) is a United States Government Department of Defense (DoD) standard that sets basic requirements for assessing the effectiveness of computer security controls built into a computer system. The TCSEC was used to evaluate, classify, and select computer systems being considered for the processing, storage, and retrieval of sensitive or classified information. [1]
The TCSEC, frequently referred to as the Orange Book, is the centerpiece of the DoD Rainbow Series publications. Initially issued in 1983 by the National Computer Security Center (NCSC), an arm of the National Security Agency, and then updated in 1985, the TCSEC was eventually replaced by the Common Criteria international standard, originally published in 2005.
By the late 1960s, government agencies, like other computer users, had gone far in the transition from batch processing to multiuser and time-sharing systems. The US Department of Defense (DoD) Advanced Research Projects Agency (ARPA), now DARPA, was a primary funder of research into time-sharing. [1] By 1970, DoD was planning a major procurement of mainframe computers, referred to as the Worldwide Military Command and Control System (WWMCCS), to support military command operations. The desire to meet more advanced security challenges emerged early. The Air Force's Military Airlift Command (MAC), for example, provided the military services with a largely unclassified air cargo and passenger service, but on rare occasions was required to classify some of its missions using the same aircraft and crews, for example in cases of military contingencies or special operations. By 1970, MAC had articulated a requirement to process classified information on its soon-to-arrive WWMCCS mainframes while allowing users without security clearances (uncleared users) access to the mainframes. [2]
The national security community responded to the challenges in two ways: the Office of the Secretary of Defense commissioned a study of the policy and technical issues associated with securing computer systems, while ARPA funded the development of a prototype secure operating system that could process and protect classified information.
The study effort was organized as the Defense Science Board (DSB) Task Force on Computer Security under the chairmanship of the late Willis Ware. Its membership included technologists from the government and defense contractors as well as security officials from the DoD and intelligence community. The task force met between 1967 and 1969 and produced a classified report that was made available to organizations with appropriate security clearance beginning in 1970. [3] The Ware Report, as the DSB task force report came to be called, provided guidance on the development and operation of multiuser computer systems that would be used to process classified information.
In the early 1970s, United States Air Force requirements for the development of new computer system capabilities were addressed to the Air Force Electronic Systems Division (ESD), later known as the Electronic Systems Center, at Hanscom Air Force Base in Massachusetts. ESD received technical advice and support from the MITRE Corporation, one of the country's federally funded research and development centers (FFRDCs). An early MITRE report [2] suggested alternative approaches to meeting the MAC requirement without developing a new multilevel secure operating system, in hopes that these approaches might avoid the problems the Ware Report characterized as intractable.
Grace Hammonds Nibaldi, while working at the MITRE Corporation, published a report that laid out the initial plans for the evaluation of commercial off-the-shelf operating systems. [4] The Nibaldi paper places great emphasis on the importance of mandatory security. Like the Orange Book to follow, it defines seven levels of evaluated products, with the lowest, least-secure level (0) reserved for “unevaluated.” In the Nibaldi scheme, all but level 1 (the lowest level that actually undergoes evaluation) must include features for extensive mandatory security.
Work on the Orange Book began in 1979. The creation of the Orange Book was a major project spanning the period from Nibaldi's 1979 report [4] to the official release of the Orange Book in 1983. The first public draft of the evaluation criteria was the Blue Book, released in May 1982. [1] The Orange Book itself was published in August 1983. Sheila Brand was the primary author, and several other people were core contributors to its development. These included Grace Hammonds Nibaldi and Peter Tasker of the MITRE Corporation; Dan Edwards, Roger Schell, and Marvin Schaeffer of the National Computer Security Center; and Ted Lee of Univac. A number of people from government, government contractors, and vendors, including Jim Anderson, Steve Walker, Clark Weissman, and Steve Lipner, were cited as reviewers who influenced the content of the final product. [1]
In 1999, the Orange Book was replaced by the international Common Criteria for Information Technology Security Evaluation. [1]
On 24 October 2002, the Orange Book (DoD 5200.28-STD) was canceled by DoDD 8500.1, which was later reissued as DoDI 8500.02 on 14 March 2014. [5]
The security policy must be explicit, well-defined, and enforced by the computer system. Three basic security policy requirements are specified: a mandatory security policy, marking of objects with sensitivity labels, and a discretionary security policy. [6]
Individual accountability regardless of policy must be enforced. A secure means must exist to ensure that an authorized and competent agent can access and evaluate the accountability information within a reasonable amount of time and without undue difficulty. The accountability objective includes three requirements: identification, authentication, and auditing. [6]
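The three accountability mechanisms can be illustrated with a minimal sketch, assuming a salted password hash for authentication and an append-only list as the audit trail; the user name, salt, and log format here are hypothetical, not part of the TCSEC text.

```python
import hashlib
import time

# Illustrative only: identification (the claimed username),
# authentication (a salted password hash), and auditing (an
# append-only record of every attempt, success or failure).
USERS = {"alice": hashlib.sha256(b"salt:" + b"correct horse").hexdigest()}
AUDIT_LOG = []  # (timestamp, user, event, outcome) tuples

def authenticate(user, password):
    """Check the claimed identity against its stored credential and
    record the attempt so an authorized agent can review it later."""
    digest = hashlib.sha256(b"salt:" + password.encode()).hexdigest()
    ok = USERS.get(user) == digest
    AUDIT_LOG.append((time.time(), user, "login", "success" if ok else "failure"))
    return ok
```

Note that the failed attempt is logged as well: the accountability objective requires that misuse, not just legitimate use, leave a reviewable trace.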
The computer system must contain hardware/software mechanisms that can be independently evaluated to provide sufficient assurance that the system enforces the above requirements. By extension, assurance must include a guarantee that the trusted portion of the system works only as intended. To accomplish these objectives, two types of assurance are needed, with their respective elements: operational assurance and life-cycle assurance. [6]
Within each class, an additional set of documentation addresses the development, deployment, and management of the system rather than its capabilities. This documentation includes the Security Features User's Guide, the Trusted Facility Manual, test documentation, and design documentation.
The TCSEC defines four divisions: D, C, B, and A, where division A has the highest security. Each division represents a significant difference in the trust an individual or organization can place in the evaluated system. Additionally, divisions C, B, and A are broken into a series of hierarchical subdivisions called classes: C1, C2, B1, B2, B3, and A1. [7]
Each division and class expands or modifies as indicated the requirements of the immediately prior division or class. [7]
The publication entitled "Army Regulation 380-19" is an example of a guide to determining which system class should be used in a given situation. [12]
The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system that lie outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance with the system's security policy.
In the security engineering subspecialty of computer science, a trusted system is one that is relied upon to a specified extent to enforce a specified security policy. This is equivalent to saying that a trusted system is one whose failure would break a security policy.
The Common Criteria for Information Technology Security Evaluation is an international standard for computer security certification. It is currently in version 3.1 revision 5.
In computer security, a covert channel is a type of attack that creates a capability to transfer information objects between processes that are not supposed to be allowed to communicate under the computer security policy. The term, coined in 1973 by Butler Lampson, is defined as channels "not intended for information transfer at all, such as the service program's effect on system load," to distinguish them from legitimate channels that are subjected to access controls by COMPUSEC.
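A deliberately simplified sketch of a covert storage channel: two processes forbidden to communicate directly signal bits through an innocuous shared attribute, here the mere existence of a temporary file. Real channels exploit subtler state such as system load or disk quotas; the file name and the lock-step scheduling below are assumptions for illustration.

```python
import os
import tempfile

# Hypothetical shared attribute both parties can observe.
FLAG = os.path.join(tempfile.gettempdir(), "covert_demo_flag")

def send_bit(bit):
    """Encode a 1 by creating the flag file, a 0 by removing it."""
    if bit:
        open(FLAG, "w").close()
    elif os.path.exists(FLAG):
        os.remove(FLAG)

def receive_bit():
    """Recover the bit by observing the attribute, never its contents."""
    return 1 if os.path.exists(FLAG) else 0

# Lock-step demonstration: sender and receiver alternate time slots.
message = [1, 0, 1, 1, 0]
received = []
for b in message:
    send_bit(b)
    received.append(receive_bit())
```

The point of the example is that no data ever flows through a mediated read or write of the file's contents; the information rides on metadata the access-control policy does not cover, which is why covert channel analysis is a distinct TCSEC assurance requirement.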
In computing, security-evaluated operating systems have achieved certification from an external security-auditing organization; the most popular evaluations are Common Criteria (CC) and FIPS 140-2.
In computer security, mandatory access control (MAC) refers to a type of access control by which the operating system or database constrains the ability of a subject or initiator to access or generally perform some sort of operation on an object or target. In the case of operating systems, a subject is usually a process or thread; objects are constructs such as files, directories, TCP/UDP ports, shared memory segments, IO devices, etc. Subjects and objects each have a set of security attributes. Whenever a subject attempts to access an object, an authorization rule enforced by the operating system kernel examines these security attributes and decides whether the access can take place. Any operation by any subject on any object is tested against the set of authorization rules to determine if the operation is allowed. A database management system, in its access control mechanism, can also apply mandatory access control; in this case, the objects are tables, views, procedures, etc.
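The authorization rule described above can be sketched as a label-dominance check in the TCSEC style, where a label pairs a hierarchical level with a set of categories. The level names and category strings are illustrative, and a real kernel would of course enforce this on every reference, not as an advisory function.

```python
# System-fixed ordering of hierarchical levels (illustrative names).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def dominates(subject_label, object_label):
    """A label (level, categories) dominates another iff its level is
    at least as high and its category set contains the other's."""
    s_level, s_cats = subject_label
    o_level, o_cats = object_label
    return LEVELS[s_level] >= LEVELS[o_level] and s_cats >= o_cats

def mac_read_allowed(subject_label, object_label):
    """Kernel-side decision: a read is permitted only when the subject
    dominates the object. Neither the subject nor the object's owner
    can alter the labels or override the result."""
    return dominates(subject_label, object_label)
```

The defining property is visible in the code: the decision depends only on system-assigned attributes, never on the wishes of the object's owner.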
In computer security, discretionary access control (DAC) is a type of access control defined by the Trusted Computer System Evaluation Criteria (TCSEC) as a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission on to any other subject.
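The discretionary property, that a subject holding a permission may pass it on, can be sketched with a per-object access-control list. The class and method names are hypothetical.

```python
class DacObject:
    """An object whose access list is controlled by its subjects,
    not by a central mandatory policy (illustrative sketch)."""

    def __init__(self, owner):
        # The owner starts with full rights over the object.
        self.acl = {owner: {"read", "write"}}

    def allowed(self, subject, right):
        return right in self.acl.get(subject, set())

    def share(self, granter, grantee, right):
        """Discretionary passing: any holder of a right may extend it
        to another subject at its own discretion."""
        if right in self.acl.get(granter, set()):
            self.acl.setdefault(grantee, set()).add(right)
            return True
        return False
```

Once the owner shares a right, the recipient can share it onward without the owner's further involvement, which is exactly the propagation that mandatory access control is designed to prevent.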
Multilevel security or multiple levels of security (MLS) is the application of a computer system to process information with incompatible classifications, permit access by users with different security clearances and needs-to-know, and prevent users from obtaining access to information for which they lack authorization. There are two contexts for the use of multilevel security.
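The two classic MLS rules of the Bell-LaPadula model, which underlies the TCSEC mandatory-security classes, can be stated in a few lines; the level ordering here is illustrative and omits categories.

```python
# Illustrative total ordering of sensitivity levels, low to high.
ORDER = ("unclassified", "secret", "top_secret")
RANK = {name: i for i, name in enumerate(ORDER)}

def may_read(subject_level, object_level):
    # Simple security property ("no read up"): a subject may read
    # only at or below its own level.
    return RANK[subject_level] >= RANK[object_level]

def may_write(subject_level, object_level):
    # *-property ("no write down"): a subject may write only at or
    # above its own level, so high data cannot leak into low containers.
    return RANK[subject_level] <= RANK[object_level]
```

Together the two rules guarantee that information can only flow upward in classification, which is the formal core of preventing uncleared users from obtaining classified information.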
The Evaluation Assurance Level (EAL) of an IT product or system is a numerical grade assigned following the completion of a Common Criteria security evaluation, an international standard in effect since 1999. The increasing assurance levels reflect added assurance requirements that must be met to achieve Common Criteria certification. The intent of the higher levels is to provide higher confidence that the system's principal security features are reliably implemented. The EAL does not measure the security of the system itself; it simply states at what level the system was tested.
In operating systems architecture, a reference monitor concept defines a set of design requirements on a reference validation mechanism, which enforces an access control policy over subjects' ability to perform operations on objects on a system. The properties of a reference monitor are captured by the acronym NEAT, which means: Non-bypassable, Evaluable, Always invoked, and Tamper-proof.
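The "always invoked" and "non-bypassable" properties can be sketched by funneling every object reference through a single check; because subjects never hold direct references to objects, no access path exists around the monitor. All names here are illustrative.

```python
class ReferenceMonitor:
    """Minimal sketch of a reference validation mechanism: objects
    live only behind the monitor, so every operation is mediated."""

    def __init__(self, policy):
        self._policy = policy      # policy(subject, op, name) -> bool
        self._objects = {}         # reachable only through access()

    def register(self, name, obj):
        self._objects[name] = obj

    def access(self, subject, op, name):
        """Mediate every reference: deny unless the policy allows it."""
        if not self._policy(subject, op, name):
            raise PermissionError(f"{subject} may not {op} {name}")
        return self._objects[name]
```

Keeping the monitor this small is deliberate: the "Evaluable" property demands a mechanism compact enough to be analyzed and verified, which is why TCSEC class B3 requires the TCB to be minimized around exactly such a mechanism.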
Multiple single-level or multi-security level (MSL) is a means to separate different levels of data by using separate computers or virtual machines for each level. It aims to give some of the benefits of multilevel security without needing special changes to the OS or applications, but at the cost of needing extra hardware.
The XTS-400 is a multilevel secure computer operating system. It is a multiuser, multitasking system that uses multilevel scheduling in processing data and information. It works in networked environments and supports Gigabit Ethernet and both IPv4 and IPv6.
Solaris Trusted Extensions is a set of security extensions incorporated in the Solaris 10 operating system by Sun Microsystems, featuring a mandatory access control model. It succeeds Trusted Solaris, a family of security-evaluated operating systems based on earlier versions of Solaris.
A Protection Profile (PP) is a document used as part of the certification process according to ISO/IEC 15408 and the Common Criteria (CC). As the generic form of a Security Target (ST), it is typically created by a user or user community and provides an implementation independent specification of information assurance security requirements. A PP is a combination of threats, security objectives, assumptions, security functional requirements (SFRs), security assurance requirements (SARs) and rationales.
A cross-domain solution (CDS) is an integrated information assurance system composed of specialized software, and sometimes hardware, that provides a controlled interface to manually or automatically enable and/or restrict the access or transfer of information between two or more security domains based on a predetermined security policy. CDSs are designed to enforce domain separation and typically include some form of content filtering, which is used to designate information that is unauthorized for transfer between security domains or levels of classification, such as between different military divisions, intelligence agencies, or other operations which depend on the timely sharing of potentially sensitive information.
System high mode, or simply system high, is a security mode of using an automated information system (AIS) that pertains to an environment that contains restricted data that is classified in a hierarchical scheme, such as Top Secret, Secret and Unclassified. System high pertains to the IA features of information processed, and specifically not to the strength or trustworthiness of the system.
A separation kernel is a type of security kernel used to simulate a distributed environment. The concept was introduced by John Rushby in a 1981 paper. Rushby proposed the separation kernel as a solution to the difficulties and problems that had arisen in the development and verification of large, complex security kernels that were intended to "provide multilevel secure operation on general-purpose multi-user systems." According to Rushby, "the task of a separation kernel is to create an environment which is indistinguishable from that provided by a physically distributed system: it must appear as if each regime is a separate, isolated machine and that information can only flow from one machine to another along known external communication lines. One of the properties we must prove of a separation kernel, therefore, is that there are no channels for information flow between regimes other than those explicitly provided."
In information security, a guard is a device or system for allowing computers on otherwise separate networks to communicate, subject to configured constraints. In many respects a guard is like a firewall and guards may have similar functionality to a gateway.
The Controlled Access Protection Profile, also known as CAPP, is a Common Criteria security profile that specifies a set of functional and assurance requirements for information technology products. Software and systems that conform to CAPP standards provide access controls that are capable of enforcing access limitations on individual users and data objects. CAPP-conformant products also provide an audit capability which records the security-relevant events which occur within the system.
Security Controls for Computer Systems, commonly called the Ware report, is a 1970 text by Willis Ware that was foundational in the field of computer security.