Multiple single-level

Multiple single-level or multi-security level (MSL) is a means to separate different levels of data by using separate computers or virtual machines for each level. It aims to give some of the benefits of multilevel security without requiring special changes to the OS or applications, but at the cost of additional hardware.

The drive to develop MLS operating systems was severely hampered by the dramatic fall in data processing costs in the early 1990s. Before the advent of desktop computing, users with classified processing requirements had to either spend large sums on dedicated computers or use one that hosted an MLS operating system. Throughout the 1990s, however, many offices in the defense and intelligence communities took advantage of falling computing costs to deploy desktop systems classified to operate only at the highest classification level used in their organization. These desktop computers operated in system high mode and were connected to LANs that carried traffic at the same level as the computers.

MSL implementations such as these neatly avoided the complexities of MLS, but the technical simplicity came at the price of inefficient use of space and hardware. Because most users in classified environments also needed unclassified systems, users often had at least two computers and sometimes more (one for unclassified processing and one for each classification level processed). In addition, each computer was connected to its own LAN at the appropriate classification level, meaning that multiple dedicated cabling plants had to be installed (at considerable cost in terms of both installation and maintenance).

Limits of MSL versus MLS

The obvious shortcoming of MSL (as compared to MLS) is that it does not support the intermixing of data at different classification levels in any manner. For example, the notion of concatenating a SECRET data stream (taken from a SECRET file) with a TOP SECRET data stream (read from a TOP SECRET file) and directing the resultant TOP SECRET data stream into a TOP SECRET file is unsupported. In essence, an MSL system can be thought of as a set of parallel (and collocated) computer systems, each restricted to operation at one, and only one, security level. Indeed, the individual MSL operating systems may not even understand the concept of security levels, since they operate as single-level systems. For example, while one of a set of collocated MSL operating systems may be configured to affix the character string "SECRET" to all output, that OS has no understanding of how its data compares in sensitivity and criticality to the data processed by its peer OS, which affixes the string "UNCLASSIFIED" to all of its output.
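
The concatenation example above is an instance of the "high-water-mark" rule an MLS system applies when data of different levels is combined: the result carries the least upper bound of the input labels. The following Python sketch is illustrative only; the linear ordering of level names is an assumption made for the example, and it is precisely the kind of computation a single-level MSL peer cannot express.

    # Illustrative sketch of the "high-water-mark" join an MLS system performs
    # when combining data streams; the linear ordering of levels is assumed.
    LEVELS = ["UNCLASSIFIED", "SECRET", "TOP SECRET"]

    def join(level_a, level_b):
        """Least upper bound of two levels: combined data takes the higher label."""
        return max(level_a, level_b, key=LEVELS.index)

    # Concatenating a SECRET stream with a TOP SECRET stream yields TOP SECRET
    # data, which may then be written only into a TOP SECRET file. A single-level
    # MSL peer has no representation of this operation at all.
    assert join("SECRET", "TOP SECRET") == "TOP SECRET"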

Operations that span two or more security levels must therefore rely on methods outside the purview of the MSL "operating systems" themselves, and they typically require human intervention, termed "manual review". For example, an independent monitor (not in Brinch Hansen's sense of the term) may be provided to support migration of data among multiple MSL peers (e.g., copying a data file from the UNCLASSIFIED peer to the SECRET peer). Although no strict requirements by way of federal legislation specifically address the concern, it would be appropriate for such a monitor to be quite small, purpose-built, and supportive of only a small number of very rigidly defined operations, such as importing and exporting files, configuring output labels, and other maintenance/administration tasks that require handling all the collocated MSL peers as a unit rather than as individual, single-level systems. It may also be appropriate to utilize a hypervisor software architecture, such as VMware, to provide a set of peer MSL "OS" in the form of distinct, virtualized environments supported by an underlying OS that is accessible only to administrators cleared for all of the data managed by any of the peers. From the users' perspective, each peer would present a login or X display manager session logically indistinguishable from the underlying "maintenance OS" user environment.
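
No standard specifies the monitor's interface, so the following Python sketch is purely a hypothetical illustration (all function, path, and level names are invented) of the shape one rigidly defined operation, file import between peers, might take: transfers toward a higher level proceed automatically, while transfers toward a lower level are refused unless a named human reviewer has performed the manual review.

    # Hypothetical sketch of one rigidly defined monitor operation: importing a
    # file from one single-level peer into another. Names and levels illustrative.
    import shutil

    LEVELS = ["UNCLASSIFIED", "SECRET", "TOP SECRET"]

    def dominates(high, low):
        return LEVELS.index(high) >= LEVELS.index(low)

    def import_file(src_level, src_path, dst_level, dst_path, reviewed_by=None):
        """Copy a file between peers; downward transfers need manual review."""
        if dominates(dst_level, src_level):
            shutil.copyfile(src_path, dst_path)   # e.g., UNCLASSIFIED peer -> SECRET peer
        elif reviewed_by is not None:
            shutil.copyfile(src_path, dst_path)   # downgrade only after human review
        else:
            raise PermissionError("transfer to a lower level requires manual review")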

Advances in MSL

The cost and complexity involved in maintaining distinct networks for each level of classification led the National Security Agency (NSA) to begin research into ways in which the MSL concept of dedicated system high systems could be preserved while reducing the physical investment demanded by multiple networks and computers. Periods processing was the first advance in this area, establishing protocols by which agencies could connect a computer to a network at one classification, process information, sanitize the system, and connect it to a different network with another classification. The periods processing model offered the promise of a single computer but did nothing to reduce multiple cabling plants and proved enormously inconvenient to users; accordingly, its adoption was limited.

In the 1990s, the rise of virtualization technology changed the playing field for MSL systems. Suddenly, it was possible to create virtual machines (VMs) that behaved as independent computers but ran on a common hardware platform. With virtualization, NSA saw a way to preserve periods processing at a virtual level: by performing all processing within dedicated, system-high VMs, the physical system no longer needed to be sanitized between sessions. To make MSL work in a virtual environment, however, it was necessary to find a way to securely control the virtual session manager and ensure that no compromising activity directed at one VM could compromise another.

MSL solutions

NSA pursued multiple programs aimed at creating viable, secure MSL technologies leveraging virtualization. To date, three major solutions have materialized.

Both the NetTop and Trusted Multi-Net solutions have been approved for use. In addition, Trusted Computer Solutions has developed a thin-client product, originally based on the NetTop technology concepts through a licensing agreement with NSA. This product is called SecureOffice® Trusted Thin Client™, and runs on the LSPP configuration of Red Hat Enterprise Linux version 5 (RHEL5).

Three competing companies have implemented MILS separation kernels:

In addition, there have been advances in the development of non-virtualization MSL systems through the use of specialized hardware, resulting in at least one viable solution:

Philosophical aspects, ease of use, flexibility

It is interesting to consider the philosophical implications of the MSL "solution path." Rather than providing MLS abilities within a classical OS, the chosen direction is to build a set of "virtual OS" peers that can be managed, individually and as a collective, by an underlying real OS. If the underlying OS (let us introduce the term maintenance operating system, or MOS) is to have sufficient understanding of MLS semantics to prevent grievous errors, such as copying data from a TOP SECRET MSL peer to an UNCLASSIFIED MSL peer, then the MOS must have the ability to: represent labels; associate labels with entities (here we rigorously avoid the terms "subject" and "object"); compare labels (rigorously avoiding the term "reference monitor"); distinguish between those contexts where labels are meaningful and those where they are not (rigorously avoiding the term "trusted computing base" [TCB]); the list goes on. One readily perceives that the MLS architecture and design issues have not been eliminated, merely deferred to a separate stratum of software that invisibly manages mandatory access control concerns so that superjacent strata need not. This concept is none other than the germinal architectural concept (taken from the Anderson Report) underlying DoD-style trusted systems in the first place.
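
As a concrete but purely illustrative reading of that list, the Python sketch below shows the minimal label machinery such a MOS would need: a label representation, an association of labels with entities (here, peer VMs), and a dominance comparison used to refuse copies from a more sensitive peer to a less sensitive one. The level, class, and peer names are assumptions made for the example.

    # Illustrative sketch of the minimal label machinery a maintenance OS needs.
    from dataclasses import dataclass

    LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP SECRET": 2}

    @dataclass(frozen=True)
    class Label:
        level: str
        compartments: frozenset = frozenset()

        def dominates(self, other):
            """Higher-or-equal level and a superset of compartments."""
            return (LEVELS[self.level] >= LEVELS[other.level]
                    and self.compartments >= other.compartments)

    @dataclass
    class Peer:                      # an MSL peer VM as the MOS sees it
        name: str
        label: Label

    def may_copy(src, dst):
        """Permit a copy only when the destination's label dominates the source's."""
        return dst.label.dominates(src.label)

    top_secret_peer = Peer("ts-vm", Label("TOP SECRET"))
    unclassified_peer = Peer("u-vm", Label("UNCLASSIFIED"))
    assert not may_copy(top_secret_peer, unclassified_peer)   # the grievous error, refused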

What the set-of-MSL-peers abstraction has positively achieved is a radical restriction of the scope of MAC-cognizant software mechanisms to the small, subjacent MOS. This has been accomplished, however, at the cost of eliminating any practical MLS abilities, even the most elementary ones, as when a SECRET-cleared user appends an UNCLASSIFIED paragraph, taken from an UNCLASSIFIED file, to his SECRET report. The MSL implementation would obviously require every "reusable" resource (in this example, the UNCLASSIFIED file) to be replicated across every MSL peer that might find it useful—meaning either much secondary storage needlessly expended or an intolerable burden on the cleared administrator able to effect such replications in response to users' requests. (Of course, since the SECRET user cannot "browse" the system's UNCLASSIFIED offerings other than by logging out and starting an UNCLASSIFIED session afresh, yet another severe limitation on functionality and flexibility is evident.) Alternatively, less sensitive file systems could be NFS-mounted read-only so that more trustworthy users could browse, but not modify, their content. Even so, the MSL OS peer would have no actual means of distinguishing (via a directory listing command, for example) that the NFS-mounted resources are at a different level of sensitivity than the local resources, and no strict means of preventing the illegal downward flow of sensitive information other than the brute-force, all-or-nothing mechanism of read-only NFS mounting.
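
The replication burden described here grows with the number of levels at which a file must be visible: every peer whose level dominates the file's own level needs its own copy. A minimal Python sketch, with an assumed three-level ordering, makes the fan-out explicit.

    # Sketch of the replication fan-out: a file must be copied to every peer whose
    # level dominates the file's level, so the least sensitive data is duplicated
    # the most. The three-level ordering is assumed for illustration.
    LEVELS = ["UNCLASSIFIED", "SECRET", "TOP SECRET"]

    def peers_needing_copy(file_level):
        """Peers that must hold a replica for the file to be browsable there."""
        return LEVELS[LEVELS.index(file_level):]

    print(peers_needing_copy("UNCLASSIFIED"))   # ['UNCLASSIFIED', 'SECRET', 'TOP SECRET']
    print(peers_needing_copy("TOP SECRET"))     # ['TOP SECRET']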

To demonstrate just how severe a handicap this crude form of "cross-level file sharing" actually is, consider the case of an MLS system that supports UNCLASSIFIED, SECRET, and TOP SECRET data, and a TOP SECRET-cleared user who logs into the system at that level. MLS directory structures are built around the containment principle, which, loosely speaking, dictates that higher sensitivity levels reside deeper in the tree: commonly, the level of a directory must match or dominate that of its parent, while the level of a file (more specifically, of any link thereto) must match that of the directory that catalogs it. (This is strictly true of MLS UNIX: alternatives that support different conceptions of directories, directory entries, i-nodes, etc.—such as Multics, which adds the "branch" abstraction to its directory paradigm—tolerate a broader set of alternative implementations.) Orthogonal mechanisms are provided for publicly shared and spool directories, such as /tmp or C:\TEMP, which are automatically—and invisibly—partitioned by the OS, with users' file access requests automatically "deflected" to the appropriately labeled directory partition. The TOP SECRET user is free to browse the entire system, his only restriction being that, while logged in at that level, he may create fresh TOP SECRET files only within specific directories or their descendants. In the MSL alternative, where any browsable content must be specifically, laboriously replicated across all applicable levels by a fully cleared administrator—meaning, in this case, that all SECRET data must be replicated to the TOP SECRET MSL peer OS, while all UNCLASSIFIED data must be replicated to both the SECRET and TOP SECRET peers—one can readily perceive that the more highly cleared the user, the more frustrating his timesharing computing experience will be.
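
The containment rule described above can be stated compactly in code. The following Python fragment is illustrative only, assuming the simple linear ordering of levels and the MLS UNIX conventions the text mentions (richer directory models tolerate other rules).

    # Illustrative check of the containment principle for a linearly ordered set of
    # levels: a directory's level must dominate its parent's, and a file's level
    # (strictly, any link to it) must equal that of the directory cataloguing it.
    LEVELS = ["UNCLASSIFIED", "SECRET", "TOP SECRET"]

    def rank(level):
        return LEVELS.index(level)

    def directory_ok(dir_level, parent_level):
        """Directories may only grow more sensitive as the tree deepens."""
        return rank(dir_level) >= rank(parent_level)

    def file_ok(file_level, dir_level):
        """A file must carry the same level as its cataloguing directory."""
        return file_level == dir_level

    assert directory_ok("TOP SECRET", "SECRET")       # deeper may dominate its parent
    assert not directory_ok("SECRET", "TOP SECRET")   # never less sensitive than parent
    assert file_ok("SECRET", "SECRET")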

In a classical trusted systems-theoretic sense—relying upon terminology and concepts taken from the Orange Book, the foundation of trusted computing—a system that supports MSL peers could not achieve a level of assurance beyond (B1). This is because the (B2) criteria require, among other things, both clear identification of a TCB perimeter and the existence of a single, identifiable entity that has the ability and authority to adjudicate access to all data represented throughout all accessible resources of the ADP system. In a very real sense, then, the application of the term "high assurance" as a descriptor of MSL implementations is nonsensical, since the term "high assurance" is properly limited to (B3) and (A1) systems—and, with some laxity, to (B2) systems.

Cross-domain solutions

MSL systems, whether virtual or physical in nature, are designed to preserve isolation between different classification levels. Consequently (unlike MLS systems), an MSL environment has no innate ability to move data from one level to another.

To permit data sharing between computers working at different classification levels, such sites deploy cross-domain solutions (CDS), which are commonly referred to as gatekeepers or guards. Guards, which often leverage MLS technologies themselves, filter traffic flowing between networks; unlike a commercial Internet firewall, however, a guard is built to much more stringent assurance requirements and its filtering is carefully designed to try to prevent any improper leakage of classified information between LANs operating at different security levels.
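
A guard's filtering rules are site- and policy-specific, so the following Python sketch is only a hypothetical illustration of the general shape of a release decision: every configured check must pass before a message crosses the boundary, and any failure blocks it. The label check and the dirty-word search shown here are invented examples, not any product's actual filters.

    # Hypothetical shape of a guard's release decision: a message crosses from the
    # high-side LAN to the low-side LAN only if every configured filter passes.
    def label_permits_release(msg_level, dst_network_level):
        """Only data already marked at the destination network's level may cross."""
        return msg_level == dst_network_level

    def no_dirty_words(text, banned=("SECRET//", "NOFORN")):
        """Crude content check for markings that must never appear on the low side."""
        return not any(word in text for word in banned)

    def guard_release(msg_level, text, dst_network_level):
        checks = (label_permits_release(msg_level, dst_network_level),
                  no_dirty_words(text))
        return all(checks)            # fail closed: any failed check blocks the message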

Data diode technologies are used extensively where data flows are required to be restricted to one direction between levels, with a high level of assurance that data will not flow in the opposite direction. In general, these are subject to the same restrictions that have imposed challenges on other MLS solutions: strict security assessment and the need to provide an electronic equivalent of stated policy for moving information between classifications. (Moving information down in classification level is particularly challenging and typically requires approval from several different people.)
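
In software terms, the policy a data diode enforces reduces to a one-directional check. The sketch below is illustrative only: the low-to-high direction shown is one common arrangement, the level names are assumed, and in real deployments the reverse path is absent in hardware rather than merely refused in code.

    # Illustrative one-way rule: transfers are permitted only in the configured
    # low-to-high direction. Level names and ordering are assumed for the example.
    LEVELS = ["UNCLASSIFIED", "SECRET", "TOP SECRET"]

    def diode_allows(src_level, dst_level):
        return LEVELS.index(dst_level) > LEVELS.index(src_level)

    assert diode_allows("SECRET", "TOP SECRET")        # upward flow permitted
    assert not diode_allows("TOP SECRET", "SECRET")    # downward flow blocked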

As of late 2005, numerous high-assurance platforms and guard applications have been approved for use in classified environments. N.b. that the term "high-assurance" as employed here is to be evaluated in the context of DCID 6/3 (read "dee skid six three"), a quasi-technical guide to the construction and deployment of various systems for processing classified information, lacking both the precise legal rigidity of the Orange Book criteria and the underlying mathematical rigor. (The Orange Book is motivated by, and derived from, a logical "chain of reasoning" constructed as follows: [a] a "secure" state is mathematically defined, and a mathematical model is constructed, the operations upon which preserve secure state so that any conceivable sequence of operations starting from a secure state yields a secure state; [b] a mapping of judiciously chosen primitives to sequences of operations upon the model; and [c] a "descriptive top-level specification" that maps actions that can be transacted at the user interface (such as system calls) into sequences of primitives; but stopping short of either [d] formally demonstrating that a live software implementation correctly implements said sequences of actions; or [e] formally arguing that the executable, now "trusted," system is generated by correct, reliable tools [e.g., compilers, librarians, linkers].)
