HRU (security)

The HRU security model (Harrison, Ruzzo, Ullman model) is an operating system level computer security model which deals with the integrity of access rights in the system. It is an extension of the Graham-Denning model, based around the idea of a finite set of procedures being available to edit the access rights of a subject s on an object o. It is named after its three authors, Michael A. Harrison, Walter L. Ruzzo and Jeffrey D. Ullman.[1]

Along with presenting the model, Harrison, Ruzzo and Ullman also discussed the possibilities and limitations of proving the safety of systems using an algorithm.[1]

Description of the model

The HRU model defines a protection system consisting of a set of generic rights R and a set of commands C. An instantaneous description of the system is called a configuration and is defined as a tuple (S, O, P) of current subjects S, current objects O and an access matrix P. Since the subjects are required to be part of the objects (S ⊆ O), the access matrix contains one row for each subject and one column for each subject and object. The entry P[s, o] for a subject s and an object o is a subset of the generic rights R.
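
Such a configuration can be written down concretely. The following is a minimal sketch in Python, assuming a dictionary-of-sets representation of the access matrix; the particular subjects, objects and rights named here are invented for the example, not fixed by the model.

```python
# Generic rights R (an illustrative choice, not fixed by the model).
R = {"own", "read", "write"}

# Current subjects S and current objects O; S must be a subset of O.
S = {"alice", "bob"}
O = S | {"file1"}

# Access matrix P: one cell per (subject, object) pair, where each
# cell holds a subset of the generic rights R.
P = {(s, o): set() for s in S for o in O}
P[("alice", "file1")] = {"own", "read", "write"}
P[("bob", "file1")] = {"read"}
```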

The commands are composed of primitive operations and can additionally have a list of preconditions that require certain rights to be present for pairs of subjects and objects.

The primitive operations can modify the access matrix by adding or removing access rights for a pair of subject and object, and by adding or removing subjects or objects. Creation of a subject or object requires that it does not exist in the current configuration, while deletion of a subject or object requires that it existed prior to deletion. In a complex command, a sequence of operations is executed only as a whole: a failing operation makes the whole sequence fail, in the manner of a database transaction.
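
A hedged sketch of how such commands can be modeled, continuing the representation above: a command is a list of required rights plus a sequence of primitive operations, applied to a copy of the configuration so that a failing operation leaves the original untouched. The Config class, the Python function names, and the transactional mechanics are our own; only the six primitive operations themselves come from the model.

```python
import copy

class Config:
    """A configuration (S, O, P): subjects, objects, access matrix."""
    def __init__(self, subjects, objects, matrix):
        self.S, self.O, self.P = set(subjects), set(objects), dict(matrix)

# The model's six primitive operations (Python names are ours).
def enter(c, r, s, o):
    c.P.setdefault((s, o), set()).add(r)

def delete(c, r, s, o):
    c.P.get((s, o), set()).discard(r)

def create_subject(c, s):
    assert s not in c.O  # must not already exist
    c.S.add(s)
    c.O.add(s)

def create_object(c, o):
    assert o not in c.O  # must not already exist
    c.O.add(o)

def destroy_subject(c, s):
    assert s in c.S  # must exist prior to deletion
    c.S.discard(s)
    c.O.discard(s)
    c.P = {(x, y): rs for (x, y), rs in c.P.items() if s not in (x, y)}

def destroy_object(c, o):
    assert o in c.O and o not in c.S  # must exist prior to deletion
    c.O.discard(o)
    c.P = {(x, y): rs for (x, y), rs in c.P.items() if y != o}

def run_command(c, preconditions, operations):
    """Execute a command as a whole: check every required right, then
    apply all operations to a copy; a failing operation aborts the
    sequence and leaves the configuration unchanged."""
    if not all(r in c.P.get((s, o), set()) for (r, s, o) in preconditions):
        return c
    trial = copy.deepcopy(c)
    try:
        for op, args in operations:
            op(trial, *args)
    except AssertionError:
        return c
    return trial
```

For example, a hypothetical command letting the owner of a file grant read access to another subject could be run as:

```python
c = Config({"alice", "bob"}, {"alice", "bob", "file1"},
           {("alice", "file1"): {"own"}})
c = run_command(c, preconditions=[("own", "alice", "file1")],
                operations=[(enter, ("read", "bob", "file1"))])
assert "read" in c.P[("bob", "file1")]
```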

Discussion of safety

Harrison, Ruzzo and Ullman[1] discussed whether there is an algorithm that takes an arbitrary initial configuration and answers the following question: is there any sequence of commands that adds a generic right into a cell of the access matrix where it was not present in the initial configuration?

They showed that no such algorithm exists, i.e. the problem is undecidable in the general case. They also showed that restricting the model to commands with only one primitive operation each (mono-operational commands) renders the problem decidable.
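
Undecidability means no complete decision procedure exists, but the safety question itself is easy to state operationally. The sketch below, which assumes commands are modeled as functions that return a new access matrix (or None when their preconditions fail), performs a bounded breadth-first search for a leak; the depth bound is exactly what keeps it from being a decision procedure.

```python
from collections import deque

def leaks(initial_P, commands, right, max_depth=4):
    """Search for a 'leak': some command sequence entering `right` into
    a cell that did not contain it initially. The depth bound makes
    this incomplete by necessity: the general problem is undecidable."""
    def freeze(P):
        return frozenset((cell, frozenset(rs)) for cell, rs in P.items())

    seen = {freeze(initial_P)}
    queue = deque([(initial_P, 0)])
    while queue:
        P, depth = queue.popleft()
        for cell, rights in P.items():
            if right in rights and right not in initial_P.get(cell, set()):
                return True  # `right` appeared in a new cell
        if depth == max_depth:
            continue
        for cmd in commands:
            Q = cmd(P)  # None signals failed preconditions
            if Q is not None and freeze(Q) not in seen:
                seen.add(freeze(Q))
                queue.append((Q, depth + 1))
    return False

# Example: an owner may pass read access on; "read" then leaks to bob.
def confer_read(P):
    if "own" not in P.get(("alice", "file1"), set()):
        return None
    Q = {cell: set(rs) for cell, rs in P.items()}
    Q.setdefault(("bob", "file1"), set()).add("read")
    return Q

print(leaks({("alice", "file1"): {"own"}, ("bob", "file1"): set()},
            [confer_read], "read"))  # True
```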

References

  1. Harrison, Michael A.; Ruzzo, Walter L.; Ullman, Jeffrey D. (August 1976). "Protection in Operating Systems". Communications of the ACM. 19 (8): 461–471. CiteSeerX 10.1.1.106.7226. doi:10.1145/360303.360333.