AIM Multiuser Benchmark

The AIM Multiuser Benchmark, also called the AIM Benchmark Suite VII or AIM7, is a job throughput benchmark widely used by UNIX computer system vendors. Current research operating systems such as K42 use [1] the reaim [2] form of the benchmark for performance analysis. AIM7 measures some of the same aspects of system performance as the SDET benchmark.

The original code was developed by Gene Dronek for AIM Technology, Inc., which licensed it to others. The first AIM Benchmarks were for single-user PCs; the suite was expanded and enhanced into multi-user benchmarks by Donald Steiny. Caldera International, Inc., bought the license and released [3] the source code for Suite VII and Suite IX under the GPL.

AIM7 is a program written in C that forks many processes called tasks, each of which concurrently runs a set of subtests called jobs in random order. There are 53 kinds of jobs, each of which exercises a different aspect of the operating system, such as disk-file operations, process creation, user virtual memory operations, pipe I/O, and compute-bound arithmetic loops. [4]
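
This task/job structure can be illustrated with a minimal sketch in C. This is not the actual AIM7 source: the job functions, the job count, and the shuffling details below are hypothetical stand-ins for AIM7's 53 job types.

    #define _POSIX_C_SOURCE 199309L
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define NUM_JOBS 3                 /* stand-in; AIM7 defines 53 job types */

    typedef void (*job_fn)(void);

    /* Hypothetical jobs standing in for AIM7's disk, pipe, and arithmetic tests. */
    static void disk_job(void)  { /* would exercise disk-file operations */ }
    static void pipe_job(void)  { /* would exercise pipe I/O */ }
    static void arith_job(void) { volatile long s = 0; for (long i = 0; i < 1000000L; i++) s += i; }

    static job_fn jobs[NUM_JOBS] = { disk_job, pipe_job, arith_job };

    /* A task runs every job once, in a randomly shuffled order. */
    static void run_task(unsigned seed)
    {
        int order[NUM_JOBS];
        srand(seed);
        for (int i = 0; i < NUM_JOBS; i++)
            order[i] = i;
        for (int i = NUM_JOBS - 1; i > 0; i--) {   /* Fisher-Yates shuffle */
            int j = rand() % (i + 1);
            int t = order[i]; order[i] = order[j]; order[j] = t;
        }
        for (int i = 0; i < NUM_JOBS; i++)
            jobs[order[i]]();
    }

    /* Fork ntasks concurrent task processes and wait for all to finish. */
    static void run_subrun(int ntasks)
    {
        for (int t = 0; t < ntasks; t++) {
            pid_t pid = fork();
            if (pid == 0) {
                run_task((unsigned)getpid());
                _exit(0);
            }
        }
        while (wait(NULL) > 0)
            ;
    }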

An AIM7 benchmark run is composed of a sequence of subruns, with the number of tasks incremented by one between subruns. Each subrun continues until every one of its tasks has completed its set of jobs, and reports a throughput metric of jobs completed per minute; the final report for the overall benchmark is a table of that metric versus the number of tasks. A given system will have a peak number of tasks N at which jobs per minute is maximized. Either N itself or the jobs-per-minute value at N is typically used as the metric of interest.
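
Continuing the sketch above (reusing its hypothetical run_subrun() and NUM_JOBS), a driver for this subrun sequence might look like the following; the task limit of 64 and the timing details are illustrative, not AIM7's.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        int    peak_tasks = 0;
        double peak_jpm   = 0.0;

        /* Subruns with 1, 2, ... tasks; the limit of 64 is arbitrary here. */
        for (int ntasks = 1; ntasks <= 64; ntasks++) {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            run_subrun(ntasks);              /* every task completes its job set */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            double jpm  = ntasks * NUM_JOBS * 60.0 / secs;   /* jobs per minute */
            printf("%3d tasks: %10.1f jobs/min\n", ntasks, jpm);

            if (jpm > peak_jpm) {            /* track the peak task count N */
                peak_jpm   = jpm;
                peak_tasks = ntasks;
            }
        }
        printf("peak: %d tasks at %.1f jobs/min\n", peak_tasks, peak_jpm);
        return 0;
    }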

References