The Whetstone benchmark is a synthetic benchmark for evaluating the performance of computers. [1] It was first written in ALGOL 60 in 1972 at the Technical Support Unit of the Department of Trade and Industry (later part of the Central Computer and Telecommunications Agency) in the United Kingdom. It was derived from statistics on program behaviour gathered on the KDF9 computer at the National Physical Laboratory (NPL), using a modified version of its Whetstone ALGOL 60 compiler. [2] The workload on the machine was represented as a set of frequencies of execution of the 124 instructions of the Whetstone Code. The Whetstone Compiler was built at the Atomic Power Division of the English Electric Company in Whetstone, Leicestershire, England, [3] hence its name. Dr. B. A. Wichmann at NPL produced a set of 42 simple ALGOL 60 statements which, in a suitable combination, matched the execution statistics.
To make a more practical benchmark, Harold Curnow of TSU wrote a program incorporating the 42 statements. This program worked in its ALGOL 60 version, but when translated into FORTRAN it was not executed correctly by the IBM optimizing compiler: calculations whose results were not output were optimized away. He then produced a set of program fragments which were more like real code and which collectively matched the original 124 Whetstone instructions. Timing this program gave a measure of the machine's speed in thousands of Whetstone instructions per second (kWIPS). The FORTRAN version became the first general purpose benchmark that set industry standards of computer system performance. Further development was carried out by Roy Longbottom, also of TSU/CCTA, who became the official design authority.
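The optimization problem can be illustrated with a minimal sketch (in modern C rather than the original FORTRAN; the constants and loops are illustrative only): a loop whose result is never used may legally be deleted by an optimizing compiler, so timing it measures nothing, whereas printing or otherwise consuming the result forces the calculations to be performed. The benchmark itself outputs its numeric results, which serves the same purpose.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative only: the first loop's result is never used, so an
       optimizing compiler is free to remove the loop entirely, and
       timing it would measure nothing. */
    double x = 1.0;
    for (long i = 0; i < 100000000L; i++)
        x = (x + 2.0) * 0.5;

    /* The second loop's result is printed, so the calculations cannot
       be discarded and the measured time reflects real work. */
    double y = 1.0;
    for (long i = 0; i < 100000000L; i++)
        y = (y + 2.0) * 0.5;
    printf("%f\n", y);

    return 0;
}
```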
In July 2010 the ALGOL 60 program ran under the Whetstone compiler for the first time since the last KDF9 was shut down in 1980, this time executed by a KDF9 emulator. [4]
The benchmark employs eight test procedures: three executing standard floating point calculations, two exercising functions such as COS and EXP, and one each for integer arithmetic, branching and memory assignments. Output from the original comprised the parameters used for each test, the numeric results produced and the overall kWIPS performance rating. In 1978, the program was updated to log the running time of each of the tests, allowing MFLOPS (millions of floating point operations per second) to be included in reports, along with an estimate of integer MIPS (millions of instructions per second). In 1987, MFLOPS calculations were included in the log for the three appropriate tests, and MOPS (millions of operations per second) for the others. Code changes were also carried out, including by Bangor University, where necessary to identify unexpected behaviour, without changing the implementation of the original 124 Whetstone instructions. One necessary change was to maintain measurement accuracy at increasing CPU speeds, with self-calibration so that the benchmark runs for a noticeable finite time, typically set to 10 seconds, or 100 seconds for early PCs with low timer resolution.
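The self-calibration and reporting scheme just described can be sketched in C as follows. This is a simplified illustration, not the official source code: whetstone_pass is a placeholder for the eight real test procedures, and one iteration is assumed to represent one million Whetstone instructions.

```c
#include <stdio.h>
#include <time.h>

/* Placeholder for one pass over the eight test procedures; the real
   benchmark executes a fixed mix equivalent to the original 124
   Whetstone instructions, here assumed to amount to one million
   Whetstone instructions per iteration. */
static double whetstone_pass(long iterations)
{
    double x = 1.0, y = 1.0;
    for (long i = 0; i < iterations; i++) {
        x = (x + y) * 0.499975;      /* placeholder floating point work */
        y = (x + y) * 0.50025;
    }
    return x + y;                    /* returned so the work is not optimized away */
}

int main(void)
{
    const double target = 10.0;      /* ~10 s, or 100 s where timer resolution is poor */
    long count = 1000000;
    double elapsed, sink = 0.0;

    /* Self-calibration: grow the workload until it runs long enough to
       be timed reliably, then scale it to the target duration. */
    do {
        clock_t t0 = clock();
        sink += whetstone_pass(count);
        elapsed = (double)(clock() - t0) / CLOCKS_PER_SEC;
        if (elapsed < 0.5)
            count *= 2;
    } while (elapsed < 0.5);
    count = (long)(count * target / elapsed);

    /* Timed run: with one million Whetstone instructions per iteration,
       the rating is simply iterations per second of elapsed time. */
    clock_t t0 = clock();
    sink += whetstone_pass(count);
    elapsed = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("Elapsed %.2f s, rating %.1f MWIPS (check value %g)\n",
           elapsed, (double)count / elapsed, sink);
    return 0;
}
```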
Note that there are other versions of the Whetstone Benchmark available online, some claiming copyright, without reference to CCTA or the design authority.
In conjunction with the undertaking controlled by the Contracts Division, CCTA engineers had responsibility for designing and supervising acceptance trials [5] of all UK Government computers and those centrally funded for universities and Research Councils, with systems varying from minicomputers to supercomputers. This provided the opportunity to gather verified Whetstone Benchmark results. Other results were obtained via new computer system appraisal activities.
CCTA records, including technical reports, are now available in The UK National Archives. [6] Original Whetstone Benchmark results are in the 1985 CCTA Technical Memorandum 1182, where overall speed is only shown as MWIPS. This contains more than 1000 results for 244 computers from 32 manufacturers, including the first results for PCs and supercomputers. The report may well be accessible from the Archives. The details were later included in a publicly available report (see Available Reports below).
Roy Longbottom converted the original Whetstone Benchmark to fully exploit the capabilities of the new vector processors. Results were included in the paper “Performance of Multi-User Supercomputing Facilities” presented at the 1989 Fourth International Conference on Supercomputing, Santa Clara. [7][8]
This was also repeated in Harold Curnow’s paper “Whither Whetstone? The synthetic benchmark after 15 years”, presented at the 1990 conference “Evaluating supercomputers: strategies for exploiting, evaluating and benchmarking computers with advanced architecture” and published in book form. [9]
Curnow also reported comments from the 1989 conference “Software for Parallel Computers”, where a presentation by Gordon Bell, designer of the Digital Equipment Corporation VAX range of minicomputers, indicated that the range was designed to perform well on the Whetstone Benchmark.
The Whetstone Benchmark also had high visibility concerning floating point performance of Intel CPUs and PCs, starting with the 1980 Intel 8087 coprocessor. This was reported in the 1986 Intel Application Report “High Speed Numerics with the 80186/80188 and 8087”. [10] The 8087 includes hardware functions for exponential, logarithmic and trigonometric calculations, as used in two of the eight Whetstone Benchmark tests, where these can dominate running time. Only two other benchmarks were included in the Intel procedures, all three programs showing huge gains over the earlier software based routines.
Later tests, by an SSEMC laboratory, evaluated Intel 80486 compatible CPU chips using their Universal Chip Analyzer. [11] Considering the two floating point benchmarks used by Intel in the above report, they preferred Whetstone, stating “Whetstone utilizes the complete set of instructions available on early x87 FPUs”. This might suggest that the Whetstone Benchmark influenced the hardware instruction set.
By the 1990s the Whetstone Benchmark and its results had become relatively popular. An earlier notable quotation, from 1985, appeared in “A portable seismic computing benchmark” in the European Association of Geoscientists and Engineers journal: "The only commonly used benchmark to my knowledge is the venerable Whetstone benchmark, designed many years ago to test floating point operations". [12]
Details of the Vector Whetstone Benchmark performance were also presented by Roy Longbottom at the June 1990 Advanced Computing Seminar at the Natural Environment Research Council, Wallingford. This led to the Council for the Central Laboratory of the Research Councils Distributed Computing Support collecting results from running the benchmark “on a variety of machines, including vector supercomputers, minisupers, super-workstations and workstations, together with that obtained on a number of vector CPUs and on single nodes of various MPP machines”. More than 200 results, up to 2006, are included in the report available on the Wayback Machine Archive, in entries to at least the year 2007 section. [13] The report also indicated that “The wide variety of standard functions exercised (sqrt, exp, cos etc.) consume a far larger fraction of the reported times”.
On achieving 1 MWIPS, the Digital Equipment Corporation VAX-11/780 minicomputer became accepted as the first commercially available 32-bit computer to demonstrate 1 MIPS (millions of instructions per second), as reported by CERN, [14] although this is not really appropriate for a benchmark dependent on floating point speed. This had an impact on the Dhrystone Benchmark, the second accepted general purpose computer performance measurement program, which has no floating point calculations. It produced a result of 1757 Dhrystones per second on the VAX-11/780, leading to a revised measurement of 1 DMIPS (also known as VAX MIPS), obtained by dividing the original result by 1757.
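As a worked illustration of that normalisation (the function name is illustrative; the 1757 reference figure is from the text above):

```c
#include <stdio.h>

/* DMIPS normalisation described above: raw Dhrystones per second are
   divided by the VAX-11/780 reference figure of 1757, so the VAX
   itself rates exactly 1 DMIPS ("VAX MIPS"). */
static double dmips(double dhrystones_per_second)
{
    return dhrystones_per_second / 1757.0;
}

int main(void)
{
    printf("VAX-11/780: %.2f DMIPS\n", dmips(1757.0));                  /* 1.00 */
    printf("Hypothetical CPU at 175700 Dhrystones/s: %.0f DMIPS\n",
           dmips(175700.0));                                            /* 100  */
    return 0;
}
```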
Following retirement from CCTA, Roy Longbottom continued providing free benchmarking and stress testing programs on his website, latterly roylongbottom.org.uk, with most development in C for Microsoft Windows and Linux based operating systems on PCs. This was initially in conjunction with the CompuServe Benchmarks and Standards Forum (see Wayback Machine Archive), [15] covering PC hardware from 1997 to 2008 and providing numerous new benchmark results.
From 2008 to 2013 further PC results were collected privately. By then, PC processor clock speeds had reached 4000 MHz and did not increase much further by the 2020s, reducing the need to gather results of the original scalar benchmark. In 2017 “Whetstone Benchmark History and Results” [16] was published for public access, with year of first delivery and purchase prices added, and doubling the number of computers covered in the CCTA report. The most notable citation of this was by Tony Voellm, then Google Cloud Performance Engineering Manager, in “Cloud Benchmarking: Fight the black hole”. [17] This considered available benchmarks and performance over time with detailed graphs, including those from the Whetstone reports. At a later stage, 504 of the results, by year, were included in the report “Techniques used for analyzing basic performance measurements”. [18]
During this period, versions of the Whetstone Benchmark were produced to exercise multithreading, initially for PCs running under Microsoft Windows, the latest supporting up to 8 CPUs or CPU cores, particularly for those known as 4 core/8 thread varieties.
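A minimal sketch of the multithreaded approach, assuming POSIX threads and a placeholder kernel rather than the actual Windows benchmark code: each thread runs an identical copy of the work, and each result is stored so the compiler cannot discard it.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 8   /* up to 8 CPUs/cores, as in the versions described above */

/* Placeholder for one thread's copy of the benchmark work; the real
   multithreading versions run the full set of Whetstone test loops
   in each thread. */
static void *worker(void *arg)
{
    double x = 1.0, *result = (double *)arg;
    for (long i = 0; i < 100000000L; i++)
        x = (x + 2.0) * 0.499975;
    *result = x;                /* stored so the work cannot be optimized away */
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    double results[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, &results[i]);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    for (int i = 0; i < NUM_THREADS; i++)
        printf("thread %d result %f\n", i, results[i]);
    return 0;
}
```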
The History report includes new sections for PC results, with CPUs from 1979, particularly results produced by up to 12 different compilers or interpreters, covering C/C++ (up to 64 bit SSE level), old Fortran, Basic and Java. These are compared using the ratio of MWIPS per MHz, multiplied by 100, to represent efficiency. The bottom line is for a Core i7 CPU, with ratings varying from 0.39 via the Basic interpreter to 311 via C using 64 bit SSE options, then 1003 with the multithreading benchmarks using all four CPU cores.
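The efficiency figure used in those tables is a simple ratio; a minimal sketch, with purely hypothetical MWIPS and MHz values:

```c
#include <stdio.h>

/* Efficiency metric used in the History report tables:
   MWIPS per MHz, multiplied by 100. */
static double efficiency(double mwips, double cpu_mhz)
{
    return mwips / cpu_mhz * 100.0;
}

int main(void)
{
    /* Hypothetical figures for illustration only: 1500 MWIPS at 3000 MHz. */
    printf("efficiency = %.1f\n", efficiency(1500.0, 3000.0));   /* prints 50.0 */
    return 0;
}
```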
Another report, “Whetstone Benchmark Detailed Later Results”, [19] was produced in 2017. This document provides a summary of the speeds of the eight test loops in the benchmark, as MFLOPS or MOPS, plus the MWIPS ratings. There are 22 pages of results covering the same Windows based PCs as the History file with different compilers and compiling options, some with multithreaded versions. Later results cover PCs using Linux. There are then others for a sample of Android phones and tablets and, at the time, the full range of Raspberry Pi computers. For the latter, Roy Longbottom had been recruited as a voluntary member of the Raspberry Pi Foundation’s new products alpha testing team.
Later scalar, vector and multithreading results were included in a 2022 report, “Cray 1 Supercomputer Performance Comparisons With Home Computers Phones and Tablets”. [20] This included the following, originally in a report on the first Raspberry Pi computer:
"In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1"
This claim was based on the official average performance of the Livermore Loops Benchmark that was used to demonstrate that the first Cray 1 met its contractual performance requirements. The scalar Whetstone Benchmark showed a much higher gain of 16.7 times. The report includes comparisons with other supercomputers, a modern fairly fast laptop PC and the 2020 Raspberry Pi 400, the latter obtaining MWIPS gains over the Cray 1 of 155 times scalar, 38 times vector and 593 times scalar multithreading (4 CPU cores versus 1). The quad core laptop, using advanced SIMD compilations, obtained gains of 400, 215 and 3520 times respectively.
Whetstone Benchmark source codes, compiled programs and reports including results are currently (at the time of writing) available on Roy Longbottom’s website roylongbottom.org.uk, but the site has a limited lifetime.
For main reference purposes the HTML based reports were converted to PDF format and uploaded to ResearchGate. Brief descriptions of all files are included in an indexing file [21] (downloadable via the “More” menu). Unfortunately, the file structure was changed, disabling access to most of the older compressed files containing benchmark source codes and compiled programs.
The original website provides the same indexing format but includes the links to access both local files and those at ResearchGate, the former having options to download program codes. [22]
Presently, and hopefully for long-term future access, the website has been captured numerous times by the Wayback Machine Internet Archive, [23] but not all captures include the compressed program files. If the file name is known, available captures can be found, for example for benchnt.zip (by copying and modifying the link address). [24]
The Whetstone benchmark primarily measures floating-point arithmetic performance. A similar benchmark for integer and string operations is the Dhrystone.