Abbreviation | CCTA |
---|---|
Formation | 1957 (as the TSU) |
Dissolved | 2000 (subsumed into the OGC) |
Legal status | Defunct executive government agency |
Purpose | New telecommunications and computer technology for the UK government |
Region served | UK |
Membership | Electronics and computer engineers |
Parent organization | HM Treasury |
Website | www.ccta.gov.uk |
The Central Computer and Telecommunications Agency (CCTA) was a UK government agency providing computer and telecoms support to government departments.
CCTA records are held by The National Archives. [1]
In 1957, the UK government formed the Central Computer Agency (CCA) Technical Support Unit (TSU) within HM Treasury to evaluate and advise on computers, initially staffed by engineers from the telecommunications service. This unit evolved into the Central Computer and Telecommunications Agency, which also became responsible for the procurement of UK government technological equipment and, later, for centrally funded university and Research Council systems.
Nearly all of the names and authors quoted or referenced in this section were CCTA engineers or scientists.
The first external technical publication, in 1960 by J. W. Freebody and J. W. Heron, was “Some engineering factors of importance in relation to the reliability of government A.D.P. systems”. Nearly 30 computer systems had been installed at that time. [2] Its conclusion was that reliability was the single most important factor, and it identified areas and activities that required investigation by the new organisation. A later career review confirmed that John Freebody was promoted to Staff Engineer and given the task of founding the Technical Support Unit. [3]
In 1965 responsibility for TSU was transferred from HM Treasury to the Ministry of Technology. At that time telecommunications engineering staff comprised 8 dealing with Systems Evaluations, 6 with Peripheral Equipment and 10 in the areas of Accommodation, [4] Testing, and Maintenance. Details of names, grades, qualifications, salary and relevant experience can be found in Hansard Volume 717: debated on Tuesday 27 July 1965. [5]
Procurement contracts included guaranteed service levels which, at least in the early days, were monitored by TSU engineers, to whom all fault incidents and system availability levels were reported on a monthly basis. The contracts also required on-site, and sometimes pre-delivery, acceptance trials of a specified format, designed and supervised by engineering staff.
The acceptance tests comprised a series of demonstrations to verify that everything had been delivered and appeared to function, followed by stress testing of up to 40 hours over a few days, depending on system size. For the latter, engineering test programs and available user applications were run, the criterion of success being a given level of uptime. In 1968, new procedures were introduced, particularly for stress testing, where each main test was aimed to run for 15 minutes, with the criterion that, within a maximum time limit, each test had to run failure free six times in succession.
During this period, on invitation, five CCTA engineers presented papers on acceptance testing at the Institution of Electrical Engineers. [6]
At this stage, concern was raised about how to test computers with the new multiprogramming operating systems. The problem was solved by Roy Longbottom who, at various promotion levels between 1968 and 1982, was responsible for designing and supervising acceptance trials of the larger scientific systems. He produced 17 programs, written in the FORTRAN programming language: 5 for CPUs, 4 for disk drives, 3 for magnetic tape units and others for printers and for card and paper tape punches and readers. Program code listings are included in the book *Computer System Reliability* (Appendix 1). [7]
By 1972, 800 acceptance tests of computer systems and enhancements had been carried out, including 500 for complete systems, as reported in The Post Office Electrical Engineers Journal. [8] The latter included 100 tests under the new procedures, covering 11 different contractors. The first candidate was an IBM 360 Model 65 at University College London in 1971, followed in 1972 by trials on all mainframes, minicomputers and supercomputers covered by CCTA contracts. Later that year, the top-end systems tested were the $5 million scalar supercomputers: a CDC 7600 at the University of London Computer Centre and an IBM 360/195 at the UK Meteorological Office. Not included in these 100, but significant, were 1973 trials of the Atlas computer at Cambridge University, a latter-day version of the 1962 UK supercomputer. During the 100 trials, 23 systems failed to meet the specified criteria at the first attempt.
By 1979 more than 1600 acceptance tests of computer systems and enhancements had been carried out. Of the latest 400 system tests, carried out under the new procedures, 14% were recorded as failures and 24% as conditional passes. Up to three attempts were allowed, with none being completely rejected, albeit some were accepted with penalty conditions. See Chapter 10 in the Longbottom book. [7]
Detailed analysis of fault returns, hands-on observations during acceptance trials and system appraisal activities led to a deeper understanding of reliability issues, published in a 1972 Radio and Electronic Engineer journal paper titled “Analysis of Computer System Reliability and Maintainability”, with probability considerations. [9] Later came a conference paper, “Reliability of Computer Systems” (Archive), [10] and the Roy Longbottom book, [7] which particularly acknowledges input provided by Ian Thomson on computer system maintainability and Trevor Jones on environmental aspects.
Trials in 1979 included the first Cray 1 vector supercomputer to be delivered to the UK, at the Atomic Weapons Research Establishment, and, by 1982, the CDC Cyber 205 for the UK Meteorological Office, where total system costs could be $10 million. Both these systems had pre-delivery trials in the USA. For these, Roy Longbottom converted the scalar CPU programs to fully exploit the capabilities of the new vector processors. Results of the converted Vector Whetstone benchmark were included in the paper “Performance of Multi-User Supercomputing Facilities”, presented at the 1989 Fourth International Conference on Supercomputing, Santa Clara. [11] [12]
Details were also included in the June 1990 Advanced Computing Seminar at the Natural Environment Research Council, Wallingford. This led to the Council for the Central Laboratory of the Research Councils Distributed Computing Support collecting results from running the benchmark “on a variety of machines, including vector supercomputers, minisupers, super-workstations and workstations, together with that obtained on a number of vector CPUs and on single nodes of various MPP machines”. More than 200 results, up to 2006, are included in the report available on the Wayback Machine Archive, in entries to at least the year 2007 section. [13]
For the systems identified as supercomputers, there were nine acceptance testing sessions, two of which were failures: one due to excessive CPU problems and the other due to design issues in the I/O subsystem. Both were induced by the CCTA stress testing programs.
During the early days there was consideration of future technology, including telecommunications, in the 1970 book “Data transmission - the future : the development of data transmission to meet future users' needs”, [14] found in National Library of Australia catalogue 169638. But the main emphasis was appraisal of the latest computer system hardware and software. Initially, this involved collecting information on all appropriate new products, followed by more detailed investigation when a product was being considered for a new project. This included a tour of the production factory and discussions with senior engineering, design and quality control staff.
The National Archives CCTA records [1] include technical appraisal reports up to 1986, at the time of writing (search for the quoted reports). The first series in what became a standard format was “System Summary Notes” (range 5000 to 6999), starting in 1967 with such systems as early IBM 360 mainframes and the Digital Equipment Corporation PDP-8 minicomputer, up to the last issue in 1980. These are based on standard forms with numerous entries. Other reports identified in the Archives are “Technical Notes” between 1975 and 1986, “Internal Technical Memoranda” 1973 to 1986 and “Technical Memoranda” 1975 to 1986. The number of reports cannot easily be determined from the provided data.
Before cross-the-board standard benchmarks became available, the average speed rating of computers was based on calculations for a mix of instructions, with the result given in Kilo Instructions Per Second (KIPS). The most famous was the Gibson Mix for scientific computing. This was included in CCTA calculations, alongside an ADP Mix and a Process Control Mix, in CCTA Technical Note 3806 Issue 5, with 212 sets of results from 18 manufacturers, pre-1960 to 1971. Later results were included in 1977 in CCTA Technical Memorandum 1163 (both via [1]). All those results are also available in a 2017 PDF file. [15]
In 1972 Harold Curnow wrote the Whetstone Benchmark in the FORTRAN programming language, based on the work of Brian Wichmann of the National Physical Laboratory. [16] It executes 8 test functions, 5 of which involve floating point calculations that dominate running time. Overall performance was calculated in thousands of Whetstone instructions per second (KWIPS). The program became the first general purpose benchmark to set industry standards for computer system performance. Enhancements by Roy Longbottom provided self-timing arrangements and calibration to run for a predetermined time on present and future systems, as well as reporting the performance of each of the 8 tests. The calibrated time was normally 10 seconds and is still applicable after 50 years.
In 1978, Roy Longbottom, who inherited the role of design authority of the benchmark, also produced a version to exploit supercomputer processing hardware, covered in the reports “Performance of Multi-User Supercomputing Facilities” [11] and “Whither Whetstone? The synthetic benchmark after 15 years”, [17] published in a book. [18]
Original Whetstone Benchmark results are in the 1985 CCTA Technical Memorandum 1182 (via Archive [1]), where overall speed is shown as MWIPS (millions). This contains more than 1000 results for 244 computers from 32 manufacturers.
On achieving 1 MWIPS, the Digital Equipment Corporation VAX-11/780 minicomputer became accepted as the first commercially available 32-bit computer to demonstrate 1 MIPS (Millions of Instructions Per Second) (CERN [19]), although this was not really appropriate for a benchmark dependent on floating point speed. This had an impact on the Dhrystone Benchmark, the second accepted general purpose computer performance measurement program, which has no floating point calculations. It produced a result of 1757 Dhrystones Per Second on the VAX-11/780, leading to a revised measurement of 1 DMIPS (also known as VAX MIPS), obtained by dividing the original result by 1757.
The Whetstone Benchmark also had high visibility concerning floating point performance of Intel CPUs and PCs, starting with the 1980 Intel 8087 coprocessor. This was reported in the 1986 Intel Application Report “High Speed Numerics with the 80186/80188 and 8087”. [20] The 8087 includes hardware functions for exponential, logarithmic and trigonometric calculations, as used in two of the eight Whetstone Benchmark tests, where these can dominate running time. Only two other benchmarks were included in the Intel procedures, showing huge gains over the earlier software-based routines on all three programs.
Later tests, by the SSEMC Laboratory, evaluated Intel 80486-compatible CPU chips using their Universal Chip Analyzer. [21] Considering the two floating point benchmarks used by Intel in the above report, they preferred Whetstone, stating “Whetstone utilizes the complete set of instructions available on early x87 FPUs”. This might suggest that the Whetstone Benchmark influenced the hardware instruction set.
CCTA also influenced the programming code for the Linpack and Livermore Loops floating point benchmarks, initially for PC versions, where the original programs were unsuitable, particularly due to the PC's low resolution timer. The new versions, in the C programming language, included the new CCTA automatic calibration function to run for a specified finite time, still applicable 50 years later. Netlib accepted the former, renaming it linpack-pc.c. [22] For the Livermore benchmark, C programming code was available for executing the loops, but the extensive background code, for such tasks as data generation, timing parameters and numeric results validation, was in FORTRAN. This was converted to C. At least one other organisation has published a claimed completely rewritten C version that incorporates the unique CCTA background code, with no attribution.
CCTA test programs used in acceptance trials had parameters to control running times, enabling valid comparisons of CPU performance of all systems tested. Following a request for information, these and Whetstone Benchmark results were included in the external publication “A Guide to the Processing Speeds of Computers”, over 100 different computers with more than 700 results. [23] This included the acknowledgment “The authors would like to thank colleagues from the Central Computer Agency, namely Mr G Brownlee, Mr H J Curnow and Mr R Longbottom who have helped to collect much of the data making this system possible”.
From 1980 Roy Longbottom spent most of his time providing performance consultancy services to departments and universities. The latter included attending meetings of the Computer Board for Universities and Research Councils (National Archives). [24] He became a member of the Technical Subgroup of the National Policy Committee on Advanced Research Computers and the Universities' Benchmark Options Group. The latter involved leading a party to the USA, including discussions with Jack Dongarra and Frank McMahon, respectively authors of Linpack and the Livermore Loops, the key benchmarks of the day for scientific applications.
In 1992, the Science and Engineering Research Council requested CCTA to provide independent observation and reporting on benchmarking a new supercomputer for the University of London Computer Centre, using a large sample of typical user applications. Roy Longbottom covered Fujitsu and NEC computers in Japan, with Rob Whetnall overseeing Cray and Convex Computer Corporation systems in the USA. The CCTA scalar and vector Whetstone Benchmarks were also run. A combination of the latter can help in evaluating the performance of multi-user supercomputing operation, [11] where the system that demonstrates superior performance on specific applications is not necessarily the best choice, and the level of vectorisation and number of scalar processors can be more important. In this case, calculations from the results of the CCTA programs indicated the same choice of system as the university's benchmark.
The aforementioned performance consultancy covered more than 45 projects between 1990 and 1993, mainly for data processing applications, with systems from 18 manufacturers, including mainframes, minicomputers and PCs. Activities included detailed sizing, modelling, user application based benchmarking, general advice and troubleshooting. CCTA's work was publicised at various conferences, starting with one on in-house software for benchmarking and capacity planning at ECOMA 12 in Munich, 1984, [25] then benchmarking and workload characterisation at Edinburgh University, 1986 (Page 5). [26]
The next, on database system benchmarks and performance testing, was at a conference on parallel processors at NPL in 1992, providing a warning of the dangers for the supercomputer community, and was published in a later book. [27]
Finally, a new approach to performance management was suggested, based on the assumption that initial sizing estimates would be incorrect and that corrective actions should be considered at each stage of procurement; this was presented at the UKCMG Conference, Brighton, in 1992. [28] It was proposed following performance issues on a number of new small systems using the UNIX operating system. In this case, the causes were identified by measuring CPU, input/output, communications and memory utilisation for a number of transactions, using the UNIX SAR performance monitor. The first problem was mainly transactions using too much CPU time, requiring more efficient code or a CPU upgrade. The second was the single disk drive which, although of adequate capacity, was unable to handle the high random access rate, the solution being to spread the data over more than one drive. To help in identifying solutions, or for “what if” considerations, a sizing model, "A Spreadsheet Computer Performance Queuing Model for Finite User Populations", was produced to instantly indicate the likely impact of changes on response times, throughput and hardware utilisation. [29]
Other data processing benchmarks produced by CCTA Performance Branch included one measuring performance of mixes of processor bound activities, written in the COBOL programming language. A total of 129 sets of results over computers from 22 different manufacturers are in Internal Memo 5219. A second one is the Medium System Benchmark, with limited results in Internal Memo 5365 covering 35 systems from 8 manufacturers. This also indicates Technical Memoranda numbers of reports containing full results, in the range 15047 to 15247 (example ICL reports are 15147/1 to 15147/14) - see Archived Information for quoted reports. [1] The benchmark comprised six real representative programs with disk and magnetic tape input/output, covering updates, sorting, compiling and multi-stream operation, measuring CPU and elapsed times and the number of data transfers.
After retirement, Roy Longbottom, as the latter-day design authority of the Whetstone Benchmark, converted the latest FORTRAN code into the C programming language, also creating a new series of benchmarks and stress testing programs based on previous CCTA activities. These were freely available, produced in conjunction with the CompuServe Benchmarks and Standards Forum (see Wayback Machine Archive), [30] covering PC hardware from 1997 to 2008.
Later, with further development, programs and results were made freely available on a dedicated website (which will have a limited lifetime). Historic details from 2008 onwards are in the Wayback Machine Archive, [31] where all files appear to be downloadable from most captures. From 2017 onwards, the details were made available at ResearchGate in more referenceable PDF files. In 2024 there were 40 of these reports to read or download, by which time a total of more than 76,000 reads and 79 citations had been reported. Brief descriptions of all files are included in an indexing file [32] (download to open files). The PDF files include 12 for Raspberry Pi computers, for which Roy Longbottom had been recruited by the Raspberry Pi Foundation as a voluntary member of the Raspberry Pi pre-release alpha testing team from 2019.
By the 1990s the Whetstone Benchmark and its results had become relatively popular. A notable 1985 quotation, from “A portable seismic computing benchmark” in the European Association of Geoscientists and Engineers journal, [33] was: "The only commonly used benchmark to my knowledge is the venerable Whetstone benchmark, designed many years ago to test floating point operations".
There was then great interest in historic performance. Unlike the other classic benchmarks, Dhrystone, Linpack and Livermore Loops, Whetstone result tables were not available in the public domain, but this was rectified from 2017 (in honour of CCTA, for this and other publications). The first new report was “Computer Speeds From Instruction Mixes pre-1960 to 1971”. [15] As with the following report, identified year of first delivery and purchase prices were added.
The second was “Whetstone Benchmark History and Results”, [34] with more detail and added results, particularly for PCs, up to 2013, and double the number of computers covered. The most notable citation, for this and the Gibson Mix, was by Tony Voellm, then Google Cloud Performance Engineering Manager, in “Cloud Benchmarking: Fight the black hole”. [35] This considered available benchmarks and performance over time with detailed graphs, including those from the Mix and Whetstone reports.
The first of the other reports, attributable to earlier CCTA-gained knowledge but not previously published, is “Computer Speed Claims 1980 to 1996”. [36] This covers more than 2000 mainframes, minicomputers, supercomputers and workstations from around 120 suppliers, with main speeds in Millions of Instructions Per Second (MIPS), Millions of Floating Point Operations Per Second (MFLOPS) and CPU clock speed in MHz. Cost and production year are also included, when available.
Next, based on programming in Intel 8086 assembly code, learned earlier, is “PC CPUID 1994 to 2013, plus Measured Maximum Speeds Via Assembler Code”. [37] This contains 27 pages of PC CPU identification numbers, operating speeds, ranges of models and cache sizes, by year, then the performance of more than 30 types of processor over 12 CPU and memory benchmarks. Separate performance comparison tables are provided for handling data provided by the CPU, caches and RAM. The diversity of results demonstrates the uselessness of general performance comparisons based on a single number.
The following reports highlight earlier unique CCTA experiences, without which they could not have been produced. The first is “Cray 1 Supercomputer Performance Comparisons With Home Computers, Phones and Tablets”. [38] Results are initially based on the Classic Benchmarks, the first programs to set standards of performance for scientific computing: the 1970 Livermore Loops, the 1972 Whetstone and the 1979 Linpack 100 benchmarks. Further results cover 1979 Vector Whetstone performance, high speed floating point calculations and multiprocessing. The report includes the following comparison with the first version of the Raspberry Pi computer, based on average Livermore Loops speeds, as this benchmark was used to verify the performance of the first Cray 1.
"In 1978, the Cray 1 supercomputer cost $7 Million, weighed 10,500 pounds and had a 115 kilowatt power supply. It was, by far, the fastest computer in the world. The Raspberry Pi costs around $70 (CPU board, case, power supply, SD card), weighs a few ounces, uses a 5 watt power supply and is more than 4.5 times faster than the Cray 1".
The later Pi 400 PC is shown to be 78.8 times faster, a figure that could increase up to fourfold using all CPU cores.
That quotation was reproduced in numerous Internet posts, some including a reference to the author having worked for “the UK Government Central Computer Agency”, as quoted in the report. A total of more than 60 posts were found across LinkedIn, X (Twitter) and Facebook, with more than 30 thousand views. This was based on an HTML version of the comparisons on the author's website (Archive copy), [39] where site analytics registered almost 190,000 HTML file views between December 2023 and January 2024, with nearly 90% for the Cray report. Accesses were from North America 47%, Europe 37%, Asia 11%, Oceania 3% and other 2%, the Agency's involvement thus being spread around the world.
CCTA influence is also highlighted in “Celebrating 50 years of computer benchmarking and stress testing”. [40]
In this area, CCTA's work during the 1970s, 1980s and 1990s was primarily to (a) develop central government IT professionalism, (b) create a body of knowledge and experience in the successful development and implementation of IS/IT within UK central government, (c) brief Government Ministers on the opportunities for use of IS/IT to support policy initiatives (e.g. the "Citizen's Charter" and "e-government") and (d) encourage and assist UK private sector companies to develop and offer products and services aligned to government needs.
Over the three decades, CCTA's focus shifted from hardware to a business-oriented systems approach, with strong emphasis on business-led IS/IT strategies which crossed departmental (ministry) boundaries, encompassing several departments (e.g. CCCJS – Computerisation of the Central Criminal Justice System). This inter-departmental approach (first mooted in the mid to late 1980s) was revolutionary and met considerable political and departmental opposition.
In October 1994, MI5 took over CCTA's work on computer security, protecting government networks (usually the Treasury's) from hacking. In November 1994, CCTA launched its website. In February 1998 it built and ran the government's secure intranet; the MoD was connected to a separate network. In December 1998, the DfEE moved its server from CCTA at Norwich to NISS (National Information Services and Systems) in Bath when it relaunched its website. [41]
Between 1989 and 1992, CCTA's "Strategic Programmes" Division undertook research on exploiting Information Systems as a medium for improving the relationship between citizens, businesses and government. This parallelled the launch of the "Citizen's Charter" by the then Prime Minister, John Major, and the creation within the Cabinet Office of the "Citizen's Charter Unit" (CCTA had at this point been moved from HM Treasury to the Cabinet Office). The research and work focused on identifying ways of simplifying the interaction between citizens and government through the use of IS/IT. Two major TV documentaries were produced by CCTA – "Information and the Citizen" and "Hymns Ancient and Modern" – which explored the business and political issues associated with what was to become "e-government". These were aimed at widening the understanding of senior civil servants (the Whitehall mandarins) of the significant impact of the "Information Age" and identifying wider social and economic issues likely to arise from e-government.
During the late 1990s, its strategic role was eroded by the Cabinet Office's Central IT Unit (CITU – created by Michael Heseltine in November 1995), and in 2000 CCTA was fully subsumed into the Office of Government Commerce (OGC). [42]
Since then, the non-procurement IT/telecommunications co-ordination role has remained in the Cabinet Office, under a number of successive guises.
CCTA was the sponsor of a number of methodologies, including SSADM (Structured Systems Analysis and Design Method), the PRINCE project management method (later PRINCE2) and the IT Infrastructure Library (ITIL).
The CCTA Security Group created the first UK Government National Information Security Policy, and developed the early approaches to structured information security for commercial organisations, which saw wider use in the DTI Security Code of Practice, BS 7799 and eventually ISO/IEC 27000.
CCTA also promoted the use of emerging IT standards in UK government and in the EU, such as OSI and BS5750 (Quality Management), which led to the publishing of the Quality Management Library and the inception of the TickIT assessment scheme with the DTI, MOD and the participation of software development companies.
In addition to the development of methodologies, CCTA produced a comprehensive set of managerial guidance covering the development of Information Systems under 5 major headings: A. – Management and Planning of IS; B. – Systems Development; C. – Service Management; D. – Office Users; E. – IS Services Industry. The guidance consisted of 27 individual guides and was published commercially as "The Information Systems Guides" ( ISBN 0-471-92556-X) by John Wiley and Sons. The publication is no longer available. This guidance was developed from the practical experience and lessons learned from many UK Government Departments in planning, designing, implementing and monitoring Information Systems and was highly regarded as "best practice". Some parts were translated into other European languages and adopted as national standards.
It also was involved in technical developments, for instance as the sponsor of Project SPACE in the mid-1980s. Under Project SPACE, the ICL Defence Technology Centre (DTC), working closely with technical staff from CCTA and key security-intensive projects in the Ministry of Defence (such as OPCON CCIS) and in other sensitive departments, developed an enhanced security variant of VME.
It also managed (i.e. ran the servers of) UK national government websites, including the Royal Family's and www.open.gov.uk.
CCTA's headquarters were in London at Riverwalk House, Vauxhall Bridge Road, SW1, since used by the Government Office for London. This housed the main divisions with a satellite office in Norwich which focused on IS/IT Procurement – a function which had been taken over from HMSO (the Stationery Office) when CCTA was formed.
The office in Norwich was in the east of the city, off the former A47 (now A1042), just west of the present A47 interchange near the former St Andrew's Hospital. The site is now used by the OGC.
The HQ in London had four divisions.
A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). Since 2017, supercomputers have existed which can perform over 1017 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (1011) to tens of teraFLOPS (1013). Since November 2017, all of the world's fastest 500 supercomputers have run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers. Since June 2022, all of the supercomputers listed by TOP500 have been 64-bit supercomputers. The first exascale supercomputer was announced in May 2022.
The Cray-1 was a supercomputer designed, manufactured and marketed by Cray Research. Announced in 1975, the first Cray-1 system was installed at Los Alamos National Laboratory in 1976. Eventually, eighty Cray-1s were sold, making it one of the most successful supercomputers in history. It is perhaps best known for its unique shape, a relatively small C-shaped cabinet with a ring of benches around the outside covering the power supplies and the cooling system.
The CDC 6600 was the flagship of the 6000 series of mainframe computer systems manufactured by Control Data Corporation. Generally considered to be the first successful supercomputer, it outperformed the industry's prior recordholder, the IBM 7030 Stretch, by a factor of three. With performance of up to three megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600.
Floating point operations per second is a measure of computer performance in computing, useful in fields of scientific computations that require floating-point calculations.
Cray Inc., a subsidiary of Hewlett Packard Enterprise, is an American supercomputer manufacturer headquartered in Seattle, Washington. It also manufactures systems for data storage and analytics. Several Cray supercomputer systems are listed in the TOP500, which ranks the most powerful supercomputers in the world.
The ETA10 is a vector supercomputer designed, manufactured, and marketed by ETA Systems, a spin-off division of Control Data Corporation (CDC). The ETA10 was an evolution of the CDC Cyber 205, which can trace its origins back to the CDC STAR-100, one of the first vector supercomputers to be developed.
High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.
The Whetstone benchmark is a synthetic benchmark for evaluating the performance of computers. It was first written in ALGOL 60 in 1972 at the Technical Support Unit of the Department of Trade and Industry in the United Kingdom. It was derived from statistics on program behaviour gathered on the KDF9 computer at the National Physical Laboratory (NPL), using a modified version of its Whetstone ALGOL 60 compiler. The workload on the machine was represented as a set of frequencies of execution of the 124 instructions of the Whetstone Code. The Whetstone Compiler was built at the Atomic Power Division of the English Electric Company in Whetstone, Leicestershire, England, hence its name. Dr. B. A. Wichmann at NPL produced a set of 42 simple ALGOL 60 statements, which in a suitable combination matched the execution statistics.
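The flavour of such a synthetic workload can be sketched as a loop over mixed arithmetic and transcendental-function statements. This is a simplified illustration only, not the official 124-instruction Whetstone mix; the constant 0.499975 echoes the coefficients used in the original benchmark:

```python
import math
import time

def whetstone_like(loops=100_000):
    """Tiny synthetic floating-point workload in the spirit of Whetstone.

    A simplified sketch, not the official benchmark: it mixes simple
    arithmetic statements with transcendental-function calls, much as
    the original combined modules matched to real execution statistics.
    """
    x1, x2, x3, x4 = 1.0, -1.0, -1.0, -1.0
    t = 0.499975  # coefficient borrowed from the original benchmark
    for _ in range(loops):
        x1 = (x1 + x2 + x3 - x4) * t
        x2 = (x1 + x2 - x3 + x4) * t
        x3 = (x1 - x2 + x3 + x4) * t
        x4 = (-x1 + x2 + x3 + x4) * t
        _ = math.sin(x1) + math.cos(x2)  # transcendental-function module
    return x1, x2, x3, x4

start = time.perf_counter()
whetstone_like()
print(f"elapsed: {time.perf_counter() - start:.3f}s")
```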
ASCI Red was the first computer built under the Accelerated Strategic Computing Initiative (ASCI), the supercomputing initiative of the United States government created to help the maintenance of the United States nuclear arsenal after the 1992 moratorium on nuclear testing.
The VP2000 was the second series of vector supercomputers from Fujitsu. Announced in December 1988, they replaced Fujitsu's earlier FACOM VP Model E Series. The VP2000 was succeeded in 1995 by the VPP300, a massively parallel supercomputer with up to 256 vector processors.
NEC SX describes a series of vector supercomputers designed, manufactured, and marketed by NEC. The series is notable for including the first computer to exceed 1 gigaFLOPS, as well as the world's fastest supercomputer in 1992–1993 and 2002–2004. The current model, as of 2018, is the SX-Aurora TSUBASA.
In computing, a benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it.
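In its simplest form, benchmarking amounts to timing an operation over repeated trials and reporting a summary statistic. A minimal sketch follows; taking the minimum over trials is one common convention for filtering out scheduling noise:

```python
import time

def benchmark(fn, *args, trials=5):
    """Run fn several times and report the best (minimum) wall-clock time.

    The minimum over repeated trials discounts interference from other
    processes, giving a figure closer to the operation's intrinsic cost.
    """
    best = float("inf")
    for _ in range(trials):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Usage: time summing a million integers
print(f"{benchmark(sum, range(1_000_000)):.4f}s")
```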
Scalable POWERparallel (SP) is a series of supercomputers from IBM. SP systems were part of the IBM RISC System/6000 (RS/6000) family, and were also called the RS/6000 SP. The first model, the SP1, was introduced in February 1993, and new models were introduced throughout the 1990s until the RS/6000 was succeeded by eServer pSeries in October 2000. The SP is a distributed memory system, consisting of multiple RS/6000-based nodes interconnected by an IBM-proprietary switch called the High Performance Switch (HPS). The nodes are clustered using software called PSSP, which is mainly written in Perl.
The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases its rankings on HPL, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
This list compares various amounts of computing power, measured in FLOPS, organized by order of magnitude.
HPC Challenge Benchmark combines several benchmarks to test a number of independent attributes of the performance of high-performance computer (HPC) systems. The project has been co-sponsored by the DARPA High Productivity Computing Systems program, the United States Department of Energy and the National Science Foundation.
The K computer – named for the Japanese word/numeral "kei" (京), meaning 10 quadrillion (10^16) – was a supercomputer manufactured by Fujitsu, installed at the Riken Advanced Institute for Computational Science campus in Kobe, Hyōgo Prefecture, Japan. The K computer was based on a distributed memory architecture with over 80,000 compute nodes. It was used for a variety of applications, including climate research, disaster prevention and medical research. The K computer's operating system was based on the Linux kernel, with additional drivers designed to make use of the computer's hardware.
The LINPACK Benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense n by n system of linear equations Ax = b, which is a common task in engineering.
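The measurement can be illustrated in miniature: solve a random dense system by Gaussian elimination and divide the conventional LINPACK operation count, 2/3 n^3 + 2 n^2, by the elapsed time. This is a pure-Python sketch; the real benchmark relies on heavily optimized linear-algebra libraries:

```python
import random
import time

def linpack_like(n=200):
    """Solve a random dense n-by-n system Ax = b by Gaussian elimination
    and report FLOPS using the LINPACK operation count 2/3*n^3 + 2*n^2.

    A pure-Python sketch of the idea, not the official benchmark.
    """
    A = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [random.random() for _ in range(n)]
    start = time.perf_counter()
    # Forward elimination with partial pivoting
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    elapsed = time.perf_counter() - start
    flops = (2 / 3 * n**3 + 2 * n**2) / elapsed
    return x, flops

_, rate = linpack_like()
print(f"~{rate:.2e} FLOPS")
```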
Titan or OLCF-3 was a supercomputer built by Cray at Oak Ridge National Laboratory for use in a variety of science projects. Titan was an upgrade of Jaguar, a previous supercomputer at Oak Ridge, that used graphics processing units (GPUs) in addition to conventional central processing units (CPUs). Titan was the first such hybrid to perform over 10 petaFLOPS. The upgrade began in October 2011, commenced stability testing in October 2012, and the machine became available to researchers in early 2013. The initial cost of the upgrade was US$60 million, funded primarily by the United States Department of Energy.
XK7 is a supercomputing platform, produced by Cray, launched on October 29, 2012. XK7 is the second platform from Cray to use a combination of central processing units ("CPUs") and graphical processing units ("GPUs") for computing; the hybrid architecture requires a different approach to programming from that of CPU-only supercomputers. Laboratories that host XK7 machines run workshops to train researchers in the new programming languages needed for XK7 machines. The platform is used in Titan, the world's second fastest supercomputer in the November 2013 list as ranked by the TOP500 organization. Other customers include the Swiss National Supercomputing Centre, which has a 272-node machine, and Blue Waters, which has a machine with Cray XE6 and XK7 nodes that performs at approximately 1 petaFLOPS (10^15 floating-point operations per second).