Neil J. Gunther

Neil James Gunther
Neil Gunther at Bletchley Park, 2002
"A quantum leap is neither"
Born: 15 August 1950 (age 73), Preston, Victoria, Australia
Alma mater: La Trobe University; University of Southampton
Known for: Performance analysis; capacity planning tools; theory of large transients; universal scalability law
Scientific career
Fields: Computational information systems (classical and quantum)
Institutions: San Jose State University; Syncal Corporation; Xerox Palo Alto Research Center; Performance Dynamics Company (Founder); École Polytechnique Fédérale de Lausanne (EPFL)
Doctoral advisor: Tomas M. Kalotas (Honors); Christie J. Eliezer (Masters); David J. Wallace (Doctorate)

Neil Gunther (born 15 August 1950) is a computer information systems researcher best known internationally for developing the open-source performance modeling software Pretty Damn Quick and the Guerrilla approach to computer capacity planning and performance analysis. He has also been cited for his contributions to the theory of large transients in computer systems and packet networks, and for his universal law of computational scalability. [1] [2] [3] [4] [5] [6]

Gunther is a Senior Member of both the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), as well as a member of the American Mathematical Society (AMS), American Physical Society (APS), Computer Measurement Group (CMG) and ACM SIGMETRICS.

He is currently focused on developing quantum information system technologies. [7]

Biography

Gunther is an Australian of German and Scots ancestry, born in Melbourne on 15 August 1950. He attended Preston East Primary School from 1955 to 1956, and Balwyn North Primary School from 1956 until 1962. For his tenth birthday, Gunther received a copy of the now-famous book The Golden Book of Chemistry Experiments from an older cousin. Inspired by the book, he started working on experiments, making use of various chemicals that could be found around his house. After he spilled some potassium permanganate solution on his bedroom carpet, his mother confined him to an alcove in the garage, which he turned into a small laboratory, replete with industrial chemicals and second-hand laboratory glassware. Gunther was interested in finding out how things like detergents and oils were composed by cracking them in his fractionating column. He took a particular interest in mixing paints for his art classes, as well as for his chemistry classes at Balwyn High School. His father, the Superintendent of Melbourne's electrical power station, borrowed an organic chemistry text from the chemists in the quality control laboratory. This ultimately led to an intense interest in synthesizing azo dyes. At around age 14, Gunther attempted to predict the color of azo dyes based on the chromophore-auxochrome combination. Apart from drawing up empirical tables, this effort was largely unsuccessful due to his lack of knowledge of quantum theory.

Post-Doc years

Gunther taught physics at San Jose State University from 1980 to 1981. He then joined Syncal Corporation, a small company contracted by NASA and JPL to develop thermoelectric materials for their deep-space missions. Gunther was asked to analyze the thermal stability test data from the Voyager radioisotope thermoelectric generators (RTGs). He discovered that the stability of the silicon-germanium (Si-Ge) thermoelectric alloy was controlled by a soliton-based precipitation mechanism. [8] JPL used his work to select the next generation of RTG materials for the Galileo mission, launched in 1989.

Xerox years

In 1982, Gunther joined Xerox PARC to develop parametric and functional test software for PARC's small-scale VLSI design fabrication line. Ultimately, he was recruited onto the Dragon multiprocessor workstation project where he also developed the PARCbench multiprocessor benchmark. This was his first foray into computer performance analysis.

In 1989, he developed a Wick-rotated version of Richard Feynman's quantum path integral formalism for analyzing performance degradation in large-scale computer systems and packet networks. [9]

Pyramid years

In 1990 Gunther joined Pyramid Technology (now part of Fujitsu Siemens Computers) where he held positions as senior scientist and manager of the Performance Analysis Group that was responsible for attaining industry-high TPC benchmarks on their Unix multiprocessors. He also performed simulations for the design of the Reliant RM1000 parallel database server.

Consulting practice

Gunther founded Performance Dynamics Company as a sole proprietorship, registered in California in 1994, to provide consulting and educational services for the management of high performance computer systems, with an emphasis on performance analysis and enterprise-wide capacity planning. Around 1998 he developed and released his own open-source performance modeling software, PDQ (Pretty Damn Quick). The software accompanied his first textbook on performance analysis, The Practical Performance Analyst. Several other books have followed since then.

Current research interests

Quantum information systems

In 2004, Gunther embarked on joint research into quantum information systems based on photonics. [7] In the course of this research, he has developed a theory of photon bifurcation that is currently being tested experimentally at École Polytechnique Fédérale de Lausanne. [10] This represents yet another application of the path integral formulation to circumvent the wave-particle duality of light.

In its simplest rendition, this theory can be considered as providing the quantum corrections to the Abbe-Rayleigh diffraction theory of imaging and the Fourier theory of optical information processing. [11]

Performance visualization

Inspired by the work of Tukey, Gunther explored ways to help systems analysts visualize performance in a manner similar to that already available in scientific visualization and information visualization. In 1991, he developed a tool called Barry, which employs barycentric coordinates to visualize sampled CPU usage data on large-scale multiprocessor systems. [12] More recently, he has applied the same 2-simplex barycentric coordinates to visualizing the Apdex application performance metric, which is based on categorical response-time data. A barycentric 3-simplex (a tetrahedron), which can be swivelled on the computer screen using a mouse, has been found useful for visualizing packet network performance data. In 2008, he co-founded the PerfViz Google group.
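As an illustration of the idea (a minimal Python sketch, not Gunther's Barry tool; the category names and sample counts are hypothetical), three-way categorical data such as Apdex's satisfied/tolerating/frustrated counts can be normalized and mapped to a point inside a triangle via 2-simplex barycentric coordinates:

```python
# Minimal sketch (not Gunther's "Barry" tool) of mapping three-way categorical
# data onto 2-simplex barycentric coordinates for plotting inside a triangle.
import numpy as np

# Triangle vertices for the three pure categories (equilateral triangle in 2-D).
VERTICES = np.array([[0.0, 0.0],             # all "frustrated" (hypothetical labels)
                     [1.0, 0.0],             # all "tolerating"
                     [0.5, np.sqrt(3) / 2]]) # all "satisfied"

def to_barycentric_point(counts):
    """Convert raw category counts into a 2-D point inside the triangle."""
    weights = np.asarray(counts, dtype=float)
    weights = weights / weights.sum()   # normalize so the shares sum to 1
    return weights @ VERTICES           # convex combination of the vertices

if __name__ == "__main__":
    # Hypothetical response-time samples: (frustrated, tolerating, satisfied)
    x, y = to_barycentric_point((5, 15, 180))
    print(f"plot point at ({x:.3f}, {y:.3f})")
```

A sample dominated by one category plots near that category's vertex, so drift of the plotted points over time gives an at-a-glance view of how the workload's response-time mix is changing.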

Universal Law of Computational Scalability

The throughput capacity X(N) of a computational platform is given by:

X(N) = γN / (1 + α(N − 1) + βN(N − 1))

where N represents either the number of physical processors in the hardware configuration or the number of users driving the software application. The parameters α, β and γ respectively represent the levels of contention (e.g., queueing for shared resources), coherency delay (i.e., latency for data to become consistent) and concurrency (or effective parallelism) in the system. The parameter β also quantifies the retrograde throughput seen in many stress tests but not accounted for in either Amdahl's law or event-based simulations. This scalability law was originally developed by Gunther in 1993 while he was employed at Pyramid Technology. [13] Since there are no topological dependencies, X(N) can model symmetric multiprocessors, multicores, clusters, and GRID architectures. Also, because each of the three terms has a definite physical meaning, they can be employed as a heuristic to determine where to make performance improvements in hardware platforms or software applications.
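As a concrete illustration (a minimal Python sketch with purely hypothetical parameter values, not Gunther's own PDQ tooling), the formula can be evaluated directly, and the peak load N* = √((1 − α)/β) marks where throughput turns retrograde:

```python
# Minimal sketch of the Universal Scalability Law (USL); the parameter values
# below are illustrative assumptions, not measured data.
import math

def usl_throughput(n, alpha, beta, gamma=1.0):
    """X(N) = gamma*N / (1 + alpha*(N - 1) + beta*N*(N - 1))."""
    return gamma * n / (1.0 + alpha * (n - 1) + beta * n * (n - 1))

def peak_concurrency(alpha, beta):
    """N* = sqrt((1 - alpha) / beta), beyond which throughput becomes retrograde."""
    return math.sqrt((1.0 - alpha) / beta)

if __name__ == "__main__":
    alpha, beta, gamma = 0.02, 0.0001, 95.0   # contention, coherency, concurrency
    for n in (1, 8, 32, 64, 128, 256):
        print(f"N={n:4d}  X(N)={usl_throughput(n, alpha, beta, gamma):9.1f}")
    print(f"throughput peaks near N* = {peak_concurrency(alpha, beta):.0f}")
```

In practice the three parameters are obtained by fitting the formula to measured throughput at a handful of load points, after which the fitted curve can be extrapolated to loads that were not tested.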

At a more fundamental level, the above equation can be derived [14] from the Machine Repairman queueing model: [15]

Theorem (Gunther 2008): The universal scalability law is equivalent to the synchronous queueing bound on throughput in a modified Machine Repairman with state-dependent service times.

The following corollary (Gunther 2008 with β = 0) corresponds to Amdahl's law: [16]

Theorem (Gunther 2002): Amdahl's law for parallel speedup is equivalent to the synchronous queueing bound on throughput in a Machine Repairman model of a multiprocessor.
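To make the correspondence explicit, here is a brief sketch of the algebra (using the notation above): setting β = 0 in X(N) and normalizing by X(1) yields Amdahl's speedup, with the contention parameter α playing the role of the serial fraction.

```latex
% With beta = 0, the relative capacity C(N) = X(N)/X(1) reduces to Amdahl's law:
C(N) \;=\; \frac{X(N)}{X(1)}
     \;=\; \frac{\gamma N / \left(1 + \alpha (N-1)\right)}{\gamma / \left(1 + \alpha \cdot 0\right)}
     \;=\; \frac{N}{1 + \alpha (N-1)}
```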

Awards

Selected bibliography

Theses

BSc Honors dissertation, Department of Physics, October 1974

Books

Heidelberg, Germany, October 2001, ISBN 3-540-42145-9 (Contributed chapter)

Invited presentations

Papers

Related Research Articles

Amdahl's law: Formula in computer architecture

In computer architecture, Amdahl's law is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. It states that "the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used". It is named after computer scientist Gene Amdahl, and was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967.

Queueing theory: Mathematical study of waiting lines, or queues

Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue lengths and waiting time can be predicted. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service.

Parallel computing: Programming paradigm in which many processes are executed simultaneously

Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors.

In mathematical queueing theory, Little's law is a theorem by John Little which states that the long-term average number L of customers in a stationary system is equal to the long-term average effective arrival rate λ multiplied by the average time W that a customer spends in the system. Expressed algebraically, the law is L = λW.

Scalability is the property of a system to handle a growing amount of work. One definition for software systems specifies that this may be done by adding resources to the system.

Norton's theorem: DC circuit analysis technique

In direct-current circuit theory, Norton's theorem, also called the Mayer–Norton theorem, is a simplification that can be applied to networks made of linear time-invariant resistances, voltage sources, and current sources. At a pair of terminals of the network, it can be replaced by a current source and a single resistor in parallel.

In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows.

Gene Amdahl: American computer architect and high-tech entrepreneur

Gene Myron Amdahl was an American computer architect and high-tech entrepreneur, chiefly known for his work on mainframe computers at IBM and later his own companies, especially Amdahl Corporation. He formulated Amdahl's law, which states a fundamental limitation of parallel computing.

Theoretical computer science: Subfield of computer science and mathematics

Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation, formal language theory, the lambda calculus and type theory.

Amdahl Corporation: American mainframe computer manufacturer

Amdahl Corporation was an information technology company which specialized in IBM mainframe-compatible computer products, some of which were regarded as supercomputers competing with those from Cray Research. Founded in 1970 by Gene Amdahl, a former IBM computer engineer best known as chief architect of System/360, it has been a wholly owned subsidiary of Fujitsu since 1997. The company was located in Sunnyvale, California.

MOSIX is a proprietary distributed operating system. Although early versions were based on older UNIX systems, since 1999 it focuses on Linux clusters and grids. In a MOSIX cluster/grid there is no need to modify or to link applications with any library, to copy files or login to remote nodes, or even to assign processes to different nodes – it is all done automatically, like in an SMP.

Gustafson's law: Theoretical speedup formula in computer architecture

In computer architecture, Gustafson's law gives the speedup in the execution time of a task that theoretically gains from parallel computing, using a hypothetical run of the task on a single-core machine as the baseline. To put it another way, it is the theoretical "slowdown" of an already parallelized task if running on a serial machine. It is named after computer scientist John L. Gustafson and his colleague Edwin H. Barsis, and was presented in the article Reevaluating Amdahl's Law in 1988.

A discrete-event simulation (DES) models the operation of a system as a (discrete) sequence of events in time. Each event occurs at a particular instant in time and marks a change of state in the system. Between consecutive events, no change in the system is assumed to occur; thus the simulation time can directly jump to the occurrence time of the next event, which is called next-event time progression.

In computing, computer performance is the amount of useful work accomplished by a computer system. Outside of specific contexts, computer performance is estimated in terms of accuracy, efficiency and speed of executing computer program instructions. When it comes to high computer performance, one or more of the following factors might be involved:

In multiprocessor computer systems, software lockout is the issue of performance degradation due to the idle wait times spent by the CPUs in kernel-level critical sections. Software lockout is the major cause of scalability degradation in a multiprocessor system, posing a limit on the maximum useful number of processors. To mitigate the phenomenon, the kernel must be designed to have its critical sections as short as possible, therefore decomposing each data structure in smaller substructures.

In queueing theory, a discipline within the mathematical theory of probability, mean value analysis (MVA) is a recursive technique for computing expected queue lengths, waiting time at queueing nodes and throughput in equilibrium for a closed separable system of queues. The first approximate techniques were published independently by Schweitzer and Bard, followed later by an exact version by Lavenberg and Reiser published in 1980.

Within theoretical computer science, the Sun–Ni law is a memory-bounded speedup model which states that as computing power increases the corresponding increase in problem size is constrained by the system’s memory capacity. In general, as a system grows in computational power, the problems run on the system increase in size. Analogous to Amdahl's law, which says that the problem size remains constant as system sizes grow, and Gustafson's law, which proposes that the problem size should scale but be bound by a fixed amount of time, the Sun–Ni law states the problem size should scale but be bound by the memory capacity of the system. Sun–Ni law was initially proposed by Xian-He Sun and Lionel Ni at the Proceedings of IEEE Supercomputing Conference 1990.

MQX is a real-time operating system (RTOS) developed by Precise Software Technologies, Inc., and currently sold by Synopsys, Embedded Access, Inc., and NXP Semiconductors.

Kunle Olukotun: British-born Nigerian computer scientist

Oyekunle Ayinde "Kunle" Olukotun is a British-born Nigerian computer scientist who is the Cadence Design Systems Professor of the Stanford School of Engineering, Professor of Electrical Engineering and Computer Science at Stanford University and the director of the Stanford Pervasive Parallelism Lab. Olukotun is known as the “father of the multi-core processor”, and the leader of the Stanford Hydra Chip Multiprocessor research project. Olukotun's achievements include designing the first general-purpose multi-core CPU, innovating single-chip multiprocessor and multi-threaded processor design, and pioneering multicore CPUs and GPUs, transactional memory technology and domain-specific languages programming models. Olukotun's research interests include computer architecture, parallel programming environments and scalable parallel systems, domain specific languages and high-level compilers.

References

  1. Microsoft developer blog comparing Amdahl's law with Gunther's law (2009)
  2. Computer Measurement Group Interview part 1 Archived 22 July 2011 at the Wayback Machine and part 2 (2009)
  3. Springer author biography
  4. Oracle performance experts
  5. La Trobe University alumnus profile Archived 7 June 2011 at the Wayback Machine
  6. Interview with John C. Dvorak (1998)
  7. D. L. Boiko; Neil J. Gunther; N. Brauer; M. Sergio; C. Niclass; G. Beretta; E. Charbon (2009). "A Quantum Imager for Intensity Correlated Photons". New Journal of Physics.
  8. Gunther, Neil J. (1982). "Solitons and Their Role in the Degradation of Modified Silicon-Germanium Alloys". Proc. IEEE Fourth Int. Conf. on Thermoelectric Energy Conversion (PDF). IEEE, Volume 82CH1763-2, pp. 89–95.
  9. Gunther, Neil J. (1989). "Path Integral Methods for Computer Performance Analysis". Information Processing Letters. 32: 7–13. doi:10.1016/0020-0190(89)90061-6.
  10. Gunther, Neil J.; Charbon, E.; Boiko, D. L.; Beretta, G. (2006). "Photonic Information Processing Needs Quantum Design Rules". SPIE Online.
  11. E. G. Steward (2004). Fourier Optics: An Introduction. Dover. ISBN 978-0-486-43504-6.
  12. Gunther, Neil J. (1992). "On the Application of Barycentric Coordinates to the Prompt and Visually Efficient Display of Multiprocessor Performance Data". Proc. VI International Conf. on Modelling Techniques and Tools for Computer Performance Evaluation, Edinburgh, Scotland. Antony Rowe Ltd., Wiltshire, U.K., pp. 67–80. ISBN 978-0-7486-0425-8.
  13. Gunther, Neil J. (1993). "A Simple Capacity Model for Massively Parallel Transaction Systems" (PDF). Proc. CMG Conf., San Diego, California. CMG, pp. 1035–1044.
  14. Neil J. Gunther (2008). "A General Theory of Computational Scalability Based on Rational Functions". arXiv: 0808.1431v2 [cs.PF].
  15. D. Gross & C. M. Harris (1998). Fundamentals of Queueing Theory. Wiley-Interscience. ISBN 978-0-471-17083-9.
  16. Gunther, Neil J. (2002). "A New Interpretation of Amdahl's Law and Geometric Scalability". arXiv: cs/0210017 .