Peter Cornwell (born 2 July 1958) is a British computer scientist and media theorist. He developed integrated circuits for early parallel computers and undertook pioneering work in high-performance computer image generation and public display systems.
Cornwell studied electronics and computing science in London after studying fine art in the Netherlands, then joined Texas Instruments, where he worked on the first microprocessors and became head of European research for TI's Industrial Systems Division.
He worked as an expert for the EU's European Strategic Program on Research in Information Technology and for the UK's Science and Engineering Research Council on advanced computer architectures, and developed London University's successful bid to establish its Centre for Parallel Computing. His current research is in archive infrastructures and sustainable data.
In 2016 he founded the archive research company data-futures. Since 2016 he has been a research fellow at the École normale supérieure lettres et sciences humaines, Lyon.[1] Since 1998 he has run the London media research company BLIP, which has undertaken commercial and cultural public display installations, operates an international display infrastructure and funds university research projects. He has exhibited media art in Austria, Finland, Germany, Japan, the UK and the US, organised the 2007 Media Architecture conference and has been visiting professor of computing and art at several Austrian, German and UK universities.
In 1989 Cornwell started Division Inc., a California high-performance computer graphics company that developed 3D simulation systems for NASA and for aerospace, architecture, networking and pharmaceutical companies. He founded the Visual Theory Group at Imperial College, London, and later became head of the Institute of Visual Media at ZKM, Karlsruhe.