Keno Fischer | |
---|---|
Born | Germany |
Nationality | German [1] |
Alma mater | Harvard |
Known for | Julia (programming language); projects including Celeste and an exascale deep-learning project to remediate nuclear waste |
Awards | Individual: Forbes 2019 30 Under 30 – Enterprise Technology; in collaboration with others: 2017 HPC Innovation Excellence Award |
Scientific career | |
Fields | Computer science, Mathematics |
Website | https://github.com/Keno |
Keno Fischer is a German computer scientist known as a core developer of the Julia programming language, [2] notably implementing its Windows support. [3] [4] [5] [6] [7] He received a B.A. in mathematics and physics from Harvard in 2016 [9] and completed a Master of Arts in physics, also from Harvard, in 2016. He works at Julia Computing, which he co-founded with the Julia co-creators Alan Edelman, Jeff Bezanson, Stefan Karpinski, and Viral B. Shah, as well as Deepak Vinchhi. [8]
At the age of 25, Fischer was selected by Forbes for its 2019 30 Under 30 – Enterprise Technology list [10] for his work at Julia Computing.
Fischer, along with the rest of the Celeste team, [11] was awarded the 2017 HPC Innovation Excellence Award for "the outstanding application of HPC for business and scientific achievements." [12] The Celeste project, which ran on a top-6 supercomputer, "created the first comprehensive catalog of visible objects in our universe by processing 178 terabytes of SDSS (Sloan Digital Sky Survey) data". [13] "Collecting all known data about the visible universe into a meaningful model certainly is a big data problem." [14]
Fischer is one of the researchers applying exascale simulation to nuclear-waste remediation, in a collaboration that includes Brown University, Nvidia, and Lawrence Berkeley National Laboratory, working on "a deep learning application [..] focused on the Hanford Site, established in 1943 as part of the Manhattan Project to produce plutonium for nuclear weapons and eventually home to the first full-scale plutonium production reactor in the world [..] When plutonium production ended in 1989, left behind were tens of millions of gallons of radioactive and chemical waste in large underground tanks and more than 100 square miles of contaminated groundwater [..] the team was able to achieve 1.2 exaflop peak and sustained performance – the first example of a large-scale GAN architecture applied to SPDEs." [15] "They trained the GAN on the Summit supercomputer, which (as of the June 2019 Top500 list) remains the world's fastest publicly-ranked supercomputer at 148.6 Linpack petaflops. The team achieved peak and sustained performance of 1.2 exaflops, scaling to 27,504 of Summit's Nvidia V100 GPUs and 4,584 of its nodes. [..] This physics-informed GAN, trained by HPC, allowed the researchers to quantify their uncertainties about the subsurface flow in the site." [16] The site is "one of the most contaminated sites in the western hemisphere". [17]
Fischer is also the lead developer of several projects in the Julia ecosystem, such as Cxx.jl (C++ interoperability) [18] and XLA.jl (which targets Google's TPUs). [19] He also works on supporting the Julia language on WebAssembly: in the first half of 2019, Mozilla, the maker of the Firefox web browser, sponsored "a member of the official Julia team" for the project "Bringing Julia to the Browser" as part of its research grants. [20] Additionally, Fischer has worked on Mozilla's rr debugging tool.
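Cxx.jl's approach, a foreign function interface that lets C++ code be written and called directly from Julia, can be sketched as follows (an illustrative example rather than code from the package documentation; the function name `cxx_hypot` is hypothetical, and the exact macro set may vary between Cxx.jl releases):

```julia
using Cxx  # C++ foreign function interface for Julia

# Declare a C++ function inline with the cxx"" string macro
cxx"""
#include <cmath>
double cxx_hypot(double a, double b) { return std::sqrt(a * a + b * b); }
"""

# Call the C++ function from Julia through the @cxx macro
@cxx cxx_hypot(3.0, 4.0)   # expected to return 5.0
```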
A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). Since 2017, there have been supercomputers that can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's 500 fastest supercomputers have run Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful and technologically superior exascale supercomputers.
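To make these orders of magnitude concrete, a rough back-of-the-envelope comparison using the round figures quoted above:

```julia
# Order-of-magnitude comparison of the figures quoted above
supercomputer_flops = 1.0e17   # 100 petaFLOPS (100 PFLOPS)
desktop_flops       = 1.0e11   # hundreds of gigaFLOPS

supercomputer_flops / desktop_flops   # ≈ 1.0e6, i.e. about a million times faster
```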
High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.
Linaro DDT is a commercial C, C++ and Fortran 90 debugger. It is widely used for debugging parallel Message Passing Interface (MPI) and threaded programs, including those running on clusters of Linux machines.
The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing and bases rankings on HPL benchmarks, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.
Sun Constellation System is an open petascale computing environment introduced by Sun Microsystems in 2007.
Blue Waters was a petascale supercomputer operated by the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. On August 8, 2007, the National Science Board approved a resolution which authorized the National Science Foundation to fund "the acquisition and deployment of the world's most powerful leadership-class supercomputer." The NSF awarded $208 million for the Blue Waters project.
PERCS is IBM's answer to DARPA's High Productivity Computing Systems (HPCS) initiative. The program resulted in commercial development and deployment of the Power 775, a supercomputer design with extremely high performance ratios in fabric and memory bandwidth, as well as very high performance density and power efficiency.
Exascale computing refers to computing systems capable of calculating at least "10^18 IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (exaFLOPS)"; it is a measure of supercomputer performance.
This list compares various amounts of computing power, organized by order of magnitude in FLOPS.
Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.
Julia is a high-level, general-purpose dynamic programming language, most commonly used for numerical analysis and computational science. Distinctive aspects of Julia's design include a type system with parametric polymorphism and the use of multiple dispatch as a core programming paradigm, efficient garbage collection, and a just-in-time (JIT) compiler.
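The two design features mentioned above can be sketched briefly (a minimal, hypothetical example rather than code from the Julia documentation):

```julia
# Multiple dispatch: one generic function, several methods;
# the method is chosen from the runtime types of *all* arguments.
describe(x::Integer, y::Integer) = "two integers"
describe(x::Real,    y::Real)    = "two real numbers"

describe(1, 2)     # "two integers"
describe(1.5, 2)   # "two real numbers"

# Parametric polymorphism: a single struct definition parameterized over T.
struct Point{T<:Real}
    x::T
    y::T
end

Point(1, 2)       # Point{Int64}
Point(1.0, 2.0)   # Point{Float64}
```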
The Multiprogram Research Facility is a facility at the Oak Ridge National Laboratory in Oak Ridge, Tennessee. It is used by the U.S. National Security Agency (NSA) to design and build supercomputers for cryptanalysis and other classified projects. It houses the classified component program of the High Productivity Computing Systems (HPCS) project sponsored by the Defense Advanced Research Projects Agency (DARPA).
The National Strategic Computing Initiative (NSCI) is a United States initiative calling for the accelerated development of technologies for exascale supercomputers, and funding research into post-semiconductor computing. The initiative was created by an executive order issued by President Barack Obama in July 2015. Ten United States government departments and independent agencies are involved in the initiative. The initiative initially brought together existing programs, with some dedicated funding increases proposed in the Obama administration's 2017 budget request. The initiative's strategic plan was released in July 2016.
Stefan Karpinski is an American computer scientist known for being a co-creator of the Julia programming language. He is an alumnus of Harvard and works at Julia Computing, which he co-founded with the Julia co-creators Alan Edelman, Jeff Bezanson, and Viral B. Shah, as well as Keno Fischer and Deepak Vinchhi.
ROCm is an Advanced Micro Devices (AMD) software stack for graphics processing unit (GPU) programming. ROCm spans several domains: general-purpose computing on graphics processing units (GPGPU), high performance computing (HPC), heterogeneous computing. It offers several programming models: HIP, OpenMP/Message Passing Interface (MPI), and OpenCL.
Flux is an open-source machine-learning software library and ecosystem written in Julia. Its current stable release is v0.14.5. It has a layer-stacking-based interface for simpler models, and it emphasizes interoperability with other Julia packages rather than a monolithic design. For example, GPU support is implemented transparently by CuArrays.jl. This is in contrast to some other machine learning frameworks, such as TensorFlow.jl, which are implemented in other languages with Julia bindings and are thus more limited by the functionality present in the underlying implementation, often written in C or C++. Flux joined NumFOCUS as an affiliated project in December 2021.
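The layer-stacking interface can be sketched as follows (layer sizes and input data are hypothetical; the `=>` size syntax assumes a recent Flux release such as the v0.14 series mentioned above):

```julia
using Flux

# Chain stacks layers; Dense is a fully connected layer with an optional activation
model = Chain(
    Dense(784 => 32, relu),
    Dense(32 => 10),
    softmax)

x = rand(Float32, 784)   # a single dummy input vector
y = model(x)             # forward pass; the entries of y sum to ≈ 1
```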
The European High-Performance Computing Joint Undertaking is a public-private partnership in High Performance Computing (HPC), enabling the pooling of European Union–level resources with the resources of participating EU Member States and participating associated states of the Horizon Europe and Digital Europe programmes, as well as private stakeholders. The Joint Undertaking has the twin stated aims of developing a pan-European supercomputing infrastructure, and supporting research and innovation activities. Located in Luxembourg City, Luxembourg, the Joint Undertaking started operating in November 2018 under the control of the European Commission and became autonomous in 2020.
Cerebras Systems Inc. is an American artificial intelligence company with offices in Sunnyvale and San Diego, Toronto, Tokyo and Bangalore, India. Cerebras builds computer systems for complex artificial intelligence deep learning applications.
LUMI is a petascale supercomputer located at the CSC data center in Kajaani, Finland. As of January 2023, the computer is the fastest supercomputer in Europe.
JUWELS is a supercomputer developed by Atos and hosted by the Jülich Supercomputing Centre (JSC) of the Forschungszentrum Jülich. It is capable of a theoretical peak of 70.980 petaflops and it serves as the replacement of the now out-of-operation JUQUEEN supercomputer. JUWELS Booster Module was ranked as the seventh fastest supercomputer in the world at its debut on the November 2020 TOP500 list. The JUWELS Booster Module is part of a modular system architecture and a second Xeon based JUWELS Cluster Module ranked separately as the 44th fastest supercomputer in the world on the November 2020 TOP500 list.