Keno Fischer

Born: Germany
Nationality: German [1]
Alma mater: Harvard
Known for: Julia (programming language); projects including Celeste and an exaflops project to remediate nuclear waste
Awards: Forbes 2019 30 Under 30 – Enterprise Technology (individual); 2017 HPC Innovation Excellence Award (in collaboration with others)
Scientific career
Fields: Computer science, mathematics
Website: https://github.com/Keno

Keno Fischer is a German computer scientist known as a core developer of the Julia programming language [2] (for example, its Windows support). [3] [4] [5] [6] [7] He received a B.A. in mathematics and physics and a Master of Arts in physics, both from Harvard in 2016. [9] He works at Julia Computing, which he co-founded with Julia co-creators Alan Edelman, Jeff Bezanson, Stefan Karpinski, and Viral B. Shah, as well as Deepak Vinchhi. [8]

At the age of 25, Fischer was selected for Forbes' 2019 30 Under 30 – Enterprise Technology list [10] for his work with Julia Computing.

Fischer, along with the rest of the Celeste team, [11] received the 2017 HPC Innovation Excellence Award for "the outstanding application of HPC for business and scientific achievements." [12] The Celeste project, which ran on a top-6 supercomputer, "created the first comprehensive catalog of visible objects in our universe by processing 178 terabytes of SDSS (Sloan Digital Sky Survey) data". [13] "Collecting all known data about the visible universe into a meaningful model certainly is a big data problem." [14]

Fischer is one of the exascale-simulation researchers helping to remediate nuclear waste, in a collaboration that includes Brown University, Nvidia, and Lawrence Berkeley National Laboratory, with "a deep learning application [..] focused on the Hanford Site, established in 1943 as part of the Manhattan Project to produce plutonium for nuclear weapons and eventually home to the first full-scale plutonium production reactor in the world [..] When plutonium production ended in 1989, left behind were tens of millions of gallons of radioactive and chemical waste in large underground tanks and more than 100 square miles of contaminated groundwater [..] the team was able to achieve 1.2 exaflop peak and sustained performance – the first example of a large-scale GAN architecture applied to SPDEs." [15] "They trained the GAN on the Summit supercomputer, which (as of the June 2019 Top500 list) remains the world's fastest publicly-ranked supercomputer at 148.6 Linpack petaflops. The team achieved peak and sustained performance of 1.2 exaflops, scaling to 27,504 of Summit's Nvidia V100 GPUs and 4,584 of its nodes. [..] This physics-informed GAN, trained by HPC, allowed the researchers to quantify their uncertainties about the subsurface flow in the site." [16] The site is "one of the most contaminated sites in the western hemisphere". [17]

Fischer is also the lead developer of several Julia-language projects, such as Cxx.jl [18] and XLA.jl (which targets Google's TPUs). [19] He also works on supporting the Julia language on WebAssembly. In the first half of 2019, Mozilla, the maker of the Firefox web browser, sponsored "a member of the official Julia team" for the project "Bringing Julia to the Browser" as part of its research grants. [20] Additionally, Fischer has worked on Mozilla's rr debugging tool.

Related Research Articles

Supercomputer – Type of extremely powerful computer

A supercomputer is a type of computer with a high level of performance as compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). Since 2017, supercomputers have existed that can perform over 10^17 FLOPS (a hundred quadrillion FLOPS, 100 petaFLOPS or 100 PFLOPS). For comparison, a desktop computer has performance in the range of hundreds of gigaFLOPS (10^11) to tens of teraFLOPS (10^13). Since November 2017, all of the world's fastest 500 supercomputers have run on Linux-based operating systems. Additional research is being conducted in the United States, the European Union, Taiwan, Japan, and China to build faster, more powerful, and technologically superior exascale supercomputers.
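The orders of magnitude in this comparison can be checked with a quick calculation; the figures below are the ones quoted above (a minimal sketch, not a benchmark):

```python
# Performance tiers quoted above, expressed in FLOPS
# (floating-point operations per second).
GIGA = 1e9
TERA = 1e12
PETA = 1e15

desktop_low = 100 * GIGA        # hundreds of gigaFLOPS -> about 10^11 FLOPS
desktop_high = 10 * TERA        # tens of teraFLOPS     -> about 10^13 FLOPS
supercomputer = 100 * PETA      # 100 petaFLOPS         -> 10^17 FLOPS

# A 100 PFLOPS supercomputer outpaces a 100 GFLOPS desktop
# by a factor of one million.
print(supercomputer / desktop_low)  # 1000000.0
```

The same arithmetic shows why the exascale threshold (10^18 FLOPS) sits another order of magnitude above the 100 PFLOPS class of machines.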

High-performance computing – Computing with supercomputers and clusters

High-performance computing (HPC) uses supercomputers and computer clusters to solve advanced computation problems.

Linaro DDT is a commercial C, C++ and Fortran 90 debugger. It is widely used for debugging parallel Message Passing Interface (MPI) and threaded programs, including those running on clusters of Linux machines.

TOP500 – Database project devoted to the ranking of computers

The TOP500 project ranks and details the 500 most powerful non-distributed computer systems in the world. The project was started in 1993 and publishes an updated list of the supercomputers twice a year. The first of these updates always coincides with the International Supercomputing Conference in June, and the second is presented at the ACM/IEEE Supercomputing Conference in November. The project aims to provide a reliable basis for tracking and detecting trends in high-performance computing, and bases rankings on HPL, a portable implementation of the high-performance LINPACK benchmark written in Fortran for distributed-memory computers.

Sun Constellation System is an open petascale computing environment introduced by Sun Microsystems in 2007.

Blue Waters – Supercomputer at the University of Illinois at Urbana-Champaign, United States

Blue Waters was a petascale supercomputer operated by the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign. On August 8, 2007, the National Science Board approved a resolution which authorized the National Science Foundation to fund "the acquisition and deployment of the world's most powerful leadership-class supercomputer." The NSF awarded $208 million for the Blue Waters project.

PERCS

PERCS is IBM's answer to DARPA's High Productivity Computing Systems (HPCS) initiative. The program resulted in commercial development and deployment of the Power 775, a supercomputer design with extremely high performance ratios in fabric and memory bandwidth, as well as very high performance density and power efficiency.

Exascale computing refers to computing systems capable of calculating at least "10^18 IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (exaFLOPS)"; it is a measure of supercomputer performance.

This list compares various amounts of computing power in operations per second (FLOPS), organized by order of magnitude.

Supercomputing in Europe – Overview of supercomputing in Europe

Several centers for supercomputing exist across Europe, and distributed access to them is coordinated by European initiatives to facilitate high-performance computing. One such initiative, the HPC Europa project, fits within the Distributed European Infrastructure for Supercomputing Applications (DEISA), which was formed in 2002 as a consortium of eleven supercomputing centers from seven European countries. Operating within the CORDIS framework, HPC Europa aims to provide access to supercomputers across Europe.

Julia (programming language) – Dynamic programming language

Julia is a high-level, general-purpose dynamic programming language, most commonly used for numerical analysis and computational science. Distinctive aspects of Julia's design include a type system with parametric polymorphism and the use of multiple dispatch as a core programming paradigm, efficient garbage collection, and a just-in-time (JIT) compiler.
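Multiple dispatch means the language selects a method implementation based on the runtime types of all of a call's arguments, not just the first. As a rough illustration in Python (a hypothetical, minimal dispatcher written for this article, not Julia's actual mechanism and not a real library API):

```python
# Minimal multiple-dispatch sketch: choose an implementation by the
# runtime types of *all* arguments, as Julia's method system does.
_methods = {}

def defmethod(*types):
    """Register a function under (name, argument-type signature)."""
    def register(fn):
        _methods[(fn.__name__, types)] = fn
        return fn
    return register

def dispatch(name, *args):
    """Look up the implementation matching the actual argument types."""
    return _methods[(name, tuple(type(a) for a in args))](*args)

@defmethod(int, int)
def combine(a, b):
    return a + b            # two integers: arithmetic sum

@defmethod(str, str)
def combine(a, b):
    return a + " " + b      # two strings: concatenation

print(dispatch("combine", 2, 3))                     # 5
print(dispatch("combine", "multiple", "dispatch"))   # multiple dispatch
```

In Julia itself, defining `combine(a::Int, b::Int)` and `combine(a::String, b::String)` achieves this directly, and the compiler specializes each method at JIT-compilation time rather than via a dictionary lookup.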

The Multiprogram Research Facility is a facility at the Oak Ridge National Laboratory in Oak Ridge, Tennessee. It is used by the U.S. National Security Agency (NSA) to design and build supercomputers for cryptanalysis and other classified projects. It houses the classified component program of the High Productivity Computing Systems (HPCS) project sponsored by the Defense Advanced Research Projects Agency (DARPA).

The National Strategic Computing Initiative (NSCI) is a United States initiative calling for the accelerated development of technologies for exascale supercomputers, and funding research into post-semiconductor computing. The initiative was created by an executive order issued by President Barack Obama in July 2015. Ten United States government departments and independent agencies are involved in the initiative. The initiative initially brought together existing programs, with some dedicated funding increases proposed in the Obama administration's 2017 budget request. The initiative's strategic plan was released in July 2016.

Stefan Karpinski is an American computer scientist known for being a co-creator of the Julia programming language. He is an alumnus of Harvard and works at Julia Computing, which he co-founded with Julia co-creators, Alan Edelman, Jeff Bezanson, Viral B. Shah as well as Keno Fischer and Deepak Vinchhi.

ROCm – Parallel computing platform: GPGPU libraries and application programming interface

ROCm is an Advanced Micro Devices (AMD) software stack for graphics processing unit (GPU) programming. ROCm spans several domains: general-purpose computing on graphics processing units (GPGPU), high-performance computing (HPC), and heterogeneous computing. It offers several programming models: HIP, OpenMP/Message Passing Interface (MPI), and OpenCL.

Flux (machine-learning framework) – Open-source machine-learning software library

Flux is an open-source machine-learning software library and ecosystem written in Julia. Its current stable release is v0.14.5. It has a layer-stacking-based interface for simpler models, and emphasizes interoperability with other Julia packages rather than a monolithic design. For example, GPU support is implemented transparently by CuArrays.jl. This is in contrast to some machine learning frameworks that are implemented in other languages with Julia bindings, such as TensorFlow.jl, and are thus more limited by the functionality present in the underlying implementation, which is often in C or C++. Flux joined NumFOCUS as an affiliated project in December 2021.

The European High-Performance Computing Joint Undertaking is a public-private partnership in High Performance Computing (HPC), enabling the pooling of European Union–level resources with the resources of participating EU Member States and participating associated states of the Horizon Europe and Digital Europe programmes, as well as private stakeholders. The Joint Undertaking has the twin stated aims of developing a pan-European supercomputing infrastructure, and supporting research and innovation activities. Located in Luxembourg City, Luxembourg, the Joint Undertaking started operating in November 2018 under the control of the European Commission and became autonomous in 2020.

Cerebras – American semiconductor company

Cerebras Systems Inc. is an American artificial intelligence company with offices in Sunnyvale, San Diego, Toronto, Tokyo, and Bangalore, India. Cerebras builds computer systems for complex artificial intelligence deep learning applications.

LUMI – Supercomputer in Finland

LUMI is a petascale supercomputer located at the CSC data center in Kajaani, Finland. As of January 2023, the computer is the fastest supercomputer in Europe.

JUWELS – Supercomputer in Germany

JUWELS is a supercomputer developed by Atos and hosted by the Jülich Supercomputing Centre (JSC) of the Forschungszentrum Jülich. It is capable of a theoretical peak of 70.980 petaflops and serves as the replacement for the now-decommissioned JUQUEEN supercomputer. The JUWELS Booster Module was ranked as the seventh fastest supercomputer in the world at its debut on the November 2020 TOP500 list. The Booster Module is part of a modular system architecture; a second, Xeon-based JUWELS Cluster Module ranked separately as the 44th fastest supercomputer in the world on the same list.

References

  1. "r/IAmA - Comment by u/loladiro on "We've spent the past 9 years developing a new programming language. We're the core developers of the Julia Programming Language. AuA."". reddit. Retrieved December 20, 2020.
  2. "From Tree Leaves to Galaxies: Keno Fischer's Interview with Robin.ly - Julia Computing". juliacomputing.com. Retrieved December 20, 2020. how a 16 year-old German exchange student became a 19 year-old co-founder of Julia Computing
  3. "Why the creators of the Julia programming language just launched a startup". VentureBeat. May 18, 2015. Retrieved June 20, 2016.
  4. Bryant, Avi (October 15, 2012). "Matlab, R, and Julia: Languages for data analysis". O'Reilly Strata. Archived from the original on May 28, 2013.
  5. Krill, Paul (April 18, 2012). "New Julia language seeks to be the C for scientists". InfoWorld. Retrieved October 13, 2020.
  6. Finley, Klint (February 3, 2014). "Out in the Open: Man Creates One Programming Language to Rule Them All". Wired.
  7. Gibbs, Mark (January 9, 2013). "Pure and Julia are cool languages worth checking out". Computerworld. Retrieved October 13, 2020.
  8. www.ETtech.com. "Julia founders create new startup to take language commercial | ETtech". The Economic Times. Retrieved June 20, 2016.
  9. Fischer, Keno. "Resume". linkedin.com. Retrieved October 2, 2020.
  10. "Keno Fischer on Forbes 30 under 30". Forbes. Retrieved October 2, 2020.
  11. Regier, Jeffrey; Fischer, Keno; Pamnany, Kiran; Noack, Andreas; Revels, Jarrett; Lam, Maximilian; Howard, Steve; Giordano, Ryan; Schlegel, David; McAuliffe, Jon; Thomas, Rollin (May 1, 2019). "Cataloging the visible universe through Bayesian inference in Julia at petascale". Journal of Parallel and Distributed Computing. 127: 89–104. arXiv: 1801.10277 . doi:10.1016/j.jpdc.2018.12.008. ISSN   0743-7315. OSTI   1656511. S2CID   78090394.
  12. "National Energy Research Scientific Computing Center: 2017 Annual Report" (PDF). US Department of Energy: Office of Science. 2017.
  13. Farber, Rob (November 28, 2017). "Julia Language Delivers Petascale HPC Performance". The Next Platform. Retrieved October 13, 2020.
  14. "A Big Data Journey While Seeking to Catalog our Universe". HPCwire. January 16, 2019. Retrieved October 13, 2020.
  15. "Deep Learning Expands Study of Nuclear Waste Remediation". cs.lbl.gov. Retrieved October 14, 2020.
  16. "Leveraging Exaflops Performance to Remediate Nuclear Waste". HPCwire. November 12, 2019. Retrieved October 14, 2020.
  17. Yang, Liu; Treichler, Sean; Kurth, Thorsten; Fischer, Keno; Barajas-Solano, David; Romero, Josh; Churavy, Valentin; Tartakovsky, Alexandre; Houston, Michael; Prabhat; Karniadakis, George (October 28, 2019). "Highly-scalable, physics-informed GANs for learning solutions of stochastic PDEs". arXiv: 1910.13444 [physics.comp-ph].
  18. "JuliaInterop/Cxx.jl". GitHub . Retrieved December 20, 2020.
  19. Fischer, Keno; Saba, Elliot (October 23, 2018). "Automatic Full Compilation of Julia Programs and ML Models to Cloud TPUs". arXiv: 1810.09868 [cs.PL].
  20. Cimpanu, Catalin. "Mozilla is funding a way to support Julia in Firefox". ZDNet. Retrieved September 22, 2019.