| Phillip Colella | |
| --- | --- |
| Born | June 28, 1952 |
| Nationality | American |
| Alma mater | University of California, Berkeley |
| Known for | High-resolution schemes; adaptive mesh refinement |
| Awards | Member, National Academy of Sciences (2004); SIAM/ACM Prize in Computational Science and Engineering (2003); Sidney Fernbach Award (1998) |
| Scientific career | |
| Fields | Applied mathematics |
| Institutions | Lawrence Livermore National Laboratory; Lawrence Berkeley National Laboratory; University of California, Berkeley |
| Thesis | An Analysis of the Effect of Operator Splitting and of the Sampling Procedure on the Accuracy of Glimm's Method (1979) |
| Doctoral advisor | Alexandre Chorin |
Phillip Colella is an American applied mathematician and a member of the Applied Numerical Algorithms Group at the Lawrence Berkeley National Laboratory. He has also worked at Lawrence Livermore National Laboratory. He is known for his fundamental contributions to the development of mathematical methods and numerical tools used to solve partial differential equations, including high-resolution and adaptive mesh refinement schemes. Colella is a member of the US National Academy of Sciences. [1]
Colella received his bachelor's degree in 1974, master's degree in 1976, and Ph.D. in 1979 from the University of California, Berkeley, all in applied mathematics. [2] He completed his Ph.D. under the supervision of Alexandre Chorin. He began his research career at Lawrence Berkeley National Laboratory in Berkeley, California. His primary area of research involves the development of high-resolution schemes and adaptive mesh refinement methods for the solution of partial differential equations. He has also applied computational methods in a variety of scientific and engineering fields, including low-speed incompressible flows, shock wave theory, combustion, magnetohydrodynamics, and astrophysical flows. [3] Colella has also been the leader of a project in NASA's Computational Technologies for Earth and Space Sciences, called "Block-Structured Adaptive Mesh Refinement Methods for Multiphase Microgravity Flows and Star Formation". [1]
Colella has been a member of the National Academy of Sciences since 2004 and is a Fellow of the Society for Industrial and Applied Mathematics (SIAM). [4] He is the recipient of many honors, including the Sidney Fernbach Award from the IEEE Computer Society in 1998, given each year to one person who has made "an outstanding contribution in the application of high performance computers using innovative approaches." [5] He also received the SIAM/ACM Prize in Computational Science and Engineering (with John Bell) in 2003. [6]
Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial validation of such software is typically performed using experimental apparatus such as wind tunnels. In addition, previously performed analytical or empirical analysis of a particular problem can be used for comparison. A final validation is often performed using full-scale testing, such as flight tests.
In numerical analysis, adaptive mesh refinement (AMR) is a method of adapting the accuracy of a solution within sensitive or turbulent regions of a simulation, dynamically, while the solution is being calculated. Numerical solutions are often computed on a predetermined set of points, such as a uniform Cartesian grid, that constitutes the computational grid, or 'mesh'. Many problems in numerical analysis, however, do not require uniform precision throughout the domain, and are better served if the mesh can be refined only in the regions that need the added precision. Adaptive mesh refinement does exactly this: it adjusts the precision of the numerical computation dynamically, concentrating resolution in the regions of the domain that require it while leaving the remaining regions at a coarser resolution.
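As a concrete illustration, here is a minimal Python sketch of one refinement pass in one dimension: cells where the solution gradient is steep are flagged and halved. The function name and threshold are hypothetical, not drawn from any production AMR code.

```python
import numpy as np

def refine_1d(x, u, threshold):
    """Flag cells where the solution gradient is steep and halve them.

    x : cell-center coordinates of a uniform coarse grid
    u : solution values at those centers
    Returns a locally refined array of cell centers.
    """
    dx = x[1] - x[0]
    grad = np.abs(np.gradient(u, dx))   # estimate the local gradient
    flagged = grad > threshold          # cells needing extra resolution
    new_x = []
    for xi, fi in zip(x, flagged):
        if fi:
            # split the flagged cell into two finer cells
            new_x.extend([xi - dx / 4, xi + dx / 4])
        else:
            new_x.append(xi)
    return np.array(new_x)

# Example: refine around a steep tanh front at x = 0.5
x = np.linspace(0.0, 1.0, 33)
u = np.tanh((x - 0.5) / 0.05)
x_fine = refine_1d(x, u, threshold=2.0)
print(f"{len(x)} coarse cells -> {len(x_fine)} cells after one pass")
```

Production AMR codes repeat such passes recursively and organize the refined cells into block-structured patches, but the flag-and-refine cycle above is the core idea.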
Mesh generation is the practice of creating a mesh, a subdivision of a continuous geometric space into discrete geometric and topological cells. Often these cells form a simplicial complex. Usually the cells partition the geometric input domain. Mesh cells are used as discrete local approximations of the larger domain. Meshes are created by computer algorithms, often with human guidance through a GUI, depending on the complexity of the domain and the type of mesh desired. A typical goal is to create a mesh that accurately captures the input domain geometry, with high-quality (well-shaped) cells, and without so many cells as to make subsequent calculations intractable. The mesh should also be fine in areas that are important for the subsequent calculations.
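As a small example of the practice described above (an assumed toy case, not the output of any particular mesh generator), the following sketch triangulates the unit square by splitting each cell of a uniform lattice into two well-shaped triangles.

```python
import numpy as np

def unit_square_mesh(n):
    """Triangulate the unit square: n-by-n cells, two triangles each."""
    pts = np.linspace(0.0, 1.0, n + 1)
    # vertex (i, j) gets the flat index i * (n + 1) + j
    vertices = np.array([(x, y) for x in pts for y in pts])
    triangles = []
    for i in range(n):
        for j in range(n):
            v00 = i * (n + 1) + j        # lower-left corner of cell (i, j)
            v01 = v00 + 1                # upper-left
            v10 = v00 + (n + 1)          # lower-right
            v11 = v10 + 1                # upper-right
            triangles.append((v00, v10, v11))  # first half of the cell
            triangles.append((v00, v11, v01))  # second half
    return vertices, np.array(triangles)

verts, tris = unit_square_mesh(4)
print(f"{len(verts)} vertices, {len(tris)} well-shaped triangular cells")
```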
High-resolution schemes are used in the numerical solution of partial differential equations where high accuracy is required in the presence of shocks or discontinuities. They have the following properties:

- Second- or higher-order spatial accuracy is obtained in smooth parts of the solution.
- Solutions are free from spurious oscillations or wiggles.
- High accuracy is obtained around shocks and discontinuities.
- The number of mesh points containing the wave is small compared with a first-order scheme of similar accuracy.
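These properties can be seen in a minimal sketch of a slope-limited (MUSCL-type) scheme for linear advection, u_t + a u_x = 0 with a > 0, on a periodic grid: the minmod limiter retains second-order accuracy in smooth regions while suppressing spurious oscillations at the discontinuity. This is an illustrative sketch with a time-centered upwind flux, not any specific published code.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller slope when signs agree, else zero."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def step(u, cfl):
    """One step of slope-limited upwind advection (wave speed a = 1)."""
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
    flux = u + 0.5 * (1.0 - cfl) * slope   # time-centered face values
    return u - cfl * (flux - np.roll(flux, 1))

n, cfl = 200, 0.5
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)  # square pulse with two jumps
for _ in range(100):
    u = step(u, cfl)
# No spurious over/undershoots: the solution stays inside [0, 1]
print("min/max after 100 steps:", u.min(), u.max())
```

With the unlimited centered slope this update reduces to the classical Lax-Wendroff scheme, which oscillates at the jumps; the limiter is what makes the scheme "high-resolution".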
Bram van Leer is Arthur B. Modine Emeritus Professor of aerospace engineering at the University of Michigan, in Ann Arbor. He specializes in computational fluid dynamics (CFD), fluid dynamics, and numerical analysis. His most influential work lies in CFD, a field he helped modernize from 1970 onwards. An appraisal of his early work has been given by C. Hirsch (1979).
Alexandre Joel Chorin is an American mathematician known for his contributions to computational fluid mechanics, turbulence, and computational statistical mechanics.
A Riemann solver is a numerical method used to solve a Riemann problem. Riemann solvers are heavily used in computational fluid dynamics and computational magnetohydrodynamics.
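For illustration, the Riemann problem for the inviscid Burgers equation, u_t + (u^2/2)_x = 0, admits a simple closed-form solution; the sketch below (a toy stand-in for the Euler-equation solvers used in practice) returns the self-similar solution as a function of s = x/t.

```python
def burgers_riemann(ul, ur, s):
    """Exact solution of the Burgers Riemann problem at s = x/t."""
    if ul > ur:
        # Compression: a shock moving at the Rankine-Hugoniot speed
        sigma = 0.5 * (ul + ur)
        return ul if s < sigma else ur
    # Expansion: a rarefaction fan connecting ul to ur
    if s <= ul:
        return ul
    if s >= ur:
        return ur
    return s  # inside the fan, u varies linearly with x/t

print(burgers_riemann(2.0, 0.0, s=0.5))   # shock case: still the left state, 2.0
print(burgers_riemann(-1.0, 1.0, s=0.0))  # rarefaction fan: 0.0
```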
In the numerical solution of partial differential equations, a topic in mathematics, the spectral element method (SEM) is a formulation of the finite element method (FEM) that uses high-degree piecewise polynomials as basis functions. The spectral element method was introduced in a 1984 paper by A. T. Patera. Although Patera is credited with development of the method, his work was a rediscovery of an existing method.
David E. Keyes is a Senior Associate to the President of King Abdullah University of Science and Technology (KAUST) and the Director of the Extreme Computing Center at King Abdullah University of Science and Technology (KAUST). He was the inaugural Dean of the Division of Computer, Electrical, and Mathematical Sciences and Engineering (CEMSE) at KAUST and remains an adjunct professor in Applied Physics and Applied Mathematics at Columbia University and an affiliate of several laboratories of the U.S. Department of Energy. With backgrounds in engineering, applied mathematics, and computer science, he works at the algorithmic interface between parallel computing and the numerical analysis of partial differential equations, across a spectrum of aerodynamic, geophysical, and chemically reacting flows.
In computational fluid dynamics, the immersed boundary method originally referred to an approach developed by Charles Peskin in 1972 to simulate fluid-structure (fiber) interactions. Treating the coupling of the structure deformations and the fluid flow poses a number of challenging problems for numerical simulations. In the immersed boundary method the fluid is represented in an Eulerian coordinate system and the structure is represented in Lagrangian coordinates. For Newtonian fluids governed by the Navier–Stokes equations, the fluid equations are

$$\rho\left(\frac{\partial u(x,t)}{\partial t} + u(x,t)\cdot\nabla u(x,t)\right) = -\nabla p + \mu\,\Delta u(x,t) + f(x,t)$$

together with the incompressibility condition $\nabla\cdot u = 0$.
Amiram Harten (1946–1994) was an American-Israeli applied mathematician. Harten made fundamental contributions to the development of high-resolution schemes for the solution of hyperbolic partial differential equations. Among other contributions, he developed the total variation diminishing scheme, which gives an oscillation-free solution for flow with shocks.
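As a quick numerical check of the property Harten formalized (an illustrative check, not Harten's own construction), the sketch below advances a step profile with first-order upwind advection, a scheme known to be TVD, and verifies that the total variation TV(u), the sum of |u[i+1] - u[i]|, never grows from step to step.

```python
import numpy as np

def total_variation(u):
    """TV(u) = sum of |u[i+1] - u[i]| on a periodic grid."""
    return np.abs(np.roll(u, -1) - u).sum()

def upwind_step(u, cfl):
    """First-order upwind advection (wave speed a > 0), a TVD scheme."""
    return u - cfl * (u - np.roll(u, 1))

u = np.where(np.arange(100) < 50, 1.0, 0.0)  # step profile
for k in range(5):
    tv_before = total_variation(u)
    u = upwind_step(u, cfl=0.8)
    print(f"step {k}: TV {tv_before:.3f} -> {total_variation(u):.3f}")
```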
In applied mathematics, the finite pointset method (FPM) is a general approach for the numerical solution of problems in continuum mechanics, such as the simulation of fluid flows. In this approach the medium is represented by a finite set of points, each endowed with the relevant local properties of the medium such as density, velocity, pressure, and temperature.
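A minimal sketch of the point-based representation just described, with hypothetical field names: the continuum is carried by a cloud of points, each holding its own local state.

```python
from dataclasses import dataclass

@dataclass
class MaterialPoint:
    x: float                       # position
    y: float
    density: float                 # local properties of the medium
    velocity: tuple[float, float]
    pressure: float
    temperature: float

# A tiny point cloud standing in for a fluid region
cloud = [MaterialPoint(0.1 * i, 0.0, 1.0, (0.0, 0.0), 101325.0, 293.15)
         for i in range(10)]
print(len(cloud), "points carry the local state of the medium")
```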
Burton Wendroff is an American applied mathematician known for his contributions to the development of numerical methods for the solution of hyperbolic partial differential equations. The Lax–Wendroff method for the solution of hyperbolic PDE is named for Wendroff.
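As an illustration, here is a minimal sketch of the Lax–Wendroff update for linear advection, u_t + a u_x = 0, on a periodic grid; its second-order accuracy shows up as a small error after the wave is advected exactly one period.

```python
import numpy as np

def lax_wendroff_step(u, nu):
    """One step with Courant number nu = a*dt/dx (stable for |nu| <= 1)."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)

x = np.linspace(0.0, 1.0, 128, endpoint=False)
u = np.sin(2.0 * np.pi * x)              # smooth initial profile
for _ in range(256):
    u = lax_wendroff_step(u, nu=0.5)     # 256 steps = one full period
print("max error vs exact solution:", np.abs(u - np.sin(2.0 * np.pi * x)).max())
```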
Francis Harvey Harlow was an American theoretical physicist known for his work in the field of fluid dynamics. He was a researcher at Los Alamos National Laboratory, Los Alamos, New Mexico. Harlow is credited with establishing the science of computational fluid dynamics (CFD) as an important discipline.
John B. Bell is an American mathematician and the Chief Scientist of the Computational Research Division at the Lawrence Berkeley National Laboratory. He has made contributions in the areas of finite difference methods, numerical methods for low Mach number flows, adaptive mesh refinement, interface tracking and parallel computing. He has also worked on the application of these numerical methods to problems from a broad range of fields, including combustion, shock physics, seismology, flow in porous media and astrophysics.
The following is a timeline of scientific computing, also known as computational science.
The following is a timeline of numerical analysis after 1945, and deals with developments after the invention of the modern electronic computer, which began during the Second World War. For a fuller history of the subject before this period, see the timeline and history of mathematics.
Thomas Yizhao Hou is a Chinese-American mathematician who is the Charles Lee Powell Professor of Applied and Computational Mathematics in the Department of Computing and Mathematical Sciences at the California Institute of Technology. He is known for his work in numerical analysis and mathematical analysis.
Joseph E. Oliger was an American computer scientist and professor at Stanford University. Oliger was the co-founder of the Scientific Computing and Computational Mathematics degree program at Stanford, and served as the director of the Research Institute for Advanced Computer Science.
Integrable algorithms are numerical algorithms that rely on basic ideas from the mathematical theory of integrable systems.