Computational fluid dynamics (CFD) is a branch of fluid mechanics that uses numerical analysis and data structures to analyze and solve problems that involve fluid flows. Computers are used to perform the calculations required to simulate the free-stream flow of the fluid, and the interaction of the fluid (liquids and gases) with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as transonic or turbulent flows. Initial validation of such software is typically performed using experimental apparatus such as wind tunnels. In addition, previously performed analytical or empirical analysis of a particular problem can be used for comparison. A final validation is often performed using full-scale testing, such as flight tests.
CFD is applied to a wide range of research and engineering problems in many fields of study and industries, including aerodynamics and aerospace analysis, hypersonics, weather simulation, natural science and environmental engineering, industrial system design and analysis, biological engineering, fluid flows and heat transfer, engine and combustion analysis, and visual effects for film and games.
The fundamental basis of almost all CFD problems is the Navier–Stokes equations, which define many single-phase (gas or liquid, but not both) fluid flows. These equations can be simplified by removing terms describing viscous actions to yield the Euler equations. Further simplification, by removing terms describing vorticity, yields the full potential equations. Finally, for small perturbations in subsonic and supersonic flows (not transonic or hypersonic) these equations can be linearized to yield the linearized potential equations.
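As a concrete illustration of this hierarchy (one standard form; notation conventions vary), the compressible momentum balance can be written as

$$\frac{\partial (\rho \mathbf{u})}{\partial t} + \nabla \cdot (\rho\, \mathbf{u} \otimes \mathbf{u}) = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho\, \mathbf{g}.$$

Dropping the viscous stress term $\nabla \cdot \boldsymbol{\tau}$ gives the Euler momentum equation; assuming in addition that the flow is irrotational ($\nabla \times \mathbf{u} = 0$) allows the velocity to be written as the gradient of a potential, $\mathbf{u} = \nabla \Phi$, which leads to the full potential equation.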
Historically, methods were first developed to solve the linearized potential equations. Two-dimensional (2D) methods, using conformal transformations of the flow about a cylinder to the flow about an airfoil, were developed in the 1930s. [1] [2]
One of the earliest types of calculations resembling modern CFD are those by Lewis Fry Richardson, in the sense that these calculations used finite differences and divided the physical space into cells. Although they failed dramatically, these calculations, together with Richardson's book Weather Prediction by Numerical Process, [3] set the basis for modern CFD and numerical meteorology. In fact, early CFD calculations during the 1940s using ENIAC used methods close to those in Richardson's 1922 book. [4]
The development of three-dimensional methods was paced by the available computer power. Probably the first work using computers to model fluid flow, as governed by the Navier–Stokes equations, was performed at Los Alamos National Lab, in the T3 group. [5] [6] This group was led by Francis H. Harlow, who is widely considered one of the pioneers of CFD. From 1957 to the late 1960s, this group developed a variety of numerical methods to simulate transient two-dimensional fluid flows, such as the particle-in-cell method, [7] the fluid-in-cell method, [8] the vorticity-stream-function method, [9] and the marker-and-cell method. [10] Fromm's vorticity-stream-function method for 2D, transient, incompressible flow was the world's first treatment of strongly contorting incompressible flows.
The first paper describing a three-dimensional model was published by John Hess and A.M.O. Smith of Douglas Aircraft in 1967. [11] This method discretized the surface of the geometry with panels, giving rise to this class of programs being called Panel Methods. Their method itself was simplified, in that it did not include lifting flows and hence was mainly applied to ship hulls and aircraft fuselages. The first lifting Panel Code (A230) was described in a paper written by Paul Rubbert and Gary Saaris of Boeing Aircraft in 1968. [12] In time, more advanced three-dimensional Panel Codes were developed at Boeing (PANAIR, A502), [13] Lockheed (Quadpan), [14] Douglas (HESS), [15] McDonnell Aircraft (MACAERO), [16] NASA (PMARC) [17] and Analytical Methods (WBAERO, [18] USAERO [19] and VSAERO [20] [21] ). Some (PANAIR, HESS and MACAERO) were higher-order codes, using higher-order distributions of surface singularities, while others (Quadpan, PMARC, USAERO and VSAERO) used single singularities on each surface panel. The advantage of the lower-order codes was that they ran much faster on the computers of the time. Today, VSAERO has grown to be a multi-order code and is the most widely used program of this class. It has been used in the development of many submarines, surface ships, automobiles, helicopters, aircraft, and more recently wind turbines. Its sister code, USAERO, is an unsteady panel method that has also been used for modeling such things as high-speed trains and racing yachts. The NASA PMARC code derives from an early version of VSAERO, and a derivative of PMARC, named CMARC, [22] is also commercially available.
In the two-dimensional realm, a number of Panel Codes have been developed for airfoil analysis and design. The codes typically have a boundary layer analysis included, so that viscous effects can be modeled. Richard Eppler developed the PROFILE code, partly with NASA funding, which became available in the early 1980s. [23] This was soon followed by Mark Drela's XFOIL code. [24] Both PROFILE and XFOIL incorporate two-dimensional panel codes, with coupled boundary layer codes for airfoil analysis work. PROFILE uses a conformal transformation method for inverse airfoil design, while XFOIL has both a conformal transformation and an inverse panel method for airfoil design.
An intermediate step between Panel Codes and Full Potential codes was provided by codes that used the Transonic Small Disturbance equations. In particular, the three-dimensional WIBCO code, [25] developed by Charlie Boppe of Grumman Aircraft in the early 1980s, has seen heavy use.
Developers turned to Full Potential codes, as panel methods could not calculate the non-linear flow present at transonic speeds. The first description of a means of using the Full Potential equations was published by Earll Murman and Julian Cole of Boeing in 1970. [26] Frances Bauer, Paul Garabedian and David Korn of the Courant Institute at New York University (NYU) wrote a series of two-dimensional Full Potential airfoil codes that were widely used, the most important being named Program H. [27] Program H was further developed by Bob Melnik and his group at Grumman Aerospace as Grumfoil. [28] Antony Jameson, originally at Grumman Aircraft and the Courant Institute of NYU, worked with David Caughey to develop the important three-dimensional Full Potential code FLO22 [29] in 1975. Many Full Potential codes emerged after this, culminating in Boeing's Tranair (A633) code, [30] which still sees heavy use.
The next step was the Euler equations, which promised to provide more accurate solutions of transonic flows. The methodology used by Jameson in his three-dimensional FLO57 code [31] (1981) was used by others to produce such programs as Lockheed's TEAM program [32] and IAI/Analytical Methods' MGAERO program. [33] MGAERO is unique in being a structured Cartesian mesh code, while most other such codes use structured body-fitted grids (with the exceptions of NASA's highly successful CART3D code, [34] Lockheed's SPLITFLOW code [35] and Georgia Tech's NASCART-GT [36]). Antony Jameson also developed the three-dimensional AIRPLANE code, [37] which made use of unstructured tetrahedral grids.
In the two-dimensional realm, Mark Drela and Michael Giles, then graduate students at MIT, developed the ISES Euler program [38] (actually a suite of programs) for airfoil design and analysis. This code first became available in 1986 and has been further developed to design, analyze and optimize single or multi-element airfoils, as the MSES program. [39] MSES sees wide use throughout the world. A derivative of MSES, for the design and analysis of airfoils in a cascade, is MISES, [40] developed by Harold Youngren while he was a graduate student at MIT.
The Navier–Stokes equations were the ultimate target of development. Two-dimensional codes, such as NASA Ames' ARC2D code, first emerged. A number of three-dimensional codes were developed (ARC3D, OVERFLOW, and CFL3D are three successful NASA contributions), leading to numerous commercial packages.
Recently, CFD methods have gained traction for modeling the flow behavior of granular materials within various chemical processes in engineering. This approach has emerged as a cost-effective alternative, offering a nuanced understanding of complex flow phenomena while minimizing expenses associated with traditional experimental methods. [41] [42]
CFD can be seen as a group of computational methodologies (discussed below) used to solve equations governing fluid flow. In the application of CFD, a critical step is to decide which set of physical assumptions and related equations need to be used for the problem at hand. [43] To illustrate this step, the following summarizes the physical assumptions/simplifications taken in equations of a flow that is single-phase (see multiphase flow and two-phase flow), single-species (i.e., it consists of one chemical species), non-reacting, and (unless said otherwise) compressible. Thermal radiation is neglected, and body forces due to gravity are considered (unless said otherwise). In addition, for this type of flow, the next discussion highlights the hierarchy of flow equations solved with CFD. Note that some of the following equations could be derived in more than one way.
In all of these approaches the same basic procedure is followed.
The stability of the selected discretisation is generally established numerically rather than analytically as with simple linear problems. Special care must also be taken to ensure that the discretisation handles discontinuous solutions gracefully. The Euler equations and Navier–Stokes equations both admit shocks and contact surfaces.
Some of the discretization methods being used are:
The finite volume method (FVM) is a common approach used in CFD codes, as it has an advantage in memory usage and solution speed, especially for large problems, high Reynolds number turbulent flows, and source term dominated flows (like combustion). [55]
In the finite volume method, the governing partial differential equations (typically the Navier–Stokes equations, the mass and energy conservation equations, and the turbulence equations) are recast in a conservative form, and then solved over discrete control volumes. This discretization guarantees the conservation of fluxes through a particular control volume. The finite volume equation yields governing equations in the form

$$\frac{\partial}{\partial t}\iiint Q\, dV + \iint F\, dA = 0,$$

where $Q$ is the vector of conserved variables, $F$ is the vector of fluxes (see Euler equations or Navier–Stokes equations), $V$ is the volume of the control volume element, and $A$ is the surface area of the control volume element.
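As a minimal illustration of the finite volume idea (a sketch only, with illustrative parameter names, not any particular production code), the following updates the cell averages of a 1D linear advection equation by the net flux through each cell face:

```python
# Minimal 1D finite-volume sketch for linear advection dq/dt + d(a*q)/dx = 0,
# using a first-order upwind numerical flux on a periodic domain.
import numpy as np

nx, a = 100, 1.0                          # number of cells, wave speed (illustrative)
dx = 1.0 / nx
dt = 0.5 * dx / a                         # CFL-limited time step
q = np.where(np.abs(np.linspace(0, 1, nx) - 0.5) < 0.1, 1.0, 0.0)  # square pulse

for _ in range(50):
    # Upwind flux at each left cell face: for a > 0, take the left cell's value.
    flux = a * np.roll(q, 1)
    # Conservative update: each cell average changes by the net flux through its faces.
    q = q - dt / dx * (np.roll(flux, -1) - flux)
```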
The finite element method (FEM) is used in structural analysis of solids, but is also applicable to fluids. However, the FEM formulation requires special care to ensure a conservative solution. The FEM formulation has been adapted for use with fluid dynamics governing equations. [56] [57] Although FEM must be carefully formulated to be conservative, it is much more stable than the finite volume approach. [58] FEM also provides more accurate solutions for smooth problems compared to FVM. [59] Another advantage of FEM is that it can handle complex geometries and boundary conditions. However, FEM can require more memory and has slower solution times than the FVM. [60]
In this method, a weighted residual equation is formed:

$$R_i = \iiint W_i Q \, dV^{e},$$

where $R_i$ is the equation residual at an element vertex $i$, $Q$ is the conservation equation expressed on an element basis, $W_i$ is the weight factor, and $V^{e}$ is the volume of the element.
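A minimal sketch of the weighted-residual mechanics, assuming the simplest possible setting (a 1D Poisson problem with piecewise-linear elements; all names are illustrative): after integration by parts, the residual statement reduces to a linear system $Ku = b$ assembled element by element.

```python
# Galerkin finite-element sketch: -u'' = 1 on (0,1) with u(0) = u(1) = 0.
# Weighting the residual by each basis function and integrating by parts
# yields the stiffness system K u = b below.
import numpy as np

n = 10                                   # number of elements (illustrative)
h = 1.0 / n
K = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)

for e in range(n):                       # assemble element contributions
    ke = np.array([[1, -1], [-1, 1]]) / h    # element stiffness, linear basis
    fe = np.array([0.5, 0.5]) * h            # element load for f = 1
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += ke
    b[idx] += fe

K[0, :], K[-1, :] = 0, 0                 # impose the Dirichlet boundary values
K[0, 0], K[-1, -1] = 1, 1
b[0], b[-1] = 0, 0
u = np.linalg.solve(K, b)                # nodal solution; exact is x(1-x)/2
```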
The finite difference method (FDM) has historical importance [57] and is simple to program. It is currently only used in a few specialized codes, which handle complex geometry with high accuracy and efficiency by using embedded boundaries or overlapping grids (with the solution interpolated across each grid).
$$\frac{\partial Q}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} + \frac{\partial H}{\partial z} = 0,$$

where $Q$ is the vector of conserved variables, and $F$, $G$, and $H$ are the fluxes in the $x$, $y$, and $z$ directions respectively.
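A minimal finite-difference sketch, under simplifying assumptions (1D diffusion, explicit time stepping, illustrative parameter names):

```python
# Finite-difference sketch: 1D diffusion du/dt = nu * d2u/dx2 with the
# explicit FTCS scheme on a uniform grid and Dirichlet endpoints.
import numpy as np

nx, nu = 51, 0.1
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / nu                    # stability requires dt <= dx^2 / (2 nu)
u = np.zeros(nx)
u[nx // 2] = 1.0 / dx                    # approximate point heat source

for _ in range(200):
    # Central second difference in the interior; endpoints held at u = 0.
    u[1:-1] += dt * nu * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
```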
Spectral element method is a finite element type method. It requires the mathematical problem (the partial differential equation) to be cast in a weak formulation. This is typically done by multiplying the differential equation by an arbitrary test function and integrating over the whole domain. Purely mathematically, the test functions are completely arbitrary; they belong to an infinite-dimensional function space. Clearly an infinite-dimensional function space cannot be represented on a discrete spectral element mesh; this is where the spectral element discretization begins. The most crucial thing is the choice of interpolating and testing functions. In a standard, low-order FEM in 2D, for quadrilateral elements the most typical choice is the bilinear test or interpolating function of the form $v(x,y) = a + bx + cy + dxy$. In a spectral element method, however, the interpolating and test functions are chosen to be polynomials of a very high order (typically e.g. of the 10th order in CFD applications). This guarantees the rapid convergence of the method. Furthermore, very efficient integration procedures must be used, since the number of integrations to be performed in numerical codes is large. Thus, high-order Gauss integration quadratures are employed, since they achieve the highest accuracy with the smallest number of computations to be carried out. A number of academic CFD codes based on the spectral element method exist, and more are currently under development as new time-stepping schemes arise.
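The high-order Gauss quadrature mentioned above can be illustrated with a short sketch (illustrative only, using NumPy's Gauss–Legendre routine): an $n$-point rule integrates polynomials up to degree $2n-1$ exactly, so the error for a smooth integrand falls off extremely rapidly with $n$.

```python
# Gauss-Legendre quadrature: few points give near machine-precision integrals
# of smooth functions, which is why spectral element codes rely on it.
import numpy as np

f = lambda x: np.exp(x)                  # smooth integrand on [-1, 1]
exact = np.exp(1) - np.exp(-1)

for n in (2, 4, 8):                      # quadrature order (illustrative)
    x, w = np.polynomial.legendre.leggauss(n)
    approx = np.sum(w * f(x))
    print(n, abs(approx - exact))        # error drops exponentially with n
```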
The lattice Boltzmann method (LBM), with its simplified kinetic picture on a lattice, provides a computationally efficient description of hydrodynamics. Unlike the traditional CFD methods, which numerically solve the conservation equations of macroscopic properties (i.e., mass, momentum, and energy), LBM models the fluid as consisting of fictive particles, and such particles perform consecutive propagation and collision processes over a discrete lattice mesh. In this method, one works with a version of the kinetic evolution equation, in the Boltzmann Bhatnagar–Gross–Krook (BGK) form, that is discrete in space and time.
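A minimal sketch of an LBM time step under the simplest assumptions (D2Q9 lattice, BGK collision, periodic boundaries; grid size and relaxation time are illustrative):

```python
# Lattice-Boltzmann sketch (D2Q9, BGK): distributions f_i stream to neighbor
# nodes along their lattice velocities, then relax toward local equilibrium.
import numpy as np

nx, ny, tau = 64, 64, 0.6                        # grid and relaxation time
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)         # D2Q9 weights

f = np.ones((9, nx, ny)) * w[:, None, None]      # start at rest, rho = 1
f[:, nx//2, ny//2] *= 1.1                        # small density perturbation

def equilibrium(rho, ux, uy):
    cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)

for _ in range(100):
    # Streaming: shift each distribution along its lattice velocity (periodic).
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # Macroscopic moments, then BGK relaxation toward equilibrium.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau
```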
The vortex method, also Lagrangian Vortex Particle Method, is a meshfree technique for the simulation of incompressible turbulent flows. In it, vorticity is discretized onto Lagrangian particles, these computational elements being called vortices, vortons, or vortex particles. [61] Vortex methods were developed as a grid-free methodology that would not be limited by the fundamental smoothing effects associated with grid-based methods. To be practical, however, vortex methods require means for rapidly computing velocities from the vortex elements – in other words they require the solution to a particular form of the N-body problem (in which the motion of N objects is tied to their mutual influences). This breakthrough came in the 1980s with the development of the Barnes-Hut and fast multipole method (FMM) algorithms. These paved the way to practical computation of the velocities from the vortex elements.
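The kernel evaluation at the heart of this N-body problem can be sketched as follows (2D, regularized point vortices, direct $O(N^2)$ summation; all names are illustrative). Barnes–Hut and FMM replace this all-pairs sum with a tree or multipole expansion to reach $O(N \log N)$ or $O(N)$:

```python
# Direct Biot-Savart velocity evaluation for 2D vortex particles: every
# particle induces a rotational velocity on every other. This O(N^2) sum
# is what fast summation algorithms accelerate.
import numpy as np

n = 200
pos = np.random.rand(n, 2)               # vortex particle positions
gamma = np.random.randn(n) * 0.01        # particle circulations
eps2 = 1e-4                              # smoothing to regularize the kernel

dx = pos[:, 0, None] - pos[None, :, 0]   # pairwise separations
dy = pos[:, 1, None] - pos[None, :, 1]
r2 = dx**2 + dy**2 + eps2
# 2D Biot-Savart kernel: induced velocity is perpendicular to the separation.
u = -np.sum(gamma[None, :] * dy / (2 * np.pi * r2), axis=1)
v =  np.sum(gamma[None, :] * dx / (2 * np.pi * r2), axis=1)
```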
Software based on the vortex method offers a means for solving tough fluid dynamics problems with minimal user intervention: all that is required is specification of the problem geometry and the setting of boundary and initial conditions.
In the boundary element method, the boundary occupied by the fluid is divided into a surface mesh.
High-resolution schemes are used where shocks or discontinuities are present. Capturing sharp changes in the solution requires the use of second- or higher-order numerical schemes that do not introduce spurious oscillations. This usually necessitates the application of flux limiters to ensure that the solution is total variation diminishing.
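A sketch of such a scheme, assuming the simplest setting (1D linear advection, minmod limiter, periodic domain; parameter names are illustrative): the limited slope gives second-order accuracy in smooth regions while reverting to first-order at discontinuities, keeping the solution free of spurious oscillations.

```python
# TVD high-resolution update for 1D linear advection with a minmod limiter.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

nx, a = 200, 1.0
dx = 1.0 / nx
dt = 0.4 * dx / a
q = np.where(np.abs(np.linspace(0, 1, nx) - 0.3) < 0.1, 1.0, 0.0)

for _ in range(100):
    # Limited slope in each cell, then a second-order upwind face value.
    slope = minmod(q - np.roll(q, 1), np.roll(q, -1) - q) / dx
    q_face = q + 0.5 * dx * (1 - a * dt / dx) * slope   # value at right face
    flux = a * q_face                                    # F_{i+1/2}, upwind (a > 0)
    q = q - dt / dx * (flux - np.roll(flux, 1))
```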
In computational modeling of turbulent flows, one common objective is to obtain a model that can predict quantities of interest, such as fluid velocity, for use in engineering designs of the system being modeled. For turbulent flows, the range of length scales and complexity of phenomena involved in turbulence make most modeling approaches prohibitively expensive; the resolution required to resolve all scales involved in turbulence is beyond what is computationally possible. The primary approach in such cases is to create numerical models to approximate unresolved phenomena. This section lists some commonly used computational models for turbulent flows.
Turbulence models can be classified based on computational expense, which corresponds to the range of scales that are modeled versus resolved (the more turbulent scales that are resolved, the finer the resolution of the simulation, and therefore the higher the computational cost). If a majority or all of the turbulent scales are modeled, the computational cost is very low, but the tradeoff comes in the form of decreased accuracy.
In addition to the wide range of length and time scales and the associated computational cost, the governing equations of fluid dynamics contain a non-linear convection term and a non-linear and non-local pressure gradient term. These nonlinear equations must be solved numerically with the appropriate boundary and initial conditions.
Reynolds-averaged Navier–Stokes (RANS) equations are the oldest approach to turbulence modeling. An ensemble version of the governing equations is solved, which introduces new apparent stresses known as Reynolds stresses. This adds a second-order tensor of unknowns for which various models can provide different levels of closure. It is a common misconception that the RANS equations do not apply to flows with a time-varying mean flow because these equations are 'time-averaged'. In fact, statistically unsteady (or non-stationary) flows can equally be treated. This is sometimes referred to as URANS. There is nothing inherent in Reynolds averaging to preclude this, but the turbulence models used to close the equations are valid only as long as the time over which these changes in the mean occur is large compared to the time scales of the turbulent motion containing most of the energy.
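In standard incompressible notation (a common form; compressible flows add density-weighted averaging), substituting the Reynolds decomposition $u_i = \bar{u}_i + u_i'$ into the Navier–Stokes equations and averaging gives

$$\frac{\partial \bar{u}_i}{\partial t} + \bar{u}_j \frac{\partial \bar{u}_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j} - \frac{\partial \overline{u_i' u_j'}}{\partial x_j},$$

where the Reynolds stresses $-\rho\, \overline{u_i' u_j'}$ are the new unknowns that the turbulence model must close.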
RANS models can be divided into two broad approaches: eddy-viscosity models, which use the Boussinesq hypothesis to relate the Reynolds stresses to the mean rate of strain through a turbulent viscosity, and Reynolds stress models, which solve transport equations for the individual Reynolds stress components.
Large eddy simulation (LES) is a technique in which the smallest scales of the flow are removed through a filtering operation, and their effect modeled using subgrid scale models. This allows the largest and most important scales of the turbulence to be resolved, while greatly reducing the computational cost incurred by the smallest scales. This method requires greater computational resources than RANS methods, but is far cheaper than DNS.
Detached eddy simulation (DES) is a modification of a RANS model in which the model switches to a subgrid-scale formulation in regions fine enough for LES calculations. Regions near solid boundaries and where the turbulent length scale is less than the maximum grid dimension are assigned the RANS mode of solution. As the turbulent length scale exceeds the grid dimension, the regions are solved using the LES mode. Therefore, the grid resolution for DES is not as demanding as for pure LES, thereby considerably cutting down the cost of the computation. Though DES was initially formulated for the Spalart–Allmaras model (Philippe R. Spalart et al., 1997), it can be implemented with other RANS models (Strelets, 2001) by appropriately modifying the length scale that is explicitly or implicitly involved in the RANS model. So while Spalart–Allmaras-based DES acts as LES with a wall model, DES based on other models (like two-equation models) behaves as a hybrid RANS-LES model. Grid generation is more complicated than for a simple RANS or LES case due to the RANS-LES switch. DES is a non-zonal approach and provides a single smooth velocity field across the RANS and the LES regions of the solutions.
Direct numerical simulation (DNS) resolves the entire range of turbulent length scales. This marginalizes the effect of models, but is extremely expensive. The computational cost is proportional to $Re^{3}$. [64] DNS is intractable for flows with complex geometries or flow configurations.
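The scaling behind this estimate is the standard Kolmogorov argument: the ratio of the largest to the smallest turbulent length scale grows as $Re^{3/4}$ in each spatial direction, and the number of time steps grows similarly, so

$$N_{\text{points}} \sim \left(Re^{3/4}\right)^{3} = Re^{9/4}, \qquad N_{\text{steps}} \sim Re^{3/4}, \qquad \text{cost} \sim Re^{9/4} \cdot Re^{3/4} = Re^{3}.$$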
The coherent vortex simulation approach decomposes the turbulent flow field into a coherent part, consisting of organized vortical motion, and an incoherent part, which is the random background flow. [65] This decomposition is done using wavelet filtering. The approach has much in common with LES, since it uses decomposition and resolves only the filtered portion, but it differs in that it does not use a linear, low-pass filter. Instead, the filtering operation is based on wavelets, and the filter can be adapted as the flow field evolves. Farge and Schneider tested the CVS method with two flow configurations and showed that the coherent portion of the flow exhibited the energy spectrum exhibited by the total flow and corresponded to coherent structures (vortex tubes), while the incoherent parts of the flow constituted homogeneous background noise, which exhibited no organized structures. Goldstein and Vasilyev [66] applied the FDV model to large eddy simulation, but did not assume that the wavelet filter eliminated all coherent motions from the subfilter scales. By employing both LES and CVS filtering, they showed that the SFS dissipation was dominated by the SFS flow field's coherent portion.
Probability density function (PDF) methods for turbulence, first introduced by Lundgren, [67] are based on tracking the one-point PDF of the velocity, $f_1(\mathbf{v};\mathbf{x},t)\,d\mathbf{v}$, which gives the probability of the velocity at point $\mathbf{x}$ being between $\mathbf{v}$ and $\mathbf{v} + d\mathbf{v}$. This approach is analogous to the kinetic theory of gases, in which the macroscopic properties of a gas are described by a large number of particles. PDF methods are unique in that they can be applied in the framework of a number of different turbulence models; the main differences occur in the form of the PDF transport equation. For example, in the context of large eddy simulation, the PDF becomes the filtered PDF. [68] PDF methods can also be used to describe chemical reactions, [69] [70] and are particularly useful for simulating chemically reacting flows because the chemical source term is closed and does not require a model. The PDF is commonly tracked by using Lagrangian particle methods; when combined with large eddy simulation, this leads to a Langevin equation for subfilter particle evolution.
The vorticity confinement (VC) method is an Eulerian technique used in the simulation of turbulent wakes. It uses a solitary-wave-like approach to produce a stable solution with no numerical spreading. VC can capture the small-scale features to within as few as two grid cells. Within these features, a nonlinear difference equation is solved, as opposed to the finite difference equation. VC is similar to shock capturing methods, where conservation laws are satisfied, so that the essential integral quantities are accurately computed.
The Linear eddy model is a technique used to simulate the convective mixing that takes place in turbulent flow. [71] Specifically, it provides a mathematical way to describe the interactions of a scalar variable within the vector flow field. It is primarily used in one-dimensional representations of turbulent flow, since it can be applied across a wide range of length scales and Reynolds numbers. This model is generally used as a building block for more complicated flow representations, as it provides high resolution predictions that hold across a large range of flow conditions.
The modeling of two-phase flow is still under development. Different methods have been proposed, including the volume of fluid method, the level-set method and front tracking. [72] [73] These methods often involve a tradeoff between maintaining a sharp interface and conserving mass. This is crucial since the evaluation of the density, viscosity and surface tension is based on the values averaged over the interface.
Discretization in space produces a system of ordinary differential equations for unsteady problems and algebraic equations for steady problems. Implicit or semi-implicit methods are generally used to integrate the ordinary differential equations, producing a system of (usually) nonlinear algebraic equations. Applying a Newton or Picard iteration produces a system of linear equations which is nonsymmetric in the presence of advection and indefinite in the presence of incompressibility. Such systems, particularly in 3D, are frequently too large for direct solvers, so iterative methods are used, either stationary methods such as successive overrelaxation or Krylov subspace methods. Krylov methods such as GMRES, typically used with preconditioning, operate by minimizing the residual over successive subspaces generated by the preconditioned operator.
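A small sketch of this solver chain using SciPy (an illustrative 1D advection-diffusion operator, which is nonsymmetric exactly as described above; the matrix, tolerances, and parameter names are assumptions of this example, not any particular CFD code):

```python
# Preconditioned Krylov solve: a nonsymmetric upwind-advection plus
# central-diffusion matrix, solved with GMRES and an incomplete-LU preconditioner.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nu, a = 500, 0.01, 1.0
h = 1.0 / n
# Upwind advection + central diffusion -> nonsymmetric tridiagonal operator.
lower = -nu / h**2 - a / h
diag  = 2 * nu / h**2 + a / h
upper = -nu / h**2
A = sp.diags([lower, diag, upper], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

M = spla.spilu(A, drop_tol=1e-4)                 # incomplete LU factorization
precond = spla.LinearOperator((n, n), M.solve)   # wrap as a preconditioner
x, info = spla.gmres(A, b, M=precond)            # info == 0 signals convergence
```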
Multigrid has the advantage of asymptotically optimal performance on many problems. Traditional solvers and preconditioners are effective at reducing high-frequency components of the residual, but low-frequency components typically require many iterations to reduce. By operating on multiple scales, multigrid reduces all components of the residual by similar factors, leading to a mesh-independent number of iterations.
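A two-grid sketch of this idea for the 1D Poisson problem (illustrative sizes; real multigrid recurses over many levels): the smoother removes high-frequency error on the fine grid, while the coarse-grid correction removes the low-frequency components the smoother cannot.

```python
# Two-grid correction cycle for -u'' = f on (0,1) with u(0) = u(1) = 0:
# smooth, restrict the residual, solve coarsely, prolong the correction back.
import numpy as np

def apply_A(u, h):                         # matrix-free -u'' with fixed endpoints
    Au = np.zeros_like(u)
    Au[1:-1] = (-u[:-2] + 2 * u[1:-1] - u[2:]) / h**2
    return Au

def jacobi(u, f, h, sweeps=3, omega=2/3):  # weighted Jacobi smoother
    u = u.copy()
    for _ in range(sweeps):
        r = f - apply_A(u, h)
        u[1:-1] += omega * (h**2 / 2) * r[1:-1]
    return u

n = 64                                     # fine-grid intervals (illustrative)
h, H = 1.0 / n, 2.0 / n
u, f = np.zeros(n + 1), np.ones(n + 1)

u = jacobi(u, f, h)                        # pre-smoothing damps high frequencies
r = f - apply_A(u, h)
rc = r[::2]                                # restriction by injection
nc = n // 2                                # coarse-grid solve (small, so direct)
Ac = (np.diag([2.0] * (nc - 1)) + np.diag([-1.0] * (nc - 2), 1)
      + np.diag([-1.0] * (nc - 2), -1)) / H**2
ec = np.zeros(nc + 1)
ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
e = np.interp(np.linspace(0, 1, n + 1), np.linspace(0, 1, nc + 1), ec)  # prolongation
u = jacobi(u + e, f, h)                    # correct, then post-smooth
```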
For indefinite systems, preconditioners such as incomplete LU factorization, additive Schwarz, and multigrid perform poorly or fail entirely, so the problem structure must be used for effective preconditioning. [74] Methods commonly used in CFD are the SIMPLE and Uzawa algorithms which exhibit mesh-dependent convergence rates, but recent advances based on block LU factorization combined with multigrid for the resulting definite systems have led to preconditioners that deliver mesh-independent convergence rates. [75]
CFD made a major breakthrough in the late 1970s with the introduction of LTRAN2, a 2-D code to model oscillating airfoils based on transonic small perturbation theory, by Ballhaus and associates. [76] It uses a Murman–Cole switch algorithm for modeling the moving shock waves. [26] Later it was extended to 3-D by AFWAL/Boeing, using a rotated difference scheme, resulting in LTRAN3. [77] [78]
CFD investigations are used to clarify the characteristics of aortic flow in detail beyond the capabilities of experimental measurements. To analyze these conditions, CAD models of the human vascular system are extracted using modern imaging techniques such as MRI or computed tomography. A 3D model is reconstructed from this data and the fluid flow can be computed. Blood properties such as density and viscosity, and realistic boundary conditions (e.g. systemic pressure), have to be taken into consideration. This makes it possible to analyze and optimize the flow in the cardiovascular system for different applications. [79]
Traditionally, CFD simulations are performed on CPUs. [80]
In a more recent trend, simulations are also performed on GPUs. These typically contain a larger number of slower processors. For CFD algorithms that feature good parallel performance (i.e., good speed-up from adding more cores), this can greatly reduce simulation times. Fluid-implicit-particle [81] and lattice-Boltzmann methods [82] are typical examples of codes that scale well on GPUs.
Aerodynamics is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics, and is an important domain of study in aeronautics. The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature.
In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids – liquids and gases. It has several subdisciplines, including aerodynamics and hydrodynamics. Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space and modelling fission weapon detonation.
Smoothed-particle hydrodynamics (SPH) is a computational method used for simulating the mechanics of continuum media, such as solid mechanics and fluid flows. It was developed by Gingold and Monaghan, and by Lucy, in 1977, initially for astrophysical problems. It has been used in many fields of research, including astrophysics, ballistics, volcanology, and oceanography. It is a meshfree Lagrangian method, and the resolution of the method can easily be adjusted with respect to variables such as density.
Computational aeroacoustics is a branch of aeroacoustics that aims to analyze the generation of noise by turbulent flows through numerical methods.
The Advection Upstream Splitting Method (AUSM) is a numerical flux-splitting scheme used in computational fluid dynamics to solve systems of conservation equations. It is particularly useful for simulating compressible flows with shocks and discontinuities.
Bram van Leer is Arthur B. Modine Emeritus Professor of aerospace engineering at the University of Michigan, in Ann Arbor. He specializes in Computational fluid dynamics (CFD), fluid dynamics, and numerical analysis. His most influential work lies in CFD, a field he helped modernize from 1970 onwards. An appraisal of his early work has been given by C. Hirsch (1979)
Fluid animation refers to computer graphics techniques for generating realistic animations of fluids such as water and smoke. Fluid animations are typically focused on emulating the qualitative visual behavior of a fluid, with less emphasis placed on rigorously correct physical results, although they often still rely on approximate solutions to the Euler equations or Navier–Stokes equations that govern real fluid physics. Fluid animation can be performed with different levels of complexity, ranging from time-consuming, high-quality animations for films and visual effects, to simple and fast animations for real-time applications such as computer games.
A CFD-DEM model is suitable for modeling or simulating fluid-solids or fluid-particle systems. In a typical CFD-DEM model, the motion of discrete solids or particles is obtained by the Discrete Element Method (DEM), which applies Newton's laws of motion to every particle, while the flow of the continuum fluid is described by locally averaged Navier–Stokes equations that can be solved by traditional computational fluid dynamics (CFD). The model was first proposed by Tsuji et al. The interactions between the fluid phase and the solids phase are modeled according to Newton's third law.
In computational fluid dynamics, the immersed boundary method originally referred to an approach developed by Charles Peskin in 1972 to simulate fluid-structure (fiber) interactions. Treating the coupling of the structure deformations and the fluid flow poses a number of challenging problems for numerical simulations. In the immersed boundary method the fluid is represented in an Eulerian coordinate system and the structure is represented in Lagrangian coordinates. For Newtonian fluids governed by the Navier–Stokes equations, the fluid equations are

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) = -\nabla p + \mu\, \Delta \mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0,$$

where $\mathbf{u}$ is the fluid velocity, $p$ the pressure, $\mu$ the dynamic viscosity, and $\mathbf{f}$ the body force density exerted on the fluid by the immersed structure.
The finite element method (FEM) is a popular method for numerically solving differential equations arising in engineering and mathematical modeling. Typical problem areas of interest include the traditional fields of structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential. Computers are usually used to perform the calculations required. With high-speed supercomputers, better solutions can be achieved, and are often required to solve the largest and most complex problems.
Vorticity confinement (VC), a physics-based computational fluid dynamics model analogous to shock capturing methods, was invented by Dr. John Steinhoff, professor at the University of Tennessee Space Institute, in the late 1980s to solve vortex-dominated flows. It was first formulated to capture concentrated vortices shed from wings, and later became popular in a wide range of research areas. During the 1990s and 2000s, it became widely used in the field of engineering.
In applied mathematics, the finite pointset method (FPM) is a general approach for the numerical solution of problems in continuum mechanics, such as the simulation of fluid flows. In this approach the medium is represented by a finite set of points, each endowed with the relevant local properties of the medium such as density, velocity, pressure, and temperature.
Nektar++ is a spectral/hp element framework designed to support the construction of efficient high-performance scalable solvers for a wide range of partial differential equations (PDE). The code is released as open-source under the MIT license. Although primarily driven by application-based research, it has been designed as a platform to support the development of novel numerical techniques in the area of high-order finite element methods.
The finite point method (FPM) is a meshfree method for solving partial differential equations (PDEs) on scattered distributions of points. The FPM was proposed in the mid-nineties with the purpose of facilitating the solution of problems involving complex geometries, free surfaces, moving boundaries and adaptive refinement. Since then, the FPM has evolved considerably, showing satisfactory accuracy and capabilities to deal with different fluid and solid mechanics problems.
Gradient-enhanced kriging (GEK) is a surrogate modeling technique used in engineering. A surrogate model is a prediction of the output of an expensive computer code. This prediction is based on a small number of evaluations of the expensive computer code.
FEATool Multiphysics is a physics, finite element analysis (FEA), and partial differential equation (PDE) simulation toolbox. FEATool Multiphysics features the ability to model fully coupled heat transfer, fluid dynamics, chemical engineering, structural mechanics, fluid-structure interaction (FSI), electromagnetics, as well as user-defined and custom PDE problems in 1D, 2D (axisymmetry), or 3D, all within a graphical user interface (GUI) or optionally as script files. FEATool has been employed and used in academic research, teaching, and industrial engineering simulation contexts.
In geology, numerical modeling is a widely applied technique to tackle complex geological problems by computational simulation of geological scenarios.
A blade vortex interaction (BVI) is an unsteady phenomenon of three-dimensional nature, which occurs when a rotor blade passes in close proximity to the shed tip vortices from a previous blade. The aerodynamic interactions represent an important topic of investigation in the rotorcraft research field due to the adverse influence produced on rotor noise, particularly in low-speed descending flight conditions or maneuvers, which generate high-amplitude impulsive noise.
Guru Guruswamy is an American engineer working as Principal Scientist at Ames Research Center since 1988. He pioneered research in the area of computational aeroelasticity that involves unsteady aerodynamics, finite element methods, computational fluid dynamics, parallel computing and problem-solving environments. His innovative research was utilized in the first commercial 3-D computational aeroelasticity software developed by a major aerospace company. The aeroelasticity legend Holt Ashley extensively referred to Guruswamy's research in his classical review paper. In 1988 he demonstrated the unique use of Transonic Small Perturbation based CFD for designing active controls to increase the safety of aircraft. It was followed by a breakthrough development of computational aeroelasticity based on the Euler flow equations. It was cited by another aeroelasticity legend, John Dugundji of MIT, as an important milestone in aeroelasticity. A Google search shows that about 150 researchers have taken advantage of Guruswamy's work based on the Euler equations for follow-up developments.
Lyle Norman Long is an academic, and computational scientist. He is a Professor Emeritus of Computational Science, Mathematics, and Engineering at The Pennsylvania State University, and is most known for developing algorithms and software for mathematical models, including neural networks, and robotics. His research has been focused in the fields of computational science, computational neuroscience, cognitive robotics, parallel computing, and software engineering.