Large eddy simulation (LES) is a mathematical model for turbulence used in computational fluid dynamics. It was initially proposed in 1963 by Joseph Smagorinsky to simulate atmospheric air currents, [1] and first explored by Deardorff (1970). [2] LES is currently applied in a wide variety of engineering applications, including combustion, [3] acoustics, [4] and simulations of the atmospheric boundary layer. [5]
The simulation of turbulent flows by numerically solving the Navier–Stokes equations requires resolving a very wide range of time and length scales, all of which affect the flow field. Such a resolution can be achieved with direct numerical simulation (DNS), but DNS is computationally expensive, and its cost prohibits simulation of practical engineering systems with complex geometry or flow configurations, such as turbulent jets, pumps, vehicles, and landing gear.
The principal idea behind LES is to reduce the computational cost by ignoring the smallest length scales, which are the most computationally expensive to resolve, via low-pass filtering of the Navier–Stokes equations. Such low-pass filtering, which can be viewed as a time- and space-averaging, effectively removes small-scale information from the numerical solution. This information is not irrelevant, however, and its effect on the flow field must be modelled, a task which is an active area of research for problems in which small scales can play an important role, such as near-wall flows, [6] [7] reacting flows, [3] and multiphase flows. [8]
An LES filter can be applied to a spatial and temporal field and perform a spatial filtering operation, a temporal filtering operation, or both. The filtered field, denoted with a bar, is defined as: [9] [10]

$$\overline{\phi(\boldsymbol{x}, t)} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \phi(\boldsymbol{r}, \tau)\, G(\boldsymbol{x} - \boldsymbol{r}, t - \tau)\, d\tau\, d\boldsymbol{r}$$
where $G$ is the filter convolution kernel. This can also be written as:

$$\overline{\phi} = G \star \phi$$
The filter kernel $G$ has an associated cutoff length scale $\Delta$ and cutoff time scale $\tau_c$. Scales smaller than these are eliminated from $\overline{\phi}$. Using the above filter definition, any field $\phi$ may be split up into a filtered and sub-filtered (denoted with a prime) portion, as

$$\phi = \bar{\phi} + \phi'$$
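The decomposition can be illustrated numerically. The following sketch is an illustration, not part of the LES literature: the grid size, top-hat filter width, and two-scale test signal are all arbitrary choices made for the example.

```python
import numpy as np

# Split a 1D periodic signal into filtered and sub-filter parts with a
# top-hat (box) filter. Grid size, filter width, and the two-scale test
# signal are illustrative choices.
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
phi = np.sin(x) + 0.2 * np.sin(16.0 * x)        # large scale + small scale

def box_filter(f, width):
    """Top-hat filter: periodic moving average over `width` points."""
    k = width // 2
    return sum(np.roll(f, s) for s in range(-k, k + 1)) / (2 * k + 1)

phi_bar = box_filter(phi, width=16)             # filtered field
phi_prime = phi - phi_bar                       # sub-filter residual

# The split is exact by construction: phi = phi_bar + phi_prime
assert np.allclose(phi, phi_bar + phi_prime)
print("rms of sub-filter part:", phi_prime.std())
```

By construction the two parts sum back to the original field, while the filtered part retains only the slowly varying content.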
It is important to note that the large eddy simulation filtering operation does not satisfy the properties of a Reynolds operator.
The governing equations of LES are obtained by filtering the partial differential equations governing the flow field $\rho \boldsymbol{u}(\boldsymbol{x}, t)$. There are differences between the incompressible and compressible LES governing equations, which lead to the definition of a new filtering operation.
For incompressible flow, the continuity equation and Navier–Stokes equations are filtered, yielding the filtered incompressible continuity equation,

$$\frac{\partial \bar{u}_i}{\partial x_i} = 0$$
and the filtered Navier–Stokes equations,

$$\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial}{\partial x_j}\left(\overline{u_i u_j}\right) = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \nu \frac{\partial}{\partial x_j}\left(2 \bar{S}_{ij}\right)$$
where $\bar{p}$ is the filtered pressure field and $\bar{S}_{ij} = \frac{1}{2}\left(\frac{\partial \bar{u}_i}{\partial x_j} + \frac{\partial \bar{u}_j}{\partial x_i}\right)$ is the rate-of-strain tensor evaluated using the filtered velocity. The nonlinear filtered advection term $\overline{u_i u_j}$ is the chief cause of difficulty in LES modeling. It requires knowledge of the unfiltered velocity field, which is unknown, so it must be modeled. The analysis that follows illustrates the difficulty caused by the nonlinearity, namely, that it causes interaction between large and small scales, preventing separation of scales.
The filtered advection term can be split up, following Leonard (1975), [11] as:

$$\overline{u_i u_j} = \tau_{ij}^{r} + \bar{u}_i \bar{u}_j$$
where $\tau_{ij}^{r}$ is the residual stress tensor, so that the filtered Navier–Stokes equations become

$$\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial}{\partial x_j}\left(\bar{u}_i \bar{u}_j\right) = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \nu \frac{\partial}{\partial x_j}\left(2 \bar{S}_{ij}\right) - \frac{\partial \tau_{ij}^{r}}{\partial x_j}$$
with the residual stress tensor $\tau_{ij}^{r}$ grouping all unclosed terms. Leonard decomposed this stress tensor as $\tau_{ij}^{r} = L_{ij} + C_{ij} + R_{ij}$ and provided physical interpretations for each term. $L_{ij}$, the Leonard tensor, represents interactions among large scales, $R_{ij}$, the Reynolds stress-like term, represents interactions among the sub-filter scales (SFS), and $C_{ij}$, the Clark tensor, [12] represents cross-scale interactions between large and small scales. [11] Modeling the unclosed term $\tau_{ij}^{r}$ is the task of sub-grid scale (SGS) models. This is made challenging by the fact that the subgrid stress tensor must account for interactions among all scales, including filtered scales with unfiltered scales.
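Writing $u_i = \bar{u}_i + u_i'$, the three contributions take the standard forms, stated here for concreteness using the notation introduced above:

```latex
L_{ij} = \overline{\bar{u}_i \bar{u}_j} - \bar{u}_i \bar{u}_j, \qquad
C_{ij} = \overline{\bar{u}_i u_j'} + \overline{u_i' \bar{u}_j}, \qquad
R_{ij} = \overline{u_i' u_j'}
```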
The filtered governing equation for a passive scalar $\phi$, such as mixture fraction or temperature, can be written as

$$\frac{\partial \bar{\phi}}{\partial t} + \frac{\partial}{\partial x_j}\left(\bar{u}_j \bar{\phi}\right) = \frac{\partial \overline{J_{\phi}}}{\partial x_j} + \frac{\partial q_{\phi}}{\partial x_j}$$
where $J_{\phi}$ is the diffusive flux of $\phi$, and $q_{\phi}$ is the sub-filter flux for the scalar $\phi$. The filtered diffusive flux $\overline{J_{\phi}}$ is unclosed, unless a particular form is assumed for it, such as the gradient diffusion model $J_{\phi} = D_{\phi} \frac{\partial \phi}{\partial x_i}$. $q_{\phi}$ is defined analogously to $\tau_{ij}^{r}$,

$$q_{\phi} = \bar{u}_j \bar{\phi} - \overline{u_j \phi}$$
and can similarly be split up into contributions from interactions between various scales. This sub-filter flux also requires a sub-filter model.
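As an example of the form such closures take (illustrative only; the eddy viscosity $\nu_t$ and the turbulent Schmidt number $Sc_t$ are model parameters not fixed by the filtered equations, and the sign follows the definition of $q_{\phi}$ above), an eddy-diffusivity model reads:

```latex
q_{\phi} \approx \frac{\nu_t}{Sc_t} \frac{\partial \bar{\phi}}{\partial x_j}
```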
Using Einstein notation, the Navier–Stokes equations for an incompressible fluid in Cartesian coordinates are

$$\frac{\partial u_i}{\partial x_i} = 0$$

$$\frac{\partial u_i}{\partial t} + \frac{\partial u_i u_j}{\partial x_j} = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \nu \frac{\partial^2 u_i}{\partial x_j \partial x_j}$$
Filtering the momentum equation results in

$$\overline{\frac{\partial u_i}{\partial t}} + \overline{\frac{\partial u_i u_j}{\partial x_j}} = -\overline{\frac{1}{\rho}\frac{\partial p}{\partial x_i}} + \overline{\nu \frac{\partial^2 u_i}{\partial x_j \partial x_j}}$$
If we assume that filtering and differentiation commute, then

$$\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial \overline{u_i u_j}}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j}$$
This equation models the changes in time of the filtered variables $\bar{u}_i$. Since the unfiltered variables $u_i$ are not known, it is impossible to directly calculate $\frac{\partial \overline{u_i u_j}}{\partial x_j}$. However, the quantity $\frac{\partial \bar{u}_i \bar{u}_j}{\partial x_j}$ is known. A substitution is made:

$$\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial \bar{u}_i \bar{u}_j}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j} - \frac{\partial}{\partial x_j}\left(\overline{u_i u_j} - \bar{u}_i \bar{u}_j\right)$$
Let $\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j$. The resulting set of equations are the LES equations:

$$\frac{\partial \bar{u}_i}{\partial t} + \frac{\partial \bar{u}_i \bar{u}_j}{\partial x_j} = -\frac{1}{\rho}\frac{\partial \bar{p}}{\partial x_i} + \nu \frac{\partial^2 \bar{u}_i}{\partial x_j \partial x_j} - \frac{\partial \tau_{ij}}{\partial x_j}$$
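The definition $\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j$ can be evaluated directly when a fully resolved field is available, a so-called a priori test. The sketch below is a toy illustration of that idea; the 2D periodic domain, box filter, and synthetic velocity field are assumptions made for the example.

```python
import numpy as np

# Toy "a priori" test: given a resolved 2D periodic field, evaluate
# tau_11 = bar(u1 u1) - bar(u1) bar(u1) by explicit filtering.
# In an actual LES this term is unclosed and must be modeled.
N = 128
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u1 = np.sin(X) * np.cos(Y) + 0.1 * np.sin(8.0 * X) * np.cos(8.0 * Y)

def box_filter_2d(f, width):
    """Top-hat filter of `width` cells in each direction (periodic)."""
    k = width // 2
    out = f
    for axis in (0, 1):
        out = sum(np.roll(out, s, axis=axis) for s in range(-k, k + 1)) / (2 * k + 1)
    return out

u1_bar = box_filter_2d(u1, 8)
tau_11 = box_filter_2d(u1 * u1, 8) - u1_bar * u1_bar

print("mean tau_11:", tau_11.mean())   # nonzero: filtering does not commute with products
```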
For the governing equations of compressible flow, each equation, starting with the conservation of mass, is filtered. This gives:

$$\frac{\partial \bar{\rho}}{\partial t} + \frac{\partial \bar{u}_i \bar{\rho}}{\partial x_i} = -\frac{\partial}{\partial x_i}\left(\overline{\rho u_i} - \bar{u}_i \bar{\rho}\right)$$
which results in an additional sub-filter term. However, it is desirable to avoid having to model the sub-filter scales of the mass conservation equation. For this reason, Favre [13] proposed a density-weighted filtering operation, called Favre filtering, defined for an arbitrary quantity $\phi$ as:

$$\tilde{\phi} = \frac{\overline{\rho \phi}}{\bar{\rho}}$$
which, in the limit of incompressibility, becomes the normal filtering operation. This makes the conservation of mass equation:

$$\frac{\partial \bar{\rho}}{\partial t} + \frac{\partial \bar{\rho} \tilde{u}_i}{\partial x_i} = 0$$
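A minimal numerical sketch of the Favre operation (the 1D fields and the moving-average filter are illustrative assumptions) shows how $\tilde{\phi}$ differs from $\bar{\phi}$ when the density varies:

```python
import numpy as np

# Favre (density-weighted) filtering: phi_tilde = bar(rho * phi) / bar(rho).
# Fields and the moving-average filter are illustrative assumptions.
N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
rho = 1.0 + 0.3 * np.sin(4.0 * x)              # spatially varying density
phi = np.cos(x) + 0.1 * np.cos(12.0 * x)       # some transported quantity

def bar(f, width=16):
    """Top-hat filter via a periodic moving average."""
    k = width // 2
    return sum(np.roll(f, s) for s in range(-k, k + 1)) / (2 * k + 1)

phi_tilde = bar(rho * phi) / bar(rho)          # Favre-filtered field
phi_bar = bar(phi)                             # ordinary filtered field

# With constant density the two operations coincide; here they differ.
print("max |phi_tilde - phi_bar|:", np.abs(phi_tilde - phi_bar).max())
```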
This concept can then be extended to write the Favre-filtered momentum equation for compressible flow. Following Vreman: [14]

$$\frac{\partial \bar{\rho} \tilde{u}_i}{\partial t} + \frac{\partial \bar{\rho} \tilde{u}_i \tilde{u}_j}{\partial x_j} + \frac{\partial \bar{p}}{\partial x_i} - \frac{\partial \tilde{\sigma}_{ij}}{\partial x_j} = -\frac{\partial \bar{\rho} \tau_{ij}^{r}}{\partial x_j} + \frac{\partial}{\partial x_j}\left(\bar{\sigma}_{ij} - \tilde{\sigma}_{ij}\right)$$
where $\sigma_{ij}$ is the shear stress tensor, given for a Newtonian fluid by:

$$\sigma_{ij} = 2 \mu(T) S_{ij} - \frac{2}{3} \mu(T) \delta_{ij} S_{kk}$$
and the term $\frac{\partial}{\partial x_j}\left(\bar{\sigma}_{ij} - \tilde{\sigma}_{ij}\right)$ represents a sub-filter viscous contribution from evaluating the viscosity $\mu(T)$ using the Favre-filtered temperature $\tilde{T}$. The subgrid stress tensor for the Favre-filtered momentum field is given by

$$\tau_{ij}^{r} = \widetilde{u_i u_j} - \tilde{u}_i \tilde{u}_j$$
By analogy, the Leonard decomposition may also be written for the residual stress tensor for the filtered triple product $\overline{\rho u_i u_j}$. The triple product can be rewritten using the Favre filtering operator as $\bar{\rho} \widetilde{u_i u_j}$, which is an unclosed term (it requires knowledge of the fields $u_i$ and $u_j$, when only the fields $\tilde{u}_i$ and $\tilde{u}_j$ are known). It can be broken up in a manner analogous to $\overline{u_i u_j}$ above, which results in the sub-filter stress tensor $\bar{\rho}\left(\widetilde{u_i u_j} - \tilde{u}_i \tilde{u}_j\right)$. This sub-filter term can be split up into contributions from three types of interactions: the Leonard tensor $L_{ij}$, representing interactions among resolved scales; the Clark tensor $C_{ij}$, representing interactions between resolved and unresolved scales; and the Reynolds tensor $R_{ij}$, which represents interactions among unresolved scales. [15]
In addition to the filtered mass and momentum equations, filtering the kinetic energy equation can provide additional insight. The kinetic energy field can be filtered to yield the total filtered kinetic energy:

$$\bar{E} = \frac{1}{2} \overline{u_i u_i}$$
and the total filtered kinetic energy can be decomposed into two terms: the kinetic energy of the filtered velocity field $E_f$,

$$E_f = \frac{1}{2} \bar{u}_i \bar{u}_i$$
and the residual kinetic energy $k_r$,

$$k_r = \frac{1}{2} \overline{u_i u_i} - \frac{1}{2} \bar{u}_i \bar{u}_i = \frac{1}{2} \tau_{ii}^{r}$$
such that $\bar{E} = E_f + k_r$.
The conservation equation for $E_f$ can be obtained by multiplying the filtered momentum transport equation by $\bar{u}_i$ to yield:

$$\frac{\partial E_f}{\partial t} + \bar{u}_j \frac{\partial E_f}{\partial x_j} + \frac{1}{\rho}\frac{\partial \bar{u}_i \bar{p}}{\partial x_i} + \frac{\partial \bar{u}_i \tau_{ij}^{r}}{\partial x_j} - 2\nu \frac{\partial \bar{u}_i \bar{S}_{ij}}{\partial x_j} = -\epsilon_f - \Pi$$
where $\epsilon_f = 2\nu \bar{S}_{ij} \bar{S}_{ij}$ is the dissipation of kinetic energy of the filtered velocity field by viscous stress, and $\Pi = -\tau_{ij}^{r} \bar{S}_{ij}$ represents the sub-filter scale (SFS) dissipation of kinetic energy.
The terms on the left-hand side represent transport, and the terms on the right-hand side are sink terms that dissipate kinetic energy. [9]
The SFS dissipation term $\Pi$ is of particular interest, since it represents the transfer of energy from large resolved scales to small unresolved scales. On average, $\Pi$ transfers energy from large to small scales. However, instantaneously $\Pi$ can be positive or negative, meaning it can also act as a source term for $E_f$, the kinetic energy of the filtered velocity field. The transfer of energy from unresolved to resolved scales is called backscatter (and likewise the transfer of energy from resolved to unresolved scales is called forward-scatter). [16]
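The sign statistics of $\Pi$ can be examined in the same toy a priori setting used above (the synthetic fields, noise level, and box filter are assumptions for illustration); pointwise values of $\Pi$ come out with both signs even when the mean is positive:

```python
import numpy as np

# Toy a priori check of Pi = -tau_ij * Sbar_ij on a 2D periodic domain.
# The synthetic velocity field, noise level, and box filter are assumptions.
N, width = 128, 8
h = 2.0 * np.pi / N
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
rng = np.random.default_rng(0)
u = np.sin(X) * np.cos(Y) + 0.2 * rng.standard_normal((N, N))
v = -np.cos(X) * np.sin(Y) + 0.2 * rng.standard_normal((N, N))

def bar(f):
    """Top-hat filter of `width` cells in each direction (periodic)."""
    k = width // 2
    out = f
    for ax in (0, 1):
        out = sum(np.roll(out, s, axis=ax) for s in range(-k, k + 1)) / (2 * k + 1)
    return out

ub, vb = bar(u), bar(v)
t_xx, t_xy, t_yy = bar(u * u) - ub * ub, bar(u * v) - ub * vb, bar(v * v) - vb * vb
S_xx = np.gradient(ub, h, axis=0)
S_yy = np.gradient(vb, h, axis=1)
S_xy = 0.5 * (np.gradient(ub, h, axis=1) + np.gradient(vb, h, axis=0))
Pi = -(t_xx * S_xx + 2.0 * t_xy * S_xy + t_yy * S_yy)

print(f"mean Pi = {Pi.mean():.3e}; backscatter fraction = {(Pi < 0).mean():.2f}")
```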
Large eddy simulation involves the solution to the discrete filtered governing equations using computational fluid dynamics. LES resolves scales from the domain size $L$ down to the filter size $\Delta$, and as such a substantial portion of high wave number turbulent fluctuations must be resolved. This requires either high-order numerical schemes, or fine grid resolution if low-order numerical schemes are used. Chapter 13 of Pope [9] addresses the question of how fine a grid resolution $\Delta x$ is needed to resolve a filtered velocity field $\bar{u}(\boldsymbol{x})$. Ghosal [17] found that for low-order discretization schemes, such as those used in finite volume methods, the truncation error can be the same order as the subfilter scale contributions, unless the filter width $\Delta$ is considerably larger than the grid spacing $\Delta x$. While even-order schemes have truncation error, they are non-dissipative, [18] and because subfilter scale models are dissipative, even-order schemes will not affect the subfilter scale model contributions as strongly as dissipative schemes.
The filtering operation in large eddy simulation can be implicit or explicit. Implicit filtering recognizes that the subfilter scale model will dissipate energy in the same manner as many numerical schemes. In this way, the grid, or the numerical discretization scheme, can be assumed to be the LES low-pass filter. While this takes full advantage of the grid resolution, and eliminates the computational cost of calculating a subfilter scale model term, it is difficult to determine the shape of the LES filter implied by the numerical scheme. Additionally, truncation error can become an issue. [19]
In explicit filtering, an LES filter is applied to the discretized Navier–Stokes equations, providing a well-defined filter shape and reducing the truncation error. However, explicit filtering requires a finer grid than implicit filtering, and the computational cost increases steeply as the grid spacing $\Delta x$ is refined (for a three-dimensional simulation with a CFL-limited time step, roughly as $(\Delta x)^{-4}$). Chapter 8 of Sagaut (2006) covers LES numerics in greater detail. [10]
Inlet boundary conditions affect the accuracy of LES significantly, and the treatment of inlet conditions for LES is a complicated problem. Theoretically, a good boundary condition for LES should contain the following features: [20]
(1) providing accurate information about flow characteristics, i.e. velocity and turbulence;
(2) satisfying the Navier-Stokes equations and other physics;
(3) being easy to implement and adjust to different cases.
Currently, methods of generating inlet conditions for LES are broadly divided into two categories, following the classification of Tabor et al.: [21]
The first method for generating turbulent inlets is to synthesize them according to particular cases, using techniques such as Fourier methods, proper orthogonal decomposition (POD), and vortex methods. Synthesis techniques attempt to construct a turbulent field at the inlet that has suitable turbulence-like properties and makes it easy to specify turbulence parameters, such as the turbulent kinetic energy and turbulent dissipation rate. In addition, inlet conditions generated using random numbers are computationally inexpensive. However, the method has one serious drawback: the synthesized turbulence does not satisfy the physical structure of fluid flow governed by the Navier-Stokes equations. [20]
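A bare-bones sketch of the Fourier synthesis idea is given below; every parameter here, including the mode count, the model spectrum, and the amplitudes, is an illustrative assumption, and practical generators impose target spectra, length scales, and anisotropy far more carefully.

```python
import numpy as np

# Sketch of Fourier-mode synthetic inlet turbulence (1D fluctuation profile).
# The result looks turbulence-like but does not satisfy the Navier-Stokes
# equations, which is the drawback noted in the text.
rng = np.random.default_rng(42)
n_modes, N = 64, 512
y = np.linspace(0.0, 1.0, N)                       # coordinate across the inlet
k = 2.0 * np.pi * np.arange(1, n_modes + 1)        # mode wavenumbers
E = k ** (-5.0 / 3.0)                              # Kolmogorov-like model spectrum
amps = np.sqrt(2.0 * E / E.sum())                  # amplitudes set by energy content
phases = rng.uniform(0.0, 2.0 * np.pi, n_modes)    # independent random phases

u_fluct = (amps[:, None] * np.cos(k[:, None] * y[None, :] + phases[:, None])).sum(axis=0)
u_inlet = 1.0 + u_fluct                            # mean inflow plus fluctuation
print("fluctuation rms:", u_fluct.std())
```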
The second method involves a separate precursor calculation that generates a turbulent database, which can be introduced into the main computation at the inlets. The database (sometimes called a 'library') can be generated in a number of ways, such as cyclic domains, pre-prepared libraries, and internal mapping. However, generating turbulent inflow by precursor simulations requires substantial computational capacity.
Researchers examining the application of various types of synthetic and precursor calculations have found that the more realistic the inlet turbulence, the more accurate the predictions of the LES. [20]
To discuss the modeling of unresolved scales, first the unresolved scales must be classified. They fall into two groups: resolved sub-filter scales (SFS) and sub-grid scales (SGS).
The resolved sub-filter scales represent the scales with wave numbers larger than the cutoff wave number $k_c$, but whose effects are dampened by the filter. Resolved sub-filter scales only exist when filters non-local in wave-space are used (such as a box or Gaussian filter). These resolved sub-filter scales must be modeled using filter reconstruction.
Sub-grid scales are any scales that are smaller than the cutoff filter width $\Delta$. The form of the SGS model depends on the filter implementation. As mentioned in the Numerical methods for LES section, if implicit LES is considered, no SGS model is implemented and the numerical effects of the discretization are assumed to mimic the physics of the unresolved turbulent motions.
Without a universally valid description of turbulence, empirical information must be utilized when constructing and applying SGS models, supplemented with fundamental physical constraints such as Galilean invariance. [9] [22] Two classes of SGS models exist: functional models and structural models. Some models may be categorized as both.
Functional models are simpler than structural models, focusing only on dissipating energy at a rate that is physically correct. These are based on an artificial eddy viscosity approach, where the effects of turbulence are lumped into a turbulent viscosity. The approach treats dissipation of kinetic energy at sub-grid scales as analogous to molecular diffusion. In this case, the deviatoric part of $\tau_{ij}$ is modeled as:

$$\tau_{ij}^{r} - \frac{1}{3} \tau_{kk}^{r} \delta_{ij} = -2 \nu_{t} \bar{S}_{ij}$$
where $\nu_{t}$ is the turbulent eddy viscosity and $\bar{S}_{ij}$ is the rate-of-strain tensor.
Based on dimensional analysis, the eddy viscosity must have units of $\left[\nu_{t}\right] = \frac{\mathrm{m}^2}{\mathrm{s}}$. Most eddy viscosity SGS models construct the eddy viscosity as the product of a characteristic length scale and a characteristic velocity scale.
The first SGS model developed was the Smagorinsky–Lilly SGS model, which was developed by Smagorinsky [1] and used in the first LES simulation by Deardorff. [2] It models the eddy viscosity as:

$$\nu_{t} = \left(C_s \Delta\right)^2 \sqrt{2 \bar{S}_{ij} \bar{S}_{ij}} = \left(C_s \Delta\right)^2 \left|\bar{S}\right|$$
where $\Delta$ is the grid size and $C_s$ is a constant.
This method assumes that the energy production and dissipation of the small scales are in equilibrium, that is, $\epsilon = \Pi$.
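In discrete form the model amounts to a few lines. The sketch below is a toy: the 2D periodic grid, the velocity field, the choice of $\Delta$ equal to the grid spacing, and the value $C_s = 0.17$ are all assumptions made for illustration.

```python
import numpy as np

# Smagorinsky eddy viscosity nu_t = (Cs * Delta)^2 * |S| on a 2D periodic grid.
N = 64
h = 2.0 * np.pi / N                      # grid spacing, taken as the filter width
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)                # resolved (filtered) velocity components
v = -np.cos(X) * np.sin(Y)

Cs = 0.17                                # Smagorinsky coefficient (flow dependent)
Sxx = np.gradient(u, h, axis=0)
Syy = np.gradient(v, h, axis=1)
Sxy = 0.5 * (np.gradient(u, h, axis=1) + np.gradient(v, h, axis=0))
S_mag = np.sqrt(2.0 * (Sxx**2 + 2.0 * Sxy**2 + Syy**2))   # |S| = sqrt(2 S_ij S_ij)
nu_t = (Cs * h) ** 2 * S_mag             # eddy viscosity field
print("max nu_t:", nu_t.max())
```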
Germano et al. [23] identified a number of studies using the Smagorinsky model that each found different values for the Smagorinsky constant $C_s$ for different flow configurations. In an attempt to formulate a more universal approach to SGS models, Germano et al. proposed a dynamic Smagorinsky model, which utilized two filters: a grid LES filter, denoted $\bar{\phi}$, and a test LES filter, denoted $\hat{\phi}$, for any turbulent field $\phi$. The test filter is larger in size than the grid filter and adds an additional smoothing of the turbulence field over the already smoothed fields represented by the LES. Applying the test filter to the LES equations (which are obtained by applying the "grid" filter to the Navier–Stokes equations) results in a new set of equations that are identical in form but with the SGS stress $\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \bar{u}_j$ replaced by $T_{ij} = \widehat{\overline{u_i u_j}} - \hat{\bar{u}}_i \hat{\bar{u}}_j$. Germano et al. noted that even though neither $\tau_{ij}$ nor $T_{ij}$ can be computed exactly because of the presence of unresolved scales, there is an exact relation connecting these two tensors. This relation, known as the Germano identity, is

$$\mathcal{L}_{ij} = T_{ij} - \hat{\tau}_{ij}, \qquad \mathcal{L}_{ij} = \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j.$$

Here $\mathcal{L}_{ij}$ can be explicitly evaluated as it involves only the filtered velocities and the operation of test filtering. The significance of the identity is that if one assumes that turbulence is self-similar so that the SGS stress at the grid and test levels have the same form, $\tau_{ij} = -2 C \Delta^2 |\bar{S}| \bar{S}_{ij}$ and $T_{ij} = -2 C \hat{\Delta}^2 |\hat{\bar{S}}| \hat{\bar{S}}_{ij}$, then the Germano identity provides an equation from which the Smagorinsky coefficient $C$ (which is no longer a 'constant') can potentially be determined. [Inherent in the procedure is the assumption that the coefficient $C$ is invariant of scale (see review [24] )]. In order to do this, two additional steps were introduced in the original formulation. First, one assumed that even though $C$ was in principle variable, the variation was sufficiently slow that it could be moved out of the test filtering operation, $\widehat{C \Delta^2 |\bar{S}| \bar{S}_{ij}} \approx C\, \widehat{\Delta^2 |\bar{S}| \bar{S}_{ij}}$. Second, since $C$ was a scalar, the Germano identity was contracted with a second rank tensor (the rate-of-strain tensor was chosen) to convert it to a scalar equation from which $C$ could be determined. Lilly [25] found a less arbitrary and therefore more satisfactory approach for obtaining $C$ from the tensor identity. He noted that the Germano identity required the satisfaction of nine equations at each point in space (of which only five are independent) for a single quantity $C$. The problem of obtaining $C$ was therefore over-determined. He proposed therefore that $C$ be determined using a least-squares fit by minimizing the residuals. This results in

$$C = \frac{\mathcal{L}_{ij} M_{ij}}{M_{kl} M_{kl}}$$
Here

$$M_{ij} = 2 \Delta^2 \left( \widehat{|\bar{S}| \bar{S}_{ij}} - \alpha^2 |\hat{\bar{S}}| \hat{\bar{S}}_{ij} \right)$$
and for brevity $\alpha = \hat{\Delta} / \Delta$. Initial attempts to implement the model in LES simulations proved unsuccessful. First, the computed coefficient $C$ was not at all "slowly varying" as assumed and varied as much as any other turbulent field. Secondly, the computed $C$ could be positive as well as negative. The latter fact in itself should not be regarded as a shortcoming, as a priori tests using filtered DNS fields have shown that the local subgrid dissipation rate in a turbulent field is almost as likely to be negative as it is positive, even though the integral over the fluid domain is always positive, representing a net dissipation of energy in the large scales. A slight preponderance of positive values, as opposed to strict positivity of the eddy viscosity, results in the observed net dissipation. This so-called "backscatter" of energy from small to large scales indeed corresponds to negative $C$ values in the Smagorinsky model. Nevertheless, the Germano–Lilly formulation was found not to result in stable calculations. An ad hoc measure was adopted by averaging the numerator and denominator over homogeneous directions (where such directions exist in the flow):

$$C = \frac{\left\langle \mathcal{L}_{ij} M_{ij} \right\rangle}{\left\langle M_{kl} M_{kl} \right\rangle}$$
When the averaging involved a large enough statistical sample that the computed $C$ was positive (or at least only rarely negative), stable calculations were possible. Simply setting the negative values to zero (a procedure called "clipping"), with or without the averaging, also resulted in stable calculations. Meneveau proposed [26] an averaging over Lagrangian fluid trajectories with an exponentially decaying "memory". This can be applied to problems lacking homogeneous directions and can be stable if the effective time over which the averaging is done is long enough and yet not so long as to smooth out spatial inhomogeneities of interest.
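A compressed sketch of the dynamic procedure with volume averaging and clipping is shown below. It is a 2D toy with illustrative fields, filters, and a nominal test-to-grid ratio; a real implementation works in 3D with the deviatoric tensors and a properly calibrated test filter.

```python
import numpy as np

# Dynamic Smagorinsky coefficient from the Germano identity, with volume
# averaging and clipping. All fields and filter choices are illustrative.
N = 64
h = 2.0 * np.pi / N                  # grid spacing = grid-filter width
alpha = 2.0                          # nominal test-to-grid filter width ratio
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
rng = np.random.default_rng(1)
u = np.sin(X) * np.cos(Y) + 0.1 * rng.standard_normal((N, N))
v = -np.cos(X) * np.sin(Y) + 0.1 * rng.standard_normal((N, N))

def test_filter(f):
    """Simple 3-point top-hat test filter in each direction (periodic)."""
    out = f
    for ax in (0, 1):
        out = sum(np.roll(out, s, axis=ax) for s in (-1, 0, 1)) / 3.0
    return out

def strain(a, b):
    """Rate-of-strain components (S_xx, S_xy, S_yy) by central differences."""
    s_xx = np.gradient(a, h, axis=0)
    s_yy = np.gradient(b, h, axis=1)
    s_xy = 0.5 * (np.gradient(a, h, axis=1) + np.gradient(b, h, axis=0))
    return s_xx, s_xy, s_yy

# Grid-level strain and its magnitude |S| = sqrt(2 S_ij S_ij)
Sxx, Sxy, Syy = strain(u, v)
Smag = np.sqrt(2.0 * (Sxx**2 + 2.0 * Sxy**2 + Syy**2))

# Test-level velocities, strain, and magnitude
ut, vt = test_filter(u), test_filter(v)
Txx, Txy, Tyy = strain(ut, vt)
Tmag = np.sqrt(2.0 * (Txx**2 + 2.0 * Txy**2 + Tyy**2))

# Resolved stress L_ij = test(u_i u_j) - test(u_i) test(u_j)
Lxx = test_filter(u * u) - ut * ut
Lxy = test_filter(u * v) - ut * vt
Lyy = test_filter(v * v) - vt * vt

# M_ij = 2 h^2 [ test(|S| S_ij) - alpha^2 |S_test| S_test,ij ]
Mxx = 2.0 * h**2 * (test_filter(Smag * Sxx) - alpha**2 * Tmag * Txx)
Mxy = 2.0 * h**2 * (test_filter(Smag * Sxy) - alpha**2 * Tmag * Txy)
Myy = 2.0 * h**2 * (test_filter(Smag * Syy) - alpha**2 * Tmag * Tyy)

LM = Lxx * Mxx + 2.0 * Lxy * Mxy + Lyy * Myy
MM = Mxx * Mxx + 2.0 * Mxy * Mxy + Myy * Myy
C = max(LM.mean() / MM.mean(), 0.0)  # averaged over the domain, then clipped
print("dynamic coefficient C =", C)
```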
Lilly's modification of the Germano method, followed by a statistical averaging or synthetic removal of negative viscosity regions, seems ad hoc, even if it could be made to "work". An alternate formulation of the least-squares minimization procedure, known as the "Dynamic Localization Model" (DLM), was suggested by Ghosal et al. [27] In this approach one first defines a quantity

$$E_{ij}(\boldsymbol{x}, t) = \mathcal{L}_{ij} - T_{ij} + \hat{\tau}_{ij}$$
with the tensors $\tau_{ij}$ and $T_{ij}$ replaced by the appropriate SGS model. This tensor then represents the amount by which the subgrid model fails to respect the Germano identity at each spatial location. In Lilly's approach, $C$ is then pulled out of the hat operator,
making $E_{ij}$ an algebraic function of $C$, which is then determined by requiring that $E_{ij} E_{ij}$, considered as a function of $C$, have the least possible value. However, since the $C$ thus obtained turns out to be just as variable as any other fluctuating quantity in turbulence, the original assumption of the constancy of $C$ cannot be justified a posteriori. In the DLM approach one avoids this inconsistency by not invoking the step of removing $C$ from the test filtering operation. Instead, one defines a global error over the entire flow domain by the quantity

$$\mathcal{E}[C] = \int E_{ij}(\boldsymbol{x}, t)\, E_{ij}(\boldsymbol{x}, t)\, d\boldsymbol{x}$$
where the integral ranges over the whole fluid volume. This global error $\mathcal{E}$ is then a functional of the spatially varying function $C(\boldsymbol{x}, t)$ (here the time instant, $t$, is fixed and therefore appears just as a parameter), which is determined so as to minimize this functional. The solution to this variational problem is that $C$ must satisfy a Fredholm integral equation of the second kind

$$C(\boldsymbol{x}) = f(\boldsymbol{x}) + \int \mathcal{K}(\boldsymbol{x}, \boldsymbol{y})\, C(\boldsymbol{y})\, d\boldsymbol{y}$$
where the functions $f$ and $\mathcal{K}$ are defined in terms of the resolved fields and are therefore known at each time step, and the integral ranges over the whole fluid domain. The integral equation is solved numerically by an iteration procedure, and convergence was found to be generally rapid if used with a pre-conditioning scheme. Even though this variational approach removes an inherent inconsistency in Lilly's approach, the $C$ obtained from the integral equation still displayed the instability associated with negative viscosities. This can be resolved by insisting that $\mathcal{E}$ be minimized subject to the constraint $C \geq 0$. This leads to an equation for $C$ that is nonlinear

$$C(\boldsymbol{x}) = \left[ f(\boldsymbol{x}) + \int \mathcal{K}(\boldsymbol{x}, \boldsymbol{y})\, C(\boldsymbol{y})\, d\boldsymbol{y} \right]_{+}$$
Here the suffix $+$ indicates the "positive part of", that is, $x_{+} = (x + |x|)/2$. Even though this superficially looks like "clipping", it is not an ad hoc scheme but a bona fide solution of the constrained variational problem. This DLM(+) model was found to be stable and yielded excellent results for forced and decaying isotropic turbulence, channel flows, and a variety of other more complex geometries. If a flow happens to have homogeneous directions (let us say the directions $x$ and $z$), then one can introduce the ansatz $C = C(y, t)$. The variational approach then immediately yields Lilly's result with averaging over homogeneous directions, without any need for ad hoc modifications of a prior result.
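The structure of the constrained solution can be seen in a small discrete analogue: one iterates $C \leftarrow [f + \mathcal{K} C]_{+}$ to a fixed point. In the sketch below the vector $f$ and kernel matrix $K$ are synthetic stand-ins, scaled only so that the iteration contracts; they are not derived from any flow.

```python
import numpy as np

# Discrete analogue of the constrained DLM equation C = [f + K C]_+ .
# f and K are synthetic stand-ins, not derived from a flow field.
rng = np.random.default_rng(3)
n = 200
f = 0.1 * rng.standard_normal(n)
K = rng.standard_normal((n, n))
K *= 0.5 / np.abs(K).sum(axis=1, keepdims=True)    # row scaling: contraction

C = np.zeros(n)
for it in range(500):
    C_new = np.maximum(f + K @ C, 0.0)             # "+" = positive part
    if np.max(np.abs(C_new - C)) < 1e-12:
        break
    C = C_new

print(f"converged after {it} iterations; min C = {C.min():.3f} (never negative)")
```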
One shortcoming of the DLM(+) model was that it did not describe backscatter, which is known from analyses of DNS data to be a real physical effect. Two approaches were developed to address this. In one approach, due to Carati et al., [28] a fluctuating force with amplitude determined by the fluctuation-dissipation theorem is added, in analogy with Landau's theory of fluctuating hydrodynamics. In the second approach, one notes that any "backscattered" energy appears in the resolved scales only at the expense of energy in the subgrid scales. The DLM can be modified in a simple way to take this physical fact into account, so as to allow for backscatter while being inherently stable. This k-equation version of the DLM, DLM(k), replaces the velocity scale $\Delta |\bar{S}|$ in the Smagorinsky eddy viscosity model by $\sqrt{k}$, where $k$ is the subgrid scale kinetic energy, so that $\nu_t = C \Delta \sqrt{k}$. The procedure for determining $C$ remains identical to the "unconstrained" version except that the tensors are $\tau_{ij} = -2 C \Delta \sqrt{k}\, \bar{S}_{ij}$ and $T_{ij} = -2 C \hat{\Delta} \sqrt{K}\, \hat{\bar{S}}_{ij}$, where the sub-test scale kinetic energy $K$ is related to the subgrid scale kinetic energy $k$ by $K = \hat{k} + \frac{1}{2} \mathcal{L}_{ii}$ (this follows by taking the trace of the Germano identity). To determine $k$ we now use a transport equation

$$\frac{\partial k}{\partial t} + \bar{u}_j \frac{\partial k}{\partial x_j} = -\tau_{ij} \bar{S}_{ij} - C_{\epsilon} \frac{k^{3/2}}{\Delta} + \frac{\partial}{\partial x_j}\left( \left( \nu + C_k \Delta \sqrt{k} \right) \frac{\partial k}{\partial x_j} \right)$$
where $\nu$ is the kinematic viscosity and $C_{\epsilon}$ and $C_k$ are positive coefficients representing kinetic energy dissipation and diffusion, respectively. These can be determined following the dynamic procedure with constrained minimization as in DLM(+). This approach, though more expensive to implement than DLM(+), was found to be stable and resulted in good agreement with experimental data for a variety of flows tested. Furthermore, it is mathematically impossible for DLM(k) to result in an unstable computation, as the sum of the large-scale and SGS energies is non-increasing by construction. Both of these approaches incorporating backscatter work well. They yield models that are slightly less dissipative, with somewhat improved performance over DLM(+). The DLM(k) model additionally yields the subgrid kinetic energy, which may be a physical quantity of interest. These improvements are achieved at a somewhat increased cost in model implementation.
The Dynamic Model originated at the 1990 Summer Program of the Center for Turbulence Research (CTR) at Stanford University. A series of "CTR-Tea" seminars celebrated the 30th anniversary of this important milestone in turbulence modeling.