The history of numerical weather prediction considers how numerical weather prediction, the use of current weather conditions as input to mathematical models of the atmosphere and oceans in order to predict the weather and future sea state, has changed over the years. Though first attempted manually in the 1920s, it was not until the advent of the computer and computer simulation that computation time was reduced to less than the forecast period itself. ENIAC was used to create the first computer forecasts in 1950, and over the years more powerful computers have been used to increase the size of initial datasets and to use more complicated versions of the equations of motion. The development of global forecasting models led to the first climate models. The development of limited-area (regional) models facilitated advances in forecasting the tracks of tropical cyclones, as well as air quality, in the 1970s and 1980s.
Because the output of forecast models based on atmospheric dynamics requires corrections near ground level, model output statistics (MOS) were developed in the 1970s and 1980s for individual forecast points (locations). MOS apply statistical techniques to post-process the output of dynamical models with the most recent surface observations and the forecast point's climatology. This technique can correct for model resolution as well as model biases. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends only to about two weeks into the future, since the density and quality of observations, together with the chaotic nature of the partial differential equations used to calculate the forecast, introduce errors which double every five days. The use of model ensemble forecasts since the 1990s helps to define the forecast uncertainty and to extend weather forecasting farther into the future than otherwise possible.
Until the end of the 19th century, weather prediction was entirely subjective and based on empirical rules, with only limited understanding of the physical mechanisms behind weather processes. In 1901 Cleveland Abbe, founder of the United States Weather Bureau, proposed that the atmosphere is governed by the same principles of thermodynamics and hydrodynamics that were studied in the previous century. [1] In 1904, Vilhelm Bjerknes derived a two-step procedure for model-based weather forecasting. First, a diagnostic step is used to process data to generate initial conditions, which are then advanced in time by a prognostic step that solves the initial value problem. [2] He also identified seven variables that defined the state of the atmosphere at a given point: pressure, temperature, density, humidity, and the three components of the flow velocity vector. Bjerknes pointed out that equations based on mass continuity, conservation of momentum, the first and second laws of thermodynamics, and the ideal gas law could be used to estimate the state of the atmosphere in the future through numerical methods. [3] With the exception of the second law of thermodynamics, [2] these equations form the basis of the primitive equations used in present-day weather models. [4]
In 1922, Lewis Fry Richardson published the first attempt at forecasting the weather numerically. Using a hydrostatic variation of Bjerknes's primitive equations, [2] Richardson produced by hand a 6-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so. [3] His forecast calculated that the change in surface pressure would be 145 millibars (4.3 inHg ), an unrealistic value incorrect by two orders of magnitude. The large error was caused by an imbalance in the pressure and wind velocity fields used as the initial conditions in his analysis. [2]
The first successful numerical prediction was performed using the ENIAC digital computer in 1950 by a team led by American meteorologist Jule Charney. The team included Philip Thompson, Larry Gates, Norwegian meteorologist Ragnar Fjørtoft, applied mathematician John von Neumann, computer programmer Klara Dan von Neumann, M. H. Frankel, Jerome Namias, John C. Freeman Jr., Francis Reichelderfer, George Platzman, and Joseph Smagorinsky. [5] [6] [7] They used a simplified form of atmospheric dynamics based on solving the barotropic vorticity equation over a single layer of the atmosphere, by computing the geopotential height of the atmosphere's 500 millibars (15 inHg) pressure surface. [8] This simplification greatly reduced demands on computer time and memory, so the computations could be performed on the relatively primitive computers of the day. [9] When news of the first weather forecast by ENIAC was received by Richardson in 1950, he remarked that the results were an "enormous scientific advance." [2] The first calculations for a 24-hour forecast took ENIAC nearly 24 hours to produce, [2] but Charney's group noted that most of that time was spent in "manual operations", and expressed hope that forecasting the weather before it occurs would soon be realized. [8]
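The barotropic vorticity equation solved in those ENIAC runs can be written, in its standard textbook form, as conservation of absolute vorticity following the nondivergent flow:

```latex
\frac{\partial \zeta}{\partial t} = -\mathbf{V}_\psi \cdot \nabla\,(\zeta + f)
```

Here \(\zeta\) is the relative vorticity of the 500 mb flow, \(f\) the Coriolis parameter, and \(\mathbf{V}_\psi\) the nondivergent (streamfunction-derived) wind: a single prognostic equation in a single variable, which is what made the 1950 computation feasible on the hardware of the day.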
In the United Kingdom, the Meteorological Office's first numerical weather prediction was completed by F. H. Bushby and Mavis Hinds in 1952 under the guidance of John Sawyer. These experimental forecasts were generated using a 12 × 8 grid with a grid spacing of 260 km and a one-hour time step, and required four hours of computing time for a 24-hour forecast on the EDSAC computer at the University of Cambridge and the LEO computer developed by J. Lyons and Co. Following these initial experiments, work moved to the Ferranti Mark 1 computer at the Manchester University Department of Electrical Engineering, and in 1959 a Ferranti Mercury computer, known as 'Meteor', was installed at the Met Office. [10]
In September 1954, Carl-Gustaf Rossby assembled an international group of meteorologists in Stockholm and produced the first operational forecast (i.e. routine predictions for practical use) based on the barotropic equation. [11] Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit (JNWPU), a joint project by the U.S. Air Force, Navy, and Weather Bureau. [12] The JNWPU model was originally a three-layer barotropic model, also developed by Charney. [13] It only modeled the atmosphere in the Northern Hemisphere. [14] In 1956, the JNWPU switched to a two-layer thermotropic model developed by Thompson and Gates. [13] The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated using the 500-and-1,000 mb (15-and-30 inHg) geopotential height surfaces and the average thermal wind between them. [15] [16] However, due to the low skill shown by the thermotropic model, the JNWPU reverted to the single-layer barotropic model in 1958. [2] The Japan Meteorological Agency became the third organization to initiate operational numerical weather prediction in 1959. [17] The first real-time forecasts made by Australia's Bureau of Meteorology in 1969 for portions of the Southern Hemisphere were also based on the single-layer barotropic model. [18]
Later models used more complete equations for atmospheric dynamics and thermodynamics. In 1959, Karl-Heinz Hinkelmann produced the first reasonable primitive equation forecast, 37 years after Richardson's failed attempt. Hinkelmann did so by removing small oscillations from the numerical model during initialization. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. [2] [18] Later additions to primitive equation models allowed additional insight into different weather phenomena. In the United States, solar radiation effects were added to the primitive equation model in 1967; moisture effects and latent heat were added in 1968; and feedback effects from rain on convection were incorporated in 1971. Three years later, the first global forecast model was introduced. [13] Sea ice began to be initialized in forecast models in 1971. [19] Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific. [20]
A global forecast model is a weather forecasting model which initializes and forecasts the weather throughout the Earth's troposphere. It is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. [21] These equations, along with the ideal gas law, are used to evolve the density, pressure, and potential temperature scalar fields and the flow velocity vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some primitive-equation high-resolution models as well. [22] The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, [23] with the exception of a few idealized cases. [24] Therefore, numerical methods are used to obtain approximate solutions. Different models use different solution methods: some global models and almost all regional models use finite difference methods for all three spatial dimensions, while other global models and a few regional models use spectral methods for the horizontal dimensions and finite-difference methods in the vertical. [23]
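As a toy illustration of the finite-difference approach described above (a minimal sketch, not any operational model's scheme), the 1-D linear advection equation can be stepped forward with a first-order upwind discretization:

```python
import numpy as np

# Illustrative sketch only: solve du/dt + c * du/dx = 0 on a periodic
# domain with a first-order upwind finite-difference scheme, the simplest
# relative of the grid methods used in forecast models.

def advect_upwind(u0, c, dx, dt, steps):
    """Advance an initial profile u0 by `steps` upwind time steps (c > 0)."""
    u = u0.copy()
    nu = c * dt / dx          # Courant number; stability requires 0 <= nu <= 1
    for _ in range(steps):
        u[1:] = u[1:] - nu * (u[1:] - u[:-1])  # upwind (backward) difference
        u[0] = u[-1]          # crude periodic wrap at the left boundary
    return u

# A Gaussian "pressure bump" advected across the domain:
x = np.linspace(0.0, 1.0, 101)
u0 = np.exp(-200.0 * (x - 0.3) ** 2)
u = advect_upwind(u0, c=1.0, dx=x[1] - x[0], dt=0.005, steps=60)
```

At a Courant number of 0.5 the bump travels at the correct speed but is smeared by numerical diffusion, the classic trade-off of low-order grid schemes that motivated the spectral methods used by some global models.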
The National Meteorological Center's Global Spectral Model was introduced during August 1980. [14] The European Centre for Medium-Range Weather Forecasts model debuted on May 1, 1985. [25] The United Kingdom Met Office has been running their global model since the late 1980s, [26] adding a 3D-Var data assimilation scheme in mid-1999. [27] The Canadian Meteorological Centre has been running a global model since 1991. [28] The United States ran the Nested Grid Model (NGM) from 1987 to 2000, with some features lasting as late as 2009. Between 2000 and 2002, the Environmental Modeling Center ran the Aviation (AVN) model for shorter-range forecasts and the Medium Range Forecast (MRF) model at longer time ranges. During this time, the AVN model was extended to the end of the forecast period, eliminating the need for the MRF and thereby replacing it. In late 2002, the AVN model was renamed the Global Forecast System (GFS). [29] The German Weather Service has been running their global hydrostatic model, the GME, using a hexagonal icosahedral grid since 2002. [30] The GFS was slated to eventually be supplanted by the Flow-following, finite-volume Icosahedral Model (FIM), which like the GME is gridded on a truncated icosahedron, in the mid-2010s.
In 1956, Norman A. Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model. [31] [32] Following Phillips's work, several groups began working to create general circulation models. [33] The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory. [34] By the early 1980s, the United States' National Center for Atmospheric Research had developed the Community Atmosphere Model; this model has been continuously refined into the 2000s. [35] In 1986, efforts began to initialize and model soil and vegetation types, which led to more realistic forecasts. For example, the Center for Ocean-Land Atmosphere Studies (COLA) model showed a warm temperature bias of 2–4 °C (3.6–7.2 °F) and a low precipitation bias due to incorrect parameterization of crop and vegetation type across the central United States. [36] Coupled ocean-atmosphere climate models such as the Hadley Centre for Climate Prediction and Research's HadCM3 model are currently being used as inputs for climate change studies. [33] The effects of gravity waves were neglected within these models until the mid-1980s. Now, gravity waves are required within global climate models in order to properly simulate regional and global scale circulations, though their broad spectrum makes their incorporation complicated. [37] The Climate System Model (CSM) was developed at the National Center for Atmospheric Research in January 1994. [38]
In comparison to traditional physics-based methods, machine learning (ML), or more broadly, artificial intelligence (AI) approaches, have demonstrated potential in enhancing weather forecasts (refer to the review by Shen et al. [39] ). As detailed in Table 4 of Shen et al., these AI-driven models were trained with ERA5 reanalysis data and CMIP6 datasets and evaluated using a variety of metrics such as root mean square errors (RMSE), anomaly correlation coefficients (ACC), Continuous Ranked Probability Score (CRPS), Temporal Anomaly Correlation Coefficient (TCC), Ranked Probability Skill Score (RPSS), Brier Skill Score (BSS), and bivariate correlation (COR).
By utilizing deep convolutional neural networks (CNNs), Weyn et al. [40] achieved lead times of 14 days. Notably, recent advancements in AI, especially transformer models (e.g., Vaswani et al. [41] ) and their derivatives, such as the “vision transformer” (Dosovitskiy et al. 2020 [42] ), have created substantial opportunities to lower the cost of weather forecasting and revisit the predictability limits. Among the AI-powered models mentioned, all provided forecasts that were comparable to or slightly better than those from PDE-physics-based systems for short-term forecasts (3–14 days).
Three studies have attempted to conduct simulations at subseasonal or larger scales. Of these, the ClimX system was presented in a conference paper. The enhanced Fu-Xi system, along with its base version, was documented in both a preprint and a journal article. In the third study, Bach et al. (2024) [43] utilized a hybrid dynamical and data-driven approach to show potential improvements in subseasonal monsoon prediction. Their findings indicate a correlation above 0.5 over a 46-day period in two predictions.
The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models (also known as limited-area models, or LAMs) allow for the use of finer (smaller) grid spacing than global models, because the available computational resources are focused on a specific area instead of being spread over the globe. This allows regional models to explicitly resolve smaller-scale meteorological phenomena that cannot be represented on the coarser grid of a global model. Regional models use a global model to specify conditions at the edge of their domain (boundary conditions) in order to allow systems from outside the regional model domain to move into its area. Uncertainty and errors within regional models are introduced by the global model used for the boundary conditions at the edge of the regional model, as well as by errors attributable to the regional model itself. [44]
In the United States, the first operational regional model, the limited-area fine-mesh (LFM) model, was introduced in 1971. [13] Its development was halted, or frozen, in 1986. The NGM debuted in 1987 and was also used to create model output statistics for the United States. [45] Its development was frozen in 1991. The Eta model was implemented for the United States in 1993 [14] and was in turn upgraded to the NAM in 2006. The U.S. also offers the Rapid Refresh (which replaced the RUC in 2012) for short-range and high-resolution applications; both the Rapid Refresh and the NAM are built on the same framework, the WRF. Météo-France has been running their Aire Limitée Adaptation dynamique Développement InterNational (ALADIN) mesoscale model for France, based upon the ECMWF global model, since 1995. [46] In July 1996, the Bureau of Meteorology implemented the Limited Area Prediction System (LAPS). [47] The Canadian Regional Finite-Elements model (RFE) went into operational use on April 22, 1986. [48] It was followed by the Canadian Global Environmental Multiscale Model (GEM) mesoscale model on February 24, 1997. [46]
The German Weather Service developed the High Resolution Regional Model (HRM) in 1999; it is run with hydrostatic assumptions and is widely used within the operational and research meteorological communities. [49] The Antarctic Mesoscale Prediction System (AMPS) was developed for the southernmost continent in 2000 by the United States Antarctic Program. [50] The German non-hydrostatic Lokal-Modell for Europe (LME) has been run since 2002, and an increase in areal domain became operational on September 28, 2005. [51] The Japan Meteorological Agency has run a high-resolution, non-hydrostatic mesoscale model since September 2004. [52]
The technical literature on air pollution dispersion is quite extensive and dates back to the 1930s and earlier. One of the early air pollutant plume dispersion equations was derived by Bosanquet and Pearson. [53] Their equation did not assume a Gaussian distribution, nor did it include the effect of ground reflection of the pollutant plume. Sir Graham Sutton derived an air pollutant plume dispersion equation in 1947 which did include the assumption of a Gaussian distribution for the vertical and crosswind dispersion of the plume, and also included the effect of ground reflection of the plume. [54] Under the stimulus provided by the advent of stringent environmental control regulations, there was an immense growth in the use of air pollutant plume dispersion calculations between the late 1960s and today. A great many computer programs for calculating the dispersion of air pollutant emissions were developed during that period of time, and they were called "air dispersion models". The basis for most of those models was the complete equation for Gaussian dispersion modeling of continuous, buoyant air pollution plumes. The Gaussian air pollutant dispersion equation requires the input of H, the pollutant plume's centerline height above ground level; H is the sum of Hs (the actual physical height of the pollutant plume's emission source point) and ΔH (the plume rise due to the plume's buoyancy).
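One commonly quoted form of that equation, for a continuous point source with the ground-reflection term Sutton introduced, is:

```latex
C(x,y,z) = \frac{Q}{2\pi u \,\sigma_y \sigma_z}
\exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
\left[\exp\!\left(-\frac{(z-H)^2}{2\sigma_z^2}\right)
    + \exp\!\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right]
```

where C is the pollutant concentration, Q the emission rate, u the wind speed, and \(\sigma_y\), \(\sigma_z\) the crosswind and vertical dispersion parameters; the second exponential inside the brackets is the ground-reflection term.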
To determine ΔH, many if not most of the air dispersion models developed between the late 1960s and the early 2000s used what are known as "the Briggs equations." G. A. Briggs first published his plume rise observations and comparisons in 1965. [55] In 1968, at a symposium sponsored by Conservation of Clean Air and Water in Europe, he compared many of the plume rise models then available in the literature. [56] In that same year, Briggs also wrote the section of the publication edited by Slade [57] dealing with the comparative analyses of plume rise models. That was followed in 1969 by his classical critical review of the entire plume rise literature, [58] in which he proposed a set of plume rise equations which have become widely known as "the Briggs equations". Subsequently, Briggs modified his 1969 plume rise equations in 1971 and in 1972. [59] [60]
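One widely quoted member of that set, for the transitional rise of bent-over buoyant plumes, takes the form:

```latex
\Delta H = \frac{1.6\, F^{1/3} x^{2/3}}{u}
```

with F the buoyancy flux parameter (in m^4 s^{-3}), x the downwind distance, and u the wind speed; the other Briggs equations cover stable, calm, and final-rise conditions.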
The Urban Airshed Model, a regional forecast model for the effects of air pollution and acid rain, was developed by a private company in the US in 1970. Development of this model was taken over by the Environmental Protection Agency and improved in the mid to late 1970s using results from a regional air pollution study. While developed in California, this model was later used in other areas of North America, Europe and Asia during the 1980s. [61] The Community Multiscale Air Quality model (CMAQ) is an open source air quality model run within the United States in conjunction with the NAM mesoscale model since 2004. [62] [63] The first operational air quality model in Canada, Canadian Hemispheric and Regional Ozone and NOx System (CHRONOS), began to be run in 2001. It was replaced with the Global Environmental Multiscale model – Modelling Air quality and Chemistry (GEM-MACH) model in November 2009. [64]
During 1972, the first model to forecast storm surge along the continental shelf was developed, known as the Special Program to List the Amplitude of Surges from Hurricanes (SPLASH). [65] In 1978, the first hurricane-tracking model based on atmospheric dynamics, the movable fine-mesh (MFM) model, began operating. [13] Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance which occurred with increased computational power, it was not until the 1980s that numerical weather prediction showed skill, and not until the 1990s that it consistently outperformed statistical or simple dynamical models. [66] In the early 1980s, the assimilation of satellite-derived winds from water vapor, infrared, and visible satellite imagery was found to improve tropical cyclone track forecasting. [67] The Geophysical Fluid Dynamics Laboratory (GFDL) hurricane model was used for research purposes between 1973 and the mid-1980s. Once it was determined that it could show skill in hurricane prediction, a multi-year transition transformed the research model into an operational model which could be used by the National Weather Service in 1995. [68]
The Hurricane Weather Research and Forecasting (HWRF) model is a specialized version of the Weather Research and Forecasting (WRF) model and is used to forecast the track and intensity of tropical cyclones. The model was developed by the National Oceanic and Atmospheric Administration (NOAA), the U.S. Naval Research Laboratory, the University of Rhode Island, and Florida State University. [69] It became operational in 2007. [70] Despite improvements in track forecasting, predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill than dynamical guidance. [71]
The first ocean wave models were developed in the 1960s and 1970s. These models tended to overestimate the role of wind in wave development and to underplay wave interactions. A lack of knowledge concerning how waves interacted with each other, assumptions regarding a maximum wave height, and deficiencies in computer power limited the performance of the models. After experiments were performed in 1968, 1969, and 1973, wind input from the Earth's atmosphere was weighted more accurately in the predictions. A second generation of models was developed in the 1980s, but they could not realistically model swell nor depict wind-driven waves (also known as wind waves) caused by rapidly changing wind fields, such as those within tropical cyclones. This led to the development of a third generation of wave models from 1988 onward. [72] [73]
Within this third generation of models, the spectral wave transport equation is used to describe the change in wave spectrum over changing topography. It simulates wave generation, wave movement (propagation within a fluid), wave shoaling, refraction, energy transfer between waves, and wave dissipation. [74] Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface. [75]
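In deep water, the spectral balance underlying these models is often written as an energy balance of the form:

```latex
\frac{\partial E}{\partial t} + \nabla \cdot \left(\mathbf{c}_g E\right) = S_{in} + S_{nl} + S_{ds}
```

where \(E(f,\theta)\) is the wave energy spectrum, \(\mathbf{c}_g\) the group velocity, and the source terms on the right represent wind input, nonlinear wave-wave energy transfer, and dissipation. Third-generation models compute the nonlinear transfer term explicitly rather than prescribing the spectral shape, which is what distinguishes them from their predecessors.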
Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions near the ground, statistical corrections were developed to attempt to resolve this problem. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations, and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), [76] and were developed by the National Weather Service for their suite of weather forecasting models by 1976. [77] The United States Air Force developed its own set of MOS based upon their dynamical weather model by 1983. [78]
As proposed by Edward Lorenz in 1963, it is impossible for long-range forecasts—those made more than two weeks in advance—to predict the state of the atmosphere with any degree of skill, owing to the chaotic nature of the fluid dynamics equations involved. Extremely small errors in temperature, winds, or other initial inputs given to numerical models will amplify and double every five days. [79] Furthermore, existing observation networks have limited spatial and temporal resolution (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real-time, even with the use of supercomputers. [80] These uncertainties limit forecast model accuracy to about six days into the future. [81]
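The doubling arithmetic can be made concrete in a few lines (a toy calculation; the 0.1 degree initial error is a hypothetical figure):

```python
# Toy calculation of the error growth described above: an initial error
# doubles every 5 days, i.e. e(t) = e0 * 2**(t / 5).

def error_after(e0, days, doubling_period=5.0):
    """Error magnitude after `days` days, given initial error e0."""
    return e0 * 2.0 ** (days / doubling_period)

initial = 0.1   # assumed 0.1 degree C analysis error (hypothetical)
day5, day10, day14 = (error_after(initial, d) for d in (5, 10, 14))
```

After five days the error has doubled to 0.2, and by day 14 it is roughly seven times its initial value, illustrating why forecast skill fades near the two-week horizon.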
Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed a stochastic dynamic model that produced means and variances for the state of the atmosphere. [82] While these Monte Carlo simulations showed skill, in 1974 Cecil Leith revealed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere. [83] It was not until 1992 that ensemble forecasts began being prepared by the European Centre for Medium-Range Weather Forecasts, the Canadian Meteorological Centre, [84] and the National Centers for Environmental Prediction. The ECMWF model, the Ensemble Prediction System, [85] uses singular vectors to simulate the initial probability density, while the NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding. [86] [87]
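The idea behind these ensembles can be sketched with a toy chaotic system (the Lorenz 1963 equations with small random perturbations; this is an illustration of the principle, not the ECMWF or NCEP implementations):

```python
import numpy as np

# Illustrative ensemble sketch: integrate several slightly perturbed
# initial states of the Lorenz (1963) system and watch the spread
# between members grow, the quantity an ensemble uses to express
# forecast uncertainty.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) equations."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])
members = [base + 1e-6 * rng.standard_normal(3) for _ in range(10)]

spread0 = np.std([m[0] for m in members])   # initial spread in x
for _ in range(4000):                       # 20 time units of the model
    members = [lorenz_step(m) for m in members]
spread = np.std([m[0] for m in members])    # spread after divergence
```

Perturbations of a millionth grow into order-one differences between members; operational systems choose the perturbations more carefully (singular vectors, bred vectors) so that the spread samples the fastest-growing errors.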
Numerical climate models are mathematical models that can simulate the interactions of important drivers of climate. These drivers are the atmosphere, oceans, land surface and ice. Scientists use climate models to study the dynamics of the climate system and to make projections of future climate and of climate change. Climate models can also be qualitative models and contain narratives, largely descriptive, of possible futures.
Weather forecasting is the application of science and technology to predict the conditions of the atmosphere for a given location and time. People have attempted to predict the weather informally for millennia and formally since the 19th century.
A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources. These equations are the basis for computer programs used to simulate the Earth's atmosphere or oceans. Atmospheric and oceanic GCMs are key components along with sea ice and land-surface components.
Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs.
The Environmental Modeling Center (EMC) is a United States Government agency, which improves numerical weather, marine and climate predictions at the National Centers for Environmental Prediction (NCEP), through a broad program of research in data assimilation and modeling. In support of the NCEP operational forecasting mission, the EMC develops, improves and monitors data assimilation systems and models of the atmosphere, ocean and coupled system, using advanced methods developed internally as well as cooperatively with scientists from universities, NOAA laboratories and other government agencies, and the international scientific community.
Ensemble forecasting is a method used within numerical weather prediction. Instead of making a single forecast of the most likely weather, a set of forecasts is produced. This set of forecasts aims to give an indication of the range of possible future states of the atmosphere.
Data assimilation is a mathematical discipline that seeks to optimally combine theory with observations. There may be a number of different goals sought – for example, to determine the optimal state estimate of a system, to determine initial conditions for a numerical forecast model, to interpolate sparse observation data using knowledge of the system being observed, to set numerical parameters based on training a model from observed data. Depending on the goal, different solution methods may be used. Data assimilation is distinguished from other forms of machine learning, image analysis, and statistical methods in that it utilizes a dynamical model of the system being analyzed.
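The optimal-combination idea can be sketched as a scalar Kalman-style update (made-up numbers, not any operational assimilation system):

```python
# Scalar sketch of data assimilation: blend a model background value with
# an observation, weighting each by its error variance. The gain pulls
# the analysis toward whichever input is more trustworthy.

def analyze(background, obs, var_b, var_o):
    gain = var_b / (var_b + var_o)   # more weight to the obs when var_o is small
    return background + gain * (obs - background)

# Background forecast: 15.0 C; nearby observation: 17.0 C.
equal_trust = analyze(15.0, 17.0, var_b=1.0, var_o=1.0)   # halfway: 16.0
good_obs    = analyze(15.0, 17.0, var_b=1.0, var_o=0.25)  # pulled toward 17.0
```

Real systems apply the same principle to millions of coupled variables at once, with the dynamical model propagating information between observation locations.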
A tropical cyclone forecast model is a computer program that uses meteorological data to forecast aspects of the future state of tropical cyclones. There are three types of models: statistical, dynamical, and combined statistical-dynamical. Dynamical models utilize powerful supercomputers with sophisticated mathematical modeling software and meteorological data to calculate future weather conditions. Statistical models forecast the evolution of a tropical cyclone in a simpler manner, by extrapolating from historical datasets, and thus can be run quickly on platforms such as personal computers. Statistical-dynamical models use aspects of both types of forecasting. Four primary types of forecasts exist for tropical cyclones: track, intensity, storm surge, and rainfall. Dynamical models were not developed until the 1970s and the 1980s, with earlier efforts focused on the storm surge problem.
In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive, dynamical equations which govern atmospheric motions. It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes, heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize the equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth, or regional (limited-area), covering only part of the Earth. The different types of models run are thermotropic, barotropic, hydrostatic, and nonhydrostatic. Some of the model types make assumptions about the atmosphere which lengthen the time steps used and increase computational speed.
The National Atmospheric Release Advisory Center (NARAC) is located at the University of California's Lawrence Livermore National Laboratory. It is a national support and resource center for planning, real-time assessment, emergency response, and detailed studies of incidents involving a wide variety of hazards, including nuclear, radiological, chemical, biological, and natural emissions.
The Weather Research and Forecasting (WRF) Model is a numerical weather prediction (NWP) system designed to serve both atmospheric research and operational forecasting needs. NWP refers to the simulation and prediction of the atmosphere with a computer model, and WRF is a set of software for this. WRF features two dynamical (computational) cores, a data assimilation system, and a software architecture allowing for parallel computation and system extensibility. The model serves a wide range of meteorological applications across scales ranging from meters to thousands of kilometers.
Tropical cyclogenesis is the development and strengthening of a tropical cyclone in the atmosphere. The mechanisms through which tropical cyclogenesis occurs are distinctly different from those through which mid-latitude (extratropical) cyclogenesis occurs. Tropical cyclogenesis involves the development of a warm-core cyclone, due to significant convection in a favorable atmospheric environment.
A geodesic grid is a spatial grid based on a geodesic polyhedron or Goldberg polyhedron.
In weather forecasting, model output statistics (MOS) is a multiple linear regression technique in which predictands, often near-surface quantities, are related statistically to one or more predictors. The predictors are typically forecasts from a numerical weather prediction (NWP) model, climatic data, and, if applicable, recent surface observations. Thus, output from NWP models can be transformed by the MOS technique into sensible weather parameters that are familiar to a layperson.
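Since MOS is at heart a multiple linear regression, the idea can be sketched in a few lines. In the hedged example below, all predictor names and numbers are invented for illustration: a predictand (observed 2 m temperature) is regressed on two hypothetical NWP model predictors plus an intercept, and the fitted equation is then applied to a new model forecast.

```python
import numpy as np

# Hedged sketch of the MOS idea: relate an observed predictand to NWP model
# predictors by multiple linear regression. All values below are invented.
# Predictor columns: model 850 hPa temperature (C), model 10 m wind (m/s),
# and a constant column for the intercept.
X = np.array([
    [5.0, 3.0, 1.0],
    [7.0, 5.0, 1.0],
    [4.0, 2.0, 1.0],
    [9.0, 6.0, 1.0],
    [6.0, 4.0, 1.0],
])
y = np.array([12.0, 15.0, 10.5, 18.0, 13.5])  # observed 2 m temperatures (C)

# Ordinary least squares fit of the regression coefficients.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Apply the fitted equation to a new model forecast to get the MOS forecast.
new_forecast = np.array([8.0, 5.0, 1.0])
mos_temp = float(new_forecast @ coeffs)
print(round(mos_temp, 1))  # → 16.5
```

In operational practice the regression is trained on years of past model forecasts paired with station observations, which is how MOS corrects for systematic model biases at each forecast point.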
Teleconnection in atmospheric science refers to climate anomalies being related to each other at large distances. The most emblematic teleconnection is that linking sea-level pressure at Tahiti and Darwin, Australia, which defines the Southern Oscillation. Another well-known teleconnection links the sea-level pressure over Iceland with that over the Azores, traditionally defining the North Atlantic Oscillation (NAO).
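Statistically, a teleconnection shows up as a strong correlation between anomalies at distant stations. The sketch below uses synthetic monthly pressure anomalies (not real Tahiti/Darwin data) to show how the Southern Oscillation "seesaw" appears as a large negative correlation coefficient.

```python
import numpy as np

# Illustrative only: synthetic monthly sea-level-pressure anomalies for two
# distant stations, constructed so one tends to rise when the other falls,
# mimicking the Tahiti-Darwin seesaw of the Southern Oscillation.
rng = np.random.default_rng(0)
darwin = rng.normal(0.0, 1.0, 120)                   # 10 years of anomalies (hPa)
tahiti = -0.8 * darwin + rng.normal(0.0, 0.5, 120)   # opposite-signed response

# A strongly negative correlation is the statistical signature of the
# teleconnection between the two stations.
r = float(np.corrcoef(tahiti, darwin)[0, 1])
print(r < -0.5)
```

Indices such as the Southern Oscillation Index are built from exactly this kind of standardized pressure difference between the two stations.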
Dr. André Robert was a Canadian meteorologist who pioneered the modelling of the Earth's atmospheric circulation.
A chemical transport model (CTM) is a type of computer numerical model which typically simulates atmospheric chemistry and can be used for air pollution forecasting.
A cold-core low, also known as an upper level low or cold-core cyclone, is a cyclone aloft which has an associated cold pool of air residing at high altitude within the Earth's troposphere, without a frontal structure. It is a low pressure system that strengthens with height in accordance with the thermal wind relationship. If a weak surface circulation forms in response to such a feature at subtropical latitudes of the eastern north Pacific or north Indian oceans, it is called a subtropical cyclone. Cloud cover and rainfall mainly occur with these systems during the day.
A prognostic chart is a map displaying the likely weather forecast for a future time. Such charts are generated by atmospheric models as output from numerical weather prediction and contain a variety of information such as temperature, wind, precipitation and weather fronts. They can also indicate derived atmospheric fields such as vorticity, stability indices, or frontogenesis. Forecast errors need to be taken into account and can be determined either via absolute error, or by considering persistence and absolute error combined.
The Jule G. Charney Award is the American Meteorological Society's award granted to "individuals in recognition of highly significant research or development achievement in the atmospheric or hydrologic sciences". The prize was originally known as the Second Half Century Award, and was first awarded to mark the fiftieth anniversary of the society.