All telecommunications service providers perform forecasting calculations to assist them in planning their networks. [1] Accurate forecasting helps operators to make key investment decisions relating to product development and introduction, advertising, and pricing well in advance of product launch, helping to ensure that the company will make a profit on a new venture and that capital is invested wisely. [2]
Forecasting can be conducted for many purposes, so it is important that the reason for performing the calculation is clearly defined and understood. Some common reasons for forecasting include: [2]
Knowing the purpose of the forecast will help to answer additional questions such as the following: [2]
When forecasting it is important to understand which factors may influence the calculation, and to what extent. A list of some common factors can be seen below: [2]
Before forecasting is performed, the data being used must be "prepared". If the data contains errors, then the forecast result will be equally flawed. It is therefore vital that all anomalous data be removed, a procedure known as data "scrubbing". [2] Scrubbing data involves removing data points known as "outliers": data that lie outside the normal pattern. Outliers are usually caused by anomalous, often unique events and so are unlikely to recur. Removing them improves data integrity and increases the accuracy of the forecast.
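As an illustration, the following minimal Python sketch applies one common scrubbing rule, Tukey's interquartile-range (IQR) test; the call-volume figures and the 1.5 × IQR threshold are illustrative assumptions, not part of the source.

```python
import statistics

def scrub_outliers(data, k=1.5):
    """Remove points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule)."""
    q1, _, q3 = statistics.quantiles(data, n=4)  # sample quartiles
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if low <= x <= high]

# Hypothetical monthly call volumes with one anomalous spike
volumes = [120, 125, 118, 130, 122, 610, 127, 124]
print(scrub_outliers(volumes))  # the 610 outlier is dropped
```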
There are many different methods used to conduct forecasting. They can be divided into different groups based on the theories according to which they were developed: [2]
Judgment-based methods rely on the opinions and knowledge of people who have considerable experience in the area in which the forecast is being conducted. There are two main judgment-based methods: [2]
Survey methods are based on the opinions of customers and are thus reasonably accurate if performed correctly. In performing a survey, the survey's target group needs to be identified. [3] This can be achieved by considering why the forecast is being conducted in the first place. Once the target group has been identified, a sample must be chosen. The sample is a subset of the target group and must be chosen so that it accurately reflects the group as a whole. [3] The survey then poses a series of questions to the sample group, and their answers are recorded.
The recorded answers must then be analyzed using statistical and analytical methods. Computing the average opinion and the variation about that mean are examples of the statistical techniques that can be used. [3] The results of the analysis should then be checked using alternative forecasting methods before being published. [3] It must be kept in mind that this method is only accurate if the sample is a balanced and accurate subset of the target group and if the sample group has answered the questions accurately. [3]
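The average opinion and the variation about that mean can be computed with the standard library, as in this minimal sketch; the 1-to-5 response scale and the figures are invented for illustration.

```python
import statistics

# Hypothetical survey responses: willingness to buy, on a 1-5 scale
responses = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

mean = statistics.mean(responses)    # average opinion
stdev = statistics.stdev(responses)  # variation about that mean
print(f"mean opinion: {mean:.2f}, standard deviation: {stdev:.2f}")
```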
Time series methods are based on measurements taken of events on a periodic basis. [2] These methods use such data to develop models which can then be used to extrapolate into the future, thereby generating the forecast. Each model operates according to a different set of assumptions and is designed for a different purpose. Examples of time series methods are: [2]
Analogous methods involve finding similarities between foreign events and the events being studied. The foreign events are usually selected at a point when they are more "mature" than the current events. No foreign event will perfectly mirror current events, and this must be kept in mind so that any necessary corrections can be made. By examining the foreign, more mature set of events, the future of the current events can be forecast. [2]
Analogous methods can be divided into two groups, namely: [2]
Causal models are the most accurate form of forecasting, and also the most complex. They involve creating a complete model of the events being forecast. The model must include all possible variables and must be able to predict every possible outcome.
Causal models are often so complex that they can only be created on computers. They are developed using data from a set of events, and the model is only as accurate as the data used to develop it. [2]
Combination forecasts combine the methods discussed above. The advantage is that in most cases accuracy is increased; however, a researcher must be careful that the disadvantages of the individual methods do not combine to produce compound errors in the forecast. Examples of combination forecasts include "Integration of Judgment and Quantitative Forecasts" and "Simple and Weighted Averages", the latter of which is sketched below. [2]
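A minimal sketch of simple and weighted averaging of point forecasts; the demand figures and the weights are hypothetical.

```python
def combine_forecasts(forecasts, weights=None):
    """Combine point forecasts by a simple or weighted average."""
    if weights is None:  # simple average
        return sum(forecasts) / len(forecasts)
    assert len(weights) == len(forecasts)
    return sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)

# Hypothetical outputs of three independent methods for next year's demand
methods = [10_400, 11_100, 9_800]  # e.g. survey, time series, analogy
print(combine_forecasts(methods))                    # simple average
print(combine_forecasts(methods, [0.5, 0.3, 0.2]))   # weight the survey most
```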
It is difficult to determine the accuracy of any forecast, since it is an attempt to predict events that have not yet occurred. To help improve and test forecast accuracy, researchers use many different checking methods. A simple check involves using several different forecasting methods and comparing the results to see if they are roughly equal. Another involves statistically calculating the errors in the forecasting calculation and expressing them in terms of the root mean squared error, thereby providing an indication of the overall error in the method. A sensitivity analysis can also be useful, as it determines what would happen if some of the original data upon which the forecast was developed turned out to be wrong. Determining forecast accuracy, like forecasting itself, can never be performed with certainty, so it is advisable to ensure that input data are measured and obtained as accurately as possible, that the most appropriate forecasting methods are selected, and that the forecasting process is conducted as rigorously as possible. [2]
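The root mean squared error mentioned above can be computed directly; this sketch assumes a hypothetical hold-out set of observed values alongside the forecast values the model produced for the same periods.

```python
import math

def rmse(actual, predicted):
    """Root mean squared error between observed values and a forecast."""
    errors = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    return math.sqrt(sum(errors) / len(errors))

# Hypothetical hold-out data: observed demand vs. what the model forecast
observed = [100, 105, 98, 110]
forecast = [102, 101, 100, 108]
print(f"RMSE: {rmse(observed, forecast):.2f}")
```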
Statistics is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.
Weather forecasting is the application of science and technology to predict the conditions of the atmosphere for a given location and time. People have attempted to predict the weather informally for millennia and formally since the 19th century. Weather forecasts are made by collecting quantitative data about the current state of the atmosphere, land, and ocean and using meteorology to project how the atmosphere will change at a given place.
A prediction, or forecast, is a statement about a future event. Predictions are often, but not always, based upon experience or knowledge. There is no universal agreement about the exact difference between prediction and "estimation"; different authors and disciplines ascribe different connotations.
Forecasting is the process of making predictions based on past and present data and most commonly by analysis of trends. A commonplace example might be estimation of some variable of interest at some specified future date. Prediction is a similar, but more general term. Both might refer to formal statistical methods employing time series, cross-sectional or longitudinal data, or alternatively to less formal judgmental methods. Usage can differ between areas of application: for example, in hydrology the terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain specific future times, while the term "prediction" is used for more general estimates, such as the number of times floods will occur over a long period.
The Delphi method or Delphi technique is a structured communication technique or method, originally developed as a systematic, interactive forecasting method which relies on a panel of experts. The technique can also be adapted for use in face-to-face meetings, and is then called mini-Delphi or Estimate-Talk-Estimate (ETE). Delphi has been widely used for business forecasting and has certain advantages over another structured forecasting approach, prediction markets.
Computer simulation is the process of mathematical modeling, performed on a computer, which is designed to predict the behaviour of, or the outcome of, a real-world or physical system. The reliability of some mathematical models can be determined by comparing their results to the real-world outcomes they aim to predict. Computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics, astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.
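As a toy example of simulation used for prediction, this sketch runs a Monte Carlo experiment to estimate the probability that demand exceeds capacity; the normal-demand assumption and all figures are invented for illustration.

```python
import random

# Toy Monte Carlo simulation: estimate the chance that yearly demand
# exceeds capacity, assuming demand ~ Normal(MU, SIGMA). All figures
# are illustrative assumptions, not from any real system.
MU, SIGMA, CAPACITY, RUNS = 10_000, 1_500, 12_000, 100_000

random.seed(42)  # reproducible runs
exceedances = sum(random.gauss(MU, SIGMA) > CAPACITY for _ in range(RUNS))
print(f"P(demand > capacity) ~ {exceedances / RUNS:.3f}")
```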
Cross-validation, sometimes called rotation estimation or out-of-sample testing, is any of various similar model validation techniques for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice. In a prediction problem, a model is usually given a dataset of known data on which training is run, and a dataset of unknown data against which the model is tested. The goal of cross-validation is to test the model's ability to predict new data that was not used in estimating it, in order to flag problems like overfitting or selection bias and to give an insight on how the model will generalize to an independent dataset.
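A minimal sketch of k-fold cross-validation using a deliberately trivial "predict the training mean" model, so the splitting logic stays self-contained; the data and fold count are arbitrary assumptions.

```python
def k_fold_splits(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in set(test)]
        yield train, test

# Score a trivial "predict the training mean" model with 4-fold CV
data = [3.1, 2.9, 3.4, 3.0, 3.3, 2.8, 3.2, 3.1]
errors = []
for train, test in k_fold_splits(len(data), k=4):
    prediction = sum(data[i] for i in train) / len(train)  # fit on train
    errors += [abs(data[i] - prediction) for i in test]    # test on held-out
print(f"mean absolute CV error: {sum(errors) / len(errors):.3f}")
```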
In mathematics, extrapolation is a type of estimation, beyond the original observation range, of the value of a variable on the basis of its relationship with another variable. It is similar to interpolation, which produces estimates between known observations, but extrapolation is subject to greater uncertainty and a higher risk of producing meaningless results. Extrapolation may also mean extension of a method, assuming similar methods will be applicable. Extrapolation may also apply to human experience to project, extend, or expand known experience into an area not known or previously experienced so as to arrive at a knowledge of the unknown. The extrapolation method can be applied in the interior reconstruction problem.
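The simplest case is linear extrapolation, sketched below: the line through two known observations is extended beyond the observed range; the numbers are illustrative, and the warning above about greater uncertainty applies.

```python
def extrapolate_linear(x0, y0, x1, y1, x):
    """Extend the line through (x0, y0) and (x1, y1) to a point x
    outside the observed range."""
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (x - x0)

# Observations at years 1 and 2; extrapolate (with caution) to year 5
print(extrapolate_linear(1, 100, 2, 130, 5))  # -> 220.0
```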
Scenario planning, scenario thinking, scenario analysis, scenario prediction and the scenario method all describe a strategic planning method that some organizations use to make flexible long-term plans. It is in large part an adaptation and generalization of classic methods used by military intelligence.
In statistical modeling, regression analysis is a set of statistical processes for estimating the relationships between a dependent variable and one or more independent variables. The most common form of regression analysis is linear regression, in which one finds the line that most closely fits the data according to a specific mathematical criterion. For example, the method of ordinary least squares computes the unique line that minimizes the sum of squared differences between the true data and that line. For specific mathematical reasons, this allows the researcher to estimate the conditional expectation of the dependent variable when the independent variables take on a given set of values. Less common forms of regression use slightly different procedures to estimate alternative location parameters or estimate the conditional expectation across a broader collection of non-linear models.
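For the simple one-variable case, the ordinary least squares line has a closed form, sketched here from first principles; the sample data are invented and chosen to lie near y = 2x.

```python
def ols_fit(xs, ys):
    """Ordinary least squares for y = a + b*x: minimizes the sum of
    squared vertical differences between the data and the line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 5.9, 8.2, 9.9]
a, b = ols_fit(xs, ys)
print(f"y = {a:.2f} + {b:.2f}x")  # close to y = 0 + 2x
```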
Robust statistics are statistics with good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normal. Robust statistical methods have been developed for many common problems, such as estimating location, scale, and regression parameters. One motivation is to produce statistical methods that are not unduly affected by outliers. Another motivation is to provide methods with good performance when there are small departures from a parametric distribution. For example, robust methods work well for mixtures of two normal distributions with different standard deviations; under this model, non-robust methods like a t-test work poorly.
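A minimal illustration of robustness to outliers: the median, a robust estimator of location, is barely moved by a single gross error that drags the mean far from the bulk of the data. The measurements are invented.

```python
import statistics

clean = [9.8, 10.1, 10.0, 9.9, 10.2]
dirty = clean + [100.0]  # a single gross error

print(statistics.mean(clean), statistics.median(clean))  # 10.0  10.0
print(statistics.mean(dirty), statistics.median(dirty))  # 25.0  10.05
```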
Futures techniques, used in the multi-disciplinary field of futures studies by futurists in the Americas and Australasia (and in futurology by futurologists in the EU), include a diverse range of forecasting methods, including anticipatory thinking, backcasting, simulation, and visioning. Some of the anticipatory methods include the Delphi method, causal layered analysis, environmental scanning, morphological analysis, and scenario planning.
Technology forecasting attempts to predict the future characteristics of useful technological machines, procedures or techniques. Researchers create technology forecasts based on past experience and current technological developments. Like other forecasts, technology forecasting can help both public and private organizations to make smart decisions. By analyzing future opportunities and threats, the forecaster can improve decisions in order to achieve maximum benefits. Today, most countries are experiencing huge social and economic changes, which rely heavily on technology development. By analyzing these changes, government and economic institutions can make plans for future developments. However, not all historical data can be used for technology forecasting; forecasters also need to adopt advanced technology and quantitative modeling from experts' research and conclusions.
TAMDAR is a weather monitoring system that consists of an in situ atmospheric sensor mounted on commercial aircraft for data gathering. It collects information similar to that collected by radiosondes carried aloft by weather balloons. It was developed by AirDat LLC, which was acquired by Panasonic Avionics Corporation in April 2013 and operated until October 2018 under the name Panasonic Weather Solutions. It is now owned by FLYHT Aerospace Solutions Ltd.
The quantitative precipitation forecast (QPF) is the expected amount of melted precipitation accumulated over a specified time period over a specified area. A QPF will be created when precipitation amounts reaching a minimum threshold are expected during the forecast's valid period. Valid periods of precipitation forecasts are normally synoptic hours such as 0000, 0600, 1200 and 1800 GMT. Terrain is considered in QPFs by use of topography or based upon climatological precipitation patterns from observations with fine detail. Starting in the mid-to-late 1990s, QPFs were used within hydrologic forecast models to simulate impact to rivers throughout the United States. Forecast models show significant sensitivity to humidity levels within the planetary boundary layer, or the lowest levels of the atmosphere, which decreases with height. A QPF can be generated on a quantitative basis (forecasting amounts) or a qualitative basis (forecasting the probability of a specific amount). Radar imagery forecasting techniques show higher skill than model forecasts within 6 to 7 hours of the time of the radar image. The forecasts can be verified through use of rain gauge measurements, weather radar estimates, or a combination of both. Various skill scores can be determined to measure the value of the rainfall forecast.
Demand forecasting is a field of predictive analytics which tries to understand and predict customer demand to optimize supply decisions by corporate supply chain and business management. Demand forecasting involves quantitative methods such as the use of data, and especially historical sales data, as well as statistical techniques from test markets. Demand forecasting may be used in production planning, inventory management, and at times in assessing future capacity requirements, or in making decisions on whether to enter a new market.
Wind resource assessment is the process by which wind power developers estimate the future energy production of a wind farm. Accurate wind resource assessments are crucial to the successful development of wind farms.
Cross-impact analysis is a methodology developed by Theodore Gordon and Olaf Helmer in 1966 to help determine how relationships between events would impact resulting events and reduce uncertainty in the future. The Central Intelligence Agency (CIA) became interested in the methodology in the late 1960s and early 1970s as an analytic technique for predicting how different factors and variables would impact future decisions. In the mid-1970s, futurists began to use the methodology in larger numbers as a means to predict the probability of specific events and determine how related events impacted one another. By 2006, cross-impact analysis matured into a number of related methodologies with uses for businesses and communities as well as futurists and intelligence analysts.
The Makridakis Competitions are a series of open competitions organized by teams led by forecasting researcher Spyros Makridakis and intended to evaluate and compare the accuracy of different forecasting methods.
Convenience sampling is a type of non-probability sampling that involves the sample being drawn from that part of the population that is close to hand. This type of sampling is most useful for pilot testing.
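To make the contrast concrete, this sketch compares a convenience sample (whoever is "close to hand") with a probability sample drawn uniformly at random; the population of 100 numbered individuals is a hypothetical stand-in for a real target group.

```python
import random

population = list(range(1, 101))  # hypothetical target group of 100 people

convenience = population[:10]                 # the first ten "close to hand"
probability = random.sample(population, 10)   # every member equally likely

print(convenience)   # systematically unrepresentative of the whole group
print(probability)   # a probability sample, for comparison
```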