Predictive analytics


Predictive analytics is a form of business analytics that applies machine learning to generate predictive models for business applications. As such, it encompasses a variety of statistical techniques from predictive modeling and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events. [1] It represents a major subset of machine learning applications; in some contexts, it is synonymous with machine learning. [2]


In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision-making for candidate transactions. [3]

The defining functional effect of these technical approaches is that predictive analytics provides a predictive score (probability) for each individual (customer, employee, healthcare patient, product SKU, vehicle, component, machine, or other organizational unit) in order to determine, inform, or influence organizational processes that pertain across large numbers of individuals, such as in marketing, credit risk assessment, fraud detection, manufacturing, healthcare, and government operations including law enforcement.

Definition

Predictive analytics involves using statistical techniques and machine learning algorithms to analyze historical data and make forecasts about future events. The risks include data privacy issues, potential biases in data leading to inaccurate predictions, and over-reliance on automated systems. [4] [15] Predictive analytics statistical techniques include data modeling, machine learning, AI, deep learning algorithms and data mining. Often the unknown event of interest is in the future, but predictive analytics can be applied to any type of unknown, whether in the past, present or future: for example, identifying suspects after a crime has been committed, or detecting credit card fraud as it occurs. [5] The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting them to predict the unknown outcome. The accuracy and usability of results, however, depend greatly on the level of data analysis and the quality of assumptions. [1]

Predictive analytics is often defined as predicting at a more detailed level of granularity, i.e., generating predictive scores (probabilities) for each individual organizational element. This distinguishes it from forecasting. For example, "Predictive analytics—Technology that learns from experience (data) to predict the future behavior of individuals in order to drive better decisions." [2] In future industrial systems, the value of predictive analytics will be to predict and prevent potential issues to achieve near-zero breakdown, and further be integrated into prescriptive analytics for decision optimization. [6]

Analytical techniques

The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.

Machine learning

Machine learning can be defined as the ability of a machine to learn and then mimic human behavior that requires intelligence. This is accomplished through artificial intelligence, algorithms, and models. [7]

Autoregressive Integrated Moving Average (ARIMA)

ARIMA models are a common example of time series models. These models combine autoregression with differencing and moving-average terms, and can be fitted with standard statistical software that carries out most of the regression analysis and smoothing automatically. ARIMA models assume that, once differenced, the series has no overall trend and instead varies around its average with a constant amplitude, producing statistically similar time patterns. Through this, variables are analyzed and data is filtered in order to better understand and predict future values. [8] [9]
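As an illustration, the following sketch fits an ARIMA model to a synthetic series using Python's statsmodels library; the random-walk data and the (1, 1, 1) order are assumptions chosen for the example, not recommendations.

```python
# A minimal sketch of fitting an ARIMA model with statsmodels.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic series: a random walk whose first differences are stationary,
# the kind of pattern a differencing-based ARIMA model handles well.
series = np.cumsum(rng.normal(0, 1, 200))

model = ARIMA(series, order=(1, 1, 1))  # (p, d, q): AR order, differencing, MA order
fitted = model.fit()
forecast = fitted.forecast(steps=5)     # predict the next five values
print(forecast)
```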

Exponential smoothing models are closely related to ARIMA models; simple exponential smoothing, for instance, is equivalent to a particular ARIMA specification, ARIMA(0,1,1). Exponential smoothing takes into account the difference in importance between older and newer observations, as more recent data are more relevant for predicting future values. To accomplish this, exponentially decaying weights give newer observations a larger role in the calculations than older ones. [10]
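The recurrence behind simple exponential smoothing can be shown in a few lines; the sketch below is a minimal illustration in plain Python, with the smoothing factor alpha = 0.3 chosen arbitrarily.

```python
# A minimal sketch of simple exponential smoothing from the textbook recurrence.
def exponential_smoothing(values, alpha=0.3):
    """Return smoothed values; weights on past observations decay
    geometrically, so newer data counts more than older data."""
    smoothed = [values[0]]                 # initialize with the first observation
    for x in values[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

data = [3.0, 4.1, 3.8, 5.2, 4.9, 6.1]
print(exponential_smoothing(data))         # last value serves as a one-step forecast
```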

Time series models

Time series models are a subset of machine learning that utilize time series in order to understand and forecast data using past values. A time series is the sequence of a variable's value over equally spaced periods, such as years or quarters in business applications. [11] To accomplish this, the data must be smoothed; that is, the random variance of the data must be removed in order to reveal the underlying trends. There are multiple ways to accomplish this.

Moving average

Single moving average methods average a fixed number of the most recent data points rather than the entire data set, which decreases the error associated with taking a single overall average and yields a more accurate estimate of the series' current level. [12]

Centered moving average methods place each average computed by the single moving average at the midpoint of its window of data. Because an even-width window has no single middle observation, centering requires an extra averaging step in that case, so the method is more straightforward to apply with odd window widths than with even ones. [13]
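A short sketch can contrast the two approaches; the example below uses the pandas library, and the window width of 3 is an illustrative assumption.

```python
# A minimal sketch contrasting a trailing (single) moving average with a
# centered moving average, assuming the pandas library.
import pandas as pd

s = pd.Series([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])

trailing = s.rolling(window=3).mean()               # average of the 3 most recent points
centered = s.rolling(window=3, center=True).mean()  # average placed at the window midpoint

print(pd.DataFrame({"data": s, "trailing": trailing, "centered": centered}))
```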

Predictive modeling

Predictive modeling is a statistical technique used to predict future behavior. It utilizes predictive models to analyze the relationship between a specific unit in a given sample and one or more features of the unit. The objective of these models is to assess the possibility that a unit in another sample will display the same pattern. Predictive model solutions can be considered a type of data mining technology. These solutions analyze both historical and current data in order to predict potential future outcomes. [14]

Regardless of the methodology used, the process of creating predictive models generally involves the same steps (a sketch of the model creation and testing steps follows this list):

1. Determine the project objectives and desired outcomes, and translate them into predictive analytic objectives and tasks.
2. Analyze the source data to determine the most appropriate data and model-building approach (models are only as useful as the applicable data used to build them).
3. Select and transform the data needed to create the models.
4. Create and test the models to evaluate whether they are valid and will meet project goals and metrics.
5. Apply the models' results to appropriate business processes (identifying patterns in the data does not necessarily mean a business will understand how to capitalize on them).
6. Manage and maintain the models in order to standardize and improve performance (demand for model management will increase in order to meet new compliance regulations). [15]
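A minimal sketch of the model creation and testing steps, assuming scikit-learn and synthetic data; the logistic classifier and the 80/20 holdout split are illustrative choices, not a prescribed methodology.

```python
# A hedged sketch of creating and testing a predictive model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # selected and transformed source data (features)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # outcome to be predicted

# Hold out data to test whether the model meets the project's metric.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```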

Regression analysis

Generally, regression analysis uses structural data along with the past values of independent variables and the relationship between them and the dependent variable to form predictions. [8]

Linear regression

In linear regression, a plot is constructed with the previous values of the dependent variable plotted on the Y-axis and the independent variable being analyzed plotted on the X-axis. A statistical program then fits a regression line representing the relationship between the independent and dependent variables, which can be used to predict values of the dependent variable from the independent variable alone. The program also reports the line's slope-intercept equation, which includes an error term for the regression: the higher the value of the error term, the less precise the regression model is. To decrease the error term, other independent variables are introduced to the model, and similar analyses are performed on them. [8] [16] Additionally, multiple linear regression (MLR) can be employed to address relationships involving multiple independent variables, offering a more comprehensive modeling approach. [17]
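The following minimal sketch illustrates fitting a regression line and inspecting its error term, using numpy's least-squares polynomial fit on synthetic data.

```python
# A minimal sketch of fitting a regression line and reading off its
# slope-intercept equation; the data are synthetic and illustrative.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])     # independent variable (X-axis)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])     # dependent variable (Y-axis)

slope, intercept = np.polyfit(x, y, deg=1)  # fit y = slope * x + intercept
residuals = y - (slope * x + intercept)     # error term: smaller means a more precise model
print(f"y = {slope:.2f}x + {intercept:.2f}, residual variance = {residuals.var():.3f}")
```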

Applications

Analytical review and conditional expectations in auditing

An important aspect of auditing is analytical review, in which the reasonableness of reported account balances is assessed. Auditors accomplish this through predictive modeling, forming predictions called conditional expectations of the balances being audited using autoregressive integrated moving average (ARIMA) methods and general regression analysis methods, [8] specifically through the Statistical Technique for Analytical Review (STAR) methods. [18]

The ARIMA method for analytical review uses time-series analysis on past audited balances in order to create the conditional expectations. These conditional expectations are then compared to the actual balances reported on the audited account in order to determine how close the reported balances are to the expectations. If the reported balances are close to the expectations, the accounts are not audited further. If the reported balances are very different from the expectations, there is a higher possibility of a material accounting error and a further audit is conducted. [18]
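A hedged sketch of the comparison step follows; the conditional expectations are taken as given (for example, from an ARIMA model of past audited balances), and the 10% tolerance is an illustrative assumption rather than an auditing standard.

```python
# A hedged sketch of analytical review: conditional expectations are compared
# with reported balances, and large gaps trigger further audit.
def flag_for_audit(expected, reported, tolerance=0.10):
    """Return account indices whose reported balance deviates from the
    conditional expectation by more than the tolerance."""
    flagged = []
    for i, (e, r) in enumerate(zip(expected, reported)):
        if abs(r - e) > tolerance * abs(e):
            flagged.append(i)
    return flagged

expected_balances = [100_000, 250_000, 80_000]  # conditional expectations
reported_balances = [103_000, 310_000, 79_500]  # balances under audit
print(flag_for_audit(expected_balances, reported_balances))  # -> [1]
```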

Regression analysis methods are deployed in a similar way, except that the regression model used assumes the availability of only one independent variable. The materiality of the independent variable contributing to the audited account balances is determined using past account balances along with present structural data. [8] Materiality is the importance of an independent variable in its relationship to the dependent variable, [19] which in this case is the account balance. The most important independent variable is then used to create the conditional expectation and, as in the ARIMA method, the conditional expectation is compared to the reported account balance, with a decision made based on the closeness of the two balances. [8]

The STAR methods operate using regression analysis and comprise two approaches. The first is the STAR monthly balance approach, in which the conditional expectations and regression analysis are tied to a single month being audited. The other is the STAR annual balance approach, which operates on a larger scale by basing the conditional expectations and regression analysis on a year being audited. Aside from the difference in the period audited, both methods operate the same way, comparing expected and reported balances to determine which accounts to investigate further. [18]

Furthermore, the incorporation of analytical procedures into auditing standards underscores the increasing necessity for auditors to modify these methodologies to suit particular datasets, which reflects the ever-changing nature of financial examination. [20]

Business value

As technological advances create and store ever more data digitally, businesses look for ways to use this information to help generate profits. Predictive analytics can provide benefits to a wide range of businesses, including asset management firms, insurance companies, communication companies, and many other firms. Companies that use project management to achieve their goals can also benefit from predictive intelligence capabilities. In a study conducted by IDC, Dan Vesset and Henry D. Morris explain how an asset management firm used predictive analytics to develop a better marketing campaign. The firm moved from a mass marketing approach to a customer-centric approach: instead of sending the same offer to every customer, it personalized each offer, using predictive analytics to estimate the likelihood that a prospective customer would accept it. As a result of the campaign, the firm's acceptance rate roughly tripled. [21]

Technological advances in predictive analytics [22] have increased its value to firms. One advancement is more powerful computing, which lets predictive analytics produce forecasts on large data sets much faster. Increased computing power has also brought more data and applications, meaning a wider array of inputs for predictive analytics. Another advance is more user-friendly interfaces, which lower the barrier to entry and reduce the training employees need to use the software and applications effectively. Due to these advancements, many more corporations are adopting predictive analytics and seeing the benefits in employee efficiency and effectiveness, as well as profits. [23] The percentage of projects that fail is fairly high: roughly 70% of all projects fail to deliver what was promised to customers. The implementation of a management process, however, has been shown to reduce the failure rate to 20% or below. [24]

Cash-flow prediction

ARIMA univariate and multivariate models can be used in forecasting a company's future cash flows, with its equations and calculations based on the past values of certain factors contributing to cash flows. Using time-series analysis, the values of these factors can be analyzed and extrapolated to predict the future cash flows for a company. For the univariate models, past values of cash flows are the only factor used in the prediction. Meanwhile the multivariate models use multiple factors related to accrual data, such as operating income before depreciation. [25]

Another model used in predicting cash-flows was developed in 1998 and is known as the Dechow, Kothari, and Watts model, or DKW (1998). DKW (1998) uses regression analysis in order to determine the relationship between multiple variables and cash flows. Through this method, the model found that cash-flow changes and accruals are negatively related, specifically through current earnings, and using this relationship predicts the cash flows for the next period. The DKW (1998) model derives this relationship through the relationships of accruals and cash flows to accounts payable and receivable, along with inventory. [26]
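The following is a rough sketch of a DKW-style regression on synthetic data; the variable set and coefficients are illustrative assumptions and do not reproduce the published model's exact specification.

```python
# A hedged sketch: next-period cash flow is regressed on current earnings
# and accruals via ordinary least squares, on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 200
earnings = rng.normal(100, 10, n)
accruals = rng.normal(0, 5, n)
# Simulate the documented negative relation between accruals and cash flows.
next_cash_flow = 0.8 * earnings - 0.6 * accruals + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), earnings, accruals])  # intercept + regressors
coef, *_ = np.linalg.lstsq(X, next_cash_flow, rcond=None)
print("intercept, earnings, accruals coefficients:", np.round(coef, 2))
```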

Child protection

Some child welfare agencies have started using predictive analytics to flag high risk cases. [27] For example, in Hillsborough County, Florida, the child welfare agency's use of a predictive modeling tool has prevented abuse-related child deaths in the target population. [28]

AI programs can also predict the outcomes of judicial decisions, and such programs can serve as assistive tools for professionals in the legal field. [29] [30]

Portfolio, product or economy-level prediction

Often the focus of analysis is not the consumer but the product, portfolio, firm, industry, or even the economy. For example, a retailer might be interested in predicting store-level demand for inventory management purposes, and the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using the time series techniques discussed above. They can also be addressed via machine learning approaches which transform the original time series into a feature vector space, where the learning algorithm finds patterns that have predictive power. [31] [32]
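The transformation into a feature vector space can be sketched briefly; in the illustration below, a synthetic monthly demand series is recast as three-lag feature vectors for a linear learner (both the lag count and the learner are assumptions made for the example).

```python
# A hedged sketch of turning a time series into lagged feature vectors
# so a generic learner can find predictive patterns.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
demand = 50 + 10 * np.sin(np.arange(120) * 2 * np.pi / 12) + rng.normal(0, 2, 120)

lags = 3
# Each row holds the previous `lags` observations; the target is the next value.
X = np.column_stack([demand[i:len(demand) - lags + i] for i in range(lags)])
y = demand[lags:]

model = LinearRegression().fit(X, y)
print("next-period prediction:", model.predict(demand[-lags:].reshape(1, -1)))
```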

Underwriting

Many businesses have to account for risk exposure due to their different services and determine the costs needed to cover the risk. Predictive analytics can help underwrite these quantities by predicting the chances of illness, default, bankruptcy, and other outcomes. It can streamline customer acquisition by predicting a customer's future risk behavior from application-level data, and credit scores built on predictive analytics have reduced the time it takes for loan approvals, especially in the mortgage market. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default, and it can also be used to mitigate moral hazard and prevent accidents from occurring. [33]
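As a hedged illustration of application-level risk scoring, the sketch below fits a logistic model that outputs a default probability for each applicant; the two features and the synthetic data are assumptions made for the example.

```python
# A hedged sketch of underwriting-style risk scoring with a logistic model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
# Columns: debt-to-income ratio, years of credit history (illustrative features).
X = np.column_stack([rng.uniform(0, 1, 300), rng.uniform(0, 20, 300)])
y = (X[:, 0] * 2 - X[:, 1] * 0.05 + rng.normal(0, 0.3, 300) > 0.7).astype(int)

scorer = LogisticRegression().fit(X, y)
applicant = np.array([[0.45, 6.0]])  # one new application
print("estimated default probability:", scorer.predict_proba(applicant)[0, 1])
```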

Policing

Police agencies now utilize proactive strategies for crime prevention. Predictive analytics, which applies statistical tools to forecast crime patterns, provides new ways for police agencies to mobilize resources and reduce levels of crime. [34] With predictive analytics of crime data, police can better allocate limited resources and manpower to prevent more crimes from happening. Directed patrol or problem-solving can be employed to protect crime hot spots, which exhibit crime densities much higher than the city average. [35]
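A minimal sketch of hot-spot identification on toy data follows; the grid cells and the twice-the-average threshold are illustrative assumptions.

```python
# A hedged sketch of hot-spot identification: incidents are counted per
# grid cell and cells far above the citywide average are flagged.
from collections import Counter

incidents = [(1, 1), (1, 1), (1, 2), (1, 1), (4, 5),
             (1, 1), (3, 3), (1, 2), (1, 1)]
counts = Counter(incidents)                        # incidents per grid cell
mean_density = sum(counts.values()) / len(counts)

hot_spots = [cell for cell, c in counts.items() if c > 2 * mean_density]
print("hot spots:", hot_spots)                     # -> [(1, 1)]
```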

Sports

Several firms have emerged specializing in predictive analytics in the field of professional sports, for both teams and individuals. [36] Predicting human performance involves wide variance, since many factors can change after predictions are made, including injuries, officiating, coaching decisions, weather, and more; even so, predictive analytics is useful for projecting long-term trends and performance. Much of the field grew out of the Moneyball approach associated with Billy Beane near the turn of the century, and most professional sports teams now employ their own analytics departments.

See also

- Supervised learning
- Overfitting
- Prediction
- Forecasting
- Chemometrics
- Decision tree learning
- Autoregressive–moving-average (ARMA) model
- Regression analysis
- Imputation (statistics)
- Granger causality
- Predictive learning
- Predictive modelling
- Customer attrition
- Regression validation
- Demand forecasting
- Data analysis for fraud detection
- JASP
- Linear regression
- Outline of machine learning
- Binary regression

References

  1. 1 2 "To predict or not to Predict". mccoy-partners.com. Retrieved 2022-05-05.
  2. 1 2 Siegel, Eric (2013). Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die (1st ed.). Wiley. ISBN   978-1-1183-5685-2.
  3. Coker, Frank (2014). Pulse: Understanding the Vital Signs of Your Business (1st ed.). Bellevue, WA: Ambient Light Publishing. pp. 30, 39, 42, more. ISBN   978-0-9893086-0-1.
  4. Singh, Mayurendra Pratap. "Predictive analytics". TheCodeWork. Retrieved 4 November 2024.
  5. Finlay, Steven (2014). Predictive Analytics, Data Mining and Big Data. Myths, Misconceptions and Methods (1st ed.). Basingstoke: Palgrave Macmillan. p. 237. ISBN   978-1137379276.
  6. Spalek, Seweryn (2019). Data Analytics in Project Management. Taylor & Francis Group, LLC.
  7. "Machine learning, explained". MIT Sloan. Retrieved 2022-05-06.
  8. 1 2 3 4 5 6 Kinney, William R. (1978). "ARIMA and Regression in Analytical Review: An Empirical Test". The Accounting Review. 53 (1): 48–60. ISSN   0001-4826. JSTOR   245725.
  9. "Introduction to ARIMA models". people.duke.edu. Retrieved 2022-05-06.
  10. "6.4.3. What is Exponential Smoothing?". www.itl.nist.gov. Retrieved 2022-05-06.
  11. "6.4.1. Definitions, Applications and Techniques". www.itl.nist.gov. Retrieved 2022-05-06.
  12. "6.4.2.1. Single Moving Average". www.itl.nist.gov. Retrieved 2022-05-06.
  13. "6.4.2.2. Centered Moving Average". www.itl.nist.gov. Retrieved 2022-05-06.
  14. McCarthy, Richard; McCarthy, Mary; Ceccucci, Wendy (2021). Applying Predictive Analytics: Finding Value in Data. Springer.
  15. Eckerson, Wayne, W (2007). "Predictive Analytics. Extending the Value of Your Data Warehousing Investment" (PDF).{{cite web}}: CS1 maint: multiple names: authors list (link)
  16. "Linear Regression". www.stat.yale.edu. Retrieved 2022-05-06.
  17. Li, Meng; Liu, Jiqiang; Yang, Yeping (2023-10-14). "Financial Data Quality Evaluation Method Based on Multiple Linear Regression". Future Internet. 15 (10): 338. doi: 10.3390/fi15100338 . ISSN   1999-5903.
  18. 1 2 3 Kinney, William R.; Salamon, Gerald L. (1982). "Regression Analysis in Auditing: A Comparison of Alternative Investigation Rules". Journal of Accounting Research. 20 (2): 350–366. doi:10.2307/2490745. ISSN   0021-8456. JSTOR   2490745.
  19. PricewaterhouseCoopers. "Materiality in audits". PwC. Retrieved 2022-05-03.
  20. Wilson, Arlette C. (1991). "Use of Regression Models as Analytical Procedures: An Empirical Investigation of Effect of Data Dispersion on Auditor Decisions". Journal of Accounting, Auditing & Finance. 6 (3): 365–381. doi:10.1177/0148558X9100600307. ISSN   0148-558X. S2CID   154468768.
  21. Vesset, Dan; Morris, Henry D. (June 2011). "The Business Value of Predictive Analytics" (PDF). White Paper: 1–3.
  22. Clay, Halton. "Predictive Analytics: Definition, Model Types, and Uses". Investopedia. Retrieved 8 January 2024.
  23. Stone, Paul (April 2007). "Introducing Predictive Analytics: Opportunities". All Days. doi:10.2118/106865-MS.
  24. Team Stage (29 May 2021). "Project Management Statistics: Trends and Common Mistakes in 2023". TeamStage. Retrieved 8 January 2024.
  25. Lorek, Kenneth S.; Willinger, G. Lee (1996). "A Multivariate Time-Series Prediction Model for Cash-Flow Data". The Accounting Review. 71 (1): 81–102. ISSN   0001-4826. JSTOR   248356.
  26. Barth, Mary E.; Cram, Donald P.; Nelson, Karen K. (2001). "Accruals and the Prediction of Future Cash Flows". The Accounting Review. 76 (1): 27–58. doi:10.2308/accr.2001.76.1.27. ISSN   0001-4826. JSTOR   3068843.
  27. Reform, Fostering (2016-02-03). "New Strategies Long Overdue on Measuring Child Welfare Risk". The Imprint. Retrieved 2022-05-03.
  28. "Within Our Reach: A National Strategy to Eliminate Child Abuse and Neglect Fatalities" (PDF). Commission to Eliminate Child Abuse and Neglect Fatalities. 2016.
  29. Aletras, Nikolaos; Tsarapatsanis, Dimitrios; Preoţiuc-Pietro, Daniel; Lampos, Vasileios (2016). "Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective". PeerJ Computer Science . 2: e93. doi: 10.7717/peerj-cs.93 . S2CID   7630289.
  30. UCL (2016-10-24). "AI predicts outcomes of human rights trials". UCL News. Retrieved 2022-05-03.
  31. Dhar, Vasant (May 6, 2011). "Prediction in financial markets: The case for small disjuncts". ACM Transactions on Intelligent Systems and Technology. 2 (3): 1–22. doi:10.1145/1961189.1961191. ISSN   2157-6904. S2CID   11213278.
  32. Dhar, Vasant; Chou, Dashin; Provost, Foster (2000-10-01). "Discovering Interesting Patterns for Investment Decision Making with GLOWER ◯-A Genetic Learner Overlaid with Entropy Reduction". Data Mining and Knowledge Discovery. 4 (4): 251–280. doi:10.1023/A:1009848126475. ISSN   1384-5810. S2CID   1982544.
  33. Montserrat, Guillen; Cevolini, Alberto (November 2021). "Using Risk Analytics to Prevent Accidents Before They Occur – The Future of Insurance". Journal of Financial Transformation.
  34. Towers, Sherry; Chen, Siqiao; Malik, Abish; Ebert, David (2018-10-24). Eisenbarth, Hedwig (ed.). "Factors influencing temporal patterns in crime in a large American city: A predictive analytics perspective". PLOS ONE. 13 (10): e0205151. Bibcode:2018PLoSO..1305151T. doi: 10.1371/journal.pone.0205151 . ISSN   1932-6203. PMC   6200217 . PMID   30356321.
  35. Fitzpatrick, Dylan J.; Gorr, Wilpen L.; Neill, Daniel B. (2019-01-13). "Keeping Score: Predictive Analytics in Policing". Annual Review of Criminology. 2 (1): 473–491. doi:10.1146/annurev-criminol-011518-024534. ISSN   2572-4568. S2CID   169389590.
  36. "Free AI Sports Picks & Predictions for Today's Games". LEANS.AI. Retrieved 2023-07-08.

Further reading