Jurimetrics

Jurimetrics is the application of quantitative methods, especially probability and statistics, to law. [1] In the United States, the journal Jurimetrics is published by the American Bar Association and Arizona State University. [2] The Journal of Empirical Legal Studies is another publication that emphasizes the statistical analysis of law.

The term was coined in 1949 by Lee Loevinger in his article "Jurimetrics: The Next Step Forward". [1] [3] Showing the influence of Oliver Wendell Holmes Jr., Loevinger quoted [4] Holmes' celebrated phrase that:

"For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics." [5]

The first work on this topic is attributed to Nicolaus I Bernoulli in his doctoral dissertation De Usu Artis Conjectandi in Jure, written in 1709.

Relation to law and economics

The difference between jurimetrics and law and economics is that jurimetrics investigates legal questions from a probabilistic/statistical point of view, while law and economics addresses legal questions using standard microeconomic analysis. Specifically, jurimetrics uncovers patterns in judicial decision-making and uses them to identify potential biases in the judgments that are handed down. A synthesis of these fields is possible through the use of econometrics (statistics for economic analysis) and other quantitative methods to answer relevant legal matters. As an example, the Columbia University scholar Edgardo Buscaglia published several peer-reviewed articles using a joint jurimetrics and law and economics approach. [6] [7]

Applications

Gender quotas on corporate boards

In 2018, California's legislature passed Senate Bill 826, which requires all publicly held corporations based in the state to have a minimum number of women on their board of directors. [37] [38] Boards with five or fewer members must have at least two women, while boards with six or more members must have at least three women.

Using the binomial distribution, we may compute the probability of violating the rule laid out in Senate Bill 826 by chance, as a function of the number of board members. The probability mass function for the binomial distribution is:

$$P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$$

where $P(X = k)$ is the probability of getting $k$ successes in $n$ trials, and $\binom{n}{k}$ is the binomial coefficient. For this computation, $p$ is the probability that a person qualified for board service is female, $k$ is the number of female board members, and $n$ is the number of board seats. We will assume that $p = 0.5$.

Depending on the number of board members, we compute the cumulative probability of falling below the required number of women:

$$P(X < 2) = \sum_{k=0}^{1} \binom{n}{k} p^k (1-p)^{n-k} \qquad (n \leq 5)$$

$$P(X < 3) = \sum_{k=0}^{2} \binom{n}{k} p^k (1-p)^{n-k} \qquad (n \geq 6)$$

With these formulas, we are able to compute the probability of violating Senate Bill 826 by chance:

Probability of Violation by Chance (# of board members)

Board members:  3     4     5     6     7     8     9     10    11    12
Probability:    0.50  0.31  0.19  0.34  0.23  0.14  0.09  0.05  0.03  0.02
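A minimal sketch (assuming Python and its standard library; not part of the original analysis) that reproduces the table above. The function name p_violation is an illustrative choice; rerunning it with p_female = 0.40 reproduces the lower-base-rate table later in this section.

```python
# Hypothetical sketch of the Senate Bill 826 computation described above: the probability
# that a board chosen by chance fails the quota, i.e. has fewer than 2 women (boards of 5 or
# fewer seats) or fewer than 3 women (boards of 6 or more seats).
from math import comb

def p_violation(n_seats: int, p_female: float = 0.5) -> float:
    """Binomial probability of having too few women on an n_seats board."""
    required = 2 if n_seats <= 5 else 3
    return sum(comb(n_seats, k) * p_female**k * (1 - p_female)**(n_seats - k)
               for k in range(required))

# Reproduces the table above (p = 0.5):
print([round(p_violation(n), 2) for n in range(3, 13)])
# [0.5, 0.31, 0.19, 0.34, 0.23, 0.14, 0.09, 0.05, 0.03, 0.02]
```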

As Ilya Somin points out, [37] a significant percentage of firms, even those without any history of sex discrimination, could be in violation of the law by chance alone.

In more male-dominated industries, such as technology, there could be an even greater imbalance. Suppose that instead of parity in general, the probability that a person who is qualified for board service is female is 40%; this is likely to be a high estimate, given the predominance of males in the technology industry. Then the probability of violating Senate Bill 826 by chance may be recomputed as:

Probability of Violation by Chance (# of board members), assuming p = 0.40

Board members:  3     4     5     6     7     8     9     10    11    12
Probability:    0.65  0.48  0.34  0.54  0.42  0.32  0.23  0.17  0.12  0.08
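Rerunning the hypothetical p_violation sketch from above with the assumed 40% base rate reproduces this table:

```python
# Rerun of the earlier illustrative p_violation() sketch with p_female = 0.40.
print([round(p_violation(n, 0.40), 2) for n in range(3, 13)])
# [0.65, 0.48, 0.34, 0.54, 0.42, 0.32, 0.23, 0.17, 0.12, 0.08]
```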

Screening of drug users, mass shooters, and terrorists

In recent years, there has been a growing interest in the use of screening tests to identify drug users on welfare, potential mass shooters, [39] and terrorists. [40] The efficacy of screening tests can be analyzed using Bayes' theorem.

Suppose that there is some binary screening procedure for an action $A$ that identifies a person as testing positive or negative for the action. Bayes' theorem tells us that the conditional probability of taking action $A$, given a positive test result $+$, is:

$$P(A \mid +) = \frac{P(+ \mid A)\,P(A)}{P(+)}$$

For any screening test, we must be cognizant of its sensitivity and specificity. The screening test has sensitivity $P(+ \mid A)$ and specificity $P(- \mid \neg A)$. The sensitivity and specificity can be analyzed using concepts from the standard theory of statistical hypothesis testing: sensitivity is equal to the statistical power $1 - \beta$, where $\beta$ is the type II error rate, and specificity is equal to $1 - \alpha$, where $\alpha$ is the type I error rate.

Therefore, the form of Bayes' theorem that is pertinent to us is:

$$P(A \mid +) = \frac{P(+ \mid A)\,P(A)}{P(+ \mid A)\,P(A) + \left[1 - P(- \mid \neg A)\right]\left[1 - P(A)\right]}$$

Suppose that we have developed a test with sensitivity and specificity of 99%, which is likely to be higher than most real-world tests. We can examine several scenarios to see how well this hypothetical test works: for example, a base rate of drug use among the screened population of 1.5%, and a base rate of prospective mass shooters of 0.01% (the values implied by the posterior probabilities tabulated below).

With these base rates and the hypothetical values of sensitivity and specificity, we may calculate the posterior probability that a positive result indicates the individual will actually engage in each of the actions:

Posterior Probabilities

Action:      Drug Use   Mass Shooting
Posterior:   0.6012     0.0098
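A minimal sketch (assuming Python; the base rates of 1.5% and 0.01% are the assumed values noted above, not measured figures) that reproduces the posterior probabilities in the table:

```python
# Bayes' theorem for a screening test: P(action | positive test), given the test's
# sensitivity and specificity and the base rate of the action in the screened population.
def posterior(base_rate: float, sensitivity: float = 0.99, specificity: float = 0.99) -> float:
    """Posterior probability of the action given a positive test result."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

print(round(posterior(0.015), 4))    # 0.6012  (drug use, assumed 1.5% base rate)
print(round(posterior(0.0001), 4))   # 0.0098  (mass shooting, assumed 0.01% base rate)
```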

Even with very high sensitivity and specificity, the screening tests only return posterior probabilities of 60.1% and 0.98%, respectively, for each action. Under more realistic circumstances, screening would likely prove even less useful than under these hypothetical conditions. The problem with any screening procedure for rare events is that it is almost certain to be too imprecise: it will flag far too many people as being at risk of engaging in the undesirable action.

Historical applications

Jurimetrics applies a range of statistical methods to the analysis of judicial behavior, uncovering patterns in decision-making and using them to identify potential biases in the judgments that are handed down. For instance, statistical analysis can forecast the outcomes of cases, providing insights into expected resolutions based on historical data. Jurimetrics is also used to evaluate litigation trends, optimize legal strategies, and improve the efficiency of legal proceedings. [42]

One example of an application of jurimetrics is resource allocation within court systems, where data analytics are used to identify potential difficulties and suggest improvements. Another is the analysis of disparities in sentencing, which allows policymakers to address inequities within legal practice. These applications highlight the role of jurimetrics as a bridge between quantitative analysis and equitable judicial processes. [42]

List of methods

Bayesian analysis of evidence

Bayes' theorem states that, for events $A$ and $B$, the conditional probability of $A$ occurring, given that $B$ has occurred, is:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

Using the law of total probability, we may expand the denominator as:

$$P(B) = P(B \mid A)\,P(A) + P(B \mid \neg A)\left[1 - P(A)\right]$$

Then Bayes' theorem may be rewritten as:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid \neg A)\left[1 - P(A)\right]}$$

This may be simplified further by defining the prior odds of event $A$ occurring and the likelihood ratio as:

$$\mathrm{Odds}(A) = \frac{P(A)}{1 - P(A)}, \qquad L = \frac{P(B \mid A)}{P(B \mid \neg A)}$$

Then the compact form of Bayes' theorem is:

$$P(A \mid B) = \frac{L \cdot \mathrm{Odds}(A)}{1 + L \cdot \mathrm{Odds}(A)}$$

Different values of the posterior probability, based on the prior odds and likelihood ratio, are computed in the following table:

Posterior Probability with Prior Odds and Likelihood Ratio

                    Likelihood Ratio
Prior Odds    1     2     3     4     5     10    15    20    25    50
0.01          0.01  0.02  0.03  0.04  0.05  0.09  0.13  0.17  0.20  0.33
0.02          0.02  0.04  0.06  0.07  0.09  0.17  0.23  0.29  0.33  0.50
0.03          0.03  0.06  0.08  0.11  0.13  0.23  0.31  0.38  0.43  0.60
0.04          0.04  0.07  0.11  0.14  0.17  0.29  0.38  0.44  0.50  0.67
0.05          0.05  0.09  0.13  0.17  0.20  0.33  0.43  0.50  0.56  0.71
0.10          0.09  0.17  0.23  0.29  0.33  0.50  0.60  0.67  0.71  0.83
0.15          0.13  0.23  0.31  0.38  0.43  0.60  0.69  0.75  0.79  0.88
0.20          0.17  0.29  0.38  0.44  0.50  0.67  0.75  0.80  0.83  0.91
0.25          0.20  0.33  0.43  0.50  0.56  0.71  0.79  0.83  0.86  0.93
0.30          0.23  0.38  0.47  0.55  0.60  0.75  0.82  0.86  0.88  0.94

If we take $A$ to be some criminal behavior and $B$ a criminal complaint or accusation, Bayes' theorem allows us to determine the conditional probability that a crime was actually committed. More sophisticated analyses of evidence can be undertaken with the use of Bayesian networks.
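A minimal sketch (assuming Python; not part of the original text) of the compact odds form of Bayes' theorem used in the table above:

```python
# Posterior probability from prior odds and likelihood ratio: L*odds / (1 + L*odds).
def posterior_from_odds(prior_odds: float, likelihood_ratio: float) -> float:
    x = likelihood_ratio * prior_odds
    return x / (1 + x)

# Reproduces one row of the table above (prior odds = 0.05):
print([round(posterior_from_odds(0.05, lr), 2) for lr in (1, 2, 3, 4, 5, 10, 15, 20, 25, 50)])
# [0.05, 0.09, 0.13, 0.17, 0.2, 0.33, 0.43, 0.5, 0.56, 0.71]
```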

As in many other fields, jurimetrics has changed rapidly with technological advancement. The integration of artificial intelligence (AI) into legal processes is an emerging trend. Machine learning algorithms are frequently used to analyze legal texts, predict case outcomes, and provide data-driven insights to legal professionals.

Technological advancements such as AI have been used to create legal analytics platforms, which can review large amounts of case law and identify patterns that assist in crafting legal arguments. These innovations not only improve decision-making by reducing the likelihood of human error, but also increase the efficiency of legal research. [43]

For example, recent studies highlight the efficiency of machine learning in analyzing complex datasets, such as those found in healthcare or legal domains, with high accuracy. One application discussed by Christian Garbin, Nicholas Marques, and Oge Marques (2023) involves the use of machine learning models to identify specific patterns in datasets characterized by class imbalance. The article examines datasets related to opioid use disorder (OUD) and notes how judgments passed in legal settings can depend on such heavily imbalanced datasets. [44]

Despite these advancements, the integration of AI into jurimetrics presents challenges. Garbin, Marques, and Marques emphasize that many studies using machine learning algorithms fail to transparently document essential steps, such as data preprocessing, hyperparameter tuning, or the criteria used for splitting training and test sets. [44]

Garbin, Marques, and Marques recommend prioritizing interpretable models unless the performance gap justifies a less transparent algorithm. Since legal decisions are high-stakes, interpretable models (such as logistic regression or decision trees) are often preferred over more complex "black-box" models. Although "black-box" models often have higher predictive accuracy, their lack of interpretability is a central ethical concern. [44]
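A minimal, hypothetical sketch (assuming Python with scikit-learn; the dataset, feature counts, and class balance are invented for illustration and are not from the cited study) of the kind of interpretable model on an imbalanced dataset discussed above:

```python
# Illustrative only: an interpretable model (logistic regression) trained on a synthetic,
# imbalanced dataset standing in for the OUD/legal data described in the text.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic data with a 95/5 class imbalance (assumed proportions).
X, y = make_classification(n_samples=5000, n_features=8, n_informative=4,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights the rare class; the coefficients stay directly inspectable.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
print("Feature coefficients (interpretable weights):", model.coef_.round(2))
```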

History of Jurimetrics

The term "jurimetrics" was created in 1949 by Lee Loevinger. [45] It was defined as the use of quantitative methods to the study of law. Initially, jurimetrics was specifically focused on the theoretical exploration of statistical techniques on legal systems. [43]

Over time, the field evolved. In the mid-20th century, jurimetrics began to gain traction as researchers explored its potential for improving legal analysis. Early foundational studies created a roadmap for integrating the practice into the legal field. By the late 20th century, jurimetrics had expanded to include applications such as evaluating the reliability of forensic evidence and modeling litigation outcomes.

Today, jurimetrics is recognized as a tool for the modern legal system, bridging the gap between economics, data science, and the law.

Ethics of Jurimetrics

In 2021, Abigail Z. Jacobs and Hanna Wallach released a study on computational systems and how they "often involve unobservable theoretical constructs, such as socioeconomic status, teacher effectiveness, and risk of recidivism". [46] They note that "computational systems have long been touted as having the potential to counter societal biases and structural inequalities, yet recent work has demonstrated that they often end up encoding and exacerbating them instead". [46]

An example of the ethical concerns in jurimetrics comes from risk assessment models used in the U.S. justice system, particularly the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool. COMPAS was developed by Northpointe (now Equivant) to evaluate a defendant's likelihood of recidivism by analyzing factors derived from official records and interviews. The factors are grouped into four dimensions: prior criminal history, associations with criminals, drug involvement, and indicators of juvenile delinquency.

The risk assessment model then uses these factors in a regression model to generate a recidivism risk score, scaled from one to ten, with ten indicating the highest risk. For the model, recidivism is defined as a new misdemeanor or felony arrest within two years. However, the specific mathematical methodology that COMPAS uses remains proprietary, which has raised concerns about transparency. Subsequent investigations, such as those by Angwin et al., [47] have critiqued the model for potential biases and their ethical implications. [48]
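Because COMPAS's actual formula is proprietary, only the general pattern described above can be illustrated. The following sketch (assuming Python with NumPy; all feature groups, weights, and data are invented) shows a generic regression-style score binned into a one-to-ten scale, not the COMPAS method itself:

```python
# Purely hypothetical illustration of a linear "risk" score over four invented factor
# groups, binned into deciles 1-10. This is NOT COMPAS's methodology, which is not public.
import numpy as np

rng = np.random.default_rng(0)
# Invented feature groups: prior history, criminal associations, drug involvement, juvenile indicators.
X = rng.random((1000, 4))
weights = np.array([0.9, 0.5, 0.4, 0.6])   # illustrative weights only
raw_score = X @ weights                    # linear combination of the factor groups

# Bin raw scores into deciles 1-10, mirroring the reported one-to-ten risk scale.
cut_points = np.quantile(raw_score, np.linspace(0.1, 0.9, 9))
deciles = np.digitize(raw_score, cut_points) + 1
print(deciles[:10])
```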


References

  1. Garner, Bryan A. (2001). "jurimetrics". A Dictionary of Modern Legal Usage. Oxford University Press. p. 488. ISBN 0-19-514236-5.
  2. "Jurimetrics". American Bar Association. Retrieved 2015-02-06.
  3. Loevinger, Lee (1949). "Jurimetrics--The Next Step Forward". Minnesota Law Review. 33: 455.
  4. Loevinger, L. "Jurimetrics: Science and prediction in the field of law". Minnesota Law Review , vol. 46, HeinOnline, 1961.
  5. Holmes, The Path of the Law, 10 Harvard Law Review (1897) 457.
  6. Buscaglia, Edgardo (2001). "The Economic Factors Behind Legal Integration: A Jurimetric Analysis of the Latin American Experience" (PDF). German Papers in Law and Economics. 1: 1.
  7. Buscaglia, Edgardo (2001). "A Governance-Based Jurimetric Analysis of Judicial Corruption: Subjective versus Objective Indicators" (PDF). International Review of Law and Economics. 21: 231. doi:10.1016/S0144-8188(01)00058-8.
  8. Nigrini, Mark J. (1999-04-30). "I've Got Your Number: How a mathematical phenomenon can help CPAs uncover fraud and other irregularities". Journal of Accountancy.
  9. Durtschi, Cindy; Hillison, William; Pacini, Carl (2004). "The Effective Use of Benford's Law to Assist in Detecting Fraud in Accounting Data". Journal of Forensic Accounting. 5: 17–34.
  10. Moore, Thomas Gale (1986). "U. S. Airline Deregulation: Its Effects on Passengers, Capital, and Labor". The Journal of Law & Economics. 29 (1): 1–28. doi:10.1086/467107. ISSN   0022-2186. JSTOR   725400. S2CID   153646501.
  11. Gelman, Andrew; Fagan, Jeffrey; Kiss, Alex (2007). "An Analysis of the New York City Police Department's "Stop-and-Frisk" Policy in the Context of Claims of Racial Bias". Journal of the American Statistical Association. 102 (479): 813–823. doi: 10.1198/016214506000001040 . ISSN   0162-1459. JSTOR   27639927. S2CID   8505752.
  12. Agan, Amanda; Starr, Sonja (2018-02-01). "Ban the Box, Criminal Records, and Racial Discrimination: A Field Experiment". The Quarterly Journal of Economics. 133 (1): 191–235. doi:10.1093/qje/qjx028. ISSN   0033-5533. S2CID   18615965.
  13. Kiszko, Kamila M.; Martinez, Olivia D.; Abrams, Courtney; Elbel, Brian (2014). "The influence of calorie labeling on food orders and consumption: A review of the literature". Journal of Community Health. 39 (6): 1248–1269. doi:10.1007/s10900-014-9876-0. ISSN   0094-5145. PMC   4209007 . PMID   24760208.
  14. Finkelstein, Michael O.; Robbins, Herbert E. (1973). "Mathematical Probability in Election Challenges". Columbia Law Review. 73 (2): 241. doi:10.2307/1121228. JSTOR   1121228.
  15. Greenstone, Michael; McDowell, Richard; Nath, Ishan (2019-04-21). "Do Renewable Portfolio Standards Deliver?" (PDF). Energy Policy Institute at the University of Chicago, Working Paper No. 2019-62.
  16. Angrist, Joshua D.; Krueger, Alan B. (1991). "Does Compulsory School Attendance Affect Schooling and Earnings?". The Quarterly Journal of Economics. 106 (4): 979–1014. doi:10.2307/2937954. ISSN   0033-5533. JSTOR   2937954. S2CID   153718259.
  17. Eisenberg, Theodore; Sundgren, Stefan; Wells, Martin T. (1998). "Larger board size and decreasing firm value in small firms". Journal of Financial Economics. 48 (1): 35–54. doi: 10.1016/S0304-405X(98)00003-8 . ISSN   0304-405X.
  18. Guest, Paul M. (2009). "The impact of board size on firm performance: evidence from the UK" (PDF). The European Journal of Finance. 15 (4): 385–404. doi:10.1080/13518470802466121. hdl: 1826/4169 . ISSN   1351-847X. S2CID   3868815.
  19. Donohue III, John J.; Ho, Daniel E. (2007). "The Impact of Damage Caps on Malpractice Claims: Randomization Inference with Difference-in-Differences". Journal of Empirical Legal Studies. 4 (1): 69–102. doi:10.1111/j.1740-1461.2007.00082.x.
  20. Linnainmaa, Juhani T.; Melzer, Brian; Previtero, Alessandro (2018). "The Misguided Beliefs of Financial Advisors". SSRN. SSRN   3101426.
  21. Van Doren, Peter (2018-06-25). "The Fiduciary Rule and Conflict of Interest". Cato at Liberty. Cato Institute. Retrieved 2019-12-14.
  22. Kennedy, Edward H.; Hu, Chen; O'Brien, Barbara; Gross, Samuel R. (2014-05-20). "Rate of false conviction of criminal defendants who are sentenced to death". Proceedings of the National Academy of Sciences. 111 (20): 7230–7235. Bibcode:2014PNAS..111.7230G. doi: 10.1073/pnas.1306417111 . ISSN   0027-8424. PMC   4034186 . PMID   24778209.
  23. Fenton, Norman; Neil, Martin; Lagnado, David A. (2013). "A General Structure for Legal Arguments About Evidence Using Bayesian Networks". Cognitive Science. 37 (1): 61–102. doi: 10.1111/cogs.12004 . ISSN   1551-6709. PMID   23110576.
  24. Vlek, Charlotte S.; Prakken, Henry; Renooij, Silja; Verheij, Bart (2014-12-01). "Building Bayesian networks for legal evidence with narratives: a case study evaluation". Artificial Intelligence and Law. 22 (4): 375–421. doi:10.1007/s10506-014-9161-7. ISSN   1572-8382. S2CID   12449479.
  25. Kwan, Michael; Chow, Kam-Pui; Law, Frank; Lai, Pierre (2008). "Reasoning About Evidence Using Bayesian Networks". In Ray, Indrajit; Shenoi, Sujeet (eds.). Advances in Digital Forensics IV. IFIP — The International Federation for Information Processing. Vol. 285. Springer US. pp. 275–289. doi: 10.1007/978-0-387-84927-0_22 . ISBN   978-0-387-84927-0.
  26. Devi, Tanaya; Fryer, Roland G. Jr. (2020). "Policing the Police: The Impact of "Pattern-or-Practice" Investigations on Crime" (PDF). NBER Working Paper Series. No. 27324.
  27. Lai, T. L.; Levin, Bruce; Robbins, Herbert; Siegmund, David (1980-06-01). "Sequential medical trials". Proceedings of the National Academy of Sciences. 77 (6): 3135–3138. Bibcode:1980PNAS...77.3135L. doi: 10.1073/pnas.77.6.3135 . ISSN   0027-8424. PMC   349568 . PMID   16592839.
  28. Levin, Bruce (2015). "The futility study—progress over the last decade". Contemporary Clinical Trials. 45 (Pt A): 69–75. doi:10.1016/j.cct.2015.06.013. ISSN   1551-7144. PMC   4639404 . PMID   26123873.
  29. Deichmann, Richard E.; Krousel-Wood, Marie; Breault, Joseph (2016). "Bioethics in Practice: Considerations for Stopping a Clinical Trial Early". The Ochsner Journal. 16 (3): 197–198. ISSN   1524-5012. PMC   5024796 . PMID   27660563.
  30. "Adaptive Designs for Clinical Trials of Drugs and Biologics: Guidance for Industry". U.S. Department of Health and Human Services/Food and Drug Administration. 2019.
  31. Finkelstein, Michael O.; Levin, Bruce (1997). "Clear Choices and Guesswork in Peremptory Challenges in Federal Criminal Trials". Journal of the Royal Statistical Society. Series A (Statistics in Society). 160 (2): 275–288. doi:10.1111/1467-985X.00062. ISSN   0964-1998. JSTOR   2983220. S2CID   120315268.
  32. Jones, Shayne E.; Miller, Joshua D.; Lynam, Donald R. (2011-07-01). "Personality, antisocial behavior, and aggression: A meta-analytic review". Journal of Criminal Justice. 39 (4): 329–337. doi:10.1016/j.jcrimjus.2011.03.004. ISSN   0047-2352.
  33. Perry, Walter L.; McInnis, Brian; Price, Carter C.; Smith, Susan; Hollywood, John S. (2013). "Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations". RAND Corporation. Retrieved 2019-08-16.
  34. Spivak, Andrew L.; Damphousse, Kelly R. (2006). "Who Returns to Prison? A Survival Analysis of Recidivism among Adult Offenders Released in Oklahoma, 1985 – 2004". Justice Research and Policy. 8 (2): 57–88. doi:10.3818/jrp.8.2.2006.57. ISSN   1525-1071. S2CID   144566819.
  35. Localio, A. Russell; Lawthers, Ann G.; Bengtson, Joan M.; Hebert, Liesi E.; Weaver, Susan L.; Brennan, Troyen A.; Landis, J. Richard (1993). "Relationship Between Malpractice Claims and Cesarean Delivery". JAMA. 269 (3): 366–373. doi:10.1001/jama.1993.03500030064034. PMID   8418343.
  36. Unger, Adriana Jacoto; Neto, José Francisco dos Santos; Fantinato, Marcelo; Peres, Sarajane Marques; Trecenti, Julio; Hirota, Renata (21 June 2021). Process mining-enabled jurimetrics: analysis of a Brazilian court's judicial performance in the business law processing. ACM. pp. 240–244. doi:10.1145/3462757.3466137. ISBN   978-1-4503-8526-8.
  37. Somin, Ilya (2018-10-04). "California's Unconstitutional Gender Quotas for Corporate Boards". Reason.com. The Volokh Conspiracy. Retrieved 2019-08-13.
  38. Stewart, Emily (2018-10-03). "California just passed a law requiring more women on boards. It matters, even if it fails". Vox. Retrieved 2019-08-13.
  39. Gillespie, Nick (2018-02-14). "Yes, This Is a Good Time To Talk About Gun Violence and How To Reduce It". Reason.com. Retrieved 2019-08-17.
  40. "Terrorist Screening Center". Federal Bureau of Investigation. Retrieved 2019-08-17.
  41. "What is the scope of cocaine use in the United States?". National Institute on Drug Abuse. Retrieved 2019-08-17.
  42. 1 2 "Jurimetrics Spring 2024". www.americanbar.org. Retrieved 2024-12-10.
  43. 1 2 "Jurimetrics Spring 2024". www.americanbar.org. Retrieved 2024-12-10.
  44. 1 2 3 Garbin, Christian; Marques, Nicholas; Marques, Oge (2023-06-01). "Machine learning for predicting opioid use disorder from healthcare data: A systematic review". Computer Methods and Programs in Biomedicine. 236: 107573. doi:10.1016/j.cmpb.2023.107573. ISSN   0169-2607.
  45. Gutierrez, Richard E.; Scurich, Nicholas; Garrett, Brandon L. (2024). "The Impact Of Defense Experts On Juror Perceptions Of Firearms Examination Testimony". doi.org. Retrieved 2024-12-10.
  46. Jacobs, Abigail Z.; Wallach, Hanna (2021-03-12). Measurement and Fairness. doi:10.48550/arXiv.1912.05511. Retrieved 2024-12-10.
  47. Larson, Jeff; Angwin, Julia; Kirchner, Lauren; Mattu, Surya. "How We Analyzed the COMPAS Recidivism Algorithm". ProPublica. Retrieved 2024-12-10.
  48. Larson, Jeff; Angwin, Julia; Kirchner, Lauren; Mattu, Surya. "How We Analyzed the COMPAS Recidivism Algorithm". ProPublica. Retrieved 2024-12-10.

Further reading