# Jurimetrics


Jurimetrics is the application of quantitative methods, especially probability and statistics, to law. [1] In the United States, the journal Jurimetrics is published by the American Bar Association and Arizona State University. [2] The Journal of Empirical Legal Studies is another publication that emphasizes the statistical analysis of law.

## History

The term was coined in 1949 by Lee Loevinger in his article "Jurimetrics: The Next Step Forward". [1] [3] Showing the influence of Oliver Wendell Holmes, Jr., Loevinger quoted [4] Holmes's celebrated phrase:

“For the rational study of the law the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.” [5]

The first work on this topic is attributed to Nicolaus I Bernoulli in his doctoral dissertation De Usu Artis Conjectandi in Jure, written in 1709.

## Applications

### Gender quotas on corporate boards

In 2018, California's legislature passed Senate Bill 826, which requires all publicly held corporations based in the state to have a minimum number of women on their board of directors. [34] [35] Boards with five or fewer members must have at least two women, while boards with six or more members must have at least three women.

Using the binomial distribution, we may compute the probability of violating the rule laid out in Senate Bill 826 as a function of the number of board members. The probability mass function for the binomial distribution is:

${\displaystyle \mathbb {P} (X=k)={n \choose {k}}p^{k}(1-p)^{n-k}}$

where ${\displaystyle p}$ is the probability of success on each of ${\displaystyle n}$ independent trials, ${\displaystyle k}$ is the number of successes, and ${\textstyle {n \choose {k}}}$ is the binomial coefficient. For this computation, ${\displaystyle p}$ is the probability that a person qualified for board service is female, ${\displaystyle k}$ is the number of female board members, and ${\displaystyle n}$ is the number of board seats. We will assume that ${\displaystyle p=0.5}$. Depending on the number of board members, we compute the cumulative distribution function:

${\displaystyle {\begin{cases}\mathbb {P} (X\leq 1)=(1-p)^{n}+np(1-p)^{n-1},\quad &n\leq 5\\\mathbb {P} (X\leq 2)=\mathbb {P} (X\leq 1)+{n(n-1) \over {2}}p^{2}(1-p)^{n-2},\quad &n>5\end{cases}}}$

With these formulas, we are able to compute the probability of violating Senate Bill 826 by chance:

| Number of board members | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Probability of violation by chance | 0.50 | 0.31 | 0.19 | 0.34 | 0.23 | 0.14 | 0.09 | 0.05 | 0.03 | 0.02 |

As Ilya Somin points out, [34] a significant percentage of firms, even without any history of sex discrimination, could be in violation of the law purely by chance.

In more male-dominated industries, such as technology, there could be an even greater imbalance. Suppose that instead of parity in general, the probability that a person who is qualified for board service is female is 40%; this is likely to be a high estimate, given the predominance of males in the technology industry. Then the probability of violating Senate Bill 826 by chance may be recomputed as:

| Number of board members | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Probability of violation by chance | 0.65 | 0.48 | 0.34 | 0.54 | 0.42 | 0.32 | 0.23 | 0.17 | 0.12 | 0.08 |
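The entries in both tables can be reproduced with a short script. The following Python sketch computes the binomial cumulative distribution function for the relevant thresholds, for both the parity case (${\displaystyle p=0.5}$) and the male-dominated case (${\displaystyle p=0.4}$):

```python
from math import comb

def violation_probability(n: int, p: float = 0.5) -> float:
    """Probability that an n-seat board filled by chance violates SB 826,
    where p is the probability that a qualified candidate is female.
    A violation is fewer than 2 women if n <= 5, fewer than 3 if n >= 6."""
    required = 2 if n <= 5 else 3
    # P(X <= required - 1) under Binomial(n, p)
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(required))

for p in (0.5, 0.4):
    row = [round(violation_probability(n, p), 2) for n in range(3, 13)]
    print(f"p = {p}: {row}")
```

For ${\displaystyle p=0.5}$ this yields 0.50, 0.31, 0.19, 0.34, ..., matching the table; the jump at six members reflects the higher three-woman requirement kicking in.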

### Bayesian analysis of evidence

Bayes' theorem states that, for events ${\displaystyle A}$ and ${\displaystyle B}$, the conditional probability of ${\displaystyle A}$ occurring, given that ${\displaystyle B}$ has occurred, is:

${\displaystyle \mathbb {P} (A|B)={\mathbb {P} (B|A)\mathbb {P} (A) \over {\mathbb {P} (B)}}}$

Using the law of total probability, we may expand the denominator as:

${\displaystyle \mathbb {P} (B)=\mathbb {P} (B|A)\mathbb {P} (A)+\mathbb {P} (B|\sim A)[1-\mathbb {P} (A)]}$

Then Bayes' theorem may be rewritten as:

${\displaystyle {\begin{aligned}\mathbb {P} (A|B)&={\mathbb {P} (B|A)\mathbb {P} (A) \over {\mathbb {P} (B|A)\mathbb {P} (A)+\mathbb {P} (B|\sim A)[1-\mathbb {P} (A)]}}\\&={1 \over {1+{1-\mathbb {P} (A) \over {\mathbb {P} (A)}}{\mathbb {P} (B|\sim A) \over {\mathbb {P} (B|A)}}}}\end{aligned}}}$

This may be simplified further by defining the prior odds ${\displaystyle \eta }$ of event ${\displaystyle A}$ and the likelihood ratio ${\displaystyle {\mathcal {L}}}$ as:

${\displaystyle \eta ={\mathbb {P} (A) \over {1-\mathbb {P} (A)}},\quad {\mathcal {L}}={\mathbb {P} (B|A) \over {\mathbb {P} (B|\sim A)}}}$

Then the compact form of Bayes' theorem is:

${\displaystyle \mathbb {P} (A|B)={1 \over {1+(\eta {\mathcal {L}})^{-1}}}}$

Different values of the posterior probability, based on the prior odds and likelihood ratio, are computed in the following table:

| Prior odds ${\displaystyle \eta }$ \ Likelihood ratio ${\displaystyle {\mathcal {L}}}$ | 1 | 2 | 3 | 4 | 5 | 10 | 15 | 20 | 25 | 50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0.01 | 0.01 | 0.02 | 0.03 | 0.04 | 0.05 | 0.09 | 0.13 | 0.17 | 0.20 | 0.33 |
| 0.02 | 0.02 | 0.04 | 0.06 | 0.07 | 0.09 | 0.17 | 0.23 | 0.29 | 0.33 | 0.50 |
| 0.03 | 0.03 | 0.06 | 0.08 | 0.11 | 0.13 | 0.23 | 0.31 | 0.38 | 0.43 | 0.60 |
| 0.04 | 0.04 | 0.07 | 0.11 | 0.14 | 0.17 | 0.29 | 0.38 | 0.44 | 0.50 | 0.67 |
| 0.05 | 0.05 | 0.09 | 0.13 | 0.17 | 0.20 | 0.33 | 0.43 | 0.50 | 0.56 | 0.71 |
| 0.10 | 0.09 | 0.17 | 0.23 | 0.29 | 0.33 | 0.50 | 0.60 | 0.67 | 0.71 | 0.83 |
| 0.15 | 0.13 | 0.23 | 0.31 | 0.38 | 0.43 | 0.60 | 0.69 | 0.75 | 0.79 | 0.88 |
| 0.20 | 0.17 | 0.29 | 0.38 | 0.44 | 0.50 | 0.67 | 0.75 | 0.80 | 0.83 | 0.91 |
| 0.25 | 0.20 | 0.33 | 0.43 | 0.50 | 0.56 | 0.71 | 0.79 | 0.83 | 0.86 | 0.93 |
| 0.30 | 0.23 | 0.38 | 0.47 | 0.55 | 0.60 | 0.75 | 0.82 | 0.86 | 0.88 | 0.94 |
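Each cell of the table follows directly from the compact form of Bayes' theorem; a minimal Python sketch:

```python
def posterior(prior_odds: float, likelihood_ratio: float) -> float:
    """Posterior probability P(A|B) = 1 / (1 + (eta * L)^-1)
    from the compact form of Bayes' theorem."""
    return 1.0 / (1.0 + 1.0 / (prior_odds * likelihood_ratio))

# e.g. prior odds of 0.05 combined with a likelihood ratio of 10
print(round(posterior(0.05, 10), 2))  # 0.33
```

Note how weak priors dominate: even a likelihood ratio of 50 lifts prior odds of 0.01 to a posterior of only about one third.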

If we take ${\displaystyle A}$ to be some criminal behavior and ${\displaystyle B}$ a criminal complaint or accusation, Bayes' theorem allows us to determine the conditional probability of a crime being committed. More sophisticated analyses of evidence can be undertaken with the use of Bayesian networks.

### Screening of drug users, mass shooters, and terrorists

In recent years, there has been a growing interest in the use of screening tests to identify drug users on welfare, potential mass shooters, [36] and terrorists. [37] The efficacy of screening tests can be analyzed using Bayes' theorem.

Suppose that there is some binary screening procedure for an action ${\displaystyle V}$ that identifies a person as testing positive ${\displaystyle +}$ or negative ${\displaystyle -}$ for the action. Bayes' theorem tells us that the conditional probability of taking action ${\displaystyle V}$, given a positive test result, is:

${\displaystyle \mathbb {P} (V|+)={\mathbb {P} (+|V)\mathbb {P} (V) \over {\mathbb {P} (+|V)\mathbb {P} (V)+\mathbb {P} (+|\sim V)\left[1-\mathbb {P} (V)\right]}}}$

For any screening test, we must be cognizant of its sensitivity and specificity. The screening test has sensitivity ${\displaystyle \mathbb {P} (+|V)}$ and specificity ${\textstyle \mathbb {P} (-|\sim V)=1-\mathbb {P} (+|\sim V)}$. The sensitivity and specificity can be analyzed using concepts from the standard theory of statistical hypothesis testing:

• Sensitivity is equal to the statistical power ${\displaystyle 1-\beta }$, where ${\displaystyle \beta }$ is the type II error rate
• Specificity is equal to ${\displaystyle 1-\alpha }$, where ${\displaystyle \alpha }$ is the type I error rate

Therefore, the form of Bayes' theorem that is pertinent to us is:

${\displaystyle \mathbb {P} (V|+)={(1-\beta )\mathbb {P} (V) \over {(1-\beta )\mathbb {P} (V)+\alpha \left[1-\mathbb {P} (V)\right]}}}$

Suppose that we have developed a test with sensitivity and specificity of 99%, which is likely to be higher than most real-world tests. We can examine several scenarios to see how well this hypothetical test works:

• We screen welfare recipients for cocaine use. The base rate in the population is approximately 1.5%, [38] assuming no differences in use between welfare recipients and the general population.
• We screen men for the possibility of committing mass shootings or terrorist attacks. The base rate is assumed to be 0.01%.

With these base rates and the hypothetical values of sensitivity and specificity, we may calculate the posterior probability that a positive result indicates the individual will actually engage in each of the actions:

| | Drug use | Mass shooting |
| --- | --- | --- |
| Posterior probability | 0.6012 | 0.0098 |

Even with very high sensitivity and specificity, the screening tests return posterior probabilities of only 60.1% and 0.98%, respectively. Under more realistic circumstances, screening would likely prove even less useful. The problem with any screening procedure for rare events is that it is almost certain to be too imprecise, flagging far too many people as being at risk of engaging in the undesirable action.
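Both posterior probabilities follow from the screening form of Bayes' theorem. The following Python sketch uses the base rates and the hypothetical 99% sensitivity and specificity assumed above:

```python
def screening_posterior(base_rate: float, sensitivity: float,
                        specificity: float) -> float:
    """P(V | +): probability of the action given a positive test result."""
    alpha = 1.0 - specificity                 # type I error rate (false positives)
    true_pos = sensitivity * base_rate        # (1 - beta) * P(V)
    false_pos = alpha * (1.0 - base_rate)     # alpha * [1 - P(V)]
    return true_pos / (true_pos + false_pos)

print(round(screening_posterior(0.015, 0.99, 0.99), 4))   # cocaine use: 0.6012
print(round(screening_posterior(0.0001, 0.99, 0.99), 4))  # mass shooting: 0.0098
```

The collapse from 60% to under 1% comes entirely from the base rate: when the event is rare enough, false positives from the vast non-offending population swamp the true positives.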

## Jurimetrics and law and economics

The difference between jurimetrics and law and economics is that jurimetrics investigates legal questions from a probabilistic/statistical point of view, while law and economics addresses legal questions using standard microeconomic analysis. A synthesis of these fields is possible through the use of econometrics (statistics for economic analysis) and other quantitative methods to address legal questions. As an example, the Columbia University scholar Edgardo Buscaglia published several peer-reviewed articles using a joint jurimetrics and law and economics approach. [39] [40]


## References

1. Garner, Bryan A. (2001). "jurimetrics". A Dictionary of Modern Legal Usage. p. 488. ISBN   0195142365.
2. "Jurimetrics". American Bar Association. Retrieved 2015-02-06.
3. Loevinger, Lee (1949). "Jurimetrics--The Next Step Forward". Minnesota Law Review. 33: 455.
4. Loevinger, L. "Jurimetrics: Science and prediction in the field of law". Minnesota Law Review , vol. 46, HeinOnline, 1961.
5. Holmes, The Path of the Law, 10 Harvard Law Review (1897) 457.
6. Nigrini, Mark J. (1999-04-30). "I've Got Your Number: How a mathematical phenomenon can help CPAs uncover fraud and other irregularities". Journal of Accountancy.
7. Durtschi, Cindy; Hillison, William; Pacini, Carl (2004). "The Effective Use of Benford's Law to Assist in Detecting Fraud in Accounting Data". Journal of Forensic Accounting. 5: 17–34.
8. Moore, Thomas Gale (1986). "U. S. Airline Deregulation: Its Effects on Passengers, Capital, and Labor". The Journal of Law & Economics. 29 (1): 1–28. doi:10.1086/467107. ISSN   0022-2186. JSTOR   725400. S2CID   153646501.
9. Gelman, Andrew; Fagan, Jeffrey; Kiss, Alex (2007). "An Analysis of the New York City Police Department's "Stop-and-Frisk" Policy in the Context of Claims of Racial Bias". Journal of the American Statistical Association. 102 (479): 813–823. doi:10.1198/016214506000001040. ISSN   0162-1459. JSTOR   27639927. S2CID   8505752.
10. Agan, Amanda; Starr, Sonja (2018-02-01). "Ban the Box, Criminal Records, and Racial Discrimination: A Field Experiment". The Quarterly Journal of Economics. 133 (1): 191–235. doi:10.1093/qje/qjx028. ISSN   0033-5533. S2CID   18615965.
11. Kiszko, Kamila M.; Martinez, Olivia D.; Abrams, Courtney; Elbel, Brian (2014). "The influence of calorie labeling on food orders and consumption: A review of the literature". Journal of Community Health. 39 (6): 1248–1269. doi:10.1007/s10900-014-9876-0. ISSN   0094-5145. PMID   24760208.
12. Finkelstein, Michael O.; Robbins, Herbert E. (1973). "Mathematical Probability in Election Challenges". Columbia Law Review. 73 (2): 241. doi:10.2307/1121228. JSTOR   1121228.
13. Greenstone, Michael; McDowell, Richard; Nath, Ishan (2019-04-21). "Do Renewable Portfolio Standards Deliver?" (PDF). Energy Policy Institute at the University of Chicago, Working Paper No. 2019-62.
14. Angrist, Joshua D.; Krueger, Alan B. (1991). "Does Compulsory School Attendance Affect Schooling and Earnings?". The Quarterly Journal of Economics. 106 (4): 979–1014. doi:10.2307/2937954. ISSN   0033-5533. JSTOR   2937954. S2CID   153718259.
15. Eisenberg, Theodore; Sundgren, Stefan; Wells, Martin T. (1998). "Larger board size and decreasing firm value in small firms". Journal of Financial Economics. 48 (1): 35–54. doi:10.1016/S0304-405X(98)00003-8. ISSN   0304-405X.
16. Guest, Paul M. (2009). "The impact of board size on firm performance: evidence from the UK" (PDF). The European Journal of Finance. 15 (4): 385–404. doi:10.1080/13518470802466121. ISSN   1351-847X. S2CID   3868815.
17. Donohue III, John J.; Ho, Daniel E. (2007). "The Impact of Damage Caps on Malpractice Claims: Randomization Inference with Difference-in-Differences". Journal of Empirical Legal Studies. 4 (1): 69–102. doi:10.1111/j.1740-1461.2007.00082.x.
18. Linnainmaa, Juhani T.; Melzer, Brian; Previtero, Alessandro (2018). "The Misguided Beliefs of Financial Advisors". SSRN.
19. Van Doren, Peter (2018-06-25). "The Fiduciary Rule and Conflict of Interest". Cato at Liberty. Cato Institute. Retrieved 2019-12-14.
20. Kennedy, Edward H.; Hu, Chen; O’Brien, Barbara; Gross, Samuel R. (2014-05-20). "Rate of false conviction of criminal defendants who are sentenced to death". Proceedings of the National Academy of Sciences. 111 (20): 7230–7235. Bibcode:2014PNAS..111.7230G. ISSN   0027-8424. PMID   24778209.
21. Fenton, Norman; Neil, Martin; Lagnado, David A. (2013). "A General Structure for Legal Arguments About Evidence Using Bayesian Networks". Cognitive Science. 37 (1): 61–102. ISSN   1551-6709. PMID   23110576.
22. Vlek, Charlotte S.; Prakken, Henry; Renooij, Silja; Verheij, Bart (2014-12-01). "Building Bayesian networks for legal evidence with narratives: a case study evaluation". Artificial Intelligence and Law. 22 (4): 375–421. doi:10.1007/s10506-014-9161-7. ISSN   1572-8382. S2CID   12449479.
23. Kwan, Michael; Chow, Kam-Pui; Law, Frank; Lai, Pierre (2008). Ray, Indrajit; Shenoi, Sujeet (eds.). "Reasoning About Evidence Using Bayesian Networks". Advances in Digital Forensics IV. IFIP — The International Federation for Information Processing. Springer US. 285: 275–289. ISBN   9780387849270.
24. Devi, Tanaya; Fryer Jr, Roland G. (2020). "Policing the Police: The Impact of "Pattern-or-Practice" Investigations on Crime" (PDF). NBER Working Paper Series. No. 27324.
25. Lai, T. L.; Levin, Bruce; Robbins, Herbert; Siegmund, David (1980-06-01). "Sequential medical trials". Proceedings of the National Academy of Sciences. 77 (6): 3135–3138. Bibcode:1980PNAS...77.3135L. ISSN   0027-8424. PMID   16592839.
26. Levin, Bruce (2015). "The futility study—progress over the last decade". Contemporary Clinical Trials. 45 (Pt A): 69–75. doi:10.1016/j.cct.2015.06.013. ISSN   1551-7144. PMID   26123873.
27. Deichmann, Richard E.; Krousel-Wood, Marie; Breault, Joseph (2016). "Bioethics in Practice: Considerations for Stopping a Clinical Trial Early". The Ochsner Journal. 16 (3): 197–198. ISSN   1524-5012. PMID   27660563.
28. "Adaptive Designs for Clinical Trials of Drugs and Biologics: Guidance for Industry". U.S. Department of Health and Human Services/Food and Drug Administration. 2019.
29. Finkelstein, Michael O.; Levin, Bruce (1997). "Clear Choices and Guesswork in Peremptory Challenges in Federal Criminal Trials". Journal of the Royal Statistical Society. Series A (Statistics in Society). 160 (2): 275–288. doi:10.1111/1467-985X.00062. ISSN   0964-1998. JSTOR   2983220.
30. Jones, Shayne E.; Miller, Joshua D.; Lynam, Donald R. (2011-07-01). "Personality, antisocial behavior, and aggression: A meta-analytic review". Journal of Criminal Justice. 39 (4): 329–337. doi:10.1016/j.jcrimjus.2011.03.004. ISSN   0047-2352.
31. Perry, Walter L.; McInnis, Brian; Price, Carter C.; Smith, Susan; Hollywood, John S. (2013). "Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations". RAND Corporation. Retrieved 2019-08-16.
32. Spivak, Andrew L.; Damphousse, Kelly R. (2006). "Who Returns to Prison? A Survival Analysis of Recidivism among Adult Offenders Released in Oklahoma, 1985 – 2004". Justice Research and Policy. 8 (2): 57–88. doi:10.3818/jrp.8.2.2006.57. ISSN   1525-1071. S2CID   144566819.
33. Localio, A. Russell; Lawthers, Ann G.; Bengtson, Joan M.; Hebert, Liesi E.; Weaver, Susan L.; Brennan, Troyen A.; Landis, J. Richard (1993). "Relationship Between Malpractice Claims and Cesarean Delivery". JAMA. 269 (3): 366–373. doi:10.1001/jama.1993.03500030064034. PMID   8418343.
34. Somin, Ilya (2018-10-04). "California's Unconstitutional Gender Quotas for Corporate Boards". Reason.com. The Volokh Conspiracy. Retrieved 2019-08-13.
35. Stewart, Emily (2018-10-03). "California just passed a law requiring more women on boards. It matters, even if it fails". Vox. Retrieved 2019-08-13.
36. Gillespie, Nick (2018-02-14). "Yes, This Is a Good Time To Talk About Gun Violence and How To Reduce It". Reason.com. Retrieved 2019-08-17.
37. "Terrorist Screening Center". Federal Bureau of Investigation. Retrieved 2019-08-17.
38. "What is the scope of cocaine use in the United States?". National Institute on Drug Abuse. Retrieved 2019-08-17.
39. Buscaglia, Edgardo (2001). "The Economic Factors Behind Legal Integration: A Jurimetric Analysis of the Latin American Experience" (PDF). German Papers in Law and Economics. 1: 1.
40. Buscaglia, Edgardo (2001). "A Governance-Based Jurimetric Analysis of Judicial Corruption: Subjective versus Objective Indicators" (PDF). International Review of Law and Economics. 21: 231. doi:10.1016/S0144-8188(01)00058-8.
• Angrist, Joshua D.; Pischke, Jörn-Steffen (2009). Mostly Harmless Econometrics: An Empiricist's Companion. Princeton, NJ: Princeton University Press. ISBN   9780691120355.
• Borenstein, Michael; Hedges, Larry V.; Higgins, Julian P.T.; Rothstein, Hannah R. (2009). Introduction to Meta-Analysis. Hoboken, NJ: John Wiley & Sons. ISBN   9780470057247.
• Finkelstein, Michael O.; Levin, Bruce (2015). Statistics for Lawyers. Statistics for Social and Behavioral Sciences (3rd ed.). New York, NY: Springer. ISBN   9781441959843.
• Hosmer, David W.; Lemeshow, Stanley; May, Susanne (2008). Applied Survival Analysis: Regression Modeling of Time-to-Event Data. Wiley-Interscience (2nd ed.). Hoboken, NJ: John Wiley & Sons. ISBN   9780471754992.
• McCullagh, Peter; Nelder, John A. (1989). Generalized Linear Models. Monographs on Statistics and Applied Probability (2nd ed.). Boca Raton, FL: Chapman & Hall/CRC. ISBN   9780412317606.