Computational law

Computational Law is the branch of legal informatics concerned with the automation of legal reasoning. [1] [2] What distinguishes Computational Law systems from other instances of legal technology is their autonomy, i.e. the ability to answer legal questions without additional input from human legal experts.

While there are many possible applications of Computational Law, the primary focus of work in the field today is compliance management, i.e. the development and deployment of computer systems capable of assessing, facilitating, or enforcing compliance with rules and regulations. Some systems of this sort already exist. TurboTax is a good example. And the potential is particularly significant now due to recent technological advances – including the prevalence of the Internet in human interaction and the proliferation of embedded computer systems (such as smart phones, self-driving cars, and robots).

There are also applications that do not involve governmental laws. The regulations can just as well be the terms of contracts (e.g. delivery schedules, insurance covenants, real estate transactions, financial agreements). [3] They can be the policies of corporations (e.g. constraints on travel, expenditure reporting, pricing rules). They can even be the rules of games (embodied in computer game playing systems).

History

Speculation about the potential benefits of applying methods from computational science and AI research to automate parts of the law dates back at least to the mid-1940s. [4] Further, AI and law and computational law are not easily separable, since most AI research focused on the law and its automation relies on computational methods. This speculation took multiple forms, not all obviously related to one another. This history sketches them as they were, noting relationships where they can be found to have existed.

By 1949, a minor academic field called jurimetrics had been founded by American legal scholars aiming to apply electronic and computational methods to legal problems. [5] Though broadly said to be concerned with the application of the "methods of science" to the law, these methods actually had a quite specifically defined scope. Jurimetrics was to be "concerned with such matters as the quantitative analysis of judicial behavior, the application of communication and information theory to legal expression, the use of mathematical logic in law, the retrieval of legal data by electronic and mechanical means, and the formulation of a calculus of legal predictability". [6]

These interests led in 1959 to the founding of a journal, Modern Uses of Logic in Law, as a forum for articles on the application of techniques such as mathematical logic, engineering, and statistics to legal study and development. [7] In 1966, the journal was renamed Jurimetrics. [8] Today, however, both the journal and the meaning of jurimetrics have broadened far beyond the application of computers and computational methods to law. The journal not only publishes articles on practices such as those found in computational law, but now treats jurimetrics as also encompassing the use of social science in law and the "policy implications [of] and legislative and administrative control of science". [9]

Independently in 1958, at the Conference for the Mechanization of Thought held at the National Physical Laboratory in Teddington, Middlesex, UK, the French jurist Lucien Mehl presented a paper on both the benefits of applying computational methods to law and the potential means of using such methods to automate it, in a discussion that included AI luminaries like Marvin Minsky. [10] [11] Mehl believed that the law could be automated by two basic, though not wholly separable, types of machine. These were the "documentary or information machine", which would provide the legal researcher quick access to relevant case precedents and legal scholarship, [12] and the "consultation machine", which would be "capable of answering any question put to it over a vast field of law". [13] The latter type of machine would be able to do much of a lawyer's job by simply giving the "exact answer to a [legal] problem put to it". [14]

By 1970, Mehl's first type of machine, one that could retrieve information, had been realized, but there seems to have been little consideration of further fruitful intersections between AI and legal research. [15] There were, however, still hopes that computers could model the lawyer's thought processes through computational methods and then apply that capacity to solve legal problems, thus automating and improving legal services via increased efficiency as well as shedding light on the nature of legal reasoning. [16] By the late 1970s, computer science and the affordability of computer technology had progressed enough that machines of Mehl's first type had achieved the retrieval of "legal data by electronic and mechanical means" and were in common use in American law firms. [17] [18] During this time, research continued on the goals of the early 1970s, with programs like Taxman developed both to bring useful computer technology into the law as a practical aid and to help specify the exact nature of legal concepts. [19]

Nonetheless, progress on the second type of machine, one that would more fully automate the law, remained relatively inert. [18] Research into machines that could answer questions in the manner of Mehl's consultation machine picked up somewhat in the late 1970s and 1980s. A 1979 convention in Swansea, Wales marked the first international effort focused solely on applying artificial intelligence research to legal problems, in order to "consider how computers can be used to discover and apply the legal norms embedded within the written sources of the law". [18]

Considerable progress on the development of the second type of machine was made in the following decade, with the development of a variety of expert systems. According to Thorne McCarty, [20] "these systems all have the following characteristics: They do backward chaining inference from a specified goal; they ask questions to elicit information from the user; and they produce a suggested answer along with a trace of the supporting legal rules." According to Prakken and Sartor [21] the representation of the British Nationality Act as a logic program, [22] which introduced this approach, was "hugely influential for the development of computational representations of legislation, showing how logic programming enables intuitively appealing representations that can be directly deployed to generate automatic inferences". In 2021, this work received the Inaugural CodeX Prize [23] as "one of the first and best-known works in computational law, and one of the most widely cited papers in the field."
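The backward-chaining behavior McCarty describes, and the logic-program style of the British Nationality Act encoding, can be illustrated with a minimal sketch. The rules and predicate names below are invented for illustration and are not drawn from the actual statute or the Sergot et al. program; a real system would also prompt the user for unknown facts rather than take them as a fixed set.

```python
# Rules map a conclusion to lists of premises: the conclusion holds if
# every premise in some list can itself be established. These rules are
# hypothetical, purely to show the inference style.
RULES = {
    "acquires_citizenship": [["born_in_uk", "parent_is_citizen"]],
    "parent_is_citizen": [["parent_born_in_uk"]],
}

def prove(goal, facts, trace):
    """Backward chaining: a goal holds if it is an established fact, or
    if all premises of some rule concluding it can be proved in turn.
    `trace` accumulates the supporting rules, as the 1980s expert
    systems did when explaining their answers."""
    if goal in facts:
        return True
    for premises in RULES.get(goal, []):
        if all(prove(p, facts, trace) for p in premises):
            trace.append((goal, premises))  # record the rule used
            return True
    return False

facts = {"born_in_uk", "parent_born_in_uk"}
trace = []
print(prove("acquires_citizenship", facts, trace))  # True
print(trace)  # the chain of rules supporting the answer
```

The trace is what distinguishes such systems from a bare yes/no oracle: the user sees which legal rules supported the conclusion.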

In a 1988 review of Anne Gardner's book An Artificial Intelligence Approach to Legal Reasoning (1987), the Harvard academic legal scholar and computer scientist Edwina Rissland wrote that "She plays, in part, the role of pioneer; artificial intelligence ("AI") techniques have not yet been widely applied to perform legal tasks. Therefore, Gardner, and this review, first describe and define the field, then demonstrate a working model in the domain of contract offer and acceptance." [24] Eight years had passed since the Swansea conference, and researchers merely trying to delineate the field of AI and law could still be described by their peers as "pioneer[s]".

In the 1990s and early 2000s more progress occurred, and computational research generated insights for law. [25] The First International Conference on AI and Law was held in 1987, but it was in the 1990s and 2000s that the biennial conference gathered steam and began to delve more deeply into the issues raised by work at the intersection of computational methods, AI, and law. [26] [27] [28] Undergraduate classes began to be taught on the use of computational methods for automating, understanding, and obeying the law. [29]

Further, by 2005, a team largely composed of Stanford computer scientists from the Stanford Logic Group had devoted themselves to studying the uses of computational techniques in the law. [30] Computational methods had in fact advanced enough that, in the 2000s, members of the legal profession began to analyze, predict, and worry about the potential future of computational law, and a new academic field of computational legal studies now seems well established. As insight into what such scholars see in the law's future, due in part to computational law, here is a quote from a recent conference about the "New Normal" for the legal profession:

"Over the last 5 years, in the fallout of the Great Recession, the legal profession has entered the era of the New Normal. Notably, a series of forces related to technological change, globalization, and the pressure to do more with less (in both corporate America and law firms) has changed permanently the legal services industry. As one article put it, firms are cutting back on hiring "in order to increase efficiency, improve profit margins, and reduce client costs." Indeed, in its recently noted cutbacks, Weil Gotshal's leaders remarked that it had initially expected old work to return, but came "around to the view that this is the ‘new normal.’"

The New Normal provides lawyers with an opportunity to rethink—and reimagine—the role of lawyers in our economy and society. To the extent that law firms enjoyed, or still enjoy, the ability to bundle work together, that era is coming to an end, as clients unbundle legal services and tasks. Moreover, in other cases, automation and technology can change the roles of lawyers, both requiring them to oversee processes and use technology more aggressively as well as doing less of the work that is increasingly managed by computers (think: electronic discovery). The upside is not only greater efficiencies for society, but new possibilities for legal craftsmanship.

The emerging craft of lawyering in the New Normal is likely to require lawyers to be both entrepreneurial and fluent with a range of competencies that will enable them to add value for clients. Apropos of the trends noted above, there are emerging opportunities for "legal entrepreneurs" in a range of roles from legal process management to developing technologies to manage legal operations (such as overseeing automated processes) to supporting online dispute resolution processes. In other cases, effective legal training as well as domain specific knowledge (finance, sales, IT, entrepreneurship, human resources, etc.) can form a powerful combination that prepares law school grads for a range of opportunities (business development roles, financial operations roles, HR roles, etc.). In both cases, traditional legal skills alone will not be enough to prepare law students for these roles. But the proper training, which builds on the traditional law school curriculum and goes well beyond it including practical skills, relevant domain knowledge (e.g., accounting), and professional skills (e.g., working in teams), will provide law school students a huge advantage over those with a one-dimensional skill set." [31]

Many see benefits in the oncoming changes brought about by the computational automation of law. For one thing, legal experts have predicted that it will aid legal self-help, especially in the areas of contract formation, enterprise planning, and the prediction of rule changes. [9] For another, those with knowledge of computers see the full flowering of computational law as imminent. In this vein, it seems that machines like Mehl's second type may yet come into existence. Stephen Wolfram has said that:

"So we're slowly moving toward people being educated in the kind of computational paradigm. Which is good, because the way I see it, computation is going to become central to almost every field. Let's talk about two examples—classic professions: law and medicine. It's funny, when Leibniz was first thinking about computation at the end of the 1600s, the thing he wanted to do was to build a machine that would effectively answer legal questions. It was too early then. But now we’re almost ready, I think, for computational law. Where for example contracts become computational. They explicitly become algorithms that decide what's possible and what's not. You know, some pieces of this have already happened. Like with financial derivatives, like options and futures. In the past these used to just be natural language contracts. But then they got codified and parametrized. So they’re really just algorithms, which of course one can do meta-computations on, which is what has launched a thousand hedge funds, and so on. Well, eventually one's going to be able to make computational all sorts of legal things, from mortgages to tax codes to perhaps even patents. Now to actually achieve that, one has to have ways to represent many aspects of the real world, in all its messiness. Which is what the whole knowledge-based computing of Wolfram|Alpha is about." [32]

In Estonia, the government has spearheaded a 'robotic judge' initiative, with chief data officer Ott Velsberg implementing elements both adjacent and directly related to computational law. First, inspectors no longer manually verify the use of hay-field subsidies (paid to keep fields from reverting to forest); instead, a deep-learning algorithm compares satellite images to validate the results, saving nearly a million dollars per annum. The perceived success of this operation has led the Estonian government to move ahead with proposals to automate, or compute the law of, small claims disputes below $8,000, especially contract disputes. The decisions will be appealable to a human judge. [33]

Approaches

Algorithmic law

There have also been many attempts to create a machine readable or machine executable legal code. A machine readable code would simplify the analysis of legal code, allowing the rapid construction and analysis of databases, without the need for advanced text processing techniques. A machine executable format would allow the specifics of a case to be input, and would return the decision based on the case.

Machine readable legal code is already quite common. METAlex, [34] an XML-based standard proposed and developed by the Leibniz Center for Law of the University of Amsterdam, [35] is used by the governments of both the United Kingdom and the Netherlands to encode their laws. In the United States, an executive order issued by President Barack Obama in May 2013 mandated that all public government documentation be released in a machine readable format by default, although no specific format was named. [36]

Machine executable legal code is much less common. As of 2020, numerous projects are working on systems for producing machine executable legal code, sometimes through natural language, constrained language, or a connection between natural language and executable code similar to Ricardian contracts.
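The readable/executable distinction can be made concrete with a minimal sketch. The rule below is a hypothetical eligibility test loosely modeled on the $8,000 small-claims threshold mentioned above in connection with Estonia; the field names and the rule itself are invented for illustration, not an encoding of any actual statute.

```python
from dataclasses import dataclass

# The facts of a case, structured so a program can consume them.
@dataclass
class Case:
    claim_amount: float       # disputed sum, in dollars
    is_contract_dispute: bool

def small_claims_eligible(case: Case) -> bool:
    """A machine-executable rule: the specifics of a case go in and a
    decision comes out, with no human interpretation in between. A
    merely machine-readable encoding (e.g. XML) would describe the same
    rule as data for analysis, but could not be run directly."""
    return case.claim_amount < 8000 and case.is_contract_dispute

print(small_claims_eligible(Case(claim_amount=5000, is_contract_dispute=True)))
```

Real statutes are far messier than a single threshold, which is why the projects above work on constrained languages and natural-language bridges rather than hand-written functions like this one.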

Empirical analysis

Many current efforts in computational law are focused on the empirical analysis of legal decisions, and their relation to legislation. These efforts usually make use of citation analysis, which examines patterns in citations between works. Due to the widespread practice of legal citation, it is possible to construct citation indices and large graphs of legal precedent, called citation networks. Citation networks allow the use of graph traversal algorithms in order to relate cases to one another, as well as the use of various distance metrics to find mathematical relationships between them. [37] [38] [39] These analyses can reveal important overarching patterns and trends in judicial proceedings and the way law is used. [40] [41]
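The kinds of computation described above can be sketched on a toy citation network. The cases and edges below are invented; in-degree is used as a crude proxy for the "importance" measures in the cited studies, and breadth-first traversal gives one simple distance metric between cases.

```python
from collections import deque

# Toy citation network: each key cites the cases in its list.
# Case names are hypothetical.
CITES = {
    "C": ["B", "A"],
    "B": ["A"],
    "D": ["C"],
    "A": [],
}

def in_degree(case):
    """How often a case is cited -- a crude proxy for precedential
    importance, in the spirit of network measures used on Supreme
    Court opinions."""
    return sum(case in cited for cited in CITES.values())

def distance(src, dst):
    """Shortest citation-path length from src to dst, by breadth-first
    traversal of the directed citation graph; None if unreachable."""
    frontier, seen = deque([(src, 0)]), {src}
    while frontier:
        node, hops = frontier.popleft()
        if node == dst:
            return hops
        for nxt in CITES.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return None

print(in_degree("A"))       # "A" is cited by both "B" and "C"
print(distance("D", "A"))   # D -> C -> A
```

Research systems use far richer measures (e.g. authority scores that weight a citation by the citing case's own importance), but the graph machinery is the same.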

There have been several breakthroughs in the analysis of judicial rulings in recent research on legal citation networks. These analyses have made use of citations in Supreme Court majority opinions to build citation networks, and analyzed the patterns in these networks to identify meta-information about individual decisions, such as the importance of the decision, as well as general trends in judicial proceedings, such as the role of precedent over time. [42] [37] [40] These analyses have been used to predict which cases the Supreme Court will choose to consider. [40]

Another effort has examined United States Tax Court decisions, compiling a publicly available database of Tax Court decisions, opinions, and citations between the years of 1990 and 2008, and constructing a citation network from this database. Analysis of this network revealed that large sections of the tax code were rarely, if ever, cited, and that other sections of code, such as those that dealt with "divorce, dependents, nonprofits, hobby and business expenses and losses, and general definition of income," were involved the vast majority of disputes. [41]

Some research has also been focused on hierarchical networks, in combination with citation networks, and the analysis of United States Code. This research has been used to analyze various aspects of the Code, including its size, the density of citations within and between sections of the Code, the type of language used in the Code, and how these features vary over time. This research has been used to provide commentary on the nature of the Code's change over time, which is characterized by an increase in size and in interdependence between sections. [38]

Visualization

Visualization of legal code, and of the relationships between various laws and decisions, is also an active topic in computational law. Visualizations allow both professionals and laypeople to see large-scale relationships and patterns that may be difficult to discern through standard legal analysis or empirical analysis.

Legal citation networks lend themselves to visualization, and many citation networks that are analyzed empirically also have sub-sections represented visually. [37] However, network visualization still poses technical problems. The density of connections between nodes, and in some cases the sheer number of nodes, can make a visualization incomprehensible to humans. A variety of methods can reduce the complexity of the displayed information, for example by defining semantic sub-groups within the network and then representing relationships between those groups rather than between every node. [43] This keeps the visualization human readable, but the reduction in complexity can obscure relationships. Despite this limitation, visualization of legal citation networks remains a popular field and practice.
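The grouping step can be sketched as follows. The idea, in the spirit of semantic substrates, is to collapse node-level edges into counts between groups; the case names, group labels, and edges here are invented for illustration.

```python
# Each case is assigned to a semantic group (e.g. an area of law).
GROUP = {"c1": "tax", "c2": "tax", "c3": "contract", "c4": "contract"}

# Node-level citation edges: (citing case, cited case).
EDGES = [("c1", "c3"), ("c2", "c3"), ("c2", "c1"), ("c4", "c2")]

def collapse(edges, group):
    """Replace edges between individual cases with weighted edges
    between their groups, so a dense network can be drawn as a handful
    of group-to-group links."""
    counts = {}
    for src, dst in edges:
        key = (group[src], group[dst])
        counts[key] = counts.get(key, 0) + 1
    return counts

print(collapse(EDGES, GROUP))
```

The trade-off noted above is visible even here: the collapsed view shows that tax cases cite contract cases twice, but no longer shows which cases are involved.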

Examples of tools

  1. OASIS Legal XML, UNDESA Akoma Ntoso, and CEN Metalex, which are standardizations created by legal and technical experts for the electronic exchange of legal data. [44] [45]
  2. Creative Commons, which provides custom-generated copyright licenses for Internet content.
  3. Legal analytics platforms, which combine large-scale legal data, domain expertise, and analysis tools to deliver business intelligence and benchmarking.
  4. Legal visualizations. Examples include Katz's map of Supreme Court decisions, [46] Starger's opinion lines for the commerce clause [47] and stare decisis, [48] and Surden's visualizations of copyright law. [49]
  5. PACER is an online repository of judicial rulings, maintained by the Federal Judiciary. [50]
  6. The Law Library of Congress maintains a comprehensive online repository of legal information, including legislation at the international, national, and state levels. [51]
  7. The Supreme Court Database is a comprehensive database containing detailed information about decisions made by the Supreme Court from 1946 to the present. [52]
  8. The United States Reports contain detailed information about every Supreme Court decision from 1791 to the near-present. [53]

References

  1. Genesereth, Michael. "What is Computational Law?".
  2. Genesereth, Michael. "Computational Law - The Cop in the Backseat".
  3. Surden, Harry (2012). "Computable Contracts". U.C. Davis Law Review 46: 629-676. SSRN 2216866.
  4. Kelso, Louis O. "Does the Law Need a Technological Revolution." 18 Rocky Mntn. L. Rev. 378 (1945-1946).
  5. Loevinger, Lee. "Jurimetrics--The Next Step Forward." 33 Minn. L. Rev. 455 (1948-1949).
  6. Loevinger, Lee. "Jurimetrics: The methodology of legal inquiry." Law and Contemporary Problems (1963): 5-35. At 8.
  7. "About Jurimetrics." About the Journal. American Bar Association Section of Science & Technology Law and the Center for Law, Science & Innovation, n.d. Web. 26 Apr. 2014. <http://www.law.asu.edu/jurimetrics/JurimetricsJournal/AbouttheJournal.aspx Archived 12 March 2015 at the Wayback Machine >.
  8. Ibid.
  9. Ibid.
  10. Mechanization of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory on 24th, 25th, 26th and 27th November 1958. London: Her Majesty's Stationery Office, 1959. Print.
  11. Niblett, Bryan. Computer Science and the Law: Inaugural Lecture of the Professor of Computer Science Delivered at the College on January 25, 1977. Swansea, Wales: U College of Swansea, 1977. 7-8. Print.
  12. "Automation in the Legal World." Mechanization of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory on 24th, 25th, 26th and 27th November 1958. London: Her Majesty's Stationery Office, 1959. 755-87. Print. At 759.
  13. "Automation in the Legal World." Mechanization of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory on 24th, 25th, 26th and 27th November 1958. London: Her Majesty's Stationery Office, 1959. 755-87. Print. At 768-769.
  14. Ibid. 768.
  15. Some Speculation about Artificial Intelligence and Legal Reasoning Bruce G. Buchanan and Thomas E. Headrick, Stanford Law Review, Vol. 23, No. 1 (Nov., 1970), pp. 40-62. At p. 40.
  16. Some Speculation about Artificial Intelligence and Legal Reasoning Bruce G. Buchanan and Thomas E. Headrick, Stanford Law Review, Vol. 23, No. 1 (Nov., 1970), pp. 40-62. At pp. 51-60.
  17. Legal Decisions and Information Systems. Jon Bing and Trygve Harvold. Oslo, Norway: Universitets Forlaget; 1977
  18. Niblett, Bryan, ed. Computer Science and Law: An Advanced Course. Cambridge University Press, 1980.
  19. See, e.g., L. Thorne McCarty, Reflections on Taxman: An Experiment in Artificial Intelligence and Legal Reasoning, 90 Harv. L. Rev. 837-895 (1977).
  20. McCarty, L. Thorne. "Artificial Intelligence and Law: How to get there from here." Ratio Juris 3.2 (1990): 189-200.
  21. Prakken, H.; Sartor, G. (October 2015). "Law and logic: a review from an argumentation perspective" (PDF). Artificial Intelligence . 227: 214–245. doi:10.1016/j.artint.2015.06.005. S2CID   4261497. Archived from the original (PDF) on 27 September 2020. Retrieved 27 September 2023.
  22. Sergot, M.J.; Sadri, F.; Kowalski, R.A.; Kriwaczek, F.; Hammond, P; Cory, H.T. (1986). "The British Nationality Act as a logic program" (PDF). Communications of the ACM . 29 (5): 370–386. doi:10.1145/5689.5920. S2CID   5665107.
  23. "New CodeX Prize Awarded to Computational Law Pioneers During 9th Annual CodeX FutureLaw Conference". Stanford Law School. 8 April 2021.
  24. Rissland, Edwina. "Artificial Intelligence and Legal Reasoning: A Discussion of the Field and Gardner's Book." AI Magazine 9.3 (1988): 45.
  25. See, e.g., Kades, Eric, "The Laws of Complexity & the Complexity of Laws: The Implications of Computational Complexity Theory for the Law" 49 Rutgers Law Review 403-484 (1997)
  26. Rissland, E. L., Ashley, K. D., & Loui, R. P. (2003). AI and Law: A fruitful synergy. Artificial Intelligence, 150(1-2), 1-15.
  27. Bench-Capon, Trevor, Michał Araszkiewicz, Kevin Ashley, Katie Atkinson, Floris Bex, Filipe Borges, Daniele Bourcier et al. "A history of AI and Law in 50 papers: 25 years of the international conference on AI and Law." Artificial Intelligence and Law 20, no. 3 (2012): 215-319.
  28. See "International Conference on Artificial Intelligence and Law (ICAIL)."International Conference on AI and Law (ICAIL). The DBLP Computer Science Bibliography, n.d. Web. 24 Apr. 2014. <http://www.informatik.uni-trier.de/~LEY/db/conf/icail/index.html>. The citation includes all past conferences and links to their contents. It appears that during the 1990s the number of papers presented, talks given, etc. increased significantly from the first two conferences held respectively in 1987 and 1989.
  29. See, e.g., this syllabus from Stanford for CS 204 Computers and Law. Genesereth, Michael R. "CS 204: Computers and Law." CS204: Computers and Law. Stanford University, n.d. Web. 23 Apr. 2014. <http://logic.stanford.edu/classes/cs204/>.
  30. "Stanford Computational Law." Stanford Computational Law. Stanford University, n.d. Web. 24 Apr. 2014. <http://complaw.stanford.edu/>
  31. "The Future of Law School Innovation (Conference @ColoradoLaw)."Computational Legal Studies. N.p., n.d. Web. 18 Apr. 2014. <http://computationallegalstudies.com/2014/04/17/the-future-of-law-school-innovation-conference-coloradolaw/>.[ permanent dead link ]
  32. Wolfram, Stephen. "Talking about the Computational Future at SXSW 2013—Stephen Wolfram Blog." Stephen Wolfram Blog RSS. N.p., 19 Mar. 2013. Web. 17 Apr. 2014. <http://blog.stephenwolfram.com/2013/03/talking-about-the-computational-future-at-sxsw-2013/>.
  33. Freeman Engstrom, David. "Can AI Be A Fair Judge In Court? Estonia Thinks So". Wired. Retrieved 20 October 2022.
  34. "CEN MetaLex - Open XML Interchange Format for Legal and Legislative Resources".
  35. Hoekstra, R. J.; Boer, A. W. F.; Winkels, R. G. F. (2003). "METAlex: An XML Standard for Legal Documents". Proceedings of the XML Europe Conference, London (UK). hdl:11245/1.425496 via University of Amsterdam Digital Academic Repository.
  36. The White House. Office of the Press Secretary. Executive Order -- Making Open and Machine Readable the New Default for Government Information. N.p., 9 May 2013. Web.
  37. Fowler, J. H., T. R. Johnson, J. F. Spriggs, S. Jeon, and P. J. Wahlbeck. "Network Analysis and the Law: Measuring the Legal Importance of Precedents at the U.S. Supreme Court." Political Analysis 15.3 (2006): 324-46. Print.
  38. Bommarito, Michael J., and Daniel M. Katz. "A Mathematical Approach to the Study of the United States Code." Physica A: Statistical Mechanics and Its Applications 389.19 (2010): 4195-200. Print.
  39. Bommarito, Michael J., Daniel Martin Katz, Jonathan L. Zelner, and James H. Fowler. "Distance Measures for Dynamic Citation Networks." Physica A: Statistical Mechanics and Its Applications 389.19 (2010): 4201-208. Print.
  40. Fowler, James H., and Sangick Jeon. "The Authority of Supreme Court Precedent." Social Networks 30.1 (2008): 16-30. Print.
  41. Bommarito, Michael J. "An Empirical Survey of the Population of US Tax Court Written Decisions." Va. Tax Rev. 30 (2010): 523.
  42. Ridi, Niccolò (1 June 2019). "The Shape and Structure of the 'Usable Past': An Empirical Analysis of the Use of Precedent in International Adjudication". Journal of International Dispute Settlement. 10 (2): 200–247. doi:10.1093/jnlids/idz007.
  43. Shneiderman, Ben, and Aleks Aris. "Network Visualization by Semantic Substrates." IEEE Transactions on Visualization and Computer Graphics 12.5 (2006): 733-40. Print.
  44. "Legal XML".
  45. Flatt, Amelie; Langner, Arne; Leps, Olof (2022). Model-Driven Development of Akoma Ntoso Application Profiles - A Conceptual Framework for Model-Based Generation of XML Subschemas (1st ed.). Heidelberg: Springer Nature. ISBN   978-3-031-14131-7.
  46. Katz, Daniel Martin (4 May 2010). "Visualizing Temporal Patterns in the United States Supreme Court's Network of Citations".
  47. Starger, Colin P. (30 June 2012). "A Visual Guide to NFIB v. Sebelius: Competing Commerce Clause Opinion Lines 1789-2012". Cardozo L. Rev. doi:10.2139/ssrn.2097161. SSRN   2097161 via SSRN.
  48. Starger, Colin P. (16 April 2012). "Expanding Stare Decisis: The Role of Precedent in the Unfolding Dialectic of Brady v. Maryland". Loyola of Los Angeles Law Review. 46: 77. SSRN   2040881 via SSRN.
  49. Surden, Harry. "Visualizing U.S. Copyright Law Using a Force Directed Graph".
  50. "Public Access to Court Electronic Records".
  51. "Law Library of Congress". Library of Congress .
  52. "The Supreme Court Database".
  53. "Bound Volumes".