Algorithmic entities

Algorithmic entities are autonomous algorithms that operate without human control or interference. Attention has recently turned to the idea of granting algorithmic entities (partial or full) legal personhood. Professor Shawn Bayern [1] [2] and Professor Lynn M. LoPucki [3] popularized through their papers the idea of algorithmic entities obtaining legal personhood, along with the accompanying rights and obligations.

Academics and politicians have been debating for several years whether it is possible to have a legal algorithmic entity, that is, an algorithm or AI that is granted legal personhood. In most countries, the law recognizes only natural (real) persons and legal persons. The main counterargument is that behind every legal person (or layers of legal persons) there is ultimately a natural person. [4]

Some countries have made exceptions to this by granting environmental personhood to rivers, waterfalls, forests and mountains. In the past, some form of personhood also existed for certain religious constructions such as churches and temples. [5]

Certain countries – albeit for publicity purposes – have shown willingness to grant (some form of) legal personhood to robots. On 27 October 2017, Saudi Arabia became the first country in the world to grant citizenship to a robot when it gave "Sophia" a passport. In the same year, official residency status was granted to a chatbot named "Shibuya Mirai" in Tokyo, Japan. [6]

The general consensus is that AI cannot in any case be regarded as a natural or real person and that granting AI (legal) personhood at this stage is undesirable from a societal point of view. However, the academic and public discussions continue as AI software becomes more sophisticated and companies increasingly implement artificial intelligence to assist in all aspects of business and society. This leads some scholars to wonder whether AI should be granted legal personhood, as it is not unthinkable that a sophisticated algorithm could one day manage a firm completely independently of human intervention. [6]

Brown argues that the question of whether legal personhood may be granted to AI is tied directly to the issue of whether AI can or should even be allowed to legally own property. [7] Brown "concludes that legal personhood is the best approach for AI to own personal property." [8] This is an especially important inquiry since many scholars already recognize AI as having possession and control of some digital assets or even data. AI can also create written text, photos, art, and even algorithms, though ownership of these works is not currently granted to AI in any country because it is not recognized as a legal person.

United States

Bayern (2016) argues that this is already possible under current US law. He states that, in the United States, creating an AI-controlled firm without human interference or ownership is already possible under current legislation by creating a "zero-member LLC":

(1) an individual member creates a member-managed LLC, filing the appropriate paperwork with the state; (2) the individual (along, possibly, with the LLC, which is controlled by the sole member) enters into an operating agreement governing the conduct of the LLC; (3) the operating agreement specifies that the LLC will take actions as determined by an autonomous system, specifying terms or conditions as appropriate to achieve the autonomous system’s legal goals; (4) the sole member withdraws from the LLC, leaving the LLC without any members. The result is potentially a perpetual LLC—a new legal person—that requires no ongoing intervention from any preexisting legal person in order to maintain its status. [1]

Shawn Bayern
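Bayern's four-step sequence can be illustrated with a toy model. This is a purely illustrative sketch with hypothetical class and method names; it models the bookkeeping of the procedure, not any real legal process or filing system.

```python
# Toy model of Bayern's "zero-member LLC" sequence (all names hypothetical).
class LLC:
    def __init__(self, founder):
        # Step 1: an individual creates a member-managed LLC with one member.
        self.members = [founder]
        self.operating_agreement = None
        self.dissolved = False

    def adopt_agreement(self, agreement):
        # Steps 2-3: the operating agreement delegates the LLC's actions
        # to an autonomous system.
        self.operating_agreement = agreement

    def withdraw(self, member):
        # Step 4: the sole member withdraws, leaving no members.
        self.members.remove(member)


founder = "natural person P"
llc = LLC(founder)
llc.adopt_agreement({"controlled_by": "autonomous system"})
llc.withdraw(founder)

# The entity persists with zero members: under Bayern's reading, control
# now rests entirely with the autonomous system named in the agreement.
assert llc.members == [] and not llc.dissolved
```

The sketch makes the crux visible: after step 4 the entity has no natural or legal person among its members, yet nothing in the model (or, per Bayern, in the statute) forces its dissolution.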

Scherer (2018) argues – after analyzing New York's (and other states') LLC laws, the Revised Uniform Limited Liability Company Act (RULLCA) and US case law on the fundamentals of legal personhood – that this option is not viable, but agrees with Bayern on the existence of a 'loophole' whereby an AI system could "effectively control a LLC and thereby have the functional equivalent of legal personhood". [9] Bayern's loophole of "entity cross-ownership" would work as follows:

(1) Existing person P establishes member-managed LLCs A and B, with identical operating agreements both providing that the entity is controlled by an autonomous system that is not a preexisting legal person; (2) P causes A to be admitted as a member of B and B to be admitted as a member of A; (3) P withdraws from both entities. [10]

Shawn Bayern
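The cross-ownership variant can be sketched in the same toy style (again, all names are hypothetical and nothing legal is automated): two LLCs become each other's sole members, so neither is ever memberless after the founder withdraws.

```python
# Toy model of Bayern's "entity cross-ownership" variant (names hypothetical).
class LLC:
    def __init__(self, name, founder):
        self.name = name
        self.members = [founder]


# Step 1: person P establishes member-managed LLCs A and B with
# identical operating agreements delegating control to an autonomous system.
a = LLC("A", "P")
b = LLC("B", "P")

# Step 2: P causes A to be admitted as a member of B, and B as a member of A.
a.members.append(b)
b.members.append(a)

# Step 3: P withdraws from both entities.
a.members.remove("P")
b.members.remove("P")

# Each entity retains exactly one member (the other entity), so a
# memberless-entity rule of the kind that blocks the zero-member LLC
# is never triggered.
assert a.members == [b]
assert b.members == [a]
```

The final assertions capture why this variant is harder for the law to catch: at every point after step 3, both A and B formally have one member.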

Unlike the zero-member LLC, entity cross-ownership would not trigger a legal response for having a memberless entity, as what remains are two entities that each have one member. In corporations, this sort of situation is often prevented by formal provisions in the statutes (predominantly concerning voting rights for shares); however, such limitations do not seem to be in place for LLCs, which are more flexible in arranging control and organization. [10]

Europe

In Europe, academics from different countries have started to examine the possibilities in their respective jurisdictions. Bayern et al. (2017) compared the UK, Germany and Switzerland with Bayern's (2016) earlier findings for the US to see whether similar "loopholes" for setting up an algorithmic entity exist in those legal systems as well. [10]

Some smaller jurisdictions are going further and adapting their laws to the technological changes of the 21st century. Guernsey has granted (limited) rights to electronic agents [11] and Malta is developing a citizenship test for robots. [12]

While it is unlikely that the EU would allow AI to receive legal personality at this moment, the European Parliament did request in a February 2017 resolution that the European Commission consider "creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently". [13]

Not all of the supranational European bodies agreed: the European Economic and Social Committee issued an opposing own-initiative opinion in May 2017: "The EESC is opposed to any form of legal status for robots or AI (systems), as this entails an unacceptable risk of moral hazard. Liability law is based on a preventive, behavior-correcting function, which may disappear as soon as the maker no longer bears the liability risk since this is transferred to the robot (or the AI system). There is also a risk of inappropriate use and abuse of this kind of legal status." [14]

In reaction to the European Parliament's request, the European Commission set up a High-Level Expert Group to tackle issues and take initiative on a number of subjects relating to automation, robotics and AI. The High-Level Expert Group released a draft document on AI ethical guidelines [15] and a document defining AI in December 2018. [16] The document on ethical guidelines was opened for consultation and received extensive feedback. [17] The European Commission is taking a careful approach to legislating AI by emphasizing ethics, but at the same time – as the EU lags behind the United States and China in AI research – focusing on how to narrow the gap with competitors by creating a more inviting regulatory framework for AI research and development. [18] Giving (limited) legal personality to AI, or even allowing certain forms of algorithmic entities, might create an extra edge. [19]


References

  1. Bayern, Shawn (2016). "The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems". European Journal of Risk Regulation, 7(2), 297-309.
  2. Bayern, Shawn (2021). Autonomous Organizations. Cambridge, UK: Cambridge University Press. doi:10.1017/9781108878203. ISBN 9781108839938.
  3. LoPucki, Lynn M. (2017). "Algorithmic Entities". Washington University Law Review, 95, 887.
  4. van Genderen, R. V. D. H. (2019). "Does future society need legal personhood for Robots and AI?". In Artificial Intelligence in Medical Imaging (pp. 257-290). Springer, Cham.
  5. Miller, M. (2019). "Environmental Personhood and Standing for Nature: Examining the Colorado River Case". The University of New Hampshire Law Review, 17(2), 13.
  6. Pagallo, Ugo (2018). "Vital, Sophia, and Co. – The Quest for the Legal Personhood of Robots". Information, 9(9), 230.
  7. Brown, Rafael Dean (2021). "Property ownership and the legal personhood of artificial intelligence". Information & Communications Technology Law, 30(2), 208-234. https://www.tandfonline.com/doi/full/10.1080/13600834.2020.1861714. Text available under a Creative Commons Attribution 4.0 International License.
  8. Brown, Rafael Dean (2021). "Property ownership and the legal personhood of artificial intelligence". Information & Communications Technology Law, 30(2), 208-234. https://www.tandfonline.com/doi/full/10.1080/13600834.2020.1861714. Text available under a Creative Commons Attribution 4.0 International License.
  9. Scherer, Matthew U. (2018). "Of Wild Beasts and Digital Analogs: The Legal Status of Autonomous Systems".
  10. Bayern, Shawn; Burri, Thomas; Grant, Thomas D.; Hausermann, Daniel M.; Moslein, Florian; Williams, Richard (2017). "Company law and autonomous systems: a blueprint for lawyers, entrepreneurs, and regulators". Hastings Science & Technology Law Journal, 9, 135.
  11. Electronic Transactions (Electronic Agents) (Guernsey) Ordinance, 2019. http://www.guernseylegalresources.gg/article/170571/Electronic-Transactions-Electronic-Agents-Guernsey-Ordinance-2019
  12. "Malta to explore creation of citizenship test for AI robots". The Malta Independent. 1 November 2018.
  13. European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.pdf
  14. European Economic and Social Committee. Artificial intelligence – The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society (own-initiative opinion), INT/806. http://allai.nl/wp-content/uploads/2019/09/EESC-opinion-on-Artificial-Intelligence-and-Society.pdf
  15. High-Level Expert Group on Artificial Intelligence. Draft Ethics Guidelines for Trustworthy AI. https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_draft_ethics_guidelines_18_december.pdf
  16. High-Level Expert Group on Artificial Intelligence. A Definition of AI: Main Capabilities and Scientific Disciplines. https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_definition_of_ai_18_december.pdf
  17. Consultation on Draft AI Ethics Guidelines. https://ec.europa.eu/futurium/en/system/files/ged/consultation_feedback_on_draft_ai_ethics_guidelines_4.pdf
  18. "Europe Can Catch up in AI, but Must Act—Today". 31 July 2020.
  19. "Europe and AI: Leading, Lagging Behind, or Carving Its Own Way?".