Regulation of algorithms

Regulation of algorithms, or algorithmic regulation, is the creation of laws, rules and public sector policies for the promotion and regulation of algorithms, particularly in artificial intelligence and machine learning. [1] [2] [3] For the subset of AI algorithms, the term regulation of artificial intelligence is used. The regulatory and policy landscape for artificial intelligence (AI) is an emerging issue in jurisdictions globally, including in the European Union. [4] Regulation of AI is considered necessary both to encourage AI and to manage associated risks, but challenging. [5] Another emerging topic is the regulation of blockchain algorithms, particularly the use of smart contracts, which is discussed alongside regulation of AI algorithms. [6] Many countries have enacted regulation of high-frequency trading, which is shifting due to technological progress into the realm of AI algorithms. [citation needed]

The motivation for regulation of algorithms is the apprehension of losing control over algorithms whose impact on human life is increasing. Multiple countries have already introduced regulations for automated credit score calculation: a right to explanation is mandatory for those algorithms. [7] [8] For example, the IEEE has begun developing a new standard to explicitly address ethical issues and the values of potential future users. [9] Concerns about bias, transparency, and ethics have emerged with respect to the use of algorithms in domains ranging from criminal justice [10] to healthcare [11]; many fear that artificial intelligence could replicate existing social inequalities along lines of race, class, gender, and sexuality.

Regulation of artificial intelligence

Public discussion

In 2016, Joy Buolamwini founded Algorithmic Justice League after a personal experience with biased facial detection software in order to raise awareness of the social implications of artificial intelligence through art and research. [12]

In 2017 Elon Musk advocated regulation of algorithms in the context of the existential risk from artificial general intelligence. [13] [14] [15] According to NPR, the Tesla CEO was "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believed the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." [13]

In response, some politicians expressed skepticism about the wisdom of regulating a technology still in development. [14] Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argued that artificial intelligence is in its infancy and that it is too early to regulate the technology. [15] Instead of trying to regulate the technology itself, some scholars suggest developing common norms, including requirements for the testing and transparency of algorithms, possibly in combination with some form of warranty. [16] One suggestion has been the development of a global governance board to regulate AI development. [17] In 2020, the European Union published its draft strategy paper for promoting and regulating AI. [18]

Algorithmic tacit collusion is a legally dubious antitrust practice committed by means of algorithms, which courts are not able to prosecute. [19] This danger concerns scientists and regulators in the EU, the US and beyond. [19] European Commissioner Margrethe Vestager described an early example of algorithmic tacit collusion in her speech on "Algorithms and competition" on March 16, 2017: [20]

"A few years ago, two companies were selling a textbook called The Making of a Fly. One of those sellers used an algorithm which essentially matched its rival’s price. That rival had an algorithm which always set a price 27% higher than the first. The result was that prices kept spiralling upwards, until finally someone noticed what was going on, and adjusted the price manually. By that time, the book was selling – or rather, not selling – for 23 million dollars a copy."
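The feedback loop Vestager describes can be sketched with two naive repricing rules. The 27% markup and the price-matching behaviour come from the anecdote; the starting prices and round count below are hypothetical:

```python
def reprice(price_a, price_b, rounds):
    """Simulate two naive repricing bots: seller A matches its rival's
    price, while seller B always sets a price 27% above seller A."""
    history = []
    for _ in range(rounds):
        price_a = price_b          # A matches its rival's price
        price_b = 1.27 * price_a   # B prices 27% higher than A
        history.append((price_a, price_b))
    return history

# Hypothetical starting prices; each round compounds a 27% increase,
# so prices grow exponentially until a human intervenes.
spiral = reprice(20.0, 25.0, rounds=30)
print(spiral[-1])
```

Because neither rule references costs or demand, each round multiplies B's price by 1.27; after a few dozen automated repricings the price reaches absurd levels, as in the actual incident.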

In 2018, the Netherlands deployed SyRI (Systeem Risico Indicatie), an algorithmic system to detect citizens perceived to be at high risk of committing welfare fraud, which quietly flagged thousands of people to investigators. [21] This caused a public protest. The District Court of The Hague shut down SyRI, referencing Article 8 of the European Convention on Human Rights (ECHR). [22]

In 2020, algorithms used to assign exam grades to students in the UK sparked open protest under the banner "Fuck the algorithm". [23] The protest was successful and the grades were withdrawn. [24]

Implementation

AI law and regulations can be divided into three main topics, namely governance of autonomous intelligence systems, responsibility and accountability for the systems, and privacy and safety issues. [5] The development of public sector strategies for management and regulation of AI has been increasingly deemed necessary at the local, national, [25] and international levels [18] and in fields from public service management [26] to law enforcement, [18] the financial sector, [25] robotics, [27] the military, [28] and international law. [29] [30] There are many concerns that there is not enough visibility and monitoring of AI in these sectors. [31] In the United States financial sector, for example, there have been calls for the Consumer Financial Protection Bureau to more closely examine source code and algorithms when conducting audits of financial institutions' non-public data. [32]

In the United States, on January 7, 2019, following an Executive Order on 'Maintaining American Leadership in Artificial Intelligence', the White House's Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for United States agencies when deciding whether and how to regulate AI. [33] [34] In response, the National Institute of Standards and Technology has released a position paper, [35] the National Security Commission on Artificial Intelligence has published an interim report, [36] and the Defense Innovation Board has issued recommendations on the ethical use of AI. [37]

In April 2016, for the first time in more than two decades, the European Parliament adopted a set of comprehensive regulations for the collection, storage, and use of personal information: the General Data Protection Regulation (GDPR). [6] The GDPR's policy on the right of citizens to receive an explanation for algorithmic decisions highlights the pressing importance of human interpretability in algorithm design. [38]

In 2016, China published a position paper questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue, [29] and leading to proposals for global regulation. [39] In the United States, steering on regulating security-related AI is provided by the National Security Commission on Artificial Intelligence. [40]

In 2017, the U.K. Vehicle Technology and Aviation Bill imposed liability on the owner of an uninsured automated vehicle when driving itself, and made provisions for cases where the owner has made "unauthorized alterations" to the vehicle or failed to update its software. Further ethical issues arise when, for example, a self-driving car swerves to avoid a pedestrian and causes a fatal accident. [41]

In 2021, the European Commission proposed the Artificial Intelligence Act. [42]

Algorithm certification

Algorithm certification is emerging as a method of regulating algorithms. It involves auditing whether the algorithm used throughout its life cycle 1) conforms to the specified requirements (e.g., for correctness, completeness, consistency, and accuracy); 2) satisfies applicable standards, practices, and conventions; and 3) solves the right problem (e.g., correctly models physical laws) and satisfies the intended use and user needs in the operational environment. [9]

Regulation of blockchain algorithms

Blockchain systems provide transparent and immutable records of transactions and thereby contradict the goal of the European GDPR, which is to give individuals full control over their private data. [43] [44]

By implementing the Decree on Development of Digital Economy, Belarus became the first country to legalize smart contracts. Belarusian lawyer Denis Aleinikov is considered the author of the smart-contract legal concept introduced by the decree. [45] [46] [47] There are strong arguments that existing US state laws already provide a sound basis for the enforceability of smart contracts; nevertheless, Arizona, Nevada, Ohio and Tennessee have amended their laws specifically to allow for the enforceability of blockchain-based contracts. [48]

Regulation of robots and autonomous algorithms

There have been proposals to regulate robots and autonomous algorithms. These include:

In 1942, author Isaac Asimov addressed regulation of algorithms by introducing the fictional Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. [49]

The main alternative to regulation is a ban, and the banning of algorithms is presently highly unlikely. However, in Frank Herbert's Dune universe, "thinking machines" is a collective term for artificial intelligences, which were completely destroyed and banned after a revolt known as the Butlerian Jihad: [50]

JIHAD, BUTLERIAN: (see also Great Revolt) — the crusade against computers, thinking machines, and conscious robots begun in 201 B.G. and concluded in 108 B.G. Its chief commandment remains in the O.C. Bible as "Thou shalt not make a machine in the likeness of a human mind." [51]


References

  1. "Algorithms have gotten out of control. It's time to regulate them". theweek.com. 3 April 2019. Archived from the original on 22 March 2020. Retrieved 22 March 2020.
  2. Martini, Mario. "FUNDAMENTALS OF A REGULATORY SYSTEM FOR ALGORITHM-BASED PROCESSES" (PDF). Retrieved 22 March 2020.
  3. "Rise and Regulation of Algorithms". Berkeley Global Society. Archived from the original on 22 March 2020. Retrieved 22 March 2020.
  4. Law Library of Congress (U.S.). Global Legal Research Directorate, issuing body. Regulation of artificial intelligence in selected jurisdictions. OCLC   1110727808.
  5. Wirtz, Bernd W.; Weyerer, Jan C.; Geyer, Carolin (2018-07-24). "Artificial Intelligence and the Public Sector—Applications and Challenges". International Journal of Public Administration. 42 (7): 596–615. doi:10.1080/01900692.2018.1498103. ISSN 0190-0692. S2CID 158829602. Archived from the original on 2020-08-18. Retrieved 2024-09-25.
  6. Fitsilis, Fotios (2019). Imposing Regulation on Advanced Algorithms. Springer International Publishing. ISBN   978-3-030-27978-3.
  7. Consumer Financial Protection Bureau, §1002.9(b)(2)
  8. Edwards, Lilian; Veale, Michael (2018). "Enslaving the Algorithm: From a 'Right to an Explanation' to a 'Right to Better Decisions'?" (PDF). IEEE Security & Privacy. 16 (3): 46–54. doi:10.1109/MSP.2018.2701152. S2CID   4049746. SSRN   3052831. Archived (PDF) from the original on 2020-10-21. Retrieved 2020-08-14.
  9. Treleaven, Philip; Barnett, Jeremy; Koshiyama, Adriano (February 2019). "Algorithms: Law and Regulation". Computer. 52 (2): 32–40. doi:10.1109/MC.2018.2888774. ISSN 0018-9162. S2CID 85500054. Archived from the original on 2024-08-17. Retrieved 2024-09-25.
  10. Hao, Karen (January 21, 2019). "AI is sending people to jail—and getting it wrong". MIT Technology Review. Archived from the original on 2024-09-25. Retrieved 2021-01-24.
  11. Ledford, Heidi (2019-10-24). "Millions of black people affected by racial bias in health-care algorithms". Nature. 574 (7780): 608–609. Bibcode:2019Natur.574..608L. doi:10.1038/d41586-019-03228-6. PMID   31664201. S2CID   204943000. Archived from the original on 2024-09-23. Retrieved 2024-09-25.
  12. Lufkin, Bryan (22 July 2019). "Algorithmic justice". BBC Worklife. Archived from the original on 25 September 2024. Retrieved 31 December 2020.
  13. Domonoske, Camila (July 17, 2017). "Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk'". NPR. Archived from the original on 17 August 2017. Retrieved 27 November 2017.
  14. Gibbs, Samuel (17 July 2017). "Elon Musk: regulate AI to combat 'existential threat' before it's too late". The Guardian. Archived from the original on 6 June 2020. Retrieved 27 November 2017.
  15. Kharpal, Arjun (7 November 2017). "A.I. is in its 'infancy' and it's too early to regulate it, Intel CEO Brian Krzanich says". CNBC. Archived from the original on 22 March 2020. Retrieved 27 November 2017.
  16. Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004. S2CID   158433736.
  17. Boyd, Matthew; Wilson, Nick (2017-11-01). "Rapid developments in Artificial Intelligence: how might the New Zealand government respond?". Policy Quarterly. 13 (4). doi: 10.26686/pq.v13i4.4619 . ISSN   2324-1101.
  18. White Paper: On Artificial Intelligence – A European approach to excellence and trust (PDF). Brussels: European Commission. 2020. p. 1. Archived (PDF) from the original on 2020-02-20. Retrieved 2020-03-27.
  19. Ezrachi, A.; Stucke, M. E. (13 March 2020). "Sustainable and unchallenged algorithmic tacit collusion". Northwestern Journal of Technology & Intellectual Property. 17 (2). ISSN 1549-8271.
  20. Vestager, Margrethe (2017). "Algorithms and competition" (Bundeskartellamt 18th Conference on Competition). European Commission. Archived from the original on 2019-11-29. Retrieved 1 May 2021.
  21. Simonite, Tom (February 7, 2020). "Europe Limits Government by Algorithm. The US, Not So Much". Wired. Archived from the original on 11 April 2020. Retrieved 11 April 2020.
  22. Rechtbank Den Haag 5 February 2020, C-09-550982-HA ZA 18-388 (English), ECLI:NL:RBDHA:2020:1878
  23. "Skewed Grading Algorithms Fuel Backlash Beyond the Classroom". Wired. Archived from the original on 20 September 2020. Retrieved 26 September 2020.
  24. Reuter, Markus (17 August 2020). "Fuck the Algorithm - Jugendproteste in Großbritannien gegen maschinelle Notenvergabe erfolgreich". netzpolitik.org (in German). Archived from the original on 19 September 2020. Retrieved 3 October 2020.
  25. Bredt, Stephan (2019-10-04). "Artificial Intelligence (AI) in the Financial Sector—Potential and Public Strategies". Frontiers in Artificial Intelligence. 2: 16. doi:10.3389/frai.2019.00016. ISSN 2624-8212. PMC 7861258. PMID 33733105.
  26. Wirtz, Bernd W.; Müller, Wilhelm M. (2018-12-03). "An integrated artificial intelligence framework for public management". Public Management Review. 21 (7): 1076–1100. doi:10.1080/14719037.2018.1549268. ISSN   1471-9037. S2CID   158267709.
  27. Iphofen, Ron; Kritikos, Mihalis (2019-01-03). "Regulating artificial intelligence and robotics: ethics by design in a digital society". Contemporary Social Science. 16 (2): 170–184. doi:10.1080/21582041.2018.1563803. ISSN   2158-2041. S2CID   59298502.
  28. United States. Defense Innovation Board. AI principles : recommendations on the ethical use of artificial intelligence by the Department of Defense. OCLC   1126650738.
  29. "Robots with Guns: The Rise of Autonomous Weapons Systems". Snopes.com. 21 April 2017. Archived from the original on 25 September 2024. Retrieved 24 December 2017.
  30. Bento, Lucas (2017). "No Mere Deodands: Human Responsibilities in the Use of Violent Intelligent Systems Under Public International Law". Harvard Scholarship Depository. Archived from the original on 2020-03-23. Retrieved 2019-09-14.
  31. MacCarthy, Mark (9 March 2020). "AI Needs More Regulation, Not Less". Brookings. Archived from the original on 24 April 2023. Retrieved 25 September 2024.
  32. Van Loo, Rory (July 2018). "Technology Regulation by Default: Platforms, Privacy, and the CFPB". Georgetown Law Technology Review. 2 (1): 542–543. Archived from the original on 2021-01-17. Retrieved 2024-09-25.
  33. "AI Update: White House Issues 10 Principles for Artificial Intelligence Regulation". Inside Tech Media. 2020-01-14. Archived from the original on 2020-03-25. Retrieved 2020-03-25.
  34. Memorandum for the Heads of Executive Departments and Agencies (PDF). Washington, D.C.: White House Office of Science and Technology Policy. 2020. Archived (PDF) from the original on 2020-03-18. Retrieved 2020-03-27.
  35. U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools (PDF). National Institute of Science and Technology. 2019. Archived (PDF) from the original on 2020-03-25. Retrieved 2020-03-27.
  36. NSCAI Interim Report for Congress. The National Security Commission on Artificial Intelligence. 2019. Archived from the original on 2021-09-10. Retrieved 2020-03-27.
  37. AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense (PDF). Washington, DC: Defense Innovation Board. 2020. Archived (PDF) from the original on 2020-01-14. Retrieved 2020-03-27.
  38. Goodman, Bryce; Flaxman, Seth (2017-10-02). "European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation"". AI Magazine. 38 (3): 50–57. arXiv: 1606.08813 . doi:10.1609/aimag.v38i3.2741. ISSN   2371-9621. S2CID   7373959. Archived from the original on 2022-12-24. Retrieved 2024-09-25.
  39. Baum, Seth (2018-09-30). "Countering Superintelligence Misinformation". Information. 9 (10): 244. doi: 10.3390/info9100244 . ISSN   2078-2489.
  40. Stefanik, Elise M. (2018-05-22). "H.R.5356 – 115th Congress (2017–2018): National Security Commission Artificial Intelligence Act of 2018". www.congress.gov. Archived from the original on 2020-03-23. Retrieved 2020-03-13.
  41. "The Highway Code - Introduction - Guidance - GOV.UK". www.gov.uk. Archived from the original on 2022-11-30. Retrieved 2022-11-30.
  42. "Why the world needs a Bill of Rights on AI". Financial Times. 2021-10-18. Archived from the original on 2024-09-25. Retrieved 2023-03-19.
  43. "A recent report issued by the Blockchain Association of Ireland has found there are many more questions than answers when it comes to GDPR". siliconrepublic.com. 23 November 2017. Archived from the original on 5 March 2018. Retrieved 5 March 2018.
  44. "Blockchain and the General Data Protection Regulation - Think Tank". www.europarl.europa.eu (in German). Archived from the original on 4 August 2020. Retrieved 28 March 2020.
  45. Makhovsky, Andrei (December 22, 2017). "Belarus adopts crypto-currency law to woo foreign investors". Reuters. Archived from the original on February 9, 2019. Retrieved April 21, 2020.
  46. "Belarus Enacts Unique Legal Framework for Crypto Economy Stakeholders" (PDF). Deloitte. December 27, 2017. Archived (PDF) from the original on May 21, 2020. Retrieved April 21, 2020.
  47. Patricolo, Claudia (December 26, 2017). "ICT Given Huge Boost in Belarus". Emerging Europe. Archived from the original on September 19, 2020. Retrieved April 21, 2020.
  48. Levi, Stuart; Lipton, Alex; Vasile, Christina (2020). "Blockchain Laws and Regulations | 13 Legal issues surrounding the use of smart contracts | GLI". GLI – Global Legal Insights. Archived from the original on 25 September 2020. Retrieved 21 April 2020.
  49. Asimov, Isaac (1950). "Runaround". I, Robot (The Isaac Asimov Collection ed.). New York City: Doubleday. p. 40. ISBN   978-0-385-42304-5. This is an exact transcription of the laws. They also appear in the front of the book, and in both places there is no "to" in the 2nd law.
  50. Herbert, Frank (1969). Dune Messiah.
  51. Herbert, Frank (1965). "Terminology of the Imperium: JIHAD, BUTLERIAN". Dune . Philadelphia, Chilton Books.