The Age of Artificial Intelligence, also known as the AI Era[1][2][3][4] or the Cognitive Age,[5][6] is a historical period characterized by the rapid development and widespread integration of artificial intelligence (AI) technologies across society, the economy, and daily life. Artificial intelligence is the development of computer systems that enable machines to learn and to make intelligent decisions in pursuit of defined goals.[7]
MIT physicist Max Tegmark was one of the first people to use the term "Age of Artificial Intelligence", in his 2017 non-fiction book Life 3.0: Being Human in the Age of Artificial Intelligence.[8][9]
This era is marked by significant advancements in machine learning, data processing, and the application of AI to solving complex problems and automating tasks previously thought to require human intelligence.[7][10]
British neuroscientist Karl Friston's work on the free energy principle is widely seen as foundational to the Age of Artificial Intelligence, providing a theoretical framework for developing AI systems that closely mimic biological intelligence.[11] The concept has gained traction in fields ranging from neuroscience to technology.[12] Many specialists place the era's beginnings in the early 2010s, coinciding with significant breakthroughs in deep learning and the increasing availability of big data, optical networking, and computational power.[13][14]
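In rough terms, the free energy principle holds that adaptive systems minimize variational free energy, an upper bound on the "surprise" of their sensory observations. As an illustrative sketch (a standard textbook formulation, not tied to any particular AI system), with observations o, hidden states s, a generative model p, and an approximate posterior q:

$$ F = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big] = D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] - \ln p(o) \;\geq\; -\ln p(o) $$

Because the Kullback–Leibler term is non-negative, minimizing F both improves the system's posterior approximation and reduces surprise, which is the sense in which perception and learning can be cast as inference.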
The foundations for the Age of Artificial Intelligence were laid during the late 20th century and the early 2000s. Key developments included advancements in computer science, neural network models, data storage, the Internet, and optical networking, enabling the rapid data transmission essential for AI progress.[15]
The transition to this new era is characterized by the ability of machines not only to process and store information but also to learn, adapt, and make decisions based on complex data analysis.[15][7] This shift is significantly affecting sectors including healthcare, finance, education, transportation, and entertainment.[7]
Tegmark's book, Life 3.0: Being Human in the Age of Artificial Intelligence, details a phase in which AI can independently design its hardware and software, transforming human existence. He highlights views from digital utopians, techno-skeptics, and advocates for ensuring AI benefits humanity.[9][16]
Leopold Aschenbrenner, a former member of OpenAI's Superalignment team, has focused on improving human decision-making with AI. In June 2024, he outlined a phased progression from data processing to augmented decision-making, autonomous action, and, ultimately, AI with holistic situational awareness.[17][18]
Sam Altman, co-founder and CEO of OpenAI, has predicted that AI will reach superintelligence within the year 2025.[19] The term superintelligence was popularized by philosopher Nick Bostrom, who defines it in his 2014 book Superintelligence: Paths, Dangers, Strategies as "any intellect that greatly exceeds the cognitive performance of humans".[13][19]
Altman outlined a phased approach to AI development that began with AI's early, narrow focus on specific tasks and then transitioned to general intelligence aligned with human values and safety considerations.[19] The next phase is collaboration between humanity and AI, and the final phase is superintelligence, in which AI must be controlled to ensure it benefits humanity as a whole.[20] Altman also outlines five levels of AI capability, progressing from generative AI through cognition, agentics, and scientific discovery to automated innovation.[21][22]
American computer scientist Ray Kurzweil predicts a path leading to what he refers to as "The Singularity" around 2045.[23] His phases include substantial growth in computing power, narrow AI, general AI (expected by 2029), and, lastly, the integration of human and machine intelligence.[24][25]