The Age of Artificial Intelligence, also known as the AI Era [1][2][3][4] or the Cognitive Age, [5][6] is a historical period characterized by the rapid development and widespread integration of artificial intelligence (AI) technologies across various aspects of society, the economy, and daily life. Artificial intelligence is the development of computer systems that enable machines to learn and make intelligent decisions in pursuit of defined goals. [7]
MIT physicist Max Tegmark was one of the first people to use the term "Age of Artificial Intelligence" in his 2017 non-fiction book Life 3.0: Being Human in the Age of Artificial Intelligence. [8][9]
This era is marked by significant advancements in machine learning, data processing, and the application of AI in solving complex problems and automating tasks previously thought to require human intelligence. [7] [10]
British neuroscientist Karl Friston's work on the free energy principle is widely seen as foundational to the Age of Artificial Intelligence, providing a theoretical framework for developing AI systems that closely mimic biological intelligence. [11] The concept has gained traction in various fields, including neuroscience and technology. [12] Many specialists place its beginnings in the early 2010s, coinciding with significant breakthroughs in deep learning and the increasing availability of big data, optical networking, and computational power. [13] [14]
Artificial intelligence has seen a significant increase in global research activity, business investment, and societal integration within the last decade. Computer scientist Andrew Ng has referred to AI as the "new electricity," drawing a parallel to how electricity transformed industries in the early 20th century, and suggesting that AI will have a similarly pervasive impact across all industries during the Age of Artificial Intelligence. [15]
The foundations for the Age of Artificial Intelligence were laid during the latter part of the 20th century and the early 2000s. Key developments included advancements in computer science, neural network models, data storage, the Internet, and optical networking, enabling rapid data transmission essential for AI progress. [16]
The transition to this new era is characterized by the ability of machines to process and store information, and also learn, adapt, and make decisions based on complex data analysis. [16] [7] This shift is significantly affecting various sectors, including healthcare, finance, education, transportation, and entertainment. [7]
Tegmark's book, Life 3.0: Being Human in the Age of Artificial Intelligence , details a phase in which AI can independently design its hardware and software, transforming human existence. He highlights views from digital utopians, techno-skeptics, and advocates for ensuring AI benefits humanity. [9] [17]
Leopold Aschenbrenner, formerly of OpenAI's Superalignment team, has focused on improving human decision-making with AI. In June 2024, he outlined a phased progression from data processing to augmented decision-making, autonomous actions, and, ultimately, AI with holistic situational awareness. [18][19]
Sam Altman, co-founder of OpenAI, has predicted that AI will reach superintelligence within 2025. [20] The term superintelligence was popularized by philosopher Nick Bostrom, who defines it as "any intellect that greatly exceeds the cognitive performance of humans" in his 2014 book Superintelligence: Paths, Dangers, Strategies. [13][20]
Altman outlined a phased approach to AI development that began with AI's early, narrow focus on specific tasks, which then transitioned to general intelligence that aligns with human values and safety considerations. [20] The next phase is a collaboration between humanity and AI, and the final phase is superintelligence, in which AI must be controlled to ensure it benefits humanity as a whole. [21] Altman also outlines five levels of AI capability, progressing from generative AI through cognition, agentics, and scientific discovery to automated innovation. [22][23]
American computer scientist and writer Ray Kurzweil predicts a path leading to what he refers to as "The Singularity" around 2045. [24] His phases include substantial growth in computing power, narrow AI, general AI (expected by 2029), and lastly, the integration of human and machine intelligence. [25] [26]
From 2019 to 2023, there was a significant jump in AI capabilities, exemplified by the progression from GPT-2 to GPT-4, which saw AI models advance from grade-school level to advanced high-school level capabilities. This progress is measured in orders of magnitude increases in computing power and algorithmic efficiencies. [18]
In 2017, researchers at Google introduced the Transformer architecture in a paper titled "Attention Is All You Need," authored by computer scientist Ashish Vaswani and others. [27] Transformers revolutionized natural language processing (NLP) and subsequently influenced various other AI domains. [28] Key features of Transformers include their attention mechanism, which allows the model to weigh the importance of different parts of the input data dynamically; their ability to process input data in parallel, significantly speeding up training and inference compared to recurrent neural networks; and their high scalability, allowing for the creation of increasingly large and powerful models. [29]
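The attention mechanism described above can be sketched in a few lines of code. The following is a minimal illustration of scaled dot-product attention, the core operation of the Transformer; the shapes, random seed, and function name are illustrative choices, not details taken from the paper's models.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weigh each position's value vector by its relevance to every query.

    Q, K, V: arrays of shape (seq_len, d_model).
    """
    d_k = K.shape[-1]
    # Similarity of every query to every key, scaled to stabilize the softmax.
    scores = Q @ K.T / np.sqrt(d_k)              # shape (seq_len, seq_len)
    # Softmax turns each row of scores into attention weights summing to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output: a weighted combination of value vectors for each position.
    return weights @ V                           # shape (seq_len, d_model)

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
x = rng.normal(size=(seq_len, d_model))
out = scaled_dot_product_attention(x, x, x)      # self-attention: Q = K = V
print(out.shape)
```

Because every row of the score matrix can be computed independently, the whole operation is a handful of matrix multiplications, which is what makes the architecture so parallelizable compared to recurrent networks.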
Transformers have been used to form the basis of models like BERT and GPT series, which have achieved state-of-the-art performance across a wide range of NLP tasks. [28] Transformers have also been adopted in other domains, including computer vision, audio processing, and even protein structure prediction. [30]
Transformers face limitations, including quadratic time and memory complexity with respect to sequence length, lack of built-in inductive biases for certain tasks, and the need for vast amounts of training data. [31] [32] [33] [34] The complexity of Transformer models also often makes it challenging to interpret their decision-making processes. [35]
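The quadratic memory cost can be illustrated with simple arithmetic: the attention score matrix alone holds seq_len × seq_len entries per head. The function name and sequence lengths below are hypothetical examples for illustration, not figures from the cited sources.

```python
def attention_matrix_megabytes(seq_len, bytes_per_entry=4):
    """Memory for one seq_len x seq_len float32 attention score matrix, in MB."""
    return seq_len * seq_len * bytes_per_entry / 1e6

for n in (1_000, 10_000, 100_000):
    print(n, attention_matrix_megabytes(n))
# Growing the sequence 10x grows the matrix 100x: 1k tokens -> 4 MB,
# 10k tokens -> 400 MB, 100k tokens -> 40,000 MB for a single matrix.
```

This quadratic growth, rather than raw compute for the feed-forward layers, is what motivates the sparse and linearized attention variants discussed below.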
To address these limitations, researchers are exploring various approaches, including alternative attention mechanisms (Reformer, Longformer, BigBird), sparse attention patterns, Mixture of Experts (MoE) approaches, and retrieval-augmented models. [36] Researchers are also exploring neuro-symbolic AI and multimodal models to create more versatile and capable AI systems. [37] [38]
Optical networking is fundamental to AI system functioning. Optical fiber and laser technologies, such as dense wavelength-division multiplexing (DWDM), power all the optical networks that enable the collection, updating, processing, and transmission of the vast datasets used for training AI models. Data centers store the processed data required by users of large language models (LLMs) and other AI applications. [39][40]
By 2030, data transmission volumes are expected to increase by more than ten times compared to 2020 levels. This growth is accompanied by advancements in data processing technologies, including the development of quantum-sensing technologies and massive data centers. [41]
Generative AI has emerged as a transformative technology in the Age of Artificial Intelligence. [42] As of 2025, 75% of organizations surveyed by McKinsey & Company reported they regularly use generative AI, which is an increase of 10% from the previous year. Companies have largely worked with the 10% of data that is structured, including transactions, SKUs, and product specifications. However, generative AI is now providing access to the remaining 90% of unstructured data, which includes videos, images, chats, emails, and product reviews. [41]
The Age of Artificial Intelligence is expected to have profound economic implications, as AI could contribute up to $19.9 trillion to the global economy by 2030. [43] This economic transformation is anticipated from increased productivity, automation of cognitive tasks, and the creation of new industries and job categories. [44]
The rise of AI and automation technologies is leading to significant changes in the workforce. While there are concerns about job displacement, many specialists argue that AI will create new job categories and drive productivity growth. New roles such as prompt engineers, AI ethics stewards, and unstructured-data specialists are emerging. [41]
AI-powered drug discovery could generate up to $70 billion in savings for the pharmaceutical industry by 2028. [45]
The global autonomous vehicle market is projected to reach $556.67 billion by 2026. [46] Leveraging mobile network infrastructure, AI-powered traffic management systems can reduce urban travel times by up to 20%. [47]