Open-source artificial intelligence refers to AI systems that are freely available to use, study, modify, and share. [1] These attributes extend to each of the system's components, including datasets, code, and model parameters, promoting a collaborative and transparent approach to AI development. [1] Free and open-source software (FOSS) licenses, such as the Apache License, MIT License, and GNU General Public License, outline the terms under which open-source artificial intelligence can be accessed, modified, and redistributed. [2]
The open-source model provides widespread access to new AI technologies, allowing individuals and organizations of all sizes to participate in AI research and development. [3] [4] This approach supports collaboration and allows for shared advancements within the field of artificial intelligence. [3] [4] In contrast, closed-source artificial intelligence is proprietary, restricting access to the source code and internal components. [3] Only the owning company or organization can modify or distribute a closed-source artificial intelligence system, prioritizing control and protection of intellectual property over external contributions and transparency. [3] [5] [6] Companies often develop closed products in an attempt to keep a competitive advantage in the marketplace. [6] However, some experts suggest that open-source AI tools may have a development advantage over closed-source products and have the potential to overtake them in the marketplace. [6] [4]
Popular open-source artificial intelligence project categories include large language models, machine translation tools, and chatbots. [7] To produce open-source artificial intelligence (AI) resources, software developers must be able to trust the various other open-source components they build on. [8] [9] Open-source AI has been speculated to carry increased risk compared to closed-source AI, as bad actors may remove the safety protocols of public models at will. [4] Conversely, closed-source AI has been speculated to carry its own increased risks due to dependence, privacy issues, opaque algorithms, corporate control, and limited availability, while potentially slowing beneficial innovation. [10] [11] [12]
There is also debate about the openness of AI systems, since openness comes in degrees [13] – an article in Nature suggests that some systems presented as open, such as Meta's Llama 3, "offer little more than an API or the ability to download a model subject to distinctly non-open use restrictions". Presenting such systems as open has been criticized as "openwashing", [14] since they are better understood as closed. [11] Several works and frameworks assess the openness of AI systems, [15] [13] and the Open Source Initiative has published a definition of what constitutes open-source AI. [16] [17] [18]
The history of open-source artificial intelligence (AI) is intertwined with both the development of AI technologies and the growth of the open-source software movement. [19] Open-source AI has evolved significantly over the past few decades, with contributions from various academic institutions, research labs, tech companies, and independent developers. [20] This section explores the major milestones in the development of open-source AI, from its early days to its current state.
The concept of AI dates back to the mid-20th century, when computer scientists like Alan Turing and John McCarthy laid the groundwork for modern AI theories and algorithms. [21] Early AI research focused on developing symbolic reasoning systems and rule-based expert systems. [22] During this period, the idea of open-source software was beginning to take shape, with pioneers like Richard Stallman advocating for free software as a means to promote collaboration and innovation in programming. [23]
The Free Software Foundation, founded in 1985 by Stallman, was one of the first major organizations to promote the idea of software that could be freely used, modified, and distributed. The ideas from this movement eventually influenced the development of open-source AI, as more developers began to see the potential benefits of open collaboration in software creation, including AI models and algorithms. [24] [25]
In the 1990s, open-source software began to gain more traction as the internet facilitated collaboration across geographical boundaries. [26] The rise of machine learning and statistical methods also led to the development of more practical AI tools. However, it was not until the early 2000s that open-source AI began to take off, with the release of foundational libraries and frameworks that were available for anyone to use and contribute to. [27]
One of the early open-source AI frameworks was Scikit-learn, released in 2007. [28] Scikit-learn became one of the most widely used libraries for machine learning due to its ease of use and robust functionality, providing implementations of common algorithms like regression, classification, and clustering. [29] [30] Around the same time, other open-source machine learning libraries such as OpenCV (2000), Torch (2002), and Theano (2007) were developed by tech companies and research labs, further cementing the growth of open-source AI.
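The kind of clustering algorithm that scikit-learn packages behind a one-line API (`sklearn.cluster.KMeans`) can be illustrated with a stripped-down k-means sketch in plain Python. This is a hypothetical toy, not scikit-learn's actual implementation, which adds smarter initialization and convergence checks:

```python
def kmeans(points, k, iters=20):
    """Minimal 2-D k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    centroids = list(points[:k])        # naive init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centroids[c][0]) ** 2
                                      + (p[1] - centroids[c][1]) ** 2)
            clusters[nearest].append(p)
        centroids = [(sum(p[0] for p in cl) / len(cl),
                      sum(p[1] for p in cl) / len(cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Two well-separated blobs of three points each.
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
centroids, clusters = kmeans(data, k=2)
```

On this toy data the centroids converge to the two blob means within a few iterations.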
The 2010s marked a significant shift in the development of AI, driven by the advent of deep learning and neural networks. [31] Open-source deep learning frameworks such as TensorFlow (developed by Google Brain) and PyTorch (developed by Facebook's AI Research Lab) revolutionized the AI landscape by making complex deep learning models more accessible. [32] [33] These frameworks allowed researchers and developers to build and train sophisticated neural networks for tasks like image recognition, natural language processing (NLP), and autonomous driving. [34] [35]
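What these frameworks automate can be seen in miniature: training reduces to computing gradients of a loss function and nudging parameters against them. The sketch below fits a single-weight model with a hand-derived gradient (a toy illustration on hypothetical data; TensorFlow and PyTorch compute such gradients automatically, for models with millions of parameters):

```python
# Fit y = w * x to toy data generated with w = 2, by gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0      # initial weight
lr = 0.05    # learning rate
for _ in range(200):
    # Gradient of the mean squared error 0.5 * (w*x - y)^2 with respect to w
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad   # gradient descent step
```

Deep learning frameworks generalize this loop: automatic differentiation derives the gradient, and optimizers such as SGD or Adam apply the update.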
During this time, AI models like Google's BERT (2018) for natural language processing became widely available in open-source form, while OpenAI's GPT series (2018–present) popularized large-scale text generation, although only GPT-2 was ultimately released openly. These models demonstrated the potential for AI to revolutionize industries by improving the understanding and generation of human language, sparking further interest in open-source AI development.
The 2020s saw the continued growth and maturation of open-source AI. Companies and research organizations began to release large-scale pre-trained models to the public, which led to a boom in both commercial and academic applications of AI. Notably, Hugging Face, a company focused on NLP, became a hub for the development and distribution of state-of-the-art AI models, including open-source transformer models such as GPT-2 and BERT. [36]
When it announced GPT-2, OpenAI originally planned to keep the source code of its models private, citing concerns about malicious applications. [37] After facing public backlash, however, OpenAI released the source code for GPT-2 to GitHub three months after its release. [37] OpenAI has not publicly released the source code or pretrained weights for the GPT-3 or GPT-4 models, though developers can integrate their functionality through the OpenAI API. [38] [39]
The rise of large language models (LLMs) and generative AI, such as OpenAI's GPT-3 (2020), further propelled the demand for open-source AI frameworks. [40] [41] These models have been used in a variety of applications, including chatbots, content creation, and code generation, demonstrating the broad capabilities of AI systems. [42]
The LF AI & Data Foundation, a project under the Linux Foundation, has significantly influenced the open-source AI landscape by fostering collaboration and innovation, and supporting open-source projects. [43] By providing a neutral platform, LF AI & Data unites developers, researchers, and organizations to build cutting-edge AI and data solutions, addressing critical technical challenges and promoting ethical AI development. [44]
As of October 2024, the foundation comprised 77 member companies from North America, Europe, and Asia, and hosted 67 open-source software (OSS) projects contributed by a diverse array of organizations, including Silicon Valley giants such as Nvidia, Amazon, Intel, and Microsoft. [45] Other large companies like Alibaba, TikTok, AT&T, and IBM have also contributed. [45] Research organizations such as NYU, the University of Michigan AI labs, Columbia University, and Penn State are also associate members of the LF AI & Data Foundation. [45]
In September 2022, the PyTorch Foundation was established to oversee the widely used PyTorch deep learning framework, which was donated by Meta. [46] The foundation's mission is to drive the adoption of AI tools by fostering and sustaining an ecosystem of open-source, vendor-neutral projects integrated with PyTorch, and to democratize access to state-of-the-art tools, libraries, and other components, making these innovations accessible to everyone. [47]
The PyTorch Foundation also separates business and technical governance, with the PyTorch project maintaining its technical governance structure, while the foundation handles funding, hosting expenses, events, and management of assets such as the project's website, GitHub repository, and social media accounts, ensuring open community governance. [47] Upon its inception, the foundation formed a governing board comprising representatives from its initial members: AMD, Amazon Web Services, Google Cloud, Hugging Face, IBM, Intel, Meta, Microsoft, and NVIDIA. [47]
In 2024, Meta released a collection of large AI models, including Llama 3.1 405B, comparable to the most advanced closed-source models. [48] The company claimed its approach to AI would be open-source, differing from other major tech companies. [48] The Open Source Initiative and others stated that Llama is not open-source despite Meta describing it as open-source, due to Llama's software license prohibiting it from being used for some purposes. [49] [50] [51]
In parallel with the development of AI models, there has been growing interest in ensuring ethical standards in open-source AI development. [52] [53] This includes addressing concerns such as bias, privacy, and the potential for misuse of AI systems. [52] [53] As a result, frameworks for responsible AI development and the creation of guidelines for documenting ethical considerations, such as the Model Card concept introduced by Google, have gained popularity, though studies show the continued need for their adoption to avoid unintended negative outcomes. [54] [55] [56]
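As a sketch of what such documentation captures, a model card can be represented as structured data following the section headings proposed in the Model Cards framework (model details, intended use, metrics, ethical considerations, caveats). All names and values below are hypothetical:

```python
# Hypothetical model card, structured after the sections proposed in the
# Model Cards framework; every field here is illustrative, not a real model.
model_card = {
    "model_details": {"name": "example-sentiment-classifier", "version": "1.0"},
    "intended_use": "Sentiment analysis of English product reviews; "
                    "not intended for medical or legal decision-making.",
    "training_data": "A hypothetical corpus of labeled product reviews.",
    "metrics": {"accuracy": 0.91},  # illustrative figure only
    "ethical_considerations": "Not evaluated on non-native English text; "
                              "outputs may reflect biases in the training data.",
    "caveats_and_recommendations": "Re-evaluate before deploying to new domains.",
}
```

In practice such cards are published as human-readable documents alongside model releases, for example on model-hosting platforms.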
Open-source artificial intelligence has brought widespread accessibility to machine learning (ML) tools, enabling developers to implement and experiment with ML models across various industries. Scikit-learn, TensorFlow, and PyTorch are three of the most widely used open-source ML libraries, each contributing unique capabilities to the field. [57] Scikit-learn is known for its robust toolkit, offering accessible functions for classification, regression, clustering, and dimensionality reduction. [58] This library simplifies the ML pipeline from data preprocessing to model evaluation, making it ideal for users with varying levels of expertise. [58] TensorFlow, initially developed by Google, supports large-scale ML models, especially in production environments requiring scalability, such as healthcare, finance, and retail. [59] PyTorch, favored for its flexibility and ease of use, has been particularly popular in research and academia, supporting everything from basic ML models to advanced deep learning applications, and it is now widely used in industry as well. [60]
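The preprocessing-to-evaluation pipeline mentioned above can be sketched in plain Python with a nearest-centroid classifier on a hypothetical one-dimensional dataset. Scikit-learn wraps each of these steps in ready-made components (`StandardScaler`, `NearestCentroid`, `accuracy_score`); this toy version only illustrates the shape of the workflow:

```python
train = [(1.0, "a"), (2.0, "a"), (8.0, "b"), (9.0, "b")]
test_set = [(1.5, "a"), (8.5, "b")]

# 1. Preprocessing: center the feature around zero.
mean = sum(x for x, _ in train) / len(train)

def scale(x):
    return x - mean

# 2. Fit: compute one centroid per class.
centroids = {}
for label in ("a", "b"):
    xs = [scale(x) for x, l in train if l == label]
    centroids[label] = sum(xs) / len(xs)

def predict(x):
    return min(centroids, key=lambda label: abs(scale(x) - centroids[label]))

# 3. Evaluate: accuracy on held-out data.
accuracy = sum(predict(x) == y for x, y in test_set) / len(test_set)
```

Each numbered step corresponds to a scikit-learn object with `fit`/`transform`/`predict` methods, which is what lets the library chain them into a `Pipeline`.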
Open-source AI has played a crucial role in the development and adoption of large language models (LLMs), transforming text generation and comprehension capabilities. While proprietary models like OpenAI's GPT series have redefined what is possible in applications such as interactive dialogue systems and automated content creation, fully open-source models have also made significant strides. Google's BERT, for instance, is an open-source model widely used for tasks like entity recognition and language translation, establishing itself as a versatile tool in NLP. [61] These open-source LLMs have democratized access to advanced language technologies, enabling developers to create applications such as personalized assistants, legal document analysis, and educational tools without relying on proprietary systems. [62]
Open-source machine translation models have paved the way for multilingual support in applications across industries. Hugging Face's MarianMT is a prominent example, providing support for a wide range of language pairs, becoming a valuable tool for translation and global communication. [63] Another notable model, OpenNMT, offers a comprehensive toolkit for building high-quality, customized translation models, which are used in both academic research and industries. [64] Alongside these open-source models, open-source datasets such as the WMT (Workshop on Machine Translation) datasets, Europarl Corpus, and OPUS have played a critical role in advancing machine translation technology. [65] [66] These datasets provide diverse, high-quality parallel text corpora that enable developers to train and fine-tune models for specific languages and domains. [65]
Open-source AI has led to considerable advances in the field of computer vision, with libraries such as OpenCV (Open Computer Vision Library) playing a pivotal role in the democratization of powerful image processing and recognition capabilities. [67] OpenCV provides a comprehensive set of functions that can support real-time computer vision applications, such as image recognition, motion tracking, and facial detection. [68] Originally developed by Intel, OpenCV has become one of the most popular libraries for computer vision due to its versatility and extensive community support. [67] [68] The library includes a range of pre-trained models and utilities for handling common tasks, making OpenCV a valuable resource for both beginners and experts in the field. Beyond OpenCV, other open-source computer vision models like YOLO (You Only Look Once) and Detectron2 offer specialized frameworks for object detection, classification, and segmentation, contributing to advancements in applications like security, autonomous vehicles, and medical imaging. [69] [70]
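At the core of many classical image-processing routines of the kind OpenCV exposes (e.g. `cv2.filter2D`) is sliding a small kernel over an image. A minimal plain-Python sketch on a hypothetical image (not OpenCV's implementation, which is heavily optimized and handles borders):

```python
def convolve2d(img, kernel):
    """'Valid' 2-D convolution (really cross-correlation, as in most CV
    libraries): slide the kernel over the image and sum the products."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img) - kh + 1, len(img[0]) - kw + 1
    return [[sum(kernel[i][j] * img[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(w)] for r in range(h)]

# Horizontal edge detection on an image whose bottom half is bright.
img = [[0, 0, 0, 0], [0, 0, 0, 0], [9, 9, 9, 9], [9, 9, 9, 9]]
sobel_y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
edges = convolve2d(img, sobel_y)
```

The Sobel kernel responds strongly where pixel intensity changes vertically, which is why every position in this toy output detects the edge between the dark and bright halves.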
Unlike previous generations of computer vision models, which process image data through convolutional layers, newer models, known as Vision Transformers (ViTs), rely on attention mechanisms similar to those used in natural language processing. [71] ViT models break an image down into smaller patches and apply self-attention to identify which areas of the image are most relevant, effectively capturing long-range dependencies within the data. [71] This shift from convolutional operations to attention mechanisms enables ViT models to achieve state-of-the-art accuracy in image classification and other tasks, pushing the boundaries of computer vision applications. [72]
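The two steps just described, patch extraction and self-attention over patches, can be sketched in plain Python. This is a toy illustration of the mechanism only; real ViTs first project patches through learned linear embeddings, add position information, and use many attention heads:

```python
import math

def patches(image, p):
    """Split an n*n image (list of rows) into non-overlapping p*p patches,
    each flattened to a vector -- the first step of a Vision Transformer."""
    n = len(image)
    out = []
    for r in range(0, n, p):
        for c in range(0, n, p):
            out.append([image[r + i][c + j] for i in range(p) for j in range(p)])
    return out

def attention_weights(q, keys):
    """Scaled dot-product attention weights of one query over all keys."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

image = [[float(r * 4 + c) for c in range(4)] for r in range(4)]
toks = patches(image, 2)                  # 4 patches, each a 4-dim vector
w = attention_weights(toks[0], toks)      # how much patch 0 attends to each patch
```

The weights form a probability distribution over patches, which is what lets a ViT mix information between distant regions of the image in a single layer.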
Open-source artificial intelligence has made a notable impact in robotics by providing a flexible, scalable development environment for both academia and industry. [73] The Robot Operating System (ROS) stands out as a leading open-source framework, offering tools, libraries, and standards essential for building robotics applications. [74] ROS simplifies the development process, allowing developers to work across different hardware platforms and robotic architectures. [73] Furthermore, Gazebo, an open-source robotic simulation software often paired with ROS, enables developers to test and refine their robotic systems in a virtual environment before real-world deployment. [75]
In the healthcare industry, open-source AI has revolutionized diagnostics, patient care, and personalized treatment options. [76] Open-source libraries like TensorFlow and PyTorch have been applied extensively in medical imaging for tasks such as tumor detection, improving the speed and accuracy of diagnostic processes. [77] [76] Additionally, OpenChem, an open-source library specifically geared toward chemistry and biology applications, enables the development of predictive models for drug discovery, helping researchers identify potential compounds for treatment. [78] NLP models, adapted for analyzing electronic health records (EHRs), have also become instrumental in healthcare. [79] By summarizing patient data, detecting patterns, and flagging potential issues, open-source AI has enhanced clinical decision-making and improved patient outcomes, demonstrating the transformative power of AI in medicine. [79]
Open-source AI has become a critical component in military applications, highlighting both its potential and its risks. Meta's Llama models, which have been described as open-source by Meta, were adopted by U.S. defense contractors like Lockheed Martin and Oracle after unauthorized adaptations by Chinese researchers affiliated with the People's Liberation Army (PLA) came to light. [80] [81] The Open Source Initiative and others have contested Meta's use of the term open-source to describe Llama, due to Llama's license containing an acceptable use policy that prohibits use cases including non-U.S. military use. [51] Chinese researchers used an earlier version of Llama to develop tools like ChatBIT, optimized for military intelligence and decision-making, prompting Meta to expand its partnerships with U.S. contractors to ensure the technology could be used strategically for national security. [81] These applications now include logistics, maintenance, and cybersecurity enhancements. [81]
The open-source movement has influenced the development of artificial intelligence, enabling the widespread adoption and collaboration that are key to its rapid evolution. By making AI tools freely available, open-source platforms empower individuals, research institutions, and companies to contribute, adapt, and innovate on top of existing technologies.
Open-source AI democratizes access to cutting-edge tools, lowering entry barriers for individuals and smaller organizations that may lack resources. [82] By making these technologies freely available, open-source AI allows developers to innovate and create AI solutions that might have been otherwise inaccessible due to financial constraints, enabling independent developers and researchers, smaller organizations, and startups to utilize advanced AI models without the financial burden of proprietary software licenses. [82] This affordability encourages innovation in niche or specialized applications, as developers can modify existing models to meet unique needs. [82] [83]
By sharing code, data, and research findings, open-source AI enables collective problem-solving and innovation. [83] Large-scale collaborations, such as those seen in the development of frameworks like TensorFlow and PyTorch, have accelerated advancements in machine learning (ML) and deep learning. [31]
The open-source nature of these platforms also facilitates rapid iteration and improvement, as contributors from across the globe can propose modifications and enhancements to existing tools. [31] [24] Beyond enhancements directly within ML and deep learning, this collaboration can lead to faster advancements in the products of AI, as shared knowledge and expertise are pooled together. [24] [83]
The openness of the development process encourages diverse contributions, making it possible for underrepresented groups to shape the future of AI. This inclusivity not only fosters a more equitable development environment but also helps to address biases that might otherwise be overlooked by larger, profit-driven corporations. [84] With contributions from a broad spectrum of perspectives, open-source AI has the potential to create more fair, accountable, and impactful technologies that better serve global communities. [84]
One key benefit of open-source AI is the increased transparency it offers compared to closed-source alternatives. With open-source models, the underlying algorithms and code are accessible for inspection, which promotes accountability and helps developers understand how a model reaches its conclusions. [15] Additionally, open-weight models such as LLaMA and Stable Diffusion allow developers to directly access model parameters, potentially facilitating reduced bias and increased fairness in their applications. [15] This transparency can help create systems with human-readable outputs, or "explainable AI", an increasingly important concern in high-stakes applications such as healthcare, criminal justice, and finance, where the consequences of decisions made by AI systems can be significant (though transparency may also pose certain risks, as mentioned in the Concerns section). [85]
A Nature editorial suggests medical care could become dependent on AI models that could be taken down at any time, are difficult to evaluate, and may threaten patient privacy. [10] Its authors propose that health-care institutions, academic researchers, clinicians, patients, and technology companies worldwide should collaborate to build open-source models for health care whose underlying code and base models are easily accessible and can be fine-tuned freely with institutions' own data sets. [10]
In parallel with its benefits, open-source AI brings with it important ethical and social implications, as well as quality and security concerns.
Researchers have criticized open-source AI development for quality and security concerns that go beyond general concerns about AI safety.
Current open-source models underperform closed-source models on most tasks, but the gap is narrowing as open-source models improve more quickly. [86]
Open-source development of models has been deemed to carry theoretical risks. Once a model is public, it cannot be rolled back or updated if serious security issues are detected. [4] For example, open-source AI could allow bioterrorism groups like Aum Shinrikyo to remove the fine-tuning and other safeguards of AI models and use them to help develop more devastating attacks. [87] In practice, however, the main barrier to developing real-world terrorist schemes lies in stringent restrictions on the necessary materials and equipment. [4] Furthermore, the rapid pace of AI advancement makes older models, which are more vulnerable to attacks but also less capable, less appealing to use. [4]
In July 2024, the United States government released a report stating that it did not find sufficient evidence to justify restrictions on the release of model weights. [88]
There have been numerous cases of artificial intelligence leading to unintentionally biased products. Some notable examples include AI software predicting higher risk of future crime and recidivism for African-Americans when compared to white individuals, voice recognition models performing worse for non-native speakers, and facial-recognition models performing worse for women and darker-skinned individuals. [89] [84] [90]
Researchers have also criticized open-source artificial intelligence for existing security and ethical concerns. An analysis of over 100,000 open-source models on Hugging Face and GitHub using code vulnerability scanners like Bandit, FlawFinder, and Semgrep found that over 30% of models have high-severity vulnerabilities. [91] Furthermore, closed models typically have fewer safety risks than open-source models. [4] The freedom to augment open-source models has led developers to release models without ethical guidelines, such as GPT-4chan. [4]
With AI systems increasingly deployed in critical parts of society such as law enforcement and healthcare, there is a growing focus on preventing biased and unethical outcomes through guidelines, development frameworks, and regulations. Open-source AI has the potential to both exacerbate and mitigate problems of bias, fairness, and equity, depending on how it is used.
While AI suffers from a lack of centralized guidelines for ethical development, frameworks for addressing the concerns regarding AI systems are emerging. These frameworks, often products of independent studies and interdisciplinary collaborations, are frequently adapted and shared across platforms like GitHub and Hugging Face to encourage community-driven enhancements.
There are numerous systemic problems that may contribute to inequitable and biased AI outcomes, stemming from causes such as biased data, flaws in model creation, and failing to recognize or plan for the possibility of these outcomes. [92] As highlighted in research, poor data quality—such as the underrepresentation of specific demographic groups in datasets—and biases introduced during data curation lead to skewed model outputs. [90]
A study of open-source AI projects revealed a failure to scrutinize data quality, with less than 28% of projects including data quality concerns in their documentation. [92] The study also revealed a broader concern that developers do not place enough emphasis on the ethical implications of their models, and that even when they do, these considerations overemphasize certain metrics (model behavior) and overlook others (data quality and risk-mitigation steps). [92] These issues are compounded by AI documentation practices, which often lack actionable guidance and only briefly outline ethical risks without providing concrete solutions.
Another key flaw notable in many of the systems shown to have biased outcomes is their lack of transparency. [90] Many open-source AI models operate as "black boxes", where their decision-making process is not easily understood, even by their creators. [90] This lack of interpretability can hinder accountability, making it difficult to identify why a model made a particular decision or to ensure it operates fairly across diverse groups. [90]
Furthermore, when AI models are closed-source (proprietary), this can facilitate biased systems slipping through the cracks, as was the case for numerous widely adopted facial recognition systems. [90] These hidden biases can persist when those proprietary systems fail to publicize anything about the decision process which could help reveal those biases, such as confidence intervals for decisions made by AI. [90] Especially for systems like those used in healthcare, being able to see and understand systems' reasoning or getting "an [accurate] explanation" of how an answer was obtained is "crucial for ensuring trust and transparency". [93]
Efforts to counteract these challenges have resulted in the creation of structured documentation frameworks that guide the ethical development and deployment of AI.
As AI use grows, increasing transparency and reducing model bias have become increasingly emphasized concerns. [85] Documentation frameworks can help empower developers and stakeholders to identify and mitigate bias, fostering fairness and inclusivity in AI systems. [89] [85] Using these frameworks can help the open-source community create tools that are not only innovative but also equitable and ethical.
A superintelligence is a hypothetical agent that possesses intelligence surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics, lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status, artificial superintelligence and existential risks.
In the field of artificial intelligence (AI), AI alignment aims to steer AI systems toward a person's or group's intended goals, preferences, and ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.
PyTorch is a machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, originally developed by Meta AI and now part of the Linux Foundation umbrella. It is one of the most popular deep learning frameworks, alongside others such as TensorFlow and PaddlePaddle, offering free and open-source software released under the modified BSD license. Although the Python interface is more polished and the primary focus of development, PyTorch also has a C++ interface.
Explainable AI (XAI), often overlapping with interpretable AI or explainable machine learning (XML), refers to an artificial intelligence (AI) system over which it is possible for humans to retain intellectual oversight, or to the methods used to achieve this. The main focus is usually on making the reasoning behind the decisions or predictions made by the AI more understandable and transparent. Explainability has become a topic of active research as users increasingly need to understand the safety and reasoning of automated decision-making across applications. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.
The Open Neural Network Exchange (ONNX) is an open-source artificial intelligence ecosystem of technology companies and research organizations that establish open standards for representing machine learning algorithms and software tools to promote innovation and collaboration in the AI sector. ONNX is available on GitHub.
Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.
Artificial intelligence is used in Wikipedia and other Wikimedia projects for the purpose of developing those projects. Human and bot interaction in Wikimedia projects is routine and iterative.
NNI is a free and open-source AutoML toolkit developed by Microsoft. It is used to automate feature engineering, model compression, neural architecture search, and hyper-parameter tuning.
Meta AI is a company owned by Meta that develops artificial intelligence and augmented and virtual reality technologies. Meta AI deems itself an academic research laboratory, focused on generating knowledge for the AI community, and should not be confused with Meta's Applied Machine Learning (AML) team, which focuses on the practical applications of its products.
A foundation model, also known as large X model (LxM), is a machine learning or deep learning model that is trained on vast datasets so it can be applied across a wide range of use cases. Generative AI applications like Large Language Models are often examples of foundation models.
Prompt injection is a family of related computer security exploits carried out by getting a machine learning model which was trained to follow human-given instructions to follow instructions provided by a malicious user. This stands in contrast to the intended operation of instruction-following systems, wherein the ML model is intended only to follow trusted instructions (prompts) provided by the ML model's operator.
AI safety is an interdisciplinary field focused on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence (AI) systems. It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models.
In the field of artificial intelligence (AI), a hallucination or artificial hallucination is a response generated by AI that contains false or misleading information presented as fact. This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneous responses rather than perceptual experiences.
EleutherAI is a grass-roots non-profit artificial intelligence (AI) research group. The group, considered an open-source version of OpenAI, was formed in a Discord server in July 2020 by Connor Leahy, Sid Black, and Leo Gao to organize a replication of GPT-3. In early 2023, it formally incorporated as the EleutherAI Institute, a non-profit research institute.
Generative artificial intelligence is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.
Llama is a family of autoregressive large language models (LLMs) released by Meta AI starting in February 2023. The latest version is Llama 3.3, released in December 2024.
Medical open network for AI (MONAI) is an open-source, community-supported framework for Deep learning (DL) in healthcare imaging. MONAI provides a collection of domain-optimized implementations of various DL algorithms and utilities specifically designed for medical imaging tasks. MONAI is used in research and industry, aiding the development of various medical imaging applications, including image segmentation, image classification, image registration, and image generation.
Artificial intelligence engineering is a technical discipline that focuses on the design, development, and deployment of AI systems. AI engineering involves applying engineering principles and methodologies to create scalable, efficient, and reliable AI-based solutions. It merges aspects of data engineering and software engineering to create real-world applications in diverse domains such as healthcare, finance, autonomous systems, and industrial automation.