Artificial intelligence engineering

Artificial intelligence engineering (AI engineering) is a technical discipline that focuses on the design, development, and deployment of AI systems. It applies engineering principles and methodologies to create scalable, efficient, and reliable AI-based solutions, merging aspects of data engineering and software engineering to build real-world applications in diverse domains such as healthcare, finance, autonomous systems, and industrial automation. [1] [2]

Key components

AI engineering integrates a variety of technical domains and practices, all of which are essential to building scalable, reliable, and ethical AI systems.

Data engineering and infrastructure

Data serves as the cornerstone of AI systems, necessitating careful engineering to ensure quality, availability, and usability. AI engineers gather large, diverse datasets from multiple sources, including databases, APIs, and real-time streams. This data undergoes cleaning, normalization, and preprocessing, often facilitated by automated data pipelines that manage extraction, transformation, and loading (ETL) processes. [3]
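The extraction, transformation, and loading steps described above can be sketched as a minimal in-memory pipeline. This is an illustrative toy, not a production system: the record fields (`sensor`, `temp_c`) and the dict-based "warehouse" are hypothetical stand-ins for real sources and stores.

```python
def extract(raw_rows):
    """Extract: pull rows from a source (here, an in-memory list
    standing in for a database, API, or stream)."""
    return list(raw_rows)

def transform(rows):
    """Transform: clean and normalize -- drop incomplete rows,
    standardize casing, and coerce types."""
    cleaned = []
    for row in rows:
        if row.get("temp_c") is None:
            continue  # drop rows with missing values
        cleaned.append({
            "sensor": row["sensor"].strip().lower(),
            "temp_c": round(float(row["temp_c"]), 1),
        })
    return cleaned

def load(rows, store):
    """Load: write the cleaned rows into the target store
    (here, a dict keyed by sensor name)."""
    for row in rows:
        store.setdefault(row["sensor"], []).append(row["temp_c"])
    return store

raw = [{"sensor": " Lab-A ", "temp_c": "21.37"},
       {"sensor": "lab-b", "temp_c": None},   # incomplete: dropped in transform
       {"sensor": "LAB-A", "temp_c": "21.9"}]
warehouse = load(transform(extract(raw)), {})
print(warehouse)  # {'lab-a': [21.4, 21.9]}
```

In a real pipeline each stage would be an independent, monitored job; the three-function decomposition is what orchestration tools schedule and retry.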

Efficient storage solutions, such as SQL databases, NoSQL databases, and data lakes, must be selected based on data characteristics and use cases. Security measures, including encryption and access controls, are critical for protecting sensitive information and ensuring compliance with regulations like GDPR. Scalability is essential, frequently involving cloud services and distributed computing frameworks to handle growing data volumes effectively. [4] [5] [6]

Algorithm selection and optimization

Selecting the appropriate algorithm is crucial for the success of any AI system. Engineers evaluate the problem—classification, regression, clustering, or reinforcement learning—to determine the most suitable machine learning or deep learning algorithm. [7] [8]

Once an algorithm is chosen, optimizing it through hyperparameter tuning is essential to enhance efficiency and accuracy. [9] Techniques such as grid search or Bayesian optimization are employed, and engineers often utilize parallelization to expedite training processes, particularly for large datasets. [10] For existing models, techniques like transfer learning can be applied to adapt pre-trained models for specific tasks, reducing the time and resources needed for training. [11]
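Grid search, the simplest of these tuning techniques, can be illustrated with a toy sketch. The `validation_loss` function below is a hypothetical stand-in for training a model and scoring it on held-out data; the parameter names and grid values are illustrative assumptions.

```python
import itertools

def validation_loss(learning_rate, regularization):
    """Stand-in for training a model with these hyperparameters and
    measuring validation loss; a real version would train and evaluate."""
    return (learning_rate - 0.1) ** 2 + (regularization - 0.01) ** 2

grid = {
    "learning_rate": [0.001, 0.01, 0.1, 0.5],
    "regularization": [0.0, 0.01, 0.1],
}

best_params, best_loss = None, float("inf")
for lr, reg in itertools.product(grid["learning_rate"], grid["regularization"]):
    loss = validation_loss(lr, reg)  # independent trials: easy to parallelize
    if loss < best_loss:
        best_params, best_loss = {"learning_rate": lr, "regularization": reg}, loss

print(best_params)  # {'learning_rate': 0.1, 'regularization': 0.01}
```

Because every grid point is evaluated independently, this loop is what engineers parallelize across workers; Bayesian optimization replaces the exhaustive loop with a model that proposes promising points.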

Deep learning engineering

Deep learning is particularly important for tasks involving large and complex datasets. Engineers design neural network architectures tailored to specific applications, using convolutional neural networks (CNNs) for visual tasks and recurrent neural networks (RNNs) for sequence-based tasks. Transfer learning, where pre-trained models are fine-tuned for specific use cases, helps streamline development and often enhances performance. [12]

Optimization for deployment in resource-constrained environments, such as mobile devices, involves techniques like pruning and quantization to minimize model size while maintaining performance. Engineers also tackle data imbalance challenges through augmentation and synthetic data generation, ensuring robust model performance across various classes. [12]
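Quantization, one of the compression techniques mentioned above, can be sketched as follows. This is a minimal illustration of symmetric linear int8 quantization on a handful of weights, not a framework's actual implementation.

```python
def quantize_int8(weights):
    """Symmetric linear quantization: map floats onto the int8
    range [-127, 127] using a single scale factor."""
    scale = max(abs(w) for w in weights) / 127
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

weights = [0.42, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
assert max_err <= scale / 2  # rounding error bounded by half a quantization step
```

Storing each weight in one byte instead of four shrinks the model roughly 4x; the bounded rounding error is why accuracy often degrades only slightly.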

Natural language processing

Natural language processing (NLP) is a crucial component of AI engineering, focused on enabling machines to understand and generate human language. The process begins with text preprocessing to prepare data for machine learning models. Recent advancements, particularly transformer-based models like BERT and GPT, have greatly improved the ability to understand context in language. [13]
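The text-preprocessing step can be sketched with a few common operations: lowercasing, punctuation stripping, tokenization, and stopword removal. The tiny stopword list here is an illustrative assumption; real pipelines use curated lists or learned subword tokenizers.

```python
import re

STOPWORDS = {"the", "a", "is", "and", "of"}  # illustrative subset

def preprocess(text):
    """Typical preprocessing: lowercase, strip punctuation,
    tokenize, and remove stopwords."""
    text = text.lower()
    tokens = re.findall(r"[a-z']+", text)
    return [t for t in tokens if t not in STOPWORDS]

print(preprocess("The model IS learning the structure of language."))
# ['model', 'learning', 'structure', 'language']
```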

AI engineers work on various NLP tasks, including sentiment analysis, machine translation, and information extraction. These tasks require sophisticated models that utilize attention mechanisms to enhance accuracy. [14] Applications range from virtual assistants and chatbots to more specialized tasks like named entity recognition (NER) and Part-of-Speech (POS) tagging. [15] [16]


Reasoning and decision-making systems

Developing systems capable of reasoning and decision-making is a significant aspect of AI engineering. Whether starting from scratch or building on existing frameworks, engineers create solutions that operate on data or logical rules. Symbolic AI employs formal logic and predefined rules for inference, while probabilistic reasoning techniques like Bayesian networks help address uncertainty. These models are essential for applications in dynamic environments, such as autonomous vehicles, where real-time decision-making is critical. [17] [18]
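Probabilistic reasoning with a Bayesian network can be illustrated with the classic rain/sprinkler/wet-grass example, computing a posterior by enumerating over the hidden variable. The probability tables are the standard illustrative values for this textbook network.

```python
import itertools

# Classic sprinkler network (illustrative probabilities):
P_rain = 0.2
P_sprinkler = {True: 0.01, False: 0.4}               # P(sprinkler | rain)
P_wet = {(True, True): 0.99, (True, False): 0.90,    # P(wet | sprinkler, rain)
         (False, True): 0.80, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """Joint probability factorized along the network's edges."""
    p = P_rain if rain else 1 - P_rain
    p *= P_sprinkler[rain] if sprinkler else 1 - P_sprinkler[rain]
    pw = P_wet[(sprinkler, rain)]
    return p * (pw if wet else 1 - pw)

# P(rain | grass is wet), summing out the hidden sprinkler variable:
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in itertools.product((True, False), repeat=2))
print(round(num / den, 4))  # ≈ 0.3577
```

Enumeration is exponential in the number of hidden variables; practical systems use algorithms such as variable elimination or approximate sampling, but the update from prior (0.2) to posterior (≈0.36) is the same reasoning step.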

Security

Security is a critical consideration in AI engineering, particularly as AI systems become increasingly integrated into sensitive and mission-critical applications. AI engineers implement robust security measures to protect models from adversarial attacks, such as evasion and poisoning, which can compromise system integrity and performance. Techniques such as adversarial training, where models are exposed to malicious inputs during development, help harden systems against these attacks. [19] [20]
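An evasion attack can be illustrated on the simplest possible model, a linear classifier, where the optimal perturbation direction is known in closed form (the idea behind FGSM-style attacks). The weights and input below are toy values chosen for illustration; adversarial training would add such perturbed inputs back into the training set.

```python
def predict(w, b, x):
    """Linear classifier: returns 1 if w·x + b > 0, else 0."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

def evasion_attack(w, x, eps):
    """FGSM-style perturbation for a linear model: step each feature
    against the weight's sign to push the score toward the boundary."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [2.0, -1.0], -0.5
x = [1.0, 0.5]                    # clean input, classified as positive
assert predict(w, b, x) == 1
x_adv = evasion_attack(w, x, eps=0.4)
assert predict(w, b, x_adv) == 0  # a small perturbation flips the prediction
```

For deep networks the same attack uses the gradient of the loss with respect to the input instead of the raw weights, but the vulnerability it exposes is identical.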

Additionally, securing the data used to train AI models is of paramount importance. Encryption, secure data storage, and access control mechanisms are employed to safeguard sensitive information from unauthorized access and breaches. AI systems also require constant monitoring to detect and mitigate vulnerabilities that may arise post-deployment. In high-stakes environments like autonomous systems and healthcare, engineers incorporate redundancy and fail-safe mechanisms to ensure that AI models continue to function correctly in the presence of security threats. [21]

Ethics and compliance

As AI systems increasingly influence societal aspects, ethics and compliance are vital components of AI engineering. Engineers design models to mitigate risks such as data poisoning and ensure that AI systems adhere to legal frameworks, such as data protection regulations like GDPR. Privacy-preserving techniques, including data anonymization and differential privacy, are employed to safeguard personal information and ensure compliance with international standards. [22]
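Differential privacy can be sketched with the Laplace mechanism: a count query has sensitivity 1, so adding Laplace noise with scale 1/ε yields ε-differential privacy. The query and parameter values below are illustrative.

```python
import math
import random

def private_count(true_count, epsilon, rng):
    """Laplace mechanism for a count query (sensitivity 1):
    add Laplace(1/epsilon) noise, drawn via inverse-CDF sampling."""
    u = rng.random() - 0.5
    noise = (1 / epsilon) * math.copysign(math.log(1 - 2 * abs(u)), u)
    return true_count + noise

rng = random.Random(0)  # fixed seed so the sketch is reproducible
answers = [private_count(1000, epsilon=0.5, rng=rng) for _ in range(2000)]
mean = sum(answers) / len(answers)
print(round(mean))  # close to the true count; each individual answer is noisy
```

Any single released answer hides the presence or absence of one individual, while aggregate statistics over many queries remain accurate, the trade-off the ε parameter controls.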

Ethical considerations focus on reducing bias in AI systems and preventing discrimination based on race, gender, or other protected characteristics. By developing fair and accountable AI solutions, engineers contribute to the creation of technologies that are both technically sound and socially responsible. [23]
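One simple bias check is demographic parity: comparing positive-prediction rates across groups. The metric below is a minimal sketch, and a near-zero gap is only one incomplete signal of fairness; the predictions and group labels are invented toy data.

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between groups.
    A gap near zero is one (incomplete) signal of fairness."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gi in zip(predictions, groups) if gi == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5 -> group "a" is favored
```

Practitioners typically audit several such metrics (equalized odds, calibration) because they can conflict and no single number certifies a system as fair.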

Workload

An AI engineer’s workload revolves around the AI system’s life cycle, which is a complex, multi-stage process. [24] This process may involve building models from scratch or using pre-existing models through transfer learning, depending on the project’s requirements. [25] Each approach presents unique challenges and influences the time, resources, and technical decisions involved.

Problem definition and requirements analysis

Regardless of whether a model is built from scratch or based on a pre-existing model, the work begins with a clear understanding of the problem. The engineer must define the scope, understand the business context, and identify specific AI objectives that align with strategic goals. This stage includes consulting with stakeholders to establish key performance indicators (KPIs) and operational requirements. [24]

When developing a model from scratch, the engineer must also decide which algorithms are most suitable for the task. [7] Conversely, when using a pre-trained model, the workload shifts toward evaluating existing models and selecting the one most aligned with the task. The use of pre-trained models often allows for a more targeted focus on fine-tuning, as opposed to designing an entirely new model architecture. [26]

Data acquisition and preparation

Data acquisition and preparation are critical stages regardless of the development method chosen, as the performance of any AI system relies heavily on high-quality, representative data.

For systems built from scratch, engineers must gather comprehensive datasets that cover all aspects of the problem domain, ensuring enough diversity and representativeness in the data to train the model effectively. This involves cleansing, normalizing, and augmenting the data as needed. Creating data pipelines and addressing issues like imbalanced datasets or missing values are also essential to maintain model integrity during training. [27]

In the case of using pre-existing models, the dataset requirements often differ. Here, engineers focus on obtaining task-specific data that will be used to fine-tune a general model. While the overall data volume may be smaller, it needs to be highly relevant to the specific problem. Pre-existing models, especially those based on transfer learning, typically require less data, which accelerates the preparation phase, although data quality remains just as important. [28]

Model design and training

The workload during the model design and training phase depends significantly on whether the engineer is building the model from scratch or fine-tuning an existing one.

When creating a model from scratch, AI engineers must design the entire architecture, selecting or developing algorithms and structures that are suited to the problem. For deep learning models, this might involve designing a neural network with the right number of layers, activation functions, and optimizers. [29] Engineers go through several iterations of testing, adjusting hyperparameters, and refining the architecture. [9] This process can be resource-intensive, requiring substantial computational power and significant time to train the model on large datasets.

For AI systems based on pre-existing models, the focus is more on fine-tuning. Transfer learning allows engineers to take a model that has already been trained on a broad dataset and adapt it for a specific task using a smaller, task-specific dataset. This method dramatically reduces the complexity of the design and training phase. Instead of building the architecture, engineers adjust the final layers and perform hyperparameter tuning. The time and computational resources required are typically lower than training from scratch, as pre-trained models have already learned general features that only need refinement for the new task. [25]
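The structure of fine-tuning, a frozen feature extractor with a small trainable head, can be sketched in miniature. Here the "pretrained" extractor is a hypothetical fixed function and the head is a linear layer trained by plain gradient descent; real systems freeze the early layers of an actual pretrained network.

```python
def pretrained_features(x):
    """Stand-in for a frozen, pre-trained feature extractor:
    its parameters are never updated during fine-tuning."""
    return [x, x * x]

def train_head(data, lr=0.1, epochs=200):
    """Fine-tuning: fit only the final linear layer on task-specific data."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = pretrained_features(x)
            pred = sum(wi * fi for wi, fi in zip(w, f)) + b
            err = pred - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
            b -= lr * err
    return w, b

# Small task-specific dataset: y = x^2, learnable from the frozen features
data = [(x / 4, (x / 4) ** 2) for x in range(-4, 5)]
w, b = train_head(data)
pred = w[0] * 0.5 + w[1] * 0.25 + b   # prediction for x = 0.5
assert abs(pred - 0.25) < 0.05
```

Because only the head's few parameters are updated, training needs far less data and compute than fitting the whole model, which is the economy transfer learning buys.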

Whether building from scratch or fine-tuning, engineers employ optimization techniques like cross-validation and early stopping to prevent overfitting. In both cases, model training involves running numerous tests to benchmark performance and improve accuracy. [30]
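The early-stopping rule mentioned above reduces to a small amount of bookkeeping: track the best validation loss and stop after it fails to improve for a fixed number of epochs. The validation curve below is simulated for illustration.

```python
def train_with_early_stopping(val_losses, patience=3):
    """Stop once validation loss has not improved for `patience` epochs;
    return the best epoch (whose weights would be kept) and its loss."""
    best_epoch, best_loss, waited = 0, float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss, waited = epoch, loss, 0
        else:
            waited += 1
            if waited >= patience:
                break  # overfitting suspected: halt training
    return best_epoch, best_loss

# Simulated validation curve: improves, then overfitting sets in
curve = [0.9, 0.7, 0.55, 0.50, 0.52, 0.56, 0.61, 0.70]
print(train_with_early_stopping(curve))  # (3, 0.5) -- halts at epoch 6
```

Checkpointing the weights at the best epoch, rather than the last one, is what makes the rule an overfitting guard instead of just a time-saver.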

System integration

Once the model is trained, it must be integrated into the broader system, a phase that largely remains the same regardless of how the model was developed. System integration involves connecting the AI model to various software components and ensuring that it can interact with external systems, databases, and user interfaces.

For models developed from scratch, integration may require additional work to ensure that the custom-built architecture aligns with the operational environment, especially if the AI system is designed for specific hardware or edge computing environments. Pre-trained models, by contrast, are often more flexible in terms of deployment since they are built using widely adopted frameworks, which are compatible with most modern infrastructure. [31]

Engineers use containerization tools to package the model and create consistent environments for deployment, ensuring seamless integration across cloud-based or on-premise systems. Whether starting from scratch or using pre-trained models, the integration phase requires ensuring that the model is ready to scale and perform efficiently within the existing infrastructure. [32] [33]

Testing and validation

Testing and validation play a crucial role in both approaches, though the depth and nature of testing might differ slightly. For models built from scratch, more exhaustive functional testing is needed to ensure that the custom-built components of the model function as intended. Stress tests are conducted to evaluate the system under various operational loads, and engineers must validate that the model can handle the specific data types and edge cases of the domain. [34]

For pre-trained models, the focus of testing is on ensuring that fine-tuning has adequately adapted the model to the task. Functional tests validate that the pre-trained model’s outputs are accurate for the new context. In both cases, bias assessments, fairness evaluations, and security reviews are critical to ensure ethical AI practices and prevent vulnerabilities, particularly in sensitive applications like finance, healthcare, or autonomous systems. [35]

Explainability is also essential in both workflows, especially when working in regulated industries or with stakeholders who need transparency in AI decision-making processes. Engineers must ensure that the model’s predictions can be understood by non-technical users and align with ethical and regulatory standards. [36]

Deployment and monitoring

The deployment stage typically involves the same overarching strategies—whether the model is built from scratch or based on an existing model. However, models built from scratch may require more extensive fine-tuning during deployment to ensure they meet performance requirements in a production environment. For example, engineers might need to optimize memory usage, reduce latency, or adapt the model for edge computing. [37] [38] [39]

When deploying pre-trained models, the workload is generally lighter. Since these models are often already optimized for production environments, engineers can focus on ensuring compatibility with the task-specific data and infrastructure. In both cases, deployment techniques such as phased rollouts, A/B testing, or canary deployments are used to minimize risks and ensure a smooth transition into the live environment. [40] [41] [42]

Monitoring, however, is critical in both approaches. Once the AI system is deployed, engineers set up performance monitoring to detect issues like model drift, where the model’s accuracy decreases over time as data patterns change. Continuous monitoring helps identify when the model needs retraining or recalibration. For pre-trained models, periodic fine-tuning may suffice to keep the model performing optimally, while models built from scratch may require more extensive updates depending on how the system was designed. [43] [44]
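A simple drift signal compares live feature statistics against those seen at training time. The standardized mean-shift score and the 1.0 alert threshold below are illustrative choices; production monitors often use distributional tests instead.

```python
from statistics import mean, stdev

def drift_score(reference, live):
    """Standardized mean shift of one feature between the training-time
    reference sample and a window of live traffic."""
    return abs(mean(live) - mean(reference)) / stdev(reference)

reference = [0.1 * i for i in range(100)]        # feature values at training time
stable    = [0.1 * i + 0.05 for i in range(100)]  # live window, no real change
shifted   = [0.1 * i + 4.0 for i in range(100)]   # data pattern has changed

assert drift_score(reference, stable) < 1.0   # no alert
assert drift_score(reference, shifted) > 1.0  # flag the model for retraining
```

Such checks run continuously per feature and per prediction; a sustained alert is what triggers the retraining or recalibration described above.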

Regular maintenance includes updates to the model, re-validation of fairness and bias checks, and security patches to protect against adversarial attacks.

Machine learning operations (MLOps)

MLOps (machine learning operations) is a critical component in modern AI engineering, integrating machine learning model development with reliable and efficient operations practices. Analogous to DevOps in software development, MLOps provides a framework for continuous integration, continuous delivery (CI/CD), and automated monitoring of machine learning models throughout their lifecycle. This practice bridges the gap between data scientists, AI engineers, and IT operations, ensuring that AI models are deployed, monitored, and maintained effectively in production environments. [45]

MLOps is particularly important as AI systems scale to handle more complex tasks and larger datasets. Without robust MLOps practices, models risk underperforming or failing once deployed into production, leading to issues such as downtime, ethical concerns, or loss of stakeholder trust. By establishing automated, scalable workflows, MLOps allows AI engineers to manage the entire lifecycle of machine learning models more efficiently, from development through to deployment and ongoing monitoring. [46]

Additionally, as regulatory frameworks around AI systems continue to evolve, MLOps practices are critical for ensuring compliance with legal requirements, including data privacy regulations and ethical AI guidelines. By incorporating best practices from MLOps, organizations can mitigate risks, maintain high performance, and scale AI solutions responsibly. [45]

Challenges

AI engineering faces a distinctive set of challenges that differentiate it from traditional software development. One of the primary issues is model drift, where AI models degrade in performance over time due to changes in data patterns, necessitating continuous retraining and adaptation. [47] Additionally, data privacy and security are critical concerns, particularly when sensitive data is used in cloud-based models. [48] [49] Ensuring model explainability is another challenge, as complex AI systems must be made interpretable for non-technical stakeholders. [50] Bias and fairness also require careful handling to prevent discrimination and promote equitable outcomes, as biases present in training data can propagate through AI algorithms, leading to unintended results. [51] [52] Addressing these challenges requires a multidisciplinary approach, combining technical acumen with ethical and regulatory considerations.

Sustainability

Training large-scale AI models involves processing immense datasets over prolonged periods, consuming considerable amounts of energy. This has raised concerns about the environmental impact of AI technologies, given the expansion of data centers required to support AI training and inference. [53] [54]

The increasing demand for computational power has led to significant electricity consumption, with AI-driven applications often leaving a substantial carbon footprint. In response, AI engineers and researchers are exploring ways to mitigate these effects by developing more energy-efficient algorithms, employing green data centers, and leveraging renewable energy sources. Addressing the sustainability of AI systems is becoming a critical aspect of responsible AI development as the industry continues to scale globally. [55] [56]

Educational pathways

Education in AI engineering typically involves advanced courses in software and data engineering. Key topics include machine learning, deep learning, natural language processing and computer vision. Many universities now offer specialized programs in AI engineering at both the undergraduate and postgraduate levels, including hands-on labs, project-based learning, and interdisciplinary courses that bridge AI theory with engineering practices. [57]

Professional certifications can also supplement formal education. Additionally, hands-on experience with real-world projects, internships, and contributions to open-source AI initiatives are highly recommended to build practical expertise. [57] [58]


References

  1. "What is Ai Engineering? Exploring the Roles of an Ai Engineer". ARTiBA. Retrieved 2024-10-17.
  2. Marr, Bernard. "15 Amazing Real-World Applications Of AI Everyone Should Know About". Forbes. Retrieved 2024-10-17.
  3. Deitel, Paul J.; Deitel, Harvey M. (2019). Intro to Python for Computer Science and Data Science: Learning to Program with AI, Big Data and The Cloud. Pearson.
  4. Tobin, Donal. "Which Database Is Right for Your Use Case?". Integrate.io. Retrieved 2024-10-18.
  5. Scalzo, Bert (2022-08-16). "Data Modeling 301 for the cloud: data lake and NoSQL data modeling and design". erwin Expert Blog. Retrieved 2024-10-18.
  6. Bahmani, Amir; Alavi, Arash; Buergel, Thore; Upadhyayula, Sushil; Wang, Qiwen; Ananthakrishnan, Srinath Krishna; Alavi, Amir; Celis, Diego; Gillespie, Dan; Young, Gregory; Xing, Ziye; Nguyen, Minh Hoang Huynh; Haque, Audrey; Mathur, Ankit; Payne, Josh (2021-10-01). "A scalable, secure, and interoperable platform for deep data-driven health management". Nature Communications. 12 (1): 5757. Bibcode:2021NatCo..12.5757B. doi:10.1038/s41467-021-26040-1. ISSN   2041-1723. PMC   8486823 . PMID   34599181.
  7. Bischl, Bernd; Kerschke, Pascal; Kotthoff, Lars; Lindauer, Marius; Malitsky, Yuri; Fréchette, Alexandre; Hoos, Holger; Hutter, Frank; Leyton-Brown, Kevin; Tierney, Kevin; Vanschoren, Joaquin (2016-08-01). "ASlib: A benchmark library for algorithm selection". Artificial Intelligence. 237: 41–58. arXiv: 1506.02465 . doi:10.1016/j.artint.2016.04.003. ISSN   0004-3702.
  8. "Explained: Neural networks". MIT News | Massachusetts Institute of Technology. 2017-04-14. Retrieved 2024-10-18.
  9. Feurer, Matthias; Hutter, Frank. "Hyperparameter Optimization". AutoML: Methods, Systems, Challenges. pp. 3–38.
  10. "Grid Search, Random Search, and Bayesian Optimization". Keylabs: latest news and updates. 2024-08-21. Retrieved 2024-10-18.
  11. West, Jeremy; Ventura, Dan; Warnick, Sean (2007). "Spring Research Presentation: A Theoretical Foundation for Inductive Transfer". Brigham Young University, College of Physical and Mathematical Sciences. Archived from the original on 2007-08-01. Retrieved 2024-10-18.
  12. Chaudhury, Krishnendu (2024). Math and Architectures of Deep Learning. Manning Publications.
  13. "The Power of Natural Language Processing". Harvard Business Review. 2022-04-19. ISSN   0017-8012 . Retrieved 2024-10-18.
  14. Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N.; Kaiser, Lukasz; Polosukhin, Illia (2023-08-01), Attention Is All You Need, arXiv: 1706.03762
  15. Sun, Peng; Yang, Xuezhen; Zhao, Xiaobing; Wang, Zhijuan (November 2018). "An Overview of Named Entity Recognition". 2018 International Conference on Asian Language Processing (IALP). IEEE. pp. 273–278. doi:10.1109/IALP.2018.8629225. ISBN   978-1-7281-1175-9.
  16. Chiche, Alebachew; Yitagesu, Betselot (2022-01-24). "Part of speech tagging: a systematic review of deep learning and machine learning approaches". Journal of Big Data. 9 (1): 10. doi: 10.1186/s40537-022-00561-y . ISSN   2196-1115.
  17. Susskind, Zachary; Arden, Bryce; John, Lizy K.; Stockton, Patrick; John, Eugene B. (2021-09-13), Neuro-Symbolic AI: An Emerging Class of AI Workloads and their Characterization, arXiv: 2109.06133
  18. Garnelo, Marta; Shanahan, Murray (2019-10-01). "Reconciling deep learning with symbolic artificial intelligence: representing objects and relations". Current Opinion in Behavioral Sciences. 29: 17–23. doi: 10.1016/j.cobeha.2018.12.010 . ISSN   2352-1546.
  19. Fan, Jiaxin; Yan, Qi; Li, Mohan; Qu, Guanqun; Xiao, Yang (July 2022). "A Survey on Data Poisoning Attacks and Defenses". 2022 7th IEEE International Conference on Data Science in Cyberspace (DSC). IEEE. pp. 48–55. doi:10.1109/DSC55868.2022.00014. ISBN   978-1-6654-7480-1.
  20. Ren, Kui; Zheng, Tianhang; Qin, Zhan; Liu, Xue (2020-03-01). "Adversarial Attacks and Defenses in Deep Learning". Engineering. 6 (3): 346–360. Bibcode:2020Engin...6..346R. doi: 10.1016/j.eng.2019.12.012 . ISSN   2095-8099.
  21. "How should we assess security and data minimisation in AI?". ico.org.uk. 2024-07-31. Retrieved 2024-10-23.
  22. European Parliament. Directorate General for Parliamentary Research Services. (2020). The impact of the general data protection regulation on artificial intelligence. LU: Publications Office. doi:10.2861/293. ISBN 978-92-846-6771-0.
  23. Ferrara, Emilio (March 2024). "Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies". Sci. 6 (1): 3. arXiv:2304.07683. doi:10.3390/sci6010003. ISSN 2413-4155.
  24. Haakman, Mark; Cruz, Luís; Huijgens, Hennie; van Deursen, Arie (2021-07-08). "AI lifecycle models need to be revised". Empirical Software Engineering. 26 (5): 95. doi:10.1007/s10664-021-09993-1. ISSN 1573-7616.
  25. Fritz (2023-09-21). "Pre-Trained Machine Learning Models vs Models Trained from Scratch". Fritz AI. Retrieved 2024-10-18.
  26. Alshalali, Tagrid; Josyula, Darsana (December 2018). "Fine-Tuning of Pre-Trained Deep Learning Models with Extreme Learning Machine". 2018 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE. pp. 469–473. doi:10.1109/CSCI46756.2018.00096. ISBN 978-1-7281-1360-9.
  27. Jain, Mitaksh (2024-05-27). "Data Acquisition: The Ultimate Guide to Master Machine Learning". Emeritus Online Courses. Retrieved 2024-10-18.
  28. Dodge, Jesse; Ilharco, Gabriel; Schwartz, Roy; Farhadi, Ali; Hajishirzi, Hannaneh; Smith, Noah (2020-02-14), Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping, arXiv:2002.06305
  29. "What is a Model Architecture? - Hopsworks". www.hopsworks.ai. Retrieved 2024-10-18.
  30. "How the training of the AI models works - The Data Scientist". 2023-08-16. Retrieved 2024-10-18.
  31. Thórisson, Kristinn R.; Benko, Hrvoje; Abramov, Denis; Arnold, Andrew; Maskey, Sameer; Vaseekaran, Aruchunan (2004). "Constructionist Design Methodology for Interactive Intelligences" (PDF). A.I. Magazine. Archived from the original (PDF) on 2006-09-03.
  32. "AI Model Packaging Best Practices | Restackio". Restack. Retrieved 2024-10-23.
  33. Mungoli, Neelesh (2023). Scalable, Distributed AI Frameworks: Leveraging Cloud Computing for Enhanced Deep Learning Performance and Efficiency. arXiv:2304.13738.
  34. "The Role of Performance Testing in AI Applications Digital Product Modernization". Round The Clock Technologies. 2023-07-11. Retrieved 2024-10-23.
  35. "Artificial intelligence validation and verification service". SQS. Retrieved 2024-10-23.
  36. Hand, David J.; Khan, Shakeel (June 2020). "Validating and Verifying AI Systems". Patterns. 1 (3): 100037. doi:10.1016/j.patter.2020.100037. ISSN 2666-3899. PMC 7660449. PMID 33205105.
  37. Katsaragakis, Manolis; Papadopoulos, Lazaros; Konijnenburg, Mario; Catthoor, Francky; Soudris, Dimitrios (October 2020). "Memory Footprint Optimization Techniques for Machine Learning Applications in Embedded Systems". 2020 IEEE International Symposium on Circuits and Systems (ISCAS). IEEE. pp. 1–4. doi:10.1109/ISCAS45731.2020.9181038. ISBN 978-1-7281-3320-1.
  38. Ramkumar, M.; Karthick, R.; Jeyashree, A. (2023). "Smart Monitoring and Enhancement of Network Latency in 5G Cloud Computing Using AI Based Machine Learning Model". ICTACT Journal on Data Science and Machine Learning. 4 (2): 421–425.
  39. Hua, Haochen; Li, Yutong; Wang, Tonghe; Dong, Nanqing; Li, Wei; Cao, Junwei (2023-01-16). "Edge Computing with Artificial Intelligence: A Machine Learning Perspective". ACM Comput. Surv. 55 (9): 184:1–184:35. doi:10.1145/3555802. ISSN   0360-0300.
  40. "Rollout and Training • BT Standard". BT Standard. Retrieved 2024-10-23.
  41. Brinkmann, Demetrios (2022-09-15). "The what, why and how of Machine Learning A/B Tests". MLOps Community. Retrieved 2024-10-23.
  42. Roller, Joshua (2024-06-20). "Canary Deployment: What It Is and Why It Matters". IEEE Computer Society. Retrieved 2024-10-23.
  43. Onnes, Annet (2022-05-05), Monitoring AI systems: A Problem Analysis, Framework and Outlook, arXiv:2205.02562
  44. "Why Continuous Monitoring is Essential for Maintaining AI Integrity – Wisecube AI – Research Intelligence Platform". Retrieved 2024-10-23.
  45. Treveil, Mark; Omont, Nicolas; Stenac, Clément; Lefevre, Kenji; Phan, Du; Zentici, Joachim; Lavoillotte, Adrien; Miyazaki, Makoto; Heidmann, Lynn (2020). Introducing MLOps. O'Reilly Media, Inc.
  46. Gift, Noah; Deza, Alfredo (2021). Practical MLOps. O'Reilly Media, Inc.
  47. Carter, Rickey E.; Anand, Vidhu; Harmon, David M.; Pellikka, Patricia A. (2022-01-01). "Model drift: When it can be a sign of success and when it can be an occult problem". Intelligence-Based Medicine. 6: 100058. doi:10.1016/j.ibmed.2022.100058. ISSN 2666-5212.
  48. Luqman, Alka; Mahesh, Riya; Chattopadhyay, Anupam (2024-01-31), Privacy and Security Implications of Cloud-Based AI Services: A Survey, arXiv:2402.00896
  49. Hassan, M. Ali (2024-03-20). "Cloud-Based AI: What Does it Mean For Cybersecurity?". Vaporvm. Retrieved 2024-10-23.
  50. Ali, Sajid; Abuhmed, Tamer; El-Sappagh, Shaker; Muhammad, Khan; Alonso-Moral, Jose M.; Confalonieri, Roberto; Guidotti, Riccardo; Del Ser, Javier; Díaz-Rodríguez, Natalia; Herrera, Francisco (2023-11-01). "Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence". Information Fusion. 99: 101805. doi:10.1016/j.inffus.2023.101805. hdl:10481/84480. ISSN 1566-2535.
  51. "What Do We Do About the Biases in AI?". Harvard Business Review. 2019-10-25. ISSN 0017-8012. Retrieved 2024-10-23.
  52. Ferrara, Emilio (March 2024). "Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies". Sci. 6 (1): 3. arXiv:2304.07683. doi:10.3390/sci6010003. ISSN 2413-4155.
  53. Editorial, MITSloan ME (2024-04-19). "Hyperscale Data Centers Growing at an Impressive Pace, Driven by Generative AI". MIT Sloan Management Review Middle East. Retrieved 2024-10-23.
  54. "Technology News | TechHQ | Latest Technology News & Analysis". TechHQ. 2024-01-26. Retrieved 2024-10-23.
  55. "AI models are devouring energy. Tools to reduce consumption are here, if data centers will adopt. | MIT Lincoln Laboratory". www.ll.mit.edu. Retrieved 2024-10-23.
  56. "Recommendations on Powering Artificial Intelligence and Data Center Infrastructure" (PDF). U.S. Department of Energy. 2024-07-30. Retrieved 2024-10-23.
  57. "Guide: How To Become An Artificial Intelligence Engineer". Indeed. 2024-05-06.
  58. "How To Become an Artificial Intelligence (AI) Engineer". Upwork. 2023-06-22.