Human–artificial intelligence collaboration

Human–AI collaboration is the study of how humans and artificial intelligence (AI) agents work together to accomplish a shared goal. [1] AI systems can aid humans in tasks ranging from decision making to art creation. [2] Examples of collaboration include medical decision-making aids, [3] [4] hate speech detection, [5] and music generation. [6] As AI systems become able to tackle more complex tasks, studies are exploring how different models and explanation techniques can improve human–AI collaboration.

Improving collaboration

Explainable AI

When a human uses an AI's output, they often want to understand why the model produced it. [7] While some models, such as decision trees, are inherently explainable, black-box models offer no clear explanations. Various explainable artificial intelligence methods aim to describe model outputs through post-hoc explanations [8] or visualizations, [9] but these methods can provide misleading or false explanations. [10] Studies have also found that explanations may not improve the performance of a human–AI team, instead simply increasing the human's reliance on the model's output. [11]
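
For illustration, one common family of post-hoc methods fits a simple, interpretable surrogate model to the black box's behavior near a single input, in the spirit of LIME. [8] The following Python sketch is a minimal, hypothetical example rather than a reference implementation; the synthetic data, random-forest model, kernel width, and ridge surrogate are all illustrative assumptions:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Ridge

    # Train a "black box" classifier on synthetic data.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    # Explain one instance: sample perturbations around it and query the
    # black box for its predicted class probabilities.
    x0 = X[0]
    rng = np.random.RandomState(0)
    perturbed = x0 + rng.normal(scale=0.5, size=(1000, X.shape[1]))
    preds = black_box.predict_proba(perturbed)[:, 1]

    # Weight samples by proximity to x0 (Gaussian kernel) and fit an
    # interpretable local surrogate; its coefficients approximate each
    # feature's local influence on the black box's output.
    weights = np.exp(-np.linalg.norm(perturbed - x0, axis=1) ** 2)
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    print("local feature influences:", surrogate.coef_)

The kernel width controls how local the explanation is: wider kernels average the black box's behavior over a larger region, trading faithfulness near the input for broader coverage.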

Trust in AI

A human's trust in an AI agent is an important factor in human–AI collaboration, determining whether the human follows or overrides the AI's input. [12] Various factors affect a person's trust in an AI system, including its accuracy [13] and reliability. [14]
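
As a toy illustration, not drawn from the cited studies, reliance can be modeled as a threshold decision: the human defers to the AI when its stated confidence exceeds a trust threshold, and otherwise decides alone. All accuracies and the threshold in the Python sketch below are arbitrary assumptions:

    import random

    random.seed(0)

    AI_ACCURACY = 0.8       # assumed accuracy of the AI agent
    HUMAN_ACCURACY = 0.65   # assumed accuracy of the human deciding alone
    TRUST_THRESHOLD = 0.7   # confidence above which the human defers to the AI

    def ai_predict(truth: bool) -> tuple[bool, float]:
        """Hypothetical AI: sometimes wrong, and (imperfectly) more
        confident when it happens to be right."""
        correct = random.random() < AI_ACCURACY
        confidence = random.uniform(0.6, 1.0) if correct else random.uniform(0.4, 0.9)
        return (truth if correct else not truth), confidence

    trials = 10_000
    team_correct = 0
    for _ in range(trials):
        truth = random.random() < 0.5
        prediction, confidence = ai_predict(truth)
        if confidence >= TRUST_THRESHOLD:
            decision = prediction  # follow the AI
        else:
            human_right = random.random() < HUMAN_ACCURACY
            decision = truth if human_right else not truth  # override the AI
        team_correct += decision == truth
    print(f"team accuracy: {team_correct / trials:.2%}")

In this toy model, following the AI selectively beats following it blindly only because confidence correlates with correctness; with miscalibrated confidence, thresholding adds nothing, which is one reason calibration and reliability matter for appropriate reliance.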

Adoption of AI

User adoption of AI is crucial to improving human–AI collaboration: adoption is not just a matter of using a new technology, but of transforming how work is done, how decisions are made, and how projects and organizations operate more efficiently. This transformation is essential for realizing the full potential of human–AI collaboration. In the evolving digital landscape, there is increasing pressure to adopt and effectively use AI, which is steadily entering management, work, and organizational ecosystems and enabling digital transformation. Successful adoption of AI is a complex, multifaceted process that requires careful consideration of many factors. [15]

Why is humanizing AI-generated text important?

Commonly cited reasons for humanizing AI-generated content include: [16]

  1. Relatability: Human readers seek emotionally resonant content. AI can lack the nuances that make content relatable.
  2. Authenticity: Readers value a genuine human touch behind content, ensuring it doesn't come off as robotic.
  3. Contextual Understanding: AI can misinterpret nuances, requiring human oversight for accuracy.
  4. Ethical Considerations: Humanizing AI content helps identify and rectify biases, ensuring fairness.
  5. Search Engine Performance: AI may not consistently meet search engine guidelines, risking penalties.
  6. Conversion Improvement: Humanized content connects emotionally and crafts tailored calls to action.
  7. Building Trust: Humanized content adds credibility, fostering reader trust.
  8. Cultural Sensitivity: Humanization ensures content is respectful and tailored to diverse audiences.

Related Research Articles

In the field of artificial intelligence (AI), tasks that are hypothesized to require artificial general intelligence to solve are informally known as AI-complete or AI-hard. Calling a problem AI-complete reflects the belief that it cannot be solved by a simple specific algorithm.

Chatbot

A chatbot is a software application or web interface that is designed to mimic human conversation through text or voice interactions. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.

Machine learning (ML) is a field of study in artificial intelligence concerned with the development and study of statistical algorithms that can learn from data and generalize to unseen data and thus perform tasks without explicit instructions. Recently, artificial neural networks have been able to surpass many previous approaches in performance.

A recommender system, or a recommendation system, is a subclass of information filtering system that provides suggestions for items that are most pertinent to a particular user. Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer.

Ben Shneiderman

Ben Shneiderman is an American computer scientist, a Distinguished University Professor in the University of Maryland Department of Computer Science, part of the University of Maryland College of Computer, Mathematical, and Natural Sciences at College Park, and the founding director (1983–2000) of the University of Maryland Human-Computer Interaction Lab. He has conducted fundamental research in human–computer interaction, developing new ideas, methods, and tools such as the direct manipulation interface and his Eight Golden Rules of Interface Design.

Value sensitive design (VSD) is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner. VSD originated within the fields of information systems design and human–computer interaction to address design issues by emphasizing the ethical values of direct and indirect stakeholders. It was developed by Batya Friedman and Peter Kahn at the University of Washington starting in the late 1980s and early 1990s. In 2019, Batya Friedman and David Hendry wrote a book on the topic, "Value Sensitive Design: Shaping Technology with Moral Imagination". VSD takes human values into account in a well-defined manner throughout the design process, using investigations of three kinds: conceptual, empirical, and technical. These investigations are intended to be iterative, allowing the designer to modify the design continuously.

Eric Horvitz

Eric Joel Horvitz is an American computer scientist, and Technical Fellow at Microsoft, where he serves as the company's first Chief Scientific Officer. He was previously the director of Microsoft Research Labs, including research centers in Redmond, WA, Cambridge, MA, New York, NY, Montreal, Canada, Cambridge, UK, and Bangalore, India.

Apache SINGA

Apache SINGA is an Apache top-level project for developing an open source machine learning library. It provides a flexible architecture for scalable distributed training, is extensible to run over a wide range of hardware, and has a focus on health-care applications.

Explainable AI (XAI), often overlapping with interpretable AI or explainable machine learning (XML), refers either to an artificial intelligence (AI) system over which humans can retain intellectual oversight, or to the methods used to achieve this. The main focus is on making the reasoning behind the AI's decisions or predictions more understandable and transparent. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision.

In the regulation of algorithms, particularly artificial intelligence and its subfield of machine learning, a right to explanation is a right to be given an explanation for an output of the algorithm. Such rights primarily refer to individual rights to be given an explanation for decisions that significantly affect an individual, particularly legally or financially. For example, a person who applies for a loan and is denied may ask for an explanation, which could be "Credit bureau X reports that you declared bankruptcy last year; this is the main factor in considering you too likely to default, and thus we will not give you the loan you applied for."

Algorithmic bias

Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.

Animal–computer interaction (ACI) is a field of research on the design and use of technology with, for, and by animals, covering wildlife, zoo, and domesticated animals in different roles. It emerged from, and was heavily influenced by, the discipline of human–computer interaction (HCI). As the field has expanded, it has become increasingly multidisciplinary, incorporating techniques and research from disciplines such as artificial intelligence (AI), requirements engineering (RE), and veterinary science.

ACM Conference on Fairness, Accountability, and Transparency is a peer-reviewed academic conference series about ethics and computing systems. Sponsored by the Association for Computing Machinery, this conference focuses on issues such as algorithmic transparency, fairness in machine learning, bias, and ethics from a multi-disciplinary perspective. The conference community includes computer scientists, statisticians, social scientists, scholars of law, and others.

Artificial intelligence art

Artificial intelligence art is visual artwork created through the use of an artificial intelligence (AI) program.

Tawanna Dillahunt is an American computer scientist and information scientist based at the University of Michigan School of Information. She runs the Social Innovations Group, a research group that designs, builds, and enhances technologies to solve real-world problems. Her research has been cited over 4,600 times according to Google Scholar.

Hanna Wallach

Hanna Wallach is a computational social scientist and partner research manager at Microsoft Research. Her work makes use of machine learning models to study the dynamics of social processes. Her current research focuses on issues of fairness, accountability, transparency, and ethics as they relate to AI and machine learning.

Jofish Kaye is an American and British scientist specializing in human-computer interaction and artificial intelligence. He runs interaction design and user research at anthem.ai, and is an editor of Personal & Ubiquitous Computing.

Margaret Mitchell (scientist)

Margaret Mitchell is a computer scientist who works on algorithmic bias and fairness in machine learning. She is best known for her work on automatically removing undesired biases concerning demographic groups from machine learning models, as well as more transparent reporting of their intended use.

Automated decision-making (ADM) involves the use of data, machines and algorithms to make decisions in a range of contexts, including public administration, business, health, education, law, employment, transport, media and entertainment, with varying degrees of human oversight or intervention. ADM involves large-scale data from a range of sources, such as databases, text, social media, sensors, images or speech, that is processed using various technologies including computer software, algorithms, machine learning, natural language processing, artificial intelligence, augmented intelligence and robotics. The increasing use of automated decision-making systems (ADMS) across a range of contexts presents many benefits and challenges to human society requiring consideration of the technical, legal, ethical, societal, educational, economic and health consequences.

AI literacy, or artificial intelligence literacy, is the ability to understand, use, monitor, and critically reflect on AI applications. The term usually refers to teaching skills and knowledge to the general public, not to people who are adept in AI.

References

  1. Sturm, Timo; Gerlach, Jin P.; Pumplun, Luisa; Mesbah, Neda; Peters, Felix; Tauchert, Christoph; Nan, Ning; Buxmann, Peter (2021). "Coordinating Human and Machine Learning for Effective Organizational Learning". MIS Quarterly. 45 (3): 1581–1602. doi:10.25300/MISQ/2021/16543. S2CID   238222756.
  2. Mateja, Deborah; Heinzl, Armin (July 2021). "Towards Machine Learning as an Enabler of Computational Creativity". IEEE Transactions on Artificial Intelligence. 2 (6): 460–475. doi: 10.1109/TAI.2021.3100456 . ISSN   2691-4581. S2CID   238941032.
  3. Yang, Qian; Steinfeld, Aaron; Zimmerman, John (2019-05-02). "Unremarkable AI". Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. CHI '19. Glasgow, Scotland, UK: Association for Computing Machinery. pp. 1–11. arXiv: 1904.09612. doi:10.1145/3290605.3300468. ISBN 978-1-4503-5970-2. S2CID 127989976.
  4. Patel, Bhavik N.; Rosenberg, Louis; Willcox, Gregg; Baltaxe, David; Lyons, Mimi; Irvin, Jeremy; Rajpurkar, Pranav; Amrhein, Timothy; Gupta, Rajan; Halabi, Safwan; Langlotz, Curtis (2019-11-18). "Human–machine partnership with artificial intelligence for chest radiograph diagnosis". npj Digital Medicine. 2 (1): 111. doi:10.1038/s41746-019-0189-7. ISSN   2398-6352. PMC   6861262 . PMID   31754637.
  5. "Facebook's AI for Hate Speech Improves. How Much Is Unclear". Wired. ISSN   1059-1028 . Retrieved 2021-02-08.
  6. Roberts, Adam; Engel, Jesse; Mann, Yotam; Gillick, Jon; Kayacik, Claire; Nørly, Signe; Dinculescu, Monica; Radebaugh, Carey; Hawthorne, Curtis; Eck, Douglas (2019). "Magenta Studio: Augmenting Creativity with Deep Learning in Ableton Live". Proceedings of the International Workshop on Musical Metacreation (MUME).
  7. Samek, Wojciech; Montavon, Grégoire; Vedaldi, Andrea; Hansen, Lars Kai; Müller, Klaus-Robert (2019-09-10). Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. Springer Nature. ISBN   978-3-030-28954-6.
  8. Ribeiro, Marco Tulio; Singh, Sameer; Guestrin, Carlos (2016-08-13). ""Why Should I Trust You?"". Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD '16. San Francisco, California, USA: Association for Computing Machinery. pp. 1135–1144. doi:10.1145/2939672.2939778. ISBN   978-1-4503-4232-2. S2CID   13029170.
  9. Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. (October 2017). "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization". 2017 IEEE International Conference on Computer Vision (ICCV). pp. 618–626. arXiv: 1610.02391 . doi:10.1109/ICCV.2017.74. ISBN   978-1-5386-1032-9. S2CID   206771654.
  10. Adebayo, Julius; Gilmer, Justin; Muelly, Michael; Goodfellow, Ian; Hardt, Moritz; Kim, Been (2018-12-03). "Sanity checks for saliency maps". Proceedings of the 32nd International Conference on Neural Information Processing Systems. NIPS'18. Montréal, Canada: Curran Associates Inc.: 9525–9536. arXiv: 1810.03292 .
  11. Bansal, Gagan; Wu, Tongshuang; Zhou, Joyce; Fok, Raymond; Nushi, Besmira; Kamar, Ece; Ribeiro, Marco Tulio; Weld, Daniel S. (2021-01-12). "Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance". arXiv: 2006.14779 [cs.AI].
  12. Glikson, Ella; Woolley, Anita Williams (2020-03-26). "Human Trust in Artificial Intelligence: Review of Empirical Research". Academy of Management Annals. 14 (2): 627–660. doi:10.5465/annals.2018.0057. ISSN   1941-6520. S2CID   216198731.
  13. Yin, Ming; Wortman Vaughan, Jennifer; Wallach, Hanna (2019-05-02). "Understanding the Effect of Accuracy on Trust in Machine Learning Models". Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. CHI '19. Glasgow, Scotland, UK: Association for Computing Machinery. pp. 1–12. doi:10.1145/3290605.3300509. ISBN 978-1-4503-5970-2. S2CID 109927933.
  14. Bansal, Gagan; Nushi, Besmira; Kamar, Ece; Lasecki, Walter S.; Weld, Daniel S.; Horvitz, Eric (2019-10-28). "Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance". Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. 7 (1): 2–11. doi: 10.1609/hcomp.v7i1.5285 . S2CID   201685074.
  15. Tursunbayeva, A.; Chalutz-Ben Gal, H. (2024). "Adoption of artificial intelligence: A TOP framework-based checklist for digital leaders" (PDF). Business Horizons. 67 (4): 357–368. doi:10.1016/j.bushor.2024.04.006.
  16. "Humanize AI Text". www.humanizeaitext.org. Retrieved 2023-10-19.