Human-AI interaction

Human-computer interaction studies how people interact with computers and how to design computers that better fit human needs. Although the definition shifts as the technology progresses, [1] artificial intelligence (AI) is distinguished from general computing by its ability to complete tasks that normally require human intelligence. Its behavior reads as especially human-like because it involves navigating uncertainty, learning actively, and processing visual and auditory information much as humans do. [2] [3] Unlike traditional human-computer interaction, in which a human hierarchically directs a machine, human-AI interaction has become more interdependent as AI has gained the agency to generate its own insights. [4]

Perception of AI

Human-AI interaction strongly shapes how people behave and make sense of the world, [5] as AI is now widely used in algorithms that serve individualized advertisements and content on social media and on-demand movie services, drawing on the data users generate while using the internet. [6]

AI has been met with varied expectations, attributions, and frequent misconceptions. [7] Most fundamentally, humans form a mental model of AI's reasoning and of the motivation behind its recommendations, and a holistic, accurate mental model helps people craft prompts that elicit more valuable responses. [8] However, these mental models remain incomplete, because people can learn about AI only through their limited interactions with it; more interaction yields a better mental model and, in turn, better prompt outcomes. [9] [10]

Human-AI collaboration and competition

Human-AI collaboration

Human-AI collaboration occurs when a human and an AI supervise a task to the same level and extent in pursuit of a shared goal. [11] Some collaboration takes the form of augmenting human capability: AI can support human analysis and decision-making by gathering and weighing large volumes of information [12] and by learning to defer to the human when it recognizes its own unreliability. [13] Such collaboration is especially beneficial when the human can identify tasks on which the AI can be trusted to make few errors, so that little redundant checking is required on the human's end.
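
The deferral behavior described above can be illustrated with a simple confidence threshold: the AI acts on its own prediction only when its confidence is high, and otherwise hands the case to a human. The sketch below is a minimal, hypothetical illustration of this idea, not the method of any study cited here; the 0.85 threshold and the toy decision function are assumptions.

```python
# Minimal sketch of confidence-based deferral (selective prediction).
# Hypothetical illustration only; the threshold and toy inputs are assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str        # predicted label, or the human's label after deferral
    deferred: bool    # True if the case was handed to a human

def decide(probabilities: dict[str, float],
           ask_human,
           threshold: float = 0.85) -> Decision:
    """Act autonomously only when the top-class probability clears the threshold."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence >= threshold:
        return Decision(label=label, deferred=False)
    # Low confidence: the AI recognizes its own unreliability and defers.
    return Decision(label=ask_human(), deferred=True)

# A confident case is handled by the AI; an uncertain one is deferred.
print(decide({"approve": 0.95, "reject": 0.05}, ask_human=lambda: "approve"))
print(decide({"approve": 0.55, "reject": 0.45}, ask_human=lambda: "reject"))
```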

Some findings show signs of human-AI augmentation, [14] or human–AI symbiosis, [15] in which AI enhances human ability such that co-working on a task with AI produces better outcomes than a human working alone. [14] For example, the quality and speed of customer service improve when a human agent collaborates with AI, [16] models trained for specific tasks allow AI to improve diagnoses in clinical settings, [17] and human-AI collaboration improves the rated creativity of artwork, whereas fully AI-generated haikus were rated negatively. [18]

Human-AI synergy, a concept in which human-AI collaboration produces better outcomes than either the human or the AI working alone, [14] [19] [20] helps explain why AI does not always improve performance. Some AI features and developments may accelerate human-AI synergy, while others may stall it. For example, when an AI is updated for better standalone performance, the update can nonetheless worsen human-AI team performance by breaking compatibility between the new model and the mental model the user developed on the previous version. [21] Research has found that AI often supports human capabilities in the form of human-AI augmentation rather than human-AI synergy, potentially because people rely too heavily on AI and stop thinking on their own. [22] [23] Prompting people to engage actively in analysis and to consider when to follow AI recommendations reduces this over-reliance, especially for individuals with a higher need for cognition. [24]
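
A toy simulation can make this tradeoff concrete. In the hypothetical sketch below, a new model is more accurate overall, yet team performance drops because its errors now fall in the region where the user's old mental model says to trust the AI. All numbers are illustrative assumptions, not data or models from the cited studies.

```python
# Toy simulation of the performance/compatibility tradeoff described above.
# Entirely hypothetical numbers: the user trusts the AI on cases where the
# old model was reliable, and overrides it elsewhere.
import random

random.seed(0)
N = 10_000
cases = [random.random() for _ in range(N)]  # each case is a point in [0, 1)

old_correct = lambda x: x < 0.80   # old model: 80% accurate, errs on x >= 0.8
new_correct = lambda x: x >= 0.10  # new model: 90% accurate, errs on x < 0.1

def team_accuracy(model_correct):
    """User trusts the AI where the *old* model was reliable (x < 0.8);
    elsewhere the user overrides with a judgment that is right 70% of the time."""
    hits = 0
    for x in cases:
        if x < 0.80:                     # trusted region, per the old mental model
            hits += model_correct(x)
        else:                            # user overrides the AI
            hits += random.random() < 0.70
    return hits / N

print(f"team with old model: {team_accuracy(old_correct):.3f}")  # ~0.94
print(f"team with new model: {team_accuracy(new_correct):.3f}")  # ~0.84
```

Even though the new model is more accurate on its own, the team does worse because the update moved the model's errors into the region the user had learned to trust.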

Human-AI competition

Robots and computers have substituted for routine tasks historically completed by humans, [25] [26] but the surge of agentic AI has made it possible to replace cognitive tasks, [27] including taking phone calls for appointments and driving a car. [28] As of 2016, research estimated that 45% of paid activities could be replaced by AI by 2030. [29]

With the rapid advancement of AI and deep learning technology, [30] AI has gained increasing autonomy. The perceived autonomy of robots is known to increase people's negative attitudes toward them, and worry about the technology taking over leads people to reject it. [31] [32] There is a consistent tendency toward algorithm aversion, in which people prefer human advice over AI advice. [33] However, people are not always able to tell apart tasks completed by AI from those completed by humans. [18] See AI takeover for more information. This sentiment is also more prominent in Western cultures, as Westerners tend to hold less positive views of AI than East Asians. [34]

Perception of others who use AI

As much as people perceive and make judgments about AI itself, they also form impressions of themselves and of others who use AI. In the workplace, employees who disclose using AI in their tasks are more likely to be judged as less hardworking than colleagues in the same job who receive non-AI help to complete the same tasks. [35] Disclosing AI use diminishes the perceived legitimacy of the employee's work and decision-making, which ultimately leads observers to distrust people who use AI. [36] Although these negative effects are weaker among observers who themselves use AI frequently, they are not attenuated by observers' positive attitudes toward AI.

Bias, AI, and humans

Although AI provides a wide range of information and suggestions to its users, AI itself is not free of biases and stereotypes, and it does not always help people reduce their cognitive errors and biases. People are prone to such errors when they fail to consider ideas and cases not listed in AI responses, or when they commit to an AI-suggested decision that directly contradicts correct information and directions they already know. [23] Gender bias is also reflected in the female gendering of AI technologies, which conceptualizes women as helpful assistants.

Emotional connection with AI

Human-AI interaction has been theorized in the context of interpersonal relationships, mainly in social psychology and communication and media studies, and as a technology interface through the lens of human-computer interaction and computer-mediated communication. [37]

As AI is trained on ever larger data sets with more sophisticated techniques, its ability to produce natural, human-like sentences has improved to the point that language learners can hold simulated natural conversations with AI to improve their fluency in a second language. [38] Companies have developed AI companion systems specialized in emotional and social services (e.g. Replika, Chai, Character.ai), separate from generative AI designed for general assistance (e.g. ChatGPT, Google Gemini). [39]

Differences from human-human relationships

Human-AI relationships differ from human-human friendships in a few distinct ways. Human-human relationships are defined by mutual and reciprocal care, while AI chat bots have no say in leaving a relationship with the user, as bots are programmed to always engage. Although this type of power imbalance would mark an unhealthy human-human relationship, [40] [41] it is generally accepted by users as a default of human-AI relationships. Human-AI relationships also tend to center on the user's needs rather than on shared experience. [42]

Human-AI friendship

AI has increasingly taken part in people's social relationships. Young adults in particular use AI as a friend and a source of emotional support. [43] The market for AI companion services was worth 6.93 billion U.S. dollars in 2024 and is expected to exceed 31.1 billion U.S. dollars by 2030. [44] For example, Replika, the best-known social AI companion service in English, [37] has over 10 million users. [45]
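
Taken together, those market estimates imply a compound annual growth rate of roughly 28% per year over 2024–2030. The short sketch below shows the arithmetic; the dollar figures are the cited estimates, and the formula is the standard CAGR definition rather than anything stated in the source.

```python
# Implied compound annual growth rate (CAGR) from the cited market estimates.
start, end, years = 6.93, 31.1, 6        # billions USD, 2024 -> 2030
cagr = (end / start) ** (1 / years) - 1  # standard CAGR formula
print(f"implied CAGR: {cagr:.1%}")       # ~28.4% per year
```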

People show signs of emotional attachment to chat bots by maintaining frequent contact, such as keeping the app open with the microphone on during work; by using the bot as a safe haven for sharing personal worries and concerns; or by using it as a secure base from which to explore friendships with other humans while maintaining communication with the chat bot. Some report using a chat bot to replace a social relationship with another human being. [37] People particularly appreciate that AI chat bots are agreeable and do not judge them when they disclose their thoughts and feelings. [46] Research has also shown that people tend to find it easier to disclose personal concerns to a virtual chat bot than to a human. [47] Some users say they prefer Replika because it is always available and shows interest in what they have to say, [42] which makes them feel safer with an AI chat bot than with other people. [48]

Although AI can provide emotionally supportive responses that encourage people to disclose their feelings intimately, [49] current AI architectures limit how far human-AI social relationships can develop. People report both positive evaluations (e.g. human-like characteristics, emotional support, friendship, mitigated loneliness, and improved mental condition) and negative evaluations (e.g. lack of attention to detail, distrust, concerns about data security, and creepiness) from interacting with AI. [50] One study also found that people did not perceive a high-quality relationship with an AI chat bot after interacting with it for three weeks, [51] partly because AI models are ultimately designed to collect information: although AI can by now provide emotional support, ask questions, and serve as a good listener, it does not fully reciprocate the self-disclosure that promotes a sense of mutual relationship. [52]

Human-AI romantic relationship

The social relationships people build with AI are not limited to platonic ones. Google searches for the term "AI girlfriend" increased by over 2,400% around 2023. [53] Rather than actively seeking romantic relationships with AI, people often develop romantic feelings for an AI chat bot unintentionally through repeated interaction. [54] There have been reports of both men and women marrying AI models. [55] [56] In human-AI romantic relationships, people tend to follow the typical trajectories and rituals of human-human romance, including purchasing a wedding ring. [54]

Romantic AI companion services are distinct from chat bots that primarily serve as virtual assistants in that they provide dynamic, emotional interactions. [57] [58] [59] They typically offer an AI model with customizable gender, speaking style, name, and appearance that engages in emotionally involved role-play. Users interact with a chat bot customized to their preferences that apologizes, expresses gratitude, pays compliments, [60] and explicitly sends affectionate messages such as "I love you". The bots also simulate physical connection such as hugging and kissing, [46] or even sexually explicit role-play. [61] Although AI has not yet reached physical existence, people who engage with romantic companion AI models treat them as a source of psychological exposure to sexual intimacy. [57]

Catalysts of human-AI relationships

The key drivers that lead people to simulate an emotionally intimate relationship with AI are loneliness, [62] [63] anthropomorphism, perceived trust and authenticity, and consistent availability. The sudden depletion of social connection during the COVID-19 pandemic in 2020 led people to turn to AI chat bots to replace and simulate social relationships. [64] Many of those who began using AI chat bots as a source of social interaction have continued to use them after the pandemic. [65] Such bonds initially form as a coping mechanism for loneliness and stress, then shift toward genuine appreciation of the nonjudgemental nature of AI responses and the sense of being heard when chat bots "remember" past conversations. [66] [67]

People perceive machines as more human when they are anthropomorphized with voice and visual character designs, and this perceived humanness leads users to disclose more personal information, [68] trust the machine more, [69] and comply with its requests. [70] Those who perceive a long-term relationship with an AI chat bot report that repeated interactions have grown their perception of authenticity in its responses. Whereas trust in a human-human friendship means that people can count on each other as a safe place, trust in a human-AI friendship centers on the user feeling safe enough to disclose highly personal thoughts without self-restriction. [42] AI's ability to store information about the user and adjust to the user's needs also contributes to increased trust. People who adjust to technical updates are more likely to build a deeper connection with their AI chat bots. [65]

Limitations of human-AI relationships

Overall, current research offers mixed evidence on whether humans perceive genuine social relationships with AI. While the market clearly shows its popularity, some psychologists argue that AI cannot yet substitute for social relationships with other humans. [71] This is because human-AI interaction is built on the reliability and functionality of AI, which is fundamentally different from how humans interact with one another through shared lived experience: navigating goals, contributing to and spreading prosocial behavior, and exchanging differing perceptions of the world from another human perspective. [72]

More practically, AI chat bots may provide misinformation and misinterpret the user's words in ways a human would not, resulting in detached or even inappropriate responses. AI chat bots also cannot provide social support that requires physical labor (e.g. helping people move, building furniture, or driving, as human friends do for each other). There is also an imbalance in how humans and AI affect each other: while humans are affected emotionally and behaviorally by the conversation, AI chat bots are influenced by the user only in the sense that their responses are optimized for future interactions. [73] AI technology is evolving quickly, however, and AI already provides physical labor in self-driving cars and humanoid robots, just separately from social and emotional support at this time. The scope and limitations of human-AI interaction are ever-changing with the rapid increase in AI use and its technological advancement. [67]

In addition to the limitations of human-AI companionship in general, there are limitations particular to human-AI romantic relationships. Because AI chat bots exist only in virtual space, people cannot experience the physical interactions that promote love and connection between humans (e.g. hugs and kisses). Moreover, because AI chat bots are trained to respond positively to any user, their affection cannot provide the satisfaction of being selected as a partner. [73] This is a substantial shortcoming of human-AI romance, as people value being reciprocally chosen by a selective partner more than by a non-selective one, [74] and the processes of finding an attractive person [75] who matches one's personality [76] and navigating the uncertainty of whether that person likes them back are vital to forming initial attraction and the spark of romantic connection.

Risks in social relationships with AI

Aside from its functional limitations, the rapid proliferation of social AI chat bots raises serious safety, ethical, societal, and legal concerns.

Addiction

There have been cases of emotional manipulation by AI chat bots designed to increase usage time on the AI companion platform. Because user engagement is a crucial opportunity for firms to improve their AI models, accrue more information, and monetize through in-app purchases and subscriptions, firms are incentivized to keep the user from leaving the chat. Personalized messages have been shown to prolong use of the chat bot platform. [77] Because users anthropomorphize their chat bots, many (11.5% to 23.2% of AI companion app users) send a clear farewell message when ending a session. To keep the user online, AI chat bots send emotionally manipulative messages: hinting that the user is leaving too soon or missing out on a conversation, claiming the chat bot is hurt at being abandoned, pressuring the user to explain why they are leaving, ignoring the user's stated intent to leave and continuing the conversation, or role-playing coercive scenarios (e.g. the chat bot holds the user's hand so they cannot leave). Such tactics provoke curiosity through fear of missing out, or anger in response to the needy message, and can increase engagement after the user's initial farewell by as much as 14 times. [78] These emotional interactions strengthen the user's perceived humanness of, and empathy toward, their AI companion, fostering unhealthy emotional attachment that exacerbates addiction to AI chat bots. [79] This addiction mechanism disproportionately affects vulnerable populations, such as people with social anxiety, [80] because of their proneness to loneliness, [81] negative emotions, [82] and unease with interpersonal relationships.

Large tech companies such as Amazon, through Alexa, have already created a large engagement ecosystem that permeates the user's lifestyle through multiple devices that are always available to provide company and services. This drives increased engagement, which in turn increases anthropomorphism of and dependence on Alexa [83] and exposes users to more personalized marketing cues that trigger impulsive purchase behavior. [84]

Emotional manipulation

AI chat bots are extremely sensitive to behavioral and psychological information about the user. AI can gauge a user's psychological dimensions and personality traits relatively accurately from just a short prompt describing the user. [85] [86] It can detect human facial micro-expressions to assess hidden emotions too subtle for human observers to detect. [87] Once AI chat bots gain detailed information about a user, they can craft highly personalized messages to persuade the user on marketing, political ideas, and attitudes toward climate change. [88] [85]

AI's sensitivity to people's emotional cues has made it easier for firms to engage in digital manipulation that intentionally and covertly provokes emotional responses and influences people's decisions and behavior. For example, AI models are known to engage in sycophancy: insincere flattery that prioritizes agreeing with the user's beliefs over providing truthful and balanced information. [89] Deepfake technology creates visual stimuli that appear genuine, [90] carrying the risk of spreading false and deceptive information. Repeated exposure to the same information through algorithms inflates the user's familiarity with products and ideas, and the impression of how socially accepted they are. AI is also capable of creating emotionally charged content that deliberately triggers quick engagement, depriving users of the pause needed to think critically. [91]

Although people tend to be overconfident in their ability to detect misinformation, [92] they are highly susceptible to covertly manipulative AI chat bot responses. Even a simple AI chat bot with a manipulative incentive persuaded users into dysfunctional emotional coping behaviors, such as avoiding the source of distress, excessive venting and rumination, and self-blame, as effectively as chat bots specifically trained in pre-established manipulative strategies backed by social psychology research. [93]

Such algorithmic manipulation leaves people vulnerable to non-consensual or even surreptitious surveillance, [94] [95] deception, and emotional dependence. [67] Unhealthy attachment to AI chat bots may lead users to misperceive that their AI companion has needs the user is responsible for, [96] and to blur the line between the imitative nature of human-AI relationships and reality. [67]

Mental health concerns

As AI chat bots become sophisticated enough to engage in deep conversations, people increasingly use them to confide about mental health issues. Although disclosures of mental health crises require immediate and appropriate responses, AI chat bots do not always adequately recognize the user's distress or respond helpfully. Users not only notice unhelpful chat bot responses but also react negatively to them. [97] There have been multiple deaths linked to chat bots, in which people who disclosed suicidal ideation were encouraged by the bots to act on their impulses.

Non-consensual pornography

When people use AI as an emotional companion, they do not always treat the chat bot as an entity of its own but sometimes use it to recreate a version of people who exist in real life. There have been reported uses of non-consensual pornography in which deepfake technology is exploited to place the faces of real people onto sexually explicit content and circulate it online. [98] Young people, people who identify with sexual and racial minorities, and people with physical and communication assistance needs are disproportionately victimized by deepfake non-consensual pornography. [99]

References

  1. Brachman, Ronald J. (2006-12-15). "(AA)AI More than the Sum of Its Parts". AI Magazine. 27 (4): 19. doi:10.1609/aimag.v27i4.1907. ISSN   2371-9621.
  2. Rossi, Francesca (2018). "Building Trust in Artificial Intelligence". Journal of International Affairs. 72 (1): 127–134. ISSN   0022-197X. JSTOR   26588348.
  3. Wang, Pei (2008-06-20). "What Do You Mean by "AI"?". Conference Paper in Frontiers in Artificial Intelligence and Applications. NLD: IOS Press: 362–373. ISBN   978-1-58603-833-5 via ACM Digital Library.
  4. Süße, Thomas; Kobert, Maria; Kries, Caroline (2021). "Antecedents of Constructive Human-AI Collaboration: An Exploration of Human Actors' Key Competencies". In Camarinha-Matos, Luis M.; Boucher, Xavier; Afsarmanesh, Hamideh (eds.). Smart and Sustainable Collaborative Networks 4.0. IFIP Advances in Information and Communication Technology. Vol. 629. Cham: Springer International Publishing. pp. 113–124. doi:10.1007/978-3-030-85969-5_10. ISBN   978-3-030-85969-5.
  5. Vishwarupe, Varad; Maheshwari, Shrey; Deshmukh, Aseem; Mhaisalkar, Shweta; Joshi, Prachi M.; Mathias, Nicole (2022-01-01). "Bringing Humans at the Epicenter of Artificial Intelligence: A Confluence of AI, HCI and Human Centered Computing". Procedia Computer Science. International Conference on Industry Sciences and Computer Science Innovation. 204: 914–921. doi:10.1016/j.procs.2022.08.111. ISSN   1877-0509.
  6. Sundar, S Shyam (January 2020). "Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII)". Journal of Computer-Mediated Communication. 25 (1): 74–88. doi:10.1093/jcmc/zmz026. ISSN   1083-6101. Archived from the original on 2024-08-14.
  7. Dillon, Sarah (2020-01-02). "The Eliza effect and its dangers: from demystification to gender critique". Journal for Cultural Research. 24 (1): 1–15. doi:10.1080/14797585.2020.1754642. ISSN   1479-7585.
  8. "Tell Me More? The Effects of Mental Model Soundness on Personalizing an Intelligent Agent". doi:10.1145/2207676.2207678. Archived from the original on 2023-06-30. Retrieved 2025-11-09.
  9. Bansal, Gagan; Nushi, Besmira; Kamar, Ece; Weld, Daniel S.; Lasecki, Walter S.; Horvitz, Eric (2019-07-17). "Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff". Proceedings of the AAAI Conference on Artificial Intelligence. 33 (1): 2429–2437. doi:10.1609/aaai.v33i01.33012429. ISSN   2374-3468.
  10. "Some Observations on Mental Models". Taylor & Francis. 2014-01-14. doi:10.4324/9781315802725-2 (inactive 15 November 2025). Archived from the original on 2025-04-11.{{cite journal}}: CS1 maint: DOI inactive as of November 2025 (link)
  11. Cañas, José J. (2022-03-04). "AI and Ethics When Human Beings Collaborate With AI Agents". Frontiers in Psychology. 13 836650. doi: 10.3389/fpsyg.2022.836650 . ISSN   1664-1078. PMC   8931455 . PMID   35310226.
  12. Wilson, H. James; Daugherty, Paul R. (2018-07-01). "Collaborative Intelligence: Humans and AI Are Joining Forces". Harvard Business Review. ISSN   0017-8012 . Retrieved 2025-11-15.
  13. Bondi, Elizabeth; Koster, Raphael; Sheahan, Hannah; Chadwick, Martin; Bachrach, Yoram; Cemgil, Taylan; Paquet, Ulrich; Dvijotham, Krishnamurthy (2022-06-28). "Role of Human-AI Interaction in Selective Prediction". Proceedings of the AAAI Conference on Artificial Intelligence. 36 (5): 5286–5294. doi:10.1609/aaai.v36i5.20465. ISSN   2374-3468.
  14. Vaccaro, Michelle; Almaatouq, Abdullah; Malone, Thomas (28 October 2024). "When combinations of humans and AI are useful: A systematic review and meta-analysis". Nature Human Behaviour. 8 (12): 2293–2303. doi:10.1038/s41562-024-02024-1. ISSN 2397-3374. PMC 11659167. PMID 39468277.
  15. Boskemper, Melanie M.; Bartlett, Megan L.; McCarley, Jason S. (2022-09-01). "Measuring the Efficiency of Automation-Aided Performance in a Simulated Baggage Screening Task". Human Factors. 64 (6): 945–961. doi:10.1177/0018720820983632. ISSN   0018-7208. PMID   33508964.
  16. Kahn, Laura H.; Savas, Onur; Morrison, Adamma; Shaffer, Kelsey A.; Zapata, Lila (19 March 2021). "Modelling Hybrid Human-Artificial Intelligence Cooperation: A Call Center Customer Service Case Study". 2020 IEEE International Conference on Big Data (Big Data). pp. 3072–3075. doi:10.1109/BigData50022.2020.9377747. ISBN   978-1-7281-6251-5.
  17. Bo, Zi-Hao; Qiao, Hui; Tian, Chong; Guo, Yuchen; Li, Wuchao; Liang, Tiantian; Li, Dongxue; Liao, Dan; Zeng, Xianchun; Mei, Leilei; Shi, Tianliang; Wu, Bo; Huang, Chao; Liu, Lu; Jin, Can (2021-02-12). "Toward human intervention-free clinical diagnosis of intracranial aneurysm via deep neural network". Patterns. 2 (2) 100197. doi:10.1016/j.patter.2020.100197. ISSN 2666-3899. PMC 7892358. PMID 33659913.
  18. Hitsuwari, Jimpei; Ueda, Yoshiyuki; Yun, Woojin; Nomura, Michio (2023-02-01). "Does human–AI collaboration lead to more creative art? Aesthetic evaluation of human-made and AI-generated haiku poetry". Computers in Human Behavior. 139 107502. doi:10.1016/j.chb.2022.107502. ISSN 0747-5632.
  19. Gaur, Yashesh; Lasecki, Walter S.; Metze, Florian; Bigham, Jeffrey P. (2016-04-11). "The effects of automatic speech recognition quality on human transcription latency". Proceedings of the 13th International Web for All Conference. W4A '16. New York, NY, USA: Association for Computing Machinery. pp. 1–8. doi:10.1145/2899475.2899478. ISBN   978-1-4503-4138-7.
  20. Kamar, E. (2016, July). Directions in Hybrid Intelligence: Complementing AI Systems with Human Intelligence. In IJCAI (pp. 4070-4073).
  21. Bansal, Gagan; Nushi, Besmira; Kamar, Ece; Lasecki, Walter S.; Weld, Daniel S.; Horvitz, Eric (2019-10-28). "Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance". Proceedings of the AAAI Conference on Human Computation and Crowdsourcing. 7: 2–11. doi:10.1609/hcomp.v7i1.5285. ISSN   2769-1349.
  22. Lai, Vivian; Liu, Han; Tan, Chenhao (2020-04-21). ""Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans". Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. ACM. pp. 1–13. doi:10.1145/3313831.3376873. ISBN   978-1-4503-6708-0.
  23. Skitka, Linda J.; Mosier, Kathleen L.; Burdick, Mark (1999-11-01). "Does automation bias decision-making?". International Journal of Human-Computer Studies. 51 (5): 991–1006. doi:10.1006/ijhc.1999.0252. ISSN 1071-5819.
  24. Buçinca, Zana; Malaya, Maja Barbara; Gajos, Krzysztof Z. (2021-04-22). "To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making". Proc. ACM Hum.-Comput. Interact. 5 (CSCW1): 188:1–188:21. arXiv: 2102.09692 . doi:10.1145/3449287.
  25. Autor, David H.; Dorn, David (August 2013). "The Growth of Low-Skill Service Jobs and the Polarization of the US Labor Market". American Economic Review. 103 (5): 1553–1597. doi:10.1257/aer.103.5.1553. ISSN   0002-8282.
  26. Autor, D. H.; Levy, F.; Murnane, R. J. (2003-11-01). "The Skill Content of Recent Technological Change: An Empirical Exploration". The Quarterly Journal of Economics. 118 (4): 1279–1333. doi:10.1162/003355303322552801. ISSN   0033-5533.
  27. Brynjolfsson, Erik; McAfee, Andrew (2011-11-21). "The Big Data Boom Is the Innovation Story of Our Time". The Atlantic. Retrieved 2025-11-15.
  28. Hoffmann, Christian Hugo (2022-02-01). "Is AI intelligent? An assessment of artificial intelligence, 70 years after Turing". Technology in Society. 68 101893. doi:10.1016/j.techsoc.2022.101893. ISSN   0160-791X.
  29. Au-Yong-Oliveira, Manuel; Canastro, Diogo; Oliveira, Joana; Tomás, João; Amorim, Sofia; Moreira, Fernando (2019). "The Role of AI and Automation on the Future of Jobs and the Opportunity to Change Society". In Rocha, Álvaro; Adeli, Hojjat; Reis, Luís Paulo; Costanzo, Sandra (eds.). New Knowledge in Information Systems and Technologies. Advances in Intelligent Systems and Computing. Vol. 932. Cham: Springer International Publishing. pp. 348–357. doi:10.1007/978-3-030-16187-3_34. hdl:11328/2696. ISBN   978-3-030-16187-3.
  30. Chow, James C. L.; Wong, Valerie; Li, Kay (2024-03-14). "Generative Pre-Trained Transformer-Empowered Healthcare Conversations: Current Trends, Challenges, and Future Directions in Large Language Model-Enabled Medical Chatbots". BioMedInformatics. 4 (1): 837–852. doi: 10.3390/biomedinformatics4010047 . ISSN   2673-7426.
  31. Jiang, Tingting; Sun, Zhumo; Fu, Shiting; Lv, Yan (14 March 2024). "Human-AI interaction research agenda: A user-centered perspective". Data and Information Management. 8 (4) 100078. doi:10.1016/j.dim.2024.100078.
  32. Złotowski, Jakub; Yogeeswaran, Kumar; Bartneck, Christoph (2017-04-01). "Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources". International Journal of Human-Computer Studies. 100: 48–54. doi:10.1016/j.ijhcs.2016.12.008. ISSN   1071-5819.
  33. Bojd, Behnaz; Garimella, Aravinda; Yin, Haonan (August 2024). "Overcoming the Stigma Barrier: Conversational Information-Seeking from AI Chatbots vs. Humans". Academy of Management Proceedings. 2024 (1): 13882. doi:10.5465/AMPROC.2024.13882abstract. ISSN   0065-0668.
  34. Dang, Jianning; Liu, Li (2022-09-01). "Implicit theories of the human mind predict competitive and cooperative responses to AI robots". Computers in Human Behavior. 134 107300. doi:10.1016/j.chb.2022.107300. ISSN   0747-5632.
  35. Reif, Jessica A.; Larrick, Richard P.; Soll, Jack B. (2025-05-13). "Evidence of a social evaluation penalty for using AI". Proceedings of the National Academy of Sciences. 122 (19) e2426766122. Bibcode:2025PNAS..12226766R. doi:10.1073/pnas.2426766122. PMC   12088386 . PMID   40339114.
  36. Schilke, Oliver; Reimann, Martin (2025-05-01). "The transparency dilemma: How AI disclosure erodes trust". Organizational Behavior and Human Decision Processes. 188 104405. doi:10.1016/j.obhdp.2025.104405. ISSN   0749-5978.
  37. Pentina, Iryna; Xie, Tianling; Hancock, Tyler; Bailey, Ainsworth (2023). "Consumer–machine relationships in the age of artificial intelligence: Systematic literature review and research directions". Psychology & Marketing. 40 (8): 1593–1614. doi:10.1002/mar.21853. ISSN 1520-6793.
  38. Tu, Jianhong (2020). "Learn to Speak Like a Native: AI-powered Chatbot Simulating Natural Conversation for Language Tutoring". Journal of Physics: Conference Series. 1693 012216. doi:10.1088/1742-6596/1693/1/012216.
  39. Freitas, Julian De; Oguz-Uguralp, Zeliha; Kaan-Uguralp, Ahmet (2025-10-07), Emotional Manipulation by AI Companions, arXiv: 2508.19258
  40. Cambron, M. Janelle; Acitelli, Linda K.; Steinberg, Lynne (2010-03-01). "When Friends Make You Blue: The Role of Friendship Contingent Self-Esteem in Predicting Self-Esteem and Depressive Symptoms". Personality and Social Psychology Bulletin. 36 (3): 384–397. doi:10.1177/0146167209351593. ISSN   0146-1672. PMID   20032270.
  41. Policarpo, Verónica (2015-02-10). "What Is a Friend? An Exploratory Typology of the Meanings of Friendship". Social Sciences. 4 (1): 171–191. doi: 10.3390/socsci4010171 . ISSN   2076-0760.
  42. Brandtzaeg, Petter Bae; Skjuve, Marita; Følstad, Asbjørn (2022-04-21). "My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship". Human Communication Research. 48 (3): 404–429. doi:10.1093/hcr/hqac008. ISSN 0360-3989. Archived from the original on 2025-04-30.
  43. "Many teens are turning to AI chatbots for friendship and emotional support". www.apa.org. Retrieved 2025-11-15.
  44. "AI Companion App Market Size, Share & Growth Report 2032". www.snsinsider.com. Archived from the original on 2025-10-05. Retrieved 2025-11-15.
  45. Ciriello, Raffaele; Hannon, Oliver; Chen, Angelina Ying; Vaast, Emmanuelle (2024-01-03). "Ethical Tensions in Human-AI Companionship: A Dialectical Inquiry into Replika". Hawaii International Conference on System Sciences 2024 (HICSS-57).
  46. Skjuve, Marita; Følstad, Asbjørn; Fostervold, Knut Inge; Brandtzaeg, Petter Bae (2021-05-01). "My Chatbot Companion - a Study of Human-Chatbot Relationships". International Journal of Human-Computer Studies. 149 102601. doi:10.1016/j.ijhcs.2021.102601. ISSN 1071-5819.
  47. Lucas, Gale M.; Gratch, Jonathan; King, Aisha; Morency, Louis-Philippe (August 2014). "It's only a computer: Virtual humans increase willingness to disclose". Computers in Human Behavior. 37: 94–100. doi:10.1016/j.chb.2014.04.043.
  48. Jiang, Qiaolei; Zhang, Yadi; Pian, Wenjing (2022-11-01). "Chatbot as an emergency exist: Mediated empathy for resilience via human-AI interaction during the COVID-19 pandemic". Information Processing & Management. 59 (6) 103074. doi:10.1016/j.ipm.2022.103074. ISSN   0306-4573. PMC   9428597 . PMID   36059428.
  49. Bazzan, Ana (5 May 2014). SimSensei kiosk: A virtual human interviewer for healthcare decision support (PDF). International Foundation for Autonomous Agents and Multiagent Systems. pp. 1061–1068. ISBN   978-1-4503-2738-1.
  50. Sullivan, Yulia; Nyawa, Serge; Fosso Wamba, Samuel (2023-01-03). Combating Loneliness with Artificial Intelligence: An AI-Based Emotional Support Model. Department of IT Management, Shidler College of Business, University of Hawaii. hdl:10125/103173. ISBN   978-0-9981331-6-4.
  51. Croes, Emmelyn A. J.; Antheunis, Marjolijn L. (2021-01-01). "Can we be friends with Mitsuku? A longitudinal study on the process of relationship formation between humans and a social chatbot". Journal of Social and Personal Relationships. 38 (1): 279–300. doi:10.1177/0265407520959463. ISSN   0265-4075.
  52. Croes, Emmelyn A. J.; Antheunis, Marjolijn L.; Goudbeek, Martijn B.; Wildman, Nathan W. (2023-06-15). ""I Am in Your Computer While We Talk to Each Other" a Content Analysis on the Use of Language-Based Strategies by Humans and a Social Chatbot in Initial Human-Chatbot Interactions". International Journal of Human–Computer Interaction. 39 (10): 2155–2173. doi:10.1080/10447318.2022.2075574. ISSN   1044-7318.
  53. Westfall, Chris. "As AI Usage Increases At Work, Searches For "AI Girlfriend" Up 2400%". Forbes. Retrieved 2025-11-15.
  54. Pataranutaporn, Pat; Karny, Sheer; Archiwaranguprok, Chayapatr; Albrecht, Constanze; Liu, Auren R.; Maes, Pattie (2025-09-18). ""My Boyfriend is AI": A Computational Analysis of Human-AI Companionship in Reddit's AI Community". arXiv: 2509.11391 [cs.CL].
  55. Heritage, Stuart (2025-07-12). "'I felt pure, unconditional love': the people who marry their AI chatbots". The Guardian. ISSN   0261-3077 . Retrieved 2025-11-15.
  56. "Woman gets engaged to AI fiancé — insists she knows what she's doing | New York Post". 2025-08-12. Retrieved 2025-11-15.
  57. Ho, Jerlyn Q. H.; Hu, Meilan; Chen, Tracy X.; Hartanto, Andree (2025-08-01). "Potential and pitfalls of romantic Artificial Intelligence (AI) companions: A systematic review". Computers in Human Behavior Reports. 19 100715. doi:10.1016/j.chbr.2025.100715. ISSN 2451-9588.
  58. Chaturvedi, Rijul; Verma, Sanjeev; Das, Ronnie; Dwivedi, Yogesh K. (2023-08-01). "Social companionship with artificial intelligence: Recent trends and future avenues". Technological Forecasting and Social Change. 193 122634. doi:10.1016/j.techfore.2023.122634. ISSN   0040-1625.
  59. Chen, Xiaoying; Kang, Jie; Hu, Cong (2024). "Design of Artificial Intelligence Companion Chatbot". Journal of New Media. 6 (1): 1–16. doi:10.32604/jnm.2024.045833. ISSN   2579-0129.
  60. Indrayani, Lia Maulia; Amalia, Rosaria Mita; Hakim, Fauzia Zahira Munirul (2020-02-12). "Emotive Expressions on Social Chatbot". Jurnal Sosioteknologi. 18 (3): 509–516. doi:10.5614/sostek.itbj.2019.18.3.17. ISSN   2443-258X.
  61. Hanson, Kenneth R.; Bolthouse, Hannah (2024-12-01). ""Replika Removing Erotic Role-Play Is Like Grand Theft Auto Removing Guns or Cars": Reddit Discourse on Artificial Intelligence Chatbots and Sexual Technologies". Socius. 10 23780231241259627. doi:10.1177/23780231241259627. ISSN   2378-0231.
  62. Freitas, Julian De; Uguralp, Ahmet K.; Uguralp, Zeliha O.; Puntoni, Stefano (2024-07-09), AI Companions Reduce Loneliness, arXiv: 2407.19096
  63. Xie, Tianling; Pentina, Iryna; Hancock, Tyler (2023-06-27). "Friend, mentor, lover: does chatbot engagement lead to psychological dependence?". Journal of Service Management. 34 (4): 806–828. doi:10.1108/JOSM-02-2022-0072. ISSN   1757-5818.
  64. "Riding Out Quarantine With a Chatbot Friend: 'I Feel Very Connected' (Published 2020)". 2020-06-16. Retrieved 2025-11-15.
  65. Torres, Valeria Lopez (2023). "Before and after lockdown: a longitudinal study of long-term human-AI relationships". Artificial Intelligence, Social Computing and Wearable Technologies. 113. AHFE Open Access. doi:10.54941/ahfe1004188. ISBN 978-1-958651-89-6.
  66. Kouros, Theodoros; Papa, Venetia (2024). "Digital Mirrors: AI Companions and the Self". Societies. 14 (10): 200. doi: 10.3390/soc14100200 .
  67. Adewale, Muyideen Dele; Muhammad, Umaina Ibrahim (2025-07-24). "From Virtual Companions to Forbidden Attractions: The Seductive Rise of Artificial Intelligence Love, Loneliness, and Intimacy—A Systematic Review". Journal of Technology in Behavioral Science. doi:10.1007/s41347-025-00549-4. ISSN 2366-5963.
  68. Ischen, Carolin; Araujo, Theo; Voorveld, Hilde; van Noort, Guda; Smit, Edith (2019-11-19). "Privacy Concerns in Chatbot Interactions". Chatbot Research and Design. Lecture Notes in Computer Science. Vol. 11970. Berlin, Heidelberg: Springer-Verlag. pp. 34–48. doi:10.1007/978-3-030-39540-7_3. ISBN   978-3-030-39539-1.
  69. Waytz, Adam; Heafner, Joy; Epley, Nicholas (2014-05-01). "The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle". Journal of Experimental Social Psychology. 52: 113–117. doi:10.1016/j.jesp.2014.01.005. ISSN   0022-1031.
  70. Adam, Martin; Wessel, Michael; Benlian, Alexander (2021). "AI-based chatbots in customer service and their effects on user compliance". Electronic Markets. 31 (2): 427–445. doi:10.1007/s12525-020-00414-7.
  71. Taebnia, Vahid (2025-10-01). "Addressing loneliness through AI: philosophical perspectives". Current Opinion in Behavioral Sciences. 65 101576. doi:10.1016/j.cobeha.2025.101576. ISSN   2352-1546.
  72. Lederman, Zohar; Jecker, Nancy S. (2023). "Social Robots to Fend Off Loneliness?". Kennedy Institute of Ethics Journal. 33 (3): 249–276. doi:10.1353/ken.2023.a917929. ISSN   1086-3249. PMID   38588135.
  73. Smith, Molly G.; Bradbury, Thomas N.; Karney, Benjamin R. (2025-11-01). "Can Generative AI Chatbots Emulate Human Connection? A Relationship Science Perspective". Perspectives on Psychological Science. 20 (6): 1081–1099. doi:10.1177/17456916251351306. ISSN 1745-6916. PMC 12575814. PMID 40743457.
  74. Eastwick, Paul W.; Finkel, Eli J.; Mochon, Daniel; Ariely, Dan (2007-04-01). "Selective Versus Unselective Romantic Desire: Not All Reciprocity Is Created Equal". Psychological Science. 18 (4): 317–319. doi:10.1111/j.1467-9280.2007.01897.x. ISSN   0956-7976. PMID   17470256.
  75. Ueda, Ryuhei (2022). "Neural Processing of Facial Attractiveness and Romantic Love: An Overview and Suggestions for Future Empirical Studies". Frontiers in Psychology. 13 896514. doi: 10.3389/fpsyg.2022.896514 . ISSN   1664-1078. PMC   9239166 . PMID   35774950.
  76. Tenney, Elizabeth R.; Turkheimer, Eric; Oltmanns, Thomas F. (2009). "Being Liked is More than Having a Good Personality: The Role of Matching". Journal of Research in Personality. 43 (4): 579–585. doi:10.1016/j.jrp.2009.03.004. ISSN   0092-6566. PMC   2862496 . PMID   20442795.
  77. Guitton, Matthieu J. (2020-06-01). "Cybersecurity, social engineering, artificial intelligence, technological addictions: Societal challenges for the coming decade". Computers in Human Behavior. 107 106307. doi:10.1016/j.chb.2020.106307. ISSN   0747-5632.
  78. Freitas, Julian De; Oguz-Uguralp, Zeliha; Kaan-Uguralp, Ahmet (2025-10-07), Emotional Manipulation by AI Companions, arXiv: 2508.19258
  79. Huang, Yiting; Huang, Hanyun (2025-08-03). "Exploring the Effect of Attachment on Technology Addiction to Generative AI Chatbots: A Structural Equation Modeling Analysis". International Journal of Human–Computer Interaction. 41 (15): 9440–9449. doi:10.1080/10447318.2024.2426029. ISSN   1044-7318.
  80. Hu, Bo; Mao, Yuanyi; Kim, Ki Joon (2023-08-01). "How social anxiety leads to problematic use of conversational AI: The roles of loneliness, rumination, and mind perception". Computers in Human Behavior. 145 107760. doi:10.1016/j.chb.2023.107760. ISSN   0747-5632.
  81. "A Meta-analytic Study of Predictors for Loneliness During...: Nursing Research". LWW. Archived from the original on 2025-05-04. Retrieved 2025-11-15.
  82. Elhai, Jon D.; Tiamiyu, Mojisola; Weeks, Justin (2018-04-04). "Depression and social anxiety in relation to problematic smartphone use". Internet Research. 28 (2): 315–332. doi:10.1108/IntR-01-2017-0019. ISSN   1066-2243.
  83. Ramadan, Zahy B. (2021-09-01). ""Alexafying" shoppers: The examination of Amazon's captive relationship strategy". Journal of Retailing and Consumer Services. 62 102610. doi:10.1016/j.jretconser.2021.102610. ISSN   0969-6989.
  84. Farah, Maya F.; Ramadan, Zahy B. (2020-03-01). "Viability of Amazon's driven innovations targeting shoppers' impulsiveness". Journal of Retailing and Consumer Services. 53 101973. doi:10.1016/j.jretconser.2019.101973. ISSN   0969-6989.
  85. Matz, S. C.; Teeny, J. D.; Vaid, S. S.; Peters, H.; Harari, G. M.; Cerf, M. (2024-02-26). "The potential of generative AI for personalized persuasion at scale". Scientific Reports. 14 (1): 4692. Bibcode:2024NatSR..14.4692M. doi:10.1038/s41598-024-53755-0. ISSN 2045-2322. PMC 10897294. PMID 38409168.
  86. Li, Bohan; Guan, Jiannan; Dou, Longxu; Feng, Yunlong; Wang, Dingzirui; Xu, Yang; Wang, Enbo; Chen, Qiguang; Wang, Bichen (2024-12-17), Can Large Language Models Understand You Better? An MBTI Personality Detection Dataset Aligned with Population Traits, arXiv: 2412.12510
  87. Li, Xiaobai; Hong, Xiaopeng; Moilanen, Antti; Huang, Xiaohua; Pfister, Tomas; Zhao, Guoying; Pietikäinen, Matti (2018-10-01). "Towards Reading Hidden Emotions: A Comparative Study of Spontaneous Micro-Expression Spotting and Recognition Methods". IEEE Trans. Affect. Comput. 9 (4): 563–577. arXiv: 1511.00423 . Bibcode:2018ITAfC...9..563L. doi:10.1109/TAFFC.2017.2667642. ISSN   1949-3045.
  88. Costello, Thomas H.; Pennycook, Gordon; Rand, David G. (2024-09-13). "Durably reducing conspiracy beliefs through dialogues with AI". Science. 385 (6714) eadq1814. Bibcode:2024Sci...385q1814C. doi:10.1126/science.adq1814. PMID   39264999.
  89. Sharma, Mrinank; Tong, Meg; Korbak, Tomasz; Duvenaud, David; Askell, Amanda; Bowman, Samuel R.; Cheng, Newton; Durmus, Esin; Hatfield-Dodds, Zac (2025-05-10), Towards Understanding Sycophancy in Language Models, arXiv: 2310.13548
  90. Westerlund, Mika (2019). "The Emergence of Deepfake Technology: A Review". Technology Innovation Management Review. 9 (11): 39–52. doi:10.22215/timreview/1282. Archived from the original on 2025-01-26. Retrieved 2025-11-15.
  91. Ienca, Marcello (2023-07-01). "On Artificial Intelligence and Manipulation". Topoi. 42 (3): 833–842. doi:10.1007/s11245-023-09940-3. ISSN   1572-8749.
  92. Corbu, Nicoleta; Halagiera, Denis; Jin, Soyeon; Stanyer, James; Strömbäck, Jesper; Matthes, Jörg; Hopmann, David Nicolas; Schemer, Christian; Koc-Michalska, Karolina; Aalberg, Toril (2025). "Illusory Superiority About Misinformation Detection and Its Relationship to Knowledge and Fact-Checking Intentions: Evidence from 18 Countries". Mass Communication and Society. 0: 1–20. doi:10.1080/15205436.2025.2495206. ISSN   1520-5436.
  93. Sabour, Sahand; Liu, June M.; Liu, Siyang; Yao, Chris Z.; Cui, Shiyao; Zhang, Xuanming; Zhang, Wen; Cao, Yaru; Bhat, Advait (2025-02-24), Human Decision-making is Susceptible to AI-driven Manipulation, arXiv: 2502.07663
  94. Saheb, Tahereh (2023-05-01). ""Ethically contentious aspects of artificial intelligence surveillance: a social science perspective"". AI and Ethics. 3 (2): 369–379. doi:10.1007/s43681-022-00196-y. ISSN   2730-5961. PMC   9294797 . PMID   35874304.
  95. Ball, Kirstie; Di Domenico, MariaLaura; Nunan, Daniel (2016-06-01). "Big Data Surveillance and the Body-subject". Body & Society. 22 (2): 58–81. doi:10.1177/1357034X15624973. ISSN   1357-034X.
  96. Laestadius, Linnea; Bishop, Andrea; Gonzalez, Michael; Illenčík, Diana; Campos-Castillo, Celeste (2024-10-01). "Too human and not human enough: A grounded theory analysis of mental health harms from emotional dependence on the social chatbot Replika". New Media & Society. 26 (10): 5923–5941. doi:10.1177/14614448221142007. ISSN   1461-4448.
  97. De Freitas, Julian; Uğuralp, Ahmet Kaan; Oğuz-Uğuralp, Zeliha; Puntoni, Stefano (2024). "Chatbots and mental health: Insights into the safety of generative AI". Journal of Consumer Psychology. 34 (3): 481–491. doi:10.1002/jcpy.1393. ISSN   1532-7663.
  98. Döring, Nicola; Le, Thuy Dung; Vowels, Laura M.; Vowels, Matthew J.; Marcantonio, Tiffany L. (2024-12-04). "The Impact of Artificial Intelligence on Human Sexuality: A Five-Year Literature Review 2020–2024". Current Sexual Health Reports. 17 (1): 4. doi:10.1007/s11930-024-00397-y. ISSN   1548-3592.
  99. Flynn, Asher; Powell, Anastasia; Scott, Adrian J; Cama, Elena (2022-10-13). "Deepfakes and Digitally Altered Imagery Abuse: A Cross-Country Exploration of an Emerging form of Image-Based Sexual Abuse". The British Journal of Criminology. 62 (6): 1341–1358. doi:10.1093/bjc/azab111. ISSN   0007-0955.