Chatfishing is a term used in media to describe the use of artificial intelligence (AI), particularly large language models and chatbots, to conduct online conversations in a way that misrepresents the sender's identity, personality, intent, or level of human involvement, especially in the context of online dating. [1] [2] [3] Chatfishing is also discussed in relation to romance scam operations and other confidence schemes, where generative AI can be used to increase perceived credibility of fictitious profiles. [4]
The term is generally presented as an AI-mediated variant of catfishing: while catfishing is commonly associated with false identities and fabricated personal details, chatfishing emphasizes deception through AI-generated messages. [2] [5]
Media coverage has used chatfishing to describe several related practices, ranging from occasional AI assistance in composing messages to handing over entire conversations to an AI system. Some accounts distinguish between AI use intended as “enhancement” (e.g., rewriting a draft to sound clearer or more empathetic) and AI use that materially misrepresents the sender (e.g., letting the AI conduct the relationship-defining emotional labor without disclosure). [2] [3] [6]
Coverage of chatfishing typically describes multiple forms of the practice, varying in the degree of AI involvement. [7]
In reporting, motivations attributed to chatfishing include dating fatigue, time constraints, anxiety about messaging norms, accessibility needs, and attempts to make the dating process efficient on competitive platforms. [2] [6]
The exact prevalence of chatfishing is unknown, but surveys suggest growing use of AI tools in dating contexts. In 2025, Match Group and researchers affiliated with the Kinsey Institute reported that 26% of surveyed U.S. singles used AI to enhance their dating lives. [8]
A recurring theme in media accounts is that AI-generated text can be difficult for readers to reliably distinguish from human-authored writing, particularly when the model is prompted using personal details or prior messages from the target conversation. [1] [3]
Empirical research outside dating contexts has reported near-chance performance on tasks requiring humans to distinguish AI-generated text from human-written text. In a 2024 study published in Scientific Reports, participants were, on average, 57% accurate at identifying whether a text was human- or AI-authored, and 78% accurate at identifying the human-written social media comment, with substantial variation across individuals. [9]
Cybersecurity guidance aimed at online dating commonly recommends identity verification steps, such as moving from text to real-time channels (phone or video) and checking profile consistency, especially when there are signs of scripted or evasive behavior. [10]
Critics have described undisclosed AI-mediated courtship as a form of false advertising that can distort expectations and undermine trust when people meet offline. [2] [3] Other accounts emphasize that some users treat AI tools as writing assistance, arguing that the ethical boundary depends on the degree of outsourcing and whether the AI materially changes the persona presented to others. [2] [6]