Jeffrey T. Hancock is a communication and psychology researcher and a professor in the Department of Communication at Stanford University. Hancock is best known for his research on deception, trust in technology, and the psychology of social media. He has published over 80 journal articles and has been featured by National Public Radio (NPR) and CBS This Morning.
In 2024, Hancock was retained by the Minnesota Attorney General to provide expert testimony in defense of a state law banning the use of AI-generated deepfakes to influence elections. Hancock submitted a court filing drafted with the help of AI that contained citations to fake, AI-generated sources.
Hancock was born in Canada and currently resides in Palo Alto, California. He received his Bachelor of Applied Science in Psychology from the University of Victoria in 1996. During his undergraduate years, Hancock worked as a customs officer for the Canada Border Services Agency, which first exposed him to the study of deception.[1] In 1997, he began a doctoral program in Psychology at Dalhousie University in Canada, graduating in 2002.[2] From 2002 to 2015, Hancock was a faculty member and professor of Information Science and Communication at Cornell University.[3]
Since 2015, Hancock has been a professor in the Department of Communication at Stanford University. During his tenure at Stanford, he founded the Stanford Social Media Lab. The lab, whose publications date back to March 2017, works to understand psychological and interpersonal processes in social media. Its network includes faculty, staff, and doctoral candidates who study social media in various capacities. The lab's research topics include romantic relationships mediated by technology, the effects of new media on child and adolescent development, and gender biases and other social inequalities on social networks. The Stanford Social Media Lab receives funding from the Stanford Institute for Human-Centered Artificial Intelligence, the Knight Foundation, and the National Science Foundation. Notable lab alumni include Annabell Suh, Megan French, and David M. Markowitz.[4]
In addition to his research and lab work, Hancock teaches courses during the academic year. During the 2020–21 academic year at Stanford, his course offerings included Advanced Studies in Behavior and Social Media; Introduction to Communication; Language and Technology; Truth, Trust, and Tech; and six sections of independent study.[5]
Hancock has published over 80 journal articles over the course of his career. His research examines how language can reveal social and psychological dynamics, including deception and trust, emotional dynamics, intimacy and relationships, and social support, as well as the ethical concerns associated with computational social science. His work has been published in several notable journals, such as the Journal of Communication, and has been funded by the National Science Foundation and the United States Department of Defense.[6]
In a study published in 2009 in the Journal of Communication, Hancock and his co-author investigated the accuracy of online dating profile photographs. The study, which drew on the profiles of 54 heterosexual online daters, found that daters walk a line between presenting photos that enhance their physical attractiveness and presenting photos that would not be considered deceptive.[7] The study illustrates the connection between two of Hancock's research interests: deception and interpersonal relationships mediated through technology. Many of Hancock's articles span more than one of his areas of expertise.
Beyond the community of communication researchers and academics, Hancock's work has reached a lay audience through several non-academic presentations of his research. His 2012 TED Talk, "The Future of Lying," has been viewed over one million times. In it, Hancock argues that most online, technology-mediated communication is more honest than face-to-face communication, which he attributes to the permanence of online communication: before the invention of writing and social media, words were only as permanent as the memory of the people who heard them, whereas technology now memorializes everyday interactions and compels us to consider what record we are leaving behind.[1] In the 2014 "Why We Lie" episode of NPR's TED Radio Hour, Hancock again discussed the implications of his research suggesting that technology can make us more honest. The episode referenced his 2012 TED Talk but expanded the narrative and provided more context for the research behind his claims.[8]
In 2012, Hancock appeared in a segment on CBS This Morning to discuss social media privacy in the job hunt after some employers reportedly asked job applicants to submit their social media login information for an audit of their accounts.[9] The appearance came just a few weeks after Hancock published an article titled "The Effect of LinkedIn on Deception in Resumes," which found that LinkedIn profiles were less deceptive than paper resumes concerning job experience and skills.[10] In an article published in Social Media + Society in 2020, Hancock and his co-author wrote about the challenges the COVID-19 pandemic placed on older adults: with social distancing implemented to slow the spread of COVID-19, older adults faced loneliness and a lack of proficiency in digital skills. The article recommends that technology companies make accessibility for older adults a priority in product development to help prevent loneliness and increase media literacy.[11]
In November 2024, Hancock used AI to generate a court filing citing two non-existent sources in defense of a Minnesota law banning the use of AI-generated deepfakes to influence an election.[12][13][14] Hancock submitted the filing to the court under penalty of perjury. The AI-generated content was noticed by attorneys for Minnesota state representative Mary Franson and YouTuber Christopher Kohls, who had sued to block the law on free speech grounds. Hancock's filing was excluded from consideration after the AI-generated "hallucinations" were discovered. Hancock admitted to using ChatGPT-4o to produce the filing.[15][16]
Hancock was paid $600 an hour by Minnesota Attorney General Keith Ellison to provide expert testimony in defense of the law. The attorney general's office asked the court to allow Hancock to submit a revised filing without the AI-generated hallucinations; the request was rejected, with the presiding judge stating that Hancock's use of fake sources "shatters his credibility with this Court".[16]
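The fabricated references in Hancock's filing were spotted by opposing counsel, but such hallucinated citations can also be screened mechanically by checking each cited work against a bibliographic index. The following is a minimal sketch of that idea, assuming the Python requests library and using the public Crossref API; the cited title below is hypothetical:

```python
# Sketch: flag citations that no bibliographic index can match.
# The public Crossref API is queried for each cited title; an empty
# result does not prove fabrication, but marks the citation for review.
import requests

def find_on_crossref(title: str) -> list[str]:
    """Return titles of the closest-matching indexed works, if any."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 3},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [t for item in items for t in item.get("title", [])]

cited = "Deepfakes and the Illusion of Authenticity"  # hypothetical citation
matches = find_on_crossref(cited)
print(matches or "No indexed work found -- citation needs manual review.")
```

A fuzzy match against the returned titles would still be needed in practice, since Crossref returns the nearest hits rather than exact ones.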
A chatbot is a software application or web interface designed to have textual or spoken conversations. Modern chatbots are typically online and use generative artificial intelligence systems that are capable of maintaining a conversation with a user in natural language and simulating the way a human would behave as a conversational partner. Such chatbots often use deep learning and natural language processing, but simpler chatbots have existed for decades.
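As an illustration of the simpler, rule-based end of that spectrum, the following is a minimal ELIZA-style sketch: it matches the user's message against hand-written patterns and echoes fragments back as questions. The rules and phrasings are invented for illustration:

```python
import re

# A few ELIZA-style pattern -> response rules (illustrative only).
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(message: str) -> str:
    """Return the first rule-based response that matches, else a fallback."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Tell me more."

print(respond("I feel ignored."))       # -> Why do you feel ignored?
print(respond("The weather is bad."))   # -> Tell me more.
```

Unlike a generative model, such a bot has no learned knowledge: everything it can say is enumerated in its rules.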
Media manipulation refers to orchestrated campaigns in which actors exploit the distinctive features of broadcast mass communication or digital media platforms to mislead, misinform, or create narratives that advance their interests and agendas.
Stephen Michael Kosslyn is an American psychologist and neuroscientist. Kosslyn is the president of Active Learning Sciences Inc., which helps institutions design, deliver, and assess active-learning based courses and educational programs. He is also the founder and chief academic officer of Foundry College, an online two-year college.
Reid Garrett Hoffman is an American internet entrepreneur, venture capitalist, podcaster, and author. Hoffman is the co-founder and executive chairman of LinkedIn, a business-oriented social network used primarily for professional networking. He is also chairman of venture capital firm Village Global and a co-founder of Inflection AI.
Music and artificial intelligence is the development of music software programs which use AI to generate music. As with applications in other fields, AI in music also simulates mental tasks. A prominent feature is the capability of an AI algorithm to learn based on past data, such as in computer accompaniment technology, wherein the AI is capable of listening to a human performer and performing accompaniment. Artificial intelligence also drives interactive composition technology, wherein a computer composes music in response to a live performance. There are other AI applications in music that cover not only music composition, production, and performance but also how music is marketed and consumed. Several music player programs have also been developed to use voice recognition and natural language processing technology for music voice control. Current research includes the application of AI in music composition, performance, theory and digital sound processing.
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 and headquartered in San Francisco, California. Its stated mission is to develop "safe and beneficial" artificial general intelligence (AGI), which it defines as "highly autonomous systems that outperform humans at most economically valuable work". As a leading organization in the ongoing AI boom, OpenAI is known for the GPT family of large language models, the DALL-E series of text-to-image models, and a text-to-video model named Sora. Its release of ChatGPT in November 2022 has been credited with catalyzing widespread interest in generative AI.
Ian J. Goodfellow is an American computer scientist, engineer, and executive, most noted for his work on artificial neural networks and deep learning. He is a research scientist at Google DeepMind, was previously employed as a research scientist at Google Brain and director of machine learning at Apple, and has made several important contributions to the field of deep learning, including the invention of the generative adversarial network (GAN). Goodfellow co-wrote, as the first author, the textbook Deep Learning (2016) and wrote the chapter on deep learning in the authoritative textbook of the field of artificial intelligence, Artificial Intelligence: A Modern Approach.
Deepfakes are images, videos, or audio which are edited or generated using artificial intelligence tools, and which may depict real or non-existent people. They are a type of synthetic media and a modern form of media prank.
Digital cloning is an emerging technology, involving deep-learning algorithms, that allows one to manipulate existing audio, photos, and videos to create hyper-realistic results. One impact of such technology is that hyper-realistic videos and photos make it difficult for the human eye to distinguish what is real from what is fake. Furthermore, as various companies make such technologies available to the public, they can bring various benefits as well as potential legal and ethical concerns.
Artificial intelligence art is visual artwork created or enhanced through the use of artificial intelligence (AI) programs.
Synthetic media is a catch-all term for the artificial production, manipulation, and modification of data and media by automated means, especially through the use of artificial intelligence algorithms, such as for the purpose of misleading people or changing an original meaning. Synthetic media as a field has grown rapidly since the creation of generative adversarial networks, primarily through the rise of deepfakes as well as music synthesis, text generation, human image synthesis, speech synthesis, and more. Though experts use the term "synthetic media," individual methods such as deepfakes and text synthesis are sometimes not referred to as such by the media but instead by their respective terminology. Significant attention arose toward the field of synthetic media starting in 2017, when Motherboard reported on the emergence of AI-altered pornographic videos that insert the faces of famous actresses. Potential hazards of synthetic media include the spread of misinformation, further loss of trust in institutions such as media and government, the mass automation of creative and journalistic jobs, and a retreat into AI-generated fantasy worlds. Synthetic media is an applied form of artificial imagination.
Deepfake pornography, or simply fake pornography, is a type of synthetic pornography that is created via altering already-existing photographs or video by applying deepfake technology to the images of the participants. The use of deepfake pornography has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts are being made to combat these ethical concerns through legislation and technology-based solutions.
Audio deepfake technology, also referred to as voice cloning or deepfake audio, is an application of artificial intelligence designed to generate speech that convincingly mimics specific individuals, often synthesizing phrases or sentences they have never spoken. Initially developed with the intent to enhance various aspects of human life, it has practical applications such as generating audiobooks and assisting individuals who have lost their voices due to medical conditions. Additionally, it has commercial uses, including the creation of personalized digital assistants, natural-sounding text-to-speech systems, and advanced speech translation services.
David M. Markowitz is a communication professor at Michigan State University who specializes in the study of language and deception. Much of his work focuses on how technological channels affect the encoding and decoding of messages. His work has captured the attention of popular-culture magazines and outlets, and he writes articles about deception for Forbes. Much of his research relies on analyses of linguistic and analytic styles of writing; for example, his work on pet adoption ads was referenced on a website offering tips on how to write better pet adoption ads.
In the digital humanities, "algorithmic culture" is part of an emerging synthesis that couples rigorous, algorithm-driven software design and highly structured, data-driven design with human-oriented sociocultural attributes. An early occurrence of the term is found in Alexander R. Galloway's classic Gaming: Essays on Algorithmic Culture.
ChatGPT is a generative artificial intelligence chatbot developed by OpenAI and launched in 2022. It is currently based on the GPT-4o large language model (LLM). ChatGPT can generate human-like conversational responses and enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. It is credited with accelerating the AI boom, which has led to ongoing rapid investment in and public attention to the field of artificial intelligence (AI). Some observers have raised concern about the potential of ChatGPT and similar programs to displace human intelligence, enable plagiarism, or fuel misinformation.
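The underlying model family is also exposed programmatically. The following is a minimal sketch using OpenAI's official Python SDK (v1.x); it assumes the openai package is installed and an OPENAI_API_KEY environment variable is set, and the prompt text is invented for illustration:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request a single conversational completion from GPT-4o,
# the model ChatGPT is currently based on.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer concisely."},
        {"role": "user", "content": "In one sentence, what is a deepfake?"},
    ],
)
print(response.choices[0].message.content)
```

The system message steers tone and format, while the user message carries the actual query; longer conversations are carried by appending prior turns to the messages list.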
ElevenLabs is a software company that specializes in developing natural-sounding speech synthesis software using deep learning.
Generative artificial intelligence is a subset of artificial intelligence that uses generative models to produce text, images, videos, or other forms of data. These models learn the underlying patterns and structures of their training data and use them to produce new data based on the input, which often comes in the form of natural language prompts.
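The core idea, learning patterns from training data and then sampling new data from them, can be illustrated at toy scale with a character-level bigram model. This is a didactic sketch rather than a modern generative system, and the training corpus is invented:

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Learn bigram statistics: which character tends to follow which."""
    model = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        model[a].append(b)
    return model

def generate(model: dict, seed: str, length: int = 40) -> str:
    """Sample new text one character at a time from the learned counts."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return "".join(out)

corpus = "the cat sat on the mat and the rat ran at the hat"
print(generate(train(corpus), "t"))
```

Modern systems replace the bigram counts with deep neural networks over vast corpora, but the generate-by-sampling loop is conceptually the same.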
The AI boom is an ongoing period of rapid progress in the field of artificial intelligence (AI) that started in the late 2010s before gaining international prominence in the early 2020s. Examples include protein folding prediction led by Google DeepMind as well as large language models and generative AI applications developed by OpenAI. This period is sometimes referred to as an AI spring, to contrast it with previous AI winters.
As artificial intelligence (AI) has become more mainstream, there is growing concern about how this will influence elections. Potential targets of AI include election processes, election offices, election officials and election vendors.