Lauren McCarthy | |
---|---|
Born | Lauren Lee McCarthy, Boston, Massachusetts |
Nationality | Chinese-American [1] [2] |
Alma mater | Massachusetts Institute of Technology, University of California, Los Angeles |
Known for | media art, computer-based art |
Awards | United States Artist Fellow, Sundance New Frontier Story Lab Fellow, Eyebeam Rapid Response Fellow |
Website | lauren-mccarthy |
Lauren Lee McCarthy is a Chinese-American artist and computer programmer based in Los Angeles. [3] McCarthy creates artworks that use a variety of media and techniques, including performance, artificial intelligence, and programmed computer-based interaction. She created p5.js, an open-source, web-based version of the programming environment Processing. [4]
McCarthy graduated from MIT with a BS in Computer Science and a BS in Art and Design. [5] At MIT she studied technology's impact on physical interactions through her work Tools For Improved Social Interactions, for which she made an Anti-Daydreaming Device, a Happiness Hat, and a Body Contact Training Suit out of a knitted, wearable material. [5] The devices included sensors to monitor the wearer and deliver uncomfortable stimuli if the user was not doing what the piece was designed to achieve. [5] For example, if the user did not smile widely enough while wearing the Happiness Hat, a spike would poke the back of their neck. For her thesis at MIT, McCarthy focused on the similarities between virtual and physical interactions by comparing gym culture and social networking culture. [6]
McCarthy received her MFA degree from UCLA in 2011, where she has been an assistant professor since 2016. [7]
McCarthy often creates works that humanize the roles that smart devices like Amazon Alexa or Google Home take on. The idea for most of these projects was rooted in McCarthy's social anxiety: getting to know people, and making the small talk necessary to build connections, is stressful for her. [8] She stated that she felt jealous of how Amazon Alexa automatically has an intimate place in people's lives. [8]
In 2017, for her work LAUREN, she installed cameras, microphones and speakers in her apartment, then interacted with visitors by performing the role of assistive technology, similar to Amazon Alexa. [9] [10] [11] The roles were reversed in her project SOMEONE, where visitors had 24-hour access and control of McCarthy's home. [8]
In her collaborative work Waking Agents, visitors were prompted to lie down and use "smart" pillows that could hold conversations, play music, ask the user's name, tell stories, and act as an overall guiding intelligence. [12] The users were unaware that the "smart" pillows they were conversing with were actually human performers, their voices disguised to sound like A.I. robots. [12]
McCarthy collaborated with David Leonard on the project I.A. Suzie to evaluate how artificial intelligence is used as a caretaking device, and how the user forms a relationship with the device. For this project, McCarthy and Leonard acted as a smart home device in the home of Mary Ann, an 80-year-old woman living in North Carolina. [13] For an entire week they kept 24-hour watch over Mary Ann and had the ability to speak with her, control the lights, and activate the appliances. [13]
McCarthy has explored projects regarding social media in an effort to connect with others and meet new people with the help of technology. McCarthy wished there were a computer program that could scour social media profiles and automatically make her friends in real life. [14] She decided to do this manually in her work Friend Crawl, a project she live-streamed on the internet. For 10 hours a day over a week, McCarthy looked at more than 1,000 social media profiles, spending about five minutes per profile. [14] Another project she live-streamed was her 2013 work Social Turkers. [8] McCarthy wanted to explore what including an unbiased third party would do to a social situation, and whether they could provide her with helpful instruction. [15] To make this happen, McCarthy employed Amazon Mechanical Turk workers to comment on OkCupid dates that she secretly recorded and live-streamed. [16] McCarthy met her husband through this project, when one day he was watching one of the live streams. [8] On the website McCarthy made for the project, she has 16 public logs that range from January 4 to January 30. [16] These logs include her personal thoughts on how the dates went, as well as the transcripts of the entries she received from the Turk workers. [16]
McCarthy helped create Social Soul, a large installation for the TED Conference with Delta Air Lines and MKG. [17] McCarthy and her partner Kyle McDonald worked to bring the Twitter pages of participants, TED presenters, and attendees to life. To do this, they streamed the social media profiles in an immersive 360-degree environment, where the viewer is surrounded by monitors, mirrors, and sounds all relating to an individual's specific feed. [17] The project used custom algorithms to match the viewer with other attendees by showing them a stranger's social feed. [17] Once viewers left the simulation, they received a tweet connecting them to the person the algorithm had matched them with, so that after streaming another person's social media feed they could connect with that individual in person. [17]
In Follower, a 2016 work, users could use an app to voluntarily request that a person follow them around New York for an entire day, without knowing the identity of the follower. [18] [19] McCarthy collaborated with Kyle McDonald again on the work How We Act Together, which encourages viewers to follow computer-generated prompts to interact with a video persona by nodding, screaming, greeting, or making eye contact with the projection. [20] [21]
In September 2021, McCarthy was ranked as a "40 under 40" artist. [22]
Ubiquitous computing is a concept in software engineering, hardware engineering and computer science where computing is made to appear anytime and everywhere. In contrast to desktop computing, ubiquitous computing can occur using any device, in any location, and in any format. A user interacts with the computer, which can exist in many different forms, including laptop computers, tablets, smart phones and terminals in everyday objects such as a refrigerator or a pair of glasses. The underlying technologies that support ubiquitous computing include the Internet, advanced middleware, operating systems, mobile code, sensors, microprocessors, new I/O and user interfaces, computer networks, mobile protocols, location and positioning, and new materials.
In mass communication, digital media is any communication media that operates in conjunction with various encoded machine-readable data formats. Digital content can be created, viewed, distributed, modified, listened to, and preserved on a digital electronic device, including digital data storage media and digital broadcasting. Digital is defined as any data represented by a series of digits, and media refers to methods of broadcasting or communicating this information. Together, digital media refers to mediums of digitized information broadcast through a screen and/or a speaker. This includes text, audio, video, and graphics transmitted over the internet for viewing or listening.
An audio game is an electronic game played on a device such as a personal computer. It is similar to a video game, except that feedback is audible and tactile rather than visual.
Processing is a free graphical library and integrated development environment (IDE) built for the electronic arts, new media art, and visual design communities with the purpose of teaching non-programmers the fundamentals of computer programming in a visual context.
Cynthia Breazeal is an American robotics scientist and entrepreneur. She is a former chief scientist and chief experience officer of Jibo, a company she co-founded in 2012 that developed personal assistant robots. Currently, she is a professor of media arts and sciences at the Massachusetts Institute of Technology and the director of the Personal Robots group at the MIT Media Lab. Her most recent work has focused on the theme of living everyday life in the presence of AI, and gradually gaining insight into the long-term impacts of social robots.
A voice-user interface (VUI) enables spoken human interaction with computers, using speech recognition to understand spoken commands and answer questions, and typically text-to-speech to play a reply. A voice command device is a device controlled with a voice user interface.
User interface (UI) design or user interface engineering is the design of user interfaces for machines and software, such as computers, home appliances, mobile devices, and other electronic devices, with the focus on maximizing usability and the user experience. In computer or software design, user interface (UI) design primarily focuses on information architecture. It is the process of building interfaces that clearly communicate to the user what's important. UI design refers to graphical user interfaces and other forms of interface design. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals.
Amazon Music is a music streaming platform and online music store operated by Amazon. As of January 2020, the service had 55 million subscribers.
TuneIn is a global audio streaming service providing news, radio, sports, music, and podcasts to over 75 million monthly active users.
A virtual assistant (VA) is a software agent that can perform a range of tasks or services for a user based on user input such as commands or questions, including verbal ones. Such technologies often incorporate chatbot capabilities to simulate human conversation, such as via online chat, to facilitate interaction with their users. The interaction may be via text, graphical interface, or voice, as some virtual assistants are able to interpret human speech and respond via synthesized voices.
Cloud gaming, sometimes called gaming on demand or game streaming, is a type of online gaming that runs video games on remote servers and streams the game's output directly to a user's device, or more colloquially, playing a game remotely from a cloud. It contrasts with traditional means of gaming, wherein a game is run locally on a user's video game console, personal computer, or mobile device.
An Internet area network (IAN) is a concept for a communications network that connects voice and data endpoints within a cloud environment over IP, replacing an existing local area network (LAN), wide area network (WAN) or the public switched telephone network (PSTN).
Amazon Fire TV is a line of digital media players and microconsoles developed by Amazon. The devices are small network appliances that deliver digital audio and video content streamed via the Internet to a connected high-definition television. They also allow users to access local content and to play video games with the included remote control or another game controller, or by using a mobile app remote control on another device.
Radhika Nagpal is an Indian-American computer scientist and researcher in the fields of self-organising computer systems, biologically-inspired robotics, and biological multi-agent systems. She is the Augustine Professor in Engineering in the Departments of Mechanical and Aerospace Engineering and Computer Science at Princeton University. Formerly, she was the Fred Kavli Professor of Computer Science at Harvard University and the Harvard School of Engineering and Applied Sciences. In 2017, Nagpal co-founded a robotics company under the name of Root Robotics. This educational company works to create many different opportunities for those unable to code to learn how.
Amazon Echo, often shortened to Echo, is an American brand of smart speakers developed by Amazon. Echo devices connect to the voice-controlled intelligent personal assistant service Alexa, which will respond when a user says "Alexa". Users may change this wake word to "Amazon", "Echo", "Computer", and other options. The features of the device include voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts, and playing audiobooks, in addition to providing weather, traffic and other real-time information. It can also control several smart devices, acting as a home automation hub.
Amazon Alexa or Alexa is a virtual assistant technology largely based on a Polish speech synthesizer named Ivona, bought by Amazon in 2013. It was first used in the Amazon Echo smart speaker and the Echo Dot, Echo Studio and Amazon Tap speakers developed by Amazon Lab126. It is capable of natural language processing (NLP) for tasks such as voice interaction, music playback, creating to-do lists, setting alarms, streaming podcasts, playing audiobooks, providing weather, traffic, sports, other real-time information and news. Alexa can also control several smart devices as a home automation system. Alexa capabilities may be extended by installing "skills" such as weather programs and audio features. It performs these tasks using automatic speech recognition, NLP, and other forms of weak AI.
A smart speaker is a type of loudspeaker and voice command device with an integrated virtual assistant that offers interactive actions and hands-free activation with the help of one "hot word". Some smart speakers can also act as a smart device that utilizes Wi-Fi, and other protocol standards to extend usage beyond audio playback, such as to control home automation devices. This can include, but is not limited to, features such as compatibility across a number of services and platforms, peer-to-peer connection through mesh networking, virtual assistants, and others. Each can have its own designated interface and features in-house, usually launched or controlled via application or home automation software. Some smart speakers also include a screen to show the user a visual response.
Amazon Echo Show is a smart speaker that is part of the Amazon Echo line of products. Similarly to other devices in the family, it is designed around Amazon's virtual assistant Alexa, but additionally features a touchscreen display that can be used to display visual information to accompany its responses, as well as play video and conduct video calls with other Echo Show users. The video call feature was later expanded to include all Skype users.
Joy Adowaa Buolamwini is a Ghanaian-American-Canadian computer scientist and digital activist based at the MIT Media Lab. She founded the Algorithmic Justice League (AJL), an organization that works to challenge bias in decision-making software, using art, advocacy, and research to highlight the social implications and harms of artificial intelligence (AI).
Kyle McDonald is a media artist. McDonald creates visually appealing models using code and releases toolkits for other artists to customize their own art as they see fit. McDonald was recently an adjunct professor at New York University's Tisch School of the Arts ITP. He is a member of F.A.T. Lab and a community manager for openFrameworks. He was a resident at the STUDIO for Creative Inquiry at Carnegie Mellon University.