You Look Like a Thing and I Love You

Author Janelle Shane
Country United States
Language English
Genre Popular science
Publisher Voracious
Publication date 5 November 2019
Pages 272 pp
ISBN 978-0316525244

You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place is a 2019 nonfiction book by optics research scientist Janelle Shane. The book documents experiences the author and others have had with machine learning programs, and discusses what "intelligence" means in the context of "artificial intelligence" (AI). [1]

Overview

The main title of the book refers to a phrase generated as a pickup line by a neural net that Shane trained on pickup lines gathered from the Internet. [2]
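
Shane's title phrase came from a character-level language model, which learns which character is likely to follow the text seen so far and then generates new text one character at a time. The following sketch is a hypothetical, minimal illustration of that general approach, using a tiny LSTM and a two-line placeholder corpus; it is not Shane's actual model, code, or training data.

# Hypothetical sketch of a character-level text generator of the kind Shane
# describes: a small LSTM trained on a list of pickup lines, then sampled to
# produce new ones. Placeholder corpus; not Shane's code or data.
import torch
import torch.nn as nn

lines = ["You look like a thing and I love you\n",
         "Are you a camera? Because you make me smile\n"]  # placeholder corpus
text = "".join(lines)
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

class CharLSTM(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 32)
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
data = torch.tensor([stoi[c] for c in text]).unsqueeze(0)

# Train to predict the next character at every position.
for step in range(200):
    logits, _ = model(data[:, :-1])
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(chars)),
                                       data[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a new "pickup line" one character at a time.
x, state, out = data[:, :1], None, []
for _ in range(80):
    logits, state = model(x, state)
    probs = torch.softmax(logits[0, -1], dim=-1)
    x = torch.multinomial(probs, 1).unsqueeze(0)
    out.append(itos[x.item()])
print("".join(out))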

Shane discusses the dangers of what might be called "artificial stupidity" (a phrase she does not use herself), describing, for example, a 2016 crash at a city street intersection, which Shane attributes in part to Tesla Autopilot being trained for highway use and therefore failing to properly perceive a blocking flatbed truck seen from the side. Shane provides "Five Principles of AI Weirdness", including "AIs don't understand the problems you want them to solve" and "AIs take the path of least resistance to their programmed goal". [1] Shane gives many examples of AI "shortcuts", including the (possibly apocryphal) legend of an AI that appeared to reliably recognize tanks in photos but was actually noticing whether the photos were taken on a sunny or a cloudy day. Another of her examples is a hypothetical scenario in which a simulated AI, evolved to keep people from entering a hazardous hallway during a fire emergency, learns that the optimal strategy is simply to kill everyone so that no one can enter the hallway. Because AI lacks general intelligence, Shane is skeptical of efforts to power self-driving cars or to detect online hate speech with artificial intelligence. Shane also pushes back against concerns that artificial intelligence will replace people's jobs. [3]
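
As a rough, hypothetical illustration of the shortcut problem described above, the sketch below uses synthetic data (not the original, possibly apocryphal tank photos): every "tank" image in the training set happens to be bright and every "no tank" image dark, so a simple classifier scores perfectly by learning brightness, then fails once the lighting is flipped at test time.

# Hypothetical illustration of the "tank detector" shortcut Shane recounts:
# in the training set, every "tank" photo happens to be sunny and every
# "no tank" photo cloudy, so a classifier can score perfectly by learning
# brightness rather than tanks. Synthetic data, not the original experiment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_photos(n, tank, sunny):
    base = 0.8 if sunny else 0.2            # overall brightness
    imgs = rng.normal(base, 0.05, (n, 64))  # 64 "pixels" per photo
    if tank:
        imgs[:, :4] += 0.1                  # faint "tank" signal in a corner
    return imgs

# Training set: tanks only on sunny days, non-tanks only on cloudy days.
X_train = np.vstack([fake_photos(100, tank=True, sunny=True),
                     fake_photos(100, tank=False, sunny=False)])
y_train = np.array([1] * 100 + [0] * 100)

# Test set: lighting flipped, so the brightness shortcut no longer works.
X_test = np.vstack([fake_photos(100, tank=True, sunny=False),
                    fake_photos(100, tank=False, sunny=True)])
y_test = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))  # near 1.0
print("test accuracy:", clf.score(X_test, y_test))     # far worse: it learned brightness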

Reception

A reviewer in the Christian Science Monitor found the book "eye-opening" and "fun", and found Shane's argument that jobs are not at risk from AI "comforting". [1] A review in ZDNet called the book "approachable" and "insightful". [3] A capsule review in The Philadelphia Inquirer called Shane a "great guide", [4] and a capsule review in Publishers Weekly called the book an "accessible primer" with "charming" and "often-hilarious" content. [5] A reviewer in E&T judged that the book "stands out for Shane's madcap sense of humour and affection for the subject". [6] A December 2019 list of "the 11 best new sci-fi books" in The Verge included Shane's book, stating "Science fact, rather than science fiction, [the book is] incredibly informative". [7] A similar list in Ars Technica praised the book on the grounds that "anybody, not just the engineer-minded or the tech-savvy, can understand the often abstract concepts she details." [8] The book also made Scientific American's list of "Recommended Books" for November 2019. [9]

Related Research Articles

Artificial intelligence

Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of humans or animals. It is also the field of study in computer science that develops and studies intelligent machines. "AI" may also refer to the machines themselves.

The technological singularity—or simply the singularity—is a hypothetical future point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model, an upgradable intelligent agent will eventually enter a "runaway reaction" of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an "explosion" in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.

Jürgen Schmidhuber

Jürgen Schmidhuber is a German computer scientist noted for his work in the field of artificial intelligence, specifically artificial neural networks. He is a scientific director of the Dalle Molle Institute for Artificial Intelligence Research in Switzerland. He is also director of the Artificial Intelligence Initiative and professor of the Computer Science program in the Computer, Electrical, and Mathematical Sciences and Engineering (CEMSE) division at the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia.

Geoffrey Hinton

Geoffrey Everest Hinton is a British-Canadian cognitive psychologist and computer scientist, most noted for his work on artificial neural networks. From 2013 to 2023, he divided his time working for Google and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the risks of artificial intelligence (AI) technology. In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto.

Artificial general intelligence

An artificial general intelligence (AGI) is a hypothetical type of intelligent agent. If realized, an AGI could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks. Creating AGI is a primary goal of some artificial intelligence research and of companies such as OpenAI, DeepMind, and Anthropic. AGI is a common topic in science fiction and futures studies.

Timeline of artificial intelligence

This is a timeline of artificial intelligence, sometimes alternatively called synthetic intelligence.

Gary Marcus

Gary Fred Marcus is an American psychologist, cognitive scientist, and author, known for his research on the intersection of cognitive psychology, neuroscience, and artificial intelligence (AI).

Applications of artificial intelligence

Artificial intelligence (AI) has been used in applications throughout industry and academia to alleviate certain problems. Like electricity or computers, AI is a general-purpose technology with a multitude of applications. It has been used in language translation, image recognition, credit scoring, e-commerce and other domains.

OpenAI

OpenAI is an American artificial intelligence (AI) research laboratory consisting of the non-profit OpenAI, Inc. and its for-profit subsidiary corporation OpenAI, L.P. registered in Delaware. OpenAI researches artificial intelligence with the declared intention of developing "safe and beneficial" artificial general intelligence, which it defines as "highly autonomous systems that outperform humans at most economically valuable work".

AI takeover—the idea that some kind of artificial intelligence may supplant humankind as the dominant intelligent species on the planet—is a common theme in science fiction. Famous cultural touchstones include Terminator and The Matrix.

Sunspring is a 2016 experimental science fiction short film written entirely by an artificial intelligence bot using neural networks. It was conceived by BAFTA-nominated filmmaker Oscar Sharp and NYU AI researcher Ross Goodwin and produced by the film production company End Cue along with Allison Friedman and Andrew Swett. It stars Thomas Middleditch, Elisabeth Grey, and Humphrey Ker as three people, named H, H2, and C, living in a future world and eventually connecting with each other through a love triangle. The script of the film was authored by an AI bot named Benjamin, built on a long short-term memory (LSTM) recurrent neural network.

Ian Goodfellow

Ian J. Goodfellow is an American computer scientist, engineer, and executive, most noted for his work on artificial neural networks and deep learning. He was previously employed as a research scientist at Google Brain and director of machine learning at Apple and has made several important contributions to the field of deep learning including the invention of the generative adversarial network (GAN). Goodfellow co-wrote, as the first author, the textbook Deep Learning (2016) and wrote the chapter on deep learning in the authoritative textbook of the field of artificial intelligence, Artificial Intelligence: A Modern Approach.

1 the Road

1 the Road is an experimental novel composed by artificial intelligence (AI). Emulating Jack Kerouac's On the Road, Ross Goodwin drove from New York to New Orleans in March 2017 with an AI in a laptop hooked up to various sensors, whose output the AI turned into words that were printed on rolls of receipt paper. The novel was published in 2018 by Jean Boîte Éditions.

Artificial intelligence art

Artificial intelligence art is any visual artwork created through the use of artificial intelligence (AI) programs.

Janelle Shane is an optics research scientist, artificial intelligence researcher, writer and public speaker. She keeps a popular science blog called AI Weirdness, where she documents various machine learning algorithms, both ones submitted by readers and ones she personally creates. Shane's first book, You Look Like A Thing And I Love You: How AI Works And Why It's Making The World A Weirder Place, was published in November 2019 and covers many of the topics from her AI Weirdness blog for a general audience.

Timeline of computing 2020–present

This article presents a detailed timeline of events in the history of computing from 2020 to the present. For narratives explaining the overall developments, see the history of computing.

GPT-3

Generative Pre-trained Transformer 3 (GPT-3) is a large language model released by OpenAI in 2020. Like its predecessor GPT-2, it is a decoder-only transformer model, a deep neural network that uses attention in place of the recurrence- and convolution-based architectures that preceded it. Attention mechanisms allow the model to selectively focus on the segments of input text it predicts to be most relevant. GPT-3 uses a 2,048-token context window and a then-unprecedented 175 billion parameters, requiring 800 GB of storage, and demonstrated strong zero-shot and few-shot learning on many tasks.
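
The following is a minimal sketch of the scaled dot-product attention operation that decoder-only transformers such as GPT-3 build on; the toy sequence length and dimensions are illustrative assumptions, not GPT-3's actual configuration.

# Minimal sketch of scaled dot-product attention, the mechanism transformer
# models such as GPT-3 use in place of recurrence and convolution.
# Shapes and sizes here are illustrative only.
import numpy as np

def scaled_dot_product_attention(Q, K, V, causal=True):
    """Each query position attends over key positions; a causal mask keeps a
    decoder-only model from looking at future tokens."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq, seq) relevance scores
    if causal:
        seq = scores.shape[0]
        scores = np.where(np.tri(seq, dtype=bool), scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted mix of value vectors

# Toy example: 5 tokens, 8-dimensional projections (GPT-3 itself uses a
# 2,048-token context and far larger dimensions).
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)   # (5, 8)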

Text-to-image model

A text-to-image model is a machine learning model which takes a natural language description as input and produces an image matching that description. Such models began to be developed in the mid-2010s as a result of advances in deep neural networks. In 2022, the output of state-of-the-art text-to-image models, such as OpenAI's DALL-E 2, Google Brain's Imagen, StabilityAI's Stable Diffusion, and Midjourney, began to approach the quality of real photographs and human-drawn art.

Hallucination (artificial intelligence)

In the field of artificial intelligence (AI), a hallucination or artificial hallucination is a confident response by an AI that does not seem to be justified by its training data. For example, a hallucinating chatbot might, when asked to generate a financial report for a company, falsely state that the company's revenue was $13.6 billion.

AI boom

The AI boom refers to an ongoing period of rapid and unprecedented development in the field of artificial intelligence, with the generative AI race being a key component of this boom, which began in earnest in the late 2010s. OpenAI's generative AI systems, such as its various GPT models and DALL-E (2021), have played a significant role in driving this development.

References

  1. O'Kelly, Kevin (7 April 2020). "Artificial Intelligence still has a long way to go". Christian Science Monitor. Retrieved 24 May 2020.
  2. Fessenden, Marissa (2017). "This Artificial Neural Network Generates Absurd Pickup Lines". Smithsonian Magazine. Retrieved 24 May 2020.
  3. Grossman, Wendy M. (2019). "You Look Like a Thing and I Love You, book review: The weird side of AI". ZDNet. Retrieved 24 May 2020.
  4. Timpane, John (2019). "Fall 2019′s biggest books include new titles from Margaret Atwood, Salman Rushdie, Ta-Nehisi Coates, Stephen King, and Ann Patchett". Philadelphia Inquirer. Retrieved 24 May 2020.
  5. Scheier, Liz (2019). "'Alexa, Balance My Portfolio': Business and Personal Finance Books 2019". PublishersWeekly.com. Retrieved 24 May 2020.
  6. Lamb, Hilary (12 December 2019). "Book review: 'You Look Like a Thing and I Love You' by Janelle Shane". eandt.theiet.org. Retrieved 24 May 2020.
  7. Gartenberg, Chaim (29 December 2019). "The 11 best new sci-fi books to check out on your new Kindle". The Verge. Retrieved 24 May 2020.
  8. Palladino, Valentina (1 September 2019). "Ars To-Be-Read: Space operas, platypus papers, and more books to read this fall". Ars Technica. Retrieved 24 May 2020.
  9. Gawrylewski, Andrea (2019). "Recommended Books, November 2019". Scientific American. Retrieved 24 May 2020.