Writer | Nicholas G. Carr
---|---
Categories | Advocacy journalism
Published in | The Atlantic, July 1, 2008
Published as | Cover story
"Is Google Making Us Stupid? What the Internet Is Doing to Our Brains!" (alternatively "Is Google Making Us Stoopid?") is a magazine article by technology writer Nicholas G. Carr, and is highly critical of the Internet's effect on cognition. It was published in the July/August 2008 edition of The Atlantic magazine as a six-page cover story. [1] Carr's main argument is that the Internet might have detrimental effects on cognition that diminish the capacity for concentration and contemplation. Despite the title, the article is not specifically targeted at Google, but more at the cognitive impact of the Internet and World Wide Web. [2] [3] Carr expanded his argument in The Shallows: What the Internet Is Doing to Our Brains, a book published by W. W. Norton in June 2010.
The essay was extensively discussed in the media and the blogosphere, with reactions to Carr's argument being polarised. At the Britannica Blog, a part of the discussion focused on the apparent bias in Carr's argument toward literary reading. In Carr's view, reading on the Internet is generally a shallower form in comparison with reading from printed books in which he believes a more intense and sustained form of reading is exercised. [4] Elsewhere in the media, the Internet's impact on memory retention was discussed; and, at the online scientific magazine Edge , several argued that it was ultimately the responsibility of individuals to monitor their Internet usage so that it does not impact their cognition.
While long-term psychological and neurological studies have yet to yield definitive results justifying Carr's argument, a few studies have provided glimpses into the changing cognitive habits of Internet users. [5] A UCLA study, in which participants' brain activity was monitored with functional MRI scans while they performed Internet searches, led some to wonder whether the breadth of brain activity observed actually facilitated reading and cognition or instead overburdened the mind, and what quality of thought could be inferred from the additional activity in regions known to control decision-making and complex reasoning skills.
Prior to the publication of Carr's Atlantic essay, critics had long been concerned about the potential for electronic media to supplant literary reading. [6] In 1994, American academic Sven Birkerts published a book titled The Gutenberg Elegies: The Fate of Reading in an Electronic Age, a collection of essays that declaimed against the declining influence of literary culture—the tastes in literature favored by a social group—with the central premise that alternative delivery formats for the book are inferior to the paper incarnation. [7] [8] [9] Birkerts was spurred to write the book by his experience teaching a class in the fall of 1992, in which the students had little appreciation for the literature he had assigned them, a failing he attributed to their inaptitude for the variety of skills involved in deep reading. [10] [11] [12] In "Perseus Unbound", an essay from the book, Birkerts presented several reservations toward the application of interactive technologies to educational instruction, cautioning that the "long-term cognitive effects of these new processes of data absorption" were unknown and that they could yield "an expansion of the short-term memory banks and a correlative atrophying of long-term memory". [13]
In 2007, developmental psychologist Maryanne Wolf took up the cause of defending reading and print culture in her book Proust and the Squid: The Story and Science of the Reading Brain, approaching the subject matter from a scientific angle in contrast to Birkerts' cultural-historical angle. [2] [8] [14] [15] A few reviewers were critical of Wolf for only touching upon the Internet's potential impact on reading in her book; [16] [17] [18] however, in essays published concurrently with the book's release she elaborated upon her worries. In an essay in The Boston Globe, Wolf expressed her grave concern that the development of knowledge in children who are heavy users of the Internet could produce mere "decoders of information who have neither the time nor the motivation to think beneath or beyond their googled universes", and cautioned that the web's "immediacy and volume of information should not be confused with true knowledge". [19] In an essay published by Powell's Books, Wolf contended that some of the reading brain's strengths could be lost in future generations "if children are not taught first to read, and to think deeply about their reading, and only then to e-read". [20] Preferring to maintain an academic perspective, Wolf firmly asserted that her speculations had not yet been scientifically verified but deserved serious study. [21] [22]
In Carr's 2008 book The Big Switch: Rewiring the World, From Edison to Google, the material in the final chapter, "iGod", provided a basis for his later Atlantic magazine article titled "Is Google Making Us Stupid?" [23] The inspiration to write "Is Google Making Us Stupid?" came from the difficulties Carr found he had in remaining engaged with not only books he had to read but even books that he found very interesting. [3] This kind of sustained, engaged reading is sometimes called deep reading, a term coined by academic Sven Birkerts in The Gutenberg Elegies and later defined by developmental psychologist Maryanne Wolf with an added cognitive connotation. [11] [21] [22] [24] [25]
"Is Google Making Us Stupid?" is a 2008 article written by technologist Nicholas Carr for The Atlantic, later expanded into the book The Shallows: What the Internet Is Doing to Our Brains, published by W. W. Norton. The book investigates the cognitive effects of technological advancements that relegate certain cognitive activities—namely, knowledge-searching—to external computational devices. The book received mainstream recognition for interrogating the assumptions people make about technological change and for advocating a measure of personal accountability in our relationships with devices.
Carr begins the essay by saying that his recent problems with concentrating on reading lengthy texts, including the books and articles that he used to read effortlessly, stem from spending too much time on the Internet. He suggests that constantly using the Internet might reduce one's ability to concentrate and reflect on content. He introduces a few anecdotes taken from bloggers who write about the transformation in their reading and writing habits over time. In addition, he analyzes a 2008 study by University College London about new "types" of reading that will emerge and become predominant in the information age. He refers in particular to the work of Maryanne Wolf, a reading behavior scholar, which includes theories about the role of technology and media in learning how to read and write new languages. Carr argues that while speech is an innate ability that stems directly from brain structure, reading is conscious and must be taught. He acknowledges that this theory has a paucity of evidence so far, but refers to works such as Wolf's Proust and the Squid, which discusses how the brain's neurons adapt to the demands of a creature's environment, allowing it to become literate in new domains. The Internet, in his opinion, is just another kind of environment that we will uniquely adapt to.
Carr discusses how concentration might be impaired by Internet usage. He cites the historical example of Nietzsche, who began using a typewriter, then a new technology, in the 1880s; Nietzsche's writing style is said to have changed after he adopted it. Carr presents this example as a demonstration of neuroplasticity, a scientific theory that holds that neural circuits are contingent and in flux. He invokes the idea of sociologist Daniel Bell that technologies extend human cognition, arguing that humans unconsciously conform to the very qualities, or kinds of patterns, involved in these devices' functions. He uses the clock as an example of a device that has both improved and regulated human perception and behavior.
Carr argues that the Internet is changing behavior at unprecedented levels because it is one of the most pervasive and life-altering technologies in human history. He suggests that the Internet engenders cognitive distractions in the form of ads and popups. These distractions are only worsened as online media adapt their strategies and visual forms to those of Internet platforms, seeking to appear more legitimate and to lure the viewer into processing them.
Carr also posits that people's ability to concentrate might decrease as new algorithms free them from knowledge work; that is, the process of manipulating and synthesizing abstract information into new concepts and conclusions. He compares the Internet with industrial management systems, tracing how they caused workers to complain that they felt like automata after the implementation of Taylorist management workflows. He compares this example with the modern example of Google, which places its computer engineers and designers into a systematized knowledge environment, creating robust insights and results at the expense of creativity. Additionally, Carr argues that the Internet makes its money mainly by exploiting users' privacy or bombarding them with overstimulation, a vicious cycle where companies facilitate mindless browsing instead of rewarding sustained thinking.
Carr ends his essay by tracing the roots of skepticism toward new technologies. He discusses historical episodes in which people were wary of new technologies, including Socrates's skepticism about the use of written language and a fifteenth-century Italian editor's concern about the shift from manually written to printed works. All of these technologies indelibly changed human cognition, but also led to mind-opening innovations that endure today. Still, Carr concludes his argument on an ambivalent note, citing a quote by playwright Richard Foreman that laments the erosion of educated and articulate people. Though Google and other knowledge-finding and knowledge-building technologies might speed up existing human computational processes, they might also foreclose the human potential to easily create new knowledge.
We can expect … that the circuits woven by our use of the Net will be different from those woven by our reading of books and other printed works.
— Nicholas Carr, "Is Google Making Us Stupid?" [24]
Carr's essay was widely discussed in the media both critically and in passing. While English technology writer Bill Thompson observed that Carr's argument had "succeeded in provoking a wide-ranging debate", [2] Damon Darlin of The New York Times quipped that even though "[everyone] has been talking about [the] article in The Atlantic magazine", only "[s]ome subset of that group has actually read the 4,175-word article, by Nicholas Carr." [26] The controversial online responses to Carr's essay were, according to Chicago Tribune critic Steve Johnson, partly the outcome of the essay's title "Is Google Making Us Stupid?", a question that the article proper does not actually pose and that he believed was "perfect fodder for a 'don't-be-ridiculous' blog post"; Johnson challenged his readers to carefully consider their online responses in the interest of raising the quality of debate. [3]
Many critics discussed the merits of Carr's essay at great length in forums set up formally for this purpose at online hubs such as the Britannica Blog and publisher John Brockman's online scientific magazine Edge, where the roster of names quickly took on the semblance of a Who's Who of the day's Internet critics. [27] [28] [29] [30] Calling it "the great digital literacy debate", British-American entrepreneur and author Andrew Keen judged the victor to be the American reader, who has a wide range of compelling writing from "all of America's most articulate Internet luminaries". [30]
Book critic Scott Esposito pointed out that Chinese characters are incorrectly described as ideograms in Carr's essay, an error that he believed undermined the essay's argument. [31] The myth that Chinese script is ideographic had been effectively debunked in scholar John DeFrancis' 1984 book The Chinese Language: Fact and Fantasy; [32] DeFrancis classifies Chinese as a logosyllabic writing system. [33] Carr acknowledged that there was a debate over the terminology of 'ideogram', but in a response to Esposito he explained that he had "decided to use the common term" and quoted The Oxford American Dictionary to demonstrate that it likewise defines Chinese characters as instances of ideograms. [34]
Writer and activist Seth Finkelstein noted that predictably several critics would label Carr's argument as a Luddite one, [35] and he was not to be disappointed when one critic later maintained that Carr's "contrarian stance [was] slowly forcing him into a caricature of Luddism". [36] Then, journalist David Wolman, in a Wired magazine piece, described as "moronic" the assumption that the web "hurts us more than it helps", a statement that was preceded by an overview of the many technologies that had been historically denounced; Wolman concluded that the solution was "better schools as well as a renewed commitment to reason and scientific rigor so that people can distinguish knowledge from garbage". [37]
Several prominent scientists working in the field of neuroscience supported Carr's argument as scientifically plausible. James Olds, a professor of computational neuroscience, who directs the Krasnow Institute for Advanced Study at George Mason University, was quoted in Carr's essay for his expertise, and upon the essay's publication Olds wrote a letter to the editor of The Atlantic in which he reiterated that the brain was "very plastic"—referring to the changes that occur in the organization of the brain as a result of experience. It was Olds' opinion that given the brain's plasticity it was "not such a long stretch to Carr's meme". [38] One of the pioneers in neuroplasticity research, Michael Merzenich, later added his own comment to the discussion, stating that he had given a talk at Google in 2008 in which he had asked the audience the same question that Carr asked in his essay. Merzenich believed that there was "absolutely no question that our brains are engaged less directly and more shallowly in the synthesis of information, when we use research strategies that are all about 'efficiency', 'secondary (and out-of-context) referencing', and 'once over, lightly'". [39] Another neuroscientist, Gary Small, director of UCLA's Memory & Aging Research Center, wrote a letter to the editor of The Atlantic in which he stated that he believed that "brains are developing circuitry for online social networking and are adapting to a new multitasking technology culture". [40]
In the media, journalists offered many testimonials and refutations of the first part of Carr's argument, regarding the capacity for concentration; treatments of the second part of his argument, regarding the capacity for contemplation, were far rarer. [41] Although columnist Andrew Sullivan noted that he had little leisure time at his disposal for contemplation compared with when he grew up, [42] the anecdotes provided by journalists that indicated a deficiency in the capacity to contemplate were described only in the context of third parties, such as columnist Margaret Wente's anecdote about how one consultant had found a growing tendency in her clients to provide ill-considered descriptions for their technical problems. [41] [43]
Columnist Leonard Pitts of The Miami Herald described his difficulty sitting down to read a book, in which he felt like he "was getting away with something, like when you slip out of the office to catch a matinee". [44] Technology evangelist Jon Udell admitted that, in his "retreats" from the Internet, he sometimes struggled to settle into "books, particularly fiction, and particularly in printed form". [45] He found portable long-form audio to be "transformative", however, because he can easily achieve "sustained attention", which makes him optimistic about the potential to "reactivate ancient traditions, like oral storytelling, and rediscover their powerful neural effects". [9] [45]
Also writing in The Atlantic, a year after Carr, the futurist Jamais Cascio argued that human cognition has always evolved to meet environmental challenges, and that those posed by the internet are no different. He described the 'skimming' referred to by Carr as a form of attention deficit caused by the immaturity of filter algorithms: "The trouble isn't that we have too much information at our fingertips, but that our tools for managing it are still in their infancy... many of the technologies that Carr worries about were developed precisely to help us get some control over a flood of data and ideas. Google isn't the problem; it's the beginning of a solution." [46] Cascio has since modified his stance, conceding that, while the internet remains good at illuminating knowledge, it is even better at manipulating emotion. "If Carr wrote his Atlantic essay now [2020] with the title 'Is Facebook Making Us Stupid?' it would be difficult to argue in favor of 'No.'". [47]
Cascio and Carr's articles have been discussed together in several places. Pew Research used them to form a tension-pair survey question that was distributed to noted academics. Most responded in detail, concurring with the proposition that "Carr was wrong: Google does not make us stupid". [48] In The Googlization of Everything, Siva Vaidhyanathan tended to side with Carr. However, he thought both arguments relied too much on determinism: Carr in thinking that an over-reliance on Internet tools will inevitably cause the brain to atrophy, and Cascio in thinking that getting smarter is the necessary outcome of the evolutionary pressures he describes. [49] In From Gutenberg to Zuckerberg, John Naughton noted that, while many agreed Carr had hit on an important subject, his conclusions were not widely supported. [50]
Firmly contesting Carr's argument, journalist John Battelle praised the virtues of the web: "[W]hen I am deep in search for knowledge on the web, jumping from link to link, reading deeply in one moment, skimming hundreds of links the next, when I am pulling back to formulate and reformulate queries and devouring new connections as quickly as Google and the Web can serve them up, when I am performing bricolage in real time over the course of hours, I am 'feeling' my brain light up, I and [ sic ] 'feeling' like I'm getting smarter". [2] [51] Web journalist Scott Rosenberg reported that his reading habits are the same as they were when he "was a teenager plowing [his] way through a shelf of Tolstoy and Dostoyevsky". [52] In book critic Scott Esposito's view, "responsible adults" have always had to deal with distractions, and, in his own case, he claimed to remain "fully able to turn down the noise" and read deeply. [31] [41]
In critiquing the rise of Internet-based computing, the philosophical question of whether or not a society can control technological progress was raised. At the online scientific magazine Edge , Wikipedia co-founder Larry Sanger argued that individual will was all that was necessary to maintain the cognitive capacity to read a book all the way through, and computer scientist and writer Jaron Lanier rebuked the idea that technological progress is an "autonomous process that will proceed in its chosen direction independently of us". [29] Lanier echoed a view stated by American historian Lewis Mumford in his 1970 book The Pentagon of Power , in which Mumford suggested that the technological advances that shape a society could be controlled if the full might of a society's free will were employed. [23] [53] Lanier believed that technology was significantly hindered by the idea that "there is only one axis of choice" which is either pro- or anti- when it comes to technology adoption. [29] Yet Carr had stated in The Big Switch that he believed an individual's personal choice toward a technology had little effect on technological progress. [23] [54] According to Carr, the view expressed by Mumford about technological progress was incorrect because it regarded technology solely as advances in science and engineering rather than as an influence on the costs of production and consumption. Economics were a more significant consideration in Carr's opinion because in a competitive marketplace the most efficient methods of providing an important resource will prevail. As technological advances shape society, an individual might be able to resist the effects but his lifestyle will "always be lonely and in the end futile"; despite a few holdouts, technology will nevertheless shape economics which, in turn, will shape society. [23] [54]
Carr's selection of one particular quote from pathologist Bruce Friedman, a member of the faculty of the University of Michigan Medical School, who commented on a developing difficulty reading books and long essays, and specifically the novel War and Peace, was criticized for having a bias toward narrative literature. The quote failed to represent other types of literature, such as technical and scientific literature, which had, in contrast, become much more accessible and widely read with the advent of the Internet. [24] [55] At the Britannica Blog, writer Clay Shirky pugnaciously observed that War and Peace was "too long, and not so interesting", further stating that "it would be hard to argue that the last ten years have seen a decrease in either the availability or comprehension of material on scientific or technical subjects". [36] Shirky's comments on War and Peace were derided by several of his peers as verging on philistinism. [25] [56] [57] In Shirky's defense, inventor W. Daniel Hillis asserted that, although books "were created to serve a purpose", that "same purpose can often be served by better means". While Hillis considered the book to be "a fine and admirable device", he imagined that clay tablets and scrolls of papyrus, in their time, "had charms of their own". [29] Wired magazine editor Kevin Kelly believed that the idea that "the book is the apex of human culture" should be resisted. [7] And Birkerts differentiated online reading from literary reading, stating that in the latter the reader is directed within themselves and enters "an environment that is nothing at all like the open-ended information zone that is cyberspace", in which he feels psychologically fragmented. [27] [58]
Abundance of books makes men less studious.
— Hieronimo Squarciafico, a 15th-century Venetian editor, bemoaning the printing press. [59] [60]
Several critics theorized about the effects of the shift from scarcity to abundance of written material in the media as a result of the technologies introduced by the Internet. This shift was examined for its potential to lead individuals to a superficial comprehension of many subjects rather than a deep comprehension of just a few subjects. According to Shirky, an individual's ability to concentrate had been facilitated by the "relatively empty environment" which had ceased to exist when the wide availability of the web proliferated new media. Although Shirky acknowledged that the unprecedented quantity of written material available on the web might occasion a sacrifice of the cultural importance of many works, he believed that the solution was "to help make the sacrifice worth it". [36] In direct contrast, Sven Birkerts argued that "some deep comprehension of our inheritance [was] essential", and called for "some consensus vision among those shapers of what our society and culture might be shaped toward", warning against allowing the commercial marketplace to dictate the future standing of traditionally important cultural works. [61] While Carr found solace in Shirky's conceit that "new forms of expression" might emerge to suit the Internet, he considered this conceit to be one of faith rather than reason. [25] In a later response, Shirky continued to expound upon his theme that "technologies that make writing abundant always require new social structures to accompany them", explaining that Gutenberg's printing press led to an abundance of cheap books which were met by "a host of inventions large and small", such as the separation of fiction from non-fiction, the recognition of talents, the listing of concepts by indexes, and the practice of noting editions. [55]
As a result of the vast stores of information made accessible on the web, a number of critics pointed to a decrease in the desire to recall certain types of information, indicating, they believed, a change in the process of recalling information, as well as the types of information that are recalled. According to Ben Worthen, a Wall Street Journal business technology blogger, the growing importance placed on the ability to access information, rather than the capacity to recall it straight from memory, would, in the long term, change the type of job skills that companies hiring new employees would find valuable. Due to an increased reliance on the Internet, Worthen speculated that before long "the guy who remembers every fact about a topic may not be as valuable as the guy who knows how to find all of these facts and many others". [41] [62] Evan Ratliff of Salon.com wondered if the use of gadgets to recall phone numbers, as well as geographical and historical information, had the effect of releasing certain cognitive resources that in turn strengthened other aspects of cognition. Drawing parallels with transactive memory—a process whereby people remember things in relationships and groups—Ratliff mused that perhaps the web was "like a spouse who is around all the time, with a particular knack for factual memory of all varieties". [27] Far from conclusive, these ruminations left the web's impact on memory retention an open question. [27]
In the essay, Carr introduces the discussion of the scientific support for the idea that the brain's neural circuitry can be rewired with an example in which philosopher Friedrich Nietzsche is said to have been influenced by technology. According to German scholar Friedrich A. Kittler in his book Gramophone, Film, Typewriter, Nietzsche's writing style became more aphoristic after he started using a typewriter. Nietzsche began using a Malling-Hansen Writing Ball because his failing eyesight had made it difficult for him to write by hand. [23] [65] The idea that Nietzsche's writing style had changed, for better or worse, when he adopted the typewriter was disputed by several critics. Kevin Kelly and Scott Esposito each offered alternate explanations for the apparent changes. [29] [31] [66] Esposito believed that "the brain is so huge and amazing and enormously complex that it's far, far off base to think that a few years of Internet media or the acquisition of a typewriter can fundamentally rewire it." [31] In response to Esposito's point, neuroscientist James Olds stated that recent brain research demonstrated that it was "pretty clear that the adult brain can re-wire on the fly". The New York Times reported that several scientists believed it was certainly plausible that the brain's neural circuitry may be shaped differently by regular Internet usage than by the reading of printed works. [6]
Although there was a consensus in the scientific community that the brain's neural circuitry can change through experience, the potential effect of web technologies on that circuitry was unknown. [38] [39] On the topic of the Internet's effect on reading skills, Guinevere F. Eden, director of the Center for the Study of Learning at Georgetown University, remarked that the question was whether or not the Internet changed the brain in a way that was beneficial to an individual. [6] Carr believed that the effect of the Internet on cognition was detrimental, weakening the ability to concentrate and contemplate. Olds cited the potential benefits of computer software that specifically targets learning disabilities, stating that some neuroscientists believed neuroplasticity-based software was beneficial in improving receptive language disorders. [38] Olds mentioned neuroscientist Michael Merzenich, who had formed several companies with his peers in which neuroplasticity-based computer programs were developed to improve the cognitive functioning of children, adults, and the elderly. [38] [67] In 1996, Merzenich and his peers started a company called Scientific Learning, which used neuroplasticity research to develop a computer training program called Fast ForWord offering seven brain exercises designed to improve language impairments and learning disabilities in children. [68] Feedback on Fast ForWord showed that these brain exercises even had benefits for autistic children, an unexpected spillover effect that Merzenich attempted to harness by developing a modification of Fast ForWord specifically designed for autism. [69] At a subsequent company Merzenich started, Posit Science, Fast ForWord-like brain exercises and other techniques were developed with the aim of sharpening the brains of elderly people by maintaining their plasticity. [70]
In Stanley Kubrick's 1968 science fiction film 2001: A Space Odyssey, astronaut David Bowman slowly disassembles the mind of an artificial intelligence named HAL by sequentially unplugging its memory banks. Carr likened the despair HAL expresses as its mind is disassembled to his own cognitive difficulties, at the time, in engaging with long texts. [2] He felt as if someone was "tinkering with [his] brain, remapping the neural circuitry, reprogramming the memory". [24] HAL had also been used as a metaphor for the "ultimate search engine" in a PBS interview with Google co-founder Sergey Brin, as noted in Carr's book The Big Switch and in Brin's TED talk. Brin was comparing Google's ambitions of building an artificial intelligence to HAL, while dismissing the possibility that a bug like the one that led HAL to murder the occupants of the fictional spacecraft Discovery One could occur in a Google-based artificial intelligence. [23] [71] [72] Carr observed in his essay that throughout history technological advances have often necessitated new metaphors, such as the mechanical clock engendering the simile "like clockwork" and the age of the computer engendering the simile "like computers". Carr concluded his essay with an explanation of why he believed HAL was an appropriate metaphor for his essay's argument. He observed that HAL showed genuine emotion as his mind was disassembled while, throughout the film, the humans aboard the spacecraft appeared to be automatons, thinking and acting as if they were following the steps of an algorithm. Carr believed that the film's prophetic message was that as individuals increasingly rely on computers for an understanding of their world, their intelligence may become more machinelike than human. [2] [24]
The brain is very specialized in its circuitry and if you repeat mental tasks over and over it will strengthen certain neural circuits and ignore others.
— Gary Small, a professor at UCLA's Semel Institute for Neuroscience and Human Behavior. [73]
After the publication of Carr's essay, a developing view unfolded in the media as sociological and neurological studies surfaced that were relevant to determining the cognitive impact of regular Internet usage. Challenges to Carr's argument were made frequently. Kevin Kelly appealed to Carr and Birkerts, as the two most outspoken detractors of electronic media, to each formulate a more precise definition of the faults they perceived in electronic media so that their beliefs could be scientifically verified. [74] While Carr firmly believed that his skepticism about the Internet's benefits to cognition was warranted, [25] he cautioned in both his essay and his book The Big Switch that long-term psychological and neurological studies were required to definitively ascertain how cognition develops under the influence of the Internet. [3] [24] [75]
Scholars at University College London conducted a study titled "Information Behaviour of the Researcher of the Future", the results of which suggested that students' research habits tended towards skimming and scanning rather than in-depth reading. [76] The study provoked serious reflection among educators about the implications for educational instruction. [77]
In October 2008, new insights into the effect of Internet usage on cognition were gleaned from the results, reported in a press release, [78] of a study conducted by UCLA's Memory and Aging Research Center that had tested two groups of people between the ages of 55 and 76, only one of which was experienced in web use. While they read books or performed assigned search tasks, their brain activity was monitored with functional MRI scans, which revealed that both reading and web search engage the same language, reading, memory, and visual regions of the brain; however, web searching also stimulated additional decision-making and complex reasoning regions of the brain, with a two-fold increase in activity in these regions among experienced web users compared with inexperienced ones. [79] [80] [81] [82] Gary Small, the director of the UCLA center and lead investigator of the study, released the book iBrain: Surviving the Technological Alteration of the Modern Mind, co-authored with Gigi Vorgan, concurrently with the press release.
While one set of critics and bloggers used the UCLA study to dismiss the argument raised in Carr's essay, [83] [84] another set took a closer look at the conclusions that could be drawn from the study concerning the effects of Internet usage. [85] Among the reflections concerning the possible interpretations of the UCLA study were whether greater breadth of brain activity while using the Internet in comparison with reading a book improved or impaired the quality of a reading session; and whether the decision-making and complex reasoning skills that are apparently involved in Internet search, according to the study, suggest a high quality of thought or simply the use of puzzle solving skills. [86] [87] Thomas Claburn, in InformationWeek , observed that the study's findings regarding the cognitive impact of regular Internet usage were inconclusive and stated that "it will take time before it's clear whether we should mourn the old ways, celebrate the new, or learn to stop worrying and love the Net". [5]