Culturomics is a form of computational lexicology that studies human behavior and cultural trends through the quantitative analysis of digitized texts. [1] [2] Researchers data mine large digital archives to investigate cultural phenomena reflected in language and word usage. [3] The term is an American neologism first described in a 2010 Science article, "Quantitative Analysis of Culture Using Millions of Digitized Books", co-authored by Harvard researchers Jean-Baptiste Michel and Erez Lieberman Aiden. [4]
Michel and Aiden helped create the Google Labs project Google Ngram Viewer, which uses n-grams to analyze the Google Books digital library for cultural patterns in language use over time.
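The viewer's core quantity is the relative frequency of an n-gram per publication year. The following is a minimal sketch of that idea over a toy corpus of (year, text) pairs; the corpus and the normalisation are illustrative assumptions, not the Google Books pipeline.

```python
# Sketch: relative frequency of an n-gram per year over a toy corpus.
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def yearly_relative_frequency(corpus, phrase):
    """corpus: iterable of (year, text) pairs; phrase: e.g. 'world war'."""
    target = tuple(phrase.lower().split())
    n = len(target)
    hits, totals = Counter(), Counter()
    for year, text in corpus:
        grams = ngrams(text.lower().split(), n)
        totals[year] += len(grams)
        hits[year] += sum(1 for g in grams if g == target)
    return {year: hits[year] / totals[year] for year in sorted(totals) if totals[year]}

toy_corpus = [
    (1939, "the war began and the world changed"),
    (1944, "world war coverage filled every newspaper in the world"),
    (1955, "peace and prosperity were the themes of the decade"),
]
print(yearly_relative_frequency(toy_corpus, "world war"))
```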
Because the Google Ngram data set is not an unbiased sample [5] and does not include metadata, [6] there are several pitfalls when using it to study language or the popularity of terms. [7] Medical literature accounts for a large, but shifting, share of the corpus, [8] and the corpus does not take into account how often works are printed or read.
In a study called Culturomics 2.0, Kalev H. Leetaru examined news archives including print and broadcast media (television and radio transcripts) for words that imparted tone or "mood" as well as geographic data. [10] [11] The research retroactively predicted the 2011 Arab Spring and successfully estimated the final location of Osama bin Laden to within 124 miles (200 km). [10] [11]
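Leetaru's full-text geocoding and tone analysis go well beyond a short example, but the general idea of combining geographic mentions with weights can be sketched as a weighted centroid of geocoded place names. This is a loose illustration, not the study's actual method; the coordinates are approximate and the mention counts are invented.

```python
# Sketch: weighted centroid of geocoded place mentions (illustrative only).
def weighted_centroid(places):
    """places: iterable of (latitude, longitude, weight) triples."""
    total = sum(w for _, _, w in places)
    lat = sum(la * w for la, _, w in places) / total
    lon = sum(lo * w for _, lo, w in places) / total
    return lat, lon

# Invented mention counts for three geocoded cities (approximate coordinates).
mentions = [
    (34.02, 71.52, 12),   # Peshawar
    (33.68, 73.05, 30),   # Islamabad
    (34.17, 73.22, 45),   # Abbottabad
]
print(weighted_centroid(mentions))
```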
In a 2012 paper, [12] Alexander M. Petersen and co-authors found a "dramatic shift in the birth rate and death rates of words": [13] deaths have increased and births have slowed. The authors also identified a universal "tipping point" in the life cycle of new words about 30 to 50 years after their origin, at which point they either enter the long-term lexicon or fall into disuse. [13]
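As a rough illustration of how word "births" and "deaths" can be operationalised from yearly frequency series, the sketch below marks a word as active in any year its frequency exceeds a fraction of its peak; the threshold and the toy numbers are assumptions, not the criteria used by Petersen et al.

```python
# Sketch: first and last "active" year of a word from its yearly frequencies.
def birth_and_death(series, threshold_fraction=0.05):
    """series: dict {year: relative frequency} for one word."""
    if not series:
        return None, None
    peak = max(series.values())
    if peak == 0:
        return None, None
    cutoff = threshold_fraction * peak
    active_years = sorted(year for year, freq in series.items() if freq >= cutoff)
    return active_years[0], active_years[-1]

fax = {1980: 1e-7, 1985: 4e-6, 1995: 9e-6, 2005: 2e-6, 2015: 3e-7}
print(birth_and_death(fax))   # (1985, 2005) under these toy numbers
```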
Culturomic approaches have been taken in the analysis of newspaper content in a number of studies by I. Flaounas and co-authors. These studies showed macroscopic trends across different news outlets and countries. In 2012, a study of 2.5 million articles suggested that gender bias in news coverage depends on topic, and examined how the readability of newspaper articles is related to topic. [14] A separate study by the same researchers, covering 1.3 million articles from 27 countries, [15] showed macroscopic patterns in the choice of stories to cover. In particular, countries made similar choices when they were related by economic, geographical and cultural links; the cultural links were revealed by the similarity in voting for the Eurovision Song Contest. This study was performed on a vast scale using statistical machine translation, text categorisation and information extraction techniques.
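One of the analyses described above can be sketched by representing each country's coverage as a vector of attention over stories or topics and comparing countries by cosine similarity. The topic labels and counts below are invented; in the study such vectors were built with machine translation, text categorisation and information extraction.

```python
# Sketch: cosine similarity between countries' story-coverage vectors.
import math

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

coverage = {
    "UK":      {"euro_crisis": 120, "royal_wedding": 300, "arab_spring": 80},
    "Germany": {"euro_crisis": 400, "royal_wedding": 40,  "arab_spring": 90},
    "Ireland": {"euro_crisis": 150, "royal_wedding": 210, "arab_spring": 60},
}
for a in coverage:
    for b in coverage:
        if a < b:
            print(a, b, round(cosine(coverage[a], coverage[b]), 3))
```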
The possibility of detecting mood shifts in a vast population by analysing Twitter content was demonstrated in a study by T. Lansdall-Welfare and co-authors. [16] The study considered 84 million tweets generated by more than 9.8 million users from the United Kingdom over a period of 31 months, showing how public sentiment in the UK changed with the announcement of spending cuts.
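Lexicon-based mood tracking of this kind can be sketched as follows; the tiny word lists and tweets are illustrative assumptions, not the word lists or method used in the study.

```python
# Sketch: daily mood score from tweets using a toy sentiment lexicon.
from collections import defaultdict
from datetime import date

POSITIVE = {"happy", "great", "love", "win"}
NEGATIVE = {"sad", "cuts", "angry", "lose"}

def daily_mood(tweets):
    """tweets: iterable of (date, text); returns {date: mean mood in [-1, 1]}."""
    scores = defaultdict(list)
    for day, text in tweets:
        words = text.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        if pos or neg:
            scores[day].append((pos - neg) / (pos + neg))
    return {day: sum(vals) / len(vals) for day, vals in sorted(scores.items())}

sample = [
    (date(2010, 10, 19), "so happy about the great weekend"),
    (date(2010, 10, 20), "spending cuts announced today feeling sad and angry"),
]
print(daily_mood(sample))
```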
In a 2013 study by S. Sudhahar and co-authors, automatic parsing of textual corpora enabled the extraction of actors and their relational networks on a vast scale, turning textual data into network data. The resulting networks, which can contain thousands of nodes, are then analysed using tools from network theory to identify the key actors, the key communities or parties, and general properties such as the robustness or structural stability of the overall network, or the centrality of certain nodes. [17]
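The pipeline described above can be sketched by replacing the information-extraction step with a hand-written list of actor-to-actor relations and analysing the resulting graph with the third-party networkx package; the actors and ties below are invented.

```python
# Sketch: build an actor network and compute simple centrality measures.
import networkx as nx

relations = [
    ("Party A", "Party B"), ("Party A", "Minister X"),
    ("Party B", "Minister X"), ("Minister X", "Union Y"),
    ("Union Y", "Party C"),
]
G = nx.Graph()
G.add_edges_from(relations)

print("degree centrality:", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))
print("network density:", nx.density(G))
```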
In a 2014 study by T. Lansdall-Welfare and co-authors, 5 million news articles were collected over 5 years [18] and analyzed to suggest a significant shift in the sentiment of coverage of nuclear power, corresponding to the Fukushima disaster. The study also extracted the concepts associated with nuclear power before and after the disaster, explaining the change in sentiment as a change in narrative framing.
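A before/after comparison of this kind can be sketched as a difference in mean sentiment around a cutoff date, together with the most frequent words in each period; the scoring, articles and word counts below are toy assumptions, not the study's method.

```python
# Sketch: compare mean sentiment and frequent terms before and after a cutoff date.
from collections import Counter
from datetime import date

CUTOFF = date(2011, 3, 11)  # Fukushima disaster

def split_periods(articles):
    """articles: iterable of (date, sentiment_score, text)."""
    before = [(s, t) for d, s, t in articles if d < CUTOFF]
    after = [(s, t) for d, s, t in articles if d >= CUTOFF]
    return before, after

def summarise(period, top=3):
    if not period:
        return 0.0, []
    mean = sum(score for score, _ in period) / len(period)
    words = Counter(w for _, text in period for w in text.lower().split())
    return round(mean, 2), [w for w, _ in words.most_common(top)]

articles = [
    (date(2010, 6, 1), 0.4, "nuclear power hailed as low carbon energy"),
    (date(2011, 4, 2), -0.6, "nuclear accident radiation fears dominate coverage"),
    (date(2012, 1, 9), -0.3, "nuclear phase out debated after radiation scare"),
]
before, after = split_periods(articles)
print("before:", summarise(before))
print("after:", summarise(after))
```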
In 2015, a study revealed the bias of the Google Books data set, which "suffers from a number of limitations which make it an obscure mask of cultural popularity", [5] calling into question the significance of many of the earlier results.
Culturomic approaches can also contribute towards conservation science through a better understanding of human-nature relationships, with the first such research published by McCallum and Bury in 2013. [19] This study revealed a precipitous decline in public interest in environmental issues. In 2016, a publication by Richard Ladle and colleagues [20] highlighted five key areas where culturomics can be used to advance the practice and science of conservation: recognizing conservation-oriented constituencies and demonstrating public interest in nature, identifying conservation emblems, providing new metrics and tools for near-real-time environmental monitoring and for supporting conservation decision making, assessing the cultural impact of conservation interventions, and framing conservation issues and promoting public understanding.
In 2017, a study correlated joint pain with Google search activity and temperature. [21] While the study observed higher search activity for hip and knee pain (but not arthritis) at higher temperatures, it does not (and cannot) control for other relevant factors such as physical activity. Mass media misinterpreted this as "myth busted: rain does not increase joint pain", [22] [23] while the authors speculate that the observed correlation is due to "changes in physical activity levels". [24]
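At its core the reported association is a correlation between a temperature series and a search-volume series. The sketch below computes a Spearman rank correlation on invented monthly figures using the third-party SciPy library; as noted above, such a correlation cannot by itself control for confounders like physical activity.

```python
# Sketch: rank correlation between monthly temperature and search volume (toy data).
from scipy.stats import spearmanr

temperature_c = [2, 4, 9, 13, 17, 21, 24, 23, 19, 13, 7, 3]       # invented monthly means
search_volume = [40, 42, 50, 55, 62, 70, 75, 73, 66, 54, 46, 41]  # invented, e.g. "knee pain"

rho, p_value = spearmanr(temperature_c, search_volume)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```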
Linguists and lexicographers have expressed skepticism regarding the methods and results of some of these studies, including one by Petersen et al. [25] Others have demonstrated bias in the Ngram data set; their results "call into question the vast majority of existing claims drawn from the Google Books corpus": [5] "Instead of speaking about general linguistic or cultural change, it seems to be preferable to explicitly restrict the results to linguistic or cultural change ‘as it is represented in the Google Ngram data’" [6] because it is unclear what caused the observed change in the sample. Ficetola critiqued the use of Google Trends in such analyses, suggesting that interest was actually increasing. [26] In their rebuttal, McCallum and Bury [27] argued that for public policy purposes proportional data are what matter and absolute numbers are largely irrelevant, because policy is driven by the opinion of the largest portion of the population, not by the absolute number of people holding a view.
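The proportional-versus-absolute point in the rebuttal can be illustrated with two invented figures: absolute search counts for a topic can grow while its share of all searches, which is the quantity Google Trends reports, falls.

```python
# Sketch: absolute interest rises while the proportional share falls (invented figures).
years = [2004, 2014]
environment_searches = [1_000_000, 1_500_000]   # absolute interest grows
all_searches = [50_000_000, 150_000_000]        # but total search activity grows faster

for year, env, total in zip(years, environment_searches, all_searches):
    print(year, f"absolute={env:,}", f"share={env / total:.2%}")
# 2004: share 2.00%; 2014: share 1.00% -- a relative decline despite absolute growth
```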
Text mining, text data mining (TDM) or text analytics is the process of deriving high-quality information from text. It involves "the discovery by computer of new, previously unknown information, by automatically extracting information from different written resources." Written resources may include websites, books, emails, reviews, and articles. High-quality information is typically obtained by devising patterns and trends through means such as statistical pattern learning. According to Hotho et al. (2005), there are three perspectives on text mining: information extraction, data mining, and knowledge discovery in databases (KDD). Text mining usually involves structuring the input text, deriving patterns within the structured data, and finally evaluating and interpreting the output. 'High quality' in text mining usually refers to some combination of relevance, novelty, and interest. Typical text mining tasks include text categorization, text clustering, concept/entity extraction, production of granular taxonomies, sentiment analysis, document summarization, and entity relation modeling.
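As a sketch of one typical task listed above, text categorisation, the following uses a TF-IDF representation and a Naive Bayes classifier from the third-party scikit-learn package; the training texts and labels are toy examples.

```python
# Sketch: minimal text categorisation with TF-IDF features and Naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "the striker scored twice in the final",
    "the midfielder was transferred for a record fee",
    "the central bank raised interest rates again",
    "markets fell after the inflation report",
]
train_labels = ["sport", "sport", "finance", "finance"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)
print(model.predict(["rates and inflation worry investors"]))  # -> ['finance']
```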
Social network analysis (SNA) is the process of investigating social structures through the use of networks and graph theory. It characterizes networked structures in terms of nodes and the ties, edges, or links that connect them. Examples of social structures commonly visualized through social network analysis include social media networks, meme proliferation, information circulation, friendship and acquaintance networks, business networks, knowledge networks, difficult working relationships, collaboration graphs, kinship, disease transmission, and sexual relationships. These networks are often visualized through sociograms in which nodes are represented as points and ties are represented as lines. These visualizations provide a means of qualitatively assessing networks by varying the visual representation of their nodes and edges to reflect attributes of interest.
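A sociogram of the kind described can be sketched with the third-party networkx and matplotlib packages: nodes carry attributes, ties are edges, and the drawing varies node colour by attribute. The friendship data below are invented.

```python
# Sketch: a small sociogram with node attributes reflected in node colour.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_nodes_from([("Ann", {"group": "A"}), ("Bea", {"group": "A"}),
                  ("Cem", {"group": "B"}), ("Dia", {"group": "B"})])
G.add_edges_from([("Ann", "Bea"), ("Bea", "Cem"), ("Cem", "Dia"), ("Dia", "Ann")])

colors = ["tab:blue" if G.nodes[n]["group"] == "A" else "tab:orange" for n in G]
nx.draw(G, pos=nx.spring_layout(G, seed=42), with_labels=True, node_color=colors)
plt.savefig("sociogram.png")
```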
The branches of science known informally as omics are various disciplines in biology whose names end in the suffix -omics, such as genomics, proteomics, metabolomics, metagenomics, phenomics and transcriptomics. Omics aims at the collective characterization and quantification of pools of biological molecules that translate into the structure, function, and dynamics of an organism or organisms.
Peter Norvig is an American computer scientist and Distinguished Education Fellow at the Stanford Institute for Human-Centered AI. He previously served as a director of research and search quality at Google. Norvig is the co-author with Stuart J. Russell of the most popular textbook in the field of AI: Artificial Intelligence: A Modern Approach used in more than 1,500 universities in 135 countries.
In mathematics, computer science and network science, network theory is a part of graph theory. It defines networks as graphs whose vertices or edges possess attributes. Network theory analyses these networks in terms of the symmetric or asymmetric relations between their (discrete) components.
Computational sociology is a branch of sociology that uses computationally intensive methods to analyze and model social phenomena. Using computer simulations, artificial intelligence, complex statistical methods, and analytic approaches like social network analysis, computational sociology develops and tests theories of complex social processes through bottom-up modeling of social interactions.
Dale Hollis Hoiberg is a sinologist and was the editor-in-chief of the Encyclopædia Britannica from 1997 to 2015. He holds a PhD degree in Chinese literature and began to work for Encyclopædia Britannica as an index editor in 1978. In 2010, Hoiberg co-authored a paper with Harvard researchers Jean-Baptiste Michel and Erez Lieberman Aiden entitled "Quantitative Analysis of Culture Using Millions of Digitized Books". The paper was the first to describe the term culturomics.
Digital broadcasting is the practice of using digital signals rather than analogue signals for broadcasting over radio frequency bands. Digital television broadcasting is widespread. Digital audio broadcasting is being adopted more slowly for radio broadcasting where it is mainly used in Satellite radio.
Digital humanities (DH) is an area of scholarly activity at the intersection of computing or digital technologies and the disciplines of the humanities. It includes the systematic use of digital resources in the humanities, as well as the analysis of their application. DH can be defined as new ways of doing scholarship that involve collaborative, transdisciplinary, and computationally engaged research, teaching, and publishing. It brings digital tools and methods to the study of the humanities with the recognition that the printed word is no longer the main medium for knowledge production and distribution.
Google Trends is a website by Google that analyzes the popularity of top search queries in Google Search across various regions and languages. The website uses graphs to compare the search volume of different queries over a certain period of time.
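Google Trends has no official public API; the sketch below uses the unofficial third-party pytrends library, whose interface may change, to retrieve interest-over-time data for two queries. The library and the exact call pattern are assumptions about one common way to do this programmatically.

```python
# Sketch: querying Google Trends via the unofficial pytrends library.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
pytrends.build_payload(["culturomics", "text mining"],
                       timeframe="2010-01-01 2020-12-31")
interest = pytrends.interest_over_time()   # pandas DataFrame indexed by date
print(interest.head())
```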
In genomics, a genome-wide association study (GWAS) is an observational study of a genome-wide set of genetic variants in different individuals to see if any variant is associated with a trait. GWA studies typically focus on associations between single-nucleotide polymorphisms (SNPs) and traits like major human diseases, but can equally be applied to any other genetic variants and any other organisms.
Cliodynamics is a transdisciplinary area of research that integrates cultural evolution, economic history/cliometrics, macrosociology, the mathematical modeling of historical processes during the longue durée, and the construction and analysis of historical databases.
Erez Lieberman Aiden is an American research scientist active in multiple fields related to applied mathematics. He is a professor of molecular and human genetics and Emeritus McNair Scholar at the Baylor College of Medicine, and formerly a fellow at the Harvard Society of Fellows and visiting faculty member at Google. He is an adjunct professor of computer science at Rice University. Using mathematical and computational approaches, he has studied evolution in a range of contexts, including that of networks through evolutionary graph theory and languages in the field of culturomics. He has published scientific articles in a variety of disciplines.
Infoveillance is a type of syndromic surveillance that specifically utilizes information found online. The term, along with the term infodemiology, was coined by Gunther Eysenbach to describe research that uses online information to study human behavior.
The Google Books Ngram Viewer is an online search engine that charts the frequencies of any set of search strings using a yearly count of n-grams found in printed sources published between 1500 and 2022 in Google's text corpora in English, Chinese (simplified), French, German, Hebrew, Italian, Russian, or Spanish. There are also some specialized English corpora, such as American English, British English, and English Fiction.
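Besides the web interface, Google also publishes the underlying n-gram counts as downloadable export files whose lines are tab-separated as ngram, year, match_count, volume_count. The sketch below parses such a file for one n-gram; normalising by per-year totals, supplied here as a plain dictionary, is an assumption about how the viewer's relative frequencies might be reproduced.

```python
# Sketch: read yearly counts for one n-gram from a tab-separated export file.
import csv

def yearly_counts(path, target):
    """Return {year: match_count} for the given n-gram string."""
    counts = {}
    with open(path, encoding="utf-8") as f:
        for ngram, year, match_count, _volume_count in csv.reader(f, delimiter="\t"):
            if ngram == target:
                counts[int(year)] = int(match_count)
    return counts

def relative_frequency(counts, totals_per_year):
    """Normalise raw counts by per-year totals (assumed to be supplied by the user)."""
    return {y: c / totals_per_year[y] for y, c in counts.items() if y in totals_per_year}
```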
Computational social science is an interdisciplinary academic sub-field concerned with computational approaches to the social sciences. This means that computers are used to model, simulate, and analyze social phenomena. It has been applied in areas such as computational economics, computational sociology, computational media analysis, cliodynamics, culturomics, and nonprofit studies. It focuses on investigating social and behavioral relationships and interactions using data science approaches, network analysis, social simulation and studies using interactive systems.
Martin Hilbert is a social scientist who is a professor at the University of California, where he chairs the campus-wide emphasis on Computational Social Science. He studies societal digitalization. His work is recognized in academia for the first study that assessed how much information there is in the world; in public policy for having designed the first digital action plan with the governments of Latin America and the Caribbean at the United Nations; and in the popular media for having warned about the intervention of Cambridge Analytica a year before the scandal broke.