Filippo Menczer | |
---|---|
Born | 16 May 1965 |
Alma mater | Sapienza University of Rome; University of California, San Diego |
Scientific career | |
Fields | Cognitive science; Computer science; Physics |
Institutions | Indiana University Bloomington |
Website | cnets |
Filippo Menczer is an American and Italian academic. He is a University Distinguished Professor and the Luddy Professor of Informatics and Computer Science at the Luddy School of Informatics, Computing, and Engineering, Indiana University. Menczer is the Director of the Observatory on Social Media, [1] a research center where data scientists and journalists study the role of media and technology in society and build tools to analyze and counter disinformation and manipulation on social media. Menczer holds courtesy appointments in Cognitive Science and Physics, is a founding member and advisory council member of the IU Network Science Institute, [2] a former director of the Center for Complex Networks and Systems Research, [3] a senior research fellow of the Kinsey Institute, a fellow of the Center for Computer-Mediated Communication, [4] and a former fellow of the Institute for Scientific Interchange in Turin, Italy. In 2020 he was named a Fellow of the ACM.
Menczer holds a Laurea in physics from the Sapienza University of Rome and a PhD in computer science and cognitive science from the University of California, San Diego. He was previously an assistant professor of management sciences at the University of Iowa and a fellow-at-large of the Santa Fe Institute. At Indiana University Bloomington since 2003, he served as division chair in the Luddy School in 2009–2011. Menczer has been the recipient of Fulbright, Rotary Foundation, and NATO fellowships, and a CAREER Award from the National Science Foundation. He holds editorial positions for the journals Network Science, [5] EPJ Data Science, [6] PeerJ Computer Science, [7] and HKS Misinformation Review. [8] He has served as program or track chair for various conferences including The Web Conference and the ACM Conference on Hypertext and Social Media. He was general chair of the ACM Web Science 2014 Conference [9] and general co-chair of the NetSci 2017 Conference.
Menczer's research focuses on Web science, social networks, social media, social computation, Web mining, data science, distributed and intelligent Web applications, and modeling of complex information networks. He introduced the idea of topical and adaptive Web crawlers, a specialized and intelligent type of Web crawler. [10] [11]
Menczer is also known for his work on social phishing, [12] [13] a type of phishing attack that leverages friendship information from social networks, yielding a success rate of over 70% in experiments (with Markus Jakobsson); semantic similarity measures for information and social networks; [14] [15] [16] [17] models of complex information and social networks (with Alessandro Vespignani and others); [18] [19] [20] [21] search engine censorship; [22] [23] and search engine bias. [24] [25]
The group led by Menczer has analyzed and modeled how memes, information, and misinformation spread through social media in domains such as the Occupy movement, [26] [27] the Gezi Park protests, [28] and political elections. [29] Data and tools from Menczer's lab have aided in finding the roots of the Pizzagate conspiracy theory [30] and the disinformation campaign targeting the White Helmets, [31] and in taking down voter-suppression bots on Twitter. [32] Menczer and coauthors have also found a link between online COVID-19 misinformation and vaccination hesitancy. [33]
Analysis by Menczer's team demonstrated the echo-chamber structure of information-diffusion networks on Twitter during the 2010 United States elections. [34] The team found that conservatives almost exclusively retweeted other conservatives while liberals retweeted other liberals. Ten years later, this work received the Test of Time Award at the 15th International AAAI Conference on Web and Social Media (ICWSM). [35] As these patterns of polarization and segregation persist, [36] Menczer's team has developed a model that shows how social influence and unfollowing accelerate the emergence of online echo chambers. [37]
Menczer and colleagues have advanced the understanding of information virality, and in particular the prediction of which memes will go viral based on the structure of early diffusion networks [38] [39] and how competition for finite attention helps explain virality patterns. [40] [41] In a 2018 paper in Nature Human Behaviour, Menczer and coauthors used a model to show that when agents in a social network share information under conditions of high information load and/or low attention, the correlation between quality and popularity of information in the system decreases. [42] An erroneous analysis in the paper suggested that this effect alone would be sufficient to explain why fake news is as likely to go viral as legitimate news on Facebook. When the authors discovered the error, they retracted the paper. [43]
Following influential publications on the detection of astroturfing [44] [45] [46] [47] [48] and social bots, [49] [50] Menczer and his team have studied the complex interplay between cognitive, social, and algorithmic factors that contribute to the vulnerability of social media platforms and people to manipulation, [51] [52] [53] [54] and focused on developing tools to counter such abuse. [55] [56] Their bot detection tool, Botometer, was used to assess the prevalence of social bots [57] [58] and their sharing activity. [59] Their tool to visualize the spread of low-credibility content, Hoaxy, [60] [61] [62] [63] was used in conjunction with Botometer to reveal the key role played by social bots in spreading low-credibility content during the 2016 United States presidential election. [64] [65] [66] [67] [68] Menczer's team also studied perceptions of partisan political bots, finding that Republican users are more likely to confuse conservative bots with humans, whereas Democratic users are more likely to confuse conservative human users with bots. [69] Using bot probes on Twitter, Menczer and coauthors demonstrated a conservative political bias on the platform. [70]
As social media platforms have increased their countermeasures against malicious automated accounts, Menczer and coauthors have shown that coordinated campaigns by inauthentic accounts continue to threaten information integrity on social media, and developed a framework to detect these coordinated networks. [71] They also demonstrated new forms of social media manipulation by which bad actors can grow influence networks [72] and hide the high volumes of content with which they flood the network. [73]
Menczer and colleagues have shown that political audience diversity can be used as an indicator of news source reliability in algorithmic ranking. [74]
The textbook A First Course in Network Science by Menczer, Fortunato, and Davis was published by Cambridge University Press in 2020. [75] The textbook has been translated into Japanese, Chinese, and Korean.