Volume-Control Model

The Volume-Control Model [1] is an analytical framework that describes the conditions under which information is translated into power. According to the model, this transition requires controlling and regulating the connections between a large volume of information and a large volume of people, which is achieved by maintaining a balance between popular and personal information.

While popular information is relevant to a large audience, personal information is relevant to specific people. In practice, the balance between the two is often achieved through network customization: tailoring information to specific groups based on common traits.

Basic principles

[Figure: the Volume-Control Model]

The Volume-Control Model is part of the broader idea of the power-knowledge nexus. Lash [2] referred to the volume of information as an additive power, related not only to the amount of information people are exposed to but also to the number of links they receive from others.

Volume is therefore associated with both the amount of information and the number of people who produce and receive it.

In this model, control refers to the ability to effectively connect the volume of information with the volume of people. One mechanism of control, popularization, focuses on the most popular information and offers it to a large number of people.

Popularization is a common strategy of global corporations such as Google (whose PageRank algorithm prioritizes websites with many incoming links) and Netflix (whose algorithms surface the most-viewed series and films), enabling them to exert greater control over their users. [3] [4]
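
As an illustration, the sketch below ranks pages by incoming links in the spirit of PageRank. The link graph is hypothetical and the implementation is a simplified teaching version, not Google's actual algorithm.

```python
# Minimal sketch of popularization as popularity-based ranking.
# The link graph below is illustrative, not real data.

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively score pages so that those with many incoming
    links from well-linked pages rank highest."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] = new_rank.get(target, 0.0) + share
        rank = new_rank
    return rank

# Hypothetical link graph: each key links to the pages in its list.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
scores = pagerank(links)
print(sorted(scores, key=scores.get, reverse=True))  # most "popular" pages first
```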

Another mechanism of control is information personalization. This is often achieved by tailoring information to the specific needs of each unique user, or group of users, based on their demographic profile and tastes, [5] their search history and website visits, [6] and the information they produce, including web activity and mouse movements. [7]
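
A minimal sketch of this kind of personalization, assuming a hypothetical catalogue of items tagged with topics and a user history from which an interest profile is inferred:

```python
# Minimal sketch of personalization: rank items by how well their
# topics match a profile inferred from a user's viewing history.
# The catalogue, topics, and history are hypothetical.
from collections import Counter

catalogue = {
    "doc1": {"sports", "news"},
    "doc2": {"cooking", "video"},
    "doc3": {"sports", "video"},
}

def build_profile(history):
    """Count how often each topic appears in the items a user consumed."""
    profile = Counter()
    for item in history:
        profile.update(catalogue[item])
    return profile

def personalize(history):
    """Order the catalogue by overlap with the user's topic profile."""
    profile = build_profile(history)
    score = lambda item: sum(profile[t] for t in catalogue[item])
    return sorted(catalogue, key=score, reverse=True)

print(personalize(["doc1", "doc3"]))  # sports-heavy items come first
```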

Applications

According to Scott Galloway, [8] the Big Four tech companies (Google, Meta, Amazon and Apple) have translated information into economic power by securing exclusive access to a great volume of information and people. Their strategy has been to offer both popular and customized information to a growing number of users.

The model has been used to explain the bias of Google Images search, in which the vast majority of results for the query "beauty" depict young white women. [1] Although the specific search query "beauty" enables personalization of the images returned, the results are ultimately homogeneous and similar to one another.

Taken from the websites of beauty-industry companies and fashion magazines, they represent the mainstream perception of beauty as a product. The trade-off between popularization and personalization techniques in the practice of large corporations such as Netflix or Meta (with its Instagram platform) can similarly explain the seemingly different but largely homogeneous content they produce.

Another study [9] applying the Volume-Control Model examined user engagement on Twitter. It measured personalization strategies through the use of singular pronouns such as "I", "you", "he" and "she", and popularization strategies through the use of plural pronouns such as "we" and "they". Retweets were found to be more likely to use popularization strategies, as users address larger audiences with the plural pronoun "we". Replies, on the other hand, were more likely to use personalization strategies, as users address individuals with singular pronouns.
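
The pronoun-based measure can be illustrated with a short sketch; the word lists and example messages below are illustrative and do not reproduce the study's exact coding scheme.

```python
# Minimal sketch of the pronoun-based measure: count singular vs. plural
# pronouns in a message and label the dominant strategy.
import re

SINGULAR = {"i", "you", "he", "she"}   # personalization markers
PLURAL = {"we", "they"}                # popularization markers

def strategy(text):
    words = re.findall(r"[a-z']+", text.lower())
    singular = sum(w in SINGULAR for w in words)
    plural = sum(w in PLURAL for w in words)
    if singular > plural:
        return "personalization"
    if plural > singular:
        return "popularization"
    return "mixed"

print(strategy("We are all in this together"))   # popularization
print(strategy("I think you should see this"))   # personalization
```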

Related Research Articles

A semantic network, or frame network, is a knowledge base that represents semantic relations between concepts in a network. This is often used as a form of knowledge representation. It is a directed or undirected graph consisting of vertices, which represent concepts, and edges, which represent semantic relations between concepts, mapping or connecting semantic fields. A semantic network may be instantiated as, for example, a graph database or a concept map. Typical standardized semantic networks are expressed as semantic triples.
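
As an illustration, a semantic network can be held as a list of subject-predicate-object triples; the concepts and relations below are made up for the example.

```python
# Minimal sketch of a semantic network stored as subject-predicate-object
# triples; the concepts and relations are illustrative.
triples = [
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "has_part", "wings"),
]

def neighbours(concept):
    """Return the edges that connect a concept to related concepts."""
    return [(p, o) for s, p, o in triples if s == concept] + \
           [(p, s) for s, p, o in triples if o == concept]

print(neighbours("bird"))
# [('is_a', 'animal'), ('has_part', 'wings'), ('is_a', 'canary')]
```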

Collaborative filtering (CF) is a technique used by recommender systems. Collaborative filtering has two senses, a narrow one and a more general one.

A recommender system, or a recommendation system, is a subclass of information filtering system that provides suggestions for items that are most pertinent to a particular user. Recommender systems are particularly useful when an individual needs to choose an item from a potentially overwhelming number of items that a service may offer.
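
A minimal sketch of user-based collaborative filtering, using a toy rating matrix and a deliberately simple similarity measure rather than any particular production algorithm:

```python
# Minimal sketch of user-based collaborative filtering: recommend items
# liked by users whose ratings most resemble the target user's.
# The rating matrix is a small illustrative example.
ratings = {
    "alice": {"film_a": 5, "film_b": 3, "film_c": 4},
    "bob":   {"film_a": 5, "film_b": 3, "film_d": 5},
    "carol": {"film_a": 1, "film_c": 5, "film_d": 2},
}

def similarity(u, v):
    """Simple co-rating similarity: agreement on items both users rated."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    return sum(1.0 / (1 + abs(ratings[u][i] - ratings[v][i])) for i in shared) / len(shared)

def recommend(user):
    """Score unseen items by the ratings of similar users."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, rating in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # ['film_d'], scored via the two similar users
```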

Personalization consists of tailoring a service or product to accommodate specific individuals, sometimes tied to groups or segments of individuals. Personalization requires collecting data on individuals, including web browsing history, web cookies, and location. Companies and organizations use personalization to improve customer satisfaction, digital sales conversion, marketing results, branding, and website metrics, as well as for advertising. Personalization is a key element in social media and recommender systems. Personalization affects every sector of society—work, leisure, and citizenship.

In critical theory, power-knowledge is a term introduced by the French philosopher Michel Foucault. According to Foucault's understanding, power is based on knowledge and makes use of knowledge; on the other hand, power reproduces knowledge by shaping it in accordance with its anonymous intentions. Power creates and recreates its own fields of exercise through knowledge.

In science, randomized experiments are experiments that allow the greatest reliability and validity of statistical estimates of treatment effects. Randomization-based inference is especially important in experimental design and in survey sampling.

Cold start is a potential problem in computer-based information systems which involves a degree of automated data modelling. Specifically, it concerns the issue that the system cannot draw any inferences for users or items about which it has not yet gathered sufficient information.

Social information processing is "an activity through which collective human actions organize knowledge." It is the creation and processing of information by a group of people. As an academic field Social Information Processing studies the information processing power of networked social systems.

Wikipedia has been studied extensively. Between 2001 and 2010, researchers published at least 1,746 peer-reviewed articles about the online encyclopedia. Such studies are greatly facilitated by the fact that Wikipedia's database can be downloaded without help from the site owner.

Folksonomy is a classification system in which end users apply public tags to online items, typically to make those items easier for themselves or others to find later. Over time, this can give rise to a classification system based on those tags and how often they are applied or searched for, in contrast to a taxonomic classification designed by the owners of the content and specified when it is published. This practice is also known as collaborative tagging, social classification, social indexing, and social tagging. Folksonomy was originally "the result of personal free tagging of information [...] for one's own retrieval", but online sharing and interaction expanded it into collaborative forms. Social tagging is the application of tags in an open online environment where the tags of other users are available to others. Collaborative tagging is tagging performed by a group of users. This type of folksonomy is commonly used in cooperative and collaborative projects such as research, content repositories, and social bookmarking.

Personalized search is a web search tailored specifically to an individual's interests by incorporating information about the individual beyond the specific query provided. There are two general approaches to personalizing search results, involving modifying the user's query and re-ranking search results.
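
A minimal sketch of the re-ranking approach, assuming a hypothetical result list with topic labels and a user interest profile; the blending weight is arbitrary.

```python
# Minimal sketch of personalized re-ranking: blend a baseline relevance
# score with how well each result matches the user's interest profile.
# Scores, topics, and the profile are hypothetical.
results = [
    {"url": "example.org/jaguar-car",    "relevance": 0.9, "topics": {"cars"}},
    {"url": "example.org/jaguar-animal", "relevance": 0.8, "topics": {"wildlife"}},
]
user_profile = {"wildlife": 0.9, "cars": 0.1}  # inferred from past behaviour

def rerank(results, profile, weight=0.5):
    """Combine query relevance with personal interest in the result's topics."""
    def score(r):
        interest = max((profile.get(t, 0.0) for t in r["topics"]), default=0.0)
        return (1 - weight) * r["relevance"] + weight * interest
    return sorted(results, key=score, reverse=True)

for r in rerank(results, user_profile):
    print(r["url"])  # the wildlife page rises to the top for this user
```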

A filter bubble or ideological frame is a state of intellectual isolation that can result from personalized searches, recommendation systems, and algorithmic curation. The search results are based on information about the user, such as their location, past click-behavior, and search history. Consequently, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles, resulting in a limited and customized view of the world. The choices made by these algorithms are not always transparent. Prime examples include Google Personalized Search results and Facebook's personalized news-stream.

Social advertising is advertising that relies on social information or networks in generating, targeting, and delivering marketing communications. Many current examples of social advertising use a particular Internet service to collect social information, establish and maintain relationships with consumers, and for delivering communications. For example, the advertising platforms provided by Google, Twitter, and Facebook involve targeting and presenting ads based on relationships articulated on those same services. Social advertising can be part of a broader social media marketing strategy designed to connect with consumers.

Contextual search is a form of optimizing web-based search results based on context provided by the user and the computer being used to enter the query. Contextual search services differ from current search engines based on traditional information retrieval that return lists of documents based on their relevance to the query. Rather, contextual search attempts to increase the precision of results based on how valuable they are to individual users.

Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.

Matrix factorization is a class of collaborative filtering algorithms used in recommender systems. Matrix factorization algorithms work by decomposing the user-item interaction matrix into the product of two lower dimensionality rectangular matrices. This family of methods became widely known during the Netflix prize challenge due to its effectiveness as reported by Simon Funk in his 2006 blog post, where he shared his findings with the research community. The prediction results can be improved by assigning different regularization weights to the latent factors based on items' popularity and users' activeness.
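
A minimal sketch of this decomposition trained with stochastic gradient descent, in the spirit of the Funk approach; the ratings, factor count, and hyperparameters are illustrative.

```python
# Minimal sketch of matrix factorization with stochastic gradient descent.
# The rating data and hyperparameters are illustrative.
import random

ratings = [  # (user_index, item_index, rating)
    (0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0), (2, 2, 5.0),
]
n_users, n_items, k = 3, 3, 2
random.seed(0)
P = [[random.random() * 0.1 for _ in range(k)] for _ in range(n_users)]  # user factors
Q = [[random.random() * 0.1 for _ in range(k)] for _ in range(n_items)]  # item factors

lr, reg = 0.01, 0.02
for epoch in range(200):
    for u, i, r in ratings:
        pred = sum(P[u][f] * Q[i][f] for f in range(k))
        err = r - pred
        for f in range(k):
            pu, qi = P[u][f], Q[i][f]
            P[u][f] += lr * (err * qi - reg * pu)   # gradient step on user factor
            Q[i][f] += lr * (err * pu - reg * qi)   # gradient step on item factor

# Predict an unobserved rating, e.g. user 0 on item 2.
print(sum(P[0][f] * Q[2][f] for f in range(k)))
```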

Responsive computer-aided design is an approach to computer-aided design (CAD) that utilizes real-world sensors and data to modify a three-dimensional (3D) computer model. The concept is related to cyber-physical systems through blurring of the virtual and physical worlds; however, it applies specifically to the initial digital design of an object prior to production.

Shumin Zhai is a Chinese-born American Canadian human–computer interaction (HCI) research scientist and inventor. He is known for his research on input devices and interaction methods, swipe-gesture-based touchscreen keyboards, eye-tracking interfaces, and models of human performance in human–computer interaction. His studies have contributed to both foundational models and understandings of HCI and to practical user interface designs and flagship products. He previously worked at IBM, where he invented the ShapeWriter text entry method for smartphones, a predecessor to the modern Swype keyboard. His publications have won the ACM UIST Lasting Impact Award and the IEEE Computer Society Best Paper Award, among others. Zhai is currently a Principal Scientist at Google, where he leads and directs research, design, and development of human-device input methods and haptics systems.

Click tracking is the collection of user click behavior or user navigational behavior in order to derive insights and fingerprint users. Click behavior is commonly tracked using server logs, which encompass click paths and clicked URLs. This log is often presented in a standard format including information like the hostname, date, and username. However, as technology develops, new software allows for in-depth analysis of user click behavior using hypervideo tools. Given that the internet can be considered a risky environment, research strives to understand why users click certain links and not others. Research has also explored the user experience of privacy, including anonymizing users' personal identification information and improving how data-collection consent forms are written and structured.
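
A minimal sketch of extracting per-user click paths from server-log lines, assuming the Common Log Format; the log lines are fabricated examples, not real data.

```python
# Minimal sketch of deriving click paths from server-log lines in the
# Common Log Format (host, user, timestamp, request).
import re
from collections import defaultdict

LOG_LINE = re.compile(r'(?P<host>\S+) \S+ (?P<user>\S+) \[(?P<date>[^\]]+)\] "GET (?P<url>\S+)')

logs = [
    '203.0.113.7 - alice [10/Oct/2023:13:55:36 +0000] "GET /home HTTP/1.1" 200 512',
    '203.0.113.7 - alice [10/Oct/2023:13:56:02 +0000] "GET /products HTTP/1.1" 200 743',
]

click_paths = defaultdict(list)
for line in logs:
    m = LOG_LINE.match(line)
    if m:
        click_paths[m.group("user")].append(m.group("url"))

print(dict(click_paths))  # {'alice': ['/home', '/products']}
```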

In the design of experiments, a sample ratio mismatch (SRM) is a statistically significant difference between the expected and actual ratios of the sizes of treatment and control groups in an experiment. Sample ratio mismatches, also known as unbalanced sampling, often occur in online controlled experiments due to failures in randomization and instrumentation.
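
A minimal sketch of an SRM check as a chi-square goodness-of-fit test against a planned 50/50 split; the counts are illustrative and 3.841 is the 0.05 critical value for one degree of freedom.

```python
# Minimal sketch of a sample ratio mismatch check: compare observed group
# sizes to a planned split with a chi-square goodness-of-fit statistic.
def srm_check(control, treatment, expected_ratio=0.5):
    """Return the chi-square statistic and whether it exceeds the
    0.05 critical value for one degree of freedom (3.841)."""
    total = control + treatment
    expected_control = total * expected_ratio
    expected_treatment = total * (1 - expected_ratio)
    chi2 = ((control - expected_control) ** 2 / expected_control
            + (treatment - expected_treatment) ** 2 / expected_treatment)
    return chi2, chi2 > 3.841

print(srm_check(10_000, 10_300))  # a gap this large flags a likely SRM
```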

References

  1. Segev, Elad (2019). "Volume and control: the transition from information to power". Journal of Multicultural Discourses. 14 (3): 240–257. doi:10.1080/17447143.2019.1662028. ISSN 1744-7143. S2CID 203088993.
  2. Lash, Scott (2002). Critique of Information. London: SAGE. ISBN 9781847876522. OCLC 654641948.
  3. Borghol, Youmna; Ardon, Sebastien; Carlsson, Niklas; Eager, Derek; Mahanti, Anirban (2012). "The untold story of the clones: Content-agnostic factors that impact YouTube video popularity". Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 1186–1194. arXiv:1311.6526. doi:10.1145/2339530.2339717. ISBN 9781450314626. S2CID 5666648.
  4. Kruitbosch, Gijs; Nack, Frank (2008). "Broadcast yourself on YouTube: really?". Proceedings of the 3rd ACM International Workshop on Human-Centered Computing. pp. 7–10. doi:10.1145/1462027.1462029. S2CID 16264402.
  5. Gilmore, James; Pine, Joseph (1997). "The four faces of mass customization". Harvard Business Review. 75 (1): 91–101. PMID 10174455.
  6. Segev, Elad (2010). Google and the Digital Divide: The Bias of Online Knowledge. Oxford, U.K.: Chandos Publishing. ISBN 9781843345657.
  7. Baeza-Yates, Ricardo (2018). "Bias on the web". Communications of the ACM. 61 (6): 54–61. doi:10.1145/3209581. S2CID 44111303.
  8. Galloway, Scott (2017). The Four: The Hidden DNA of Amazon, Apple, Facebook, and Google. Random House Large Print. ISBN 978-0525501220.
  9. Segev, Elad (2023). "Sharing Feelings and User Engagement on Twitter: It's All About Me and You". Social Media + Society. 9 (2). doi:10.1177/20563051231183430.