Contextual integrity is a theory of privacy developed by Helen Nissenbaum and presented in her book Privacy in Context: Technology, Policy, and the Integrity of Social Life. [1] It comprises four essential descriptive claims:

- Privacy is provided by appropriate flows of information.
- Appropriate information flows are those that conform with contextual informational norms.
- Contextual informational norms refer to five independent parameters: data subject, sender, recipient, information type, and transmission principle.
- Conceptions of privacy are based on ethical concerns that evolve over time.
Contextual integrity can be seen as a reaction to theories that define privacy as control over information about oneself, as secrecy, or as the regulation of personal information that is private or sensitive.
This places contextual integrity at odds with privacy regulation based on the Fair Information Practice Principles, and with the 1990s cypherpunk view that newly discovered cryptographic techniques would assure privacy in the digital age: on the contextual-integrity account, preserving privacy is not a matter of stopping all data collection, blocking or minimizing flows of information, or preventing information leakage. [2]
The fourth essential claim comprising contextual integrity gives privacy its ethical standing and allows for the evolution and alteration of informational norms, often in response to novel sociotechnical systems. It holds that practices and norms can be evaluated in terms of:

- the interests of the affected parties;
- general ethical and political values; and
- context-specific functions, purposes, and values.
The most distinctive of these considerations is the third. As such, contextual integrity highlights the importance of privacy not only for individuals, but for society and respective social domains.
The "contexts" of contextual integrity are social domains: intuitively, health, finance, the marketplace, family, civil and political life, and so on. The five critical parameters singled out to describe a data transfer operation are:

- the data subject,
- the sender of the data,
- the recipient of the data,
- the information type, and
- the transmission principle.
Examples of values for each of these five parameters, drawn from contextual informational norms common in Western societies, include:

- Data subjects: a patient, a shopper, an investor, a reader.
- Senders: a bank, the police, an advertising network, a friend.
- Recipients: a bank, the police, a friend.
- Information types: the contents of an email message, and the data subject's demographic, biographical, medical, or financial information.
- Transmission principles: consent, coercion, theft, purchase, sale, confidentiality, stewardship, acting under the authority of a court with a warrant, and national security.
A key thesis is that assessing the privacy impact of information flows requires the values of all five parameters to be specified. Nissenbaum has found that access control rules not specifying the five parameters are incomplete and can lead to problematic ambiguities. [3]
Nissenbaum notes that some kinds of language can lead one's analysis astray. When the passive voice is used to describe the movement of data, it lets the speaker gloss over the fact that an active agent performed the data transfer. For example, the sentence "Alice had her identity stolen" obscures that someone or something actually did the stealing. Likewise, saying "Carol was able to find Bob's bankruptcy records because they had been placed online" implicitly ignores that some person or organization collected those records from a court and placed them online.
Consider the norm: "US residents are required by law to file tax returns with the US Internal Revenue Service containing information such as name, address, SSN, and gross earnings, under conditions of strict confidentiality."
Given this norm, we can evaluate a hypothetical scenario and ask whether it violates contextual integrity: "The US Internal Revenue Service agrees to supply Alice's tax returns to the city newspaper as requested by a journalist at the paper." This hypothetical clearly violates contextual integrity, because providing the tax information to the newspaper breaches the transmission principle, strict confidentiality, under which the information was obtained.
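The check above can be sketched as a simple five-parameter match. The following is an illustrative encoding only (the `Flow` type, the `TAX_NORM` dictionary, and all string values are invented for this sketch, not part of Nissenbaum's formalism): a flow respects a norm only when every one of the five parameters takes an allowed value.

```python
from typing import NamedTuple

class Flow(NamedTuple):
    subject: str     # whom the information is about
    sender: str
    recipient: str
    info_type: str
    principle: str   # transmission principle

# The tax-return norm from the example, as allowed values per parameter.
TAX_NORM = {
    "subject": {"US resident"},
    "sender": {"US resident"},
    "recipient": {"IRS"},
    "info_type": {"tax return"},
    "principle": {"strict confidentiality"},
}

def respects_norm(flow: Flow, norm: dict) -> bool:
    """A flow respects the norm only when all five parameters match."""
    return all(getattr(flow, param) in allowed for param, allowed in norm.items())

filing = Flow("US resident", "US resident", "IRS",
              "tax return", "strict confidentiality")
leak = Flow("US resident", "IRS", "newspaper",
            "tax return", "journalist request")

assert respects_norm(filing, TAX_NORM)      # the required filing is appropriate
assert not respects_norm(leak, TAX_NORM)    # IRS-to-newspaper violates the norm
```

Note how the violation is detected on the recipient and transmission-principle parameters; an access rule that omitted either parameter could not distinguish the two flows, which is the ambiguity the five-parameter thesis warns about.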
As a conceptual framework, contextual integrity has been used to analyze and understand the privacy implications of socio-technical systems on a wide array of platforms (e.g. Web, smartphone, IoT systems), and has led to many tools, frameworks, and system designs that help study and address these privacy issues.
In her book Privacy in Context: Technology, Policy, and the Integrity of Social Life, Nissenbaum discussed privacy issues related to public data, using examples such as the privacy concerns around Google Street View and the problems caused by converting previously paper-based public records into digital form and putting them online. In recent years, similar issues arising in the context of social media have revived the discussion.
Shi et al. examined how people manage their interpersonal information boundaries with the help of the contextual integrity framework. They found that information access norms were related to who was expected to view the information. [4] Researchers have also applied contextual integrity to more controversial social events, such as the Facebook–Cambridge Analytica data scandal. [5]
The concept of contextual integrity has also influenced ethical norms for research using social media data. Fiesler et al. studied Twitter users' awareness and perception of research that analyzed Twitter data, reported results in a paper, or even quoted actual tweets. Users' concerns turned out to depend largely on contextual factors, such as who conducts the research and what the study is for, which is in line with contextual integrity theory. [6]
The privacy concerns induced by the collection, dissemination, and use of personal data via smartphones have received a large amount of attention from different stakeholders. A large body of computer science research aims to efficiently and accurately analyze how sensitive personal data (e.g. geolocation, user accounts) flows within an app and when it leaves the phone. [7]
Contextual integrity has been widely invoked to interpret the privacy implications of these observed data flows. For example, Wijesekera et al. argued that smartphone permission systems would be more effective if users were prompted only "when an application's access to sensitive data is likely to defy expectations", and examined the gap between how applications actually access personal data and what users expect. [8] Lin et al. demonstrated multiple problematic uses of personal data stemming from violations of user expectations; among these, using personal data for mobile advertising was the most problematic. Most users were unaware of this implicit data collection and found it unpleasantly surprising when researchers informed them of it. [9]
The idea of contextual integrity has also informed system design. Both iOS and Android use a permission system to manage developers' access to sensitive resources (e.g. geolocation, the contact list, user data) and to give users control over which app can access what data. In their official developer guidelines, [10] [11] both platforms recommend that developers limit the use of permission-protected data to situations where it is necessary, and that they provide a short description of why each permission is requested. Since Android 6.0, users are prompted at runtime, in the context of the app, which Android's documentation refers to as "increased situational context".
In 2006, Barth, Datta, Mitchell, and Nissenbaum presented a formal language that could be used to reason about the privacy rules in privacy law. They analyzed the privacy provisions of the Gramm-Leach-Bliley Act and showed how to translate some of its principles into the formal language. [12]
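Their actual formalism is a temporal logic evaluated over traces of communications; as a loose, much-simplified sketch (all role names, attributes, and rules below are invented for illustration, not taken from the paper or the Act), a Gramm-Leach-Bliley-style principle can be approximated as positive and negative norms checked against every message in a trace:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Msg:
    sender_role: str
    recipient_role: str
    subject: str       # the customer the information is about
    attribute: str     # e.g. "account-balance"

def permitted(msg: Msg, consents: set) -> bool:
    """Rough GLBA-flavoured rule: a financial institution may share a
    customer's information with an affiliate (positive norm), but sharing
    with a non-affiliated third party requires that customer's consent
    (negative norm)."""
    if msg.sender_role != "financial-institution":
        return False
    if msg.recipient_role == "affiliate":
        return True
    return (msg.subject, msg.attribute) in consents

def trace_complies(trace, consents):
    # A trace complies only if every message in it is permitted.
    return all(permitted(m, consents) for m in trace)

trace = [
    Msg("financial-institution", "affiliate", "Alice", "account-balance"),
    Msg("financial-institution", "marketer", "Alice", "account-balance"),
]
assert not trace_complies(trace, consents=set())
assert trace_complies(trace, consents={("Alice", "account-balance")})
```

The design choice of checking whole traces rather than single messages mirrors the temporal aspect of the original framework, in which whether a flow is permitted can depend on what has happened earlier (here, whether consent was previously granted).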