Datafly algorithm


The Datafly algorithm is an algorithm for providing anonymity in medical data. It was developed by Latanya Arvette Sweeney in 1997–98. [1] [2] Anonymization is achieved by automatically generalizing, substituting, inserting, and removing information as appropriate without losing many of the details found within the data. The method can be used on-the-fly in role-based security within an institution, and in batch mode for exporting data from an institution. Organizations release and receive medical data with all explicit identifiers, such as name, removed, in the erroneous belief that patient confidentiality is maintained because the resulting data look anonymous. However, the remaining data can be used to re-identify individuals by linking or matching the data to other databases or by looking at unique characteristics found in the fields and records of the database itself.
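For instance, records stripped of names can often be matched to an external source, such as a voter registration list, on the attributes the two datasets share. The following sketch only illustrates that linking step; all records, column names, and the quasi-identifier chosen here are invented for the example.

```python
# Invented "de-identified" medical records (names already removed) and an
# equally invented public voter list; both datasets are hypothetical.
medical = [
    {"zip": "02139", "birthdate": "1965-07-12", "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birthdate": "1971-03-02", "sex": "M", "diagnosis": "flu"},
]
voter_list = [
    {"name": "Alice Smith", "zip": "02139", "birthdate": "1965-07-12", "sex": "F"},
    {"name": "Bob Jones", "zip": "02139", "birthdate": "1971-03-02", "sex": "M"},
]

quasi_identifier = ("zip", "birthdate", "sex")

def key(record):
    return tuple(record[a] for a in quasi_identifier)

# Joining the two tables on the shared attributes re-attaches names to diagnoses.
names_by_qi = {key(v): v["name"] for v in voter_list}
for m in medical:
    print(names_by_qi.get(key(m)), "->", m["diagnosis"])
```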

Criticism

The Datafly algorithm has been criticized for trying to achieve anonymization by overgeneralization. The algorithm selects the attribute with the greatest number of distinct values as the one to generalize first. [3]
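As a minimal sketch of that heuristic (with invented records and attribute names), the choice reduces to counting distinct values per quasi-identifier attribute and picking the maximum; applying it repeatedly to the same attribute is what can drive the data toward overgeneralization.

```python
# Invented records; the quasi-identifier here is (zip, birth_year, sex).
rows = [
    {"zip": "02139", "birth_year": "1965", "sex": "F"},
    {"zip": "02141", "birth_year": "1965", "sex": "M"},
    {"zip": "02142", "birth_year": "1971", "sex": "F"},
]

quasi_identifier = ["zip", "birth_year", "sex"]
distinct = {a: len({r[a] for r in rows}) for a in quasi_identifier}
target = max(distinct, key=distinct.get)
print(distinct)  # {'zip': 3, 'birth_year': 2, 'sex': 2}
print(target)    # 'zip' is generalized first (e.g. 02139 -> 0213*)
```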

Core algorithm

An outline of the Datafly algorithm is presented below. [4]

Input : Private Table PT; quasi-identifier QI = ( A1, ..., An ), k-anonymity constraint k; domain generalization hierarchies DGHAi, where i = 1,...,n with accompanying functions fAi, and loss, which is a limit on the percentage of tuples that can be suppressed. PT[id] is the set of unique identifiers or keys for each tuple.

Output : MGT a generalization of PT[QI] that enforces k-anonymity

Assumes: | PT | ≥ k, and loss * | PT | = k
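The domain generalization hierarchies DGHAi and functions fAi specify, for each attribute, how a value is replaced by a progressively less specific one. As a hedged illustration (the attribute and its levels are invented here), such a hierarchy can be represented as an ordered list of one-level generalization functions:

```python
# Hypothetical domain generalization hierarchy for a ZIP code attribute.
# Each function maps a value one level up the hierarchy (the fAi in the text).
ZIP_DGH = [
    lambda z: z[:4] + "*",    # 02139 -> 0213*
    lambda z: z[:3] + "**",   # 0213* -> 021**
    lambda z: "*****",        # 021** -> *****  (fully generalized)
]

value = "02139"
for f in ZIP_DGH:
    value = f(value)
    print(value)
```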

algorithm Datafly:

// Construct a frequency list containing unique sequences of values across the quasi-identifier in PT,

// along with the number of occurrences of each sequence.

1. let freq be an expandable and collapsible vector with no elements initially. Each element is of the form ( QI, frequency, SID ), where SID = { idi : ∃ t[id] ∈ PT[id] ⇒ t[id] = idi }; and, frequency = |SID|. Therefore, freq is also accessible as a table over (QI, frequency, SID).
2. let pos ← 0, total ← 0
3. while total ≠ |PT| do
3.1 freq[pos] ← ( t[QI], occurs, SID ) where t[QI] ∈ PT[QI], ( t[QI], __, __ ) ∉ freq; occurs = |PT| − |PT[QI] − {t[QI]}|; and, SID = { idi : ∃ t[id] ∈ PT[id] ⇒ t[id] = idi }
3.2 pos ← pos + 1, total ← total + occurs
// Make a solution by generalizing the attribute with the most number of distinct values
// and suppressing no more than the allowed number of tuples.
4. let belowk ← 0
5. for pos ← 1 to |freq| do
5.1 ( __, count ) ← freq[pos]
5.2 if count < k then do
5.2.1 belowk ← belowk + count
6. if belowk > k then do: // Note. loss * |PT| = k.
6.1 freq ← generalize(freq)
6.2 go to step 4
7. else do
// assert: the number of tuples to suppress in freq is ≤ loss * |PT|
7.1 freq ← suppress(freq, belowk)
7.2 MGT ← reconstruct(freq)
8. return MGT.
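The following Python sketch mirrors the loop above under simplifying assumptions: attributes are string-valued, each hierarchy is given as an ordered list of one-level generalization functions (as in the ZIP illustration earlier), and the suppression limit loss * |PT| is taken to equal k, as in the Assumes line. It is an illustration, not Sweeney's implementation.

```python
from collections import Counter

def datafly(table, qi, hierarchies, k):
    """Hedged sketch of the Datafly loop. table is a list of dicts, qi a list
    of quasi-identifier attribute names, hierarchies maps each attribute to an
    ordered list of one-level generalization functions, and k is the
    k-anonymity constraint. loss * |PT| is assumed to equal k."""
    table = [dict(row) for row in table]   # work on a copy of PT[QI]
    level = {a: 0 for a in qi}             # current hierarchy level per attribute

    while True:
        # Steps 1-3: frequency list of unique value sequences across the QI.
        freq = Counter(tuple(row[a] for a in qi) for row in table)
        # Steps 4-5: count tuples in equivalence classes smaller than k.
        belowk = sum(n for n in freq.values() if n < k)
        if belowk <= k:
            # Step 7: suppress the remaining small classes and return.
            keep = {seq for seq, n in freq.items() if n >= k}
            return [row for row in table if tuple(row[a] for a in qi) in keep]
        # Step 6: generalize the attribute with the most distinct values one
        # level up its hierarchy (assumes the hierarchy has enough levels).
        target = max(qi, key=lambda a: len({row[a] for row in table}))
        f = hierarchies[target][level[target]]
        level[target] += 1
        for row in table:
            row[target] = f(row[target])
```

Generalizing the attribute with the most distinct values gradually merges small equivalence classes; once no more than k tuples remain in classes smaller than k, those tuples are suppressed rather than generalized further, matching steps 6 and 7.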


References

  1. Latanya Sweeney. "Datafly: a system for providing anonymity in medical data" . Retrieved 19 January 2014.
  2. L. Sweeney, Datafly: a system for providing anonymity in medical data. Database Security, XI: Status and Prospects, T. Lin and S. Qian (eds), Elsevier Science, Amsterdam, 1998.
  3. Xiong, Li. "Data Anonymization - Generalization Algorithms" (PDF). Retrieved 19 January 2014.
  4. Latanya Sweeney (2001). Computational Disclosure Control: A Primer on Data Privacy Protection (Thesis). MIT. p. 113. hdl:1721.1/8589.