Statistical disclosure control

Statistical disclosure control (SDC), also known as statistical disclosure limitation (SDL) or disclosure avoidance, is a technique used in data-driven research to ensure no person or organization is identifiable from the results of an analysis of survey or administrative data, or in the release of microdata. The purpose of SDC is to protect the confidentiality of the respondents and subjects of the research. [1]

SDC usually refers to 'output SDC': ensuring that, for example, a published table or graph does not disclose confidential information about respondents. SDC can also describe protection methods applied to the data itself: for example, removing names and addresses, limiting extreme values, or swapping problematic observations. This is sometimes referred to as 'input SDC', but is more commonly called anonymization, de-identification, or microdata protection.
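The input-SDC steps just described can be sketched as follows. This is a minimal illustration only: the field names and the top-coding threshold are hypothetical examples, not taken from any standard.

```python
# Illustrative 'input SDC': remove direct identifiers and limit extreme
# values before data are made available for analysis. The field names and
# the 100,000 income cap are hypothetical examples.

def anonymise(records, income_cap=100_000):
    """Drop names/addresses and top-code income in a list of dict records."""
    cleaned = []
    for rec in records:
        rec = dict(rec)                    # work on a copy
        rec.pop("name", None)              # remove direct identifiers
        rec.pop("address", None)
        rec["income"] = min(rec["income"], income_cap)  # limit extreme values
        cleaned.append(rec)
    return cleaned

sample = [
    {"name": "Ann", "address": "1 High St", "income": 250_000},
    {"name": "Bob", "address": "2 Low Rd", "income": 30_000},
]
print(anonymise(sample))
# → [{'income': 100000}, {'income': 30000}]
```

Real anonymisation pipelines apply many more transformations (recoding, swapping, perturbation); the point here is only that input SDC modifies the data before analysis, whereas output SDC checks the results afterwards.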

Textbooks (e.g. [2]) typically cover input SDC and tabular data protection (but not other parts of output SDC). This is because these two problems are of direct interest to the statistical agencies that supported the development of the field. [3] In analytical environments, output rules developed for statistical agencies were generally used until data managers began arguing for output SDC designed specifically for research. [4]

This page focuses on output SDC.

Necessity

Many kinds of social, economic and health research use potentially sensitive data as a basis for their research, such as survey or Census data, tax records, health records, educational information, etc. Such information is usually given in confidence, and, in the case of administrative data, not always for the purpose of research.

Researchers are not usually interested in information about one single person or business; they are looking for trends among larger groups of people. [5] However, the data they use is, in the first place, linked to individual people and businesses, and SDC ensures that these cannot be identified from published data, no matter how detailed or broad. [6]

It is possible that at the end of data analysis, the researcher somehow singles out one person or business through their research. For example, a researcher may identify the exceptionally good or bad service in a geriatric department within a hospital in a remote area, where only one hospital provides such care. In that case, the data analysis 'discloses' the identity of the hospital, even if the dataset used for analysis was properly anonymised or de-identified.

Statistical disclosure control will identify this disclosure risk and ensure the results of the analysis are altered to protect confidentiality. [7] It requires a balance between protecting confidentiality and ensuring the results of the data analysis are still useful for statistical research. [8]

Output SDC: statistical models

Output SDC relies upon having a set of rules that can be followed by an output checker; for example, that a frequency table must have a minimum number of observations, or that survival tables should be right-censored for extreme values. The value and drawbacks of rules for frequency and magnitude tables have been discussed extensively since the late 20th century. However, as awareness grows of the need for rules covering other types of analysis, a more structured approach is needed.

'Safe' and 'unsafe' statistics

Some statistical outputs, such as frequency tables, have a high level of inherent risk: differencing, low numbers, class disclosure. They therefore need to be checked before release, ideally by someone with some understanding of the data, to ensure that there is no meaningful risk on release. These are referred to as 'unsafe statistics'. However, there are some statistics, such as the coefficients from modelling, that have no meaningful risk and therefore can be released with no further checks. These are called 'safe statistics'. By separating statistics into 'safe' and 'unsafe', output checks can be concentrated on the latter, improving both security and efficiency. [4]

This is less important for official statistics, where 'unsafe' statistics such as counts, means, medians and simple indexes dominate the outputs. However, for research output this is important, as a great deal of research output (particularly estimates and test statistics) is inherently 'safe'.
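The safe/unsafe triage can be sketched as follows. The category membership shown is illustrative only; real guidance classifies many more statistics, and membership depends on the data and context.

```python
# Illustrative triage of outputs into 'safe' and 'unsafe' statistics, so that
# manual checking effort concentrates on the unsafe ones. The two sets below
# are examples only, not an authoritative classification.

SAFE = {"coefficient", "standard error", "test statistic"}
UNSAFE = {"frequency table", "magnitude table", "minimum", "maximum"}

def triage(outputs):
    """Split requested outputs into auto-releasable and needs-manual-check."""
    release = [o for o in outputs if o["type"] in SAFE]
    check = [o for o in outputs if o["type"] in UNSAFE]
    return release, check

requested = [
    {"name": "model1", "type": "coefficient"},
    {"name": "table2", "type": "frequency table"},
]
release, check = triage(requested)
print([o["name"] for o in release], [o["name"] for o in check])
# → ['model1'] ['table2']
```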

Statistical barns or statbarns

The safe/unsafe model is useful but, with only two broad categories, limited; within those categories, guidelines for SDC largely consist of long lists of statistics and how to handle each. In 2023, the SACRO project (https://dareuk.org.uk/driver-project-sacro/) undertook to review the whole field and see whether a more useful classification scheme could be introduced. The result is the 'statistical barn' (or 'statbarn') concept.

A statbarn is a classification of statistics for disclosure control purposes: all of the statistics in a class share the same characteristics as far as disclosure control is concerned.

As of March 2024, 14 statbarns have been identified, with 12 described for output checkers. [9]

These cover almost all statistics. They also cover most graph forms, where the graph can be converted into the appropriate statbarn (for example, a pie chart is another form of frequency table). The SACRO manual provides guidance on what to look out for, and the rules to be followed for checking.

Output SDC: operating models

There are two main approaches to output SDC: principles-based and rules-based. [10] In principles-based systems, disclosure control attempts to uphold a specific set of fundamental principles (for example, "no person should be identifiable in released microdata"). [11] Rules-based systems, in contrast, are defined by a specific set of rules that the person performing disclosure control follows (for example, "any frequency must be based on at least five observations"), after which the output is presumed safe to release. In general, official statistics are rules-based, while research environments are more likely to be principles-based.

In research environments, the choice of output-checking regime can have significant operational implications. [12]

Rules-Based SDC

In rules-based SDC, a rigid set of rules is used to determine whether or not the results of data analysis can be released. The rules are applied consistently, which makes it obvious what kinds of output are acceptable. Rules-based systems are good for ensuring consistency across time, across data sources, and across production teams, which makes them appealing to statistical agencies. [12] Rules-based systems also work well for remote job servers such as microdata.no or Lissy.
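A rules-based check of this kind is mechanical enough to automate. A minimal sketch, using the commonly cited threshold of five observations per cell (the actual threshold varies by organisation):

```python
# Minimal rules-based check: suppress any frequency-table cell below a
# minimum threshold. Five is a commonly cited rule of thumb; organisations
# set their own values.

def suppress_small_cells(counts, threshold=5):
    """Return the table with small cells removed, plus the suppressed keys."""
    suppressed = sorted(k for k, v in counts.items() if v < threshold)
    protected = {k: (v if v >= threshold else None) for k, v in counts.items()}
    return protected, suppressed

table = {"region A": 12, "region B": 3, "region C": 40}
protected, suppressed = suppress_small_cells(table)
print(protected, suppressed)
# → {'region A': 12, 'region B': None, 'region C': 40} ['region B']
```

Note that, as discussed below, primary suppression alone is often insufficient when marginal totals are also published.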

However, because the rules are inflexible, disclosive information may still slip through, or the rules may be so restrictive that only results too broad for useful analysis can be published. [10] In practice, research environments running rules-based systems may have to introduce flexibility through 'ad hoc' arrangements. [12]

The Northern Ireland Statistics and Research Agency (NISRA) uses a rules-based approach to releasing statistics and research results. [13]

Principles-Based SDC

In principles-based SDC, both the researcher and the output checker are trained in SDC. They receive a set of rules, which are rules-of-thumb rather than the hard rules of rules-based SDC. This means that, in principle, any output may be approved or refused. The rules-of-thumb are a starting point for the researcher. A researcher may request outputs which breach the rules-of-thumb as long as (1) they are non-disclosive, (2) they are important, and (3) the request is exceptional. [14] It is up to the researcher to demonstrate that any 'unsafe' outputs are non-disclosive, but the checker has the final say. Since there are no hard rules, this requires knowledge of disclosure risks and judgment from both the researcher and the checker. It requires training and an understanding of statistics and data analysis, [10] although it has been argued [12] that this can make the process more efficient than a rules-based model.

In the UK, all major secure research environments in social science and public health, with the exception of Northern Ireland, are principles-based. This includes the UK Data Service's Secure Data Service, [15] the Office for National Statistics' Secure Research Service, the Scottish Safe Havens, Secure Anonymised Information Linkage (SAIL) and OpenSAFELY.

Critiques

Many contemporary statistical disclosure control techniques, such as generalization and cell suppression, have been shown to be vulnerable to attack by a hypothetical data intruder. For example, Cox showed in 2009 that complementary cell suppression typically leads to "over-protected" solutions because of the need to suppress both primary and complementary cells, and even then can lead to the compromise of sensitive data when exact intervals are reported. [16]
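The underlying weakness is easy to demonstrate: if only the primary sensitive cell is suppressed but the row total is published, the cell can be recovered exactly by subtraction, which is why a complementary cell must be suppressed as well. A toy example:

```python
# Why complementary suppression is needed: with the row total published,
# a single suppressed cell is recovered exactly by subtraction.

row = {"A": 20, "B": 3, "C": 17}   # cell B (count 3) is the sensitive cell
published = {"A": 20, "B": None, "C": 17, "total": sum(row.values())}

recovered = published["total"] - published["A"] - published["C"]
print(recovered)
# → 3  (the 'suppressed' value is fully disclosed)
```

Suppressing a complementary cell (say, C) blocks this subtraction, but at the cost of hiding a non-sensitive value, which is the over-protection Cox describes.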

Many of the rules are arbitrary and reflect data owners' unwillingness to be different, rather than solid evidence. For example, Ritchie [17] demonstrated that the choice of a minimum threshold has more to do with an organisation's wish to be in line with others than with any statistical rationale.

A more substantive criticism is that the theoretical models used to explore control measures are not appropriate as guides for practical action. [18] Hafner et al. provide a practical example of how a change in perspective can generate substantially different results. [3]

Output SDC and AI models

Artificial intelligence and machine learning models present different risks for output checking. [19] The GRAIMATTER project (https://dareuk.org.uk/sprint-exemplar-project-graimatter/) provided initial guidance and automatic tools. These were extended and simplified as part of the SACRO project (see below), and further guidance for data-service staff was added. This is still a quickly evolving area; the SDC-REBOOT community network (https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=SDC-REBOOT) is currently co-ordinating the ongoing development of the tools and guidance.

Automated tools

Output checking is generally labour-intensive, as it requires analysts who can understand what they are looking at and make a judgement about whether to release an output. There is therefore considerable interest in automated checking. A Eurostat-commissioned report [20] explored the options for automating output checking, which largely come down to the two approaches described below.

tau-Argus and sdcTable

tau-Argus and sdcTable are fully-automated open-source EoPR tools for tabular data protection (frequency and magnitude tables). They are designed to work with multiple tables. Metadata describing the output(s) and the control parameters must be set up in advance. The tools provide output checkers with extensive information on potential problems, including secondary disclosure across tables. They can also carry out correction measures, from suppression and simple rounding to secondary suppression and controlled tabular rounding. They do not deal with non-tabular outputs.

Because of the need to rewrite the metadata for each table, these tools are poorly suited for research use. However, in official statistics, where the same tables are being repeatedly generated and where secondary differencing is considered a significant problem, the investment in setting up the tools can be very cost-effective.

The software for both is open source, available on GitHub (https://github.com/sdcTools/tauargus) and CRAN (https://cran.r-project.org/web/packages/sdcTable/).

SACRO

SACRO (semi-autonomous checking of research outputs) is a WPR tool. The original version, ACRO, was commissioned by Eurostat in 2020 as a proof-of-concept to show that a general-purpose output checking tool could be developed. [21] In 2023 the UK Medical Research Council commissioned a generalised version (SACRO) that works with multiple languages (as of 2024: Stata, R and Python) and provides a more user-friendly interface. [22] SACRO directly implements the statbarn model and is principles-based: it is 'semi-automatic' because it allows researchers to request exceptions and output checkers to override the automated recommendations. All UK social science secure facilities, and most UK public health secure facilities, are planning to adopt it.

The software is available on GitHub at https://github.com/AI-SDC, which also contains links to the original ACRO and to tools for assessing AI models.

See also

Barnardisation
Biostatistics
Census
Data mining
De-identification
Differential privacy
Epi Info
Eurostat
Five Safes
GESIS – Leibniz Institute for the Social Sciences
Microdata
Office for National Statistics
Statistics Botswana
Synthetic data
UK Data Service

References

  1. Skinner, Chris (2009). "Statistical Disclosure Control for Survey Data" (PDF). Handbook of Statistics Vol 29A: Sample Surveys: Design, Methods and Applications. 29: 381–396. doi:10.1016/S0169-7161(08)00015-1. ISBN 978-0-444-53124-7. Retrieved 2016-03-08.
  2. Statistical Disclosure Control. Chichester, UK: John Wiley & Sons, 2012. doi:10.1002/9781118348239. ISBN 978-1-118-34823-9.
  3. Hafner, Hans-Peter; Lenz, Rainer; Ritchie, Felix (2019). "User-focused threat identification for anonymised microdata" (PDF). Statistical Journal of the IAOS. 35 (4): 703–713. doi:10.3233/SJI-190506. ISSN 1874-7655.
  4. Ritchie, Felix (2007). "Disclosure detection in research environments in practice". Paper presented at the UNECE/Eurostat work session on statistical data confidentiality.
  5. "ADRN » Safe results". adrn.ac.uk. Retrieved 2016-03-08.
  6. "Government Statistical Services: Statistical Disclosure Control". Retrieved 2016-03-08.
  7. Templ, Matthias; et al. (2014). "International Household Survey Network" (PDF). IHSN Working Paper. Retrieved 2016-03-08.
  8. "Archived: ONS Statistical Disclosure Control". Office for National Statistics. Archived from the original on 2016-01-05. Retrieved 2016-03-08.
  9. Ritchie, Felix; Green, Elizabeth; Smith, Jim; Tilbrook, Amy; White, Paul (2023-10-30). "The SACRO guide to statistical output checking". doi:10.5281/zenodo.10054629.
  10. Ritchie, Felix; Elliott, Mark (2015). "Principles- Versus Rules-Based Output Statistical Disclosure Control in Remote Access Environments" (PDF). IASSIST Quarterly. 39 (2): 5–13. doi:10.29173/iq778. Retrieved 2016-03-08.
  11. Ritchie, Felix (2009). "UK release practices for official microdata". Statistical Journal of the IAOS. 26 (3–4): 103–111. doi:10.3233/SJI-2009-0706. ISSN 1874-7655.
  12. Alves, Kyle; Ritchie, Felix (2020). "Runners, repeaters, strangers and aliens: Operationalising efficient output disclosure control". Statistical Journal of the IAOS. 36 (4): 1281–1293. doi:10.3233/SJI-200661.
  13. "Census 2001 – Methodology" (PDF). Northern Ireland Statistics and Research Agency. 2001. Retrieved 2016-03-08.
  14. Office for National Statistics. "Safe Researcher Training".
  15. Afkhamai, Reza; et al. (2013). "Statistical Disclosure Control Practice in the Secure Access of the UK Data Service" (PDF). United Nations Economic Commission for Europe. Retrieved 2016-03-08.
  16. Cox, Lawrence H. (2009). "Vulnerability of Complementary Cell Suppression to Intruder Attack". Journal of Privacy and Confidentiality. 1 (2): 235–251. http://repository.cmu.edu/jpc/vol1/iss2/8/
  17. Ritchie, Felix (2022). "10 is the safest number that there's ever been". Transactions on Data Privacy. 15 (2): 109–140.
  18. Ritchie, Felix; Hafner, Hans-Peter; Lenz, Rainer; Welpton, Richard (2018). "Evidence-based, default-open, risk-managed, user-centred data access".
  19. Ritchie, Felix; Tilbrook, Amy; Cole, Christian; Jefferson, Emily; Krueger, Susan; Mansouri-Bensassi, Esma; Rogers, Simon; Smith, Jim (2023). "Machine learning models in trusted research environments – understanding operational risks". International Journal of Population Data Science. 8 (1): 2165. doi:10.23889/ijpds.v8i1.2165. ISSN 2399-4908. PMC 10898318. PMID 38414545.
  20. Green, Elizabeth; Ritchie, Felix; Smith, James (2020). "Understanding output checking".
  21. Eurostat (European Commission); Green, Elizabeth; Smith, James; Ritchie, Felix (2021). Automatic Checking of Research Outputs (ACRO): a tool for dynamic disclosure checks. Luxembourg: Publications Office of the European Union. doi:10.2785/75954. ISBN 978-92-76-41529-9.
  22. Smith, Jim; Preen, Richard; Albashir, Maha; Ritchie, Felix; Green, Elizabeth; Davy, Simon; Stokes, Pete; Bacon, Sebastian (2023). "SACRO: Semi-Automated Checking Of Research Outputs".