Author | Tom Chivers, David Chivers |
---|---|
Subject | Statistics in journalism, healthcare and politics |
Publisher | Weidenfeld & Nicolson |
Publication date | March 2021 |
Pages | 200 |
ISBN | 9781474619974 |
How to Read Numbers: A Guide to Statistics in the News (and Knowing When to Trust Them) is a 2021 British book by Tom and David Chivers. It describes misleading uses of statistics in the news, with contemporary examples drawn from the COVID-19 pandemic, healthcare, politics and crime. The book was conceived by the authors, who are cousins, in early 2020. It received positive reviews for its readability, engaging style, accessibility to non-mathematicians and applicability to journalistic writing.
Tom and David Chivers, cousins, wrote a proposal for the book in the first months of 2020 after complaining to each other about a news story with poor interpretation of numerical data. The proposal used a case study of deaths at a university, which was cut from the final book, and briefly mentioned the emerging COVID-19 pandemic. [1] At the time of writing, Tom Chivers was a science editor for UnHerd [2] —having won Statistical Excellence in Journalism Awards from the Royal Statistical Society (RSS) in 2018 and 2020 [3] [4] —and the author of one previous book, The Rationalist's Guide to the Galaxy. [5] David Chivers was an assistant professor of economics at the University of Durham. [2] Tom Chivers viewed journalists as more literate than numerate and incentivised to make information sound dramatic; David Chivers said the "publish or perish" motivation in academia could have a similar effect. [1]
The authors believed statistics could be given more prominence in school curricula and that numerical understanding should be viewed like literacy. Tom Chivers received some feedback from school and university teachers that they had used the book in their teaching. David Chivers said it was common to view maths as calculations rather than as interpretation of what numerical information means in context. [1]
The book was released in March 2021. [6] It concludes with a "statistical style guide", recommended for journalists. The authors presented this at the Significance lecture in 2021. [2]
An introduction outlines why the authors believe interpreting statistics is an important skill, using information about the COVID-19 pandemic as illustration. Each chapter covers a misleading use of statistics that can be found in the news.
The authors end with a recommended "statistical style guide" for journalists.
In a nomination for Chalkdust's 2021 Book of the Year, a reviewer lauded the "readable and enjoyable" brevity of chapters, the clarity and conciseness of explanations and its utility for non-mathematicians. [7] Writing in The Big Issue, Stephen Bush approved of its light tone, informativeness and separation of expository mathematical material into optional sections. [5] Vivek Kaul of Mint praised its simplicity and the importance of the final chapter. [8]
Martin Chilton recommended the book in The Independent as informative and enjoyable, saying that the Chivers "make sense of dense material and offer engrossing insights". [6] [9] In The Times, Manjit Kumar wrote that "the authors do a splendid job of stringing words together so smartly that even difficult concepts are explained and understood with deceptive ease". [10] Rainer Hank of Frankfurter Allgemeine Zeitung said that he had learned much from the book and that such engaging educational materials, with little mathematical knowledge required, could lead to better journalism. [11]
Econometrics is an application of statistical methods to economic data in order to give empirical content to economic relationships. More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference." An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships." Jan Tinbergen is one of the two founding fathers of econometrics. The other, Ragnar Frisch, also coined the term in the sense in which it is used today.
Statistics is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.
Simpson's paradox is a phenomenon in probability and statistics in which a trend appears in several groups of data but disappears or reverses when the groups are combined. This result is often encountered in social-science and medical-science statistics, and is particularly problematic when frequency data are unduly given causal interpretations. The paradox can be resolved when confounding variables and causal relations are appropriately addressed in the statistical modeling.
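The reversal can be demonstrated with a minimal Python sketch; the admission figures below are invented for illustration. Within each department women are admitted at a higher rate, yet the combined rate favours men:

```python
# Hypothetical admission data illustrating Simpson's paradox.
groups = {
    "Dept A": {"men": (80, 100), "women": (18, 20)},  # (admitted, applied)
    "Dept B": {"men": (5, 20), "women": (25, 80)},
}

def rate(admitted, applied):
    return admitted / applied

totals = {"men": [0, 0], "women": [0, 0]}
for dept, sexes in groups.items():
    for sex, (adm, app) in sexes.items():
        totals[sex][0] += adm
        totals[sex][1] += app
        print(f"{dept} {sex}: {rate(adm, app):.0%}")

# Women win in every department, yet men win overall.
for sex, (adm, app) in totals.items():
    print(f"Overall {sex}: {rate(adm, app):.0%}")
```

The reversal occurs because most men applied to the "easy" department and most women to the "hard" one, so department acts as a confounder when the groups are pooled.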
A journalist is an individual who collects information in the form of text, audio, or pictures, processes it into a newsworthy form, and disseminates it to the public. This process is known as journalism.
Meta-analysis is the statistical combination of the results of multiple studies addressing a similar research question. An important part of this method involves computing a combined effect size across all of the studies, which requires extracting effect sizes and variance measures from each study. Meta-analyses are integral in supporting research grant proposals, shaping treatment guidelines, and influencing health policies. They are also pivotal in summarizing existing research to guide future studies, thereby cementing their role as a fundamental methodology in metascience. Meta-analyses are often, but not always, important components of a systematic review procedure. For instance, a meta-analysis may be conducted on several clinical trials of a medical treatment, in an effort to obtain a better understanding of how well the treatment works.
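One common way to combine results is a fixed-effect meta-analysis, which pools per-study estimates by inverse-variance weighting. The Python sketch below uses invented effect sizes and variances purely for illustration:

```python
import math

# Toy fixed-effect meta-analysis: pool effect sizes from three
# hypothetical studies by inverse-variance weighting.
effects = [0.30, 0.10, 0.25]    # per-study effect sizes (invented)
variances = [0.04, 0.01, 0.02]  # per-study variances (invented)

weights = [1 / v for v in variances]  # precise studies count for more
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))      # standard error of the pooled estimate
print(f"pooled effect = {pooled:.3f} +/- {1.96 * se:.3f}")
```

The most precise study (smallest variance) dominates the pooled estimate, which is why the result lies closer to 0.10 than a simple average of the three effects would.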
Goodhart's law is an adage often stated as, "When a measure becomes a target, it ceases to be a good measure". It is named after British economist Charles Goodhart, who is credited with expressing the core idea of the adage in a 1975 article on monetary policy in the United Kingdom:
Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.
A variable is considered dependent if it depends on an independent variable. Dependent variables are studied under the supposition or demand that they depend, by some law or rule, on the values of other variables. Independent variables, in turn, are not seen as depending on any other variable in the scope of the experiment in question. In this sense, some common independent variables are time, space, density, mass, fluid flow rate, and previous values of some observed value of interest to predict future values.
Fact-checking is the process of verifying the factual accuracy of questioned reporting and statements. Fact-checking can be conducted before or after the text or content is published or otherwise disseminated. Internal fact-checking is such checking done in-house by the publisher to prevent inaccurate content from being published; when the text is analyzed by a third party, the process is called external fact-checking.
Statistics, when used in a misleading fashion, can trick the casual observer into believing something other than what the data shows. That is, a misuse of statistics occurs when a statistical argument asserts a falsehood. In some cases, the misuse may be accidental. In others, it is purposeful and for the gain of the perpetrator. When the statistical reason involved is false or misapplied, this constitutes a statistical fallacy.
This glossary of statistics and probability is a list of definitions of terms and concepts used in the mathematical sciences of statistics and probability, their sub-disciplines, and related fields. For additional related terms, see Glossary of mathematics and Glossary of experimental design.
Sir David John Spiegelhalter is a British statistician and a Fellow of Churchill College, Cambridge. From 2007 to 2018 he was Winton Professor of the Public Understanding of Risk in the Statistical Laboratory at the University of Cambridge. Spiegelhalter is an ISI highly cited researcher.
In causal inference, a confounder is a variable that influences both the dependent variable and independent variable, causing a spurious association. Confounding is a causal concept, and as such, cannot be described in terms of correlations or associations. The existence of confounders is an important quantitative explanation why correlation does not imply causation. Some notations are explicitly designed to identify the existence, possible existence, or non-existence of confounders in causal relationships between elements of a system.
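The spurious association can be simulated. In the Python sketch below (an invented model), Z causally drives both X and Y, X has no effect on Y at all, and yet the two are substantially correlated:

```python
import random

random.seed(0)
# Hypothetical simulation: Z confounds X and Y.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]  # X caused by Z
y = [zi + random.gauss(0, 1) for zi in z]  # Y caused by Z, not by X

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

print(f"corr(X, Y) = {corr(x, y):.2f}")  # roughly 0.5, despite no X->Y link
```

Under this model the theoretical correlation between X and Y is 0.5, entirely attributable to the shared cause Z.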
In science, randomized experiments are those that allow the greatest reliability and validity of statistical estimates of treatment effects. Randomization-based inference is especially important in experimental design and in survey sampling.
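Randomization-based inference can be sketched with a permutation test on invented outcome data: the observed difference in group means is compared against the differences that arise when the treatment labels are repeatedly re-randomized.

```python
import random

random.seed(1)
# Invented outcomes from a hypothetical randomized experiment.
treated = [5.1, 6.0, 5.8, 6.3, 5.9]
control = [4.8, 5.2, 5.0, 4.9, 5.3]
observed = sum(treated) / len(treated) - sum(control) / len(control)

# Re-randomize labels many times; count how often a difference at
# least as large as the observed one occurs by chance alone.
pooled = treated + control
reps = 10_000
count = 0
for _ in range(reps):
    random.shuffle(pooled)
    diff = sum(pooled[:5]) / 5 - sum(pooled[5:]) / 5
    if diff >= observed:
        count += 1
print(f"observed diff = {observed:.2f}, p ≈ {count / reps:.4f}")
```

Because random assignment is the only thing distinguishing the groups, the shuffled differences form the null distribution directly, with no distributional assumptions needed.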
In causal models, controlling for a variable means binning data according to measured values of the variable. This is typically done so that the variable can no longer act as a confounder in, for example, an observational study or experiment.
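A minimal sketch of binning, using invented patient records: outcomes are compared within strata of the measured confounder (here, age) rather than across the whole sample.

```python
# Hypothetical records: (age, treated, recovered).
records = [
    (25, True, True), (30, True, True), (65, True, False), (70, True, False),
    (25, False, True), (35, False, False), (60, False, False), (75, False, False),
]

def age_bin(age):
    """Bin the confounder into coarse strata."""
    return "young" if age < 50 else "old"

# Group outcomes by (age stratum, treatment status).
strata = {}
for age, treated, recovered in records:
    strata.setdefault((age_bin(age), treated), []).append(recovered)

# Compare treated vs. untreated within each age stratum.
for (stratum, treated), outcomes in sorted(strata.items()):
    share = sum(outcomes) / len(outcomes)
    print(f"{stratum:5s} treated={treated}: recovery {share:.0%}")
```

Comparing recovery rates only within each age bin removes age's ability to masquerade as a treatment effect, at the cost of some resolution from the coarseness of the bins.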
A quasi-experiment is an empirical interventional study used to estimate the causal impact of an intervention on target population without random assignment. Quasi-experimental research shares similarities with the traditional experimental design or randomized controlled trial, but it specifically lacks the element of random assignment to treatment or control. Instead, quasi-experimental designs typically allow the researcher to control the assignment to the treatment condition, but using some criterion other than random assignment.
A micromort is a unit of risk defined as a one-in-a-million chance of death. Micromorts can be used to measure the riskiness of various day-to-day activities. A microprobability is a one-in-a-million chance of some event; thus, a micromort is the microprobability of death. The micromort concept was introduced by Ronald A. Howard who pioneered the modern practice of decision analysis.
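The unit conversion is simple arithmetic, as the short Python sketch below shows; the risk figure used is illustrative, not a measured value for any real activity.

```python
# 1 micromort = a 1-in-1,000,000 chance of death.
MICROMORT = 1e-6

def risk_to_micromorts(p_death):
    """Express a probability of death in micromorts."""
    return p_death / MICROMORT

# e.g. a hypothetical activity with a 1-in-200,000 chance of death
print(risk_to_micromorts(1 / 200_000))  # about 5 micromorts
```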
Causal inference is the process of determining the independent, actual effect of a particular phenomenon that is a component of a larger system. The main difference between causal inference and inference of association is that causal inference analyzes the response of an effect variable when a cause of the effect variable is changed. The study of why things occur is called etiology, and can be described using the language of scientific causal notation. Causal inference is said to provide the evidence of causality theorized by causal reasoning.
The evaluation of binary classifiers compares two methods of assigning a binary attribute, one of which is usually a standard method and the other is being investigated. There are many metrics that can be used to measure the performance of a classifier or predictor; different fields have different preferences for specific metrics due to different goals. For example, in medicine sensitivity and specificity are often used, while in computer science precision and recall are preferred. An important distinction is between metrics that are independent of the prevalence, and metrics that depend on the prevalence – both types are useful, but they have very different properties.
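The prevalence dependence can be made concrete: with sensitivity and specificity held fixed, precision (the positive predictive value) collapses as the condition becomes rare. The Python sketch below uses assumed test characteristics:

```python
# Precision depends on prevalence even when sensitivity and
# specificity (prevalence-independent) are held fixed.
def precision(sens, spec, prevalence):
    tp = sens * prevalence              # true positive rate in population
    fp = (1 - spec) * (1 - prevalence)  # false positive rate in population
    return tp / (tp + fp)

sens, spec = 0.99, 0.99  # assumed, highly accurate test
for prev in (0.5, 0.01, 0.001):
    print(f"prevalence {prev:>6}: precision = {precision(sens, spec, prev):.1%}")
```

For a rare condition, a "99% accurate" positive result can still be wrong most of the time, because the false positives from the large healthy population swamp the true positives.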
Hannah Fry is a British mathematician, author, and radio and television presenter. She is Professor in the Mathematics of Cities at the UCL Centre for Advanced Spatial Analysis. In January 2024 Fry was appointed to be the new President of the Institute of Mathematics and its Applications. Her work has included studying the patterns of human behaviour, such as interpersonal relationships and dating, and how mathematics can apply to them. Fry delivered the 2019 Royal Institution Christmas Lectures, and has presented several programmes for the BBC, including The Secret Genius of Modern Life.
The Book of Why: The New Science of Cause and Effect is a 2018 nonfiction book by computer scientist Judea Pearl and writer Dana Mackenzie. The book explores the subject of causality and causal inference from statistical and philosophical points of view for a general audience.