Data science is an interdisciplinary academic field [1] that uses statistics, scientific computing, scientific methods, processing, scientific visualization, algorithms and systems to extract or extrapolate knowledge and insights from potentially noisy, structured, or unstructured data. [2]
Data science also integrates domain knowledge from the underlying application domain (e.g., natural sciences, information technology, and medicine). [3] Data science is multifaceted and can be described as a science, a research paradigm, a research method, a discipline, a workflow, and a profession. [4]
Data science is "a concept to unify statistics, data analysis, informatics, and their related methods" to "understand and analyze actual phenomena" with data. [5] It uses techniques and theories drawn from many fields within the context of mathematics, statistics, computer science, information science, and domain knowledge. [6] However, data science is different from computer science and information science. Turing Award winner Jim Gray imagined data science as a "fourth paradigm" of science (empirical, theoretical, computational, and now data-driven) and asserted that "everything about science is changing because of the impact of information technology" and the data deluge. [7] [8]
A data scientist is a professional who creates programming code and combines it with statistical knowledge to create insights from data. [9]
Data science is an interdisciplinary field [10] focused on extracting knowledge from typically large data sets and applying the knowledge and insights from that data to solve problems in a wide range of application domains. The field encompasses preparing data for analysis, formulating data science problems, analyzing data, developing data-driven solutions, and presenting findings to inform high-level decisions in a broad range of application domains. As such, it incorporates skills from computer science, statistics, information science, mathematics, data visualization, information visualization, data sonification, data integration, graphic design, complex systems, communication and business. [11] [12] Statistician Nathan Yau, drawing on Ben Fry, also links data science to human–computer interaction: users should be able to intuitively control and explore data. [13] [14] In 2015, the American Statistical Association identified database management, statistics and machine learning, and distributed and parallel systems as the three emerging foundational professional communities. [15]
Many statisticians, including Nate Silver, have argued that data science is not a new field, but rather another name for statistics. [16] Others argue that data science is distinct from statistics because it focuses on problems and techniques unique to digital data. [17] Vasant Dhar writes that statistics emphasizes quantitative data and description. In contrast, data science deals with quantitative and qualitative data (e.g., from images, text, sensors, transactions, customer information, etc.) and emphasizes prediction and action. [18] Andrew Gelman of Columbia University has described statistics as a non-essential part of data science. [19]
Stanford professor David Donoho writes that data science is not distinguished from statistics by the size of datasets or use of computing and that many graduate programs misleadingly advertise their analytics and statistics training as the essence of a data-science program. He describes data science as an applied field growing out of traditional statistics. [20]
In 1962, John Tukey described a field he called "data analysis", which resembles modern data science. [20] In 1985, in a lecture given to the Chinese Academy of Sciences in Beijing, C. F. Jeff Wu used the term "data science" for the first time as an alternative name for statistics. [21] Later, attendees at a 1992 statistics symposium at the University of Montpellier II acknowledged the emergence of a new discipline focused on data of various origins and forms, combining established concepts and principles of statistics and data analysis with computing. [22] [23]
The term "data science" has been traced back to 1974, when Peter Naur proposed it as an alternative name to computer science. [6] In 1996, the International Federation of Classification Societies became the first conference to specifically feature data science as a topic. [6] However, the definition was still in flux. After the 1985 lecture at the Chinese Academy of Sciences in Beijing, in 1997 C. F. Jeff Wu again suggested that statistics should be renamed data science. He reasoned that a new name would help statistics shed inaccurate stereotypes, such as being synonymous with accounting or limited to describing data. [24] In 1998, Hayashi Chikio argued for data science as a new, interdisciplinary concept, with three aspects: data design, collection, and analysis. [23]
During the 1990s, popular terms for the process of finding patterns in datasets (which were increasingly large) included "knowledge discovery" and "data mining". [6] [25]
In 2012, technologists Thomas H. Davenport and DJ Patil declared "Data Scientist: The Sexiest Job of the 21st Century", [26] a catchphrase that was picked up even by major-city newspapers like the New York Times [27] and the Boston Globe. [28] A decade later, they reaffirmed it, stating that "the job is more in demand than ever with employers". [29]
The modern conception of data science as an independent discipline is sometimes attributed to William S. Cleveland. [30] In a 2001 paper, he advocated an expansion of statistics beyond theory into technical areas; because this would significantly change the field, it warranted a new name. [25] "Data science" became more widely used in the next few years: in 2002, the Committee on Data for Science and Technology launched the Data Science Journal. In 2003, Columbia University launched The Journal of Data Science. [25] In 2014, the American Statistical Association's Section on Statistical Learning and Data Mining changed its name to the Section on Statistical Learning and Data Science, reflecting the ascendant popularity of data science. [31]
The professional title of "data scientist" has been attributed to DJ Patil and Jeff Hammerbacher in 2008. [32] Though it was used by the National Science Board in their 2005 report "Long-Lived Digital Data Collections: Enabling Research and Education in the 21st Century", it referred broadly to any key role in managing a digital data collection. [33]
There is still no consensus on the definition of data science, and it is considered by some to be a buzzword. [34] Big data is a related marketing term. [35] Data scientists are responsible for breaking down big data into usable information and creating software and algorithms that help companies and organizations determine optimal operations. [36]
Data science and data analysis are both important disciplines in the field of data management and analysis, but they differ in several key ways. Both involve working with data, but data science is an interdisciplinary field that applies statistical, computational, and machine learning methods to extract insights from data and make predictions, whereas data analysis focuses on examining and interpreting data to identify patterns and trends. [37] [38]
Data analysis typically involves working with smaller, structured datasets to answer specific questions or solve specific problems. This can involve tasks such as data cleaning, data visualization, and exploratory data analysis to gain insights into the data and develop hypotheses about relationships between variables. Data analysts typically use statistical methods to test these hypotheses and draw conclusions from the data. For example, a data analyst might analyze sales data to identify trends in customer behavior and make recommendations for marketing strategies. [37]
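As a minimal sketch of this kind of workflow, the following Python example uses pandas and SciPy on a small, hypothetical table of monthly sales for two customer segments; the column names, figures, and the choice of a t-test are illustrative assumptions, not a prescribed method.

```python
import pandas as pd
from scipy import stats

# Hypothetical monthly sales figures for two customer segments
sales = pd.DataFrame({
    "month": range(1, 13),
    "segment_a": [120, 135, None, 150, 160, 155, 170, 165, 180, 175, 190, 200],
    "segment_b": [110, 115, 118, 120, None, 125, 128, 130, 129, 135, 138, 140],
})

# Data cleaning: fill the missing observations by interpolation
sales = sales.interpolate()

# Exploratory summary: average month-over-month change per segment
print(sales[["segment_a", "segment_b"]].diff().mean())

# Hypothesis test: do the two segments have different mean sales?
t_stat, p_value = stats.ttest_ind(sales["segment_a"], sales["segment_b"])
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The emphasis is on cleaning a small, structured dataset, summarizing it, and testing a specific hypothesis, which mirrors the analyst workflow described above.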
Data science, on the other hand, is a more complex and iterative process that involves working with larger, more complex datasets that often require advanced computational and statistical methods to analyze. Data scientists often work with unstructured data such as text or images and use machine learning algorithms to build predictive models and make data-driven decisions. In addition to statistical analysis, data science often involves tasks such as data preprocessing, feature engineering, and model selection. For instance, a data scientist might develop a recommendation system for an e-commerce platform by analyzing user behavior patterns and using machine learning algorithms to predict user preferences. [38] [39]
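A minimal sketch of the model-building side of this process is shown below, assuming scikit-learn and synthetic data standing in for preprocessed user-behavior features; the particular models compared (logistic regression and a random forest) are illustrative choices, not the method of any specific system.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for preprocessed, feature-engineered data
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Model selection: compare two candidate models by cross-validation
for model in (LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)):
    score = cross_val_score(model, X_train, y_train, cv=5).mean()
    print(type(model).__name__, round(score, 3))

# Fit the chosen model and evaluate it on held-out data
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

The sketch illustrates the preprocessing-to-model-selection-to-evaluation loop that distinguishes data science work from descriptive analysis.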
While data analysis focuses on extracting insights from existing data, data science goes beyond that by incorporating the development and implementation of predictive models to make informed decisions. Data scientists are often responsible for collecting and cleaning data, selecting appropriate analytical techniques, and deploying models in real-world scenarios. They work at the intersection of mathematics, computer science and domain expertise to solve complex problems and uncover hidden patterns in large datasets. [38]
Despite these differences, data science and data analysis are closely related fields and often require similar skill sets. Both fields require a solid foundation in statistics, programming, and data visualization, as well as the ability to communicate findings effectively to both technical and non-technical audiences. Both fields benefit from critical thinking and domain knowledge, as understanding the context and nuances of the data is essential for accurate analysis and modeling. [37] [38]
Topological data analysis (TDA) is a mathematical framework that uses tools from topology, including algebraic, differential, and geometric topology, to study the shape and structure of data.
In summary, data analysis and data science are distinct yet interconnected disciplines within the broader field of data management, analysis, and mathematics. Data analysis focuses on extracting insights and drawing conclusions from structured data, while data science involves a more comprehensive approach that combines statistical analysis, computational methods, topological data analysis, and machine learning to extract insights, build predictive models, and drive data-driven decision-making. Both fields use data to understand patterns, make informed decisions, and solve complex problems across various domains.
As illustrated in the previous sections, there are considerable differences between data science, data analysis, and statistics. Consequently, just as statistics grew out of applied mathematics into an independent field, data science has emerged as an independent field and has gained traction in recent years. Demand for professionals skilled in computerized data analysis has grown rapidly with the increasing amounts of data emanating from a variety of independent sources. While statisticians can supply some of these sought-after skills, they are often less preferred than trained data scientists, whose expertise extends to areas such as NoSQL, Apache Hadoop, cloud computing platforms, and complex networks. This shift has led various institutions to craft academic programmes to prepare skilled labor for the market. Institutions offering degree programmes in data science include Stanford University, Harvard University, the University of Oxford, ETH Zurich, and Meru University, among many others.
Cloud computing can offer access to large amounts of computational power and storage. [40] In big data, where volumes of information are continually generated and processed, these platforms can be used to handle complex and resource-intensive analytical tasks. [41]
Some distributed computing frameworks are designed to handle big data workloads. These frameworks can enable data scientists to process and analyze large datasets in parallel, which can reduce processing times. [42]
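As a sketch of how such a framework is typically used, the example below assumes Apache Spark's Python API (PySpark); the file name and column names are hypothetical placeholders, and in a real deployment the same code would run across many executor nodes.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start (or attach to) a Spark session; on a cluster, work is distributed
# across executors automatically.
spark = SparkSession.builder.appName("example").getOrCreate()

# Hypothetical large CSV of transactions, read and processed in parallel
df = spark.read.csv("transactions.csv", header=True, inferSchema=True)

# Aggregate total amount per customer across all partitions
totals = df.groupBy("customer_id").agg(F.sum("amount").alias("total_amount"))
totals.show(10)

spark.stop()
```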
Data science involves collecting, processing, and analyzing data that often include personal and sensitive information. Ethical concerns include potential privacy violations, bias perpetuation, and negative societal impacts. [43] [44]
Machine learning models can amplify existing biases present in training data, leading to discriminatory or unfair outcomes. [45] [46]
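One simple check for such effects is to compare a model's error rates across subgroups. The sketch below assumes true labels, predictions, and a protected attribute already exist as NumPy arrays; the group labels and values are hypothetical.

```python
import numpy as np

# Hypothetical model outputs: true labels, predictions, and a group attribute
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

# Compare accuracy per group; large gaps can indicate amplified bias
for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: accuracy = {acc:.2f}")
```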
Computing is any goal-oriented activity requiring, benefiting from, or creating computing machinery. It includes the study and experimentation of algorithmic processes, and the development of both hardware and software. Computing has scientific, engineering, mathematical, technological, and social aspects. Major computing disciplines include computer engineering, computer science, cybersecurity, data science, information systems, information technology, and software engineering.
Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines to applied disciplines.
Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information from a data set and transforming the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.
Computational biology refers to the use of techniques in computer science, data analysis, mathematical modeling and computational simulations to understand biological systems and relationships. An intersection of computer science, biology, and data science, the field also has foundations in applied mathematics, molecular biology, cell biology, chemistry, and genetics.
Analytics is the systematic computational analysis of data or statistics. It is used for the discovery, interpretation, and communication of meaningful patterns in data, which also falls under and directly relates to the umbrella term, data science. Analytics also entails applying data patterns toward effective decision-making. It can be valuable in areas rich with recorded information; analytics relies on the simultaneous application of statistics, computer programming, and operations research to quantify performance.
Citation analysis is the examination of the frequency, patterns, and graphs of citations in documents. It uses the directed graph of citations — links from one document to another document — to reveal properties of the documents. A typical aim would be to identify the most important documents in a collection. A classic example is that of the citations between academic articles and books. For another example, judges of law support their judgements by referring back to judgements made in earlier cases. An additional example is provided by patents which contain prior art, citation of earlier patents relevant to the current claim. The digitization of patent data and increasing computing power have led to a community of practice that uses these citation data to measure innovation attributes, trace knowledge flows, and map innovation networks.
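For example, the directed citation graph can be analyzed with standard graph libraries. The sketch below uses NetworkX (an assumed tool choice, not a requirement) on a handful of hypothetical documents, ranking them by PageRank, a common importance measure for citation networks, alongside a simple in-degree (times-cited) count.

```python
import networkx as nx

# Hypothetical citation edges: (citing document, cited document)
citations = [
    ("paper_B", "paper_A"),
    ("paper_C", "paper_A"),
    ("paper_C", "paper_B"),
    ("paper_D", "paper_B"),
    ("paper_D", "paper_C"),
]

G = nx.DiGraph(citations)

# Rank documents by PageRank over the citation graph
for doc, score in sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1]):
    print(doc, round(score, 3))

# Simple alternative: count how often each document is cited (in-degree)
print(dict(G.in_degree()))
```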
In predictive analytics, data science, machine learning and related fields, concept drift or drift is an evolution of data that invalidates the data model. It happens when the statistical properties of the target variable, which the model is trying to predict, change over time in unforeseen ways. This causes problems because the predictions become less accurate as time passes. Drift detection and drift adaptation are of paramount importance in the fields that involve dynamically changing data and data models.
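A simple drift check compares the distribution of recent data against a reference window. The sketch below uses a two-sample Kolmogorov–Smirnov test from SciPy on synthetic data; the significance threshold is arbitrary, and a shift in the input distribution is only a signal of possible drift, not proof that the model's predictions have degraded.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference window (resembling training data) and a recent window
reference = rng.normal(loc=0.0, scale=1.0, size=500)
recent = rng.normal(loc=0.8, scale=1.0, size=500)   # shifted: drift has occurred

# Two-sample Kolmogorov-Smirnov test: a small p-value suggests the
# two windows come from different distributions
stat, p_value = ks_2samp(reference, recent)
if p_value < 0.01:
    print(f"drift suspected (KS statistic {stat:.3f}, p = {p_value:.2e})")
else:
    print("no drift detected")
```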
Data and information visualization is the practice of designing and creating easy-to-communicate and easy-to-understand graphic or visual representations of a large amount of complex quantitative and qualitative data and information with the help of static, dynamic or interactive visual items. Typically based on data and information collected from a certain domain of expertise, these visualizations are intended for a broader audience to help them visually explore and discover, quickly understand, interpret and gain important insights into otherwise difficult-to-identify structures, relationships, correlations, local and global patterns, trends, variations, constancy, clusters, outliers and unusual groupings within data. When intended for the general public to convey a concise version of known, specific information in a clear and engaging manner, it is typically called information graphics.
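As a minimal illustration of visual exploration, the sketch below plots a small synthetic dataset with Matplotlib; even a basic scatter plot makes two clusters and an outlying observation immediately visible. The data and axis labels are assumptions for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Two synthetic clusters plus a single outlying observation
cluster_a = rng.normal([0, 0], 0.5, size=(50, 2))
cluster_b = rng.normal([3, 3], 0.5, size=(50, 2))
outlier = np.array([[6.0, 0.0]])
points = np.vstack([cluster_a, cluster_b, outlier])

# A simple scatter plot already reveals the clusters and the outlier
plt.scatter(points[:, 0], points[:, 1])
plt.xlabel("feature 1")
plt.ylabel("feature 2")
plt.title("Exploratory view of a two-dimensional dataset")
plt.show()
```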
In data analysis, anomaly detection is generally understood to be the identification of rare items, events or observations which deviate significantly from the majority of the data and do not conform to a well defined notion of normal behavior. Such examples may arouse suspicions of being generated by a different mechanism, or appear inconsistent with the remainder of that set of data.
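A minimal example of this idea is a z-score rule that flags observations far from the sample mean; the three-standard-deviation threshold below is a common heuristic rather than a fixed standard, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mostly "normal" measurements plus a few injected aberrant values
values = np.concatenate([rng.normal(10.0, 0.5, size=200), [35.0, -20.0]])

# Flag observations more than three standard deviations from the mean
z_scores = (values - values.mean()) / values.std()
print("anomalies:", values[np.abs(z_scores) > 3])
```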
Data are a collection of discrete or continuous values that convey information, describing the quantity, quality, fact, statistics, other basic units of meaning, or simply sequences of symbols that may be further interpreted formally. A datum is an individual value in a collection of data. Data are usually organized into structures such as tables that provide additional context and meaning, and may themselves be used as data in larger structures. Data may be used as variables in a computational process. Data may represent abstract ideas or concrete measurements. Data are commonly used in scientific research, economics, and virtually every other form of human organizational activity. Examples of data sets include price indices, unemployment rates, literacy rates, and census data. In this context, data represent the raw facts and figures from which useful information can be extracted.
Computational thinking (CT) refers to the thought processes involved in formulating problems so their solutions can be represented as computational steps and algorithms. In education, CT is a set of problem-solving methods that involve expressing problems and their solutions in ways that a computer could also execute. It involves automation of processes, but also using computing to explore, analyze, and understand processes.
Computer science education or computing education is the field of teaching and learning the discipline of computer science, and computational thinking. The field of computer science education encompasses a wide range of topics, from basic programming skills to advanced algorithm design and data analysis. It is a rapidly growing field that is essential to preparing students for careers in the technology industry and other fields that require computational skills.
Informatics is the study of computational systems. According to the ACM Europe Council and Informatics Europe, informatics is synonymous with computer science and computing as a profession, in which the central notion is transformation of information. In some cases, the term "informatics" may also be used with different meanings, e.g. in the context of social computing, or in context of library science.
Emotion recognition is the process of identifying human emotion. People vary widely in their accuracy at recognizing the emotions of others. Use of technology to help people with emotion recognition is a relatively nascent research area. Generally, the technology works best if it uses multiple modalities in context. To date, the most work has been conducted on automating the recognition of facial expressions from video, spoken expressions from audio, written expressions from text, and physiology as measured by wearables.
A Master of Science in Business Analytics (MSBA) is an interdisciplinary STEM graduate professional degree that blends concepts from data science, computer science, statistics, business intelligence, and information theory geared towards commercial applications. Students generally come from a variety of backgrounds including computer science, engineering, mathematics, economics, and business. University programs mandate coding proficiency in at least one language. The languages most commonly used include R, Python, SAS, and SQL. Applicants generally have technical proficiency before starting the program. Analytics concentrations in MBA programs are less technical and focus on developing working knowledge of statistical applications rather than proficiency.
A Master of Science in Data Science is an interdisciplinary degree program designed to provide studies in scientific methods, processes, and systems to extract knowledge or insights from data in various forms, either structured or unstructured, similar to data mining.
Learning Engineering is the systematic application of evidence-based principles and methods from educational technology and the learning sciences to create engaging and effective learning experiences, support learners through the difficulties and challenges they face as they learn, and come to better understand learners and learning. It emphasizes the use of a human-centered design approach in conjunction with analyses of rich data sets to iteratively develop and improve those designs to address specific learning needs, opportunities, and problems, often with the help of technology. Working with subject-matter and other experts, the Learning Engineer deftly combines knowledge, tools, and techniques from a variety of technical, pedagogical, empirical, and design-based disciplines to create effective and engaging learning experiences and environments and to evaluate the resulting outcomes. While doing so, the Learning Engineer strives to generate processes and theories that afford generalization of best practices, along with new tools and infrastructures that empower others to create their own learning designs based on those best practices.
Biomedical data science is a multidisciplinary field which leverages large volumes of data to promote biomedical innovation and discovery. Biomedical data science draws from various fields including Biostatistics, Biomedical informatics, and machine learning, with the goal of understanding biological and medical data. It can be viewed as the study and application of data science to solve biomedical problems. Modern biomedical datasets often have specific features which make their analyses difficult.
Kai Shu is a computer scientist, academic, and author. He is an assistant professor at Emory University.