Engineering statistics

Engineering statistics combines engineering and statistics, applying scientific methods to the analysis of data. It deals with data from manufacturing processes, such as component dimensions, tolerances, materials, and fabrication process control. Many methods are used in engineering analysis, and results are often displayed as histograms to give a visual summary of the data rather than a purely numerical one. Examples of such methods include: [1] [2] [3] [4] [5] [6]

  1. Design of Experiments (DOE) is a methodology for formulating scientific and engineering problems using statistical models. The protocol specifies a randomization procedure for the experiment and specifies the primary data analysis, particularly in hypothesis testing. In a secondary analysis, the statistical analyst further examines the data to suggest other questions and to help plan future experiments. In engineering applications, the goal is often to optimize a process or product rather than to test a scientific hypothesis for its predictive adequacy. [1] [2] [3] The use of optimal (or near-optimal) designs reduces the cost of experimentation. [2] [7]
  2. Quality control and process control use statistics as a tool to manage conformance to specifications of manufacturing processes and their products. [1] [2] [3]
  3. Time and methods engineering use statistics to study repetitive operations in manufacturing in order to set standards and find optimum (in some sense) manufacturing procedures.
  4. Reliability engineering measures the ability of a system to perform its intended function over its intended lifetime and provides tools for improving performance. [2] [8] [9] [10]
  5. Probabilistic design applies probability in product and system design.
  6. System identification uses statistical methods to build mathematical models of dynamical systems from measured data; it also includes the optimal design of experiments for efficiently generating informative data for fitting such models (a minimal sketch follows this list). [11] [12]
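
As an illustration of item 6, below is a minimal sketch of system identification by least squares. It assumes a hypothetical first-order ARX model y[k] = a*y[k-1] + b*u[k-1] + noise; the simulated data, model order, and parameter values are illustrative, not drawn from the cited sources.

```python
import numpy as np

# Simulate "measured" input/output data from an assumed first-order system
# (parameter values are illustrative).
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(200)            # input signal
y = np.zeros(200)
for k in range(1, 200):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.05 * rng.standard_normal()

# Least-squares fit of the ARX(1) model y[k] = a*y[k-1] + b*u[k-1].
X = np.column_stack([y[:-1], u[:-1]])   # regressor matrix
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(f"estimated a = {a_hat:.3f}, b = {b_hat:.3f}")   # should be near 0.8 and 0.5
```

In practice, the choice of model structure and of the input signal (the experiment design) determines how informative the measured data are for the parameters of interest.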

History

Engineering statistics dates back to around 1000 B.C., when the abacus was developed as a means of calculating numerical data. In the 1600s, the development of information processing to systematically analyze and process data began. In 1654, Robert Bissaker developed the slide rule for advanced data calculations. In 1833, the British mathematician Charles Babbage proposed the idea of an automatic computer, which inspired developers at Harvard University and IBM to build the first automatic sequence-controlled calculator, the Mark I. The integration of computers and calculators into industry brought about a more efficient means of analyzing data and marked the beginning of engineering statistics. [13] [6] [14]

Examples

Factorial Experimental Design

A factorial experiment is one in which, contrary to the standard experimental practice of changing only one independent variable while holding everything else constant, multiple independent variables are varied at the same time. With this design, engineers can estimate both the direct effect of each independent variable (its main effect) and the interaction effects that arise when several independent variables together produce a different result than any would on its own.
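
As a rough sketch of how such effects are computed, consider a two-level, two-factor (2x2) experiment; the response values below are purely illustrative.

```python
import numpy as np

# Mean responses of a hypothetical 2x2 factorial experiment:
# rows = factor A at low/high level, columns = factor B at low/high level.
y = np.array([[10.0, 14.0],    # A low : B low, B high
              [12.0, 22.0]])   # A high: B low, B high

# Main effect of A: average change in response when A goes from low to high.
main_a = y[1, :].mean() - y[0, :].mean()
# Main effect of B: average change in response when B goes from low to high.
main_b = y[:, 1].mean() - y[:, 0].mean()
# Interaction: half the difference between B's effect at high A and at low A.
interaction_ab = ((y[1, 1] - y[1, 0]) - (y[0, 1] - y[0, 0])) / 2

print(f"main effect A = {main_a}, main effect B = {main_b}, interaction AB = {interaction_ab}")
```

Here the effect of B is larger when A is at its high level, which is exactly the kind of interaction a one-factor-at-a-time experiment would miss.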

Six Sigma

Six Sigma is a set of techniques for improving the reliability of a manufacturing process. Ideally, every product would have exactly the desired specifications, but the countless imperfections of real-world manufacturing make this impossible. The as-built specifications of a product are assumed to be centered around a mean, with each individual product deviating from that mean according to a normal distribution. The goal of Six Sigma is to ensure that the acceptable specification limits lie six standard deviations away from the mean of the distribution; under the conventional allowance for the process mean to drift by up to 1.5 standard deviations, this means each step of the manufacturing process has at most a 0.00034% chance (about 3.4 per million) of producing a defect.
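
The quoted figures can be checked with a short calculation, assuming a normal distribution; the 1.5-standard-deviation shift used below is the conventional allowance for long-term drift of the process mean in Six Sigma accounting.

```python
from scipy.stats import norm

# One-sided probability of exceeding a specification limit placed 6 sigma from the mean.
p_no_shift = norm.sf(6)            # about 1e-9 (roughly one in a billion)

# Conventional Six Sigma figure: the mean is allowed to drift 1.5 sigma toward one
# limit, leaving 4.5 sigma of margin on that side.
p_shifted = norm.sf(6 - 1.5)       # about 3.4e-6, i.e. 3.4 per million (0.00034%)

print(f"no shift: {p_no_shift:.2e}, with 1.5-sigma shift: {p_shifted:.2e}")
```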

Notes

  1. Box, G. E. P.; Hunter, W. G.; Hunter, J. S. (2005). Statistics for Experimenters: Design, Innovation, and Discovery (2nd ed.). Wiley. ISBN 0-471-71813-0.
  2. Wu, C. F. Jeff; Hamada, Michael (2002). Experiments: Planning, Analysis, and Parameter Design Optimization. Wiley. ISBN 0-471-25511-4.
  3. Logothetis, N.; Wynn, H. P. (1989). Quality Through Design: Experimental Design, Off-line Quality Control, and Taguchi's Contributions. Oxford University Press. ISBN 0-19-851993-1.
  4. Hogg, Robert V.; Ledolter, J. (1992). Applied Statistics for Engineers and Physical Scientists. New York: Macmillan.
  5. Walpole, Ronald; Myers, Raymond; Ye, Keying (2002). Probability and Statistics for Engineers and Scientists (7th ed.). Pearson Education. p. 237.
  6. Rao, Singiresu (2002). Applied Numerical Methods for Engineers and Scientists. Upper Saddle River, New Jersey: Prentice Hall. ISBN 013089480X.
  7. Atkinson, A. C.; Donev, A. N.; Tobias, R. D. (2007). Optimum Experimental Designs, with SAS. Oxford University Press. pp. 511+xvi. ISBN 978-0-19-929660-6.
  8. Barlow, Richard E. (1998). Engineering Reliability. ASA-SIAM Series on Statistics and Applied Probability. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA; American Statistical Association, Alexandria, VA. pp. xx+199. ISBN 0-89871-405-2. MR 1621421.
  9. Nelson, Wayne B. (2004). Accelerated Testing: Statistical Models, Test Plans, and Data Analysis. New York: John Wiley & Sons. ISBN 0-471-69736-2.
  10. Wynn
  11. Goodwin, Graham C.; Payne, Robert L. (1977). Dynamic System Identification: Experiment Design and Data Analysis. Academic Press. ISBN 0-12-289750-1.
  12. Walter, Éric; Pronzato, Luc (1997). Identification of Parametric Models from Experimental Data. Springer.
  13. The Editors of Encyclopaedia Britannica. "Slide Rule". Encyclopaedia Britannica. Encyclopaedia Britannica Inc. Retrieved 17 April 2018.
  14. Montgomery, Douglas; Runger, George; Hubele, Norma. Engineering Statistics (5th ed.). ISBN 0470631473.

Related Research Articles

Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures used to analyze the differences among means. ANOVA was developed by the statistician Ronald Fisher. ANOVA is based on the law of total variance, where the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether two or more population means are equal, and therefore generalizes the t-test beyond two means.
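
A minimal one-way ANOVA sketch follows, assuming three small illustrative samples (say, the same dimension measured on parts from three machines); the numbers are made up for illustration.

```python
from scipy.stats import f_oneway

# Illustrative measurements of the same dimension from three machines.
machine_a = [2.1, 2.4, 2.2, 2.3, 2.5]
machine_b = [2.6, 2.7, 2.5, 2.8, 2.6]
machine_c = [2.2, 2.1, 2.3, 2.2, 2.4]

# One-way ANOVA: the F statistic compares between-group to within-group variation.
f_stat, p_value = f_oneway(machine_a, machine_b, machine_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # a small p-value suggests the means differ
```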

Design of experiments Design of tasks aimed at describing and explaining variation

The design of experiments is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation.

Statistics Study of the collection, analysis, interpretation, and presentation of data

Statistics is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.

The theory of statistics provides a basis for the whole range of techniques, in both study design and data analysis, that are used within applications of statistics. The theory covers approaches to statistical-decision problems and to statistical inference, and the actions and deductions that satisfy the basic principles stated for these different approaches. Within a given approach, statistical theory gives ways of comparing statistical procedures; it can find a best possible procedure within a given context for given statistical problems, or can provide guidance on the choice between alternative procedures.

In statistics, a unit is one member of a set of entities being studied. It is the main source for the mathematical abstraction of a "random variable". Common examples of a unit would be a single person, animal, plant, manufactured item, or country that belongs to a larger collection of such entities being studied.

Chemometrics is the science of extracting information from chemical systems by data-driven means. Chemometrics is inherently interdisciplinary, using methods frequently employed in core data-analytic disciplines such as multivariate statistics, applied mathematics, and computer science, in order to address problems in chemistry, biochemistry, medicine, biology and chemical engineering. In this way, it mirrors other interdisciplinary fields, such as psychometrics and econometrics.

Taguchi methods are statistical methods, sometimes called robust design methods, developed by Genichi Taguchi to improve the quality of manufactured goods, and more recently also applied to engineering, biotechnology, marketing and advertising. Professional statisticians have welcomed the goals and improvements brought about by Taguchi methods, particularly by Taguchi's development of designs for studying variation, but have criticized the inefficiency of some of Taguchi's proposals.

Statistical process control (SPC) is a method of quality control which employs statistical methods to monitor and control a process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste. SPC can be applied to any process where the "conforming product" output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. Manufacturing lines are a typical example of processes to which SPC is applied.
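
A minimal control-chart sketch is given below, assuming illustrative subgroup means and an assumed (known) short-term standard deviation; real charts usually estimate the limits from rational subgroups using standard chart constants.

```python
import numpy as np

# Illustrative subgroup means from a process, one value per sampling period.
subgroup_means = np.array([10.1, 9.9, 10.2, 10.0, 9.8, 10.3, 10.1, 11.2, 10.0, 9.9])
sigma_short = 0.15   # assumed short-term standard deviation of the subgroup means

center = subgroup_means.mean()
ucl = center + 3 * sigma_short       # upper control limit
lcl = center - 3 * sigma_short       # lower control limit

flagged = np.where((subgroup_means > ucl) | (subgroup_means < lcl))[0]
print(f"center = {center:.2f}, LCL = {lcl:.2f}, UCL = {ucl:.2f}")
print("out-of-control periods:", flagged)    # period 7 (value 11.2) falls above the UCL
```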

Design for Six Sigma (DFSS) is an engineering design and business process management method related to traditional Six Sigma. It is used in many industries, such as finance, marketing, basic engineering, process industries, waste management, and electronics. It is based on the use of statistical tools like linear regression and enables empirical research similar to that performed in other fields, such as social science. While the tools and order used in Six Sigma require a process to be in place and functioning, DFSS has the objective of determining the needs of customers and the business, and driving those needs into the product solution so created. DFSS is relevant for relatively simple items and systems. It is used for product or process design, in contrast with process improvement. Measurement is the most important part of most Six Sigma or DFSS tools, but whereas in Six Sigma measurements are made from an existing process, DFSS focuses on gaining a deep insight into customer needs and using these to inform every design decision and trade-off.

Optimal design

In the design of experiments, optimal designs are a class of experimental designs that are optimal with respect to some statistical criterion. The creation of this field of statistics has been credited to Danish statistician Kirstine Smith.
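
As a small illustration of what "optimal with respect to some statistical criterion" can mean, the sketch below compares two hypothetical four-run designs for a straight-line model on [-1, 1] under the D-criterion (maximize det(X'X), which shrinks the confidence region for the coefficients); the candidate designs are illustrative.

```python
import numpy as np

def d_criterion(x_points):
    """det(X'X) for the straight-line model y = b0 + b1*x at the given design points."""
    X = np.column_stack([np.ones_like(x_points), x_points])
    return np.linalg.det(X.T @ X)

# Two candidate 4-run designs on the interval [-1, 1].
spread_evenly = np.array([-1.0, -1/3, 1/3, 1.0])
endpoints_only = np.array([-1.0, -1.0, 1.0, 1.0])

print(f"evenly spread design:  det = {d_criterion(spread_evenly):.2f}")    # smaller
print(f"endpoints-only design: det = {d_criterion(endpoints_only):.2f}")   # larger, preferred by D
```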

Methods engineering is a subspecialty of industrial engineering and manufacturing engineering concerned with human integration in industrial production processes.

In the statistical theory of the design of experiments, blocking is the arranging of experimental units in groups (blocks) that are similar to one another.
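
A minimal sketch of blocked randomization follows, assuming four hypothetical batches of raw material as blocks; each treatment is randomized independently within every block so that batch-to-batch differences do not bias the treatment comparison.

```python
import random

treatments = ["A", "B", "C"]
blocks = ["batch1", "batch2", "batch3", "batch4"]   # hypothetical groups of similar units

random.seed(1)
plan = {}
for block in blocks:
    order = treatments[:]      # each block receives every treatment once...
    random.shuffle(order)      # ...in an independently randomized order
    plan[block] = order

for block, order in plan.items():
    print(block, order)
```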

The following is a glossary of terms used in the mathematical sciences of statistics and probability.

Response surface methodology

In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables. The method was introduced by George E. P. Box and K. B. Wilson in 1951. The main idea of RSM is to use a sequence of designed experiments to obtain an optimal response. Box and Wilson suggest using a second-degree polynomial model to do this. They acknowledge that this model is only an approximation, but they use it because such a model is easy to estimate and apply, even when little is known about the process.
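
A minimal single-factor sketch of this idea follows, with made-up response data: fit a second-degree polynomial to the observations and take its stationary point as a candidate optimum for the next round of experimentation.

```python
import numpy as np

# Illustrative data: coded process setting x and observed response y.
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
y = np.array([4.1, 5.6, 6.2, 5.9, 4.3])

# Fit the second-degree polynomial y = c2*x**2 + c1*x + c0 by least squares.
c2, c1, c0 = np.polyfit(x, y, deg=2)

# Stationary point of the fitted quadratic: dy/dx = 0  =>  x* = -c1 / (2*c2).
x_star = -c1 / (2 * c2)
print(f"fitted response surface suggests an optimum near x = {x_star:.2f}")
```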

Uncertainty quantification (UQ) is the science of quantitative characterization and reduction of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense.
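
A minimal Monte Carlo sketch of forward uncertainty propagation is shown below, using a toy spring-mass model and assumed input distributions rather than the crash example above: uncertain inputs are sampled, the model is evaluated for each sample, and the output is summarized statistically.

```python
import numpy as np

rng = np.random.default_rng(42)

def natural_frequency(stiffness, mass):
    """Toy model: natural frequency (Hz) of a spring-mass system."""
    return np.sqrt(stiffness / mass) / (2 * np.pi)

# Uncertain inputs: nominal values with assumed manufacturing scatter.
stiffness = rng.normal(loc=1.0e4, scale=3.0e2, size=100_000)   # N/m
mass = rng.normal(loc=2.0, scale=0.05, size=100_000)           # kg

freq = natural_frequency(stiffness, mass)
print(f"mean = {freq.mean():.2f} Hz, std = {freq.std():.3f} Hz")
print("95% interval:", np.percentile(freq, [2.5, 97.5]))
```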

Statistics, in the modern sense of the word, began evolving in the 18th century in response to the novel needs of industrializing sovereign states. The evolution of statistics was, in particular, intimately connected with the development of European states following the peace of Westphalia (1648), and with the development of probability theory, which put statistics on a firm theoretical basis.

In statistics, regression validation is the process of deciding whether the numerical results quantifying hypothesized relationships between variables, obtained from regression analysis, are acceptable as descriptions of the data. The validation process can involve analyzing the goodness of fit of the regression, analyzing whether the regression residuals are random, and checking whether the model's predictive performance deteriorates substantially when applied to data that were not used in model estimation.
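
A minimal sketch of these checks follows, with made-up data: fit a straight line on part of the data, inspect the residuals, and compare the in-sample error with the error on held-out points.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 40)
y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=x.size)   # illustrative data

# Split into a fitting set and a hold-out set.
x_fit, y_fit = x[:30], y[:30]
x_new, y_new = x[30:], y[30:]

# Fit y = b0 + b1*x on the fitting set only.
b1, b0 = np.polyfit(x_fit, y_fit, deg=1)

resid_fit = y_fit - (b0 + b1 * x_fit)
resid_new = y_new - (b0 + b1 * x_new)

# Residuals should look like random noise, and the hold-out error
# should not be much larger than the in-sample error.
print(f"in-sample RMSE = {np.sqrt(np.mean(resid_fit**2)):.3f}")
print(f"hold-out RMSE  = {np.sqrt(np.mean(resid_new**2)):.3f}")
```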

Oscar Kempthorne was a British statistician and geneticist known for his research on randomization-analysis and the design of experiments, which had wide influence on research in agriculture, genetics, and other areas of science.

Software that is used for designing factorial experiments plays an important role in scientific experiments and represents a route to the implementation of design of experiments procedures that derive from statistical and combinatorial theory. In principle, easy-to-use design of experiments (DOE) software should be available to all experimenters to foster use of DOE.

The following outline is provided as an overview of and topical guide to formal science.
