Control chart

[Figure: x̄ chart from a paired x̄ and R chart]
One of the seven basic tools of quality
First described by: Walter A. Shewhart
Purpose: To determine whether a process should undergo a formal examination for quality-related problems

Control charts are graphical plots used in production control to determine whether quality and manufacturing processes are being controlled under stable conditions (ISO 7870-1). [1] The hourly status is arranged on the graph, and the occurrence of abnormalities is judged based on the presence of data that differ from the conventional trend or deviate from the control limit line. Control charts are classified into Shewhart individuals control charts (ISO 7870-2) [2] and cumulative sum (CUSUM) control charts (ISO 7870-4). [3]


Control charts, also known as Shewhart charts (after Walter A. Shewhart) or process-behavior charts, are a statistical process control tool used to determine if a manufacturing or business process is in a state of control. It is more appropriate to say that control charts are the graphical device for statistical process monitoring (SPM). Traditional control charts are mostly designed to monitor process parameters when the underlying form of the process distribution is known. However, more advanced techniques are available in the 21st century by which incoming data streams can be monitored even without any knowledge of the underlying process distributions. Distribution-free control charts are becoming increasingly popular.[ citation needed ]

Overview

If analysis of the control chart indicates that the process is currently under control (i.e., is stable, with variation only coming from sources common to the process), then no corrections or changes to process control parameters are needed or desired. In addition, data from the process can be used to predict the future performance of the process. If the chart indicates that the monitored process is not in control, analysis of the chart can help determine the sources of variation, which, if left uncorrected, will degrade process performance. [4] A process that is stable but operating outside desired (specification) limits (e.g., scrap rates may be in statistical control but above desired limits) needs to be improved through a deliberate effort to understand the causes of current performance and fundamentally improve the process. [5]

The control chart is one of the seven basic tools of quality control. [6] Typically control charts are used for time-series data, also known as continuous data or variable data. They can also be used for data that have logical comparability (i.e., you want to compare samples that were all taken at the same time, or the performance of different individuals); however, the type of chart used to do this requires consideration. [7]

History

The control chart was invented by Walter A. Shewhart working for Bell Labs in the 1920s. [8] The company's engineers had been seeking to improve the reliability of their telephony transmission systems. Because amplifiers and other equipment had to be buried underground, there was a stronger business need to reduce the frequency of failures and repairs. By 1920, the engineers had already realized the importance of reducing variation in a manufacturing process. Moreover, they had realized that continual process-adjustment in reaction to non-conformance actually increased variation and degraded quality. Shewhart framed the problem in terms of common and special causes of variation and, on May 16, 1924, wrote an internal memo introducing the control chart as a tool for distinguishing between the two. Shewhart's boss, George Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it set forth all of the essential principles and considerations which are involved in what we know today as process quality control." [9] Shewhart stressed that bringing a production process into a state of statistical control, where there is only common-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically.

Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes typically produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times. [10]

In 1924 or 1925, Shewhart's innovation came to the attention of W. Edwards Deming, then working at the Hawthorne facility. Deming later worked at the United States Department of Agriculture and became the mathematical advisor to the United States Census Bureau. Over the next half a century, Deming became the foremost champion and proponent of Shewhart's work. After the defeat of Japan at the close of World War II, Deming served as statistical consultant to the Supreme Commander for the Allied Powers. His ensuing involvement in Japanese life, and long career as an industrial consultant there, spread Shewhart's thinking, and the use of the control chart, widely in Japanese manufacturing industry throughout the 1950s and 1960s.

Bonnie Small worked in an Allentown plant in the 1950s, after the transistor went into production. She used Shewhart's methods to improve plant performance in quality control and created up to 5,000 control charts. In 1958, the Western Electric Statistical Quality Control Handbook, drawn from her writings, appeared and led to the use of control charts throughout AT&T. [11]

Chart details

A control chart consists of:

  • Points representing a statistic (e.g., a mean, range, or proportion) of measurements of a quality characteristic in samples taken from the process at different times (i.e., the data)
  • A centre line, drawn at the value of the mean of the statistic computed from all the samples
  • Upper and lower control limits (typically at three standard errors of the statistic above and below the centre line) that indicate the threshold at which the process output is considered statistically "unlikely"

The chart may have other optional features, including:

  • Upper and lower warning limits, drawn as separate lines, typically two standard errors above and below the centre line
  • Division of the chart into zones, with rules governing the frequencies of observations permitted in each zone
  • Annotation with events of interest, as determined by the quality engineer in charge of the process's quality

(n.b., there are several rule sets for detection of signal; this is just one set, sketched in code after the list below. The rule set in use should be clearly stated.)

  1. Any point outside the control limits
  2. A run of 7 points all above or all below the central line - stop the production
    • Quarantine and 100% check
    • Adjust the process.
    • Check 5 consecutive samples.
    • Continue the process.
  3. A run of 7 points up or down - instructions as above
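As an illustration, this rule set can be implemented in a few lines. The following is a minimal Python sketch (the function name and the choice of 7 as the run length are illustrative, not prescribed by any standard):

```python
import numpy as np

def detect_signals(x, centre, lcl, ucl, run_length=7):
    """Flag the three signals listed above for a series of plotted points."""
    signals = []
    for i, v in enumerate(x):
        # Rule 1: any point outside the control limits.
        if v < lcl or v > ucl:
            signals.append((i, "outside control limits"))
        if i >= run_length - 1:
            window = x[i - run_length + 1 : i + 1]
            # Rule 2: a run of `run_length` points all on one side of the centre line.
            if all(w > centre for w in window) or all(w < centre for w in window):
                signals.append((i, "run above/below the central line"))
            # Rule 3: a run of `run_length` points steadily rising or falling.
            diffs = np.diff(window)
            if np.all(diffs > 0) or np.all(diffs < 0):
                signals.append((i, "run up/down"))
    return signals
```

Each returned tuple identifies the point at which a signal fires and which rule fired, so the appropriate instruction from the list above can be applied.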

Chart usage

If the process is in control (and the process statistic is normal), 99.7300% of all the points will fall between the control limits. Any observations outside the limits, or systematic patterns within, suggest the introduction of a new (and likely unanticipated) source of variation, known as a special-cause variation. Since increased variation means increased quality costs, a control chart "signaling" the presence of a special-cause requires immediate investigation.

This makes the control limits very important decision aids. The control limits provide information about the process behavior and have no intrinsic relationship to any specification targets or engineering tolerance. In practice, the process mean (and hence the centre line) may not coincide with the specified value (or target) of the quality characteristic because the process design simply cannot deliver the process characteristic at the desired level.

Control charts omit specification limits or targets because of the tendency of those involved with the process (e.g., machine operators) to focus on performing to specification when in fact the least-cost course of action is to keep process variation as low as possible. Attempting to make a process whose natural centre is not the same as the target perform to target specification increases process variability and increases costs significantly, and is the cause of much inefficiency in operations. Process capability studies do examine the relationship between the natural process limits (the control limits) and specifications, however.

The purpose of control charts is to allow simple detection of events that are indicative of an increase in process variability. [12] This simple decision can be difficult where the process characteristic is continuously varying; the control chart provides statistically objective criteria of change. When a change is detected and considered good, its cause should be identified and possibly become the new way of working; where the change is bad, its cause should be identified and eliminated.

The purpose in adding warning limits or subdividing the control chart into zones is to provide early notification if something is amiss. Instead of immediately launching a process improvement effort to determine whether special causes are present, the Quality Engineer may temporarily increase the rate at which samples are taken from the process output until it is clear that the process is truly in control. Note that with three-sigma limits, common-cause variations result in signals less than once out of every twenty-two points for skewed processes and about once out of every three hundred seventy (1/370.4) points for normally distributed processes. [13] The two-sigma warning levels will be reached about once for every twenty-two (1/21.98) plotted points in normally distributed data. (For example, the means of sufficiently large samples drawn from practically any underlying distribution whose variance exists are normally distributed, according to the Central Limit Theorem.)
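The rates quoted for normally distributed data follow directly from the normal distribution, as a quick check shows (a minimal sketch using SciPy; the figures, not the code, come from the text above):

```python
from scipy.stats import norm

# Two-sided tail probability beyond k standard deviations for a normal variable.
p3 = 2 * (1 - norm.cdf(3))  # ≈ 0.0027: one false alarm per ~370.4 points at 3 sigma
p2 = 2 * (1 - norm.cdf(2))  # ≈ 0.0455: warning level reached about once per ~22 points

print(f"3-sigma: p = {p3:.4f}, average run length = {1 / p3:.1f}")  # 370.4
print(f"2-sigma: p = {p2:.4f}, average run length = {1 / p2:.2f}")  # 21.98
```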

Choice of limits

Shewhart set 3-sigma (3-standard deviation) limits on the following basis:

  • The coarse result of Chebyshev's inequality that, for any probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 1/k²
  • The finer result of the Vysochanskii–Petunin inequality that, for any unimodal probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 4/(9k²)
  • In the normal distribution, a very common probability distribution, 99.7% of the observations occur within three standard deviations of the mean

Shewhart summarized the conclusions by saying:

... the fact that the criterion which we happen to use has a fine ancestry in highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works. As the practical engineer might say, the proof of the pudding is in the eating. [14]

Although he initially experimented with limits based on probability distributions, Shewhart ultimately wrote:

Some of the earliest attempts to characterize a state of statistical control were inspired by the belief that there existed a special form of frequency function f and it was early argued that the normal law characterized such a state. When the normal law was found to be inadequate, then generalized functional forms were tried. Today, however, all hopes of finding a unique functional form f are blasted. [15]

The control chart is intended as a heuristic. Deming insisted that it is not a hypothesis test and is not motivated by the Neyman–Pearson lemma. He contended that the disjoint nature of population and sampling frame in most industrial situations compromised the use of conventional statistical techniques. Deming's intention was to seek insights into the cause system of a process ...under a wide range of unknowable circumstances, future and past....[ citation needed ] He claimed that, under such conditions, 3-sigma limits provided ... a rational and economic guide to minimum economic loss... from the two errors:[ citation needed ]

  1. Ascribe a variation or a mistake to a special cause (assignable cause) when in fact the cause belongs to the system (common cause). (Also known as a Type I error or False Positive)
  2. Ascribe a variation or a mistake to the system (common causes) when in fact the cause was a special cause (assignable cause). (Also known as a Type II error or False Negative)

Calculation of standard deviation

As for the calculation of control limits, the standard deviation (error) required is that of the common-cause variation in the process. Hence, the usual estimator, in terms of sample variance, is not used as this estimates the total squared-error loss from both common- and special-causes of variation.

An alternative method is to use the relationship between the range of a sample and its standard deviation derived by Leonard H. C. Tippett, as an estimator which tends to be less influenced by the extreme observations which typify special-causes. [ citation needed ]
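For an individuals chart, for example, this range-based approach is commonly implemented by dividing the average two-point moving range by the bias-correction constant d2 ≈ 1.128 from standard SPC tables. A minimal sketch (the function name and sample data are illustrative):

```python
import numpy as np

def sigma_from_moving_range(x, d2=1.128):
    """Estimate common-cause sigma from the average two-point moving range.

    d2 = 1.128 is the standard bias-correction constant for subgroups of
    size 2; this estimate is less inflated by special-cause outliers than
    the ordinary sample standard deviation.
    """
    moving_ranges = np.abs(np.diff(x))
    return np.mean(moving_ranges) / d2

x = np.array([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 14.5, 10.1])  # one special-cause spike
print(sigma_from_moving_range(x))  # range-based estimate of common-cause sigma
print(np.std(x, ddof=1))           # ordinary estimate, inflated by the spike
```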

Rules for detecting signals

The most common sets are:

  • The Western Electric rules
  • The Wheeler rules (equivalent to the Western Electric zone tests) [16]
  • The Nelson rules

There has been particular controversy as to how long a run of observations, all on the same side of the centre line, should count as a signal, with 6, 7, 8 and 9 all being advocated by various writers.

The most important principle for choosing a set of rules is that the choice be made before the data is inspected. Choosing rules once the data have been seen tends to increase the Type I error rate owing to testing effects suggested by the data.

Alternative bases

In 1935, the British Standards Institution, under the influence of Egon Pearson and against Shewhart's spirit, adopted control charts, replacing 3-sigma limits with limits based on percentiles of the normal distribution. This move continues to be represented by John Oakland and others but has been widely deprecated by writers in the Shewhart–Deming tradition.

Performance of control charts

When a point falls outside the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred. If one has, it is appropriate to determine if the results with the special cause are better than or worse than results from common causes alone. If worse, then that cause should be eliminated if possible. If better, it may be appropriate to intentionally retain the special cause within the system producing the results.[ citation needed ]

Even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits. So, even an in-control process plotted on a properly constructed control chart will eventually signal the possible presence of a special cause, even though one may not have actually occurred. For a Shewhart control chart using 3-sigma limits, this false alarm occurs on average once every 1/0.0027 or 370.4 observations. Therefore, the in-control average run length (or in-control ARL) of a Shewhart chart is 370.4.[ citation needed ]
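The 370.4 figure is easy to verify by simulation. A minimal sketch (illustrative only) that repeatedly counts how many in-control, normally distributed points are plotted before one falls outside the 3-sigma limits by chance:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_length(mu=0.0, sigma=1.0):
    """Count points until one falls outside the 3-sigma limits by chance."""
    n = 0
    while True:
        n += 1
        if abs(rng.normal(mu, sigma) - mu) > 3 * sigma:
            return n

lengths = [run_length() for _ in range(10_000)]
print(np.mean(lengths))  # ≈ 370 for an in-control, normally distributed process
```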

Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart.[ citation needed ]

It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases. However, for smaller changes (such as a 1- or 2-sigma change in the mean), the Shewhart chart does not detect these changes efficiently. Other types of control charts have been developed, such as the EWMA chart, the CUSUM chart and the real-time contrasts chart, which detect smaller changes more efficiently by making use of information from observations collected prior to the most recent data point. [17]
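As an example of how such charts pool information across observations, the EWMA chart plots the statistic z_i = λx_i + (1 − λ)z_{i−1} against limits that widen toward an asymptote. A minimal sketch following the usual textbook formulation (λ = 0.2 and L = 3 are common design choices, not universal constants):

```python
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """Return the EWMA statistic and its control limits at each point.

    z[i] = lam * x[i] + (1 - lam) * z[i-1], starting from z = mu0; a small
    lam gives a long memory, which is what detects small sustained shifts.
    """
    z = np.empty(len(x))
    prev = mu0
    for i, v in enumerate(x):
        prev = lam * v + (1 - lam) * prev
        z[i] = prev
    t = np.arange(1, len(x) + 1)
    half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu0 - half_width, mu0 + half_width
```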

Many control charts work best for numeric data with Gaussian assumptions. The real-time contrasts chart was proposed to monitor processes with complex characteristics, e.g. high-dimensional data, a mix of numerical and categorical variables, missing values, non-Gaussian distributions, and non-linear relationships. [17]

Criticisms

Several authors have criticised the control chart on the grounds that it violates the likelihood principle.[ citation needed ] However, the principle is itself controversial and supporters of control charts further argue that, in general, it is impossible to specify a likelihood function for a process not in statistical control, especially where knowledge about the cause system of the process is weak.[ citation needed ]

Some authors have criticised the use of average run lengths (ARLs) for comparing control chart performance, because the run length usually follows a geometric distribution, which has high variability, making the average difficult to interpret.[ citation needed ]

Some authors have also criticized most control charts for focusing on numeric data. Nowadays, process data can be much more complex, e.g. non-Gaussian, a mix of numerical and categorical values, or missing values. [17]

Types of charts

Chart | Process observation | Process observations relationships | Process observations type | Size of shift to detect
x̄ and R chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
x̄ and s chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
Shewhart individuals control chart (ImR chart or XmR chart) | Quality characteristic measurement for one observation | Independent | Variables | Large (≥ 1.5σ)
Three-way chart | Quality characteristic measurement within one subgroup | Independent | Variables | Large (≥ 1.5σ)
p-chart | Fraction nonconforming within one subgroup | Independent | Attributes | Large (≥ 1.5σ)
np-chart | Number nonconforming within one subgroup | Independent | Attributes | Large (≥ 1.5σ)
c-chart | Number of nonconformances within one subgroup | Independent | Attributes | Large (≥ 1.5σ)
u-chart | Nonconformances per unit within one subgroup | Independent | Attributes | Large (≥ 1.5σ)
EWMA chart | Exponentially weighted moving average of quality characteristic measurement within one subgroup | Independent | Attributes or variables | Small (< 1.5σ)
CUSUM chart | Cumulative sum of quality characteristic measurement within one subgroup | Independent | Attributes or variables | Small (< 1.5σ)
Time series model | Quality characteristic measurement within one subgroup | Autocorrelated | Attributes or variables | N/A
Regression control chart | Quality characteristic measurement within one subgroup | Dependent of process control variables | Variables | Large (≥ 1.5σ)

Some practitioners also recommend the use of Individuals charts for attribute data, particularly when the assumptions of either binomially distributed data (p- and np-charts) or Poisson-distributed data (u- and c-charts) are violated. [18] Two primary justifications are given for this practice. First, normality is not necessary for statistical control, so the Individuals chart may be used with non-normal data. [19] Second, attribute charts derive the measure of dispersion directly from the mean proportion (by assuming a probability distribution), while Individuals charts derive the measure of dispersion from the data, independent of the mean, making Individuals charts more robust than attributes charts to violations of the assumptions about the distribution of the underlying population. [20] It is sometimes noted that the substitution of the Individuals chart works best for large counts, when the binomial and Poisson distributions approximate a normal distribution, i.e., when the number of trials n > 1000 for p- and np-charts or λ > 500 for u- and c-charts.
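To make the contrast between the two measures of dispersion concrete, here is a minimal sketch (illustrative function names and data) computing 3-sigma limits for the same series of sample fractions both ways: the p-chart derives dispersion from the assumed binomial model, while the individuals (XmR) chart derives it from the data's moving range:

```python
import numpy as np

def p_chart_limits(p, n):
    """p-chart: dispersion comes from the binomial assumption."""
    pbar = np.mean(p)
    se = np.sqrt(pbar * (1 - pbar) / n)
    return pbar - 3 * se, pbar + 3 * se

def xmr_limits(p):
    """XmR chart: dispersion comes from the data, independent of the mean."""
    pbar = np.mean(p)
    sigma = np.mean(np.abs(np.diff(p))) / 1.128  # d2 for subgroups of size 2
    return pbar - 3 * sigma, pbar + 3 * sigma

p = np.array([0.04, 0.06, 0.05, 0.07, 0.05, 0.04, 0.06])  # sample fractions, n = 200 each
print(p_chart_limits(p, n=200))
print(xmr_limits(p))
```

When the binomial assumption holds, the two sets of limits roughly agree; when it is violated, they diverge, which is the basis of the robustness argument above.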

Critics of this approach argue that control charts should not be used when their underlying assumptions are violated, such as when process data is neither normally distributed nor binomially (or Poisson) distributed. Such processes are not in control and should be improved before the application of control charts. Additionally, application of the charts in the presence of such deviations increases the type I and type II error rates of the control charts, and may make the chart of little practical use.[ citation needed ]

Related Research Articles

Standard deviation

In statistics, the standard deviation is a measure of the amount of variation or dispersion of a set of values. A low standard deviation indicates that the values tend to be close to the mean of the set, while a high standard deviation indicates that the values are spread out over a wider range.

W. Edwards Deming

William Edwards Deming was an American business theorist, composer, economist, industrial engineer, management consultant, statistician, and writer. Educated initially as an electrical engineer and later specializing in mathematical physics, he helped develop the sampling techniques still used by the United States Census Bureau and the Bureau of Labor Statistics. He is also known as the father of the quality movement and was hugely influential in post-WWII Japan. He is best known for his theories of management.

Six Sigma (6σ) is a set of techniques and tools for process improvement. It was introduced by American engineer Bill Smith while working at Motorola in 1986.

Walter A. Shewhart

Walter Andrew Shewhart was an American physicist, engineer and statistician, sometimes known as the father of statistical quality control and also related to the Shewhart cycle.

Common and special causes are the two distinct origins of variation in a process, as defined in the statistical thinking and methods of Walter A. Shewhart and W. Edwards Deming. Briefly, "common causes", also called natural patterns, are the usual, historical, quantifiable variation in a system, while "special causes" are unusual, not previously observed, non-quantifiable variation.

Standard error

The standard error (SE) of a statistic is the standard deviation of its sampling distribution or an estimate of that standard deviation. If the statistic is the sample mean, it is called the standard error of the mean (SEM). The standard error is a key ingredient in producing confidence intervals.

Statistical process control (SPC) or statistical quality control (SQC) is the application of statistical methods to monitor and control the quality of a production process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste scrap. SPC can be applied to any process where the "conforming product" output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. An example of a process where SPC is applied is manufacturing lines.

In probability theory and statistics, the coefficient of variation (CV), also known as normalized root-mean-square deviation (NRMSD), percent RMS, and relative standard deviation (RSD), is a standardized measure of dispersion of a probability distribution or frequency distribution. It is defined as the ratio of the standard deviation σ to the mean μ, and is often expressed as a percentage ("%RSD"). The CV or RSD is widely used in analytical chemistry to express the precision and repeatability of an assay. It is also commonly used in fields such as engineering or physics when doing quality assurance studies and ANOVA gauge R&R, by economists and investors in economic models, and in psychology/neuroscience.

The Western Electric rules are decision rules in statistical process control for detecting out-of-control or non-random conditions on control charts. Locations of the observations relative to the control chart control limits and centerline indicate whether the process in question should be investigated for assignable causes. The Western Electric rules were codified by a specially-appointed committee of the manufacturing division of the Western Electric Company and appeared in the first edition of a 1956 handbook, that became a standard text of the field. Their purpose was to ensure that line workers and engineers interpret control charts in a uniform way.

x̄ and R chart

In statistical process control (SPC), the x̄ and R chart is a type of scheme, popularly known as a control chart, used to monitor the mean and range of a normally distributed variable simultaneously, when samples are collected at regular intervals from a business or industrial process. It is often used to monitor variables data, but the performance of the x̄ and R chart may suffer when the normality assumption is not valid.

The process capability is a measurable property of a process relative to its specification, expressed as a process capability index or as a process performance index. The output of this measurement is often illustrated by a histogram and calculations that predict how many parts will be produced out of specification (OOS).

p-chart

In statistical quality control, the p-chart is a type of control chart used to monitor the proportion of nonconforming units in a sample, where the sample proportion nonconforming is defined as the ratio of the number of nonconforming units to the sample size, n.

The process capability index, or process capability ratio, is a statistical measure of process capability: the ability of an engineering process to produce an output within specification limits. The concept of process capability only holds meaning for processes that are in a state of statistical control. This means it cannot account for deviations which are not expected, such as misaligned, damaged, or worn equipment. Process capability indices measure how much "natural variation" a process experiences relative to its specification limits, and allows different processes to be compared to how well an organization controls them. Somewhat counterintuitively, higher index values indicate better performance, with zero indicating high deviation.

Natural process variation, sometimes just called process variation, is the statistical description of natural fluctuations in process outputs.

68–95–99.7 rule

In statistics, the 68–95–99.7 rule, also known as the empirical rule, is a shorthand used to remember the percentage of values that lie within an interval estimate in a normal distribution: 68%, 95%, and 99.7% of the values lie within one, two, and three standard deviations of the mean, respectively.

Shewhart individuals control chart

In statistical quality control, the individual/moving-range chart is a type of control chart used to monitor variables data from a business or industrial process for which it is impractical to use rational subgroups.

x̄ and s chart

In statistical quality control, the x̄ and s chart is a type of control chart used to monitor variables data when samples are collected at regular intervals from a business or industrial process. This is connected to traditional statistical quality control (SQC) and statistical process control (SPC). However, Woodall noted that "I believe that the use of control charts and other monitoring methods should be referred to as 'statistical process monitoring,' not 'statistical process control (SPC).'"

Laboratory quality control is designed to detect, reduce, and correct deficiencies in a laboratory's internal analytical process prior to the release of patient results, in order to improve the quality of the results reported by the laboratory. Quality control (QC) is a measure of precision, or how well the measurement system reproduces the same result over time and under varying operating conditions. Laboratory quality control material is usually run at the beginning of each shift, after an instrument is serviced, when reagent lots are changed, after equipment calibration, and whenever patient results seem inappropriate. Quality control material should approximate the same matrix as patient specimens, taking into account properties such as viscosity, turbidity, composition, and color. It should be simple to use, with minimal vial-to-vial variability, because variability could be misinterpreted as systematic error in the method or instrument. It should be stable for long periods of time, and available in large enough quantities for a single batch to last at least one year. Liquid controls are more convenient than lyophilized (freeze-dried) controls because they do not have to be reconstituted, minimizing pipetting error. Dried Tube Specimen (DTS) is slightly cumbersome as a QC material but it is very low-cost, stable over long periods and efficient, especially useful for resource-restricted settings in under-developed and developing countries. DTS can be manufactured in-house by a laboratory or Blood Bank for its use.

EWMA chart

In statistical quality control, the EWMA chart is a type of control chart used to monitor either variables or attributes-type data using the monitored business or industrial process's entire history of output. While other control charts treat rational subgroups of samples individually, the EWMA chart tracks the exponentially-weighted moving average of all prior sample means. EWMA weights samples in geometrically decreasing order so that the most recent samples are weighted most highly while the most distant samples contribute very little.

References

  1. "Control charts — Part 1: General guidelines". iso.org. Retrieved 2022-12-11.
  2. "Control charts — Part 2: Shewhart control charts". iso.org. Retrieved 2022-12-11.
  3. "Control charts — Part 4: Cumulative sum charts". iso.org. Retrieved 2022-12-11.
  4. McNeese, William (July 2006). "Over-controlling a Process: The Funnel Experiment". BPI Consulting, LLC. Retrieved 2010-03-17.
  5. Wheeler, Donald J. (2000). Understanding Variation. Knoxville, Tennessee: SPC Press. ISBN   978-0-945320-53-1.
  6. Nancy R. Tague (2004). "Seven Basic Quality Tools". The Quality Toolbox. Milwaukee, Wisconsin: American Society for Quality. p. 15. Retrieved 2010-02-05.
  7. A Poots, T Woodcock (2012). "Statistical process control for data without inherent order". BMC Medical Informatics and Decision Making. 12: 86. doi:10.1186/1472-6947-12-86. PMC   3464151 . PMID   22867269.
  8. "Western Electric History". www.porticus.org. Archived from the original on 2011-01-27. Retrieved 2015-03-26.
  9. "Western Electric – A Brief History". Archived from the original on 2008-05-11. Retrieved 2008-03-14.
  10. "Why SPC?" British Deming Association SPC Press, Inc. 1992
  11. Best, M; Neuhauser, D (1 April 2006). "Walter A Shewhart, 1924, and the Hawthorne factory". Quality and Safety in Health Care. 15 (2): 142–143. doi:10.1136/qshc.2006.018093. PMC   2464836 . PMID   16585117.
  12. "Statistical Process Controls for Variable Data". Lean Six Sigma. (n.d.). Retrieved from https://theengineeringarchive.com/sigma/page-variable-control-charts.html.
  13. Wheeler, Donald J. (1 November 2010). "Are You Sure We Don't Need Normally Distributed Data?". Quality Digest. Retrieved 7 December 2010.
  14. Shewhart, W A (1931). Economic Control of Quality of Manufactured Product. Van Nordstrom. p. 18.
  15. Shewhart, Walter Andrew; Deming, William Edwards (1939). Statistical Method from the Viewpoint of Quality Control. University of California: Graduate School, The Department of Agriculture. p. 12. ISBN   9780877710325.
  16. Wheeler, Donald J.; Chambers, David S. (1992). Understanding statistical process control (2 ed.). Knoxville, Tennessee: SPC Press. p. 96. ISBN   978-0-945320-13-5. OCLC   27187772.
  17. Deng, H.; Runger, G.; Tuv, E. (2012). "System monitoring with real-time contrasts". Journal of Quality Technology. 44 (1). pp. 9–27. doi:10.1080/00224065.2012.11917878. S2CID   119835984.
  18. Wheeler, Donald J. (2000). Understanding Variation: the key to managing chaos. SPC Press. p.  140. ISBN   978-0-945320-53-1.
  19. Staufer, Rip. "Some Problems with Attribute Charts". Quality Digest . Retrieved 2 Apr 2010.
  20. Wheeler, Donald J. "What About Charts for Count Data?". Quality Digest . Retrieved 2010-03-23.

Bibliography