Statistical process control


Statistical process control (SPC) is a method of quality control which employs statistical methods to monitor and control a process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste (rework or scrap). SPC can be applied to any process where the "conforming product" (product meeting specifications) output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. Manufacturing lines are a typical example of a process to which SPC is applied.


SPC must be practiced in two phases: the first phase is the initial establishment of the process, and the second phase is the regular production use of the process. In the second phase, the period to be examined must be decided, depending upon changes in the 5M&E conditions (Man, Machine, Material, Method, Movement, Environment) and the wear rate of parts used in the manufacturing process (machine parts, jigs, and fixtures).

An advantage of SPC over other methods of quality control, such as "inspection", is that it emphasizes early detection and prevention of problems, rather than the correction of problems after they have occurred.

In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product. SPC makes it less likely the finished product will need to be reworked or scrapped.

History

Statistical process control was pioneered by Walter A. Shewhart at Bell Laboratories in the early 1920s. Shewhart developed the control chart in 1924 and the concept of a state of statistical control. Statistical control is equivalent to the concept of exchangeability, [1] [2] developed by logician William Ernest Johnson, also in 1924, in his book Logic, Part III: The Logical Foundations of Science. [3] Along with a team at AT&T that included Harold Dodge and Harry Romig, he also worked to put sampling inspection on a rational statistical basis. Shewhart consulted with Colonel Leslie E. Simon on the application of control charts to munitions manufacture at the Army's Picatinny Arsenal in 1934. That successful application helped convince Army Ordnance to engage AT&T's George Edwards to consult on the use of statistical quality control among its divisions and contractors at the outbreak of World War II.

W. Edwards Deming invited Shewhart to speak at the Graduate School of the U.S. Department of Agriculture and served as the editor of Shewhart's book Statistical Method from the Viewpoint of Quality Control (1939), which was the result of that lecture. Deming was an important architect of the quality control short courses that trained American industry in the new techniques during WWII. The graduates of these wartime courses formed a new professional society in 1945, the American Society for Quality Control, which elected Edwards as its first president. Deming travelled to Japan during the Allied Occupation and met with the Union of Japanese Scientists and Engineers (JUSE) in an effort to introduce SPC methods to Japanese industry. [4] [5]

'Common' and 'special' sources of variation

Shewhart read the new statistical theories coming out of Britain, especially the work of William Sealy Gosset, Karl Pearson, and Ronald Fisher. However, he understood that data from physical processes seldom produced a normal distribution curve (that is, a Gaussian distribution or 'bell curve'). He discovered that data from measurements of variation in manufacturing did not always behave the same way as data from measurements of natural phenomena (for example, Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display variation that is natural to the process ("common" sources of variation); these processes he described as being in (statistical) control. Other processes additionally display variation that is not present in the causal system of the process at all times ("special" sources of variation), which Shewhart described as not in control. [6]

Application to non-manufacturing processes

Statistical process control is appropriate to support any repetitive process, and has been implemented in many settings where, for example, ISO 9000 quality management systems are used, including financial auditing and accounting, IT operations, health care processes, and clerical processes such as loan arrangement and administration, and customer billing. Despite criticism of its use in design and development, it is well placed to manage semi-automated data governance of high-volume data processing operations, for example in an enterprise data warehouse or an enterprise data quality management system. [7]

In the 1988 Capability Maturity Model (CMM) the Software Engineering Institute suggested that SPC could be applied to software engineering processes. The Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI) use this concept.

The application of SPC to non-repetitive, knowledge-intensive processes, such as research and development or systems engineering, has encountered skepticism and remains controversial. [8] [9] [10]

In No Silver Bullet, Fred Brooks points out that the complexity, conformance requirements, changeability, and invisibility of software [11] [12] result in inherent and essential variation that cannot be removed. This implies that SPC is less effective in software development than in, for example, manufacturing.

Variation in manufacturing

In manufacturing, quality is defined as conformance to specification. However, no two products or characteristics are ever exactly the same, because any process contains many sources of variability. In mass manufacturing, the quality of a finished article has traditionally been ensured by post-manufacturing inspection of the product: each article (or a sample of articles from a production lot) is accepted or rejected according to how well it meets its design specifications. SPC, by contrast, uses statistical tools to observe the performance of the production process in order to detect significant variations before they result in the production of a sub-standard article. Any source of variation at any point of time in a process will fall into one of two classes.

(1) Common causes
'Common' causes are sometimes referred to as 'non-assignable' or 'normal' sources of variation. The term refers to any source of variation that consistently acts on the process, of which there are typically many. Such causes collectively produce a statistically stable and repeatable distribution over time.
(2) Special causes
'Special' causes are sometimes referred to as 'assignable' sources of variation. The term refers to any factor causing variation that affects only some of the process output. They are often intermittent and unpredictable.

Most processes have many sources of variation; most of them are minor and may be ignored. If the dominant assignable sources of variation are detected, potentially they can be identified and removed. When they are removed, the process is said to be 'stable'. When a process is stable, its variation should remain within a known set of limits. That is, at least, until another assignable source of variation occurs.

For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of cereal. Some boxes will have slightly more than 500 grams, and some will have slightly less. When the package weights are measured, the data will demonstrate a distribution of net weights.

If the production process, its inputs, or its environment (for example, the machine on the line) change, the distribution of the data will change. For example, as the cams and pulleys of the machinery wear, the cereal filling machine may put more than the specified amount of cereal into each box. Although this might benefit the customer, from the manufacturer's point of view it is wasteful, and increases the cost of production. If the manufacturer finds the change and its source in a timely manner, the change can be corrected (for example, the cams and pulleys replaced).

From a Statistical Process Control perspective, if the weight of each cereal box varies randomly, some higher and some lower, always within an acceptable range, then the process is considered stable. If the cams and pulleys of the machinery start to wear out, the weights of the cereal box might not be random. The degraded functionality of the cams and pulleys may lead to a non-random linear pattern of increasing cereal box weights. We call this common cause variation. If, however, all the cereal boxes suddenly weighed much more than average because of an unexpected malfunction of the cams and pulleys, this would be considered a special cause variation.

Application

The application of SPC involves three main phases of activity:

  1. Understanding the process and the specification limits.
  2. Eliminating assignable (special) sources of variation, so that the process is stable.
  3. Monitoring the ongoing production process, assisted by the use of control charts, to detect significant changes of mean or variation.

Control charts

The data from measurements of variations at points on the process map is monitored using control charts. Control charts attempt to differentiate "assignable" ("special") sources of variation from "common" sources. "Common" sources, because they are an expected part of the process, are of much less concern to the manufacturer than "assignable" sources. Using control charts is a continuous activity, ongoing over time.
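
The following is a minimal sketch, in Python, of how the limits of an individuals control chart might be computed and checked. It assumes the hypothetical cereal-filling example above (a series of box weights in grams) and uses the conventional moving-range estimate of short-term variation to set 3-sigma limits; the function names and data are illustrative, not taken from any particular SPC library.

    import numpy as np

    def individuals_chart_limits(weights):
        """Center line and 3-sigma limits estimated from the average
        moving range (the usual I-MR approach)."""
        weights = np.asarray(weights, dtype=float)
        center = weights.mean()
        moving_range = np.abs(np.diff(weights))
        sigma_hat = moving_range.mean() / 1.128   # d2 constant for subgroups of size 2
        return center, center - 3 * sigma_hat, center + 3 * sigma_hat

    def out_of_control_points(weights, lcl, ucl):
        """Indices of points outside the limits (the simplest detection rule)."""
        return [i for i, w in enumerate(weights) if w < lcl or w > ucl]

    rng = np.random.default_rng(0)
    weights = 500 + rng.normal(0, 2, size=50)    # stable filling process, about 500 g
    weights[45] += 12                            # a sudden special-cause shift
    center, lcl, ucl = individuals_chart_limits(weights)
    print(f"CL={center:.1f} g  LCL={lcl:.1f} g  UCL={ucl:.1f} g")
    print("out-of-control points:", out_of_control_points(weights, lcl, ucl))

In practice, points flagged by this or other detection rules (runs, trends, and so on) prompt a search for an assignable cause rather than an immediate adjustment of the process.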

Stable process

When the process does not trigger any of the control chart "detection rules", it is said to be "stable". A process capability analysis may be performed on a stable process to predict the ability of the process to produce "conforming product" in the future.
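
As an illustration of such an analysis, the sketch below computes the common Cp and Cpk capability indices in Python for the cereal-filling example, assuming hypothetical specification limits of 495 g and 510 g. It estimates sigma from the overall sample standard deviation; a formal study would distinguish short-term capability (Cp, Cpk) from overall performance (Pp, Ppk).

    import numpy as np

    def capability_indices(data, lsl, usl):
        """Return (Cp, Cpk) from the sample mean and standard deviation."""
        data = np.asarray(data, dtype=float)
        mu, sigma = data.mean(), data.std(ddof=1)
        cp = (usl - lsl) / (6 * sigma)                 # potential capability
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)    # accounts for process centering
        return cp, cpk

    rng = np.random.default_rng(1)
    sample = 502 + rng.normal(0, 1.5, size=200)        # measurements from a stable process
    cp, cpk = capability_indices(sample, lsl=495, usl=510)
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")           # values of about 1.33 or more are often taken as capable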

A stable process can be demonstrated by a process signature that is free of variances outside of the capability index, where a process signature is the set of plotted points compared with the capability index.

Excessive variations

When the process triggers any of the control chart "detection rules" (or, alternatively, when the process capability is low), other activities may be performed to identify the source of the excessive variation. The tools used in these extra activities include the Ishikawa diagram, designed experiments, and Pareto charts. Designed experiments are a means of objectively quantifying the relative importance (strength) of sources of variation. Once the sources of (special cause) variation are identified, they can be minimized or eliminated. Steps to eliminate a source of variation might include: development of standards, staff training, error-proofing, and changes to the process itself or its inputs.

Process stability metrics

When monitoring many processes with control charts, it is sometimes useful to calculate quantitative measures of the stability of the processes. These metrics can then be used to identify/prioritize the processes that are most in need of corrective actions. These metrics can also be viewed as supplementing the traditional process capability metrics. Several metrics have been proposed, as described in Ramirez and Runger. [13] They are (1) a Stability Ratio which compares the long-term variability to the short-term variability, (2) an ANOVA Test which compares the within-subgroup variation to the between-subgroup variation, and (3) an Instability Ratio which compares the number of subgroups that have one or more violations of the Western Electric rules to the total number of subgroups.
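
A simplified sketch of two of these metrics is given below, assuming data collected in rational subgroups (rows of a two-dimensional array). The stability ratio here compares the observed variance of the subgroup means with the variance predicted from within-subgroup variation, and the instability ratio checks only the simplest Western Electric rule (a subgroup mean beyond 3 sigma); the metrics as published use the full rule set, so this is an approximation for illustration only.

    import numpy as np

    def stability_ratio(subgroups):
        """Observed variance of subgroup means divided by the variance
        predicted from short-term (within-subgroup) variation."""
        means = subgroups.mean(axis=1)
        n = subgroups.shape[1]
        s_within = np.sqrt(subgroups.var(axis=1, ddof=1).mean())
        return means.var(ddof=1) / (s_within ** 2 / n)

    def instability_ratio(subgroups):
        """Fraction of subgroups whose mean falls outside the 3-sigma limits."""
        means = subgroups.mean(axis=1)
        n = subgroups.shape[1]
        s_within = np.sqrt(subgroups.var(axis=1, ddof=1).mean())
        limit = 3 * s_within / np.sqrt(n)
        return float(np.mean(np.abs(means - means.mean()) > limit))

    rng = np.random.default_rng(2)
    data = 500 + rng.normal(0, 2, size=(30, 5))   # 30 subgroups of 5 boxes each
    data[20:] += 5                                # a sustained shift in level
    print(f"stability ratio = {stability_ratio(data):.2f}")
    print(f"instability ratio = {instability_ratio(data):.2f}")

A stability ratio well above 1, or a large instability ratio, would flag the process for investigation ahead of processes whose metrics are close to their stable-process expectations.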

Mathematics of control charts

Digital control charts use logic-based rules that determine "derived values" which signal the need for correction. For example,

derived value = last value + average absolute difference between the last N numbers.
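
A minimal sketch of this rule in Python, using hypothetical readings, might look like the following; the window size N and the data are illustrative only.

    def derived_value(history, n=5):
        """Last value plus the average absolute difference over the last n steps."""
        recent = history[-(n + 1):]                      # need n consecutive differences
        diffs = [abs(b - a) for a, b in zip(recent, recent[1:])]
        return history[-1] + sum(diffs) / len(diffs)

    readings = [500.2, 499.8, 500.5, 500.1, 500.4, 500.0]
    print(round(derived_value(readings, n=5), 2))        # derived value for the next point

Whether such a derived value actually signals the need for correction depends on the specific rule set programmed into the chart.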


Related Research Articles

A quality management system (QMS) is a collection of business processes focused on consistently meeting customer requirements and enhancing their satisfaction. It is aligned with an organization's purpose and strategic direction. It is expressed as the organizational goals and aspirations, policies, processes, documented information, and resources needed to implement and maintain it. Early quality management systems emphasized predictable outcomes of an industrial product production line, using simple statistics and random sampling. By the 20th century, labor inputs were typically the most costly inputs in most industrialized societies, so focus shifted to team cooperation and dynamics, especially the early signaling of problems via a continual improvement cycle. In the 21st century, QMS has tended to converge with sustainability and transparency initiatives, as both investor and customer satisfaction and perceived quality are increasingly tied to these factors. Of QMS regimes, the ISO 9000 family of standards is probably the most widely implemented worldwide – the ISO 19011 audit regime applies to both and deals with quality and sustainability and their integration.

W. Edwards Deming – American professor, author, and consultant

William Edwards Deming was an American engineer, statistician, professor, author, lecturer, and management consultant. Educated initially as an electrical engineer and later specializing in mathematical physics, he helped develop the sampling techniques still used by the U.S. Department of the Census and the Bureau of Labor Statistics.

Six Sigma is a set of techniques and tools for process improvement. It was introduced by American engineer Bill Smith while working at Motorola in 1986. A six sigma process is one in which 99.99966% of all opportunities to produce some feature of a part are statistically expected to be free of defects.

Quality assurance (QA) is a way of preventing mistakes and defects in manufactured products and avoiding problems when delivering products or services to customers, which ISO 9000 defines as "part of quality management focused on providing confidence that quality requirements will be fulfilled". This defect prevention in quality assurance differs subtly from defect detection and rejection in quality control, and has been referred to as a shift left since it focuses on quality earlier in the process.

Walter A. Shewhart – American statistician

Walter Andrew Shewhart was an American physicist, engineer and statistician, sometimes known as the father of statistical quality control and also related to the Shewhart cycle.

Common and special causes are the two distinct origins of variation in a process, as defined in the statistical thinking and methods of Walter A. Shewhart and W. Edwards Deming. Briefly, "common causes", also called natural patterns, are the usual, historical, quantifiable variation in a system, while "special causes" are unusual, not previously observed, non-quantifiable variation.

Control chart – process control tool used to determine whether a manufacturing process is in a state of control

Control charts, also known as Shewhart charts or process-behavior charts, are a statistical process control tool used to determine whether a manufacturing or business process is in a state of control. It is more appropriate to say that control charts are the graphical device for Statistical Process Monitoring (SPM). Traditional control charts are mostly designed to monitor process parameters when the underlying form of the process distribution is known. However, more advanced techniques are available in the 21st century in which incoming data streams can be monitored even without any knowledge of the underlying process distributions. Distribution-free control charts are becoming increasingly popular.

Genichi Taguchi was an engineer and statistician. From the 1950s onwards, Taguchi developed a methodology for applying statistics to improve the quality of manufactured goods. Taguchi methods have been controversial among some conventional Western statisticians, but others have accepted many of the concepts introduced by him as valid extensions to the body of knowledge.

PDCA is an iterative design and management method used in business for the control and continuous improvement of processes and products. It is also known as the Deming circle/cycle/wheel, the Shewhart cycle, the control circle/cycle, or plan–do–study–act (PDSA). Another version of this PDCA cycle is OPDCA. The added "O" stands for observation or as some versions say: "Observe the current condition." This emphasis on observation and current condition has currency with the literature on lean manufacturing and the Toyota Production System. The PDCA cycle, with Ishikawa's changes, can be traced back to S. Mizuno of the Tokyo Institute of Technology in 1959.

In the context of software engineering, software quality refers to two related but distinct notions.

Quality management ensures that an organization, product or service is consistent. It has four main components: quality planning, quality assurance, quality control and quality improvement. Quality management is focused not only on product and service quality, but also on the means to achieve it. Quality management, therefore, uses quality assurance and control of processes as well as products to achieve more consistent quality. Quality control is also part of quality management. What a customer wants and is willing to pay for determines quality. It is a written or unwritten commitment to a known or unknown consumer in the market. Thus, quality can be defined as fitness for intended use or, in other words, how well the product performs its intended function.

Process capability is a measurable property of a process relative to its specification, expressed as a process capability index or as a process performance index. The output of this measurement is often illustrated by a histogram and calculations that predict how many parts will be produced out of specification (OOS).

A measurement systems analysis (MSA) is a thorough assessment of a measurement process, and typically includes a specially designed experiment that seeks to identify the components of variation in that measurement process. Just as processes that produce a product may vary, the process of obtaining measurements and data may also have variation and produce incorrect results. A measurement systems analysis evaluates the test method, measuring instruments, and the entire process of obtaining measurements to ensure the integrity of data used for analysis and to understand the implications of measurement error for decisions made about a product or process. Proper measurement system analysis is critical for producing a consistent product in manufacturing and when left uncontrolled can result in a drift of key parameters and unusable final products. MSA is also an important element of Six Sigma methodology and of other quality management systems. MSA analyzes the collection of equipment, operations, procedures, software and personnel that affects the assignment of a number to a measurement characteristic.

Corrective and preventive action consists of improvements to an organization's processes taken to eliminate causes of non-conformities or other undesirable situations. It is usually a set of actions that an organization is required to take in manufacturing, documentation, procedures, or systems to rectify and eliminate recurring non-conformance. Non-conformance is identified after systematic evaluation and analysis of the root cause of the non-conformance. Non-conformance may be a market complaint or customer complaint, a failure of machinery or of a quality management system, or a misinterpretation of written instructions to carry out work. The corrective and preventive action is designed by a team that includes quality assurance personnel and personnel involved in the actual observation point of non-conformance. It must be systematically implemented and observed for its ability to eliminate further recurrence of such non-conformance. The Eight Disciplines problem-solving method, or 8D framework, can be used as an effective method of structuring a CAPA.

Analyse-it is a statistical analysis add-in for Microsoft Excel. Analyse-it is the successor to Astute, developed in 1992 for Excel 4 and the first statistical analysis add-in for Microsoft Excel. Analyse-it provides a range of standard parametric and non-parametric procedures, including Descriptive statistics, ANOVA, ANCOVA, Mann–Whitney, Wilcoxon, chi-square, correlation, linear regression, logistic regression, polynomial regression and advanced model fitting, principal component analysis, and factor analysis.

Quality engineering is the discipline of engineering concerned with the principles and practice of product and service quality assurance and control. In software development, it is the management, development, operation and maintenance of IT systems and enterprise architectures with a high quality standard.

Shewhart individuals control chart

In statistical quality control, the individual/moving-range chart is a type of control chart used to monitor variables data from a business or industrial process for which it is impractical to use rational subgroups.

In business, engineering, and manufacturing, quality – or high quality – has a pragmatic interpretation as the non-inferiority or superiority of something; it is also defined as being suitable for the intended purpose while satisfying customer expectations. Quality is a perceptual, conditional, and somewhat subjective attribute and may be understood differently by different people. Consumers may focus on the specification quality of a product/service, or how it compares to competitors in the marketplace. Producers might measure the conformance quality, or degree to which the product/service was produced correctly. Support personnel may measure quality in the degree that a product is reliable, maintainable, or sustainable. In such ways, the subjectivity of quality is rendered objective via operational definitions and measured with metrics such as proxy measures.

Process Window Index (PWI) is a statistical measure that quantifies the robustness of a manufacturing process, e.g. one which involves heating and cooling, known as a thermal process. In manufacturing industry, PWI values are used to calibrate the heating and cooling of soldering jobs while baked in a reflow oven.

Laboratory quality control is designed to detect, reduce, and correct deficiencies in a laboratory's internal analytical process prior to the release of patient results, in order to improve the quality of the results reported by the laboratory. Quality control is a measure of precision, or how well the measurement system reproduces the same result over time and under varying operating conditions. Laboratory quality control material is usually run at the beginning of each shift, after an instrument is serviced, when reagent lots are changed, after equipment calibration, and whenever patient results seem inappropriate. Quality control material should approximate the same matrix as patient specimens, taking into account properties such as viscosity, turbidity, composition, and color. It should be simple to use, with minimal vial-to-vial variability, because variability could be misinterpreted as systematic error in the method or instrument. It should be stable for long periods of time, and available in large enough quantities for a single batch to last at least one year. Liquid controls are more convenient than lyophilized (freeze-dried) controls because they do not have to be reconstituted, minimizing pipetting error.

References

  1. Barlow & Irony (1992)
  2. Bergman (2009)
  3. Zabell (1992)
  4. Deming, W. Edwards, Lectures on statistical control of quality., Nippon Kagaku Gijutsu Remmei, 1950
  5. Deming, W. Edwards and Dowd, S. John (translator), Lecture to Japanese Management, Deming Electronic Network Web Site, 1950 (from a Japanese transcript of a lecture by Deming to "80% of Japanese top management", given at the Hotel de Yama at Mt. Hakone in August 1950)
  6. Why SPC?. SPC Press, Inc. British Deming Association. 1992.
  7. Larry English Improving Data Warehouse and Business Information Quality : Methods for Reducing Costs and Increasing Profits 1999
  8. Bob Raczynski and Bill Curtis (2008) Software Data Violate SPC's Underlying Assumptions, IEEE Software, May/June 2008, Vol. 25, No. 3, pp. 49-51
  9. Robert V. Binder (1997) Can a Manufacturing Quality Model Work for Software?, IEEE Software, September/October 1997, pp. 101-105
  10. Raczynski, Bob (February 20, 2009). "Is Statistical Process Control Applicable to Software Development Processes?". StickyMinds.
  11. Brooks, F. P., Jr. (1987). "No Silver Bullet—Essence and Accidents of Software Engineering" (PDF). Computer. 20 (4): 10–19. CiteSeerX 10.1.1.117.315. doi:10.1109/MC.1987.1663532.
  12. Fred P. Brooks (1986) No Silver Bullet — Essence and Accident in Software Engineering, Proceedings of the IFIP Tenth World Computing Conference 1986, pp. 1069–1076
  13. Ramirez, B.; Runger, G. (2006). "Quantitative Techniques to Evaluate Process Stability". Quality Engineering. 18 (1). pp. 53–68. doi:10.1080/08982110500403581.
