Process capability index

In process improvement efforts, the process capability index or process capability ratio is a statistical measure of process capability: the ability of a process to produce output within specification limits. [1] The concept of process capability only holds meaning for processes that are in a state of statistical control. Process capability indices measure how much "natural variation" a process experiences relative to its specification limits and allow different processes to be compared with respect to how well an organization controls them.

Example for non-specialists

A company produces axles with a nominal diameter of 20 mm on a lathe. As no axle can be made to exactly 20 mm, the designer specifies the maximum admissible deviations (called tolerances or specification limits). For instance, the requirement could be that axles need to be between 19.9 and 20.2 mm. The process capability index is a measure of how likely it is that a produced axle satisfies this requirement. The index pertains to statistical (natural) variation only: variation that occurs naturally without a specific cause. Errors not addressed include operator errors and play in the lathe's mechanisms, which can result in a wrong or unpredictable tool position. If errors of the latter kinds occur, the process is not in a state of statistical control, and the process capability index is meaningless.

Introduction

If the upper and lower specification limits of the process are USL and LSL, the target process mean is T, the estimated mean of the process is μ̂, and the estimated variability of the process (expressed as a standard deviation) is σ̂, then commonly accepted process capability indices include:

C_p = (USL − LSL) / (6σ̂)
  Estimates what the process is capable of producing if the process mean were to be centered between the specification limits. Assumes process output is approximately normally distributed.

C_p,lower = (μ̂ − LSL) / (3σ̂)
  Estimates process capability for specifications that consist of a lower limit only (for example, strength). Assumes process output is approximately normally distributed.

C_p,upper = (USL − μ̂) / (3σ̂)
  Estimates process capability for specifications that consist of an upper limit only (for example, concentration). Assumes process output is approximately normally distributed.

C_pk = min[ (USL − μ̂) / (3σ̂), (μ̂ − LSL) / (3σ̂) ]
  Estimates what the process is capable of producing, considering that the process mean may not be centered between the specification limits. (If the process mean is not centered, C_p overestimates process capability.) C_pk < 0 if the process mean falls outside of the specification limits. Assumes process output is approximately normally distributed.

C_pm = C_p / sqrt(1 + ((μ̂ − T) / σ̂)²)
  Estimates process capability around a target, T. C_pm is always greater than zero. Assumes process output is approximately normally distributed. C_pm is also known as the Taguchi capability index. [2]

C_pkm = C_pk / sqrt(1 + ((μ̂ − T) / σ̂)²)
  Estimates process capability around a target, T, and accounts for an off-center process mean. Assumes process output is approximately normally distributed.

σ̂ is estimated using the sample standard deviation.
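To make the definitions concrete, here is a minimal sketch in Python (standard library only; the function name capability_indices is illustrative, not part of any standard package) that estimates the four two-sided indices from sample data, assuming the process is in statistical control and approximately normally distributed:

    import statistics
    from math import sqrt

    def capability_indices(data, lsl, usl, target):
        # Estimate the process mean and standard deviation from the sample;
        # the sample standard deviation (n - 1 denominator) plays the role of sigma-hat.
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)

        cp = (usl - lsl) / (6 * sigma)
        cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
        # Penalty factor for running off the target T (used by C_pm and C_pkm).
        off_target = sqrt(1 + ((mu - target) / sigma) ** 2)
        cpm = cp / off_target       # Taguchi capability index
        cpkm = cpk / off_target
        return {"Cp": cp, "Cpk": cpk, "Cpm": cpm, "Cpkm": cpkm}

For example, capability_indices(measurements, lsl=94.0, usl=106.0, target=100.0) would return the index estimates for the worked example later in this article, given a list of measurements.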

Process capability indices are constructed to express more desirable capability with increasingly higher values. Values near or below zero indicate processes operating off target (μ̂ far from T) or with high variation.

Fixing values for minimum "acceptable" process capability targets is a matter of personal opinion, and what consensus exists varies by industry, facility, and the process under consideration. For example, in the automotive industry, the Automotive Industry Action Group sets forth guidelines in the Production Part Approval Process, 4th edition, for recommended Cpk minimum values for critical-to-quality process characteristics. However, these criteria are debatable, and some processes may not be evaluated for capability simply because they have not been properly assessed.

Since process capability is a function of the specification, the process capability index is only as good as the specification itself. For instance, if the specification came from an engineering guideline without considering the function and criticality of the part, a discussion of process capability is of little use; it would be more beneficial to focus on the real risks of having a part borderline out of specification. Taguchi's loss function better illustrates this concept.

At least one academic expert recommends [3] the following:

Recommended minimum process capability:

Situation                                            Two-sided specifications    One-sided specification
Existing process                                     1.33                        1.25
New process                                          1.50                        1.45
Safety or critical parameter for existing process    1.50                        1.45
Safety or critical parameter for new process         1.67                        1.60
Six Sigma quality process                            2.00                        2.00

However, where a process produces a characteristic with a capability index greater than 2.5, the unnecessary precision may be expensive. [4]

Relationship to measures of process fallout

The mapping from process capability indices, such as Cpk, to measures of process fallout is straightforward. Process fallout quantifies how many defects a process produces and is measured in defects per million opportunities (DPMO) or parts per million (PPM). Process yield is the complement of process fallout and, if the process output is approximately normally distributed, is approximately equal to the area under the normal probability density function between the specification limits.

In the short term ("short sigma"), the relationships are:

Cp      Sigma level (σ)    Area under the probability density function    Process yield    Process fallout (DPMO/PPM)
0.33    1                  0.6826894921                                   68.27%           317311
0.67    2                  0.9544997361                                   95.45%           45500
1.00    3                  0.9973002039                                   99.73%           2700
1.33    4                  0.9999366575                                   99.99%           63
1.67    5                  0.9999994267                                   99.9999%         1
2.00    6                  0.9999999980                                   99.9999998%      0.002
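The short-term rows above follow directly from the standard normal distribution: for a centered process the sigma level is 3·Cp, and the yield is the probability mass within ±(sigma level) standard deviations of the mean. A small sketch, using only Python's standard library, reproduces the tabulated areas and fallout figures (up to rounding):

    from math import erf, sqrt

    def normal_cdf(x):
        # Standard normal cumulative distribution function via the error function.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def short_term_yield(sigma_level):
        # Two-sided yield of a centered, normally distributed process.
        return normal_cdf(sigma_level) - normal_cdf(-sigma_level)

    for sigma_level in range(1, 7):          # sigma levels 1..6, i.e. Cp = sigma_level / 3
        y = short_term_yield(sigma_level)
        fallout_ppm = (1.0 - y) * 1_000_000
        print(sigma_level, round(y, 10), round(fallout_ppm, 3))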

In the long term, processes can shift or drift significantly (most control charts are only sensitive to changes of 1.5σ or greater in process output). If the process mean shifts 1.5σ off target (see Six Sigma), the relationships become: [5]

Cp      Adjusted sigma level (σ)    Area under the probability density function    Process yield    Process fallout (DPMO/PPM)
0.33    1                           0.3085375387                                   30.85%           691462
0.67    2                           0.6914624613                                   69.15%           308538
1.00    3                           0.9331927987                                   93.32%           66807
1.33    4                           0.9937903347                                   99.38%           6209
1.67    5                           0.9997673709                                   99.9767%         232.6
2.00    6                           0.9999966023                                   99.99966%        3.40
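These long-term rows can be reproduced with the same approach once the assumed 1.5σ shift is applied: by convention only the tail toward which the mean has drifted is counted, so the yield is approximately Φ(3·Cp − 1.5). A sketch under that assumption, again using only Python's standard library:

    from math import erf, sqrt

    def normal_cdf(x):
        # Standard normal cumulative distribution function via the error function.
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def long_term_yield(cp, shift=1.5):
        # One-sided approximation: only the tail toward the shifted mean is counted.
        return normal_cdf(3.0 * cp - shift)

    # A "Six Sigma" process (Cp = 2.00): yield ~99.99966%, fallout ~3.4 DPMO
    y = long_term_yield(2.00)
    print(round(y, 10), round((1.0 - y) * 1_000_000, 2))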

Because processes can shift or drift significantly in the long term, each process will have its own unique sigma shift value; process capability indices are therefore less applicable in the long term, as they require statistical control.


Example

Consider a quality characteristic with a target of 100.00 μm and upper and lower specification limits of 106.00 μm and 94.00 μm, respectively. If, after carefully monitoring the process for a while, it appears that the process is in control and producing output predictably (as depicted in the run chart below), we can meaningfully estimate its mean and standard deviation.

[Run chart of the process output: ProcessCapabilityExample.svg]

If μ̂ and σ̂ are estimated to be 98.94 μm and 1.03 μm, respectively, then

    C_p   = (USL − LSL) / (6σ̂) = (106.00 − 94.00) / (6 × 1.03) ≈ 1.94
    C_pk  = min[ (USL − μ̂) / (3σ̂), (μ̂ − LSL) / (3σ̂) ]
          = min[ (106.00 − 98.94) / (3 × 1.03), (98.94 − 94.00) / (3 × 1.03) ] ≈ 1.60
    C_pm  = C_p / sqrt(1 + ((98.94 − 100.00) / 1.03)²) ≈ 1.35
    C_pkm = C_pk / sqrt(1 + ((98.94 − 100.00) / 1.03)²) ≈ 1.11

The fact that the process is running off-center (about 1σ below its target) is reflected in the markedly different values for Cp, Cpk, Cpm, and Cpkm.
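For readers who want to check the arithmetic, the following minimal sketch (plain Python, standard library only) reproduces the four estimates from the figures given above:

    from math import sqrt

    usl, lsl, target = 106.00, 94.00, 100.00   # specification limits and target, in micrometres
    mu, sigma = 98.94, 1.03                    # estimated process mean and standard deviation

    cp = (usl - lsl) / (6 * sigma)                                  # ~1.94
    cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))   # ~1.60
    off_target = sqrt(1 + ((mu - target) / sigma) ** 2)             # penalty for missing the target
    cpm = cp / off_target                                           # ~1.35
    cpkm = cpk / off_target                                         # ~1.11

    print(round(cp, 2), round(cpk, 2), round(cpm, 2), round(cpkm, 2))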

See also

Normal distribution
Standard deviation
Student's t-distribution
Mean squared error
Standard score
Six Sigma
Z-test
Control chart
Prediction interval
Tolerance interval
Coefficient of variation
Process (engineering)
Z-factor
Natural process variation
Process performance index
Shewhart individuals control chart
Index of dispersion
Experimental uncertainty analysis
Strictly standardized mean difference
Worst-case distance

References

  1. "What is Process Capability?". NIST/Sematech Engineering Statistics Handbook . National Institute of Standards and Technology . Retrieved 2008-06-22.{{cite web}}: External link in |work= (help)
  2. Boyles, Russell (1991). "The Taguchi Capability Index". Journal of Quality Technology. Vol. 23, no. 1. Milwaukee, Wisconsin: American Society for Quality Control. pp. 17–26. ISSN   0022-4065. OCLC   1800135.
  3. Montgomery, Douglas (2004). Introduction to Statistical Quality Control. New York, New York: John Wiley & Sons, Inc. p. 776. ISBN   978-0-471-65631-9. OCLC   56729567. Archived from the original on 2008-06-20.
  4. Booker, J. M.; Raines, M.; Swift, K. G. (2001). Designing Capable and Reliable Products. Oxford: Butterworth-Heinemann. ISBN   978-0-7506-5076-2. OCLC   47030836.
  5. "Sigma Conversion Calculator | BMGI.org". bmgi.org. Archived from the original on 2016-03-16. Retrieved 2016-03-17.