Quality control - University of Belgrade

Quality control –
history and recent development
V. Jevremović, K. Veljković, H. Elfaghihe
Faculty of Mathematics,
University of Belgrade, Serbia
Introductory remarks
 Quality – from Latin “qualitas”, introduced by Cicero
(106–43 BC); the term originated from the Greek, given by
Plato (428–348 BC)
 It is not possible to cover all the history of such a wide field
 It is not possible to enumerate all of its recent developments
“Few important names in Quality control and
few, old and new ideas”
Working with process or systems
 “Anything that can go wrong will go wrong” –
Murphy’s law
How to nullify Murphy’s law and improve the process
under consideration?
One possibility: statistical control by monitoring and adjustment
Synergistic control: statistical process control and
engineering process control
Speaking statistically – the probability of having an absolutely
stable production process is ZERO
 “Change alone is unchanging”
History - beginnings
 The control chart - invented by Walter A. Shewhart
 How to improve the reliability of telephony
transmission systems?
 Shewhart framed the problem in terms of
common and special causes of variation
 On May 16, 1924, in an internal memo,
Shewhart introduced the control chart as a
tool for distinguishing between the two
causes of variation.
 Shewhart's boss, George Edwards: "Dr.
Shewhart prepared a little memorandum
only about a page in length… all of the
essential principles and considerations which
are involved in what we know today as
process quality control."
Not only control charts but also…
 Shewhart Cycle – a learning and improvement
cycle, combining creative
management thinking with statistical
analysis. This cycle contains four
continuous steps:
Plan, Do, Study and Act.
 These steps (commonly referred to as the
PDSA cycle), Shewhart believed, ultimately
lead to total quality improvement.
From USA…
 Shewhart created the basis for the
control chart and the concept of a state
of statistical control by carefully
designed experiments.
 William Edwards Deming, from 1924
and over the next 50 years, became
the foremost champion and proponent
of Shewhart's work.
… to Japan…
 W. E. Deming was statistical consultant to the Supreme
Commander for the Allied Powers and worked in
Japan, so he spread Shewhart's thinking, and the use
of the control chart, widely in Japanese manufacturing
industry during the 1950s and 1960s.
 “It is not enough to do your best; you must know
what to do, and then do your best.”
 “If you can't describe what you are doing as a
process, you don't know what you're doing.”
… and then in Japan
 In 1974 Dr. Kaoru Ishikawa published a collection of process
improvement tools in his text Guide to Quality
Control. Known around the world as:
 Seven basic tools of SQC
Ishikawa - quotations
 1. "Engineers who pass judgment based on
their experimental data, must know
statistical methods by heart. "
 2. "… quality control and statistical quality
control must be conducted with utmost
care. "
 3. "… by studying quality control, and by
applying QC properly, the irrational
behaviour of industry and society could be corrected."
"Old seven"
Cause–and–effect analysis
Check sheets/tally sheets
Control charts
Histograms
Pareto analysis
Scatter analysis
Stratification
Fishbone diagram
 The fishbone diagram was drawn by a manufacturing
team to try to understand the source of periodic iron
contamination. The team used the six generic
headings to prompt ideas. Layers of branches show
thorough thinking about the causes of the problem.
 For example, under the heading “Machines,” the idea
“materials of construction” shows four kinds of
equipment and then several specific machines.
 Note that some ideas appear in different places.
“Calibration” shows up under “Methods” as a factor in
the analytical procedure, and also under
“Measurement” as a cause of lab error.
Quality control - definition
 Quality control (QC) is a procedure or
set of procedures intended to ensure
that a manufactured product or
performed service adheres to a
defined set of quality criteria or
meets the requirements of the client
or customer.
 S(tatistical) Q(uality) C(ontrol) – a set
of statistical tools
 SQC can be divided into three categories:
 traditional statistical tools
 acceptance sampling
 statistical process control (SPC).
 Descriptive statistics - describing quality
characteristics (mean, range, variance)
 Acceptance sampling - randomly inspecting
a sample of goods and deciding whether to
accept or reject the entire lot.
 Statistical process control - inspecting a
random sample of output from a process
and deciding whether the process falls within
preset specification limits.
 Causes of variation in the quality of a
product or process:
 common causes
 assignable causes.
 Common causes of variation are random
causes that we cannot identify.
 Assignable causes of variation are those
that can be identified and eliminated.
 Control chart - a graph used in SPC that
shows whether a sample of data falls within
the normal range of variation.
 Control chart: central line (CL), upper
(UCL) and lower control limits (LCL).
 Control limits separate common from assignable
causes of variation
 Control charts for variables
monitor characteristics that can be measured
 Control charts for attributes
 monitor characteristics that can be counted
 Control charts for variables:
 X-bar charts monitor the mean or average value
of a product characteristic.
 R-charts monitor the range or dispersion of the
values of a product characteristic.
 S-charts monitor the sample standard deviation
 Control charts for attributes:
 P-charts are used to monitor the proportion of
defects in a sample,
 C-charts are used to monitor the actual number
of defects in a sample.
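The standard 3σ limits for these attribute charts can be sketched in Python; the function names and example values below are illustrative, not taken from the source:

```python
import math

def p_chart_limits(p_bar, n):
    """3-sigma limits for a p-chart monitoring the proportion defective."""
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma

def c_chart_limits(c_bar):
    """3-sigma limits for a c-chart monitoring the count of defects."""
    sigma = math.sqrt(c_bar)  # Poisson counts: variance equals the mean
    return max(0.0, c_bar - 3 * sigma), c_bar + 3 * sigma

# e.g. a 5% average defective rate in samples of 100 items
lcl, ucl = p_chart_limits(p_bar=0.05, n=100)
```

A negative lower limit is clipped to zero, since proportions and counts cannot be negative.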
 Process capability
 the ability of the production process to
meet or exceed preset specifications.
 measured by the process capability index
 the ratio of the specification width to the
width of the process variable.
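As a minimal sketch of that ratio, the common index Cp (the specific index name is an assumption; the source only describes the ratio) divides the specification width by six process standard deviations:

```python
def process_capability(usl, lsl, sigma):
    """Cp: ratio of the specification width to the 6-sigma process spread."""
    return (usl - lsl) / (6 * sigma)

# illustrative values: specification 9.4-10.6, process standard deviation 0.1
cp = process_capability(usl=10.6, lsl=9.4, sigma=0.1)
```

Cp greater than 1 means the process spread fits inside the specification width.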
UCL and LCL are set based on previous knowledge and then
from each sample we calculate the mean value:
X̄ = (x1 + x2 + … + xn) / n
Collect as many subgroups as possible before calculating
control limits. With smaller amounts of data, the X-bar and R
chart may not represent variability of the entire system. The
more subgroups you use in control limit calculations, the more
reliable the analysis. Typically, twenty to twenty-five subgroups
will be used in control limit calculations.
Steps in Making the Xbar and R Charts
Collect the data. It is best to have at least 100
values. Divide the data into subgroups (4 or 5 data points
each).
The data obtained should be from the same grouping of
products produced.
A subgroup should not include data from a different lot
or different process.
Record the data on a data sheet. Design the sheet so
that it is easy to compute the values of X bar and R
for each subgroup
Find the mean value (Xbar) and the range R for each
subgroup, then the overall mean, or X double
bar. Compute the average value of the range (Rbar).
Compute the control limit lines; they are calculated
based on properties of the normal distribution.
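The steps above can be sketched in Python; the constants A2, D3, D4 come from the standard SPC tables, here for subgroups of size 5 (a common choice, assumed for this sketch):

```python
# SPC table constants for subgroup size n = 5
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(subgroups):
    """Compute X-bar and R chart lines from a list of equal-size subgroups."""
    xbars = [sum(g) / len(g) for g in subgroups]
    ranges = [max(g) - min(g) for g in subgroups]
    x_dbar = sum(xbars) / len(xbars)   # overall mean (X double bar)
    r_bar = sum(ranges) / len(ranges)  # average range (Rbar)
    xbar_lines = (x_dbar - A2 * r_bar, x_dbar, x_dbar + A2 * r_bar)  # LCL, CL, UCL
    r_lines = (D3 * r_bar, r_bar, D4 * r_bar)                        # LCL, CL, UCL
    return xbar_lines, r_lines

# illustrative measurements: four subgroups of five values each
data = [[5, 4, 6, 5, 5], [6, 5, 5, 4, 5], [5, 6, 4, 5, 5], [4, 5, 5, 6, 5]]
xbar_lines, r_lines = xbar_r_limits(data)
```

In practice the limits would be computed from the twenty to twenty-five subgroups recommended above, not four.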
 R chart is examined before the Xbar chart;
if the R chart indicates the sample
variability is in statistical control, then
the Xbar chart is examined to determine if
the sample mean is also in statistical control
 If the sample variability is not in statistical
control, then the entire process is judged to
be not in statistical control regardless of
what the Xbar chart indicates
Getting the most
Without a control chart, there is no way to know if the
process has changed or to identify sources of process variation.
Only values out of the limits stop the process – is it
possible to get other information from the charts?
R1 - Any single data point falls outside the 3σ limit from the centerline
R2 - Two out of three consecutive points fall beyond the 2σ limit on the same side of
the centerline
R3 - Four out of five consecutive points fall beyond the 1σ limit on the same side of the centerline
R4 - Nine consecutive points fall on the same side of the centerline
 Rules R1–R4 give justification for investigating
whether assignable causes are present
 Possibility of false positives: Assuming
observations are normally distributed, one
expects Rule R1 to be triggered by chance
one out of every 370 observations on
average. The false alarm rate rises to one
out of every 91.75 observations when
evaluating all four rules
X-bar and R charts applications
 To assess the system’s stability
 To determine if there is a need to
stratify the data.
 To analyze the results of process improvement
 For standardization
Where is the main problem?
Normality assumptions:
 The quality characteristic to be monitored has
normal distribution
 The parameters μ and σ for the random
variable are the same for each unit
 each unit is independent of its predecessors
or successors
Average Run Length (ARL)
 The Average Run Length is the number of
points that, on average, will be plotted on a
control chart before an out-of-control
condition is indicated
 If the process is in control: ARL = 1/α
 If the process is out of control:
ARL = 1/(1 − β)
α - the probability of a Type I error,
β - the probability of a Type II error.
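As a quick numerical check (under the normality assumption), the one-in-370 figure quoted for Rule R1 follows from ARL = 1/α, with α the two-sided 3σ tail probability:

```python
import math

def std_normal_cdf(x):
    """Standard normal CDF expressed through the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Rule R1: a single point beyond the 3-sigma limits on either side
alpha = 2 * (1 - std_normal_cdf(3))  # Type I error probability, ~0.0027
arl0 = 1 / alpha                     # in-control ARL, ~370 observations
```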
CUSUM charts
 CUSUM is short for cumulative sums.
 As measurements are taken, the difference between
each measurement and the benchmark value is
calculated, and this is cumulatively summed up.
 If the process is in control, measurements do not
deviate significantly from the benchmark, so
measurements greater than the benchmark and
those less than the benchmark average each other
out, and the CUSUM value should vary narrowly
around zero.
 If the process is out of control, measurements
will more likely be on one side of the benchmark,
so the CUSUM value will progressively drift away
from zero.
 CUSUM involves the calculation of
a cumulative sum (which is what makes it
"sequential"). Samples from a process xn are
assigned weights wn , and summed as follows:
 S0 = 0
 Sn+1 = max(0, Sn + xn − wn)
 When the value of S exceeds a certain threshold
value, a change in value has been found. The above
formula only detects changes in the positive direction.
When negative changes need to be found as well, the
min operation should be used instead of the max
operation, and this time a change has been found
when the value of S is below the (negative) value of
the threshold value.
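The one-sided scheme above can be sketched in Python; taking the weight w_n as the target plus a fixed slack value is a common convention assumed here, and all parameter values are illustrative:

```python
def cusum_upper(samples, target, slack, threshold):
    """One-sided upper CUSUM: S_{n+1} = max(0, S_n + x_n - w_n),
    with w_n = target + slack; signal when S exceeds the threshold."""
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + x - (target + slack))
        if s > threshold:
            return i  # index where an upward shift is signalled
    return None       # no signal: process stayed in control

# illustrative data: a small upward drift after the fifth observation
data = [10.1, 9.9, 10.0, 9.8, 10.2, 10.6, 10.7, 10.5, 10.8, 10.6]
signal_at = cusum_upper(data, target=10.0, slack=0.2, threshold=1.0)
```

The max(0, …) reset is what keeps the statistic from drifting negative, so it accumulates only evidence of an upward shift; a mirror version with min and a negative threshold detects downward shifts.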
CUSUM charts
 CUSUM and Shewhart charts for Poisson distributed
counts are used when the measurements are counts
of events within a defined environment.
 CUSUM and Shewhart charts for normally distributed
means and standard deviations are used when the
measurements are usually made with a gauge or a
measuring instrument, and are continuous with a
normal distribution.
 CUSUM for binomially distributed proportions based
on the Bernoulli distribution can be used to evaluate
proportions of defective items.
Time series in QC
 univariate time series
nonstationary time series models
 IMA – integrated moving average
 ARIMA(p,d,q)
 multivariate time series
EWMA charts
 Exponentially Weighted Moving Average is a statistic that
averages the data in a way that gives less and less weight to data as they
are further removed in time.
EWMA(t)=λY(t)+(1−λ)EWMA(t−1) ,
EWMA(0) is the mean of historical data (target)
Y(t) is the observation at time t
n is the number of observations to be monitored including EWMA(0)
0<λ≤1 is a constant that determines the depth of memory of the EWMA.
The equation is due to Roberts (1959).
The EWMA chart is sensitive to small shifts in the process
mean, but does not match the ability of Shewhart-style charts to detect
larger shifts
EWMA control procedure can be made sensitive to a small or gradual drift in
the process, whereas the Shewhart control procedure can only react when
the last data point is outside a control limit.
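The recursion EWMA(t) = λY(t) + (1 − λ)EWMA(t − 1) is straightforward to sketch; λ = 0.2 is an illustrative choice:

```python
def ewma(observations, target, lam=0.2):
    """EWMA(t) = lam * Y(t) + (1 - lam) * EWMA(t - 1)."""
    values = [target]  # EWMA(0): mean of historical data (the target)
    for y in observations:
        values.append(lam * y + (1 - lam) * values[-1])
    return values

# a single observation of 12.0 against a target of 10.0
series = ewma([12.0], target=10.0, lam=0.2)
```

A small λ gives a long memory (slow, smooth response), which is what makes the chart sensitive to small sustained shifts.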
 Generalization of the run rules for the
Shewhart control charts
- a signal occurs when m successive
observations exceed the k1·σ control limit
- a signal occurs when (m−1) out of m
successive observations exceed the
k2·σ control limit
 The Frechet control chart
 A new control chart based on the Frechet distance
for monitoring simultaneously the
process level and spread
 The Frechet distance, in the particular case
when the distributions are closed with
respect to changes of location and
scale, between random variables X and Y:
D = sqrt((mx − my)^2 + (sx − sy)^2)
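A minimal sketch of that distance, where the arguments are the location and scale parameters of the two distributions (parameter names are illustrative):

```python
import math

def frechet_distance(mean_x, std_x, mean_y, std_y):
    """D = sqrt((m_x - m_y)^2 + (s_x - s_y)^2) for location-scale families."""
    return math.sqrt((mean_x - mean_y) ** 2 + (std_x - std_y) ** 2)

d = frechet_distance(3.0, 4.0, 0.0, 0.0)
```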
 Economic design of control charts
 how to determine the sample size, the
interval between samples, and the
control limits that will yield
approximately maximum average net income
 Different approaches
One important name in SQC
 Genichi Taguchi (1924 - 2012), engineer
and statistician
 From the 1950s onwards, Taguchi
developed a methodology for applying
statistics to improve the quality of
manufactured goods.
 Taguchi methods have been controversial
among some conventional Western
statisticians, but others have accepted
many of the concepts introduced by Taguchi.
Taguchi loss function
 Taguchi Loss Function - includes
assessing economic loss from a
deviation in quality without having
to develop the unique function for
each quality characteristic.
 TLF for one piece of product
 TLF for a sample
 TLF for one piece:
Loss =
Constant*(quality characteristic – target value)^2
 TLF for a sample is:
Loss =
Constant*(standard deviation^2+ (process mean–target value) ^2)
‘Constant’ is the coefficient of the Taguchi Loss, or the ratio of
functional tolerance and customer loss. Functional tolerance is
the value at which 50 percent of the customers view the
product as defective, and customer loss is the average loss to
the customer at this point.
‘Quality characteristic’ indicates the actual value of the quality
characteristic of the product.
‘Target value’ is the specified ideal value for this quality
characteristic.
The upper and lower specification limits (USL and LSL,
respectively) bound the acceptable range of the characteristic.
The amount of loss is minimum for the target (or nominal value of a part)
and as you deviate from the target the amount of loss increases even if you
are within the specified limits of the process.
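Both loss formulas can be sketched directly; k stands for the ‘Constant’ coefficient described above, and all numeric values are illustrative:

```python
def taguchi_loss_piece(k, value, target):
    """Loss for one piece: k * (quality characteristic - target)^2."""
    return k * (value - target) ** 2

def taguchi_loss_sample(k, mean, std, target):
    """Average loss for a sample: k * (std^2 + (mean - target)^2)."""
    return k * (std ** 2 + (mean - target) ** 2)

piece_loss = taguchi_loss_piece(k=2.0, value=10.5, target=10.0)
sample_loss = taguchi_loss_sample(k=2.0, mean=10.2, std=0.1, target=10.0)
```

Note that the loss is zero only exactly at the target and grows quadratically with the deviation, even inside the specification limits.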
Some other loss functions: Asymmetric loss functions
 Statistical Control by Monitoring and
Adjustment, George E.P. Box, A. Luceno, M.
del Carmen Paniagua-Quinones
 Frontiers in Statistical Quality Control,
series of books, editors: H.-J. Lenz, P.-Th. Wilrich
 And a lot of internet sites 