Assessment of process stability and capability in a manufacturing organization: a case study

Quality is considered the principal factor that determines the long-term success or failure of any organization. Organizations perform quality control by monitoring process output using Statistical Quality Control, performed as part of the production process (Statistical Process Control, SPC) or as a final quality control check (acceptance sampling). SPC is a major quality management statistical tool and its instruments (control charts and capability analysis) are applied to virtually any type of organization (manufacturing, services or transactions, for example those involving data, communications, software, or movement of materials). The aim of this paper is to present a case study, realized in a manufacturing organization from Sibiu, for a new product used in the automotive industry, to check its conformance to designed requirements. The output data were analyzed using the statistical analysis software Minitab.

Corresponding author: carmen.simion@ulbsibiu.ro
© The Authors, published by EDP Sciences. This is an open access article distributed under the terms of the Creative Commons Attribution License 4.0 (http://creativecommons.org/licenses/by/4.0/). MATEC Web of Conferences 343, 05011 (2021) https://doi.org/10.1051/matecconf/202134305011 MSE 2021

1 Fundamentals of statistical process control

1.1 Process variation
Quality is considered as the principal factor that determines the long-term success or failure of any organization and control is the third principal managerial function that is vital for an organization to produce outputs (products or services) of appropriate quality.
Control is the continuing process of evaluating performance (actual results), comparing the performance to goals or standards that an organization seeks to attain (reflected by measurable quality characteristics) and taking corrective actions when necessary [1,2].
Any production process contains different sources of variation, such as variations in the inputs (materials, tools), in the conversion/manufacturing process (machines, operators, methods, environment) and in the measurement of the outputs (measurement instruments, human inspection performance), determined by a multitude of factors.
The factors that are inherent to any process and are always present are called common (normal, random, unassignable) causes of variation. Even if these causes cannot be predicted individually, their combined effect can be stable and described quite accurately using different probability distributions.
Common causes generally account for about 85% of the observed variation in a production process; the remaining 15% is the result of special (abnormal) causes. These causes, often called assignable causes of variation, are not a natural part of a process and arise from external sources.
Although their overall contribution to the variation is small, any special cause may generate a substantial amount of variation and, as a consequence, the process output may be nonconforming with requirements.
If the result of a process is influenced only by random causes, it is said that the process is stable (in statistical control or simply in control) and no change of process is necessary. When assignable causes occur, the process is said to be out of control and these causes must be identified and eliminated [1][2][3].
In many industries, organizations monitor their process output using Statistical Quality Control (SQC), a well-established quality management tool that involves the use of statistical tools and techniques to monitor and maintain product quality. SQC is performed as part of the production process (Statistical Process Control, SPC) or as a final quality control check (statistical product control, i.e. acceptance sampling) [4,5].

Control charts
A basic tool of SPC is the control chart, a graphical representation of certain descriptive statistics calculated for the quality characteristics of product, that are sampling results taken over time from the process.
In practice, different types of control charts are used, falling into one of two categories: control charts by variables for continuous data and control charts by attributes for discrete data. Control charts by variables are composed of two graphs: one graph monitors the process location and the other monitors the process variation. Attribute control charts are composed of only one graph that monitors sample-to-sample variations in terms of the percent or number of nonconforming items (defective products or defects).
The sample statistics, specific to each type of control chart, are plotted as points on the charts and compared to their "in-control" sampling distributions.
Each control chart has a center line and two control limits (a lower and an upper control limit) that aid in the decision-making process. If there are no points beyond the control limits (the most common decision rule), no trends up or down, no runs above or below the center line and no patterns in the data, the process is said to be stable or in statistical control [5][6][7].
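As an illustration of how the center line and control limits are obtained, the following minimal sketch computes Xbar-R chart limits from subgroup data. The constants A2, D3, D4 are the standard control-chart table values for subgroups of size n = 2; the measurement data are hypothetical, not the paper's.

```python
# Standard control-chart constants for subgroup size n = 2 (from the usual
# SPC tables); these are assumptions of this sketch, not values from the paper.
A2, D3, D4 = 1.880, 0.0, 3.267

def xbar_r_limits(subgroups):
    """Return ((LCL_x, CL_x, UCL_x), (LCL_r, CL_r, UCL_r)) for an Xbar-R chart."""
    xbars = [sum(s) / len(s) for s in subgroups]   # subgroup means
    ranges = [max(s) - min(s) for s in subgroups]  # subgroup ranges
    xbarbar = sum(xbars) / len(xbars)              # grand mean (center line, Xbar chart)
    rbar = sum(ranges) / len(ranges)               # mean range (center line, R chart)
    xbar_limits = (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar)
    r_limits = (D3 * rbar, rbar, D4 * rbar)
    return xbar_limits, r_limits

# Hypothetical subgroups of 2 length measurements (mm):
data = [(964.8, 965.2), (965.1, 964.9), (965.0, 965.4)]
(xlcl, xcl, xucl), (rlcl, rcl, rucl) = xbar_r_limits(data)
```

A plotted subgroup mean outside (xlcl, xucl), or a range outside (rlcl, rucl), would then signal a possible assignable cause.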
In conclusion, control charts are quality engineering tools that monitor process behavior to detect special causes of variation, so that potential problems can be eliminated in time, before nonconforming output is produced.

Capability analysis
In industry, capability analysis of a process has become a well-defined method in the last two decades, considered on its own or within the Six Sigma methodology. Process capability is the ability of the process to produce output within given specifications. A process may be stable and yet fail to meet the specification requirements [8][9][10][11].
To determine how well a process produces parts in relation to requirements (lower specification limit, LSL, and upper specification limit, USL), capability analysis is summarized in different indices: process capability indices (Cp and Cpk) and process performance indices (Pp and Ppk) [12][13][14].
Cp and Pp compare the process natural variation to the product given specifications. Cpk and Ppk analyze the location of the process (process mean) in relation to the specifications.
Even if both types of indices convey the same kind of information, capability indices include only the "within subgroup" variation (determined by unassignable causes), while performance indices include both "within subgroup" variation and "between subgroups" variation (determined by assignable causes). So, because performance indices include overall process variation, they give a truer picture of what has happened in the process.
Cpk (respectively Cp) will typically be higher than Ppk (respectively Pp), but their values will tend toward the same value when the process is in control, because the overall variation of the process and the within-subgroup variation will then be practically identical.
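The index formulas described above can be written compactly in code. In this minimal sketch, sigma stands for the within-subgroup standard deviation when computing Cp/Cpk and for the overall standard deviation when computing Pp/Ppk; the numeric values are illustrative, not the paper's.

```python
def capability_indices(mu, sigma, lsl, usl):
    """Cp-style and Cpk-style indices for a process with mean mu and
    standard deviation sigma, against limits lsl and usl."""
    cp = (usl - lsl) / (6 * sigma)               # compares spread to tolerance only
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # also accounts for centering
    return cp, cpk

# A perfectly centered process: Cp equals Cpk.
cp, cpk = capability_indices(mu=965.0, sigma=0.5, lsl=963.0, usl=967.0)

# The same spread, shifted off center: Cp is unchanged but Cpk drops,
# which is exactly the "not well centered" signal discussed in the text.
cp_off, cpk_off = capability_indices(mu=966.0, sigma=0.5, lsl=963.0, usl=967.0)
```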
Usually, the values of the capability and performance indices are defined by the client, but if not, the following recommendations are indicated for the automotive industry [6]:
• For processes that are in control and whose data are normally distributed, the value of Cpk should be greater than or equal to 1,33. However, in recent years, organizations that must deliver "Six Sigma" performance require Cp > 2,0 and Cpk > 1,5 [10].
• For processes that are chronically unstable (but whose output meets specifications) and have a predictable pattern, the value of Ppk should be greater than or equal to 1,67.
The essential difference between the two categories of indices is that the use of the former is based on the assumption that the process is in control and its output tends to the normal distribution (the most common distribution).
In practice, normality is not always fulfilled in manufacturing organizations and non-normally distributed processes are common. Indices developed under the assumption of normally distributed data are very sensitive to non-normal processes, so the resulting indices could be highly misleading. To estimate capability for processes that are not normally distributed, one can proceed in one of the following ways [12][13][14]:
• identifying another theoretical model which adequately describes the distribution of the data (such as lognormal, exponential, Weibull, Gamma, logistic and others). To decide which of the distributions better fits the data, the P-value should be assessed for each distribution, and the distribution with the highest P-value, greater than 0,05, selected. A P-value ≤ 0,05 indicates that the data do not follow that distribution.
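The distribution-selection step can be sketched as follows. This is an assumption-laden stand-in: SciPy's Kolmogorov-Smirnov test replaces Minitab's Anderson-Darling test, and because the parameters are fitted from the same sample, the P-values are optimistic, so this should be treated only as a screening step.

```python
import numpy as np
from scipy import stats

def best_fit(data, candidates=("norm", "lognorm", "weibull_min")):
    """Fit each candidate distribution by maximum likelihood and return
    (name, p_value) of the one with the highest KS-test p-value."""
    results = {}
    for name in candidates:
        dist = getattr(stats, name)
        params = dist.fit(data)                       # maximum-likelihood fit
        _, p = stats.kstest(data, name, args=params)  # goodness-of-fit p-value
        results[name] = p
    name = max(results, key=results.get)   # highest p-value wins...
    return name, results[name]             # ...but accept it only if p > 0.05

# Illustrative sample (hypothetical, roughly normal around 10 mm):
rng = np.random.default_rng(1)
sample = rng.normal(loc=10.0, scale=0.3, size=100)
name, p = best_fit(sample, candidates=("norm", "expon"))
```

For the paper's own width data, this screening pointed (via Minitab's Anderson-Darling comparison) to the 2-parameter Weibull distribution.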
• transforming the original data to make the new data normally distributed, which will permit the use of statistical tools requiring normality. In this situation, a suitable transformation function for the original data must be used, such as: the Box-Cox power transformation, the Johnson transformation system, Clements' percentile method, the Burr percentile method and the Cumulative Distribution Function (CDF) method. The recommendation is to use transformations only as a last resort. The next step is then to verify the normality of the transformed data using suitable numerical tests.
In the first case, the analysis of process capability can be performed according to the procedure for the selected distribution and in the second one according to the procedure for the normal distribution.
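The transformation route can be sketched with SciPy's Box-Cox routine. This is an assumption of the sketch: SciPy stands in for Minitab, the optimal lambda is estimated by maximum likelihood, and the skewed sample below is illustrative (for its own width data the paper reports lambda = 5,00).

```python
import numpy as np
from scipy import stats

# Illustrative right-skewed (lognormal) sample, not the paper's measurements:
rng = np.random.default_rng(7)
raw = np.exp(rng.normal(loc=0.0, scale=0.4, size=200))

# Box-Cox: y = (x**lmbda - 1) / lmbda (or log(x) when lmbda == 0);
# scipy also returns the maximum-likelihood estimate of lmbda.
transformed, lmbda = stats.boxcox(raw)

# The transformed data should look far more normal than the raw data,
# as measured here by the Shapiro-Wilk normality test p-value:
_, p_raw = stats.shapiro(raw)
_, p_new = stats.shapiro(transformed)
```

If p_new exceeds the chosen significance level (e.g. 0,05), capability analysis can then proceed on the transformed data using the normal-distribution procedure.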

Organization and product
The case study was conducted in a manufacturing organization from Sibiu. Its field of activity is the production of flexible polyurethane foams, both in the furniture industry and in the automotive industry.
In the automotive industry, foams are used in the car interior (in seat covers and seat upholstery, door panels, headliners and headrests), the car exterior (air outlets and external components) and the engine compartment. The analyzed product is an evaporator decoupling seal (fig. 1) which is located in the engine compartment and acts as a gasket.

Fig. 1. Evaporator decoupling seal
The evaporator decoupling seal has the role of sealing, preventing the involuntary loss of liquids, air and gases existing in the engine compartment.
Critical quality characteristics (CQCs) of the evaporator decoupling seal are the following dimensions: the length of 965±2 mm, the width of 100±1 mm and 10±1 mm and the thickness of 40±1 mm, indicated in figure 2.

Fig. 2. Critical quality characteristics
The manufacturing process involves the following operations: material reception (Hydroseal P440), lamination, cutting, visual inspection, dimensional inspection of critical quality characteristics (with caliper and thickness gauge) and packaging.
This product is part of a new project, which means that before starting of the series production, the process must be validated by the customer. Customer validation is an important issue for a new product; it is also the only time when decisions about the steps to be taken in manufacturing of the product can be discussed.

Experimental work
Prior to starting the manufacture of any type of product, the generating process must first be approved so as to best meet customer requirements. The concept of process approval includes: the state of statistical stability of the process and a certain convenient value of the process capability and performance indices.
Before quantifying the process capability and performance indices for the cutting operation, the following critical assumptions have been verified for all CQCs:
• The process must be in a state of statistical control (stable)
• The quality characteristic has a normal distribution or another identified type of distribution
• Observations must be random and independent of each other.
These critical assumptions were verified with the help of statistical tools such as control charts, normal probability plots or goodness-of-fit tests and run charts, using the statistical software Minitab, as follows [15].

Assessing the process stability
In order to perform the preliminary analysis of stability and capability of this new process, data for fifteen subgroups were collected, with a sample size of 2 parts and a sampling frequency of 1 hour.
The Xbar-R control charts for all critical characteristics are shown in figures 3 to 6. It can be observed that all plotted sample mean and range values are within the control limits on both the Xbar chart and the R chart, so all of the observations passed test 1 (which tests for points more than 3 standard deviations from the center line). This is one of the 8 standardized tests and it provides the strongest evidence that a process is out of control. It can be concluded that for all critical characteristics of the product, the cutting process is in statistical control over time and is operating only under the influence of common causes of variation.
However, it can be observed that for all critical characteristics, on the Xbar chart there are some observations (the points marked with a red color and a number) that failed test number 5: two out of three points in a row are more than two standard deviations from the center line, in the same direction. Because the appearance of any pattern suggests an assignable cause of variation, it must be closely investigated.

Checking the normality assumption
The normality assumption is checked either using different graphical methods like histogram and probability plot or performing a goodness-of-fit test.
Figures 7 to 10 show the normal probability plots for all critical characteristics. The data fit the normal distribution if:
• the points plotted on the graph form approximately a straight line and fall close to the fitted line
• the associated P-value is larger than the chosen significance level α (a commonly chosen level is 0,05).
Analysis of the results of the probability plots from figures 7 to 10 shows that the data for the width of 100±1 mm, the thickness of 40±1 mm and the length of 965±2 mm follow a normal distribution, but the data for the width of 10±1 mm do not.
Because one approach is to identify another distribution that fits the data, the results of goodness-of-fit test (Anderson-Darling test) for width of 10±1 mm are presented in table 1.
The optimal distribution for the data regarding the width of 10±1 mm is the 2-parameter Weibull distribution, because it provides the best fit in comparison with the other distributions: its P-value (0,169) is the highest and greater than the critical value of 0,05. Because another approach is to transform the data so that they fit the normal distribution, the Box-Cox transformation was used, by which the original data are raised to a lambda power.
Figure 11 shows the output generated by the Minitab software. Using the Box-Cox transformation, an optimal lambda value of 5,00 was determined; this value was used to transform the data and to calculate the process capability and performance indices.
The next step was then to check the normality of the transformed data, and for this purpose the probability plot test was used (fig. 12). Because the associated P-value is larger than the chosen significance level α = 0,05, the conclusion was that the transformed data follow the normal distribution.

Checking the randomness assumption
To check the assumption of randomness, the run chart test was used; the results are presented in figures 13 to 16. The analysis of the results shows that the observations in the data sets corresponding to the thickness of 40±1 mm, the width of 10±1 mm and the length of 965±2 mm are random, because the associated P-values for clustering, mixtures, trends and oscillation are larger than the chosen significance level α = 0,05.
In the case of the data set corresponding to the width of 100±1 mm, the P-value for trends is smaller than the chosen significance level α = 0,05, which warns that the process may be about to go out of control.
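One ingredient of such run-chart tests can be sketched as a simple runs-about-the-median count (a Wald-Wolfowitz-style statistic; this is an assumption of the sketch and only approximates one of Minitab's four run-chart tests, with illustrative data):

```python
def runs_about_median(data):
    """Return (observed_runs, expected_runs) for points above/below the median."""
    s = sorted(data)
    n = len(s)
    median = (s[n // 2] + s[(n - 1) // 2]) / 2
    signs = [x > median for x in data if x != median]   # drop ties with the median
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n1 = sum(signs)                 # points above the median
    n2 = len(signs) - n1            # points below the median
    expected = 1 + 2 * n1 * n2 / (n1 + n2)
    return runs, expected

# A perfectly alternating (hypothetical) sequence: far more runs than expected,
# which run-chart tests interpret as evidence of mixtures/oscillation.
obs, exp = runs_about_median([1, 5, 2, 6, 3, 7, 2, 8])
```

Substantially fewer runs than expected would instead suggest clustering; Minitab converts such deviations into the P-values discussed above.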

Estimation of process capability and performance indices
After checking the critical assumptions for all quality characteristics of the evaporator decoupling seal, their capability analysis was carried out and the results are shown in figures 17 to 21. The conclusions of the analysis are:
• the cutting process for all critical quality characteristics of the evaporator decoupling seal is capable, because the capability index Cpk > 1,33 and the performance index Ppk > 1,67;
• the cutting process also meets the requirement for Six Sigma, because Cpk and Ppk > 2,00;
• the Cp and Cpk values, respectively the Pp and Ppk values, are not very close, which indicates that the process is not well centered;
• the number of nonconforming Parts per Million (PPM) is zero.
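As background to the PPM figure, the expected nonconforming rate under a normal model follows directly from the mean, the standard deviation and the specification limits. A minimal sketch (the numbers are illustrative, not the paper's measured values):

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def expected_ppm(mu, sigma, lsl, usl):
    """Expected nonconforming parts per million for a normal process."""
    p_low = normal_cdf((lsl - mu) / sigma)       # fraction below LSL
    p_high = 1 - normal_cdf((usl - mu) / sigma)  # fraction above USL
    return 1e6 * (p_low + p_high)

# A centered process with Cpk = 1,33, i.e. limits 4 sigma from the mean:
ppm = expected_ppm(mu=0.0, sigma=1.0, lsl=-4.0, usl=4.0)
```

Even at Cpk = 1,33 the expected rate is only a few dozen PPM, so an observed PPM of zero in a short validation run is consistent with the high Cpk and Ppk values reported above.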

Conclusions
Two issues that a company must address before starting to manufacture any new product are:
• process control: the assessment of the process stability over time, respectively the identification and elimination of any special cause of variation before it generates nonconforming parts;
• capability analysis: the assessment of the process capability, respectively the ability of the process to meet product specifications, that is, to produce conforming parts.
It was the goal of this paper to highlight, through a case study conducted in a manufacturing organization supplying the automotive industry, some important aspects regarding these two statistical tools, which are useful for evaluating the behavior of any manufacturing process so that it best meets customer requirements.
The paper represents a practical, application-based approach that may serve as an example of good practice in the manufacturing industry for assessing the stability and capability of any manufacturing process.