FTC 2005 Abstracts
Thursday, 9:15 to 10:00 am
Logarithmic SPC Session
An SPC Control Chart Procedure Based on Censored Lognormal Observations
Uwe Koehn
Koehn Statistical Consulting LLC
Purpose: To present an SPC control chart procedure based on censored lognormal observations
Abstract: A bicycle manufacturer was interested in testing whether the lifetimes of purchased bicycle frames were in statistical control. Based on prior data, it was determined that the lifetimes had a lognormal distribution. Censoring the observations was desirable since time to failure can be very long, especially when the process is in control or better. A procedure using censoring was developed and its properties explored. Graphical techniques, a corresponding method for spread, and sequential methods will be discussed. The procedure is seen to compare very favorably to using non-censored data. From a practical standpoint, the procedure using censoring is better not only because it saves testing time, but usually also because it better models the data.
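A minimal illustrative sketch, not the author's actual procedure, of the core computation a censored-lognormal chart requires: maximum-likelihood estimation of the lognormal parameters from right-censored lifetimes. The cutoff, sample size, and parameter values below are invented.

```python
# Sketch only: MLE of lognormal parameters from right-censored lifetimes.
import numpy as np
from scipy import stats, optimize

def censored_lognormal_mle(times, censored):
    """times: observed lifetimes (failure time or test cutoff);
    censored: True where the frame was still unfailed at the cutoff."""
    logt = np.log(times)

    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)                                # keep sigma positive
        ll_fail = stats.norm.logpdf(logt[~censored], mu, sigma)  # failures
        ll_cens = stats.norm.logsf(logt[censored], mu, sigma)    # survivors
        # the constant -log(t) Jacobian term is dropped; it does not affect the MLE
        return -(ll_fail.sum() + ll_cens.sum())

    res = optimize.minimize(neg_loglik, x0=[logt.mean(), 0.0])
    return res.x[0], np.exp(res.x[1])                            # mu_hat, sigma_hat

# Invented example: frames tested until failure or a 500-hour cutoff.
rng = np.random.default_rng(1)
life = rng.lognormal(mean=6.0, sigma=0.5, size=30)
cutoff = 500.0
obs = np.minimum(life, cutoff)
cens = life > cutoff
print(censored_lognormal_mle(obs, cens))
```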
Robust Parameter Design Session
Process Optimization through Robust Parameter Design in the Presence of Categorical Noise Variables
Timothy J. Robinson
University of Wyoming
William A. Brenneman
William R. Myers
The Procter & Gamble Company
Purpose: To demonstrate the use of response surface methods for robust design when categorical noise factors are present and when some control factors have levels which are not equally desirable due to cost and/or time issues.
Abstract: When categorical noise variables are present in the Robust Parameter Design (RPD) context, it is possible to reduce process variance not only by manipulating the levels of the control factors but also by adjusting the proportions associated with the levels of the categorical noise factor(s). When no adjustment factors exist or when the adjustment factors are unable to bring the process mean close to target, a popular approach to determining optimal operating conditions is to find the levels of the control factors which minimize the estimated mean squared error of the response. Although this approach is effective, engineers may have a difficult time translating mean squared error into quality. We propose the use of a parts per million (PPM) defective objective function in the case of categorical noise variables in the RPD setting. Furthermore, we point out that in many situations the levels of the control factors are not equally desirable due to cost and/or time issues. We have termed these types of factors non-uniform control factors. We propose the use of desirability functions to determine optimal operating conditions in the RPD setting with categorical noise factors when non-uniform control factors are present and illustrate this methodology with an example from industry.
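A hypothetical sketch of a PPM-defective objective of the kind proposed: given the model-based mean and standard deviation of the response at a candidate control-factor setting, estimate the parts per million falling outside the specification limits. Normality of the response and the numerical values are assumptions made only for this illustration.

```python
# Sketch only: PPM defective at a candidate control setting (normality assumed).
from scipy.stats import norm

def ppm_defective(mean, sd, lsl, usl):
    """Estimated parts per million outside [lsl, usl]."""
    p_out = norm.cdf(lsl, mean, sd) + norm.sf(usl, mean, sd)
    return 1e6 * p_out

# Invented numbers: a setting with an off-target mean vs. one with more spread.
print(ppm_defective(mean=100.2, sd=1.1, lsl=97.0, usl=103.0))
print(ppm_defective(mean=100.0, sd=1.4, lsl=97.0, usl=103.0))
```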
Business Process Modeling Session
Business Process Characterization Using Categorical Data Models
Cathy Lawson
General Dynamics
Douglas Montgomery
Arizona State University
Purpose: The purpose of this paper is to demonstrate how logistic regression is used to model a complex business process.
Abstract: Variation exists in all processes, especially business processes where critical input and process variables may be controlled by human intervention. Significant returns may be realized by identifying and removing sources of variation from business processes. Because business processes tend to be heavily dependent on human interaction, they can be difficult to characterize and model. This research develops a methodology for synthesizing the qualitative information about the performance of a business process and transforming it into specifically defined categorical data that can be used for statistical modeling and optimization. The process under investigation is the identification and pursuit of new business opportunities for a Department of Defense (DoD) prime contractor. This process is heavily dependent on the people who obtain information about potential opportunities and make decisions about whether to pursue an identified opportunity. This research explores methods for taking the demographic, anecdotal and qualitative data associated with particular business opportunities and creating categorical data sets that can be statistically modeled. This research illustrates how binary logistic regression was used to analyze these data and establish significant relationships between key process attributes and the process outcome, which is either the win or loss of the opportunity.
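A minimal, hypothetical sketch of the modeling step described: binary logistic regression of win/loss on categorical opportunity attributes. The column names and toy data are invented and do not come from the research.

```python
# Sketch only: logistic regression of win/loss on categorical attributes.
import pandas as pd
import statsmodels.formula.api as smf

# Invented toy data: one row per pursued opportunity.
df = pd.DataFrame({
    "win":       [1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0],
    "customer":  ["army", "navy", "army", "af", "navy", "af",
                  "army", "navy", "af", "army", "navy", "af"],
    "incumbent": ["yes", "no", "yes", "yes", "no", "no",
                  "yes", "no", "yes", "no", "yes", "no"],
})

model = smf.logit("win ~ C(customer) + C(incumbent)", data=df).fit()
print(model.summary())   # coefficients and p-values for each attribute level
```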
Thursday, 10:30 am to 12:00 pm
Multivariate SPC Session
Using Nonparametric Methods to Lower False Alarm Rates in Multivariate Statistical Process Control
Luis A. Beltran
Linda Malone
University of Central Florida
Purpose: To devise a systematic distribution-free approach by extending current developments and focusing on dimensionality reduction using Principal Component Analysis (PCA) without restricting the technique or techniques to normality requirements.
Abstract: Although there has been progress in the area of multivariate SPC, there are numerous limitations as well as unanswered questions with the current techniques. MSPC charts plotting Hotelling's T² require the normality assumption for the joint distribution among the process variables, which is not feasible in many industrial settings. The motivation to investigate nonparametric techniques for multivariate data in quality control is that fewer restrictive assumptions are imposed on the data; as such, the assumption of normality is not a requirement. The nonparametric approach is also less sensitive to outliers, hence more robust and prone to fewer false alarms. In this research, the goal is to create a systematic distribution-free approach by extending current developments and focusing on dimensionality reduction using PCA without restricting the technique or techniques to normality requirements. The proposed technique differs from current approaches in that it creates a unified distribution-free approach to non-normal multivariate quality control. The proposed technique is expected to be preferable based on its ease of use and robustness to outliers in MSPC. By making the approach simple to use in an industrial setting, recommendations on the process could be obtained efficiently, resulting in cost savings and consequently improved quality.
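A hypothetical, distribution-free sketch in the spirit of the abstract, not the authors' specific technique: reduce dimension with PCA fitted on in-control reference data, then set the control limit as an empirical quantile of a Hotelling-type distance rather than a limit that assumes normality.

```python
# Sketch only: PCA on in-control data plus an empirical control limit.
import numpy as np

rng = np.random.default_rng(0)
reference = rng.gamma(shape=2.0, scale=1.0, size=(500, 6))   # in-control, non-normal
new_obs   = rng.gamma(shape=2.0, scale=1.3, size=(50, 6))    # possibly shifted

mu, sd = reference.mean(axis=0), reference.std(axis=0)
Z = (reference - mu) / sd
U, s, Vt = np.linalg.svd(Z, full_matrices=False)             # PCA via SVD
k = 2                                                        # retained components
var_k = s[:k] ** 2 / (len(Z) - 1)                            # variances of the PCs

def t2(x):
    """Hotelling-type statistic computed in the reduced PC space."""
    scores = ((x - mu) / sd) @ Vt[:k].T
    return np.sum(scores ** 2 / var_k, axis=1)

limit = np.quantile(t2(reference), 0.9973)   # empirical, no normality assumption
signals = t2(new_obs) > limit
print(round(limit, 2), int(signals.sum()), "of", len(new_obs), "new points signal")
```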
Statistical Monitoring of Dose-Response Quality Profiles from High-Throughput Screening
James D. Williams
General Electric
Jeffrey B. Birch,
William H. Woodall,
Virginia Tech
Purpose: To present a statistical monitoring procedure for researchers in pharmaceutical and chemical companies to evaluate the quality of their bioassay test procedures over time.
Abstract: In pharmaceutical drug discovery and agricultural crop product discovery, in vitro bioassay experiments are used to identify promising compounds for further research. The reproducibility and accuracy of the bioassay are crucial to be able to correctly distinguish between active and inactive compounds. In the case of agricultural product discovery, a replicated dose response of commercial crop protection products is assayed and used to monitor test quality. The activity of these compounds on the test organisms, the weeds, insects, or fungi, is characterized by a dose-response curve measured from the bioassay. These curves are used to monitor the quality of the bioassays. If undesirable conditions in the bioassay arise, such as equipment failure or problems with the test organisms, then a bioassay monitoring procedure is needed to quickly detect such issues. In this paper we illustrate a proposed nonlinear profile monitoring method to monitor the within-assay variability of multiple assays, the adequacy of the dose-response model chosen, and the estimated dose-response curves for aberrant cases. We illustrate these methods with in vitro bioassay data collected over one year from DuPont Crop Protection.
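An illustrative sketch only: fitting a four-parameter logistic dose-response curve to one assay by least squares. The monitoring procedure in the paper operates on many such fitted curves; the exact model form and the data here are assumptions for illustration.

```python
# Sketch only: least-squares fit of a four-parameter logistic dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, lower, upper, ec50, slope):
    """Four-parameter logistic response as a function of dose."""
    return lower + (upper - lower) / (1.0 + (dose / ec50) ** slope)

dose = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([98.0, 95.0, 82.0, 55.0, 24.0, 9.0, 4.0])    # invented % of control

params, cov = curve_fit(four_pl, dose, resp, p0=[0.0, 100.0, 3.0, 1.0])
print(dict(zip(["lower", "upper", "ec50", "slope"], params.round(2))))
```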
Topics in DOE Session
Bayesian Analysis of Data from Split-Plot Designs
Steven G. Gilmour
University of London
Peter Goos
Universiteit Antwerpen
Purpose: In this talk, a reliable method to analyze data from split-plot experiments will be presented.
Abstract: Often, industrial experiments involve one or more hard-to-change variables which are not reset for every run of the experiment. The resulting experimental designs are of the split-plot type and fall in the category of multi-stratum designs (see, for example, Trinca and Gilmour (2001)). A proper classical statistical analysis requires the use of generalized least squares estimation and inference procedures and, hence, the estimation of the variance components in the statistical model under investigation.
In most split-plot or multi-stratum designs utilized in practice, the hard-to-change variables are reset a small number of times, such that estimation of the variance components corresponding to the whole plots stratum of the experiment is either impossible or inefficient. As a consequence, these variance components are estimated by most packages to be zero (see, for example, Goos, Langhans and Vandebroek (2004)) and the generalized least squares inferences collapse to ordinary least squares ones. The resulting statistical analysis may lead to erroneous decisions regarding the significance of certain effects in the model.
In the presentation, it will be shown that this problem can be avoided by incorporating the researcher’s prior beliefs regarding the magnitude of the variance components in a Bayesian analysis of the data. This approach guarantees a more correct analysis of the data as it will only produce ordinary instead of generalized least squares results if the data contain enough information to contradict the researcher’s prior. A split-plot experiment conducted at the University of Reading to identify the factors influencing the aroma of freeze-dried coffee will be used as an illustration.
How will the status quo be changed?
There is still a lot of research on the topic of analyzing industrial split-plot and other multi-stratum experiments. The research described here shows that it is essential to incorporate prior knowledge regarding the variance components in the analysis. Otherwise, the statistical analysis of the data may be flawed.
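A small numerical sketch, not the authors' Bayesian machinery, of the collapse described in the abstract above: in a split-plot model the GLS estimate of the fixed effects depends on the variance ratio d = sigma_wp^2 / sigma_e^2, and setting d = 0, which is what happens when the whole-plot variance is estimated as zero, reduces GLS to OLS. The design and parameter values are invented.

```python
# Sketch only: GLS for a split-plot model collapses to OLS when d = 0.
import numpy as np

rng = np.random.default_rng(3)
n_wp, n_sp = 4, 3                                            # whole plots, subplots each
n = n_wp * n_sp
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # fixed-effects model
Z = np.kron(np.eye(n_wp), np.ones((n_sp, 1)))                # whole-plot indicators
beta_true = np.array([1.0, 2.0, -1.0])
y = X @ beta_true + Z @ rng.normal(0.0, 1.0, n_wp) + rng.normal(0.0, 0.5, n)

def gls(d):
    """GLS estimate of beta when Var(y) is proportional to I + d * Z Z'."""
    V = np.eye(n) + d * Z @ Z.T
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

print("d = 0 (OLS):", gls(0.0).round(3))
print("d = 4 (GLS):", gls(4.0).round(3))   # prior belief of sizeable whole-plot variance
```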
Adapting Second Order Designs for Specific Needs: a Case Study
James R. Simpson
FAMU-FSU
Drew Landman
Old Dominion University
Rupert Giroux
FAMU-FSU
Purpose: To develop an experiment and analysis method for efficiently characterizing and calibrating large load strain gauge balances for detecting aerodynamic forces and moments on aircraft models in wind tunnel testing
Abstract: Strain gauge balances can be used to capture aerodynamic forces and moments on aircraft and other vehicles tested in wind tunnels. These balances must be calibrated periodically using static load testing. The calibration testing procedure at NASA was originally developed during the 1940s and is based on a modified one-factor-at-a-time method that is time intensive and does not provide a statistically rigorous estimate of model quality. An approach using experimental design is proposed to characterize the relationships between applied load (force) and response voltages for each of the six aerodynamic forces and moments. A second order experimental design based on the structure of the Box-Behnken design was constructed to suit the unique requirements of the calibration operation. Monte Carlo simulation was used to compare the proposed design's performance potential relative to existing designs prior to conducting a set of actual experiments. Lessons were learned in constructing nonstandard designs for second order models and in leveraging simulation effectively prior to live testing. The new calibration process will require significantly fewer tests to achieve the same or improved precision in characterization.
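A hypothetical sketch of the design-comparison idea: for a full second-order model in three factors, compare a Box-Behnken design against a face-centred central composite design by average scaled prediction variance over a grid. This is a simple stand-in for the Monte Carlo study described, and the designs shown are not the nonstandard calibration design actually developed.

```python
# Sketch only: compare two candidate second-order designs in three factors.
import numpy as np
from itertools import product

def model_matrix(D):
    """Full second-order model: intercept, linear, two-factor, and quadratic terms."""
    x1, x2, x3 = D.T
    return np.column_stack([np.ones(len(D)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1 ** 2, x2 ** 2, x3 ** 2])

# Candidate 1: three-factor Box-Behnken design plus three center runs
bbd = np.array([[a, b, 0] for a in (-1, 1) for b in (-1, 1)] +
               [[a, 0, b] for a in (-1, 1) for b in (-1, 1)] +
               [[0, a, b] for a in (-1, 1) for b in (-1, 1)] +
               [[0, 0, 0]] * 3, dtype=float)

# Candidate 2: face-centred central composite design plus three center runs
ccd = np.array([list(p) for p in product([-1, 1], repeat=3)] +
               [[-1, 0, 0], [1, 0, 0], [0, -1, 0], [0, 1, 0],
                [0, 0, -1], [0, 0, 1]] + [[0, 0, 0]] * 3, dtype=float)

grid = np.array(list(product(np.linspace(-1, 1, 5), repeat=3)))

def avg_spv(D):
    """Average scaled prediction variance of the second-order fit over the grid."""
    X = model_matrix(D)
    XtXi = np.linalg.inv(X.T @ X)
    Xg = model_matrix(grid)
    return len(D) * np.mean(np.sum(Xg @ XtXi * Xg, axis=1))

print("Box-Behnken (+3 center) :", round(avg_spv(bbd), 2))
print("Face-centred CCD (+3 ct):", round(avg_spv(ccd), 2))
```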
Six Sigma Session
Six Sigma beyond the Factory Floor
Ron Snee
Tunnell Consulting
Abstract: Six Sigma is a process-focused, statistically-based approach to business improvement that companies as diverse as Motorola, Honeywell, General Electric, DuPont, 3M, American Express, Bank of America and Commonwealth Health Corporation have used to produce millions of dollars in bottom-line improvements. Initially Six Sigma initiatives were focused on improving the performance of manufacturing processes. Today there is a growing awareness that additional gains in efficiency and operational performance can be achieved by widening the scope of Six Sigma beyond the factory floor to include the improvement of non-manufacturing, administrative and service functions. The economic advantages of such efforts are potentially very significant because (a) the administrative component of modern manufacturing is large and has a great influence on overall economic performance, and (b) in a modern society the service sector accounts for well over two-thirds of the entire economy. This presentation will discuss how Six Sigma can be used to improve processes beyond the factory floor, including dealing with the “We’re different” barrier, appropriate roadmaps, and common technical challenges for the methods and tools used. Several illustrative examples will be presented.
Some Trends in Six Sigma Education
Douglas Montgomery
Arizona State University
Thursday, 2:00 to 3:30 pm
TECHNOMETRICS Session
Control Charts and the Efficient Allocation of Sampling Resources
Marion R. Reynolds, Jr.
Virginia Tech
Zachary G. Stoumbos
Rutgers University
Abstract: Control charts for monitoring the process mean and process standard deviation are often based on samples of n > 1 observations, but in many applications, individual observations are used (n = 1). In this paper, we investigate the question of whether it is better, from the perspective of statistical performance, to use n = 1 or n > 1. We assume that the sampling rate in terms of the number of observations per unit time is fixed, so using n = 1 means that samples can be taken more frequently than when n > 1. The best choice for n depends on the type of control chart being used, so we consider Shewhart, exponentially weighted moving average (EWMA), and cumulative sum (CUSUM) charts. For each type of control chart, a combination of two charts is investigated: one chart designed to monitor the process mean, and the other designed to monitor the process standard deviation. Most control chart comparisons in the literature assume that a special cause produces a sustained shift in a process parameter that lasts until the shift is detected. We also consider transient shifts in process parameters, which are of a short duration, and drifts in which a parameter moves away from its in-control value at a constant rate. Control chart combinations are evaluated using the expected detection time for the various types of process changes, and a quadratic loss function. When a signal is generated, it is important to know which parameters have changed, so the ability of control chart combinations to correctly indicate the type of parameter change is also evaluated. Our overall conclusion is that it is best to take samples of n = 1 observations and use an EWMA or CUSUM chart combination. The Shewhart chart combination with the best overall performance is based on n > 1, but this combination is inferior to the EWMA and CUSUM chart combinations on almost all performance characteristics (the exception being simplicity). This conclusion seems to contradict the conventional wisdom about some of the advantages and disadvantages of EWMA and CUSUM charts, relative to Shewhart charts.
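A small sketch of one member of the chart combinations discussed: an EWMA of individual observations (n = 1) for the mean with the usual asymptotic control limits. The design constants lambda and L, and the simulated shift, are illustrative choices, not values from the paper.

```python
# Sketch only: EWMA chart for individual observations (n = 1).
import numpy as np

def ewma_chart(x, mu0, sigma0, lam=0.1, L=2.7):
    """Return EWMA statistics and a boolean signal flag for each observation."""
    z = np.empty(len(x))
    prev = mu0
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    half_width = L * sigma0 * np.sqrt(lam / (2 - lam))   # asymptotic limits
    return z, np.abs(z - mu0) > half_width

rng = np.random.default_rng(7)
x = np.concatenate([rng.normal(0, 1, 50),                # in control
                    rng.normal(0.75, 1, 30)])            # sustained mean shift
z, signal = ewma_chart(x, mu0=0.0, sigma0=1.0)
print("first signal at observation", int(np.argmax(signal)) if signal.any() else "none")
```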
The Inertial Properties of Quality Control Charts
William H. Woodall
Virginia Tech
Mahmoud A. Mahmoud
Cairo University
Abstract: Many types of control charts have an ability to detect process changes that can weaken over time depending on the past data observed. This is often referred to as the “inertia problem.” We propose a new measure of inertia, the signal resistance, defined to be the largest standardized deviation from target not leading to an immediate out-of-control signal. We calculate the signal resistance values for several types of univariate and multivariate charts. Our conclusions support the recommendation that Shewhart limits should be used with EWMA charts, especially when the smoothing parameter is small.
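A worked illustration of the signal-resistance idea for an upper EWMA limit, derived directly from the definition above rather than reproduced from the paper: with smoothing parameter lambda, current EWMA value w, and upper limit h, the next statistic is lambda*x + (1 - lambda)*w, so the largest standardized observation x that does not cause an immediate signal is (h - (1 - lambda)*w)/lambda. The chart constants below are illustrative assumptions.

```python
# Sketch only: signal resistance of an upper EWMA limit, from the definition.
def ewma_signal_resistance(w, lam=0.1, L=2.7):
    """Largest standardized observation not producing an immediate signal,
    given the chart currently sits at EWMA value w."""
    h = L * (lam / (2 - lam)) ** 0.5          # asymptotic upper control limit
    return (h - (1 - lam) * w) / lam

# Inertia in action: the lower the current EWMA value, the larger the upward
# deviation the chart can absorb without signalling on the next observation.
for w in (0.0, -0.3, -0.6):
    print(w, round(ewma_signal_resistance(w), 2))
```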
DOE for Computer Simulation Session
Application of Design of Experiments in Computer Simulation Studies
Shu Yamada
Hiroe Tsubaki
University of Tsukuba
Purpose: This paper presents an approach to design of experiments in computer simulation, with some case studies from the automobile industry.
Abstract: In recent years, computer simulation has been applied in many fields, such as Computer Aided Engineering in the manufacturing industry. In order to apply computer simulation effectively, we need to consider two points: (1) exploring a model for computer simulation, and (2) effective application of simulation based on the explored model. As regards (1), once a tentative model is derived based on knowledge of the field, it is necessary to examine the validity of the model; design of experiments plays an important role in this examination. After exploring a computer model, the next stage is (2), such as optimization of the response by utilizing computer simulation. This paper presents an approach to design of experiments in computer simulation in terms of (1) and (2), with some case studies from the automobile industry. For example, in order to optimize a response that depends on many factors, the first step may be screening active factors from many candidate factors; designs such as supersaturated designs help with this screening problem. After finding some active factors, the next step may be approximation of the response by an appropriate function; composite designs and uniform designs are helpful for fitting a second-order model as an approximation.
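A hypothetical mini-example of the screening step: estimate main effects of candidate simulation inputs from a small two-level design of simulator runs and keep the factors with the largest absolute effects. A plain fractional factorial stands in here for the supersaturated designs mentioned, and the simulator is a made-up function.

```python
# Sketch only: screening active factors from a small two-level design.
import numpy as np
from itertools import product

# 2^(4-1) fractional factorial with generator D = ABC (8 runs, 4 factors)
base = np.array(list(product([-1, 1], repeat=3)), dtype=float)
design = np.column_stack([base, base[:, 0] * base[:, 1] * base[:, 2]])

rng = np.random.default_rng(5)
def simulator(x):                            # stand-in for the real simulation code
    return 5.0 + 3.0 * x[0] - 2.0 * x[2] + rng.normal(0.0, 0.2)

y = np.array([simulator(run) for run in design])
effects = 2.0 * design.T @ y / len(design)   # two-level main-effect estimates
for name, eff in sorted(zip("ABCD", effects), key=lambda t: -abs(t[1])):
    print(name, round(eff, 2))
```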
Computer Experimental Designs to Achieve Multiple Objectives
Leslie M. Moore
Los Alamos National Laboratory
Purpose: Issues and strategies for designing computer experiments are reviewed for conducting sensitivity analysis and constructing an emulator.
Abstract: Simulator codes are a basis for inference in many complex problems including weapons performance, materials aging, infrastructure modeling, nuclear reactor production, and manufacturing process improvement. Goals of computer experiments include sensitivity analysis to gain understanding of the input space and construction of an emulator that may form a basis for uncertainty analysis or prediction. Orthogonal arrays, or highly fractionated factorial designs, and near-orthogonal arrays are used in computer experiments for sensitivity analyses. Latin hypercube samples, possibly selected by a space-filling criterion, are in common use when Gaussian spatial processes are the modeling paradigm or uncertainty analysis is the objective. Orthogonal-array based Latin hypercube designs are used to achieve both objectives. Improvement in terms of obtaining a space-filling design will be demonstrated for orthogonal-array based Latin hypercube designs. The impact of competing experiment objectives will be discussed in terms of loss of efficiency in sensitivity analysis conducted with data from a Latin hypercube design.
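A brief, hypothetical sketch of generating and comparing space-filling designs of the kind discussed, using scipy's quasi-Monte Carlo tools; the orthogonal-array-based Latin hypercube construction in the talk is more structured than the plain Latin hypercube shown here.

```python
# Sketch only: compare space-filling samples for a computer experiment.
from scipy.stats import qmc

d, n = 3, 16
lhs = qmc.LatinHypercube(d=d, seed=42).random(n)     # plain Latin hypercube sample
print("discrepancy of plain LHS:", qmc.discrepancy(lhs))

# A scrambled Sobol' sequence as another space-filling alternative.
sob = qmc.Sobol(d=d, scramble=True, seed=42).random(n)
print("discrepancy of Sobol'   :", qmc.discrepancy(sob))
```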