Similar Documents
20 similar documents found (search time: 15 ms)
1.
This article considers the problem of monitoring Poisson count data when sample sizes are time varying without assuming a priori knowledge of sample sizes. Traditional control charts, whose control limits are often determined before the control charts are activated, are constructed based on perfect knowledge of sample sizes. In practice, however, future sample sizes are often unknown. Making an inappropriate assumption of the distribution function could lead to unexpected performance of the control charts, for example, excessive false alarms in the early runs of the control charts, which would in turn hurt an operator's confidence in valid alarms. To overcome this problem, we propose the use of probability control limits, which are determined based on the realization of sample sizes online. The conditional probability that the charting statistic exceeds the control limit at present given that there has not been a single alarm before can be guaranteed to meet a specified false alarm rate. Simulation studies show that our proposed control chart is able to deliver satisfactory run length performance for any time‐varying sample sizes. The idea presented in this article can be applied to any effective control charts such as the exponentially weighted moving average or cumulative sum chart. © 2013 Wiley Periodicals, Inc. Naval Research Logistics 60: 625–636, 2013
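The core idea can be illustrated with a minimal sketch: once the sample size n_t is realized, choose the smallest integer limit whose exceedance probability under the in-control Poisson model is at most the target false-alarm rate α. This simplified version ignores the conditioning on "no alarm so far" that the article's probability limits incorporate; all function names and parameter values are illustrative.

```python
import math

def prob_limit(mu, alpha=0.0027):
    # Smallest integer c with P(X > c) <= alpha for X ~ Poisson(mu),
    # built by accumulating the Poisson CDF term by term.
    c, pmf = 0, math.exp(-mu)
    cdf = pmf
    while cdf < 1 - alpha:
        c += 1
        pmf *= mu / c
        cdf += pmf
    return c

def monitor(counts, sizes, lam0, alpha=0.0027):
    # Flag periods whose count exceeds the limit implied by the realized sample size.
    return [t for t, (x, n) in enumerate(zip(counts, sizes))
            if x > prob_limit(n * lam0, alpha)]
```

Because the limit is recomputed from each realized size, no distributional assumption on future sample sizes is needed.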

2.
MacGregor and Harris (J Quality Technol 25 (1993) 106–118) proposed the exponentially weighted mean squared deviation (EWMS) and the exponentially weighted moving variance (EWMV) charts as ways of monitoring process variability. These two charts are particularly useful for individual observations where no estimate of variability is available from replicates. However, the control charts derived by using the approximate distributions of the EWMS and EWMV statistics are difficult to interpret in terms of the average run length (ARL). Furthermore, both control charting schemes are biased procedures. In this article, we propose two new control charts by applying a normal approximation to the distributions of the logarithms of the weighted sum of chi squared random variables, which are respectively functions of the EWMS and EWMV statistics. These new control charts are easy to interpret in terms of the ARL. On the basis of the simulation studies, we demonstrate that the proposed charts are superior to the EWMS and EWMV charts and they both are nearly unbiased for the commonly used smoothing constants. We also compare the performance of the proposed charts with that of the change point (CP) CUSUM chart of Acosta‐Mejia (1995). The design of the proposed control charts is discussed. An example is also given to illustrate the applicability of the proposed control charts. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2009
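The two charted statistics have simple recursions: EWMS smooths squared deviations from a known target mean, while EWMV smooths squared deviations from an EWMA estimate of a possibly drifting mean. A minimal sketch (the smoothing constant and initializations are illustrative choices, and the exact update ordering in EWMV follows one common convention):

```python
def ewms(xs, mu0, lam=0.1, init=1.0):
    # EWMS: exponentially weighted mean squared deviation from target mu0.
    s2, out = init, []
    for x in xs:
        s2 = lam * (x - mu0) ** 2 + (1 - lam) * s2
        out.append(s2)
    return out

def ewmv(xs, lam=0.1, mean_init=0.0, init=1.0):
    # EWMV: squared deviations are taken from an EWMA of the mean,
    # so a shifted-but-stable mean does not inflate the variance estimate.
    m, v, out = mean_init, init, []
    for x in xs:
        m = lam * x + (1 - lam) * m
        v = lam * (x - m) ** 2 + (1 - lam) * v
        out.append(v)
    return out
```

A stream sitting steadily at 5 with target 0 drives EWMS up (it confounds mean shift with variance) while EWMV settles back down, which is why the two charts behave differently under mean shifts.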

3.
Consider a binary, monotone system of n components. The assessment of the parameter vector, θ, of the joint distribution of the lifetimes of the components and hence of the reliability of the system is often difficult due to scarcity of data. It is therefore important to make use of all information in an efficient way. For instance, prior knowledge is often of importance and can indeed conveniently be incorporated by the Bayesian approach. It may also be important to continuously extract information from a system currently in operation. This may be useful both for decisions concerning the system in operation as well as for decisions improving the components or changing the design of similar new systems. As in Meilijson [12], life‐monitoring of some components and conditional life‐monitoring of some others is considered. In addition to data arising from this monitoring scheme, so‐called autopsy data are observed, if not censored. The probabilistic structure underlying this kind of data is described, and basic likelihood formulae are arrived at. A thorough discussion of an important aspect of this probabilistic structure, the inspection strategy, is given. Based on a version of this strategy a procedure for preventive system maintenance is developed and a detailed application to a network system presented. Throughout, a Bayesian approach to the estimation of θ is applied. For the special case where components are conditionally independent given θ with exponentially distributed lifetimes it is shown that the weighted sum of products of generalized gamma distributions, as introduced in Gåsemyr and Natvig [7], is the conjugate prior for θ. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 551–577, 2001.

4.
Lifetime experiments are common in many research areas and industrial applications. Recently, process monitoring for lifetime observations has received increasing attention. However, some existing methods are inadequate as neither their in control (IC) nor out of control (OC) performance is satisfactory. In addition, the challenges associated with designing robust and flexible control schemes have yet to be fully addressed. To overcome these limitations, this article utilizes a newly developed weighted likelihood ratio test, and proposes a novel monitoring strategy that automatically combines the likelihood of past samples with the exponential weighted sum average scheme. The proposed Censored Observation‐based Weighted‐Likelihood (COWL) control chart gives desirable IC and OC performances and is robust under various scenarios. In addition, a self‐starting control chart is introduced to cope with the problem of insufficient reference samples. Our simulation shows a stronger power in detecting changes in the censored lifetime data using our scheme than using other alternatives. A real industrial example based on the breaking strength of carbon fiber also demonstrates the effectiveness of the proposed method. © 2016 Wiley Periodicals, Inc. Naval Research Logistics 63: 631–646, 2017

5.
Assemble‐to‐order (ATO) is an important operational strategy for manufacturing firms to achieve quick response to customer orders while keeping low finished good inventories. This strategy has been successfully used not only by manufacturers (e.g., Dell, IBM) but also by retailers (e.g., Amazon.com). The evaluation of order‐based performance is known to be an important but difficult task, and the existing literature has been mainly focused on stochastic comparison to obtain performance bounds. In this article, we develop an extremely simple Stein–Chen approximation as well as its error‐bound for order‐based fill rate for a multiproduct multicomponent ATO system with random leadtimes to replenish components. This approximation gives an expression for order‐based fill rate in terms of component‐based fill rates. The approximation has the property that the higher the component replenishment leadtime variability, the smaller the error bound. The result allows an operations manager to analyze the improvement in order‐based fill rates when the base‐stock level for any component changes. Numerical studies demonstrate that the approximation performs well, especially when the demand processes of different components are highly correlated; when the components have high base‐stock levels; or when the component replenishment leadtimes have high variability. © 2012 Wiley Periodicals, Inc. Naval Research Logistics, 2012

6.
We consider a parallel‐machine scheduling problem with jobs that require setups. The duration of a setup does not depend only on the job just completed but on a number of preceding jobs. These setup times are referred to as history‐dependent. Such a scheduling problem is often encountered in the food processing industry as well as in other process industries. In our model, we consider two types of setup times—a regular setup time and a major setup time that becomes necessary after several “hard‐to‐clean” jobs have been processed on the same machine. We consider multiple objectives, including facility utilization, flexibility, number of major setups, and tardiness. We solve several special cases assuming predetermined job sequences and propose strongly polynomial time algorithms to determine the optimal timing of the major setups for given job sequences. We also extend our analysis to develop pseudopolynomial time algorithms for cases with additional objectives, including the total weighted completion time, the total weighted tardiness, and the weighted number of tardy jobs. © 2012 Wiley Periodicals, Inc. Naval Research Logistics, 2012

7.
A basic assumption in process mean estimation is that all process data are clean. However, many sensor system measurements are often corrupted with outliers. Outliers are observations that do not follow the statistical distribution of the bulk of the data and consequently may lead to erroneous results with respect to statistical analysis and process control. Robust estimators of the current process mean are crucial to outlier detection, data cleaning, process monitoring, and other process features. This article proposes an outlier‐resistant mean estimator based on the L1 norm exponential smoothing (L1‐ES) method. The L1‐ES statistic is essentially model‐free and demonstrably superior to existing estimators. It has the following advantages: (1) it captures process dynamics (e.g., autocorrelation), (2) it is resistant to outliers, and (3) it is easy to implement. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 2009
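The article's exact L1-ES recursion is not reproduced in this abstract. The sketch below therefore only illustrates the mechanism that makes an L1-flavored smoother outlier-resistant: replacing the proportional EWMA update with a bounded, sign-based step, so a single wild observation moves the estimate by at most one step. The update rule and step size here are illustrative assumptions, not the article's estimator.

```python
def ewma(xs, lam=0.2, init=0.0):
    # Standard EWMA: a single large outlier moves the estimate proportionally.
    m = init
    for x in xs:
        m = lam * x + (1 - lam) * m
    return m

def sign_smooth(xs, step=0.1, init=0.0):
    # Sign-based (L1-flavored) smoother: each observation shifts the
    # estimate by at most `step`, bounding the influence of any outlier.
    m = init
    for x in xs:
        if x > m:
            m += step
        elif x < m:
            m -= step
    return m
```

On a stream of ones with one spike of 1000, the sign-based estimate stays near 1 while the EWMA is dragged far from the bulk of the data.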

8.
We consider the problem of service rate control of a single‐server queueing system with a finite‐state Markov‐modulated Poisson arrival process. We show that the optimal service rate is nondecreasing in the number of customers in the system; higher congestion levels warrant higher service rates. In contrast, we show that the optimal service rate is not necessarily monotone in the current arrival rate. If the modulating process satisfies a stochastic monotonicity property, the monotonicity is recovered. We examine several heuristics and show where heuristics are reasonable substitutes for the optimal control. None of the heuristics perform well in all the regimes and the fluctuation rate of the modulating process plays an important role in deciding the right heuristic. Second, we discuss when the Markov‐modulated Poisson process with service rate control can act as a heuristic itself to approximate the control of a system with a periodic nonhomogeneous Poisson arrival process. Not only is the current model of interest in the control of Internet or mobile networks with bursty traffic, but it is also useful in providing a tractable alternative for the control of service centers with nonstationary arrival rates. © 2013 Wiley Periodicals, Inc. Naval Research Logistics 60: 661–677, 2013

9.
Instead of measuring a Wiener degradation or performance process at predetermined time points to track degradation or performance of a product for estimating its lifetime, we propose to obtain the first‐passage times of the process over certain nonfailure thresholds. Based on only these intermediate data, we obtain the uniformly minimum variance unbiased estimator and uniformly most accurate confidence interval for the mean lifetime. For estimating the lifetime distribution function, we propose a modified maximum likelihood estimator and a new estimator and prove that, by increasing the sample size of the intermediate data, these estimators and the above‐mentioned estimator of the mean lifetime can achieve the same levels of accuracy as the estimators assuming one has failure times. Thus, our method of using only intermediate data is useful for highly reliable products when their failure times are difficult to obtain. Furthermore, we show that the proposed new estimator of the lifetime distribution function is more accurate than the standard and modified maximum likelihood estimators. We also obtain approximate confidence intervals for the lifetime distribution function and its percentiles. Finally, we use light‐emitting diodes as an example to illustrate our method and demonstrate how to validate the Wiener assumption during the testing. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008
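As a simplified illustration of estimating lifetime from intermediate first-passage data: for a Wiener process with positive drift μ, the first-passage time over a threshold a follows an inverse Gaussian distribution with mean a/μ, and the drift MLE from several independent (threshold, passage-time) pairs is Σaᵢ/Σtᵢ. Plugging this into the failure threshold gives a lifetime estimate. These estimators are a textbook-level sketch, not the article's UMVUE or modified MLE.

```python
def drift_mle(thresholds, times):
    # MLE of Wiener drift from first-passage pairs (a_i, t_i): sum(a_i)/sum(t_i).
    return sum(thresholds) / sum(times)

def mean_lifetime(thresholds, times, failure_level):
    # Estimated mean time to reach the failure threshold: D / drift_hat.
    return failure_level / drift_mle(thresholds, times)
```

For example, passage times 2, 4, 6 over nonfailure thresholds 1, 2, 3 give a drift estimate of 0.5, hence an estimated mean lifetime of 20 for a failure threshold of 10.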

10.
In system reliability analysis, for an n-component system, the estimation of the performance of the components in the system is not straightforward in practice, especially when the components are dependent. Here, by assuming the n components in the system to be identically distributed with a common distribution belonging to a scale‐family and the dependence structure between the components being known, we discuss the estimation of the lifetime distributions of the components in the system based on the lifetimes of systems with the same structure. We develop a general framework for inference on the scale parameter of the component lifetime distribution. Specifically, the method of moments estimator (MME) and the maximum likelihood estimator (MLE) are derived for the scale parameter, and the conditions for the existence of the MLE are also discussed. The asymptotic confidence intervals for the scale parameter are also developed based on the MME and the MLE. General simulation procedures for the system lifetime under this model are described. Finally, some examples of two‐ and three‐component systems are presented to illustrate all the inferential procedures developed here. © 2012 Wiley Periodicals, Inc. Naval Research Logistics, 2012
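A concrete special case makes the method-of-moments idea tangible. Take independent components (independence being one instance of a known dependence structure): for a parallel system of n i.i.d. exponential components with scale θ, the system mean lifetime is θ·Hₙ, where Hₙ is the n-th harmonic number, so the MME divides the average observed system lifetime by Hₙ. The function below is an illustrative sketch for this special case only.

```python
def mme_scale_parallel(system_lifetimes, n):
    # For a parallel system of n i.i.d. Exp(scale=theta) components:
    # E[T_system] = theta * H_n, with H_n = 1 + 1/2 + ... + 1/n.
    h_n = sum(1.0 / k for k in range(1, n + 1))
    return (sum(system_lifetimes) / len(system_lifetimes)) / h_n
```

With n = 2, H₂ = 1.5, so an average system lifetime of 1.5 recovers θ = 1.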

11.
If the number of customers in a queueing system as a function of time has a proper limiting steady‐state distribution, then that steady‐state distribution can be estimated from system data by fitting a general stationary birth‐and‐death (BD) process model to the data and solving for its steady‐state distribution using the familiar local‐balance steady‐state equation for BD processes, even if the actual process is not a BD process. We show that this indirect way to estimate the steady‐state distribution can be effective for periodic queues, because the fitted birth and death rates often have special structure allowing them to be estimated efficiently by fitting parametric functions with only a few parameters, for example, 2. We focus on the multiserver Mt/GI/s queue with a nonhomogeneous Poisson arrival process having a periodic time‐varying rate function. We establish properties of its steady‐state distribution and fitted BD rates. We also show that the fitted BD rates can be a useful diagnostic tool to see if an Mt/GI/s model is appropriate for a complex queueing system. © 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 664–685, 2015
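The indirect estimation step can be sketched directly: estimate each birth rate as (number of up-jumps out of state k) / (total time spent in state k), death rates analogously, then solve the local-balance recursion p_{k+1} = p_k·λ_k/μ_{k+1} and normalize. This sketch assumes a fully observed sample path with unit jumps and omits the parametric-fitting refinement the article proposes.

```python
from collections import defaultdict

def fit_bd_rates(sojourns, states):
    # sojourns[i]: time spent in states[i] before jumping to states[i+1] (+/-1 jumps).
    up, down, time_in = defaultdict(int), defaultdict(int), defaultdict(float)
    for i in range(len(states) - 1):
        k = states[i]
        time_in[k] += sojourns[i]
        if states[i + 1] == k + 1:
            up[k] += 1
        else:
            down[k] += 1
    birth = {k: up[k] / time_in[k] for k in time_in if up[k]}
    death = {k: down[k] / time_in[k] for k in time_in if down[k]}
    return birth, death

def bd_steady_state(birth, death, kmax):
    # Local balance: p[k+1] = p[k] * birth[k] / death[k+1]; then normalize.
    p = [1.0]
    for k in range(kmax):
        p.append(p[-1] * birth[k] / death[k + 1])
    z = sum(p)
    return [x / z for x in p]
```

The point of the article is that even when the true process is not a BD process (e.g., a periodic Mt/GI/s queue), this fitted-BD steady state matches the empirical occupancy distribution.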

12.
This paper proposes a kurtosis correction (KC) method for constructing the X̄ and R control charts for symmetrical long‐tailed (leptokurtic) distributions. The control charts are similar to the Shewhart control charts and are very easy to use. The control limits are derived based on the degree of kurtosis estimated from the actual (subgroup) data. It is assumed that the underlying quality characteristic is symmetrically distributed and no other distributional and/or parameter assumptions are made. The control chart constants are tabulated and the performance of these charts is compared with that of the Shewhart control charts. For the case of the logistic distribution, the exact control limits are derived and are compared with the KC method and the Shewhart method. © 2007 Wiley Periodicals, Inc. Naval Research Logistics, 2007

13.
This article is concerned with a general multi‐class multi‐server priority queueing system with customer priority upgrades. The queueing system has various applications in inventory control, call centers operations, and health care management. Through a novel design of Lyapunov functions, and using matrix‐analytic methods, sufficient conditions for the queueing system to be stable or unstable are obtained. Bounds on the queue length process are obtained by a sample path method, with the help of an auxiliary queueing system. © 2012 Wiley Periodicals, Inc. Naval Research Logistics, 2012

14.
We consider the problem of scheduling orders on identical machines in parallel. Each order consists of one or more individual jobs. A job that belongs to an order can be processed by any one of the machines. Multiple machines can process the jobs of an order concurrently. No setup is required if a machine switches over from one job to another. Each order is released at time zero and has a positive weight. Preemptions are not allowed. The completion time of an order is the time at which all jobs of that order have been completed. The objective is to minimize the total weighted completion time of the orders. The problem is NP‐hard for any fixed number (≥2) of machines. Because of this, we focus our attention on two classes of heuristics, which we refer to as sequential two‐phase heuristics and dynamic two‐phase heuristics. We perform a worst case analysis as well as an empirical analysis of nine heuristics. Our analyses enable us to rank these heuristics according to their effectiveness, taking solution quality as well as running time into account. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2006
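A generic sequential two-phase heuristic of this kind can be sketched as follows: phase 1 sequences orders by a weighted-shortest-total-processing-time rule; phase 2 list-schedules each order's jobs, longest first, onto the currently least-loaded machine. The particular phase rules here are illustrative assumptions; the article analyzes nine specific heuristics.

```python
import heapq

def sequential_two_phase(orders, weights, m):
    # orders: list of lists of job processing times; m: number of machines.
    # Phase 1: sequence orders by total processing time / weight (WSPT-like).
    idx = sorted(range(len(orders)), key=lambda i: sum(orders[i]) / weights[i])
    # Phase 2: greedily assign each order's jobs (longest first) to machines.
    loads = [0.0] * m
    heapq.heapify(loads)
    total = 0.0
    for i in idx:
        finish = 0.0
        for p in sorted(orders[i], reverse=True):
            t = heapq.heappop(loads) + p
            finish = max(finish, t)        # order completes when its last job does
            heapq.heappush(loads, t)
        total += weights[i] * finish
    return total
```

On a single machine with orders of total sizes 2 and 1 and equal weights, the shorter order goes first, yielding completion times 1 and 3 and objective value 4.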

15.
Conventional control charts are often designed to optimize out‐of‐control average run length (ARL), while constraining in‐control ARL to a desired value. The widely employed grid search approach in statistical process control (SPC) is time‐consuming with unsatisfactory accuracy. Although the simulation‐based ARL gradient estimators proposed by Fu and Hu [Manag Sci 45 (1999), 395–413] can alleviate this issue, it still requires a large number of simulation runs to significantly reduce the variance of gradient estimators. This article proposes a novel ARL gradient estimation approach based on integral equation for efficient analysis and design of control charts. Although this article compares with the results of Fu and Hu [Manag Sci 45 (1999), 395–413] based on the exponentially weighted moving average (EWMA) control chart, the proposed approach has wide applicability as it can generally fit into any control chart with Markovian property under any distributions. It is shown that the proposed method is able to provide a fast, accurate, and easy‐to‐implement algorithm for the design and analysis of EWMA charts, as compared to the simulation‐based gradient estimation method. Moreover, the proposed gradient estimation method facilitates the computation of high‐order derivatives that are valuable in sensitivity analysis. The code is written in Matlab, which is available on request. © 2014 Wiley Periodicals, Inc. Naval Research Logistics 61: 223–237, 2014
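For context, the standard Markov-chain discretization (in the style of Brook and Evans) is a common deterministic alternative to both grid search over simulations and the article's integral-equation gradients for evaluating EWMA ARLs: partition the in-control band into states, build the transition matrix from the normal CDF, and solve (I − Q)·ARL = 1. This sketch computes ARL only, not gradients; the discretization size and standard-normal observations are illustrative choices.

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ewma_arl(lam, h, mu=0.0, n=101):
    # Zero-state ARL of an EWMA chart z_t = (1-lam) z_{t-1} + lam x_t,
    # x_t ~ N(mu, 1), limits at +/- h, via an n-state Markov chain.
    w = 2.0 * h / n
    mids = [-h + (j + 0.5) * w for j in range(n)]
    Q = np.empty((n, n))
    for i in range(n):
        drift = (1.0 - lam) * mids[i]
        for j in range(n):
            a = (-h + j * w - drift) / lam
            b = (-h + (j + 1) * w - drift) / lam
            Q[i, j] = Phi(b - mu) - Phi(a - mu)
    arl = np.linalg.solve(np.eye(n) - Q, np.ones(n))
    return arl[n // 2]  # chart started at z_0 = 0 (middle state)
```

With lam = 1 the chart reduces to a Shewhart chart, so ewma_arl(1.0, 3.0) should be close to 1/0.0027 ≈ 370, a useful sanity check on the discretization.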

16.
In this article, we study the Shewhart chart of Q statistics proposed for the detection of process mean shifts in start‐up processes and short runs. Exact expressions for the run‐length distribution of this chart are derived and evaluated using an efficient computational procedure. The procedure can be considerably faster than using direct simulation. We extend our work to analyze the practice of requiring multiple signals from the chart before responding, a practice sometimes followed with Shewhart charts. The results show that waiting to receive multiple signals severely reduces the probability of quickly detecting shifts in certain cases, and therefore may be considered a risky practice. Operational guidelines for practitioners implementing the chart are discussed. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2009
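The multiple-signal effect can be sketched with a simple forward recursion: given per-period signal probabilities p_t (for Q charts these vary over the start-up run), the distribution of the time of the r-th signal follows by tracking how many signals have occurred so far. Treating signals as independent across periods is an illustrative simplification of the article's exact run-length computation.

```python
def run_length_dist(p, r=1):
    # Returns dist with dist[t] = P(the r-th signal occurs at period t+1),
    # for independent per-period signal probabilities p[t].
    n = len(p)
    f = [1.0] + [0.0] * r          # f[j] = P(exactly j signals so far, not yet responded)
    dist = [0.0] * n
    for t in range(n):
        new = [0.0] * (r + 1)
        for j in range(r):
            new[j] += f[j] * (1 - p[t])
            new[j + 1] += f[j] * p[t]
        dist[t] = new[r]           # r-th signal happens exactly now
        new[r] = 0.0               # absorb: response has been triggered
        f = new
    return dist
```

With constant p = 0.5, responding on the first signal gives the geometric probabilities 0.5, 0.25, 0.125, …, while requiring r = 2 signals makes detection at period 1 impossible, illustrating why waiting for multiple signals delays detection.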

17.
We study two‐agent scheduling on a single sequential and compatible batching machine in which jobs in each batch are processed sequentially and compatibility means that jobs of distinct agents can be processed in a common batch. A fixed setup time is required before each batch is started. Each agent seeks to optimize some scheduling criterion that depends on the completion times of its own jobs only. We consider several scheduling problems arising from different combinations of some regular scheduling criteria, including the maximum cost (embracing lateness and makespan as its special cases), the total completion time, and the (weighted) number of tardy jobs. Our goal is to find an optimal schedule that minimizes the objective value of one agent, subject to an upper bound on the objective value of the other agent. For each problem under consideration, we provide either a polynomial‐time or a pseudo‐polynomial‐time algorithm to solve it. We also devise a fully polynomial‐time approximation scheme when both agents’ scheduling criteria are the weighted number of tardy jobs.

18.
A new connection between the distribution of component failure times of a coherent system and (adaptive) progressively Type‐II censored order statistics is established. Utilizing this property, we develop inferential procedures when the data is given by all component failures until system failure in two scenarios: In the case of complete information, we assume that the failed component is also observed whereas in the case of incomplete information, we have only information about the failure times but not about the components which have failed. In the first setting, we show that inferential methods for adaptive progressively Type‐II censored data can directly be applied to the problem. For incomplete information, we face the problem that the corresponding censoring plan is not observed and that the available inferential procedures depend on the knowledge of the used censoring plan. To get estimates for distributional parameters, we propose maximum likelihood estimators which can be obtained by solving the likelihood equations directly or via an Expectation–Maximization (EM)-type procedure. For an exponential distribution, we discuss also a linear estimator to estimate the mean. Moreover, we establish exact distributions for some estimators in the exponential case which can be used, for example, to construct exact confidence intervals. The results are illustrated by a five component bridge system. © 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 512–530, 2015

19.
Most modern processes involve multiple quality characteristics that are all measured on attribute levels, and their overall quality is determined by these characteristics simultaneously. The characteristic factors usually correlate with each other, making multivariate categorical control techniques a must. We study Phase I analysis of multivariate categorical processes (MCPs) to identify the presence of change‐points in the reference dataset. A directional change‐point detection method based on log‐linear models is proposed. The method exploits directional shift information and integrates MCPs into the unified framework of multivariate binomial and multivariate multinomial distributions. A diagnostic scheme for identifying the change‐point location and the shift direction is also suggested. Numerical simulations are conducted to demonstrate the detection effectiveness and the diagnostic accuracy. © 2013 Wiley Periodicals, Inc. Naval Research Logistics, 2013

20.
Ranking is a common task for selecting and evaluating alternatives. In the past few decades, combining rankings results from various sources into a consensus ranking has become an increasingly active research topic. In this study, we focus on the evaluation of rank aggregation methods. We first develop an experimental data generation method, which can provide ground truth ranking for alternatives based on their “inherent ability.” This experimental data generation method can generate the required individual synthetic rankings with adjustable accuracy and length. We propose characterizing the effectiveness of rank aggregation methods by calculating the Kendall tau distance between the aggregated ranking and the ground truth ranking. We then compare four classical rank aggregation methods and present some useful findings on the relative performances of the four methods. The results reveal that both the accuracy and length of individual rankings have a remarkable effect on the comparison results between rank aggregation methods. Our methods and results may be helpful to both researchers and decision‐makers.
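The evaluation pipeline described can be sketched end to end: draw noisy scores around known "inherent abilities", rank the items per voter, aggregate with a Borda count (one classical method; the article compares four), and measure the Kendall tau distance to the ground-truth ranking. All names and the Gaussian noise model are illustrative assumptions.

```python
import random

def synthetic_rankings(n_items, n_voters, noise, rng):
    # Item i has inherent ability i; each voter ranks by ability + Gaussian noise.
    ability = [float(i) for i in range(n_items)]
    truth = sorted(range(n_items), key=lambda i: -ability[i])
    rankings = []
    for _ in range(n_voters):
        score = [ability[i] + rng.gauss(0.0, noise) for i in range(n_items)]
        rankings.append(sorted(range(n_items), key=lambda i: -score[i]))
    return truth, rankings

def borda(rankings, n_items):
    # Borda count: position k (0-based) earns n_items - k points.
    pts = [0] * n_items
    for r in rankings:
        for pos, item in enumerate(r):
            pts[item] += n_items - pos
    return sorted(range(n_items), key=lambda i: -pts[i])

def kendall_distance(r1, r2):
    # Number of item pairs ordered differently by the two rankings.
    pos2 = {item: p for p, item in enumerate(r2)}
    d = 0
    for i in range(len(r1)):
        for j in range(i + 1, len(r1)):
            if pos2[r1[i]] > pos2[r1[j]]:
                d += 1
    return d
```

Raising `noise` degrades individual ranking accuracy, which is exactly the knob the article varies when comparing aggregation methods.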


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号