Similar Documents
 20 similar documents found (search time: 62 ms)
1.
Burn-in is a widely used method to improve the quality of products or systems after they have been produced. In this paper, we study a burn-in procedure for a system that is maintained under a periodic inspection and perfect repair policy. Assuming that the underlying lifetime distribution of the system has an initially decreasing and/or eventually increasing failure rate function, we derive upper and lower bounds for the optimal burn-in time, which maximizes system availability. Furthermore, adopting an age replacement policy, we derive upper and lower bounds for the optimal age parameter of the replacement policy for each fixed burn-in time, and a uniform upper bound for the optimal burn-in time given the age replacement policy. These results can be used to reduce the numerical work in determining both the optimal burn-in time and the optimal replacement policy. © 2007 Wiley Periodicals, Inc. Naval Research Logistics, 2007
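The bathtub-shaped failure rate assumed above can be explored numerically. The sketch below is not the paper's availability model; it simply grid-searches the burn-in time that maximizes mean residual life under an assumed bathtub hazard (decreasing infant-mortality term plus constant plus wear-out term). All hazard parameters are illustrative.

```python
import numpy as np

def bathtub_hazard(t, a=2.0, b=5.0, c=0.2, d=0.05):
    # decreasing infant-mortality term + constant term + increasing wear-out term
    return a * np.exp(-b * t) + c + d * t

t = np.linspace(0, 20, 4001)
dt = t[1] - t[0]
H = np.cumsum(bathtub_hazard(t)) * dt        # cumulative hazard (Riemann sum)
S = np.exp(-H)                               # survival function

# mean residual life after burning in for time b: MRL(b) = integral_b^inf S(u) du / S(b)
tail = (np.cumsum(S[::-1]) * dt)[::-1]       # approx. integral_t^inf S(u) du
mrl = tail / S
b_opt = t[np.argmax(mrl)]
print(f"burn-in time maximizing mean residual life ~ {b_opt:.2f}")
```

With these illustrative parameters, the optimum lies strictly between zero and the point where the hazard stops decreasing, which matches the kind of interval bounds the abstract describes.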

2.
The burn-in procedure is a manufacturing technique intended to eliminate early failures of a system or product. Burning in a component or system means subjecting it to a period of use prior to deployment in the field. Because burn-in is generally considered expensive, its length is typically limited; burn-in is therefore most often performed in an accelerated environment in order to shorten the process. A new failure rate model for an accelerated burn-in procedure, which incorporates the accelerated aging process induced by the accelerated environmental stress, is proposed. Under a more general assumption on the shape of the failure rate function of products, which includes the traditional bathtub-shaped failure rate function as a special case, upper bounds for the optimal burn-in time are derived. A numerical example is also given for illustration. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2006

3.
Today, many products are designed and manufactured to function for a long period of time before they fail. Determining product reliability is a great challenge to manufacturers of highly reliable products when only a relatively short period of time is available for internal life testing. In particular, it may be difficult to determine optimal burn-in parameters and characterize the residual life distribution. A promising alternative is to use data on a quality characteristic (QC) whose degradation over time can be related to product failure. Typically, product failure corresponds to the first passage time of the degradation path beyond a critical value. If degradation paths can be modeled properly, one can predict failure time and determine the life distribution without actually observing failures. In this paper, we first use a Wiener process to describe the continuous degradation path of the quality characteristic of the product. A Wiener process allows nonconstant variance and nonzero correlation among data collected at different time points. We propose a decision rule for classifying a unit as normal or weak, and give an economic model for determining the optimal termination time and other parameters of a burn-in test. Next, we propose a method for assessing the lifetime distribution of the passed units. The proposed methodologies are based only on the product's initial observed degradation data. Finally, an example of an electronic product, namely a contact image scanner (CIS), is used to illustrate the proposed procedure. © 2002 Wiley Periodicals, Inc. Naval Research Logistics, 2003
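The first-passage idea can be checked with a small simulation: for a Wiener degradation path with drift mu and diffusion sigma, the time to cross a critical level c follows an inverse Gaussian distribution with mean c/mu. All parameter values below are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, c = 0.5, 0.3, 10.0          # drift, diffusion, failure threshold (assumed)
dt, n_paths, n_steps = 0.01, 500, 6000  # horizon 60 time units, far beyond the mean

# simulate degradation paths W(t) = mu*t + sigma*B(t); failure = first passage of c
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = np.cumsum(increments, axis=1)
first_passage = (paths >= c).argmax(axis=1) * dt   # first index where the level is crossed

# the first-passage time is inverse Gaussian with mean c/mu = 20 here
print(f"empirical mean failure time: {first_passage.mean():.2f} (theory: {c / mu:.2f})")
```

This is why degradation data alone suffice to characterize the life distribution: the threshold-crossing law is fully determined by the fitted drift and diffusion.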

4.
In this paper we first introduce and study the notion of failure profiles which is based on the concepts of paths and cuts in system reliability. The relationship of failure profiles to two notions of component importance is highlighted, and an expression for the density function of the lifetime of a coherent system, with independent and not necessarily identical component lifetimes, is derived. We then demonstrate the way that failure profiles can be used to establish likelihood ratio orderings of lifetimes of two systems. Finally we use failure profiles to obtain bounds, in the likelihood ratio sense, on the lifetimes of coherent systems with independent and not necessarily identical component lifetimes. The bounds are relatively easy to compute and use. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004

5.
Burn-in is a technique to enhance reliability by eliminating weak items from a population of items having heterogeneous lifetimes. System burn-in can improve system reliability, but the conditions under which system burn-in should be performed after component burn-in remain a little-understood mathematical challenge. To derive such conditions, we first introduce a general model of heterogeneous system lifetimes, in which the component burn-in information and assembly problems are related to the prediction of system burn-in. Many existing system burn-in models become special cases, and two important results are identified. First, heterogeneous system lifetimes can be understood naturally as a consequence of heterogeneous component lifetimes and heterogeneous assembly quality. Second, system burn-in is effective if the assembly quality variation in the components and connections arranged in series is greater than a threshold, where the threshold depends on the system structure and component failure rates. © 2003 Wiley Periodicals, Inc. Naval Research Logistics 50: 364–380, 2003.

6.
We consider the problem of scheduling customer orders in a flow shop with the objective of minimizing the sum of tardiness, earliness (finished goods inventory holding), and intermediate (work-in-process) inventory holding costs. We formulate this problem as an integer program, and based on approximate solutions to two different, but closely related, Dantzig-Wolfe reformulations, we develop heuristics to minimize the total cost. We exploit the duality between Dantzig-Wolfe reformulation and Lagrangian relaxation to enhance our heuristics. This combined approach enables us to develop two different lower bounds on the optimal integer solution, together with intuitive approaches for obtaining near-optimal feasible integer solutions. To the best of our knowledge, this is the first paper that applies column generation to a scheduling problem with different types of strongly NP-hard pricing problems which are solved heuristically. The computational study demonstrates that our algorithms have a significant speed advantage over alternate methods, yield good lower bounds, and generate near-optimal feasible integer solutions for problem instances with many machines and a realistically large number of jobs. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004.

7.
In this article we present an optimum maintenance policy for a group of machines subject to stochastic failures where the repair cost and production loss due to the breakdown of machines are minimized. A nomograph was developed for machines with exponential failure time distributions. The optimal schedule time for repair as well as the total repair cost per cycle can be obtained easily from the nomograph. Conditions for the existence of a unique solution for the optimum schedule and the bounds for the schedule are discussed.

8.
We study an infinite-horizon, N-stage, serial production/inventory system with two transportation modes between stages: regular shipping and expedited shipping. The optimal inventory policy for this system is a top-down echelon base-stock policy, which can be computed through minimizing 2N nested convex functions recursively (Lawson and Porteus, Oper Res 48 (2000), 878–893). In this article, we first present some structural properties and comparative statics for the parameters of the optimal inventory policies; we then derive simple, newsvendor-type lower and upper bounds for the optimal control parameters. These results are used to develop near-optimal heuristic solutions for the echelon base-stock policies. Numerical studies show that the heuristic performs well. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2010
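As a minimal illustration of what a "newsvendor-type" bound looks like, the single-stage base-stock level solves a critical-ratio equation, S* = F^-1(b/(b+h)), where F is the leadtime-demand distribution; the paper's multi-echelon bounds generalize this idea. The cost parameters and normal leadtime-demand model below are assumptions for the sketch.

```python
from statistics import NormalDist

# single-stage newsvendor base-stock: S* = F^-1(b / (b + h)) for leadtime demand F
h, b = 1.0, 9.0                      # per-unit holding and backlogging costs (assumed)
mu_L, sigma_L = 100.0, 20.0          # leadtime demand, assumed normal

critical_ratio = b / (b + h)         # = 0.9: stock out in only 10% of cycles
S = NormalDist(mu_L, sigma_L).inv_cdf(critical_ratio)
print(f"critical ratio {critical_ratio:.2f} -> base-stock level ~ {S:.1f}")
```

The appeal of such bounds, as in the abstract, is that they need only the leadtime-demand distribution and two cost rates, not a recursive optimization.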

9.
We consider the single server Markovian queue subject to Poisson generated catastrophes. Whenever a catastrophe occurs, all customers are forced to abandon the system, the server is rendered inoperative, and an exponentially distributed repair time begins. During the repair time, new arrivals are allowed to join the system. We assume that the arriving customers decide whether to join the system or balk, based on a natural linear reward-cost structure with two types of rewards: a (usual) service reward for those customers that receive service and a (compensation) failure reward for those customers that are forced to abandon the system due to a catastrophe. We study the strategic behavior of the customers regarding balking and derive the corresponding (Nash) equilibrium strategies for the observable and unobservable cases. We show that both types of strategic behavior may be optimal: to avoid the crowd or to follow it. The crucial factor that determines the type of customer behavior is the relative value of the service reward to the failure compensation. © 2013 Wiley Periodicals, Inc. Naval Research Logistics, 2013

10.
In Assemble-To-Order (ATO) systems, situations may arise in which customer demand must be backlogged due to a shortage of some components, leaving available stock of other components unused. Such unused component stock is called remnant stock. Remnant stock is a consequence of both component ordering decisions and decisions regarding allocation of components to end-product demand. In this article, we examine periodic-review ATO systems under linear holding and backlogging costs with a component installation stock policy and a First-Come-First-Served (FCFS) allocation policy. We show that the FCFS allocation policy decouples the problem of optimal component allocation over time into deterministic period-by-period optimal component allocation problems. We denote the optimal allocation of components to end-product demand as multi-matching. We solve the multi-matching problem by an iterative algorithm. In addition, an approximation scheme for the joint replenishment and allocation optimization problem with both upper and lower bounds is proposed. Numerical experiments for base-stock component replenishment policies show that under optimal base-stock policies and optimal allocation, remnant stock holding costs must be taken into account. Finally, joint optimization incorporating optimal FCFS component allocation is valuable because it provides a benchmark against which heuristic methods can be compared. © 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 158–169, 2015

11.
In this article, a mixture of Type-I censoring and Type-II progressive censoring schemes, called an adaptive Type-II progressive censoring scheme, is introduced for life testing or reliability experiments. For this censoring scheme, the effective sample size m is fixed in advance, and the progressive censoring scheme is provided but the number of items progressively removed from the experiment upon failure may change during the experiment. If the experimental time exceeds a prefixed time T but the number of observed failures does not reach m, we terminate the experiment as soon as possible by adjusting the number of items progressively removed from the experiment upon failure. Computational formulae for the expected total test time are provided. Point and interval estimation of the failure rate for exponentially distributed failure times are discussed for this censoring scheme. The various methods are compared using Monte Carlo simulation. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2009
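For exponentially distributed failure times under an ordinary (non-adaptive) progressive Type-II censoring plan, the failure-rate MLE has a simple closed form: m divided by the total time on test, Sum (R_i + 1) X_i. The sketch below simulates this non-adaptive special case (not the adaptive scheme itself) with an assumed plan, exploiting the fact that the normalized spacings are i.i.d. exponential.

```python
import random
random.seed(7)

lam, n, R = 0.5, 15, [2, 0, 3, 0, 5]   # true rate, units on test, removal plan (all assumed)
m = len(R)                             # observed failures; note n = m + sum(R)

def progressive_sample():
    """Progressively Type-II censored Exp(lam) failure times via normalized spacings:
    with gamma_i units at risk, gamma_i * (X_i - X_{i-1}) ~ Exp(lam), i.i.d."""
    x, at_risk, out = 0.0, n, []
    for r in R:
        x += random.expovariate(lam) / at_risk
        out.append(x)
        at_risk -= r + 1               # the failed unit itself plus r withdrawn survivors
    return out

def mle(xs):
    ttt = sum((r + 1) * x for r, x in zip(R, xs))   # total time on test
    return m / ttt                                   # MLE of the failure rate

est = [mle(progressive_sample()) for _ in range(5000)]
avg = sum(est) / len(est)
# TTT ~ Gamma(m, lam), so the MLE's small-sample mean is lam * m / (m - 1)
print(f"mean MLE over 5000 runs: {avg:.3f}  (small-sample mean: {lam * m / (m - 1):.3f})")
```

The same total-time-on-test statistic is what the adaptive scheme's inference builds on, with the complication that the realized removal numbers are random.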

12.
A new connection between the distribution of component failure times of a coherent system and (adaptive) progressively Type-II censored order statistics is established. Utilizing this property, we develop inferential procedures when the data is given by all component failures until system failure in two scenarios: in the case of complete information, we assume that the failed component is also observed, whereas in the case of incomplete information, we have only information about the failure times but not about the components which have failed. In the first setting, we show that inferential methods for adaptive progressively Type-II censored data can directly be applied to the problem. For incomplete information, we face the problem that the corresponding censoring plan is not observed and that the available inferential procedures depend on knowledge of the censoring plan used. To get estimates for distributional parameters, we propose maximum likelihood estimators which can be obtained by solving the likelihood equations directly or via an Expectation-Maximization (EM) type algorithm. For an exponential distribution, we also discuss a linear estimator of the mean. Moreover, we establish exact distributions for some estimators in the exponential case which can be used, for example, to construct exact confidence intervals. The results are illustrated by a five-component bridge system. © 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 512–530, 2015

13.
Mixed censoring is a useful extension of Type I and Type II censoring and combines some advantages of both types of censoring. This paper proposes a general Bayesian framework for designing a variable acceptance sampling scheme with mixed censoring. A general loss function which includes the sampling cost, the time cost of testing, the salvage value, and the decision loss is employed to determine the Bayes risk and the corresponding optimal sampling plan. An explicit expression of the Bayes risk is derived. The new model can easily be adapted to create life testing models for different distributions. Specifically, two commonly used distributions, the exponential distribution and the Weibull distribution, are considered with a special decision loss function. We demonstrate that the proposed model is superior to models with Type I or Type II censoring. Numerical examples are reported to illustrate the effectiveness of the proposed method. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004

14.
The signature of a system with independent and identically distributed (i.i.d.) component lifetimes is a vector whose ith element is the probability that the ith component failure is fatal to the system. System signatures have been found to be quite useful tools in the study and comparison of engineered systems. In this article, the theory of system signatures is extended to versions of signatures applicable in dynamic reliability settings. It is shown that, when a working used system is inspected at time t and it is noted that precisely k failures have occurred, the vector s ∈ [0,1]^(n−k) whose jth element is the probability that the (k + j)th component failure is fatal to the system, for j = 1, 2, …, n − k, is a distribution-free measure of the design of the residual system. Next, known representation and preservation theorems for system signatures are generalized to dynamic versions. Two additional applications of dynamic signatures are studied in detail. The well-known "new better than used" (NBU) property of aging systems is extended to a uniform (UNBU) version, which compares systems when new and when used, conditional on the known number of failures. Sufficient conditions are given for a system to have the UNBU property. The application of dynamic signatures to the engineering practice of "burn-in" is also treated. Specifically, we consider the comparison of new systems with working used systems burned-in to a given ordered component failure time. In a reliability economics framework, we illustrate how one might compare a new system to one successfully burned-in to the kth component failure, and we identify circumstances in which burn-in is inferior (or is superior) to the fielding of a new system. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2009
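A system signature can be computed by brute force for small systems: enumerate all n! component failure orders and count how often the ith failure is fatal. The sketch below does this for the classical five-component bridge structure, whose minimal path sets are standard and whose signature (0, 1/5, 3/5, 1/5, 0) is well known.

```python
from itertools import permutations
from fractions import Fraction

def bridge_works(up):
    # minimal path sets of the 5-component bridge: {1,4}, {2,5}, {1,3,5}, {2,3,4}
    a, b, c, d, e = (up[i] for i in range(5))
    return (a and d) or (b and e) or (a and c and e) or (b and c and d)

def signature(works, n):
    counts = [0] * n
    for order in permutations(range(n)):   # an order in which the components fail
        up = [True] * n
        for i, comp in enumerate(order):
            up[comp] = False
            if not works(up):              # the (i+1)-th failure kills the system
                counts[i] += 1
                break
    total = sum(counts)                    # every ordering has exactly one fatal failure
    return [Fraction(c, total) for c in counts]

print(signature(bridge_works, 5))          # bridge signature: (0, 1/5, 3/5, 1/5, 0)
```

The dynamic signature in the abstract is the analogous count restricted to orderings consistent with the k failures already observed.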

15.
By running life tests at higher stress levels than normal operating conditions, accelerated life testing (ALT) quickly yields information on the lifetime distribution of a test unit. The lifetime at the design stress is then estimated through extrapolation using a regression model. In constant-stress testing, a unit is tested at a fixed stress level until failure or the termination point of the test, whereas step-stress testing allows the experimenter to gradually increase the stress levels at some prefixed time points during the test. In this work, the optimal k-level constant-stress and step-stress ALTs are compared for exponential failure data under complete sampling and Type-I censoring. The objective is to quantify the advantage of using step-stress testing relative to constant-stress testing. Assuming a log-linear life-stress relationship with the cumulative exposure model for the effect of changing stress in step-stress testing, the optimal design points are determined under C/D/A-optimality criteria. The efficiency of step-stress testing relative to constant-stress testing is then discussed in terms of the ratio of optimal objective functions based on the information matrix. © 2013 Wiley Periodicals, Inc. Naval Research Logistics 00: 000–000, 2013
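Under the cumulative exposure model with exponential lifetimes, a step-stress lifetime can be simulated by rescaling the exposure accumulated after the stress-change time tau. The sketch checks the simulated mean against the closed-form mean theta1*(1 - exp(-tau/theta1)) + theta2*exp(-tau/theta1); all parameter values are assumed for illustration.

```python
import math
import random
random.seed(3)

theta1, theta2, tau = 10.0, 2.0, 5.0   # mean lifetimes at the two stresses, hold time (assumed)

def step_stress_lifetime():
    """Cumulative exposure: exposure accrued at stress 1 carries over to stress 2,
    so residual life beyond tau is rescaled by theta2/theta1."""
    x = random.expovariate(1.0 / theta1)        # lifetime if stress 1 were held forever
    return x if x <= tau else tau + (x - tau) * theta2 / theta1

n = 100_000
sim = sum(step_stress_lifetime() for _ in range(n)) / n
theory = theta1 * (1 - math.exp(-tau / theta1)) + theta2 * math.exp(-tau / theta1)
print(f"mean lifetime: simulated {sim:.3f} vs analytic {theory:.3f}")
```

The rescaling trick works here because the exponential distribution is memoryless; for other lifetime models the cumulative exposure transformation is more involved.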

16.
In this paper, we discuss two-dimensional failure modeling for a system where degradation is due to age and usage. We extend the concept of minimal repair for the one-dimensional case to the two-dimensional case and characterize the failures over a two-dimensional region under minimal repair. An application of this important result to a manufacturer's servicing costs for a two-dimensional warranty policy is given, and we compare the minimal repair strategy with the strategy of replacement upon failure. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004.

17.
We consider the problem of scheduling a set of jobs on a single machine subject to random breakdowns. We focus on the preemptive-repeat model, which addresses the situation where, if a machine breaks down during the processing of a job, the work done on the job prior to the breakdown is lost and the job must be started from the beginning again when the machine resumes its work. We allow that (i) the uptimes and downtimes of the machine follow general probability distributions, (ii) the breakdown process of the machine depends upon the job being processed, (iii) the processing times of the jobs are random variables following arbitrary distributions, and (iv) after a breakdown, the processing time of a job may either remain the same (but unknown) amount, or be resampled according to its probability distribution. We first derive the optimal policy for a class of problems under the criterion of maximizing the expected discounted reward earned from completing all jobs. The result is then applied to obtain the optimal policies for other due date-related criteria. We also discuss a method to compute the moments and probability distributions of job completion times by using their Laplace transforms, which can convert a general stochastic scheduling problem to its deterministic equivalent. The weighted squared flowtime problem and the maintenance checkup and repair problem are analyzed as applications. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004
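The preemptive-repeat mechanism is easy to simulate. For the much simpler special case of exponential uptimes (rate nu) and a fixed job length t, a renewal argument gives E[C] = (e^(nu*t) - 1)(1/nu + E[downtime]); this is an illustrative special case, not the article's general model, and all parameters are assumed.

```python
import math
import random
random.seed(5)

nu, mean_down, t = 0.3, 2.0, 3.0   # breakdown rate, mean repair time, job length (assumed)

def completion_time():
    """Preemptive-repeat: a breakdown before the job finishes wastes all work on it."""
    clock = 0.0
    while True:
        up = random.expovariate(nu)            # time until the next breakdown
        if up >= t:                            # job finishes before the machine fails
            return clock + t
        clock += up + random.expovariate(1.0 / mean_down)   # lost work + repair time

n = 200_000
sim = sum(completion_time() for _ in range(n)) / n
# renewal equation E[C] = E[min(U,t)] + P(U < t) * (E[down] + E[C]) solves to:
theory = (math.exp(nu * t) - 1) * (1 / nu + mean_down)
print(f"E[completion]: simulated {sim:.3f} vs analytic {theory:.3f}")
```

The exponential growth in nu*t shows why long jobs are disproportionately expensive under preemptive-repeat, a feature the optimal policies in the article must account for.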

18.
Correlated improvement in yield and reliability has been observed in case studies on integrated circuits and electronic assemblies. This paper presents a model that incorporates yield and reliability, with the addition of a burn-in step, to explain their correlated improvement. The proposed model includes as special cases several yield and reliability models that have been previously published and thus provides a unifying framework. The model is used to derive a condition under which yield functions can be multiplied to obtain the overall yield. Yield and reliability are compared as a function of operation time, and an analytical condition for burn-in to be effective is also obtained. Finally, Poisson and negative binomial defect models are further considered to investigate how reliability can be estimated from yield. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004.

19.
A Markov-modulated shock model is studied in this paper. In this model, both the interarrival time and the magnitude of the shock are determined by a Markov process. The system fails whenever a shock magnitude exceeds a pre-specified level η. Nonexponential bounds on the reliability are given when the interarrival time has a heavy-tailed distribution. The exponential decay of the reliability function and the asymptotic failure rate are also considered for the light-tailed case. © 2005 Wiley Periodicals, Inc. Naval Research Logistics, 2005

20.
We consider the infinite horizon serial inventory system with both average cost and discounted cost criteria. The optimal echelon base-stock levels are obtained in terms of only the probability distributions of leadtime demands. This analysis yields a novel approach for developing bounds and heuristics for optimal inventory control policies. In addition to deriving the known bounds in the literature, we develop several new upper bounds for both average cost and discounted cost models. Numerical studies show that the bounds and heuristics are very close to optimal. © 2007 Wiley Periodicals, Inc. Naval Research Logistics, 2007
