Similar Articles
20 similar articles found.
1.
The accelerated degradation test (ADT) is an efficient tool for assessing the lifetime information of highly reliable products. However, conducting an ADT is very expensive, so constructing a cost-constrained ADT plan is a challenging issue for reliability analysts. Taking the experimental cost into consideration, this paper proposes a semi-analytical procedure to globally determine the total sample size, the testing stress levels, the measurement frequencies, and the number of measurements (within a degradation path) under a class of exponential dispersion degradation models. The proposed method is also extended to determine the global planning of a three-level compromise plan. Compared with conventional optimal plans obtained by grid-search algorithms, the proposed method not only provides better design insight for conducting an ADT plan, but also yields an efficient algorithm for obtaining a cost-constrained ADT plan.

2.
Accelerated degradation testing (ADT) is usually conducted under deterministic stresses such as constant‐stress, step‐stress, and cyclic‐stress. Based on ADT data, an ADT model is developed to predict reliability under normal (field) operating conditions. In engineering applications, the “standard” approach for reliability prediction assumes that the normal operating conditions are deterministic or simply uses the mean values of the stresses while ignoring their variability. Such an approach may lead to significant prediction errors. In this paper, we extend an ADT model obtained from constant‐stress ADT experiments to predict field reliability by considering the stress variations. A case study is provided to demonstrate the proposed statistical inference procedure. The accuracy of the procedure is verified by simulation using various distributions of field stresses. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2006.
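To make the point concrete, here is a minimal Monte Carlo sketch of the idea (not the paper's inference procedure): rather than plugging the mean stress into an ADT-based reliability model, average the model over a distribution of field stresses. The Weibull model with a log-linear scale in stress and all numerical values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def reliability(t, s, beta0=8.0, beta1=-1.2, shape=2.0):
    """Illustrative ADT-based model: Weibull with log-linear scale in stress s."""
    scale = np.exp(beta0 + beta1 * s)
    return np.exp(-(t / scale) ** shape)

t = 500.0
mean_stress = 2.0
field_stress = rng.normal(mean_stress, 0.4, 100_000)  # random field stress draws

# Plugging in the mean stress vs. averaging over the stress distribution:
print(f"plug-in-mean R(t):     {reliability(t, mean_stress):.4f}")
print(f"stress-averaged R(t):  {reliability(t, field_stress).mean():.4f}")
```

The gap between the two printed values illustrates the prediction error that can arise from ignoring field-stress variability.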

3.
Instead of measuring a Wiener degradation or performance process at predetermined time points to track degradation or performance of a product for estimating its lifetime, we propose to obtain the first‐passage times of the process over certain nonfailure thresholds. Based on only these intermediate data, we obtain the uniformly minimum variance unbiased estimator and uniformly most accurate confidence interval for the mean lifetime. For estimating the lifetime distribution function, we propose a modified maximum likelihood estimator and a new estimator and prove that, by increasing the sample size of the intermediate data, these estimators and the above‐mentioned estimator of the mean lifetime can achieve the same levels of accuracy as the estimators assuming one has failure times. Thus, our method of using only intermediate data is useful for highly reliable products when their failure times are difficult to obtain. Furthermore, we show that the proposed new estimator of the lifetime distribution function is more accurate than the standard and modified maximum likelihood estimators. We also obtain approximate confidence intervals for the lifetime distribution function and its percentiles. Finally, we use light‐emitting diodes as an example to illustrate our method and demonstrate how to validate the Wiener assumption during the testing. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008
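As a rough illustration of the intermediate-data idea (not the paper's exact UMVUE or confidence-interval construction), the following sketch simulates Wiener degradation paths, records first-passage times over nonfailure thresholds, and estimates the mean lifetime at the failure threshold from those hitting times alone. The pooled drift estimator and all numerical settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_times(drift, sigma, thresholds, dt=1e-3, t_max=200.0):
    """Simulate one Wiener path W(t) = drift*t + sigma*B(t) on a fine grid
    and return its first-passage time over each nonfailure threshold."""
    n = int(t_max / dt)
    path = np.cumsum(drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n))
    times = []
    for a in thresholds:
        idx = np.argmax(path >= a)      # first grid index where the path reaches a
        if path[idx] < a:               # threshold never reached within t_max
            return None
        times.append((idx + 1) * dt)
    return np.array(times)

drift, sigma = 0.5, 0.4
nonfailure = np.array([1.0, 2.0, 3.0])   # intermediate (nonfailure) thresholds
critical = 10.0                          # failure threshold

# Pool hitting times from several test units (intermediate data only).
units = [first_passage_times(drift, sigma, nonfailure) for _ in range(30)]
hits = np.array([u for u in units if u is not None])

# Illustrative pooled drift estimate: sum of thresholds over sum of hitting
# times; mean lifetime at the failure threshold is then critical / drift.
drift_hat = hits.shape[0] * nonfailure.sum() / hits.sum()
print(f"estimated drift {drift_hat:.3f}, "
      f"estimated mean lifetime {critical / drift_hat:.2f} "
      f"(true {critical / drift:.2f})")
```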

4.
Reliability data obtained from life tests and degradation tests have been extensively used for purposes such as estimating product reliability and predicting warranty costs. When there is more than one candidate model, an important task is to discriminate between the models. In the literature, model discrimination has often been treated as a hypothesis test and carried out by a pairwise model discrimination procedure. Because the null distribution of the test statistic is unavailable in most cases, large-sample approximations and the bootstrap have frequently been used to find the acceptance region of the test. Although these two methods are asymptotically accurate, their performance in terms of size and power is not satisfactory when the sample size is small. To enhance the small-sample performance, we propose a new method to approximate the null distribution, which builds on the idea of generalized pivots. Conventionally, generalized pivots have been used for interval estimation of a parameter, or a function of parameters, in the presence of nuisance parameters. In this study, we extend the idea of generalized pivots to find the acceptance region of the model discrimination test. Through extensive simulations, we show that the proposed method performs better than existing methods in discriminating between two lifetime distributions or two degradation models over a wide range of sample sizes. Two real examples are used to illustrate the proposed methods.

5.
Degradation experiments are widely used to assess the reliability of highly reliable products which are not likely to fail under the traditional life tests. In order to conduct a degradation experiment efficiently, several factors, such as the inspection frequency, the sample size, and the termination time, need to be considered carefully. These factors not only affect the experimental cost, but also affect the precision of the estimate of a product's lifetime. In this paper, we deal with the optimal design of a degradation experiment. Under the constraint that the total experimental cost does not exceed a predetermined budget, the optimal decision variables are solved by minimizing the variance of the estimated 100pth percentile of the lifetime distribution of the product. An example is provided to illustrate the proposed method. Finally, a simulation study is conducted to investigate the robustness of this proposed method. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 689–706, 1999
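The structure of such a design problem can be sketched as a budget-constrained search. The variance function below is a hypothetical placeholder (the paper derives the actual variance of the estimated 100pth percentile), and the cost coefficients and candidate grids are invented for illustration.

```python
import itertools

# Hypothetical cost coefficients and budget (placeholders, not from the paper).
C_UNIT, C_INSPECTION, C_TIME, BUDGET = 50.0, 2.0, 1.0, 1500.0

def cost(n, f, tau):
    """Total cost: test units + inspections (every f hours on n units) + duration."""
    return C_UNIT * n + C_INSPECTION * n * (tau // f) + C_TIME * tau

def var_proxy(n, f, tau):
    """Hypothetical stand-in for Var of the estimated 100p-th lifetime percentile:
    decreases with more units, more inspections, and longer observation."""
    return 1.0 / (n * (tau // f)) + 5.0 / (n * tau)

candidates = itertools.product(range(5, 31),          # sample size n
                               (12, 24, 48),          # inspection interval f (hours)
                               (240, 480, 720, 960))  # termination time tau (hours)
feasible = [(n, f, tau) for n, f, tau in candidates if cost(n, f, tau) <= BUDGET]
best = min(feasible, key=lambda d: var_proxy(*d))
print("optimal (n, f, tau) under budget:", best, " cost:", cost(*best))
```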

6.
Accelerated life testing (ALT) is concerned with subjecting items to a series of stresses at several levels higher than those experienced under normal conditions so as to obtain the lifetime distribution of items under normal levels. A parametric approach to this problem requires two assumptions. First, the lifetime of an item is assumed to have the same distribution under all stress levels, that is, a change of stress level does not change the shape of the life distribution but changes only its scale. Second, a functional relationship is assumed between the parameters of the life distribution and the accelerating stresses. A nonparametric approach, on the other hand, assumes a functional relationship between the life distribution functions at the accelerated and nonaccelerated stress levels without making any assumptions on the forms of the distribution functions. In this paper, we treat the problem nonparametrically. In particular, we extend the methods of Shaked, Zimmer, and Ball [7] and Strelec and Viertl [8] and develop a nonparametric estimation procedure for a version of the generalized Arrhenius model with two stress variables assuming a linear acceleration function. We obtain consistent estimates as well as confidence intervals of the parameters of the life distribution under normal stress level and compare our nonparametric method with parametric methods assuming exponential, Weibull and lognormal life distributions using both real life and simulated data. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 629–644, 1998
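For context, the single-stress Arrhenius life–stress relationship and one common two-stress generalization of it (a generalized Eyring form, shown generically; the article's own model assumes a linear acceleration function) are

$$
\tau(T)=A\exp\!\left(\frac{E_a}{kT}\right),
\qquad
\tau(T,S)=A\exp\!\left(\frac{E_a}{kT}\right)\exp\!\left(\beta_1 S+\beta_2\frac{S}{T}\right),
$$

where $\tau$ is nominal life, $T$ absolute temperature, $S$ the second stress variable, $E_a$ the activation energy, and $k$ Boltzmann's constant.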

7.
In this article, we discuss the optimal allocation problem in a multiple stress levels life‐testing experiment when an extreme value regression model is used for statistical analysis. We derive the maximum likelihood estimators, the Fisher information, and the asymptotic variance–covariance matrix of the maximum likelihood estimators. Three optimality criteria are defined and the optimal allocation of units for two‐ and k‐stress level situations are determined. We demonstrate the efficiency of the optimal allocation of units in a multiple stress levels life‐testing experiment by using real experimental situations discussed earlier by McCool and Nelson and Meeker. Monte Carlo simulations are used to show that the optimality results hold for small sample sizes as well. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2007
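A standard extreme value regression formulation of this kind, written generically (consistent with, though not necessarily identical to, the article's model), is

$$
Y_{ij}=\log T_{ij}=\beta_0+\beta_1 x_i+\sigma\,\varepsilon_{ij},
\qquad
P(\varepsilon_{ij}\le z)=1-\exp\!\left(-e^{z}\right),
$$

so that at each stress level $x_i$ the lifetime $T_{ij}$ is Weibull; the maximum likelihood estimators, Fisher information, and optimal allocations are then computed under this smallest-extreme-value error law.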

8.
In this paper, we study an m-parallel machine scheduling problem with a non-crossing constraint motivated by crane scheduling in ports. We decompose the problem so that time allocations can be determined once crane assignments are known, and construct a backtracking search scheme that employs domain reduction and pruning strategies. Simple approximation heuristics are developed, one of which guarantees solutions at most twice the optimum. For large-scale problems, a simulated annealing heuristic with random neighborhood generation is provided. Computational experiments are conducted to test the algorithms. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2007.

9.
We consider the problem of scheduling multiprocessor tasks with prespecified processor allocations to minimize the total completion time. The complexity of both the preemptive and nonpreemptive cases of the two-processor problem is studied. We show that the preemptive case is solvable in O(n log n) time. In the nonpreemptive case, we prove that the problem is NP-hard in the strong sense, which answers an open question raised in Hoogeveen, van de Velde, and Veltman (1994). An efficient heuristic is also developed for this case; its relative error is at most 100%. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 231–242, 1998

10.
By running life tests at higher stress levels than normal operating conditions, accelerated life testing (ALT) quickly yields information on the lifetime distribution of a test unit. The lifetime at the design stress is then estimated through extrapolation using a regression model. In constant‐stress testing, a unit is tested at a fixed stress level until failure or the termination time point of test, whereas step‐stress testing allows the experimenter to gradually increase the stress levels at some prefixed time points during the test. In this work, the optimal k‐level constant‐stress and step‐stress ALTs are compared for the exponential failure data under complete sampling and Type‐I censoring. The objective is to quantify the advantage of using the step‐stress testing relative to the constant‐stress one. Assuming a log‐linear life–stress relationship with the cumulative exposure model for the effect of changing stress in step‐stress testing, the optimal design points are determined under C/D/A‐optimality criteria. The efficiency of step‐stress testing to constant‐stress one is then discussed in terms of the ratio of optimal objective functions based on the information matrix. © 2013 Wiley Periodicals, Inc. Naval Research Logistics 00: 000–000, 2013
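For the exponential case, the cumulative exposure model takes a simple closed form. For a two-level step-stress test with mean life $\theta_i=\exp(\beta_0+\beta_1 x_i)$ at stress $x_i$ and a single stress change at time $\tau_1$, the standard construction gives

$$
F(t)=
\begin{cases}
1-\exp(-t/\theta_1), & 0\le t<\tau_1,\\[4pt]
1-\exp\!\left(-\dfrac{t-\tau_1}{\theta_2}-\dfrac{\tau_1}{\theta_1}\right), & t\ge\tau_1,
\end{cases}
$$

since the equivalent starting time at the second stress solves $G_2(e_1)=F(\tau_1)$, yielding $e_1=\tau_1\theta_2/\theta_1$.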

11.
Step-stress accelerated degradation testing and its application in the reliability assessment of electronic products
To rapidly assess the reliability of highly reliable, long-life electronic products, a method based on step-stress accelerated degradation testing (SSADT) is proposed. The paper first presents the SSADT test procedure and its basic assumptions, then gives a method for converting step-stress degradation data into constant-stress degradation data. On this basis, a reliability assessment algorithm for SSADT based on pseudo-failure lifetimes is proposed, and the method is finally validated with experimental data. Compared with constant-stress degradation testing, the method can greatly shorten the test time while keeping the sample size unchanged, and is therefore more cost-effective.
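A minimal sketch of the pseudo-failure-lifetime step described above, applied after step-stress data have been converted to equivalent constant-stress time: fit each unit's degradation path, extrapolate to the failure threshold to obtain a pseudo failure time, and fit a lifetime distribution to those times. The linear path model, the threshold, the lognormal choice, and the data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
threshold = 10.0                             # illustrative failure threshold
t = np.array([0., 50., 100., 150., 200.])    # measurement times (already converted
                                             # to equivalent constant-stress time)

# Simulated degradation paths for 8 units: y = a + b*t + noise (illustrative).
units = [0.5 + rng.uniform(0.03, 0.05) * t + rng.normal(0, 0.1, t.size)
         for _ in range(8)]

pseudo_lives = []
for y in units:
    b, a = np.polyfit(t, y, 1)                 # least-squares linear fit per unit
    pseudo_lives.append((threshold - a) / b)   # extrapolated threshold-crossing time

pseudo_lives = np.array(pseudo_lives)
# Fit a lognormal lifetime model to the pseudo failure times (illustrative choice).
mu, s = np.log(pseudo_lives).mean(), np.log(pseudo_lives).std(ddof=1)
print(f"pseudo lives: {np.round(pseudo_lives, 1)}")
print(f"lognormal fit: median life {np.exp(mu):.0f}, log-sd {s:.3f}")
```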

12.
Accelerated degradation testing is widely used in the reliability assessment of long-life products such as rubber seals, and results obtained at high stress levels must be extrapolated to normal stress levels. Accurate reliability assessment requires that the degradation failure mechanism under accelerated stress be consistent with that under normal stress. Based on the likelihood ratio test, a method and procedure for verifying failure-mechanism consistency in accelerated degradation tests are proposed. For the two cases of consistent and changed failure mechanisms, log-linear and non-log-linear acceleration models are proposed, and a mixed-effects model is used to describe the product degradation process. The likelihood ratio test is used to judge whether the acceleration model parameters change, thereby completing the consistency check. A simulation example and an application case show that the method can effectively detect whether the failure mechanism of rubber seals changes and can identify the stress-level boundary within which the mechanism remains unchanged.
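A generic likelihood-ratio consistency check of the kind described, heavily simplified (no mixed effects; illustrative data and an illustrative log-linear model): fit a reduced model with a common acceleration parameter across stress levels and a full model with level-specific parameters, then compare twice the log-likelihood gap with a chi-square quantile.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2, norm

rng = np.random.default_rng(2)
# Illustrative data: degradation rates observed at three stress levels.
stress = np.repeat([1.0, 1.5, 2.0], 20)
rate = np.exp(0.3 + 0.8 * stress) + rng.normal(0, 0.2, stress.size)

def negloglik(params, slope_per_level):
    if slope_per_level:                  # full model: level-specific slopes
        b0, b1, b2, b3, s = params
        slope = np.select([stress == 1.0, stress == 1.5, stress == 2.0],
                          [b1, b2, b3])
    else:                                # reduced model: one common slope
        b0, b1, s = params
        slope = b1
    mu = np.exp(b0 + slope * stress)     # illustrative log-linear mean rate
    return -norm.logpdf(rate, mu, abs(s)).sum()

red = minimize(negloglik, [0.0, 1.0, 1.0], args=(False,), method="Nelder-Mead")
full = minimize(negloglik, [0.0, 1.0, 1.0, 1.0, 1.0], args=(True,),
                method="Nelder-Mead")
lr = 2 * (red.fun - full.fun)            # LR statistic, df = 2 extra parameters
print(f"LR = {lr:.2f}, chi2(0.95, df=2) = {chi2.ppf(0.95, 2):.2f}",
      "-> mechanism change" if lr > chi2.ppf(0.95, 2) else "-> consistent")
```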

13.
We consider the optimal control of a production-inventory system with a single product and two customer classes, where items are produced one unit at a time. Upon arrival, customer orders can be fulfilled from existing inventory (if there is any), backordered, or rejected. The two classes are differentiated by their backorder and lost-sales costs. At each decision epoch, we must determine whether or not to produce an item and, if so, whether to use it to increase inventory or to reduce the backlog; we must also determine whether to satisfy an arriving order from a particular class, backorder it, or reject it. In doing so, we must balance inventory holding costs against the costs of backordering and lost sales. We formulate the problem as a Markov decision process and use it to characterize the structure of the optimal policy. We show that the optimal policy can be described by three state-dependent thresholds: a production base-stock level and two order-admission levels, one for each class. The production base-stock level determines when production takes place and how to allocate items that are produced; it also determines when orders from the class with the lower shortage costs (Class 2) are backordered rather than fulfilled from inventory. The order-admission levels determine when orders should be rejected. We show that the threshold levels are monotonic (either nonincreasing or nondecreasing) in the backorder level of Class 2, and we characterize analytically the sensitivity of these thresholds to the various cost parameters. Using numerical results, we compare the performance of the optimal policy against several heuristics and show that those that do not allow for the possibility of both backordering and rejecting orders can perform poorly. © 2010 Wiley Periodicals, Inc. Naval Research Logistics 2010

14.
We present transient and asymptotic reliability indices for a single-unit system that is subject to Markov-modulated shocks and wear. The transient results are derived from the (transform) solution of an integro-differential equation describing the joint distribution of the cumulative degradation process and the state of the modulating process. Additionally, we prove the asymptotic normality of a properly centered and time-scaled version of the cumulative degradation at time t. This asymptotic result leads to a simple normal approximation for a properly centered and space-scaled version of the system's lifetime distribution. Two numerical examples illustrate the quality of the normal approximation. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 2009
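Generically, a normal approximation of the kind described works as follows for a nondecreasing cumulative degradation process $X(t)$ with long-run mean rate $\mu$ and variance rate $\sigma^2$ (the paper's centering and scaling constants come from its Markov-modulated model):

$$
\frac{X(t)-\mu t}{\sigma\sqrt{t}} \;\xrightarrow{d}\; N(0,1) \quad (t\to\infty),
$$

so that, with failure threshold $c$ and lifetime $T=\inf\{t\colon X(t)\ge c\}$,

$$
P(T\le t) \;=\; P\big(X(t)\ge c\big) \;\approx\; \Phi\!\left(\frac{\mu t - c}{\sigma\sqrt{t}}\right).
$$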

15.
Today, many products are designed and manufactured to function for a long period of time before they fail. Determining product reliability is a great challenge to manufacturers of highly reliable products with only a relatively short period of time available for internal life testing. In particular, it may be difficult to determine optimal burn‐in parameters and characterize the residual life distribution. A promising alternative is to use data on a quality characteristic (QC) whose degradation over time can be related to product failure. Typically, product failure corresponds to the first passage time of the degradation path beyond a critical value. If degradation paths can be modeled properly, one can predict failure time and determine the life distribution without actually observing failures. In this paper, we first use a Wiener process to describe the continuous degradation path of the quality characteristic of the product. A Wiener process allows nonconstant variance and nonzero correlation among data collected at different time points. We propose a decision rule for classifying a unit as normal or weak, and give an economic model for determining the optimal termination time and other parameters of a burn‐in test. Next, we propose a method for assessing the product's lifetime distribution of the passed units. The proposed methodologies are all based only on the product's initial observed degradation data. Finally, an example of an electronic product, namely contact image scanner (CIS), is used to illustrate the proposed procedure. © 2002 Wiley Periodicals, Inc. Naval Research Logistics, 2003
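The lifetime model underlying Wiener-process degradation approaches of this kind is the inverse Gaussian first-passage law: for $X(t)=\nu t+\sigma B(t)$ with drift $\nu>0$ and critical value $c$,

$$
T=\inf\{t\colon X(t)\ge c\} \;\sim\; \mathrm{IG}\!\left(\frac{c}{\nu},\,\frac{c^{2}}{\sigma^{2}}\right),
\qquad
f_T(t)=\frac{c}{\sigma\sqrt{2\pi t^{3}}}\,\exp\!\left(-\frac{(c-\nu t)^{2}}{2\sigma^{2}t}\right),\quad t>0,
$$

with mean lifetime $c/\nu$. Fitting the drift and diffusion parameters from initial degradation data thus determines the life distribution without observing failures.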

16.
We consider the problem of determining the capacity to assign to each arc in a given network, subject to uncertainty in the supply and/or demand of each node. This design problem underlies many real‐world applications, such as the design of power transmission and telecommunications networks. We first consider the case where a set of supply/demand scenarios are provided, and we must determine the minimum‐cost set of arc capacities such that a feasible flow exists for each scenario. We briefly review existing theoretical approaches to solving this problem and explore implementation strategies to reduce run times. With this as a foundation, our primary focus is on a chance‐constrained version of the problem in which α% of the scenarios must be feasible under the chosen capacity, where α is a user‐defined parameter and the specific scenarios to be satisfied are not predetermined. We describe an algorithm which utilizes a separation routine for identifying violated cut‐sets which can solve the problem to optimality, and we present computational results. We also present a novel greedy algorithm, our primary contribution, which can be used to solve for a high quality heuristic solution. We present computational analysis to evaluate the performance of our proposed approaches. © 2016 Wiley Periodicals, Inc. Naval Research Logistics 63: 236–246, 2016

17.
Within a reasonable life-testing time, how to improve the reliability of highly reliable products is one of the great challenges facing today's manufacturers. Using a resolution III experiment together with a degradation test, Tseng, Hamada, and Chiao (1995) presented an interesting case study of improving the reliability of fluorescent lamps. In conducting such an experiment, however, they did not address how to choose the optimal settings of variables such as sample size, inspection frequency, and termination time for each run, which influence both the correct identification of significant factors and the experimental cost. Assuming that the product's degradation paths satisfy Wiener processes, this paper proposes a systematic approach to this problem. First, an intuitively appealing identification rule is proposed. Next, under constraints on the minimum probability of a correct decision and the maximum probability of an incorrect decision for the proposed identification rule, the optimum test plan (including the inspection frequency, sample size, and termination time for each run) is obtained by minimizing the total experimental cost. An example is provided to illustrate the proposed method. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 514–526, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10024

18.
We consider a two‐level system in which a warehouse manages the inventories of multiple retailers. Each retailer employs an order‐up‐to level inventory policy over T periods and faces an external demand which is dynamic and known. A retailer's inventory should be raised to its maximum limit when replenished. The problem is to jointly decide on replenishment times and quantities of warehouse and retailers so as to minimize the total costs in the system. Unlike the case in the single level lot‐sizing problem, we cannot assume that the initial inventory will be zero without loss of generality. We propose a strong mixed integer program formulation for the problem with zero and nonzero initial inventories at the warehouse. The strong formulation for the zero initial inventory case has only T binary variables and represents the convex hull of the feasible region of the problem when there is only one retailer. Computational results with a state‐of‐the art solver reveal that our formulations are very effective in solving large‐size instances to optimality. © 2010 Wiley Periodicals, Inc. Naval Research Logistics, 2010
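For reference, the classical single-level uncapacitated lot-sizing core on which such formulations build (not the paper's strong two-level formulation) can be written as

$$
\min \sum_{t=1}^{T}\left(f_t\,y_t + c_t\,x_t + h_t\,I_t\right)
\quad\text{s.t.}\quad
I_{t-1}+x_t = d_t + I_t,\qquad
x_t \le \Big(\sum_{s=t}^{T} d_s\Big)\, y_t,\qquad
y_t\in\{0,1\},\;\; x_t,\,I_t\ge 0,
$$

with setup cost $f_t$, unit order cost $c_t$, holding cost $h_t$, demand $d_t$, and initial inventory $I_0$ given (zero or nonzero, as discussed above).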

19.
A method is presented to locate and allocate p new facilities in relation to n existing facilities. Each of the n existing facilities has a requirement flow which must be supplied by the new facilities. Rectangular distances are assumed to exist between all facilities. The algorithm proceeds in two stages. In the first stage a set of all possible optimal new facility locations is determined by a set reduction algorithm. The resultant problem is shown to be equivalent to finding the p-median of a weighted connected graph. In the second stage the optimal locations and allocations are obtained by using a technique for solving the p-median problem.
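The second stage can be illustrated with a tiny brute-force p-median solver under rectangular (L1) distances; the candidate set below stands in for the output of the first-stage set reduction, the data are invented, and realistic instances require the specialized p-median techniques the article refers to.

```python
import itertools

# Existing facilities: (x, y, requirement flow) -- illustrative data.
existing = [(0, 0, 3), (4, 1, 1), (1, 5, 2), (6, 6, 4), (3, 3, 2)]
candidates = [(0, 0), (4, 1), (1, 5), (6, 6), (3, 3)]  # stand-in for stage-1 set
p = 2

def l1(a, b):
    """Rectangular (L1) distance between two points."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

best_cost, best_sites = float("inf"), None
for sites in itertools.combinations(candidates, p):
    # Each existing facility is allocated to its nearest new site.
    cost = sum(w * min(l1((x, y), s) for s in sites) for x, y, w in existing)
    if cost < best_cost:
        best_cost, best_sites = cost, sites
print("optimal sites:", best_sites, " weighted L1 cost:", best_cost)
```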

20.
In this study, we propose a new parsimonious policy for the stochastic joint replenishment problem in a single‐location, N‐item setting. The replenishment decisions are based on both group reorder point‐group order quantity and the time since the last decision epoch. We derive the expressions for the key operating characteristics of the inventory system for both unit and compound Poisson demands. In a comprehensive numerical study, we compare the performance of the proposed policy with that of existing ones over a standard test bed. Our numerical results indicate that the proposed policy dominates the existing ones in 100 of 139 instances with comparably significant savings for unit demands. With batch demands, the savings increase as the stochasticity of demand size gets larger. We also observe that it performs well in environments with low demand diversity across items. The inventory system herein also models a two‐echelon setting with a single item, multiple retailers, and cross docking at the upper echelon. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2006
