Similar Documents
20 similar documents found
1.
Degradation experiments are widely used to assess the reliability of highly reliable products which are not likely to fail under the traditional life tests. In order to conduct a degradation experiment efficiently, several factors, such as the inspection frequency, the sample size, and the termination time, need to be considered carefully. These factors not only affect the experimental cost, but also affect the precision of the estimate of a product's lifetime. In this paper, we deal with the optimal design of a degradation experiment. Under the constraint that the total experimental cost does not exceed a predetermined budget, the optimal decision variables are solved by minimizing the variance of the estimated 100pth percentile of the lifetime distribution of the product. An example is provided to illustrate the proposed method. Finally, a simulation study is conducted to investigate the robustness of this proposed method. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 689–706, 1999

2.
Today, many products are designed and manufactured to function for a long period of time before they fail. Determining product reliability is a great challenge to manufacturers of highly reliable products with only a relatively short period of time available for internal life testing. In particular, it may be difficult to determine optimal burn‐in parameters and characterize the residual life distribution. A promising alternative is to use data on a quality characteristic (QC) whose degradation over time can be related to product failure. Typically, product failure corresponds to the first passage time of the degradation path beyond a critical value. If degradation paths can be modeled properly, one can predict failure time and determine the life distribution without actually observing failures. In this paper, we first use a Wiener process to describe the continuous degradation path of the quality characteristic of the product. A Wiener process allows nonconstant variance and nonzero correlation among data collected at different time points. We propose a decision rule for classifying a unit as normal or weak, and give an economic model for determining the optimal termination time and other parameters of a burn‐in test. Next, we propose a method for assessing the product's lifetime distribution of the passed units. The proposed methodologies are all based only on the product's initial observed degradation data. Finally, an example of an electronic product, namely contact image scanner (CIS), is used to illustrate the proposed procedure. © 2002 Wiley Periodicals, Inc. Naval Research Logistics, 2003
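As a rough illustration of the first-passage idea (not the authors' estimation procedure), the following sketch simulates Wiener degradation paths with assumed drift and diffusion parameters and an assumed critical level, and reads off an empirical lifetime distribution without observing real failures:

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 0.8, 0.5       # assumed drift and diffusion of the degradation process
D = 10.0                   # hypothetical critical degradation level defining failure
dt, n_steps = 0.05, 1200   # time grid: 1200 steps of length 0.05 (horizon t = 60)
n_units = 5000

# Simulate all degradation paths at once: cumulative sums of Wiener increments.
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_units, n_steps))
paths = np.cumsum(increments, axis=1)

# Failure time of each unit = first time its path reaches the critical level D.
crossed = paths >= D
has_failed = crossed.any(axis=1)
first_idx = np.argmax(crossed, axis=1)                 # index of first crossing per row
lifetimes = np.where(has_failed, (first_idx + 1) * dt, np.inf)

# For a Wiener process the first-passage law is inverse Gaussian with mean D/mu,
# so the empirical estimates below should land close to that benchmark.
print("estimated mean lifetime:", lifetimes[has_failed].mean(), "vs D/mu =", D / mu)
print("estimated P(failure by t = 15):", (lifetimes <= 15.0).mean())
```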

3.
Optimal allocation and control of limited inspection capacity for multiple production processes are considered. The production processes, which operate independently but share inspection capacity, are subject to random failures and are partially observed through inspection. This study proposes an approach of stochastic allocation, using a Markov decision process, to minimize expected total discounted cost over an infinite time horizon. Both an optimal model and a disaggregate approximation model are introduced. The study provides some structural results and establishes that the control policy is of a threshold type. Numerical experiments demonstrate a significantly decreased amount of computational time required for the disaggregate approach when compared to the optimal solution, while generating very good control policies. © 2002 John Wiley & Sons, Inc. Naval Research Logistics, 49: 78–94, 2002; DOI 10.1002/nav.1049

4.
Motivated by wind energy applications, we consider the problem of optimally replacing a stochastically degrading component that resides and operates in a partially observable environment. The component's rate of degradation is modulated by the stochastic environment process, and the component fails when its accumulated degradation first reaches a fixed threshold. Assuming periodic inspection of the component, the objective is to minimize the long‐run average cost per unit time of performing preventive and reactive replacements for two distinct cases. The first case examines instantaneous replacements and fixed costs, while the second considers time‐consuming replacements and revenue losses accrued during periods of unavailability. Mixed state space, partially observable Markov decision process models are formulated and solved, both of which reveal the optimality of environment‐dependent threshold policies with respect to the component's cumulative degradation level. Additionally, it is shown that for each degradation value, a threshold policy with respect to the environment belief state is optimal if the environment alternates between two states. The threshold policies are illustrated by way of numerical examples using both synthetic and real wind turbine data. © 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 395–415, 2015

5.
In this paper we address the problem of how to decide when to terminate the testing/modification process and to release the software during the development phase. We present a Bayesian decision theoretic approach by formulating the optimal release problem as a sequential decision problem. By using a non‐Gaussian Kalman filter type model, proposed by Chen and Singpurwalla (1994), to track software reliability, we are able to obtain tractable expressions for inference and determine a one‐stage look ahead stopping rule under reasonable conditions and a class of loss functions. © 2002 Wiley Periodicals, Inc. Naval Research Logistics, 2003

6.
In this paper we optimally control service rates for an inventory system of service facilities with perishable products. We consider a finite capacity system where arrivals follow a Poisson process, item lifetimes are exponentially distributed, and replenishment is instantaneous. We determine the service rates to be employed at each instant of time so that the long‐run expected cost rate is minimized for a fixed maximum inventory level and capacity. The problem is modelled as a semi‐Markov decision problem. We establish the existence of a stationary optimal policy and solve the problem by linear programming. Several numerical examples which provide insight into the behavior of the system are presented. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 464–482, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10021

7.
This article investigates the method of allocating arriving vessels to the terminals in transshipment hubs. The terminal allocation decision faced by a shipping alliance affects the scheduled arrival times of the vessels and, in turn, their bunker consumption cost. A model is formulated to minimize the bunker consumption cost as well as the transportation cost of inter‐terminal transshipment flows/movements. The capacity limitation of port resources such as quay cranes (QCs) and berths is taken into account. Besides the terminal allocation, the QC assignment decision is also incorporated in the proposed model. A local branching based method and a particle swarm optimization based method are developed to solve the model in large‐scale problem instances. Numerical experiments are also conducted to validate the effectiveness of the proposed model, which can save around 14% of the cost when compared with the "First Come First Served" decision rule. Moreover, the proposed solution methods not only solve the proposed model within a reasonable computation time, but also obtain near‐optimal results with relative gaps of about 0.1–0.7%. © 2016 Wiley Periodicals, Inc. Naval Research Logistics 63: 529–548, 2016

8.
This paper analyzes the characteristics of the degradation process of a single-unit system and builds a condition-based inspection and repair decision model. The model decides on inspection and repair according to the current state of the system. By computing the average number of inspections within a renewal cycle and the probabilities of preventive maintenance and of corrective repair following failure, the relationship between maintenance cost and the inspection interval and preventive-maintenance threshold is established; the inspection interval and preventive-maintenance threshold are then optimized with the objective of minimizing the average maintenance cost. Finally, the model is evaluated numerically in Matlab, and the results show that it can effectively reduce maintenance costs.
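A minimal sketch of the renewal-cycle idea, written in Python rather than Matlab and using an assumed gamma-process degradation model with illustrative cost figures (none of these values come from the paper): it estimates the average cost rate by simulation and searches a small grid of inspection intervals and preventive-maintenance thresholds.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Illustrative parameters (assumed, not taken from the paper)
L = 10.0                            # degradation level at which the unit is failed
shape_rate, scale = 0.8, 1.0        # increment over time tau ~ Gamma(shape_rate * tau, scale)
c_i, c_p, c_c = 1.0, 20.0, 80.0     # inspection, preventive-maintenance, corrective-repair costs

def avg_cost_rate(tau, M, n_cycles=4000):
    """Estimated long-run cost per unit time of the policy: inspect every tau time units,
    do preventive maintenance if the observed level is in [M, L), and corrective repair
    if the unit is found failed (level >= L); either action renews the unit."""
    total_cost = total_time = 0.0
    for _ in range(n_cycles):
        level, t = 0.0, 0.0
        while True:
            level += rng.gamma(shape_rate * tau, scale)   # degradation since the last check
            t += tau
            total_cost += c_i                             # cost of this inspection
            if level >= L:                                # found failed -> corrective repair
                total_cost += c_c
                break
            if level >= M:                                # degraded -> preventive maintenance
                total_cost += c_p
                break
        total_time += t
    return total_cost / total_time

# Crude grid search over the two decision variables of the model.
grid = list(product([1.0, 2.0, 4.0], [4.0, 6.0, 8.0]))
rates = {x: avg_cost_rate(*x) for x in grid}
best = min(rates, key=rates.get)
print("best (inspection interval, PM threshold):", best, "cost rate:", round(rates[best], 3))
```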

9.
System reliability is often evaluated through individual tests of the components that constitute the system. These component test plans have advantages over complete system based tests in terms of time and cost. In this paper, we consider the series system with n components, where the lifetime of the i‐th component follows an exponential distribution with parameter λi. Assuming test costs for the components are different, we develop an efficient algorithm to design a two‐stage component test plan that satisfies the usual probability requirements on the system reliability and in addition minimizes the maximum expected cost. For the case of prior information in the form of upper bounds on λi's, we use the genetic algorithm to solve the associated optimization problems which are otherwise difficult to solve using mathematical programming techniques. The two‐stage component test plans are cost effective compared to single‐stage plans developed by Rajgopal and Mazumdar. We demonstrate through several numerical examples that our approach has the potential to reduce the overall testing costs significantly. © 2002 John Wiley & Sons, Inc. Naval Research Logistics, 49: 95–116, 2002; DOI 10.1002/nav.1051
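For intuition only (this is not the paper's two-stage plan or its genetic-algorithm step), the sketch below shows why component tests suffice for a series system of exponential components: the component failure-rate estimates add, giving the system reliability directly. The test data are made up.

```python
import numpy as np

# Hypothetical component test results: total time on test and failures observed
T = np.array([1200.0, 800.0, 1500.0])    # test hours accumulated on each component type
f = np.array([2, 1, 3])                  # failures seen during those tests

lam_hat = f / T            # exponential failure-rate estimates, one per component
t_mission = 50.0

# In a series system the component failure rates add, so the estimated
# mission reliability follows directly from the component tests alone.
R_hat = np.exp(-t_mission * lam_hat.sum())
print(f"estimated system reliability at t = {t_mission}: {R_hat:.4f}")
```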

10.
Acceptance sampling plans are used to assess the quality of an ongoing production process, in addition to the lot acceptance. In this paper, we consider sampling inspection plans for monitoring the Markov‐dependent production process. We construct sequential plans that satisfy the usual probability requirements at acceptable quality level and rejectable quality level and, in addition, possess the minimum average sample number under semicurtailed inspection. As these plans result in large sample sizes, especially when the serial correlation is high, we suggest new plans called "systematic sampling plans." The minimum average sample number systematic plans that satisfy the probability requirements are constructed. Our algorithm uses some simple recurrence relations to compute the required acceptance probabilities. The optimal systematic plans require much smaller sample sizes and acceptance numbers, compared to the sequential plans. However, they need larger production runs to make a decision. Tables for choosing appropriate sequential and systematic plans are provided. The problem of selecting the best systematic sampling plan is also addressed. The operating characteristic curves of some of the sequential and the systematic plans are compared, and are observed to be almost identical. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 451–467, 2001
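The kind of recurrence involved can be illustrated on a simpler single-sampling plan for a two-state Markov-dependent process (the paper's recursions are for its sequential and systematic plans; the transition probabilities and plan parameters below are arbitrary):

```python
import numpy as np

def acceptance_prob(n, c, p01, p11, p_first):
    """P(at most c defectives among n Markov-dependent items), by a forward recurrence.
    p01 = P(next item defective | current conforming), p11 = P(defective | defective)."""
    # f[d, s] = P(d defectives so far, last item in state s); row c+1 absorbs counts > c.
    f = np.zeros((c + 2, 2))
    f[1, 1] = p_first          # first item defective
    f[0, 0] = 1.0 - p_first    # first item conforming
    for _ in range(n - 1):
        g = np.zeros_like(f)
        for d in range(c + 2):
            for s in (0, 1):
                if f[d, s] == 0.0:
                    continue
                p_def = p11 if s == 1 else p01
                g[min(d + 1, c + 1), 1] += f[d, s] * p_def        # next item defective
                g[d, 0] += f[d, s] * (1.0 - p_def)                # next item conforming
        f = g
    return f[: c + 1].sum()    # accept the lot iff at most c defectives were found

# Example: 50 items, acceptance number 2, strong positive serial correlation in defects.
print(acceptance_prob(n=50, c=2, p01=0.01, p11=0.30, p_first=0.02))
```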

11.
Optimal operating policies and corresponding managerial insight are developed for the decision problem of coordinating supply and demand when (i) both supply and demand can be influenced by the decision maker and (ii) learning is pursued. In particular, we determine optimal stocking and pricing policies over time when a given market parameter of the demand process, though fixed, initially is unknown. Because of the initially unknown market parameter, the decision maker begins the problem horizon with a subjective probability distribution associated with demand. Learning occurs as the firm monitors the market's response to its decisions and then updates its characterization of the demand function. Of primary interest is the effect of censored data since a firm's observations often are restricted to sales. We find that the first‐period optimal selling price increases with the length of the problem horizon. However, for a given problem horizon, prices can rise or fall over time, depending on how the scale parameter influences demand. Further results include the characterization of the optimal stocking quantity decision and a computationally viable algorithm. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 303–325, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10013

12.
Considering the characteristics of helicopter-borne guided rockets and starting from the concept of hit accuracy, a suitable way of expressing the hit accuracy of helicopter-borne guided rockets is derived. Taking development cost, test conditions, and other factors into account, an accuracy assessment scheme for the development stage based on point estimation and interval estimation is proposed and validated with test data. Because impact points are difficult to record in live firings and the number of available rounds is limited during quality sampling inspection in the batch-production and delivery stage, an optimized assessment scheme is proposed that converts CEP evaluation into a hit-rate test using a truncated sequential method.
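As a hedged sketch of the "convert CEP evaluation into a hit-rate test with a truncated sequential method" idea, the following implements a Wald sequential probability ratio test on hit/miss outcomes with a simple truncation rule; the acceptable and rejectable hit rates, risks, and truncation point are placeholders, not values from the paper.

```python
import numpy as np

def truncated_sprt(outcomes, p1=0.9, p0=0.7, alpha=0.1, beta=0.1, n_max=30):
    """Wald sequential probability ratio test on hit/miss data, truncated at n_max rounds.
    H1: hit rate >= p1 (accept the product), H0: hit rate <= p0 (reject)."""
    upper = np.log((1 - beta) / alpha)     # accept-H1 boundary on the log-likelihood ratio
    lower = np.log(beta / (1 - alpha))     # accept-H0 boundary
    llr = 0.0
    for n, hit in enumerate(outcomes, start=1):
        llr += np.log(p1 / p0) if hit else np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept", n
        if llr <= lower:
            return "reject", n
        if n >= n_max:                     # truncation: decide by whichever boundary is closer
            return ("accept" if (upper - llr) <= (llr - lower) else "reject"), n
    return "undecided", len(outcomes)

rng = np.random.default_rng(2)
hits = rng.random(30) < 0.85               # simulated firings with a true hit rate of 0.85
print(truncated_sprt(hits))
```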

13.
Strengthening the United States' ability to prevent adversaries from smuggling nuclear materials into the country is a vital and ongoing issue. The prospect of additional countries, such as Iran, obtaining the know‐how and equipment to produce these special nuclear materials in the near future underscores the need for efficient and effective inspection policies at ports and border crossings. In addition, the reduction of defense and homeland security budgets in recent years has made it increasingly important to accomplish the interdiction mission with fewer funds. Addressing these complications, in this article, we present a novel two‐port interdiction model. We propose using prior inspection data as a low‐cost way of increasing overall interdiction performance. We provide insights into two primary questions: first, how should a decision maker at a domestic port use detection data from the foreign port to improve the overall detection capability? Second, what are potential limitations to the usefulness of prior inspection data—is it possible that using prior data actually harms decision making at the domestic port? We find that a boundary curve policy (BCP) that takes into account both foreign and domestic inspection data can provide a significant improvement in detection probability. This BCP also proves to be surprisingly robust, even if adversaries are able to infiltrate shipments during transit. © 2013 Wiley Periodicals, Inc. Naval Research Logistics 60: 433‐448, 2013

14.
This article considers the problem of monitoring Poisson count data when sample sizes are time varying without assuming a priori knowledge of sample sizes. Traditional control charts, whose control limits are often determined before the control charts are activated, are constructed based on perfect knowledge of sample sizes. In practice, however, future sample sizes are often unknown. Making an inappropriate assumption of the distribution function could lead to unexpected performance of the control charts, for example, excessive false alarms in the early runs of the control charts, which would in turn hurt an operator's confidence in valid alarms. To overcome this problem, we propose the use of probability control limits, which are determined based on the realization of sample sizes online. The conditional probability that the charting statistic exceeds the control limit at present given that there has not been a single alarm before can be guaranteed to meet a specified false alarm rate. Simulation studies show that our proposed control chart is able to deliver satisfactory run length performance for any time‐varying sample sizes. The idea presented in this article can be applied to any effective control charts such as the exponentially weighted moving average or cumulative sum chart. © 2013 Wiley Periodicals, Inc. Naval Research Logistics 60: 625–636, 2013
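A minimal sketch of the probability-limit idea for a Shewhart-type count chart (the article also applies it to EWMA and CUSUM charts, where the conditional probability must be evaluated numerically): once the current sample size is observed, the limit is set so that the conditional false-alarm probability equals a target alpha. The in-control rate, sample sizes, and counts below are made up.

```python
from scipy.stats import poisson

def probability_limit(n_t, lam0, alpha=0.005):
    """Smallest integer h with P(X > h) <= alpha for X ~ Poisson(n_t * lam0):
    a control limit computed only after the sample size n_t is realized."""
    return int(poisson.ppf(1.0 - alpha, n_t * lam0))

lam0 = 2.0                                   # assumed in-control rate per inspection unit
sample_sizes = [5, 12, 3, 20, 8]             # sample sizes revealed online, one per period
counts = [11, 30, 9, 47, 15]                 # observed counts in each period

for t, (n_t, x_t) in enumerate(zip(sample_sizes, counts), start=1):
    h_t = probability_limit(n_t, lam0)
    print(f"t={t}: n_t={n_t}, limit={h_t}, count={x_t}, signal={x_t > h_t}")
```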

15.
The paper considers the economic lot scheduling problem (ELSP) where the production facility is assumed to deteriorate, owing to aging, with an increasing failure rate. The time to shift from an "in‐control" state to an "out‐of‐control" state is assumed to be normally distributed. The system is scheduled to be inspected at the end of each production lot. If the process is found to be in an "out‐of‐control" state, then corrective maintenance is performed to restore it to an "in‐control" state before the start of the next production run. Otherwise, preventive maintenance is carried out to enhance system reliability. The ELSP is formulated under the capacity constraint, taking into account the quality‐related cost due to possible production of non‐conforming items as well as process inspection and maintenance costs. In order to find a feasible production schedule, both the common cycle and the time‐varying lot sizes approaches are utilized. © 2003 Wiley Periodicals, Inc. Naval Research Logistics 50: 650–661, 2003

16.
One approach to evaluating system reliability is the use of system based component test plans. Such plans have numerous advantages over complete system level tests, primarily in terms of time and cost savings. This paper considers one of the two basic building blocks of many complex systems, namely a system of n parallel components, and develops minimum cost component test plans for evaluating the reliability of such a system when the component reliabilities are known to be high. Two different decision rules are considered and the corresponding optimization problems are formulated and solved using techniques from mathematical programming. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44: 401–418, 1997

17.
This article considers a modified inspection policy with periodic check intervals, where the unit after check has the same age as before with probability p and is as good as new with probability q. The mean time to failure and the expected number of checks before failure are derived, forming renewal-type equations. The total expected cost and the expected cost per unit of time until detection of failure are obtained. Optimum inspection policies which minimize the expected costs are given as a numerical example.
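The cost per unit time until failure detection can also be approximated by simulation for a given check interval. The sketch below is not the article's renewal-equation solution: it assumes a Weibull lifetime, a per-check cost, and a downtime cost rate (all hypothetical), applies the article's check behavior (after each check the unit keeps its age with probability p and is as good as new with probability q = 1 - p), and reports a ratio-of-expectations estimate over many cycles.

```python
import numpy as np

rng = np.random.default_rng(3)

def cost_rate(T, p, shape=2.0, scale=10.0, c_check=1.0, c_down=5.0, n_rep=20000):
    """Estimated cost per unit time until a failure is detected, for checks every T
    time units. After each check the unit keeps its age with probability p and is
    as good as new with probability 1 - p. Lifetime and costs are illustrative."""
    total_cost = total_time = 0.0
    for _ in range(n_rep):
        t, age, checks = 0.0, 0.0, 0
        fail_age = scale * rng.weibull(shape)        # age at which the current unit fails
        while True:
            if age + T >= fail_age:                  # failure occurs before the next check
                fail_time = t + (fail_age - age)
                detect_time = t + T                  # failure is only found at the next check
                checks += 1
                total_cost += checks * c_check + c_down * (detect_time - fail_time)
                total_time += detect_time
                break
            t += T
            age += T
            checks += 1
            if rng.random() > p:                     # the check leaves the unit as good as new
                age = 0.0
                fail_age = scale * rng.weibull(shape)
    return total_cost / total_time

for T in (1.0, 2.0, 4.0, 8.0):
    print(f"check interval {T}: estimated cost rate {cost_rate(T, p=0.5):.3f}")
```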

18.
This paper presents a branch and bound algorithm for computing optimal replacement policies in a discrete‐time, infinite‐horizon, dynamic programming model of a binary coherent system with n statistically independent components, and then specializes the algorithm to consecutive k‐out‐of‐n systems. The objective is to minimize the long‐run expected average undiscounted cost per period. (Costs arise when the system fails and when failed components are replaced.) An earlier paper established the optimality of following a critical component policy (CCP), i.e., a policy specified by a critical component set and the rule: Replace a component if and only if it is failed and in the critical component set. Computing an optimal CCP is an optimization problem with n binary variables and a nonlinear objective function. Our branch and bound algorithm for solving this problem has a memory storage requirement of O(n) for consecutive k‐out‐of‐n systems. Extensive computational experiments on such systems involving over 350,000 test problems with n ranging from 10 to 150 find this algorithm to be effective when n ≤ 40 or k is near n. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 288–302, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10017

19.
A rule that constrains decision‐makers is enforced by an inspector who is supplied with a fixed level of inspection resources—inspection personnel, equipment, or time. How should the inspector distribute its inspection resources over several independent inspectees? What minimum level of resources is required to deter all violations? Optimal enforcement problems occur in many contexts; the motivating application for this study is the role of the International Atomic Energy Agency in support of the Treaty on the Non‐Proliferation of Nuclear Weapons. Using game‐theoretic models, the resource level adequate for deterrence is characterized in a two‐inspectee problem with inspections that are imperfect in the sense that violations can be missed. Detection functions, or probabilities of detecting a violation, are assumed to be increasing in inspection resources, permitting optimal allocations over inspectees to be described both in general and in special cases. When detection functions are convex, inspection effort should be concentrated on one inspectee chosen at random, but when they are concave it should be spread deterministically over the inspectees. Our analysis provides guidance for the design of arms‐control verification operations, and implies that a priori constraints on the distribution of inspection effort can result in significant inefficiencies. © 2003 Wiley Periodicals, Inc. Naval Research Logistics, 2004.
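The convex-versus-concave conclusion can be previewed with a small Jensen's-inequality check. This toy comparison uses hypothetical detection functions and two identical inspectees, and ignores the paper's game-theoretic deterrence analysis; it only compares the detection probability each inspectee faces under an even split versus randomized concentration.

```python
import numpy as np

R = 1.0  # total inspection resource to divide between two identical inspectees

# Hypothetical detection functions: probability of catching a violation given resource x.
convex  = lambda x: x ** 2          # convex on [0, 1]
concave = lambda x: np.sqrt(x)      # concave on [0, 1]

for name, f in [("convex", convex), ("concave", concave)]:
    # Even deterministic split: each inspectee faces detection probability f(R/2).
    spread = f(R / 2)
    # Randomized concentration: all resources on one inspectee chosen by a fair coin,
    # so each inspectee faces detection probability 0.5*f(R) + 0.5*f(0).
    concentrated = 0.5 * f(R) + 0.5 * f(0.0)
    better = "concentrate" if concentrated > spread else "spread"
    print(f"{name}: spread={spread:.3f}, concentrate={concentrated:.3f} -> {better}")
```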

20.
Sensor management methods for target recognition   (Cited 1 time: 0 self-citations, 1 by others)
Sensor management for target recognition is studied from a hypothesis-testing perspective. Taking the maximum a posteriori probability, the minimum cost function, and the maximum probability of correct detection used in hypothesis-testing problems as objective functions, multi-sensor management methods for the target recognition problem are presented and then adapted to the characteristics of that problem. Simulations of a representative scenario compare the recognition accuracy and average number of samples of the three methods and analyze their respective characteristics.
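A small sketch of one of the three criteria, maximum probability of correct detection: at each step the sensor whose next report maximizes the expected probability of a correct MAP decision is tasked, and the class posterior is updated with that report. The two-class setup and confusion matrices are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

prior = np.array([0.5, 0.5])              # two hypothesized target classes
# Hypothetical sensor confusion matrices: conf[s][true class, reported class].
conf = [np.array([[0.80, 0.20],
                  [0.30, 0.70]]),         # sensor 0: better at class 0
        np.array([[0.60, 0.40],
                  [0.10, 0.90]])]         # sensor 1: better at class 1

def expected_correct(post, C):
    """Expected probability that the MAP decision is correct after one more report from this sensor."""
    return sum((post * C[:, z]).max() for z in range(C.shape[1]))

def classify(true_class, threshold=0.95, max_looks=20):
    post = prior.copy()
    for n in range(1, max_looks + 1):
        s = max(range(len(conf)), key=lambda k: expected_correct(post, conf[k]))
        z = rng.choice(2, p=conf[s][true_class])   # simulate the tasked sensor's report
        post = post * conf[s][:, z]
        post /= post.sum()
        if post.max() >= threshold:                # confident enough to declare the class
            return int(post.argmax()), n
    return int(post.argmax()), max_looks

print(classify(true_class=1))   # -> (declared class, number of sensor looks used)
```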

