Similar Documents
20 similar documents found (search time: 15 ms)
1.
Quantile is an important quantity in reliability analysis, as it is related to the resistance level for defining failure events. This study develops a computationally efficient sampling method for estimating extreme quantiles using stochastic black-box computer models. Importance sampling has been widely employed as a powerful variance reduction technique to reduce estimation uncertainty and improve computational efficiency in many reliability studies. However, when applied to quantile estimation, importance sampling faces challenges because a good choice of the importance sampling density relies on information about the unknown quantile. We propose an adaptive method that refines the importance sampling density parameter toward the unknown target quantile value over the iterations. The proposed adaptive scheme allows us to use the simulation outcomes obtained in previous iterations to steer the simulation process toward the important input areas. We prove some convergence properties of the proposed method and show that our approach can achieve variance reduction over crude Monte Carlo sampling. We demonstrate its estimation efficiency through numerical examples and a wind turbine case study.
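The core idea, self-normalized importance sampling for a tail quantile with the sampling density re-centered at the current quantile estimate, can be sketched as follows. This is a minimal illustration on a toy problem (the 0.999-quantile of a standard normal, whose exact value is about 3.090), not the paper's wind turbine model; the Gaussian importance-sampling family and the update rule `mu = q` are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_quantile(y, w, p):
    """p-quantile of the weighted empirical CDF."""
    order = np.argsort(y)
    y, w = y[order], w[order]
    cdf = np.cumsum(w) / np.sum(w)
    return y[np.searchsorted(cdf, p)]

def adaptive_is_quantile(g, p=0.999, n=2000, iters=5):
    """Iteratively refine the IS mean toward the current quantile estimate."""
    mu = 0.0                                    # start from the nominal density N(0, 1)
    for _ in range(iters):
        x = rng.normal(mu, 1.0, n)              # sample from the IS density N(mu, 1)
        logw = -0.5 * x**2 + 0.5 * (x - mu)**2  # log[phi(x) / phi(x - mu)]
        q = weighted_quantile(g(x), np.exp(logw), p)
        mu = q                                  # steer sampling toward the target quantile
    return q

q_hat = adaptive_is_quantile(lambda x: x)
print(q_hat)   # should land near the exact 0.999-quantile, 3.090
```

After the first (crude Monte Carlo) iteration, samples concentrate near the quantile of interest, which is what yields the variance reduction over crude sampling.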

2.
In this article, we discuss the optimal allocation problem in a multiple-stress-level life-testing experiment when an extreme value regression model is used for statistical analysis. We derive the maximum likelihood estimators, the Fisher information, and the asymptotic variance–covariance matrix of the maximum likelihood estimators. Three optimality criteria are defined, and the optimal allocation of units for two- and k-stress-level situations is determined. We demonstrate the efficiency of the optimal allocation of units in a multiple-stress-level life-testing experiment using real experimental situations discussed earlier by McCool, and by Nelson and Meeker. Monte Carlo simulations are used to show that the optimality results hold for small sample sizes as well. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2007

3.
We propose three related estimators for the variance parameter arising from a steady-state simulation process. All are based on combinations of standardized-time-series area and Cramér–von Mises (CvM) estimators. The first is a straightforward linear combination of the area and CvM estimators; the second resembles a Durbin–Watson statistic; and the third is related to a jackknifed version of the first. The main derivations yield analytical expressions for the bias and variance of the new estimators. These results show that the new estimators often perform better than the pure area, pure CvM, and benchmark nonoverlapping and overlapping batch means estimators, especially in terms of variance and mean squared error. We also give exact and Monte Carlo examples illustrating our findings. © 2007 Wiley Periodicals, Inc. Naval Research Logistics, 2007
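For reference, the nonoverlapping-batch-means benchmark mentioned above can be sketched in a few lines. The i.i.d. test sequence and the batch count are arbitrary choices for illustration; for i.i.d. N(0, 1) data the variance parameter being estimated is simply 1.

```python
import numpy as np

def batch_means_variance(y, n_batches=32):
    """Nonoverlapping-batch-means estimate of the variance parameter
    sigma^2 = lim_n n * Var(Y_bar) of a stationary sequence y."""
    n = len(y) // n_batches * n_batches       # truncate to a multiple of n_batches
    batches = y[:n].reshape(n_batches, -1)
    m = batches.shape[1]                      # batch size
    bmeans = batches.mean(axis=1)
    return m * bmeans.var(ddof=1)             # m times the sample variance of batch means

rng = np.random.default_rng(1)
est = batch_means_variance(rng.normal(size=2**16))
print(est)   # close to 1 for i.i.d. N(0, 1) data
```

For a correlated sequence (e.g., an AR(1) process), the same estimator targets the long-run variance rather than the marginal variance, provided the batch size is large relative to the correlation length.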

4.
We propose a novel simulation-based approach for solving two-stage stochastic programs with recourse and endogenous (decision-dependent) uncertainty. The proposed augmented nested sampling approach recasts the stochastic optimization problem as a simulation problem by treating the decision variables as random. The optimal decision is obtained via the mode of the augmented probability model. We illustrate our methodology on a newsvendor problem with stock-dependent uncertain demand in both single- and multi-item (news-stand) cases. We provide performance comparisons with Markov chain Monte Carlo and traditional Monte Carlo simulation-based optimization schemes. Finally, we conclude with directions for future research.
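The general recipe of recasting a stochastic program as a simulation problem can be illustrated on the plain newsvendor (without the paper's stock-dependent demand or its augmented-probability machinery): evaluate a grid of candidate decisions on common random demand scenarios and keep the best. The prices and the exponential demand below are made-up assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def newsvendor_profit(q, demand, price=5.0, cost=3.0):
    """Per-scenario profit for order quantity q."""
    return price * np.minimum(q, demand) - cost * q

# Monte Carlo simulation-optimization: evaluate a grid of decisions on
# common random demand scenarios and pick the best average profit.
demand = rng.exponential(scale=100.0, size=20000)
grid = np.arange(0, 301)
avg = [newsvendor_profit(q, demand).mean() for q in grid]
q_star = grid[int(np.argmax(avg))]
print(q_star)
```

For this setup the analytic optimum is the 0.4-quantile of the Exp(100) demand, about 51, so the simulated `q_star` should land nearby.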

5.
This paper considers the statistical analysis of masked data in a series system, where the components are assumed to follow the Marshall–Olkin Weibull distribution. Based on type-I progressive hybrid censored and masked data, we derive the maximum likelihood estimates, approximate confidence intervals, and bootstrap confidence intervals of the unknown parameters. As the maximum likelihood estimate does not exist for small sample sizes, Gibbs sampling is used to obtain the Bayesian estimates, and the Monte Carlo method is employed to construct credible intervals based on the Jeffreys prior with partial information. Numerical simulations are performed to compare the performance of the proposed methods, and one data set is analyzed.

6.
Building on variance-based global reliability sensitivity indices, this paper proposes variance-based regional and parametric reliability sensitivity indices to measure how the contribution of the whole input-variable system to the uncertainty of the failure probability changes when the value region of an input variable changes or its variance is reduced. From the perspective of the Pearson correlation coefficient, the proposed indices are then expressed as the correlation coefficient between the unconditional failure-domain indicator function and the conditional failure-domain indicator function obtained by fixing one random input. Based on this transformation, two solution methods built on the Pearson correlation coefficient are proposed: one uses repeated Monte Carlo sampling in a double-loop computation, while the other borrows the idea of importance sampling. Because the computed samples of the performance function can be reused without any additional computational cost, the latter greatly improves the efficiency of computing the proposed regional and parametric sensitivity indices. Numerical examples verify the reasonableness of the proposed indices and demonstrate the accuracy and efficiency of the proposed methods.

7.
In many practical situations of production scheduling, it is either necessary or recommended to group a large number of jobs into a relatively small number of batches. A decision needs to be made regarding both the batching (i.e., determining the number and the size of the batches) and the sequencing (of batches and of jobs within batches). A setup cost is incurred whenever a batch begins processing on a given machine. This paper focuses on batch scheduling of identical processing-time jobs, and machine- and sequence-independent setup times on an m-machine flow-shop. The objective is to find an allocation to batches and their schedule in order to minimize flow-time. We introduce a surprising and nonintuitive solution for the problem. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004

8.
We study a stochastic outpatient appointment scheduling problem (SOASP) in which we need to design a schedule and an adaptive rescheduling (i.e., resequencing or declining) policy for a set of patients. Each patient has a known type and associated probability distributions of random service duration and random arrival time. Finding a provably optimal solution to this problem requires solving a multistage stochastic mixed-integer program (MSMIP) with a schedule optimization problem solved at each stage, determining the optimal rescheduling policy over the various random service durations and arrival times. In recognition that this MSMIP is intractable, we first consider a two-stage model (TSM) that relaxes the nonanticipativity constraints of the MSMIP and so yields a lower bound. Second, we derive a set of valid inequalities to strengthen and improve the solvability of the TSM formulation. Third, we obtain an upper bound for the MSMIP by solving the TSM under the feasible (and easily implementable) appointment order (AO) policy, which requires that patients are served in the order of their scheduled appointments, independent of their actual arrival times. Fourth, we propose a Monte Carlo approach to evaluate the relative gap between the MSMIP upper and lower bounds. Finally, in a series of numerical experiments, we show that these two bounds are very close in a wide range of SOASP instances, demonstrating the near-optimality of the AO policy. We also identify parameter settings that result in a large gap between these two bounds. Accordingly, we propose an alternative policy based on neighbor-swapping. We demonstrate that this alternative policy leads to a much tighter upper bound and significantly shrinks the gap.
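The appointment-order (AO) policy described above is easy to simulate: patients are served in scheduled order, each starting as soon as both the server is free and the patient has arrived. The sketch below uses made-up arrival jitter and gamma service times; it only illustrates how an AO schedule would be evaluated by Monte Carlo, not the paper's MSMIP bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ao(arrivals, services):
    """Waiting times under the appointment-order (AO) policy: patients are
    served in scheduled order; service of patient i starts once both the
    server is free and patient i has arrived."""
    free = 0.0
    waits = []
    for a, s in zip(arrivals, services):
        start = max(free, a)
        waits.append(start - a)     # time from arrival until service starts
        free = start + s
    return np.array(waits)

# 5 patients scheduled 20 minutes apart, noisy arrivals, random service times
appts = np.arange(5) * 20.0
avg_wait = np.mean([
    simulate_ao(appts + rng.normal(0, 5, 5),       # arrival jitter around slots
                rng.gamma(4.0, 5.0, 5)).mean()     # mean service time 20
    for _ in range(2000)
])
print(avg_wait)
```

Replications like these are exactly what a Monte Carlo evaluation of a candidate schedule averages over.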

9.
This article presents new tools and methods for finding optimum step-stress accelerated life test plans. First, we present an approach to calculate the large-sample approximate variance of the maximum likelihood estimator of a quantile of the failure time distribution at use conditions from a step-stress accelerated life test. The approach allows for multistep stress changes and censoring for general log-location-scale distributions based on a cumulative exposure model. As an application of this approach, the optimum variance is studied as a function of shape parameter for both Weibull and lognormal distributions. Graphical comparisons among test plans using step-up, step-down, and constant-stress patterns are also presented. The results show that depending on the values of the model parameters and quantile of interest, each of the three test plans can be preferable in terms of optimum variance. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008

10.
A dynamics model for computing the trajectories of a rocket and its separated debris is established, with a quaternion method used to resolve the attitude angles. A safety-zone prediction method based on a Kriging model with optimized infill points is proposed; combining the characteristics of Monte Carlo simulation and the Kriging surrogate model, a safety-zone prediction workflow is given. The proposed method is verified by simulation on the debris safety-zone computation of a booster rocket. The simulation results show that, compared with the Monte Carlo method, the proposed Kriging-based prediction method achieves higher computational efficiency without loss of accuracy, meeting the engineering need for fast iteration; compared with the traditional method of superimposing limit deviations, it significantly reduces the area covered by the safety zone, giving it strong engineering application value.

11.
In this article, we address a stochastic generalized assignment machine scheduling problem in which the processing times of jobs are assumed to be random variables. We develop a branch-and-price (B&P) approach for solving this problem wherein the pricing problem is separable with respect to each machine and has the structure of a multidimensional knapsack problem. In addition, we explore two other extensions of this method: one that utilizes a dual-stabilization technique and another that incorporates an advanced-start procedure to obtain an initial feasible solution. We compare the performance of these methods with that of the branch-and-cut (B&C) method within CPLEX. Our results show that all B&P-based approaches perform better than the B&C method, with the best performance obtained for the B&P procedure that includes both of the aforementioned extensions. We also utilize a Monte Carlo method within the B&P scheme, which affords the use of a small subset of scenarios at a time to estimate the "true" optimal objective function value. Our experimental investigation reveals that this approach readily yields solutions lying within 5% of optimality, while providing more than a 10-fold savings in CPU times in comparison with the best of the other proposed B&P procedures. © 2014 Wiley Periodicals, Inc. Naval Research Logistics 61: 131–143, 2014

12.
Various indices of component importance with respect to system reliability have been proposed. The most popular one is the Birnbaum importance. In particular, a special case called uniform Birnbaum importance in which all components have the same reliability p has been widely studied for the consecutive-k system. Since it is not easy to compare uniform Birnbaum importance, the literature has looked into the case p = ½, p → 1, or p ≥ ½. In this paper, we look into the case p → 0 to complete the spectrum of examining Birnbaum importance over the whole range of p. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 159–166, 2002; DOI 10.1002/nav.10001
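The Birnbaum importance of component i is I_B(i; p) = h(1_i, p) − h(0_i, p): the difference in system reliability with component i forced to work versus forced to fail. For a small consecutive-k-out-of-n:F system it can be computed by brute-force enumeration; the sketch below (n = 4, k = 2, p = 0.9) is illustrative only, and as expected the middle components matter more.

```python
from itertools import product

def works_consec_k(state, k=2):
    """Consecutive-k-out-of-n:F system: fails iff k consecutive components fail."""
    run = 0
    for s in state:
        run = run + 1 if s == 0 else 0
        if run >= k:
            return 0
    return 1

def reliability(p, n, fixed=None):
    """System reliability with component reliability p; `fixed` pins one
    component to a given state, e.g. {2: 1}."""
    r = 0.0
    for state in product([0, 1], repeat=n):
        if fixed and any(state[i] != v for i, v in fixed.items()):
            continue
        prob = 1.0
        for i, s in enumerate(state):
            if fixed and i in fixed:
                continue                     # fixed component carries no probability
            prob *= p if s else (1 - p)
        r += prob * works_consec_k(state)
    return r

def birnbaum(p, n, i):
    return reliability(p, n, {i: 1}) - reliability(p, n, {i: 0})

bvals = [round(birnbaum(0.9, 4, i), 4) for i in range(4)]
print(bvals)   # → [0.09, 0.18, 0.18, 0.09]
```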

13.
Stochastic transportation networks arise in various real world applications, for which the probability of the existence of a feasible flow is regarded as an important performance measure. Although the necessary and sufficient condition for the existence of a feasible flow represented by an exponential number of inequalities is a well‐known result in the literature, the computation of the probability of all such inequalities being satisfied jointly is a daunting challenge. The state‐of‐the‐art approach of Prékopa and Boros, Operat Res 39 (1991) 119–129 approximates this probability by giving its lower and upper bounds using a two‐part procedure. The first part eliminates all redundant inequalities and the second gives the lower and upper bounds of the probability by solving two well‐defined linear programs with the inputs obtained from the first part. Unfortunately, the first part may still leave many non‐redundant inequalities. In this case, it would be very time consuming to compute the inputs for the second part even for small‐sized networks. In this paper, we first present a model that can be used to eliminate all redundant inequalities and give the corresponding computational results for the same numerical examples used in Prékopa and Boros, Operat Res 39 (1991) 119–129. We also show how to improve the lower and upper bounds of the probability using the multitree and hypermultitree, respectively. Furthermore, we propose an exact solution approach based on the state space decomposition to compute the probability. We derive a feasible state from a state space and then decompose the space into several disjoint subspaces iteratively. The probability is equal to the sum of the probabilities in these subspaces. We use the 8‐node and 15‐node network examples in Prékopa and Boros, Operat Res 39 (1991) 119–129 and the Sioux‐Falls network with 24 nodes to show that the space decomposition algorithm can obtain the exact probability of these classical examples efficiently. 
© 2016 Wiley Periodicals, Inc. Naval Research Logistics 63: 479–491, 2016

14.
We study a problem of scheduling a maintenance activity on parallel identical machines, under the assumption that all the machines must be maintained simultaneously. One example of this setting is a situation where the entire system must be stopped for maintenance because of a required electricity shut-down. The objective is minimum flow-time. The problem is shown to be NP-hard, and moreover impossible to approximate unless P = NP. We introduce a pseudo-polynomial dynamic programming algorithm and show how to convert it into a bicriteria FPTAS for this problem. We also present an efficient heuristic and a lower bound. Our numerical tests indicate that the heuristic provides very close-to-optimal schedules in most cases. © 2008 Wiley Periodicals, Inc. Naval Research Logistics 2009

15.
In February 1998, Osama Bin Laden published a signed statement calling for a fatwa against the United States for its having 'declared war against God'. As we now know, the fatwa resulted in the unprecedented attack of 9/11. The issue of whether or not 9/11 was in any way predictable culminated in the public debate between Richard Clarke, former CIA Director George Tenet, and the White House. This paper examines whether there was any evidence of a structural change in the terrorism data at or after February 1998 but prior to June 2001, controlling for the possibility of other breaks in earlier periods. In doing so, we use the standard Bai–Perron procedure and our sequential importance sampling (SIS) Markov chain Monte Carlo (MCMC) method for identifying an unknown number of breaks at unknown dates. We conclude that sophisticated statistical time-series analysis would not have predicted 9/11.
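A single-break special case of the least-squares break-date estimation underlying the Bai–Perron procedure can be sketched as follows: scan all admissible split points and keep the one minimizing the total within-segment sum of squared errors. The synthetic series with a known mean shift is an assumption for illustration; the paper's SIS-MCMC machinery for an unknown number of breaks is far more general.

```python
import numpy as np

def one_break_in_mean(y, trim=5):
    """Least-squares estimate of a single break date in the mean of y:
    choose the split minimizing the total within-segment SSE
    (the one-break special case of the Bai-Perron procedure)."""
    n = len(y)
    best_t, best_sse = None, np.inf
    for t in range(trim, n - trim):
        sse = (((y[:t] - y[:t].mean()) ** 2).sum()
               + ((y[t:] - y[t:].mean()) ** 2).sum())
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t

# Synthetic series: mean 0 for 120 observations, then mean 2 for 80
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0, 1, 120), rng.normal(2, 1, 80)])
t_hat = one_break_in_mean(y)
print(t_hat)   # estimated break date; the true break is at 120
```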

16.
Gamma accelerated degradation tests (ADTs) are widely used to assess timely lifetime information for highly reliable products whose degradation paths follow a gamma process. The existing literature addresses the problem of deciding how to conduct an efficient ADT, including determining the higher stress-testing levels and their corresponding sample-size allocations. Existing results have mainly focused on the case of a single accelerating variable. However, this may not be practical when the quality characteristics of the product have slow degradation rates. To overcome this difficulty, we propose an analytical approach to this decision-making problem for the case of two accelerating variables. Specifically, based on the criterion of minimizing the asymptotic variance of the estimated q quantile of the product's lifetime distribution, we analytically show that the optimal stress levels and sample-size allocations can be obtained simultaneously via a general equivalence theorem. In addition, we use a practical example to illustrate the proposed procedure.
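A gamma-process degradation path has independent increments X(t+Δ) − X(t) ~ Gamma(shape = ηΔ, scale = β), so lifetime quantities such as the q-quantile of the first passage time over a failure threshold can be simulated directly. The parameter values below are made up for illustration and are unrelated to the paper's optimal-design results.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_path(eta, beta, t_grid):
    """Gamma-process degradation path: independent increments
    X(t + dt) - X(t) ~ Gamma(shape=eta*dt, scale=beta)."""
    dt = np.diff(t_grid)
    inc = rng.gamma(eta * dt, beta)
    return np.concatenate([[0.0], np.cumsum(inc)])

def mc_lifetime_quantile(eta, beta, threshold, q=0.5, n_paths=4000):
    """Monte Carlo q-quantile of the first time the path crosses `threshold`."""
    t = np.linspace(0, 50, 501)
    lifetimes = []
    for _ in range(n_paths):
        x = gamma_path(eta, beta, t)
        idx = np.argmax(x >= threshold)            # index of first crossing, if any
        lifetimes.append(t[idx] if x[idx] >= threshold else t[-1])
    return np.quantile(lifetimes, q)

# Mean degradation rate eta*beta = 1 per unit time, failure threshold 20,
# so the median lifetime should be near 20.
t50 = mc_lifetime_quantile(eta=2.0, beta=0.5, threshold=20.0)
print(t50)
```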

17.
Stagnation-point heat flux is expressed in three ways: the Fay-Riddell relation, the direct simulation Monte Carlo (DSMC) method, and Fourier heat conduction evaluated with the DSMC flow-field temperature. Results are compared across different freestream Knudsen numbers (Kn) and Mach numbers (Ma), with the aim of providing, from a microscopic viewpoint, a new understanding of why classical continuum methods overestimate the stagnation-point heat flux in the rarefied flow regime. The results show that rarefaction affects the stagnation-point heat flux in three ways: first, temperature jump weakens the temperature gradient and lowers the stagnation-point heat flux; second, translational nonequilibrium near the wall invalidates Fourier's law of heat conduction and causes it to overestimate the heat flux; and third, the wall-constraint effect causes Fourier's law to overestimate the heat flux within three molecular mean free paths of the wall.

18.
A univariate meta-analysis is often used to summarize various study results on the same research hypothesis concerning an effect of interest. When several marketing studies each produce sets of more than one effect, a multivariate meta-analysis can be conducted. Two problems can arise with such a multivariate meta-analysis: (1) several effects estimated in one model may be correlated with each other, but their correlation is seldom published; and (2) an estimated effect in one model may be correlated with the corresponding effect in another model, owing to similar model specifications or partly shared data sets, but their correlation is not known. Situations like (2) arise often in military recruiting studies. We employ a Monte Carlo simulation to evaluate how neglecting such potential correlation affects the results of a multivariate meta-analysis in terms of Type I and Type II errors and MSE. Simulation results indicate that this effect is not significant; what matters instead is the size of the variance component due to random error in the multivariate effects. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 500–510, 2000.
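The kind of distortion such a simulation checks can be seen in a deliberately extreme special case: pooling k equicorrelated study estimates of the same null effect while (wrongly) treating them as independent inflates the Type I error of the pooled z-test. This generic sketch is not the paper's multivariate setting (where the neglected correlation turned out not to matter); the equicorrelation structure and k = 6 are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def type1_pooled(rho, k=6, se=1.0, reps=20000):
    """Monte Carlo Type-I-error rate of the fixed-effect pooled z-test when
    the k study estimates are equicorrelated (correlation rho) but the
    analysis wrongly treats them as independent. The true effect is 0."""
    cov = se**2 * ((1 - rho) * np.eye(k) + rho * np.ones((k, k)))
    est = rng.multivariate_normal(np.zeros(k), cov, size=reps)
    pooled = est.mean(axis=1)               # equal weights (equal standard errors)
    se_pooled = se / np.sqrt(k)             # naive standard error, ignores rho
    z = pooled / se_pooled
    return np.mean(np.abs(z) > 1.96)        # nominal 5% two-sided test

p0 = type1_pooled(0.0)
p5 = type1_pooled(0.5)
print(round(p0, 3))   # near 0.05 when independence actually holds
print(round(p5, 3))   # inflated (about 0.29 here) when correlation is ignored
```

The true variance of the pooled mean is se²(1 + (k − 1)ρ)/k, so the naive z-statistic has standard deviation √(1 + (k − 1)ρ) instead of 1, which is where the inflation comes from.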

19.
Consider a repeated newsvendor problem for managing the inventory of perishable products. When the parameter of the demand distribution is unknown, it has been shown that the traditional separated estimation and optimization (SEO) approach could lead to suboptimality. To address this issue, an integrated approach called operational statistics (OS) was developed by Chu et al., Oper Res Lett 36 (2008) 110–116. In this note, we first study the properties of this approach and compare its performance with that of the traditional SEO approach. It is shown that OS is consistent and superior to SEO. The benefit of using OS is larger when the demand variability is higher. We then generalize OS to the risk-averse case under the conditional value-at-risk (CVaR) criterion. To model risk from both demand sampling and future demand uncertainty, we introduce a new criterion, called the total CVaR, and find the optimal OS under this new criterion. © 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 206–214, 2015
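The gap between SEO and an integrated approach can be illustrated for exponential demand: SEO plugs the sample mean into the newsvendor formula q = μ ln(p/c), whereas an operational-statistics-style approach optimizes over decision rules q = a·x̄ directly, accounting for sampling error in x̄. The grid search below is a crude stand-in for the OS optimization of Chu et al., and all prices and parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
price, cost, true_mean, n = 2.0, 1.0, 100.0, 10

# Large common scenario set standing in for the true (exponential) demand
future = rng.exponential(true_mean, 20000)

def expected_profit(q):
    """Approximate expected newsvendor profit for order quantity q."""
    return (price * np.minimum(q, future) - cost * q).mean()

def avg_profit(multiplier, reps=500):
    """Pre-data expected profit of the decision rule q = multiplier * xbar,
    averaging over the sampling distribution of the sample mean xbar."""
    total = 0.0
    for _ in range(reps):
        xbar = rng.exponential(true_mean, n).mean()
        total += expected_profit(multiplier * xbar)
    return total / reps

seo = avg_profit(np.log(price / cost))    # SEO plug-in rule: q = xbar * ln(p/c)
best = max(avg_profit(a) for a in np.linspace(0.4, 1.2, 9))
print(seo, best)
```

Searching over the multiplier directly can only match or beat the plug-in rule (up to Monte Carlo noise), which is the basic intuition behind the superiority of OS over SEO.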

20.
For the problem of space rendezvous with angles-only navigation, a fast closed-loop control-error analysis method using linear covariance techniques is studied. An angles-only navigation algorithm based on the square-root unscented Kalman filter (SRUKF) is established and the observation sensitivity matrix is derived, and a linear covariance analysis model for closed-loop control based on multi-impulse Hill guidance is constructed. A numerical example verifies that the proposed closed-loop covariance analysis results agree well with Monte Carlo results. The method is also applicable to angles-only navigation with the traditional extended Kalman filter (EKF), but the EKF estimate of the along-track position exhibits an offset comparable to the control-error variance in that direction, and the major and minor axes of its error ellipse are 24.68% and 20.56% larger, respectively, than those of the SRUKF-based estimates. In addition, owing to two efficient algebraic operations, QR decomposition and Cholesky factor updating, the SRUKF-based covariance analysis model is about 10% faster than the EKF-based model.
