Similar Literature
20 similar articles were found.
1.
Given herein is an easily implemented method for obtaining, from complete or censored data, approximate tolerance intervals associated with the upper tail of a Weibull distribution. These approximate intervals are based on point estimators that make essentially most efficient use of sample data. They agree extremely well with exact intervals (obtained by Monte Carlo simulation procedures) for sample sizes of about 10 or larger when specified survival proportions are sufficiently small. Ranges over which the error in the approximation is within 2 percent are determined. The motivation for investigation of the methodology for obtaining the approximate tolerance intervals was provided by the new formulation of Lanchester Combat Theory by Grubbs and Shuford [3], which suggests a Weibull assumption for time-to-incapacitation of key targets. With the procedures investigated herein, one can use (censored) data from battle simulations to obtain confidence intervals on battle times associated with given low survivor proportions of key targets belonging to either specified side in a future battle. It is also possible to calculate confidence intervals on a survival proportion of key targets corresponding to a given battle duration time.

2.
Multicollinearity and nonnormal errors are problems often encountered in the application of linear regression. Estimators are proposed for dealing with the simultaneous occurrence of both multicollinearity and nonnormality. These estimators are developed by combining biased estimation techniques with certain robust criteria. An iteratively reweighted least-squares procedure is used to compute the estimates. The performance of the combined estimators is studied empirically through Monte Carlo experiments structured according to factorial designs. With respect to a mean-squared-error criterion, the combined estimators are superior to ordinary least-squares, pure biased estimators, and pure robust estimators when multicollinearity and nonnormality are present. The loss in efficiency for the combined estimators relative to least squares is small when these problems do not occur. Some guidelines for the use of these combined estimators are given.
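The abstract gives no formulas, but the general idea of combining a biased (ridge-type) estimator with a robust criterion can be sketched as an iteratively reweighted least-squares loop. The sketch below is illustrative only and is not the paper's estimator: the ridge parameter k, the Huber tuning constant c, and the convergence tolerance are assumptions.

    import numpy as np

    def huber_weights(resid, c=1.345):
        """Huber weights: 1 inside the tuning constant, c/|r| outside."""
        scale = np.median(np.abs(resid)) / 0.6745 + 1e-12   # robust scale via median absolute residual
        r = resid / scale
        w = np.ones_like(r)
        big = np.abs(r) > c
        w[big] = c / np.abs(r[big])
        return w

    def robust_ridge(X, y, k=0.1, c=1.345, tol=1e-8, max_iter=100):
        """IRLS for a ridge-type estimator with a Huber criterion (illustrative sketch)."""
        p = X.shape[1]
        beta = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)   # ridge start
        for _ in range(max_iter):
            w = huber_weights(y - X @ beta, c)
            W = np.diag(w)
            beta_new = np.linalg.solve(X.T @ W @ X + k * np.eye(p), X.T @ W @ y)
            if np.max(np.abs(beta_new - beta)) < tol:
                beta = beta_new
                break
            beta = beta_new
        return beta

    # Small demonstration with near-collinear columns and heavy-tailed errors
    rng = np.random.default_rng(0)
    x1 = rng.normal(size=100)
    X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])   # near-collinear regressors
    y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=100) # nonnormal (t) errors
    print(robust_ridge(X, y))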

3.
Instead of measuring a Wiener degradation or performance process at predetermined time points to track degradation or performance of a product for estimating its lifetime, we propose to obtain the first‐passage times of the process over certain nonfailure thresholds. Based on only these intermediate data, we obtain the uniformly minimum variance unbiased estimator and uniformly most accurate confidence interval for the mean lifetime. For estimating the lifetime distribution function, we propose a modified maximum likelihood estimator and a new estimator and prove that, by increasing the sample size of the intermediate data, these estimators and the above‐mentioned estimator of the mean lifetime can achieve the same levels of accuracy as the estimators assuming one has failure times. Thus, our method of using only intermediate data is useful for highly reliable products when their failure times are difficult to obtain. Furthermore, we show that the proposed new estimator of the lifetime distribution function is more accurate than the standard and modified maximum likelihood estimators. We also obtain approximate confidence intervals for the lifetime distribution function and its percentiles. Finally, we use light‐emitting diodes as an example to illustrate our method and demonstrate how to validate the Wiener assumption during the testing. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008
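As a rough illustration of the idea (not the paper's exact UMVUE or confidence-interval constructions), suppose the degradation path is a Wiener process with drift v, the product fails when the path reaches a level D, and we only record first-passage times T_1, ..., T_n over intermediate nonfailure thresholds a_1, ..., a_n. Since E[T_i] = a_i / v, a natural drift estimate is the ratio of total threshold to total passage time, and the mean lifetime D / v can then be estimated by plugging in. The thresholds, failure level, and parameter values below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    def first_passage_time(level, drift, sigma, dt=1e-3):
        """Simulate a Wiener process with drift and return the first time it crosses `level`."""
        x, t = 0.0, 0.0
        while x < level:
            x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
            t += dt
        return t

    drift, sigma = 2.0, 0.5                        # assumed true parameters
    D = 10.0                                       # assumed failure threshold
    thresholds = np.array([1.0, 2.0, 3.0] * 20)    # intermediate, nonfailure thresholds
    T = np.array([first_passage_time(a, drift, sigma) for a in thresholds])

    drift_hat = thresholds.sum() / T.sum()         # moment/ML-type drift estimate
    mean_life_hat = D / drift_hat                  # estimated mean time to reach D
    print(drift_hat, mean_life_hat, "true mean life:", D / drift)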

4.
This paper studies a new steady‐state simulation output analysis method called replicated batch means in which a small number of replications are conducted and the observations in these replications are grouped into batches. This paper also introduces and compares methods for selecting the initial state of each replication. More specifically, we show that confidence intervals constructed by the replicated batch means method are valid for large batch sizes and derive expressions for the expected values and variances of the steady‐state mean and variance estimators for stationary processes and large sample sizes. We then use these expressions, analytical examples, and numerical experiments to compare the replicated batch means method with the standard batch means and multiple replications methods. The numerical results, which are obtained from an AR(1) process and a small, nearly‐decomposable Markov chain, show that the multiple replications method often gives confidence intervals with poorer coverage than the standard and replicated batch means methods and that the replicated batch means method, implemented with good choices of initialization method and number of replications, provides confidence interval coverages that range from being comparable with to being noticeably better than coverages obtained by the standard batch means method. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2006
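A minimal sketch of the replicated batch means construction on an AR(1) process: each replication is split into batches, the batch means from all replications are pooled, and a t-based confidence interval is built from them. The number of replications (4), batch size (64), and warm-up used here to initialize each replication are assumptions for illustration, not the paper's recommended choices.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def ar1(n, phi=0.9, warmup=1000):
        """Generate an AR(1) path X_t = phi*X_{t-1} + e_t after discarding a warm-up period."""
        x, out = 0.0, np.empty(n)
        for t in range(warmup + n):
            x = phi * x + rng.normal()
            if t >= warmup:
                out[t - warmup] = x
        return out

    def replicated_batch_means(reps, batch_size=64, alpha=0.05):
        """Replicated batch means CI: batch each replication, then pool all batch means."""
        batch_means = []
        for rep in reps:
            k = len(rep) // batch_size                      # batches in this replication
            batch_means.extend(rep[: k * batch_size].reshape(k, batch_size).mean(axis=1))
        batch_means = np.asarray(batch_means)
        b = len(batch_means)
        grand_mean = batch_means.mean()
        half = stats.t.ppf(1 - alpha / 2, b - 1) * batch_means.std(ddof=1) / np.sqrt(b)
        return grand_mean, grand_mean - half, grand_mean + half

    reps = [ar1(4096) for _ in range(4)]                    # 4 replications (assumed)
    print(replicated_batch_means(reps))                     # true steady-state mean is 0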

5.
A new connection between the distribution of component failure times of a coherent system and (adaptive) progressively Type‐II censored order statistics is established. Utilizing this property, we develop inferential procedures when the data consist of all component failures up to system failure, in two scenarios: in the case of complete information, we assume that the identity of the failed component is also observed, whereas in the case of incomplete information, we have information only about the failure times but not about which components have failed. In the first setting, we show that inferential methods for adaptive progressively Type‐II censored data can be applied directly to the problem. For incomplete information, we face the problem that the corresponding censoring plan is not observed and that the available inferential procedures depend on knowledge of the censoring plan used. To get estimates for distributional parameters, we propose maximum likelihood estimators which can be obtained by solving the likelihood equations directly or via an Expectation‐Maximization (EM)-type algorithm. For an exponential distribution, we also discuss a linear estimator of the mean. Moreover, we establish exact distributions for some estimators in the exponential case which can be used, for example, to construct exact confidence intervals. The results are illustrated by a five-component bridge system. © 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 512–530, 2015

6.
An approximation suggested in Mann, Schafer and Singpurwalla [18] for obtaining small-sample tolerance bounds based on possibly censored two-parameter Weibull and lognormal samples is investigated. The tolerance bounds obtained are those that effectively make most efficient use of sample data. Values based on the approximation are compared with some available exact values and shown to be in surprisingly good agreement, even in certain cases in which sample sizes are very small or censoring is extensive. Ranges over which error in the approximation is less than about 1 or 2 percent are determined. The investigation of the precision of the approximation extends results of Lawless [8], who considered large-sample maximum-likelihood estimates of parameters as the basis for approximate 95 percent Weibull tolerance bounds obtained by the general approach described in [18]. For Weibull (or extreme-value) data the approximation is particularly useful when sample sizes are moderately large (more than 25), but not large enough (well over 100 for severely censored data) for asymptotic normality of estimators to apply. For such cases simplified efficient linear estimates or maximum-likelihood estimates may be used to obtain the approximate tolerance bounds. For lognormal censored data, best linear unbiased estimates may be used, or any efficient unbiased estimators for which variances and covariances are known as functions of the square of the distribution variance.

7.
Moment estimators for the parameters of the Weibull distribution are considered in the context of analysis of field data. The data available are aggregated, with individual failure times not recorded. In this case, the complexity of the likelihood function argues against the use of maximum-likelihood estimation, particularly for relatively large sets of data, and moment estimators are a reasonable alternative. In this article, we derive the asymptotic covariance matrix of the moment estimators, and provide listings for BASIC computer programs which generate tables useful for calculation of the estimates as well as for estimating the asymptotic covariance matrix using aggregated data.
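The paper works with aggregated field data and tabulated quantities; as a simpler illustration of the underlying moment-matching step, the sketch below equates the sample mean and variance of individual failure times to the Weibull moments and solves for the shape parameter with a root finder. The bracketing interval for the shape and the simulated data are assumptions.

    import numpy as np
    from math import gamma
    from scipy.optimize import brentq

    def weibull_moment_estimates(x):
        """Match sample mean and variance to the Weibull moments to get (shape, scale)."""
        m, v = np.mean(x), np.var(x, ddof=1)
        cv2 = v / m**2                               # squared coefficient of variation

        def cv2_gap(k):
            # CV^2 of a Weibull with shape k, minus the sample CV^2
            return gamma(1 + 2.0 / k) / gamma(1 + 1.0 / k) ** 2 - 1.0 - cv2

        k_hat = brentq(cv2_gap, 0.1, 50.0)           # CV is decreasing in the shape parameter
        lam_hat = m / gamma(1 + 1.0 / k_hat)         # scale from the mean equation
        return k_hat, lam_hat

    rng = np.random.default_rng(3)
    sample = 2.0 * rng.weibull(1.5, size=500)        # true shape 1.5, scale 2.0
    print(weibull_moment_estimates(sample))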

8.
The Signal‐to‐Interference‐plus‐Noise Ratio (SINR) is an important metric of wireless communication link quality. SINR estimates have several important applications. These include optimizing the transmit power level for a target quality of service, assisting with handoff decisions and dynamically adapting the data rate for wireless Internet applications. Accurate SINR estimation provides for both a more efficient system and a higher user‐perceived quality of service. In this paper, we develop new SINR estimators and compare their mean squared error (MSE) performance. We show that our new estimators dominate estimators that have previously appeared in the literature with respect to MSE. The sequence of transmitted bits in wireless communication systems consists of both pilot bits (which are known both to the transmitter and receiver) and user bits (which are known only by the transmitter). The SINR estimators we consider alternatively depend exclusively on pilot bits, exclusively on user bits, or simultaneously use both pilot and user bits. In addition, we consider estimators that utilize smoothing and feedback mechanisms. Smoothed estimators are motivated by the fact that the interference component of the SINR changes relatively slowly with time, typically with the addition or departure of a user to the system. Feedback estimators are motivated by the fact that receivers typically decode bits correctly with a very high probability, and therefore user bits can be thought of as quasipilot bits. For each estimator discussed, we derive an exact or approximate formula for its MSE. Satterthwaite approximations, noncentral F distributions (singly and doubly) and distribution theory of quadratic forms are the key statistical tools used in developing the MSE formulas. In the case of approximate MSE formulas, we validate their accuracy using simulation techniques. The approximate MSE formulas, of interest in their own right for comparing the quality of the estimators, are also used for optimally combining estimators. In particular, we derive optimal weights for linearly combining an estimator based on pilot bits with an estimator based on user bits. The optimal weights depend on the MSE of the two estimators being combined, and thus the accurate approximate MSE formulas can conveniently be used. The optimal weights also depend on the unknown SINR, and therefore need to be estimated in order to construct a useable combined estimator. The impact on the MSE of the combined estimator due to estimating the weights is examined. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004
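The optimal-combination idea can be illustrated with a toy sketch: when two roughly unbiased estimators of the same SINR have (approximately known) MSEs and uncorrelated errors, the variance-minimizing linear combination weights them inversely to their MSEs. The Gaussian surrogates and the numbers below are assumptions for illustration, not the paper's signal model or MSE formulas.

    import numpy as np

    rng = np.random.default_rng(4)

    true_sinr = 5.0                     # assumed true SINR (linear scale)
    n_trials = 100_000

    # Toy surrogates: two noisy, roughly unbiased estimators with different MSEs
    mse_pilot, mse_user = 1.0, 0.25     # assumed (approximate) MSEs
    est_pilot = true_sinr + rng.normal(0, np.sqrt(mse_pilot), n_trials)
    est_user = true_sinr + rng.normal(0, np.sqrt(mse_user), n_trials)

    # Variance-minimizing weights for combining two unbiased, uncorrelated estimators
    w_pilot = (1 / mse_pilot) / (1 / mse_pilot + 1 / mse_user)
    w_user = 1.0 - w_pilot
    est_combined = w_pilot * est_pilot + w_user * est_user

    for name, est in [("pilot", est_pilot), ("user", est_user), ("combined", est_combined)]:
        print(name, "empirical MSE:", np.mean((est - true_sinr) ** 2))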

9.
To address the problem that both the input and output observation data contain noise during error separation for an inertial navigation platform, a total least squares method based on the errors-in-variables (EV) model is applied to separate the errors. The error model and error observation equations of the inertial navigation platform are given, the EV model and the total least squares method are introduced, and issues such as the mounting configuration of the platform and the external measurement signals required for error separation in a vehicle-mounted test are analyzed. A simulation comparison between the ordinary least squares method and the total least squares method is carried out; the results show that the EV-model-based total least squares method separates the error coefficients of the inertial navigation platform more accurately than ordinary least squares.
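The errors-in-variables idea can be illustrated with the classical SVD-based total least squares solution for a linear model in which both the regressor matrix and the observations are noisy. This is a generic sketch under a synthetic model, not the platform error model or vehicle-test data of the paper.

    import numpy as np

    def total_least_squares(A, b):
        """Classical TLS via the SVD of the augmented matrix [A b]."""
        n = A.shape[1]
        _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
        v = Vt[-1]                       # right singular vector of the smallest singular value
        return -v[:n] / v[n]             # TLS solution x, since [x; -1] spans the null direction

    rng = np.random.default_rng(5)
    x_true = np.array([0.5, -1.2, 2.0])
    A_clean = rng.normal(size=(200, 3))
    b_clean = A_clean @ x_true
    A_noisy = A_clean + 0.05 * rng.normal(size=A_clean.shape)   # noise on the inputs
    b_noisy = b_clean + 0.05 * rng.normal(size=b_clean.shape)   # noise on the outputs

    x_ls, *_ = np.linalg.lstsq(A_noisy, b_noisy, rcond=None)    # ordinary least squares
    x_tls = total_least_squares(A_noisy, b_noisy)
    print("LS :", x_ls)
    print("TLS:", x_tls)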

10.
The aim of this article is to study the asymptotic behavior of two imperfect repair models, called the Arithmetic Reduction of Intensity and Arithmetic Reduction of Age models. These models have been proposed by Doyen and Gaudoin (Reliab Eng Syst Safe 84 (2004) 45–56) and include many usual virtual age models. First, it is proved that the failure intensity of these models is asymptotically almost surely equivalent to a deterministic increasing function with a cumulative error proportional to a logarithm. Second, the almost sure convergence and asymptotic normality of several estimators of repair efficiency are derived, when the wear‐out process without repair is known. Finally, the coverage rate of the asymptotic confidence intervals issued from those estimators is empirically studied. © 2010 Wiley Periodicals, Inc. Naval Research Logistics, 2010

11.
12.
Hazard rate processes are discussed in the context of doubly stochastic Poisson processes. We derive an explicit expression for the reliability function corresponding to an increasing hazard rate process with independent increments. Also, bounds are obtained for the reliability function of a system with a general hazard rate process.

13.
Bayes Analysis of the Mixed Exponential Distribution Model
For censored test data, exact Bayes point estimates of the mean life and the reliability function of the two-component mixed exponential distribution model are given, and an approximate Bayes lower confidence limit for the reliability function is derived using the maximum entropy criterion.

14.
Defence Technology, 2022, 18(9): 1546-1551
In the present study, the thermal hazards of TNT and DNAN used as the molten binder in TKX-50-based melt-cast explosives were comparatively studied through accelerating rate calorimeter (ARC) and cook-off experiments. Two ARC operation modes were used to investigate thermal safety under adiabatic conditions (HWS mode) and under constant heating (CHR mode). The results demonstrated that in both heating modes, DNAN/TKX-50 outperformed TNT/TKX-50 from the thermal safety point of view; however, the relative sensitivity of the samples to heat was reversed between the two heating modes. In addition, the thermal hazard assessment obtained from the cook-off experiments agreed with the ARC analysis, indicating that replacing the molten binder TNT with DNAN would reduce the hazard of the TKX-50 melt-cast explosive. The cook-off experiments also showed that DNAN/TKX-50 outperformed TNT/TKX-50 in thermal stability, consistent with the CHR-mode result because of the similar heating process.

15.
For the semiparametric regression model Y_i = x_i β + g(t_i) + ε_i, i = 1, 2, …, n, estimators of the parameter β and the regression function g(·) are defined, and their strong consistency is proved.

16.
For various parameter combinations, the logistic–exponential survival distribution belongs to four common classes of survival distributions: increasing failure rate, decreasing failure rate, bathtub‐shaped failure rate, and upside‐down bathtub‐shaped failure rate. Graphical comparison of this new distribution with other common survival distributions is seen in a plot of the skewness versus the coefficient of variation. The distribution can be used as a survival model or as a device to determine the distribution class from which a particular data set is drawn. As the three‐parameter version is less mathematically tractable, our major results concern the two‐parameter version. Boundaries for the maximum likelihood estimators of the parameters are derived in this article. Also, a fixed‐point method to find the maximum likelihood estimators for complete and censored data sets has been developed. The two‐parameter and the three‐parameter versions of the logistic–exponential distribution are applied to two real‐life data sets. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008
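For a rough illustration of maximum likelihood fitting of the two-parameter version, the sketch below assumes the usual parameterization of the logistic-exponential survival function, S(t) = 1 / (1 + (e^(λt) - 1)^κ), simulates complete data by inversion, and maximizes the likelihood with a generic Nelder-Mead search rather than the paper's fixed-point method. The parameterization, starting values, and sample are assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_lik(params, t):
        """Negative log-likelihood for the logistic-exponential distribution (complete data)."""
        lam, kappa = params
        if lam <= 0 or kappa <= 0:
            return np.inf
        u = np.expm1(lam * t)                        # e^(lam*t) - 1
        log_pdf = (np.log(kappa) + np.log(lam) + lam * t
                   + (kappa - 1) * np.log(u) - 2 * np.log1p(u ** kappa))
        return -np.sum(log_pdf)

    def sample_logistic_exponential(lam, kappa, size, rng):
        """Invert S(t) = 1 / (1 + (e^(lam*t) - 1)^kappa) to simulate complete data."""
        p = rng.uniform(size=size)
        return np.log1p(((1 - p) / p) ** (1 / kappa)) / lam

    rng = np.random.default_rng(6)
    data = sample_logistic_exponential(lam=0.5, kappa=2.0, size=300, rng=rng)
    fit = minimize(neg_log_lik, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
    print(fit.x)                                     # should be near (0.5, 2.0)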

17.
It is important for applied statisticians to know if data that can be described by the Weibull distribution can also be described by the normal distribution. Some investigations have been published on this subject. In this article, we review the results in papers related to the above problem. Next, we define the mean hazard rate. We then investigate the properties of the mean hazard rate for the Weibull distribution and the normal distribution, respectively. Finally, we discuss the method of approximating the normal distribution by the Weibull distribution using the mean hazard rate.

18.
The Kalman filter is commonly used for data filtering and parameter estimation in nonlinear systems, such as projectile trajectory estimation and control. A drawback is that the prior error covariance matrix and the filter parameters are difficult to determine, which may cause the filter to diverge. Because the accuracy of state estimation for a nonlinear ballistic model strongly depends on its mathematical model, we improve the weighted least squares method (WLSM) with the minimum model error principle. The invariant embedding method is adopted to solve the cost function that includes the model error. Using the measurement data and the measurement error covariance matrix, a gradient descent algorithm determines the weighting matrix of the model error. The uncertainty and linearization error of the model are estimated recursively by the proposed method, thus achieving online filtering estimation of the observations. Simulation results indicate that the proposed recursive estimation algorithm is insensitive to initial conditions and has good robustness.

19.
Under Type-II hybrid censored samples, the maximum likelihood estimates of the unknown parameters of the generalized inverse exponential distribution are obtained. Asymptotic confidence intervals for the parameters are constructed using the asymptotic normality of the maximum likelihood estimators, and Bayes estimates of the parameters are computed using Lindley's approximation and the Tierney–Kadane approximation. Finally, the Monte Carlo method is used to compare the results of these estimation methods through simulation.

20.
Consider a stochastic simulation experiment consisting of v independent vector replications consisting of an observation from each of k independent systems. Typical system comparisons are based on mean (long‐run) performance. However, the probability that a system will actually be the best is sometimes more relevant, and can provide a very different perspective than the systems' means. Empirically, we select one system as the best performer (i.e., it wins) on each replication. Each system has an unknown constant probability of winning on any replication and the numbers of wins for the individual systems follow a multinomial distribution. Procedures exist for selecting the system with the largest probability of being the best. This paper addresses the companion problem of estimating the probability that each system will be the best. The maximum likelihood estimators (MLEs) of the multinomial cell probabilities for a set of v vector replications across k systems are well known. We use these same v vector replications to form vk unique vectors (termed pseudo‐replications) that contain one observation from each system and develop estimators based on AVC (All Vector Comparisons). In other words, we compare every observation from each system with every combination of observations from the remaining systems and note the best performer in each pseudo‐replication. AVC provides lower variance estimators of the probability that each system will be the best than the MLEs. We also derive confidence intervals for the AVC point estimators, present a portion of an extensive empirical evaluation and provide a realistic example. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 341–358, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10019
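A minimal sketch of the All Vector Comparisons idea, assuming that "best" means the largest observed value and using toy sizes so that all v^k pseudo-replications can be enumerated directly; the data-generating model and sample sizes are assumptions. The usual MLE (one win per original replication) is computed alongside for comparison.

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(7)

    v, k = 10, 3                                    # replications and systems (toy sizes)
    data = rng.normal(loc=[0.0, 0.2, 0.5], scale=1.0, size=(v, k))   # column j = system j

    # MLE: one winner per replication, relative frequencies of wins
    wins = np.bincount(np.argmax(data, axis=1), minlength=k)
    p_mle = wins / v

    # AVC: every combination of one observation per system forms a pseudo-replication
    avc_wins = np.zeros(k, dtype=int)
    for idx in product(range(v), repeat=k):         # enumerate all v**k pseudo-replications
        vector = [data[idx[j], j] for j in range(k)]
        avc_wins[int(np.argmax(vector))] += 1
    p_avc = avc_wins / v**k

    print("MLE estimates:", p_mle)
    print("AVC estimates:", p_avc)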
