Similar Literature
20 similar documents found (search time: 718 ms)
1.
This paper develops a forward algorithm and planning horizon procedures for an important machine replacement model in which the technological environment is assumed to be improving over time and the machine in use can be replaced by any of several different kinds of machines available at that time. The set of replacement alternatives may include (i) new machines with different types of technologies, such as labor- and capital-intensive ones, (ii) used machines, (iii) repairs and/or improvements that affect the performance characteristics of the existing machine, and so forth. The forward dynamic programming algorithm in the paper can be used to solve a finite horizon problem. The planning horizon results give a procedure to identify a forecast horizon T such that the optimal replacement decision for the first machine, based on the forecast of machine technology until period T, remains optimal for any problem with a horizon longer than T and, for that matter, for the infinite horizon problem. A flow chart and a numerical example are included to illustrate the algorithm.
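The forward recursion itself is easy to sketch. The following is a minimal illustration, not the paper's algorithm: all cost data (purchase prices by technology and period, age-dependent operating costs, salvage values) are hypothetical inputs, and f[t] denotes the minimum cost of covering the first t periods with the last machine retired at t.

```python
# A minimal sketch of a forward DP for machine replacement with several
# technologies; all cost inputs below are hypothetical.

def forward_replacement_dp(T, techs, price, op_cost, salvage):
    """f[t] = min cost of covering periods [0, t); a machine of some
    technology k is bought at an epoch s < t and retired at t."""
    INF = float("inf")
    f = [0.0] + [INF] * T
    decision = [None] * (T + 1)
    for t in range(1, T + 1):
        for s in range(t):                    # last purchase epoch
            for k in techs:                   # technology chosen at s
                age = t - s
                cost = (f[s] + price[k][s]
                        + sum(op_cost[k][a] for a in range(age))
                        - salvage[k][age])
                if cost < f[t]:
                    f[t], decision[t] = cost, (s, k)
    return f[T], decision

# Hypothetical data: the "capital" technology improves (gets cheaper) over time.
T = 6
techs = ["labor", "capital"]
price = {"labor": [10] * T, "capital": [14 - t for t in range(T)]}
op_cost = {"labor": [2 * a + 1 for a in range(T)],
           "capital": [a + 1 for a in range(T)]}
salvage = {"labor": [max(0, 5 - a) for a in range(T + 1)],
           "capital": [max(0, 7 - a) for a in range(T + 1)]}
print(forward_replacement_dp(T, techs, price, op_cost, salvage))
```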

2.
A series of independent trials is considered in which one of k ≥ 2 mutually exclusive and exhaustive outcomes occurs at each trial. The series terminates when m outcomes of any one type have occurred. The limiting distribution (as m → ∞) of the number of trials performed until termination is found, with particular attention to the situation where a Dirichlet distribution is assigned to the k-vector of probabilities for the outcomes. Applications to series of races involving k runners and to spares problems in reliability modeling are discussed. The problem of selecting a stopping rule so that the probability of the series terminating on outcome i is 1/k (i.e., a “fair” competition) is also studied. Two generalizations of the original asymptotic problem are addressed.
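To make the setup concrete, here is a small Monte Carlo sketch (our illustration, not the paper's asymptotic analysis) of the stopping time when a Dirichlet prior is placed on the outcome probabilities; the values of m, k, and the Dirichlet parameters are arbitrary choices.

```python
# Monte Carlo sketch of the stopping time: i.i.d. trials with k outcomes
# stop when some outcome has occurred m times; outcome probabilities are
# drawn from a Dirichlet prior. Parameters are illustrative.
import numpy as np

def trials_until_termination(m, alpha, rng):
    p = rng.dirichlet(alpha)              # random probability vector
    k = len(alpha)
    counts = np.zeros(k, dtype=int)
    n = 0
    while counts.max() < m:
        counts[rng.choice(k, p=p)] += 1   # one trial
        n += 1
    return n

rng = np.random.default_rng(1)
m, alpha = 20, [1.0, 1.0, 1.0]            # k = 3 "runners", uniform Dirichlet
samples = [trials_until_termination(m, alpha, rng) for _ in range(2000)]
print(np.mean(samples), np.std(samples))
```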

3.
The present study is concerned with the determination of a few observations from a sufficiently large complete or censored sample from the extreme value distribution with location and scale parameters μ and σ, respectively, such that the asymptotically best linear unbiased estimators (ABLUE) of the parameters in Ref. [24] yield high efficiencies among other choices of the same number of observations. (All efficiencies considered are relative to the Cramér-Rao lower bounds for regular unbiased estimators.) The study is based on asymptotic theory and a Type II censoring scheme. For the estimation of μ when σ is known, it has been proved that there exists a unique optimum spacing whether the sample is complete, right censored, left censored, or doubly censored. Several tables are prepared to aid in the numerical computation of the estimates as well as to furnish their efficiencies. For the estimation of σ when μ is known, it has been observed that there does not exist a unique optimum spacing. Accordingly, we have obtained a spacing based on a complete sample which yields high efficiency. A table similar to the above is prepared. When both μ and σ are unknown, we have considered four different spacings based on a complete sample and chosen the one yielding the highest efficiency. A table of the efficiencies is also prepared. Finally, we apply the above results to the estimation of the scale and/or shape parameters of the Weibull distribution.

4.
System reliability is often evaluated through individual tests of the components that constitute the system. These component test plans have advantages over complete system-based tests in terms of time and cost. In this paper, we consider the series system with n components, where the lifetime of the i-th component follows an exponential distribution with parameter λi. Assuming the test costs for the components differ, we develop an efficient algorithm to design a two-stage component test plan that satisfies the usual probability requirements on the system reliability and, in addition, minimizes the maximum expected cost. For the case of prior information in the form of upper bounds on the λi's, we use a genetic algorithm to solve the associated optimization problems, which are otherwise difficult to solve using mathematical programming techniques. The two-stage component test plans are cost effective compared to the single-stage plans developed by Rajgopal and Mazumdar. We demonstrate through several numerical examples that our approach has the potential to reduce the overall testing costs significantly. © 2002 John Wiley & Sons, Inc. Naval Research Logistics, 49: 95–116, 2002; DOI 10.1002/nav.1051

5.
We formulate exact expressions for the expected values of selected estimators of the variance parameter (that is, the sum of covariances at all lags) of a steady-state simulation output process. Given in terms of the autocovariance function of the process, these expressions are derived for variance estimators based on the simulation analysis methods of nonoverlapping batch means, overlapping batch means, and standardized time series. Comparing estimator performance in a first-order autoregressive process and the M/M/1 queue-waiting-time process, we find that certain standardized time series estimators outperform their competitors as the sample size becomes large. © 2007 Wiley Periodicals, Inc. Naval Research Logistics, 2007
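For reference, the nonoverlapping-batch-means estimator mentioned above can be written in a few lines. The sketch below applies it to a first-order autoregressive process, for which the variance parameter is known in closed form; the batch size and sample size are arbitrary choices, and this is our illustration rather than the paper's code.

```python
# Nonoverlapping batch means (NBM) estimate of the variance parameter
# sigma^2 = sum of autocovariances at all lags, tried on an AR(1) process.
import numpy as np

def nbm_variance_estimator(x, b):
    """Estimate sigma^2 from one run x using nonoverlapping batches of size b."""
    k = len(x) // b                        # number of whole batches
    means = x[:k * b].reshape(k, b).mean(axis=1)
    return b * means.var(ddof=1)           # b * sample variance of batch means

rng = np.random.default_rng(0)
phi, n = 0.7, 100_000
x = np.zeros(n)
for t in range(1, n):                      # AR(1): x_t = phi * x_{t-1} + eps_t
    x[t] = phi * x[t - 1] + rng.standard_normal()
# True variance parameter for AR(1) with unit-variance noise: 1/(1-phi)^2 = 11.1
print(nbm_variance_estimator(x, b=1000))
```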

6.
A new method for the solution of minimax and minisum location–allocation problems with Euclidean distances is suggested. The method is based on providing differentiable approximations to the objective functions. Thus, if we would like to locate m service facilities with respect to n given demand points, we have to minimize a nonlinear unconstrained function in the 2m variables x1, y1, …, xm, ym. This has been done very efficiently using a quasi-Newton method. Since both the original problems and their approximations are neither convex nor concave, the solutions attained may be only local minima. Quite surprisingly, for small problems of locating two or three service points, the global minimum was reached even when the initial position was far from the final result. In both the minisum and minimax cases, large problems of locating 10 service facilities among 100 demand points have been solved. The minima reached in these problems are only local, as is evident from the different solutions obtained for different initial guesses. For practical purposes, one can start from several different initial positions and choose the final result with the best value of the objective function. The likelihood that the best results obtained for these large problems are close to the global minimum is discussed. We also discuss the possibility of extending the method to cases in which the costs are not necessarily proportional to the Euclidean distances but may be more general functions of the coordinates of the demand and service points. The method can also be extended easily to similar three-dimensional problems.
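A simplified sketch of the smoothing idea for the minisum case is given below: the Euclidean distance is replaced by the differentiable approximation sqrt(d² + ε), and the resulting function is minimized with a quasi-Newton method. This is only illustrative: the nearest-facility allocation is left nonsmooth here, whereas the paper approximates the full objective, and the data, m, and ε are arbitrary assumptions.

```python
# Hyperbolic smoothing of Euclidean distances in a minisum (multi-Weber)
# location problem, minimized by quasi-Newton (BFGS). Data are made up.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
demand = rng.uniform(0, 10, size=(100, 2))    # n = 100 demand points
m, eps = 3, 1e-6                              # m facilities; smoothing constant

def smoothed_minisum(z):
    fac = z.reshape(m, 2)
    # smoothed distance from every demand point to every facility
    d2 = ((demand[:, None, :] - fac[None, :, :]) ** 2).sum(axis=2)
    d = np.sqrt(d2 + eps)
    return d.min(axis=1).sum()                # each point uses its nearest facility

x0 = rng.uniform(0, 10, size=2 * m)           # random initial facility positions
res = minimize(smoothed_minisum, x0, method="BFGS")
print(res.fun, res.x.reshape(m, 2))
```

Because the objective is nonconvex, restarting from several random x0 values and keeping the best result mirrors the practical advice in the abstract.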

7.
This paper develops bounds on the uncertainties in system availabilities or reliabilities that have been computed from structural (series, parallel, etc.) relations among uncertain subsystem availabilities or reliabilities. It is assumed that the highly available (reliable) subsystems have been tested or simulated to determine their unavailabilities (unreliabilities) to within some small percentages of uncertainty. It is shown that series, parallel, and r-out-of-n structures which are nominally highly available will have unavailability uncertainties whose percentage errors are of the same order as the subsystem uncertainties. Thus overall system analysis errors, even for large systems, are of the same order of magnitude as the uncertainties in the component probabilities. Both systematic (bias-type) uncertainties and independent random uncertainties are considered.
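A small numeric illustration of the series-structure claim (with made-up component values): since the system unavailability is approximately the sum of the small component unavailabilities, a 10% error in each component propagates to roughly a 10% error in the system figure.

```python
# Assumed component unavailabilities; each overstated by 10%.
u = [1e-4, 3e-4, 5e-4]
u_hi = [x * 1.10 for x in u]

u_sys = sum(u)            # series system: unavailability ~= sum of components
u_sys_hi = sum(u_hi)
print(u_sys, u_sys_hi, (u_sys_hi - u_sys) / u_sys)   # relative error = 0.10
```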

8.
The maximum likelihood estimator of the service distribution function of an M/G/∞ service system is obtained based on output time observations. This estimator is useful when observation of the service time of each customer could introduce bias or may be impossible. The maximum likelihood estimator is compared to the estimator proposed by Mark Brown [2]. Relative to each other, Brown's estimator is useful in light traffic while the maximum likelihood estimator is applicable in heavy traffic. Both estimators are compared to the empirical distribution function based on a sample of service times and are found to have drawbacks, although each estimator may have applications in special circumstances.

9.
In a series of articles published in the Fall 1990 issue of Defence Economics, Alexander (1990), Atesoglu and Mueller (1990), and Huang and Mintz (1990) all specified and empirically estimated defence-growth models based on a neoclassical production function. We now isolate the externality component in our model, re-estimate its coefficients using data on the U.S. economy for 1952–88, and compare specifications and results with Atesoglu and Mueller (1990) and Alexander (1990).

10.
Consider a stochastic simulation experiment consisting of v independent vector replications, each comprising an observation from each of k independent systems. Typical system comparisons are based on mean (long-run) performance. However, the probability that a system will actually be the best is sometimes more relevant, and can provide a very different perspective than the systems' means. Empirically, we select one system as the best performer (i.e., it wins) on each replication. Each system has an unknown constant probability of winning on any replication, and the numbers of wins for the individual systems follow a multinomial distribution. Procedures exist for selecting the system with the largest probability of being the best. This paper addresses the companion problem of estimating the probability that each system will be the best. The maximum likelihood estimators (MLEs) of the multinomial cell probabilities for a set of v vector replications across k systems are well known. We use these same v vector replications to form v^k unique vectors (termed pseudo-replications) that contain one observation from each system and develop estimators based on AVC (All Vector Comparisons). In other words, we compare every observation from each system with every combination of observations from the remaining systems and note the best performer in each pseudo-replication. AVC provides lower-variance estimators of the probability that each system will be the best than the MLEs. We also derive confidence intervals for the AVC point estimators, present a portion of an extensive empirical evaluation, and provide a realistic example. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 341–358, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10019
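The contrast between the MLE and the AVC estimator can be sketched directly. In the toy example below (simulated normal outputs, with "larger is better" assumed), the MLE counts wins over the v aligned replications, while AVC counts wins over all v^k pseudo-replications.

```python
# MLE vs. All Vector Comparisons (AVC) estimates of P(system i is best).
# Data are simulated; "best" means largest observation here (an assumption).
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
v, k = 10, 3
obs = rng.normal(loc=[0.0, 0.2, 0.5], size=(v, k))   # obs[r, i]: rep r, system i

# MLE: winner of each of the v aligned replications
wins = np.bincount(obs.argmax(axis=1), minlength=k)
p_mle = wins / v

# AVC: winner of each of the v**k pseudo-replications
avc_wins = np.zeros(k)
cols = np.arange(k)
for idx in product(range(v), repeat=k):               # one observation per system
    vec = obs[list(idx), cols]
    avc_wins[vec.argmax()] += 1
p_avc = avc_wins / v ** k
print(p_mle, p_avc)
```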

11.
This paper extends the Low-Lippman M/M/1 model to the case of Gamma service times. Specifically, we have a queue in which arrivals are Poisson, service time is Gamma-distributed, and the arrival rate to the system is controlled by setting an admission fee p. The arrival rate λ(p) is non-increasing in p. We prove that the optimal admission fee p* is a non-decreasing function of the customer work load on the server. The proof is for an infinite capacity queue and holds for the infinite horizon continuous time Markov decision process. In the special case of exponential service time, we extend the Low-Lippman model to include a state-dependent service rate and service cost structure (for finite or infinite time horizon and queue capacity). Relatively recent dynamic programming techniques are employed throughout the paper. Due to the large class of functions represented by the Gamma family, the extension is of interest and utility.

12.
We implement a solution procedure for general convex separable programs in which a series of relatively small piecewise linear programs is solved, as opposed to a single large one, and in which, based on bound calculations developed in [13] and [14], the ranges of linearization are systematically reduced for successive programs. The procedure inherits ε-convergence to the global optimum in a finite number of steps, but perhaps its most distinctive feature is the rigorous way in which ranges containing an optimal solution are reduced from iteration to iteration. This paper describes the procedure, called successive approximation, and discusses its convergence, the tightness of the bounds, the bound-calculation overhead, and its robustness. It presents a computer implementation to demonstrate the procedure's effectiveness for general problems and compares it (1) with the more standard separable programming approach and (2) with one of the recent augmented Lagrangian methods [10] included in a comprehensive study of nonlinear programming codes [12]. It seems clear from over 130 cases resulting from 80 distinct problems studied here that significant savings in computational effort can be realized by judicious use of the procedure, and its robustness appreciably increases the ease with which it can be used. Moreover, for most of these problems, the advantage increases as the size, nonlinearity, and degree of desired accuracy increase. Other important benefits include significantly smaller storage requirements, the ability to estimate the error in the current solution, and the ability to terminate the algorithm as soon as the acceptable level of accuracy has been achieved. Problems requiring up to about 10,000 nonzero elements in their specification and about 45,000 nonzero elements in the generated separable programs, resulting from up to 70 original nonlinear variables and 70 nonlinear constraints, are included in the computations.

13.
The problem of computing reliability and availability and their associated confidence limits for multi-component systems has appeared often in the literature. This problem arises when some or all of the component reliabilities and availabilities are statistical estimates (random variables) obtained from test and other data. The problem of computing confidence limits has generally been considered difficult and treated only on a case-by-case basis. This paper deals with Bayes confidence limits on reliability and availability for a more general class of systems than previously considered, including, as special cases, series-parallel and standby system applications. The posterior distributions obtained are exact in theory, and their numerical evaluation is limited only by computing resources, data representation, and round-off in calculations. This paper collects and generalizes previous results of the authors and others. The methods presented apply to both reliability and availability analysis. The conceptual development requires only that system reliability or availability be probabilities defined in terms acceptable for a particular application. The emphasis is on Bayes analysis and the determination of the posterior distribution functions; having these, the calculation of point estimates and confidence limits is routine. The paper includes several examples of estimating system reliability and confidence limits based on observed component test data, as well as an example of the numerical procedure for computing Bayes confidence limits for the reliability of a system consisting of N failure-independent components connected in series. Both an exact and a new approximate numerical procedure for computing point and interval estimates of reliability are presented, and the results obtained from the two procedures are compared. It is shown that the approximation is entirely sufficient for most reliability engineering analysis.
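As a hedged illustration of the numerical side, the following sketch computes Bayes point and interval estimates for a series system by sampling component posteriors and multiplying. The Beta(1, 1) priors and the pass/fail test data are our assumptions, not the paper's example.

```python
# Monte Carlo Bayes limits for a series system of independent components:
# with Beta(1, 1) priors and s_i successes in n_i tests, each component
# reliability has a Beta(s_i + 1, n_i - s_i + 1) posterior.
import numpy as np

rng = np.random.default_rng(3)
n = np.array([50, 40, 60])           # tests per component (assumed data)
s = np.array([48, 39, 57])           # successes per component (assumed data)

draws = rng.beta(s + 1, n - s + 1, size=(100_000, len(n)))
system = draws.prod(axis=1)          # series system: product of reliabilities
lo, hi = np.percentile(system, [5, 95])
print(system.mean(), (lo, hi))       # posterior mean and 90% Bayes limits
```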

14.
Suppose that a nonhomogeneous Poisson process is observed for a length of time T, say. Let λ(t) denote the mean value function of the process. It is assumed that λ(t) is first increasing and then decreasing inside the interval (0, T), with a peak at t = t0, say. Three methods are given for estimating t0. One of these methods is nonparametric, and the other two are based on the standard regression technique and the maximum likelihood principle. The given result has application in the problem of determining the azimuth of a target from radar-impulse data. The time series of incoming signals may be approximated by the occurrence of a nonhomogeneous Poisson process with mean value function λ(t). The azimuth of the target is reasonably determined from the direction of the axis of the radar beam at the instant t0 corresponding to the peak value of λ(t).
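One simple regression-style estimate of t0 (an illustration in the spirit of the regression method; the binning and the quadratic form are our assumptions) is to fit a parabola to binned event counts and take its vertex:

```python
# Estimate the peak t0 of a nonhomogeneous Poisson intensity by fitting
# a quadratic to binned counts; events are simulated by thinning.
import numpy as np

def peak_from_events(times, T, bins=20):
    counts, edges = np.histogram(times, bins=bins, range=(0.0, T))
    mids = 0.5 * (edges[:-1] + edges[1:])
    a, b, c = np.polyfit(mids, counts, deg=2)   # counts ~ a t^2 + b t + c
    return -b / (2.0 * a)                       # vertex of the parabola

# Example: intensity peaking at t0 = 4 on (0, 10), max rate 30 (assumed).
rng = np.random.default_rng(4)
T = 10.0
lam = lambda t: 30.0 * np.exp(-((t - 4.0) ** 2) / 8.0)
cand = rng.uniform(0, T, size=rng.poisson(30.0 * T))      # rate-30 candidates
events = cand[rng.uniform(0, 30.0, size=cand.size) < lam(cand)]  # thinning
print(peak_from_events(events, T))   # roughly 4, up to sampling noise
```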

15.
If the number of customers in a queueing system as a function of time has a proper limiting steady-state distribution, then that steady-state distribution can be estimated from system data by fitting a general stationary birth-and-death (BD) process model to the data and solving for its steady-state distribution using the familiar local-balance steady-state equation for BD processes, even if the actual process is not a BD process. We show that this indirect way to estimate the steady-state distribution can be effective for periodic queues, because the fitted birth and death rates often have special structure allowing them to be estimated efficiently by fitting parametric functions with only a few parameters, for example, 2. We focus on the multiserver Mt/GI/s queue with a nonhomogeneous Poisson arrival process having a periodic time-varying rate function. We establish properties of its steady-state distribution and fitted BD rates. We also show that the fitted BD rates can be a useful diagnostic tool to see if an Mt/GI/s model is appropriate for a complex queueing system. © 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 664–685, 2015
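The indirect estimator is straightforward to sketch: estimate state-dependent birth and death rates from the occupation times and up/down jump counts of a sample path, then apply the local-balance recursion pi_{n+1} = pi_n * lambda_n / mu_{n+1}. The path format below (piecewise-constant states with jump times) and the toy data are assumptions for illustration.

```python
# Fit a birth-and-death model to a queue-length path and solve for the
# steady-state distribution via local balance. Toy data, not the paper's.
import numpy as np

def fitted_bd_steady_state(states, times):
    """states[j] is the level held on [times[j], times[j+1])."""
    nmax = max(states)
    hold = np.zeros(nmax + 1)                  # time spent in each state
    up = np.zeros(nmax + 1)                    # upward jump counts
    down = np.zeros(nmax + 1)                  # downward jump counts
    for j in range(len(states) - 1):
        dt = times[j + 1] - times[j]
        hold[states[j]] += dt
        if states[j + 1] == states[j] + 1:
            up[states[j]] += 1
        elif states[j + 1] == states[j] - 1:
            down[states[j]] += 1
    lam = np.divide(up, hold, out=np.zeros_like(up), where=hold > 0)
    mu = np.divide(down, hold, out=np.zeros_like(down), where=hold > 0)
    pi = np.ones(nmax + 1)
    for n in range(nmax):                      # local balance recursion
        pi[n + 1] = pi[n] * lam[n] / mu[n + 1] if mu[n + 1] > 0 else 0.0
    return pi / pi.sum()

# Toy path: unit-step jumps between queue lengths at the given times.
states = [0, 1, 2, 1, 2, 3, 2, 1, 0]
times = [0.0, 0.5, 1.2, 1.9, 2.4, 3.0, 3.8, 4.5, 5.1]
print(fitted_bd_steady_state(states, times))
```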

16.
A series of independent Bernoulli trials is considered in which either an outcome of type A or type B occurs at each trial. The series terminates when n outcomes of one type have occurred. Two observable random variables of interest are the total number of outcomes in the series and the number of outcomes of the “losing kind.” Two methods of approximation of the expectations of these random variables for large n are obtained and compared. The limiting distribution of the number of outcomes of the “losing kind” is considered when a beta distribution is assigned to p.

17.
Capacity improvement and conditional penalties are two computational aids for fathoming subproblems in a branch-and-bound procedure. In this paper, we apply these techniques to the fixed charge transportation problem (FCTP) and show how relaxations of the FCTP subproblems can be posed as concave minimization problems (rather than LP relaxations). Using the concave relaxations, we propose a new conditional penalty and three new types of capacity improvement techniques for the FCTP. Based on computational experiments using a standard set of FCTP test problems, the new capacity improvement and penalty techniques are responsible for a threefold reduction in CPU time for the branch-and-bound algorithm and nearly a tenfold reduction in the number of subproblems that need to be evaluated in the branch-and-bound enumeration tree. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 341–355, 1999

18.
Suppose that we have enough computer time to make n observations of a stochastic process by means of simulation and would like to construct a confidence interval for the steady-state mean. We can make k independent runs of m observations each (n = k·m) or, alternatively, one run of n observations which we then divide into k batches of length m. These methods are known as replication and batch means, respectively. In this paper, using the probability of coverage and the half-length of a confidence interval as criteria for comparison, we empirically show that batch means is superior to replication, but that neither method works well if n is too small. We also show that if m is chosen too small for replication, then the coverage may decrease dramatically as the total sample size n is increased.
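A minimal sketch of the batch-means interval described above (the data and the number of batches are placeholders; the paper's empirical comparison is more involved):

```python
# Batch-means confidence interval: divide one run of n observations into
# k batches of length m and build a t-interval from the batch means.
import numpy as np
from scipy import stats

def batch_means_ci(x, k, level=0.90):
    m = len(x) // k
    means = np.asarray(x[:k * m]).reshape(k, m).mean(axis=1)
    center = means.mean()
    half = (stats.t.ppf(0.5 + level / 2, df=k - 1)
            * means.std(ddof=1) / np.sqrt(k))
    return center - half, center + half

rng = np.random.default_rng(5)
x = rng.exponential(2.0, size=10_000)   # placeholder "steady-state" output
print(batch_means_ci(x, k=20))
```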

19.
The standard issue bayonet of the British Army immediately preceding and during the First World War was the Pattern 1907. It was manufactured at different times and in varying numbers during that period by one official body, the Royal Small Arms Factory at Enfield, and by five private contractors. These bayonets were made according to published official specifications issued by the War Department and based on a ‘pattern example’ provided by the Royal Small Arms Factory. The specifications indicate, inter alia, the quality of metal used in making the bayonets, the methods of inspection and proofing, and the required maximum and minimum weight range of the completed bayonet. However, examination of a series of these bayonets in a private collection suggested that their weights varied considerably from the mid-point values of the allowed weight ranges in the original and amended specifications (16.5 oz. and 17 oz., respectively). To establish whether this was a common feature of this class of bayonet rather than a chance finding, the weights of other surviving Pattern 1907 bayonets were determined and compared to establish the degree of variance from the official specifications as originally set out by the Royal Small Arms Factory. Seventy-six percent of the 142 bayonets surveyed were found to be above the mid-point of the allowed weight range given in the amended manufacturing specifications, with many being at the upper end of the allowed range. This is a statistically unusual result. It is speculated that the target weight may have been deliberately set higher by the individual manufacturers to eliminate the possibility of any underweight bayonets being rejected by the Royal Small Arms Factory inspectors, with a consequent refusal of acceptance and payment for the work.
