Similar Documents
20 similar documents retrieved.
1.
A series of independent Bernoulli trials is considered in which either an outcome of type A or type B occurs at each trial. The series terminates when n outcomes of one type have occurred. Two observable random variables of interest are the total number of outcomes in the series and the number of outcomes of the “losing kind.” Two methods of approximation of the expectations of these random variables for large n are obtained and compared. The limiting distribution of the number of outcomes of the “losing kind” is considered when a beta distribution is assigned to p.
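To make the quantities above concrete, the short Python sketch below estimates by simulation the expected total number of trials and the expected number of outcomes of the losing kind for a race to n outcomes of one type; the parameter values (n = 10, p = 0.6) are arbitrary illustrations, not taken from the paper.

```python
import random

def race_to_n(n, p, rng):
    """Run Bernoulli trials (type A with probability p) until one type has n outcomes.
    Returns (total number of trials, number of outcomes of the losing kind)."""
    a = b = 0
    while a < n and b < n:
        if rng.random() < p:
            a += 1
        else:
            b += 1
    return a + b, min(a, b)

def estimate_expectations(n, p, reps=100_000, seed=1):
    rng = random.Random(seed)
    total = loser = 0
    for _ in range(reps):
        t, l = race_to_n(n, p, rng)
        total += t
        loser += l
    return total / reps, loser / reps

if __name__ == "__main__":
    e_total, e_loser = estimate_expectations(n=10, p=0.6)
    print(f"E[total outcomes] ~ {e_total:.3f}, E[losing-kind outcomes] ~ {e_loser:.3f}")
```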

2.
Consider a stochastic simulation experiment consisting of v independent vector replications, each containing one observation from each of k independent systems. Typical system comparisons are based on mean (long‐run) performance. However, the probability that a system will actually be the best is sometimes more relevant, and can provide a very different perspective than the systems' means. Empirically, we select one system as the best performer (i.e., it wins) on each replication. Each system has an unknown constant probability of winning on any replication and the numbers of wins for the individual systems follow a multinomial distribution. Procedures exist for selecting the system with the largest probability of being the best. This paper addresses the companion problem of estimating the probability that each system will be the best. The maximum likelihood estimators (MLEs) of the multinomial cell probabilities for a set of v vector replications across k systems are well known. We use these same v vector replications to form v^k unique vectors (termed pseudo‐replications) that contain one observation from each system and develop estimators based on AVC (All Vector Comparisons). In other words, we compare every observation from each system with every combination of observations from the remaining systems and note the best performer in each pseudo‐replication. AVC provides lower variance estimators of the probability that each system will be the best than the MLEs. We also derive confidence intervals for the AVC point estimators, present a portion of an extensive empirical evaluation and provide a realistic example. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 341–358, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10019
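The AVC point estimator described above lends itself to a compact sketch. The snippet below is illustrative only (it assumes "larger is better" and uses arbitrary toy data); it is not the authors' code, and ties are broken by taking the first maximizer.

```python
import itertools
import numpy as np

def mle_estimates(data):
    """Classical MLE: the fraction of the v replications that each system wins."""
    v, k = data.shape
    winners = np.argmax(data, axis=1)
    return np.bincount(winners, minlength=k) / v

def avc_estimates(data):
    """All Vector Comparisons (AVC) style estimate of P(system i is best).

    data: array of shape (v, k) -- v replications of k systems.  Every one of the
    v**k pseudo-replications (one observation per system) is formed, the best
    performer in each is recorded, and win frequencies are returned."""
    v, k = data.shape
    cols = np.arange(k)
    wins = np.zeros(k)
    for combo in itertools.product(range(v), repeat=k):
        wins[int(np.argmax(data[list(combo), cols]))] += 1
    return wins / v**k

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = rng.normal(loc=[0.0, 0.2, 0.4], scale=1.0, size=(6, 3))  # v = 6, k = 3
    print("MLE:", mle_estimates(sample))
    print("AVC:", avc_estimates(sample))
```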

3.
The coverage C of area targets by salvos of weapons generally varies randomly, because of random target location and weapon impact point fluctuations. A third source of variation appears when, instead of an area target, a multiple-element target is considered, consisting of m point targets distributed randomly and independently of one another around the target center. A multiple-integral expression is derived for the probability pk of killing exactly k target elements. It is shown that pk is a linear function of the higher moments, of the order k to m, of the area coverage C. More explicit expressions are derived for the case of two weapons and for circular-symmetric functions. Similar to well-known results for the expectation and variance of coverage of area targets, these expressions can be evaluated by numerical quadrature. Furthermore, the coverage problem in which all underlying functions are Gaussian can be completely solved in closed form. For such a problem, with two weapons, numerical results are presented. They show that the distribution of k can be approximated by a binomial distribution only if the target center and weapon impact point fluctuations are small.
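The distribution of the number of killed elements is easy to probe by Monte Carlo. The sketch below uses simple illustrative assumptions (circular Gaussian scatter for both the target elements and the impact points, and a "cookie-cutter" lethal radius) rather than the damage functions treated in the paper.

```python
import numpy as np

def kill_count_distribution(m=5, n_weapons=2, sigma_target=1.0, sigma_impact=0.5,
                            lethal_radius=0.8, reps=100_000, seed=0):
    """Monte Carlo estimate of p_k = P(exactly k of the m target elements are killed).

    Elements scatter around the target centre with spread sigma_target; impact
    points scatter with spread sigma_impact; an element is killed if it lies
    within lethal_radius of at least one impact point."""
    rng = np.random.default_rng(seed)
    elements = rng.normal(0.0, sigma_target, size=(reps, m, 2))
    impacts = rng.normal(0.0, sigma_impact, size=(reps, n_weapons, 2))
    dist = np.linalg.norm(elements[:, :, None, :] - impacts[:, None, :, :], axis=-1)
    killed = (dist <= lethal_radius).any(axis=-1)      # shape (reps, m)
    return np.bincount(killed.sum(axis=-1), minlength=m + 1) / reps

if __name__ == "__main__":
    for k, p in enumerate(kill_count_distribution()):
        print(f"P(exactly {k} elements killed) ~ {p:.4f}")
```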

4.
This paper addresses the problem of finding a feasible schedule of n jobs on m parallel machines, where each job has a deadline and some jobs are preassigned to some machine. This problem arises in the daily assignment of workload to a set of flight dispatchers, and it is strongly characterized by the fact that the job lengths may assume one out of k different values, for small k. We prove the problem to be NP‐complete for k = 2 and propose an effective implicit enumeration algorithm which efficiently solves a set of real‐life instances. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 359–376, 2000

5.
Consider a simulation experiment consisting of v independent vector replications across k systems, where in any given replication one system is selected as the best performer (i.e., it wins). Each system has an unknown constant probability of winning in any replication and the numbers of wins for the individual systems follow a multinomial distribution. The classical multinomial selection procedure of Bechhofer, Elmaghraby, and Morse (Procedure BEM) prescribes a minimum number of replications, denoted as v*, so that the probability of correctly selecting the true best system (PCS) meets or exceeds a prespecified probability. Assuming that larger is better, Procedure BEM selects as best the system having the largest value of the performance measure in more replications than any other system. We use these same v* replications across k systems to form (v*)^k pseudoreplications that contain one observation from each system, and develop Procedure AVC (All Vector Comparisons) to achieve a higher PCS than with Procedure BEM. For specific small-sample cases and via a large-sample approximation we show that the PCS with Procedure AVC exceeds the PCS with Procedure BEM. We also show that with Procedure AVC we achieve a given PCS with a smaller v than the v* required with Procedure BEM. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 459–482, 1998
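The "most wins" rule at the heart of Procedure BEM is easy to examine by simulation. The sketch below estimates its PCS for an assumed configuration of unit-variance normal systems; it does not reproduce the tabulated v* values of Bechhofer, Elmaghraby, and Morse.

```python
import numpy as np

def pcs_most_wins(means, v, reps=20_000, seed=0):
    """Estimate the probability of correct selection (PCS) of the rule
    'select the system that wins the most of the v replications'.

    means: true means of the k systems (larger is better).  Ties among win
    counts are broken by the lowest index, as a simplification."""
    rng = np.random.default_rng(seed)
    means = np.asarray(means, dtype=float)
    best = int(np.argmax(means))
    k = len(means)
    correct = 0
    for _ in range(reps):
        sample = rng.normal(means, 1.0, size=(v, k))
        wins = np.bincount(np.argmax(sample, axis=1), minlength=k)
        correct += int(np.argmax(wins)) == best
    return correct / reps

if __name__ == "__main__":
    # Illustrative configuration: the third system is best by 0.5 standard deviations.
    for v in (5, 15, 45):
        print(f"v = {v:3d}: estimated PCS ~ {pcs_most_wins([0.0, 0.0, 0.5], v):.3f}")
```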

6.
Previous methods for solving the nonlinear one-parametric linear programming problem min {c(t)^T x | Ax = b, x ≥ 0} for t ∈ [α, β] were based on the simplex method using a considerably extended tableau. The proposed method avoids such an extension. A finite sequence of feasible bases (B_k | k = 1, 2, …, r), optimal in [t_k, t_{k+1}] for k = 1, 2, …, r with α = t_1 < t_2 < … < t_{r+1} = β, is determined using the zeroes of a set of nonlinear functions. Computational experience is discussed in the special case of t-norm transportation problems.

7.
Suppose one object is hidden in the k-th of n boxes with probability p(k). The boxes are to be searched sequentially. Associated with the j-th search of box k is a cost c(j,k) and a conditional probability q(j,k) that the first j − 1 searches of box k are unsuccessful while the j-th search is successful, given that the object is hidden in box k. The problem is to maximize the probability that we find the object if we are not allowed to spend more than L on the search. We prove the existence of an optimal allocation of the search effort L and state an algorithm for the construction of an optimal allocation. Finally, we discuss some problems concerning the complexity of our problem.
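For very small instances an optimal allocation can simply be found by enumeration. The sketch below evaluates every allocation of searches within the budget L, using the detection and cost structure described above; the numerical data are arbitrary illustrations, and this is not the paper's algorithm.

```python
import itertools

def detection_probability(alloc, p, q):
    """P(object is found) when box k is searched alloc[k] times.
    p[k] = P(object is in box k); q[j][k] = P(the first j searches of box k fail
    and the (j+1)-th succeeds | object in box k), for j = 0, 1, ..."""
    return sum(p[k] * sum(q[j][k] for j in range(alloc[k])) for k in range(len(p)))

def total_cost(alloc, c):
    """c[j][k] = cost of the (j+1)-th search of box k."""
    return sum(c[j][k] for k in range(len(alloc)) for j in range(alloc[k]))

def best_allocation(p, q, c, budget, max_searches):
    """Brute-force over all allocations (small instances only)."""
    best_prob, best_alloc = -1.0, None
    for alloc in itertools.product(range(max_searches + 1), repeat=len(p)):
        if total_cost(alloc, c) <= budget:
            prob = detection_probability(alloc, p, q)
            if prob > best_prob:
                best_prob, best_alloc = prob, alloc
    return best_prob, best_alloc

if __name__ == "__main__":
    p = [0.5, 0.3, 0.2]                                        # hiding probabilities
    q = [[0.6, 0.4, 0.5], [0.2, 0.2, 0.2], [0.1, 0.1, 0.1]]    # q[j][k]
    c = [[1.0, 1.0, 1.0], [1.5, 1.0, 1.0], [2.0, 1.0, 1.0]]    # c[j][k]
    prob, alloc = best_allocation(p, q, c, budget=4.0, max_searches=3)
    print(f"best allocation {alloc}, detection probability {prob:.3f}")
```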

8.
We investigate the solvability of two single‐machine scheduling problems when the objective is to identify, among all job subsets with cardinality k, 1 ≤ k ≤ n, the one that has the minimum objective function value. For the single‐machine minimum maximum lateness problem, we conclude that the problem is solvable in O(n^2) time using the proposed REMOVE algorithm. This algorithm can also be used as an alternative to Moore's algorithm to solve the minimum number of tardy jobs problem by actually solving the hierarchical problem in which the objective is to minimize the maximum lateness subject to the minimum number of tardy jobs. We then show that the REMOVE algorithm cannot be used to solve the general case of the single‐machine total‐weighted completion time problem; we derive sufficient conditions among the job parameters so that the total weighted completion time problem becomes solvable in O(n^2) time. © 2013 Wiley Periodicals, Inc. Naval Research Logistics 60: 449–453, 2013
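Since the abstract refers to Moore's algorithm for the minimum number of tardy jobs, a sketch of the classical Moore–Hodgson procedure is included below for reference; the REMOVE algorithm itself is not reproduced here.

```python
import heapq

def moore_hodgson(jobs):
    """Classical Moore-Hodgson algorithm: minimise the number of tardy jobs on one machine.

    jobs: list of (processing_time, due_date) pairs.
    Returns (number of tardy jobs, on-time jobs in earliest-due-date order)."""
    accepted = []            # max-heap on processing time, stored as (-p, d)
    completion = 0
    for p, d in sorted(jobs, key=lambda job: job[1]):    # EDD order
        heapq.heappush(accepted, (-p, d))
        completion += p
        if completion > d:                               # infeasible: drop the longest job
            neg_p, _ = heapq.heappop(accepted)
            completion += neg_p                          # neg_p is negative
    on_time = sorted(((-p, d) for p, d in accepted), key=lambda job: job[1])
    return len(jobs) - len(on_time), on_time

if __name__ == "__main__":
    instance = [(4, 6), (3, 7), (2, 8), (5, 9), (6, 11)]   # (processing time, due date)
    tardy, schedule = moore_hodgson(instance)
    print(f"{tardy} tardy job(s); on-time sequence: {schedule}")
```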

9.
Tolerance limits which control both tails of the normal distribution, so that there is no more than a proportion β1 in one tail and no more than β2 in the other tail with probability γ, may be computed for any size sample. They are computed from X̄ − k1S and X̄ + k2S, where X̄ and S are the usual sample mean and standard deviation and k1 and k2 are constants previously tabulated in Odeh and Owen [3]. The question addressed is, “Just how accurate are the coverages of these intervals (−∞, X̄ − k1S) and (X̄ + k2S, ∞) for various size samples?” The question is answered in terms of how widely the coverage of each tail interval differs from the corresponding required content with a given confidence γ′.
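The accuracy question can be explored numerically. The sketch below draws repeated normal samples and records how often both tail contents respect β1 and β2 simultaneously; the constants k1 and k2 are placeholders standing in for the tabulated Odeh–Owen values.

```python
import numpy as np
from scipy.stats import norm

def joint_coverage_rate(n, k1, k2, beta1, beta2, reps=50_000, seed=0):
    """Fraction of N(0, 1) samples of size n for which the true content below
    Xbar - k1*S is <= beta1 AND the true content above Xbar + k2*S is <= beta2."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((reps, n))
    xbar, s = x.mean(axis=1), x.std(axis=1, ddof=1)
    lower_content = norm.cdf(xbar - k1 * s)    # N(0,1) mass below the lower limit
    upper_content = norm.sf(xbar + k2 * s)     # N(0,1) mass above the upper limit
    return np.mean((lower_content <= beta1) & (upper_content <= beta2))

if __name__ == "__main__":
    # k1, k2 below are placeholders, NOT the tabulated values from Odeh and Owen.
    print(joint_coverage_rate(n=20, k1=2.3, k2=2.3, beta1=0.05, beta2=0.05))
```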

10.
The signature of a system with independent and identically distributed (i.i.d.) component lifetimes is a vector whose ith element is the probability that the ith component failure is fatal to the system. System signatures have been found to be quite useful tools in the study and comparison of engineered systems. In this article, the theory of system signatures is extended to versions of signatures applicable in dynamic reliability settings. It is shown that, when a working used system is inspected at time t and it is noted that precisely k failures have occurred, the vector s ∈ [0,1]^(n−k) whose jth element is the probability that the (k + j)th component failure is fatal to the system, for j = 1, 2, …, n − k, is a distribution‐free measure of the design of the residual system. Next, known representation and preservation theorems for system signatures are generalized to dynamic versions. Two additional applications of dynamic signatures are studied in detail. The well‐known “new better than used” (NBU) property of aging systems is extended to a uniform (UNBU) version, which compares systems when new and when used, conditional on the known number of failures. Sufficient conditions are given for a system to have the UNBU property. The application of dynamic signatures to the engineering practice of “burn‐in” is also treated. Specifically, we consider the comparison of new systems with working used systems burned‐in to a given ordered component failure time. In a reliability economics framework, we illustrate how one might compare a new system to one successfully burned‐in to the kth component failure, and we identify circumstances in which burn‐in is inferior (or is superior) to the fielding of a new system. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2009
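For small systems a signature can be computed by brute force, which also makes the conditioning behind dynamic signatures concrete (one restricts the enumeration to failure orders consistent with the k observed failures). The sketch below handles the unconditional case only, and the structure functions shown are toy examples, not from the paper.

```python
from itertools import permutations
from fractions import Fraction

def signature(structure, n):
    """Signature of a coherent system with n i.i.d. components.

    structure(working_set) -> True if the system works when exactly the components
    in working_set are up.  s[i] = P(the (i+1)-th component failure kills the
    system), computed by enumerating all n! equally likely failure orders."""
    counts = [0] * n
    for order in permutations(range(n)):
        working = set(range(n))
        for i, comp in enumerate(order):
            working.remove(comp)
            if not structure(working):
                counts[i] += 1
                break
    total = sum(counts)
    return [Fraction(c, total) for c in counts]

if __name__ == "__main__":
    # 2-out-of-3 system: works while at least 2 components work -> signature (0, 1, 0).
    print(signature(lambda up: len(up) >= 2, 3))
    # Component 0 in series with the parallel pair {1, 2} -> signature (1/3, 2/3, 0).
    print(signature(lambda up: 0 in up and (1 in up or 2 in up), 3))
```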

11.
In this paper, we consider a general covering problem in which k subsets are to be selected such that their union covers as large a weight of objects from a universal set of elements as possible. Each subset selected must satisfy some structural constraints. We analyze the quality of a k-stage covering algorithm that relies, at each stage, on greedily selecting a subset that gives maximum improvement in terms of overall coverage. We show that such greedily constructed solutions are guaranteed to be within a factor of 1 − 1/e of the optimal solution. In some cases, selecting a best solution at each stage may itself be difficult; we show that if a β-approximate best solution is chosen at each stage, then the overall solution constructed is guaranteed to be within a factor of 1 − 1/e^β of the optimal. Our results also yield a simple proof that the number of subsets used by the greedy approach to achieve entire coverage of the universal set is within a logarithmic factor of the optimal number of subsets. Examples of problems that fall into the family of general covering problems considered, and for which the algorithmic results apply, are discussed. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 615–627, 1998
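The greedy rule analysed above is short to state in code. The following sketch implements the weighted maximum-coverage version with an exact best-gain choice at each stage (i.e., β = 1); the instance at the bottom is an arbitrary illustration.

```python
def greedy_max_coverage(subsets, weights, k):
    """Select k subsets greedily, each stage taking the subset that adds the most
    uncovered weight.  subsets: list of sets of element ids; weights: dict mapping
    element id -> nonnegative weight.  Returns (chosen indices, covered weight)."""
    covered, chosen = set(), []
    for _ in range(k):
        best_idx, best_gain = None, 0.0
        for idx, s in enumerate(subsets):
            if idx in chosen:
                continue
            gain = sum(weights[e] for e in s - covered)
            if gain > best_gain:
                best_idx, best_gain = idx, gain
        if best_idx is None:        # no remaining subset adds coverage
            break
        chosen.append(best_idx)
        covered |= subsets[best_idx]
    return chosen, sum(weights[e] for e in covered)

if __name__ == "__main__":
    subsets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
    weights = {e: 1.0 for e in range(1, 7)}
    print(greedy_max_coverage(subsets, weights, k=2))    # -> ([0, 2], 6.0)
```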

12.
We consider a system composed of k components, each of which is subject to failure if temperature is above a critical level. The failure of one component causes the failure of the system as a whole (a serially connected system). If zi is the critical temperature of the ith component then z* = min{zi: i = 1,2,…, k} is the critical level of the system. The components may be tested individually at different temperature levels, if the temperature is below the critical level the cost is $1, otherwise the test is destructive and the cost is m > 1 dollars. The purpose of this article is to construct, under a budgetary constraint, an efficient (in a minmax sense) testing procedure which will locate the critical level of the system with maximal accuracy.

13.
Let H̄(t) = Σ_{k=0}^∞ e^(−A(t)) [A(t)]^k / k! · P(k), where A(t)/t is nondecreasing in t and {P(k)^(1/k)} is nonincreasing. It is known that H(t) = 1 − H̄(t) is an increasing failure rate on the average (IFRA) distribution. A proof based on the IFRA closure theorem is given. H(t) is the distribution of life for systems undergoing shocks occurring according to a Poisson process, where P(k) is the probability that the system survives k shocks. The proof given herein shows there is an underlying connection between such models and monotone systems of independent components that explains the IFRA life distribution occurring in both models.

14.
Suppose that we have enough computer time to make n observations of a stochastic process by means of simulation and would like to construct a confidence interval for the steady-state mean. We can make k independent runs of m observations each (n = k·m) or, alternatively, one run of n observations which we then divide into k batches of length m. These methods are known as replication and batch means, respectively. In this paper, using the probability of coverage and the half length of a confidence interval as criteria for comparison, we empirically show that batch means is superior to replication, but that neither method works well if n is too small. We also show that if m is chosen too small for replication, then the coverage may decrease dramatically as the total sample size n is increased.
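A small experiment along these lines is easy to set up. The sketch below compares the empirical coverage of nominal 90% confidence intervals built from k independent replications versus k batch means of one long run, using an AR(1) process as an illustrative steady-state model; the process and parameter choices are not from the paper.

```python
import numpy as np
from scipy.stats import t

def ar1_path(length, phi, rng, warmup=200):
    """AR(1): X_t = phi * X_{t-1} + e_t with N(0,1) noise; steady-state mean is 0."""
    x, out = 0.0, np.empty(length)
    for i in range(warmup + length):
        x = phi * x + rng.standard_normal()
        if i >= warmup:
            out[i - warmup] = x
    return out

def ci_from_means(means, level=0.90):
    k = len(means)
    half = t.ppf(0.5 + level / 2, k - 1) * means.std(ddof=1) / np.sqrt(k)
    return means.mean() - half, means.mean() + half

def coverage(method, k, m, phi=0.9, reps=500, seed=0):
    """Fraction of experiments whose nominal 90% CI covers the true mean (0)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        if method == "replication":          # k independent runs of m observations
            means = np.array([ar1_path(m, phi, rng).mean() for _ in range(k)])
        else:                                # one run of k*m observations, k batches
            means = ar1_path(k * m, phi, rng).reshape(k, m).mean(axis=1)
        lo, hi = ci_from_means(means)
        hits += lo <= 0.0 <= hi
    return hits / reps

if __name__ == "__main__":
    for method in ("replication", "batch means"):
        print(method, coverage(method, k=10, m=100))
```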

15.
In an accumulation game, a HIDER attempts to accumulate a certain number of objects or a certain quantity of material before a certain time, and a SEEKER attempts to prevent this. In a continuous accumulation game the HIDER can pile material either at locations 1, 2, …, n, or over a region in space. The HIDER wins (payoff 1) if it accumulates N units of material before a given time, and the SEEKER wins (payoff 0) otherwise. We assume the HIDER can place continuous material such as fuel at discrete locations i = 1, 2, …, n, and the game is played in discrete time. At each time k > 0 the HIDER acquires h units of material and can distribute it among all of the locations. At the same time k, the SEEKER can search a certain number s < n of the locations, and will confiscate (or destroy) all material found. After explicitly describing what we mean by a continuous accumulation game on discrete locations, we prove a theorem that gives a condition under which the HIDER can always win by using a uniform distribution at each stage of the game. When this condition does not hold, special cases and examples show that the resulting game becomes complicated even when played only for a single stage. We reduce the single stage game to an optimization problem, and also obtain some partial results on its solution. We also consider accumulation games where the locations are arranged in either a circle or in a line segment and the SEEKER must search a series of adjacent locations. © 2002 John Wiley & Sons, Inc. Naval Research Logistics, 49: 60–77, 2002; DOI 10.1002/nav.1048

16.
This study is concerned with a game model involving repeated play of a matrix game with unknown entries; it is a two-person, zero-sum, infinite game of perfect recall. The entries of the matrix ((pij)) are selected according to a joint probability distribution known by both players and this unknown matrix is played repeatedly. If the pure strategy pair (i, j) is employed on day k, k = 1, 2, …, the maximizing player receives a discounted income of β^(k−1) Xij, where β is a constant, 0 ≤ β < 1, and Xij assumes the value one with probability pij or the value zero with probability 1 − pij. After each trial, the players are informed of the triple (i, j, Xij) and retain this knowledge. The payoff to the maximizing player is the expected total discounted income. It is shown that a solution exists, the value being characterized as the unique solution of a functional equation and optimal strategies consisting of locally optimal play in an auxiliary matrix determined by the past history. A definition of an ε-learning strategy pair is formulated and a theorem obtained exhibiting ε-optimal strategies which are ε-learning. The asymptotic behavior of the value is obtained as the discount tends to one.

17.
The first problem considered in this paper is concerned with the assembly of independent components into parallel systems so as to maximize the expected number of systems that perform satisfactorily. Associated with each component is a probability of it performing successfully. It is shown that an optimal assembly is obtained if the reliability of each assembled system can be made equal. If such equality is not attainable, then bounds are given so that the maximum expected number of systems that perform satisfactorily will lie within these stated bounds, the bounds being a function of an arbitrarily chosen assembly. An improvement algorithm is also presented. A second problem treated is concerned with the optimal design of a system. Instead of assembling given units, there is an opportunity to “control” their quality, i.e., the manufacturer is able to fix the probability, p, of a unit performing successfully. However, his resources are limited, so that a constraint is imposed on these probabilities. For (1) series systems, (2) parallel systems, and (3) k out of n systems, results are obtained for finding the optimal p's which maximize the reliability of a single system, and which maximize the expected number of systems that perform satisfactorily out of a total assembly of J systems.
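As background for the design problem, the reliability functions involved are easy to write down. The sketch below evaluates series, parallel, and k-out-of-n reliabilities for two illustrative allocations of component success probabilities; the constrained optimization itself is not reproduced, and the "budget" shown is an arbitrary assumption.

```python
from itertools import combinations
from math import prod

def series(p):
    return prod(p)

def parallel(p):
    return 1.0 - prod(1.0 - pi for pi in p)

def k_out_of_n(p, k):
    """P(at least k of the n independent components work), by enumeration (small n)."""
    n = len(p)
    total = 0.0
    for r in range(k, n + 1):
        for working in combinations(range(n), r):
            w = set(working)
            total += prod(p[i] if i in w else 1.0 - p[i] for i in range(n))
    return total

if __name__ == "__main__":
    # Two allocations with the same total "quality budget" sum(p) = 2.4.
    for name, p in (("equal  ", [0.8, 0.8, 0.8]), ("unequal", [0.95, 0.95, 0.5])):
        print(name, f"series {series(p):.4f}", f"parallel {parallel(p):.4f}",
              f"2-of-3 {k_out_of_n(p, 2):.4f}")
```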

18.
Consider an auction in which increasing bids are made in sequence on an object whose value θ is known to each bidder. Suppose n bids are received, and the distribution of each bid is conditionally uniform. More specifically, suppose the first bid X1 is uniformly distributed on [0, θ], and the ith bid is uniformly distributed on [Xi−1, θ] for i = 2, …, n. A scenario in which this auction model is appropriate is described. We assume that the value θ is unknown to the statistician and must be estimated from the sample X1, X2, …, Xn. The best linear unbiased estimate of θ is derived. The invariance of the estimation problem under scale transformations is noted, and the best invariant estimate of θ under the loss L(θ, a) = [(a/θ) − 1]^2 is derived. It is shown that this best invariant estimate has uniformly smaller mean-squared error than the best linear unbiased estimate, and the ratio of the mean-squared errors is estimated from simulation experiments. A Bayesian formulation of the estimation problem is also considered, and a class of Bayes estimates is explicitly derived.
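The bid model itself is straightforward to simulate. The sketch below does not reproduce the best linear unbiased or best invariant estimators derived in the paper; instead it checks a simple estimator based only on the last bid, using the fact that E[Xn] = θ(1 − 2^(−n)) under this conditionally uniform scheme.

```python
import numpy as np

def simulate_bids(theta, n, rng):
    """X1 ~ U(0, theta); Xi ~ U(X_{i-1}, theta) for i = 2, ..., n."""
    bids, low = np.empty(n), 0.0
    for i in range(n):
        low = rng.uniform(low, theta)
        bids[i] = low
    return bids

def last_bid_estimator(bids):
    """Unbiased estimator based only on the last bid: since E[Xn] = theta*(1 - 2**-n)
    under this model, Xn / (1 - 2**-n) is unbiased.  Illustrative only; this is
    neither the BLUE nor the best invariant estimator of the paper."""
    n = len(bids)
    return bids[-1] / (1.0 - 2.0 ** (-n))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    theta, n, reps = 10.0, 5, 100_000
    est = np.array([last_bid_estimator(simulate_bids(theta, n, rng)) for _ in range(reps)])
    print(f"mean estimate {est.mean():.3f} (true theta = {theta})")
    print(f"Monte Carlo mean-squared error {np.mean((est - theta) ** 2):.3f}")
```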

19.
Suppose that observations from populations π1, …, πk (k ≥ 1) are normally distributed with unknown means μ1, …, μk, respectively, and a common known variance σ^2. Let μ[1] ≤ … ≤ μ[k] denote the ranked means. We take n independent observations from each population, denote the sample mean of the n observations from πi by X̄i (i = 1, …, k), and define the ranked sample means X̄[1] ≤ … ≤ X̄[k]. The problem of confidence interval estimation of μ[1], …, μ[k] is stated and related to previous work (Section 1). The following results are obtained (Section 2). For i = 1, …, k and any γ (0 < γ < 1), an upper confidence interval for μ[i] with minimal probability of coverage γ is (−∞, X̄[i] + h) with h = (σ/n^(1/2)) Φ^(−1)(γ^(1/(k−i+1))), where Φ(·) is the standard normal cdf. A lower confidence interval for μ[i] with minimal probability of coverage γ is (X̄[i] − g, +∞) with g = (σ/n^(1/2)) Φ^(−1)(γ^(1/i)). For the upper confidence interval on μ[i] the maximal probability of coverage is 1 − [1 − γ^(1/(k−i+1))]^i, while for the lower confidence interval on μ[i] the maximal probability of coverage is 1 − [1 − γ^(1/i)]^(k−i+1). Thus the maximal overprotection can always be calculated. The overprotection is tabled for k = 2, 3. These results extend to certain translation parameter families. It is proven that, under a bounded completeness condition, a monotone upper confidence interval h(X̄1, …, X̄k) for μ[i] with probability of coverage γ (0 < γ < 1) for all μ = (μ[1], …, μ[k]) does not exist.
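The quoted upper limit is simple to check by simulation. The sketch below estimates the coverage of (−∞, X̄[i] + h) with h = (σ/√n) Φ^(−1)(γ^(1/(k−i+1))) for two illustrative configurations of true means; in this example, pushing the i − 1 smallest means far below the rest drives the coverage down toward γ, while equal means give overprotection.

```python
import numpy as np
from scipy.stats import norm

def upper_ci_coverage(mu, sigma, n, i, gamma, reps=100_000, seed=0):
    """Estimate P(mu_[i] <= Xbar_[i] + h), the coverage of the upper confidence
    interval for the i-th ranked mean, with h = (sigma/sqrt(n)) * Phi^{-1}(gamma^{1/(k-i+1)})."""
    mu = np.sort(np.asarray(mu, dtype=float))            # mu_[1] <= ... <= mu_[k]
    k = len(mu)
    h = sigma / np.sqrt(n) * norm.ppf(gamma ** (1.0 / (k - i + 1)))
    rng = np.random.default_rng(seed)
    sample_means = rng.normal(mu, sigma / np.sqrt(n), size=(reps, k))
    ranked = np.sort(sample_means, axis=1)                # Xbar_[1] <= ... <= Xbar_[k]
    return np.mean(mu[i - 1] <= ranked[:, i - 1] + h)

if __name__ == "__main__":
    # k = 3, i = 2, gamma = 0.90.
    print(upper_ci_coverage([0.0, 0.0, 0.0], sigma=1.0, n=10, i=2, gamma=0.90))
    print(upper_ci_coverage([-100.0, 0.0, 0.0], sigma=1.0, n=10, i=2, gamma=0.90))
```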

20.
Job shop scheduling with a bank of machines in parallel is important from both theoretical and practical points of view. Herein we focus on the scheduling problem of minimizing the makespan in a flexible two-center job shop. The first center consists of one machine and the second has k parallel machines. An easy-to-perform approximate algorithm for minimizing the makespan with one-unit-time operations in the first center and k-unit-time operations in the second center is proposed. The algorithm has the absolute worst-case error bound of k − 1, and thus for k = 1 it is optimal. Importantly, it runs in linear time and its error bound is independent of the number of jobs to be processed. Moreover, the algorithm can be modified to give an optimal schedule for k = 2.

