Similar Literature
20 similar documents retrieved.
1.
Suppose one object is hidden in the k-th of n boxes with probability p(k). The boxes are to be searched sequentially. Associated with the j-th search of box k is a cost c(j,k) and a conditional probability q(j,k) that the first j − 1 searches of box k are unsuccessful while the j-th search is successful, given that the object is hidden in box k. The problem is to maximize the probability of finding the object when we are not allowed to spend more than L on the search. We prove the existence of an optimal allocation of the search budget L and state an algorithm for constructing an optimal allocation. Finally, we discuss some questions concerning the complexity of the problem.
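As an illustration of the model described in this abstract (not of the authors' algorithm, which the abstract does not reproduce): for an allocation that buys a_k searches in box k, the detection probability is Σ_k p(k) Σ_{j≤a_k} q(j,k) and the cost is Σ_k Σ_{j≤a_k} c(j,k). The sketch below optimizes this by a grouped-knapsack dynamic program, assuming integer search costs; all numbers are invented.

```python
# Hypothetical illustration of the box-search budget model:
# p[k]    - prior probability the object is in box k
# q[k][j] - P(first j searches of box k fail except the (j+1)-th succeeds | object in box k)
# c[k][j] - cost of the (j+1)-th search of box k (integers, so a knapsack-style DP applies)

def best_detection_probability(p, q, c, budget):
    """Grouped-knapsack DP: choose how many searches to buy in each box so that
    total cost <= budget and the detection probability is maximal."""
    best = [0.0] * (budget + 1)          # best[b] = max detection prob with cost <= b
    for k in range(len(p)):
        # cumulative (cost, detection gain) for searching box k exactly a times
        options = [(0, 0.0)]
        cost_sum, gain_sum = 0, 0.0
        for j in range(len(q[k])):
            cost_sum += c[k][j]
            gain_sum += p[k] * q[k][j]
            options.append((cost_sum, gain_sum))
        new_best = best[:]
        for b in range(budget + 1):
            for cost_a, gain_a in options:
                if cost_a <= b:
                    new_best[b] = max(new_best[b], best[b - cost_a] + gain_a)
        best = new_best
    return best[budget]

# Two boxes, up to three searches each, total budget 4 (all data made up).
p = [0.6, 0.4]
q = [[0.5, 0.25, 0.125], [0.8, 0.16, 0.032]]
c = [[1, 1, 2], [2, 1, 1]]
print(best_detection_probability(p, q, c, budget=4))
```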

2.
This paper deals with a two-searcher game and investigates how the possibility that both players find a hidden object simultaneously influences their behavior. Namely, we consider the following two-sided allocation non-zero-sum game on an integer interval [1,n]. Two teams (Player 1 and Player 2) want to find an immobile object (say, a treasure) hidden at one of n points. Each point i ∈ [1,n] is characterized by a detection parameter λi (μi) for Player 1 (Player 2) such that pi(1 − exp(−λi xi)) (respectively pi(1 − exp(−μi yi))) is the probability that Player 1 (Player 2) discovers the hidden object with amount of search effort xi (yi) applied at point i, where pi ∈ (0,1) is the probability that the object is hidden at point i. Player 1 (Player 2) undertakes the search by allocating a total amount of effort X (Y). The payoff for Player 1 (Player 2) is 1 if he detects the object but his opponent does not. If both players detect the object they can share it proportionally and can even pay some share to an umpire who ensures that the players do not cheat each other; namely, Player 1 gets q1 and Player 2 gets q2, where q1 + q2 ≤ 1. The Nash equilibrium of this game is found and numerical examples are given. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2007

3.
Much work has been done in search theory; however, very little of it covers the case where an object's presence at a location can be accepted when no object is actually there. The case analyzed here is of this type. The number of locations is finite, a single object is stationary at one location, and only one location is observed at each step of the search. The object's location has a known prior probability distribution. Also known are the conditional probability of acceptance given the object's absence (small) and the conditional probability of rejection given the object's presence (not too large); these probabilities remain fixed for all searches and locations. The class of sequential search policies which terminate the search at the first acceptance is assumed. A single two-part optimization criterion is considered. The search sequence is found which (i) minimizes the probability of obtaining n rejections in the first n steps for all n, and (ii) maximizes the probability that the first acceptance occurs within the first n steps and occurs at the object's location for all n. The optimum sequential search policy specifies that the next location observed is one with the largest posterior probability of the object's presence (evaluated after each step from Bayes' rule) and that the object is at the first location where acceptance occurs. Placement at the first acceptance seems appropriate when the conditional probability of acceptance given the object's absence is sufficiently small. Search always terminates (with probability one). Optimum truncated sequential policies are also considered. Methods are given for evaluating some pertinent properties and for investigating the possibility that no object occurs at any location.
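A minimal sketch of the policy described in this abstract, under invented parameters: observe the location with the largest posterior, stop at the first acceptance, and update the posterior by Bayes' rule after each rejection, where alpha = P(acceptance | object absent) and beta = P(rejection | object present).

```python
import random

def bayes_update_after_rejection(prior, observed, alpha, beta):
    """Posterior over locations after a rejection at `observed` (Bayes' rule)."""
    like = [(beta if i == observed else (1.0 - alpha)) for i in range(len(prior))]
    unnorm = [p * l for p, l in zip(prior, like)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def search(prior, true_location, alpha, beta, rng):
    """Observe the location with the largest posterior; stop at the first acceptance."""
    posterior = list(prior)
    steps = 0
    while True:
        steps += 1
        i = max(range(len(posterior)), key=lambda j: posterior[j])
        present = (i == true_location)
        p_accept = (1.0 - beta) if present else alpha
        if rng.random() < p_accept:
            return i, steps                      # declare the object is at location i
        posterior = bayes_update_after_rejection(posterior, i, alpha, beta)

prior = [0.5, 0.3, 0.2]                          # illustrative numbers only
print(search(prior, true_location=1, alpha=0.02, beta=0.2, rng=random.Random(0)))
```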

4.
This paper considers the problem of finding optimal solutions to a class of separable constrained extremal problems involving nonlinear functionals. The results are proved for rather general situations, but they may be easily stated for the case of search for a stationary object whose a priori location distribution is given by a density function on R, a subset of Euclidean n-space. The functional to be optimized in this case is the probability of detection and the constraint is on the amount of effort to be used. Suppose that a search of the above type is conducted in such a manner as to produce the maximum increase in probability of detection for each increment of effort added to the search. Then, under very weak assumptions, it is proven that this search produces an optimal allocation of the total effort involved. Under some additional assumptions, it is shown that any amount of search effort may be allocated in an optimal fashion.
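A discretized sketch of the incremental idea described in this abstract: repeatedly place a small increment of effort in the cell where the marginal gain in detection probability is largest. The exponential detection function 1 − exp(−effort) per cell is an assumption made for the example; the paper treats far more general separable functionals.

```python
import math

def greedy_allocation(prior, total_effort, increment=0.01):
    """Repeatedly add a small increment of effort where the marginal gain
    in detection probability is largest (assumed detection: 1 - exp(-effort))."""
    effort = [0.0] * len(prior)
    for _ in range(int(round(total_effort / increment))):
        # marginal gain of adding `increment` to cell i:
        # prior_i * (exp(-z_i) - exp(-(z_i + increment)))
        gains = [p * (math.exp(-z) - math.exp(-(z + increment)))
                 for p, z in zip(prior, effort)]
        i = max(range(len(prior)), key=lambda j: gains[j])
        effort[i] += increment
    detection = sum(p * (1.0 - math.exp(-z)) for p, z in zip(prior, effort))
    return effort, detection

effort, prob = greedy_allocation(prior=[0.5, 0.3, 0.2], total_effort=3.0)
print([round(e, 2) for e in effort], round(prob, 4))
```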

5.
We consider the salvo policy problem, in which there are k moments, called salvos, at which we can fire multiple missiles simultaneously at an incoming object. Each salvo is characterized by a probability pi: the hit probability of a single missile fired in that salvo. After each salvo, we can assess whether the incoming object is still active. If it is, we fire the missiles assigned to the next salvo. In the salvo policy problem, the goal is to assign at most n missiles to salvos in order to minimize the expected number of missiles used. We consider three problem versions. In Gould's version, we have to assign all n missiles to salvos. In the Big Bomb version, a cost of B is incurred when all salvos are unsuccessful. Finally, we consider the Quota version, in which the kill probability should exceed some quota Q. We discuss the computational complexity and the approximability of these problem versions. In particular, we show that Gould's version and the Big Bomb version admit pseudopolynomial time exact algorithms and fully polynomial time approximation schemes. We also present an iterative approximation algorithm for the Quota version, and show that a related problem is NP-complete.
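A brute-force sketch of Gould's version on a tiny instance (it does not reproduce the pseudopolynomial algorithms mentioned in the abstract). Assuming independent hits, salvo i with m_i missiles is fired only if the target survived all earlier salvos, so the expected number of missiles fired is Σ_i m_i Π_{j<i} (1 − p_j)^{m_j}.

```python
from itertools import product

def expected_missiles(m, p):
    """Expected missiles fired when salvo sizes are m and single-missile hit
    probabilities are p; salvo i is fired only if the target is still active."""
    surv, expected = 1.0, 0.0
    for mi, pi in zip(m, p):
        expected += mi * surv
        surv *= (1.0 - pi) ** mi
    return expected

def best_salvo_plan(n, p):
    """Enumerate all ways to assign all n missiles to the k salvos (small n only)."""
    best = None
    for m in product(range(n + 1), repeat=len(p)):
        if sum(m) != n:
            continue
        e = expected_missiles(m, p)
        if best is None or e < best[1]:
            best = (m, e)
    return best

print(best_salvo_plan(n=6, p=[0.5, 0.7, 0.9]))   # illustrative instance
```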

6.
The authors study a discrete-time, infinite-horizon, dynamic programming model for the replacement of components in a binary k-out-of-n failure system. (The system fails when k or more of its n components fail.) Costs are incurred when the system fails and when failed components are replaced. The objective is to minimize the long-run expected average undiscounted cost per period. A companion article develops a branch-and-bound algorithm for computing optimal policies. Extensive computational experiments find it effective when k is small or near n; however, difficulties are encountered when n ≥ 30 and 10 ≤ k ≤ n − 4. This article presents a simple, intuitive heuristic rule for determining a replacement policy whose memory storage and computation time requirements are O(n − k) and O(n(n − k) + k), respectively. This heuristic is based on a plausible formula for ranking components in order of their usefulness. The authors provide sufficient conditions for it to be optimal and undertake computational experiments that suggest that it handles parallel systems (k = n) effectively and, further, that its effectiveness increases as k moves away from n. In our test problems, the mean relative errors are under 5% when n ≤ 100 and under 2% when k ≤ n − 3 and n ≤ 50. © 1997 John Wiley & Sons, Inc. Naval Research Logistics 44, 273–286, 1997.

7.
This study is concerned with a game model involving repeated play of a matrix game with unknown entries; it is a two-person, zero-sum, infinite game of perfect recall. The entries of the matrix ((pij)) are selected according to a joint probability distribution known by both players, and this unknown matrix is played repeatedly. If the pure strategy pair (i, j) is employed on day k, k = 1, 2, …, the maximizing player receives a discounted income of β^(k−1) Xij, where β is a constant, 0 ≤ β < 1, and Xij assumes the value one with probability pij or the value zero with probability 1 − pij. After each trial, the players are informed of the triple (i, j, Xij) and retain this knowledge. The payoff to the maximizing player is the expected total discounted income. It is shown that a solution exists, the value being characterized as the unique solution of a functional equation and optimal strategies consisting of locally optimal play in an auxiliary matrix determined by the past history. A definition of an ε-learning strategy pair is formulated and a theorem obtained exhibiting ε-optimal strategies which are ε-learning. The asymptotic behavior of the value is obtained as the discount factor tends to one.

8.
A classic problem in Search Theory is one in which a searcher allocates resources to the points of the integer interval [1, n] in an attempt to find an object which has been hidden in them using a known probability function. In this paper we consider a modification of this problem in which there is a protector who can also allocate resources to the points; allocating these resources makes it more difficult for the searcher to find an object. We model the situation as a two-person non-zero-sum game so that we can take into account the fact that using resources can be costly. It is shown that this game has a unique Nash equilibrium when the searcher's probability of finding an object located at point i is of the form (1 − exp(−λi xi)) exp(−μi yi) when the searcher and protector allocate resources xi and yi, respectively, to point i. An algorithm to find this Nash equilibrium is given. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47:85–96, 2000

9.
Let H̄(t) = Σ_{k≥0} e^(−A(t)) [A(t)]^k P(k)/k!, where A(t)/t is nondecreasing in t and {P(k)^(1/k)} is nonincreasing. It is known that H(t) = 1 − H̄(t) is an increasing failure rate on the average (IFRA) distribution. A proof based on the IFRA closure theorem is given. H(t) is the distribution of life for systems undergoing shocks occurring according to a Poisson process, where P(k) is the probability that the system survives k shocks. The proof given herein shows there is an underlying connection between such models and monotone systems of independent components that explains the IFRA life distribution occurring in both models.
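A numerical illustration of the shock-model survival function above, specialized (my assumption, not the paper's generality) to the homogeneous case A(t) = λt and to P(k) = ρ^k, a choice that satisfies the stated condition that {P(k)^(1/k)} be nonincreasing.

```python
import math

def survival(t, lam, P, kmax=200):
    """H̄(t) = sum_k e^(-lam*t) (lam*t)^k / k! * P(k), truncated at kmax.
    Terms are built iteratively to avoid overflowing factorials."""
    term = math.exp(-lam * t)          # Poisson probability of k = 0 shocks by time t
    total = term * P(0)
    for k in range(1, kmax + 1):
        term *= lam * t / k            # Poisson(k) = Poisson(k-1) * lam*t / k
        total += term * P(k)
    return total

def P(k):
    return 0.8 ** k                    # probability of surviving k shocks (illustrative)

for t in (0.5, 1.0, 2.0, 4.0):
    print(t, round(survival(t, lam=1.0, P=P), 4))
```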

10.
We investigate the problem in which an agent has to find an object that moves between two locations according to a discrete Markov process (Pollock, Operat Res 18 (1970) 883–903). At every period, the agent has three options: searching left, searching right, and waiting. We assume that waiting is costless whereas searching is costly. Moreover, when the agent searches the location that contains the object, he finds it with probability 1 (i.e. there is no overlooking). Waiting can be useful because it could induce a more favorable probability distribution over the two locations next period. We find an essentially unique (nearly) optimal strategy, and prove that it is characterized by two thresholds (as conjectured by Weber, J Appl Probab 23 (1986) 708–717). We show, moreover, that it can never be optimal to search the location with the lower probability of containing the object. The latter result is far from obvious and is in clear contrast with the example in Ross (1983) for the model without waiting. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 2009
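A value-iteration sketch of the search-or-wait tradeoff on the belief state q = P(object at location 0), with no overlooking. To keep the sketch well-posed I use assumptions that are not from the paper: a reward R for detection, a unit search cost, and a discount factor beta < 1 (the paper's undiscounted criterion with costless waiting is handled more delicately); the sketch only illustrates why waiting can pay off.

```python
def solve(m, beta=0.95, c=1.0, R=10.0, grid=501, iters=400):
    qs = [i / (grid - 1) for i in range(grid)]
    V = [0.0] * grid

    def v(q):                                   # linear interpolation on the belief grid
        x = q * (grid - 1)
        i = min(int(x), grid - 2)
        w = x - i
        return (1 - w) * V[i] + w * V[i + 1]

    def q_values(q):
        q_wait = q * m[0][0] + (1 - q) * m[1][0]               # belief after one transition
        return {
            "search 0": -c + q * R + (1 - q) * beta * v(m[1][0]),   # miss -> object was at 1
            "search 1": -c + (1 - q) * R + q * beta * v(m[0][0]),   # miss -> object was at 0
            "wait": beta * v(q_wait),
        }

    for _ in range(iters):                       # synchronous value iteration
        V = [max(q_values(q).values()) for q in qs]

    for q in (0.1, 0.3, 0.5, 0.7, 0.9):          # show the action chosen at a few beliefs
        acts = q_values(q)
        print(q, max(acts, key=acts.get))

m = [[0.9, 0.1], [0.3, 0.7]]                     # Markov movement of the object (invented)
solve(m)
```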

11.
Suppose that observations from populations π1, …, πk (k ≥ 1) are normally distributed with unknown means μ1, …, μk, respectively, and a common known variance σ². Let μ[1] ≤ … ≤ μ[k] denote the ranked means. We take n independent observations from each population, denote the sample mean of the n observations from πi by X̄i (i = 1, …, k), and define the ranked sample means X̄[1] ≤ … ≤ X̄[k]. The problem of confidence interval estimation of μ[1], …, μ[k] is stated and related to previous work (Section 1). The following results are obtained (Section 2). For i = 1, …, k and any γ (0 < γ < 1), an upper confidence interval for μ[i] with minimal probability of coverage γ is (−∞, X̄[i] + h) with h = (σ/n^(1/2)) Φ^(−1)(γ^(1/(k−i+1))), where Φ(·) is the standard normal cdf. A lower confidence interval for μ[i] with minimal probability of coverage γ is (X̄[i] − g, +∞) with g = (σ/n^(1/2)) Φ^(−1)(γ^(1/i)). For the upper confidence interval on μ[i] the maximal probability of coverage is 1 − [1 − γ^(1/(k−i+1))]^i, while for the lower confidence interval on μ[i] the maximal probability of coverage is 1 − [1 − γ^(1/i)]^(k−i+1). Thus the maximal overprotection can always be calculated. The overprotection is tabled for k = 2, 3. These results extend to certain translation parameter families. It is proven that, under a bounded completeness condition, a monotone upper confidence interval h(X̄1, …, X̄k) for μ[i] with probability of coverage γ (0 < γ < 1) for all μ = (μ[1], …, μ[k]) does not exist.
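A worked evaluation of the interval formulas quoted above, with illustrative parameter values (k = 3, i = 2, n = 25, σ = 2, γ = 0.95).

```python
from statistics import NormalDist

def interval_halfwidths(k, i, n, sigma, gamma):
    """h, g and the maximal coverage probabilities for the ranked-mean intervals
    quoted in the abstract: upper CI (-inf, Xbar_[i] + h), lower CI (Xbar_[i] - g, +inf)."""
    z = NormalDist().inv_cdf
    h = sigma / n ** 0.5 * z(gamma ** (1.0 / (k - i + 1)))
    g = sigma / n ** 0.5 * z(gamma ** (1.0 / i))
    max_cov_upper = 1 - (1 - gamma ** (1.0 / (k - i + 1))) ** i
    max_cov_lower = 1 - (1 - gamma ** (1.0 / i)) ** (k - i + 1)
    return h, g, max_cov_upper, max_cov_lower

print(interval_halfwidths(k=3, i=2, n=25, sigma=2.0, gamma=0.95))
```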

12.
Suppose that we have enough computer time to make n observations of a stochastic process by means of simulation and would like to construct a confidence interval for the steady-state mean. We can make k independent runs of m observations each (n = k·m) or, alternatively, one run of n observations which we then divide into k batches of length m. These methods are known as replication and batch means, respectively. In this paper, using the probability of coverage and the half length of a confidence interval as criteria for comparison, we empirically show that batch means is superior to replication, but that neither method works well if n is too small. We also show that if m is chosen too small for replication, then the coverage may decrease dramatically as the total sample size n is increased.
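A Monte Carlo sketch of the comparison described above, using an AR(1) process with steady-state mean 0 (my choice of test process, not the paper's): both methods form a nominal 90% t-interval from k = 10 means of m = 100 observations, and we estimate the coverage of each.

```python
import random, statistics

def ar1(n, rng, phi=0.9):
    """One run of an AR(1) process started at 0 (so early observations are biased)."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        out.append(x)
    return out

def ci_from_means(means, t_crit):
    m = statistics.mean(means)
    half = t_crit * statistics.stdev(means) / len(means) ** 0.5
    return m - half, m + half

def coverage(method, k=10, m=100, trials=500, t_crit=1.833, seed=1):
    """Fraction of nominal 90% intervals (t with k-1 = 9 df) that cover the true mean 0."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        if method == "replication":
            means = [statistics.mean(ar1(m, rng)) for _ in range(k)]
        else:                                    # batch means: one long run cut into k batches
            run = ar1(k * m, rng)
            means = [statistics.mean(run[i * m:(i + 1) * m]) for i in range(k)]
        lo, hi = ci_from_means(means, t_crit)
        hits += (lo <= 0.0 <= hi)
    return hits / trials

print("replication :", coverage("replication"))
print("batch means :", coverage("batch means"))
```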

13.
Consider a simulation experiment consisting of v independent vector replications across k systems, where in any given replication one system is selected as the best performer (i.e., it wins). Each system has an unknown constant probability of winning in any replication and the numbers of wins for the individual systems follow a multinomial distribution. The classical multinomial selection procedure of Bechhofer, Elmaghraby, and Morse (Procedure BEM) prescribes a minimum number of replications, denoted as v*, so that the probability of correctly selecting the true best system (PCS) meets or exceeds a prespecified probability. Assuming that larger is better, Procedure BEM selects as best the system having the largest value of the performance measure in more replications than any other system. We use these same v* replications across k systems to form (v*)^k pseudoreplications that contain one observation from each system, and develop Procedure AVC (All Vector Comparisons) to achieve a higher PCS than with Procedure BEM. For specific small-sample cases and via a large-sample approximation we show that the PCS with Procedure AVC exceeds the PCS with Procedure BEM. We also show that with Procedure AVC we achieve a given PCS with a smaller v than the v* required with Procedure BEM. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 459–482, 1998

14.
Consider a stochastic simulation experiment consisting of v independent vector replications, each containing an observation from each of k independent systems. Typical system comparisons are based on mean (long-run) performance. However, the probability that a system will actually be the best is sometimes more relevant, and can provide a very different perspective than the systems' means. Empirically, we select one system as the best performer (i.e., it wins) on each replication. Each system has an unknown constant probability of winning on any replication and the numbers of wins for the individual systems follow a multinomial distribution. Procedures exist for selecting the system with the largest probability of being the best. This paper addresses the companion problem of estimating the probability that each system will be the best. The maximum likelihood estimators (MLEs) of the multinomial cell probabilities for a set of v vector replications across k systems are well known. We use these same v vector replications to form v^k unique vectors (termed pseudo-replications) that contain one observation from each system and develop estimators based on AVC (All Vector Comparisons). In other words, we compare every observation from each system with every combination of observations from the remaining systems and note the best performer in each pseudo-replication. AVC provides lower variance estimators of the probability that each system will be the best than the MLEs. We also derive confidence intervals for the AVC point estimators, present a portion of an extensive empirical evaluation, and provide a realistic example. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 341–358, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10019
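A small sketch contrasting the two point estimators discussed above, with "best" meaning the largest observed value in a (pseudo-)replication; the normal test systems and sample size are invented for illustration.

```python
from itertools import product
import random

def mle_estimates(data):
    """Classical multinomial MLE: fraction of the v vector replications each system wins."""
    k, v = len(data), len(data[0])
    wins = [0] * k
    for r in range(v):
        obs = [data[j][r] for j in range(k)]
        wins[obs.index(max(obs))] += 1
    return [w / v for w in wins]

def avc_estimates(data):
    """All Vector Comparisons: every combination of one observation per system is a
    pseudo-replication (v**k of them); count the winner of each."""
    k, v = len(data), len(data[0])
    wins = [0] * k
    for combo in product(range(v), repeat=k):
        obs = [data[j][combo[j]] for j in range(k)]
        wins[obs.index(max(obs))] += 1
    return [w / v ** k for w in wins]

rng = random.Random(7)
means = [0.0, 0.3, 0.6]                          # three systems, larger is better
data = [[rng.gauss(mu, 1.0) for _ in range(8)] for mu in means]
print("MLE :", [round(x, 3) for x in mle_estimates(data)])
print("AVC :", [round(x, 3) for x in avc_estimates(data)])
```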

15.
This paper deals with a two-person zero-sum game called a search allocation game, in which a searcher and a target participate and false contacts are taken into account. The searcher distributes his search effort in a search space in order to detect the target. On the other hand, the target moves to avoid the searcher. As the payoff of the game, we take the cumulative amount of search effort weighted by the target distribution, which can be derived as an approximation of the detection probability of the target. The searcher's strategy is a plan for distributing search effort and the target's strategy is a movement represented by a path or transition probability across the search space. In the search, there are false contacts caused by environmental noise, signal processing noise, or real objects resembling true targets. If they occur, the searcher must take some time to investigate them, which interrupts the search for a while. There has been little research on search games with false contacts. In this paper, we formulate the game as a mathematical programming problem to obtain its equilibrium point. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2007

16.
There are given k (≥ 2) univariate cumulative distribution functions (c.d.f.'s) G(x; θi) indexed by a real-valued parameter θi, i = 1, …, k. Assume that G(x; θi) is stochastically increasing in θi. In this paper, interval estimation of the ith smallest of the θ's and related topics are studied. Applications are considered for a location parameter, a normal variance, a binomial parameter, and a Poisson parameter.

17.
This paper discusses the operations analysis in the underwater search for the remains of the submarine Scorpion. The a priori target location probability distribution for the search was obtained by Monte Carlo procedures based upon nine different scenarios concerning the Scorpion loss and associated credibility weights. These scenarios and weights were postulated by others. Scorpion was found within 260 yards of the search grid cell having the largest a priori probability. Frequent computations of local effectiveness probabilities (LEPs) were carried out on scene during the search and were used to determine an updated (a posteriori) target location distribution. This distribution formed the basis for recommending the current high probability areas for search. The sum of LEPs weighted by the a priori target location probabilities is called the search effectiveness probability (SEP) and was used as the overall measure of effectiveness for the operation. SEP and LEPs were used previously in the Mediterranean H-bomb search. On-scene and stateside operations analysis are discussed, and the progress of the search is indicated by values of SEP for various periods during the operation.
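A sketch of the bookkeeping described above, with invented numbers: given prior cell probabilities and the local effectiveness probability LEP_i achieved in each cell, SEP = Σ_i prior_i · LEP_i as stated in the abstract, and the updated (a posteriori) target-location distribution follows from Bayes' rule conditional on the target not yet having been found.

```python
def update(prior, lep):
    """Return the search effectiveness probability and the posterior cell probabilities."""
    sep = sum(p * l for p, l in zip(prior, lep))
    posterior = [p * (1.0 - l) / (1.0 - sep) for p, l in zip(prior, lep)]
    return sep, posterior

prior = [0.40, 0.25, 0.20, 0.15]     # a priori probabilities for four grid cells (invented)
lep = [0.70, 0.10, 0.00, 0.30]       # local effectiveness achieved in each cell so far
sep, posterior = update(prior, lep)
print(round(sep, 3), [round(q, 3) for q in posterior])
```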

18.
A series of independent trials is considered in which one of k ≥ 2 mutually exclusive and exhaustive outcomes occurs at each trial. The series terminates when m outcomes of any one type have occurred. The limiting distribution (as m → ∞) of the number of trials performed until termination is found, with particular attention to the situation where a Dirichlet distribution is assigned to the k-vector of probabilities for each outcome. Applications to series of races involving k runners and to spares problems in reliability modeling are discussed. The problem of selecting a stopping rule so that the probability of the series terminating on outcome i is k^(−1) (i.e., a “fair” competition) is also studied. Two generalizations of the original asymptotic problem are addressed.
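A simulation sketch of the stopping rule described above, with invented parameters: the outcome probabilities are drawn once from a Dirichlet distribution (via normalized gamma variates), and the series stops as soon as some outcome has occurred m times.

```python
import random

def trials_until_termination(alpha, m, rng):
    """Run one series: draw outcome probabilities from Dirichlet(alpha), then sample
    trials until some outcome has occurred m times; return (length, winning outcome)."""
    gammas = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(gammas)
    probs = [g / total for g in gammas]              # a Dirichlet(alpha) draw
    counts = [0] * len(alpha)
    t = 0
    while max(counts) < m:
        t += 1
        i = rng.choices(range(len(alpha)), weights=probs)[0]
        counts[i] += 1
    return t, counts.index(max(counts))

rng = random.Random(3)
alpha, m, reps = [1.0, 1.0, 1.0], 20, 2000
lengths = [trials_until_termination(alpha, m, rng)[0] for _ in range(reps)]
print("mean number of trials:", sum(lengths) / reps)
```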

19.
The signature of a system with independent and identically distributed (i.i.d.) component lifetimes is a vector whose ith element is the probability that the ith component failure is fatal to the system. System signatures have been found to be quite useful tools in the study and comparison of engineered systems. In this article, the theory of system signatures is extended to versions of signatures applicable in dynamic reliability settings. It is shown that, when a working used system is inspected at time t and it is noted that precisely k failures have occurred, the vector s ∈ [0,1]^(n−k) whose jth element is the probability that the (k + j)th component failure is fatal to the system, for j = 1, 2, …, n − k, is a distribution-free measure of the design of the residual system. Next, known representation and preservation theorems for system signatures are generalized to dynamic versions. Two additional applications of dynamic signatures are studied in detail. The well-known “new better than used” (NBU) property of aging systems is extended to a uniform (UNBU) version, which compares systems when new and when used, conditional on the known number of failures. Sufficient conditions are given for a system to have the UNBU property. The application of dynamic signatures to the engineering practice of “burn-in” is also treated. Specifically, we consider the comparison of new systems with working used systems burned-in to a given ordered component failure time. In a reliability economics framework, we illustrate how one might compare a new system to one successfully burned-in to the kth component failure, and we identify circumstances in which burn-in is inferior (or is superior) to the fielding of a new system. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2009
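A sketch of computing an ordinary (static) signature by enumerating the n! equally likely orderings of i.i.d. component failures: s_i is the fraction of orderings in which the ith failure brings the system down. The 5-component bridge structure below is a standard textbook example, not taken from this article.

```python
from itertools import permutations
from fractions import Fraction

def works(up):
    # bridge system with minimal path sets {1,2}, {4,5}, {1,3,5}, {2,3,4}
    return ((up[1] and up[2]) or (up[4] and up[5]) or
            (up[1] and up[3] and up[5]) or (up[2] and up[3] and up[4]))

def signature(n, works):
    """Enumerate all failure orderings; count which ordered failure kills the system."""
    counts = [0] * n
    for order in permutations(range(1, n + 1)):          # order in which components fail
        up = {c: True for c in range(1, n + 1)}
        for i, c in enumerate(order):
            up[c] = False
            if not works(up):
                counts[i] += 1                           # the (i+1)-th failure was fatal
                break
    total = sum(counts)
    return [Fraction(c, total) for c in counts]

print(signature(5, works))      # bridge signature: 0, 1/5, 3/5, 1/5, 0
```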

20.
This paper addresses the problem of finding a feasible schedule of n jobs on m parallel machines, where each job has a deadline and some jobs are preassigned to some machine. This problem arises in the daily assignment of workload to a set of flight dispatchers, and it is strongly characterized by the fact that the job lengths may assume one out of k different values, for small k. We prove the problem to be NP-complete for k = 2 and propose an effective implicit enumeration algorithm which allows a set of real-life instances to be solved efficiently. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 359–376, 2000
