Similar Articles
20 similar articles found.
1.
Scheduling a set of n jobs on a single machine so as to minimize the completion time variance is a well‐known NP‐hard problem. In this paper, we propose a sequence, which can be constructed in O(n log n) time, as a solution for the problem. Our primary concern is to establish the asymptotic optimality of the sequence within the framework of probabilistic analysis. Our main result is that, when the processing times are randomly and independently drawn from the same uniform distribution, the sequence is asymptotically optimal in the sense that its relative error converges to zero in probability as n increases. Other theoretical results are also derived, including: (i) When the processing times follow a symmetric structure, the problem has 2^⌊(n−1)/2⌋ optimal sequences, which include our proposed sequence and other heuristic sequences suggested in the literature; and (ii) when these 2^⌊(n−1)/2⌋ sequences are used as approximate solutions for a general problem, our proposed sequence yields the best approximation (in an average sense) while another sequence, which is commonly believed to be a good approximation in the literature, is interestingly the worst. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 373–398, 1999
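Optimal sequences for this problem are known in the literature to be V-shaped (processing times decrease to a minimum and then increase). As a hedged illustration only, the sketch below evaluates the completion time variance of a job order and builds one simple V-shaped arrangement by dealing jobs, sorted by decreasing processing time, alternately to the two ends of the sequence. It is not claimed to be the specific O(n log n) sequence analyzed in the paper, and the job data are invented.

```python
from itertools import accumulate
from statistics import pvariance

def ctv(seq):
    """Completion time variance of the given processing-time order."""
    return pvariance(list(accumulate(seq)))

def v_shaped(jobs):
    """Sort jobs by decreasing processing time and deal them alternately
    to the front and the back, giving a V-shaped sequence."""
    left, right = [], []
    for i, p in enumerate(sorted(jobs, reverse=True)):
        (left if i % 2 == 0 else right).append(p)
    return left + right[::-1]

jobs = [7, 3, 9, 1, 4, 6]      # illustrative processing times
order = v_shaped(jobs)
print(order, ctv(order))
```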

2.
Discussed in this article are tests for the extreme-value distribution, or, equivalently, for the two-parameter Weibull distribution when parameters are unknown and the sample may be censored. The three tests investigated are based on the median, the mean, and the Anderson-Darling A² statistic calculated from a set of values zi derived from the spacings of the sample. The median and the mean have previously been discussed by Mann, Scheuer, and Fertig [10] and by Tiku and Singh [14]. Asymptotic distributions and points are given for the test statistics, based on recently developed theory, and power studies are conducted to compare them with each other and with two other statistics suitable for the test. Of the normalized spacings tests, A² is recommended overall; the mean also gives good power in many situations, but can be inconsistent.

3.
A Student's t-test proposed by Ogawa is considered for the hypothesis H0: σ = σ0 against the alternative hypothesis H1: σ ≠ σ0, where σ is the scale parameter of the extreme-value distribution of smallest values with known location parameter μ. The test is based on a few sample quantiles chosen from a large sample so as to give asymptotically maximum power to the test when the number of sample quantiles is fixed. A table which facilitates the computation of the test statistic is given. Several schemes for determining the ranks of the sample quantiles by the optimal spacings are compared and the effect of the bias of the estimate of σ on the test is investigated through a Monte Carlo study.

4.
A point is placed at random on the real line according to some known distribution F, and a search is made for this point, beginning at some starting point s on the line, and moving along the line according to some function x(t). The objective of this article is to maximize the probability of finding the point while traveling at most d units. Characterizations of simple optimal searches are found for arbitrary distributions, for continuous distributions with continuous density everywhere (e.g., normal, Cauchy, triangular), and for continuous distributions with density which is continuous on its support (e.g., exponential, uniform). These optimal searches are also shown to be optimal for maximization of the expected number of points found if the points are placed on the line independently from a known distribution F.

5.
This paper uses the holding time model (HTM) method to derive an approximate analytic formula for the calculation of the mean throughput of a K-station production line with no buffers between any two successive stations. Service times follow the two-stage Coxian (C2) distribution at all stations. The paper provides a formula that relates the third moment of the service completion (or virtual service) time with the respective parameters of the service time, the repair time and the time to breakdown (the latter is assumed to follow the exponential distribution). In this way, it concludes that under certain conditions the two-stage Coxian distribution can be used to approximate any general distribution matching the first three moments of the service completion time distribution. The mean holding times (consisting of the service and blocking periods) of all stations of the line are obtained in an analytical form. Numerical results are provided for the mean throughput of lines with up to 20 stations. These results are shown to have good accuracy compared with results obtained from the Markovian state method (for short lines) and results from simulation (for longer lines). © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 669–685, 1998
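For readers unfamiliar with the C2 service-time model, the sketch below shows one common parameterization of a two-stage Coxian variate and estimates its first three raw moments by simulation. The rates and branching probability are assumed values for illustration only; the paper's moment-matching formulas are not reproduced.

```python
import random
import statistics

def coxian2(mu1, mu2, a):
    """One draw from a two-stage Coxian: an Exp(mu1) stage, followed with
    probability a by a second Exp(mu2) stage."""
    x = random.expovariate(mu1)
    if random.random() < a:
        x += random.expovariate(mu2)
    return x

samples = [coxian2(mu1=2.0, mu2=1.0, a=0.4) for _ in range(100_000)]
print(statistics.fmean(samples),
      statistics.fmean(s ** 2 for s in samples),
      statistics.fmean(s ** 3 for s in samples))   # first three raw moments
```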

6.
Extending Sastry's result on the uncapacitated two‐commodity network design problem, we completely characterize the optimal solution of the uncapacitated K‐commodity network design problem with zero flow costs for the case when K = 3. By solving a set of shortest‐path problems on related graphs, we show that the optimal solutions can be found in O(n³) time when K = 3, where n is the number of nodes in the network. The algorithm depends on identifying a list of “basic patterns”; the number of basic patterns grows exponentially with K. We also show that the uncapacitated K‐commodity network design problem can be solved in O(n³) time for general K if K is fixed; otherwise, the time for solving the problem is exponential. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004
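The O(n³) bound quoted above comes from solving shortest-path problems on related graphs. As a hedged aside, and not the network design algorithm itself, the snippet below shows the classic Floyd-Warshall all-pairs shortest-path routine, whose O(n³) running time is the kind of building block such bounds rest on; the arc lengths are invented.

```python
import math

def floyd_warshall(w):
    """w: n x n matrix of arc lengths (math.inf where no arc); returns shortest distances."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = math.inf
w = [[0, 3, INF, 7],
     [3, 0, 2, INF],
     [INF, 2, 0, 1],
     [7, INF, 1, 0]]
print(floyd_warshall(w))
```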

7.
A system reliability is often evaluated by individual tests of components that constitute the system. These component test plans have advantages over complete system based tests in terms of time and cost. In this paper, we consider the series system with n components, where the lifetime of the i‐th component follows an exponential distribution with parameter λi. Assuming test costs for the components are different, we develop an efficient algorithm to design a two‐stage component test plan that satisfies the usual probability requirements on the system reliability and in addition minimizes the maximum expected cost. For the case of prior information in the form of upper bounds on λi's, we use the genetic algorithm to solve the associated optimization problems which are otherwise difficult to solve using mathematical programming techniques. The two‐stage component test plans are cost effective compared to single‐stage plans developed by Rajgopal and Mazumdar. We demonstrate through several numerical examples that our approach has the potential to reduce the overall testing costs significantly. © 2002 John Wiley & Sons, Inc. Naval Research Logistics, 49: 95–116, 2002; DOI 10.1002/nav.1051

8.
This paper considers the problem of computing E(Xⁿ; X > t) when X is a normal variate having the property that the mean is substantially larger than the standard deviation. An approximation is developed which is determined from the mean, standard deviation, and the cumulative standard normal distribution. Computations comparing the approximate moments with the actual are reported for various values of the relevant parameters. These results are applied to the problem of computing the expected number of shortages in a lead-time for a single product which exhibits continuous exponential decay.
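As a point of reference, and not the approximation developed in the article, the partial moment E(Xⁿ; X > t) can be evaluated directly by quadrature; for n = 1 the standard closed form μ(1 − Φ(z)) + σφ(z) with z = (t − μ)/σ provides a check. The parameter values below are illustrative only.

```python
from scipy.integrate import quad
from scipy.stats import norm

def partial_moment(n, t, mu, sigma):
    """E(X^n; X > t) = integral from t upward of x^n f(x) dx, X ~ N(mu, sigma^2)."""
    val, _ = quad(lambda x: x**n * norm.pdf(x, loc=mu, scale=sigma),
                  t, mu + 12 * sigma)          # integrand is negligible beyond this point
    return val

mu, sigma, t = 100.0, 10.0, 110.0              # mean much larger than the std. deviation
z = (t - mu) / sigma
exact_n1 = mu * norm.sf(z) + sigma * norm.pdf(z)   # closed form for n = 1
print(partial_moment(1, t, mu, sigma), exact_n1)
```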

9.
Independent samples are taken from C multivariate populations with continuous but unknown cumulative distribution function (c.d.f.). The problem is to test the hypothesis that the C population c.d.f.'s are identical to a specified c.d.f. We approach this problem by first transforming the data so that the hypothesis being tested is that the common distribution is uniform over a unit hypercube. We then construct some Bayes tests and investigate their asymptotic properties. These tests are based on the asymptotic normality of the number of observations falling in the “asymptotically sufficient groupings”.

10.
In this paper, we consider a general covering problem in which k subsets are to be selected such that their union covers as large a weight of objects from a universal set of elements as possible. Each subset selected must satisfy some structural constraints. We analyze the quality of a k-stage covering algorithm that relies, at each stage, on greedily selecting a subset that gives maximum improvement in terms of overall coverage. We show that such greedily constructed solutions are guaranteed to be within a factor of 1 − 1/e of the optimal solution. In some cases, selecting a best solution at each stage may itself be difficult; we show that if a β-approximate best solution is chosen at each stage, then the overall solution constructed is guaranteed to be within a factor of 1 − 1/e^β of the optimal. Our results also yield a simple proof that the number of subsets used by the greedy approach to achieve entire coverage of the universal set is within a logarithmic factor of the optimal number of subsets. Examples of problems that fall into the family of general covering problems considered, and for which the algorithmic results apply, are discussed. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 615–627, 1998
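A minimal sketch of the k-stage greedy heuristic described above, on an invented weighted instance: at each stage the subset adding the most uncovered weight is chosen, and the resulting coverage is within a factor 1 − 1/e of the optimal.

```python
def greedy_cover(subsets, weights, k):
    """subsets: dict name -> set of elements; weights: dict element -> weight."""
    covered, chosen = set(), []
    for _ in range(k):
        best, best_gain = None, 0.0
        for name, s in subsets.items():
            gain = sum(weights[e] for e in s - covered)   # uncovered weight this subset adds
            if gain > best_gain:
                best, best_gain = name, gain
        if best is None:          # no further improvement possible
            break
        chosen.append(best)
        covered |= subsets[best]
    return chosen, sum(weights[e] for e in covered)

subsets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}   # invented instance
weights = {e: 1.0 for e in range(1, 7)}
print(greedy_cover(subsets, weights, k=2))
```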

11.
A pseudo-monotonic interval program is a problem of maximizing f(x) subject to x ∈ X = {x ∈ Rⁿ | a ≤ Ax ≤ b, a, b ∈ Rᵐ}, where f is a pseudo-monotonic function on X, the set defined by the linear interval constraints. In this paper, an algorithm to solve the above program is proposed. The algorithm is based on solving a finite number of linear interval programs whose solution techniques are well known. These optimal solutions then yield an optimal solution of the proposed pseudo-monotonic interval program.

12.
We address the so‐called maximum dispersion problems where the objective is to maximize the sum or the minimum of interelement distances amongst a subset chosen from a given set. The problems arise in a variety of contexts including the location of obnoxious facilities, the selection of diverse groups, and the identification of dense subgraphs. They are known to be computationally difficult. In this paper, we propose a Lagrangian approach toward their solution and report the results of an extensive computational experimentation. Our results show that our Lagrangian approach is reasonably fast, that it yields heuristic solutions which provide good lower bounds on the optimum solution values for both the sum and the minimum problems, and further that it produces decent upper bounds in the case of the sum problem. For the sum problem, the results also show that the Lagrangian heuristic compares favorably against several existing heuristics. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 97–114, 2000

13.
One branch of the reliability literature is concerned with devising statistical procedures with various nonparametric “restricted family” model assumptions because of the potential improved operating characteristics of such procedures over totally nonparametric ones. In the single-sample problem with unknown increasing failure rate (IFR) distribution F, (1) maximum-likelihood estimators of F have been calculated, (2) upper or lower tolerance limits for F have been determined, and (3) tests of the null hypothesis that F is exponential have been constructed. Barlow and Campo proposed graphical methods for assessing goodness of fit to the IFR model when the validity of this assumption is unknown. This article proposes several analytic tests of the IFR null hypothesis based on the maximum distance and area between the cumulative hazard function and its greatest convex minorant (GCM), and the maximum distance and area between the total time on test statistic and its GCM. A table of critical points is provided to implement a specific test having good overall power properties.

14.
In this article, an integral equation satisfied by the second moment function M2(t) of a geometric process is obtained. The numerical method based on the trapezoidal integration rule proposed by Tang and Lam for the geometric function M(t) is adapted to solve this integral equation. To illustrate the numerical method, the first interarrival time is assumed to be one of four common lifetime distributions, namely, exponential, gamma, Weibull, and lognormal. In addition to this method, a power series expansion is derived using the integral equation for the second moment function M2(t), when the first interarrival time has an exponential distribution.
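The integral equation for M2(t) is not reproduced in the abstract, so the sketch below is only an assumption-laden illustration of the Tang-Lam style of numerical scheme: it applies the trapezoidal rule to the ordinary renewal equation M(t) = F(t) + ∫₀ᵗ M(t − s) f(s) ds, for which the exponential case gives the easy check M(t) = t. Grid size and distribution are illustrative choices.

```python
import numpy as np

def renewal_trapezoid(F, f, T, h):
    """Trapezoidal-rule solution of M(t) = F(t) + int_0^t M(t - s) f(s) ds on a grid."""
    t = np.arange(0.0, T + h, h)
    Fv, fv = F(t), f(t)
    M = np.zeros_like(t)
    for j in range(1, len(t)):
        # trapezoidal weights: 1/2 at the endpoints s = 0 and s = t_j
        conv = 0.5 * M[0] * fv[j] + np.dot(M[j - 1:0:-1], fv[1:j])
        M[j] = (Fv[j] + h * conv) / (1.0 - 0.5 * h * fv[0])   # solve for the unknown M[j]
    return t, M

F = lambda t: 1.0 - np.exp(-t)     # unit exponential interarrival distribution (check case)
f = lambda t: np.exp(-t)
t, M = renewal_trapezoid(F, f, T=5.0, h=0.01)
print(t[-1], M[-1])                # for exponential interarrivals, M(t) = t
```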

15.
Under a free-replacement warranty of duration W, the customer is provided, for an initial cost of C, as many replacement items as needed to provide service for a period W. Payments of C are not made at fixed intervals of length W, but in random cycles of length Y = W + γ(W), where γ(W) is the (random) remaining lifetime of the item in service W time units after the beginning of a cycle. The expected number of payments over the life cycle, L, of the item is given by M_Y(L), the renewal function for the random variable Y. We investigate this renewal function analytically and numerically and compare the latter with known asymptotic results. The distribution of Y, and hence the renewal function, depends on the underlying failure distribution of the items. Several choices for this distribution, including the exponential, uniform, gamma and Weibull, are considered.
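To make the cycle structure concrete, here is a hedged Monte Carlo sketch of the cycle length Y = W + γ(W) and of the renewal function M_Y(L). The exponential lifetime and all parameter values are assumptions for illustration; the paper's analytical and asymptotic results are not reproduced.

```python
import random

def cycle_length(W, sample_life):
    """One warranty cycle: free replacements until the item in service at time W
    survives past W; the cycle ends when that item fails, so Y = W + gamma(W)."""
    t = 0.0
    life = sample_life()
    while t + life <= W:          # failures inside the warranty window are replaced free
        t += life
        life = sample_life()
    return t + life

def renewal_function(W, L, sample_life, reps=20_000):
    """Monte Carlo estimate of M_Y(L), the expected number of renewals of Y in (0, L]."""
    total = 0
    for _ in range(reps):
        t, n = 0.0, 0
        while True:
            t += cycle_length(W, sample_life)
            if t > L:
                break
            n += 1
        total += n
    return total / reps

rate = 1.0                         # exponential failure rate (illustrative)
print(renewal_function(W=1.0, L=10.0, sample_life=lambda: random.expovariate(rate)))
```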

16.
A basic assumption in process mean estimation is that all process data are clean. However, many sensor system measurements are often corrupted with outliers. Outliers are observations that do not follow the statistical distribution of the bulk of the data and consequently may lead to erroneous results with respect to statistical analysis and process control. Robust estimators of the current process mean are crucial to outlier detection, data cleaning, process monitoring, and other process features. This article proposes an outlier‐resistant mean estimator based on the L1 norm exponential smoothing (L1‐ES) method. The L1‐ES statistic is essentially model-free and demonstrably superior to existing estimators. It has the following advantages: (1) it captures process dynamics (e.g., autocorrelation), (2) it is resistant to outliers, and (3) it is easy to implement. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 2009

17.
In this paper, the reliability function K = P(X < Y) is estimated when X and Y follow gamma, exponential, or bivariate exponential distributions. The paper is partly expository.
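As a simple complement, the sketch below estimates K = P(X < Y) by Monte Carlo for independent exponential variates and checks it against the elementary closed form λX/(λX + λY). It does not reproduce the estimators studied in the paper, and the rates are illustrative.

```python
import random

def estimate_K(sample_x, sample_y, reps=100_000):
    """Monte Carlo estimate of K = P(X < Y) from two independent samplers."""
    hits = sum(sample_x() < sample_y() for _ in range(reps))
    return hits / reps

lam_x, lam_y = 2.0, 1.0           # illustrative rates
print(estimate_K(lambda: random.expovariate(lam_x),
                 lambda: random.expovariate(lam_y)),
      lam_x / (lam_x + lam_y))    # closed form for independent exponentials
```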

18.
If material failures follow a Poisson distribution, then the expected number of failures is exactly proportional to flight hours. However, this article demonstrates that proportionality will not be revealed by simple correlation or regression analysis between monthly flight hours and the number of monthly failures. To test for proportionality, one must instead test the underlying hypothesis that the data follow a Poisson distribution. This article presents three simple tests that may be used for this purpose. The Poisson distribution requires that the mean and variance of the number of failures be equal. This article suggests several alternative models that may be used for samples in which the variance exceeds the mean. First, the mean of the Poisson distribution may itself be randomly distributed across the observational units according to a gamma distribution. If so, the number of failures will have a negative binomial distribution. Second, the mean of the Poisson distribution may depend systematically upon a set of observable explanatory variables. In this case, the Poisson regression model is appropriate. Finally, the mean of the Poisson distribution may contain both a systematic component that depends upon observable variables and a random component. This situation yields a generalized Poisson regression model.
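One elementary way to test the Poisson hypothesis, exploiting the equality of mean and variance noted above, is the classical index-of-dispersion test. The sketch below implements it on invented monthly failure counts; it is not necessarily one of the three tests presented in the article.

```python
import numpy as np
from scipy.stats import chi2

def dispersion_test(counts):
    """Index-of-dispersion test: (n-1)*s^2/mean is approximately chi-square(n-1) under Poisson."""
    counts = np.asarray(counts, dtype=float)
    n, mean, var = len(counts), counts.mean(), counts.var(ddof=1)
    stat = (n - 1) * var / mean
    p = chi2.sf(stat, df=n - 1)      # small p suggests overdispersion relative to Poisson
    return stat, p

print(dispersion_test([3, 7, 4, 12, 5, 9, 2, 15, 6, 8]))   # invented monthly failure counts
```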

19.
The purpose of this paper is to explore an extension of the output discipline for the Poisson input, general output, single channel, first-come, first-served queueing system. The service time parameter, μ, is instead considered a random variable, M. In other words, the service time random variable, T, is to be conditioned by a parameter random variable, M. Therefore, if the distribution function of M is denoted by FM(μ) and the known conditional service time distribution by B(t | μ), then the unconditional service distribution is given by B(t) = Pr{T ≤ t} = ∫ B(t | μ) dFM(μ), the integral extending from −∞ to ∞. Results are obtained that characterize queue size and waiting time using the imbedded Markov chain approach. Expressions are derived for the expected queue length and Laplace-Stieltjes transforms of the steady-state waiting time when conditional service times are exponential. More specific results are found for three special distributions of M: (1) uniform on [1, 2]; (2) two-point; and (3) gamma.
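To illustrate the mixing relation numerically, the sketch below takes the conditional service time to be exponential with rate μ and lets M follow a gamma distribution (the third special case listed), evaluates B(t) by quadrature, and checks it against the known closed-form mixture tail (β/(β + t))^α. The shape and rate values are assumptions for illustration.

```python
import math
from scipy.integrate import quad
from scipy.stats import gamma

alpha, beta = 2.5, 1.5             # shape and rate of M (illustrative)

def uncond_cdf(t):
    """B(t) = integral of (1 - exp(-mu*t)) dFM(mu) for exponential conditional service."""
    integrand = lambda mu: (1.0 - math.exp(-mu * t)) * gamma.pdf(mu, a=alpha, scale=1.0 / beta)
    val, _ = quad(integrand, 0.0, math.inf)
    return val

t = 2.0
print(uncond_cdf(t), 1.0 - (beta / (beta + t)) ** alpha)   # gamma mixing has a closed form
```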

20.
Exact expressions for the first- and second-order moments of order statistics from the truncated exponential distribution, when the proportion 1 − P of truncation is known in advance, are presented in this paper. Tables of expected values and variances and covariances are given for P = 0.5 (0.1) 0.9 and n = 1 (1) 10.
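Expectations of this kind can be cross-checked by simulation. The sketch below draws samples from a unit exponential truncated so that a proportion 1 − P of the distribution is cut off, and averages the order statistics; the choice of P and n is illustrative, and no exact table values are reproduced.

```python
import numpy as np

def truncated_exp_sample(P, size, rng):
    """Inverse-CDF sampling from a unit exponential right-truncated at -ln(1 - P)."""
    u = rng.random(size)
    return -np.log1p(-P * u)

def order_stat_means(P, n, reps=200_000, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.sort(truncated_exp_sample(P, (reps, n), rng), axis=1)
    return samples.mean(axis=0)      # estimates of E[X_(1)], ..., E[X_(n)]

print(order_stat_means(P=0.9, n=5))
```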

