Similar Articles (20 results)
1.
In this journal in 1967, Szwarc presented an algorithm for the optimal routing of a common vehicle fleet between m sources and n sinks with p different types of commodities. The main premise of the formulation is that a truck may carry only one commodity at a time and must deliver the entire load to one demand area. This eliminates the problem of routing vehicles between sources or between sinks and limits the problem to the routing of loaded trucks between sources and sinks and empty trucks making the return trip. Szwarc considered only the transportation aspect of the problem (i.e., no intermediate points) and presented a very efficient algorithm for solution of the case he described. If the total supply is greater than the total demand, Szwarc shows that the problem is equivalent to an (mp + n) by (np + m) Hitchcock transportation problem. Digital computer codes for this algorithm require rapid access storage for a matrix of size (mp + n) by (np + m); therefore, the computer storage required grows proportionally to p². This paper offers an extension of his work to a more general form: a transshipment network with capacity constraints on all arcs and facilities. The problem is shown to be solvable directly by Fulkerson's out-of-kilter algorithm. Digital computer codes for this formulation require rapid access storage proportional to p instead of p². Computational results indicate that, in addition to handling the extensions, the out-of-kilter algorithm is more efficient in the solution of the original problem when there is a moderate number of commodities and a computer of limited storage capacity.
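To make the p² versus p storage claim concrete, here is a small back-of-envelope sketch (not taken from the paper): it counts the cells of the equivalent Hitchcock tableau, which grow roughly as p², against a rough per-commodity arc count for the transshipment network, which grows linearly in p. The arc-count formula is an assumption used only for illustration.

```python
# Back-of-envelope illustration of the storage growth discussed above.
def hitchcock_tableau_cells(m: int, n: int, p: int) -> int:
    """Cells of the equivalent (mp + n) by (np + m) Hitchcock tableau -- grows roughly as p^2."""
    return (m * p + n) * (n * p + m)

def transshipment_arc_estimate(m: int, n: int, p: int) -> int:
    """Rough arc count for a per-commodity loaded-truck layer plus one shared
    empty-return layer -- grows linearly in p. The exact constants are assumptions."""
    return p * m * n + n * m

if __name__ == "__main__":
    for p in (1, 2, 4, 8, 16):
        print(p, hitchcock_tableau_cells(5, 7, p), transshipment_arc_estimate(5, 7, p))
```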

2.
The problem considered involves the assignment of n facilities to n specified locations. Each facility has a given nonnegative flow from each of the other facilities. The objective is to minimize the sum of transportation costs. Assume these n locations are given as points on a two-dimensional plane and transportation costs are proportional to weighted rectangular distances. Then the problem is formulated as a binary mixed integer program. The number of integer variables (all binary) involved equals the number of facilities squared. Without increasing the number of integer variables, the formulation is extended to include “site costs.” Computational results of the formulation are presented.

3.
This paper presents a branch and bound algorithm for computing optimal replacement policies in a discrete-time, infinite-horizon, dynamic programming model of a binary coherent system with n statistically independent components, and then specializes the algorithm to consecutive k-out-of-n systems. The objective is to minimize the long-run expected average undiscounted cost per period. (Costs arise when the system fails and when failed components are replaced.) An earlier paper established the optimality of following a critical component policy (CCP), i.e., a policy specified by a critical component set and the rule: Replace a component if and only if it is failed and in the critical component set. Computing an optimal CCP is an optimization problem with n binary variables and a nonlinear objective function. Our branch and bound algorithm for solving this problem has memory storage requirement O(n) for consecutive k-out-of-n systems. Extensive computational experiments on such systems involving over 350,000 test problems with n ranging from 10 to 150 find this algorithm to be effective when n ≤ 40 or k is near n. © 2002 Wiley Periodicals, Inc. Naval Research Logistics 49: 288–302, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/nav.10017

4.
An algorithm is given for the conditional p-center problem, namely, the optimal location of one or more additional facilities in a region with given demand points and one or more preexisting facilities. The solution dealt with here involves the minimax criterion and Euclidean distances in two-dimensional space. The method used is a generalization to the present conditional case of a relaxation method previously developed for the unconditional p-center problem. Interestingly, its worst-case complexity is identical to that of the unconditional version, and in practice, the conditional algorithm is more efficient. Some test problems with up to 200 demand points have been solved. © 1993 John Wiley & Sons, Inc.

5.
In this paper we consider the capacitated multi-facility Weber problem with the Euclidean, squared Euclidean, and ℓp-distances. This problem is concerned with locating m capacitated facilities in the Euclidean plane to satisfy the demand of n customers with the minimum total transportation cost. The demand and location of each customer are known a priori and the transportation cost between customers and facilities is proportional to the distance between them. We first present a mixed integer linear programming approximation of the problem. We then propose new heuristic solution methods based on this approximation. Computational results on benchmark instances indicate that the new methods are both accurate and efficient. © 2006 Wiley Periodicals, Inc. Naval Research Logistics 2006

6.
We consider a multi-stage inventory system composed of a single warehouse that receives a single product from a single supplier and replenishes the inventory of n retailers through direct shipments. Fixed costs are incurred for each truck dispatched and all trucks have the same capacity limit. Costs are stationary, or more generally monotone as in Lippman (Management Sci 16, 1969, 118–138). Demands for the n retailers over a planning horizon of T periods are given. The objective is to find the shipment quantities over the planning horizon to satisfy all demands at minimum system-wide inventory and transportation costs without backlogging. Using the structural properties of optimal solutions, we develop (1) an O(T²) algorithm for the single-stage dynamic lot sizing problem; (2) an O(T³) algorithm for the case of a single-warehouse single-retailer system; and (3) a nested shortest-path algorithm for the single-warehouse multi-retailer problem that runs in polynomial time for a given number of retailers. To overcome the computational burden when the number of retailers is large, we propose aggregated and disaggregated Lagrangian decomposition methods that make use of the structural properties and the efficient single-stage algorithm. Computational experiments show the effectiveness of these algorithms and the gains associated with coordinated versus decentralized systems. Finally, we show that the decentralized solution is asymptotically optimal. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 2009
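As an illustration of the single-stage recursion mentioned in (1), here is a minimal O(T²) dynamic-programming sketch for dynamic lot sizing with full-truck fixed costs and linear holding costs. The cost structure and the zero-inventory-ordering form of the recursion are assumptions made for this sketch; the paper's algorithm rests on its own structural properties.

```python
import math

def single_stage_lot_sizing(demand, truck_cost, truck_cap, hold_cost):
    """O(T^2) dynamic-programming sketch: f[t] is the minimum cost of meeting the
    demand of periods 1..t, where the last shipment is placed in some period j and
    covers periods j..t, paying ceil(batch / truck_cap) truck dispatches plus a
    linear holding cost for units carried forward."""
    T = len(demand)
    INF = float("inf")
    f = [0.0] + [INF] * T
    for t in range(1, T + 1):
        batch, holding = 0.0, 0.0
        for j in range(t, 0, -1):            # extend the covered interval back to period j
            holding += hold_cost * batch     # everything already in the batch is held one more period
            batch += demand[j - 1]
            trucks = math.ceil(batch / truck_cap)
            f[t] = min(f[t], f[j - 1] + trucks * truck_cost + holding)
    return f[T]

# Example: 4 periods, truck capacity 10, cost 50 per truck, holding cost 1 per unit-period.
print(single_stage_lot_sizing([6, 7, 3, 9], truck_cost=50, truck_cap=10, hold_cost=1.0))
```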

7.
This paper develops a forward algorithm and planning horizon procedures for an important machine replacement model where it is assumed that the technological environment is improving over time and that the machine-in-use can be replaced by any of the several different kinds of machines available at that time. The set of replacement alternatives may include (i) new machines with different types of technologies such as labor- and capital-intensive, (ii) used machines, (iii) repairs and/or improvements which affect the performance characteristics of the existing machine, and so forth. The forward dynamic programming algorithm in the paper can be used to solve a finite horizon problem. The planning horizon results give a procedure to identify the forecast horizon T such that the optimal replacement decision for the first machine based on the forecast of machine technology until period T remains optimal for any problem with horizon longer than T and, for that matter, for the infinite horizon problem. A flow chart and a numerical example have been included to illustrate the algorithm.
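The flavor of the forward recursion can be illustrated with a stripped-down sketch in which a single machine type is available in each period. The cost functions purchase(j) and operate(j, age) are assumptions introduced here; the paper's model additionally handles several replacement alternatives (used machines, improvements) and the forecast-horizon analysis.

```python
def equipment_replacement(T, purchase, operate):
    """Forward DP sketch for finite-horizon machine replacement with one machine type:
    f[t] = min over j <= t of  f[j-1] + purchase(j) + cost of operating the machine
    bought in period j through period t (ages 0 .. t-j)."""
    INF = float("inf")
    f = [0.0] + [INF] * T
    for t in range(1, T + 1):
        for j in range(1, t + 1):            # the machine in use at period t was bought in period j
            run_cost = sum(operate(j, age) for age in range(t - j + 1))
            f[t] = min(f[t], f[j - 1] + purchase(j) + run_cost)
    return f[T]

# Improving technology: machines bought later are cheaper to buy and to run.
print(equipment_replacement(6,
                            purchase=lambda j: 100 - 5 * j,
                            operate=lambda j, age: 10 + 4 * age - j))
```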

8.
This paper considers the problem of locating multiple new facilities in order to minimize a total cost function consisting of the sum of weighted Euclidean distances among the new facilities and between the new and existing facilities, the locations of which are known. A new procedure is derived from a set of results pertaining to necessary conditions for a minimum of the objective function. The results from a number of sample problems which have been executed on a programmed version of this algorithm are used to illustrate the effectiveness of the new technique.

9.
In an accumulation game, a HIDER attempts to accumulate a certain number of objects or a certain quantity of material before a certain time, and a SEEKER attempts to prevent this. In a continuous accumulation game the HIDER can pile material either at locations 1, 2, …, n, or over a region in space. The HIDER will win (payoff 1) if it accumulates N units of material before a given time, and the SEEKER will win (payoff 0) otherwise. We assume the HIDER can place continuous material such as fuel at discrete locations i = 1, 2, …, n, and the game is played in discrete time. At each time k > 0 the HIDER acquires h units of material and can distribute it among all of the locations. At the same time, k, the SEEKER can search a certain number s < n of the locations, and will confiscate (or destroy) all material found. After explicitly describing what we mean by a continuous accumulation game on discrete locations, we prove a theorem that gives a condition under which the HIDER can always win by using a uniform distribution at each stage of the game. When this condition does not hold, special cases and examples show that the resulting game becomes complicated even when played only for a single stage. We reduce the single stage game to an optimization problem, and also obtain some partial results on its solution. We also consider accumulation games where the locations are arranged in either a circle or in a line segment and the SEEKER must search a series of adjacent locations. © 2002 John Wiley & Sons, Inc. Naval Research Logistics, 49: 60–77, 2002; DOI 10.1002/nav.1048
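A minimal simulation sketch of the discrete-location game described above, under an assumed timing convention (the HIDER's surviving total is checked after the SEEKER's confiscation) and against a greedy SEEKER who always searches the s fullest locations; none of these modelling choices come from the paper.

```python
def uniform_hider_simulation(n, s, h, N, horizon):
    """Toy simulation: the HIDER spreads h units uniformly over n locations each period,
    the SEEKER confiscates the s fullest piles, and the HIDER wins if the surviving
    total ever reaches N within the horizon. Timing and SEEKER strategy are assumptions."""
    piles = [0.0] * n
    for _ in range(horizon):
        piles = [x + h / n for x in piles]                      # uniform accumulation
        for idx in sorted(range(n), key=piles.__getitem__, reverse=True)[:s]:
            piles[idx] = 0.0                                    # confiscation
        if sum(piles) >= N:
            return True
    return False

print(uniform_hider_simulation(n=5, s=2, h=1.0, N=0.75, horizon=20))
```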

10.
This paper considers a problem of locating new facilities in the plane with respect to existing facilities, the locations of which are known. The problem consists of finding locations of new facilities which will minimize a total cost function which consists of a sum of costs directly proportional to the Euclidean distances among the new facilities, and costs directly proportional to the Euclidean distances between new and existing facilities. It is established that the total cost function has a minimum; necessary conditions for a minimum are obtained; necessary and sufficient conditions are obtained for the function to be strictly convex (it is always convex); when the problem is “well structured,” it is established that for a minimum cost solution the locations of the new facilities will lie in the convex hull of the locations of the existing facilities. Also, a dual to the problem is obtained and interpreted; necessary and sufficient conditions for optimum solutions to the problem, and to its dual, are developed, as well as complementary slackness conditions. Many of the properties to be presented are motivated by, based on, and extend the results of Kuhn's study of the location problem known as the General Fermat Problem.
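For the single new facility special case (Kuhn's General Fermat Problem, referenced above), the classical Weiszfeld iteration gives a concrete minimization procedure; the sketch below handles only that special case and is not the multi-facility method or the dual developed in the paper.

```python
import math

def weiszfeld(points, weights, iters=200, eps=1e-9):
    """Weiszfeld iteration for the single-facility Weber / General Fermat problem:
    minimize sum_i w_i * ||x - a_i||_2 over locations x in the plane.
    Starts from the weighted centroid; if an iterate lands on an existing facility
    this sketch simply stops there (a common simplification)."""
    W = sum(weights)
    x = sum(w * px for (px, _), w in zip(points, weights)) / W
    y = sum(w * py for (_, py), w in zip(points, weights)) / W
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d < eps:
                return (px, py)
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        x, y = num_x / den, num_y / den
    return (x, y)

print(weiszfeld([(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)], [1.0, 1.0, 1.0]))
```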

11.
A pseudo-monotonic interval program is a problem of maximizing f(x) subject to x ∈ X = {x ∈ Rn | a ≤ Ax ≤ b, a, b ∈ Rm}, where f is a pseudomonotonic function on X, the set defined by the linear interval constraints. In this paper, an algorithm to solve the above program is proposed. The algorithm is based on solving a finite number of linear interval programs whose solution techniques are well known. These optimal solutions then yield an optimal solution of the proposed pseudo-monotonic interval program.

12.
We consider the transportation problem of determining nonnegative shipments from a set of m warehouses with given availabilities to a set of n markets with given requirements. Three objectives are defined for each solution: (i) total cost, TC, (ii) bottleneck time, BT (i.e., maximum transportation time for a positive shipment), and (iii) bottleneck shipment, SB (i.e., total shipment over routes with bottleneck time). An algorithm is given for determining all efficient (pareto-optimal or nondominated) (TC, BT) solution pairs. The special case of this algorithm when all the unit cost coefficients are zero is shown to be the same as the algorithms for minimizing BT provided by Szwarc and Hammer. This algorithm for minimizing BT is shown to be computationally superior (transportation or assignment problems with m = n = 100 average about a second on the UNIVAC 1108 computer, FORTRAN V) to the threshold algorithm for minimizing BT. The algorithm is then extended to provide not only all the efficient (TC, BT) solution pairs but also, for each such BT, all the efficient (TC, SB) solution pairs. The algorithms are based on the cost operator theory of parametric programming for the transportation problem developed by the authors.

13.
The network redesign problem attempts to design an optimal network that serves both existing and new demands. In addition to using spare capacity on existing network facilities and deploying new facilities, the model allows for rearrangement of existing demand units. As rearrangements mean reassigning existing demand units, at a cost, to different facilities, they may lead to disconnecting of uneconomical existing facilities, resulting in significant savings. The model is applied to an access network, where the demands from many sources need to be routed to a single destination, using either low-capacity or high-capacity facilities. Demand from any location can be routed to the destination either directly or through one other demand location. Low-capacity facilities can be used between any pair of locations, whereas high-capacity facilities are used only between demand locations and the destination. We present a new modeling approach to such problems. The model is described as a network flow problem, where each demand location is represented by multiple nodes associated with demands, low-capacity and high-capacity facilities, and rearrangements. Each link has capacity and cost-per-unit-flow parameters. Some of the links also have a fixed-charge cost. The resulting network flow model is formulated as a mixed integer program, and solved by a heuristic and by commercially available software. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 487–506, 1999

14.
In this paper, we consider a general covering problem in which k subsets are to be selected such that their union covers as large a weight of objects from a universal set of elements as possible. Each subset selected must satisfy some structural constraints. We analyze the quality of a k-stage covering algorithm that relies, at each stage, on greedily selecting a subset that gives maximum improvement in terms of overall coverage. We show that such greedily constructed solutions are guaranteed to be within a factor of 1 − 1/e of the optimal solution. In some cases, selecting a best solution at each stage may itself be difficult; we show that if a β-approximate best solution is chosen at each stage, then the overall solution constructed is guaranteed to be within a factor of 1 − 1/e^β of the optimal. Our results also yield a simple proof that the number of subsets used by the greedy approach to achieve entire coverage of the universal set is within a logarithmic factor of the optimal number of subsets. Examples of problems that fall into the family of general covering problems considered, and for which the algorithmic results apply, are discussed. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 615–627, 1998
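A minimal sketch of the k-stage greedy rule analyzed above, for the simplest setting where the candidate subsets are given explicitly as a list (i.e., the per-stage structural constraints are ignored and the best subset can be found by enumeration):

```python
def greedy_max_coverage(weights, subsets, k):
    """At each of the k stages pick the subset that adds the largest uncovered weight.
    weights: dict mapping each element of the universal set to a nonnegative weight.
    subsets: list of sets of elements. Returns the chosen indices and the covered weight."""
    covered, chosen = set(), []
    for _ in range(k):
        best, best_gain = None, 0.0
        for idx, s in enumerate(subsets):
            gain = sum(weights[e] for e in s - covered)
            if gain > best_gain:
                best, best_gain = idx, gain
        if best is None:                 # no subset improves coverage any further
            break
        chosen.append(best)
        covered |= subsets[best]
    return chosen, sum(weights[e] for e in covered)

w = {1: 2.0, 2: 1.0, 3: 1.0, 4: 3.0}
print(greedy_max_coverage(w, [{1, 2}, {2, 3}, {4}, {1, 4}], k=2))
```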

15.
Extending Sastry's result on the uncapacitated two-commodity network design problem, we completely characterize the optimal solution of the uncapacitated K-commodity network design problem with zero flow costs for the case when K = 3. By solving a set of shortest-path problems on related graphs, we show that the optimal solutions can be found in O(n³) time when K = 3, where n is the number of nodes in the network. The algorithm depends on identifying a list of “basic patterns”; the number of basic patterns grows exponentially with K. We also show that the uncapacitated K-commodity network design problem can be solved in O(n³) time for general K if K is fixed; otherwise, the time for solving the problem is exponential. © 2004 Wiley Periodicals, Inc. Naval Research Logistics, 2004

16.
This paper considers the problem of locating m new facilities in the plane so as to minimize a weighted rectangular distance between the new facilities and n existing facilities. A special purpose primal simplex algorithm is developed to solve this problem. The algorithm will maintain at all times a basis of dimension m by m; however, because of the triangularity of the basis matrix, it will not be necessary to form a basis inverse explicitly.

17.
The problem considered is to locate one or more new facilities relative to a number of existing facilities when the locations of the existing facilities, the weights between new facilities, and the weights between new and existing facilities are all random variables. The new facilities are to be located such that expected distance traveled is minimized. The Euclidean distance measure is considered; both unconstrained and chance-constrained formulations are treated.

18.
Scheduling a set of n jobs on a single machine so as to minimize the completion time variance is a well-known NP-hard problem. In this paper, we propose a sequence, which can be constructed in O(n log n) time, as a solution for the problem. Our primary concern is to establish the asymptotical optimality of the sequence within the framework of probabilistic analysis. Our main result is that, when the processing times are randomly and independently drawn from the same uniform distribution, the sequence is asymptotically optimal in the sense that its relative error converges to zero in probability as n increases. Other theoretical results are also derived, including: (i) When the processing times follow a symmetric structure, the problem has 2^⌊(n−1)/2⌋ optimal sequences, which include our proposed sequence and other heuristic sequences suggested in the literature; and (ii) when these 2^⌊(n−1)/2⌋ sequences are used as approximate solutions for a general problem, our proposed sequence yields the best approximation (in an average sense) while another sequence, which is commonly believed to be a good approximation in the literature, is interestingly the worst. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 373–398, 1999
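To make the objective concrete, the sketch below computes the completion time variance of a sequence and builds one simple V-shaped sequence (largest job first, remaining jobs alternated between the two ends). It is an illustrative O(n log n) construction in the spirit of the heuristic sequences discussed above, not necessarily the specific sequence proposed in the paper.

```python
import statistics

def completion_time_variance(seq):
    """Population variance of the job completion times when jobs run in the given order."""
    completions, t = [], 0.0
    for p in seq:
        t += p
        completions.append(t)
    return statistics.pvariance(completions)

def v_shaped_sequence(proc_times):
    """Largest job first, then alternate the remaining jobs (in decreasing order)
    between the back and the front, producing a V-shaped schedule."""
    jobs = sorted(proc_times, reverse=True)
    front, back = [jobs[0]], []
    for i, p in enumerate(jobs[1:]):
        (back if i % 2 == 0 else front).append(p)
    return front + back[::-1]

p = [7, 3, 9, 4, 6]
seq = v_shaped_sequence(p)
print(seq, completion_time_variance(seq))
```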

19.
The isotonic median regression problem arising in statistics is as follows. We are given m observations falling into n sets, the ith set containing mi observations. The problem requires the determination of n real numbers, the ith being the value “fitted” to each observation in the ith set. These n numbers chosen must satisfy certain (total or partial) order requirements and minimize the distance between the vectors of observed and fitted values in the l1 norm. We present a simple algorithm, of time complexity O(mn), for calculating isotonic median regression for orders representable by rooted trees. We believe that this algorithm is the best currently available for this problem. The algorithm is validated by a linear programming approach which provides additional insight.
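For the special case of a total order (a simple chain x1 ≤ x2 ≤ … ≤ xn), the standard pool-adjacent-violators idea with block medians gives a compact illustration of isotonic median (l1) regression; the paper's O(mn) algorithm covers the more general rooted-tree orders.

```python
import statistics

def isotonic_median_chain(groups):
    """l1 isotonic regression on a chain via pool-adjacent-violators with block medians.
    groups[i] is the list of observations in the ith set; returns one fitted value per set,
    nondecreasing in i."""
    blocks = []                              # each block: [observations, fitted median, number of groups]
    for obs in groups:
        blocks.append([list(obs), statistics.median(obs), 1])
        while len(blocks) > 1 and blocks[-2][1] > blocks[-1][1]:   # fix order violations by pooling
            data2, _, k2 = blocks.pop()
            data1, _, k1 = blocks.pop()
            merged = data1 + data2
            blocks.append([merged, statistics.median(merged), k1 + k2])
    fitted = []
    for _, med, k in blocks:
        fitted.extend([med] * k)
    return fitted

print(isotonic_median_chain([[3, 5], [1], [2, 2, 8], [4]]))   # -> [2.5, 2.5, 2.5, 4]
```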

20.
We consider the salvo policy problem, in which there are k moments, called salvos, at which we can fire multiple missiles simultaneously at an incoming object. Each salvo is characterized by a probability pi: the hit probability of a single missile. After each salvo, we can assess whether the incoming object is still active. If it is, we fire the missiles assigned to the next salvo. In the salvo policy problem, the goal is to assign at most n missiles to salvos in order to minimize the expected number of missiles used. We consider three problem versions. In Gould's version, we have to assign all n missiles to salvos. In the Big Bomb version, a cost of B is incurred when all salvos are unsuccessful. Finally, we consider the Quota version in which the kill probability should exceed some quota Q. We discuss the computational complexity and the approximability of these problem versions. In particular, we show that Gould's version and the Big Bomb version admit pseudopolynomial time exact algorithms and fully polynomial time approximation schemes. We also present an iterative approximation algorithm for the Quota version, and show that a related problem is NP-complete.
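The expected number of missiles used by a policy (n1, …, nk) factors as n1 + (1 − p1)^n1 (n2 + (1 − p2)^n2 (…)), since salvo i is fired only if every earlier salvo missed. That factoring suggests a simple dynamic program over salvos and remaining missiles; the sketch below solves Gould's version this way and is an illustrative formulation, not necessarily the algorithm from the paper.

```python
from functools import lru_cache

def gould_min_expected_missiles(p, n):
    """Gould's version: assign all n missiles to the k salvos (hit probabilities p[0..k-1])
    so as to minimize the expected number of missiles fired. f(i, m) is the minimum
    expected number of missiles fired from salvo i onward, given that salvo i is reached
    (every earlier salvo missed) and m missiles remain to be assigned."""
    k = len(p)

    @lru_cache(maxsize=None)
    def f(i, m):
        if i == k - 1:
            return m                               # all remaining missiles go to the last salvo
        best = float("inf")
        for ni in range(m + 1):                    # missiles assigned to salvo i
            best = min(best, ni + (1 - p[i]) ** ni * f(i + 1, m - ni))
        return best

    return f(0, n)

print(gould_min_expected_missiles([0.5, 0.8], n=4))    # -> 2.5
```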
