241.
In this paper we present the results of a limited number of experiments with linear fractional problems. Six solution procedures were tested, and the results are expressed as the number of simplex-like pivots required to solve a sample of twenty randomly generated problems.
242.
Minimum cardinality set covering problems (MCSCP) tend to be more difficult to solve than weighted set covering problems because the cost or weight associated with each variable is the same. Since MCSCP is NP-complete, large problem instances are commonly solved using some form of greedy heuristic. In this paper hybrid algorithms are developed and tested against two common forms of the greedy heuristic. Although all the algorithms tested have the same worst-case bounds, provided by Ho [7], empirical results for 60 large randomly generated problems indicate that one algorithm performed better than the others.
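The standard greedy heuristic for minimum-cardinality set covering can be sketched in a few lines: at each step, pick the set that covers the most still-uncovered elements. The sketch below is illustrative only; the toy instance is invented, and the paper's test problems and hybrid variants are not reproduced here.

```python
# A minimal sketch of the classic greedy heuristic for minimum-cardinality
# set covering: repeatedly pick the set covering the most still-uncovered
# elements. The toy instance is invented for illustration.

def greedy_mcscp(universe, sets):
    """universe: elements to cover; sets: dict mapping a set's name to the elements it covers."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # pick the set that covers the largest number of still-uncovered elements
        best = max(sets, key=lambda name: len(sets[name] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("infeasible: some elements cannot be covered")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# toy instance: 7 elements, 5 candidate sets
elements = range(1, 8)
candidates = {"A": {1, 2, 3}, "B": {3, 4, 5}, "C": {5, 6, 7}, "D": {1, 4, 7}, "E": {2, 6}}
print(greedy_mcscp(elements, candidates))   # a 3-set cover, e.g. ['A', 'C', 'B']
```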
243.
Consider an experiment in which only record-breaking values (e.g., values smaller than all previous ones) are observed. The data available may be represented as X1, K1, X2, K2, …, where X1, X2, … are successive minima and K1, K2, … are the numbers of trials needed to obtain new records. We treat the problem of estimating the mean of an underlying exponential distribution, and we consider both fixed sample size problems and inverse sampling schemes. Under inverse sampling, we demonstrate certain global optimality properties of an estimator based on the “total time on test” statistic. Under random sampling, it is shown that an analogous estimator is consistent but can be improved upon for any fixed sample size.
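As a rough illustration of the sampling scheme only (not of the paper's estimators), the sketch below simulates record-breaking data (X_i, K_i) from an exponential distribution and computes a total-time-on-test style summary. Both the pairing convention for (X_i, K_i) and the statistic itself are assumptions made for illustration.

```python
# Illustrative simulation of the record-breaking scheme: only new minima are
# observed, together with the number of trials needed to obtain each record.
# The "TTT-style" statistic at the end is an illustrative guess at a
# total-time-on-test summary, not the paper's exact estimator.

import random

def record_breaking_sample(mean, n_trials, seed=0):
    """Simulate n_trials exponential draws, keeping only new minima X_i and trial counts K_i."""
    rng = random.Random(seed)
    records, counts = [], []            # X_i (successive minima) and K_i
    current_min = float("inf")
    trials_since_record = 0
    for _ in range(n_trials):
        x = rng.expovariate(1.0 / mean)
        trials_since_record += 1
        if x < current_min:             # a new record-breaking (smaller) value
            current_min = x
            records.append(x)
            # here K_i counts the trials up to and including the i-th record;
            # other pairing conventions of the (X_i, K_i) sequence exist
            counts.append(trials_since_record)
            trials_since_record = 0
    return records, counts

X, K = record_breaking_sample(mean=5.0, n_trials=1000)
ttt = sum(k * x for k, x in zip(K, X))  # illustrative "total time on test" style summary
print(len(X), "records observed; TTT-style statistic =", round(ttt, 2))
```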
244.
Many organizations providing service support for products or families of products must allocate inventory investment among the parts (or, identically, items) that make up those products or families. The allocation decision is crucial in today's competitive environment, in which rapid response and low levels of inventory are both required for providing competitive levels of customer service in marketing a firm's products. This is particularly important in high-tech industries, such as computers, military equipment, and consumer appliances. Such rapid response typically implies regional and local distribution points for final products and for spare parts for repairs. In this article we fix attention on a given product or product family at a single location. This single-location problem is the basic building block of multi-echelon inventory systems based on level-by-level decomposition, and our modeling approach is developed with this application in mind. The product consists of field-replaceable units (i.e., parts), which are to be stocked as spares for field service repair. We assume that each part will be stocked at each location according to an (s, S) stocking policy. Moreover, we distinguish two classes of demand at each location: customer (or emergency) demand and normal replenishment demand from lower levels in the multi-echelon system. The basic problem of interest is to determine the appropriate policies (s_i, S_i) for each part i in the product under consideration. We formulate an approximate cost function and service level constraint, and we present a greedy heuristic algorithm for solving the resulting approximate constrained optimization problem. We present experimental results showing that the heuristics developed have good cost performance relative to optimal. We also discuss extensions to the multiproduct component commonality problem.
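A hedged sketch of the kind of greedy marginal-allocation heuristic described above: repeatedly give the next unit of stock to the part whose extra unit buys the largest service gain per dollar, until an aggregate service target is met. The Poisson fill-rate model, the demand-weighted service measure, and the parts data are illustrative assumptions, and the sketch uses simple base-stock levels rather than the article's (s_i, S_i) policies.

```python
# Greedy marginal allocation of spare-parts inventory under an aggregate
# service-level target. All modeling choices here are illustrative assumptions.

from math import exp, factorial

def fill_rate(stock_level, demand_rate):
    """P(Poisson lead-time demand <= stock_level): illustrative item fill-rate proxy."""
    return sum(exp(-demand_rate) * demand_rate**k / factorial(k)
               for k in range(stock_level + 1))

def greedy_allocate(parts, target):
    """parts: {name: (demand_rate, unit_cost)}; target: aggregate fill-rate goal."""
    stock = {p: 0 for p in parts}
    total_demand = sum(d for d, _ in parts.values())

    def aggregate_fill():
        return sum(d * fill_rate(stock[p], d) for p, (d, _) in parts.items()) / total_demand

    def gain_per_dollar(p):
        d, cost = parts[p]
        return d * (fill_rate(stock[p] + 1, d) - fill_rate(stock[p], d)) / cost

    while aggregate_fill() < target:
        stock[max(parts, key=gain_per_dollar)] += 1   # best bang-for-the-buck part
    return stock

parts = {"board": (2.0, 400.0), "fan": (0.5, 30.0), "psu": (1.2, 120.0)}
print(greedy_allocate(parts, target=0.95))
```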
245.
246.
A new technique for solving large-scale allocation problems with partially observable states and constrained action and observation resources is introduced. The technique uses a master linear program (LP) to determine allocations among a set of control policies, and uses partially observable Markov decision processes (POMDPs) to determine improving policies using dual prices from the master LP. An application is made to a military problem where aircraft attack targets in a sequence of stages, with information acquired in one stage being used to plan attacks in the next. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 607–619, 2000
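The POMDP side of such an approach rests on maintaining a belief state that is updated as observations arrive. The sketch below shows only that belief-update step for a single target (alive/dead) after an attack and an imperfect damage-assessment observation; the two-state model and all probabilities are illustrative assumptions, and the master-LP/column-generation machinery is not shown.

```python
# Bayes update of the belief that a target is still alive after an attack and
# an imperfect battle-damage observation. Illustrative assumptions throughout.

def updated_belief(p_alive, p_kill, obs, p_detect_dead, p_false_dead):
    """
    p_alive       : prior belief the target is alive before the attack
    p_kill        : probability the attack kills a live target
    obs           : observation, 'dead' or 'alive'
    p_detect_dead : P(observe 'dead' | target actually dead)
    p_false_dead  : P(observe 'dead' | target actually alive)
    returns posterior P(target alive | attack, observation)
    """
    # state distribution after the attack, before observing
    alive_after = p_alive * (1.0 - p_kill)
    dead_after = 1.0 - alive_after
    # likelihood of the observation under each state
    if obs == "dead":
        like_alive, like_dead = p_false_dead, p_detect_dead
    else:
        like_alive, like_dead = 1.0 - p_false_dead, 1.0 - p_detect_dead
    evidence = alive_after * like_alive + dead_after * like_dead
    return alive_after * like_alive / evidence

# target believed alive with prob 0.9; attack kills with prob 0.7;
# sensor reports 'dead' correctly 80% of the time, falsely 10% of the time
print(round(updated_belief(0.9, 0.7, "dead", 0.8, 0.1), 3))
```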
247.
The classical work of Gittins, which resulted in the celebrated index result, had applications to research planning as an important part of its motivation. However, research planning problems often have features that are not accommodated within Gittins's original framework. These include precedence constraints on the task set, influence between tasks, stopping or investment options, and routes to success in which some tasks do not feature. We consider three classes of Markovian decision models for research planning, each of which has all of these features. Gittins-index heuristics are proposed and assessed both analytically and computationally. They perform impressively. © 1995 John Wiley & Sons, Inc.
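For readers unfamiliar with the index itself, the sketch below computes Gittins indices for a small Markov reward chain using Whittle's retirement-option calibration (bisection on a retirement reward, with value iteration inside). The three-state "research task" chain is an invented example, not one of the paper's models, and the precedence and influence features are not represented.

```python
# Gittins index via Whittle's calibration: the index of a state is (1 - beta)
# times the smallest lump-sum retirement reward M at which retiring
# immediately is optimal in that state. The chain below is illustrative.

def retirement_value(M, rewards, P, beta, tol=1e-10):
    """Value of each state when one may retire at any time for lump sum M."""
    n = len(rewards)
    V = [M] * n
    while True:
        newV = [max(M, rewards[s] + beta * sum(P[s][t] * V[t] for t in range(n)))
                for s in range(n)]
        if max(abs(newV[s] - V[s]) for s in range(n)) < tol:
            return newV
        V = newV

def gittins_index(state, rewards, P, beta, tol=1e-6):
    lo = min(rewards) / (1 - beta)          # retiring never strictly better below this
    hi = max(rewards) / (1 - beta)          # retiring optimal everywhere at this level
    while hi - lo > tol:
        M = (lo + hi) / 2
        V = retirement_value(M, rewards, P, beta)
        if V[state] > M + tol:              # continuing is still strictly better
            lo = M
        else:
            hi = M
    return (1 - beta) * hi                  # per-period index

# toy task: states 0 (promising), 1 (routine), 2 (stalled), geometric discounting
rewards = [1.0, 0.4, 0.1]
P = [[0.6, 0.3, 0.1],
     [0.2, 0.6, 0.2],
     [0.0, 0.3, 0.7]]
for s in range(3):
    print(s, round(gittins_index(s, rewards, P, beta=0.9), 3))
```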
248.
A method previously devised for the solution of the p-center problem on a network has now been extended to solve the analogous minimax location-allocation problem in continuous space. The essence of the method is that we choose a subset of the n points to be served and consider the circles based on one, two, or three points. Using a set-covering algorithm we find a set of p such circles which cover the points in the relaxed problem (the one with m < n points). If this is possible, we check whether the n original points are covered by the solution; if so, we have a feasible solution to the problem. We now delete the largest circle, with radius r_p (which is currently an upper limit on the optimal solution), and try to find a better feasible solution. If we have a feasible solution to the relaxed problem which is not feasible for the original, we augment the relaxed problem by adding a point, preferably the one which is farthest from its nearest center. If we have a feasible solution to the original problem, and after deleting the largest circle we find that the relaxed problem cannot be covered by p circles, we conclude that the latest feasible solution to the original problem is optimal. An example of the solution of a problem with ten demand points and two and three service points is given in some detail. Computational results for problems with 30 demand points and 1–30 service points, and with 100, 200, and 300 demand points and 1–3 service points, are reported.
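A minimal sketch of the circle-based covering idea, restricted for brevity to circles determined by one point (radius zero) or by a pair of points (centered at their midpoint). The three-point circles and the relax-and-augment loop described above are omitted, and a brute-force search over candidate circles stands in for the set-covering algorithm; the demand points are invented.

```python
# Cover n demand points with p candidate circles so that the largest radius
# used is as small as possible (1- and 2-point circles only; illustrative).

from itertools import combinations
from math import dist

def candidate_circles(points):
    """Circles through one point (radius 0) and through each pair of points."""
    circles = [(p, 0.0) for p in points]
    for a, b in combinations(points, 2):
        center = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        circles.append((center, dist(a, b) / 2))
    return circles

def covers(circles, points, eps=1e-9):
    return all(any(dist(p, c) <= r + eps for c, r in circles) for p in points)

def best_p_cover(points, p):
    """Brute force: the p candidate circles minimizing the largest radius used."""
    best = None
    for chosen in combinations(candidate_circles(points), p):
        if covers(chosen, points):
            worst = max(r for _, r in chosen)
            if best is None or worst < best[0]:
                best = (worst, chosen)
    return best

pts = [(0, 0), (2, 0), (1, 1), (8, 1), (9, 2), (8, 3)]
radius, circles = best_p_cover(pts, p=2)
print(round(radius, 3))   # -> 1.0: one radius-1 circle per cluster of points
```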
249.
This article argues that an increasingly sea-power-minded China will neither shelter passively in coastal waters nor throw itself into competition with the United States in the Pacific Ocean. Rather, Beijing will direct its energies toward South and Southeast Asia, through which supplies of oil, natural gas, and other commodities critical to China's economic development must pass. There China will encounter an equally sea-power-minded India that enjoys marked geostrategic advantages. Beijing will likely content itself with ‘soft power’ diplomacy in these regions until it can settle the dispute with Taiwan, freeing up resources for maritime endeavors farther from China's coasts.
250.