201.
In this article we consider a single-server, bulk-service queueing system in which the waiting room has finite capacity. The arrival process is Poisson, and all arrivals that occur when the waiting room is full are lost. The service times are independent, generally distributed random variables whose distribution depends on the size of the batch being served. Using renewal theory, we derive the time-dependent solution for the system-size probabilities at arbitrary time points. We also give expressions for the distribution of the virtual waiting time in the queue at any time t.
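The queue described above can be sketched as a small discrete-event simulation. This is a toy stand-in, not the paper's renewal-theoretic analysis: it assumes exponential service times with an arbitrary batch-size-dependent mean (the paper allows general, batch-dependent distributions), and it estimates the loss fraction and mean waiting-room level rather than the full time-dependent probabilities.

```python
import random

def simulate_bulk_queue(lam, service_mean, capacity, batch_max, T, seed=0):
    """Simulate a single-server bulk-service queue with a finite waiting room.

    Arrivals: Poisson(lam); customers arriving to a full room are lost.
    Service: batches of up to batch_max waiting customers, with a
    batch-size-dependent mean service time (an assumed form).
    Returns (fraction of arrivals lost, time-average waiting-room level).
    """
    rng = random.Random(seed)
    t = 0.0
    queue = 0          # customers in the waiting room
    busy_until = None  # completion time of the batch in service, or None
    arrivals = lost = 0
    area = 0.0         # time-integral of the waiting-room level
    next_arrival = rng.expovariate(lam)
    while t < T:
        # next event: service completion or arrival, whichever is sooner
        if busy_until is not None and busy_until < next_arrival:
            t_next, event = busy_until, "done"
        else:
            t_next, event = next_arrival, "arr"
        area += queue * (min(t_next, T) - t)
        t = t_next
        if t >= T:
            break
        if event == "arr":
            arrivals += 1
            if queue < capacity:
                queue += 1
            else:
                lost += 1          # waiting room full: arrival is lost
            next_arrival = t + rng.expovariate(lam)
        else:
            busy_until = None
        # start a new batch whenever the server is idle and work waits
        if busy_until is None and queue > 0:
            batch = min(queue, batch_max)
            queue -= batch
            # batch-size-dependent mean service time (assumed sqrt scaling)
            busy_until = t + rng.expovariate(1.0 / (service_mean * batch ** 0.5))
    loss_frac = lost / arrivals if arrivals else 0.0
    return loss_frac, area / T
```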
202.
The problem considered in this article is a generalization of the familiar makespan problem, in which n jobs are allocated among m parallel processors so as to minimize the maximum time (or cost) on any processor. Our problem is more general in that we allow the processors to have (a) different initial costs, (b) different utilization levels before new costs are incurred, and (c) different rates of cost increase. A heuristic adapted from the bin-packing problem is shown to provide solutions that approach the optimum as the number of iterations increases. Computational testing, over a large number of randomly generated problem instances, suggests that heuristic errors are, on average, very small.
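A longest-processing-time-first greedy in the spirit of bin-packing heuristics can illustrate the generalized cost structure (a)–(c). This is a minimal sketch under assumed parameter names, not the paper's heuristic: jobs are sorted longest-first and each is placed on the processor that yields the smallest resulting maximum cost.

```python
def greedy_makespan(jobs, init_costs, thresholds, rates):
    """Greedy heuristic for a generalized makespan problem.

    Processor i starts at cost init_costs[i], absorbs load up to
    thresholds[i] at no extra cost, then accrues rates[i] per unit of
    additional load.  Returns (job -> processor assignment, max cost).
    """
    m = len(init_costs)
    load = [0.0] * m

    def cost(i, extra=0.0):
        # cost of processor i if `extra` load were added to it
        over = max(0.0, load[i] + extra - thresholds[i])
        return init_costs[i] + rates[i] * over

    assignment = {}
    # longest jobs first, as in first-fit-decreasing bin packing
    for j, p in sorted(enumerate(jobs), key=lambda jp: -jp[1]):
        best = min(range(m),
                   key=lambda i: max(cost(k, extra=p if k == i else 0.0)
                                     for k in range(m)))
        load[best] += p
        assignment[j] = best
    return assignment, max(cost(i) for i in range(m))
```

With zero initial costs, zero thresholds, and unit rates this reduces to the classical makespan problem.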
203.
We consider the problem of scheduling n independent and simultaneously available jobs without preemption on a single machine, where the machine has a fixed maintenance activity. The objective is to find the optimal job sequence to minimize the total amount of late work, where the late work of a job is the amount of processing of the job that is performed after its due date. We first discuss the approximability of the problem. We then develop two pseudo‐polynomial dynamic programming algorithms and a fully polynomial‐time approximation scheme for the problem. Finally, we conduct extensive numerical studies to evaluate the performance of the proposed algorithms. © 2016 Wiley Periodicals, Inc. Naval Research Logistics 63: 172–183, 2016
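The objective function is easy to make concrete: for a given sequence, late work accumulates as the portion of each job processed after its due date. The sketch below evaluates one sequence, assuming jobs are non-resumable across the maintenance interval (a job that cannot finish before maintenance starts is postponed until after it); the paper's exact setting may differ.

```python
def total_late_work(sequence, p, d, maint_start, maint_len):
    """Total late work of a job sequence on one machine with a fixed
    maintenance interval [maint_start, maint_start + maint_len).

    Late work of job j = min(p_j, max(0, C_j - d_j)): the amount of its
    processing performed after its due date, capped at its length.
    """
    t = 0.0
    late = 0.0
    for j in sequence:
        # postpone a job that cannot finish before maintenance begins
        if t < maint_start and t + p[j] > maint_start:
            t = maint_start + maint_len
        elif maint_start <= t < maint_start + maint_len:
            t = maint_start + maint_len
        t += p[j]                          # completion time C_j
        late += min(p[j], max(0.0, t - d[j]))
    return late
```

An optimal sequence is one minimizing this quantity over all n! orders; the paper's dynamic programs avoid that enumeration.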
204.
We undertake inference for a stochastic form of the Lanchester combat model. In particular, given battle data, we assess the type of battle that occurred and whether or not it makes any difference to the number of casualties if an army is attacking or defending. Our approach is Bayesian and we use modern computational techniques to fit the model. We illustrate our method using data from the Ardennes campaign. We compare our results with previous analyses of these data by Bracken and Fricker. Our conclusions are somewhat different from those of Bracken. Where he suggests that a linear law is appropriate, we show that the logarithmic or linear‐logarithmic laws fit better. We note however that the basic Lanchester modeling assumptions do not hold for the Ardennes data. Using Fricker's modified data, we show that although his “super‐logarithmic” law fits best, the linear, linear‐logarithmic, and logarithmic laws cannot be ruled out. We suggest that Bayesian methods can be used to make inference for battles in progress. We point out a number of advantages: Prior information from experts or previous battles can be incorporated; predictions of future casualties are easily made; more complex models can be analysed using stochastic simulation techniques. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 541–558, 2000
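The competing attrition laws being compared can be made concrete with their deterministic forms. The sketch below Euler-integrates the square, linear, and logarithmic laws; it is a deterministic toy only, whereas the paper fits stochastic versions of these laws with Bayesian methods.

```python
def lanchester_casualties(b0, r0, kb, kr, law, steps=10000, dt=0.01):
    """Euler integration of deterministic Lanchester attrition laws.

    law = "square":  dB/dt = -kr * R         (aimed fire)
    law = "linear":  dB/dt = -kr * R * B     (area fire)
    law = "log":     dB/dt = -kr * B         (logarithmic law)
    Returns the final strengths (B, R), clamped at zero.
    """
    b, r = float(b0), float(r0)
    for _ in range(steps):
        if law == "square":
            db, dr = -kr * r, -kb * b
        elif law == "linear":
            db, dr = -kr * r * b, -kb * b * r
        elif law == "log":
            db, dr = -kr * b, -kb * r
        else:
            raise ValueError(law)
        b = max(0.0, b + db * dt)
        r = max(0.0, r + dr * dt)
    return b, r
```

Under the square law the quantity kb·B² − kr·R² is conserved while both forces are positive, so with equal kill rates a force of 100 versus 50 should finish near √(100² − 50²) ≈ 86.6.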
205.
A univariate meta-analysis is often used to summarize the results of various studies of the same research hypothesis concerning an effect of interest. When several marketing studies each produce estimates of more than one effect, a multivariate meta-analysis can be conducted. Two problems arise in such a multivariate meta-analysis: (1) several effects estimated in one model may be correlated with each other, but their correlation is seldom published; and (2) an estimated effect in one model may be correlated with the corresponding effect in another model because of similar model specifications or a partly shared data set, but their correlation is not known. Situation (2) arises often in military recruiting studies. We employ a Monte Carlo simulation to evaluate how neglecting such potential correlation affects the results of a multivariate meta-analysis in terms of Type I error, Type II error, and MSE. Simulation results indicate that the effect is not significant; what matters instead is the size of the variance component due to random error in the multivariate effects. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 500–510, 2000.
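The kind of Monte Carlo experiment described can be sketched in a few lines. This simplified stand-in (not the paper's design) draws correlated study-level estimates via a shared random component, pools them as if they were independent, and reports the resulting Type I error rate of a z-test; with positive correlation the pooled standard error is understated and the rejection rate inflates past the nominal 5%.

```python
import random
import statistics

def meta_sim(n_studies, true_effect, sd, rho, reps=2000, seed=1):
    """Monte Carlo sketch: Type I error of a pooled z-test that ignores
    pairwise correlation rho between study-level effect estimates.

    Each replicate draws n_studies estimates sharing a common random
    component, pools them assuming independence, and tests
    H0: effect = true_effect at the 5% level.  Returns the rejection rate
    (nominal 0.05 when rho = 0).
    """
    rng = random.Random(seed)
    se_pooled = sd / n_studies ** 0.5     # SE assumed under independence
    rejections = 0
    for _ in range(reps):
        shared = rng.gauss(0.0, sd)       # component common to all studies
        ests = [true_effect
                + rho ** 0.5 * shared
                + (1 - rho) ** 0.5 * rng.gauss(0.0, sd)
                for _ in range(n_studies)]
        z = (statistics.mean(ests) - true_effect) / se_pooled
        if abs(z) > 1.96:
            rejections += 1
    return rejections / reps
```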
206.
We investigate the strategy of transshipments in a dynamic deterministic demand environment over a finite planning horizon. This is the first time that transshipments have been examined in a dynamic or deterministic setting. We consider a system of two locations which replenish their stock from a single supplier, and where transshipments between the locations are possible. Our model includes fixed (possibly joint) and variable replenishment costs, fixed and variable transshipment costs, as well as holding costs for each location and transshipment costs between locations. The problem is to determine how much to replenish and how much to transship each period; thus this work can be viewed as a synthesis of transshipment problems in a static stochastic setting and multilocation dynamic deterministic lot sizing problems. We provide interesting structural properties of optimal policies which enhance our understanding of the important issues which motivate transshipments and allow us to develop an efficient polynomial time algorithm for obtaining the optimal strategy. By exploring the reasons for using transshipments, we enable practitioners to envision the sources of savings from using this strategy and therefore motivate them to incorporate it into their replenishment strategies. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48:386–408, 2001
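The trade-off the model captures, paying a unit transshipment cost to avoid a fixed replenishment cost, can be shown on a tiny instance by brute force. This exponential enumeration is only a sketch with simplified costs (one fixed cost K per order, unit holding cost h, unit transshipment cost tr, no variable replenishment cost); the paper derives structural properties that yield a polynomial-time algorithm instead.

```python
from itertools import product

def best_two_location_plan(stock, demand, K, h, tr, max_q=5):
    """Brute-force two-location lot sizing with transshipments.

    Each period: replenish (q1, q2), then transship x (positive means
    location 1 -> 2), then satisfy known demand with no backlogging.
    Returns (minimum cost, the optimal plan) or None if infeasible.
    """
    qs = range(max_q + 1)
    ts = range(-max_q, max_q + 1)
    best = None
    for plan in product(product(qs, qs, ts), repeat=len(demand)):
        s1, s2 = stock
        cost = 0.0
        feasible = True
        for (q1, q2, x), (d1, d2) in zip(plan, demand):
            cost += (K if q1 else 0) + (K if q2 else 0) + tr * abs(x)
            s1, s2 = s1 + q1 - x, s2 + q2 + x
            if s1 < d1 or s2 < d2:         # shortage: plan infeasible
                feasible = False
                break
            s1, s2 = s1 - d1, s2 - d2
            cost += h * (s1 + s2)          # end-of-period holding cost
        if feasible and (best is None or cost < best[0]):
            best = (cost, plan)
    return best
```

In the test instance, shipping 3 units between locations (cost 3) beats placing a fixed-cost order (cost 10), so the optimal plan transships.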
207.
We address the problem of inventory management in a two‐location inventory system, in which transshipments are carried out as a means of emergency or alternative supply after demand has been realized. This model differs from previous ones in its replenishment cost structure, in which nonnegligible fixed replenishment costs and a joint replenishment cost are considered. The single-period planning horizon is analyzed, with the form and several properties of the optimal replenishment and transshipment policies developed, discussed, and illustrated. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 525–547, 1999
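The emergency-transshipment step after demand realization can be sketched directly: move stock from the surplus location to the shortage location whenever the unit transshipment cost is below the unit shortage penalty. This is a simplified illustration only; the paper's contribution is the earlier replenishment decision under fixed and joint ordering costs, which is omitted here, and the penalty parameter is an assumption.

```python
def emergency_transship(stock, demand, t_cost, penalty):
    """After demands are realized at two locations, transship from the
    surplus site to the shortage site when it is cheaper than incurring
    the unit shortage penalty.

    Returns (net quantity shipped from location 0 to location 1,
             total transshipment plus shortage cost).
    """
    surplus = [max(0, s - d) for s, d in zip(stock, demand)]
    short = [max(0, d - s) for s, d in zip(stock, demand)]
    qty = 0
    cost = 0.0
    for i, j in ((0, 1), (1, 0)):
        # ship only if the unit move is cheaper than the unit penalty
        move = min(surplus[i], short[j]) if t_cost < penalty else 0
        cost += t_cost * move + penalty * (short[j] - move)
        qty += move if i == 0 else -move
    return qty, cost
```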
208.
We apply the techniques of response surface methodology (RSM) to approximate the objective function of a two‐stage stochastic linear program with recourse. In particular, the objective function is estimated, in the region of optimality, by a quadratic function of the first‐stage decision variables. The resulting response surface can provide valuable modeling insight, such as directions of minimum and maximum sensitivity to changes in the first‐stage variables. Latin hypercube (LH) sampling is applied to reduce the variance of the recourse function point estimates that are used to construct the response surface. Empirical results show the value of the LH method by comparing it with strategies based on independent random numbers, common random numbers, and the Schruben‐Margolin assignment rule. In addition, variance reduction with LH sampling can be guaranteed for an important class of two‐stage problems which includes the classical capacity expansion model. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 753–776, 1999
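Latin hypercube sampling itself is simple to sketch: each dimension's range is split into n equal strata, and each stratum is hit exactly once, which is the stratification that drives the variance reduction discussed above. A minimal implementation on the unit hypercube:

```python
import random

def latin_hypercube(n, d, seed=0):
    """Latin hypercube sample of n points in [0, 1)^d.

    For each dimension, a random permutation assigns one point to each of
    the n equal strata, and a uniform jitter places it inside its stratum.
    """
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)                        # one point per stratum
        cols.append([(p + rng.random()) / n for p in perm])
    # transpose the per-dimension columns into d-dimensional points
    return [tuple(col[i] for col in cols) for i in range(n)]
```

Estimating the recourse function mean over an LH design, rather than over independent draws, is what reduces the variance of the point estimates used to fit the quadratic response surface.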
209.
The quantile is an important quantity in reliability analysis, as it is related to the resistance level used to define failure events. This study develops a computationally efficient sampling method for estimating extreme quantiles using stochastic black-box computer models. Importance sampling has been widely employed as a powerful variance reduction technique to reduce estimation uncertainty and improve computational efficiency in many reliability studies. However, when applied to quantile estimation, importance sampling faces challenges, because a good choice of the importance sampling density relies on information about the unknown quantile. We propose an adaptive method that refines the importance sampling density parameter toward the unknown target quantile value over the iterations. The proposed adaptive scheme allows us to use the simulation outcomes obtained in previous iterations to steer the simulation process toward important input areas. We prove some convergence properties of the proposed method and show that our approach can achieve variance reduction over crude Monte Carlo sampling. We demonstrate its estimation efficiency through numerical examples and a wind turbine case study.
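The adaptive idea, using each iteration's quantile estimate to recenter the importance sampling density, can be sketched on a toy problem. Here the "black box" is just a standard normal, the proposal is a shifted normal, and the quantile is read off an importance-weighted tail estimate; this is an illustrative scheme under those assumptions, not the paper's algorithm.

```python
import math
import random

def norm_pdf(x, mu=0.0):
    """Density of N(mu, 1)."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def adaptive_is_quantile(p, iters=5, n=4000, seed=0):
    """Adaptive importance sampling estimate of the p-quantile of N(0, 1).

    Each iteration samples from N(mu, 1), estimates the upper tail
    P(X > y) with unnormalised importance weights, locates y where the
    tail mass reaches 1 - p, and moves mu to that estimate so later
    iterations concentrate effort near the target tail.
    """
    rng = random.Random(seed)
    mu = 0.0                       # proposal mean, refined each iteration
    est = 0.0
    for _ in range(iters):
        xs = [rng.gauss(mu, 1.0) for _ in range(n)]
        ws = [norm_pdf(x) / norm_pdf(x, mu) for x in xs]
        # walk down from the largest sample until the weighted tail
        # mass (1/n) * sum of w over {x > y} reaches 1 - p
        acc = 0.0
        for x, w in sorted(zip(xs, ws), reverse=True):
            acc += w / n
            est = x
            if acc >= 1.0 - p:
                break
        mu = est                   # recenter the proposal at the estimate
    return est
```

For p = 0.999 the true quantile is about 3.09; once the proposal is centered near it, tail weights are bounded and the estimate stabilizes.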
210.
Drawing on two recently published essays by the current writer that assess the dismal performance record of the Planning, Programming, and Budgeting System in enabling communist-legacy defence institutions in Central and Eastern Europe to develop viable defence plans, this essay argues the need for deep reforms in the region’s defence institutions. To guide this reform effort, pragmatic solutions are suggested to improve the ability of these organisations to produce viable defence plans. The recommended reforms are: (1) conduct conceptual and cultural “audits”; (2) make operational and financial data central to decision-making; (3) change the current organisational sociology; (4) examine planning methods and practices; and (5) stress the need to adopt policy frameworks to drive the operation of defence institutions.