Article Search
  Paid full text: 168 articles
  Free: 2 articles
  2020: 3 articles
  2019: 6 articles
  2018: 9 articles
  2017: 6 articles
  2016: 6 articles
  2015: 5 articles
  2014: 3 articles
  2013: 62 articles
  2012: 2 articles
  2010: 1 article
  2009: 1 article
  2008: 3 articles
  2007: 1 article
  2006: 2 articles
  2005: 1 article
  2001: 2 articles
  2000: 1 article
  1999: 2 articles
  1998: 4 articles
  1997: 4 articles
  1996: 5 articles
  1995: 4 articles
  1994: 3 articles
  1993: 3 articles
  1992: 1 article
  1991: 1 article
  1990: 3 articles
  1989: 2 articles
  1988: 2 articles
  1987: 1 article
  1986: 4 articles
  1985: 1 article
  1984: 2 articles
  1982: 3 articles
  1980: 2 articles
  1975: 2 articles
  1973: 1 article
  1972: 1 article
  1971: 1 article
  1968: 2 articles
  1967: 1 article
  1966: 1 article
Sort order: 170 results in total, search took 15 ms
81.
Learning curves have been used extensively to predict future costs in the airframe and other industries. This paper deals with the effect of perturbations induced by design changes on learning curves. Equations are developed and applied that make it possible to predict future costs accurately in a perturbed environment. The formulations can be used effectively in EDP (electronic data processing) programs.
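The perturbation idea can be illustrated with the standard Wright learning curve, unit_cost(x) = a·x^b with b = log2(learning rate). The sketch below does not reproduce the paper's equations; it assumes, purely for illustration, that a design change at a given unit lets only a fraction of accumulated experience carry over (the carryover parameter and function names are hypothetical).

# Illustrative sketch (not the paper's equations): a Wright learning curve
# with a design-change perturbation modeled as partial loss of accumulated
# experience. Parameter names (first_unit_cost, learning_rate, carryover)
# are hypothetical.
import math

def unit_cost(x, first_unit_cost, learning_rate):
    """Wright model: doubling cumulative quantity multiplies unit cost by learning_rate."""
    b = math.log(learning_rate, 2)   # e.g. an 85% curve gives b ~ -0.234
    return first_unit_cost * x ** b

def perturbed_unit_cost(x, first_unit_cost, learning_rate, change_unit, carryover=0.6):
    """After a design change at unit `change_unit`, only a fraction of prior
    experience carries over, so the effective unit number is reduced."""
    if x < change_unit:
        return unit_cost(x, first_unit_cost, learning_rate)
    effective_x = carryover * (change_unit - 1) + (x - change_unit + 1)
    return unit_cost(effective_x, first_unit_cost, learning_rate)

if __name__ == "__main__":
    total = sum(perturbed_unit_cost(x, 1000.0, 0.85, change_unit=50) for x in range(1, 101))
    print(f"Predicted cost of first 100 units with a change at unit 50: {total:,.0f}")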
82.
83.
Traditional methods of due-date assignment presented in the literature and used in practice generally assume cost-of-earliness and cost-of-tardiness functions that may bear little resemblance to true costs. For example, practitioners using ordinary least-squares (OLS) regression implicitly minimize a quadratic cost function symmetric about the due date, thereby assigning equal second-order costs to early and tardy completion. In this article the consequences of such assumptions are pointed out, and a cost-based assignment scheme is suggested whereby the cost of early completion may differ in form and/or degree from the cost of tardiness. Two classical approaches (OLS regression and mathematical programming) as well as a neural-network methodology for solving this problem are developed and compared on three hypothetical shops using simulation techniques. It is found for the cases considered that: (a) implicitly ignoring cost-based assignments can be very costly; (b) simpler regression-based rules cited in the literature are very poor cost performers; (c) if the earliness and tardiness cost functions are both linear, linear programming and neural networks are the methodologies of choice; and (d) if the form of the earliness cost function differs from that of the tardiness cost function, neural networks are statistically superior performers. Finally, it is noted that neural networks can be used for a wide range of cost functions, whereas the other methodologies are significantly more restricted. © 1997 John Wiley & Sons, Inc.
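As a hedged, simplified illustration of the cost-based idea (a single due-date allowance rather than the article's simulated shops): under a linear earliness cost c_e and tardiness cost c_t, the expected-cost-minimizing allowance is the c_t/(c_e + c_t) quantile of the flowtime distribution, whereas symmetric quadratic (OLS-style) fitting targets the mean. The flowtime distribution and cost values below are hypothetical.

# Simplified sketch of a cost-based due-date allowance (not the article's
# simulated shops). With linear costs c_e per unit early and c_t per unit
# tardy, the optimal allowance is the c_t/(c_e + c_t) quantile of flowtime;
# OLS regression implicitly targets the mean.
import numpy as np

rng = np.random.default_rng(0)
flowtimes = rng.gamma(shape=2.0, scale=5.0, size=10_000)  # hypothetical job flowtimes

c_early, c_tardy = 1.0, 4.0                 # tardiness assumed 4x as costly as earliness
ols_allowance = flowtimes.mean()            # what symmetric (OLS-style) fitting targets
cost_allowance = np.quantile(flowtimes, c_tardy / (c_early + c_tardy))

def expected_cost(allowance):
    early = np.maximum(allowance - flowtimes, 0.0)
    tardy = np.maximum(flowtimes - allowance, 0.0)
    return (c_early * early + c_tardy * tardy).mean()

print(f"mean-based allowance {ols_allowance:.1f}: cost {expected_cost(ols_allowance):.2f}")
print(f"quantile-based allowance {cost_allowance:.1f}: cost {expected_cost(cost_allowance):.2f}")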
84.
We develop a simple approximation for multistage production-inventory systems with limited production capacity and variable demands. Each production stage follows a base-stock policy for echelon inventory, constrained by production capacity and the availability of upstream inventory. Our objective is to find base-stock levels that approximately minimize holding and backorder costs. The key step in our procedure approximates the distribution of echelon inventory by a sum of exponentials; the parameters of the exponentials are chosen to match asymptotically exact expressions. The computational requirements of the method are minimal. In a test bed of 72 problems, each with five production stages, the average relative error for our approximate optimization procedure is 1.9%. © 1996 John Wiley & Sons, Inc.
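For orientation only, the single-stage version of the base-stock problem has a closed form: with holding cost h and backorder cost b per unit, the cost-minimizing base-stock level is the b/(b + h) quantile of lead-time demand. The sketch below uses that single-stage result with hypothetical numbers; it does not reproduce the paper's multistage sum-of-exponentials approximation.

# Single-stage illustration only; the paper's multistage sum-of-exponentials
# approximation is not reproduced here. With holding cost h and backorder
# cost b per unit, the cost-minimizing base-stock level is the b/(b + h)
# quantile of lead-time demand (newsvendor critical ratio).
from scipy.stats import poisson

h, b = 1.0, 9.0                 # hypothetical holding and backorder costs
lead_time_demand_mean = 20.0    # hypothetical Poisson lead-time demand

critical_ratio = b / (b + h)
base_stock = int(poisson.ppf(critical_ratio, lead_time_demand_mean))
print(f"critical ratio = {critical_ratio:.2f}, base-stock level = {base_stock}")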
85.
86.
Does an emergency such as a natural disaster lead to a surge of terrorism? This paper contributes to the emerging literature on this issue. We consider the experience of 129 countries during the period 1998–2012 to determine the effect of a natural disaster on both domestic and transnational terrorism. We also control for endogeneity, using a country's health-care expenditure and land area as instruments. In contrast to the existing literature, we measure the extent of terrorism by the value of property damage. The results indicate that after natural disasters, (a) transnational terrorism increases with a lag, and (b) a statistically significant impact on domestic terrorism is not observed.
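The instrumental-variables step the abstract refers to can be sketched as two-stage least squares: regress the endogenous disaster measure on the instruments, then regress the damage outcome on the fitted values. The example below runs on simulated data with made-up coefficients, not the paper's country-year panel.

# Hedged sketch of two-stage least squares (2SLS) on simulated data; the
# paper's dataset and exact specification are not reproduced.
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.normal(size=(n, 2))                     # instruments (stand-ins for health spending, land area)
u = rng.normal(size=n)                          # unobserved confounder
disaster = z @ np.array([0.8, 0.5]) + u + rng.normal(size=n)   # endogenous regressor
damage = 1.5 * disaster + 2.0 * u + rng.normal(size=n)         # outcome (property damage)

def add_const(x):
    return np.column_stack([np.ones(len(x)), x])

# Stage 1: project the endogenous regressor onto the instruments.
Z = add_const(z)
disaster_hat = Z @ np.linalg.lstsq(Z, disaster, rcond=None)[0]

# Stage 2: regress the outcome on the fitted values.
beta = np.linalg.lstsq(add_const(disaster_hat), damage, rcond=None)[0]
print(f"2SLS estimate of the disaster effect: {beta[1]:.2f}")  # close to the true 1.5

# Naive OLS for comparison (biased upward by the confounder).
beta_ols = np.linalg.lstsq(add_const(disaster), damage, rcond=None)[0]
print(f"Naive OLS estimate: {beta_ols[1]:.2f}")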
87.
This article generalizes the models in Guo and Zipkin, who focus on exponential service times, to systems with phase‐type service times. Each arriving customer decides whether to stay or balk based on his expected waiting cost, conditional on the information provided. We show how to compute the throughput and customers' average utility in each case. We then obtain some analytical and numerical results to assess the effect of more or less information. We also show that service‐time variability degrades the system's performance. © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008
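One building block of such models is computing moments of a phase-type service time from its representation (α, T): E[S] = −αT⁻¹1 and E[S²] = 2αT⁻²1, quantities that feed into a customer's expected waiting cost. The sketch below evaluates them for a hypothetical two-phase Coxian distribution; it is not the article's full balking model.

# Hedged sketch: moments of a phase-type service time from its (alpha, T)
# representation; a building block only, not the article's full model.
# E[S] = -alpha @ inv(T) @ 1 and E[S^2] = 2 * alpha @ inv(T)^2 @ 1.
import numpy as np

# Hypothetical 2-phase Coxian representation.
alpha = np.array([1.0, 0.0])
T = np.array([[-3.0, 1.5],
              [0.0, -2.0]])

T_inv = np.linalg.inv(T)
ones = np.ones(2)
mean = -alpha @ T_inv @ ones
second_moment = 2.0 * alpha @ T_inv @ T_inv @ ones
scv = second_moment / mean**2 - 1.0     # squared coefficient of variation

print(f"E[S] = {mean:.3f}, E[S^2] = {second_moment:.3f}, SCV = {scv:.3f}")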
88.
89.
Decades ago, simulation was famously characterized as a “method of last resort,” to which analysts should turn only “when all else fails.” In those intervening decades, the technologies supporting simulation—computing hardware, simulation‐modeling paradigms, simulation software, design‐and‐analysis methods—have all advanced dramatically. We offer an updated view that simulation is now a very appealing option for modeling and analysis. When applied properly, simulation can provide fully as much insight, with as much precision as desired, as can exact analytical methods that are based on more restrictive assumptions. The fundamental advantage of simulation is that it can tolerate far less restrictive modeling assumptions, leading to an underlying model that is more reflective of reality and thus more valid, leading to better decisions. Published 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 293–303, 2015
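As a toy illustration of the claim (not taken from the article), a few lines of simulation via the Lindley recursion reproduce the exact M/M/1 mean queueing delay Wq = ρ/(μ − λ), with precision limited only by run length.

# Toy illustration: a short waiting-time simulation (Lindley recursion for
# an M/M/1 queue) matches the exact analytical mean delay Wq = rho/(mu - lam).
# Purely illustrative; not from the article.
import numpy as np

rng = np.random.default_rng(2)
lam, mu, n = 0.8, 1.0, 200_000          # arrival rate, service rate, number of customers

interarrivals = rng.exponential(1.0 / lam, size=n)
services = rng.exponential(1.0 / mu, size=n)

wait = 0.0
total_wait = 0.0
for a, s in zip(interarrivals, services):
    wait = max(0.0, wait + s - a)       # Lindley recursion: next customer's delay in queue
    total_wait += wait

rho = lam / mu
print(f"simulated mean wait: {total_wait / n:.3f}")
print(f"exact M/M/1 mean wait: {rho / (mu - lam):.3f}")   # 0.8 / 0.2 = 4.0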
90.
Determination of the gunfire probability of kill against a target requires two parameters to be taken into consideration: the likelihood of hitting the target (susceptibility) and the conditional probability of kill given a hit (vulnerability). Two commonly used methods for calculating the latter probability are (1) treating each hit upon the target independently, and (2) setting an exact number of hits to obtain a target kill. Each of these methods contains an implicit assumption about the probability distribution of the number of hits‐to‐kill. Method (1) assumes that the most likely kill scenario occurs with exactly one hit, whereas (2) implies that achieving a precise number of hits always results in a kill. These methods can produce significant differences in the predicted gun effectiveness, even if the mean number of hits‐to‐kill for each distribution is the same. We therefore introduce a new modeling approach with a more general distribution for the number of hits‐to‐kill. The approach is configurable to various classes of damage mechanism and is able to match both methods (1) and (2) with a suitable choice of parameter. We use this new approach to explore the influence of various damage accumulation models on the predicted effectiveness of weapon‐target engagements.
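The contrast between the two implicit distributions can be made concrete: method (1) makes the number of hits-to-kill geometric (each hit kills independently with some probability), while method (2) makes it deterministic. The sketch below compares the kill probability after n hits for both, fixing the same mean hits-to-kill; the numbers are illustrative and do not reproduce the paper's configurable model.

# Illustrative comparison (not the paper's model) of the two implicit
# hits-to-kill distributions: method (1) treats hits independently, so
# hits-to-kill is geometric; method (2) requires an exact number of hits,
# a deterministic distribution. Both below have mean 4 hits-to-kill, yet
# they give quite different kill probabilities after n hits.
def p_kill_independent(n_hits, p_kill_per_hit):
    """Method (1): the target is killed by n hits unless all n hits fail."""
    return 1.0 - (1.0 - p_kill_per_hit) ** n_hits

def p_kill_exact(n_hits, hits_required):
    """Method (2): the target is killed if and only if at least hits_required hits land."""
    return 1.0 if n_hits >= hits_required else 0.0

mean_hits_to_kill = 4
for n in range(1, 9):
    print(f"{n} hits: independent {p_kill_independent(n, 1 / mean_hits_to_kill):.2f}, "
          f"exact {p_kill_exact(n, mean_hits_to_kill):.0f}")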