621.
Qualitative studies of terrorist movements frequently highlight diaspora communities as important factors in producing and sustaining terrorist activity in countries. The underlying theoretical argument is that bifurcation of tight-knit minority communities between countries nurtures separatist or irredentist sentiments among affected community members, thus prompting terrorist activity, while minority community members in other countries might mobilize financial and political resources to support terrorist activity among their compatriots. In this study, we empirically test whether transnational dispersion, versus domestic concentration, of minority communities in countries produces a higher incidence of terrorism. Conducting a series of negative binomial estimations on a reshaped database of around 170 countries from 1981 to 2006, derived from the Minorities at Risk database and the Global Terrorism Database, we determine that both transnational dispersion of kin minority communities and domestic concentration of minorities within countries increase terrorism, and that transnational dispersion is a particularly robust predictor of terrorist attacks.
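
As a rough illustration of the count-model methodology mentioned above (and not the authors' actual specification), a negative binomial regression of country-year terrorism counts on dispersion and concentration indicators could be fit along these lines; the data file and column names below are hypothetical stand-ins for variables drawn from the Minorities at Risk and Global Terrorism databases.

    # Hedged sketch: negative binomial count regression of terrorist incidents
    # on minority-dispersion indicators. File and column names are hypothetical.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    panel = pd.read_csv("country_year_panel.csv")  # hypothetical country-year panel

    nb_fit = smf.glm(
        "incidents ~ kin_dispersion + domestic_concentration + gdp_pc + population",
        data=panel,
        family=sm.families.NegativeBinomial(),  # NB variance function, default alpha
    ).fit()
    print(nb_fit.summary())
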
622.
In a recent paper, Teng, Chern, and Yang consider four possible inventory replenishment models and determine the optimal replenishment policies for them. They compare these models to identify the best alternative on the basis of minimum total relevant inventory costs. However, the total cost functions they derive for Model 1 and Model 4 are not exact, so their comparison is flawed and their conclusion about the least expensive replenishment policy is incorrect. The present article provides the actual total costs for Model 1 and Model 4 to make a correct comparison of the four models. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 602–606, 2000
623.
We undertake inference for a stochastic form of the Lanchester combat model. In particular, given battle data, we assess the type of battle that occurred and whether or not it makes any difference to the number of casualties if an army is attacking or defending. Our approach is Bayesian and we use modern computational techniques to fit the model. We illustrate our method using data from the Ardennes campaign. We compare our results with previous analyses of these data by Bracken and Fricker. Our conclusions are somewhat different from those of Bracken: where he suggests that a linear law is appropriate, we show that the logarithmic or linear-logarithmic laws fit better. We note, however, that the basic Lanchester modeling assumptions do not hold for the Ardennes data. Using Fricker's modified data, we show that although his "super-logarithmic" law fits best, the linear, linear-logarithmic, and logarithmic laws cannot be ruled out. We suggest that Bayesian methods can be used to make inference for battles in progress. We point out a number of advantages: prior information from experts or previous battles can be incorporated; predictions of future casualties are easily made; more complex models can be analyzed using stochastic simulation techniques. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 541–558, 2000
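
For readers who do not have the competing laws in mind, the generalized (deterministic) Lanchester attrition equations underlying such comparisons can be written as follows; this is the standard textbook family, not the specific stochastic parameterization fitted in the paper.

    % Generalized Lanchester attrition equations (deterministic skeleton):
    %   B(t), R(t) = Blue and Red force levels; a, b > 0 are attrition coefficients.
    \frac{dB}{dt} = -a\, R^{p} B^{q}, \qquad \frac{dR}{dt} = -b\, B^{p} R^{q}
    % Special cases:
    %   (p, q) = (1, 0): square (aimed-fire) law
    %   (p, q) = (1, 1): linear law
    %   (p, q) = (0, 1): logarithmic law
    % The linear-logarithmic and "super-logarithmic" variants mentioned in the
    % abstract correspond to further exponent choices within the same family.
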
624.
625.
In this paper we study strategies for better utilizing the network capacity of Internet Service Providers (ISPs) when they are faced with stochastic and dynamic arrivals and departures of customers attempting to log on or log off, respectively. We propose a method in which, depending on the number of modems available and the arrival and departure rates of different classes of customers, a decision is made whether to accept or reject a log-on request. The problem is formulated as a continuous-time Markov Decision Process for which optimal policies can be readily derived using techniques such as value iteration. This decision maximizes the discounted value to ISPs while improving service levels for higher-class customers. The methodology is similar to yield management techniques successfully used in airlines, hotels, etc. However, there are sufficient differences, such as the absence of a predefined time horizon or of reservations, that make this model interesting and challenging to pursue. This work was completed in collaboration with one of the largest ISPs in Connecticut. The problem is topical, and approaches such as those proposed here are sought by users. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 348–362, 2001
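
As an indicative sketch of the kind of model described above (not the authors' exact formulation), a uniformized, discounted value iteration for a two-class admission-control problem on C modems might look like the following; all rates, rewards, and the discount rate are made-up numbers.

    # Hedged sketch: value iteration for a uniformized two-class admission-control
    # MDP on C modems. Every numeric parameter below is hypothetical.
    C = 10                      # number of modem lines
    lam = {1: 3.0, 2: 5.0}      # arrival rates: class 1 (premium), class 2 (basic)
    mu = 1.0                    # per-customer log-off rate
    reward = {1: 5.0, 2: 1.0}   # lump-sum value of accepting a log-on of each class
    beta = 0.1                  # continuous-time discount rate

    Lam = lam[1] + lam[2] + C * mu      # uniformization constant
    alpha = Lam / (beta + Lam)          # effective one-step discount factor

    states = [(n1, n2)
              for n1 in range(C + 1) for n2 in range(C + 1) if n1 + n2 <= C]
    V = {s: 0.0 for s in states}

    for _ in range(2000):               # value-iteration sweeps
        V_new = {}
        for (n1, n2) in states:
            total = 0.0
            for k in (1, 2):
                # Arrival of a class-k customer: accept if a modem is free and
                # it is worthwhile, otherwise reject (state unchanged).
                if n1 + n2 < C:
                    nxt = (n1 + 1, n2) if k == 1 else (n1, n2 + 1)
                    val = max(reward[k] + V[nxt], V[(n1, n2)])
                else:
                    val = V[(n1, n2)]
                total += lam[k] * val
            # Log-offs of connected customers.
            if n1:
                total += n1 * mu * V[(n1 - 1, n2)]
            if n2:
                total += n2 * mu * V[(n1, n2 - 1)]
            # Fictitious self-transition that pads the total rate up to Lam.
            total += (C - n1 - n2) * mu * V[(n1, n2)]
            V_new[(n1, n2)] = alpha * total / Lam
        V = V_new

    # A class-2 log-on is rejected in state (n1, n2) exactly when
    # reward[2] + V[(n1, n2 + 1)] < V[(n1, n2)], i.e. when the free modem is
    # worth more to future (mostly premium) demand than the immediate reward.
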
626.
A mathematical formulation of an optimization model designed to select projects for inclusion in an R&D portfolio, subject to a wide variety of constraints (e.g., capital, headcount, strategic intent), is presented. The model is similar to others that have previously appeared in the literature and takes the form of a mixed integer programming (MIP) problem known as the multidimensional knapsack problem. Exact solution of such problems is generally difficult, but can be accomplished in reasonable time using specialized algorithms. The main contribution of this paper is an examination of two important issues related to the formulation of project selection models such as the one presented here. If partial funding and implementation of projects is allowed, the resulting formulation is a linear programming (LP) problem which can be solved quite easily. Several plausible assumptions about how partial funding affects project value are presented. In general, our examples suggest that the problem might best be formulated as a nonlinear programming (NLP) problem, but that further research is needed to determine an appropriate expression for the value of a partially funded project. In light of that gap in the current body of knowledge, and for practical reasons, the LP relaxation of this model is preferred. The LP relaxation can be implemented in a spreadsheet (even for relatively large problems) and gives reasonable results when applied to a test problem based on GM's R&D project selection process. There has been much discussion in the literature on assigning a quantitative measure of value to each project. Although many alternatives have been suggested, no single approach is universally accepted, and there does seem to be general agreement that all of the proposed methods are subject to considerable uncertainty. A systematic way to examine the sensitivity of project selection decisions to variations in the measure of value is developed. It is shown that the solution for the illustrative problem is reasonably robust to rather large variations in the measure of value. We cannot, however, conclude that this would be the case in general. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 18–40, 2001
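
For concreteness, the multidimensional-knapsack form of such a selection model, and the LP relaxation that permits partial funding, can be sketched as follows (generic notation, not the paper's).

    % Project-selection model as a multidimensional knapsack problem:
    %   x_j = 1 if project j is funded, 0 otherwise
    %   v_j = value of project j; a_{ij} = amount of resource i it consumes;
    %   b_i = available amount of resource i (capital, headcount, ...).
    \max_{x} \; \sum_{j=1}^{n} v_j x_j
    \quad \text{s.t.} \quad \sum_{j=1}^{n} a_{ij} x_j \le b_i \;\; (i = 1, \dots, m),
    \qquad x_j \in \{0, 1\} \;\; (j = 1, \dots, n)
    % LP relaxation (partial funding): replace x_j \in \{0,1\} by 0 \le x_j \le 1.
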
627.
The Selection Allocation Problem (SAP) is a single-period decision problem that involves selecting profit-maximizing (or cost-minimizing) activities from various distinct groups and determining the volume of those activities. The activities in each group are selected subject to the availability of that group's resource, which is provided by either pooling or blending raw inputs from several potential sources. Embedded in the decision process is the additional task of determining how much raw input is to be allocated to each group to form the resource for that group. Instances of this problem can be found in many different areas, such as tool selection for flexible manufacturing systems, facility location, and funding for social services. Our goal in this paper is to identify and exploit special structures in the SAP and use those structures to develop an efficient solution procedure. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 707–725, 1999
628.
Capacity improvement and conditional penalties are two computational aids for fathoming subproblems in a branch-and-bound procedure. In this paper, we apply these techniques to the fixed charge transportation problem (FCTP) and show how relaxations of the FCTP subproblems can be posed as concave minimization problems (rather than LP relaxations). Using the concave relaxations, we propose a new conditional penalty and three new types of capacity improvement techniques for the FCTP. Based on computational experiments using a standard set of FCTP test problems, the new capacity improvement and penalty techniques yield a threefold reduction in the CPU time of the branch-and-bound algorithm and nearly a tenfold reduction in the number of subproblems that need to be evaluated in the branch-and-bound enumeration tree. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 341–355, 1999
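
For context, the underlying fixed charge transportation problem can be stated in its textbook form as below; the concave relaxations and penalties developed in the paper are built on top of this model.

    % Fixed charge transportation problem (FCTP), textbook form:
    %   x_{ij} = flow shipped from supply node i to demand node j
    %   y_{ij} = 1 if route (i, j) is opened, incurring fixed charge f_{ij}
    %   c_{ij} = unit shipping cost; s_i = supply at i; d_j = demand at j.
    \min \; \sum_{i} \sum_{j} \left( c_{ij} x_{ij} + f_{ij} y_{ij} \right)
    \quad \text{s.t.} \quad \sum_{j} x_{ij} = s_i \;\; \forall i, \qquad
    \sum_{i} x_{ij} = d_j \;\; \forall j,
    \qquad 0 \le x_{ij} \le \min(s_i, d_j)\, y_{ij}, \qquad y_{ij} \in \{0, 1\}
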
629.
We consider a routing policy that forms a dynamic shortest path in a network with independent, positive, and discrete random arc costs. When visiting a node in the network, the costs of the arcs going out of this node are realized, and the policy then determines which node to visit next, with the objective of minimizing the expected cost from the current node to the destination node. This paper proposes an approach that mimics the classical label-correcting approach to compute the expected path cost. First, we develop a sequential implementation of this approach and establish some of its properties. Next, we develop stochastic versions of some well-known label-correcting methods, including the first-in-first-out method, the two-queue method, the threshold algorithms, and the small-label-first principle. We perform numerical experiments to evaluate these methods and observe that fast methods for deterministic networks can become very slow for stochastic networks. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 769–789, 1998
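
Under the stated assumptions (independent, positive, discrete arc costs, revealed on arrival at a node), the expected cost-to-go of such a dynamic policy satisfies V(i) = E[min over successors j of (C_ij + V(j))] with V(destination) = 0. Below is a minimal fixed-point sketch of that recursion on a tiny hypothetical graph; it is not the paper's FIFO, two-queue, or threshold implementation.

    # Hedged sketch: expected cost-to-go V(i) for a dynamic routing policy with
    # independent discrete random arc costs, by enumerating joint realizations
    # of the arcs leaving each node. The example graph and costs are made up.
    import itertools

    # arcs[i] = list of (successor, [(cost, prob), ...]) pairs
    arcs = {
        "s": [("a", [(1, 0.5), (4, 0.5)]), ("b", [(2, 1.0)])],
        "a": [("t", [(1, 0.3), (5, 0.7)])],
        "b": [("t", [(2, 0.6), (3, 0.4)])],
        "t": [],
    }
    dest = "t"

    V = {i: 0.0 for i in arcs}          # initial labels
    for _ in range(100):                # fixed-point sweeps
        V_new = {}
        for i, out in arcs.items():
            if i == dest:
                V_new[i] = 0.0
            elif not out:
                V_new[i] = float("inf")  # dead end: destination unreachable
            else:
                expected = 0.0
                # Enumerate every joint realization of the costs on arcs leaving i.
                for combo in itertools.product(*[dist for _, dist in out]):
                    prob = 1.0
                    best = float("inf")
                    for (j, _), (cost, p) in zip(out, combo):
                        prob *= p
                        best = min(best, cost + V[j])
                    expected += prob * best
                V_new[i] = expected
        if max(abs(V_new[i] - V[i]) for i in arcs) < 1e-12:
            V = V_new
            break
        V = V_new

    print(V)  # V["s"] is the expected cost of the dynamic policy from "s" to "t"
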
630.
We develop polynomial algorithms for several cases of the NP-hard open shop scheduling problem of minimizing the number of late jobs, by utilizing some recent results for the open shop makespan problem. For the two-machine common due date problem, we assume that either the machines or the jobs are ordered. For the m-machine common due date problem, we assume that one machine is maximal and impose a restriction on its load. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 525–532, 1998