71.
In this article we introduce a 2‐machine flowshop with processing flexibility. Two processing modes are available for each task: processing by the designated processor, or processing simultaneously by both processors. The objective studied is makespan minimization. This production environment is encountered in repetitive manufacturing shops equipped with processors that have the flexibility to execute orders either individually or in coordination. In the latter case, the product designer exploits processing synergies between the two processors so as to execute a particular task much faster than a dedicated processor could. This type of flowshop environment is also encountered in labor‐intensive assembly lines where products moving downstream can be processed either in the designated assembly stations or by pulling together the work teams of adjacent stations. This scheduling problem requires determining the mode of operation of each task and the subsequent schedule that preserves the flowshop constraints. We show that the problem is NP‐complete in the ordinary sense and obtain an optimal solution using a dynamic programming algorithm whose computational requirements are considerable for medium and large problems. We then present a number of dynamic programming relaxations and analyze their worst‐case error performance. Finally, we present a polynomial time heuristic with worst‐case error performance comparable to that of the dynamic programming relaxations. © 2003 Wiley Periodicals, Inc. Naval Research Logistics, 2004.
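The mode-selection structure can be made concrete with a small brute-force sketch. All modeling details below are illustrative assumptions, not the paper's formulation: job j takes a_j on machine 1 and b_j on machine 2 in the dedicated mode, or c_j with both machines occupied simultaneously in the coordinated mode. The paper's dynamic program is far more efficient; exhaustive search is only viable on tiny instances.

```python
# Toy brute force over sequences and processing modes (hypothetical model:
# dedicated mode uses times (a_j, b_j); coordinated mode ties up both
# machines for c_j). Not the paper's DP - illustration only.
from itertools import permutations, product

def makespan(seq, modes, jobs):
    f1 = f2 = 0.0                      # times machines 1 and 2 become free
    for j in seq:
        a, b, c = jobs[j]
        if modes[j] == "dedicated":    # classic two-stage flowshop pass
            f1 = f1 + a
            f2 = max(f1, f2) + b
        else:                          # both machines process job j together
            f1 = f2 = max(f1, f2) + c
    return max(f1, f2)

def solve(jobs):
    best = None
    for seq in permutations(range(len(jobs))):
        for m in product(["dedicated", "coordinated"], repeat=len(jobs)):
            cand = (makespan(seq, m, jobs), seq, m)
            if best is None or cand[0] < best[0]:
                best = cand
    return best

# jobs[j] = (a_j, b_j, c_j); coordination is faster but blocks both machines
print(solve([(4, 3, 5), (2, 6, 4), (5, 2, 3)]))
```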
72.
In this paper we consider n jobs and a number of identical machines in parallel. The machines are subject to breakdown and repair, so the number available may vary over time; at time t it equals m(t). Preemptions are allowed. We consider three objectives: the total completion time ∑ Cj, the makespan Cmax, and the maximum lateness Lmax. We study the conditions on m(t) under which various rules minimize the objective functions under consideration. We also analyze cases in which the jobs have deadlines to meet and cases in which the jobs are subject to precedence constraints. © 2003 Wiley Periodicals, Inc. Naval Research Logistics, 2004.
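As an illustration of the kind of rule studied, the following sketch simulates a shortest-remaining-processing-time (SRPT) policy under a time-varying machine count m(t). The unit-time quantum and integer processing times are simplifying assumptions of ours; whether such a rule is in fact optimal depends on the conditions on m(t) derived in the paper.

```python
# Unit-time simulation of preemptive SRPT with m(t) machines available.
# Assumptions (not from the paper): integer processing times, free
# preemption, a unit scheduling quantum.

def srpt_total_completion(p, m_of_t):
    """p: list of processing times; m_of_t(t): machines available at time t."""
    remaining = list(p)
    completion = [None] * len(p)
    t = 0
    while any(r > 0 for r in remaining):
        # unfinished jobs, shortest remaining processing time first
        alive = sorted((j for j in range(len(p)) if remaining[j] > 0),
                       key=lambda j: remaining[j])
        for j in alive[:m_of_t(t)]:        # run up to m(t) jobs this quantum
            remaining[j] -= 1
            if remaining[j] == 0:
                completion[j] = t + 1
        t += 1
    return sum(completion), completion

# two machines, except a breakdown leaves one machine during [3, 6)
m = lambda t: 1 if 3 <= t < 6 else 2
print(srpt_total_completion([2, 3, 5, 4], m))
```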
73.
The parallel machine replacement problem consists of finding a minimum cost replacement policy for a finite population of economically interdependent machines. In this paper, we formulate a stochastic version of the problem and analyze the structure of optimal policies under general classes of replacement cost functions. We prove that for problems with arbitrary cost functions, there exist optimal policies in which a machine is replaced only if all machines in worse states are replaced (Worse Cluster Replacement Rule). We then show that, for problems with replacement cost functions exhibiting nonincreasing marginal costs, there are optimal policies such that, in any stage, machines in the same state are either all kept or all replaced (No‐Splitting Rule). We also present an example showing that economies of scale in replacement costs do not guarantee optimal policies that satisfy the No‐Splitting Rule. These results lead to the fundamental insight that replacement decisions are driven by marginal costs, and not by economies of scale as suggested in the literature. Finally, we describe how the optimal policy structure, i.e., the No‐Splitting and Worse Cluster Replacement Rules, can be used to reduce the computational effort required to obtain optimal replacement policies. © 2005 Wiley Periodicals, Inc. Naval Research Logistics, 2005
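A toy dynamic program makes the policy structure observable on small instances. It deliberately simplifies the paper's model: deterioration here is deterministic rather than stochastic, and the cost functions are hypothetical, with a fixed setup charge giving nonincreasing marginal replacement costs, the regime in which the No-Splitting Rule applies.

```python
# Toy finite-horizon replacement DP. Hypothetical model: state i ages to
# i+1 each period (deterministic, unlike the paper's stochastic version),
# operating cost grows with state, replacement cost = setup + linear part
# (nonincreasing marginal costs). Brute-force over replace/keep subsets.
from functools import lru_cache
from itertools import product

S = 4                                        # states 0 (new) .. S (worn out)
op_cost = lambda s: 2 * s                    # per-period operating cost
rep_cost = lambda k: (10 + 3 * k) if k else 0  # cost of replacing k machines

@lru_cache(maxsize=None)
def V(states, horizon):
    if horizon == 0:
        return 0.0, ()
    best = None
    for act in product([0, 1], repeat=len(states)):   # 1 = replace machine
        nxt = tuple(sorted(min(S, (0 if a else s) + 1)
                           for s, a in zip(states, act)))
        cost = (rep_cost(sum(act))
                + sum(op_cost(0 if a else s) for s, a in zip(states, act))
                + V(nxt, horizon - 1)[0])
        if best is None or cost < best[0]:
            best = (cost, act)
    return best

print(V((1, 1, 3), 5))    # optimal cost and first-period keep/replace action
```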
74.
We study a generalization of the weighted set covering problem in which every element needs to be covered multiple times. When no set contains more than two elements, we can solve the problem in polynomial time by solving a corresponding weighted perfect b‐matching problem. In general, we may use a polynomial‐time greedy heuristic similar to the one for the classical weighted set covering problem studied by D.S. Johnson [Approximation algorithms for combinatorial problems, J Comput Syst Sci 9 (1974), 256–278], L. Lovász [On the ratio of optimal integral and fractional covers, Discrete Math 13 (1975), 383–390], and V. Chvátal [A greedy heuristic for the set‐covering problem, Math Oper Res 4(3) (1979), 233–235] to get an approximate solution for the problem. We find a worst‐case bound for the heuristic similar to that for the classical problem. In addition, we introduce a general type of probability distribution for the population of problem instances and prove that the greedy heuristic is asymptotically optimal for instances drawn from such a distribution. We also conduct computational studies comparing solutions from the heuristic with those from the commercial integer programming solver CPLEX on problem instances drawn from a more specific type of distribution. The results clearly demonstrate the benefits of using the greedy heuristic when problem instances are large. © 2003 Wiley Periodicals, Inc. Naval Research Logistics, 2005
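A minimal sketch of the greedy heuristic, in the spirit of the classical set-covering greedy: repeatedly pick the set with the smallest weight per unit of residual demand it covers. The assumption that a set may be selected multiple times is ours, made for simplicity.

```python
# Greedy heuristic for weighted set multicover (sketch, not the paper's
# exact procedure). Assumption: a set may be picked more than once; if
# copies are forbidden, remove a chosen set from further consideration.

def greedy_multicover(sets, weights, demand):
    """sets: list of frozensets; demand[e]: times element e must be covered."""
    residual = dict(demand)
    picked, total = [], 0.0
    while any(d > 0 for d in residual.values()):
        def ratio(i):
            covered = sum(1 for e in sets[i] if residual.get(e, 0) > 0)
            return weights[i] / covered if covered else float("inf")
        i = min(range(len(sets)), key=ratio)
        if ratio(i) == float("inf"):
            raise ValueError("remaining demand cannot be covered")
        for e in sets[i]:
            if residual.get(e, 0) > 0:
                residual[e] -= 1          # one unit of e's demand satisfied
        picked.append(i)
        total += weights[i]
    return total, picked

sets = [frozenset("ab"), frozenset("bc"), frozenset("ac"), frozenset("abc")]
print(greedy_multicover(sets, [2.0, 2.0, 2.0, 3.5], {"a": 2, "b": 1, "c": 2}))
```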
75.
Degradation experiments are widely used to assess the reliability of highly reliable products that are unlikely to fail under traditional life tests. To conduct a degradation experiment efficiently, several factors, such as the inspection frequency, the sample size, and the termination time, need to be considered carefully. These factors affect not only the experimental cost but also the precision of the estimate of a product's lifetime. In this paper, we deal with the optimal design of a degradation experiment. Under the constraint that the total experimental cost does not exceed a predetermined budget, the optimal decision variables are obtained by minimizing the variance of the estimated 100pth percentile of the product's lifetime distribution. An example is provided to illustrate the proposed method. Finally, a simulation study is conducted to investigate the robustness of the proposed method. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 689–706, 1999
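The structure of the design problem, if not its statistical details, can be sketched as a constrained search. The cost model and the variance function below are hypothetical placeholders; real use requires the model-specific variance of the estimated 100pth percentile.

```python
# Budget-constrained design search (structural sketch). C_OP, C_MEAS,
# C_ITEM, BUDGET and var_q are hypothetical stand-ins, not the paper's
# formulas: enumerate feasible (frequency, sample size, termination time)
# triples and keep the variance minimizer.
from itertools import product

C_OP, C_MEAS, C_ITEM, BUDGET = 0.5, 1.0, 20.0, 500.0   # hypothetical costs

def cost(f, n, T):
    """Operating + per-measurement + per-unit cost (hypothetical model)."""
    return C_OP * T + C_MEAS * n * (T // f) + C_ITEM * n

def var_q(f, n, T):
    """Placeholder for the variance of the estimated 100p-th percentile;
    here: more measurements and longer tests reduce it."""
    return 1.0 / (n * (T // f)) + 5.0 / T

feasible = [(f, n, T)
            for f, n, T in product([1, 2, 4], range(5, 31, 5), [50, 100, 200])
            if cost(f, n, T) <= BUDGET]
print(min(feasible, key=lambda d: var_q(*d)))   # best feasible design
```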
76.
This paper introduces a general or “distribution‐free” model to analyze the lifetime of components under accelerated life testing. Unlike accelerated failure time (AFT) models, the proposed model shares the advantage of being “distribution‐free” with the proportional hazard (PH) model, while overcoming the PH model's deficiency of not allowing survival curves corresponding to different values of a covariate to cross. In this research, we extend and modify the extended hazard regression (EHR) model using the partial likelihood function to analyze failure data with time‐dependent covariates. The new model can easily be adapted to create an accelerated life testing model with different types of stress loading. For example, stress loading in accelerated life testing can be a step function, cyclic, or a linear function of time. These types of stress loading reduce the testing time and increase the number of failures of components under test. The proposed EHR model with time‐dependent covariates, which incorporates multiple stress loadings, requires further verification. Therefore, we conduct an accelerated life test in the laboratory by subjecting components to time‐dependent stresses, and we compare the reliability estimates based on the developed model with those obtained from the experimental results. The combination of theoretical development of the accelerated life testing model and verification by laboratory experiments offers a unique perspective on reliability model building and verification. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 303–321, 1999
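As a rough illustration of the partial-likelihood machinery with a time-dependent (step-stress) covariate, the following is a standard Cox-type sketch, not the EHR model itself; the unit-specific stress profiles, single coefficient, and no-ties setting are all assumptions of ours.

```python
# Cox-style log partial likelihood with a step-stress covariate x_j(t).
# Hypothetical setup: each unit j has its own (low, high, step-time)
# profile; a crude grid search stands in for a proper optimizer.
import numpy as np

def stress(j, t, profiles):
    low, high, step = profiles[j]
    return low if t < step else high        # step-stress profile of unit j

def neg_log_pl(beta, times, events, profiles):
    order = np.argsort(times)
    ll = 0.0
    for k, i in enumerate(order):
        if not events[i]:
            continue                        # censored: enters risk sets only
        t = times[i]
        risk = order[k:]                    # units still on test at time t
        x_risk = np.array([stress(j, t, profiles) for j in risk])
        ll += beta * stress(i, t, profiles) - np.log(np.sum(np.exp(beta * x_risk)))
    return -ll

profiles = [(1.0, 2.0, 40.0), (1.0, 2.5, 40.0), (1.5, 1.5, 0.0),
            (1.0, 3.0, 20.0), (2.0, 2.0, 0.0)]
times = np.array([12.0, 30.0, 55.0, 70.0, 90.0])
events = np.array([1, 1, 1, 0, 1])          # 0 = censored
betas = np.linspace(-3, 3, 121)
print(betas[np.argmin([neg_log_pl(b, times, events, profiles) for b in betas])])
```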
77.
This article examines a game of multiproduct technology adoption. We consider a duopoly model in which firms choose when to switch from a traditional single-product technology to a more flexible and more expensive multiproduct technology. The multiproduct technology allows a firm to invade the other firm's market, creating a more competitive environment and reducing profits. We analyze this investment decision as a game of timing using two different equilibrium concepts. First, we utilize the “silent” equilibrium concept, where firms commit at time zero to a switching time. This concept would be applicable to situations where firms cannot observe each other's actions, or when the implementation of the technology requires long lead times and the investment decision is private information. Using this notion we find that both firms adopt the multiproduct technology simultaneously within a certain time interval. We then characterize this time interval in terms of cost and demand conditions. We also derive conditions under which sequential adoption of the multiproduct technology occurs. The second concept used is that of noisy equilibrium, where firms cannot precommit themselves to an adoption time. This concept is appropriate when investment decisions are common knowledge. In this case a firm can credibly threaten to immediately follow suit if the other firm decides to adopt. This threat is sufficient to ensure the collusive outcome where neither firm adopts the flexible technology. © 1994 John Wiley & Sons, Inc.
78.
In this paper, we present an O(nm log(U/n)) time maximum flow algorithm, where U denotes the largest arc capacity. If U = O(n), this algorithm runs in O(nm) time for all values of m and n, which is the best available running time for maximum flow problems satisfying U = O(n). Furthermore, for unit‐capacity networks the algorithm runs in O(n^{2/3}m) time. It is a two‐phase capacity scaling algorithm that is easy to implement and does not use complex data structures. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 511–520, 2000
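The flavor of capacity scaling can be conveyed with the textbook single-phase scheme below: augment only along paths whose residual capacities are at least a threshold delta, then halve delta. This generic sketch is not the paper's two-phase O(nm log(U/n)) algorithm; it only shows why scaling keeps the number of augmentations low.

```python
# Generic capacity-scaling max flow (textbook scheme, dense adjacency
# matrix for brevity). Each scaling phase pushes delta units along any
# s-t path whose arcs all have residual capacity >= delta.
from collections import deque

def max_flow(n, edges, s, t):
    """edges: list of (u, v, capacity). Returns the max flow value."""
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    U = max((c for _, _, c in edges), default=0)
    delta, flow = 1, 0
    while delta * 2 <= U:                   # largest power of two <= U
        delta *= 2
    while delta >= 1:
        while True:
            # BFS restricted to arcs with residual capacity >= delta
            parent = [-1] * n
            parent[s] = s
            q = deque([s])
            while q:
                u = q.popleft()
                for v in range(n):
                    if parent[v] < 0 and cap[u][v] >= delta:
                        parent[v] = u
                        q.append(v)
            if parent[t] < 0:               # no delta-path: next phase
                break
            v = t                           # push delta along the path
            while v != s:
                u = parent[v]
                cap[u][v] -= delta
                cap[v][u] += delta
                v = u
            flow += delta
        delta //= 2
    return flow

edges = [(0, 1, 10), (0, 2, 8), (1, 2, 5), (1, 3, 5), (2, 3, 10)]
print(max_flow(4, edges, 0, 3))             # 15
```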
79.
This article examines the historical impact of foreign fighters and how the international community has sought to counter this threat. It argues that foreign fighters have contributed significantly to the metastasis of Salafi-jihadism over the past 30 years. They have globalized local conflicts. They have brought advanced skills to battlefields. Further, the logistics infrastructure built by foreign fighters has allowed Salafi-jihadism to expand rapidly. The challenge for security officials today is how to prevent the foreign fighters in Syria and Iraq from expanding the threat of Salafi-jihadism further. To inform this effort, this article derives lessons learned from past efforts against Arab Afghans in Bosnia (1992–1995) and Abu Musab al-Zarqawi’s foreign volunteers in Iraq (2003–2008).
80.
Reliability data obtained from life tests and degradation tests have been used extensively for purposes such as estimating product reliability and predicting warranty costs. When there is more than one candidate model, an important task is to discriminate between the models. In the literature, model discrimination has often been treated as a hypothesis test, with a pairwise model discrimination procedure carried out. Because the null distribution of the test statistic is unavailable in most cases, the large-sample approximation and the bootstrap have frequently been used to find the acceptance region of the test. Although these two methods are asymptotically accurate, their performance in terms of size and power is not satisfactory for small sample sizes. To enhance small‐sample performance, we propose a new method to approximate the null distribution, which builds on the idea of generalized pivots. Conventionally, generalized pivots have often been used for interval estimation of a parameter, or a function of parameters, in the presence of nuisance parameters. In this study, we extend the idea of generalized pivots to find the acceptance region of the model discrimination test. Through extensive simulations, we show that the proposed method performs better than existing methods in discriminating between two lifetime distributions or two degradation models over a wide range of sample sizes. Two real examples are used to illustrate the proposed method.
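For context, the following sketches the conventional parametric-bootstrap acceptance region that the generalized-pivot construction is designed to improve upon; the lognormal-versus-Weibull pairing and the likelihood-ratio statistic are illustrative choices of ours, not taken from the paper.

```python
# Parametric-bootstrap model discrimination (the baseline method; the paper
# replaces the bootstrap null distribution with one built from generalized
# pivotal quantities). Hypothetical setup: H0 = lognormal, H1 = Weibull.
import numpy as np
from scipy import stats

def llr(x):
    """Maximized log-likelihood(Weibull) minus log-likelihood(lognormal)."""
    c, loc, scale = stats.weibull_min.fit(x, floc=0)
    lw = stats.weibull_min.logpdf(x, c, loc, scale).sum()
    s, loc2, scale2 = stats.lognorm.fit(x, floc=0)
    ln = stats.lognorm.logpdf(x, s, loc2, scale2).sum()
    return lw - ln

def bootstrap_test(x, level=0.05, B=500, rng=np.random.default_rng(1)):
    s, _, scale = stats.lognorm.fit(x, floc=0)       # fit the null model
    null_stats = [llr(stats.lognorm.rvs(s, scale=scale, size=len(x),
                                        random_state=rng)) for _ in range(B)]
    crit = np.quantile(null_stats, 1 - level)        # acceptance-region edge
    return llr(x), crit, llr(x) > crit               # True = reject lognormal

x = stats.weibull_min.rvs(1.8, scale=100, size=40,
                          random_state=np.random.default_rng(7))
print(bootstrap_test(x))
```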