131.
Most scheduling problems are notoriously intractable, so the majority of algorithms for them are heuristic in nature. Priority rule‐based methods still constitute the most important class of these heuristics. Of these, in turn, parametrized biased random sampling methods have attracted particular interest, because they outperform all other known priority rule‐based methods. Yet even the “best” such algorithms are unable to cover the full range of instances of a problem: usually there will exist instances on which other algorithms do better. We maintain that asking for the one best algorithm for a problem may be asking too much. The recently proposed concept of control schemes, which refers to algorithmic schemes for steering parametrized algorithms, opens up ways to refine existing algorithms in this regard and to improve their effectiveness considerably. We extend this approach by integrating heuristics and case‐based reasoning (CBR), an approach that has been used successfully in artificial intelligence applications. Using the resource‐constrained project scheduling problem as a vehicle, we describe how to devise such a CBR system, systematically analyzing the effect of several criteria on algorithmic performance. Extensive computational results validate the efficacy of our approach and reveal performance close to that of state‐of‐the‐art heuristics. In addition, the analysis provides new insight into the behaviour of a wide class of scheduling heuristics. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 201–222, 2000
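To make the notion of a parametrized biased random sampling rule concrete, here is a minimal Python sketch. The weighting priority**alpha is one common bias rule from the priority-rule literature; it, and all names, are illustrative assumptions, not the authors' control scheme.

```python
import random

def biased_random_sample(eligible, priority, alpha=2.0):
    """Pick the next activity to schedule from the eligible set.

    Instead of always taking the highest-priority activity (the
    deterministic rule), each activity is drawn with probability
    proportional to priority[a] ** alpha.  Large alpha approaches the
    deterministic rule, alpha = 0 is uniform random sampling, so alpha
    is exactly the kind of parameter a control scheme would steer.
    """
    weights = [priority[a] ** alpha for a in eligible]
    r = random.uniform(0.0, sum(weights))
    acc = 0.0
    for a, w in zip(eligible, weights):
        acc += w
        if r <= acc:
            return a
    return eligible[-1]  # guard against floating-point round-off

# e.g. biased_random_sample(["A", "B", "C"], {"A": 3.0, "B": 2.0, "C": 1.0})
```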
132.
We deal with the problem of minimizing makespan on a single batch processing machine. In this problem, each job has both a processing time and a size (capacity requirement). The batch processing machine can process a number of jobs simultaneously as long as the total size of the jobs being processed does not exceed the machine capacity. The processing time of a batch is the processing time of the longest job in the batch. An approximation algorithm with worst‐case ratio 3/2 is given for the version in which the processing times of large jobs (with sizes greater than 1/2) are not less than those of small jobs (with sizes not greater than 1/2). This result is the best possible unless P = NP. For the general case, we propose an approximation algorithm with worst‐case ratio 7/4. A number of heuristics by Uzsoy are also analyzed and compared. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 226–240, 2001
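As a point of reference for this batching model, the sketch below implements a first‐fit longest‐processing‐time style heuristic of the kind analyzed in this literature. It is not the paper's 3/2 or 7/4 approximation algorithm, and the job encoding is an assumption.

```python
def fflpt_makespan(jobs, capacity=1.0):
    """First-Fit Longest-Processing-Time batching heuristic.

    jobs: list of (processing_time, size) pairs with 0 < size <= capacity.
    Jobs are considered in non-increasing processing-time order and each
    is placed into the first open batch with enough residual capacity.
    Because jobs arrive in non-increasing processing-time order, the
    first job placed in a batch determines that batch's processing time,
    and the makespan is the sum of these batch maxima.
    """
    batches = []  # each batch: [residual_capacity, max_processing_time]
    for p, s in sorted(jobs, key=lambda js: -js[0]):
        for batch in batches:
            if s <= batch[0]:
                batch[0] -= s
                break
        else:
            batches.append([capacity - s, p])  # p is the batch maximum
    return sum(b[1] for b in batches)

# e.g. fflpt_makespan([(5, 0.6), (4, 0.5), (3, 0.4), (2, 0.3)]) -> 9
```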
133.
This article compares the profitability of two pervasively adopted return policies: money‐back guarantees and hassle‐free returns. In our model, a seller sells to consumers with heterogeneous valuations and hassle costs. Products are subject to quality risk, and product misfit can only be observed post‐purchase. While the hassle‐free policy is cost advantageous from the seller's viewpoint, a money‐back guarantee allows the seller to fine‐tune the consumer's hassle in returning the product. Thus, when the two return policies induce the same consumer behavior, the hassle‐free policy dominates. Conversely, a money‐back guarantee can be more profitable even if, on average, high‐valuation consumers experience a lower hassle cost than low‐valuation ones. The optimal hassle cost can be higher when product quality improves; thus, hassle is not necessarily a perfect proxy or signal of the seller's quality. We further allow the seller to adopt a mixture of these policies, and identify the concrete operating regimes within which these return policies are optimal among more flexible policies. © 2014 Wiley Periodicals, Inc. Naval Research Logistics 61: 403–417, 2014
134.
We consider the problem of assessing the value of demand sharing in a multistage supply chain in which the retailer observes stationary autoregressive moving average (ARMA) demand with Gaussian white noise (shocks). As in previous research, we assume each supply chain player constructs its best linear forecast of the leadtime demand and uses it to determine the order quantity via a periodic review, myopic order‐up‐to policy. We demonstrate how a typical supply chain player can determine the extent of its available information in the presence of demand sharing by studying the properties of the moving average polynomials of adjacent supply chain players. The retailer's demand is driven by the random shocks appearing in the ARMA representation of its demand. Under the assumptions made in this article, knowing the shock information is, to the retailer, equivalent to knowing the demand process (assuming the model parameters are also known). Thus, in the event of sharing, the retailer's demand sequence and shock sequence would contain the same information for the retailer's supplier. We show that, once we consider the dynamics of demand propagation further up the chain, a player's demand and shock sequences may contain different levels of information for an upstream player. Hence, we study how a player can determine its available information under demand sharing and use this information to forecast leadtime demand. We characterize the value of demand sharing for a typical supply chain player, and show conditions under which (i) it is equivalent to no sharing, (ii) it is equivalent to full information shock sharing, and (iii) it is intermediate in value between the two previously described arrangements. Although it follows from the existing literature that demand sharing is equivalent to full information shock sharing between a retailer and supplier, we demonstrate and characterize when this result does not generalize to upstream supply chain players. We then show that demand propagates through a supply chain, where any player may share nothing, its demand, or its full information shocks (FIS) with an adjacent upstream player, as "quasi‐ARMA in, quasi‐ARMA out." We also provide a convenient form for the propagation of demand in a supply chain that will lend itself to future research applications. © 2014 Wiley Periodicals, Inc. Naval Research Logistics 61: 515–531, 2014
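A minimal simulation sketch of the setup just described, assuming ARMA(1,1) retailer demand with known parameters and a myopic order‐up‐to policy; all parameter values are illustrative, and the safety‐stock constant is omitted because it cancels in the order quantity. This is not the paper's analysis, only an illustration of how the order process passed upstream differs from the demand process.

```python
import numpy as np

# Illustrative parameters: mean mu, AR coefficient phi, MA coefficient
# theta, shock std sigma, leadtime L, horizon T.
mu, phi, theta, sigma, L, T = 100.0, 0.7, 0.3, 5.0, 2, 50_000
rng = np.random.default_rng(1)

# Retailer demand: d_t = mu + phi*(d_{t-1}-mu) + eps_t - theta*eps_{t-1}
eps = rng.normal(0.0, sigma, T)
d = np.full(T, mu)
for t in range(1, T):
    d[t] = mu + phi * (d[t - 1] - mu) + eps[t] - theta * eps[t - 1]

# Best linear forecast of leadtime demand, sum_{i=1..L} E[d_{t+i} | t].
# For ARMA(1,1): E[d_{t+1}|t] = mu + phi*(d_t - mu) - theta*eps_t, and
# E[d_{t+i}|t] = mu + phi**(i-1) * (E[d_{t+1}|t] - mu) for i >= 2.
one_step = mu + phi * (d - mu) - theta * eps
S = sum(mu + phi ** (i - 1) * (one_step - mu) for i in range(1, L + 1))

# Myopic order-up-to policy: the order placed upstream in period t is
# q_t = d_t + (S_t - S_{t-1}); any safety-stock constant cancels here.
q = d[1:] + S[1:] - S[:-1]

# The supplier observes q, not d; comparing variances shows how the
# demand process is distorted as it propagates up the chain.
print("Var(d) =", round(d.var(), 2), " Var(q) =", round(q.var(), 2))
```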
135.
In this article, we address a stochastic generalized assignment machine scheduling problem in which the processing times of jobs are random variables. We develop a branch‐and‐price (B&P) approach for solving this problem, wherein the pricing problem is separable with respect to each machine and has the structure of a multidimensional knapsack problem. In addition, we explore two extensions of this method: one that utilizes a dual‐stabilization technique, and another that incorporates an advanced‐start procedure to obtain an initial feasible solution. We compare the performance of these methods with that of the branch‐and‐cut (B&C) method within CPLEX. Our results show that all B&P‐based approaches perform better than the B&C method, with the best performance obtained by the B&P procedure that includes both of the aforementioned extensions. We also utilize a Monte Carlo method within the B&P scheme, which affords the use of a small subset of scenarios at a time to estimate the "true" optimal objective function value. Our experimental investigation reveals that this approach readily yields solutions within 5% of optimality, while providing more than a tenfold savings in CPU time in comparison with the best of the other proposed B&P procedures. © 2014 Wiley Periodicals, Inc. Naval Research Logistics 61: 131–143, 2014
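The abstract notes that the pricing problem separates by machine and has multidimensional knapsack structure. The sketch below shows the one‐dimensional special case of such a pricing step, assuming integer weights and dual‐adjusted profits computed elsewhere in the column generation loop; it is a schematic piece of a B&P code, not the authors' implementation.

```python
def price_column(values, weights, capacity):
    """Pricing subproblem for one machine: a 0/1 knapsack solved by DP.

    values[j]  : dual-adjusted profit of assigning job j to this machine
                 (jobs with non-positive profit should be dropped first)
    weights[j] : positive integer resource requirement of job j
    capacity   : integer machine capacity

    Returns (best_value, chosen_jobs).  In a B&P scheme the resulting
    column enters the restricted master problem only if best_value
    exceeds the machine's convexity-constraint dual.
    """
    n = len(values)
    best = [0.0] * (capacity + 1)        # best[c]: max profit, weight <= c
    choice = [[False] * n for _ in range(capacity + 1)]
    for j in range(n):
        for c in range(capacity, weights[j] - 1, -1):  # descending: 0/1 use
            cand = best[c - weights[j]] + values[j]
            if cand > best[c]:
                best[c] = cand
                choice[c] = choice[c - weights[j]][:]
                choice[c][j] = True
    chosen = [j for j in range(n) if choice[capacity][j]]
    return best[capacity], chosen

# e.g. price_column([10.0, 7.0, 4.0], [4, 3, 2], 5) -> (11.0, [1, 2])
```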
136.
Stochastic network design is fundamental to transportation and logistics problems in practice, yet it faces new modeling and computational challenges resulting from heterogeneous sources of uncertainty whose distributions are unknown given limited data. In this article, we design arcs in a network to optimize the cost of single‐commodity flows under random demand and arc disruptions. We minimize the network design cost plus the cost associated with network performance under uncertainty, evaluated by two schemes. The first scheme restricts demand and arc capacities to budgeted uncertainty sets and minimizes the worst‐case cost of supply generation and network flows over all possible realizations. The second scheme generates a finite set of samples from statistical information (e.g., moments) of the data and minimizes the expected cost of supplies and flows, for which we bound the worst‐case cost using budgeted uncertainty sets. We develop cutting‐plane algorithms for solving the mixed‐integer nonlinear programming reformulations of the problem under the two schemes. We compare the computational efficacy of the different approaches and analyze the results by testing diverse instances of random and real‐world networks. © 2017 Wiley Periodicals, Inc. Naval Research Logistics 64: 154–173, 2017
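Budgeted uncertainty sets admit a simple worst‐case evaluation step: with at most gamma coordinates allowed to deviate from their nominal values, and a cost increasing in demand, the worst case moves the gamma largest deviations to their upper bounds. A hypothetical sketch of that evaluation (the separation step inside a cutting‐plane loop, not the full algorithm):

```python
def worst_case_demand(nominal, deviation, gamma):
    """Worst-case demand vector over a budgeted uncertainty set.

    Each node's demand lies in [nominal[i], nominal[i] + deviation[i]],
    but at most `gamma` nodes may deviate simultaneously.  For a cost
    that is increasing in demand, the worst case pushes the `gamma`
    largest deviations to their upper bounds.
    """
    worst = list(nominal)
    order = sorted(range(len(nominal)), key=lambda i: -deviation[i])
    for i in order[:gamma]:
        worst[i] += deviation[i]
    return worst

# e.g. worst_case_demand([10, 20, 30], [5, 1, 4], gamma=2) -> [15, 20, 34]
```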
137.
This article proposes an approximation for the blocking probability in a many‐server loss model with a non‐Poisson, time‐varying arrival process and flexible staffing (number of servers), and shows that it can be used to set staffing levels that stabilize the time‐varying blocking probability at a target level. Because the blocking probabilities necessarily change dramatically after each staffing change, we randomize the time of each staffing change about the planned time. We apply simulation to show that (i) the blocking probabilities cannot be stabilized without some form of randomization, (ii) the new staffing algorithm with randomization can stabilize blocking probabilities at target levels, and (iii) the required staffing can be quite different when the Poisson assumption is dropped. © 2017 Wiley Periodicals, Inc. Naval Research Logistics 64: 177–202, 2017
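For orientation, offered‐load approximations of this kind build on the classical Erlang‐B (M/M/s/s) blocking recursion. The sketch below picks the smallest staffing level meeting a target blocking probability in the stationary Poisson case; it is a baseline illustration, not the paper's time‐varying, non‐Poisson algorithm.

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability via the standard recursion:
    B(0, a) = 1;  B(s, a) = a*B(s-1, a) / (s + a*B(s-1, a)),
    where a = arrival_rate / service_rate is the offered load.
    """
    b = 1.0
    for s in range(1, servers + 1):
        b = offered_load * b / (s + offered_load * b)
    return b

def staff_for_target(offered_load, target=0.05):
    """Smallest server count whose stationary blocking is <= target."""
    s = 1
    while erlang_b(s, offered_load) > target:
        s += 1
    return s

# e.g. staff_for_target(offered_load=20.0, target=0.01) -> 30
```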
138.
This paper considers the statistical analysis of masked data in a series system, where the component lifetimes are assumed to follow the Marshall‐Olkin Weibull distribution. Based on type‐I progressive hybrid censored and masked data, we derive the maximum likelihood estimates, approximate confidence intervals, and bootstrap confidence intervals of the unknown parameters. Because the maximum likelihood estimates may not exist for small sample sizes, Gibbs sampling is used to obtain the Bayesian estimates, and a Monte Carlo method is employed to construct the credible intervals based on a Jeffreys prior with partial information. Numerical simulations are performed to compare the performance of the proposed methods, and one data set is analyzed.
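To fix ideas about what "masked" means for the likelihood, here is a sketch in the simplest special case: two independent Weibull components in series with plain type‐I censoring at time tau. The paper's Marshall‐Olkin dependence and progressive hybrid censoring are omitted, and all function and variable names are illustrative.

```python
import math

def weibull_logpdf(t, shape, scale):
    z = t / scale
    return math.log(shape / scale) + (shape - 1) * math.log(z) - z ** shape

def weibull_logsf(t, shape, scale):
    return -(t / scale) ** shape  # log survival function

def neg_log_lik(params, data, tau):
    """params: (shape1, scale1, shape2, scale2) of two independent
    Weibull components in series.  data: list of (time, cause) with
    cause in {1, 2, None}; None means the failing component is masked.
    Units with time >= tau are treated as type-I censored at tau.
    """
    a1, b1, a2, b2 = params
    ll = 0.0
    for t, cause in data:
        if t >= tau:                      # censored: both survive tau
            ll += weibull_logsf(tau, a1, b1) + weibull_logsf(tau, a2, b2)
        elif cause == 1:                  # component 1 failed, 2 survived
            ll += weibull_logpdf(t, a1, b1) + weibull_logsf(t, a2, b2)
        elif cause == 2:                  # component 2 failed, 1 survived
            ll += weibull_logsf(t, a1, b1) + weibull_logpdf(t, a2, b2)
        else:                             # masked: sum over both causes
            ll += math.log(
                math.exp(weibull_logpdf(t, a1, b1) + weibull_logsf(t, a2, b2))
                + math.exp(weibull_logsf(t, a1, b1) + weibull_logpdf(t, a2, b2)))
    return -ll  # minimize this (e.g., with scipy.optimize) to get MLEs
```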
139.
This paper models and estimates Greek defence spending over the 1950–1989 period. It employs the Stone‐Geary welfare function and estimates the levels of defence expenditure by the Engle‐Granger two‐step procedure. The Dickey‐Fuller test regression for cointegration is specified in terms of the significance of additional augmentations. The Deaton‐Muellbauer functional form is then employed, and an estimating equation for the expenditure share of defence is derived. This specification is compared with the levels equation through a number of non‐nested tests involving model transformation.
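A generic sketch of the Engle‐Granger two‐step procedure named above, using statsmodels; the paper's Stone‐Geary specification and augmentation‐selection details are not reproduced, and the tabulated ADF p‐value is only indicative because unit‐root tests on fitted residuals require Engle‐Granger critical values.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def engle_granger_two_step(y, x):
    """Engle-Granger two-step estimator for two I(1) series y and x.

    Step 1: estimate the long-run (cointegrating) regression
            y_t = a + b*x_t + u_t by OLS, then test the residuals for
            a unit root (ADF); rejection suggests cointegration.
    Step 2: estimate an error-correction model for the short-run
            dynamics, using the lagged Step-1 residual as the
            error-correction term.
    """
    X = sm.add_constant(x)
    long_run = sm.OLS(y, X).fit()
    resid = long_run.resid

    # Note: standard ADF p-values are only indicative here; residual-
    # based cointegration tests use Engle-Granger critical values.
    adf_stat, pvalue = adfuller(resid)[:2]

    dy, dx = np.diff(y), np.diff(x)
    ecm_X = sm.add_constant(np.column_stack([dx, resid[:-1]]))
    ecm = sm.OLS(dy, ecm_X).fit()
    return long_run, (adf_stat, pvalue), ecm
```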
140.
Using tests of a single‐equation model and cointegration techniques, this paper finds no evidence of a long‐run trade‐off, and some evidence of a short‐run trade‐off, between military spending and investment in post‐World War II United States data. The short‐run trade‐off is confined to the 1949–1971 period and may be the result of the sharp expansion and contraction of military outlays in connection with the Korean and Vietnam Wars. In addition, cointegration techniques are used to identify a possible long‐run trade‐off between military spending and consumption.