251.
In this article we present a methodology for postoptimality and sensitivity analysis of zero-one goal programs based on the set of k-best solutions. A method for generating the set of k-best solutions using a branch-and-bound algorithm and an implicit enumeration scheme for multiple objective problems are discussed. Rules for determining the range of parameter changes that still allows a member of the k-best set to be optimal are developed. An investigation of a sufficient condition for postoptimality analysis is also presented.
252.
253.
    
COVID-19 outbreaks in local communities can result in a drastic surge in demand for scarce resources such as mechanical ventilators. To deal with such demand surges, many hospitals (1) purchased large quantities of mechanical ventilators, and (2) canceled/postponed elective procedures to preserve care capacity for COVID-19 patients. These measures resulted in a substantial financial burden to the hospitals and poor outcomes for non-COVID-19 patients. Given that COVID-19 transmits at different rates across various regions, there is an opportunity to share portable healthcare resources to mitigate capacity shortages triggered by local outbreaks with fewer total resources. This paper develops a novel data-driven adaptive robust simulation-based optimization (DARSO) methodology for optimal allocation and relocation of mechanical ventilators over different states and regions. Our main methodological contributions lie in a new policy-guided approach and an efficient algorithmic framework that mitigate critical limitations of current robust and stochastic models and make resource-sharing decisions implementable in real time. In collaboration with epidemiologists and infectious disease doctors, we give proof of concept for the DARSO methodology through a case study of sharing ventilators among regions in Ohio and Michigan. The results suggest that our optimal policy could satisfy ventilator demand during the pandemic's first peak in Ohio and Michigan with 14% (limited sharing) to 63% (full sharing) fewer ventilators compared to a no-sharing strategy (status quo), thereby allowing hospitals to preserve more elective procedures. Furthermore, we demonstrate that sharing unused ventilators (rather than purchasing new machines) can result in 5% (limited sharing) to 44% (full sharing) lower expenditure, compared to no sharing, considering the transshipment and new ventilator costs.
254.
    
Testing provides essential information for managing infectious disease outbreaks, such as the COVID-19 pandemic. When testing resources are scarce, an important managerial decision is who to test. This decision is compounded by the fact that potential testing subjects are heterogeneous in multiple dimensions that are important to consider, including their likelihood of being disease-positive, and how much potential harm would be averted through testing and the subsequent interventions. To increase testing coverage, pooled testing can be utilized, but this comes at a cost of increased false-negatives when the test is imperfect. Then, the decision problem is to partition the heterogeneous testing population into three mutually exclusive sets: those to be individually tested, those to be pool tested, and those not to be tested. Additionally, the subjects to be pool tested must be further partitioned into testing pools, potentially containing different numbers of subjects. The objectives include the minimization of harm (through detection and mitigation) or maximization of testing coverage. We develop data-driven optimization models and algorithms to design pooled testing strategies, and show, via a COVID-19 contact tracing case study, that the proposed testing strategies can substantially outperform the current practice used for COVID-19 contact tracing (individually testing those contacts with symptoms). Our results demonstrate the substantial benefits of optimizing the testing design, while considering the multiple dimensions of population heterogeneity and the limited testing capacity.
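To illustrate the coverage/false-negative trade-off this abstract describes, the sketch below computes the expected number of tests and expected missed positives for a single Dorfman-style pool (one pool test, with individual follow-up tests only if the pool is positive). It assumes independent subjects, perfect specificity, no dilution effect on pool sensitivity, and independent errors between the pool and individual tests; the function name and `sensitivity` parameter are illustrative assumptions, not the paper's optimization model.

```python
import numpy as np

def pool_metrics(p, sensitivity=0.9):
    """Expected tests and expected missed positives for one Dorfman-style pool
    (a pool test followed by individual tests only if the pool is positive).
    Back-of-the-envelope sketch, not the paper's model; assumes independent
    subjects, perfect specificity, no dilution effect, and independent errors
    between the pool test and the follow-up individual test.
    p : per-subject probabilities of being disease-positive."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    prob_any_positive = 1.0 - np.prod(1.0 - p)          # at least one true positive in the pool
    prob_pool_flags = sensitivity * prob_any_positive   # pool test comes back positive
    expected_tests = 1.0 + n * prob_pool_flags          # 1 pool test + n follow-ups if flagged
    prob_detect = sensitivity ** 2                      # pool detects AND individual test detects
    expected_missed = float(p.sum()) * (1.0 - prob_detect)
    return expected_tests, expected_missed
```

Evaluating such metrics over candidate partitions of the population is one simple way to compare pooling designs against individual testing under a capacity limit.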
255.
Previous lot-sizing models incorporating learning effects focus exclusively on worker learning. We extend these models to include the presence of setup learning, which occurs when setup costs exhibit a learning curve effect as a function of the number of lots produced. The joint worker/setup learning problem can be solved to optimality by dynamic programming. Computational experience indicates, however, that solution times are sensitive to certain problem parameters, such as the planning horizon and/or the presence of a lower bound on worker learning. We define a two-phase EOQ-based heuristic for the problem when total transmission of worker learning occurs. Numerical results show that the heuristic consistently generates solutions well within 1% of optimality.
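As an illustration of how setup learning can enter a lot-sizing dynamic program, the sketch below augments a Wagner-Whitin-style recursion with a state for the number of setups already incurred, so the next lot's setup cost follows a learning curve. The log-linear curve `setup_cost * learn_rate**k`, the omission of worker (run-time) learning, and all names are assumptions made for illustration, not the authors' formulation.

```python
from functools import lru_cache

def lot_size_with_setup_learning(demand, holding, setup_cost, learn_rate=0.9):
    """Single-item lot sizing when setup cost falls with the number of lots
    already produced (a learning-curve effect on setups). Sketch only: the
    assumed curve is setup_cost * learn_rate**k for the (k+1)-th setup, and
    worker learning is omitted. Each lot covers a block of consecutive periods
    (zero-inventory ordering), which remains optimal here because total setup
    cost depends only on how many setups are made, not when they occur."""
    T = len(demand)

    @lru_cache(maxsize=None)
    def cost_from(t, k):
        # Minimum cost of meeting demand in periods t..T-1 given k prior setups.
        if t == T:
            return 0.0
        best = float("inf")
        s = setup_cost * learn_rate ** k          # cost of the next (the (k+1)-th) setup
        hold = 0.0                                # holding cost of the lot placed in period t
        for j in range(t + 1, T + 1):             # lot in period t covers periods t..j-1
            best = min(best, s + hold + cost_from(j, k + 1))
            if j < T:
                hold += holding * (j - t) * demand[j]   # carry period j's demand for j - t periods
        return best

    return cost_from(0, 0)
```

A call such as `lot_size_with_setup_learning([40, 60, 30, 80], holding=1.0, setup_cost=100.0)` returns the minimum total cost; adding worker learning would enlarge the state to track cumulative units produced.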
256.
Consider an experiment in which only record-breaking values (e.g., values smaller than all previous ones) are observed. The data available may be represented as X1,K1,X2,K2, …, where X1,X2, … are successive minima and K1,K2, … are the numbers of trials needed to obtain new records. Such data arise in life testing and stress testing and in industrial quality-control experiments. When only a single sequence of random records is available, efficient estimation of the underlying distribution F is possible only in a parametric framework (see Samaniego and Whitaker [9]). In the present article we study the problem of estimating certain population quantiles nonparametrically from such data. Furthermore, under the assumption that the process of observing random records can be replicated, we derive and study the nonparametric maximum-likelihood estimator F̂ of F. We establish the strong uniform consistency of this estimator as the number of replications grows large, and identify its asymptotic distribution theory. The performance of F̂ is compared to that of two possible competing estimators.
257.
Minimum cardinality set covering problems (MCSCP) tend to be more difficult to solve than weighted set covering problems because the cost or weight associated with each variable is the same. Since MCSCP is NP-complete, large problem instances are commonly solved using some form of a greedy heuristic. In this paper hybrid algorithms are developed and tested against two common forms of the greedy heuristic. Although all the algorithms tested have the same worst-case bounds provided by Ho [7], empirical results for 60 large randomly generated problems indicate that one algorithm performed better than the others.
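For reference, one common form of the greedy heuristic repeatedly selects the subset that covers the most still-uncovered elements; because every column has unit cost in MCSCP, the usual cost-per-element ratio rule reduces to this simple count rule. A textbook sketch, not the specific hybrid algorithms tested in the paper:

```python
def greedy_cover(universe, subsets):
    """Greedy heuristic for minimum cardinality set covering: repeatedly pick
    the subset covering the most still-uncovered elements.
    universe : set of elements to be covered.
    subsets  : dict mapping a subset's name to the set of elements it covers."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("remaining elements cannot be covered by any subset")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Example: covers {1,...,5} with the two subsets "a" and "c".
print(greedy_cover({1, 2, 3, 4, 5}, {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}))
```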
258.
Consider an experiment in which only record-breaking values (e.g., values smaller than all previous ones) are observed. The data available may be represented as X1,K1,X2,K2, …, where X1,X2, … are successive minima and K1,K2, … are the numbers of trials needed to obtain new records. We treat the problem of estimating the mean of an underlying exponential distribution, and we consider both fixed sample size problems and inverse sampling schemes. Under inverse sampling, we demonstrate certain global optimality properties of an estimator based on the “total time on test” statistic. Under random sampling, it is shown that an analogous estimator is consistent, but can be improved for any fixed sample size.
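As a rough illustration of the "total time on test" idea for record-breaking data, the sketch below accumulates the observed record values plus the non-record trials (each right-censored at the previous record) and divides by the number of records, which is the exponential MLE of the mean under that accounting. The indexing convention (K_1 = 1, with K_i − 1 non-record trials before the i-th record) is an assumption, since the abstract does not fix one, and the paper's inverse-sampling optimality results are not reproduced here.

```python
def exponential_mean_from_records(records, counts):
    """Estimate an exponential mean from record-breaking data via the total
    time on test (TTT). Sketch under one assumed indexing convention: records[i]
    is the i-th successive minimum X_i, counts[i] = K_i is the number of trials
    taken to obtain it (so K_1 = 1), and the K_i - 1 non-record trials before
    X_i are right-censored at the previous record X_{i-1}. TTT divided by the
    number of records is then the exponential MLE of the mean."""
    ttt = 0.0
    for i, (x, k) in enumerate(zip(records, counts)):
        ttt += x                              # the observed record value itself
        if i > 0:
            ttt += (k - 1) * records[i - 1]   # censored trials, each surviving past the previous record
    return ttt / len(records)
```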
259.
We describe a modification of Brown's fictitious play method for solving matrix (zero-sum two-person) games and apply it to both symmetric and general games. If the original game is not symmetric, the basic idea is to transform the given matrix game into an equivalent symmetric game (a game with a skew-symmetric matrix) and use the solution properties of symmetric games (the game value is zero and both players have the same optimal strategies). The fictitious play method is then applied to the enlarged skew-symmetric matrix with a modification that calls for the periodic restarting of the process. At restart, both players' strategies are made equal based on the following considerations: Select the maximizing or minimizing player's strategy that has a game value closest to zero. We show for both symmetric and general games, and for problems of varying sizes, that the modified fictitious play (MFP) procedure approximates the value of the game and optimal strategies in a greatly reduced number of iterations and in less computational time when compared to Brown's regular fictitious play (RFP) method. For example, for a randomly generated 50% dense skew-symmetric 100 × 100 matrix (symmetric game), with coefficients |aij| ≤ 100, it took RFP 2,652,227 iterations to reach a gap of 0.03118 between the lower and upper bounds for the game value in 70.71 s, whereas it took MFP 50,000 iterations to reach a gap of 0.03116 in 1.70 s. Improved results were also obtained for general games in which the MFP solves a much larger equivalent symmetric game. © 1996 John Wiley & Sons, Inc.  
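A minimal sketch of fictitious play with the periodic-restart idea described above, written from the abstract rather than from the authors' code: the row player maximizes, value bounds are tracked from both players' empirical mixtures, and at each restart both histories are replaced by whichever empirical strategy has a value bound closest to zero (natural for symmetric games, whose value is zero). The iteration counts and the `restart_every` parameter are illustrative.

```python
import numpy as np

def modified_fictitious_play(A, iters=50_000, restart_every=5_000):
    """Fictitious play for a zero-sum matrix game (row player maximizes x^T A y),
    with a periodic restart in the spirit of the abstract's MFP: at each restart
    both players adopt whichever empirical strategy has a value bound closest to
    zero. Sketch only; not the authors' implementation."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    i = 0                                        # row player's current pure strategy
    for t in range(1, iters + 1):
        row_counts[i] += 1
        v = row_counts @ A                       # column payoffs against row's empirical mixture
        j = int(np.argmin(v))                    # column player's best response
        col_counts[j] += 1
        u = A @ col_counts                       # row payoffs against column's empirical mixture
        i = int(np.argmax(u))                    # row player's best response
        lower, upper = v.min() / t, u.max() / t  # lower/upper bounds on the game value
        if restart_every and m == n and t % restart_every == 0:
            # Restart: keep the empirical strategy whose value bound is nearer zero.
            chosen = row_counts / t if abs(lower) <= abs(upper) else col_counts / t
            row_counts = chosen * t
            col_counts = chosen * t              # a separate copy of the same mixture
    return row_counts / iters, col_counts / iters, lower, upper
```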
260.
Items are characterized by a set of attributes (T) and a collection of covariates (X) associated with those attributes. We wish to screen for acceptable items (those whose attributes T lie in an acceptance region CT), but T is expensive to measure. We envisage a two-stage screen in which observation of X is used as a filter at the first stage to sentence most items. The second stage involves the observation of T for those items for which the first stage is indecisive. We adopt a Bayes decision-theoretic approach to the development of optimal two-stage screens within a general framework for costs and stochastic structure. We also consider the important question of how much screens need to be modified in the light of resource limitations that bound the proportion of items that can be passed to the second stage. © 1996 John Wiley & Sons, Inc.  
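The first-stage decision can be pictured as a three-way Bayes rule: given the posterior probability (from X) that the item is acceptable, compare the expected cost of accepting now, rejecting now, or paying to observe T at the second stage. The sketch below assumes the second stage measures T perfectly and uses flat illustrative costs; the paper's general cost and stochastic framework and its resource-constrained variant are not reproduced.

```python
def stage_one_decision(p_acceptable, c_false_accept, c_false_reject, c_second_stage):
    """First-stage Bayes rule for a two-stage screen. Stylized sketch, not the
    paper's general framework: p_acceptable is the posterior probability, given
    the cheap covariates X, that the item's attributes T lie in the acceptance
    region; the second stage is assumed to measure T perfectly, so its expected
    cost is just the testing cost c_second_stage."""
    expected_cost = {
        "accept": (1.0 - p_acceptable) * c_false_accept,   # risk of passing a bad item
        "reject": p_acceptable * c_false_reject,           # risk of scrapping a good item
        "second stage": c_second_stage,                     # pay to observe T and decide correctly
    }
    return min(expected_cost, key=expected_cost.get)       # action with smallest expected cost
```

Sweeping p_acceptable from 0 to 1 recovers the two thresholds that split the covariate space into accept, reject, and refer-to-second-stage regions.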