401.
The classical work of Gittins, which resulted in the celebrated index result, was motivated in important part by applications to research planning. However, research planning problems often have features that are not accommodated within Gittins's original framework. These include precedence constraints on the task set, influence between tasks, stopping or investment options, and routes to success in which some tasks do not feature. We consider three classes of Markovian decision models for research planning, each of which has all of these features. Gittins-index heuristics are proposed and are assessed both analytically and computationally. They perform impressively. © 1995 John Wiley & Sons, Inc.
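The abstract does not reproduce the heuristics, but the building block they rely on, the Gittins index of a state of a discounted Markov reward process, can be computed by the restart-in-state formulation. The sketch below is a minimal illustration of that one idea, not the paper's method; the chain, rewards, and discount factor are invented for the example.

```python
import numpy as np

def gittins_indices(P, r, beta=0.9, tol=1e-10):
    """Gittins index of each state of a discounted Markov reward chain,
    via the restart-in-state formulation: for a fixed state s, solve an
    MDP in which every state may either continue or restart at s; the
    index of s is (1 - beta) times the value of s in that MDP."""
    n = len(r)
    idx = np.zeros(n)
    for s in range(n):
        V = np.zeros(n)
        while True:
            cont = r + beta * P @ V            # value of continuing from each state
            V_new = np.maximum(cont, cont[s])  # ... or abandoning and restarting at s
            if np.max(np.abs(V_new - V)) < tol:
                break
            V = V_new
        idx[s] = (1 - beta) * V_new[s]
    return idx

# Toy task whose reward deteriorates as work progresses.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])
r = np.array([3.0, 1.0, 0.0])
print(gittins_indices(P, r))
```

An index heuristic then simply works, at each decision epoch, on the task whose current state carries the largest index.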
402.
Moment estimators for the parameters of the Weibull distribution are considered in the context of analysis of field data. The data available are aggregated, with individual failure times not recorded. In this case, the complexity of the likelihood function argues against the use of maximum-likelihood estimation, particularly for relatively large sets of data, and moment estimators are a reasonable alternative. In this article, we derive the asymptotic covariance matrix of the moment estimators, and provide listings for BASIC computer programs which generate tables useful for calculation of the estimates as well as for estimating the asymptotic covariance matrix using aggregated data.
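The BASIC listings are not reproduced here; as a modern stand-in, the sketch below computes Weibull moment estimates from a sample mean and variance, using the fact that the squared coefficient of variation depends on the shape parameter alone. Function names and the bracketing interval are my own choices.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def weibull_moment_estimates(xbar, s2):
    """Method-of-moments estimates (shape k, scale lam) of a Weibull
    distribution from a sample mean xbar and sample variance s2, using
        CV^2 = Gamma(1 + 2/k) / Gamma(1 + 1/k)^2 - 1,
    which depends on the shape k alone and is strictly decreasing in k."""
    cv2 = s2 / xbar**2
    f = lambda k: gamma(1 + 2 / k) / gamma(1 + 1 / k) ** 2 - 1 - cv2
    k = brentq(f, 0.05, 50.0)        # bracket assumed wide enough for the data
    lam = xbar / gamma(1 + 1 / k)    # back out the scale from the mean
    return k, lam

# Example: recover the parameters from a simulated sample.
rng = np.random.default_rng(0)
x = 2.0 * rng.weibull(1.5, size=10_000)   # true shape 1.5, scale 2.0
print(weibull_moment_estimates(x.mean(), x.var()))
```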
403.
Minimum cardinality set covering problems (MCSCP) tend to be more difficult to solve than weighted set covering problems because the cost or weight associated with each variable is the same. Since MCSCP is NP-complete, large problem instances are commonly solved using some form of a greedy heuristic. In this paper, hybrid algorithms are developed and tested against two common forms of the greedy heuristic. Although all the algorithms tested have the same worst-case bounds provided by Ho [7], empirical results for 60 large randomly generated problems indicate that one algorithm performed better than the others.
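For context, the baseline the hybrids are tested against is the classic greedy heuristic: repeatedly pick the set that covers the most still-uncovered elements. A minimal sketch (variable names are mine; the paper's hybrid algorithms are not reproduced):

```python
def greedy_min_cover(universe, sets):
    """Classic greedy heuristic for minimum cardinality set covering:
    repeatedly choose the set covering the most still-uncovered
    elements. Returns indices of chosen sets (assumes a cover exists)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(sets)), key=lambda i: len(uncovered & sets[i]))
        if not uncovered & sets[best]:
            raise ValueError("instance has no cover")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Example: cover {1..6} with as few sets as possible.
sets = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {4, 5, 6}, {1, 6}]
print(greedy_min_cover(range(1, 7), sets))  # [0, 3] -> {1,2,3} and {4,5,6}
```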
404.
Consider an experiment in which only record-breaking values (e.g., values smaller than all previous ones) are observed. The data available may be represented as X_1, K_1, X_2, K_2, …, where X_1, X_2, … are successive minima and K_1, K_2, … are the numbers of trials needed to obtain new records. We treat the problem of estimating the mean of an underlying exponential distribution, and we consider both fixed sample size problems and inverse sampling schemes. Under inverse sampling, we demonstrate certain global optimality properties of an estimator based on the "total time on test" statistic. Under random sampling, it is shown that an analogous estimator is consistent, but can be improved for any fixed sample size.
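The observation scheme is easy to simulate, and doing so makes the "total time on test" idea concrete: a record contributes its full value, while each non-record trial is known only to exceed the record prevailing when it was drawn. The sketch below illustrates that idea under those assumptions; it is not the article's fixed-sample-size or inverse-sampling estimator in detail.

```python
import numpy as np

rng = np.random.default_rng(1)

def record_breaking_sample(mean, n_trials):
    """Simulate the observation scheme in the abstract: draw exponential
    trials but keep only successive minima X_1, X_2, ... together with
    the counts K_1, K_2, ... of trials needed to reach each new record."""
    draws = rng.exponential(mean, n_trials)
    X, K = [draws[0]], []
    count = 0
    for v in draws[1:]:
        count += 1
        if v < X[-1]:            # a new record (smaller than all previous)
            X.append(v)
            K.append(count)
            count = 0
    return X, K   # trailing non-record trials after the last record are dropped

def ttt_estimate(X, K):
    """'Total time on test' estimate of the exponential mean: each record
    contributes its full value; each non-record trial is treated as
    right-censored at the record prevailing when it was drawn."""
    ttt = X[0] + sum((k - 1) * X[i] + X[i + 1] for i, k in enumerate(K))
    return ttt / len(X)

X, K = record_breaking_sample(mean=2.0, n_trials=10_000)
print(len(X), ttt_estimate(X, K))
```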
405.
We describe a modification of Brown's fictitious play method for solving matrix (zero-sum two-person) games and apply it to both symmetric and general games. If the original game is not symmetric, the basic idea is to transform the given matrix game into an equivalent symmetric game (a game with a skew-symmetric matrix) and use the solution properties of symmetric games (the game value is zero and both players have the same optimal strategies). The fictitious play method is then applied to the enlarged skew-symmetric matrix with a modification that calls for the periodic restarting of the process. At restart, both players' strategies are made equal based on the following considerations: Select the maximizing or minimizing player's strategy that has a game value closest to zero. We show for both symmetric and general games, and for problems of varying sizes, that the modified fictitious play (MFP) procedure approximates the value of the game and optimal strategies in a greatly reduced number of iterations and in less computational time when compared to Brown's regular fictitious play (RFP) method. For example, for a randomly generated 50% dense skew-symmetric 100 × 100 matrix (symmetric game), with coefficients |a_ij| ≤ 100, it took RFP 2,652,227 iterations to reach a gap of 0.03118 between the lower and upper bounds for the game value in 70.71 s, whereas it took MFP 50,000 iterations to reach a gap of 0.03116 in 1.70 s. Improved results were also obtained for general games in which the MFP solves a much larger equivalent symmetric game. © 1996 John Wiley & Sons, Inc.
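Brown's regular fictitious play, the benchmark here, is short to state: each player repeatedly best-responds to the opponent's empirical mixed strategy, and the running best payoffs bracket the game value. The sketch below implements plain RFP only (the restarting modification of MFP is omitted); the matrix and names are illustrative.

```python
import numpy as np

def fictitious_play(A, iters=20_000):
    """Brown's (regular) fictitious play for a zero-sum matrix game with
    payoff matrix A (row player maximizes). Each player best-responds to
    the opponent's empirical mixture; the running extremes of the average
    payoff vectors give lower/upper bounds that bracket the game value."""
    m, n = A.shape
    row_payoff = np.zeros(m)   # cumulative payoff of each pure row strategy
    col_payoff = np.zeros(n)   # cumulative payoff conceded by each pure column
    lower, upper = -np.inf, np.inf
    i = 0                      # row player's opening move
    for t in range(1, iters + 1):
        col_payoff += A[i, :]            # column player observes row's choice
        j = int(np.argmin(col_payoff))   # ... and best-responds
        row_payoff += A[:, j]
        i = int(np.argmax(row_payoff))
        lower = max(lower, np.min(col_payoff) / t)  # row guarantees at least this
        upper = min(upper, np.max(row_payoff) / t)  # row is held to at most this
    return lower, upper

# Rock-paper-scissors: game value 0; the bounds shrink toward it.
A = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
print(fictitious_play(A))
```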
406.
Items are characterized by a set of attributes (T) and a collection of covariates (X) associated with those attributes. We wish to screen for acceptable items (T ∈ C_T), but T is expensive to measure. We envisage a two-stage screen in which observation of X is used as a filter at the first stage to sentence most items. The second stage involves the observation of T for those items for which the first stage is indecisive. We adopt a Bayes decision-theoretic approach to the development of optimal two-stage screens within a general framework for costs and stochastic structure. We also consider the important question of how much screens need to be modified in the light of resource limitations that bound the proportion of items that can be passed to the second stage. © 1996 John Wiley & Sons, Inc.
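As a toy instance of the idea (the article's framework is far more general), suppose the attribute is normal, acceptability means T ≤ τ, and the covariate is T plus noise; the first stage then accepts, rejects, or refers an item to the second stage by comparing posterior expected costs. All model choices, costs, and names below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Toy two-stage screen. Model (illustrative only): attribute T ~ N(0, 1);
# an item is acceptable when T <= TAU; covariate X = T + noise, noise ~ N(0, SIGMA^2).
TAU, SIGMA = 0.0, 0.8
C_FALSE_ACCEPT, C_FALSE_REJECT, C_MEASURE_T = 10.0, 3.0, 1.0

def p_acceptable_given_x(x):
    """Posterior P(T <= TAU | X = x) under the normal model above."""
    post_mean = x / (1 + SIGMA**2)
    post_sd = np.sqrt(SIGMA**2 / (1 + SIGMA**2))
    return norm.cdf((TAU - post_mean) / post_sd)

def first_stage_decision(x):
    """Bayes rule: compare expected costs of accepting, rejecting, and
    paying to measure T exactly (second stage, assumed error-free)."""
    p = p_acceptable_given_x(x)
    costs = {"accept": C_FALSE_ACCEPT * (1 - p),
             "reject": C_FALSE_REJECT * p,
             "measure T": C_MEASURE_T}
    return min(costs, key=costs.get)

for x in (-2.0, -0.3, 0.4, 2.0):
    print(x, first_stage_decision(x))
```

Extreme covariate values are sentenced at once, while the indecisive middle band is referred to the expensive second stage, exactly the filtering structure the abstract describes.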
407.
408.
A method previously devised for the solution of the p-center problem on a network has now been extended to solve the analogous minimax location-allocation problem in continuous space. The essence of the method is that we choose a subset of the n points to be served and consider the circles based on one, two, or three points. Using a set-covering algorithm we find a set of p such circles which cover the points in the relaxed problem (the one with m < n points). If this is possible, we check whether the n original points are covered by the solution; if so, we have a feasible solution to the problem. We now delete the largest circle with radius r_p (which is currently an upper limit to the optimal solution) and try to find a better feasible solution. If we have a feasible solution to the relaxed problem which is not feasible to the original, we augment the relaxed problem by adding a point, preferably the one which is farthest from its nearest center. If we have a feasible solution to the original problem and we delete the largest circle and find that the relaxed problem cannot be covered by p circles, we conclude that the latest feasible solution to the original problem is optimal. An example of the solution of a problem with ten demand points and two and three service points is given in some detail. Computational data for problems of 30 demand points and 1–30 service points, and 100, 200, and 300 demand points and 1–3 service points are reported.
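A toy version of the machinery: candidate circles determined by one or two demand points (the three-point circles are omitted for brevity), an exhaustive stand-in for the set-covering step, and the "delete the largest circle and retry" loop. This is a sketch of the idea on a small 2-center instance, not the article's algorithm; helper names are mine.

```python
import itertools, math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def candidate_circles(points):
    """Candidate (center, radius) pairs from one or two demand points:
    each point itself (radius 0) and the midpoint of each pair."""
    cands = [(p, 0.0) for p in points]
    for a, b in itertools.combinations(points, 2):
        c = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        cands.append((c, dist(a, b) / 2))
    return cands

def coverable(points, circles, p):
    """Exhaustive stand-in for the set-covering step: can some p of the
    candidate circles cover every demand point?"""
    eps = 1e-9
    for combo in itertools.combinations(circles, p):
        if all(any(dist(pt, c) <= r + eps for c, r in combo) for pt in points):
            return True
    return False

points = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5)]
circles = candidate_circles(points)
best = None
# Mimic the deletion loop: shrink the largest allowed radius while a cover exists.
for rmax in sorted({r for _, r in circles}, reverse=True):
    trimmed = [(c, r) for c, r in circles if r <= rmax]
    if not coverable(points, trimmed, p=2):
        break
    best = rmax
print("2-center radius over candidates:", best)
```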
409.
Two new randomization tests are introduced for ordinal contingency tables for testing independence against strictly positive quadrant dependence, i.e., P(X > x, Y > y) ≥ P(X > x)P(Y > y) for all x, y, with strict inequality for some x and y. For a number of cases, simulation is used to compare the estimated power of these tests with that of standard tests based on Kendall's τ, Spearman's ρ, Pearson's χ², the usual likelihood ratio test, and a test based upon the log-odds ratio. In these cases, subsets of the alternative region are identified where each of the testing statistics is superior. The new tests are found to be more powerful than the standard tests over a broad range of the alternative regions for these cases.
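The two new tests are not specified in the abstract, but the general shape of a randomization test in this setting is simple: permute one margin to simulate independence and compare the observed statistic with the permutation distribution. A sketch using Kendall's τ as the statistic (the data and names are illustrative):

```python
import numpy as np
from scipy.stats import kendalltau

def perm_test_tau(x, y, n_perm=2_000, seed=0):
    """One-sided randomization test of independence against positive
    dependence, with Kendall's tau as the statistic: permuting y
    simulates the null, and the p-value is the fraction of permuted
    statistics at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    t_obs, _ = kendalltau(x, y)
    t_null = np.array([kendalltau(x, rng.permutation(y))[0]
                       for _ in range(n_perm)])
    return t_obs, (1 + np.sum(t_null >= t_obs)) / (1 + n_perm)

# Ordinal data with positive quadrant dependence.
rng = np.random.default_rng(42)
x = rng.integers(1, 5, 200)
y = np.clip(x + rng.integers(-1, 2, 200), 1, 5)   # y tends to move with x
print(perm_test_tau(x, y))
```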
410.
In this article we present a methodology for postoptimality and sensitivity analysis of zero-one goal programs based on the set of k-best solutions. A method for generating the set of k-best solutions using a branch-and-bound algorithm and an implicit enumeration scheme for multiple-objective problems is discussed. Rules for determining the range of parameter changes that still allows a member of the k-best set to be optimal are developed. An investigation of a sufficient condition for postoptimality analysis is also presented.
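The branch-and-bound generator is not reproduced here; for a small enough zero-one goal program the k-best set can be obtained by brute force, which is enough to illustrate how the postoptimality rules use it. The formulation, data, and names below are my own illustrative choices.

```python
import itertools
import numpy as np

def k_best_solutions(A, goals, weights, k):
    """Brute-force stand-in for the article's branch-and-bound generator:
    enumerate all 0-1 vectors of a small goal program, score each by the
    weighted sum of absolute deviations from the goals, and return the k
    best (objective, solution) pairs in order."""
    n = A.shape[1]
    scored = []
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        dev = weights @ np.abs(A @ x - goals)   # weighted goal deviations
        scored.append((dev, bits))
    scored.sort(key=lambda s: s[0])
    return scored[:k]

# Tiny example: 2 goals, 4 binary variables.
A = np.array([[3, 1, 2, 1],
              [1, 2, 1, 3]])
goals = np.array([4, 4])
weights = np.array([1.0, 1.0])
for dev, x in k_best_solutions(A, goals, weights, k=3):
    print(x, dev)
# Postoptimality idea: when a goal or weight shifts, re-score only these
# k solutions; the ranges over which one of them stays optimal are the
# sensitivity intervals the article develops rules for.
```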