41.
In this article a bicriteria model, formed by the weighted sum of the minisum and minimax functions for a single-location problem, is investigated. It is shown that all efficient solutions generated by either constrained model are also properly efficient. The bicriteria model and the constrained models are theoretically equivalent, but it is more efficient and simpler to generate nondominated solutions using the constrained criterion approach. When solving the bicriteria model, a critical range is found for which all properly efficient solutions are generated.
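The constrained-criterion idea in this abstract can be illustrated on a toy one-dimensional instance. The demand points, weights, and search grid below are invented for the example; the article itself treats the models analytically, not by grid search.

```python
# Illustrative epsilon-constraint sketch for a bicriteria 1-D location
# problem: minimize the minisum objective subject to minimax <= eps,
# sweeping eps to trace out nondominated (minisum, minimax) pairs.
points = [0.0, 2.0, 3.0, 10.0]   # hypothetical demand points
weights = [1.0, 1.0, 1.0, 1.0]   # hypothetical weights

def minisum(x):
    return sum(w * abs(x - a) for w, a in zip(weights, points))

def minimax(x):
    return max(abs(x - a) for a in points)

grid = [i / 100 for i in range(0, 1001)]  # candidate sites on [0, 10]

def constrained_min(eps):
    """Best minisum site among grid points satisfying minimax <= eps."""
    feasible = [x for x in grid if minimax(x) <= eps]
    return min(feasible, key=minisum) if feasible else None

# Sweep the constraint from the minimax optimum out to the minisum optimum.
eps_lo = minimax(min(grid, key=minimax))
eps_hi = minimax(min(grid, key=minisum))
frontier = []
eps = eps_lo
while eps <= eps_hi + 1e-9:
    x = constrained_min(eps)
    frontier.append((minisum(x), minimax(x)))
    eps += 0.5
```

Loosening eps can only improve the minisum value, so the recorded frontier is nonincreasing in its first coordinate, which is the nondominated trade-off curve.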
42.
In sensitivity testing for the Department of Defense, the high cost of experimental units necessitates the use of small sample sizes and accentuates the importance of design. This article compares five data collection-estimation procedures. Four of these are modifications of the Robbins-Monro method, and the fifth is the Langlie method. The simulation study is designed as a factorial experiment with response function, sample size, initial design point, gate width, and noise as factors. The estimated V50 and its MSE are the responses compared to assess the small-sample behavior of each method. Although there is no single clear-cut winner, the Delayed Robbins-Monro (DRM) with maximum likelihood estimation and the Estimated Quantal Response Curve (Wu [21]) are shown to perform well over a broad variety of conditions.
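The plain Robbins-Monro recursion that the compared procedures modify can be sketched as follows. The logistic response curve, its parameters, and the step constant are assumptions made for this illustration, not values from the article.

```python
import math
import random

random.seed(1)

def response(x, mu=10.0, sigma=2.0):
    """Simulated go/no-go trial: 1 if the item responds at stress x.
    A logistic response curve with V50 = mu is assumed for illustration."""
    p = 1.0 / (1.0 + math.exp(-(x - mu) / sigma))
    return 1 if random.random() < p else 0

def robbins_monro(x0=5.0, a=8.0, n_trials=2000, target=0.5):
    """Plain Robbins-Monro recursion for the target quantile:
    x_{n+1} = x_n - (a / n) * (y_n - target)."""
    x = x0
    for n in range(1, n_trials + 1):
        x -= (a / n) * (response(x) - target)
    return x

v50_estimate = robbins_monro()
```

The decreasing gain a/n lets early trials move quickly toward the quantile while later trials average out the binary noise; the small-sample variants studied in the article (e.g., delaying the gain decrease) address exactly the regime where n_trials is far smaller than used here.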
43.
Stochastic combat models are more realistic than either deterministic or exponential models, but they have been solved analytically only for small combat sizes. It is very difficult, if not impossible, to extend previous solution techniques to larger-scale combat. This research provides the solution for many-on-many heterogeneous stochastic combat with arbitrary break points. Furthermore, every stage in stochastic combat is clearly defined, and the associated aiming and killing probabilities are calculated. © 1996 John Wiley & Sons, Inc.
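For intuition about the setting, a many-on-many stochastic engagement with break points can be simulated by Monte Carlo. The firing rules, kill probabilities, and force sizes below are invented simplifications; the article derives the solution analytically rather than by simulation.

```python
import random

random.seed(0)

def simulate_battle(m, n, p_a, p_b, break_a=0.3, break_b=0.3):
    """One many-on-many engagement under simple (assumed) firing rules:
    per round, every survivor fires once and kills an opponent with its
    side's kill probability; combat ends when a side falls to or below
    its break point (fraction of initial strength)."""
    a, b = m, n
    while a > break_a * m and b > break_b * n:
        kills_on_b = sum(random.random() < p_a for _ in range(a))
        kills_on_a = sum(random.random() < p_b for _ in range(b))
        a, b = max(a - kills_on_a, 0), max(b - kills_on_b, 0)
    return a, b

# Estimate the probability that side A forces side B past its break point.
trials = 4000
a_prevails = sum(1 for _ in range(trials)
                 if simulate_battle(10, 8, 0.3, 0.2)[1] <= 0.3 * 8)
p_a_wins = a_prevails / trials
```

With 10 shooters at kill probability 0.3 against 8 at 0.2, side A prevails in most replications; the analytical approach in the article yields such probabilities exactly instead of with Monte Carlo error.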
44.
Silverman's game on (1, B) × (1, B) was analyzed by R. J. Evans, who showed that optimal strategies exist (and found them) only on a set of measure zero in the parameter plane. We examine the corresponding game on (1, B) × (1, D) with D > B, and show that optimal strategies exist in about half the parameter plane. Optimal strategies and game value are obtained explicitly. © 1995 John Wiley & Sons, Inc.
45.
The DOD directs that 10% of item cost be used as the cost of capital in the calculation of inventory holding costs. This 10% figure is not fully justified, and a complete review should be undertaken to bring this factor to a meaningful and more useful value. The current logic supporting a 10% cost of capital produces a continuing perturbation that forces the Air Force to operate in a less than efficient mode when using the economic order quantity for consumable purchases.
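The sensitivity of order quantities to the assumed cost of capital follows directly from the standard EOQ formula, in which the capital rate enters the holding cost. The demand, setup cost, and unit cost below are invented for illustration.

```python
import math

def eoq(annual_demand, order_cost, unit_cost, capital_rate, storage_rate=0.0):
    """Economic order quantity with the cost of capital folded into the
    holding-cost rate: h = (capital_rate + storage_rate) * unit_cost."""
    h = (capital_rate + storage_rate) * unit_cost
    return math.sqrt(2 * annual_demand * order_cost / h)

q_at_10 = eoq(1200, 50.0, 20.0, 0.10)   # directed 10% cost of capital
q_at_20 = eoq(1200, 50.0, 20.0, 0.20)   # same item at a 20% rate
```

Because the EOQ scales as the inverse square root of the holding rate, doubling the cost of capital shrinks every buy quantity by a factor of sqrt(2), which is why a poorly justified rate distorts consumable purchasing system-wide.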
46.
While the traditional solution to the problem of meeting stochastically variable demands for inventory during procurement lead time is through the use of some level of safety stock, several authors have suggested that a decision be made to employ some form of rationing so as to protect certain classes of demands against stockout by restricting issues to other classes. Nahmias and Demmy [10] derived an approximate continuous review model of systems with two demand classes which would permit an inventory manager to calculate the expected fill rates per order cycle for high-priority, low-priority, and total system demands for a variety of parameters. The manager would then choose the rationing policy that most closely approximated his fill-rate objectives. This article describes a periodic review model that permits the manager to establish a discrete time rationing policy during lead time by prescribing a desired service level for high-priority demands. The reserve levels necessary to meet this level of service can then be calculated based upon the assumed probability distributions of high- and low-priority demands over lead time. The derived reserve levels vary with the amount of lead time remaining. Simulation tests of the model indicate that these policies are more effective than the single reserve level policy studied by Nahmias and Demmy.
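The idea of a reserve level that varies with remaining lead time can be sketched under an assumed normal model for high-priority demand. The weekly demand parameters and service level below are hypothetical; the article works from the assumed lead-time demand distributions rather than this particular form.

```python
import math
from statistics import NormalDist

def reserve_level(mean_hp, sd_hp, service_level):
    """Units reserved for high-priority demand over the remaining lead
    time so that P(high-priority demand <= reserve) = service_level,
    assuming normally distributed remaining demand."""
    return mean_hp + NormalDist().inv_cdf(service_level) * sd_hp

# With stationary weekly high-priority demand (mean 5, sd 2, assumed
# independent across weeks), the reserve shrinks as lead time runs out.
weeks_remaining = [4, 3, 2, 1]
reserves = [reserve_level(5.0 * w, 2.0 * math.sqrt(w), 0.95)
            for w in weeks_remaining]
```

This is the mechanism behind the abstract's observation that derived reserve levels vary with the amount of lead time remaining: less remaining lead time means less remaining high-priority demand to protect, so fewer units need to be withheld from low-priority issues.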
47.
Consider the problem of estimating the reliability of a series system of (possibly) repairable subsystems when test data and historical information are available at the component, subsystem, and system levels. Such a problem is well suited to a Bayesian approach. Martz, Waller, and Fickas [Technometrics, 30, 143–154 (1988)] presented a Bayesian procedure that accommodates pass/fail (binomial) data at any level. However, other types of test data are often available, including (a) lifetimes of nonrepairable components, and (b) repair histories for repairable subsystems. In this article we describe a new Bayesian procedure that accommodates pass/fail, life, and repair data at any level. We assume a Weibull model for the life data, a censored Weibull model for the pass/fail data, and a power-law process model for the repair data. Consequently, the test data at each level can be represented by a two-parameter likelihood function of a certain form, and historical information can be expressed using a conjugate family of prior distributions. We discuss computational issues, and use the procedure to analyze the reliability of a vehicle system. © 1994 John Wiley & Sons, Inc. 相似文献
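The building block of such likelihoods is the log-likelihood of right-censored Weibull life data, sketched below. The failure and censoring times are invented, and this is only the standard censored Weibull likelihood, not the article's full two-parameter conjugate construction.

```python
import math

def weibull_loglik(shape, scale, failures, censored):
    """Log-likelihood of right-censored Weibull life data: density terms
    for observed failures, survival terms for censored units."""
    ll = 0.0
    for t in failures:
        ll += (math.log(shape / scale)
               + (shape - 1.0) * math.log(t / scale)
               - (t / scale) ** shape)
    for t in censored:
        ll -= (t / scale) ** shape
    return ll

# Sanity check by grid search: with the shape fixed at 1 (exponential),
# the maximizing scale is total time on test divided by failure count.
failures, censored = [2.0, 3.0, 7.0], [5.0, 5.0]
scales = [i / 100 for i in range(100, 2001)]
best_scale = max(scales,
                 key=lambda s: weibull_loglik(1.0, s, failures, censored))
```

For shape 1 the total time on test is 2 + 3 + 7 + 5 + 5 = 22 over 3 failures, so the grid maximizer should land near 22/3, a useful check that the censoring terms are entering the likelihood correctly.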
48.
Mean residual life of coherent systems consisting of multiple types of dependent components
Mean residual life is a useful dynamic characteristic to study reliability of a system. It has been widely considered in the literature not only for single unit systems but also for coherent systems. This article is concerned with the study of mean residual life for a coherent system that consists of multiple types of dependent components. In particular, the survival signature based generalized mixture representation is obtained for the survival function of a coherent system and it is used to evaluate the mean residual life function. Furthermore, two mean residual life functions under different conditional events on components’ lifetimes are also defined and studied.
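The mean residual life function evaluated from a system survival function is m(t) = ∫_t^∞ S(u) du / S(t). The sketch below checks this definition numerically on the simplest coherent system, a series system of two independent exponential components (rates invented), for which the memoryless property makes m(t) constant; the article's survival-signature representation handles the far harder dependent, multi-type case.

```python
import math

def mrl(survival, t, upper=200.0, steps=100000):
    """Mean residual life m(t) = integral of S(u) du from t to upper,
    divided by S(t), via the trapezoid rule; 'upper' must be large
    enough that S(upper) is negligibly small."""
    h = (upper - t) / steps
    area = 0.5 * (survival(t) + survival(upper))
    for i in range(1, steps):
        area += survival(t + i * h)
    return area * h / survival(t)

# Series system of two independent exponential components: the system
# survives iff both survive, so S(t) = exp(-(lam1 + lam2) * t) and the
# mean residual life is constant at 1 / (lam1 + lam2).
lam1, lam2 = 0.5, 0.25
S = lambda t: math.exp(-(lam1 + lam2) * t)
m_at_0 = mrl(S, 0.0)
m_at_2 = mrl(S, 2.0)
```

Any candidate representation of a system survival function can be sanity-checked the same way: plug it into the integral and compare m(t) at several ages against known special cases.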
49.
Today's information and communication networks require a design that is secure against tampering. Traditional performance measures of reliability and throughput must be supplemented with measures of security. Recognition of an adversary who can inflict damage leads toward a game-theoretic model. Through such a formulation, guidelines for network designs and improvements are derived. We opt for a design that is most robust to withstand both natural degradation and adversarial attacks. Extensive computational experience with such a model suggests that a Nash-equilibrium design exists that can withstand the worst possible damage. Most important, the equilibrium is value-free in that it is stable irrespective of the unit costs associated with reliability vs. capacity improvement and of how one wishes to trade between throughput and reliability. This finding helps to pinpoint the most critical components in network design. From a policy standpoint, the model also allows the monetary value of information security to be imputed. © 2009 Wiley Periodicals, Inc. Naval Research Logistics, 2009
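The game-theoretic kernel of such a defender-versus-adversary formulation can be illustrated with the smallest possible case, a 2x2 zero-sum matrix game. The payoff matrix below (surviving throughput to the defender) is invented; the article's model is far richer, but the equilibrium logic is the same.

```python
def solve_2x2(A):
    """Value of a 2x2 zero-sum game for the row (defender) maximizer:
    returns the saddle-point value if one exists, otherwise the mixed
    value (a*d - b*c) / (a + d - b - c)."""
    (a, b), (c, d) = A
    v_lower = max(min(a, b), min(c, d))   # defender's guaranteed floor
    v_upper = min(max(a, c), max(b, d))   # attacker's guaranteed ceiling
    if v_lower == v_upper:
        return v_lower                    # pure-strategy saddle point
    return (a * d - b * c) / (a + d - b - c)

# Toy link-reinforcement vs. link-attack game: rows are which link the
# defender hardens, columns are which link the adversary attacks, and
# entries are the throughput that survives (numbers invented).
value = solve_2x2([[3.0, 0.0], [1.0, 2.0]])
```

The equilibrium value here is what the adversary cannot push the defender below, matching the abstract's notion of a design that withstands the worst possible damage.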
50.
Dmitrii Usanov, G.A. Guido Legemaate, Peter M. van de Ven, Rob D. van der Mei. Naval Research Logistics, 2019, 66(2): 105–122
The effectiveness of a fire department is largely determined by its ability to respond to incidents in a timely manner. To do so, fire departments typically have fire stations spread evenly across the region, and dispatch the closest truck(s) whenever a new incident occurs. However, large gaps in coverage may arise in the case of a major incident that requires many nearby fire trucks over a long period of time, substantially increasing response times for emergencies that occur subsequently. We propose a heuristic for relocating idle trucks during a major incident in order to retain good coverage. This is done by solving a mathematical program that takes into account the location of the available fire trucks and the historic spatial distribution of incidents. This heuristic allows the user to balance the coverage and the number of truck movements. Using extensive simulation experiments we test the heuristic for the operations of the Fire Department of Amsterdam‐Amstelland, and compare it against three other benchmark strategies in a simulation fitted using 10 years of historical data. We demonstrate substantial improvement over the current relocation policy, and show that not relocating during major incidents may lead to a significant decrease in performance.
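A coverage-driven relocation can be sketched with a simple greedy rule: staff stations one truck at a time, each time choosing the station that adds the most historical demand within the response radius. This is an assumed simplification for illustration, not the mathematical program the article actually solves, and the stations, demand weights, and Manhattan-distance radius below are invented.

```python
def covered(staffed, demand, radius):
    """Total historical demand weight within 'radius' (Manhattan
    distance) of at least one station currently holding a truck."""
    return sum(w for (x, y, w) in demand
               if any(abs(x - sx) + abs(y - sy) <= radius
                      for (sx, sy) in staffed))

def greedy_relocate(stations, demand, n_trucks, radius):
    """Place idle trucks one at a time, each at the empty station whose
    staffing adds the most covered demand."""
    chosen = []
    for _ in range(n_trucks):
        best = max((s for s in stations if s not in chosen),
                   key=lambda s: covered(chosen + [s], demand, radius))
        chosen.append(best)
    return chosen

# Toy instance: three stations on a line, demand clustered near two.
stations = [(0, 0), (5, 0), (10, 0)]
demand = [(0, 1, 3.0), (5, 1, 2.0), (10, 1, 1.0)]   # (x, y, weight)
plan = greedy_relocate(stations, demand, 2, 2)
```

With two idle trucks the greedy rule staffs the two highest-demand stations, covering 5.0 of the 6.0 total weight; trading coverage against the number of truck movements, as the article's heuristic does, would add a movement-cost term to the objective.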