41.
In this article we investigate situations where the buyer is offered discounted price schedules from alternative vendors. Given various discount schedules, the buyer must make the best buying decision under a variety of constraints, such as limited storage space and restricted inventory budgets. Solutions to this problem can be utilized by the buyer to improve profitability. EOQ models for multiple products with all-units discounts are readily solvable in the absence of constraints spanning the products. However, constrained discounted EOQ models lack convenient mathematical properties. Relaxing the product-spanning constraints produces a dual problem that is separable, but the lack of convexity and smoothness opens the door for duality gaps. In this research we present a set of algorithms that collectively find the optimal order vector. Finally, we present numerical examples using actual data to illustrate the application of the algorithms. © 1993 John Wiley & Sons, Inc.
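The unconstrained single-product building block mentioned above is the classic all-units-discount EOQ procedure: compute the EOQ within each price band, clamp it into the band's quantity range, and keep the cheapest result. A minimal sketch, with illustrative demand, cost, and price-break figures that are not taken from the article:

```python
import math

def eoq_all_units(demand, order_cost, holding_rate, breaks):
    """All-units discount EOQ for a single product (unconstrained case).

    breaks: list of (min_qty, unit_price) pairs, sorted by min_qty ascending.
    holding_rate: holding cost as a fraction of unit price per period.
    Returns (order quantity, total cost per period) for the best price band.
    """
    best = None
    for i, (min_qty, price) in enumerate(breaks):
        max_qty = breaks[i + 1][0] - 1 if i + 1 < len(breaks) else float("inf")
        h = holding_rate * price
        q = math.sqrt(2 * demand * order_cost / h)  # unconstrained EOQ at this price
        q = min(max(q, min_qty), max_qty)           # clamp into the price band
        total = demand * price + demand * order_cost / q + h * q / 2
        if best is None or total < best[1]:
            best = (q, total)
    return best

# Example: the deepest discount band wins even though its minimum quantity
# exceeds the unconstrained EOQ at that price.
q, cost = eoq_all_units(1000, 50, 0.2, [(1, 10.0), (100, 9.5), (500, 9.0)])
```

The constrained multi-product case treated in the article couples such subproblems through shared budget and storage constraints, which is what destroys the convenient structure.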
42.
Jobs with known processing times and due dates have to be processed on a machine which is subject to a single breakdown. The moment of breakdown and the repair time are independent random variables. Two cases are distinguished with reference to the processing time preempted by the breakdown (no other preemptions are allowed): (i) resumption without time losses and (ii) restart from the beginning. Under certain compatible conditions, we find the policies that stochastically minimize the number of tardy jobs.
43.
In this article a bicriteria model, formed by the weighted sum of the minisum and minimax functions for a single-location problem, is investigated. It is shown that all efficient solutions generated by either constrained model are also properly efficient. The bicriteria model and the constrained models are theoretically equivalent, but it is more efficient and simpler to generate nondominated solutions using the constrained criterion approach. When solving the bicriteria model, a critical range is found for which all properly efficient solutions are generated.
44.
In sensitivity testing for the Department of Defense, the high cost of experimental units necessitates the use of small sample sizes and accentuates the importance of design. This article compares five data collection-estimation procedures. Four of these are modifications of the Robbins-Monro method, and the fifth is the Langlie method. The simulation study is designed as a factorial experiment with response function, sample size, initial design point, gate width, and noise as factors. The estimated V50 and its MSE are the responses compared to assess the small-sample behavior of each method. Although there is no single clear-cut winner, the Delayed Robbins-Monro (DRM) with maximum likelihood estimation and the Estimated Quantal Response Curve (Wu [21]) are shown to perform well over a broad variety of conditions.
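The basic (unmodified) Robbins-Monro recursion that the compared procedures build on can be sketched as follows; the stimulus levels, step constant, and test oracle below are illustrative assumptions, not the article's designs:

```python
def robbins_monro_v50(respond, x0, c, n_trials):
    """Basic Robbins-Monro stochastic approximation of the 50% point (V50).

    respond(x) -> 1 (response) or 0 (no response) at stimulus level x.
    The step size c/n shrinks each trial, so the iterate settles near the
    level where the response probability equals 0.5.
    """
    x = x0
    for n in range(1, n_trials + 1):
        y = respond(x)
        x = x - (c / n) * (y - 0.5)  # step down after a response, up otherwise
    return x

# With a sharp (deterministic) threshold at 100, the iterate homes in on it.
estimate = robbins_monro_v50(lambda x: 1 if x > 100 else 0, 80.0, 40.0, 200)
```

Roughly speaking, the delayed variant (DRM) studied in the article postpones the decay of the step size for an initial block of trials, which helps when the starting level is far from V50.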
45.
Component grouping problems, a type of set-partitioning problem, arise in a number of different manufacturing and material logistics application areas. For example, in circuit board assembly, robotic work cells can be used to insert components onto a number of different types of circuit boards. Each type of circuit board requires particular components, with some components appearing on more than one type. The problem is to decide which components should be assigned to each work cell in order to minimize the number of visits by circuit boards to work cells. We describe two new heuristics for this problem, based on so-called greedy random adaptive search procedures (GRASP). With GRASP, a local search technique is replicated many times with different starting points. The starting points are determined by a greedy procedure with a probabilistic aspect. The best result is then kept as the solution. Computational experiments on problems based on data from actual manufacturing processes indicate that these GRASP methods outperform, both in speed and in solution quality, an earlier, network-flow-based heuristic. We also describe techniques for generating lower bounds for the component grouping problem, based on the combinatorial structure of a problem instance. The lower bounds for our real-world test problems averaged within 7%-8% of the heuristic solutions. Similar results are obtained for larger, randomly generated problems. © 1994 John Wiley & Sons, Inc.
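The GRASP pattern itself (many randomized greedy starts, each polished by local search, best result kept) can be sketched on a toy two-group partitioning instance. The balance objective and helper names below are illustrative stand-ins, not the component-grouping model of the article:

```python
import random

def grasp(construct, improve, cost, iterations, seed=0):
    """Generic GRASP loop: replicate (randomized greedy construction,
    local search) and return the best solution found."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        sol = improve(construct(rng))
        c = cost(sol)
        if c < best_cost:
            best, best_cost = sol, c
    return best, best_cost

def balance_instance(weights):
    """Toy set-partitioning instance: split weights into two groups
    of near-equal total weight."""
    def construct(rng):
        order = list(weights)
        rng.shuffle(order)  # the probabilistic aspect of the greedy start
        groups = [[], []]
        for w in order:     # greedy: put each item in the lighter group
            groups[0 if sum(groups[0]) <= sum(groups[1]) else 1].append(w)
        return groups
    def improve(groups):
        g0, g1 = groups
        improved = True
        while improved:     # first-improvement pairwise-swap local search
            improved = False
            for a in list(g0):
                for b in list(g1):
                    if abs((sum(g0) - a + b) - (sum(g1) - b + a)) < abs(sum(g0) - sum(g1)):
                        g0.remove(a); g1.remove(b)
                        g0.append(b); g1.append(a)
                        improved = True
                        break
                if improved:
                    break
        return [g0, g1]
    def cost(groups):
        return abs(sum(groups[0]) - sum(groups[1]))
    return construct, improve, cost
```

The replication over random starts is what lets a weak local search escape the poor local optima a single greedy run would get stuck in.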
46.
Consider the problem of estimating the reliability of a series system of (possibly) repairable subsystems when test data and historical information are available at the component, subsystem, and system levels. Such a problem is well suited to a Bayesian approach. Martz, Waller, and Fickas [Technometrics, 30, 143–154 (1988)] presented a Bayesian procedure that accommodates pass/fail (binomial) data at any level. However, other types of test data are often available, including (a) lifetimes of nonrepairable components, and (b) repair histories for repairable subsystems. In this article we describe a new Bayesian procedure that accommodates pass/fail, life, and repair data at any level. We assume a Weibull model for the life data, a censored Weibull model for the pass/fail data, and a power-law process model for the repair data. Consequently, the test data at each level can be represented by a two-parameter likelihood function of a certain form, and historical information can be expressed using a conjugate family of prior distributions. We discuss computational issues, and use the procedure to analyze the reliability of a vehicle system. © 1994 John Wiley & Sons, Inc.
47.
While the traditional solution to the problem of meeting stochastically variable demands for inventory during procurement lead time is through the use of some level of safety stock, several authors have suggested that a decision be made to employ some form of rationing so as to protect certain classes of demands against stockout by restricting issues to other classes. Nahmias and Demmy [10] derived an approximate continuous review model of systems with two demand classes which would permit an inventory manager to calculate the expected fill rates per order cycle for high-priority, low-priority, and total system demands for a variety of parameters. The manager would then choose the rationing policy that most closely approximated his fill-rate objectives. This article describes a periodic review model that permits the manager to establish a discrete-time rationing policy during lead time by prescribing a desired service level for high-priority demands. The reserve levels necessary to meet this level of service can then be calculated based upon the assumed probability distributions of high- and low-priority demands over lead time. The derived reserve levels vary with the amount of lead time remaining. Simulation tests of the model indicate that these reserve levels are more effective than the single reserve level policy studied by Nahmias and Demmy.
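The rationing mechanism itself can be sketched as a critical-level issuing rule over the review periods of a lead time. For simplicity the sketch below holds the reserve level constant, whereas the article's reserve levels vary with remaining lead time; all names and figures are illustrative:

```python
def ration_issue(on_hand, reserve, hi_demands, lo_demands):
    """Critical-level rationing over discrete review periods during lead time.

    High-priority demand is always served from remaining stock; low-priority
    demand is served only down to the reserve level, which protects stock
    for future high-priority demand. Returns units filled per class and the
    stock remaining at the end of the lead time.
    """
    hi_filled = lo_filled = 0
    for hi, lo in zip(hi_demands, lo_demands):
        served_hi = min(hi, on_hand)          # high priority draws freely
        on_hand -= served_hi
        hi_filled += served_hi
        available_lo = max(on_hand - reserve, 0)  # protect the reserve
        served_lo = min(lo, available_lo)
        on_hand -= served_lo
        lo_filled += served_lo
    return hi_filled, lo_filled, on_hand

# Two review periods: low-priority demand is cut off once stock nears the reserve.
result = ration_issue(on_hand=10, reserve=3, hi_demands=[2, 2], lo_demands=[4, 4])
```

Letting the reserve shrink as the lead time runs out, as in the article, releases protected stock that is no longer needed against future high-priority demand.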
48.
The DOD directs that 10% of item cost be used as the cost of capital when calculating inventory holding costs. This 10% figure is not fully justified, and a complete review should be undertaken to bring it to a meaningful and more useful value. The logic currently supporting a 10% cost of capital produces a continuing distortion that forces the Air Force to operate in a less than efficient mode when using the economic order quantity for consumable purchases.
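The sensitivity at issue is visible directly in the standard EOQ formula: scaling the cost-of-capital rate scales the optimal lot size and the minimum variable cost by the square root of that factor. A quick check, with illustrative cost figures:

```python
import math

def eoq(demand, order_cost, unit_cost, capital_rate):
    """Economic order quantity and total variable (ordering + holding) cost
    when holding cost is capital_rate times item cost per period."""
    h = capital_rate * unit_cost
    q = math.sqrt(2 * demand * order_cost / h)
    return q, demand * order_cost / q + h * q / 2

# Doubling the assumed cost of capital from 10% to 20% divides the optimal
# order quantity by sqrt(2) and multiplies the minimum variable cost by sqrt(2).
q10, c10 = eoq(1200, 30.0, 25.0, 0.10)
q20, c20 = eoq(1200, 30.0, 25.0, 0.20)
```

So a cost-of-capital figure that is off by a factor of two pushes every consumable buy roughly 40% away from its true optimal lot size, which is the distortion the article objects to.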
49.
John W. Chinneck, Naval Research Logistics, 1992, 39(4): 531-543
Nonviable network models have edges which are forced to zero flow simply by the pattern of interconnection of the nodes. The original nonviability diagnosis algorithm [4] is extended here to cover all classes of network models, including pure, generalized, pure processing, nonconserving processing, and generalized processing. The extended algorithm relies on the conversion of all network forms to a pure processing form. Efficiency improvements to the original algorithm are also presented.
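The structural idea can be sketched for the simplest (pure network) case with two reachability passes: an edge can carry positive flow only if its tail is reachable from some supply node and some demand node is reachable from its head. This is an illustrative simplification of the concept, not the article's extended algorithm, which handles generalized and processing networks:

```python
from collections import defaultdict, deque

def nonviable_edges(edges, supplies, demands):
    """Flag directed edges that can never carry positive flow purely
    because of the interconnection pattern of the nodes."""
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)

    def reach(starts, adj):
        seen, queue = set(starts), deque(starts)
        while queue:
            n = queue.popleft()
            for m in adj[n]:
                if m not in seen:
                    seen.add(m)
                    queue.append(m)
        return seen

    from_supply = reach(supplies, fwd)  # nodes fed by some supply
    to_demand = reach(demands, rev)     # nodes that can feed some demand
    return [(u, v) for u, v in edges
            if u not in from_supply or v not in to_demand]

# Edge ("b","a") has no supply behind it; edge ("a","c") leads nowhere useful.
bad = nonviable_edges([("s", "a"), ("a", "t"), ("b", "a"), ("a", "c")], ["s"], ["t"])
```

For generalized networks and processing nodes with fixed flow proportions, reachability alone is not sufficient, which is precisely why the conversion to a pure processing form is needed in the article.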
50.
We present a new approach for inference from accelerated life tests. Our approach is based on a dynamic general linear model setup which arises naturally from the accelerated life-testing problem and uses linear Bayesian methods for inference. The advantage of the procedure is that it does not require large numbers of items to be tested and that it can deal with both censored and uncensored data. We illustrate the use of our approach with some actual accelerated life-test data. © 1992 John Wiley & Sons, Inc.