412 search results.
111.
A single machine is available to process a collection of stochastic jobs. There may be technological constraints on the job set. The machine sometimes breaks down. Costs are incurred and rewards are earned during processing. We seek strategies for processing the jobs which maximize the total expected reward earned.
112.
A model is developed which may be used to determine the expected total cost of quality control per inspection lot under acceptance sampling by variables where several characteristics are to be simultaneously controlled. Optimization of the model is accomplished through the application of a conventional search procedure. The sensitivity of the model and the optimum solution to the shape of the underlying probability distributions is discussed and associated analyses are presented through an example.
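The abstract's multi-characteristic cost model is not reproduced here; the sketch below is only a minimal single-characteristic, known-sigma stand-in showing how an expected-cost function over variables sampling plans (n, k) can be minimized by a plain grid search, in the spirit of the "conventional search procedure" mentioned above. All values (lot size, unit costs, the two assumed process settings) are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameters (assumptions, not taken from the article).
N = 1000                              # lot size
U = 10.0                              # upper specification limit
SIGMA = 1.0                           # known process standard deviation
LOTS = [(8.0, 0.8), (9.5, 0.2)]       # (process mean, probability): good / bad lots
C_INSPECT, C_DEFECT, C_REJECT = 0.5, 20.0, 1.0   # per-item costs

def expected_cost(n, k):
    """Expected total cost per lot of the variables plan (n, k):
    sample n items and accept the lot if xbar <= U - k * SIGMA."""
    cost = n * C_INSPECT
    for mu, w in LOTS:
        p_def = 1.0 - norm.cdf((U - mu) / SIGMA)                       # fraction nonconforming
        p_acc = norm.cdf((U - k * SIGMA - mu) / (SIGMA / np.sqrt(n)))  # acceptance probability
        cost += w * (N - n) * (p_acc * p_def * C_DEFECT + (1 - p_acc) * C_REJECT)
    return cost

def search_plan():
    """Plain grid search over (n, k), standing in for a conventional search procedure."""
    grid = [(n, k) for n in range(2, 101) for k in np.linspace(0.0, 3.0, 61)]
    return min(grid, key=lambda nk: expected_cost(*nk))

if __name__ == "__main__":
    n_opt, k_opt = search_plan()
    print(f"n* = {n_opt}, k* = {k_opt:.2f}, expected cost = {expected_cost(n_opt, k_opt):.1f}")
```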
113.
Degradation experiments are widely used to assess the reliability of highly reliable products which are not likely to fail under the traditional life tests. In order to conduct a degradation experiment efficiently, several factors, such as the inspection frequency, the sample size, and the termination time, need to be considered carefully. These factors not only affect the experimental cost, but also affect the precision of the estimate of a product's lifetime. In this paper, we deal with the optimal design of a degradation experiment. Under the constraint that the total experimental cost does not exceed a predetermined budget, the optimal decision variables are solved by minimizing the variance of the estimated 100pth percentile of the lifetime distribution of the product. An example is provided to illustrate the proposed method. Finally, a simulation study is conducted to investigate the robustness of this proposed method. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 689–706, 1999
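The paper's design optimization is analytical; the following is only a simulation-based stand-in that conveys the same trade-off. It assumes a simplified linear degradation path with lognormal random slopes and invented cost coefficients and budget, and it picks, among the designs that fit the budget, the one whose Monte Carlo variance of the estimated 10th lifetime percentile is smallest.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative model and cost parameters (assumptions, not from the paper).
D = 10.0                                # failure threshold on the degradation scale
MU_B, SIG_B, SIG_E = -3.0, 0.4, 0.3     # lognormal slope parameters, measurement-error sd
P = 0.10                                # target: the 10th percentile of lifetime
C_UNIT, C_MEAS, C_TIME = 30.0, 2.0, 1.0
BUDGET = 1500.0

def percentile_estimate(n, freq, n_insp):
    """One simulated experiment: n units inspected every `freq` hours for
    `n_insp` inspections, linear degradation with lognormal random slopes.
    Returns the estimated 100P-th lifetime percentile."""
    times = freq * np.arange(1, n_insp + 1)
    slopes = rng.lognormal(MU_B, SIG_B, size=n)
    y = slopes[:, None] * times + rng.normal(0.0, SIG_E, size=(n, n_insp))
    b_hat = (y * times).sum(axis=1) / (times ** 2).sum()    # per-unit LS slope (no intercept)
    log_life = np.log(D / np.maximum(b_hat, 1e-9))           # log pseudo-lifetime to cross D
    return np.exp(log_life.mean() + log_life.std(ddof=1) * norm.ppf(P))

def optimal_design(n_rep=200):
    """Grid search over designs that fit the budget, choosing the one with
    the smallest Monte Carlo variance of the percentile estimate."""
    best = None
    for n in (5, 10, 15, 20, 25):
        for freq in (10.0, 20.0, 40.0):
            for n_insp in (5, 10, 20):
                cost = n * C_UNIT + n * n_insp * C_MEAS + freq * n_insp * C_TIME
                if cost > BUDGET:
                    continue
                ests = [percentile_estimate(n, freq, n_insp) for _ in range(n_rep)]
                var = np.var(ests, ddof=1)
                if best is None or var < best[0]:
                    best = (var, n, freq, n_insp, cost)
    return best

if __name__ == "__main__":
    var, n, freq, n_insp, cost = optimal_design()
    print(f"design: n={n}, interval={freq}, inspections={n_insp}, "
          f"cost={cost:.0f}, Var(percentile estimate)={var:.2f}")
```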
114.
This paper introduces a general or “distribution‐free” model to analyze the lifetime of components under accelerated life testing. Unlike the accelerated failure time (AFT) models, the proposed model shares the advantage of being “distribution‐free” with the proportional hazard (PH) model and overcomes the deficiency of the PH model not allowing survival curves corresponding to different values of a covariate to cross. In this research, we extend and modify the extended hazard regression (EHR) model using the partial likelihood function to analyze failure data with time‐dependent covariates. The new model can be easily adopted to create an accelerated life testing model with different types of stress loading. For example, stress loading in accelerated life testing can be a step function, cyclic, or linear function with time. These types of stress loadings reduce the testing time and increase the number of failures of components under test. The proposed EHR model with time‐dependent covariates which incorporates multiple stress loadings requires further verification. Therefore, we conduct an accelerated life test in the laboratory by subjecting components to time‐dependent stresses, and we compare the reliability estimation based on the developed model with that obtained from experimental results. The combination of the theoretical development of the accelerated life testing model verified by laboratory experiments offers a unique perspective to reliability model building and verification. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 303–321, 1999
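As a small illustration of one ingredient mentioned here, time-varying (step) stress loading, the sketch below simulates failure times under a piecewise-constant hazard by inverse cumulative-hazard sampling. The hazard form, stress pattern, and parameter values are assumptions made for illustration; this is not the paper's EHR model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed step-stress pattern and hazard model: baseline exponential hazard
# scaled by exp(BETA * s(t)), where s(t) is a step function of time.
STEPS = [(0.0, 1.0), (100.0, 1.5), (200.0, 2.0)]   # (segment start time, stress level)
LAMBDA0, BETA = 0.002, 1.2

def hazard(t):
    """Hazard rate at time t under the current step-stress level."""
    level = max(s for start, s in STEPS if t >= start)
    return LAMBDA0 * np.exp(BETA * level)

def sample_failure_time():
    """Inverse cumulative-hazard sampling under the piecewise-constant hazard:
    the unit fails when the accumulated hazard reaches an Exp(1) draw."""
    target = rng.exponential(1.0)
    h_acc = 0.0
    for i, (start, _) in enumerate(STEPS):
        end = STEPS[i + 1][0] if i + 1 < len(STEPS) else np.inf
        rate = hazard(start)
        if h_acc + rate * (end - start) >= target:
            return start + (target - h_acc) / rate
        h_acc += rate * (end - start)

if __name__ == "__main__":
    times = np.array([sample_failure_time() for _ in range(1000)])
    print(f"mean simulated failure time under step stress ~ {times.mean():.1f}")
```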
115.
In this article we consider two versions of two-on-two homogeneous stochastic combat and develop expressions, in each case, for the state probabilities. The models are natural generalizations of the exponential Lanchester square law model. In the first version, a marksman whose target is killed resumes afresh the killing process on a surviving target; in the second version, the marksman whose target is killed merely uses up his remaining time to a kill on a surviving target. Using the state probabilities we then compute such important combat measures as (1) the mean and variance of the number of survivors as they vary with time for each of the sides, (2) the win probabilities for each of the sides, and (3) the mean and variance of the battle duration time. As an application, computations were made for the specific case of a gamma (2) interfiring time random variable for each side and the above combat measures were compared with the appropriate exponential and deterministic Lanchester square law approximations. The latter two are shown to be very poor approximations in this case.
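The article derives exact state probabilities; the sketch below only estimates two of the listed combat measures (the win probability of one side and the mean battle duration) by Monte Carlo for the exponential, square-law case, with assumed kill rates.

```python
import random

def simulate_battle(alpha=1.0, beta=0.8, n_a=2, n_b=2):
    """One two-on-two engagement under exponential (square-law) aimed fire.
    alpha and beta are per-marksman kill rates for sides A and B
    (illustrative values, not taken from the article)."""
    a, b, t = n_a, n_b, 0.0
    while a > 0 and b > 0:
        rate_a, rate_b = a * alpha, b * beta   # every survivor fires
        total = rate_a + rate_b
        t += random.expovariate(total)          # time to the next kill
        if random.random() < rate_a / total:
            b -= 1                               # side A scores the kill
        else:
            a -= 1
    return a, b, t

def estimate_measures(n_runs=100_000, seed=1):
    """Monte Carlo estimates of P(A wins) and the mean battle duration."""
    random.seed(seed)
    wins_a, total_time = 0, 0.0
    for _ in range(n_runs):
        a, b, t = simulate_battle()
        wins_a += (b == 0)
        total_time += t
    return wins_a / n_runs, total_time / n_runs

if __name__ == "__main__":
    p_a, mean_duration = estimate_measures()
    print(f"P(A wins) ~ {p_a:.3f}, mean battle duration ~ {mean_duration:.3f}")
```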
116.
In this article we deal with the shortest queue model with jockeying. We assume that the arrivals are Poisson, each of the exponential servers has its own queue, and jockeying among the queues is permitted. Explicit solutions for the equilibrium probabilities, the expected number of customers, and the expected waiting time of a customer in the system are given; they depend only on the traffic intensity. Numerical results can be easily obtained from our solutions. Several examples are provided in the article.
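A discrete-event simulation of the model (two exponential servers, arrivals join the shorter line, and the tail customer of the longer line jockeys as soon as the line lengths differ by two) can be used to check such closed-form results numerically. The sketch below uses illustrative arrival and service rates.

```python
import heapq
import random

def simulate_jockeying(lam=1.5, mu=1.0, n_customers=100_000, seed=7):
    """Poisson(lam) arrivals, two exponential servers (rate mu) each with its
    own line; arrivals join the shorter line, and the customer at the tail of
    the longer line jockeys whenever the line lengths (including the customer
    in service) differ by two.  Traffic intensity is rho = lam / (2 * mu)."""
    random.seed(seed)
    events = [(random.expovariate(lam), "arrival")]
    waiting = [[], []]             # arrival times of waiting customers per line
    busy = [False, False]
    waits, served = [], 0

    def line_len(i):
        return len(waiting[i]) + (1 if busy[i] else 0)

    def start_service(i, now):
        # Pull the head of line i into service if server i is idle.
        if not busy[i] and waiting[i]:
            waits.append(now - waiting[i].pop(0))
            busy[i] = True
            heapq.heappush(events, (now + random.expovariate(mu), f"dep{i}"))

    def rebalance(now):
        # Instantaneous jockeying, then let an idle server start work.
        while abs(line_len(0) - line_len(1)) >= 2:
            longer = 0 if line_len(0) > line_len(1) else 1
            waiting[1 - longer].append(waiting[longer].pop())
            start_service(1 - longer, now)

    while served < n_customers:
        t, kind = heapq.heappop(events)
        if kind == "arrival":
            i = 0 if line_len(0) <= line_len(1) else 1
            waiting[i].append(t)
            start_service(i, t)
            heapq.heappush(events, (t + random.expovariate(lam), "arrival"))
        else:
            i = int(kind[-1])      # departure from server 0 or 1
            busy[i] = False
            served += 1
            start_service(i, t)
            rebalance(t)

    return sum(waits) / len(waits)  # mean waiting time in queue

if __name__ == "__main__":
    print(f"estimated mean wait in queue ~ {simulate_jockeying():.3f}")
```

Because instantaneous jockeying keeps the two lines balanced, no server idles while a customer waits, so the simulated mean wait should agree closely with the corresponding M/M/2 value at the same traffic intensity.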
117.
This article considers a two-person game in which the first player has access to certain information that is valuable but unknown to the second player. The first player can distort the information before it is passed on to the second player. The purpose in distorting the information is to render it as useless as possible to the second player. Based on the distorted information received, the second player then maximizes some given objective. In certain cases he may still be able to use the distorted information, but sometimes the information has been so badly distorted that it becomes completely useless to him. © 1993 John Wiley & Sons, Inc.
118.
In this paper, we present an O(nm log(U/n)) time maximum flow algorithm. If U = O(n) then this algorithm runs in O(nm) time for all values of m and n. This gives the best available running time to solve maximum flow problems satisfying U = O(n). Furthermore, for unit capacity networks the algorithm runs in O(n^{2/3}m) time. It is a two‐phase capacity scaling algorithm that is easy to implement and does not use complex data structures. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 511–520, 2000
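The sketch below shows the basic capacity-scaling idea in its simplest form (augment only along residual paths whose arcs have capacity at least Δ, then halve Δ); it is not the paper's two-phase O(nm log(U/n)) algorithm.

```python
from collections import deque

def max_flow_capacity_scaling(n, edges, s, t):
    """Basic capacity-scaling max flow (illustrative only).
    `edges` is a list of (u, v, capacity) tuples on nodes 0..n-1."""
    graph = [[] for _ in range(n)]   # graph[u] = indices of arcs leaving u
    to, cap = [], []                 # arc heads and residual capacities
    for u, v, c in edges:
        graph[u].append(len(cap)); to.append(v); cap.append(c)
        graph[v].append(len(cap)); to.append(u); cap.append(0)   # reverse arc

    def find_path(delta):
        # BFS restricted to residual arcs with capacity >= delta.
        parent_edge = [-1] * n
        parent_edge[s] = -2
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for e in graph[u]:
                v = to[e]
                if parent_edge[v] == -1 and cap[e] >= delta:
                    parent_edge[v] = e
                    if v == t:
                        return parent_edge
                    queue.append(v)
        return None

    U = max((c for _, _, c in edges), default=0)
    delta = 1
    while delta * 2 <= U:
        delta *= 2
    flow = 0
    while delta >= 1:
        while True:
            parent_edge = find_path(delta)
            if parent_edge is None:
                break
            # Find the bottleneck along the s-t path, then augment.
            bottleneck, v = float("inf"), t
            while v != s:
                e = parent_edge[v]
                bottleneck = min(bottleneck, cap[e])
                v = to[e ^ 1]        # e ^ 1 is the paired reverse arc
            v = t
            while v != s:
                e = parent_edge[v]
                cap[e] -= bottleneck
                cap[e ^ 1] += bottleneck
                v = to[e ^ 1]
            flow += bottleneck
        delta //= 2
    return flow

if __name__ == "__main__":
    # Small check: the max flow from node 0 to node 3 in this network is 5.
    arcs = [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)]
    print(max_flow_capacity_scaling(4, arcs, 0, 3))
```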
119.
Reliability data obtained from life tests and degradation tests have been extensively used for purposes such as estimating product reliability and predicting warranty costs. When there is more than one candidate model, an important task is to discriminate between the models. In the literature, model discrimination was often treated as a hypothesis test and a pairwise model discrimination procedure was carried out. Because the null distribution of the test statistic is unavailable in most cases, the large sample approximation and the bootstrap were frequently used to find the acceptance region of the test. Although these two methods are asymptotically accurate, their performance in terms of size and power is not satisfactory for small sample sizes. To enhance the small‐sample performance, we propose a new method to approximate the null distribution, which builds on the idea of generalized pivots. Conventionally, generalized pivots were often used for interval estimation of a certain parameter or function of parameters in the presence of nuisance parameters. In this study, we further extend the idea of generalized pivots to find the acceptance region of the model discrimination test. Through extensive simulations, we show that the proposed method performs better than the existing methods in discriminating between two lifetime distributions or two degradation models over a wide range of sample sizes. Two real examples are used to illustrate the proposed methods.
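The generalized-pivot construction proposed in this work is not attempted here; the sketch below implements the parametric-bootstrap baseline the abstract refers to, for discriminating between a Weibull and a lognormal lifetime distribution via the difference of maximized log-likelihoods. The sample data and bootstrap settings are illustrative.

```python
import numpy as np
from scipy import stats

def log_lik_ratio(x):
    """Maximized log-likelihood of a Weibull minus that of a lognormal
    (both fitted with location fixed at zero)."""
    wb = stats.weibull_min.fit(x, floc=0)
    ln = stats.lognorm.fit(x, floc=0)
    return (stats.weibull_min.logpdf(x, *wb).sum()
            - stats.lognorm.logpdf(x, *ln).sum())

def bootstrap_discrimination(x, n_boot=500, alpha=0.05, seed=0):
    """Parametric-bootstrap approximation of the null distribution of the
    test statistic under H0: Weibull.  This is the bootstrap baseline, not
    the paper's generalized-pivot procedure."""
    rng = np.random.default_rng(seed)
    c, _, scale = stats.weibull_min.fit(x, floc=0)
    t_obs = log_lik_ratio(x)
    t_null = np.empty(n_boot)
    for b in range(n_boot):
        xb = stats.weibull_min.rvs(c, loc=0, scale=scale,
                                   size=len(x), random_state=rng)
        t_null[b] = log_lik_ratio(xb)
    # Reject H0 (Weibull) in favour of the lognormal if t_obs is unusually small.
    crit = np.quantile(t_null, alpha)
    return t_obs, crit, t_obs < crit

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = stats.lognorm.rvs(0.6, scale=np.exp(1.0), size=60, random_state=rng)
    t_obs, crit, reject = bootstrap_discrimination(data, n_boot=200)
    print(f"T = {t_obs:.2f}, 5% critical value = {crit:.2f}, reject Weibull: {reject}")
```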
120.
We consider the problem of finding the system with the best primary performance measure among a finite number of simulated systems in the presence of a stochastic constraint on a single real‐valued secondary performance measure. Solving this problem requires the identification and removal from consideration of infeasible systems (Phase I) and of systems whose primary performance measure is dominated by that of other feasible systems (Phase II). We use indifference zones in both phases and consider two approaches, namely, carrying out Phases I and II sequentially and carrying out Phases I and II simultaneously, and we provide specific example procedures of each type. We present theoretical results guaranteeing that our approaches (general and specific, sequential and simultaneous) yield the best system with at least a prespecified probability, and we provide a portion of an extensive numerical study aimed at evaluating and comparing the performance of our approaches. The experimental results show that both new procedures are useful for constrained ranking and selection, with neither procedure showing uniform superiority over the other. © 2010 Wiley Periodicals, Inc. Naval Research Logistics, 2010
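A naive sketch of the two-phase structure (Phase I feasibility screening on the secondary measure, Phase II selection of the best primary measure among survivors) is given below. It uses fixed sample sizes and a simple tolerance in place of the paper's indifference-zone procedures, so it carries none of their probability guarantees; all systems and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical systems: (primary mean, secondary mean).  We want the largest
# primary mean subject to secondary mean <= THRESHOLD.
SYSTEMS = [(1.0, 4.0), (1.5, 5.6), (2.0, 4.8), (2.5, 6.5)]
THRESHOLD = 5.0
EPS_FEAS, N_PHASE1, N_PHASE2 = 0.2, 200, 500    # screening tolerance, sample sizes

def simulate(system, n):
    """Stand-in simulation output: noisy observations of both measures."""
    mu1, mu2 = system
    return rng.normal(mu1, 1.0, n), rng.normal(mu2, 1.0, n)

def two_phase_selection():
    # Phase I: screen out systems whose secondary measure looks infeasible,
    # allowing a tolerance EPS_FEAS around the constraint threshold.
    survivors = []
    for idx, system in enumerate(SYSTEMS):
        _, secondary = simulate(system, N_PHASE1)
        if secondary.mean() <= THRESHOLD + EPS_FEAS:
            survivors.append(idx)
    # Phase II: among the surviving systems, pick the best primary measure.
    best_idx, best_val = None, -np.inf
    for idx in survivors:
        primary, _ = simulate(SYSTEMS[idx], N_PHASE2)
        if primary.mean() > best_val:
            best_idx, best_val = idx, primary.mean()
    return best_idx, survivors

if __name__ == "__main__":
    chosen, feasible = two_phase_selection()
    print(f"screened-in systems: {feasible}, selected system: {chosen}")
```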