71.
Degradation experiments are widely used to assess the reliability of highly reliable products that are unlikely to fail under traditional life tests. To conduct a degradation experiment efficiently, several factors, such as the inspection frequency, the sample size, and the termination time, need to be considered carefully. These factors affect not only the experimental cost but also the precision of the estimate of a product's lifetime. In this paper, we deal with the optimal design of a degradation experiment. Under the constraint that the total experimental cost does not exceed a predetermined budget, the optimal decision variables are obtained by minimizing the variance of the estimated 100pth percentile of the product's lifetime distribution. An example is provided to illustrate the proposed method. Finally, a simulation study is conducted to investigate the robustness of the proposed method. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 689–706, 1999
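As a rough illustration of the kind of search such a design problem involves, here is a minimal sketch of budget-constrained test planning. The linear degradation model, the cost structure, and all parameter values are assumptions made for illustration, not the paper's.

```python
# A minimal sketch of budget-constrained degradation-test planning. Assumed
# model (not the paper's): linear degradation y(t) = theta*t + noise with a
# random slope theta ~ N(MU, SIGMA^2); a unit fails when y crosses the
# threshold D, so its lifetime is D/theta. Costs and budget are illustrative.
import numpy as np

rng = np.random.default_rng(0)
MU, SIGMA, NOISE_SD, D = 2.0, 0.4, 0.5, 100.0
C_UNIT, C_MEAS, C_TIME, BUDGET = 50.0, 2.0, 1.0, 800.0
Z90 = 1.2816  # 10th percentile of D/theta = D / (90th percentile of theta)

def percentile_variance(n, k, tau, reps=200):
    """Monte-Carlo variance of the estimated 10th lifetime percentile under a
    design with n units, k equally spaced inspections, termination time tau."""
    times = np.linspace(tau / k, tau, k)
    est = np.empty(reps)
    for r in range(reps):
        theta = rng.normal(MU, SIGMA, size=n)
        y = theta[:, None] * times + rng.normal(0, NOISE_SD, size=(n, k))
        slopes = y @ times / (times @ times)     # per-unit least-squares slope
        est[r] = D / (slopes.mean() + Z90 * slopes.std(ddof=1))
    return est.var(ddof=1)

best = None
for n in range(5, 21, 5):
    for k in (4, 8, 12):
        for tau in (10.0, 20.0, 30.0):
            cost = C_UNIT * n + C_MEAS * n * k + C_TIME * tau
            if cost <= BUDGET:                   # budget constraint
                v = percentile_variance(n, k, tau)
                if best is None or v < best[0]:
                    best = (v, n, k, tau, cost)
print("best (variance, n, inspections, tau, cost):", best)
```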
72.
This paper introduces a general or “distribution‐free” model to analyze the lifetime of components under accelerated life testing. Unlike accelerated failure time (AFT) models, the proposed model shares the advantage of being “distribution‐free” with the proportional hazard (PH) model, while overcoming the PH model's deficiency of not allowing survival curves corresponding to different values of a covariate to cross. In this research, we extend and modify the extended hazard regression (EHR) model using the partial likelihood function to analyze failure data with time‐dependent covariates. The new model can easily be adapted to create an accelerated life testing model with different types of stress loading. For example, the stress loading in accelerated life testing can be a step, cyclic, or linear function of time. These types of stress loading reduce the testing time and increase the number of failures of components under test. The proposed EHR model with time‐dependent covariates, which incorporates multiple stress loadings, requires further verification. Therefore, we conduct an accelerated life test in the laboratory by subjecting components to time‐dependent stresses, and we compare the reliability estimates based on the developed model with those obtained from the experimental results. The combination of theoretical development of the accelerated life testing model and verification by laboratory experiments offers a unique perspective on reliability model building and verification. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 303–321, 1999
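To suggest how a time-dependent stress can enter a hazard of this family, the sketch below evaluates an EHR-type hazard of the form h(t | z(t)) = h0(t·exp(γ·z(t)))·exp(β·z(t)) under a step-stress profile and numerically integrates it into a survival curve. The functional form is a plausible EHR-style specification, and the Weibull baseline and every parameter value are illustrative assumptions rather than the paper's fitted model.

```python
# A minimal sketch of an EHR-type hazard with a time-dependent step stress:
# h(t | z(t)) = h0(t * exp(GAMMA * z(t))) * exp(BETA * z(t)). The Weibull
# baseline and all parameter values below are illustrative assumptions.
import numpy as np

K, LAM = 1.5, 100.0        # assumed Weibull baseline shape and scale
BETA, GAMMA = 0.8, 0.3     # assumed multiplicative and time-scale effects

def h0(t):
    """Weibull baseline hazard."""
    return (K / LAM) * (t / LAM) ** (K - 1)

def z_step(t):
    """Step-stress loading: stress raised at t = 50 and again at t = 100."""
    return np.where(t < 50, 0.0, np.where(t < 100, 1.0, 2.0))

def hazard(t):
    z = z_step(t)
    return h0(t * np.exp(GAMMA * z)) * np.exp(BETA * z)

# Survival S(t) = exp(-cumulative hazard), by trapezoidal integration.
t = np.linspace(1e-6, 200.0, 4001)
hz = hazard(t)
H = np.concatenate(([0.0], np.cumsum(0.5 * (hz[1:] + hz[:-1]) * np.diff(t))))
print("S(50), S(100), S(150):",
      np.interp([50, 100, 150], t, np.exp(-H)).round(4))
```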
73.
In this paper, we present an O(nm log(U/n)) time maximum flow algorithm, where U denotes the largest arc capacity. If U = O(n), the algorithm runs in O(nm) time for all values of m and n, which is the best available running time for maximum flow problems satisfying U = O(n). Furthermore, for unit-capacity networks the algorithm runs in O(n^{2/3}m) time. It is a two‐phase capacity scaling algorithm that is easy to implement and does not use complex data structures. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 511–520, 2000
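For readers unfamiliar with capacity scaling, the following is a sketch of the textbook variant of the idea: augment only along arcs whose residual capacity is at least a threshold Δ, then halve Δ. The paper's two-phase algorithm refines this scheme to reach the stated bound and is not reproduced here.

```python
# A minimal sketch of textbook capacity scaling for maximum flow. This is the
# classical scheme the paper builds on, not the paper's two-phase algorithm.
from collections import deque

def max_flow(n, edges, s, t):
    """n nodes (0..n-1); edges = [(u, v, cap)]. Returns the max s-t flow."""
    cap = [[0] * n for _ in range(n)]        # dense residual capacities
    for u, v, c in edges:
        cap[u][v] += c
    U = max((c for _, _, c in edges), default=0)
    delta = 1
    while delta * 2 <= U:                    # largest power of 2 <= U
        delta *= 2
    flow = 0
    while delta >= 1:
        while True:
            # BFS using only arcs with residual capacity >= delta
            parent = [-1] * n
            parent[s] = s
            q = deque([s])
            while q and parent[t] == -1:
                u = q.popleft()
                for v in range(n):
                    if parent[v] == -1 and cap[u][v] >= delta:
                        parent[v] = u
                        q.append(v)
            if parent[t] == -1:              # no delta-path: halve delta
                break
            v = t
            while v != s:                    # augment delta units along path
                u = parent[v]
                cap[u][v] -= delta
                cap[v][u] += delta
                v = u
            flow += delta
        delta //= 2
    return flow

print(max_flow(4, [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)],
               0, 3))                         # prints 5
```

Scaling ensures each phase needs few augmentations, since the residual flow remaining at the start of a phase is bounded in terms of Δ.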
74.
Reliability data obtained from life tests and degradation tests have been used extensively for purposes such as estimating product reliability and predicting warranty costs. When there is more than one candidate model, an important task is to discriminate between the models. In the literature, model discrimination has often been treated as a hypothesis test, carried out by a pairwise model discrimination procedure. Because the null distribution of the test statistic is unavailable in most cases, large-sample approximation and the bootstrap have frequently been used to find the acceptance region of the test. Although these two methods are asymptotically accurate, their performance in terms of size and power is not satisfactory when the sample size is small. To improve the small‐sample performance, we propose a new method to approximate the null distribution, which builds on the idea of generalized pivots. Conventionally, generalized pivots have been used for interval estimation of a parameter, or a function of parameters, in the presence of nuisance parameters. In this study, we extend the idea of generalized pivots to find the acceptance region of the model discrimination test. Through extensive simulations, we show that the proposed method performs better than existing methods in discriminating between two lifetime distributions or two degradation models over a wide range of sample sizes. Two real examples are used to illustrate the proposed methods.
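For context, the sketch below implements the parametric-bootstrap baseline that the abstract contrasts with (the generalized-pivot construction itself is more involved): discriminating a Weibull null against a lognormal alternative via the log-likelihood ratio. The distributions, sample size, and replication count are illustrative choices, not the paper's.

```python
# A minimal sketch of pairwise model discrimination by parametric bootstrap:
# H0 "data are Weibull" vs. a lognormal alternative, using the log-likelihood
# ratio as the test statistic. All settings below are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def llr(x):
    """Log-likelihood ratio: lognormal fit minus Weibull fit (both loc=0)."""
    c, _, s = stats.weibull_min.fit(x, floc=0)
    sh, _, sc = stats.lognorm.fit(x, floc=0)
    return (stats.lognorm.logpdf(x, sh, 0, sc).sum()
            - stats.weibull_min.logpdf(x, c, 0, s).sum())

x = stats.weibull_min.rvs(2.0, scale=10.0, size=30, random_state=rng)
t_obs = llr(x)

# Approximate the null distribution by resampling from the fitted Weibull.
c, _, s = stats.weibull_min.fit(x, floc=0)
t_boot = np.array([
    llr(stats.weibull_min.rvs(c, scale=s, size=len(x), random_state=rng))
    for _ in range(500)
])
p_value = np.mean(t_boot >= t_obs)
print(f"T_obs = {t_obs:.2f}, bootstrap p-value = {p_value:.3f}")
```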
75.
We deal with the problem of minimizing makespan on a single batch processing machine. In this problem, each job has both a processing time and a size (capacity requirement). The batch processing machine can process a number of jobs simultaneously as long as the total size of the jobs being processed does not exceed the machine capacity. The processing time of a batch is the processing time of the longest job in the batch. An approximation algorithm with worst‐case ratio 3/2 is given for the version in which the processing times of large jobs (with sizes greater than 1/2) are not less than those of small jobs (with sizes not greater than 1/2). This result is the best possible unless P = NP. For the general case, we propose an approximation algorithm with worst‐case ratio 7/4. A number of heuristics by Uzsoy are also analyzed and compared. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 226–240, 2001
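The sketch below implements a simple longest-processing-time, first-fit batching heuristic of the kind analyzed in this literature; it is an illustration of the problem structure, not the paper's 3/2- or 7/4-approximation algorithm, and the job data are made up.

```python
# A minimal sketch of a longest-processing-time first-fit batching heuristic
# for makespan on a single batch processing machine (illustrative only).
def batch_makespan(jobs, capacity=1.0):
    """jobs: list of (processing_time, size). A batch's processing time is
    that of its longest job; the batch's total size must fit the capacity."""
    batches = []                               # each batch: [remaining_cap, p_max]
    for p, sz in sorted(jobs, reverse=True):   # longest jobs first
        for b in batches:
            if b[0] >= sz:                     # first batch the job fits into
                b[0] -= sz
                break
        else:
            # New batch; p is its longest processing time, since jobs arrive
            # in nonincreasing order of p.
            batches.append([capacity - sz, p])
    return sum(b[1] for b in batches)

jobs = [(8, 0.6), (7, 0.3), (5, 0.5), (4, 0.4), (3, 0.2), (2, 0.7)]
print("heuristic makespan:", batch_makespan(jobs))   # prints 16
```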
76.
We consider a container terminal discharging containers from a ship and locating them in the terminal yard. Each container has a number of potential locations in the yard where it can be stored. Containers are moved from the ship to the yard using a fleet of vehicles, each of which can carry one container at a time. The problem is to assign each container to a yard location and dispatch vehicles to the containers so as to minimize the time it takes to unload all the containers from the ship. We show that the problem is NP‐hard and develop a heuristic algorithm based on formulating the problem as an assignment problem. The effectiveness of the heuristic is analyzed from both worst‐case and computational points of view. © 2001 John Wiley & Sons, Inc. Naval Research Logistics 48: 363–385, 2001
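The following sketch shows the assignment-problem core of such a heuristic: containers are matched to feasible yard locations so the summed vehicle cycle times are minimized, via the Hungarian method in SciPy. The cost matrix is made up, and the paper's vehicle-dispatching layer around this step is omitted.

```python
# A minimal sketch of the assignment step: match containers to yard locations
# at minimum total vehicle cycle time. Data are illustrative; infeasible
# container-location pairs get a prohibitive cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e6                                    # forbids infeasible pairings
# cycle_time[i][j]: round-trip time if container i goes to location j
cycle_time = np.array([
    [12.0,  9.0,  BIG],
    [ 8.0,  BIG, 11.0],
    [BIG,  10.0,  7.0],
])
rows, cols = linear_sum_assignment(cycle_time)
for i, j in zip(rows, cols):
    print(f"container {i} -> location {j} (cycle time {cycle_time[i, j]})")
print("total cycle time:", cycle_time[rows, cols].sum())   # 24.0
```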
77.
We consider scheduling problems involving two agents (agents A and B), each having a set of jobs that compete for the use of a common machine. The due dates of the A‐jobs are decision variables, determined using the common (CON) or slack (SLK) due date assignment method. Each agent wants to minimize a certain performance criterion that depends on the completion times of its jobs only. Under each due date assignment method, the criterion of agent A is the same, namely an integrated criterion consisting of the due date assignment cost and the weighted number of tardy jobs. Several different criteria are considered for agent B, including the maxima of regular functions (associated with each job), the total (weighted) completion time, and the weighted number of tardy jobs. The overall objective is to minimize the performance criterion of agent A, while keeping the objective value of agent B no greater than a given limit. We analyze the computational complexity of the considered problems and devise polynomial or pseudo‐polynomial dynamic programming algorithms for them. We also convert, where viable, the devised pseudo‐polynomial dynamic programming algorithms into fully polynomial‐time approximation schemes. © 2016 Wiley Periodicals, Inc. Naval Research Logistics 63: 416–429, 2016
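To make the CON ingredient concrete, the sketch below evaluates agent A's integrated criterion for a fixed job sequence: a cost of alpha per unit of the assigned common due date d, plus the weighted number of tardy A-jobs, minimized over the candidate due dates (0 and the A-jobs' completion times, since the cost only improves at those breakpoints). The instance and cost rate are illustrative, and the paper's algorithms optimize over sequences as well.

```python
# A minimal sketch of agent A's CON criterion for a fixed sequence: cost
# alpha*d for the assigned common due date d plus the weighted number of
# tardy A-jobs. All data are illustrative.
def best_con_due_date(sequence, alpha):
    """sequence: list of (agent, proc_time, weight). Returns (d*, A's cost)."""
    comps, t = [], 0
    for agent, p, w in sequence:
        t += p
        comps.append((agent, t, w))
    # An optimal d lies at 0 or at an A-job completion time: between such
    # breakpoints the cost alpha*d increases while no job stops being tardy.
    candidates = [0] + [c for a, c, _ in comps if a == 'A']
    def cost(d):
        return alpha * d + sum(w for a, c, w in comps if a == 'A' and c > d)
    return min(((d, cost(d)) for d in candidates), key=lambda x: x[1])

seq = [('A', 2, 5), ('B', 3, 0), ('A', 4, 3), ('A', 1, 6)]
print(best_con_due_date(seq, alpha=0.4))      # prints (10, 4.0)
```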
78.
We present the green telecommunication network planning problem with switchable base stations, in which the location and configuration of the base stations are optimized while taking into account the uncertainty and variability of demand. The problem is formulated as a two‐stage stochastic program under demand uncertainty with integer variables in both stages. Since solving this problem is computationally challenging, we develop the corresponding Dantzig‐Wolfe reformulation and propose a solution approach based on column generation. Comprehensive computational results are provided for instances with varying characteristics. The results show that the joint location and dynamic switching of base stations leads to significant savings in energy cost: up to a 30% reduction in power consumption cost is achieved while still serving all users. In certain cases, allowing dynamic configurations leads to more installed base stations and higher user coverage, yet lower total energy consumption. The Dantzig‐Wolfe reformulation provides solutions with a tight LP gap, eliminating the need for a full branch‐and‐price scheme. Furthermore, the proposed column generation approach is computationally efficient and outperforms CPLEX on the majority of the tested instances. © 2016 Wiley Periodicals, Inc. Naval Research Logistics 63: 351–366, 2016
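The toy sketch below illustrates the two-stage structure on a tiny instance: installation is decided in the first stage, the on/off configuration is chosen per demand scenario in the second stage, and all active users must be covered. The data are made up, and brute-force enumeration stands in for the paper's Dantzig-Wolfe/column-generation machinery.

```python
# A toy sketch of two-stage planning with switchable base stations: stage 1
# installs stations, stage 2 switches installed stations on/off per demand
# scenario while covering all active users. All data are illustrative.
from itertools import product

COVER = [{0, 1}, {1, 2}, {2, 3}]                  # users reachable per station
INSTALL, ENERGY = [10.0, 8.0, 10.0], [3.0, 2.0, 3.0]
SCENARIOS = [({0, 1, 2, 3}, 0.3), ({1, 2}, 0.7)]  # (active users, probability)
M = len(COVER)

def min_energy(installed, users):
    """Cheapest on/off configuration of installed stations covering users."""
    best = None
    for on in product([0, 1], repeat=M):
        if any(on[i] and not installed[i] for i in range(M)):
            continue                              # can't switch on uninstalled
        covered = set().union(*[COVER[i] for i in range(M) if on[i]])
        if users <= covered:
            e = sum(ENERGY[i] for i in range(M) if on[i])
            best = e if best is None else min(best, e)
    return best

best = None
for inst in product([0, 1], repeat=M):
    per_scenario = [min_energy(inst, users) for users, _ in SCENARIOS]
    if any(e is None for e in per_scenario):
        continue                                  # some scenario uncovered
    total = sum(INSTALL[i] for i in range(M) if inst[i]) \
        + sum(p * e for (_, p), e in zip(SCENARIOS, per_scenario))
    if best is None or total < best[0]:
        best = (total, inst)
print("optimal (expected cost, installed stations):", best)
```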
79.
In this article, we study a two‐level lot‐sizing problem with supplier selection (LSS), an NP‐hard problem arising in various production planning and supply chain management applications. After presenting several formulations for LSS and computationally comparing their strengths, we explore the polyhedral structure of one of them. For this formulation, we derive several families of strong valid inequalities and provide conditions under which they are facet‐defining. We show numerically that incorporating these valid inequalities within a branch‐and‐cut framework leads to significant improvements in computation. © 2016 Wiley Periodicals, Inc. Naval Research Logistics 63: 647–666, 2017
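As background for the problem class, here is a sketch of a heavily simplified special case: a single-level, uncapacitated relaxation with a per-order supplier choice, solved by a Wagner-Whitin-style dynamic program. The data are illustrative, and the paper's two-level polyhedral results go well beyond this zero-inventory-ordering setting.

```python
# A minimal sketch of a single-level, uncapacitated lot-sizing relaxation
# with supplier selection, solved by a Wagner-Whitin-style DP. Demands,
# supplier costs, and the holding rate are illustrative.
demand = [20, 30, 10, 40, 25]
suppliers = [        # (fixed order cost, unit cost) per supplier
    (90, 2.0),
    (40, 3.5),
]
HOLD = 0.5           # per-unit, per-period holding cost
T = len(demand)
INF = float("inf")

# f[t] = minimum cost to satisfy the demand of periods t..T-1
f = [INF] * (T + 1)
f[T] = 0.0
for t in range(T - 1, -1, -1):
    for j in range(t + 1, T + 1):    # an order in t covers periods t..j-1
        qty = sum(demand[t:j])
        hold = sum(HOLD * demand[k] * (k - t) for k in range(t, j))
        order = min(K + c * qty for K, c in suppliers)  # best supplier
        f[t] = min(f[t], order + hold + f[j])
print("optimal cost:", f[0])
```

The nested loops mirror the zero-inventory-ordering property: in this uncapacitated special case, an optimal plan orders only in periods with zero entering inventory.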
80.
We consider the problem of finding the system with the best primary performance measure among a finite number of simulated systems in the presence of a stochastic constraint on a single real‐valued secondary performance measure. Solving this problem requires identifying and removing from consideration the infeasible systems (Phase I) and the systems whose primary performance measure is dominated by that of other feasible systems (Phase II). We use indifference zones in both phases and consider two approaches, namely carrying out Phases I and II sequentially and carrying them out simultaneously, and we provide specific example procedures of each type. We present theoretical results guaranteeing that our approaches (general and specific, sequential and simultaneous) yield the best system with at least a prespecified probability, and we provide a portion of an extensive numerical study aimed at evaluating and comparing the performance of our approaches. The experimental results show that both new procedures are useful for constrained ranking and selection, with neither procedure showing uniform superiority over the other. © 2010 Wiley Periodicals, Inc. Naval Research Logistics, 2010
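The sketch below mimics the Phase I / Phase II structure on four simulated systems: screen out systems whose secondary measure appears to violate the constraint, then select the best primary measure among the survivors. The fixed sample size and naive screening carry none of the paper's probability guarantees, and all system parameters are made up.

```python
# A minimal sketch of constrained selection of the best: Phase I screens on
# the secondary (constrained) measure, Phase II selects on the primary
# measure. Fixed sample sizes and a crude tolerance stand in for the paper's
# indifference-zone procedures; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
PRIMARY_MEAN = [1.0, 1.4, 1.2, 1.6]      # larger is better
SECONDARY_MEAN = [0.8, 0.9, 1.3, 1.2]    # constraint: secondary <= Q
Q, EPS, N = 1.0, 0.05, 400               # limit, tolerance, samples per system

samples_p = [rng.normal(m, 1.0, N) for m in PRIMARY_MEAN]
samples_s = [rng.normal(m, 1.0, N) for m in SECONDARY_MEAN]

# Phase I: declare feasible if the secondary sample mean is within Q + EPS.
feasible = [i for i in range(4) if samples_s[i].mean() <= Q + EPS]
# Phase II: select the best primary sample mean among feasible systems.
best = max(feasible, key=lambda i: samples_p[i].mean())
print("feasible systems:", feasible, "| selected:", best)
```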