311.
Particulate composites are widely used to produce state-of-the-art components in the biomedical, automobile, aerospace, and defence industries. A variety of modelling techniques have been adopted in the past to model the mechanical behaviour of particulate composites. Owing to their favourable properties, particle-based methods provide a convenient platform for modelling the failure or fracture of these composites. Smoothed particle hydrodynamics (SPH) is one such method, and it shows excellent potential for modelling failure or fracture of particulate composites in a Lagrangian setting. One of the major challenges in using the SPH method for composite materials is treating interface and boundary conditions accurately and efficiently. In this paper, a master-slave method based on multi-freedom constraints is proposed to impose essential boundary conditions and interfacial displacement constraints when modelling the mechanical behaviour of composite materials with the SPH method. The proposed methodology enforces these constraints more accurately and yields a smaller condition number for the system stiffness matrix than procedures based on the typical penalty-function approach. A minimum cut-off value-based error criterion is employed to improve the computational efficiency of the proposed methodology. In addition, the method is further enhanced by adopting a modified numerical interpolation scheme along the boundary to increase accuracy and computational efficiency. Numerical examples demonstrate that the proposed master-slave approach yields better accuracy in enforcing displacement constraints and requires approximately the same computational time as the penalty method.
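The contrast between master-slave elimination and a penalty formulation can be sketched on a toy linear system. The 4-DOF stiffness matrix, the tied pair of DOFs, and the penalty weight below are illustrative assumptions, not the paper's SPH discretization:

```python
import numpy as np

# Hypothetical 4-DOF spring-chain stiffness matrix; DOF 3 is a "slave"
# tied to "master" DOF 2, mimicking an interfacial displacement constraint.
K = np.array([[ 2., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  2.]])
f = np.array([0., 0., 0., 1.])

# Master-slave elimination: u = T @ u_r with u[3] = u[2] built in.
T = np.array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 1.]])
Kr = T.T @ K @ T                      # reduced, well-conditioned system
u = T @ np.linalg.solve(Kr, T.T @ f)  # full solution satisfies the tie exactly

# Penalty alternative: add alpha * (u[3] - u[2])^2 to the energy.
alpha = 1e8
A = np.array([[0., 0., -1., 1.]])
Kp = K + alpha * (A.T @ A)
up = np.linalg.solve(Kp, f)

print(np.linalg.cond(Kr), np.linalg.cond(Kp))
```

The reduced matrix stays as well conditioned as the original, while the penalty matrix's condition number grows with the penalty weight, which is the trade-off the abstract alludes to.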
312.
We present a shifting bottleneck heuristic for minimizing the total weighted tardiness in a job shop. The method decomposes the job shop into a number of single‐machine subproblems that are solved one after another. Each machine is scheduled according to the solution of its corresponding subproblem. The order in which the single machine subproblems are solved has a significant impact on the quality of the overall solution and on the time required to obtain this solution. We therefore test a number of different orders for solving the subproblems. Computational results on 66 instances with ten jobs and ten machines show that our heuristic yields solutions that are close to optimal, and it clearly outperforms a well‐known dispatching rule enhanced with backtracking mechanisms. © 1999 John Wiley & Sons, Inc. Naval Research Logistics 46: 1–17, 1999
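To make the objective concrete: the total weighted tardiness of a sequence is the sum of w_j * max(0, C_j - d_j). The sketch below evaluates it on a hypothetical four-job single-machine instance (data invented for illustration) and brute-forces the optimum, which is viable only at toy sizes; the subproblems inside the heuristic require dedicated methods:

```python
from itertools import permutations

# Hypothetical jobs: (processing time, due date, tardiness weight).
jobs = [(3, 4, 2), (2, 6, 1), (4, 5, 3), (1, 7, 1)]

def total_weighted_tardiness(seq):
    """Sum of w * max(0, completion - due) over jobs in the given order."""
    t = cost = 0
    for p, d, w in seq:
        t += p                       # completion time of this job
        cost += w * max(0, t - d)
    return cost

# Exhaustive search over the 4! orders; fine for a toy instance only.
best = min(permutations(jobs), key=total_weighted_tardiness)
print(total_weighted_tardiness(best))
```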
313.
314.
n independent jobs are to be scheduled nonpreemptively on a single machine so as to minimize some performance measure. Federgruen and Mosheiov [2] show that a large class of such scheduling problems can be optimized by solving either a single instance or a finite sequence of instances of the so-called SQC problem, in which all the jobs have a fixed or controllable common due date and the sum of general quasiconvex functions of the job completion times is to be minimized. In this note we point out that this is not always true. In particular, we show that the algorithm proposed in [2] does not always find a global optimal schedule to the problem of minimizing the weighted sum of the mean and variance of job completion times. © 1996 John Wiley & Sons, Inc.
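The objective discussed in this note, a weighted sum of the mean and variance of job completion times, is easy to state in code. The processing times and weights below are invented; exhaustive enumeration serves as the ground truth that, per the note, the algorithm of [2] is not guaranteed to reach:

```python
from itertools import permutations
from statistics import mean, pvariance

p = [2, 5, 3, 7]          # hypothetical processing times
alpha, beta = 1.0, 0.5    # weights on mean and variance of completions

def objective(order):
    """alpha * mean(C) + beta * Var(C) for the given processing order."""
    completions, t = [], 0
    for q in order:
        t += q
        completions.append(t)
    return alpha * mean(completions) + beta * pvariance(completions)

# Global optimum by enumeration (only viable for tiny n).
best = min(permutations(p), key=objective)
```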
315.
An approximate method for measuring the service levels of a warehouse-retailer system operating under an (s, S) policy is presented. All the retailers are identical, and the demand process at each retailer follows a stationary stuttering Poisson process. This type of demand process allows customer orders to be for a random number of units, which gives rise to an undershoot quantity at both the warehouse and retailer levels. Exact analyses of the distribution of the undershoot quantity and of the number of orders placed by a retailer during the warehouse reordering lead time are derived. Using this distribution together with probability approximations and other heuristic approaches, we model the behavior of the warehouse level. Based on the results for the warehouse level and on an existing framework from previous work, the service level at the retailer level is estimated. Results of the approximate method are then compared with those of simulation. © 1995 John Wiley & Sons, Inc.
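A rough feel for the undershoot quantity can be had from a simulation sketch. All parameters below are invented, inventory is reviewed once per period, and the lead time is taken as zero, so this is far cruder than the exact analysis in the paper; it only illustrates how batch (stuttering Poisson) orders make the inventory jump past the reorder point s:

```python
import math
import random

def poisson(rng, lam):
    # Knuth's multiplication method for a Poisson(lam) sample.
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod < limit:
            return k
        k += 1

def geometric(rng, p):
    # Trials until first success; support {1, 2, ...}, mean 1/p.
    k = 1
    while rng.random() >= p:
        k += 1
    return k

def mean_undershoot(s=20, S=50, lam=1.0, geo_p=0.5, periods=10_000, seed=3):
    rng = random.Random(seed)
    inv, under = S, []
    for _ in range(periods):
        for _ in range(poisson(rng, lam)):   # customers this period
            inv -= geometric(rng, geo_p)     # geometric batch size >= 1
        if inv <= s:
            under.append(s - inv)            # undershoot below s
            inv = S                          # order up to S (zero lead time assumed)
    return sum(under) / len(under)

avg_under = mean_undershoot()
```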
316.
Capacity expansion refers to the process of adding facilities or manpower to meet increasing demand. Typical capacity expansion decisions are characterized by uncertain demand forecasts and uncertainty in the eventual cost of expansion projects. This article models capacity expansion within the framework of piecewise deterministic Markov processes and investigates the problem of controlling investment in a succession of same-type projects in order to meet increasing demand with minimum cost. In particular, we investigate the optimality of a class of investment strategies called cutoff strategies. These strategies have the property that there exists some undercapacity level M such that the strategy invests at the maximum available rate at all levels above M and does not invest at any level below M. Cutoff strategies are appealing because they are straightforward to implement. We determine conditions on the undercapacity penalty function that ensure the existence of optimal cutoff strategies when the cost of completing a project is exponentially distributed. A by-product of the proof is an algorithm for determining the optimal strategy and its cost. © 1995 John Wiley & Sons, Inc.
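A cutoff strategy is simple enough to evaluate by simulation. The discrete-time sketch below is an illustrative stand-in for the paper's piecewise deterministic model: demand grows one unit per period, the per-period completion probability q mimics the memoryless (exponential) project cost, and the penalty and investment costs are invented:

```python
import random

def average_cost(M, q=0.3, step=5, c_inv=2.0, periods=50_000, seed=11):
    """Long-run average cost of the cutoff strategy with threshold M."""
    rng = random.Random(seed)
    demand = capacity = 0
    cost = 0.0
    for _ in range(periods):
        demand += 1
        under = max(0, demand - capacity)
        cost += under                    # linear undercapacity penalty
        if under > M:                    # invest at the maximum rate
            cost += c_inv
            if rng.random() < q:
                capacity += step         # project completes
    return cost / periods

# Compare a few cutoff levels; the paper's algorithm finds the best M exactly.
costs = {M: average_cost(M) for M in (0, 2, 5, 10)}
best_M = min(costs, key=costs.get)
```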
317.
Consider a two-machine flow shop and n jobs. The processing time of job j on machine i is equal to the random variable Xij. One of the two machines is subject to breakdown and repair. The objective is to find the schedule that minimizes the expected makespan. Two results are shown. First, if P(X2j ≥ X1j) = 1 for all j and the random variables X11, X12, …, X1n are likelihood ratio ordered, then the SEPT sequence minimizes the expected makespan when machine 2 is subject to an arbitrary breakdown process; if P(X1j ≥ X2j) = 1 and X21, X22, …, X2n are likelihood ratio ordered, then the LEPT sequence minimizes the expected makespan when machine 1 is subject to an arbitrary breakdown process. A generalization is presented for flow shops with m machines. Second, consider the case where X1j and X2j are i.i.d. exponentially distributed with rate λj. The SEPT sequence minimizes the expected makespan when machine 2 is subject to an arbitrary breakdown process, and the LEPT sequence is optimal when machine 1 is subject to an arbitrary breakdown process. © 1995 John Wiley & Sons, Inc.
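Without breakdowns, the two-machine flow-shop makespan for a fixed sequence follows the standard recursion C2j = max(C1j, C2,j-1) + p2j. A minimal sketch (hypothetical rates) computes it for the SEPT order in the i.i.d. exponential case of the second result, using the expected processing times 1/λj as a deterministic proxy:

```python
def makespan(jobs):
    """Two-machine flow-shop makespan; jobs = [(p1, p2), ...] in sequence order."""
    c1 = c2 = 0
    for p1, p2 in jobs:
        c1 += p1                  # machine 1 finishes this job
        c2 = max(c1, c2) + p2     # machine 2 starts once both are free
    return c2

# Hypothetical rates; with exponential(lambda_j) times, SEPT means
# descending rate (shortest expected processing time first).
rates = [0.5, 2.0, 1.0]
sept = sorted(rates, reverse=True)
lept = sorted(rates)
jobs_sept = [(1 / r, 1 / r) for r in sept]  # expected times on both machines
print(makespan(jobs_sept))
```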
318.
The problem of optimizing a linear function over the efficient set of a multiple objective linear program is an important but difficult problem in multiple criteria decision making. In this article we present a flexible face search heuristic algorithm for the problem. Preliminary computational experiments indicate that the algorithm gives very good estimates of the global optimum with relatively little computational effort. © 1993 John Wiley & Sons, Inc.
319.
A two-unit cold standby production system with one repairman is considered. After inspection of a failed unit, the repairman chooses either a slow or a fast repair rate to carry out the corresponding amount of work. At system breakdown the repairman has an additional opportunity to switch to the fast rate. If there are no fixed costs associated with system breakdowns, then the policy which minimizes long-run average costs is shown to be a two-dimensional control limit rule. If fixed costs are incurred every time the system breaks down, then the optimal policy is not necessarily of control limit type. This is illustrated by a counterexample. Furthermore, we present several performance measures for this maintenance system controlled by a two-dimensional control limit rule. © 1993 John Wiley & Sons, Inc.
320.
We study discrete-time, parallel queues with two identical servers. Customers arrive randomly at the system and join the queue with the shortest workload, defined as the total service time required for the server to complete all the customers in the queue. The arrivals are assumed to follow a geometric distribution, the service times are assumed to have a general distribution, and jockeying between queues is not allowed. The two-dimensional state space is truncated into a banded array, and the resulting modified queue is studied using the method of probability generating functions (pgf). The steady-state workload distribution is obtained in the form of a pgf. A special case where the service time is a deterministic constant is further investigated, and numerical examples are illustrated. © 2000 John Wiley & Sons, Inc. Naval Research Logistics 47: 440–454, 2000
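The join-shortest-workload rule with deterministic service (the paper's special case) is straightforward to simulate as a sanity check on any pgf result. The arrival probability, service length, and horizon below are assumed for illustration:

```python
import random

def simulate(p=0.4, s=3, slots=100_000, seed=7):
    """Time-average total workload of two parallel servers under
    join-shortest-workload routing with deterministic service time s."""
    rng = random.Random(seed)
    w = [0, 0]                           # current workloads of the two servers
    total = 0
    for _ in range(slots):
        if rng.random() < p:             # geometric interarrivals: one
            i = 0 if w[0] <= w[1] else 1 # arrival per slot with prob. p
            w[i] += s                    # join the shorter workload, no jockeying
        w = [max(0, x - 1) for x in w]   # each server does one unit of work
        total += sum(w)
    return total / slots

avg = simulate()
```

With p = 0.4 and s = 3 the offered load is 1.2 units of work per slot against a capacity of 2, so the system is stable and the average settles quickly.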