Paid full text: 396 articles; free: 0; total: 396.

By year: 2021 (6); 2019 (9); 2018 (5); 2017 (4); 2016 (7); 2015 (2); 2014 (5); 2013 (65); 2012 (3); 2009 (4); 2007 (12); 2005 (2); 2004 (3); 2003 (7); 2002 (6); 2001 (2); 2000 (4); 1999 (4); 1998 (7); 1997 (8); 1996 (9); 1995 (3); 1994 (6); 1993 (10); 1992 (7); 1991 (9); 1990 (10); 1989 (17); 1988 (14); 1987 (7); 1986 (13); 1985 (8); 1984 (4); 1983 (3); 1982 (10); 1981 (6); 1980 (8); 1979 (9); 1978 (8); 1977 (5); 1976 (6); 1975 (4); 1974 (9); 1973 (5); 1972 (4); 1971 (7); 1970 (6); 1969 (5); 1967 (6); 1948 (4).
Sort order: 396 query results in total; search took 15 ms.
251.
A new bivariate negative binomial distribution is derived by convolving an existing bivariate geometric distribution; the probability function has six parameters and admits positive or negative correlations and linear or nonlinear regressions. The moments up to order two are given and, for special cases, the regression function and a recursive formula for the probabilities. Maximum likelihood estimates of the parameters are obtained by purely numerical procedures. A data set with a nonlinear empirical regression function and another with a negative sample correlation coefficient are discussed.
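The convolution idea can be illustrated by simulation. The sketch below sums independent bivariate geometric pairs built from a simple common-component construction (X = G0 + G1, Y = G0 + G2 with independent geometric counts); this construction and its parameters are assumptions for illustration, not the paper's six-parameter family, but they show how convolution produces negative-binomial-type marginals with a nonzero correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bivariate_geometric(p0, p1, p2, size):
    # One simple bivariate geometric construction (common-component form):
    # X = G0 + G1, Y = G0 + G2 with independent geometric failure counts.
    # Illustrative only; not necessarily the distribution used in the paper.
    g0 = rng.geometric(p0, size) - 1
    g1 = rng.geometric(p1, size) - 1
    g2 = rng.geometric(p2, size) - 1
    return g0 + g1, g0 + g2

# Convolution: summing r independent bivariate geometric pairs gives a
# bivariate negative-binomial-type pair.
r, n = 5, 200_000
X = np.zeros(n)
Y = np.zeros(n)
for _ in range(r):
    x, y = bivariate_geometric(0.4, 0.3, 0.5, n)
    X += x
    Y += y

print("sample means:", X.mean(), Y.mean())
print("sample correlation:", np.corrcoef(X, Y)[0, 1])
```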
252.
253.
Herniter's entropy model for brand purchase behavior has been generalized to Rényi's measure of entropy, which is a more general concept than the Shannon measure used by Herniter and includes it as a limiting case. The generalized model considered here is more flexible than Herniter's model, since it can give different marketing statistics for different products and can give these statistics even when only some of the brands are considered.
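As a reminder of the relationship between the two measures, the sketch below computes the Rényi entropy H_α(p) = (1/(1−α)) log Σ p_i^α for a hypothetical brand-share vector and checks numerically that it approaches the Shannon entropy as α → 1; the share vector is made up for illustration and is not taken from the paper.

```python
import numpy as np

def renyi_entropy(p, alpha):
    # Rényi entropy of order alpha (alpha > 0, alpha != 1), natural log.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Hypothetical brand-share vector (made up for illustration).
shares = [0.40, 0.25, 0.20, 0.10, 0.05]

print("Shannon:", shannon_entropy(shares))
for a in (0.5, 0.999, 1.001, 2.0):
    print(f"Renyi, alpha={a}:", renyi_entropy(shares, a))
```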
254.
From an original motivation in quantitative inventory modeling, we develop methods for testing the hypothesis that the service times of an M/G/1 queue are exponentially distributed, given a sequence of observations of customer line and/or system waits. The approaches are mostly extensions of the well-known exponential goodness-of-fit test popularized by Gnedenko, which rests on the observation that the sum of a random exponential sample is Erlang distributed, so that the quotient of two independent exponential sample means is F distributed.
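The basic Gnedenko-style test (without the paper's extensions to observed waits) can be sketched as follows: split the sample in two and compare the half-sample means, whose quotient is F(2n1, 2n2) distributed under exponentiality. The data below are simulated, not taken from any queue.

```python
import numpy as np
from scipy import stats

def gnedenko_f_test(sample, split=None):
    # Under H0 (exponential data), 2*lambda*sum(X) is chi-square, so the
    # quotient of the two half-sample means is F(2*n1, 2*n2) distributed.
    x = np.asarray(sample, dtype=float)
    n1 = split if split is not None else len(x) // 2
    x1, x2 = x[:n1], x[n1:]
    f_stat = x1.mean() / x2.mean()
    dfn, dfd = 2 * len(x1), 2 * len(x2)
    p_value = 2 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))
    return f_stat, p_value

# Illustration on simulated exponential data (not queue observations).
rng = np.random.default_rng(1)
print(gnedenko_f_test(rng.exponential(scale=3.0, size=100)))
```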
255.
In this paper we present the results of a limited number of experiments with linear fractional problems. Six solution procedures were tested, and the results are reported as the number of simplex-like pivots required to solve a sample of twenty randomly generated problems.
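For context, one standard way to solve a linear fractional program is the Charnes-Cooper transformation, which reduces it to a single linear program. The sketch below uses SciPy's linprog on a made-up instance to illustrate that reduction; it is not one of the six procedures tested in the paper and does not report pivot counts.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lfp(c, c0, d, d0, A, b):
    # Maximize (c@x + c0) / (d@x + d0) s.t. A@x <= b, x >= 0, assuming the
    # denominator is positive on the feasible region, via the Charnes-Cooper
    # substitution y = t*x, t = 1/(d@x + d0) > 0.
    A = np.asarray(A, float)
    n = len(c)
    obj = np.concatenate([-np.asarray(c, float), [-c0]])          # minimize -(c@y + c0*t)
    A_ub = np.hstack([A, -np.asarray(b, float).reshape(-1, 1)])   # A@y - b*t <= 0
    b_ub = np.zeros(A.shape[0])
    A_eq = np.concatenate([np.asarray(d, float), [d0]]).reshape(1, -1)  # d@y + d0*t = 1
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    y, t = res.x[:n], res.x[n]
    return y / t, -res.fun   # optimal x and the optimal ratio value

# Small made-up instance.
x_opt, value = solve_lfp(c=[2.0, 1.0], c0=0.0, d=[1.0, 2.0], d0=1.0,
                         A=[[1.0, 1.0], [1.0, 4.0]], b=[4.0, 8.0])
print(x_opt, value)
```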
256.
Minimum cardinality set covering problems (MCSCP) tend to be more difficult to solve than weighted set covering problems because the cost or weight associated with each variable is the same. Since MCSCP is NP-complete, large problem instances are commonly solved using some form of greedy heuristic. In this paper hybrid algorithms are developed and tested against two common forms of the greedy heuristic. Although all the algorithms tested have the same worst-case bounds provided by Ho [7], empirical results for 60 large randomly generated problems indicate that one algorithm performed better than the others.
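In its simplest form, the greedy heuristic for MCSCP repeatedly picks the subset covering the most still-uncovered elements. A minimal sketch on a made-up instance (the paper's hybrid algorithms and the two greedy variants it tests are refinements of this idea):

```python
def greedy_min_cardinality_cover(universe, subsets):
    # Repeatedly pick the subset covering the most still-uncovered elements.
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)), key=lambda j: len(uncovered & subsets[j]))
        if not uncovered & subsets[best]:
            raise ValueError("infeasible instance: some element cannot be covered")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Small made-up instance: cover {1,...,7} with as few subsets as possible.
subsets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {6, 7}, {1, 5, 7}, {2, 3, 6}]
print(greedy_min_cardinality_cover(range(1, 8), subsets))
```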
257.
Consider an experiment in which only record-breaking values (e.g., values smaller than all previous ones) are observed. The data available may be represented as X1,K1,X2,K2, …, where X1,X2, … are successive minima and K1,K2, … are the numbers of trials needed to obtain new records. We treat the problem of estimating the mean of an underlying exponential distribution, and we consider both fixed sample size problems and inverse sampling schemes. Under inverse sampling, we demonstrate certain global optimality properties of an estimator based on the “total time on test” statistic. Under random sampling, it is shown than an analogous estimator is consistent, but can be improved for any fixed sample size.  相似文献   
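A simulation sketch of the inverse-sampling case, under the assumption that the K_i − 1 non-record trials preceding record i are treated as right-censored at the previous record, so that the total time on test divided by the number of records is the censored-data MLE of the exponential mean; the exact form of the paper's estimator may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_record_data(theta, n_records):
    # Inverse sampling: draw Exp(theta) values until n_records successive
    # minima (records) have been observed.  Returns the records X_i and the
    # counts K_i of trials needed to obtain each new record.
    records, counts, current, k = [], [], np.inf, 0
    while len(records) < n_records:
        x = rng.exponential(theta)
        k += 1
        if x < current:
            records.append(x)
            counts.append(k)
            current, k = x, 0
    return np.array(records), np.array(counts)

def ttt_estimate(records, counts):
    # Records are fully observed; the K_i - 1 trials before record i failed to
    # break the previous record, so they are right-censored at records[i-1].
    # The censored-data MLE of the mean is then total time on test / #records.
    ttt = records.sum() + ((counts[1:] - 1) * records[:-1]).sum()
    return ttt / len(records)

records, counts = simulate_record_data(theta=5.0, n_records=10)
print("estimate:", ttt_estimate(records, counts), "true mean: 5.0")
```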
258.
Many organizations providing service support for products or families of products must allocate inventory investment among the parts (or, identically, items) that make up those products or families. The allocation decision is crucial in today's competitive environment, in which rapid response and low levels of inventory are both required for providing competitive levels of customer service in marketing a firm's products. This is particularly important in high-tech industries, such as computers, military equipment, and consumer appliances. Such rapid response typically implies regional and local distribution points for final products and for spare parts for repairs. In this article we fix attention on a given product or product family at a single location. This single-location problem is the basic building block of multi-echelon inventory systems based on level-by-level decomposition, and our modeling approach is developed with this application in mind. The product consists of field-replaceable units (i.e., parts), which are to be stocked as spares for field service repair. We assume that each part will be stocked at each location according to an (s, S) stocking policy. Moreover, we distinguish two classes of demand at each location: customer (or emergency) demand and normal replenishment demand from lower levels in the multi-echelon system. The basic problem of interest is to determine the appropriate policies (si, Si) for each part i in the product under consideration. We formulate an approximate cost function and service-level constraint, and we present a greedy heuristic algorithm for solving the resulting approximate constrained optimization problem. We present experimental results showing that the heuristics developed have good cost performance relative to optimal. We also discuss extensions to the multiproduct component commonality problem.
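A hedged sketch of the greedy idea, simplified from (si, Si) policies to base-stock levels with Poisson lead-time demand: repeatedly add one unit of stock to the part with the best marginal gain in demand-weighted fill rate per dollar until a target service level is met. The demand rates, unit costs, and target are invented for illustration and the service model is much simpler than the paper's approximate cost and service-level formulation.

```python
import numpy as np
from scipy.stats import poisson

def greedy_allocate(lam, cost, target_fill):
    # Add one unit of stock to whichever part gives the largest gain in
    # demand-weighted fill rate per dollar, until the aggregate target is met.
    lam, cost = np.asarray(lam, float), np.asarray(cost, float)
    w = lam / lam.sum()                      # demand weights
    S = np.zeros(len(lam), dtype=int)        # base-stock levels

    def fill(levels):                        # item fill rates P(D <= S-1)
        return poisson.cdf(levels - 1, lam)

    while w @ fill(S) < target_fill:
        gain = poisson.pmf(S, lam) * w / cost  # marginal fill-rate gain per dollar
        S[int(np.argmax(gain))] += 1
    return S, w @ fill(S), cost @ S

# Hypothetical 4-part example; demand rates, costs, and target are made up.
S, achieved, investment = greedy_allocate(lam=[2.0, 0.5, 1.2, 0.1],
                                          cost=[50.0, 400.0, 120.0, 900.0],
                                          target_fill=0.95)
print("stock levels:", S, "fill rate:", round(achieved, 3), "investment:", investment)
```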
259.
260.
A new technique for solving large-scale allocation problems with partially observable states and constrained action and observation resources is introduced. The technique uses a master linear program (LP) to determine allocations among a set of control policies, and uses partially observable Markov decision processes (POMDPs) to determine improving policies using dual prices from the master LP. An application is made to a military problem where aircraft attack targets in a sequence of stages, with information acquired in one stage being used to plan attacks in the next. © 2000 John Wiley & Sons, Inc., Naval Research Logistics 47: 607–619, 2000
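A minimal sketch of the master-LP side of such a column-generation scheme, with made-up policy columns and resource budgets; the POMDP subproblem that would generate improving policies is not reproduced, only the dual-price test for whether a candidate policy column should enter. The duals are read from SciPy's HiGHS interface (res.ineqlin.marginals).

```python
import numpy as np
from scipy.optimize import linprog

# Restricted master LP of a column-generation scheme (LP side only; the POMDP
# policy-improvement subproblem is not reproduced).  Columns are control
# policies; x_j is the level at which policy j is run.  All numbers are made up.
reward = np.array([10.0, 7.0, 4.0])           # expected reward per unit of policy j
usage = np.array([[3.0, 2.0, 1.0],            # attack (action) resource per unit
                  [1.0, 1.0, 2.0]])           # observation resource per unit
budget = np.array([6.0, 4.0])

res = linprog(-reward, A_ub=usage, b_ub=budget,
              bounds=[(0, None)] * len(reward), method="highs")
duals = -res.ineqlin.marginals                # shadow prices of the two resources
print("allocation:", res.x, "value:", -res.fun, "dual prices:", duals)

# A candidate policy proposed by the subproblem would enter the master only if
# its reduced cost is positive: reward_new > duals @ usage_new.
cand_reward, cand_usage = 9.0, np.array([2.0, 2.0])   # hypothetical candidate
print("add candidate?", cand_reward > duals @ cand_usage)
```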