791.
《防务技术》2022,18(9):1697-1714
To solve the problem of realizing autonomous aerial combat decision-making for unmanned combat aerial vehicles (UCAVs) rapidly and accurately in an uncertain environment, this paper proposes a decision-making method based on an improved deep reinforcement learning (DRL) algorithm: the multi-step double deep Q-network (MS-DDQN) algorithm. First, a six-degree-of-freedom UCAV model based on an aircraft control system is established on a simulation platform, and the situation assessment functions of the UCAV and its target are established by considering their angles, altitudes, environment, missile attack performance, and UCAV performance. By controlling the flight path angle, roll angle, and flight velocity, 27 common basic actions are designed. On this basis, aiming to overcome the defects of traditional DRL in terms of training speed and convergence speed, the improved MS-DDQN method is introduced to incorporate the final return value into the previous steps. Finally, the pre-trained learning model is used as the starting point for the second learning model to simulate the UCAV aerial combat decision-making process based on the basic training method, which helps to shorten the training time and improve the learning efficiency. The improved DRL algorithm significantly accelerates the training speed and estimates the target value more accurately during training, and it can be applied to aerial combat decision-making.
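The multi-step return at the heart of MS-DDQN ("incorporating the final return value into the previous steps") can be sketched as an n-step double-Q target. The function below is a minimal illustration under assumed names, not the paper's implementation:

```python
import numpy as np

def multi_step_target(rewards, gamma, q_online_next, q_target_next):
    """N-step double-Q target: the discounted sum of the next n rewards,
    bootstrapped with the target network's value of the action the online
    network prefers (the double-DQN decoupling of selection and evaluation)."""
    n = len(rewards)
    g = sum(gamma**k * r for k, r in enumerate(rewards))  # n-step discounted return
    a_star = int(np.argmax(q_online_next))                # online net selects the action
    return g + gamma**n * q_target_next[a_star]           # target net evaluates it
```

Folding n real rewards into the target, rather than one, is what propagates the final return back to earlier steps and speeds up convergence.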
792.
Lanchester equations and their extensions are widely used to calculate attrition in models of warfare. This paper examines how Lanchester models fit detailed daily data on the battles of Kursk and Ardennes. The data on Kursk, often called the greatest tank battle in history, was only recently made available. A new approach is used to find the optimal parameter values and gain an understanding of how well various parameter combinations explain the battles. It turns out that a variety of Lanchester models fit the data about as well. This explains why previous studies on Ardennes, using different minimization techniques and data formulations, have found disparate optimal fits. We also find that none of the basic Lanchester laws (i.e., square, linear, and logarithmic) fit the data particularly well or consistently perform better than the others. This means that it does not matter which of these laws you use, for with the right coefficients you will get about the same result. Furthermore, no constant attrition coefficient Lanchester law fits very well. The failure to find a good‐fitting Lanchester model suggests that it may be beneficial to look for new ways to model highly aggregated attrition. © 2003 Wiley Periodicals, Inc. Naval Research Logistics, 2004.
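For reference, the basic Lanchester square law tested above can be sketched with a simple Euler integration (a minimal illustration; the function name, step size, and coefficients are assumptions, not values from the study):

```python
def lanchester_square(r0, b0, a, b, dt=0.01, t_max=10.0):
    """Euler integration of the Lanchester square law with constant
    attrition coefficients: dR/dt = -a*B, dB/dt = -b*R.
    Returns the surviving strengths when one side is annihilated
    (or when t_max is reached)."""
    r, blue = r0, b0
    t = 0.0
    while t < t_max and r > 0 and blue > 0:
        r, blue = r - a * blue * dt, blue - b * r * dt
        t += dt
    return max(r, 0.0), max(blue, 0.0)
```

Under the square law the quantity b*R² − a*B² is conserved, so with equal coefficients a force of 100 against 50 should finish with roughly √(100² − 50²) ≈ 86.6 survivors; the linear and logarithmic laws differ only in the exponents on the strength terms.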
793.
794.
This paper presents the research directions, functions, and composition of a shipborne combat training system, along with the tasks and interface relationships of its component units, and briefly discusses the key technologies involved.
795.
Research on a Model-Based Shipboard Information Fusion System (Cited by: 2; self-citations: 0; citations by others: 2)
Computer simulation techniques are applied to build a warship combat-system simulation environment with tactical-scenario capabilities, which is used to study multi-sensor information fusion aboard warships; some of the simulation models are presented.
796.
797.
Based on the characteristics of encounter engagements, stochastic-duel and differential-game mathematical tactical models are used to study the optimal fire-employment strategy for tank units in encounter battles. The conclusions accord with the tactical characteristics of tank units and provide decision support for tank-unit commanders.
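The stochastic-duel component mentioned above can be illustrated with a minimal alternating-fire sketch. This is not the paper's model; the kill probabilities, fire order, and function names are illustrative assumptions:

```python
import random

def simple_duel(p_kill_a, p_kill_b, rng):
    """One alternating-fire stochastic duel: A shoots first, then B,
    each shot killing with a fixed probability; returns the winner."""
    while True:
        if rng.random() < p_kill_a:
            return "A"
        if rng.random() < p_kill_b:
            return "B"

def win_rate(p_kill_a, p_kill_b, trials=20000, seed=1):
    """Monte Carlo estimate of A's win probability. For this duel the
    closed form is p_a / (p_a + p_b - p_a * p_b), which the estimate
    should approach as trials grows."""
    rng = random.Random(seed)
    wins = sum(simple_duel(p_kill_a, p_kill_b, rng) == "A" for _ in range(trials))
    return wins / trials
```

Comparing the simulated win rate against the closed form is a quick sanity check before layering on the tactical details (movement, terrain, multiple vehicles) that the paper's differential-game treatment addresses.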
798.
799.
800.