Found 20 similar articles (search time: 93 ms)
1.
Research on a Combat Effectiveness Analysis Model for Unmanned Attack Aircraft Based on Probability Analysis    Total citations: 3 (self-citations: 0, other citations: 3)
On future air-defense battlefields, unmanned attack aircraft have become a significant aerial threat. The combat process of an unmanned attack aircraft can be divided into three phases: flight, penetration, and attack. Using probability analysis, mathematical models are established for each phase and an expression for overall combat effectiveness is derived, providing a modeling basis for further combat effectiveness analysis of unmanned attack aircraft.
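One common reading of such a phase-based model is an overall success probability obtained by chaining the per-phase probabilities. The sketch below assumes the three phases are independent; the phase decomposition comes from the abstract, but the independence assumption, function names, and numbers are illustrative only.

```python
# Minimal sketch of a phase-based probability model for unmanned attack
# aircraft combat effectiveness. Assumes (illustratively) that the flight,
# penetration, and attack phases succeed independently.

def combat_effectiveness(p_flight: float, p_penetration: float, p_attack: float) -> float:
    """Overall mission success probability under phase independence."""
    for p in (p_flight, p_penetration, p_attack):
        if not 0.0 <= p <= 1.0:
            raise ValueError("phase probabilities must lie in [0, 1]")
    return p_flight * p_penetration * p_attack

# Example: reliable flight, contested penetration, good terminal attack.
e = combat_effectiveness(0.95, 0.70, 0.85)  # 0.56525
```

Under this reading, the weakest phase dominates: improving penetration probability yields the largest gain in overall effectiveness.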
4.
This paper analyzes in detail the logarithmic-method model commonly used in evaluating the integrated combat effectiveness of fighter aircraft and identifies its shortcomings. Building on an analysis of the model's structure, a new modeling approach is proposed and an analytic computational model for integrated combat effectiveness evaluation is established; the evaluation models for the individual sub-capabilities within the integrated effectiveness model are also revised and refined. Finally, the integrated combat effectiveness evaluation of six aircraft types is used as an example to exercise the model and verify its usability.
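A logarithmic aggregation of the kind this critique concerns can be sketched as a weighted sum of log-transformed sub-capability indices. The indices, weights, and functional form below are assumptions for demonstration; the paper's corrected analytic model is not reproduced here.

```python
import math

# Illustrative log-method aggregation: a weighted sum of ln(index) terms,
# i.e. the log of a weighted geometric mean, so proportional gains in any
# sub-capability contribute additively to the score.

def log_effectiveness(indices, weights):
    """Aggregate positive sub-capability indices with importance weights."""
    if len(indices) != len(weights):
        raise ValueError("indices and weights must align")
    if any(c <= 0 for c in indices):
        raise ValueError("sub-capability indices must be positive")
    return sum(w * math.log(c) for c, w in zip(indices, weights))

# Example: four sub-capabilities (e.g. maneuver, firepower, detection,
# survivability) with importance weights summing to 1.
score = log_effectiveness([8.0, 6.5, 7.2, 5.0], [0.3, 0.3, 0.2, 0.2])
```

A known weakness of this form, and a likely target of the critique, is that the log compresses large differences between aircraft, so distinct designs can receive nearly identical scores.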
8.
Based on an analysis of the strengths and weaknesses of multiple combat effectiveness evaluation methods, an approach combining several methods is proposed. Rough sets are used to preprocess the raw samples, and the high credibility of expert evaluation is combined with the efficiency of BP-neural-network evaluation, yielding a workflow that integrates the three techniques. The Delphi expert-evaluation method produces effectiveness conclusions for several types of unmanned attack aircraft; these expert conclusions then serve as training samples for a BP neural network, producing a model that encodes the expert experience and enables efficient, experience-based evaluation of new aircraft types. Finally, a numerical example verifies the usability and soundness of the method.
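The core training step can be sketched as fitting a small backpropagation (BP) network to expert scores. Everything below is a stand-in: the four capability indicators, the synthetic "Delphi" targets, and the network size are invented for demonstration, not taken from the paper's index system.

```python
import numpy as np

# Toy BP network: learn a mapping from normalized UAV capability indicators
# to an expert (Delphi) effectiveness score, so new aircraft can be scored
# without re-convening the expert panel.

rng = np.random.default_rng(0)

# Synthetic training set: rows = candidate aircraft, columns = normalized
# indicators (e.g. range, payload, stealth, sensors); targets = a stand-in
# for the Delphi consensus score.
X = rng.random((30, 4))
y = (0.4 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + 0.1 * X[:, 3]).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)   # hidden -> output

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.1
for _ in range(2000):                         # full-batch gradient descent
    h, out = forward(X)
    err = out - y                             # dLoss/dout for MSE/2
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((forward(X)[1] - y) ** 2))
```

Once trained, scoring a new aircraft type is a single forward pass, which is the efficiency gain the abstract attributes to the combined method.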
11.
This paper analyzes the research and operational status of unmanned combat aerial vehicles in various countries and presents the architecture of a command-and-control system for manned/unmanned aircraft cooperative operations. According to spatial position and primary mission, the system is divided into a manned-aircraft platform and an unmanned-aircraft platform, whose components and corresponding functions are described. The key technologies that cooperative operations must address are summarized: interactive control, cooperative situation awareness, cooperative target assignment, cooperative route planning, damage-effectiveness assessment, and intelligent decision-making. A combat and information-processing workflow under a typical mission scenario is given, and future development directions for unmanned combat aerial vehicles are discussed.
14.
Defence Technology, 2020, 16(1): 150-157
A formation model of manned/unmanned aerial vehicle (MAV/UAV) collaborative combat can qualitatively and quantitatively analyze synergistic effects. However, there is currently no effective, appropriate model-construction method or theory, and research on collaborative capability evaluation is scarce. Based on the actual conditions of cooperative operations, a new MAV/UAV collaborative combat network model construction method based on complex networks is presented. By analyzing the characteristic parameters of the abstracted network, the index system and the complex network are combined, and a method for evaluating the synergistic effect of the cooperative combat network is developed. This method supports the verification and evaluation of MAV/UAV collaborative combat.
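Abstracting a formation as a complex network and reading off characteristic parameters can be sketched as below. The toy formation (one MAV commanding four UAVs, with two UAV-UAV data links) and the two parameters shown (average degree, average clustering coefficient) are illustrative; the paper's actual network model and index system are not reproduced.

```python
from itertools import combinations

# Toy MAV/UAV cooperative combat network: nodes are platforms, edges are
# command or data links. This formation is invented for demonstration.
edges = [("MAV", "UAV1"), ("MAV", "UAV2"), ("MAV", "UAV3"), ("MAV", "UAV4"),
         ("UAV1", "UAV2"), ("UAV3", "UAV4")]

adj = {}
for u, v in edges:                    # build undirected adjacency sets
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def average_degree(adj):
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def clustering(adj, node):
    """Fraction of a node's neighbor pairs that are themselves linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2 * links / (k * (k - 1))

avg_deg = average_degree(adj)                                  # 2.4
avg_clust = sum(clustering(adj, n) for n in adj) / len(adj)    # 13/15
```

Characteristic parameters like these are what a complex-network construction makes available for coupling to an evaluation index system: e.g., clustering reflects how much UAVs can coordinate among themselves without routing through the MAV.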
15.
在“零伤亡”作战思想的指引下,随着计算机、人工智能等技术的日臻成熟,无人机的发展风起云涌。近年来,无人机在战术运用方面更是突破了传统的侦察领域,逐步涉足攻击甚至电子攻击等领域。通过回顾无人机在近代历次军事冲突中的运用,分析无人机运用于电子战的优势,探讨无人机在电子战中的运用及可能的运用情况,最后指出无人机运用于电子战后对未来电子战的影响。 相似文献
16.
Defence Technology, 2022, 18(9): 1697-1714
To realize autonomous aerial combat decision-making for unmanned combat aerial vehicles (UCAVs) rapidly and accurately in an uncertain environment, this paper proposes a decision-making method based on an improved deep reinforcement learning (DRL) algorithm: the multi-step double deep Q-network (MS-DDQN) algorithm. First, a six-degree-of-freedom UCAV model based on an aircraft control system is established on a simulation platform, and situation assessment functions for the UCAV and its target are constructed from their angles, altitudes, environments, missile attack performance, and UCAV performance. By controlling the flight path angle, roll angle, and flight velocity, 27 common basic actions are designed. On this basis, to overcome the slow training and convergence of traditional DRL, the improved MS-DDQN method incorporates the final return value into the preceding steps. Finally, the pre-trained learning model is used as the starting point for a second learning model that simulates the UCAV aerial combat decision-making process under the basic training method, shortening training time and improving learning efficiency. The improved DRL algorithm significantly accelerates training and estimates the target value more accurately, and it can be applied to aerial combat decision-making.
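The multi-step double-DQN target that MS-DDQN builds on can be sketched as an n-step return plus a bootstrap in which the online network selects the next action and the target network evaluates it. The function below is a generic sketch of that target computation; the reward values and Q-value arrays are stand-ins, not outputs of a trained UCAV model.

```python
import numpy as np

# Multi-step double-DQN target:
#   y = sum_{k=0}^{n-1} gamma^k * r_k
#       + gamma^n * Q_target(s_{t+n}, argmax_a Q_online(s_{t+n}, a))
# Incorporating the later return into earlier steps is the mechanism the
# abstract credits for faster, more accurate target estimation.

def ms_ddqn_target(rewards, gamma, q_online_next, q_target_next, done):
    n = len(rewards)
    ret = sum((gamma ** k) * r for k, r in enumerate(rewards))
    if not done:
        a_star = int(np.argmax(q_online_next))       # selection: online net
        ret += (gamma ** n) * q_target_next[a_star]  # evaluation: target net
    return ret

# Example: a 3-step transition with gamma = 0.9 and illustrative Q-values.
y = ms_ddqn_target([1.0, 0.0, 0.5], 0.9,
                   q_online_next=np.array([0.2, 1.0, 0.4]),
                   q_target_next=np.array([0.3, 0.8, 0.5]),
                   done=False)
```

Splitting selection (online network) from evaluation (target network) is the standard double-DQN remedy for the overestimation bias of plain Q-learning; the multi-step return then propagates terminal rewards back through the trajectory faster than one-step updates.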