154 results found (search time: 15 ms)
141.
142.
For the finite-time attitude maneuver problem of spacecraft, an adaptive second-order terminal sliding mode control algorithm is proposed. A terminal sliding surface is designed to guarantee that the system states converge along the surface to the origin in finite time. To suppress chattering, a second-order terminal sliding mode controller is designed, with an adaptive parameter-estimation term compensating the external disturbance torques acting on the system. Using the Lyapunov function method, the second-order adaptive terminal sliding mode controller is proven to guarantee practical finite-time stability of the closed-loop system. Simulation results show that the proposed attitude maneuver algorithm responds quickly, achieves high accuracy, and effectively suppresses both chattering and external disturbances, giving it significant scientific and engineering application value.
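The terminal sliding mode idea above can be sketched on a single attitude axis. This is a minimal illustration, not the paper's controller: the surface form, gains, and the magnitude floor that guards against the well-known terminal-SMC singularity at the origin are all assumptions, and the adaptive disturbance estimator is replaced by a fixed switching gain.

```python
import numpy as np

def terminal_surface(e, de, k1=2.0, alpha=0.6):
    # s = de + k1*|e|^alpha*sign(e); the fractional power (0 < alpha < 1)
    # is what yields finite-time convergence along the surface
    return de + k1 * np.abs(e) ** alpha * np.sign(e)

def tsm_control(e, de, k1=2.0, alpha=0.6, eta=5.0, eps=1e-3):
    # equivalent term cancels the surface dynamics; |e| is floored at eps
    # as a practical guard against the |e|^(alpha-1) singularity at e = 0
    s = terminal_surface(e, de, k1, alpha)
    eq = -k1 * alpha * max(abs(e), eps) ** (alpha - 1.0) * de
    return eq - eta * np.sign(s)  # switching term dominates the disturbance

# simulate one axis, dd(e) = u + d, with a bounded disturbance torque d
dt, e, de = 1e-3, 1.0, 0.0
for i in range(8000):
    d = 0.5 * np.sin(0.01 * i)  # |d| <= 0.5 < eta, so sliding is reached
    u = tsm_control(e, de)
    de += (u + d) * dt
    e += de * dt
```

After the reaching phase the state slides along `s = 0`, where `de = -k1*|e|^alpha*sign(e)` drives `e` to the origin in finite time despite the disturbance.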
143.
144.
ALHAJI MS BAH 《African Security Review》2013,22(3):33-46
This article explores the proliferation of illicit small arms and light weapons in the West African sub-region and efforts by the regional Economic Community of West African States (ECOWAS) to deal with the problem through the ECOWAS Declaration of a Moratorium on the Importation, Exportation and Manufacture of Small Arms and Light Weapons in West Africa. The paper analyses the degree of compliance with the Moratorium by four ECOWAS member states, namely, Ghana, Nigeria, Sierra Leone and Mali.
145.
146.
To predict the opening load during parachute inflation more accurately, a recurrent-neural-network-based compensation method for opening-load calculation is proposed, covering both the model architecture and the data processing. The method feeds the prediction computed by the inflation-time method into a recurrent network for a second-stage calculation, bringing the final result closer to the measured test values. Three networks (a multilayer feedforward network, a standard recurrent network, and a long short-term memory, LSTM, network) are compared to verify the applicability and accuracy of the proposed model; the influence of hyperparameters such as the learning rate, input-layer dimension, and hidden-layer dimension on model performance is studied; and the optimal training conditions for the LSTM-based compensation model are given. Experimental results show that recurrent networks fit the opening load well, offering a reference direction for interdisciplinary research between machine learning and the parachute industry.
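The compensation scheme can be sketched structurally: the physics-based prediction enters a recurrent cell, and the network output is added back as a residual correction. This is an assumed architecture for illustration only, using a plain Elman cell with random weights; the paper's actual networks (including the LSTM variant) would be trained on test data.

```python
import numpy as np

rng = np.random.default_rng(0)

class RecurrentCompensator:
    """Toy Elman-style cell; weights are random stand-ins for a trained model."""
    def __init__(self, n_in=1, n_hidden=8):
        s = 1.0 / np.sqrt(n_hidden)
        self.Wx = rng.normal(0, s, (n_hidden, n_in))      # input weights
        self.Wh = rng.normal(0, s, (n_hidden, n_hidden))  # recurrent weights
        self.Wo = rng.normal(0, s, (1, n_hidden))         # readout weights
        self.n_hidden = n_hidden

    def forward(self, physics_pred):
        # physics_pred: (T,) load sequence from the inflation-time method;
        # returns a corrected (T,) sequence = prediction + learned residual
        h = np.zeros(self.n_hidden)
        out = np.empty_like(physics_pred)
        for t, x in enumerate(physics_pred):
            h = np.tanh(self.Wx @ np.array([x]) + self.Wh @ h)
            out[t] = x + (self.Wo @ h).item()  # residual correction
        return out

# synthetic opening-load curve as the physics-model input (peak, then decay)
t = np.linspace(0.0, 1.0, 50)
physics = 10.0 * t * np.exp(-5.0 * t)
corrected = RecurrentCompensator().forward(physics)
```

Framing the second-stage calculation as a residual on the physics prediction, rather than a from-scratch regression, lets the network learn only the systematic error of the inflation-time method.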
147.
《Defence Technology》2022,18(9):1697-1714
To solve the problem of realizing autonomous aerial combat decision-making for unmanned combat aerial vehicles (UCAVs) rapidly and accurately in an uncertain environment, this paper proposes a decision-making method based on an improved deep reinforcement learning (DRL) algorithm: the multi-step double deep Q-network (MS-DDQN) algorithm. First, a six-degree-of-freedom UCAV model based on an aircraft control system is established on a simulation platform, and the situation assessment functions of the UCAV and its target are established by considering their angles, altitudes, environments, missile attack performance, and UCAV performance. Twenty-seven common basic actions are designed by controlling the flight-path angle, roll angle, and flight velocity. On this basis, to overcome the slow training and convergence of traditional DRL, the improved MS-DDQN method is introduced to incorporate the final return value into the previous steps. Finally, the pre-training learning model is used as the starting point for the second learning model to simulate the UCAV aerial combat decision-making process based on the basic training method, which helps to shorten the training time and improve the learning efficiency. The improved DRL algorithm significantly accelerates the training speed, estimates the target value more accurately during training, and can be applied to aerial combat decision-making.
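The core of a multi-step double-DQN update is the n-step bootstrap target, which folds the next n rewards into the backup while splitting action selection (online network) from action evaluation (target network). The sketch below shows only that target computation; the Q-values are hand-picked arrays standing in for network outputs, not the paper's trained model.

```python
import numpy as np

def ms_ddqn_target(rewards, q_online_next, q_target_next, gamma=0.99):
    # n-step double-DQN target:
    #   y = sum_{i=0}^{n-1} gamma^i * r_{t+i}
    #       + gamma^n * Q_target(s_{t+n}, argmax_a Q_online(s_{t+n}, a))
    n = len(rewards)
    ret = sum((gamma ** i) * r for i, r in enumerate(rewards))
    a_star = int(np.argmax(q_online_next))        # online net selects action
    return ret + (gamma ** n) * q_target_next[a_star]  # target net evaluates it

rewards = [1.0, 0.5, 0.25]                  # a 3-step reward window
q_online_next = np.array([0.2, 0.9, 0.4])   # online net at s_{t+3}
q_target_next = np.array([0.1, 0.7, 0.3])   # target net at s_{t+3}
y = ms_ddqn_target(rewards, q_online_next, q_target_next)
```

Propagating the discounted multi-step return in this way is what lets later reward information reach earlier states in fewer updates, which is the training-speed benefit the abstract describes.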
148.
149.
150.