
Efficient RNN inference method on a long vector processor
Cite this article: SU Huayou, CHEN Kangkang, YANG Qianming. Efficient RNN inference method on a long vector processor[J]. Journal of National University of Defense Technology, 2024, 46(1): 121-130.
Authors: SU Huayou  CHEN Kangkang  YANG Qianming
Affiliation: College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China; National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China
Funding: National Natural Science Foundation of China (61872377); Xiangjiang Laboratory Foundation (22XJ01012)
Abstract: The continuously increasing model depth and the inconsistent lengths of processed sequences pose great challenges to optimizing the performance of recurrent neural networks (RNN) on different processors. An efficient RNN acceleration engine was implemented for the independently developed long vector processor FT-M7032. The engine adopts a row-major matrix-vector multiplication algorithm and a data-aware multi-core parallelization scheme to improve the computational efficiency of matrix-vector multiplication; it applies a two-level kernel fusion optimization to reduce the overhead of transferring temporary data; and it optimizes multiple operators with hand-written assembly to further exploit the performance potential of the long vector processor. Experiments show that the RNN inference engine on the long vector processor achieves high performance: compared with a multi-core ARM CPU and an Intel Golden CPU, the RNN-like long short-term memory (LSTM) network model achieves speedups of up to 62.68 times and 3.12 times, respectively.
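To make the row-major matrix-vector multiplication concrete, the following is a minimal C sketch of the access pattern the abstract refers to; the OpenMP row partitioning and the function name gemv_row_major are illustrative assumptions, not the vectorized FT-M7032 kernels or the data-aware partitioning described in the paper.

    #include <stddef.h>

    /* Row-major matrix-vector multiply y = W * x, where W is m x n.
     * Each row of W is stored contiguously, so one output element is a
     * single streaming reduction over a contiguous row -- the access
     * pattern a long-vector core prefers.  Plain C sketch of the idea
     * only; the paper's FT-M7032 kernels rely on vector instructions,
     * hand-written assembly, and a data-aware split of rows across cores. */
    void gemv_row_major(const float *W, const float *x, float *y,
                        long m, long n)
    {
        #pragma omp parallel for schedule(static) /* rows split across cores */
        for (long i = 0; i < m; ++i) {
            const float *row = W + (size_t)i * (size_t)n; /* contiguous row i */
            float acc = 0.0f;
            for (long j = 0; j < n; ++j)
                acc += row[j] * x[j];
            y[i] = acc;
        }
    }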

Keywords: multicore DSP  very long vector processor  recurrent neural networks  parallel optimization
Received: 2022-11-07

Efficient RNN inference engine on very long vector processor
SU Huayou, CHEN Kangkang, YANG Qianming. Efficient RNN inference engine on very long vector processor[J]. Journal of National University of Defense Technology, 2024, 46(1): 121-130.
Authors:SU Huayou  CHEN Kangkang  YANG Qianming
Institution:College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China;National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China
Abstract: The increasing model depth and the inconsistent lengths of processed sequences make it difficult to optimize the performance of RNN (recurrent neural network) models on different processors. An efficient RNN acceleration engine was implemented for the self-developed long vector processor FT-M7032. The engine proposes a row-major matrix-vector multiplication algorithm and a data-aware multi-core parallel method to improve the computational efficiency of matrix-vector multiplication, and a two-level kernel fusion optimization method to reduce the overhead of temporary data transmission. Hand-written assembly codes for multiple operators are integrated to further tap the performance potential of long vector processors. Experiments show that the RNN engine for the long vector processor is efficient: compared with a multi-core ARM CPU and an Intel Golden CPU, the RNN-like model, the long short-term memory (LSTM) network, achieves performance accelerations of up to 62.68 times and 3.12 times, respectively.
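The kernel fusion idea can be illustrated with a plain C sketch of one LSTM time step in which all element-wise gate computations are fused into a single loop over the hidden dimension; the gate layout [i | f | g | o] and the function name lstm_cell_fused are assumptions for illustration and do not reproduce the paper's two-level fusion on FT-M7032.

    #include <math.h>
    #include <stddef.h>

    static inline float sigmoidf(float v) { return 1.0f / (1.0f + expf(-v)); }

    /* One LSTM time step with the element-wise gate computations fused into
     * a single loop, so gate pre-activations are consumed while still
     * resident in fast memory instead of being written out and re-read by
     * separate kernels.  `gates` holds the 4*h pre-activations, laid out as
     * [i | f | g | o], already produced by the matrix-vector multiplications;
     * the layout and naming are illustrative assumptions. */
    void lstm_cell_fused(const float *gates, const float *c_prev,
                         float *c, float *h_out, size_t h)
    {
        for (size_t k = 0; k < h; ++k) {
            float i = sigmoidf(gates[k]);          /* input gate       */
            float f = sigmoidf(gates[h + k]);      /* forget gate      */
            float g = tanhf(gates[2 * h + k]);     /* cell candidate   */
            float o = sigmoidf(gates[3 * h + k]);  /* output gate      */
            c[k]     = f * c_prev[k] + i * g;      /* new cell state   */
            h_out[k] = o * tanhf(c[k]);            /* new hidden state */
        }
    }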
Keywords:multicore DSP  very long vector processor  recurrent neural networks  parallel optimization