A Helper Thread Pre-fetching Model Based on Learning Gradients of Control Parameters
Cite this article: PEI Songwen, ZHANG Junge, NING Jing. Helper Thread Pre-fetching Model Based on Learning Gradients of Control Parameters[J]. Journal of National University of Defense Technology, 2016, 38(5): 59-63
Authors: PEI Songwen, ZHANG Junge, NING Jing
Affiliation: School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology
Funding: Natural Science Foundation of Shanghai (15ZR1428600); Open Project of the State Key Laboratory of Computer Architecture (CARCH201206)
Abstract: For applications with irregular memory access, when the memory-access overhead exceeds the computation overhead, the memory-access cost of a traditional helper thread is higher than the computation cost of the main thread, causing the helper thread to fall behind the main thread. We therefore propose an improved, parameter-controlled helper-thread pre-fetching model that uses a gradient-descent algorithm to solve for the optimal values of the control parameters, effectively balancing the memory-access workload between the helper thread and the main thread so that the helper thread stays ahead of the main thread. Experimental results show that the parameter-controlled thread pre-fetching model achieves a system speedup of 1.1 to 1.5 times.

Keywords: data pre-fetch   helper thread   multi-core system   memory latency   gradient descent
Received: 2015-11-16

Helper Thread Pre-fetching Model Based on Learning Gradients of Control Parameters
PEI Songwen, ZHANG Junge and NING Jing. Helper Thread Pre-fetching Model Based on Learning Gradients of Control Parameters[J]. Journal of National University of Defense Technology, 2016, 38(5): 59-63
Authors:PEI Songwen  ZHANG Junge  NING Jing
Abstract: Applications with irregular memory access incur serious cache misses at run time. The helper thread is an effective technique for improving the performance of multicore systems: it pre-fetches data from memory into the cache level closest to the CPU. If the memory-access overhead of a given application is much greater than its computation overhead, the helper thread lags behind the main thread. Accordingly, we propose an improved helper-thread pre-fetching model that adds control parameters. Gradient descent, one of the most widely used machine learning algorithms, is adopted to determine the optimal control parameters. The amount of memory-access work is effectively regulated by these parameters, which ensures the helper thread finishes before the main thread. The experimental results show that a system speedup of 1.2 to 1.5 times is achieved.
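The optimization step described above can be illustrated with a minimal sketch. The cost model below (a convex stand-in with coefficients a, b, c) and all function names are assumptions for illustration; the paper's actual lag model and parameter set are not reproduced here.

```python
# Hypothetical sketch: use gradient descent to tune a control parameter k
# that limits how much memory-access work the helper thread takes on,
# so it finishes ahead of the main thread.

def lag(k, a=2.0, b=0.5, c=1.0):
    """Assumed cost model: helper-thread finish time minus main-thread
    finish time as a function of the control parameter k (convex stand-in)."""
    return a * k**2 - b * k + c

def d_lag(k, eps=1e-6):
    """Central-difference estimate of the gradient of the lag model."""
    return (lag(k + eps) - lag(k - eps)) / (2 * eps)

def tune(k=1.0, lr=0.1, steps=200):
    """Plain gradient descent toward the k that minimizes the lag."""
    for _ in range(steps):
        k -= lr * d_lag(k)
    return k

k_opt = tune()  # for this model, converges near b / (2 * a) = 0.125
```

In practice the gradient would come from measured run times rather than a closed-form model, but the update rule is the same.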
Keywords:data pre-fetch   helper thread   multi-core system   memory latency   gradient descent
This article is indexed in CNKI and other databases.
