Acta Photonica Sinica, Vol. 47, Issue 9 (pp: 910001-1)


Adaptive Model Update via Fusing Peak-to-sidelobe Ratio and Mean Frame Difference for Visual Tracking



Abstract

To adapt the correlation filter model to changes in target appearance and to improve the robustness and real-time performance of correlation filter tracking, an adaptive learning-rate adjustment method for real-time tracking with a single-layer convolutional correlation filter is proposed, based on the relationship among the correlation filter response, the mean frame difference, and the object's displacement. The method first selects convolutional features from a single convolution layer to train the correlation filter classifier that predicts the object's position, reducing the convolutional feature dimension and improving tracking speed. It then estimates the object's scale with a fast scale prediction method and adopts a sparse model update strategy to further accelerate tracking. Finally, the Peak-to-Sidelobe Ratio (PSR) of the convolutional response is used to estimate the credibility of the predicted location, while the appearance change of the object is evaluated by combining the mean frame difference with the object's displacement. The learning rate of the correlation filter model update is adjusted adaptively from these two terms, so that the model quickly learns changes in the object's appearance and tracking accuracy improves. The method is tested on the standard OTB-100 dataset. The results show an average distance precision of 90.1%, better than the nine state-of-the-art algorithms compared in the experiment, and an average success rate of 79.2%, second only to the continuous convolution operator tracker (CCOT) among the nine. The average speed is 31.8 frames per second, nearly 30 times that of CCOT.
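The update logic described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the thresholds (`psr_thresh`, `base_lr`, `max_lr`), and the simple additive fusion of mean frame difference and displacement are all hypothetical choices for the sketch; only the PSR definition and the linear-interpolation model update are standard in correlation filter tracking.

```python
import numpy as np

def peak_to_sidelobe_ratio(response, exclude=5):
    """PSR = (peak - sidelobe mean) / sidelobe std.
    The sidelobe is the response map excluding a small window around the
    peak; a high PSR indicates a sharp, trustworthy detection peak."""
    r, c = np.unravel_index(np.argmax(response), response.shape)
    peak = response[r, c]
    mask = np.ones(response.shape, dtype=bool)
    mask[max(r - exclude, 0):r + exclude + 1,
         max(c - exclude, 0):c + exclude + 1] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-12)

def mean_frame_difference(prev_frame, curr_frame):
    """Mean absolute pixel difference between consecutive frames,
    a cheap proxy for how fast the target appearance is changing."""
    return float(np.mean(np.abs(curr_frame.astype(np.float64)
                                - prev_frame.astype(np.float64))))

def adaptive_learning_rate(psr, mfd, displacement,
                           base_lr=0.015, psr_thresh=10.0, max_lr=0.1):
    """Illustrative fusion rule (the paper's exact formula differs):
    skip the update when the prediction is unreliable (low PSR), and
    raise the learning rate when the appearance is changing quickly."""
    if psr < psr_thresh:
        return 0.0  # low-confidence prediction: do not pollute the model
    appearance_change = mfd + displacement
    return min(base_lr * (1.0 + appearance_change), max_lr)

def update_model(model, new_model, lr):
    """Standard linear-interpolation model update used by CF trackers."""
    return (1.0 - lr) * model + lr * new_model
```

A sparse update strategy, as mentioned in the abstract, would simply call `update_model` only every few frames, or whenever `adaptive_learning_rate` returns a nonzero value.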

Supplementary Information

CLC number: TP491.4

DOI:

Funding: National Key Research and Development Program of China (No. 2017YFC0821102)

Received: 2018-05-06

Revised: 2018-06-12

Online publication date: --

Author Affiliations

XIONG Chang-zhen: Beijing Key Laboratory of Urban Road Traffic Intelligent Control Technology, North China University of Technology, Beijing 100144, China
CHE Man-qiang: Beijing Key Laboratory of Urban Road Traffic Intelligent Control Technology, North China University of Technology, Beijing 100144, China
WANG Run-ling: College of Science, North China University of Technology, Beijing 100144, China
LU Yan: Beijing Key Laboratory of Urban Road Traffic Intelligent Control Technology, North China University of Technology, Beijing 100144, China

Corresponding author: XIONG Chang-zhen (xczkiong@163.com)

Note: XIONG Chang-zhen (1979-), male, associate professor, Ph.D.; his research interests are deep learning and video image processing. Email: xczkiong@163.com

【1】BOLME D S, BEVERIDGE J R, DRAPER B A, et al. Visual object tracking using adaptive correlation filters[C]. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Piscataway: IEEE, 2010: 2544-2550.

【2】DANELLJAN M, KHAN F S, FELSBERG M. Adaptive color attributes for real-time visual tracking[C]. IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC: IEEE Computer Society Press, 2014: 1090-1097.

【3】HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2015, 37(3): 583-596.

【4】DANELLJAN M, HAGER G, KHAN F S, et al. Learning spatially regularized correlation filters for visual tracking[C]. IEEE International Conference on Computer Vision, Washington, DC: IEEE Computer Society Press, 2015: 4310-4318.

【5】DANELLJAN M, ROBINSON A, KHAN F S, et al. Beyond correlation filters: learning continuous convolution operators for visual tracking[C]. European Conference on Computer Vision, Berlin: Springer, 2016: 472-488.

【6】DANELLJAN M, BHAT G, KHAN F S, et al. ECO: efficient convolution operators for tracking[C]. IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC: IEEE Computer Society Press, 2017: 6931-6939.

【7】XIONG Chang-zhen, ZHAO Lu-lu, GUO Fen-hong. Kernelized correlation filters tracking based on adaptive feature fusion[J]. Journal of Computer-Aided Design & Computer Graphics, 2017, 29(6): 1068-1074.

【8】BERTINETTO L, VALMADRE J, GOLODETZ S, et al. Staple: complementary learners for real-time tracking[C]. IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC: IEEE Computer Society Press, 2016: 1401-1409.

【9】LUKEZIC A, VOJIR T, ZAJC L C, et al. Discriminative correlation filter with channel and spatial reliability[C]. IEEE Conference on Computer Vision and Pattern Recognition, Piscataway: IEEE, 2017: 4847-4856.

【10】DANELLJAN M, HAGER G, SHAHBAZ KHAN F, et al. Accurate scale estimation for robust visual tracking[C]. British Machine Vision Conference, 2014: 1-11.

【11】KIANI GALOOGAHI H, FAGG A, LUCEY S. Learning background-aware correlation filters for visual tracking[C]. IEEE International Conference on Computer Vision (ICCV), Washington, DC: IEEE Computer Society Press, 2017: 1144-1152.

【12】MA C, HUANG J B, YANG X, et al. Hierarchical convolutional features for visual tracking[C]. IEEE International Conference on Computer Vision (ICCV), Washington, DC: IEEE Computer Society Press, 2015: 3074-3082.

【13】MA C, HUANG J B, YANG X, et al. Robust visual tracking via hierarchical convolutional features[OL]. [2018-04-20]. https://arxiv.org/abs/1707.03816v1.

【14】QI Y, ZHANG S, QIN L, et al. Hedged deep tracking[C]. IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC: IEEE Computer Society Press, 2016: 4303-4311.

【15】SUN C, WANG D, LU H, et al. Correlation tracking via joint discrimination and reliability learning[OL]. [2018-06-08]. https://arxiv.org/pdf/1804.08965.pdf.

【16】ZHU Z, WU W, ZOU W, et al. End-to-end flow correlation tracking with spatial-temporal attention[OL]. [2018-06-08]. https://arxiv.org/pdf/1711.01124.pdf.

【17】DANELLJAN M, BHAT G, KHAN F S, et al. ECO: efficient convolution operators for tracking[C]. IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC: IEEE Computer Society Press, 2017: 6931-6939.

【18】CAI Yu-zhu, YANG De-dong, MAO Ning, et al. Visual tracking algorithm based on adaptive convolution features[J]. Acta Optica Sinica, 2017, 37(3): 0315002.

【19】WANG X, LI H, LI Y, et al. Robust and real-time deep tracking via multi-scale domain adaptation[C]. IEEE International Conference on Multimedia and Expo, Washington, DC: IEEE Computer Society Press, 2017: 1338-1343.

【20】FAN H, LING H. Parallel tracking and verifying: a framework for real-time and high accuracy visual tracking[C]. IEEE International Conference on Computer Vision (ICCV), Washington, DC: IEEE Computer Society Press, 2017: 5487-5495.

【21】GALOOGAHI H K, SIM T, LUCEY S. Correlation filters with limited boundaries[C]. IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC: IEEE Computer Society Press, 2015: 4630-4638.

【22】YANG De-dong, MAO Ning, YANG Fu-cai, et al. Improved SRDCF object tracking via the Best-Buddies similarity[J]. Optics and Precision Engineering, 2018, 26(2): 492-502.

【23】WANG M, LIU Y, HUANG Z. Large margin object tracking with circulant feature maps[C]. IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC: IEEE Computer Society Press, 2017: 4800-4808.

【24】ZHU Z, HUANG G, ZOU W, et al. UCT: learning unified convolutional networks for real-time visual tracking[C]. IEEE International Conference on Computer Vision Workshops, Washington, DC: IEEE Computer Society Press, 2017: 1973-1982.

Cite This Paper

XIONG Chang-zhen, CHE Man-qiang, WANG Run-ling, LU Yan. Adaptive Model Update via Fusing Peak-to-sidelobe Ratio and Mean Frame Difference for Visual Tracking[J]. Acta Photonica Sinica, 2018, 47(9): 0910001.

Cited By

【1】ZENG Meng-yuan, SHANG Zhen-hong, LIU Hui, LI Jian-peng. Object tracking algorithm with adaptive update fusing multi-layer convolutional features[J]. Laser & Optoelectronics Progress, 2020, 57(2): 21008-1.

