

Accurate Scale Estimation with IoU and Distance Between Centroids for Object Tracking

Li Shao-Ming, Chu Jun, Leng Lu, Tu Xu-Ji

Citation: Li Shao-Ming, Chu Jun, Leng Lu, Tu Xu-Ji. Accurate scale estimation with IoU and distance between centroids for object tracking. Acta Automatica Sinica, 2021, 48(x): 1001−1014 doi: 10.16383/j.aas.c210356

doi: 10.16383/j.aas.c210356
Funds: Supported by the National Natural Science Foundation of China (62162045) and the Jiangxi Provincial Science and Technology Key Project (20192BBE50073)
More Information
  Author Bio:

  LI Shao-Ming  Master student at the School of Software, Nanchang Hangkong University. His research interests include computer vision and object tracking. E-mail: thorn_mo1905@163.com

  CHU Jun  Professor at the School of Software, Nanchang Hangkong University. His research interests include computer vision and pattern recognition. E-mail: chuj@nchu.edu.cn

  LENG Lu  Professor at the School of Software, Nanchang Hangkong University. His research interests include image processing, biometric template protection, and biometric recognition. E-mail: leng@nchu.edu.cn

  TU Xu-Ji  Lecturer at the School of Software, Nanchang Hangkong University. His research interests include computer vision and image processing. E-mail: 71068@nchu.edu.cn


• Abstract: IoU-based (Intersection over Union) scale estimation methods in object tracking train a scale regression model to predict the overlap between candidate boxes and the ground-truth box in video frames; at inference, they fine-tune the initial bounding box by maximizing the predicted IoU to obtain the target scale. This paper analyzes in detail the gradient update process of IoU-based scale estimation and finds that both training and inference use IoU as the only metric, with no constraint on the distance between the centers of the predicted box and the ground-truth box. As a result, the template becomes contaminated during appearance-model updates, and localization drifts during foreground-background classification. Motivated by this finding, we construct NDIoU (Normalization distance IoU), a new metric that combines IoU with the center-point distance, propose a new scale estimation method based on it, and embed the method in a discriminative tracking framework: during training, NDIoU serves as the label and a loss function with a center-distance constraint supervises the learning of the network; during online inference, the target scale is fine-tuned by maximizing NDIoU, which provides more accurate samples for appearance-model updates. Compared with mainstream methods on seven datasets, the proposed method achieves the best overall performance on all seven. In particular, on GOT-10k it reaches 65.4%, 78.7%, and 53.4% in AO, $SR_{0.5}$, and $SR_{0.75}$, exceeding the baseline by 4.3%, 7.0%, and 4.2%, respectively.
  1)  Manuscript received April 24, 2021; accepted November 2, 2021. Supported by the National Natural Science Foundation of China (62162045) and the Jiangxi Provincial Science and Technology Key Project (20192BBE50073). Recommended by Associate Editor
  2)  1. Institute of Computer Vision, School of Software, Nanchang Hangkong University, Nanchang 330063; 2. Key Laboratory of Jiangxi Province for Image Processing and Pattern Recognition, Nanchang Hangkong University, Nanchang 330063
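The NDIoU metric described in the abstract combines IoU with a normalized distance between box centers. The abstract does not spell out the normalization, so the sketch below assumes the DIoU-style convention of Zheng et al. [27]: the squared center distance divided by the squared diagonal of the smallest enclosing box. The function `ndiou` is a hypothetical illustration, not the authors' implementation.

```python
def ndiou(box_a, box_b):
    """Sketch of an NDIoU-style score for axis-aligned boxes (x1, y1, x2, y2).

    Assumes the DIoU normalization: IoU minus the squared center distance
    over the squared diagonal of the smallest enclosing box.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union > 0 else 0.0

    # Squared distance between box centers
    d2 = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 + \
         ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2

    # Squared diagonal of the smallest enclosing box (the normalizer)
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 + \
         (max(ay2, by2) - min(ay1, by1)) ** 2

    return iou - d2 / c2 if c2 > 0 else iou
```

Unlike plain IoU, this score distinguishes candidates with identical overlap but different center offsets (the situation depicted in Fig. 1), so it can penalize off-center boxes during both training and refinement.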
• Fig. 1  Same IoU but different distances between centroids. Red represents the candidate bounding box, and green represents the ground truth

  Fig. 2  Normalized distance between centroids

  Fig. 3  Statistics of the number of video frames for each IoU value and center distance

  Fig. 4  Visualization of tracking results on the video sequence dinosaur

  Fig. 5  Comparison of the proposed method (ASEID) with related algorithms on OTB-100

  Fig. 6  Success plots on sequences with different challenging attributes in the OTB-100 dataset

  Fig. 7  Precision plots on sequences with different challenging attributes in the OTB-100 dataset

  Fig. 8  Visualization comparison of the proposed method and related trackers

  Fig. 9  Failure cases in OTB-100. The green bounding box is the ground truth, and the red box is the prediction of the proposed method

  Fig. 10  Failure cases in GOT-10k. In the GOT-10k test set, only the ground truth of the first frame of each test sequence is available, so the first-frame annotation identifies the tracked target

Table 1  Ablation study on OTB-100

Variant             AUC (%)  Precision (%)  Norm.Pre (%)  FPS
Multi-scale search  68.4     88.8           83.8          21
IoU                 68.4     89.4           84.2          35
NDIoU               69.8     91.3           87.3          35

Table 2  Comparison with SOTA trackers on UAV123

Tracker                 AUC (%)  Precision (%)  Norm.Pre (%)
SiamBAN [33]            63.1     83.3           –
CGACD [34]              63.3     83.3           –
POST [35]               62.9     80.0           –
MetaRTT [36]            56.9     80.9           –
ECO [28]                52.4     74.1           66.8
UPDT [32]               54.2     76.8           70.9
DaSiamRPN [37]          56.9     78.1           74.2
ATOM [9]                63.2     84.4           79.1
DiMP50 (baseline) [14]  64.3     85.0           80.5
ASEID (ours)            64.5     86.1           81.6
("–": not reported in the source.)

Table 3  Comparison with SOTA trackers on VOT2018

Tracker                 EAO    Robustness  Accuracy
DRT [38]                0.356  0.201       0.519
RCO [39]                0.376  0.155       0.507
UPDT [32]               0.378  0.184       0.536
DaSiamRPN [37]          0.383  0.276       0.586
MFT [39]                0.385  0.140       0.505
LADCF [40]              0.389  0.159       0.503
ATOM [9]                0.401  0.204       0.590
SiamRPN++ [16]          0.414  0.234       0.600
DiMP50 (baseline) [14]  0.440  0.153       0.597
PrDiMP50 [15]           0.442  0.165       0.618
ASEID (ours)            0.454  0.153       0.615

Table 4  Comparison with SOTA trackers on GOT-10k

Tracker                 AO (%)  SR0.50 (%)  SR0.75 (%)
DCFST [30]              59.2    68.3        44.8
PrDiMP50 [15]           63.4    73.8        54.3
KYS [17]                63.6    75.1        51.5
SiamFC++ [13]           59.5    69.5        47.9
D3S [41]                59.7    67.6        46.2
Ocean [12]              61.1    72.1        –
ROAM [31]               43.6    46.6        16.4
ATOM [9]                55.6    63.4        40.2
DiMP50 (baseline) [14]  61.1    71.7        49.2
ASEID (ours)            65.4    78.7        53.4
("–": not reported in the source.)

Table 5  Comparison with SOTA trackers on LaSOT

Tracker                 Precision (%)  Success (AUC) (%)
ASRCF [6]               33.7           35.9
POST [35]               46.3           48.1
Ocean [12]              56.6           56.0
GlobalT [42]            52.7           52.1
SiamRPN++ [16]          56.9           49.6
ROAM [31]               44.5           44.7
ATOM [9]                50.5           51.4
DiMP50 (baseline) [14]  56.9           56.9
ASEID (ours)            57.5           57.2

Table 6  Comparison with SOTA trackers on TrackingNet

Tracker                 AUC (%)  Precision (%)  Norm.Pre (%)
MDNet [29]              60.6     56.5           70.5
ECO [28]                55.4     49.2           61.8
DaSiamRPN [37]          63.8     59.1           73.3
D3S [41]                72.8     66.4           –
ROAM [31]               67.0     62.3           –
CGACD [34]              71.1     69.3           –
ATOM [9]                70.3     64.8           77.1
DiMP50 (baseline) [14]  74.0     68.7           80.1
ASEID (ours)            75.3     71.1           81.9
("–": not reported in the source.)

Table 7  Comparison with SOTA trackers on TC128

Tracker                 AUC (%)  Precision (%)
POST [35]               56.3     78.1
MetaRTT [36]            59.7     80.0
ASRCF [6]               60.3     82.5
UDT [43]                54.1     71.7
TADT [44]               56.2     –
Re2EMA [45]             52.1     69.5
RTMDNet [46]            56.3     78.8
MLT [47]                49.8     –
DiMP50 (baseline) [14]  61.2     81.0
ASEID (ours)            63.2     84.2
("–": not reported in the source.)
                      1. [1] Wu Y, Lim J, and Yang M H. Object tracking benchmark[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(9): 1834–1848. doi: 10.1109/TPAMI.2014.2388226
[2] Meng Lu, Yang Xu. A review of target tracking algorithms[J]. Acta Automatica Sinica, 2019, 45(7): 1244–1260 (in Chinese)
[3] Yin Hong-Peng, Chen Bo, Chai Yi, Liu Zhao-Dong. A review of object detection and tracking based on vision[J]. Acta Automatica Sinica, 2016, 42(10): 1466–1489 (in Chinese)
[4] Tan Jian-Hao, Zheng Ying-Shuai, Wang Yao-Nan, Ma Xiao-Ping. AFST: Anchor-free fully convolutional siamese tracker with searching center[J]. Acta Automatica Sinica, 2021, 47(4): 801–812 (in Chinese)
                        [5] Danelljan M, Hager G, Khan F S. Learning spatially regularized correlation filters for visual tracking[C]. In: Proceedings of the 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015. 4310–4318.
                        [6] Dai K, Wang D, Lu H, Sun C, and Li J. Visual tracking via adaptive spatially-regularized correlation filters[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 4670–4679.
                        [7] Danelljan M, Hager G, Khan F S, and Felsberg M. Discriminative scale space tracking[J]. IEEE Transaction Pattern Analysis Machine Intelligence, 2017, 39(8): 1561–1575. doi: 10.1109/TPAMI.2016.2609928
                        [8] Li Y and Zhu J. A scale adaptive kernel correlation filter tracker with feature integration[C]. In: Proceedings of the 13th European Conference on Computer Vision. Switzerland, Zurich: Springer, 2014. 254–265.
                        [9] Danelljan M, Bhat G, Khan F S, and Felsberg M. ATOM: Accurate tracking by overlap maximization[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 4660–4669.
                        [10] Li B, Yan J, Wu W, Zhu Z, and Hu X. High performance visual tracking with siamese region proposal network[C]. In: Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 8971–8980.
                        [11] Wang Q, Bertinetto L, Hu W, and Torr P. Fast online object tracking and segmentation: a unifying approach[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 1328–1338.
                        [12] Zhang Z, Peng H, Fu J, Li B, Hu W. Ocean: Object-aware anchor-free tracking[C]. In: Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 771–787.
                        [13] Xu Y, Wang Z, Li Z, Yuan Y, Yu G. SiamFC++: Towards robust and accurate visual tracking with target estimation guidelines[C]. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020. 12549–12556.
                        [14] Bhat G, Danelljan M, Gool L, and Timofte R. Learning discriminative model prediction for tracking[C]. In: Proceedings of the 2019 International Conference on Computer Vision. Seoul, Korea: IEEE, 2019. 6181–6190.
                        [15] Danelljan M, Gool L, Timofte R. Probabilistic regression for visual tracking[C]. In: Proceedings of the 2020 IEEE Conference Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 7183–7192.
                        [16] Li B, Wu W, Wang Q, Zhang F, Xing J, and Yan J. SiamRPN++: Evolution of siamese visual tracking with very deep networks[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 4282–4291.
                        [17] Bhat G, Danelljan M, Gool L, Timofte R. Know your surroundings: exploiting scene information for object tracking[C]. In: Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 205–221.
                        [18] Girshick R. Fast R-CNN[C]. In: Proceedings of the 2015 IEEE International Conference on Computer Vision. Santiago, Chile: IEEE, 2015. 1440–1448.
[19] Ren S, He K, Girshick R, and Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137–1149.
                        [20] Jiang B, Luo R, Mao J, Xiao T, and Jiang Y. Acquisition of localization confidence for accurate object detection[C]. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 816–832.
                        [21] Mueller M, Smith N, and Ghanem B. A benchmark and simulator for UAV tracking[C]. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, Netherlands: Springer, 2016. 445–461.
                        [22] Kristan M, Leonardis A, Matas J, Felsberg M, Pfugfelder R, Zajc L, Vojir T, Bhat G, Lukezic A, Eldesokey A, Fernandez G, and et al. The sixth visual object tracking VOT2018 challenge results. In: Proceedings of the 15th European Conference on Computer Vision workshop. Munich, Germany: Springer, 2018. 3–53.
[23] Huang L, Zhao X, and Huang K. GOT-10k: A large high-diversity benchmark for generic object tracking in the wild[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(5): 1562–1577.
                        [24] Fan H, Lin L, Yang F, Chu P, Deng G, Yu S, Bai H, Xu X, Liao C, and Ling H. LaSOT: A high-quality benchmark for large-scale single object tracking[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 5374–5383.
                        [25] Muller M, Bibi A, Giancola S, Subaihi S, and Ghanem B. Trackingnet: A large-scale dataset and benchmark for object tracking in the wild[C]. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 310–327.
                        [26] Liang P, Blasch E, Lin H. Encoding color information for visual tracking: algorithms and benchmark[J]. IEEE Transactions on Image Processing, 2015, 24(12): 5630–5644. doi: 10.1109/TIP.2015.2482905
                        [27] Zheng Z, Wang P, Liu W, Li J, Ye R, Ren D. Distance-IoU loss: faster and better learning for bounding box regression[C]. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020, 34(7): 12993–13000.
                        [28] Danelljan M, Bhat G, Khan F S, and Felsberg M. ECO: Efficient convolution operators for tracking[C]. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017. 6931–6939.
                        [29] Nam H, Han B. Learning multi-domain convolutional neural networks for visual tracking[C]. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 4293–4302.
                        [30] Zheng L, Tang M, Chen Y, Wang J, Lu H. Learning feature embeddings for discriminant model based tracking[C]. In: Proceedings of the 16th European Conference on Computer Vision. Glasgow, UK: Springer, 2020. 759–775.
                        [31] Yang T, Xu P, Hu R, Chai H, Chan A. ROAM: Recurrently optimizing tracking model[C]. In: Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 6718–6727.
                        [32] Bhat G, Johnander J, Danelljan M, Khan F S, and Felsberg M. Unveiling the power of deep tracking[C]. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 493–509.
                        [33] Chen Z, Zhong B, Li G, Zhang S, and Ji R. Siamese box adaptive network for visual tracking[C]. In: Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 6668–6677.
                        [34] Du F, Liu P, Zhao W, Tang X. Correlation-guided attention for corner detection based visual tracking[C]. In: Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 6836–6845.
                        [35] Wang N, Zhou W, Qi G, Li H. POST: POlicy-based switch tracking[C]. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020. 12184–12191.
                        [36] Jung I, You K, Noh H, Cho M, Han B. Real-Time object tracking via meta-learning: efficient model adaptation and one-shot channel pruning[C]. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020. 11205–11212.
                        [37] Zhu Z, Wang Q, Li B, Wei W, Yan J, Hu W. Distractor-aware siamese networks for visual object tracking[C]. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 101–117.
                        [38] Sun C, Wang D, Lu H, Yang M. Correlation tracking via joint discrimination and reliability learning[C]. In: Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 489–497.
[39] Kristan M, Leonardis A, Matas J, Felsberg M, Pfugfelder R, Zajc L, Vojir T, Bhat G, Lukezic A, Eldesokey A, Fernandez G, and et al. The sixth visual object tracking VOT2018 challenge results. In: Proceedings of the 15th European Conference on Computer Vision Workshop. Munich, Germany: Springer, 2018.
                        [40] Xu T, Feng Z, Wu X, and Kittler J. Learning adaptive discriminative correlation filters via temporal consistency preserving spatial feature selection for robust visual tracking[Online]. ArXiv Preprint ArXiv: 1807.11348, 2018.
                        [41] Lukezic A, Matas J, Kristan M. D3S – A discriminative single shot segmentation tracker[C]. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 7133–7142.
                        [42] Huang L, Zhao X, Huang K. GlobalTrack: A simple and strong baseline for long-term tracking[C]. In: Proceedings of the 34th AAAI Conference on Artificial Intelligence. New York, USA: AAAI, 2020. 11037–11044.
                        [43] Wang N, Song Y, Ma C, Zhou W, Liu W. Unsupervised deep tracking[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 1308–1317.
                        [44] Li X, Ma C, Wu B, He Z, Yang M. Target-aware deep tracking[C]. In: Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition. Long Beach, USA: IEEE, 2019. 1369–1378.
                        [45] Huang J, and Zhou W. Re2EMA: Regularized and reinitialized exponential moving average for target model update in object tracking[C]. In: Proceedings of the 2019 AAAI Conference on Artificial Intelligence, 2019. 8457–8464.
                        [46] Jung I, Song J, Baek M, Han B. Real-time MDNet[C]. In: Proceedings of the 15th European Conference on Computer Vision. Munich, Germany: Springer, 2018. 89–104.
                        [47] Choi J, Kwon J, Lee K. Deep meta learning for real-time target-aware visual tracking[C]. In: Proceedings of the 2019 International Conference on Computer Vision. Seoul, Korea: IEEE, 2019. 911–920.
[48] Paszke A, Gross S, Massa F, et al. PyTorch: An imperative style, high-performance deep learning library[C]. In: Proceedings of the 2019 Neural Information Processing Systems. Vancouver, Canada, 2019.
[49] Danelljan M, Bhat G. PyTracking: Visual tracking library based on PyTorch [Online]. Available: https://github.com/visionml/pytracking, 2019.
                        [50] Lin T, Maire M, Belongie S, Bourdev L, Girshick R, Hays J, Perona P, Ramanan D, Dollar P, and Zitnick C. Microsoft COCO: Common objects in context. In: Proceedings of the 13th European Conference on Computer Vision. Switzerland, Zurich: Springer, 2014: 740–755.
Publication history
• Received: April 24, 2021
• Accepted: November 2, 2021
• Available online: November 29, 2021
