Video Super-Resolution via Hierarchical Feature Reuse

Zhou Yuan, Wang Ming-Fei, Du Xiao-Ting, Chen Yan-Fang

Citation: Zhou Yuan, Wang Ming-Fei, Du Xiao-Ting, Chen Yan-Fang. Video super-resolution via hierarchical feature reuse. Acta Automatica Sinica, 2021, x(x): 1−11. doi: 10.16383/j.aas.c210095

              doi: 10.16383/j.aas.c210095

              Funds: Supported by the National Natural Science Foundation of China (U2006211) and National Key Research and Development Program (2020YFC1523204)
                Author Bio:

ZHOU Yuan Associate professor at the School of Electrical and Information Engineering, Tianjin University. Her research interests include computer vision and image/video communication. Corresponding author of this paper. E-mail: zhouyuan@tju.edu.cn

WANG Ming-Fei Master student at the School of Electrical and Information Engineering, Tianjin University. His research interests include computer vision and machine learning.

DU Xiao-Ting Master student at the School of Electrical and Information Engineering, Tianjin University. Her research interests include computer vision and machine learning.

CHEN Yan-Fang Ph.D. student at the School of Electrical and Information Engineering, Tianjin University. Her research interests include computer vision and machine learning.

Abstract: Current deep convolutional neural network methods achieve a somewhat smaller performance gain on video super-resolution than on image super-resolution, partly because they do not fully exploit certain key inter-frame information contained in hierarchical features. To address this, this paper proposes an architecture called the Hierarchical Feature Reuse Network (HFRNet). The network preserves the low-frequency content of the motion-compensated frames and employs Dense Hierarchical Feature Blocks (DHFBs) to adaptively fuse the features of every residual block inside each DHFB; long-range feature reuse then fuses the features across multiple DHFBs, which facilitates the recovery of high-frequency details. Experimental results show that the proposed method outperforms current methods on both quantitative and qualitative measures.
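To make the block structure described in the abstract concrete, here is a minimal PyTorch-style sketch of the two reuse mechanisms it names: adaptive fusion of the residual-block features inside a DHFB, and long-range fusion across several DHFBs. The class names, channel counts, and the use of concatenation followed by a 1×1 convolution as the fusion operator are illustrative assumptions, not the paper's exact design, which also includes motion compensation and upsampling stages omitted here.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Plain conv-ReLU-conv residual block (illustrative stand-in)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class DHFB(nn.Module):
    """Dense hierarchical feature block: the outputs of all internal residual
    blocks are concatenated and fused by a 1x1 convolution (an assumed form
    of the adaptive fusion), with a residual connection around the block."""
    def __init__(self, channels, num_res_blocks=6):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualBlock(channels) for _ in range(num_res_blocks))
        self.fuse = nn.Conv2d(channels * num_res_blocks, channels, 1)

    def forward(self, x):
        feats, h = [], x
        for block in self.blocks:
            h = block(h)
            feats.append(h)
        return x + self.fuse(torch.cat(feats, dim=1))  # local feature reuse

class HFRBackbone(nn.Module):
    """Stacks several DHFBs and fuses their outputs with another 1x1
    convolution, i.e. long-range feature reuse across blocks."""
    def __init__(self, channels=64, num_dhfb=6, num_res_blocks=6):
        super().__init__()
        self.dhfbs = nn.ModuleList(DHFB(channels, num_res_blocks) for _ in range(num_dhfb))
        self.global_fuse = nn.Conv2d(channels * num_dhfb, channels, 1)

    def forward(self, x):
        outs, h = [], x
        for dhfb in self.dhfbs:
            h = dhfb(h)
            outs.append(h)
        return x + self.global_fuse(torch.cat(outs, dim=1))  # long-range feature reuse
```

With num_dhfb=6 and num_res_blocks=6 this corresponds loosely to the R6D6 setting in Table 1 below; the parameter counts in Table 2 refer to the full network rather than this sketch.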
Fig. 1  Architecture of the Hierarchical Feature Reuse Network (HFRNet). Top: overall architecture; middle: feature fusion module of HFRNet(a); bottom: feature fusion module of HFRNet(b)

Fig. 2  Detailed architecture of the Dense Hierarchical Feature Block (DHFB)

Fig. 3  Average PSNR (dB) and SSIM of our method and other methods on the VIDEO4 and Myanmar datasets

Fig. 4  Qualitative super-resolution comparison of HFRNet with other models on an image from the VIDEO4 dataset

Fig. 5  Qualitative super-resolution comparison of HFRNet with other models on an image from the Myanmar dataset

Fig. 6  Qualitative comparison of the reconstruction details of HFRNet and other models

Table 1  Average PSNR (dB) on the 2x video super-resolution task with different numbers of DHFBs (D) and residual blocks per DHFB (R)

Configuration   CITY (dB)        WALK (dB)        FOLIAGE (dB)     CALENDAR (dB)    Average PSNR (dB)
R4D6            34.342           36.846           32.045           27.071           32.576
R6D4            34.339           37.101           32.117           27.067           32.656
R6D6            34.896           37.210           32.224           27.137           32.866
R6D8            34.901 (±0.035)  37.102 (±0.054)  32.187 (±0.069)  27.140 (±0.007)  32.833 (±0.041)
R8D6            34.633 (±0.039)  36.873 (±0.025)  32.144 (±0.050)  27.109 (±0.019)  32.690 (±0.034)
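For reference, the PSNR values reported in Tables 1−4 are the standard peak signal-to-noise ratio in decibels, presumably averaged over the frames of each test sequence. A minimal NumPy sketch follows; evaluation details such as border cropping or whether only the luminance channel is scored are not given on this page, so the exact protocol remains an assumption.

```python
import numpy as np

def psnr_db(reference: np.ndarray, reconstruction: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a ground-truth frame and a
    reconstructed frame of the same shape; `peak` is the maximum pixel value
    (255 for 8-bit frames, 1.0 for normalized ones)."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```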

Table 2  Number of parameters of different HFRNet structures and the average PSNR (dB) achieved on the super-resolution task

Scale  Network structure                    Parameters  CITY (dB)  WALK (dB)  FOLIAGE (dB)  CALENDAR (dB)  Average PSNR (dB)
x2     Without hierarchical feature reuse   2.85M       33.793     35.919     31.884        26.291         31.972
       HFRNet(a)                            3.01M       34.896     37.210     32.224        27.137         32.866
       HFRNet(b)                            3.10M       35.104     37.218     32.230        27.158         32.927
x3     Without hierarchical feature reuse   2.85M       27.220     30.113     27.019        23.344         26.924
       HFRNet(a)                            3.01M       28.235     31.513     27.539        24.190         27.869
       HFRNet(b)                            3.10M       28.240     31.613     27.587        24.217         27.914

Table 3  Average PSNR (dB) on the video super-resolution task with different optical flow estimation algorithms

Scale  Optical flow algorithm  CITY (dB)  WALK (dB)  FOLIAGE (dB)  CALENDAR (dB)  Average PSNR (dB)
x2     CNN-based               35.226     37.106     32.244        27.817         33.098
       CLG-TV                  35.104     37.218     32.230        27.158         32.927
x3     CNN-based               28.255     32.103     27.590        24.766         28.179
       CLG-TV                  28.240     31.613     27.587        24.217         27.914

Table 4  Average PSNR (dB) on the video super-resolution task with different motion compensation algorithms

Scale  MC (k=0.050)  MC (k=0.100)  MC (k=0.125)  MC (k=0.175)  AMC
x2     32.493        32.510        32.714        32.615        32.927
x3     27.505        27.684        27.822        27.694        27.914
Metrics
• Article views: 908
• Full-text HTML views: 340
• Citations: 0

Publication history
• Available online: 2021-06-30
