
Large Displacement Optical Flow Estimation Jointing Depthwise Over-parameterized Convolution and Cross Correlation Attention

Wang Zi-Ge, Ge Li-Yue, Chen Zhen, Zhang Cong-Xuan, Wang Zi-Xu, Shu Ming-Yi

Citation: Wang Zi-Ge, Ge Li-Yue, Chen Zhen, Zhang Cong-Xuan, Wang Zi-Xu, Shu Ming-Yi. Large displacement optical flow estimation jointing depthwise over-parameterized convolution and cross correlation attention. Acta Automatica Sinica, 2024, 50(8): 1631–1645. doi: 10.16383/j.aas.c230049

Large Displacement Optical Flow Estimation Jointing Depthwise Over-parameterized Convolution and Cross Correlation Attention

doi: 10.16383/j.aas.c230049
Funds: Supported by National Natural Science Foundation of China (62222206, 62272209), Major Science and Technology Research and Development Project of Jiangxi Province (20232ACC01007), Key Research and Development Program of Jiangxi Province (20232BBE50006), Technological Innovation Guidance Program of Jiangxi Province (2021AEI91005), Science and Technology Project of the Education Department of Jiangxi Province (GJJ210910), and the Open Fund of Jiangxi Key Laboratory for Image Processing and Pattern Recognition (ET202104413)
More Information
  Author Bio:

  WANG Zi-Ge Master student at the School of Measuring and Optical Engineering, Nanchang Hangkong University. Her main research interest is computer vision. E-mail: Wangzggg@163.com

  GE Li-Yue Assistant experimenter at Nanchang Hangkong University and Ph.D. candidate at the School of Instrumentation and Optoelectronic Engineering, Beihang University. His research interests cover image detection and intelligent recognition. E-mail: lygeah@163.com

  CHEN Zhen Professor at the School of Measuring and Optical Engineering, Nanchang Hangkong University. He received his Ph.D. degree from Northwestern Polytechnical University in 2003. His research interests cover image processing and computer vision. E-mail: dr_chenzhen@163.com

  ZHANG Cong-Xuan Professor at the School of Measuring and Optical Engineering, Nanchang Hangkong University. He received his Ph.D. degree from Nanjing University of Aeronautics and Astronautics in 2014. His research interests cover image processing and computer vision. Corresponding author of this paper. E-mail: zcxdsg@163.com

  WANG Zi-Xu Master student at the School of Measuring and Optical Engineering, Nanchang Hangkong University. His main research interest is computer vision. E-mail: wangzixu0827@163.com

  SHU Ming-Yi Master student at the School of Measuring and Optical Engineering, Nanchang Hangkong University. His main research interest is computer vision. E-mail: shumingyi1997@163.com

• Abstract: To address the accuracy and robustness problems of existing deep-learning optical flow models in large-displacement scenes, this paper proposes an optical flow estimation method for image sequences that combines depthwise over-parameterized convolution and cross correlation attention. First, a depthwise over-parameterized convolution is constructed by combining a depthwise convolution with a standard convolution, replacing the ordinary convolution; this extracts richer features and accelerates the convergence of network training, improving the accuracy of optical flow estimation without increasing the inference cost. Second, a feature-extraction encoder network based on cross correlation attention is designed; stacking attention layers yields a larger receptive field, so that multi-scale, long-range context features can be extracted, which strengthens the robustness of optical flow estimation in large-displacement scenes. Finally, a pyramid residual iteration model is adopted to build the full optical flow network that joins depthwise over-parameterized convolution with cross correlation attention, improving overall estimation performance. Comprehensive comparisons of the proposed method against representative optical flow methods on the MPI-Sintel and KITTI test sets show that it achieves strong optical flow estimation performance, with particularly better accuracy and robustness in large-displacement scenes.
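The abstract's central efficiency claim, that the over-parameterized convolution adds capacity during training but no inference cost, rests on the fact that two stacked linear convolutions collapse into a single equivalent convolution. Below is a minimal 1-D, single-channel sketch of that collapse; the helper names `corr_valid` and `compose` are hypothetical, and the paper's actual DO-Conv [31] is the multi-channel depthwise formulation, not this toy version:

```python
def corr_valid(x, k):
    # 'valid'-mode 1-D sliding dot product (correlation-style, no kernel flip)
    n = len(x) - len(k) + 1
    return [sum(x[i + j] * k[j] for j in range(len(k))) for i in range(n)]

def compose(d, w):
    # fold the two kernels into one: e[t] = sum over j+m=t of d[j]*w[m]
    e = [0.0] * (len(d) + len(w) - 1)
    for j, dj in enumerate(d):
        for m, wm in enumerate(w):
            e[j + m] += dj * wm
    return e

x = [1.0, 2.0, -1.0, 3.0, 0.5, 4.0, -2.0, 1.5]  # input signal
d = [0.5, -1.0, 2.0]   # extra "over-parameterizing" kernel (training-time only)
w = [1.0, 3.0, -0.5]   # base kernel

two_step = corr_valid(corr_valid(x, d), w)   # training-time composition
one_step = corr_valid(x, compose(d, w))      # collapsed inference-time kernel

assert all(abs(a - b) < 1e-9 for a, b in zip(two_step, one_step))
```

At training time the network learns `d` and `w` separately; before deployment they are folded once into `compose(d, w)`, so inference runs a single convolution of the original cost.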
• Fig. 1  Structure diagram of large displacement optical flow estimation based on depthwise over-parameterized convolution and cross correlation attention

  Fig. 2  Structure diagram of conventional convolution and depthwise over-parameterized convolution

  Fig. 3  The operation of depthwise over-parameterized convolution

  Fig. 4  Comparison of feature maps of different optical flow models

  Fig. 5  The cross correlation attention block

  Fig. 6  Structure diagram of the optical flow feature encoder network based on cross correlation attention

  Fig. 7  Comparison of the estimation results of different optical flow models

  Fig. 8  Visualization of feature maps of different sequences in the Clean and Final datasets (the red bounding boxes mark edge feature information with significant differences)

  Fig. 9  Visualization of feature maps at different scales under different pyramid layers

  Fig. 10  Visualization of the estimated flow fields of the comparable methods on the MPI-Sintel test datasets

  Fig. 11  Flow error maps of the comparable methods tested on the KITTI2015 datasets

  Fig. 12  The training process of Baseline_deconv on each dataset

  Fig. 13  Comparison of visualization results of each ablation model on the MPI-Sintel test datasets

  Fig. 14  Comparison of visualization results of each ablation model on the KITTI2015 datasets
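Figs. 5 and 6 describe the cross correlation attention encoder, and the abstract states that stacking attention layers enlarges the receptive field. The toy reachability sketch below shows why: under the assumption (ours, for illustration; not the paper's actual computation) that each attention step lets a pixel aggregate only along its own row and column, in the spirit of criss-cross attention [33], two stacked steps already cover the whole grid:

```python
H, W = 4, 5  # toy feature-map size

def one_step(reach):
    # propagate "influence" along the row and column of each reachable pixel
    new = set(reach)
    for (r, c) in reach:
        new |= {(r, j) for j in range(W)}   # same row
        new |= {(i, c) for i in range(H)}   # same column
    return new

start = {(1, 2)}                 # a single query pixel
after1 = one_step(start)         # its row + column
after2 = one_step(after1)        # the entire grid

print(len(after1))  # prints 8  (H + W - 1)
print(len(after2))  # prints 20 (H * W)
```

One step covers H + W − 1 positions; a second step reaches all H × W positions, which is why stacking a few such sparse-attention layers yields long-range context far more cheaply than dense all-pairs attention.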

Table 1  Optical flow estimation results of image sequences on the MPI-Sintel dataset (pixels)

| Method | Clean All | Clean Matched | Clean Unmatched | Final All | Final Matched | Final Unmatched |
| --- | --- | --- | --- | --- | --- | --- |
| IRR-PWC[14] | 3.844 | 1.472 | 23.220 | 4.579 | 2.154 | 24.355 |
| PPAC-HD3[36] | 4.589 | 1.507 | 29.751 | 4.599 | 2.116 | 24.852 |
| LiteFlowNet2[37] | 3.483 | 1.383 | 20.637 | 4.686 | 2.248 | 24.571 |
| IOFPL-ft[38] | 4.394 | 1.611 | 27.128 | 4.224 | 1.956 | 22.704 |
| PWC-Net[25] | 4.386 | 1.719 | 26.166 | 5.042 | 2.445 | 26.221 |
| HMFlow[39] | 3.206 | 1.122 | 20.210 | 5.038 | 2.404 | 26.535 |
| SegFlow153[40] | 4.151 | 1.246 | 27.855 | 6.191 | 2.940 | 32.682 |
| SAMFL[41] | 4.477 | 1.763 | 26.643 | 4.765 | 2.282 | 25.008 |
| Ours | 2.763 | 1.062 | 16.656 | 4.202 | 2.056 | 21.696 |

Table 2  Comparison results of the motion-boundary and large-displacement metrics on the MPI-Sintel dataset (pixels)

| Method | Clean $d_{0\text{-}10}$ | Clean $d_{10\text{-}60}$ | Clean $d_{60\text{-}140}$ | Clean $s_{0\text{-}10}$ | Clean $s_{10\text{-}40}$ | Clean $s_{40+}$ | Final $d_{0\text{-}10}$ | Final $d_{10\text{-}60}$ | Final $d_{60\text{-}140}$ | Final $s_{0\text{-}10}$ | Final $s_{10\text{-}40}$ | Final $s_{40+}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IRR-PWC[14] | 3.509 | 1.296 | 0.721 | 0.535 | 1.724 | 25.430 | 4.165 | 1.843 | 1.292 | 0.709 | 2.423 | 28.998 |
| PPAC-HD3[36] | 2.788 | 1.340 | 1.068 | 0.355 | 1.289 | 33.624 | 3.521 | 1.702 | 1.637 | 0.617 | 2.083 | 30.457 |
| LiteFlowNet2[37] | 3.293 | 1.263 | 0.629 | 0.597 | 1.772 | 21.976 | 4.048 | 1.899 | 1.473 | 0.811 | 2.433 | 29.375 |
| IOFPL-ft[38] | 3.059 | 1.421 | 0.943 | 0.391 | 1.292 | 31.812 | 3.288 | 1.479 | 1.419 | 0.646 | 1.897 | 27.596 |
| PWC-Net[25] | 4.282 | 1.657 | 0.674 | 0.606 | 2.070 | 28.793 | 4.636 | 2.087 | 1.475 | 0.799 | 2.986 | 31.070 |
| HMFlow[39] | 2.786 | 0.957 | 0.584 | 0.467 | 1.693 | 20.470 | 4.582 | 2.213 | 1.465 | 0.926 | 3.170 | 29.974 |
| SegFlow153[40] | 3.072 | 1.143 | 0.656 | 0.486 | 2.000 | 27.563 | 4.969 | 2.492 | 2.119 | 1.201 | 3.865 | 36.570 |
| SAMFL[41] | 3.946 | 1.623 | 0.811 | 0.618 | 1.860 | 29.995 | 4.208 | 1.846 | 1.449 | 0.893 | 2.587 | 29.232 |
| Ours | 2.772 | 0.854 | 0.443 | 0.541 | 1.621 | 16.575 | 3.884 | 1.660 | 1.292 | 0.753 | 2.381 | 25.715 |

Table 3  Calculation results on the KITTI2015 dataset (%)

| Method | $Fl\text{-}bg$ | $Fl\text{-}fg$ | $Fl\text{-}all$ |
| --- | --- | --- | --- |
| IRR-PWC[14] | 7.68 | 7.52 | 7.65 |
| PPAC-HD3[36] | 5.78 | 7.48 | 6.06 |
| LiteFlowNet2[37] | 7.62 | 7.64 | 7.62 |
| IOFPL-ft[38] | – | – | 6.52 |
| PWC-Net[25] | 9.66 | 9.31 | 9.60 |
| SegFlow153[40] | 22.21 | 23.72 | 22.46 |
| SAMFL[41] | 7.72 | 7.43 | 7.68 |
| Ours | 7.43 | 6.65 | 7.30 |

Table 4  Comparison of ablation experiment results on the MPI-Sintel dataset (pixels)

| Ablation model | All | Matched | Unmatched | $s_{10\text{-}40}$ | $s_{40+}$ |
| --- | --- | --- | --- | --- | --- |
| Baseline | 3.844 | 1.472 | 23.220 | 1.724 | 25.430 |
| Baseline_CS | 2.892 | 1.070 | 17.765 | 1.662 | 17.460 |
| Baseline_deconv | 3.621 | 1.461 | 21.272 | 1.659 | 23.482 |
| Full model | 2.763 | 1.062 | 16.656 | 1.621 | 16.575 |

Table 5  Comparison of ablation experiment results on the KITTI2015 dataset

| Ablation model | $Fl\text{-}bg$ (%) | $Fl\text{-}fg$ (%) | $Fl\text{-}all$ (%) | Training time (min) |
| --- | --- | --- | --- | --- |
| Baseline | 7.68 | 7.52 | 7.65 | 621 |
| Baseline_CS | 7.74 | 7.58 | 7.71 | 690 |
| Baseline_deconv | 7.28 | 7.30 | 7.29 | 632 |
| Full model | 7.43 | 6.65 | 7.30 | 616 |
References

[1] Zhang Jiao-Yang, Cong Shuang, Kuang Sen. Real-time state estimation and feedback control for n-qubit stochastic quantum systems. Acta Automatica Sinica, 2024, 50(1): 42–53
[2] Zhang Wei, Huang Wei-Min. Multi-strategy adaptive multi-objective particle swarm optimization algorithm based on swarm partition. Acta Automatica Sinica, 2022, 48(10): 2585–2599 doi: 10.16383/j.aas.c200307
[3] Zhang Fang, Zhao Dong-Xu, Xiao Zhi-Tao, Geng Lei, Wu Jun, Liu Yan-Bei. Research progress of single image super-resolution reconstruction technology. Acta Automatica Sinica, 2022, 48(11): 2634–2654 doi: 10.16383/j.aas.c200777
[4] Yang Tian-Jin, Hou Zhen-Jie, Li Xing, Liang Jiu-Zhen, Huan Juan, Zheng Ji-Xiang. Recognizing action using multi-center subspace learning-based spatial-temporal information fusion. Acta Automatica Sinica, 2022, 48(11): 2823–2835 doi: 10.16383/j.aas.c190327
[5] Yan Meng-Kai, Qian Jian-Jun, Yang Jian. Weakly aligned cross-spectral face detection. Acta Automatica Sinica, 2023, 49(1): 135–147 doi: 10.16383/j.aas.c210058
[6] Guo Ying-Chun, Feng Fang, Yan Gang, Hao Xiao-Ke. Cross-domain person re-identification on adaptive fusion network. Acta Automatica Sinica, 2022, 48(11): 2744–2756 doi: 10.16383/j.aas.c220083
[7] Horn B K P, Schunck B G. Determining optical flow. Artificial Intelligence, 1981, 17(1–3): 185–203 doi: 10.1016/0004-3702(81)90024-2
[8] Sun D Q, Roth S, Black M J. Secrets of optical flow estimation and their principles. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). San Francisco, USA: IEEE, 2010. 2432–2439
[9] Menze M, Heipke C, Geiger A. Discrete optimization for optical flow. In: Proceedings of the 37th German Conference on Pattern Recognition (GCPR). Aachen, Germany: Springer, 2015. 16–28
[10] Chen Q F, Koltun V. Full flow: Optical flow estimation by global optimization over regular grids. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 4706–4714
[11] Dosovitskiy A, Fischer P, Ilg E, Häusser P, Hazirbas C, Golkov V. FlowNet: Learning optical flow with convolutional networks. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Santiago, Chile: IEEE, 2015. 2758–2766
[12] Ranjan A, Black M J. Optical flow estimation using a spatial pyramid network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 2720–2729
[13] Amiaz T, Lubetzky E, Kiryati N. Coarse to over-fine optical flow estimation. Pattern Recognition, 2007, 40(9): 2496–2503 doi: 10.1016/j.patcog.2006.09.011
[14] Hur J, Roth S. Iterative residual refinement for joint optical flow and occlusion estimation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 5754–5763
[15] Tu Z G, Xie W, Zhang D J, Poppe R, Veltkamp R C, Li B X, et al. A survey of variational and CNN-based optical flow techniques. Signal Processing: Image Communication, 2019, 72: 9–24 doi: 10.1016/j.image.2018.12.002
[16] Zhang C X, Ge L Y, Chen Z, Li M, Liu W, Chen H. Refined TV-L1 optical flow estimation using joint filtering. IEEE Transactions on Multimedia, 2020, 22(2): 349–364 doi: 10.1109/TMM.2019.2929934
[17] Dalca A V, Rakic M, Guttag J, Sabuncu M R. Learning conditional deformable templates with convolutional networks. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2019. Article No. 32
[18] Chen J, Lai J H, Cai Z M, Xie X H, Pan Z G. Optical flow estimation based on the frequency-domain regularization. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(1): 217–230 doi: 10.1109/TCSVT.2020.2974490
[19] Zhai M L, Xiang X Z, Lv N, Kong X D. Optical flow and scene flow estimation: A survey. Pattern Recognition, 2021, 114: Article No. 107861 doi: 10.1016/j.patcog.2021.107861
[20] Zach C, Pock T, Bischof H. A duality based approach for realtime TV-L1 optical flow. In: Proceedings of the 29th DAGM Symposium on Pattern Recognition. Heidelberg, Germany: Springer, 2007. 214–223
[21] Zhao S Y, Zhao L, Zhang Z X, Zhou E Y, Metaxas D. Global matching with overlapping attention for optical flow estimation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 17571–17580
[22] Li Z W, Liu F, Yang W J, Peng S H, Zhou J. A survey of convolutional neural networks: Analysis, applications, and prospects. IEEE Transactions on Neural Networks and Learning Systems, 2022, 33(12): 6999–7019 doi: 10.1109/TNNLS.2021.3084827
[23] Han J W, Yao X W, Cheng G, Feng X X, Xu D. P-CNN: Part-based convolutional neural networks for fine-grained visual categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(2): 579–590 doi: 10.1109/TPAMI.2019.2933510
[24] Ilg E, Mayer N, Saikia T, Keuper M, Dosovitskiy A, Brox T. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Honolulu, USA: IEEE, 2017. 1647–1655
[25] Sun D Q, Yang X D, Liu M Y, Kautz J. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City, USA: IEEE, 2018. 8934–8943
[26] Wang Z G, Chen Z, Zhang C X, Zhou Z K, Chen H. LCIF-Net: Local criss-cross attention based optical flow method using multi-scale image features and feature pyramid. Signal Processing: Image Communication, 2023, 112: Article No. 116921 doi: 10.1016/j.image.2023.116921
[27] Teed Z, Deng J. RAFT: Recurrent all-pairs field transforms for optical flow. In: Proceedings of the 16th European Conference on Computer Vision (ECCV). Glasgow, UK: Springer, 2020. 402–419
[28] Han K, Xiao A, Wu E H, Guo J Y, Xu C J, Wang Y H. Transformer in transformer. In: Proceedings of the 35th International Conference on Neural Information Processing Systems. Montreal, Canada: NIPS, 2021. 15908–15919
[29] Jiang S H, Campbell D, Lu Y, Li H D, Hartley R. Learning to estimate hidden motions with global motion aggregation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE, 2021. 9752–9761
[30] Xu H F, Zhang J, Cai J F, Rezatofighi H, Tao D C. GMFlow: Learning optical flow via global matching. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 8111–8120
[31] Cao J M, Li Y Y, Sun M C, Chen Y, Lischinski D, Cohen-Or D, et al. DO-Conv: Depthwise over-parameterized convolutional layer. IEEE Transactions on Image Processing, 2022, 31: 3726–3736 doi: 10.1109/TIP.2022.3175432
[32] Dong X Y, Bao J M, Chen D D, Zhang W M, Yu N H, Yuan L, et al. CSWin transformer: A general vision transformer backbone with cross-shaped windows. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). New Orleans, USA: IEEE, 2022. 12114–12124
[33] Huang Z L, Wang X G, Huang L C, Huang C, Wei Y C, Liu W Y. CCNet: Criss-cross attention for semantic segmentation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, Korea (South): IEEE, 2019. 603–612
[34] Butler D J, Wulff J, Stanley G B, Black M J. A naturalistic open source movie for optical flow evaluation. In: Proceedings of the 12th European Conference on Computer Vision (ECCV). Florence, Italy: Springer, 2012. 611–625
[35] Menze M, Geiger A. Object scene flow for autonomous vehicles. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE, 2015. 3061–3070
[36] Wannenwetsch A S, Roth S. Probabilistic pixel-adaptive refinement networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 11639–11648
[37] Hui T W, Tang X O, Loy C C. A lightweight optical flow CNN – Revisiting data fidelity and regularization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(8): 2555–2569 doi: 10.1109/TPAMI.2020.2976928
[38] Hofinger M, Bulò S R, Porzi L, Knapitsch A, Pock T, Kontschieder P. Improving optical flow on a pyramid level. In: Proceedings of the 16th European Conference on Computer Vision (ECCV). Glasgow, UK: Springer, 2020. 770–786
[39] Yu S H J, Zhang Y M, Wang C, Bai X, Zhang L, Hancock E R. HMFlow: Hybrid matching optical flow network for small and fast-moving objects. In: Proceedings of the 25th International Conference on Pattern Recognition (ICPR). Milan, Italy: IEEE, 2021. 1197–1204
[40] Chen J, Cai Z M, Lai J H, Xie X H. Efficient segmentation-based PatchMatch for large displacement optical flow estimation. IEEE Transactions on Circuits and Systems for Video Technology, 2019, 29(12): 3595–3607 doi: 10.1109/TCSVT.2018.2885246
[41] Zhang C X, Zhou Z K, Chen Z, Hu W M, Li M, Jiang S F. Self-attention-based multiscale feature learning optical flow with occlusion feature map prediction. IEEE Transactions on Multimedia, 2022, 24: 3340–3354 doi: 10.1109/TMM.2021.3096083
[42] Lu Z H, Xie H T, Liu C B, Zhang Y D. Bridging the gap between vision transformers and convolutional neural networks on small datasets. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. New Orleans, USA: 2022. 14663–14677
Figures (14) / Tables (5)

Metrics
• Article views: 478
• Full-text HTML views: 178
• PDF downloads: 74
• Cited by: 0

Publication history
• Received: 2023-02-10
• Accepted: 2023-08-29
• Published online: 2023-10-07
• Issue published: 2024-08-22