

Domain Adaptive Object Detection Based on Attention Mechanism and Cycle Domain Triplet Loss

Zhou Yang, Han Bing, Gao Xin-Bo, Yang Zheng, Chen Wei-Ming

Citation: Zhou Yang, Han Bing, Gao Xin-Bo, Yang Zheng, Chen Wei-Ming. Domain adaptive object detection based on attention mechanism and cycle domain triplet loss. Acta Automatica Sinica, xxxx, xx(x): x−xx. doi: 10.16383/j.aas.c220938


              Domain Adaptive Object Detection Based on Attention Mechanism and Cycle Domain Triplet Loss

doi: 10.16383/j.aas.c220938
Funds: Supported by National Natural Science Foundation of China (62076190, 41831072, 41874195, 62036007) and the Key Industry Innovation Chain of Shaanxi Province (2022ZDLGY01-11)
              More Information
                Author Bio:

ZHOU Yang Master student at the School of Electronic Engineering, Xidian University. He received his bachelor degree in electronic and information engineering from Southwest Petroleum University in 2020. His research interests cover computer vision and domain adaptive object detection. E-mail: yzhou_6@stu.xidian.edu.cn

HAN Bing Professor at the School of Electronic Engineering, Xidian University. Her research interests cover intelligent assisted driving, visual perception and cognition, and cross-disciplinary research between space physics and pattern recognition. Corresponding author of this paper. E-mail: bhan@xidian.edu.cn

GAO Xin-Bo Professor at Xidian University and president of Chongqing University of Posts and Telecommunications. His current research interests include machine learning, image processing, computer vision, pattern recognition, and multimedia content analysis. E-mail: xbgao@ieee.org

YANG Zheng Ph.D. candidate at the School of Electronic Engineering, Xidian University. He received his bachelor degree in intelligent science and technology from Xidian University in 2019. His research interests include deep learning, object tracking, and reinforcement learning. E-mail: zhengy@stu.xidian.edu.cn

CHEN Wei-Ming Master student at the School of Electronic Engineering, Xidian University. He received his bachelor degree in mechanical design, manufacture and automation from Xidian University in 2019. His research interests include computer vision, object detection, and remote sensing. E-mail: wmchen@stu.xidian.edu.cn

Abstract: Most current deep learning algorithms rely on large amounts of annotated data and lack sufficient generalization ability. Unsupervised domain adaptation algorithms can extract the implicit common features shared by annotated and unannotated data, and thereby improve performance on the unannotated data. Existing domain adaptive object detection algorithms are designed mainly for two-stage detectors. Because single-stage detectors cannot perform instance-level feature alignment directly, a certain amount of domain-invariant features is lost; to address this, an image-level domain classifier combined with a channel attention mechanism is proposed to strengthen the extraction of domain-invariant features. In addition, to address the accuracy drop caused by the incorrect alignment of class features in domain adaptive object detection, class centers are constructed through prototype learning and a prototype-based cycle domain triplet loss function is designed, achieving prototype-guided fine-grained class feature alignment. Using single-stage object detection algorithms as the detector, experiments are conducted on several public domain adaptive object detection datasets. The results show that the proposed method effectively improves the generalization ability of the original detector in the target domain, reaches higher detection accuracy, and has a certain degree of generality for single-stage object detection networks.
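The abstract describes two components: a channel attention-based image-level domain classifier (CADC) and a prototype-based cycle domain triplet loss (CDTL). Since the full text is not reproduced on this page, the following PyTorch-style sketch is only an illustration of how a prototype-based cross-domain triplet loss of this kind could be written; the function names, the margin value, and the hard-negative selection are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a prototype-based cross-domain triplet loss.
# Not the authors' code: names, margin, and the mining scheme are assumptions.
import torch
import torch.nn.functional as F

def class_prototypes(features, labels, num_classes):
    """Mean feature vector (prototype) per class; zero vector if a class is absent."""
    protos = features.new_zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return F.normalize(protos, dim=1)

def cycle_domain_triplet_loss(src_protos, tgt_protos, margin=0.3):
    """Pull same-class prototypes of the two domains together and push the nearest
    different-class prototype away, computed in both directions (source→target and
    target→source); whether this matches the paper's 'cycle' definition is an assumption."""
    loss = 0.0
    num_classes = src_protos.size(0)
    for a, b in ((src_protos, tgt_protos), (tgt_protos, src_protos)):
        d = torch.cdist(a, b)                      # pairwise prototype distances
        pos = d.diag()                             # same-class (positive) distance
        neg = (d + torch.eye(num_classes, device=d.device) * 1e6).min(dim=1).values
        loss = loss + F.relu(pos - neg + margin).mean()
    return loss / 2
```

The rows labelled CDTL and CADC+CDTL in Tables 5–11 refer to the paper's actual components, not to this sketch.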
Fig. 1 The pipeline of domain adaptive object detection based on attention mechanism and cycle domain triplet loss

Fig. 2 Schematic of the cycle domain triplet loss

Fig. 3 Subjective detection results of our method on CityScapes→Foggy CityScapes

Fig. 4 Subjective detection results of our method on SunnyDay→DuskRainy and SunnyDay→NightRainy

Fig. 5 Subjective results of the ablation experiment on KITTI→CityScapes and Sim10k→CityScapes

Fig. 6 Subjective detection results of our method on VOC→Clipart1k

Fig. 7 Results of different numbers of cycle training iterations on the YOLOv3 and YOLOv5 detectors

Table 1 The results of different methods on CityScapes→Foggy CityScapes ("−" indicates that the method did not conduct this experiment)

Method | Detector | person | rider | car | truck | bus | motor | bike | train | mAP | mGP
DAF [10] | Faster-RCNN | 25.0 | 31.0 | 40.5 | 22.1 | 35.3 | 20.0 | 27.1 | 20.2 | 27.7 | 38.8
SWDA [11] | Faster-RCNN | 29.9 | 42.3 | 43.5 | 24.5 | 36.2 | 30.0 | 35.3 | 32.6 | 34.3 | 70.0
C2F [14] | Faster-RCNN | 34.0 | 46.9 | 52.1 | 30.8 | 43.2 | 34.7 | 37.4 | 29.9 | 38.6 | 79.1
CAFA [16] | Faster-RCNN | 41.9 | 38.7 | 56.7 | 22.6 | 41.5 | 24.6 | 35.5 | 26.8 | 36.0 | 81.9
ICCR-VDD [21] | Faster-RCNN | 33.4 | 44.0 | 51.7 | 33.9 | 52.0 | 34.2 | 36.8 | 34.7 | 40.0 | −
MeGA [20] | Faster-RCNN | 37.7 | 49.0 | 52.4 | 25.4 | 49.2 | 34.5 | 39.0 | 46.9 | 41.8 | 91.1
DAYOLO [28] | YOLOv3 | 29.5 | 27.7 | 46.1 | 9.1 | 28.2 | 12.7 | 24.8 | 4.5 | 36.1 | 61.0
Ours (v3) | YOLOv3 | 34.0 | 37.2 | 55.8 | 31.4 | 44.4 | 22.3 | 30.8 | 50.7 | 38.3 | 83.9
MS-DAYOLO [31] | YOLOv4 | 39.6 | 46.5 | 56.5 | 28.9 | 51.0 | 27.5 | 36.0 | 45.9 | 41.5 | 68.6
A-DAYOLO [32] | YOLOv5 | 32.8 | 35.7 | 51.3 | 18.8 | 34.5 | 11.8 | 25.6 | 16.2 | 28.3 | −
S-DAYOLO [34] | YOLOv5 | 42.6 | 42.1 | 61.9 | 23.5 | 40.5 | 24.4 | 37.3 | 39.5 | 39.0 | 69.9
Ours (v5) | YOLOv5 | 30.9 | 37.4 | 53.3 | 23.8 | 39.5 | 24.2 | 29.9 | 35.0 | 34.3 | 83.8

Table 2 The results of different methods on SunnyDay→DuskRainy

Method | Detector | bus | bike | car | motor | person | rider | truck | mAP | ΔmAP
DAF [10] | Faster-RCNN | 43.6 | 27.5 | 52.3 | 16.1 | 28.5 | 21.7 | 44.8 | 33.5 | 5.2
SWDA [11] | Faster-RCNN | 40.0 | 22.8 | 51.4 | 15.4 | 26.3 | 20.3 | 44.2 | 31.5 | 3.2
ICCR-VDD [21] | Faster-RCNN | 47.9 | 33.2 | 55.1 | 26.1 | 30.5 | 23.8 | 48.1 | 37.8 | 9.5
Ours (v3) | YOLOv3 | 50.1 | 24.9 | 70.7 | 24.2 | 39.1 | 19.0 | 53.2 | 40.2 | 7.4
Ours (v5) | YOLOv5 | 46.2 | 22.1 | 68.2 | 16.5 | 34.8 | 17.5 | 50.5 | 36.5 | 9.4

Table 3 The results of different methods on SunnyDay→NightRainy

Method | Detector | bus | bike | car | motor | person | rider | truck | mAP | ΔmAP
DAF [10] | Faster-RCNN | 23.8 | 12.0 | 37.7 | 0.2 | 14.9 | 4.0 | 29.0 | 17.4 | 1.1
SWDA [11] | Faster-RCNN | 24.7 | 10.0 | 33.7 | 0.6 | 13.5 | 10.4 | 29.1 | 17.4 | 1.1
ICCR-VDD [21] | Faster-RCNN | 34.8 | 15.6 | 38.6 | 10.5 | 18.7 | 17.3 | 30.6 | 23.7 | 7.4
Ours (v3) | YOLOv3 | 45.0 | 8.2 | 51.1 | 4.0 | 20.9 | 9.6 | 37.9 | 25.3 | 5.1
Ours (v5) | YOLOv5 | 40.7 | 9.3 | 45.0 | 0.6 | 12.8 | 9.2 | 32.5 | 21.5 | 4.7

Table 4 The results of different methods on KITTI→CityScapes and Sim10k→CityScapes ("−" indicates that the method did not conduct this experiment)

Method | KITTI→CityScapes AP | KITTI→CityScapes GP | Sim10k→CityScapes AP | Sim10k→CityScapes GP
DAF [10] | 38.5 | 21.0 | 39.0 | 22.5
SWDA [11] | 37.9 | 19.5 | 42.3 | 30.8
C2F [14] | 43.8 | − | 35.3 | −
CAFA [16] | 43.2 | 32.9 | 49.0 | 47.7
MeGA [20] | 43.0 | 32.4 | 44.8 | 37.0
DAYOLO [28] | 54.0 | 82.2 | 50.9 | 39.5
Ours (v3) | 61.1 | 29.4 | 60.8 | 37.1
A-DAYOLO [32] | 37.7 | − | 44.9 | −
S-DAYOLO [34] | 49.3 | − | 52.9 | −
Ours (v5) | 60.0 | 50.4 | 60.3 | 56.3

Table 5 The results of the ablation experiment on CityScapes→Foggy CityScapes based on YOLOv3

Method | person | rider | car | truck | bus | motor | bike | train | mAP
SO | 29.8 | 35.0 | 44.7 | 20.4 | 32.4 | 14.8 | 28.3 | 21.6 | 28.4
CADC | 34.4 | 38.0 | 54.7 | 24.4 | 45.0 | 21.2 | 32.1 | 49.1 | 37.2
CDTL | 31.1 | 38.0 | 46.7 | 28.9 | 34.5 | 23.4 | 27.8 | 13.7 | 30.5
CADC+CDTL | 34.0 | 37.2 | 55.8 | 31.4 | 44.4 | 22.3 | 30.8 | 50.7 | 38.3
Oracle | 34.9 | 38.8 | 55.9 | 25.3 | 45.0 | 22.6 | 33.4 | 49.1 | 40.2

Table 6 The results of the ablation experiment on CityScapes→Foggy CityScapes based on YOLOv5

Method | person | rider | car | truck | bus | motor | bike | train | mAP
SO | 26.9 | 33.1 | 39.9 | 8.9 | 21.1 | 11.3 | 24.8 | 4.9 | 21.4
CADC | 32.6 | 37.1 | 52.7 | 26.8 | 38.1 | 23.0 | 38.1 | 32.6 | 34.1
CDTL | 29.7 | 36.7 | 43.2 | 13.1 | 25.5 | 17.1 | 28.7 | 13.1 | 26.2
CADC+CDTL | 30.9 | 37.4 | 53.3 | 23.8 | 39.5 | 24.2 | 29.9 | 35.0 | 34.3
Oracle | 34.8 | 37.9 | 57.5 | 24.4 | 42.7 | 23.1 | 33.2 | 40.8 | 36.8

Table 7 The results of the ablation experiment on SunnyDay→DuskRainy based on YOLOv3

Method | bus | bike | car | motor | person | rider | truck | mAP
Source Only | 43.7 | 14.3 | 68.4 | 12.0 | 31.5 | 10.9 | 48.7 | 32.8
CADC | 50.0 | 22.6 | 70.8 | 23.2 | 38.4 | 18.7 | 53.5 | 39.6
CDTL | 45.4 | 20.1 | 69.2 | 15.2 | 34.8 | 17.2 | 47.8 | 35.7
CADC+CDTL | 50.1 | 24.9 | 70.7 | 24.2 | 39.1 | 19.0 | 53.2 | 40.2

Table 8 The results of the ablation experiment on SunnyDay→DuskRainy based on YOLOv5

Method | bus | bike | car | motor | person | rider | truck | mAP
Source Only | 37.2 | 8.4 | 63.8 | 5.5 | 23.7 | 7.9 | 43.4 | 27.1
CADC | 45.6 | 22.1 | 68.2 | 16.6 | 34.5 | 15.4 | 50.1 | 35.9
CDTL | 41.6 | 13.1 | 65.5 | 7.6 | 29.7 | 10.2 | 44.9 | 30.4
CADC+CDTL | 46.2 | 22.1 | 68.2 | 16.5 | 34.8 | 17.5 | 50.5 | 36.5

Table 9 The results of the ablation experiment on SunnyDay→NightRainy based on YOLOv3

Method | bus | bike | car | motor | person | rider | truck | mAP
Source Only | 39.2 | 5.1 | 44.2 | 0.2 | 14.8 | 6.9 | 30.7 | 20.2
CADC | 44.4 | 8.1 | 50.9 | 0.6 | 20.2 | 11.3 | 38.3 | 24.8
CDTL | 40.4 | 8.2 | 45.8 | 0.6 | 16.2 | 7.2 | 33.4 | 21.7
CADC+CDTL | 45.0 | 8.2 | 51.1 | 4.0 | 20.9 | 9.6 | 37.9 | 25.3

Table 10 The results of the ablation experiment on SunnyDay→NightRainy based on YOLOv5

Method | bus | bike | car | motor | person | rider | truck | mAP
Source Only | 25.4 | 3.2 | 36.3 | 0.2 | 9.1 | 4.4 | 20.8 | 14.2
CADC | 38.7 | 8.3 | 42.7 | 0.3 | 12.3 | 6.4 | 32.0 | 20.1
CDTL | 34.3 | 6.2 | 44.2 | 0.5 | 11.2 | 8.7 | 30.3 | 19.3
CADC+CDTL | 40.7 | 9.3 | 45.0 | 0.6 | 12.8 | 9.2 | 32.5 | 21.5

Table 11 The results of different methods on KITTI→CityScapes and Sim10k→CityScapes

Detector | Method | KITTI | Sim10k
YOLOv3 | Source Only | 59.6 | 58.5
YOLOv3 | CADC | 60.5 | 59.6
YOLOv3 | CDTL | 60.5 | 60.8
YOLOv3 | CADC+CDTL | 61.1 | 59.8
YOLOv3 | Oracle | 64.7 | 64.7
YOLOv5 | Source Only | 54.0 | 53.1
YOLOv5 | CADC | 59.5 | 58.6
YOLOv5 | CDTL | 59.0 | 60.3
YOLOv5 | CADC+CDTL | 60.0 | 59.0
YOLOv5 | Oracle | 65.9 | 65.9

Table 12 The experiment on VOC→Clipart1k

Method | aero | bcycle | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | hrs | bike | prsn | plnt | sheep | sofa | train | tv | mAP
I3Net | 23.7 | 66.2 | 25.3 | 19.3 | 23.7 | 55.2 | 35.7 | 13.6 | 37.8 | 35.5 | 25.4 | 13.9 | 24.1 | 60.3 | 56.3 | 39.8 | 13.6 | 34.5 | 56.0 | 41.8 | 35.1
I3Net+CDTL | 23.3 | 61.6 | 27.8 | 17.1 | 24.7 | 54.3 | 39.8 | 12.3 | 41.4 | 34.1 | 32.2 | 15.5 | 27.6 | 77.9 | 57.0 | 37.4 | 5.50 | 31.3 | 51.8 | 47.8 | 36.0
I3Net+CDTL+CADC* | 31.2 | 60.4 | 31.8 | 19.4 | 27.0 | 63.3 | 40.7 | 13.7 | 41.1 | 38.4 | 27.2 | 18.0 | 25.5 | 67.8 | 54.9 | 37.2 | 15.5 | 36.4 | 54.8 | 47.8 | 37.6

Table 13 The experiment on VOC→Comic2k

Method | bike | bird | car | cat | dog | person | mAP
I3Net | 44.9 | 17.8 | 31.9 | 10.7 | 23.5 | 46.3 | 29.2
I3Net+CDTL | 43.7 | 15.1 | 31.5 | 11.7 | 18.6 | 46.9 | 27.9
I3Net+CDTL+CADC* | 47.8 | 16.0 | 33.8 | 15.1 | 24.4 | 43.5 | 30.1

Table 14 The experiment on VOC→Watercolor2k

Method | bike | bird | car | cat | dog | person | mAP
I3Net | 81.3 | 49.6 | 43.6 | 38.2 | 31.3 | 61.7 | 51.0
I3Net+CDTL | 79.5 | 47.2 | 41.7 | 33.5 | 35.4 | 60.3 | 49.6
I3Net+CDTL+CADC* | 84.1 | 45.3 | 46.6 | 32.9 | 31.4 | 61.4 | 50.3

Table 15 The impact of pixel-level alignment on the network

Method | Detector | C→F | K→C | S→C
CDTL+CADC | YOLOv3 | 35.9 | 59.8 | 58.4
CDTL+CADC+$D_{pixel}$ | YOLOv3 | 37.2 | 60.5 | 59.6
CDTL+CADC | YOLOv5 | 32.7 | 58.9 | 56.8
CDTL+CADC+$D_{pixel}$ | YOLOv5 | 34.1 | 59.5 | 58.6
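Table 15 shows that adding a pixel-level domain discriminator ($D_{pixel}$) on top of CDTL+CADC helps both detectors on all three transfers. The paper's definition of $D_{pixel}$ is not given on this page; purely as an assumption-laden sketch, a per-pixel (patch-level) domain discriminator applied to a low-level feature map and trained through a gradient reversal layer is one common way to implement such alignment:

```python
# Hypothetical sketch of a pixel-level domain discriminator with gradient reversal.
# Layer sizes and the GRL coefficient are assumptions, not the paper's settings.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None   # flip gradients flowing back to the backbone

class PixelDomainDiscriminator(nn.Module):
    """1x1 conv head that predicts a source/target logit at every spatial location."""
    def __init__(self, in_channels, lamb=0.1):
        super().__init__()
        self.lamb = lamb
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, kernel_size=1),   # per-pixel domain logit map
        )

    def forward(self, feat):
        feat = GradientReversal.apply(feat, self.lamb)
        return self.head(feat)
```

The per-pixel logits would then be trained against the image's domain label with cross-entropy or focal loss, analogous to the image-level classifier compared in Table 16.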

Table 16 The choice of loss function in the channel attention domain classifier (CADC)

Detector | $F_1$ | $F_2$ | $F_3$ | mAP
YOLOv3/v5 | CE | CE | CE | 35.8/32.7
YOLOv3/v5 | CE | CE | FL | 36.4/33.2
YOLOv3/v5 | CE | FL | FL | 37.2/34.1
YOLOv3/v5 | FL | FL | FL | 37.0/33.5
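Table 16 compares cross-entropy (CE) and focal loss (FL) [12] as the training loss of the channel attention domain classifier at the three feature levels $F_1$–$F_3$, with the CE/FL/FL combination performing best. Purely as a hedged illustration of that design choice (not the authors' code), a binary focal loss for a domain-classifier output could look like the sketch below; the α and γ values are the common defaults from [12], not settings taken from this paper.

```python
# Hypothetical sketch: binary focal loss for an image-level domain classifier.
# alpha/gamma are the usual defaults from Lin et al. [12], not this paper's settings.
import torch
import torch.nn.functional as F

def domain_focal_loss(logits, domain_label, alpha=0.25, gamma=2.0):
    """logits: raw domain-classifier outputs; domain_label: 0 = source, 1 = target.
    Down-weights easy (well-classified) samples so training focuses on hard ones."""
    targets = torch.full_like(logits, float(domain_label))
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                       # probability assigned to the true domain label
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```

Replacing this function with plain binary cross-entropy at a given feature level corresponds to the CE configurations compared in the table.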
[1] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS), Lake Tahoe, Nevada, USA, 2012. 1097−1105.
[2] Bottou L, Bousquet O. The tradeoffs of large scale learning. Advances in Neural Information Processing Systems, 2007, 20.
[3] Shen J, Qu Y, Zhang W, et al. Wasserstein distance guided representation learning for domain adaptation. In: Proceedings of the AAAI Conference on Artificial Intelligence, 2018, 32.
[4] Gao Jun, Huang Li-Li, Sun Chang-Yin. A local weighted mean based domain adaptation learning framework. Acta Automatica Sinica, 2013, 39(7): 1037−1052. doi: 10.3724/SP.J.1004.2013.01037 (in Chinese)
[5] Ganin Y, Ustinova E, Ajakan H, et al. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 2016, 17(1): 2096−2030.
[6] Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial networks. Communications of the ACM, 2020, 63(11): 139−144. doi: 10.1145/3422622
[7] Guo Ying-Chun, Feng Fang, Yan Gang, Hao Xiao-Ke. Cross-domain person re-identification on adaptive fusion network. Acta Automatica Sinica, 2022, 48(11): 2744−2756. doi: 10.16383/j.aas.c220083 (in Chinese)
[8] Liang Wen-Qi, Wang Guang-Cong, Lai Jian-Huang. Asymmetric cross-domain transfer learning of person re-identification based on the many-to-many generative adversarial network. Acta Automatica Sinica, 2022, 48(1): 103−120. doi: 10.16383/j.aas.c190303 (in Chinese)
[9] Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 2015, 28.
[10] Chen Y, Li W, Sakaridis C, et al. Domain adaptive Faster R-CNN for object detection in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 3339−3348.
[11] Saito K, Ushiku Y, Harada T, et al. Strong-weak distribution alignment for adaptive object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019. 6956−6965.
[12] Lin T Y, Goyal P, Girshick R, et al. Focal loss for dense object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, PP(99): 2999−3007.
[13] Shen Z, Maheshwari H, Yao W, et al. SCL: Towards accurate domain adaptive object detection via gradient detach based stacked complementary losses. arXiv preprint arXiv: 1911.02559, 2019.
[14] Zheng Y, Huang D, Liu S, et al. Cross-domain object detection through coarse-to-fine feature adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. doi: 10.1109/CVPR42600.2020.01378
[15] Xu C D, Zhao X R, Jin X, et al. Exploring categorical regularization for domain adaptive object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 11724−11733.
[16] Hsu C C, Tsai Y H, Lin Y Y, et al. Every pixel matters: Center-aware feature alignment for domain adaptive object detector. In: Proceedings of the European Conference on Computer Vision (ECCV), Cham: Springer, 2020. 733−748.
[17] Chen C, Zheng Z, Ding X, et al. Harmonizing transferability and discriminability for adapting object detectors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 8869−8878.
[18] Zhu J Y, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017. 2223−2232.
[19] Deng J, Li W, Chen Y, et al. Unbiased mean teacher for cross-domain object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 4091−4101.
[20] Xu M, Wang H, Ni B, et al. Cross-domain detection via graph-induced prototype alignment. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 12355−12364.
[21] Wu A, Liu R, Han Y, et al. Vector-decomposed disentanglement for domain-invariant object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021. 9342−9351.
[22] Chen C, Zheng Z, Huang Y, et al. I3Net: Implicit instance-invariant network for adapting one-stage object detectors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 12576−12585.
[23] Li Wei, Wang Meng. Unsupervised cross-domain object detection based on progressive multi-source transfer. Acta Automatica Sinica, 2022, 48(9): 2337−2351. doi: 10.16383/j.aas.c190532 (in Chinese)
[24] Rodriguez A L, Mikolajczyk K. Domain adaptation for object detection via style consistency. In: Proceedings of the British Machine Vision Conference (BMVC), 2019.
[25] Liu W, Anguelov D, Erhan D, et al. SSD: Single shot multibox detector. In: Proceedings of the European Conference on Computer Vision (ECCV), Cham: Springer, 2016. 21−37.
[26] Redmon J, Divvala S, Girshick R, et al. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 779−788.
[27] YOLOv8 [Online], available: https://github.com/ultralytics/yolov8, Feb 15, 2023.
[28] Zhang S, Tuo H, Hu J, et al. Domain adaptive YOLO for one-stage cross-domain detection. In: Proceedings of the Asian Conference on Machine Learning (ACML), PMLR, 2021. 785−797.
[29] Redmon J, Farhadi A. YOLOv3: An incremental improvement. arXiv preprint arXiv: 1804.02767, 2018.
[30] Hnewa M, Radha H. Integrated multiscale domain adaptive YOLO. arXiv preprint arXiv: 2202.03527, 2022.
[31] Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv: 2004.10934, 2020.
[32] Vidit V, Salzmann M. Attention-based domain adaptation for single-stage detectors. Machine Vision and Applications, 2022, 33(5): 65. doi: 10.1007/s00138-022-01320-y
[33] YOLOv5 [Online], available: https://github.com/ultralytics/yolov5, Nov 28, 2022.
[34] Li G, Ji Z, Qu X, et al. Cross-domain object detection for autonomous driving: A stepwise domain adaptative YOLO approach. IEEE Transactions on Intelligent Vehicles, 2022.
[35] Hu J, Shen L, Sun G. Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 7132−7141.
[36] Wang Q, Wu B, Zhu P, et al. ECA-Net: Efficient channel attention for deep convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 11534−11542.
[37] Lee H J, Kim H E, Nam H. SRM: A style-based recalibration module for convolutional neural networks. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019. 1854−1862.
[38] Wang M, Wang W, Li B, et al. InterBN: Channel fusion for adversarial unsupervised domain adaptation. In: Proceedings of the 29th ACM International Conference on Multimedia, 2021. 3691−3700.
[39] Ding S, Lin L, Wang G, et al. Deep feature learning with relative distance comparison for person re-identification. Pattern Recognition, 2015, 48(10): 2993−3003. doi: 10.1016/j.patcog.2015.04.005
[40] Snell J, Swersky K, Zemel R. Prototypical networks for few-shot learning. Advances in Neural Information Processing Systems, 2017, 30.
[41] He K, Fan H, Wu Y, et al. Momentum contrast for unsupervised visual representation learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 9729−9738.
[42] Cordts M, Omran M, Ramos S, et al. The Cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. 3213−3223.
[43] Sakaridis C, Dai D, Van Gool L. Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 2018, 126(9): 973−992. doi: 10.1007/s11263-018-1072-8
[44] Yu F, Chen H, Wang X, et al. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2636−2645.
[45] Geiger A, Lenz P, Stiller C, Urtasun R. Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 2013, 32(11): 1231−1237. doi: 10.1177/0278364913491297
[46] Johnson-Roberson M, Barto C, Mehta R, et al. Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks? arXiv preprint arXiv: 1610.01983, 2016.
[47] Everingham M, Van Gool L, Williams C K I, et al. The PASCAL visual object classes (VOC) challenge. International Journal of Computer Vision, 2010, 88(2): 303−338.
[48] Inoue N, Furuta R, Yamasaki T, et al. Cross-domain weakly-supervised object detection through progressive domain adaptation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. 5001−5009.
Publication history
• Received: 2022-12-05
• Accepted: 2023-05-18
• Published online: 2023-08-18
