2023 Impact Factor (CJCR): 2.845

Indexed in:
• Chinese Core Journals
• EI
• China Science and Technology Core Journals
• Scopus
• CSCD
• Science Abstracts (UK)


Domain Adaptive Object Detection Based on Attention Mechanism and Cycle Domain Triplet Loss

Zhou Yang, Han Bing, Gao Xin-Bo, Yang Zheng, Chen Wei-Ming

Citation: Zhou Yang, Han Bing, Gao Xin-Bo, Yang Zheng, Chen Wei-Ming. Domain adaptive object detection based on attention mechanism and cycle domain triplet loss. Acta Automatica Sinica, 2024, 50(11): 2188–2203 doi: 10.16383/j.aas.c220938


              doi: 10.16383/j.aas.c220938 cstr: 32138.14.j.aas.c220938
              Domain Adaptive Object Detection Based on Attention Mechanism and Cycle Domain Triplet Loss

              Funds: Supported by National Natural Science Foundation of China (62076190, 41831072, 62036007), Key Industry Innovation Chain of Shaanxi Province (2022ZDLGY01-11), Key Industry Chain Technology Research Project of Xi'an (23ZDCYJSGG0022-2023), and Youth Open Project of National Space Science Data Center (NSSDC2302005)
              More Information
                Author Bio:

ZHOU Yang Master student at the School of Electronic Engineering, Xidian University. He received his bachelor degree in electronic and information engineering from Southwest Petroleum University in 2020. His research interest covers computer vision and domain adaptive detection. E-mail: yzhou_6@stu.xidian.edu.cn

HAN Bing Professor at the School of Electronic Engineering, Xidian University. Her research interest covers intelligent auxiliary drive system, visual perception and cognition, and cross-disciplinary research between space physics and artificial intelligence. Corresponding author of this paper. E-mail: bhan@xidian.edu.cn

GAO Xin-Bo Professor at Xidian University. His research interest covers machine learning, image processing, computer vision, pattern recognition, and multimedia content analysis. E-mail: xbgao@ieee.org

YANG Zheng Ph.D. candidate at the School of Electronic Engineering, Xidian University. He received his bachelor degree in intelligent science and technology from Xidian University in 2017. His research interest covers deep learning, object tracking, and reinforcement learning. E-mail: zhengy@stu.xidian.edu.cn

CHEN Wei-Ming Master student at the School of Electronic Engineering, Xidian University. He received his bachelor degree in mechanical design manufacture and automation from Xidian University in 2019. His research interest covers computer vision, object detection, and remote sensing. E-mail: wmchen@stu.xidian.edu.cn

• Abstract: Most current deep learning algorithms rely on large amounts of annotated data and lack generalization ability. Unsupervised domain adaptation can extract the implicit features shared between annotated and unannotated data, thereby improving an algorithm's generalization on the unannotated data. Existing domain adaptive object detection algorithms are mainly designed for two-stage detectors. Because one-stage detectors cannot perform instance-level feature alignment directly, a certain amount of domain-invariant features is lost; to address this, an image-level domain classifier combined with a channel attention mechanism is proposed to strengthen the extraction of domain-invariant features. In addition, to tackle the accuracy drop caused by the misalignment of class features in domain adaptive object detection, class centers are constructed by prototype learning and a prototype-based cycle domain triplet loss (CDTL) is designed, achieving prototype-guided fine-grained alignment of class features. One-stage object detectors are used as the base detector, and experiments are conducted on several public domain adaptive object detection datasets. The results show that the method effectively improves the original detector's generalization ability in the target domain, achieves higher detection accuracy than other methods, and is broadly applicable to one-stage object detection networks.
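The channel attention used to strengthen the image-level domain classifier can be illustrated with a minimal numpy sketch of a squeeze-and-excitation-style gate [35]: spatial dimensions are pooled to per-channel statistics, passed through a small bottleneck, and turned into sigmoid gates that rescale the feature map. This is not the paper's implementation; the function name is illustrative and the randomly initialized weights stand in for learned ones.

```python
import numpy as np

def channel_attention(feat, reduction=4, rng=None):
    """SE-style channel attention sketch: squeeze spatial dims to per-channel
    statistics, excite through a small bottleneck, rescale channel-wise.

    feat: (C, H, W) feature map. Weights are random here purely for
    illustration; in a trained network they would be learned.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c, _, _ = feat.shape
    squeezed = feat.mean(axis=(1, 2))             # (C,) global average pool
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0.0)       # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gates in (0, 1)
    return feat * gates[:, None, None]            # channel-wise rescale

feat = np.random.default_rng(1).standard_normal((16, 8, 8))
out = channel_attention(feat)
```

Because the gates lie in (0, 1), the module only attenuates channels; in the paper's setting such a gate would weight the channels fed to the domain classifier so that domain-relevant channels are emphasized.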

                Fig.  1  The pipeline of domain adaptive object detection based on attention mechanism and cycle domain triplet loss

Fig.  2  Principle of the cycle domain adaptive triplet loss
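The core idea behind the cycle domain triplet loss can be sketched in a few lines: class centers are built by prototype learning (mean feature per class, as in [40]), and a hinge triplet term pulls a target-domain feature toward its same-class source prototype while pushing it away from the nearest other-class prototype. This is a rough numpy illustration under assumed names and an assumed margin, not the paper's exact formulation.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Class centers as per-class feature means (prototype learning)."""
    return np.stack([features[labels == k].mean(axis=0)
                     for k in range(num_classes)])

def domain_triplet_loss(anchor, label, protos, margin=1.0):
    """Hinge triplet term max(0, d(a, p) - d(a, n) + margin), with the
    same-class prototype as positive (p) and the nearest other-class
    prototype as negative (n)."""
    dists = np.linalg.norm(protos - anchor, axis=1)
    pos = dists[label]                       # distance to own class center
    neg = np.min(np.delete(dists, label))    # nearest wrong class center
    return max(0.0, pos - neg + margin)
```

A target feature already close to its source class center incurs zero loss; one equidistant between two centers is penalized by the margin, which is what drives the fine-grained class alignment described in the abstract.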


                Fig.  3  The subjective results of our method on CityScapes→FoggyCityScapes


                Fig.  4  The subjective results of our method on SunnyDay→DuskRainy and SunnyDay→NightRainy


                Fig.  5  The ablation experimental results of our method on KITTI→CityScapes and Sim10k→CityScapes


                Fig.  6  The subjective results of our method on VOC→Clipart1k


                Fig.  7  The result of different cycle iterations on YOLOv3 and YOLOv5s

Table  1  The results of different methods on the CityScapes→FoggyCityScapes dataset (%)

| Method | Detector | person | rider | car | truck | bus | motor | bike | train | mAP | mGP |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DAF[10] | Faster R-CNN | 25.0 | 31.0 | 40.5 | 22.1 | 35.3 | 20.0 | 27.1 | 20.2 | 27.7 | 38.8 |
| SWDA[11] | Faster R-CNN | 29.9 | 42.3 | 43.5 | 24.5 | 36.2 | 30.0 | 35.3 | 32.6 | 34.3 | 70.0 |
| C2F[14] | Faster R-CNN | 34.0 | 46.9 | 52.1 | 30.8 | 43.2 | 34.7 | 37.4 | 29.9 | 38.6 | 79.1 |
| CAFA[16] | Faster R-CNN | 41.9 | 38.7 | 56.7 | 22.6 | 41.5 | 24.6 | 35.5 | 26.8 | 36.0 | 81.9 |
| ICCR-VDD[21] | Faster R-CNN | 33.4 | 44.0 | 51.7 | 33.9 | 52.0 | 34.2 | 36.8 | 34.7 | 40.0 | — |
| MeGA[20] | Faster R-CNN | 37.7 | 49.0 | 52.4 | 25.4 | 49.2 | 34.5 | 39.0 | 46.9 | 41.8 | 91.1 |
| DAYOLO[28] | YOLOv3 | 29.5 | 27.7 | 46.1 | 9.1 | 28.2 | 12.7 | 24.8 | 4.5 | 36.1 | 61.0 |
| Ours (v3) | YOLOv3 | 34.0 | 37.2 | 55.8 | 31.4 | 44.4 | 22.3 | 30.8 | 50.7 | 38.3 | 83.9 |
| MS-DAYOLO[31] | YOLOv4 | 39.6 | 46.5 | 56.5 | 28.9 | 51.0 | 27.5 | 36.0 | 45.9 | 41.5 | 68.6 |
| A-DAYOLO[32] | YOLOv5 | 32.8 | 35.7 | 51.3 | 18.8 | 34.5 | 11.8 | 25.6 | 16.2 | 28.3 | — |
| S-DAYOLO[34] | YOLOv5 | 42.6 | 42.1 | 61.9 | 23.5 | 40.5 | 24.4 | 37.3 | 39.5 | 39.0 | 69.9 |
| Ours (v5) | YOLOv5s | 30.9 | 37.4 | 53.3 | 23.8 | 39.5 | 24.2 | 29.9 | 35.0 | 34.3 | 83.8 |

Note: "—" indicates the method did not run this experiment; (v3) means the detector is YOLOv3; (v5) means the detector is YOLOv5s; bold values indicate the best results in the comparison.

Table  2  The results of different methods on the SunnyDay→DuskRainy dataset (%)

| Method | Detector | bus | bike | car | motor | person | rider | truck | mAP | $\Delta{\rm{mAP}}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| DAF[10] | Faster R-CNN | 43.6 | 27.5 | 52.3 | 16.1 | 28.5 | 21.7 | 44.8 | 33.5 | 5.2 |
| SWDA[11] | Faster R-CNN | 40.0 | 22.8 | 51.4 | 15.4 | 26.3 | 20.3 | 44.2 | 31.5 | 3.2 |
| ICCR-VDD[21] | Faster R-CNN | 47.9 | 33.2 | 55.1 | 26.1 | 30.5 | 23.8 | 48.1 | 37.8 | 9.5 |
| Ours (v3) | YOLOv3 | 50.1 | 24.9 | 70.7 | 24.2 | 39.1 | 19.0 | 53.2 | 40.2 | 7.4 |
| Ours (v5) | YOLOv5s | 46.2 | 22.1 | 68.2 | 16.5 | 34.8 | 17.5 | 50.5 | 36.5 | 9.4 |

Note: $\Delta{\rm{mAP}}$ denotes the improvement in mAP.

Table  3  The results of different methods on the SunnyDay→NightRainy dataset (%)

| Method | Detector | bus | bike | car | motor | person | rider | truck | mAP | $\Delta{\rm{mAP}}$ |
|---|---|---|---|---|---|---|---|---|---|---|
| DAF[10] | Faster R-CNN | 23.8 | 12.0 | 37.7 | 0.2 | 14.9 | 4.0 | 29.0 | 17.4 | 1.1 |
| SWDA[11] | Faster R-CNN | 24.7 | 10.0 | 33.7 | 0.6 | 13.5 | 10.4 | 29.1 | 17.4 | 1.1 |
| ICCR-VDD[21] | Faster R-CNN | 34.8 | 15.6 | 38.6 | 10.5 | 18.7 | 17.3 | 30.6 | 23.7 | 7.4 |
| Ours (v3) | YOLOv3 | 45.0 | 8.2 | 51.1 | 4.0 | 20.9 | 9.6 | 37.9 | 25.3 | 5.1 |
| Ours (v5) | YOLOv5s | 40.7 | 9.3 | 45.0 | 0.6 | 12.8 | 9.2 | 32.5 | 21.5 | 4.7 |

Table  4  The results of different methods on KITTI→CityScapes and Sim10k→CityScapes datasets (%)

| Method | AP (KITTI→CityScapes) | GP (KITTI→CityScapes) | AP (Sim10k→CityScapes) | GP (Sim10k→CityScapes) |
|---|---|---|---|---|
| DAF[10] | 38.5 | 21.0 | 39.0 | 22.5 |
| SWDA[11] | 37.9 | 19.5 | 42.3 | 30.8 |
| C2F[14] | 43.8 | — | 35.3 | — |
| CAFA[16] | 43.2 | 32.9 | 49.0 | 47.7 |
| MeGA[20] | 43.0 | 32.4 | 44.8 | 37.0 |
| DAYOLO[28] | 54.0 | 82.2 | 50.9 | 39.5 |
| Ours (v3) | 61.1 | 29.4 | 60.8 | 37.1 |
| A-DAYOLO[32] | 37.7 | — | 44.9 | — |
| S-DAYOLO[34] | 49.3 | — | 52.9 | — |
| Ours (v5) | 60.0 | 50.4 | 60.3 | 56.3 |

Table  5  The results of ablation experiment on CityScapes→FoggyCityScapes dataset based on YOLOv3 (%)

| Method | person | rider | car | truck | bus | motor | bike | train | mAP |
|---|---|---|---|---|---|---|---|---|---|
| SO | 29.8 | 35.0 | 44.7 | 20.4 | 32.4 | 14.8 | 28.3 | 21.6 | 28.4 |
| CADC | 34.4 | 38.0 | 54.7 | 24.4 | 45.0 | 21.2 | 32.1 | 49.1 | 37.2 |
| CDTL | 31.1 | 38.0 | 46.7 | 28.9 | 34.5 | 23.4 | 27.8 | 13.7 | 30.5 |
| CADC + CDTL | 34.0 | 37.2 | 55.8 | 31.4 | 44.4 | 22.3 | 30.8 | 50.7 | 38.3 |
| Oracle | 34.9 | 38.8 | 55.9 | 25.3 | 45.0 | 22.6 | 33.4 | 49.1 | 40.2 |

Table  6  The results of ablation experiment on CityScapes→FoggyCityScapes dataset based on YOLOv5s (%)

| Method | person | rider | car | truck | bus | motor | bike | train | mAP |
|---|---|---|---|---|---|---|---|---|---|
| SO | 26.9 | 33.1 | 39.9 | 8.9 | 21.1 | 11.3 | 24.8 | 4.9 | 21.4 |
| CADC | 32.6 | 37.1 | 52.7 | 26.8 | 38.1 | 23.0 | 38.1 | 32.6 | 34.1 |
| CDTL | 29.7 | 36.7 | 43.2 | 13.1 | 25.5 | 17.1 | 28.7 | 13.1 | 26.2 |
| CADC + CDTL | 30.9 | 37.4 | 53.3 | 23.8 | 39.5 | 24.2 | 29.9 | 35.0 | 34.3 |
| Oracle | 34.8 | 37.9 | 57.5 | 24.4 | 42.7 | 23.1 | 33.2 | 40.8 | 36.8 |

Table  7  The results of ablation experiment on SunnyDay→DuskRainy dataset based on YOLOv3 (%)

| Method | bus | bike | car | motor | person | rider | truck | mAP |
|---|---|---|---|---|---|---|---|---|
| SO | 43.7 | 14.3 | 68.4 | 12.0 | 31.5 | 10.9 | 48.7 | 32.8 |
| CADC | 50.0 | 22.6 | 70.8 | 23.2 | 38.4 | 18.7 | 53.5 | 39.6 |
| CDTL | 45.4 | 20.1 | 69.2 | 15.2 | 34.8 | 17.2 | 47.8 | 35.7 |
| CADC + CDTL | 50.1 | 24.9 | 70.7 | 24.2 | 39.1 | 19.0 | 53.2 | 40.2 |

Table  8  The results of ablation experiment on SunnyDay→DuskRainy dataset based on YOLOv5s (%)

| Method | bus | bike | car | motor | person | rider | truck | mAP |
|---|---|---|---|---|---|---|---|---|
| SO | 37.2 | 8.4 | 63.8 | 5.5 | 23.7 | 7.9 | 43.4 | 27.1 |
| CADC | 45.6 | 22.1 | 68.2 | 16.6 | 34.5 | 15.4 | 50.1 | 35.9 |
| CDTL | 41.6 | 13.1 | 65.5 | 7.6 | 29.7 | 10.2 | 44.9 | 30.4 |
| CADC + CDTL | 46.2 | 22.1 | 68.2 | 16.5 | 34.8 | 17.5 | 50.5 | 36.5 |

Table  9  The results of ablation experiment on SunnyDay→NightRainy dataset based on YOLOv3 (%)

| Method | bus | bike | car | motor | person | rider | truck | mAP |
|---|---|---|---|---|---|---|---|---|
| SO | 39.2 | 5.1 | 44.2 | 0.2 | 14.8 | 6.9 | 30.7 | 20.2 |
| CADC | 44.4 | 8.1 | 50.9 | 0.6 | 20.2 | 11.3 | 38.3 | 24.8 |
| CDTL | 40.4 | 8.2 | 45.8 | 0.6 | 16.2 | 7.2 | 33.4 | 21.7 |
| CADC + CDTL | 45.0 | 8.2 | 51.1 | 4.0 | 20.9 | 9.6 | 37.9 | 25.3 |

Table  10  The results of ablation experiment on SunnyDay→NightRainy dataset based on YOLOv5s (%)

| Method | bus | bike | car | motor | person | rider | truck | mAP |
|---|---|---|---|---|---|---|---|---|
| SO | 25.4 | 3.2 | 36.3 | 0.2 | 9.1 | 4.4 | 20.8 | 14.2 |
| CADC | 38.7 | 8.3 | 42.7 | 0.3 | 12.3 | 6.4 | 32.0 | 20.1 |
| CDTL | 34.3 | 6.2 | 44.2 | 0.5 | 11.2 | 8.7 | 30.3 | 19.3 |
| CADC + CDTL | 40.7 | 9.3 | 45.0 | 0.6 | 12.8 | 9.2 | 32.5 | 21.5 |

Table  11  The results of different methods on KITTI→CityScapes and Sim10k→CityScapes datasets (%)

| Detector | Method | KITTI | Sim10k |
|---|---|---|---|
| YOLOv3 | SO | 59.6 | 58.5 |
| YOLOv3 | CADC | 60.5 | 59.6 |
| YOLOv3 | CDTL | 60.5 | 60.8 |
| YOLOv3 | CADC + CDTL | 61.1 | 59.8 |
| YOLOv3 | Oracle | 64.7 | 64.7 |
| YOLOv5s | SO | 54.0 | 53.1 |
| YOLOv5s | CADC | 59.5 | 58.6 |
| YOLOv5s | CDTL | 59.0 | 60.3 |
| YOLOv5s | CADC + CDTL | 60.0 | 59.0 |
| YOLOv5s | Oracle | 65.9 | 65.9 |

Table  12  The experiment of our method on VOC→Clipart1k (%)

| Method | aero | bcycle | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | hrs | bike | prsn | plnt | sheep | sofa | train | tv | mAP |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| I3Net | 23.7 | 66.2 | 25.3 | 19.3 | 23.7 | 55.2 | 35.7 | 13.6 | 37.8 | 35.5 | 25.4 | 13.9 | 24.1 | 60.3 | 56.3 | 39.8 | 13.6 | 34.5 | 56.0 | 41.8 | 35.1 |
| I3Net + CDTL | 23.3 | 61.6 | 27.8 | 17.1 | 24.7 | 54.3 | 39.8 | 12.3 | 41.4 | 34.1 | 32.2 | 15.5 | 27.6 | 77.9 | 57.0 | 37.4 | 5.50 | 31.3 | 51.8 | 47.8 | 36.0 |
| I3Net + CDTL + ${\rm{CADC}}^*$ | 31.2 | 60.4 | 31.8 | 19.4 | 27.0 | 63.3 | 40.7 | 13.7 | 41.1 | 38.4 | 27.2 | 18.0 | 25.5 | 67.8 | 54.9 | 37.2 | 15.5 | 36.4 | 54.8 | 47.8 | 37.6 |

Table  13  The experiment of our method on VOC→Comic2k (%)

| Method | bike | bird | car | cat | dog | person | mAP |
|---|---|---|---|---|---|---|---|
| I3Net | 44.9 | 17.8 | 31.9 | 10.7 | 23.5 | 46.3 | 29.2 |
| I3Net + CDTL | 43.7 | 15.1 | 31.5 | 11.7 | 18.6 | 46.9 | 27.9 |
| I3Net + CDTL + CADC* | 47.8 | 16.0 | 33.8 | 15.1 | 24.4 | 43.5 | 30.1 |

Table  14  The experiment of our method on VOC→Watercolor2k (%)

| Method | bike | bird | car | cat | dog | person | mAP |
|---|---|---|---|---|---|---|---|
| I3Net | 81.3 | 49.6 | 43.6 | 38.2 | 31.3 | 61.7 | 51.0 |
| I3Net + CDTL | 79.5 | 47.2 | 41.7 | 33.5 | 35.4 | 60.3 | 49.6 |
| I3Net + CDTL + CADC* | 84.1 | 45.3 | 46.6 | 32.9 | 31.4 | 61.4 | 50.3 |

Table  15  The impact of pixel alignment to network (%)

| Method | Detector | C→F | K→C | S→C |
|---|---|---|---|---|
| CDTL + CADC | YOLOv3 | 35.9 | 59.8 | 58.4 |
| CDTL + CADC + $D_{{\rm{pixel}}}$ | YOLOv3 | 37.2 | 60.5 | 59.6 |
| CDTL + CADC | YOLOv5s | 32.7 | 58.9 | 56.8 |
| CDTL + CADC + $D_{{\rm{pixel}}}$ | YOLOv5s | 34.1 | 59.5 | 58.6 |

Table  16  The choice of loss function in channel attention domain classifier

| Detector | $F_1$ | $F_2$ | $F_3$ | mAP (%) |
|---|---|---|---|---|
| YOLOv3/v5s | CE | CE | CE | 35.8/32.7 |
| YOLOv3/v5s | CE | CE | FL | 36.4/33.2 |
| YOLOv3/v5s | CE | FL | FL | 37.2/34.1 |
| YOLOv3/v5s | FL | FL | FL | 37.0/33.5 |
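In Table 16, CE and FL denote cross entropy and focal loss [12] applied at the three feature levels $F_1$–$F_3$ of the channel attention domain classifier. As a minimal sketch of the binary focal loss form assumed here (following [12]; the function name is illustrative):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss FL(p_t) = -(1 - p_t)^gamma * log(p_t), where
    p_t = p when y = 1 and 1 - p when y = 0. gamma = 0 recovers standard
    cross entropy; larger gamma down-weights easy, well-classified examples.
    """
    p_t = np.where(y == 1, p, 1.0 - p)
    return float(-((1.0 - p_t) ** gamma) * np.log(p_t))
```

For a confidently correct prediction (p = 0.9, y = 1), the focusing term (1 − p_t)^γ shrinks the loss sharply relative to cross entropy, which is why replacing CE by FL at the harder-to-align levels improves mAP in Table 16.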
[1] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, USA: NIPS, 2012. 1106–1114
[2] Bottou L, Bousquet O. The tradeoffs of large scale learning. In: Proceedings of the 20th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2007. 161–168
[3] Shen J, Qu Y R, Zhang W N, Yu Y. Wasserstein distance guided representation learning for domain adaptation. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence. New Orleans, USA: AAAI, 2018. 4058–4065
[4] Gao Jun, Huang Li-Li, Sun Chang-Yin. A local weighted mean based domain adaptation learning framework. Acta Automatica Sinica, 2013, 39(7): 1037–1052 (in Chinese)
[5] Ganin Y, Ustinova E, Ajakan H, Germain P, Larochelle H, Laviolette F, et al. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 2016, 17(1): 2096–2030
[6] Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative adversarial networks. Communications of the ACM, 2020, 63(11): 139–144 doi: 10.1145/3422622
[7] Guo Ying-Chun, Feng Fang, Yan Gang, Hao Xiao-Ke. Cross-domain person re-identification on adaptive fusion network. Acta Automatica Sinica, 2022, 48(11): 2744–2756 (in Chinese)
[8] Liang Wen-Qi, Wang Guang-Cong, Lai Jian-Huang. Asymmetric cross-domain transfer learning of person re-identification based on the many-to-many generative adversarial network. Acta Automatica Sinica, 2022, 48(1): 103–120 (in Chinese)
[9] Ren S Q, He K M, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. In: Proceedings of the 28th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press, 2015. 91–99
[10] Chen Y H, Li W, Sakaridis C, Dai D X, Van Gool L. Domain adaptive faster R-CNN for object detection in the wild. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 3339–3348
[11] Saito K, Ushiku Y, Harada T, Saenko K. Strong-weak distribution alignment for adaptive object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 6949–6958
[12] Lin T Y, Goyal P, Girshick R, He K M, Dollar P. Focal loss for dense object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42(2): 318–327 doi: 10.1109/TPAMI.2018.2858826
[13] Shen Z Q, Maheshwari H, Yao W C, Savvides M. SCL: Towards accurate domain adaptive object detection via gradient detach based stacked complementary losses. arXiv preprint arXiv: 1911.02559, 2019.
[14] Zheng Y T, Huang D, Liu S T, Wang Y H. Cross-domain object detection through coarse-to-fine feature adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 13763–13772
[15] Xu C D, Zhao X R, Jin X, Wei X S. Exploring categorical regularization for domain adaptive object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 11721–11730
[16] Hsu C C, Tsai Y H, Lin Y Y, Yang M H. Every pixel matters: Center-aware feature alignment for domain adaptive object detector. In: Proceedings of the 16th European Conference on Computer Vision (ECCV). Glasgow, UK: Springer, 2020. 733–748
[17] Chen C Q, Zheng Z B, Ding X H, Huang Y, Dou Q. Harmonizing transferability and discriminability for adapting object detectors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 8866–8875
[18] Zhu J Y, Park T, Isola P, Efros A A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice, Italy: IEEE, 2017. 2242–2251
[19] Deng J H, Li W, Chen Y H, Duan L X. Unbiased mean teacher for cross-domain object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville, USA: IEEE, 2021. 4089–4099
[20] Xu M H, Wang H, Ni B B, Tian Q, Zhang W J. Cross-domain detection via graph-induced prototype alignment. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA: IEEE, 2020. 12352–12361
[21] Wu A M, Liu R, Han Y H, Zhu L C, Yang Y. Vector-decomposed disentanglement for domain-invariant object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Montreal, Canada: IEEE, 2021. 9322–9331
[22] Chen C Q, Zheng Z B, Huang Y, Ding X H, Yu Y Z. I3Net: Implicit instance-invariant network for adapting one-stage object detectors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 2021. 12576–12585
[23] Li Wei, Wang Meng. Unsupervised cross-domain object detection based on progressive multi-source transfer. Acta Automatica Sinica, 2022, 48(9): 2337–2351 (in Chinese)
[24] Rodriguez A L, Mikolajczyk K. Domain adaptation for object detection via style consistency. In: Proceedings of the 30th British Machine Vision Conference. Cardiff, UK: BMVA Press, 2019.
[25] Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C Y, et al. SSD: Single shot MultiBox detector. In: Proceedings of the 14th European Conference on Computer Vision (ECCV). Amsterdam, The Netherlands: Springer, 2016. 21–37
[26] Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 779–788
[27] YOLOv8 [Online], available: https://github.com/ultralytics/yolov8, February 15, 2023
[28] Zhang S Z, Tuo H Y, Hu J, Jing Z L. Domain adaptive YOLO for one-stage cross-domain detection. In: Proceedings of the 13th Asian Conference on Machine Learning. PMLR, 2021. 785–797
[29] Redmon J, Farhadi A. YOLOv3: An incremental improvement. arXiv preprint arXiv: 1804.02767, 2018.
[30] Hnewa M, Radha H. Integrated multiscale domain adaptive YOLO. IEEE Transactions on Image Processing, 2023, 32: 1857–1867 doi: 10.1109/TIP.2023.3255106
[31] Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: Optimal speed and accuracy of object detection. arXiv preprint arXiv: 2004.10934, 2020.
[32] Vidit V, Salzmann M. Attention-based domain adaptation for single-stage detectors. Machine Vision and Applications, 2022, 33(5): Article No. 65 doi: 10.1007/s00138-022-01320-y
[33] YOLOv5 [Online], available: https://github.com/ultralytics/yolov5, November 28, 2022
[34] Li G F, Ji Z F, Qu X D, Zhou R, Cao D P. Cross-domain object detection for autonomous driving: A stepwise domain adaptative YOLO approach. IEEE Transactions on Intelligent Vehicles, 2022, 7(3): 603–615 doi: 10.1109/TIV.2022.3165353
[35] Hu J, Shen L, Sun G. Squeeze-and-excitation networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 7132–7141
[36] Wang Q L, Wu B G, Zhu P F, Li P H, Zuo W M, Hu Q H. ECA-Net: Efficient channel attention for deep convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 11531–11539
[37] Lee H, Kim H E, Nam H. SRM: A style-based recalibration module for convolutional neural networks. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). Seoul, South Korea: IEEE, 2019. 1854–1862
[38] Wang M Z, Wang W, Li B P, Zhang X, Lan L, Tan H B, et al. InterBN: Channel fusion for adversarial unsupervised domain adaptation. In: Proceedings of the 29th ACM International Conference on Multimedia. Virtual Event: ACM, 2021. 3691–3700
[39] Ding S Y, Lin L, Wang G R, Chao H Y. Deep feature learning with relative distance comparison for person re-identification. Pattern Recognition, 2015, 48(10): 2993–3003 doi: 10.1016/j.patcog.2015.04.005
[40] Snell J, Swersky K, Zemel R. Prototypical networks for few-shot learning. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 4080–4090
[41] He K M, Fan H Q, Wu Y X, Xie S N, Girshick R. Momentum contrast for unsupervised visual representation learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 9726–9735
[42] Cordts M, Omran M, Ramos S, Rehfeld T, Enzweiler M, Benenson R, et al. The cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 3213–3223
[43] Sakaridis C, Dai D X, Van Gool L. Semantic foggy scene understanding with synthetic data. International Journal of Computer Vision, 2018, 126(9): 973–992 doi: 10.1007/s11263-018-1072-8
[44] Yu F, Chen H F, Wang X, Xian W Q, Chen Y Y, Liu F C, et al. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle, USA: IEEE, 2020. 2633–2642
[45] Geiger A, Lenz P, Stiller C, Urtasun R. Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research, 2013, 32(11): 1231–1237 doi: 10.1177/0278364913491297
[46] Johnson-Roberson M, Barto C, Mehta R, Sridhar S N, Rosaen K, Vasudevan R. Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks? In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). Singapore: IEEE, 2017. 746–753
[47] Everingham M, Van Gool L, Williams C K I, Winn J, Zisserman A. The Pascal visual object classes (VOC) challenge. International Journal of Computer Vision, 2010, 88(2): 303–338 doi: 10.1007/s11263-009-0275-4
[48] Inoue N, Furuta R, Yamasaki T, Aizawa K. Cross-domain weakly-supervised object detection through progressive domain adaptation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 5001–5009
Publication history
• Received: 2022-12-05
• Accepted: 2023-05-18
• Published online: 2023-08-18
• Issue date: 2024-11-26
