Metric Based Feature Transformation Networks for Few-shot Learning

Wang Duo-Rui, Du Yang, Dong Lan-Fang, Hu Wei-Ming, Li Bing

Citation: Wang Duo-Rui, Du Yang, Dong Lan-Fang, Hu Wei-Ming, Li Bing. Metric based feature transformation networks for few-shot learning. Acta Automatica Sinica, xxxx, xx(x): x–xx doi: 10.16383/j.aas.c210903

doi: 10.16383/j.aas.c210903
Funds: Supported by the National Key R&D Program of China (2018AAA0102802), the National Natural Science Foundation of China (62036011, 62192782, 61721004), and the Key Research Program of Frontier Sciences, Chinese Academy of Sciences (QYZDJ-SSW-JSC040)
Author Biographies:

  WANG Duo-Rui  Ph.D. candidate at Beihang University. He received his master's degree from University of Science and Technology of China in 2021. His research interests cover few-shot learning and object detection. E-mail: wangduor@mail.ustc.edu.cn

  DU Yang  He received his Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences in 2019. His research interests cover action recognition and medical image processing. E-mail: jingzhou.dy@alibaba-inc.com

  DONG Lan-Fang  Associate professor at University of Science and Technology of China. She received her master's degree from University of Science and Technology of China in 1994. Her research interests cover intelligent image and video analysis, knowledge graphs and dialogue systems, numerical simulation, and 3D reconstruction. E-mail: lfdong@ustc.edu.cn

  HU Wei-Ming  Professor at the Institute of Automation, Chinese Academy of Sciences. He received his Ph.D. degree from Zhejiang University in 1998. His research interests cover visual motion analysis, recognition of web objectionable information, and network intrusion detection. Corresponding author of this paper. E-mail: wmhu@nlpr.ia.ac.cn

  LI Bing  Professor at the Institute of Automation, Chinese Academy of Sciences. He received his Ph.D. degree from Beijing Jiaotong University in 2009. His research interests cover web content security and intelligent ISP imaging. E-mail: bing.li@ia.ac.cn

• Abstract: In few-shot classification, the training samples available for each class are extremely limited: samples of the same class are sparsely distributed in feature space, and the boundaries between different classes are blurred. This paper proposes a new feature-transformation-based network that handles few-shot classification with a metric-based approach. The algorithm maps samples into feature space through an embedding function and computes the feature residual between each input sample and the sample center; a feature transformation function learns the residuals between sample centers and same-class samples, so that samples move toward the center of their own class in feature space while the updated sample centers move farther apart from one another. A new metric is constructed by fusing cosine similarity and Euclidean distance, and a metric function jointly expresses the metric distance of every local feature in the feature map, so that both the angles and the Euclidean distances between sample features are optimized during network training. The model's performance on datasets commonly used for few-shot classification shows that it is both accurate and generalizable.
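To make the fused metric concrete, the following is a minimal PyTorch sketch of the idea described in the abstract: cosine similarity and squared Euclidean distance are computed at every local position of the feature map and combined into a single score. The function name `fused_metric` and the mixing weight `alpha` are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch (not the authors' code): fuse cosine similarity and
# Euclidean distance over the local features of a feature map, in the
# spirit of the abstract. `fused_metric` and `alpha` are assumed names.
import torch
import torch.nn.functional as F

def fused_metric(query, prototypes, alpha=0.5):
    """query: (Q, C, H, W) query feature maps; prototypes: (N, C, H, W)
    class-center feature maps. Returns (Q, N) scores, higher = closer."""
    Q, C, H, W = query.shape
    N = prototypes.shape[0]
    q = query.reshape(Q, 1, C, H * W)       # broadcast over classes
    p = prototypes.reshape(1, N, C, H * W)  # broadcast over queries
    # Per-location cosine similarity, averaged over the H*W locations.
    cos = F.cosine_similarity(q, p, dim=2).mean(dim=-1)   # (Q, N)
    # Per-location squared Euclidean distance, averaged the same way.
    euc = ((q - p) ** 2).sum(dim=2).mean(dim=-1)          # (Q, N)
    # Reward angular agreement, penalize distance; both terms receive
    # gradients, so angles and distances are optimized jointly.
    return alpha * cos - (1 - alpha) * euc

# Usage: scores feed a cross-entropy loss over the N episode classes.
scores = fused_metric(torch.randn(10, 64, 5, 5), torch.randn(5, 64, 5, 5))
loss = F.cross_entropy(scores, torch.randint(0, 5, (10,)))
```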
Fig. 1  Model of feature transformation networks

Fig. 2  Structure of the key functions in each module

Table 1  Embedding functions and key structures of the network models

| Model | Embedding function | Key structure |
|---|---|---|
| MN[11] | Conv-4 | Attention LSTM |
| ProtoNet[12] | Conv-4 | "Prototype" concept; Euclidean distance as the metric |
| RN[14] | Conv-4 | Convolutional neural network as the metric function |
| EGNN[22] | Conv-4 | Edge labels predict node classes |
| EGNN+Transduction[22] | ResNet-12 | Edge labels predict node classes; transduction and label propagation |
| DN4[24] | ResNet-12 | Local descriptors; image-to-class similarity measure |
| DC[25] | Conv-4 | Dense classification |
| DC+IMP[25] | Conv-4 | Dense classification; neural network implanting |
| MBFTN | Conv-4 | Feature transformation module; feature metric module |
| MBFTN-R12 | ResNet-12 | Feature transformation module; feature metric module |

Table 2  Comparison of mean accuracy (%) with existing algorithms on the Omniglot dataset

| Model | 5-way 1-shot | 5-way 5-shot | 20-way 1-shot | 20-way 5-shot |
|---|---|---|---|---|
| MN[11] | 98.1 | 98.9 | 93.8 | 98.5 |
| ProtoNet[12] | 98.8 | 99.7 | 96.0 | 98.9 |
| SN[13] | 97.3 | 98.4 | 88.2 | 97.0 |
| RN[14] | 99.6 ± 0.2 | 99.8 ± 0.1 | 97.6 ± 0.2 | 99.1 ± 0.1 |
| SM[15] | 98.4 | 99.6 | 95.0 | 98.6 |
| MetaNet[16] | 98.95 | | 97.0 | |
| MANN[17] | 82.8 | 94.9 | | |
| MAML[18] | 98.7 ± 0.4 | 99.9 ± 0.1 | 95.8 ± 0.3 | 98.9 ± 0.2 |
| MMNet[26] | 99.28 ± 0.08 | 99.77 ± 0.04 | 97.16 ± 0.1 | 98.93 ± 0.05 |
| MBFTN | 99.7 ± 0.1 | 99.9 ± 0.1 | 98.3 ± 0.1 | 99.5 ± 0.1 |

Table 3  Comparison of mean accuracy (%) with existing algorithms on the miniImageNet dataset

| Model | 5-way 1-shot | 5-way 5-shot |
|---|---|---|
| MN[11] | 43.4 ± 0.78 | 51.09 ± 0.71 |
| ML-LSTM[11] | 43.56 ± 0.84 | 55.31 ± 0.73 |
| ProtoNet[12] | 49.42 ± 0.78 | 68.20 ± 0.66 |
| RN[14] | 50.44 ± 0.82 | 65.32 ± 0.70 |
| MetaNet[16] | 49.21 ± 0.96 | |
| MAML[18] | 48.70 ± 1.84 | 63.11 ± 0.92 |
| EGNN[22] | | 66.85 |
| EGNN+Transduction[22] | | 76.37 |
| DN4[24] | 51.24 ± 0.74 | 71.02 ± 0.64 |
| DC[25] | 62.53 ± 0.19 | 78.95 ± 0.13 |
| DC+IMP[25] | | 79.77 ± 0.19 |
| MMNet[26] | 53.37 ± 0.08 | 66.97 ± 0.09 |
| PredictNet[27] | 54.53 ± 0.40 | 67.87 ± 0.20 |
| DynamicNet[28] | 56.20 ± 0.86 | 72.81 ± 0.62 |
| MN-FCE[29] | 43.44 ± 0.77 | 60.60 ± 0.71 |
| MetaOptNet[30] | 60.64 ± 0.61 | 78.63 ± 0.46 |
| MBFTN | 59.86 ± 0.91 | 75.96 ± 0.82 |
| MBFTN-R12 | 61.33 ± 0.21 | 79.59 ± 0.47 |

Table 4  Mean accuracy (%) with existing algorithms on the CUB-200, CIFAR-FS, and tieredImageNet datasets (all 5-way)

| Model | CUB-200 1-shot | CUB-200 5-shot | CIFAR-FS 1-shot | CIFAR-FS 5-shot | tieredImageNet 1-shot | tieredImageNet 5-shot |
|---|---|---|---|---|---|---|
| MN[11] | 61.16 ± 0.89 | 72.86 ± 0.70 | | | | |
| ProtoNet[12] | 51.31 ± 0.91 | 70.77 ± 0.69 | 55.5 ± 0.7 | 72.0 ± 0.6 | 53.31 ± 0.89 | 72.69 ± 0.74 |
| RN[14] | 62.45 ± 0.98 | 76.11 ± 0.69 | 55.0 ± 1.0 | 69.3 ± 0.8 | 54.48 ± 0.93 | 71.32 ± 0.78 |
| MAML[18] | 55.92 ± 0.95 | 72.09 ± 0.76 | 58.9 ± 1.9 | 71.5 ± 1.0 | 51.67 ± 1.81 | 70.30 ± 1.75 |
| EGNN[22] | 63.52 ± 0.52 | 80.24 ± 0.49 | | | | |
| DN4[24] | 53.15 ± 0.84 | 81.90 ± 0.60 | | | | |
| MetaOptNet[30] | | | 72.0 ± 0.7 | 84.2 ± 0.5 | 65.99 ± 0.72 | 81.56 ± 0.53 |
| MBFTN-R12 | 69.58 ± 0.36 | 85.46 ± 0.79 | 70.3 ± 0.5 | 82.6 ± 0.3 | 62.14 ± 0.63 | 81.74 ± 0.33 |

Table 5  Ablation study of our model

| Model | 5-way 1-shot | 5-way 5-shot |
|---|---|---|
| ProtoNet-4C | 49.42 ± 0.78 | 68.20 ± 0.66 |
| ProtoNet-8C | 51.18 ± 0.73 | 70.23 ± 0.46 |
| ProtoNet-Trans-4C | 53.47 ± 0.46 | 71.33 ± 0.23 |
| ProtoNet-M-4C | 56.54 ± 0.57 | 73.46 ± 0.53 |
| ProtoNet-VLAD-4C | 52.46 ± 0.67 | 70.83 ± 0.62 |
| Trans*-M-4C | 59.86 ± 0.91 | 67.86 ± 0.56 |
| Cosine similarity only | 54.62 ± 0.57 | 72.58 ± 0.38 |
| Euclidean distance only | 55.66 ± 0.67 | 73.34 ± 0.74 |
| ProtoNet-Trans-M-4C | 59.86 ± 0.91 | 75.96 ± 0.82 |
References

[1] Szegedy C, Liu W, Jia Y Q, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston, USA: IEEE, 2015. 1-9
                        [2] He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 770-778
                        [3] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 26th International Conference on Neural Information Processing Systems (NIPS). Lake Tahoe, USA: NIPS, 2012. 1106-1114
                        [4] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In: Proceedings of the 3rd International Conference on Learning Representations. San Diego, USA: ICLR, 2015.
[5] Liu Ying, Lei Yan-Bo, Fan Jiu-Lun, Wang Fu-Ping, Gong Yan-Chao, Tian Qi. Survey on image classification technology based on small sample learning. Acta Automatica Sinica, 2021, 47(2): 297-315 (in Chinese) doi: 10.16383/j.aas.c190720
                        [6] Miller E G, Matsakis N E, Viola P A. Learning from one example through shared densities on transforms. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Hilton Head Island, USA: IEEE, 2000. 464-471
[7] Fei-Fei L, Fergus R, Perona P. One-shot learning of object categories. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006, 28(4): 594-611
                        [8] Lake B M, Salakhutdinov R, Gross J, Tenenbaum J B. One shot learning of simple visual concepts. In: Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (CogSci). Boston, USA: CogSci, 2011. 2568-2573
[9] Lake B M, Salakhutdinov R, Tenenbaum J B. Human-level concept learning through probabilistic program induction. Science, 2015, 350(6266): 1332-1338
                        [10] Edwards H, Storkey A J. Towards a neural statistician. In: Proceedings of the 5th International Conference on Learning Representations. Toulon, France: ICLR, 2017.
                        [11] Vinyals O, Blundell C, Lillicrap T, Kavukcuoglu K, Wierstra D. Matching networks for one shot learning. In: Proceedings of the 30th International Conference on Neural Information Processing Systems. Barcelona, Spain: Curran Associates Inc., 2016. 3637-3645
                        [12] Snell J, Swersky K, Zemel R. Prototypical networks for few-shot learning. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 4080-4090
                        [13] Koch G, Zemel R, Salakhutdinov R. Siamese neural networks for one-shot image recognition. In: Proceedings of the 32nd International Conference on Machine Learning. Lille, France: JMLR, 2015.
                        [14] Sung F, Yang Y X, Zhang L, Xiang T, Torr P H S, Hospedales T M. Learning to compare: Relation network for few-shot learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 1199-1208
[15] Kaiser Ł, Nachum O, Roy A, Bengio S. Learning to remember rare events. In: Proceedings of the 5th International Conference on Learning Representations. Toulon, France: ICLR, 2017.
                        [16] Munkhdalai T, Yu H. Meta networks. In: Proceedings of the 34th International Conference on Machine Learning - Volume 70. Sydney, Australia: JMLR.org, 2017. 2554-2563
[17] Santoro A, Bartunov S, Botvinick M, Wierstra D, Lillicrap T. Meta-learning with memory-augmented neural networks. In: Proceedings of the 33rd International Conference on Machine Learning. New York City, USA: PMLR, 2016. 1842-1850
                        [18] Finn C, Abbeel P, Levine S. Model-agnostic meta-learning for fast adaptation of deep networks. In: Proceedings of the 34th International Conference on Machine Learning - Volume 70. Sydney, Australia: JMLR.org, 2017. 1126-1135
                        [19] Arandjelovic R, Gronat P, Torii A, Pajdla T, Sivic J. NetVLAD: CNN architecture for weakly supervised place recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, USA: IEEE, 2016. 5297-5307
                        [20] Jégou H, Douze M, Schmid C, Pérez P. Aggregating local descriptors into a compact image representation. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Francisco, USA: IEEE, 2010. 3304-3311
                        [21] Bertinetto L, Henriques J F, Torr P H, Vedaldi A. Meta-learning with differentiable closed-form solvers. In: Proceedings of the 7th International Conference on Learning Representations. New Orleans, USA: ICLR, 2019.
                        [22] Kim J, Kim T, Kim S, Yoo C D. Edge-labeling graph neural network for few-shot learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 11-20
[23] Yue Z Q, Zhang H W, Sun Q R, Hua X S. Interventional few-shot learning. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada: Curran Associates Inc., 2020. Article No. 230
                        [24] Li W B, Wang L, Xu J L, Huo J, Gao Y, Luo J B. Revisiting local descriptor based image-to-class measure for few-shot learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 7253-7260
                        [25] Lifchitz Y, Avrithis Y, Picard S, Bursuc A. Dense classification and implanting for few-shot learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 9250-9259
                        [26] Cai Q, Pan Y W, Yao T, Yan C G, Mei T. Memory matching networks for one-shot image recognition. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 4080-4088
                        [27] Qiao S Y, Liu C X, Shen W, Yuille A L. Few-shot image recognition by predicting parameters from activations. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 7229-7238
                        [28] Gidaris S, Komodakis N. Dynamic few-shot visual learning without forgetting. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 4367-4375
                        [29] Ravi S, Larochelle H. Optimization as a model for few-shot learning. In: Proceedings of the 5th International Conference on Learning Representations. Toulon, France: ICLR, 2017.
                        [30] Lee K, Maji S, Ravichandran A, Soatto S. Meta-learning with differentiable convex optimization. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019. 10649-10657
Publication History
  • Received: 2021-09-03
  • Accepted: 2021-12-11
  • Published online: 2023-09-11
