Structure Optimization of Convolutional Neural Networks: A Survey

LIN Jing-Dong, WU Xin-Yi, CHAI Yi, YIN Hong-Peng

Citation: LIN Jing-Dong, WU Xin-Yi, CHAI Yi, YIN Hong-Peng. Structure Optimization of Convolutional Neural Networks: A Survey. ACTA AUTOMATICA SINICA, 2020, 46(1): 24-37. doi: 10.16383/j.aas.c180275


              Structure Optimization of Convolutional Neural Networks: A Survey

              Funds: 

              National Natural Science Foundation of China 61633005

              National Natural Science Foundation of China 61773080

              Fundamental Research Funds for the Central Universities 2019CDYGZD001

Chongqing Natural Science Foundation of Fundamental Science and Frontier Technologies cstc2015jcyjB0569

Scientific Reserved Talents of Chongqing University cqu2018CDHB1B04

Chongqing Natural Science Foundation of Scientific Key Program cstc2015shms-ztzx30001

              More Information
                Author Bio:

LIN Jing-Dong  Associate professor at the College of Automation, Chongqing University. He received his Ph.D. degree from Chongqing University in 2002. His research interest covers industrial automation line design and smart home control system design. E-mail: linzhanding@163.com

WU Xin-Yi  Master student at the College of Automation, Chongqing University. He received his bachelor's degree from Chongqing University in 2016. His research interest covers deep learning and computer vision. E-mail: wuxinyi12358@gmail.com

CHAI Yi  Professor at the College of Automation, Chongqing University. He received his Ph.D. degree from Chongqing University in 2001. His research interest covers information processing, integration and control, and computer network and system control. E-mail: chaiyi@cqu.edu.cn

Corresponding author: YIN Hong-Peng  Professor at the College of Automation, Chongqing University. He received his Ph.D. degree from Chongqing University in 2009. His research interest covers pattern recognition, image processing, and computer vision. Corresponding author of this paper. E-mail: yinhongpeng@gmail.com
• Abstract: In recent years, convolutional neural networks (CNNs) have made remarkable progress in computer vision, natural language processing, speech recognition, and other fields, and their powerful feature-learning ability has attracted wide attention from researchers worldwide. However, deep convolutional neural networks are typically large in scale and computationally complex, which limits their use in applications with tight real-time or resource constraints. Optimizing the structure of convolutional neural networks to compress and accelerate existing models helps bring deep learning to a wider range of applications, and has become a research hotspot in the deep learning community. This survey reviews the development history, current status, and typical methods of CNN structure optimization, grouping the work into four categories: network pruning and sparsification, tensor decomposition, knowledge transfer, and compact module design, each of which is discussed in detail. Finally, current hot topics and open difficulties are analyzed and summarized, and future directions and application prospects of network structure optimization are discussed.
  Recommended by Associate Editor HE Wei
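Of the four families just listed, knowledge transfer is the easiest to illustrate in a few lines. Below is a minimal sketch of a distillation loss in the spirit of Hinton et al. [47], assuming PyTorch; the temperature and weighting factor are illustrative defaults, not values taken from this survey or from [47].

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,   # assumed value, for illustration only
                      alpha: float = 0.7) -> torch.Tensor:
    """Weighted sum of a soft-target term (teacher) and a hard-target term (labels)."""
    # Soften both distributions with a temperature, then match them with KL divergence.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_loss = F.kl_div(log_soft_student, soft_targets,
                         reduction="batchmean") * temperature ** 2
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Usage: the large "teacher" is frozen and only supplies logits on each batch,
# while the small "student" is trained with this combined loss.
```

The temperature-squared factor keeps the gradient magnitude of the soft-target term comparable to the hard-target term as the temperature changes.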
• Fig. 1  Structure of LeNet-5 [6]

  Fig. 2  Four pruning granularities [26]

  Fig. 3  Process of tensor factorization

  Fig. 4  Process of knowledge transfer

  Fig. 5  Inception-v1 module [4]

  Fig. 6  Process of convolutional filter factorization [57]

  Fig. 7  Xception module [59] (a depthwise separable convolution sketch follows this list)

  Fig. 8  Linear convolutional structure and mlpconv structure [55]

  Fig. 9  Residual module [5]
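The figures themselves are not reproduced here; as a stand-in for Fig. 7, the sketch below shows the depthwise separable convolution that the Xception module [59] (and MobileNets [77]) build on: a per-channel depthwise convolution followed by a 1×1 pointwise convolution. This is a minimal PyTorch sketch; the channel counts, BatchNorm/ReLU placement, and kernel size are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per input channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

# A standard 3x3 conv from 128 to 256 channels needs 3*3*128*256 ≈ 295k weights;
# the separable version needs 3*3*128 + 128*256 ≈ 34k, roughly a 9x reduction.
x = torch.randn(1, 128, 32, 32)
print(DepthwiseSeparableConv(128, 256)(x).shape)  # torch.Size([1, 256, 32, 32])
```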

Table 1  Classic convolutional neural networks and corresponding parameters

| Year | Network | Layers | Conv layers | Params (conv) | Params (FC) | MACs (conv) | MACs (FC) | Top-5 error (%) |
|------|---------|--------|-------------|---------------|-------------|-------------|-----------|-----------------|
| 2012 | AlexNet [1] | 8 | 5 | 2.3 M | 58.6 M | 666 M | 58.6 M | 16.4 |
| 2014 | Overfeat [2] | 8 | 5 | 16 M | 130 M | 2.67 G | 124 M | 14.2 |
| 2014 | VGGNet-16 [3] | 16 | 13 | 14.7 M | 124 M | 15.3 G | 130 M | 7.4 |
| 2015 | GoogLeNet [4] | 22 | 21 | 6 M | 1 M | 1.43 G | 1 M | 6.7 |
| 2016 | ResNet-50 [5] | 50 | 49 | 23.5 M | 2 M | 3.86 G | 2 M | 3.6 |
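For readers who want to sanity-check the conv/FC splits in Table 1, the standard counting formulas are params = k·k·C_in·C_out + C_out and MACs = k·k·C_in·C_out·H_out·W_out per convolution layer. The helper below is a small self-contained sketch of those formulas; the AlexNet conv1 numbers in the example are only a spot check, not a re-derivation of the table.

```python
def conv_params(k: int, c_in: int, c_out: int, bias: bool = True) -> int:
    """Weights (and optional biases) of one k x k convolution layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def conv_macs(k: int, c_in: int, c_out: int, h_out: int, w_out: int) -> int:
    """Multiply-accumulate operations for one forward pass of that layer."""
    return k * k * c_in * c_out * h_out * w_out

# Example: AlexNet's first convolution (96 filters of 11x11x3, 55x55 output map).
print(conv_params(11, 3, 96))        # 34944      (~0.03 M weights)
print(conv_macs(11, 3, 96, 55, 55))  # 105415200  (~105 M MACs)
```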

Table 2  Comparison of different pruned networks

| Method | Network | Baseline error | Error after pruning | Baseline params | Params after pruning | Compression |
|--------|---------|----------------|---------------------|-----------------|----------------------|-------------|
| [28] | AlexNet | 19.73% | 19.70% | 61 M | 6.7 M | |
| [29] | CaffeNet | 42.16% | 44.4% | 61 M | 21.3 M | |
| [30] | LeNet-5 | 0.91% | 0.91% | 431 K | 4.0 K | 108× |
| [33] | VGGNet-16 | 6.75% | 6.6% | 150 M | 5.4 M | 28× |
| [34] | ResNet-50 | 8.86% | 11.7% | 25.56 M | 8.66 M | |
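Several of the Table 2 entries (for example [33]) prune at the filter level, ranking whole filters by their L1 norm and discarding the weakest before fine-tuning. The sketch below, assuming PyTorch, only computes such a ranking for a single layer; it omits the actual removal of filters, the matching pruning of the next layer's input channels, and the retraining step.

```python
import torch
import torch.nn as nn

def weakest_filters(conv: nn.Conv2d, prune_ratio: float = 0.5) -> torch.Tensor:
    """Rank output filters of a conv layer by L1 norm and return indices to prune."""
    # Each filter is one output-channel slice of the weight tensor (C_out, C_in, k, k).
    l1_per_filter = conv.weight.data.abs().sum(dim=(1, 2, 3))
    n_prune = int(conv.out_channels * prune_ratio)
    return torch.argsort(l1_per_filter)[:n_prune]  # smallest-norm filters first

conv = nn.Conv2d(64, 128, kernel_size=3)
idx = weakest_filters(conv, prune_ratio=0.25)
print(idx.shape)  # torch.Size([32]) -- 32 candidate filters to remove
```

After removing the selected filters, the corresponding input channels of the following convolution must also be dropped, and the network is fine-tuned to recover accuracy.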
                      1. [1] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, Nevada, USA: Curran Associates Inc., 2012. 1097-1105
[2] Zeiler M D, Fergus R. Visualizing and understanding convolutional networks. In: Proceedings of the 13th European Conference on Computer Vision. Zurich, Switzerland: Springer, 2014. 818-833
[3] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv: 1409.1556, 2014.
[4] Szegedy C, Liu W, Jia Y Q, Sermanet P, Reed S, Anguelov D, et al. Going deeper with convolutions. In: Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, MA, USA: IEEE, 2015. 1-9
[5] He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition. In: Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016. 770-778
                        [6] LeCun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278-2324 doi: 10.1109/5.726791
                        [7] He K M, Sun J. Convolutional neural networks at constrained time cost. In: Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015. 5353-5360
[8] LeCun Y, Denker J S, Solla S A. Optimal brain damage. In: Proceedings of the 2nd International Conference on Neural Information Processing Systems. Denver, Colorado, USA: MIT Press, 1989. 598-605
                        [9] Hassibi B, Stork D G, Wolff G, Watanabe T. Optimal brain surgeon: extensions and performance comparisons. In: Proceedings of the 6th International Conference on Neural Information Processing Systems. Denver, Colorado, USA: Morgan Kaufmann Publishers Inc., 1993. 263-270
                        [10] Cheng Y, Wang D, Zhou P, Zhang T. A survey of model compression and acceleration for deep neural networks. arXiv: 1710.09282, 2017.
[11] Cheng J, Wang P S, Li G, Hu Q H, Lu H Q. Recent advances in efficient computation of deep convolutional neural networks. Frontiers of Information Technology & Electronic Engineering, 2018, 19(1): 64-77
[12] Lei Jie, Gao Xin, Song Jie, Wang Xing-Lu, Song Ming-Li. Survey of deep neural network model compression. Journal of Software, 2018, 29(2): 251-266 (in Chinese)
[13] Hu H Y, Peng R, Tai Y W, Tang C K. Network trimming: a data-driven neuron pruning approach towards efficient deep architectures. arXiv: 1607.03250, 2016.
[14] Cheng Y, Wang D, Zhou P, Zhang T. Model compression and acceleration for deep neural networks: the principles, progress, and challenges. IEEE Signal Processing Magazine, 2018, 35(1): 126-136
[15] Gong Y C, Liu L, Yang M, Bourdev L. Compressing deep convolutional networks using vector quantization. arXiv: 1412.6115, 2014.
                        [16] Reed R. Pruning algorithms-a survey. IEEE Transactions on Neural Networks, 1993, 4(5): 740-747 doi: 10.1109/72.248452
                        [17] Collins M D, Kohli P. Memory bounded deep convolutional networks. arXiv: 1412.1442, 2014.
                        [18] Jin X J, Yuan X T, Feng J S, Yan S C. Training skinny deep neural networks with iterative hard thresholding methods. arXiv: 1607.05423, 2016.
                        [19] Zhou H, Alvarez J M, Porikli F. Less is more: towards compact CNNs. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016. 662-677
                        [20] Wen W, Wu C P, Wang Y D, Chen Y R, Li H. Learning structured sparsity in deep neural networks. In: Proceedings of the 30th Conference on Neural Information Processing Systems. Barcelona, Spain: MIT Press, 2016. 2074-2082
                        [21] Lebedev V, Lempitsky V. Fast convnets using group-wise brain damage. In: Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, USA: IEEE, 2016. 2554-2564
[22] Louizos C, Welling M, Kingma D P. Learning sparse neural networks through L0 regularization. arXiv: 1712.01312, 2017.
                        [23] Hinton G E, Srivastava N, Krizhevsky A, Sutskever I, Salakhutdinov R R. Improving neural networks by preventing co-adaptation of feature detectors. arXiv: 1207.0580, 2012.
[24] Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014, 15(1): 1929-1958
                        [25] Li Z, Gong B Q, Yang T B. Improved dropout for shallow and deep learning. In: Proceedings of the 30th Conference on Neural Information Processing Systems. Barcelona, Spain: MIT Press, 2016. 2523-2531
                        [26] Anwar S, Sung W. Coarse pruning of convolutional neural networks with random masks. In: Proceedings of 2017 International Conference on Learning Representations. Toulon, France: 2017. 134-145
                        [27] Hanson S J, Pratt L Y. Comparing biases for minimal network construction with back-propagation. In: Proceedings of the 1st International Conference on Neural Information Processing Systems. Denver, Colorado, USA: MIT Press, 1988. 177-185
[28] Han S, Mao H Z, Dally W J. Deep compression: compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv: 1510.00149, 2015.
                        [29] Srinivas S, Babu R V. Data-free parameter pruning for deep neural networks. arXiv: 1507.06149, 2015.
                        [30] Guo Y W, Yao A B, Chen Y R. Dynamic network surgery for efficient DNNs. In: Proceedings of the 30th Conference on Neural Information Processing Systems. Barcelona, Spain: MIT Press, 2016. 1379-1387
                        [31] Liu X Y, Pool J, Han S, Dally W J. Efficient sparse-winograd convolutional neural networks. In: Proceedings of 2017 International Conference on Learning Representation. France: 2017.
                        [32] He Y H, Zhang X Y, Sun J. Channel pruning for accelerating very deep neural networks. In: Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 1398-1406
                        [33] Li H, Kadav A, Durdanovic I, Samet H, Graf H P. Pruning filters for efficient convNets. arXiv: 1608.08710, 2016.
                        [34] Luo J H, Wu J X, Lin W Y. Thinet: a filter level pruning method for deep neural network compression. In: Proceedings of 2017 IEEE International Conference on Computer Vision. Venice, Italy: IEEE, 2017. 5068-5076
                        [35] Denil M, Shakibi B, Dinh L, Ranzato M, de Freitas N. Predicting parameters in deep learning. In: Proceedings of the 26th International Conference on Neural Information Processing Systems. Lake Tahoe, Nevada, USA: Curran Associates Inc., 2013. 2148-2156
                        [36] Rigamonti R, Sironi A, Lepetit V, Fua P. Learning separable filters. In: Proceedings of 2013 IEEE Conference on Computer Vision and Pattern Recognition. Portland, OR, USA: IEEE, 2013. 2754-2761
[37] Jaderberg M, Vedaldi A, Zisserman A. Speeding up convolutional neural networks with low rank expansions. arXiv: 1405.3866, 2014.
                        [38] Denton E, Zaremba W, Bruna J, LeCun Y, Fergus R. Exploiting linear structure within convolutional networks for efficient evaluation. In: Proceedings of the 27th International Conference on Neural Information Processing Systems. Montreal, Canada: MIT Press, 2014. 1269-1277
                        [39] Lebedev V, Ganin Y, Rakhuba M, Oseledets I, Lempitsky V. Speeding-up convolutional neural networks using fine-tuned CP-decomposition. arXiv: 1412.6553, 2014.
                        [40] Tai C, Xiao T, Zhang Y, Wang X G, E W N. Convolutional neural networks with low-rank regularization. arXiv: 1511.06067, 2015.
                        [41] Zhang X Y, Zou J H, Ming X, He K M, Sun J. Efficient and accurate approximations of nonlinear convolutional networks. In: Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston, USA: IEEE, 2015. 1984-1992
[42] Kim Y D, Park E, Yoo S, Choi T, Yang L, Shin D. Compression of deep convolutional neural networks for fast and low power mobile applications. arXiv: 1511.06530, 2015.
                        [43] Wang Y H, Xu C, Xu C, Tao D C. Beyond filters: compact feature map for portable deep model. In: Proceedings of the 34th International Conference on Machine Learning. Sydney, Australia: JMLR.org, 2017. 3703-3711
                        [44] Astrid M, Lee S I. CP-decomposition with tensor power method for convolutional neural networks compression. In: Proceedings of 2017 IEEE International Conference on Big Data and Smart Computing. Jeju, South Korea: IEEE, 2017. 115-118
[45] Buciluǎ C, Caruana R, Niculescu-Mizil A. Model compression. In: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Philadelphia, USA: ACM, 2006. 535-541
                        [46] Ba J, Caruana R. Do deep nets really need to be deep? In: Proceedings of Advances in Neural Information Processing Systems. Montreal, Quebec, Canada: MIT Press, 2014. 2654-2662
                        [47] Hinton G, Vinyals O, Dean J. Distilling the knowledge in a neural network. arXiv: 1503.02531, 2015.
                        [48] Romero A, Ballas N, Kahou S E, Chassang A, Gatta C, Bengio Y. Fitnets: hints for thin deep nets. arXiv: 1412.6550, 2014.
                        [49] Luo P, Zhu Z Y, Liu Z W, Wang X G, Tang X O. Face model compression by distilling knowledge from neurons. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence. Phoenix, Arizona, USA: AAAI, 2016. 3560-3566
                        [50] Chen T Q, Goodfellow I, Shlens J. Net2Net: accelerating learning via knowledge transfer. arXiv: 1511.05641, 2015.
                        [51] Zagoruyko S, Komodakis N. Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer. In: Proceedings of 2017 International Conference on Learning Representations. France: 2017.
                        [52] Theis L, Korshunova I, Tejani A, Huszár F. Faster gaze prediction with dense networks and Fisher pruning. arXiv: 1801.05787, 2018.
                        [53] Yim J, Joo D, Bae J, Kim J. A gift from knowledge distillation: fast optimization, network minimization and transfer learning. In: Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017.
                        [54] Chen G B, Choi W, Yu X, Han T, Chandraker M. Learning efficient object detection models with knowledge distillation. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 742-751
                        [55] Lin M, Chen Q, Yan S C. Network in network. arXiv: 1312.4400, 2013.
                        [56] Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv: 1502.03167, 2015.
                        [57] Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the inception architecture for computer vision. In: Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE, 2016. 2818-2826
[58] Szegedy C, Ioffe S, Vanhoucke V, Alemi A A. Inception-v4, Inception-ResNet and the impact of residual connections on learning. In: Proceedings of the 31st AAAI Conference on Artificial Intelligence. San Francisco, USA: AAAI, 2017. 12
                        [59] Chollet F. Xception: deep learning with depthwise separable convolutions. In: Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017.
                        [60] Chang J R, Chen Y S. Batch-normalized maxout network in network. arXiv: 1511.02583, 2015.
                        [61] Pang Y W, Sun M L, Jiang X H, Li X L. Convolution in convolution for network in network. IEEE transactions on neural networks and learning systems, 2018, 29(5): 1587-1597 doi: 10.1109/TNNLS.2017.2676130
                        [62] Han X M, Dai Q. Batch-normalized mlpconv-wise supervised pre-training network in network. Applied Intelligence, 2018, 48(1): 142-155 doi: 10.1007/s10489-017-0968-2
                        [63] Srivastava R K, Greff K, Schmidhuber J. Highway networks. arXiv: 1505.00387, 2015.
                        [64] Hochreiter S, Schmidhuber J. Long short-term memory. Neural Computation, 1997, 9(8): 1735-1780 doi: 10.1162/neco.1997.9.8.1735
                        [65] Larsson G, Maire M, Shakhnarovich G. Fractalnet: ultra-deep neural networks without residuals. arXiv: 1605.07648, 2016.
                        [66] Huang G, Sun Y, Liu Z, Sedra D, Weinberger K Q. Deep networks with stochastic depth. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016. 646-661
                        [67] He K M, Zhang X Y, Ren S Q, Sun J. Identity mappings in deep residual networks. In: Proceedings of the 14th European Conference on Computer Vision. Amsterdam, The Netherlands: Springer, 2016. 630-645
                        [68] Xie S N, Girshick R, Dollár P, Tu Z W, He K M. Aggregated residual transformations for deep neural networks. In: Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017. 5987-5995
                        [69] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature, 2015, 521(7553): 436-444 doi: 10.1038/nature14539
                        [70] Zagoruyko S, Komodakis N. Wide residual networks. arXiv: 1605.07146, 2016.
                        [71] Targ S, Almeida D, Lyman K. Resnet in resnet: generalizing residual architectures. arXiv: 1603.08029, 2016.
                        [72] Zhang K, Sun M, Han T X, Yuan X F, Guo L R, Liu T. Residual networks of residual networks: multilevel residual networks. IEEE Transactions on Circuits and Systems for Video Technology, 2018, 28(6): 1303-1314 doi: 10.1109/TCSVT.2017.2654543
                        [73] Abdi M, Nahavandi S. Multi-residual networks: improving the speed and accuracy of residual networks. arXiv: 1609.05672, 2016.
                        [74] Huang G, Liu Z, van der Maaten L, Weinberger K Q. Densely connected convolutional networks. In: Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, HI, USA: IEEE, 2017.
[75] Zhang Ting, Li Yu-Jian, Hu Hai-He, Zhang Ya-Hong. A gender classification model based on cross-connected convolutional neural networks. Acta Automatica Sinica, 2016, 42(6): 858-865 (in Chinese) doi: 10.16383/j.aas.2016.c150658
[76] Li Yong, Lin Xiao-Zhu, Jiang Meng-Ying. Facial expression recognition with cross-connect LeNet-5 network. Acta Automatica Sinica, 2018, 44(1): 176-182 (in Chinese) doi: 10.16383/j.aas.2018.c160835
                        [77] Howard A G, Zhu M L, Chen B, Kalenichenko D, Wang W J, Weyand T, et al. Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv: 1704.04861, 2017.
                        [78] Sandler M, Howard A, Zhu M L, Zhmoginov A, Chen L C. MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018. 4510-4520
                        [79] Zhang X Y, Zhou X Y, Lin M X, Sun J. ShuffleNet: an extremely efficient convolutional neural network for mobile devices. In: Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA: IEEE, 2018.
Publication history
• Received: 2018-05-03
• Accepted: 2018-11-05
• Published: 2020-01-21
