Facial Expression Recognition with Cross-connect LeNet-5 Network

LI Yong, LIN Xiao-Zhu, JIANG Meng-Ying

Citation: LI Yong, LIN Xiao-Zhu, JIANG Meng-Ying. Facial Expression Recognition with Cross-connect LeNet-5 Network. ACTA AUTOMATICA SINICA, 2018, 44(1): 176-182. doi: 10.16383/j.aas.2018.c160835


              doi: 10.16383/j.aas.2018.c160835

Funds: National Natural Science Foundation of China 60772168

More Information
  Author Bio:

  LI Yong  Master student at the College of Information Science and Technology, Beijing University of Chemical Technology. His research interest covers image processing, pattern recognition, and deep learning. E-mail: 15117965051@163.com

  JIANG Meng-Ying  Master student at the College of Information Science and Technology, Beijing University of Chemical Technology. Her research interest covers image processing, pattern recognition, and deep learning. E-mail: 18810493772@163.com

  Corresponding author: LIN Xiao-Zhu  Professor at the School of Information Engineering, Beijing Institute of Petrochemical Technology. His research interest covers image processing and pattern recognition, deep learning, and signals and systems. Corresponding author of this paper. E-mail: linzhu1964@163.com
• Abstract: To avoid the influence of human factors on expression feature extraction, this paper adopts a convolutional neural network for facial expression recognition. Whereas traditional expression recognition methods require complex hand-crafted feature extraction, a convolutional neural network can dispense with the manual feature extraction step. The classic LeNet-5 convolutional neural network achieves very good recognition results on handwritten digit datasets, but its recognition rate for facial expressions is low. This paper proposes an improved LeNet-5 convolutional neural network for facial expression recognition, in which the low-level features extracted by the network are combined with the high-level features to construct the classifier. The method achieves good results on the public JAFFE expression dataset and the CK+ dataset.
  1)  Associate editor in charge of this paper: HU Qing-Hua
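The cross connection described in the abstract amounts to concatenating low-level and high-level features before the output layer. The following PyTorch sketch is only an illustration of that idea, not the authors' code: the layer sizes follow Table 2, the concatenated feature length 1176 + 400 + 84 = 1660 matches the 1 × 1660 classifier input reported there, but the class name and the exact wiring are assumptions.

```python
# Illustrative sketch of a cross-connected LeNet-5-style classifier
# (hypothetical reconstruction, not the authors' released code).
import torch
import torch.nn as nn

class CrossConnectLeNet5(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)     # 1@32x32  -> 6@28x28
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)    # 6@14x14  -> 16@10x10
        self.conv3 = nn.Conv2d(16, 120, kernel_size=5)  # 16@5x5   -> 120@1x1
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fc = nn.Linear(120, 84)
        # Classifier sees low-level (6*14*14), mid-level (16*5*5) and
        # high-level (84) features concatenated: 1176 + 400 + 84 = 1660.
        self.classifier = nn.Linear(6 * 14 * 14 + 16 * 5 * 5 + 84, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        low = self.pool(torch.relu(self.conv1(x)))        # 6@14x14
        mid = self.pool(torch.relu(self.conv2(low)))      # 16@5x5
        high = torch.relu(self.fc(torch.relu(self.conv3(mid)).flatten(1)))  # 84-d
        feats = torch.cat([low.flatten(1), mid.flatten(1), high], dim=1)    # 1660-d
        return self.classifier(feats)

if __name__ == "__main__":
    model = CrossConnectLeNet5()
    print(model(torch.zeros(1, 1, 32, 32)).shape)  # torch.Size([1, 7])
```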
• Fig. 1  The LeNet-5 convolutional neural network

  Fig. 2  Improved LeNet-5 convolutional neural network

  Fig. 3  Seven kinds of facial expression images in the JAFFE expression dataset

  Fig. 4  Seven kinds of facial expression images in the CK+ expression dataset

Table 1  Connection between LeNet-5 network's Layer 2 and Layer 3

[6 × 16 connection matrix: rows 1-6 are the Layer 2 feature maps, columns 1-16 are the Layer 3 feature maps.]
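The individual marks of Table 1 are not legible here. In the original LeNet-5 [19], each of the sixteen Layer 3 maps is connected to only a subset of the six Layer 2 maps; the snippet below encodes that classical scheme from [19] purely as an illustration, on the assumption that the paper reuses it unchanged.

```python
# Classical LeNet-5 connection scheme between the 6 Layer 2 maps and the 16 Layer 3 maps,
# as published in LeCun et al. [19]; whether this paper modifies it is not shown here.
LAYER2_TO_LAYER3 = [
    [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [0, 4, 5], [0, 1, 5],  # 3 contiguous maps
    [0, 1, 2, 3], [1, 2, 3, 4], [2, 3, 4, 5], [0, 3, 4, 5],
    [0, 1, 4, 5], [0, 1, 2, 5],                                        # 4 contiguous maps
    [0, 1, 3, 4], [1, 2, 4, 5], [0, 2, 3, 5],                          # 4 non-contiguous maps
    [0, 1, 2, 3, 4, 5],                                                # all 6 maps
]

# Print the 6 x 16 mark matrix of Table 1 ("X" = connected).
marks = [["X" if row in cols else "." for cols in LAYER2_TO_LAYER3] for row in range(6)]
for row in marks:
    print(" ".join(row))
```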

Table 2  Convolutional network parameters

Layer   | Input size   | Kernel size | Pooling region | Stride | Output size
Input   | 32 × 32      | 5 × 5       | -              | 1      | 28 × 28
Layer 1 | 6 @ 28 × 28  | -           | 2 × 2          | 2      | 6 @ 14 × 14
Layer 2 | 6 @ 14 × 14  | 5 × 5       | -              | 1      | 10 × 10
Layer 3 | 16 @ 10 × 10 | -           | 2 × 2          | 2      | 16 @ 5 × 5
Layer 4 | 16 @ 5 × 5   | 5 × 5       | -              | 1      | 120 @ 1 × 1
Layer 5 | 120 @ 1 × 1  | -           | -              | -      | 1 × 84
Layer 6 | 1 × 1660     | -           | -              | -      | 1 × 7
Output  | 1 × 7        |             |                |        |
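As a quick arithmetic check of the output-size column in Table 2: with no padding, each 5 × 5 convolution at stride 1 shrinks the spatial size by 4, and each 2 × 2 pooling at stride 2 halves it, giving the 28, 14, 10, 5, 1 progression. The helper below is hypothetical, not from the paper.

```python
# Reproduce the spatial sizes of Table 2 using out = (in - kernel) // stride + 1.
def conv_out(size: int, kernel: int = 5, stride: int = 1) -> int:
    return (size - kernel) // stride + 1

def pool_out(size: int, window: int = 2, stride: int = 2) -> int:
    return (size - window) // stride + 1

size = 32
for name, op in [("Input (5x5 conv)", conv_out),
                 ("Layer 1 (2x2 pool)", pool_out),
                 ("Layer 2 (5x5 conv)", conv_out),
                 ("Layer 3 (2x2 pool)", pool_out),
                 ("Layer 4 (5x5 conv)", conv_out)]:
    size = op(size)
    print(f"{name}: output {size} x {size}")  # 28, 14, 10, 5, 1
```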

Table 3  Classification accuracy of different expressions in the JAFFE expression dataset (%)

           | Angry | Disgust | Fear  | Happy | Neutral | Sad   | Surprise | Overall
Test set 1 | 100   | 80      | 100   | 100   | 100     | 90.91 | 88.89    | 94.37
Test set 2 | 100   | 90      | 90    | 81.82 | 100     | 100   | 100      | 92.96
Test set 3 | 100   | 100     | 81.82 | 90.91 | 100     | 100   | 100      | 95.77
Overall    | 100   | 89.66   | 90.63 | 90.63 | 100     | 96.77 | 96.55    | 94.37

Table 4  Classification accuracy of different expressions in the CK+ dataset (%)

           | Angry | Disgust | Fear  | Happy | Neutral | Sad   | Surprise | Overall
Test set 1 | 88.89 | 94.44   | 80    | 92.86 | 70.83   | 96    | 93.94    | 88.89
Test set 2 | 70.37 | 77.78   | 80    | 96.30 | 68      | 84    | 96.97    | 82.32
Test set 3 | 77.78 | 85.71   | 84.62 | 100   | 64      | 72    | 93.94    | 83.33
Test set 4 | 62.96 | 94.29   | 88    | 89.29 | 60      | 80    | 87.88    | 80.81
Test set 5 | 81.48 | 85.71   | 72    | 92.86 | 64      | 79.17 | 100      | 83.33
Overall    | 76.30 | 87.59   | 80.92 | 94.26 | 65.37   | 82.23 | 94.55    | 83.74

Table 5  Classification accuracy of the network with and without cross connections (%)

Method          | Number of parameters | Average accuracy on JAFFE | Average accuracy on CK+
LeNet-5         | 14 444               | 62.44                     | 32.32
Proposed method | 25 476               | 94.37                     | 83.74

Table 6  Comparison of different methods on JAFFE (%)

Source               | Method                  | Accuracy
Kumbhar et al. [28]* | Image feature           | 60 ~ 70
Praseeda et al. [5]* | SVM                     | 86.9
Proposed method      | Cross-connected LeNet-5 | 94.37

* Data taken from reference [15].
[1] Pantic M, Rothkrantz L J M. Expert system for automatic analysis of facial expressions. Image and Vision Computing, 2000, 18(11): 881-905. doi: 10.1016/S0262-8856(00)00034-2
[2] Ekman P, Friesen W V. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto, CA: Consulting Psychologists Press, 1978. https://www.researchgate.net/publication/239537771_Facial_action_coding_system_A_technique_for_the_measurement_of_facial_movement
[3] Lucey P, Cohn J F, Kanade T, Saragih J, Ambadar Z, Matthews I. The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression. In: Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). San Francisco, CA, USA: IEEE, 2010. 94-101. http://ieeexplore.ieee.org/xpls/icp.jsp?arnumber=5543262
[4] Lanitis A, Taylor C J, Cootes T F. Automatic interpretation and coding of face images using flexible models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 19(7): 743-756. doi: 10.1109/34.598231
[5] Praseeda Lekshmi V, Sasikumar M. Analysis of facial expression using Gabor and SVM. International Journal of Recent Trends in Engineering, 2009, 1(2): 47-50. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.381.5275
[6] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, Nevada, USA: NIPS, 2012. 1097-1105. http://dl.acm.org/citation.cfm?id=2999257
[7] Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks. Science, 2006, 313(5786): 504-507. doi: 10.1126/science.1127647
[8] Yu Kai, Jia Lei, Chen Yu-Qiang, Xu Wei. Deep learning: yesterday, today, and tomorrow. Journal of Computer Research and Development, 2013, 50(9): 1799-1804. doi: 10.7544/issn1000-1239.2013.20131180
[9] Wang Meng-Lai, Li Xiang, Chen Qi, Li Lan-Bo, Zhao Yan-Yun. Surveillance event detection based on CNN. Acta Automatica Sinica, 2016, 42(6): 892-903. http://www.ynkaiyun.com/CN/abstract/abstract18880.shtml
[10] Xi Xue-Feng, Zhou Guo-Dong. A survey on deep learning for natural language processing. Acta Automatica Sinica, 2016, 42(10): 1445-1465. http://www.ynkaiyun.com/CN/abstract/abstract18934.shtml
[11] Zhang Hui, Su Hong, Zhang Xue-Liang, Gao Guang-Lai. Convolutional neural network for robust pitch determination. Acta Automatica Sinica, 2016, 42(6): 959-964. http://www.ynkaiyun.com/CN/abstract/abstract18887.shtml
[12] Sui Ting-Ting, Wang Xiao-Feng. Convolutional neural networks with candidate location and multi-feature fusion. Acta Automatica Sinica, 2016, 42(6): 875-882. http://www.ynkaiyun.com/CN/abstract/abstract18878.shtml
[13] Wang Wei-Ning, Wang Li, Zhao Ming-Quan, Cai Cheng-Jia, Shi Ting-Ting, Xu Xiang-Min. Image aesthetic classification using parallel deep convolutional neural networks. Acta Automatica Sinica, 2016, 42(6): 904-914. http://www.ynkaiyun.com/CN/abstract/abstract18881.shtml
[14] Chang Liang, Deng Xiao-Ming, Zhou Ming-Quan, Wu Zhong-Ke, Yuan Ye, Yang Shuo, Wang Hong-An. Convolutional neural networks in image understanding. Acta Automatica Sinica, 2016, 42(9): 1300-1312. http://www.ynkaiyun.com/CN/abstract/abstract18919.shtml
[15] Sun Xiao, Pan Ting, Ren Fu-Ji. Facial expression recognition using ROI-KNN deep convolutional neural networks. Acta Automatica Sinica, 2016, 42(6): 883-891. http://www.ynkaiyun.com/CN/abstract/abstract18879.shtml
[16] Hubel D H, Wiesel T N. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 1962, 160(1): 106-154. doi: 10.1113/jphysiol.1962.sp006837
[17] Fukushima K, Miyake S, Ito T. Neocognitron: a neural network model for a mechanism of visual pattern recognition. IEEE Transactions on Systems, Man, and Cybernetics, 1983, SMC-13(5): 826-834. doi: 10.1109/TSMC.1983.6313076
[18] Le Cun Y, Boser B, Denker J S, Howard R E, Hubbard W, Jackel L D, Henderson D. Handwritten digit recognition with a back-propagation network. In: Advances in Neural Information Processing Systems 2. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 1989. 396-404
[19] Le Cun Y, Bottou L, Bengio Y, Haffner P. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998, 86(11): 2278-2324. doi: 10.1109/5.726791
[20] Bengio Y. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009, 2(1): 1-127. doi: 10.1561/2200000006
[21] Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS). Sardinia, Italy, 2010. 249-256
[22] Ziegel R. Modern Applied Statistics with S-PLUS (3rd edition), by Venables W N and Ripley B D, New York: Springer-Verlag, 1999. Technometrics, 2001, 43(2): 249
[23] Srivastava R K, Greff K, Schmidhuber J. Highway networks. arXiv preprint arXiv: 1505.00387, 2015
[24] Romero A, Ballas N, Kahou S E, Chassang A, Gatta C, Bengio Y. FitNets: hints for thin deep nets. arXiv preprint arXiv: 1412.6550, 2014
[25] He K M, Zhang X Y, Ren S Q, Sun J. Deep residual learning for image recognition. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. 2016. 770-778. arXiv: 1512.03385
[26] Sun Y, Wang X G, Tang X O. Deep learning face representation from predicting 10,000 classes. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Columbus, OH, USA: IEEE, 2014. 1891-1898. https://www.computer.org/csdl/proceedings/cvpr/2014/5118/00/5118b891-abs.html
[27] Zhang Ting, Li Yu-Jian, Hu Hai-He, Zhang Ya-Hong. A gender classification model based on cross-connected convolutional neural networks. Acta Automatica Sinica, 2016, 42(6): 858-865. http://www.ynkaiyun.com/CN/abstract/abstract18876.shtml
[28] Kumbhar M, Jadhav A, Patil M. Facial expression recognition based on image feature. International Journal of Computer and Communication Engineering, 2012, 1(2): 117-119. https://www.researchgate.net/publication/250922449_Facial_Expression_Recognition_Based_on_Image_Feature
Publication History
• Received: 2016-12-23
• Accepted: 2017-05-04
• Published: 2018-01-20
