
Combustion States Recognition Method of MSWI Process Based on Mixed Data Enhancement

Guo Hai-Tao, Tang Jian, Ding Hai-Xu, Qiao Jun-Fei

Citation: Guo Hai-Tao, Tang Jian, Ding Hai-Xu, Qiao Jun-Fei. Combustion states recognition method of MSWI process based on mixed data enhancement. Acta Automatica Sinica, 2024, 50(3): 560−575 doi: 10.16383/j.aas.c210843


              doi: 10.16383/j.aas.c210843

              Combustion States Recognition Method of MSWI Process Based on Mixed Data Enhancement

              Funds: Supported by National Natural Science Foundation of China (62073006, 62021003), Beijing Natural Science Foundation (4212032, 4192009), National Key Research and Development Program of the Ministry of Science and Technology (2018YFC1900800-5), and Beijing Key Laboratory of Process Automation in Mining and Metallurgy (BGRIMM-KZSKL-2020-02)
              More Information
                Author Bio:

                GUO Hai-Tao Master student at the Faculty of Information Technology, Beijing University of Technology. His main research interest is image processing of municipal solid waste incineration process

                TANG Jian Professor at the Faculty of Information Technology, Beijing University of Technology. His research interest covers small sample data modeling and intelligent control of municipal solid waste treatment process. Corresponding author of this paper

                DING Hai-Xu Ph.D. candidate at the Faculty of Information Technology, Beijing University of Technology. His research interest covers feature modeling and intelligent control of municipal solid waste incineration process

                QIAO Jun-Fei Professor at the Faculty of Information Technology, Beijing University of Technology. His research interest covers intelligent control of wastewater treatment process and structure design and optimization of neural networks

Abstract: In domestic municipal solid waste incineration (MSWI) plants, stable combustion is typically maintained by operating experts who watch the in-furnace flame to recognize the combustion state and then adjust the control strategy from their own experience. This practice suffers from a low level of automation, and its recognition results are subjective and inconsistent. Moreover, flame images of the MSWI process are heavily polluted and noisy, and data for abnormal operating conditions are scarce, so conventional object recognition methods are difficult to apply. To address these problems, a combustion state recognition method based on mixed data enhancement is proposed for the MSWI process. First, combustion states are labeled by combining domain-expert experience with the structure of the incinerator grate. Next, a deep convolutional generative adversarial network (DCGAN) composed of a coarse-tuning stage and a fine-tuning stage is designed to generate flame images for multiple operating conditions. Then, the Fréchet inception distance (FID) is used to adaptively select the generated samples. Finally, the samples are further expanded by non-generative data enhancement, and the resulting mixed enhanced data are used to build a convolutional neural network (CNN) that recognizes the combustion state. Experiments on actual operating data from an MSWI power plant show that the method effectively improves the generalization and robustness of the recognition network and achieves good recognition accuracy.
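The workflow described in the abstract (DCGAN generation, FID-based adaptive selection of generated batches, then non-generative expansion of the pooled samples) can be sketched as a small driver. The threshold value, batch granularity, and helper names below are illustrative assumptions, not code from the paper:

```python
def select_by_fid(batches, fid_scores, fid_threshold):
    """Keep generated batches whose FID against the real set is acceptable.

    `batches` is a list of generated-sample batches; `fid_scores[i]` is the FID
    of batch i. The threshold is an assumed tuning knob, not a value from the paper.
    """
    return [b for b, s in zip(batches, fid_scores) if s <= fid_threshold]

def non_generative_augment(images, flip, rotate):
    """Toy stand-in for flips/rotations: tag each sample with the op applied."""
    out = list(images)
    for img in images:
        if flip:
            out.append(("hflip", img))
        if rotate:
            out.append(("rot", img))
    return out

def build_training_set(real, generated_batches, fid_scores, fid_threshold=50.0):
    """Mixed enhancement: real data + FID-selected generated data, then expand."""
    selected = select_by_fid(generated_batches, fid_scores, fid_threshold)
    pool = list(real) + [x for batch in selected for x in batch]
    return non_generative_augment(pool, flip=True, rotate=True)
```

With a threshold of 50, a generated batch scoring FID 60 would be discarded while one scoring 40 enters the training pool alongside the real images.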
Fig.  1  Flow chart of MSWI process

Fig.  2  Strategy of combustion state recognition based on DCGAN data enhancement

Fig.  3  Schematic diagram of equal proportion structure of grate

Fig.  4  Image calibration diagram of combustion and shutdown status

Fig.  5  Structure of generation network

                Fig.  6  Structure of discrimination network
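The generation network (Fig. 5) and discrimination network (Fig. 6) are trained as an adversarial game over $V(D,G)$. A minimal per-sample sketch of the two losses, using the common non-saturating generator loss (an implementation choice, not something stated here):

```python
import math

def discriminator_loss(d_real, d_fake):
    """loss_{D,t}: D should score real samples near 1 and generated near 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    """loss_{G,t} (non-saturating form): G is rewarded when D(G(z)) is high."""
    return -math.log(d_fake)
```

At the game's equilibrium, D outputs 0.5 everywhere and the discriminator loss reaches 2 log 2 per sample pair.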

Fig.  7  Combustion line forward

Fig.  8  Combustion line normal

Fig.  9  Combustion line back

                Fig.  10  Assessment of FID for generating combustion state images during rough DCGAN iteration
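The FID used in Fig. 10 compares Gaussian fits of Inception features of real and generated images: FID = ||μ_r − μ_g||² + Tr(C_r + C_g − 2(C_r C_g)^{1/2}). A sketch over pre-extracted feature matrices (the Inception feature extractor itself is omitted here):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_real, feat_gen):
    """Fréchet inception distance between two (n_samples, dim) feature sets."""
    mu_r, mu_g = feat_real.mean(axis=0), feat_gen.mean(axis=0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_g = np.cov(feat_gen, rowvar=False)
    covmean = sqrtm(cov_r @ cov_g)       # matrix square root of C_r C_g
    if np.iscomplexobj(covmean):         # discard tiny numerical imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```

Lower is better: identical feature distributions give FID ≈ 0, which is why the paper selects generated samples whose FID against the real set is smallest.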

Fig.  11  Expansion results of combustion line forward image

Fig.  12  Expansion results of combustion line normal image

Fig.  13  Expansion results of combustion line back image

Fig.  14  Non-generative data enhancement with the proposed method
                Fig.  15  Non-generative data enhancement with random mode
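Figs. 14 and 15 contrast a designed sequence of non-generative transforms with randomly applied ones. The sketch below shows typical label-preserving image transforms in numpy; the particular operations and parameters are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def augment(img, hflip=False, rot90=False, shift=0):
    """Apply simple label-preserving transforms to an (H, W[, C]) image array."""
    out = img
    if hflip:
        out = out[:, ::-1]                 # horizontal flip
    if rot90:
        out = np.rot90(out)                # 90-degree rotation
    if shift:
        out = np.roll(out, shift, axis=1)  # circular horizontal translation
    return out
```

Each transform preserves the combustion-state label, so applying several of them per image multiplies the size of the training set without new generation cost.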

                Fig.  16  Combustion state images generated by different generation models

Table  1  Dataset partition

Dataset  Partition method   Training set  Validation set  Test set
A        Time order         9 × 8         9 × 1           9 × 1
B        Random sampling    9 × 8         9 × 1           9 × 1

Table  2  Evaluation results of data generated by different generation models

Method           FID_min  FID_average  Epoch
GAN              250.00   254.50       10000
LSGAN            58.56    51.94        3000
DCGAN            43.81    49.67        2500
Proposed method  36.10    48.51        2500

Table  3  Performance comparison of recognition models

Mode  Method                              Test accuracy    Test loss        Validation accuracy  Validation loss
A     CNN                                 0.7518±0.00245   0.6046±0.02882   0.6115±0.00212       1.6319±0.11640
A     Non-generative augmentation + CNN   0.8272±0.00206   0.6504±0.04038   0.7830±0.00183       0.9077±0.03739
A     DCGAN augmentation + CNN            0.8000±0.00098   0.8776±0.01063   0.5885±0.00396       1.9024±0.11050
A     Proposed method                     0.8482±0.00105   0.5520±0.01006   0.7269±0.00377       0.9768±0.05797
B     CNN                                 0.8926±0.00105   0.2298±0.00309   0.8519±0.00061       0.2519±0.00167
B     Non-generative augmentation + CNN   0.9371±0.00184   0.1504±0.00825   0.9704±0.00055       0.1093±0.01037
B     DCGAN augmentation + CNN            0.9000±0.00123   0.3159±0.01150   0.8445±0.00207       0.2913±0.00396
B     Proposed method                     0.9407±0.00367   0.2019±0.01498   0.9741±0.00044       0.0699±0.00195

A1  Symbols and their descriptions

Symbol  Description
$D$  Discriminator
$G$  Generator
$ V(D,G)$  Original objective function of the GAN
${\boldsymbol{z}} $  Random noise in the latent space
$ D^*$  Optimal solution of D in $\mathop {\max }\nolimits_D V \left({D,G} \right)$ with the parameters of G fixed
${D_{{\text{JS}}}}$  JS divergence
${R_{jk}}$  Result at row j, column k of the image after scanning by the convolution kernel
${H_{j - u,k - v}}$  Convolution kernel
${F_{u,v}}$  Image
$X$  Combustion state dataset containing the forward, normal and backward subsets, i.e. the input set $[ { {\boldsymbol{x} }_{{1} } };{ {\boldsymbol{x} }_{{2} } };{ {\boldsymbol{x} }_{{3} } }; \cdots ;{ {\boldsymbol{x} }_{\rm{a}}} \cdots ]$ of the discrimination network in the coarse-tuning DCGAN for combustion images, i.e. $ \left[ {{X_{{\rm{real}}}};{X_{{\rm{false}}}}} \right]$
$ X_{{\rm{FW}}}$  Combustion line forward dataset
$ X_{{\rm{NM}}}$  Combustion line normal dataset
$ X_{{\rm{BC}}}$  Combustion line backward dataset
$ X'_{{\rm{FW}}}$  Training subset of the combustion line forward dataset
$ X'_{{\rm{NM}}}$  Training subset of the combustion line normal dataset
$ X'_{{\rm{BC}}}$  Training subset of the combustion line backward dataset
$ X''_{{\rm{FW}}}$  Test and validation subset of the combustion line forward dataset
$ X''_{{\rm{NM}}}$  Test and validation subset of the combustion line normal dataset
$ X''_{{\rm{BC}}}$  Test and validation subset of the combustion line backward dataset
$ {D_t}(\cdot, \cdot )$  Set of predictions of the discrimination network in the coarse-tuning DCGAN sub-module when its parameters are ${\theta _{D,t}}$
$ {D_{t+1}}(\cdot, \cdot )$  Set of predictions of the discrimination network in the coarse-tuning DCGAN sub-module when its parameters are ${\theta _{D,t+1}}$
$ Y_{D,t}$  Set of ground-truth values for training the discrimination network in the t-th game of the coarse-tuning DCGAN sub-module
$ Y_{G,t}$  Set of ground-truth values for training the generation network in the t-th game of the coarse-tuning DCGAN sub-module
$ loss_{D,t}$  Loss for updating the discrimination network in the t-th game of the coarse-tuning DCGAN sub-module
$ loss_{G,t}$  Loss for updating the generation network in the t-th game of the coarse-tuning DCGAN sub-module
$ X_{{\rm{real}}}$  Real data taking part in the game in the coarse-tuning DCGAN sub-module
$ X_{{\rm{false}},t}$  Generated data taking part in the t-th game in the coarse-tuning DCGAN sub-module
$ G_t({\boldsymbol{z}})$  Virtual samples obtained from random noise through the generation network in the t-th game of the coarse-tuning DCGAN sub-module
${S_{D,t}}$  Structural parameters of the discrimination network obtained in the coarse-tuning DCGAN
${S_{G,t}}$  Structural parameters of the generation network obtained in the coarse-tuning DCGAN
${\theta _{D,t}}$  Parameters of the discrimination network before its update in the t-th game of the coarse-tuning DCGAN sub-module
${\theta _{G,t}}$  Parameters of the generation network before its update in the t-th game of the coarse-tuning DCGAN sub-module
$ X_{{\rm{real}}}^{{\rm{FW}}}$  Real data taking part in the game in the forward fine-tuning DCGAN sub-module
$ X_{{\rm{false}},t}^{{\rm{FW}}}$  Generated data taking part in the t-th game in the forward fine-tuning DCGAN sub-module
$ X_{{\rm{real}}}^{{\rm{NM}}}$  Real data taking part in the game in the normal fine-tuning DCGAN sub-module
$ X_{{\rm{false}},t}^{{\rm{NM}}}$  Generated data taking part in the t-th game in the normal fine-tuning DCGAN sub-module
$ X_{{\rm{real}}}^{{\rm{BC}}}$  Real data taking part in the game in the backward fine-tuning DCGAN sub-module
$ X_{{\rm{false}},t}^{{\rm{BC}}}$  Generated data taking part in the t-th game in the backward fine-tuning DCGAN sub-module
$ D_t^{{\rm{FW}}}(\cdot, \cdot )$  Set of predictions of the discrimination network in the forward fine-tuning DCGAN sub-module when its parameters are $\theta _{D,t}^{{\text{FW}}}$
$ D_t^{{\rm{NM}}}(\cdot, \cdot )$  Set of predictions of the discrimination network in the normal fine-tuning DCGAN sub-module when its parameters are $\theta _{D,t}^{{\text{NM}}}$
$ {D}_{t}^{\text{BC}}(\cdot, \cdot ) $  Set of predictions of the discrimination network in the backward fine-tuning DCGAN sub-module when its parameters are $\theta _{D,t}^{{\text{BC}}}$
$ D_{t+1}^{{\rm{FW}}}(\cdot, \cdot )$  Set of predictions of the discrimination network in the forward fine-tuning DCGAN sub-module when its parameters are $\theta _{D,t + 1}^{{\text{FW}}}$
$ D_{t+1}^{{\rm{NM}}}(\cdot, \cdot )$  Set of predictions of the discrimination network in the normal fine-tuning DCGAN sub-module when its parameters are $\theta _{D,t + 1}^{{\text{NM}}}$
$ D_{t+1}^{{\rm{BC}}}(\cdot, \cdot )$  Set of predictions of the discrimination network in the backward fine-tuning DCGAN sub-module when its parameters are $\theta _{D,t + 1}^{{\text{BC}}}$
$ Y_{D,t}^{{\rm{FW}}}$  Set of ground-truth values for training D in the t-th game of the forward fine-tuning DCGAN sub-module
$ Y_{G,t}^{{\rm{FW}}}$  Set of ground-truth values for training G in the t-th game of the forward fine-tuning DCGAN sub-module
$ Y_{D,t}^{{\rm{NM}}}$  Set of ground-truth values for training D in the t-th game of the normal fine-tuning DCGAN sub-module
$ Y_{G,t}^{{\rm{NM}}}$  Set of ground-truth values for training G in the t-th game of the normal fine-tuning DCGAN sub-module
$ Y_{D,t}^{{\rm{BC}}}$  Set of ground-truth values for training D in the t-th game of the backward fine-tuning DCGAN sub-module
$ Y_{G,t}^{{\rm{BC}}}$  Set of ground-truth values for training G in the t-th game of the backward fine-tuning DCGAN sub-module
$ loss_{D,t}^{{\rm{FW}}}$  Loss for updating D in the t-th game of the forward fine-tuning DCGAN sub-module
$ loss_{G,t}^{{\rm{FW}}}$  Loss for updating G in the t-th game of the forward fine-tuning DCGAN sub-module
$ loss_{D,t}^{{\rm{NM}}}$  Loss for updating D in the t-th game of the normal fine-tuning DCGAN sub-module
$ loss_{G,t}^{{\rm{NM}}}$  Loss for updating G in the t-th game of the normal fine-tuning DCGAN sub-module
$ loss_{D,t}^{{\rm{BC}}}$  Loss for updating D in the t-th game of the backward fine-tuning DCGAN sub-module
$ loss_{G,t}^{{\rm{BC}}}$  Loss for updating G in the t-th game of the backward fine-tuning DCGAN sub-module
$\theta _{D,t}^{{\text{FW}}}$  Parameters of the discrimination network before its update in the t-th game of the forward DCGAN sub-module
$\theta _{G,t}^{{\text{FW}}}$  Parameters of the generation network before its update in the t-th game of the forward DCGAN sub-module
$\theta _{D,t}^{{\text{NM}}}$  Parameters of the discrimination network before its update in the t-th game of the normal DCGAN sub-module
$\theta _{G,t}^{{\text{NM}}}$  Parameters of the generation network before its update in the t-th game of the normal DCGAN sub-module
$\theta _{D,t}^{{\text{BC}}}$  Parameters of the discrimination network before its update in the t-th game of the backward DCGAN sub-module
$\theta _{G,t}^{{\text{BC}}}$  Parameters of the generation network before its update in the t-th game of the backward DCGAN sub-module
${\widehat Y_{{\text{ CNN }},t}}$  Set of predictions of the CNN model at its t-th update in the combustion state recognition module
$los{s_{{\text{ CNN }},t}}$  Loss of the CNN at its t-th update in the combustion state recognition module
$ \theta _{{\rm{ CNN }},t}$  Network parameters of the CNN at its t-th update in the combustion state recognition module
$ loss$  Loss of the neural network
${\boldsymbol{x} }_{{a} }$  The a-th input image of the neural network
$y_a $  Output of the neural network for the a-th input image
$ D_t(X)$  Set of predictions of the discrimination network, i.e. $ {D_t}(\cdot, \cdot )$
$L $  Loss function
$\delta_i $  Error of layer i
$O_i $  Output of layer i
$W_i$  All weight parameters of layer i
$B_i $  All bias parameters of layer i
$ {\nabla _{{W_{i - 1}}}}$  Current gradient of the weights of layer $i-1 $
$ {\nabla _{{B_{i - 1}}}}$  Current gradient of the biases of layer $i-1 $
$ {\theta _{D,t}}$  Parameters of the discrimination network at the t-th iteration
$ {m _{D,t}}$  First-order momentum of the discrimination network at the t-th iteration
$ {v _{D,t}}$  Second-order momentum of the discrimination network at the t-th iteration
$\alpha $  Learning rate
$\gamma $  Small positive real number
$ {\nabla _{D,t}}$  Gradient of the discrimination network parameters at the t-th iteration
$\beta_1 $  Adam hyperparameter
$\beta_2 $  Adam hyperparameter
$ {\eta _{D,t}}$  Descent gradient computed at the t-th iteration
$ {\widehat m_{D,t}}$  Bias-corrected first-order momentum of the discrimination network at the t-th iteration
$ {\widehat v_{D,t}}$  Bias-corrected second-order momentum of the discrimination network at the t-th iteration
$Y $  Set of ground-truth values of the neural network
$ f(X)$  Set of predictions of the neural network
$p $  Probability distribution
${p_{\text{r}}}$  Probability distribution of real images
${p_{\text{g}}}$  Probability distribution of generated images
${p_{\boldsymbol{z}}}$  Normal distribution followed by ${\boldsymbol{z}}$
Cov  Covariance matrix
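The Adam symbols listed above ($m_{D,t}$, $v_{D,t}$, $\alpha$, $\beta_1$, $\beta_2$, $\gamma$) combine in the standard Adam update rule; a scalar sketch of one parameter step, with $\gamma$ in the usual epsilon role:

```python
def adam_step(theta, grad, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, gamma=1e-8):
    """One Adam update: returns (new_theta, new_m, new_v) for a scalar parameter."""
    m = beta1 * m + (1 - beta1) * grad            # first-order momentum m_t
    v = beta2 * v + (1 - beta2) * grad ** 2       # second-order momentum v_t
    m_hat = m / (1 - beta1 ** t)                  # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    eta = alpha * m_hat / (v_hat ** 0.5 + gamma)  # descent step eta_t
    return theta - eta, m, v
```

On the first step (t = 1) the bias correction makes the update roughly alpha times the gradient's sign, regardless of the gradient's magnitude.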
Publication history
• Received: 2021-09-06
• Accepted: 2021-12-02
• Published online: 2022-02-10
• Issue date: 2024-03-29