doi: 10.16383/j.aas.c210843
Combustion States Recognition Method of MSWI Process Based on Mixed Data Enhancement
-
1.
Faculty of Information Technology, Beijing University of Technology, Beijing 100124
-
2.
Beijing Laboratory of Smart Environmental Protection, Beijing 100124
-
-
關(guān)鍵詞:
- 城市固廢焚燒 /
- 深度卷積生成對抗網(wǎng)絡(luò ) /
- 燃燒狀態(tài)識別 /
- 非生成式數據增強 /
- 混合數據增強
Abstract: The municipal solid waste incineration (MSWI) process usually relies on operating experts who observe the flame inside the furnace to recognize the combustion state and then draw on their own experience to modify the control strategy and maintain stable combustion. This manual mode suffers from a low level of intelligence, and its recognition results are subjective and arbitrary. Traditional recognition methods are difficult to apply to the MSWI process, whose flame images are strongly polluted and noisy and whose samples under abnormal conditions are scarce. To solve these problems, a combustion state recognition method for the MSWI process based on mixed data enhancement is proposed. First, combustion states are labeled by combining the experience of domain experts with the design structure of the furnace grate. Next, a deep convolutional generative adversarial network (DCGAN) consisting of a coarse-tuning level and a fine-tuning level is designed to acquire flame images under multiple operating conditions. Then, the Fréchet inception distance (FID) is used to adaptively select the generated samples. Finally, the samples are expanded a second time by a non-generative data enhancement strategy, and a convolutional neural network (CNN) is built on the mixed enhanced data to recognize the combustion state. Experiments on actual operating data from an MSWI plant show that the method effectively improves the generalization and robustness of the recognition network and achieves good recognition accuracy.
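The coarse-to-fine generation and mixed-enhancement flow summarized in the abstract can be sketched as a single driver. Every argument below (`train_dcgan`, `fid_of`, `augment`, the threshold rule) is a hypothetical stand-in for illustration, not the paper's API:

```python
def mixed_data_enhancement(images_by_state, train_dcgan, fid_of, augment,
                           fid_threshold):
    """Sketch of the mixed data-enhancement pipeline:
    1) a coarse DCGAN is trained on all combustion states pooled together;
    2) a fine DCGAN per state (forward / normal / backward burning line)
       is initialised from the coarse model;
    3) generated samples are kept only if their FID to the real samples
       of that state is acceptable;
    4) non-generative augmentation expands the pool a second time."""
    # Coarse stage: one DCGAN over all states pooled together.
    pooled = [img for imgs in images_by_state.values() for img in imgs]
    coarse_gan = train_dcgan(pooled, init=None)

    enhanced = {}
    for state, imgs in images_by_state.items():
        # Fine stage: per-state DCGAN initialised from the coarse model.
        fine_gan = train_dcgan(imgs, init=coarse_gan)
        generated = fine_gan.sample(len(imgs))
        # Adaptive selection: discard the generated batch when its FID
        # against the real images of this state is too large.
        kept = list(generated) if fid_of(imgs, generated) < fid_threshold else []
        # Second expansion with non-generative augmentation.
        expanded = list(imgs) + kept
        enhanced[state] = expanded + augment(expanded)
    return enhanced
```

With stub components the driver shows the bookkeeping only; the real generators, FID, and augmentations plug into the same slots.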
Fig. 2 Strategy of combustion state recognition based on DCGAN data enhancement
Fig. 10 FID evaluation of generated combustion-state images during the coarse-tuning DCGAN iterations
Table 1 Dataset partition
Dataset   Partition method      Training set   Validation set   Test set
A         Chronological order   9 × 8          9 × 1            9 × 1
B         Random sampling       9 × 8          9 × 1            9 × 1
Table 2 Evaluation results of data generated by different generation models
Method            FIDmin   FIDaverage   Epoch
GAN               250.00   254.50       10000
LSGAN             58.56    51.94        3000
DCGAN             43.81    49.67        2500
Proposed method   36.10    48.51        2500
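The FID values above compare Gaussian fits to the real and generated feature distributions: FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}). A minimal NumPy sketch, assuming feature vectors have already been extracted by a fixed network (the standard choice is Inception-v3; any extractor works for illustration), plus a simple threshold-based selection rule (the paper's exact adaptive rule is not reproduced here):

```python
import numpy as np

def fid(real_feats, gen_feats):
    """Fréchet inception distance between two sets of feature vectors
    (rows = samples): the distance between Gaussians fitted to the
    real and generated feature distributions."""
    mu_r, mu_g = real_feats.mean(0), gen_feats.mean(0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    # Tr((S_r S_g)^{1/2}) equals the sum of square roots of the
    # eigenvalues of S_r S_g, which are real and non-negative for
    # positive-semidefinite covariance matrices.
    eigvals = np.linalg.eigvals(cov_r @ cov_g)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0, None)).sum()
    return float(((mu_r - mu_g) ** 2).sum()
                 + np.trace(cov_r) + np.trace(cov_g) - 2.0 * tr_sqrt)

def select_by_fid(real_feats, candidate_batches, threshold):
    """Keep a generated batch only if its FID to the real features is
    below a threshold (the threshold rule is our stand-in, not the
    paper's exact selection criterion)."""
    return [b for b in candidate_batches if fid(real_feats, b) < threshold]
```

Identical distributions give an FID near zero, and the distance grows as the generated distribution drifts from the real one, which is what drives the adaptive selection.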
Table 3 Performance comparison of recognition models
Partition A
Method                                  Test set accuracy   Test set loss      Validation set accuracy   Validation set loss
CNN                                     0.7518 ± 0.00245    0.6046 ± 0.02882   0.6115 ± 0.00212          1.6319 ± 0.11640
Non-generative data enhancement + CNN   0.8272 ± 0.00206    0.6504 ± 0.04038   0.7830 ± 0.00183          0.9077 ± 0.03739
DCGAN data enhancement + CNN            0.8000 ± 0.00098    0.8776 ± 0.01063   0.5885 ± 0.00396          1.9024 ± 0.11050
Proposed method                         0.8482 ± 0.00105    0.5520 ± 0.01006   0.7269 ± 0.00377          0.9768 ± 0.05797
Partition B
CNN                                     0.8926 ± 0.00105    0.2298 ± 0.00309   0.8519 ± 0.00061          0.2519 ± 0.00167
Non-generative data enhancement + CNN   0.9371 ± 0.00184    0.1504 ± 0.00825   0.9704 ± 0.00055          0.1093 ± 0.01037
DCGAN data enhancement + CNN            0.9000 ± 0.00123    0.3159 ± 0.01150   0.8445 ± 0.00207          0.2913 ± 0.00396
Proposed method                         0.9407 ± 0.00367    0.2019 ± 0.01498   0.9741 ± 0.00044          0.0699 ± 0.00195
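The "non-generative data enhancement" rows refer to classical image transforms applied to the real and selected generated samples. The paper's exact transform set is not listed here; flips, rotations, and brightness jitter are common choices, so the sketch below is illustrative only:

```python
import numpy as np

def non_generative_augment(image, rng=None):
    """Expand one flame image (H x W x C uint8 array) with simple
    non-generative transforms. The specific transforms are our
    assumptions (flip, rotation, brightness jitter), standing in
    for the paper's unstated augmentation set."""
    rng = np.random.default_rng(0) if rng is None else rng
    out = [np.flip(image, axis=1)]                  # horizontal flip
    out.append(np.rot90(image, k=1, axes=(0, 1)))   # 90-degree rotation
    scale = rng.uniform(0.8, 1.2)                   # brightness jitter
    out.append(np.clip(image.astype(np.float32) * scale, 0, 255)
               .astype(image.dtype))
    return out
```

Such transforms enrich sample diversity without a generative model, which is why they combine well with the DCGAN-generated samples in the mixed strategy.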
A1 Symbols and their descriptions
Symbol  Description
$D$: discriminator
$G$: generator
$V(D,G)$: original GAN objective function
$\boldsymbol{z}$: random noise in the latent space
$D^*$: optimal discriminator in $\max_D V(D,G)$ with the parameters of $G$ fixed
$D_{\rm JS}$: Jensen–Shannon (JS) divergence
$R_{jk}$: result at row $j$, column $k$ after the convolution kernel scans the image
$H_{j-u,k-v}$: convolution kernel
$F_{u,v}$: image
$X$: combustion-state dataset containing the forward, normal, and backward data, i.e. the input set $[\boldsymbol{x}_1;\boldsymbol{x}_2;\boldsymbol{x}_3;\cdots;\boldsymbol{x}_a;\cdots]$ of the discriminator in the coarse-tuning DCGAN, i.e. $[X_{\rm real}; X_{\rm false}]$
$X_{\rm FW}$, $X_{\rm NM}$, $X_{\rm BC}$: datasets with the combustion line moved forward, normal, and moved backward, respectively
$X'_{\rm FW}$, $X'_{\rm NM}$, $X'_{\rm BC}$: training subsets of the forward, normal, and backward datasets
$X''_{\rm FW}$, $X''_{\rm NM}$, $X''_{\rm BC}$: test/validation subsets of the forward, normal, and backward datasets
$D_t(\cdot,\cdot)$: set of discriminator predictions in the coarse-tuning DCGAN submodule when the discriminator parameters are $\theta_{D,t}$
$D_{t+1}(\cdot,\cdot)$: set of discriminator predictions in the coarse-tuning DCGAN submodule when the discriminator parameters are $\theta_{D,t+1}$
$Y_{D,t}$, $Y_{G,t}$: sets of true values for training the discriminator and the generator in the $t$-th game of the coarse-tuning DCGAN submodule
$loss_{D,t}$, $loss_{G,t}$: losses for updating the discriminator and the generator in the $t$-th game of the coarse-tuning DCGAN submodule
$X_{\rm real}$: real data taking part in the game in the coarse-tuning DCGAN submodule
$X_{{\rm false},t}$: generated data taking part in the $t$-th game in the coarse-tuning DCGAN submodule
$G_t(\boldsymbol{z})$: virtual samples obtained from random noise through the generator in the $t$-th game of the coarse-tuning DCGAN submodule
$S_{D,t}$, $S_{G,t}$: structural parameters of the discriminator and the generator obtained in the coarse-tuning DCGAN
$\theta_{D,t}$, $\theta_{G,t}$: parameters of the discriminator and the generator before the update in the $t$-th game of the coarse-tuning DCGAN submodule
$X_{\rm real}^{\rm FW}$, $X_{\rm real}^{\rm NM}$, $X_{\rm real}^{\rm BC}$: real data taking part in the game in the fine-tuning DCGAN submodule for the forward, normal, and backward combustion line, respectively
$X_{{\rm false},t}^{\rm FW}$, $X_{{\rm false},t}^{\rm NM}$, $X_{{\rm false},t}^{\rm BC}$: generated data taking part in the $t$-th game in the corresponding fine-tuning DCGAN submodule
$D_t^{\rm FW}(\cdot,\cdot)$, $D_t^{\rm NM}(\cdot,\cdot)$, $D_t^{\rm BC}(\cdot,\cdot)$: discriminator prediction sets in the fine-tuning DCGAN submodules when the discriminator parameters are $\theta_{D,t}^{\rm FW}$, $\theta_{D,t}^{\rm NM}$, $\theta_{D,t}^{\rm BC}$, respectively
$D_{t+1}^{\rm FW}(\cdot,\cdot)$, $D_{t+1}^{\rm NM}(\cdot,\cdot)$, $D_{t+1}^{\rm BC}(\cdot,\cdot)$: discriminator prediction sets in the fine-tuning DCGAN submodules when the discriminator parameters are $\theta_{D,t+1}^{\rm FW}$, $\theta_{D,t+1}^{\rm NM}$, $\theta_{D,t+1}^{\rm BC}$, respectively
$Y_{D,t}^{\rm FW}$, $Y_{D,t}^{\rm NM}$, $Y_{D,t}^{\rm BC}$: true-value sets for training $D$ in the $t$-th game of the corresponding fine-tuning DCGAN submodule
$Y_{G,t}^{\rm FW}$, $Y_{G,t}^{\rm NM}$, $Y_{G,t}^{\rm BC}$: true-value sets for training $G$ in the $t$-th game of the corresponding fine-tuning DCGAN submodule
$loss_{D,t}^{\rm FW}$, $loss_{D,t}^{\rm NM}$, $loss_{D,t}^{\rm BC}$: losses for updating $D$ in the $t$-th game of the corresponding fine-tuning DCGAN submodule
$loss_{G,t}^{\rm FW}$, $loss_{G,t}^{\rm NM}$, $loss_{G,t}^{\rm BC}$: losses for updating $G$ in the $t$-th game of the corresponding fine-tuning DCGAN submodule
$\theta_{D,t}^{\rm FW}$, $\theta_{D,t}^{\rm NM}$, $\theta_{D,t}^{\rm BC}$: discriminator parameters before the update in the $t$-th game of the corresponding fine-tuning DCGAN submodule
$\theta_{G,t}^{\rm FW}$, $\theta_{G,t}^{\rm NM}$, $\theta_{G,t}^{\rm BC}$: generator parameters before the update in the $t$-th game of the corresponding fine-tuning DCGAN submodule
$\widehat{Y}_{{\rm CNN},t}$: set of CNN predictions at the $t$-th update of the combustion-state recognition module
$loss_{{\rm CNN},t}$: CNN loss at the $t$-th update of the recognition module
$\theta_{{\rm CNN},t}$: CNN parameters at the $t$-th update of the recognition module
$loss$: loss of the neural network
$\boldsymbol{x}_a$: the $a$-th input image of the neural network
$y_a$: output of the neural network for the $a$-th input image
$D_t(X)$: set of discriminator predictions, i.e. $D_t(\cdot,\cdot)$
$L$: loss function
$\delta_i$: error of layer $i$
$O_i$: output of layer $i$
$W_i$, $B_i$: all weight and bias parameters of layer $i$
$\nabla_{W_{i-1}}$, $\nabla_{B_{i-1}}$: current gradients of the weights and biases of layer $i-1$
$m_{D,t}$, $v_{D,t}$: first- and second-order momenta of the discriminator at the $t$-th step
$\alpha$: learning rate
$\gamma$: small positive real number
$\nabla_{D,t}$: gradient of the discriminator parameters at the $t$-th step
$\beta_1$, $\beta_2$: Adam hyperparameters
$\eta_{D,t}$: descent step computed at the $t$-th step
$\widehat{m}_{D,t}$, $\widehat{v}_{D,t}$: bias-corrected first- and second-order momenta of the discriminator at the $t$-th step
$Y$: set of true values of the neural network
$f(X)$: set of neural-network predictions
$p$: probability distribution
$p_{\rm r}$, $p_{\rm g}$: probability distributions of real and generated images
$p_{\boldsymbol{z}}$: normal distribution followed by $\boldsymbol{z}$
${\rm Cov}$: covariance matrix
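Several of the symbols above fit together in two standard formulas, written here in their textbook forms (the original GAN objective of Goodfellow et al. and the Adam update of Kingma and Ba) using the table's notation; they are not reproduced from the paper's own equations:

```latex
% Original GAN objective and the optimal discriminator for fixed G:
\min_G \max_D V(D,G) =
  \mathbb{E}_{\boldsymbol{x}\sim p_{\mathrm{r}}}\left[\log D(\boldsymbol{x})\right]
  + \mathbb{E}_{\boldsymbol{z}\sim p_{\boldsymbol{z}}}\left[\log\left(1 - D(G(\boldsymbol{z}))\right)\right],
\qquad
D^*(\boldsymbol{x}) = \frac{p_{\mathrm{r}}(\boldsymbol{x})}{p_{\mathrm{r}}(\boldsymbol{x}) + p_{\mathrm{g}}(\boldsymbol{x})}

% Adam update of the discriminator parameters at game t:
m_{D,t} = \beta_1 m_{D,t-1} + (1-\beta_1)\,\nabla_{D,t}, \qquad
v_{D,t} = \beta_2 v_{D,t-1} + (1-\beta_2)\,\nabla_{D,t}^{2}

\widehat{m}_{D,t} = \frac{m_{D,t}}{1-\beta_1^{t}}, \qquad
\widehat{v}_{D,t} = \frac{v_{D,t}}{1-\beta_2^{t}}, \qquad
\eta_{D,t} = \alpha\,\frac{\widehat{m}_{D,t}}{\sqrt{\widehat{v}_{D,t}}+\gamma}, \qquad
\theta_{D,t+1} = \theta_{D,t} - \eta_{D,t}
```

Here $\gamma$ plays the role of the small constant that keeps the denominator away from zero, matching its description in the table.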