2023 Impact Factor (CJCR): 2.845

Indexed in: Chinese Core Journals, EI, China Science and Technology Core Journals, Scopus, CSCD, Science Abstracts (UK)


Steganographer Detection via Multiple-instance Learning Graph Convolutional Networks

Zhong Sheng-Hua, Zhang Zhi

Citation: Zhong Sheng-Hua, Zhang Zhi. Steganographer detection via multiple-instance learning graph convolutional networks. Acta Automatica Sinica, 2024, 50(4): 771–789. doi: 10.16383/j.aas.c220775


doi: 10.16383/j.aas.c220775
Funds: Supported by the Natural Science Foundation of Guangdong Province (2023A1515012685, 2023A1515011296) and the National Natural Science Foundation of China (62002230, 62032015)

Author Bio:

ZHONG Sheng-Hua  Associate professor at the College of Computer Science and Software Engineering, Shenzhen University. Her research interest covers multimedia content analysis and affective brain-machine interfaces. Corresponding author of this paper. E-mail: csshzhong@szu.edu.cn

ZHANG Zhi  Research assistant at the College of Computer Science and Software Engineering, Shenzhen University, and Ph.D. candidate in the Department of Computing, The Hong Kong Polytechnic University. Her research interest covers steganographer detection and electroencephalography signal analysis. E-mail: zhi271.zhang@connect.polyu.hk

• Abstract: Steganographer detection builds models to identify, among users sharing batches of images, the steganographer who embeds secret messages into images for covert communication; it is of practical importance for countering the illegal use of steganography. This paper proposes a steganographer detection algorithm based on a multiple-instance learning graph convolutional network (MILGCN), which formalizes steganographer detection as a multiple-instance learning (MIL) task. The commonness-enhancement graph convolutional network (GCN) module and the attention graph readout module designed in this paper adaptively highlight the pattern features of the positive instances in a bag, building discriminative bag representations for steganographer detection. Experiments show that the proposed model can withstand multiple batch steganography methods and their corresponding spreading strategies.
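The attention graph readout sketched in the abstract can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: the attention parameters `V` and `w` and the tanh scoring form of $g$ are assumptions for illustration. Instance features $z_i$ are scored by $g$, softmax-normalized into contributions $p_i$, and combined into the bag representation $u_x$, mirroring the notation of Table 1.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_readout(Z, V, w):
    """Attention-based MIL readout (illustrative sketch).

    Z : (n, d) instance features output by the GCN module.
    V : (d, k), w : (k,) -- hypothetical attention parameters of g.
    Returns the bag representation u and per-instance contributions p.
    """
    scores = np.tanh(Z @ V) @ w   # g(z_i): one attention score per instance
    p = softmax(scores)           # contributions p_i, summing to 1
    u = p @ Z                     # bag representation: sum_i p_i * z_i
    return u, p

rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 4))       # a bag of n = 8 instances, d = 4 features
V = rng.normal(size=(4, 3))
w = rng.normal(size=3)
u, p = attention_readout(Z, V, w)
print(u.shape, round(float(p.sum()), 6))  # (4,) 1.0
```

Because the contributions are softmax-normalized, instances whose features resemble the dominant (positive) pattern can receive most of the weight, which is how a bag-level representation becomes discriminative.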
• Fig. 1  Steganographer detection framework based on multiple-instance learning graph convolutional network

  Fig. 2  Two modules in the steganographer detection framework ((a) The commonness enhancement graph convolutional network module; (b) The attention readout module)

  Fig. 3  Detection accuracy of different graph-based steganographer detection methods when the shared stego images account for 10% to 100% of all images and the steganographer uses different steganography in testing

  Fig. 4  Visualization of the graph structures corresponding to a steganographer and a normal user
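The graph structures visualized in Fig. 4 connect the instances of a bag by pairwise similarity. As a hedged sketch of how such an adjacency $A$ could be formed from instance features $h_i$ (cosine similarity and top-$k$ sparsification here are illustrative assumptions, not the paper's exact edge-weight rule):

```python
import numpy as np

def build_adjacency(H, k=2):
    """Build a similarity graph over a bag's instances (illustrative).

    H : (n, d) instance feature vectors h_1..h_n.
    A_ij is cosine similarity, kept only for the k most similar
    neighbours N_i; both choices are assumptions for illustration.
    """
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    S = Hn @ Hn.T                    # pairwise cosine similarity
    np.fill_diagonal(S, -np.inf)     # exclude self-loops when picking N_i
    A = np.zeros_like(S)
    for i in range(len(S)):
        nbrs = np.argsort(S[i])[-k:] # indices of the k nearest neighbours
        A[i, nbrs] = S[i, nbrs]
    return np.maximum(A, A.T)        # symmetrize the graph

H = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
A = build_adjacency(H, k=1)
print(A.shape)  # (4, 4)
```

With features like these, the two similar pairs of instances end up linked while dissimilar ones stay disconnected, which is the kind of cluster structure Fig. 4 contrasts between a steganographer and a normal user.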

Table 1  Variable symbols and their descriptions

Symbol: Meaning
$B_x$: bag of instances corresponding to user $x$
$x_i$: the $i$-th instance in bag $B_x$
$m$: total number of bags in the current dataset
$n$: total number of instances in the current bag
$v_i$: the $i$-th input instance feature of the commonness-enhancement graph convolution module
$h_i$: instance feature vector obtained by applying the feature extraction function $f$ to $v_i$
$H$: matrix representation of the bag, $H=[h_1,\cdots,h_n]^{\rm{T}}$
$A_{ij}$: weight of the edge between the $i$-th and $j$-th instance nodes in the graph
$N_i$: neighbor nodes of the $i$-th instance node in the graph
$A$: adjacency matrix of the graph built from bag $B_x$
$r_i$: instance feature vector corresponding to $h_i$ after graph convolution
$R$: matrix representation of the bag, $R=[r_1,\cdots,r_n]^{\rm{T}}$
$s_i$: instance feature vector corresponding to $r_i$ after graph normalization
$S$: matrix representation of the bag, $S=[s_1,\cdots,s_n]^{\rm{T}}$
$t_i$: the $i$-th output instance feature of the commonness-enhancement graph convolution module
$T$: matrix representation of the bag, $T=[t_1,\cdots,t_n]^{\rm{T}}$
$f$: feature extraction function
$g$: attention computation function
$z_i$: output of the commonness-enhancement graph convolution module, and the $i$-th input instance feature of the attention readout module
$Z_x$: matrix representation of user $x$'s bag, formed by the instance feature vectors from the commonness-enhancement module
$u_x$: feature-vector representation of user $x$'s bag
$p_i$: contribution of the $i$-th instance in the current bag to the bag representation
$\rho_i$: prediction for the $i$-th bag
$Y_i$: ground-truth label of the $i$-th bag
$L$: the loss function designed in this paper
$L_{\rm{bag}}$: the multiple-instance learning classification loss
$L_{\rm{entropy}}$: the entropy regularization loss
$L_{\rm{contrastive}}$: the contrastive learning loss
$\lambda_1, \lambda_2, \lambda_3$: hyperparameters weighting $L_{\rm{bag}}$, $L_{\rm{entropy}}$, $L_{\rm{contrastive}}$
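Read together, the definitions of $p_i$, $u_x$, and the loss terms imply the following relations. This is a reconstruction from the table's wording, not a formula quoted from the paper: the bag representation aggregates the instances through their contributions, and the total loss is the weighted sum of the three losses.

```latex
u_x = \sum_{i=1}^{n} p_i \, z_i , \qquad
L = \lambda_1 L_{\mathrm{bag}} + \lambda_2 L_{\mathrm{entropy}} + \lambda_3 L_{\mathrm{contrastive}}
```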

Table 2  Steganographer detection accuracy (%) when steganographers use the same image steganography (S-UNIWARD), with embedding payloads from 0.05 bpp to 0.4 bpp

Category          Model        0.05   0.1   0.2   0.3   0.4  (bpp)
State of the art  MDNNSD          4    54   100   100   100
                  XuNet_SD        2     2    71   100   100
GAN-based         SSGAN_SD        0     1     1     2     4
GNN-based         GAT             2     3     3     3     4
                  GraphSAGE      28    88   100   100   100
                  AGNN           24    99   100   100   100
                  GCN            19    96   100   100   100
                  SAGCN          72   100   100   100   100
MIL-based         MILNN_self     15    87   100   100   100
                  MILNN_git      18    96   100   100   100
Ours              MILGCN-MF      47   100   100   100   100
                  MILGCN         74   100   100   100   100

Table 3  Detection accuracy (%) of SRNet-AVG and SRNet-MILGCN when the shared stego images account for 10% to 100% of all images and the steganographer uses the same steganography (S-UNIWARD) in testing

Method           10    30    50    70    90   100  (% shared)
SRNet-AVG        26   100   100   100   100   100
SRNet-MILGCN     35   100   100   100   100   100

Table 4  Steganographer detection accuracy (%) of MILGCN and SAGCN when users share different numbers of images, with embedding payloads from 0.05 bpp to 0.4 bpp

Model    Images   0.05   0.1   0.2   0.3   0.4  (bpp)
MILGCN   100        35    96   100   100   100
         200        74   100   100   100   100
         400        96   100   100   100   100
         600       100   100   100   100   100
SAGCN    100        31    96   100   100   100
         200        72   100   100   100   100
         400        91   100   100   100   100
         600        91   100   100   100   100

Table 5  Steganographer detection accuracy (%) of MILGCN under steganography mismatch, when the shared stego images account for 5% of all images

Test steganography   HUGO-BD   WOW   HILL   MiPOD
Detection accuracy         6     4      5       5

Table 6  Steganographer detection accuracy (%) of MILGCN when the model is trained with HILL and the shared stego images account for 10% or 30% of all images

Stego image ratio   HUGO-BD   WOW   HILL   MiPOD
10%                       9     6      7       4
30%                      37    48     49      47

Table 7  Steganographer detection accuracy (%) when steganographers use the same image steganography (J-UNIWARD), with embedding payloads from 0.05 bpnzAC to 0.4 bpnzAC

Model       0.05   0.1   0.2   0.3   0.4  (bpnzAC)
JRM_SD        11    17    25    31    48
PEV_SD         0     0     1     1     5
GraphSAGE     13    68   100   100   100
AGNN          13    84   100   100   100
GCN           16    88   100   100   100
SAGCN         17    92   100   100   100
MILGCN        25    92   100   100   100

Table 8  Steganographer detection accuracy (%) of different methods when the steganographer uses nsF5 or UERD in the testing phase, with embedding payloads from 0.05 bpnzAC to 0.4 bpnzAC

Steganography   Model       0.05   0.1   0.2   0.3   0.4  (bpnzAC)
nsF5            PEV_SD         0     1     9    52    93
                GraphSAGE     21    91   100   100   100
                AGNN          20    90   100   100   100
                GCN           24    90   100   100   100
                SAGCN         29    92   100   100   100
                MILGCN        22    90   100   100   100
UERD            GraphSAGE     25    91   100   100   100
                AGNN          29    94   100   100   100
                GCN           33    96   100   100   100
                SAGCN         33    98   100   100   100
                MILGCN        42    99   100   100   100

Table 9  Analysis of computational complexity

Method   Avg. runtime per batch (s)   FLOPs per sample (GFLOPs)   Parameters (K)
MILNN    0.001                        0.003                       12.92
GCN      0.830                        2.480                       67.97
SAGCN    2.210                        7.410                       67.94
MILGCN   0.020                        0.070                       74.18
Publication history
• Received: 2022-09-30
• Accepted: 2023-04-12
• Published online: 2023-08-09
• Issue published: 2024-04-26