Document-level Relation Extraction With Entity and Context Information

Huang He-Yan, Yuan Chang-Sen, Feng Chong

Citation: Huang He-Yan, Yuan Chang-Sen, Feng Chong. Document-level relation extraction with entity and context information. Acta Automatica Sinica, xxxx, xx(x): x–xx. doi: 10.16383/j.aas.c220966

              doi: 10.16383/j.aas.c220966
              More Information
                Author Bio:

Huang He-Yan Professor at the School of Computer Science and Technology, Beijing Institute of Technology. Her research interest covers intelligent processing of language information, social networks, text big data analysis, and cloud computing. E-mail: hhy63@bit.edu.cn

Yuan Chang-Sen Ph.D. candidate at the School of Computer Science and Technology, Beijing Institute of Technology. His research interest covers knowledge graphs and information extraction. Corresponding author of this paper. E-mail: yuanchangsen@bit.edu.cn

Feng Chong Professor at the School of Computer Science and Technology, Beijing Institute of Technology. His research interest covers machine translation, information extraction, and information retrieval. E-mail: fengchong@bit.edu.cn

• Abstract: Document-level relation extraction identifies the relations between entity pairs in a document. Compared with traditional sentence-level relation extraction, the document-level task is closer to real-world applications, but it poses new challenges such as cross-sentence reasoning over entity pairs and awareness of context information. This paper proposes a document-level relation extraction method that fuses entity and context information (Fuse entity and context information, FECI), which contains two modules: an entity information extraction module and a context information extraction module. The entity information extraction module automatically extracts features that represent the relation of an entity pair from the two entities themselves. The context information extraction module extracts different contextual relation features from the document according to the mention positions of the entity pair. Experiments on three document-level relation extraction datasets show significant improvements.
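A minimal sketch of this two-module design, assuming a PyTorch-style encoder output; the module and variable names are illustrative (this is not the authors' released code, and the attention-style context pooling is only one plausible reading of "extracting context features from mention positions"):

import torch
import torch.nn as nn

class FECISketch(nn.Module):
    def __init__(self, hidden=768, num_relations=97):
        super().__init__()
        self.entity_proj = nn.Linear(2 * hidden, hidden)   # entity-pair features
        self.context_proj = nn.Linear(hidden, hidden)      # mention-based context features
        self.classifier = nn.Linear(2 * hidden, num_relations)

    def forward(self, token_reps, head_mention_idx, tail_mention_idx):
        # token_reps: [seq_len, hidden] encoder output for one document
        head = token_reps[head_mention_idx].mean(dim=0)    # average head-entity mentions
        tail = token_reps[tail_mention_idx].mean(dim=0)    # average tail-entity mentions
        entity_feat = torch.tanh(self.entity_proj(torch.cat([head, tail], dim=-1)))
        # context: pool over all tokens, weighted by the entity-pair query
        scores = token_reps @ (head + tail)                # [seq_len]
        context = torch.softmax(scores, dim=0) @ token_reps
        context_feat = torch.tanh(self.context_proj(context))
        # fuse entity and context features for relation classification
        return self.classifier(torch.cat([entity_feat, context_feat], dim=-1))

reps = torch.randn(128, 768)                               # stand-in encoder output
logits = FECISketch()(reps, torch.tensor([3, 40]), torch.tensor([77]))
print(logits.shape)                                        # torch.Size([97])

The 97-way classifier mirrors the 97 relation types of DocRED (Table 1).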
Fig. 1 An example of document-level relation extraction from DocRED

Fig. 2 Architecture of the proposed model, which contains two parts: the entity information extraction module and the context information extraction module

Fig. 3 A case study example from the DocRED development set

Table 1 Statistics of the datasets in experiments

Statistic                 DocRED   CDR   GDA
Training set              3053     500   23353
Development set           1000     500   5839
Test set                  1000     500   1000
Relation types            97       2     2
Relations per document    19.5     7.6   5.4

Table 2 Hyper-parameters of the model

Parameter                    DocRED              CDR                 GDA
Batch size                   4                   4                   4
Epochs                       30                  30                  10
Learning rate (encoder)      $5\times 10^{-5}$   $5\times 10^{-5}$   $5\times 10^{-5}$
Learning rate (classifier)   $1\times 10^{-4}$   $1\times 10^{-4}$   $1\times 10^{-4}$
Group size                   64                  64                  64
Dropout                      0.1                 0.1                 0.1
Gradient clipping            1.0                 1.0                 1.0
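Read as a configuration, the settings are identical across datasets except for the number of epochs on GDA. A plain-Python rendering for reference (the key names are my own; only the values come from Table 2):

# Hyper-parameters from Table 2 as plain config dicts; key names are
# hypothetical, the values are the paper's.
CONFIGS = {
    "DocRED": dict(batch_size=4, epochs=30, lr_encoder=5e-5, lr_classifier=1e-4,
                   group_size=64, dropout=0.1, grad_clip=1.0),
    "CDR":    dict(batch_size=4, epochs=30, lr_encoder=5e-5, lr_classifier=1e-4,
                   group_size=64, dropout=0.1, grad_clip=1.0),
    "GDA":    dict(batch_size=4, epochs=10, lr_encoder=5e-5, lr_classifier=1e-4,
                   group_size=64, dropout=0.1, grad_clip=1.0),
}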

Table 3 Main results on the development and test sets of DocRED

Model                    Dev Ign F1 (%)   Dev F1 (%)   Test Ign F1 (%)   Test F1 (%)
CNN                      41.58            43.45        40.33             42.26
LSTM                     48.44            50.68        47.71             50.07
Bi-LSTM                  48.87            50.94        48.78             51.06
Context-Aware            48.94            51.09        48.40             50.70
HIN-GloVe                51.06            52.95        51.15             53.30
GAT-GloVe                45.17            51.44        47.36             49.51
GCNN-GloVe               46.22            51.52        49.59             51.62
EoG-GloVe                45.94            52.15        49.48             51.82
AGGCN-GloVe              46.29            52.47        48.89             51.45
LSR-GloVe                48.82            55.17        52.15             54.18
BERT-RE_BASE             –                54.16        –                 53.20
RoBERTa_BASE             53.85            56.05        53.52             55.77
BERT-Two-Step_BASE       –                54.42        –                 53.92
HIN-BERT_BASE            54.29            56.31        53.70             55.60
CorefBERT_BASE           55.32            57.51        54.54             56.96
LSR-BERT_BASE            52.43            59.00        56.97             59.05
BERT-E_BASE              56.51            58.52        –                 –
GAIN_BASE                59.14            61.22        59.00             61.24
FECI_BASE (our model)    59.74            61.38        59.81             61.22
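On the two DocRED metrics in Table 3: F1 scores the predicted (head, tail, relation) triples against the gold ones, while Ign F1 additionally ignores relational facts that already appear in the training set. A simplified set-based sketch (the official DocRED scorer differs in details):

# Simplified DocRED-style scoring over sets of (head, tail, relation) triples.
def f1(pred, gold):
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def ign_f1(pred, gold, train_facts):
    # ignore triples already seen as facts in the training set
    return f1(pred - train_facts, gold - train_facts)

gold = {("A", "B", "born_in"), ("B", "C", "capital_of")}
pred = {("A", "B", "born_in"), ("A", "C", "capital_of")}
print(f1(pred, gold), ign_f1(pred, gold, {("A", "B", "born_in")}))  # 0.5 0.0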

Table 4 F1 of the test sets on the CDR and GDA datasets

Model                    CDR    GDA
BRAN                     62.1   –
CNN                      62.3   –
EoG                      63.6   81.5
LSR-BERT                 64.8   82.2
SciBERT_BASE             65.1   82.5
SciBERT-E_BASE           65.9   83.3
FECI_BASE (our model)    69.2   83.7

Table 5 Ablation study of FECI_BASE on the development set

Model          Ign F1 (%)   F1 (%)   P (M)   T (s)
FECI_BASE      59.74        61.38    133.4   2962.4
w/o Entity     58.16        60.07    132.2   2831.7
w/o Context    58.67        60.89    130.5   482.3

Table 6 Results with noisy entities and noisy contexts for FECI_BASE on the development set

Model               Ign F1 (%)   F1 (%)
FECI_BASE           59.74        61.38
Head entity         58.42        60.14
Tail entity         57.97        60.08
Entity pair         58.91        60.85
Tradition           57.42        59.72
Co-occurrence       58.27        61.01
Non co-occurrence   56.72        58.86

Table 7 Results with different context information for FECI_BASE on the development set

Model       Ign F1 (%)   F1 (%)
FECI_BASE   59.74        61.38
Random      58.47        60.61
Mean        59.56        60.94
Tradition   58.19        60.06

Table 8 Efficiency of different methods on the development set

Model           P (M)   Train T (s)   Decoder T (s)
LSR-BERT_BASE   112.1   282.9         38.8
GAIN_BASE       217.0   2271.6        817.2
FECI_BASE       133.4   2962.4        829.0
References

[1] Yu M, Yin W P, Hasan K S, Santos C D, Xiang B, Zhou B W. Improved neural relation detection for knowledge base question answering. arXiv preprint arXiv:1704.06194, 2017.
[2] Chen Z Y, Chang C H, Chen Y P, Nayak J, Ku L W. UHop: An unrestricted-hop relation extraction framework for knowledge-based question answering. arXiv preprint arXiv:1904.01246, 2019.
[3] Yu H Z, Li H S, Mao D H, Cai Q. A relationship extraction method for domain knowledge graph construction. World Wide Web, 2020, 23(2): 735–753. doi: 10.1007/s11280-019-00765-y
[4] Ristoski P, Gentile A L, Alba A, Gruhl D, Welch S. Large-scale relation extraction from web documents and knowledge graphs with human-in-the-loop. Journal of Web Semantics, 2020, 60: 100546. doi: 10.1016/j.websem.2019.100546
[5] Macdonald E, Barbosa D. Neural relation extraction on Wikipedia tables for augmenting knowledge graphs. In: Proceedings of the 29th ACM International Conference on Information & Knowledge Management. New York, USA: ACM, 2020. 2133–2136.
[6] Mintz M, Bills S, Snow R, Jurafsky D. Distant supervision for relation extraction without labeled data. In: Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Suntec, Singapore: ACL, 2009. 1003–1011.
[7] Lin Y K, Shen S Q, Liu Z Y, Luan H B, Sun M S. Neural relation extraction with selective attention over instances. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Berlin, Germany: ACL, 2016. 2124–2133.
[8] Miwa M, Bansal M. End-to-end relation extraction using LSTMs on sequences and tree structures. arXiv preprint arXiv:1601.00770, 2016.
[9] Zhang Y H, Qi P, Manning C D. Graph convolution over pruned dependency trees improves relation extraction. arXiv preprint arXiv:1809.10185, 2018.
[10] Guo Z J, Zhang Y, Lu W. Attention guided graph convolutional networks for relation extraction. arXiv preprint arXiv:1906.07510, 2019.
[11] Yao Y, Ye D M, Li P, Han X, Lin Y K, Liu Z H, et al. DocRED: A large-scale document-level relation extraction dataset. arXiv preprint arXiv:1906.06127, 2019.
[12] Zhou W X, Huang K, Ma T Y, Huang J. Document-level relation extraction with adaptive thresholding and localized context pooling. arXiv preprint arXiv:2010.11304, 2020.
[13] Zeng S, Xu R X, Chang B B, Li L. Double graph based reasoning for document-level relation extraction. arXiv preprint arXiv:2009.13752, 2020.
[14] Santos C N D, Xiang B, Zhou B W. Classifying relations by ranking with convolutional neural networks. arXiv preprint arXiv:1504.06580, 2015.
[15] Cho K, Merrienboer B V, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, et al. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[16] Liu Y, Wei F R, Li S J, Ji H, Zhou M, Wang H F. A dependency-based neural network for relation classification. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Short Papers). Beijing, China: ACL, 2015. 285–290.
[17] Christopoulou F, Miwa M, Ananiadou S. A walk-based model on entity graphs for relation extraction. arXiv preprint arXiv:1902.07023, 2019.
[18] Christopoulou F, Miwa M, Ananiadou S. Connecting the dots: Document-level neural relation extraction with edge-oriented graphs. arXiv preprint arXiv:1909.00228, 2019.
[19] Yang B S, Mitchell T. Joint extraction of events and entities within a document context. arXiv preprint arXiv:1609.03632, 2016.
[20] Swampillai K, Stevenson M. Extracting relations within and across sentences. In: Proceedings of Recent Advances in Natural Language Processing. Hissar, Bulgaria, 2011. 25–32.
[21] Jia R, Wong C, Poon H. Document-level n-ary relation extraction with multiscale representation learning. arXiv preprint arXiv:1904.02347, 2019.
[22] Verga P, Strubell E, McCallum A. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. arXiv preprint arXiv:1802.10569, 2018.
[23] Nan G S, Guo Z J, Sekulic I, Lu W. Reasoning with latent structure refinement for document-level relation extraction. arXiv preprint arXiv:2005.06312, 2020.
[24] Wang D F, Hu W, Cao E, Sun W J. Global-to-local neural networks for document-level relation extraction. arXiv preprint arXiv:2009.10359, 2020.
[25] Devlin J, Chang M W, Lee K, Toutanova K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2019.
[26] Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A N, et al. Attention is all you need. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA: Curran Associates Inc., 2017. 6000–6010.
[27] Sennrich R, Haddow B, Birch A. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2016.
[28] Li J, Sun Y P, Johnson R J, Sciaky D, Wei C H, Leaman R, et al. BioCreative V CDR task corpus: A resource for chemical disease relation extraction. Database: The Journal of Biological Databases and Curation, 2016.
[29] Wu Y, Luo R B, Leung H C M, Ting H, Lam T. RENET: A deep learning approach for extracting gene-disease associations from literature. In: Proceedings of the International Conference on Research in Computational Molecular Biology. Washington, USA: Springer, 2019. 272–284.
[30] Liu Y H, Ott M, Goyal N, Du J F, Joshi M, Chen D Q, et al. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[31] Beltagy I, Lo K, Cohan A. SciBERT: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676, 2019.
[32] Micikevicius P, Narang S, Alben J, Diamos G, Elsen E, Garcia D, et al. Mixed precision training. arXiv preprint arXiv:1710.03740, 2018.
[33] Loshchilov I, Hutter F. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2019.
[34] Velickovic P, Cucurull G, Casanova A, Romero A, Liò P, Bengio Y. Graph attention networks. arXiv preprint arXiv:1710.10903, 2018.
[35] Verga P, Strubell E, McCallum A. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. arXiv preprint arXiv:1802.10569, 2018.
[36] Wang H, Focke C, Sylvester R, Mishra N, Wang W. Fine-tune BERT for DocRED with two-step process. arXiv preprint arXiv:1906.04684, 2019.
[37] Tang H Z, Cao Y N, Zhang Z Y, Cao J X, Fang F, Wang S, et al. HIN: Hierarchical inference network for document-level relation extraction. arXiv preprint arXiv:2003.12754, 2020.
[38] Pennington J, Socher R, Manning C D. GloVe: Global vectors for word representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar: ACL, 2014. 1532–1543.
[39] Nan G S, Guo Z J, Sekulic I, Lu W. Reasoning with latent structure refinement for document-level relation extraction. arXiv preprint arXiv:2005.06312, 2020.
[40] Ye D M, Lin Y K, Du J J, Liu Z H, Sun M S, Liu Z Y. Coreferential reasoning learning for language representation. arXiv preprint arXiv:2004.06870, 2020.
[41] Nguyen D Q, Verspoor K. Convolutional neural networks for chemical-disease relation extraction are improved with character-based word embeddings. arXiv preprint arXiv:1805.10586, 2018.
Publication history
• Received: 2022-12-12
• Accepted: 2023-03-29
• Published online: 2023-08-28
