Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/180124
Title: MF-MNER: multi-models fusion for MNER in Chinese clinical electronic medical records
Authors: Du, Haoze
Xu, Jiahao
Du, Zhiyong
Chen, Lihui
Ma, Shaohui
Wei, Dongqing
Wang, Xianfang
Keywords: Engineering
Issue Date: 2024
Source: Du, H., Xu, J., Du, Z., Chen, L., Ma, S., Wei, D. & Wang, X. (2024). MF-MNER: multi-models fusion for MNER in Chinese clinical electronic medical records. Interdisciplinary Sciences, Computational Life Sciences, 16(2), 489-502. https://dx.doi.org/10.1007/s12539-024-00624-z
Journal: Interdisciplinary Sciences, Computational Life Sciences 
Abstract: To address the poor entity recognition performance caused by the lack of Chinese annotation in clinical electronic medical records, this paper proposes a multi-medical-entity recognition method, MF-MNER, which fuses BART, Bi-LSTM, and CRF. First, after the electronic medical records are cleaned, encoded, and segmented, the resulting semantic representations are dynamically fused using a bidirectional autoregressive transformer (BART) model. Then, sequential information is captured by a bidirectional long short-term memory (Bi-LSTM) network. Finally, a conditional random field (CRF) decodes and outputs the multi-task entity recognition results. On the CCKS2019 dataset, micro-avg Precision, macro-avg Recall, and weighted-avg Precision reach 0.880, 0.887, and 0.883, while the micro-avg, macro-avg, and weighted-avg F1-scores reach 0.875, 0.876, and 0.876, respectively. Under the same dataset conditions, our method outperforms existing models on all three evaluation metrics (micro average, macro average, weighted average); in the weighted-average case, Precision, Recall, and F1-score are 19.64%, 15.67%, and 17.58% higher than those of the existing BERT-BiLSTM-CRF model, respectively. Experiments on an actual clinical dataset with MF-MNER yield Precision, Recall, and F1-score of 0.638, 0.825, and 0.719 under micro-avg evaluation; 0.685, 0.800, and 0.733 under macro-avg evaluation; and 0.647, 0.825, and 0.722 under weighted-avg evaluation.
The above results show that MF-MNER integrates the advantages of the BART, Bi-LSTM, and CRF layers, significantly improving the performance of downstream named entity recognition with only a small amount of annotation, and achieves excellent recall, which has practical significance. Source code and datasets to reproduce the results in this paper are available at https://github.com/xfwang1969/MF-MNER .
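The CRF output layer described in the abstract does not pick a label for each token independently; it decodes the globally best tag sequence by combining per-token emission scores with tag-to-tag transition scores, which is standard Viterbi dynamic programming for a linear-chain CRF. A minimal pure-Python sketch of that decoding step (the scores and two-tag setup below are illustrative, not values from the paper):

```python
def viterbi_decode(emissions, transitions):
    """Return (best_score, best_path) for a linear-chain CRF.

    emissions   : T x K list, emissions[t][j] = score of tag j at token t
    transitions : K x K list, transitions[i][j] = score of moving tag i -> tag j
    """
    K = len(emissions[0])
    dp = list(emissions[0])        # dp[j]: best score of a path ending in tag j
    backpointers = []
    for scores in emissions[1:]:
        new_dp, bp = [], []
        for j in range(K):
            # best previous tag i maximising dp[i] + transitions[i][j]
            best_i = max(range(K), key=lambda i: dp[i] + transitions[i][j])
            new_dp.append(dp[best_i] + transitions[best_i][j] + scores[j])
            bp.append(best_i)
        dp = new_dp
        backpointers.append(bp)
    # backtrack from the best final tag
    best_last = max(range(K), key=lambda j: dp[j])
    path = [best_last]
    for bp in reversed(backpointers):
        path.append(bp[path[-1]])
    path.reverse()
    return dp[best_last], path

# Toy example: 3 tokens, 2 tags, neutral transitions.
score, path = viterbi_decode([[2, 0], [0, 2], [2, 0]], [[0, 0], [0, 0]])
# → (6, [0, 1, 0])
```

In the paper's pipeline the emission scores would come from the Bi-LSTM over BART representations, and the transitions are learned CRF parameters; this sketch only shows why decoding considers the whole sequence at once (with "sticky" transitions such as `[[1, -2], [-2, 1]]`, the middle token above stays at tag 0 despite its emission favouring tag 1).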
URI: https://hdl.handle.net/10356/180124
ISSN: 1913-2751
DOI: 10.1007/s12539-024-00624-z
Schools: School of Electrical and Electronic Engineering 
Rights: © 2024 The Author(s). Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:EEE Journal Articles

Files in This Item:
File: s12539-024-00624-z.pdf (1.62 MB, Adobe PDF)

SCOPUS™ Citations: 5 (updated May 4, 2025)
Page view(s): 63 (updated May 6, 2025)
Download(s): 11 (updated May 6, 2025)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.