A Fine-grained Network for Joint Multimodal Entity-Relation Extraction

Yuan Li, Yi Cai, Jingyu Xu, Qing Li, Tao Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Joint multimodal entity-relation extraction (JMERE) is a challenging task that involves two joint subtasks, i.e., named entity recognition and relation extraction, from multimodal data such as text sentences with associated images. Previous JMERE methods have primarily employed either pipeline models, which apply pre-trained unimodal models separately and ignore the interaction between modalities, or word-pair relation tagging methods, which neglect neighboring word pairs. To address these limitations, we propose a fine-grained network for JMERE. Specifically, we introduce a fine-grained alignment module that utilizes phrase-patch alignment to establish connections between text phrases and visual objects; this module can learn consistent multimodal representations from multimodal data. Furthermore, we address the issue of task-irrelevant image information by proposing a gate fusion module, which mitigates the impact of image noise and ensures a balanced representation between image objects and text representations. Finally, we design a multi-word decoder that enables ensemble prediction of tags for each word pair. This approach leverages the predicted results of neighboring word pairs, improving the ability to extract multi-word entities. Experimental results on a benchmark dataset demonstrate the superiority of our proposed model over state-of-the-art models in JMERE.
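As a rough illustration of how the first two components described in the abstract (phrase-patch alignment and gate fusion) could fit together, the following is a minimal PyTorch-style sketch. All shapes, projections, and names here are assumptions made for exposition, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FineGrainedFusion(nn.Module):
    """Illustrative sketch of (1) phrase-patch alignment via attention
    from text phrases to image patches, and (2) a gate fusion that damps
    task-irrelevant image noise. Dimensions and projection layers are
    assumptions for clarity, not the paper's actual architecture."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)       # project phrases to queries
        self.k = nn.Linear(dim, dim)       # project patches to keys
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, phrases: torch.Tensor, patches: torch.Tensor) -> torch.Tensor:
        # phrases: (batch, n_phrases, dim); patches: (batch, n_patches, dim)
        attn = torch.matmul(self.q(phrases), self.k(patches).transpose(-2, -1))
        attn = F.softmax(attn / phrases.size(-1) ** 0.5, dim=-1)
        aligned = torch.matmul(attn, patches)  # one visual feature per phrase

        # Gate in (0, 1) balances visual evidence against the text itself.
        g = torch.sigmoid(self.gate(torch.cat([phrases, aligned], dim=-1)))
        return g * aligned + (1.0 - g) * phrases

# Usage with toy shapes: 8 phrases attending over 49 image patches.
fusion = FineGrainedFusion(dim=768)
out = fusion(torch.randn(2, 8, 768), torch.randn(2, 49, 768))
print(out.shape)  # torch.Size([2, 8, 768])
```

The convex combination in the final line reflects the balancing role the abstract attributes to the gate fusion module: when the gate saturates toward zero for a dimension, noisy visual evidence is effectively ignored in favor of the text representation.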
Original language: English
Journal: IEEE Transactions on Knowledge and Data Engineering
Publication status: Accepted/In press - 17 Oct 2024
