Multi task learning with general vector space for cross-lingual semantic relation detection

Rizka W. Sholikah*, Agus Z. Arifin, Chastine Fatichah, Ayu Purwarianti

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Semantic relation detection plays an important role in natural language processing. In a supervised approach, the training process requires a sufficient amount of labeled data. However, in low-resource languages, labeled data are limited, whereas in rich-resource languages they are available in large quantities. In addition, many studies model semantic relation detection as a single-task problem without considering generalization across related tasks. Hence, a strategy is needed that can exploit the labeled data available in rich-resource languages and generalize the model to improve relation identification in a cross-lingual setting. In this paper, we propose a framework for identifying cross-lingual semantic relations using multi-task learning with a general vector space. The proposed method is designed to construct a general vector space and to identify semantic relations. Experiments were conducted on three datasets: Indonesian–Arabic, English–Arabic, and English–Indonesian. The results show that multi-task learning with a general vector space can address the problem of cross-lingual semantic relation identification, with the synonym and hypernym tasks reaching accuracies of 84.9% and 84.8%, respectively.
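To illustrate the general idea described in the abstract, the sketch below shows one plausible way to combine a shared ("general") vector space with task-specific heads for synonym and hypernym detection. This is a minimal, hypothetical example, not the authors' implementation: the class name, layer sizes, embedding dimension, and training loop are all assumptions made for illustration.

```python
# Minimal multi-task sketch (assumed architecture, not the paper's exact model):
# a shared projection maps word embeddings from either language into a common
# vector space; two heads classify synonym and hypernym relations on word pairs.
import torch
import torch.nn as nn

class MultiTaskRelationModel(nn.Module):
    def __init__(self, embed_dim=300, shared_dim=128):
        super().__init__()
        # Shared projection into the general vector space.
        self.shared = nn.Sequential(
            nn.Linear(embed_dim, shared_dim),
            nn.ReLU(),
        )
        # Task-specific heads over the concatenated pair representation.
        self.synonym_head = nn.Linear(2 * shared_dim, 2)
        self.hypernym_head = nn.Linear(2 * shared_dim, 2)

    def forward(self, w1, w2, task):
        h1, h2 = self.shared(w1), self.shared(w2)
        pair = torch.cat([h1, h2], dim=-1)
        if task == "synonym":
            return self.synonym_head(pair)
        return self.hypernym_head(pair)

# Joint training: alternate batches from both tasks so the shared space
# is shaped by synonym and hypernym supervision at the same time.
model = MultiTaskRelationModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for task in ["synonym", "hypernym"]:
    # Dummy batch: 8 word pairs with 300-dim embeddings and binary labels.
    w1, w2 = torch.randn(8, 300), torch.randn(8, 300)
    labels = torch.randint(0, 2, (8,))
    logits = model(w1, w2, task)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In such a setup, the shared projection is what both tasks update, so cross-lingual pairs from either task can improve the general vector space used by the other.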

Original language: English
Pages (from-to): 2161-2169
Number of pages: 9
Journal: Journal of King Saud University - Computer and Information Sciences
Volume: 34
Issue number: 5
Publication status: Published - May 2022

Keywords

  • Cross-lingual semantic relation
  • General vector space
  • Hypernym
  • Multi-task learning
  • Synonym

