Semantic relation detection based on multi-task learning and cross-lingual-view embedding

Rizka Wakhidatus Sholikah*, Agus Zainal Arifin, Chastine Fatichah, Ayu Purwarianti

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

Automatic extraction of semantic relations is an important task in NLP. Various methods have been developed using either pattern-based or distributional approaches. However, existing research focuses only on single-task modeling, without considering the possibility of generalizing across tasks. In addition, existing methods use only one view, from the task language, as the input representation, which may lack features; this is especially the case for low-resource languages. Therefore, in this paper we propose a framework for semantic relation classification based on a multi-task architecture and cross-lingual-view embedding. The framework has two main stages: data augmentation based on pseudo-parallel corpora, and a multi-task architecture with cross-lingual-view embedding. Extensive experiments with the proposed framework have been conducted. The results show that using a rich-resource language in the cross-lingual-view embedding can support a low-resource language, as reflected by an accuracy of 85.8% and an F1-score of 87.6%. The comparison also shows that our proposed model outperforms other state-of-the-art methods.
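The two ideas named in the abstract, a cross-lingual-view input built from both the low-resource task language and a rich-resource support language, and multiple task-specific heads sharing that input, can be illustrated with a minimal toy sketch. Everything here is assumed for illustration only (the word pairs, the translation lookup, the two hypothetical tasks, and the random embeddings); it is not the paper's actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings for two "views": the low-resource task language and a
# rich-resource support language (all vectors random, for illustration).
DIM = 8
emb_low = {w: rng.normal(size=DIM) for w in ["kucing", "hewan", "anjing"]}
emb_rich = {w: rng.normal(size=DIM) for w in ["cat", "animal", "dog"]}
translate = {"kucing": "cat", "hewan": "animal", "anjing": "dog"}

def cross_lingual_view(pair):
    """Concatenate both language views for a (word1, word2) pair."""
    vecs = []
    for w in pair:
        vecs.append(emb_low[w])              # task-language view
        vecs.append(emb_rich[translate[w]])  # support-language view
    return np.concatenate(vecs)              # shared input representation

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Multi-task setup: two hypothetical relation-detection tasks share the
# cross-lingual input but each has its own classification head.
IN = 4 * DIM
W_hyper = rng.normal(size=(2, IN)) * 0.1  # head: hypernymy yes/no
W_syn = rng.normal(size=(2, IN)) * 0.1    # head: synonymy yes/no

x = cross_lingual_view(("kucing", "hewan"))
p_hyper = softmax(W_hyper @ x)  # each head outputs its own distribution
p_syn = softmax(W_syn @ x)
```

In a trained model, losses from both heads would be backpropagated into the shared representation, which is what lets the tasks regularize each other; the concatenated rich-resource view is what compensates for the sparse features of the low-resource language.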

Original language: English
Pages (from-to): 33-45
Number of pages: 13
Journal: International Journal of Intelligent Engineering and Systems
Volume: 13
Issue number: 3
DOIs
Publication status: Published - 2020

Keywords

  • Cross-lingual-view embedding
  • Distributional approach
  • Multi-task learning
  • Semantic relation
