SASA: Semantic-Aware Contrastive Learning Framework with Separated Attention for Triple Classification

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Knowledge Graphs (KGs) often suffer from unreliable knowledge, which restricts their utility. Triple Classification (TC) aims to determine the validity of triples in KGs. Recently, text-based methods that learn entity and relation representations from natural language descriptions have significantly improved the generalization capabilities of TC models and set new performance benchmarks. However, two critical challenges remain. First, existing methods often ignore effective semantic interaction among different KG components. Second, most approaches adopt a single binary classification training objective, leading to insufficient semantic representation learning. To address these challenges, we propose **SASA**, a novel framework designed to enhance TC models via a separated attention mechanism and semantic-aware contrastive learning (CL). Specifically, we first propose a separated attention mechanism that encodes triples into decoupled contextual representations and then fuses them in a more effective, interactive way. We then introduce semantic-aware hierarchical CL as an auxiliary training objective, covering both local-level and global-level CL, to guide models toward stronger discriminative capabilities and more sufficient semantic learning. Experimental results on two benchmark datasets demonstrate that SASA significantly outperforms state-of-the-art methods, advancing state-of-the-art accuracy by +5.9% on FB15k-237 and +3.4% on YAGO3-10.
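The contrastive objective the abstract describes is not specified in detail here, but auxiliary CL losses of this kind are typically InfoNCE-style: pull a representation toward its matching positive and push it away from in-batch negatives. The following is a minimal numpy sketch of that general idea, not SASA's actual loss; the function name, temperature value, and use of in-batch negatives are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.07):
    """InfoNCE-style contrastive loss (illustrative, not SASA's exact loss).

    Row i of `positives` is the positive pair for row i of `anchors`;
    every other row in the batch serves as a negative."""
    # L2-normalise so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # the diagonal holds each anchor's positive pair
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
# perfectly aligned anchor/positive pairs should score a lower loss
# than randomly mismatched ones
loss_aligned = info_nce_loss(emb, emb)
loss_random = info_nce_loss(emb, rng.normal(size=(4, 8)))
```

A hierarchical variant, as the abstract suggests, would apply such a loss at more than one granularity (e.g. token-level/local and triple-level/global) and combine it with the binary classification objective.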


💡 Research Summary

The paper introduces SASA, a novel framework for knowledge‑graph triple classification that tackles two persistent shortcomings of existing text‑based approaches: insufficient semantic interaction among the head entity, relation, and tail entity, and the reliance on a single binary classification loss that limits representation learning. SASA’s architecture consists of two main components. First, a dual‑tower encoder separates a triple (h, r, t) into two textual inputs: a concatenated head‑relation sequence (h +
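To make the dual-tower idea concrete, here is a minimal numpy sketch of encoding a head-relation sequence and a tail sequence separately, then fusing them with single-head cross-attention. The token table, pooling, and fusion scheme are all placeholder assumptions standing in for the paper's transformer towers and separated attention mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # hidden width (illustrative)
# toy embedding table standing in for a pretrained text encoder
vocab = {w: rng.normal(size=D) for w in
         ["barack", "obama", "born", "in", "honolulu", "hawaii"]}

def encode(tokens):
    """Stand-in for one encoder tower: per-token contextual vectors."""
    return np.stack([vocab[t] for t in tokens])

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(q_tokens, kv_tokens):
    """Fuse the two towers: head-relation tokens attend over tail tokens,
    then mean-pool into a single triple representation."""
    Q, K = encode(q_tokens), encode(kv_tokens)
    attn = softmax(Q @ K.T / np.sqrt(D))  # (|q|, |kv|) attention weights
    fused = attn @ K                      # tail context for each query token
    return fused.mean(axis=0)             # pooled vector for classification

# tower 1: head + relation text; tower 2: tail text
triple_vec = cross_attend(["barack", "obama", "born", "in"],
                          ["honolulu", "hawaii"])
```

The pooled `triple_vec` would then feed both the binary classification head and the contrastive objective described above; the real model would use learned projections for queries, keys, and values rather than raw embeddings.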

