Bridging the Divide: End-to-End Sequence-Graph Learning
📝 Original Info
- Title: Bridging the Divide: End-to-End Sequence-Graph Learning
- ArXiv ID: 2510.25126
- Date: 2025-10-29
- Authors: Not available (no author information provided in the paper)
📝 Abstract
Many real-world datasets are both sequential and relational: each node carries an event sequence while edges encode interactions. Existing methods in sequence modeling and graph modeling often neglect one modality or the other. We argue that sequences and graphs are not separate problems but complementary facets of the same dataset, and should be learned jointly. We introduce BRIDGE, a unified end-to-end architecture that couples a sequence encoder with a GNN under a single objective, allowing gradients to flow across both modules and learning task-aligned representations. To enable fine-grained token-level message passing among neighbors, we add TOKENXATTN, a token-level cross-attention layer that passes messages between events in neighboring sequences. Across two settings, friendship prediction (Brightkite) and fraud detection (Amazon), BRIDGE consistently outperforms static GNNs, temporal graph methods, and sequence-only baselines on ranking and classification metrics.
💡 Deep Analysis
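The paper does not provide implementation details here, but the core idea of TOKENXATTN, letting each event token in a node's sequence attend over the event tokens of neighboring sequences, can be sketched as plain scaled dot-product cross-attention. The function names and the mean aggregation over neighbors below are assumptions for illustration, not the paper's actual design:

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: each query token attends
    over all key/value tokens taken from one neighboring sequence."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)        # (T_q, T_k) similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values                       # (T_q, d) messages

def tokenxattn_message(node_tokens, neighbor_token_seqs):
    """Hypothetical token-level message passing: attend from the node's
    tokens into each neighbor's tokens, then average over neighbors."""
    messages = [cross_attention(node_tokens, nbr, nbr)
                for nbr in neighbor_token_seqs]
    return np.mean(messages, axis=0)

# Usage: a node with 5 event tokens, two neighbors with 3 and 4 tokens.
rng = np.random.default_rng(0)
node = rng.normal(size=(5, 8))
neighbors = [rng.normal(size=(3, 8)), rng.normal(size=(4, 8))]
msg = tokenxattn_message(node, neighbors)  # shape (5, 8)
```

In the full architecture these messages would feed back into the sequence encoder and GNN under one training objective, so gradients flow through both modules.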
📄 Full Content
Reference
This content is AI-processed based on open access ArXiv data.