
CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals (2106.05544v3)

Published 10 Jun 2021 in cs.CL

Abstract: Most previous studies integrate cognitive language processing signals (e.g., eye-tracking or EEG data) into neural models of NLP by directly concatenating word embeddings with cognitive features, ignoring the gap between the two modalities (i.e., textual vs. cognitive) and noise in cognitive features. In this paper, we propose CogAlign, an approach to these issues that learns to align textual neural representations to cognitive features. In CogAlign, we use a shared encoder equipped with a modality discriminator to alternately encode textual and cognitive inputs, capturing their differences and commonalities. Additionally, a text-aware attention mechanism is proposed to detect task-related information and to avoid using noise in cognitive features. Experimental results on three NLP tasks, namely named entity recognition, sentiment analysis and relation extraction, show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets. Moreover, our model is able to transfer cognitive information to other datasets that do not have any cognitive processing signals.
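The text-aware attention the abstract describes can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's exact formulation: the bilinear scoring matrix `W`, the shapes, and the function name `text_aware_attention` are all hypothetical choices made here to show the general idea of attending over (possibly noisy) cognitive states conditioned on textual hidden states.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def text_aware_attention(H_text, H_cog, W):
    """Attend over cognitive states conditioned on textual states.

    H_text: (n, d) textual hidden states
    H_cog:  (m, d) cognitive hidden states (e.g. eye-tracking/EEG features)
    W:      (d, d) bilinear scoring weight (hypothetical parameterization)
    Returns (n, d): a cognitive context vector for each text position,
    so task-irrelevant/noisy cognitive states receive low weight.
    """
    scores = H_text @ W @ H_cog.T      # (n, m) text-conditioned relevance
    alpha = softmax(scores, axis=-1)   # normalize over cognitive positions
    return alpha @ H_cog               # weighted cognitive context

rng = np.random.default_rng(0)
n, m, d = 4, 4, 8
H_text = rng.normal(size=(n, d))
H_cog = rng.normal(size=(m, d))
W = rng.normal(size=(d, d))
context = text_aware_attention(H_text, H_cog, W)
print(context.shape)
```

In the full model, the attended cognitive context would be combined with the textual representation from the shared encoder, while the modality discriminator (trained adversarially) pushes the encoder toward modality-invariant features.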

Authors (2)
  1. Yuqi Ren (6 papers)
  2. Deyi Xiong (103 papers)
Citations (12)
