LMSOC: An Approach for Socially Sensitive Pretraining (2110.10319v1)
Abstract: While large-scale pretrained language models have been shown to learn effective linguistic representations for many NLP tasks, there remain many real-world contextual aspects of language that current approaches do not capture. For instance, consider a cloze test "I enjoyed the ____ game this weekend": the correct answer depends heavily on where the speaker is from, when the utterance occurred, and the speaker's broader social milieu and preferences. Although language depends heavily on the geographical, temporal, and other social contexts of the speaker, these elements have not been incorporated into modern transformer-based language models. We propose a simple but effective approach to incorporate speaker social context into the learned representations of large-scale language models. Our method first learns dense representations of social contexts using graph representation learning algorithms and then primes language model pretraining with these social context representations. We evaluate our approach on geographically sensitive language-modeling tasks and show a substantial improvement (more than 100% relative lift in MRR) compared to baselines.
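The abstract describes a two-stage pipeline: learn dense social-context embeddings from a graph, then prime language model pretraining with them. The sketch below illustrates that flow under stated assumptions; it is not the authors' implementation. The spectral embedding of a toy social graph stands in for the paper's graph representation learning step, and the `SociallyPrimedMLM` class, its dimensions, and the idea of prepending a projected context vector to the token sequence are illustrative choices of mine, not details confirmed by the paper.

```python
# Minimal sketch of socially sensitive pretraining (illustrative, not the
# LMSOC reference code):
#   Stage 1: dense social-context embeddings from a small social graph,
#            using a spectral embedding as a stand-in for node2vec-style
#            graph representation learning.
#   Stage 2: "prime" a toy masked language model by prepending the
#            speaker's (frozen) social-context vector to the input.
import numpy as np
import torch
import torch.nn as nn

# ---- Stage 1: social-context embeddings from a graph ----------------------
# Toy graph over 4 social contexts (e.g., regions); edges link similar contexts.
social_edges = [(0, 1), (1, 2), (2, 3), (0, 2)]   # hypothetical graph
n_contexts, ctx_dim = 4, 3

A = np.zeros((n_contexts, n_contexts))
for i, j in social_edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-8)))
A_norm = D_inv_sqrt @ A @ D_inv_sqrt
eigvals, eigvecs = np.linalg.eigh(A_norm)
# Top eigenvectors serve as fixed, dense context embeddings.
context_emb = torch.tensor(eigvecs[:, -ctx_dim:], dtype=torch.float32)

# ---- Stage 2: prime masked-LM pretraining with the context vector ---------
class SociallyPrimedMLM(nn.Module):
    """Toy masked LM whose token sequence is prefixed with a projected
    social-context embedding (hypothetical architecture)."""

    def __init__(self, vocab_size=100, d_model=32, ctx_dim=3):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.ctx_proj = nn.Linear(ctx_dim, d_model)   # context -> model space
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids, ctx_vec):
        # token_ids: (batch, seq_len); ctx_vec: (batch, ctx_dim)
        tok = self.tok_emb(token_ids)                 # (B, T, d_model)
        ctx = self.ctx_proj(ctx_vec).unsqueeze(1)     # (B, 1, d_model)
        x = torch.cat([ctx, tok], dim=1)              # prepend context slot
        h = self.encoder(x)
        return self.lm_head(h[:, 1:, :])              # logits for real tokens

# Usage: score masked tokens for two speakers with different social contexts.
model = SociallyPrimedMLM()
tokens = torch.randint(0, 100, (2, 8))                # fake token ids
ctx = context_emb[[0, 3]]                             # each speaker's context
logits = model(tokens, ctx)                           # (2, 8, vocab_size)
print(logits.shape)
```

In this sketch the context vector is computed once and kept fixed while the language model is trained, which mirrors the abstract's framing of "priming" pretraining with precomputed social-context representations rather than learning them jointly.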
- Vivek Kulkarni (33 papers)
- Shubhanshu Mishra (15 papers)
- Aria Haghighi (7 papers)