CoCoLM: COmplex COmmonsense Enhanced Language Model with Discourse Relations (2012.15643v2)

Published 31 Dec 2020 in cs.CL

Abstract: Large-scale pre-trained language models have demonstrated strong knowledge representation ability. However, recent studies suggest that even though these giant models contain rich simple commonsense knowledge (e.g., birds can fly and fish can swim), they often struggle with complex commonsense knowledge that involves multiple eventualities (verb-centric phrases), e.g., identifying the relationship between "Jim yells at Bob" and "Bob is upset". To address this problem, in this paper, we propose to help pre-trained language models better incorporate complex commonsense knowledge. Different from existing fine-tuning approaches, we do not focus on a specific task and instead propose a general language model named CoCoLM. Through careful training over the large-scale eventuality knowledge graph ASER, we successfully teach pre-trained language models (i.e., BERT and RoBERTa) rich complex commonsense knowledge among eventualities. Experiments on multiple downstream commonsense tasks that require the correct understanding of eventualities demonstrate the effectiveness of CoCoLM.
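
The abstract only sketches the training idea at a high level, but the core notion of exposing a masked language model to eventuality pairs linked by discourse relations can be illustrated roughly as follows. This is a minimal, hypothetical sketch assuming Hugging Face Transformers, a toy set of (head eventuality, connective, tail eventuality) triples, and a connective-masking objective; it is not the authors' released CoCoLM training procedure.

```python
# Hypothetical sketch (not the authors' code): fine-tune a masked LM on
# eventuality pairs joined by a discourse connective, in the spirit of
# training over an eventuality knowledge graph such as ASER.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Toy (head, connective, tail) triples; real training data would come from ASER.
triples = [
    ("Jim yells at Bob", "so", "Bob is upset"),
    ("the fish swims", "because", "the fish is in water"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()

for head, connective, tail in triples:
    # Mask the connective so the model must predict the discourse relation
    # linking the two eventualities. Assumes single-token connectives so that
    # the masked and target sequences align.
    masked_text = f"{head} {tokenizer.mask_token} {tail}."
    target_text = f"{head} {connective} {tail}."

    inputs = tokenizer(masked_text, return_tensors="pt")
    labels = tokenizer(target_text, return_tensors="pt")["input_ids"]
    # Compute the loss only on the masked position.
    labels[inputs["input_ids"] != tokenizer.mask_token_id] = -100

    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```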

Authors (4)
  1. Changlong Yu (22 papers)
  2. Hongming Zhang (111 papers)
  3. Yangqiu Song (196 papers)
  4. Wilfred Ng (10 papers)
Citations (21)
