Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation (2312.02439v3)

Published 5 Dec 2023 in cs.AI, cs.CL, and cs.CV

Abstract: Chain-of-Thought (CoT) guides LLMs to reason step-by-step and can strengthen their logical reasoning ability. While effective for logical tasks, CoT is not conducive to creative problem-solving, which often requires out-of-the-box thinking and is crucial for innovation. In this paper, we explore the Leap-of-Thought (LoT) abilities within LLMs -- a non-sequential, creative paradigm involving strong associations and knowledge leaps. To this end, we study LLMs on the popular Oogiri game, which requires participants to respond unexpectedly and humorously to a given image, text, or both with creativity and strong associative thinking, and is thus well suited to LoT study. To investigate LLMs' LoT ability in the Oogiri game, we first build a multimodal and multilingual Oogiri-GO dataset containing over 130,000 samples from the Oogiri game, and observe that most existing LLMs show insufficient LoT ability or fail on the game outright. Accordingly, we introduce a creative Leap-of-Thought (CLoT) paradigm to improve LLMs' LoT ability. CLoT first formulates the Oogiri-GO dataset into LoT-oriented instruction-tuning data to train a pretrained LLM, achieving certain LoT humor generation and discrimination abilities. CLoT then designs an explorative self-refinement that encourages the LLM to generate more creative LoT data by exploring parallels between seemingly unrelated concepts, and selects high-quality data to train itself for self-refinement. CLoT not only excels at humor generation in the Oogiri game but also boosts creative abilities in various tasks such as the cloud guessing game and the divergent association task. These findings advance our understanding and offer a pathway to improve LLMs' creative capacities for innovative applications across domains. The dataset, code, and models will be released online. https://zhongshsh.github.io/CLoT/.

The paper explores the concept of Leap-of-Thought (LoT) capabilities within LLMs, particularly as applied to creative humor generation. Leap-of-Thought is defined as a non-sequential, creative paradigm that involves making strong associations and knowledge leaps. This contrasts with the better-known Chain-of-Thought (CoT) approach, which guides LLMs to reason step-by-step and strengthens their logical reasoning abilities.

CoT is effective for logical reasoning tasks, where each thought builds on the previous one in a sequential process. However, it can constrain creative problem-solving, which requires non-sequential thinking or leaps in thought, such as offering creative humor in response to a prompt. To address this gap, the authors introduce the creative Leap-of-Thought (CLoT) paradigm to enhance LLMs' LoT abilities.

To investigate LoT in LLMs, the authors built the multimodal and multilingual Oogiri-GO dataset, containing over 130,000 samples from the Oogiri game. The game requires participants to respond humorously and unexpectedly to images, text, or both, making it well suited to the study of LoT. The researchers observed that existing LLMs have insufficient LoT ability for creative humor generation. Accordingly, they introduced the CLoT paradigm, which has two main stages.
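To make the dataset's shape concrete, here is a minimal sketch of what an Oogiri-GO-style record might look like. The field names are assumptions for exposition, not the released schema; the image/text/both prompt formats follow the game description above.

```python
# Illustrative Oogiri-GO-style record; field names are assumptions for
# exposition, not the dataset's released schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OogiriSample:
    image_path: Optional[str]   # image prompt (None for text-only rounds)
    text_prompt: Optional[str]  # text prompt (None for image-only rounds)
    response: str               # the unexpected, humorous answer
    language: str               # the dataset is multilingual, e.g. "en", "ja", "zh"

# An image-to-text round: the player captions an image humorously.
sample = OogiriSample(
    image_path="imgs/000123.jpg",
    text_prompt=None,
    response="Day 3 of pretending I understand the meeting.",
    language="en",
)
```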

First, associable instruction tuning formulates the Oogiri-GO dataset into instruction-tuning data used to train pretrained LLMs for LoT humor generation and discrimination. This stage uses instruction templates that offer clues and encourage uninhibited exploration to foster creative thinking. The second stage, explorative self-refinement, lets the LLM produce more creative LoT data by exploring parallels between seemingly unrelated concepts, and refine itself on the high-quality portion of that data.
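A compact sketch may help show how the two stages fit together. Everything here is a stand-in under stated assumptions: `format_instruction`, `finetune`, `generate_candidates`, and `quality_score` are hypothetical placeholders for the paper's components, and the selection threshold is an arbitrary value, not one from the paper.

```python
import random

THRESHOLD = 0.7  # arbitrary placeholder for the paper's quality filter

def format_instruction(sample):
    # Stage-1 template: present the prompt as a clue and invite free,
    # uninhibited association rather than step-by-step reasoning.
    return f"Clue: {sample}. Give an unexpected, humorous response."

def finetune(llm, data):
    # Stand-in for an actual fine-tuning step on a pretrained LLM.
    return llm

def generate_candidates(llm, prompt, n=4):
    # Stand-in for sampling n creative completions, e.g. by pairing the
    # prompt with weakly related concepts and asking the model to bridge them.
    return [f"candidate {i} for {prompt!r}" for i in range(n)]

def quality_score(llm, candidate):
    # Stand-in for the model's own discrimination ability (learned in
    # stage 1), reused here to filter its self-generated data.
    return random.random()

def clot_training(llm, oogiri_go, rounds=3):
    # Stage 1: associable instruction tuning on templated Oogiri-GO data.
    llm = finetune(llm, [format_instruction(s) for s in oogiri_go])
    # Stage 2: explorative self-refinement -- generate, filter, retrain.
    for _ in range(rounds):
        candidates = [c for p in oogiri_go for c in generate_candidates(llm, p)]
        good = [c for c in candidates if quality_score(llm, c) > THRESHOLD]
        llm = finetune(llm, good)
    return llm
```

The key design choice this sketch highlights is that generation and discrimination are trained together in stage one, so the same model can act as its own data filter in stage two.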

The paper shows that the CLoT-integrated LLMs outperform vanilla and CoT-integrated LLMs in multiple-choice and ranking questions within the Oogiri game. Additionally, CLoT can boost creative abilities on tasks like the cloud guessing game and the divergent association task, demonstrating its broader applicability.
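As a rough illustration of the multiple-choice evaluation mentioned above, the sketch below scores a model on items where it must pick the genuinely humorous response among distractors; the item format and the `pick_option` helper are assumptions, not the paper's released evaluation harness.

```python
# Hedged sketch of multiple-choice scoring; `pick_option` is a
# hypothetical helper standing in for a real model query.
def pick_option(llm, prompt, options):
    # Stand-in: return the index of the option the model judges funniest.
    return 0

def multiple_choice_accuracy(llm, items):
    # Each item is a (prompt, options, gold_index) tuple.
    correct = sum(
        pick_option(llm, prompt, options) == gold
        for prompt, options, gold in items
    )
    return correct / len(items)
```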

The paper argues that review data, such as human rankings, could further enhance CLoT via reinforcement learning. The authors also note that future work should explore ways to reduce the amount of training required and to preserve existing knowledge during instruction tuning. In conclusion, the proposed CLoT represents a significant step toward enabling LLMs to engage in creative and innovative applications across various domains.

Authors (7)
  1. Shanshan Zhong (14 papers)
  2. Zhongzhan Huang (25 papers)
  3. Shanghua Gao (20 papers)
  4. Wushao Wen (12 papers)
  5. Liang Lin (318 papers)
  6. Marinka Zitnik (79 papers)
  7. Pan Zhou (220 papers)
Citations (20)