Can pre-trained language models generate titles for research papers? (2409.14602v2)

Published 22 Sep 2024 in cs.CL and cs.AI

Abstract: The title of a research paper communicates in a succinct style the main theme and, sometimes, the findings of the paper. Coming up with the right title is often an arduous task, and therefore, it would be beneficial to authors if title generation could be automated. In this paper, we fine-tune pre-trained LLMs to generate titles of papers from their abstracts. Additionally, we use GPT-3.5-turbo in a zero-shot setting to generate paper titles. The performance of the models is measured with ROUGE, METEOR, MoverScore, BERTScore, and SciBERTScore metrics. We find that fine-tuned PEGASUS-large outperforms the other models, including fine-tuned LLaMA-3-8B and GPT-3.5-turbo, across most metrics. We also demonstrate that ChatGPT can generate creative titles for papers. Our observations suggest that AI-generated paper titles are generally accurate and appropriate.
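
The abstract frames the task as abstract-to-title sequence-to-sequence generation. As an illustration only, the sketch below shows how inference with a PEGASUS-style checkpoint from Hugging Face Transformers might look; the checkpoint name, truncation length, and beam-search settings are assumptions for demonstration, not details taken from the paper.

```python
# Minimal sketch of abstract-to-title generation with a seq2seq model.
# Assumes the public "google/pegasus-large" checkpoint; the paper fine-tunes
# PEGASUS-large on abstract-title pairs, which this sketch does not reproduce.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/pegasus-large"  # illustrative checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

abstract = (
    "The title of a research paper communicates in a succinct style the main "
    "theme and, sometimes, the findings of the paper."
)

# Encode the abstract; max_length=1024 is an assumed truncation limit.
inputs = tokenizer(abstract, return_tensors="pt", truncation=True, max_length=1024)

# Beam search keeps the generated title short and fluent; values are illustrative.
title_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(title_ids[0], skip_special_tokens=True))
```

A fine-tuned variant of this setup, trained on abstract-title pairs, is what the paper reports as the strongest model under ROUGE, METEOR, MoverScore, BERTScore, and SciBERTScore.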
