
Teach LLMs to Personalize -- An Approach inspired by Writing Education (2308.07968v1)

Published 15 Aug 2023 in cs.CL

Abstract: Personalized text generation is an emerging research area that has attracted much attention in recent years. Most studies in this direction focus on a particular domain by designing bespoke features or models. In this work, we propose a general approach for personalized text generation using LLMs. Inspired by the practice of writing education, we develop a multistage and multitask framework to teach LLMs for personalized generation. In writing instruction, the task of writing from sources is often decomposed into multiple steps that involve finding, evaluating, summarizing, synthesizing, and integrating information. Analogously, our approach to personalized text generation consists of multiple stages: retrieval, ranking, summarization, synthesis, and generation. In addition, we introduce a multitask setting that helps the model improve its generation ability further, which is inspired by the observation in education that a student's reading proficiency and writing ability are often correlated. We evaluate our approach on three public datasets, each of which covers a different and representative domain. Our results show significant improvements over a variety of baselines.
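The abstract's multistage decomposition (retrieval, ranking, summarization, synthesis, generation) can be illustrated with a minimal sketch. All function names, the word-overlap scoring, and the string-based stand-ins below are illustrative assumptions, not the authors' implementation; in the paper each stage would be backed by retrieval models and an LLM.

```python
from collections import Counter


def retrieve(query: str, documents: list[str]) -> list[str]:
    # Stage 1: keep past user documents sharing at least one word with the query.
    q_words = set(query.lower().split())
    return [d for d in documents if q_words & set(d.lower().split())]


def rank(query: str, docs: list[str]) -> list[str]:
    # Stage 2: order retrieved documents by word overlap with the query.
    q_words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))


def summarize(docs: list[str]) -> str:
    # Stage 3 (stand-in): take the first sentence of each top-ranked document.
    return " ".join(d.split(".")[0].strip() for d in docs[:3])


def synthesize(docs: list[str]) -> str:
    # Stage 4 (stand-in): extract frequent words as a crude style/topic profile.
    counts = Counter(w for d in docs for w in d.lower().split())
    return ", ".join(w for w, _ in counts.most_common(5))


def generate(query: str, summary: str, profile: str) -> str:
    # Stage 5: assemble the conditioning context; the paper would feed
    # this to an LLM rather than return it directly.
    return f"Topic: {query}\nContext: {summary}\nUser profile: {profile}"


def personalized_generation(query: str, user_history: list[str]) -> str:
    # Chain the five stages, mirroring the writing-education decomposition.
    docs = rank(query, retrieve(query, user_history))
    return generate(query, summarize(docs), synthesize(docs))
```

This only shows the data flow between stages; the paper's contribution is training the LLM end to end over these stages, plus auxiliary multitask objectives.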

Authors (7)
  1. Cheng Li
  2. Mingyang Zhang
  3. Qiaozhu Mei
  4. Yaqing Wang
  5. Spurthi Amba Hombaiah
  6. Yi Liang
  7. Michael Bendersky
Citations (20)
