FineEdit: Unlock Instruction-Based Text Editing for LLMs (2502.13358v2)

Published 19 Feb 2025 in cs.CL

Abstract: LLMs have significantly advanced natural language processing, demonstrating strong capabilities in tasks such as text generation, summarization, and reasoning. Recently, their potential for automating precise text editing tasks across specialized domains, such as programming code, LaTeX, and structured database languages, has gained attention. However, current state-of-the-art LLMs still struggle with executing precise, instruction-driven edits, particularly when structural accuracy and strict adherence to domain conventions are required. To address these challenges, we introduce InstrEditBench, an automated benchmark dataset comprising over 30,000 structured editing tasks spanning diverse domains, including Wikipedia articles, LaTeX documents, source code, and database languages. Using this benchmark, we develop FineEdit, a specialized editing model explicitly trained for accurate, context-aware text modifications. Experimental evaluations demonstrate that FineEdit outperforms state-of-the-art models, achieving improvements of approximately 10% over Gemini models on single-turn edits, up to 30% over Llama-3.2-3B, and exceeding Mistral-7B-OpenOrca performance by over 40% on direct editing tasks. FineEdit also effectively generalizes to realistic multi-turn editing scenarios, highlighting its practical applicability.

Authors (8)
  1. Yiming Zeng
  2. Wanhao Yu
  3. Zexin Li
  4. Tao Ren
  5. Yu Ma
  6. Jinghan Cao
  7. Xiyan Chen
  8. Tingting Yu