PatentEval: Understanding Errors in Patent Generation (2406.06589v2)
Abstract: In this work, we introduce a comprehensive error typology specifically designed for evaluating two distinct tasks in machine-generated patent texts: claims-to-abstract generation and the generation of the next claim given previous ones. We also develop a benchmark, PatentEval, for systematically assessing LLMs in this context. Our study includes a human-annotated comparative analysis of various models, ranging from those specifically adapted during training for patent-domain tasks to the latest general-purpose LLMs. Furthermore, we explore and evaluate several metrics that approximate human judgments in patent text evaluation, analyzing the extent to which they align with expert assessments. These approaches provide valuable insights into the capabilities and limitations of current LLMs in the specialized field of patent text generation.
- You Zuo
- Kim Gerdes
- Benoît Sagot
- Eric Villemonte de La Clergerie