
AutoDetect: Towards a Unified Framework for Automated Weakness Detection in Large Language Models (2406.16714v2)

Published 24 Jun 2024 in cs.CL, cs.AI, and cs.LG

Abstract: Although LLMs are becoming increasingly powerful, they still exhibit significant but subtle weaknesses, such as mistakes in instruction-following or coding tasks. As these unexpected errors could lead to severe consequences in practical deployments, it is crucial to investigate the limitations within LLMs systematically. Traditional benchmarking approaches cannot thoroughly pinpoint specific model deficiencies, while manual inspections are costly and not scalable. In this paper, we introduce a unified framework, AutoDetect, to automatically expose weaknesses in LLMs across various tasks. Inspired by the educational assessment process that measures students' learning outcomes, AutoDetect consists of three LLM-powered agents: Examiner, Questioner, and Assessor. The collaboration among these three agents is designed to realize comprehensive and in-depth weakness identification. Our framework demonstrates significant success in uncovering flaws, with an identification success rate exceeding 30% in prominent models such as ChatGPT and Claude. More importantly, these identified weaknesses can guide specific model improvements, proving more effective than untargeted data augmentation methods like Self-Instruct. Our approach has led to substantial enhancements in popular LLMs, including the Llama series and Mistral-7b, boosting their performance by over 10% across several benchmarks. Code and data are publicly available at https://github.com/thu-coai/AutoDetect.
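
The abstract describes a three-agent pipeline (Examiner, Questioner, Assessor) but not its mechanics. Below is a minimal, hypothetical sketch of a single detection round, assuming each agent is a prompted chat model and the model under test is exposed as a callable. The `chat` helper, prompts, `WeaknessReport` structure, and PASS/FAIL scoring are illustrative placeholders, not the authors' released implementation (see the linked GitHub repository for that).

```python
# Illustrative sketch of one AutoDetect-style round. This is NOT the authors'
# released code; the `chat` helper, prompts, and scoring rule are assumptions.
from dataclasses import dataclass

def chat(system: str, user: str) -> str:
    """Placeholder for a call to any chat-completion API."""
    raise NotImplementedError("plug in your LLM client here")

@dataclass
class WeaknessReport:
    test_point: str
    question: str
    target_answer: str
    passed: bool

def autodetect_round(task: str, target_model_chat, history: list[WeaknessReport]) -> list[WeaknessReport]:
    # Examiner: propose new test points for the task, conditioned on past failures.
    failures = [r.test_point for r in history if not r.passed]
    test_points = chat(
        "You are an Examiner building a taxonomy of skills to probe.",
        f"Task: {task}. Known weak points so far: {failures}. "
        "List new, specific test points, one per line.",
    ).splitlines()

    reports = []
    for point in test_points:
        # Questioner: turn each test point into a concrete, challenging question.
        question = chat(
            "You are a Questioner who writes hard but well-posed test questions.",
            f"Write one question probing: {point}",
        )
        answer = target_model_chat(question)  # query the model under test
        # Assessor: judge the answer and flag a weakness when it fails.
        verdict = chat(
            "You are an Assessor. Reply PASS or FAIL with a short reason.",
            f"Question: {question}\nAnswer: {answer}",
        )
        reports.append(WeaknessReport(point, question, answer,
                                      verdict.strip().upper().startswith("PASS")))
    return reports
```

In this reading, the accumulated failing reports are what drive the targeted improvement step mentioned in the abstract (e.g., generating training data focused on the identified weak points), as opposed to untargeted augmentation such as Self-Instruct.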

Authors (9)
  1. Jiale Cheng (18 papers)
  2. Yida Lu (10 papers)
  3. Xiaotao Gu (32 papers)
  4. Pei Ke (37 papers)
  5. Xiao Liu (402 papers)
  6. Yuxiao Dong (119 papers)
  7. Hongning Wang (107 papers)
  8. Jie Tang (302 papers)
  9. Minlie Huang (225 papers)
Citations (3)