
Grounding and Evaluation for Large Language Models: Practical Challenges and Lessons Learned (Survey) (2407.12858v1)

Published 10 Jul 2024 in cs.CL, cs.AI, cs.CV, and cs.LG

Abstract: With the ongoing rapid adoption of AI-based systems in high-stakes domains, ensuring the trustworthiness, safety, and observability of these systems has become crucial. It is essential to evaluate and monitor AI systems not only for accuracy and quality-related metrics but also for robustness, bias, security, interpretability, and other responsible AI dimensions. We focus on LLMs and other generative AI models, which present additional challenges such as hallucinations, harmful and manipulative content, and copyright infringement. In this survey article accompanying our KDD 2024 tutorial, we highlight a wide range of harms associated with generative AI systems and survey state-of-the-art approaches (along with open challenges) to address these harms.

Authors (3)
  1. Krishnaram Kenthapadi (42 papers)
  2. Mehrnoosh Sameki (6 papers)
  3. Ankur Taly (22 papers)
Citations (8)