Building Guardrails for Large Language Models (2402.01822v2)

Published 2 Feb 2024 in cs.CL and cs.AI

Abstract: As LLMs become more integrated into our daily lives, it is crucial to identify and mitigate their risks, especially when those risks can have profound impacts on human users and societies. Guardrails, which filter the inputs or outputs of LLMs, have emerged as a core safeguarding technology. This position paper takes a deep look at current open-source solutions (Llama Guard, NVIDIA NeMo, Guardrails AI), and discusses the challenges and the road towards building more complete solutions. Drawing on robust evidence from previous research, we advocate for a systematic approach to constructing guardrails for LLMs, based on comprehensive consideration of diverse contexts across various LLM applications. We propose employing socio-technical methods through collaboration with a multi-disciplinary team to pinpoint precise technical requirements, exploring advanced neural-symbolic implementations to embrace the complexity of the requirements, and developing verification and testing to ensure the utmost quality of the final product.
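The abstract characterizes guardrails as filters applied to an LLM's inputs and outputs. As a minimal illustrative sketch (not the paper's method, nor any of the cited libraries' APIs), the pattern can be shown as a rule-based wrapper around a model call; the rule categories and regular expressions here are assumptions chosen only for demonstration:

```python
import re

# Illustrative rule set; real guardrails use learned classifiers and
# richer policies (see Llama Guard, NeMo Guardrails, Guardrails AI).
BLOCKED_PATTERNS = {
    "prompt_injection": re.compile(r"ignore (all|previous) instructions", re.I),
    "pii_email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def check(text):
    """Return (allowed, violated_rule_names) for a piece of text."""
    violations = [name for name, pat in BLOCKED_PATTERNS.items()
                  if pat.search(text)]
    return (not violations, violations)

def guarded_call(model, prompt):
    """Apply the input guardrail, call the model, then apply the
    output guardrail -- the filtering pattern the abstract describes."""
    ok, why = check(prompt)
    if not ok:
        return f"[input blocked: {', '.join(why)}]"
    response = model(prompt)
    ok, why = check(response)
    if not ok:
        return f"[output blocked: {', '.join(why)}]"
    return response
```

The same two-sided structure generalizes: the paper's neural-symbolic proposal amounts to replacing these hand-written rules with a combination of learned detectors and symbolic policy constraints, while keeping the input/output interception points.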

Authors (9)
  1. Yi Dong (46 papers)
  2. Ronghui Mu (12 papers)
  3. Gaojie Jin (21 papers)
  4. Yi Qi (26 papers)
  5. Xingyu Zhao (61 papers)
  6. Jie Meng (95 papers)
  7. Wenjie Ruan (42 papers)
  8. Xiaowei Huang (121 papers)
  9. JinWei Hu (13 papers)
Citations (14)