AdapterHub Playground: Simple and Flexible Few-Shot Learning with Adapters (2108.08103v3)

Published 18 Aug 2021 in cs.CL

Abstract: The open-access dissemination of pretrained language models through online repositories has led to a democratization of state-of-the-art NLP research. This also allows people outside of NLP to use such models and adapt them to specific use cases. However, a certain amount of technical proficiency is still required, which poses an entry barrier for users who want to apply these models to a given task but lack the necessary knowledge or resources. In this work, we aim to close this gap by providing a tool that allows researchers to leverage pretrained models without writing a single line of code. Built upon parameter-efficient adapter modules for transfer learning, our AdapterHub Playground provides an intuitive interface that allows the use of adapters for prediction, training, and analysis of textual data for a variety of NLP tasks. We present the tool's architecture and demonstrate its advantages with prototypical use cases, where we show that predictive performance can easily be increased in a few-shot learning scenario. Finally, we evaluate its usability in a user study. We provide the code and a live interface at https://adapter-hub.github.io/playground.
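
The Playground itself is a no-code interface, but it builds on AdapterHub's adapter-transformers Python library. As a rough sketch of the mechanism the tool wraps (not the Playground's own code), the snippet below loads a pretrained model and attaches a small task adapter for prediction; the checkpoint name and adapter identifier follow the library's documented quickstart pattern but should be treated as illustrative:

```python
# pip install adapter-transformers  (AdapterHub's fork of HF transformers)
import torch
from transformers import AutoTokenizer, AutoModelWithHeads

# The pretrained backbone is shared across tasks; an adapter adds only a
# few task-specific parameters per layer (parameter-efficient transfer).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained("bert-base-uncased")

# Fetch a sentiment adapter (plus its prediction head) from AdapterHub.
# "sentiment/sst-2@ukp" is the identifier used in the library's docs;
# any other published adapter id would work the same way.
adapter_name = model.load_adapter("sentiment/sst-2@ukp", source="ah")
model.set_active_adapters(adapter_name)

inputs = tokenizer("The interface makes adapters easy to use.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class index
```

Because only the adapter weights are trained or downloaded, swapping tasks amounts to loading a different adapter into the same frozen model, which is what makes the few-shot training workflow in the Playground cheap to run.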

Authors (9)
  1. Tilman Beck (11 papers)
  2. Bela Bohlender (1 paper)
  3. Christina Viehmann (2 papers)
  4. Vincent Hane (1 paper)
  5. Yanik Adamson (1 paper)
  6. Jaber Khuri (1 paper)
  7. Jonas Brossmann (1 paper)
  8. Jonas Pfeiffer (34 papers)
  9. Iryna Gurevych (264 papers)
Citations (15)
