CRAFT: Extracting and Tuning Cultural Instructions from the Wild (2405.03138v2)
Abstract: LLMs have rapidly evolved into the foundation of various NLP applications. Despite their wide use, their understanding of culturally related concepts and reasoning remains limited. Meanwhile, there is a significant need to enhance these models' cultural reasoning capabilities, especially for underrepresented regions. This paper introduces a novel pipeline for extracting high-quality, culturally related instruction tuning datasets from vast unstructured corpora. We employ a self-instruction generation pipeline to identify cultural concepts and to trigger instruction generation. After integration with a general-purpose instruction tuning dataset, our model demonstrates enhanced capabilities in recognizing and understanding regional cultural nuances, thereby strengthening its cultural reasoning. We conduct experiments across three regions: Singapore, the Philippines, and the United States, achieving performance improvements of up to 6%. Our research opens new avenues for extracting cultural instruction tuning sets directly from unstructured data, setting a precedent for future innovations in the field.
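The abstract describes a pipeline that mines cultural concepts from raw text and then self-generates instruction-response pairs around them. Below is a minimal sketch of how such a pipeline could be wired together; it is not the authors' released code, and the function names (`llm_generate`, `extract_cultural_concepts`, `build_instruction`) and the keyword-spotting step are illustrative assumptions.

```python
# Minimal sketch of a CRAFT-style pipeline (assumptions, not the paper's code):
# 1) spot cultural concepts in raw passages, 2) prompt an LLM to turn each
# concept plus its source passage into an instruction-response pair.

import json
import re

def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; replace with your provider's client."""
    raise NotImplementedError("Plug in an actual LLM API here.")

def extract_cultural_concepts(passage: str, seed_terms: set[str]) -> list[str]:
    """Naive keyword spotter; the paper's concept identification is more involved."""
    tokens = re.findall(r"[A-Za-z']+", passage.lower())
    return [t for t in tokens if t in seed_terms]

def build_instruction(passage: str, concept: str) -> dict:
    """Ask the LLM to self-generate an instruction grounded in the passage."""
    prompt = (
        "You are building a cultural instruction-tuning dataset.\n"
        f"Cultural concept: {concept}\n"
        f"Source passage: {passage}\n"
        "Write one instruction a user might ask about this concept, then answer it "
        "using only the passage. Return JSON with keys 'instruction' and 'response'."
    )
    return json.loads(llm_generate(prompt))

def craft_dataset(corpus: list[str], seed_terms: set[str]) -> list[dict]:
    """Run concept extraction and instruction generation over a raw corpus."""
    examples = []
    for passage in corpus:
        for concept in extract_cultural_concepts(passage, seed_terms):
            examples.append(build_instruction(passage, concept))
    return examples
```

Per the abstract, the resulting culturally related examples would then be mixed with a general-purpose instruction tuning set before fine-tuning; the mixing ratio is a design choice not specified here.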
- Bin Wang (750 papers)
- Geyu Lin (10 papers)
- Zhengyuan Liu (41 papers)
- Chengwei Wei (17 papers)
- Nancy F. Chen (97 papers)