Evaluating Robustness of LLMs on Crisis-Related Microblogs across Events, Information Types, and Linguistic Features (2412.10413v1)

Published 8 Dec 2024 in cs.CL, cs.AI, and cs.SI

Abstract: The widespread use of microblogging platforms like X (formerly Twitter) during disasters provides real-time information to governments and response authorities. However, the data from these platforms is often noisy, requiring automated methods to filter relevant information. Traditionally, supervised machine learning models have been used, but they lack generalizability. In contrast, LLMs show better capabilities in understanding and processing natural language out of the box. This paper provides a detailed analysis of the performance of six well-known LLMs in processing disaster-related social media data from a large set of real-world events. Our findings indicate that while LLMs, particularly GPT-4o and GPT-4, offer better generalizability across different disasters and information types, most LLMs face challenges in processing flood-related data, show minimal improvement despite the provision of examples (i.e., shots), and struggle to identify critical information categories like urgent requests and needs. Additionally, we examine how various linguistic features affect model performance and highlight LLMs' vulnerabilities against certain features like typos. Lastly, we provide benchmarking results for all events across both zero- and few-shot settings and observe that proprietary models outperform open-source ones in all tasks.

Summary

  • The paper benchmarks six LLMs, showing that proprietary models like GPT-4 outperform open-source models on crisis-related microblogs.
  • The study measures performance using F1-scores in zero- and few-shot settings, highlighting limitations in handling flood-related data and urgent requests (a scoring sketch follows this list).
  • It identifies linguistic challenges, such as typos and special characters, that hinder accurate processing, stressing the need for model improvements for diverse language contexts.
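
To ground the reported metrics, here is a minimal sketch of the per-class and macro-F1 scoring used in such benchmarks; the label names and predictions are illustrative assumptions, not the paper's actual categories, data, or evaluation script.

```python
# Minimal sketch of per-class and macro-F1 scoring for tweet classification.
# The labels and predictions are illustrative assumptions, not the paper's
# actual categories, data, or evaluation script.
from sklearn.metrics import classification_report, f1_score

y_true = ["caution_and_advice", "requests_or_urgent_needs", "other",
          "infrastructure_damage", "requests_or_urgent_needs"]
y_pred = ["caution_and_advice", "other", "other",
          "infrastructure_damage", "requests_or_urgent_needs"]

# Macro-F1 weights every class equally, so a rare but critical class such as
# urgent requests pulls the score down whenever the model misses it.
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
print(classification_report(y_true, y_pred, zero_division=0))
```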

The paper "Evaluating Robustness of LLMs on Crisis-Related Microblogs across Events, Information Types, and Linguistic Features" explores the proficiency of various LLMs in processing disaster-related data from microblogging platforms. As the utilization of platforms such as Twitter (now known as X) increases during disasters for real-time updates and information sharing, the need for effective and automated data filtering mechanisms has become critical. This paper explores the performance of several leading LLMs in handling such data and evaluates them across different disaster types, information categories, and linguistic characteristics.

Key Findings

  1. Model Performance Across Disasters: The paper assesses six LLMs, including proprietary models like GPT-4 and open-source ones such as Llama-2 and Mistral, and observes that proprietary models generally outperform open-source ones. GPT-4 and its variant GPT-4o in particular display superior adaptability across diverse events, though challenges remain with flood-related data and critical information categories such as urgent requests and needs.
  2. Benchmarking and Key Metrics: The paper benchmarks the LLMs in both zero- and few-shot settings, reporting F1-scores across events. Proprietary models achieve higher F1-scores, but even they do not surpass traditional supervised baselines in all cases. Notably, the introduction of few-shot examples does not consistently enhance performance, suggesting inherent limitations in the models' ability to generalize from a handful of examples (a prompt-construction sketch follows this list).
  3. Analysis of Linguistic Features: The paper identifies specific linguistic features that pose challenges for LLMs, such as typographical errors and certain special characters. Models showed varied levels of susceptibility to these features, impacting their ability to process urgent and nuanced disaster information effectively.
  4. Geographic and Language Considerations: The paper highlights performance discrepancies between data originating from native-English-speaking and non-native-English-speaking countries. All models perform better on data from native-English contexts, implying potential biases or limitations in the models' language processing capabilities.
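
The zero- and few-shot settings differ only in whether labelled examples are prepended to the prompt. The sketch below shows one plausible way to construct such prompts; the template wording, label set, and example tweets are illustrative assumptions, not the paper's exact prompts.

```python
# Sketch of zero- vs. few-shot prompt construction for crisis-tweet
# classification. The template wording, label set, and example tweets are
# illustrative assumptions, not the paper's exact prompts.
LABELS = ["caution_and_advice", "infrastructure_damage",
          "requests_or_urgent_needs", "sympathy_and_support", "other"]

def build_prompt(tweet: str, shots: list[tuple[str, str]] | None = None) -> str:
    parts = ["Classify the tweet into exactly one of these humanitarian "
             "information types: " + ", ".join(LABELS) + "."]
    # Few-shot setting: prepend labelled examples; zero-shot omits them.
    for example_tweet, label in shots or []:
        parts.append(f"Tweet: {example_tweet}\nLabel: {label}")
    parts.append(f"Tweet: {tweet}\nLabel:")
    return "\n\n".join(parts)

zero_shot = build_prompt("Bridge on Route 9 washed out, avoid the area!")
few_shot = build_prompt(
    "We urgently need drinking water at the shelter",
    shots=[("Power lines down across Main St", "infrastructure_damage")],
)
```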

Implications and Future Directions

The implications of these findings are manifold. Practically, the paper suggests that while LLMs offer improved situational awareness capabilities during disasters, they require enhancements to address specific data types and linguistic intricacies effectively. Theoretically, it proposes that the field must explore more robust models or techniques that better handle domain shifts and typographical variance inherent in real-world microblog data.
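
As one concrete way to probe the typographical variance mentioned above, the sketch below injects random character-level typos into a tweet so that predictions on clean versus perturbed text can be compared; the perturbation scheme and the `classify` placeholder are assumptions, not the paper's actual analysis.

```python
# Illustrative robustness probe: perturb a tweet with character-level typos
# and compare a classifier's predictions on the clean vs. perturbed text.
# The perturbation scheme is an assumption; `classify` below is a
# hypothetical stand-in for any LLM-backed classifier.
import random

def add_typos(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Replace roughly `rate` of alphabetic characters with random letters."""
    rng = random.Random(seed)
    chars = list(text)
    for i, c in enumerate(chars):
        if c.isalpha() and rng.random() < rate:
            chars[i] = rng.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)

tweet = "Urgent: family trapped on roof near the river, need rescue boats"
perturbed = add_typos(tweet)
# A robustness check would compare classify(tweet) with classify(perturbed)
# over many tweets and report how often the predicted label flips.
print(perturbed)
```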

Future research could pursue qualitative assessments to pinpoint the reasons behind the LLMs' underperformance on certain disaster types and data categories. Additionally, exploring the integration of visual information through multimodal models could offer more comprehensive solutions for disaster response frameworks.

Overall, this paper significantly contributes to the understanding of how LLMs perform across varied crisis scenarios and presents a critical assessment necessary for advancing AI applications in humanitarian and emergency response domains. It paves the way for more targeted developments that aim to refine LLMs to meet the exigent demands of real-time crisis management.
