
ChatGPT: Jack of all trades, master of none (2302.10724v4)

Published 21 Feb 2023 in cs.CL, cs.AI, cs.CY, and cs.LG

Abstract: OpenAI has released the Chat Generative Pre-trained Transformer (ChatGPT) and revolutionized the approach of artificial intelligence to human-model interaction. Several publications on ChatGPT evaluation test its effectiveness on well-known NLP tasks. However, the existing studies are mostly non-automated and tested on a very limited scale. In this work, we examined ChatGPT's capabilities on 25 diverse analytical NLP tasks, most of them subjective even to humans, such as sentiment analysis, emotion recognition, offensiveness, and stance detection. The remaining tasks require more objective reasoning, such as word sense disambiguation, linguistic acceptability, and question answering. We also evaluated the GPT-4 model on five selected subsets of NLP tasks. We automated the ChatGPT and GPT-4 prompting process and analyzed more than 49k responses. Our comparison of the results with available State-of-the-Art (SOTA) solutions showed that the average loss in quality of the ChatGPT model was about 25% for zero-shot and few-shot evaluation. For the GPT-4 model, the loss on semantic tasks is significantly lower than for ChatGPT. We showed that the more difficult the task (the lower the SOTA performance), the higher the ChatGPT loss. This especially applies to pragmatic NLP problems such as emotion recognition. We also tested the ability to personalize ChatGPT responses for selected subjective tasks via Random Contextual Few-Shot Personalization, and we obtained significantly better user-based predictions. Additional qualitative analysis revealed a ChatGPT bias, most likely due to the rules imposed on human trainers by OpenAI. Our results provide the basis for a fundamental discussion of whether the high quality of recent predictive NLP models can indicate a tool's usefulness to society and how the learning and validation procedures for such systems should be established.

Introduction

OpenAI's Chat Generative Pre-trained Transformer (ChatGPT) is an AI system designed to provide detailed and precise answers across various domains. Several studies have tested ChatGPT's effectiveness on well-established NLP tasks, but most have not leveraged automated evaluation and were limited in scope. Researchers from Wrocław University of Science and Technology in Poland conducted extensive testing of ChatGPT's range and depth of understanding on a diverse set of analytical NLP tasks.

Capabilities and Limitations

The paper evaluated ChatGPT on 25 diverse analytical NLP tasks spanning semantics and pragmatics. These range from more objective tasks such as word sense disambiguation and question answering to subjective, pragmatic ones such as sentiment analysis and emotion recognition. The prompting of ChatGPT was automated, producing over 49,000 responses that were compared against state-of-the-art (SOTA) solutions; a sketch of such an automated prompting loop is shown below.
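
As a rough illustration of what automated prompting for a zero-shot classification task might look like, the following Python sketch sends a prompt through the OpenAI chat API and parses the reply into a label. The prompt wording, label set, and model name are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of automated zero-shot prompting with the OpenAI Python client
# (openai>=1.0). Prompt wording, label set, and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["negative", "neutral", "positive"]  # example sentiment label set


def classify_sentiment(text: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the chat model for a single sentiment label and return it."""
    prompt = (
        "Classify the sentiment of the following text as one of "
        f"{', '.join(LABELS)}. Reply with the label only.\n\nText: {text}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output simplifies automated parsing
    )
    answer = response.choices[0].message.content.strip().lower()
    # Fall back to "neutral" if the model replies outside the label set.
    return answer if answer in LABELS else "neutral"


if __name__ == "__main__":
    print(classify_sentiment("The movie was surprisingly good."))
```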

The findings revealed varying performance, with an average loss in quality of approximately 25% compared to SOTA models. Notably, the more difficult the task (as indicated by lower SOTA performance), the larger ChatGPT's performance drop, particularly for tasks requiring pragmatic understanding, such as emotion recognition.
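
If the reported "loss" is read as the relative drop of the task metric from the SOTA score (an assumption about how the figure is aggregated, not a statement of the paper's exact formula), it can be computed as below; the numbers are illustrative only.

```python
def relative_loss(sota_score: float, model_score: float) -> float:
    """Relative quality drop of a model versus the SOTA result on the same metric."""
    return (sota_score - model_score) / sota_score


# Illustrative numbers only: a task where SOTA reaches 0.80 F1 and ChatGPT 0.60 F1
# corresponds to a 25% loss.
print(relative_loss(0.80, 0.60))  # 0.25
```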

Personalization and Bias

The research also tested the model's ability to personalize responses for selected subjective tasks via Random Contextual Few-Shot Personalization, which yielded significantly better user-level predictions. However, additional qualitative analysis uncovered biases within ChatGPT, likely due to rules imposed on human trainers by OpenAI. This highlights the intrinsic challenge of balancing neutrality with contextual accuracy.
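
As a hedged sketch of the idea behind such personalization, a few examples previously annotated by the same user can be sampled at random and prepended to the prompt before the new query. The data format, function names, and prompt wording below are hypothetical and not taken from the paper.

```python
import random


def personalized_prompt(user_history, new_text, k=3, seed=None):
    """Build a few-shot prompt from k randomly sampled (text, label) pairs
    previously annotated by the same user, then append the new query.

    user_history: list of (text, label) tuples from one annotator (hypothetical format).
    """
    rng = random.Random(seed)
    shots = rng.sample(user_history, min(k, len(user_history)))
    lines = ["Here are texts labelled earlier by this user:"]
    for text, label in shots:
        lines.append(f'Text: "{text}" -> Label: {label}')
    lines.append(f'Now label the next text for the same user.\nText: "{new_text}" -> Label:')
    return "\n".join(lines)


history = [
    ("Great service, will come back!", "positive"),
    ("Nothing special about this place.", "neutral"),
    ("Worst experience in years.", "negative"),
    ("The staff tried hard, but the food was cold.", "negative"),
]
print(personalized_prompt(history, "Decent, though a bit pricey.", k=2, seed=42))
```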

Conclusions and Reflections

The results reveal that while ChatGPT shows significant abilities across a broad range of NLP tasks, it is not yet on par with specialized SOTA solutions. It holds promise as an AI tool that could support various applications in society, provided its learning and validation procedures are further refined.

The outcomes of this paper provide valuable insights into the capabilities of LLMs like ChatGPT and the areas where they fall short. They suggest a need for continued research into making such models more robust, unbiased, and contextually sensitive to increase their applicability and usefulness in real-world scenarios.

Authors (20)
  1. Jan Kocoń (14 papers)
  2. Igor Cichecki (1 paper)
  3. Oliwier Kaszyca (1 paper)
  4. Mateusz Kochanek (1 paper)
  5. Dominika Szydło (1 paper)
  6. Joanna Baran (3 papers)
  7. Julita Bielaniewicz (3 papers)
  8. Marcin Gruza (4 papers)
  9. Arkadiusz Janz (5 papers)
  10. Kamil Kanclerz (3 papers)
  11. Anna Kocoń (1 paper)
  12. Wiktoria Mieleszczenko-Kowszewicz (5 papers)
  13. Piotr Miłkowski (3 papers)
  14. Marcin Oleksy (3 papers)
  15. Maciej Piasecki (5 papers)
  16. Łukasz Radliński (1 paper)
  17. Konrad Wojtasik (4 papers)
  18. Stanisław Woźniak (16 papers)
  19. Przemysław Kazienko (53 papers)
  20. Bartłomiej Koptyra (3 papers)
Citations (442)