
BERT based Transformers lead the way in Extraction of Health Information from Social Media (2104.07367v1)

Published 15 Apr 2021 in cs.CL and cs.SI

Abstract: This paper describes our submissions for the Social Media Mining for Health (SMM4H) 2021 shared tasks. We participated in two tasks: (1) classification, extraction and normalization of adverse drug effect (ADE) mentions in English tweets (Task-1) and (2) classification of COVID-19 tweets containing symptoms (Task-6). Our approach for the first task uses the language representation model RoBERTa with a binary classification head. For the second task, we use BERTweet, which is based on RoBERTa. Fine-tuning is performed on the pre-trained models for both tasks, and the models are placed on top of a custom domain-specific processing pipeline. Our system ranked first among all submissions for subtask-1(a) with an F1-score of 61%. For subtask-1(b), our system obtained an F1-score of 50%, an improvement of up to +8% F1 over the score averaged across all submissions. The BERTweet model achieved an F1-score of 94% on SMM4H 2021 Task-6.
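The core setup described in the abstract, a pretrained transformer encoder topped with a binary classification head, can be sketched as follows. This is a minimal NumPy illustration of the head alone, assuming RoBERTa-base's 768-dimensional pooled [CLS] output; the random stand-in embedding, weight shapes, and label set are placeholders for illustration, not the authors' actual code (which fine-tunes the full RoBERTa/BERTweet models).

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - np.max(logits, axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classification_head(cls_embedding, W, b):
    """Binary classification head: one linear layer over the pooled
    [CLS] representation, followed by softmax over 2 labels."""
    return softmax(cls_embedding @ W + b)

rng = np.random.default_rng(0)
hidden = 768                                  # RoBERTa-base hidden size
cls_vec = rng.standard_normal((1, hidden))    # stand-in for encoder output
W = rng.standard_normal((hidden, 2)) * 0.02   # 2 labels, e.g. ADE / no-ADE
b = np.zeros(2)

probs = classification_head(cls_vec, W, b)
print(probs.shape)  # (1, 2): one tweet, two class probabilities
```

During fine-tuning, the encoder and these head parameters would be trained jointly with a cross-entropy loss; in practice this is typically done via a library such as HuggingFace Transformers rather than by hand.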

Authors (8)
  1. Sidharth R (2 papers)
  2. Abhiraj Tiwari (2 papers)
  3. Parthivi Choubey (1 paper)
  4. Saisha Kashyap (1 paper)
  5. Sahil Khose (9 papers)
  6. Kumud Lakara (5 papers)
  7. Nishesh Singh (3 papers)
  8. Ujjwal Verma (16 papers)
Citations (13)