Shayona@SMM4H23: COVID-19 Self diagnosis classification using BERT and LightGBM models (2401.02158v1)

Published 4 Jan 2024 in cs.CL and cs.AI

Abstract: This paper describes the approaches and results of Team Shayona for Shared Tasks 1 and 4 of SMM4H-23. Shared Task 1 was binary classification of English tweets self-reporting a COVID-19 diagnosis, and Shared Task 4 was binary classification of English Reddit posts self-reporting a social anxiety disorder diagnosis. Our team achieved the highest F1-score (0.94) in Task 1 among all participants. We leveraged a Transformer model (BERT) in combination with the LightGBM model for both tasks.

Authors (4)
  1. Rushi Chavda
  2. Darshan Makwana
  3. Vraj Patel
  4. Anupam Shukla
