Mao-Zedong At SemEval-2023 Task 4: Label Representation Multi-Head Attention Model With Contrastive Learning-Enhanced Nearest Neighbor Mechanism For Multi-Label Text Classification (2307.05174v1)

Published 11 Jul 2023 in cs.CL

Abstract: The study of human values is essential in both practical and theoretical domains. With the development of computational linguistics, the creation of large-scale datasets has made it possible to recognize human values automatically and accurately. SemEval-2023 Task 4 (Kiesel et al., 2023) provides a set of arguments and 20 types of human values that are implicitly expressed in each argument. In this paper, we present our team's solution. We use the RoBERTa model (Liu et al., 2019) to obtain contextual word embeddings for each document and propose a multi-head attention mechanism that connects specific labels to the semantic components relevant to them. Furthermore, we use a contrastive learning-enhanced k-nearest neighbor mechanism (Su et al., 2022) to leverage information from existing training instances at prediction time. Our approach achieved an F1 score of 0.533 on the test set and ranked fourth on the leaderboard.
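
To make the pipeline concrete, below is a minimal sketch (not the authors' released code) of the label-representation multi-head attention idea: a set of 20 learned label embeddings serves as attention queries over the RoBERTa token encodings, so that each human-value label attends to the semantic components most relevant to it. The class name, hidden size, and head count are illustrative assumptions.

import torch
import torch.nn as nn
from transformers import RobertaModel

class LabelAttentionClassifier(nn.Module):
    # Hypothetical sketch of the label-representation attention model:
    # 20 learned label queries attend over RoBERTa token encodings.
    def __init__(self, num_labels: int = 20, hidden: int = 768, heads: int = 8):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        # One learned query vector per human-value label.
        self.label_emb = nn.Parameter(torch.randn(num_labels, hidden))
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.score = nn.Linear(hidden, 1)  # per-label binary logit

    def forward(self, input_ids, attention_mask):
        tokens = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        queries = self.label_emb.unsqueeze(0).expand(tokens.size(0), -1, -1)
        # Each label query attends over the token sequence; padding is masked out.
        label_repr, _ = self.attn(queries, tokens, tokens,
                                  key_padding_mask=(attention_mask == 0))
        return self.score(label_repr).squeeze(-1)  # (batch, num_labels) logits

The per-label logits would be trained with a binary cross-entropy objective (e.g. torch.nn.BCEWithLogitsLoss), the standard setup for multi-label classification.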

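For the second component, Su et al. (2022) describe retrieving the nearest training instances in representation space at inference time and interpolating their gold labels with the classifier's own scores; a contrastive objective shapes the space so that instances sharing labels sit close together. The NumPy sketch below follows that general recipe; the interpolation weight lam, the cosine similarity, and the softmax weighting are assumptions, not values reported in the paper.

import numpy as np

def knn_enhanced_predict(query_vec, model_probs, store_vecs, store_labels,
                         k: int = 10, lam: float = 0.5):
    """query_vec: (d,) document representation; model_probs: (L,) sigmoid scores;
    store_vecs: (N, d) training representations; store_labels: (N, L) 0/1 labels."""
    # Cosine similarity between the query and every stored training instance.
    sims = store_vecs @ query_vec / (
        np.linalg.norm(store_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-8)
    top = np.argsort(-sims)[:k]            # indices of the k nearest neighbors
    # Similarity-weighted (softmax) vote over the neighbors' gold label vectors.
    weights = np.exp(sims[top]) / np.exp(sims[top]).sum()
    knn_probs = weights @ store_labels[top]  # (L,) aggregated neighbor labels
    # Interpolate the kNN evidence with the model's own prediction.
    return lam * knn_probs + (1.0 - lam) * model_probs

Here store_vecs would hold encoder representations of the training set; because the contrastive loss pulls together instances with overlapping label sets, the retrieved neighbors carry label information that can correct the classifier's raw scores.
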
References (15)
  1. Mark J Berger. Large scale multi-label text classification with semantic word vectors.
  2. Multi-label text classification approach for sentence level news emotion analysis. In Pattern Recognition and Machine Intelligence, Lecture Notes in Computer Science, pages 261–266. Springer.
  3. BERT: Pre-training of deep bidirectional transformers for language understanding.
  4. Extreme multi-label loss functions for recommendation, tagging, ranking & other missing label applications. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, pages 935–944. Association for Computing Machinery.
  5. Identifying the Human Values behind Arguments. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), pages 4459–4471. Association for Computational Linguistics.
  6. SemEval-2023 Task 4: ValueEval: Identification of human values behind arguments. In Proceedings of the 17th International Workshop on Semantic Evaluation, Toronto, Canada. Association for Computational Linguistics.
  7. RoBERTa: A robustly optimized BERT pretraining approach.
  8. Large-scale multi-label text classification — revisiting neural networks. In Machine Learning and Knowledge Discovery in Databases, Lecture Notes in Computer Science, pages 437–452. Springer.
  9. Multi-label text classification using attention-based graph neural network. In Proceedings of the 12th International Conference on Agents and Artificial Intelligence, pages 494–505.
  10. Study on multi-label text classification based on SVM. In 2009 Sixth International Conference on Fuzzy Systems and Knowledge Discovery, volume 1, pages 300–304.
  11. Evaluating feature selection methods for multi-label text classification.
  12. Contrastive learning-enhanced nearest neighbor mechanism for multi-label text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 672–679. Association for Computational Linguistics.
  13. Attention is all you need.
  14. Label-specific document representation for multi-label text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 466–475. Association for Computational Linguistics.
  15. SGM: Sequence generation model for multi-label classification.
Citations (4)