Applying SoftTriple Loss for Supervised Language Model Fine Tuning (2112.08462v1)
Published 15 Dec 2021 in cs.CL
Abstract: We introduce a new loss function, TripleEntropy, to improve classification performance when fine-tuning general-knowledge pre-trained LLMs; it is based on cross-entropy and SoftTriple loss. This loss function improves the robust RoBERTa baseline fine-tuned with cross-entropy loss by about 0.02%-2.29%. Thorough tests on popular datasets indicate a steady gain. The fewer samples in the training dataset, the higher the gain: for small-sized datasets it is 0.78%, for medium-sized 0.86%, for large 0.20%, and for extra-large 0.04%.
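Below is a minimal sketch (not the authors' released code) of a TripleEntropy-style objective: a weighted sum of cross-entropy on the classifier logits and SoftTriple loss on the pooled transformer embedding. The hyper-parameter names (`la`, `gamma`, `margin`, `K`, the mixing weight `beta`) and the exact weighting scheme are illustrative assumptions and may differ from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftTripleLoss(nn.Module):
    """SoftTriple loss (Qian et al., 2019): each class owns K learnable centers."""

    def __init__(self, dim, n_classes, K=10, la=20.0, gamma=0.1, margin=0.01):
        super().__init__()
        self.la, self.gamma, self.margin = la, gamma, margin
        self.n_classes, self.K = n_classes, K
        # One (dim)-dimensional center per (class, center) pair, stored column-wise.
        self.centers = nn.Parameter(torch.randn(dim, n_classes * K))

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalised embeddings and centers.
        x = F.normalize(embeddings, dim=1)            # (B, dim)
        w = F.normalize(self.centers, dim=0)          # (dim, C*K)
        sim = (x @ w).view(-1, self.n_classes, self.K)  # (B, C, K)
        # Relaxed per-class similarity: soft assignment over each class's K centers.
        prob = F.softmax(sim / self.gamma, dim=2)
        class_sim = (prob * sim).sum(dim=2)           # (B, C)
        # Subtract the margin only from the ground-truth class similarity.
        delta = torch.zeros_like(class_sim)
        delta[torch.arange(labels.size(0)), labels] = self.margin
        return F.cross_entropy(self.la * (class_sim - delta), labels)


class TripleEntropyLoss(nn.Module):
    """Cross-entropy on the logits plus SoftTriple on the embeddings."""

    def __init__(self, dim, n_classes, beta=0.5, **soft_triple_kwargs):
        super().__init__()
        self.beta = beta  # mixing weight; assumed, the paper's scheme may differ
        self.soft_triple = SoftTripleLoss(dim, n_classes, **soft_triple_kwargs)

    def forward(self, logits, embeddings, labels):
        ce = F.cross_entropy(logits, labels)
        st = self.soft_triple(embeddings, labels)
        return (1.0 - self.beta) * ce + self.beta * st
```

In this reading, the cross-entropy term keeps the standard fine-tuning signal on the classification head, while the SoftTriple term shapes the embedding space by pulling examples toward one of several learnable centers per class, which is where the reported gains on smaller datasets would come from.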