
Grid Search Hyperparameter Benchmarking of BERT, ALBERT, and LongFormer on DuoRC (2101.06326v2)

Published 15 Jan 2021 in cs.CL and cs.LG

Abstract: The purpose of this project is to evaluate three LLMs, BERT, ALBERT, and Longformer, on the question-answering dataset DuoRC. The task takes two inputs, a question and a context, where the context is a paragraph or an entire document, and the output is the answer derived from that context. The goal is to perform grid-search hyperparameter fine-tuning on DuoRC. Pretrained weights for the models are taken from the Huggingface library, and different sets of hyperparameters are used to fine-tune the models on the two versions of DuoRC, SelfRC and ParaphraseRC. The results show that ALBERT (pretrained on the SQuAD1 dataset) achieves an F1 score of 76.4 and an accuracy of 68.52 after fine-tuning on SelfRC, while the Longformer model (pretrained on the SQuAD and SelfRC datasets) achieves an F1 score of 52.58 and an accuracy of 46.60 after fine-tuning on ParaphraseRC. These results outperform those of the previous DuoRC baseline models.
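The grid-search procedure the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: the hyperparameter names and values, and the stub evaluator standing in for a real fine-tune-and-score run, are all illustrative assumptions.

```python
# Sketch of grid-search hyperparameter selection for QA fine-tuning.
# The search space and the evaluator are hypothetical placeholders;
# in the paper, each combination would fine-tune a model on
# SelfRC/ParaphraseRC and score F1 on a held-out split.
from itertools import product


def make_grid(search_space):
    """Enumerate every hyperparameter combination in the search space."""
    keys = list(search_space)
    return [dict(zip(keys, values))
            for values in product(*(search_space[k] for k in keys))]


def grid_search(search_space, evaluate):
    """Return the best-scoring combination and its score."""
    best_score, best_params = float("-inf"), None
    for params in make_grid(search_space):
        score = evaluate(params)  # e.g. F1 on a DuoRC dev split
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score


# Hypothetical search space for a BERT-style QA fine-tune.
space = {
    "learning_rate": [2e-5, 3e-5, 5e-5],
    "batch_size": [8, 16],
    "epochs": [2, 3],
}


# Stub evaluator: peaks at learning_rate=3e-5 and rewards more epochs,
# standing in for an actual fine-tune + F1 evaluation.
def mock_evaluate(p):
    return -abs(p["learning_rate"] - 3e-5) * 1e5 + p["epochs"]


best, score = grid_search(space, mock_evaluate)
print(best)
```

In practice each call to `evaluate` is expensive (a full fine-tuning run), so the grid is kept small; the abstract's "different sets of hyperparameters" corresponds to enumerating such a grid per model and dataset version.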

Authors (3)
  1. Alex John Quijano (2 papers)
  2. Sam Nguyen (4 papers)
  3. Juanita Ordonez (2 papers)
Citations (6)