Effectiveness of Deep Networks in NLP using BiDAF as an example architecture (2109.00074v1)

Published 31 Aug 2021 in cs.CL and cs.LG

Abstract: Question Answering with NLP has progressed through the evolution of advanced model architectures like BERT and BiDAF, and earlier word, character, and context-based embeddings. As BERT has leapfrogged the accuracy of models, an element of the next frontier may be the introduction of deep networks and an effective way to train them. In this context, I explored the effectiveness of deep networks, focusing on the model encoder layer of BiDAF. BiDAF, with its heterogeneous layers, provides the opportunity not only to explore the effectiveness of deep networks but also to evaluate whether refinements made in the lower layers are additive to refinements made in the upper layers of the model architecture. I believe the next great model in NLP will fold a solid language model like BERT into a composite architecture that brings in refinements beyond generic language modeling and has a more extensively layered architecture. I experimented with the Bypass network, Residual Highway network, and DenseNet architectures. In addition, I evaluated the effectiveness of ensembling the last few layers of the network. I also studied the difference that adding character embeddings to the word embeddings makes, and whether the effects are additive with deep networks. My studies indicate that deep networks are indeed effective in giving a boost, and that refinements in the lower layers, such as embeddings, carry over additively to the gains made through deep networks.
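The paper's implementation is not reproduced here, but the central idea of the abstract, deepening the BiDAF model encoder with gated skip connections such as a residual highway block, can be sketched roughly as follows. This is a minimal PyTorch sketch under my own assumptions: the class name ResidualHighwayBlock, the hidden size, and the stack depth are illustrative and are not taken from the paper.

import torch
import torch.nn as nn

class ResidualHighwayBlock(nn.Module):
    """One highway layer with an extra residual skip, for deepening an encoder."""

    def __init__(self, hidden_size):
        super().__init__()
        self.transform = nn.Linear(hidden_size, hidden_size)  # candidate activation H(x)
        self.gate = nn.Linear(hidden_size, hidden_size)       # transform gate T(x)

    def forward(self, x):
        h = torch.relu(self.transform(x))
        t = torch.sigmoid(self.gate(x))
        highway = t * h + (1.0 - t) * x   # standard highway mixing of new and old features
        return x + highway                # residual path around the whole block

# Stack several blocks to deepen a (hypothetical) model encoder layer.
deep_encoder = nn.Sequential(*[ResidualHighwayBlock(128) for _ in range(4)])
x = torch.randn(8, 50, 128)      # (batch, sequence length, hidden size)
out = deep_encoder(x)            # same shape; a more refined representation

Read loosely against the abstract's terminology, dropping the outer residual skip would leave a plain highway stack, while wiring each block's output to all later blocks by concatenation would move toward the DenseNet variant; the exact wiring the author used is not specified on this page.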

Citations (2)


Authors (1)