
Less is More: A Lightweight and Robust Neural Architecture for Discourse Parsing

Published 18 Oct 2022 in cs.CL (arXiv:2210.09537v2)

Abstract: Complex feature extractors are widely employed to build text representations. However, these complex feature extractors make NLP systems prone to overfitting, especially when the downstream training datasets are relatively small, as is the case for several discourse parsing tasks. We therefore propose an alternative lightweight neural architecture that removes multiple complex feature extractors and uses only learnable self-attention modules to indirectly exploit pretrained language models, so as to maximally preserve their generalizability. Experiments on three common discourse parsing tasks show that, powered by recent pretrained language models, the lightweight architecture consisting of only two self-attention layers obtains much better generalizability and robustness. Meanwhile, it achieves comparable or even better system performance with fewer learnable parameters and less processing time.
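The abstract's core idea, stacking only two learnable self-attention layers on top of frozen pretrained-LM representations, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the random matrix `h` stands in for pretrained-LM token embeddings, and the weight shapes and initialization are assumptions.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    # Scaled dot-product self-attention over a token sequence x: (seq_len, d).
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # softmax numeric stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v  # same shape as x

rng = np.random.default_rng(0)
d = 16
# Stand-in for frozen pretrained-LM embeddings of a 5-token span
# (a real system would take these from the LM's final hidden states).
h = rng.standard_normal((5, d))

# Two stacked learnable self-attention layers, mirroring the paper's
# "only two self-attention layers" design.
for _ in range(2):
    Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
    h = self_attention(h, Wq, Wk, Wv)

print(h.shape)  # (5, 16): contextualized representations for the parser head
```

Because the pretrained LM is only exploited indirectly (its representations are consumed, not fine-tuned through deep task-specific extractors), the number of learnable parameters is limited to the attention projections, which is the source of the parameter and runtime savings the abstract reports.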

Authors (2)
Citations (2)
