WHEN FLUE MEETS FLANG: Benchmarks and Large Pre-trained Language Model for Financial Domain (2211.00083v1)

Published 31 Oct 2022 in cs.CL, cs.AI, and cs.LG

Abstract: Pre-trained LLMs have shown impressive performance on a variety of tasks and domains. Previous research on financial LLMs usually employs a generic training scheme to train standard model architectures, without fully leveraging the richness of financial data. We propose a novel domain-specific Financial LANGuage model (FLANG) which uses financial keywords and phrases for better masking, together with a span boundary objective and an in-filling objective. Additionally, the evaluation benchmarks in the field have been limited. To this end, we contribute the Financial Language Understanding Evaluation (FLUE), an open-source comprehensive suite of benchmarks for the financial domain. These include new benchmarks across 5 NLP tasks in the financial domain as well as common benchmarks used in previous research. Experiments on these benchmarks suggest that our model outperforms those in the prior literature on a variety of NLP tasks. Our models, code, and benchmark data are publicly available on GitHub and Hugging Face.
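
To make the keyword-based masking idea concrete, here is a minimal sketch of preferential whole-phrase masking for masked language modeling. The phrase list, masking probabilities, and `[MASK]` convention below are illustrative assumptions for exposition, not the paper's actual implementation, which operates over a curated financial vocabulary and adds span boundary and in-filling objectives.

```python
# Hypothetical sketch: mask known financial phrases as whole spans at a
# higher rate than ordinary tokens, as in FLANG's keyword-aware masking.
# Phrase set and rates below are assumed for illustration only.
import random

FINANCIAL_PHRASES = {("net", "income"), ("interest", "rate"), ("cash", "flow")}
MASK = "[MASK]"
P_PHRASE = 0.5   # assumed: elevated rate for financial phrase spans
P_TOKEN = 0.15   # assumed: standard per-token MLM masking rate

def mask_tokens(tokens):
    """Return a masked copy of `tokens`, preferring whole financial phrases."""
    out = list(tokens)
    covered = set()
    # First pass: mask financial phrases as whole spans, left to right.
    i = 0
    while i < len(out):
        for phrase in FINANCIAL_PHRASES:
            n = len(phrase)
            if tuple(t.lower() for t in out[i:i + n]) == phrase:
                if random.random() < P_PHRASE:
                    for j in range(i, i + n):
                        out[j] = MASK
                        covered.add(j)
                i += n - 1  # skip past the matched span
                break
        i += 1
    # Second pass: standard random masking for the remaining tokens.
    for j, tok in enumerate(out):
        if j not in covered and random.random() < P_TOKEN:
            out[j] = MASK
    return out

print(mask_tokens("The company reported strong net income and stable cash flow".split()))
```

Masking domain phrases as contiguous spans, rather than independent tokens, forces the model to reconstruct whole financial concepts from context, which is the motivation the abstract gives for pairing this masking with span-level objectives.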

Authors (10)
  1. Raj Sanjay Shah (18 papers)
  2. Kunal Chawla (10 papers)
  3. Dheeraj Eidnani (2 papers)
  4. Agam Shah (21 papers)
  5. Wendi Du (1 paper)
  6. Sudheer Chava (20 papers)
  7. Natraj Raman (13 papers)
  8. Charese Smiley (10 papers)
  9. Jiaao Chen (31 papers)
  10. Diyi Yang (151 papers)
Citations (87)