PatentBERT: Patent Classification with Fine-Tuning a pre-trained BERT Model (1906.02124v2)

Published 14 May 2019 in cs.CL, cs.LG, and stat.ML

Abstract: In this work we focus on fine-tuning a pre-trained BERT model and applying it to patent classification. When applied to a large dataset of over two million patents, our approach outperforms the previous state of the art, a CNN with word embeddings. In addition, we use patent claims alone, without the other parts of the patent documents. Our contributions include: (1) a new state-of-the-art method for patent classification based on fine-tuning a pre-trained BERT model, (2) a large dataset, USPTO-3M, labeled at the CPC subclass level, together with SQL statements that future researchers can use, and (3) evidence that, contrary to conventional wisdom, patent claims alone are sufficient for the classification task.
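
The setup the abstract describes is multi-label classification: each patent's claim text is fed to a fine-tuned BERT encoder, and the model predicts one or more CPC subclasses. Below is a minimal sketch of that training step. It assumes the Hugging Face Transformers API as a modern stand-in for the original TensorFlow BERT code the authors used, and the claim text, subclass count, and label index are illustrative placeholders, not values from the paper.

```python
# Sketch: fine-tuning BERT for multi-label patent classification,
# in the spirit of PatentBERT. Hypothetical values are marked below.
import torch
from torch.optim import AdamW
from transformers import BertTokenizerFast, BertForSequenceClassification

NUM_CPC_SUBCLASSES = 656  # placeholder for the CPC subclass label count

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=NUM_CPC_SUBCLASSES,
    problem_type="multi_label_classification",  # BCE loss over subclasses
)

# Input is claim text only, per the paper's finding that claims alone
# suffice for classification.
claims = ["A method for transmitting data over a wireless network, comprising..."]
batch = tokenizer(claims, truncation=True, max_length=512,
                  padding=True, return_tensors="pt")

# Multi-hot target: a patent can belong to several CPC subclasses at once.
labels = torch.zeros(len(claims), NUM_CPC_SUBCLASSES)
labels[0, 42] = 1.0  # hypothetical subclass index

# One gradient step; in practice this runs over the full training set.
optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
outputs = model(**batch, labels=labels)  # uses BCEWithLogitsLoss internally
outputs.loss.backward()
optimizer.step()

# At inference, sigmoid + threshold yields the predicted subclass set.
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(**batch).logits)
predicted = (probs > 0.5).nonzero()
```

The sigmoid-per-label head (rather than a softmax) is what makes the task multi-label: each subclass is scored independently, so a patent can be assigned to several subclasses at once.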

Authors (2)
  1. Jieh-Sheng Lee (10 papers)
  2. Jieh Hsiang (7 papers)
Citations (87)
