
Semantic Similarity Matching for Patent Documents Using Ensemble BERT-related Model and Novel Text Processing Method (2401.06782v1)

Published 6 Jan 2024 in cs.CL and cs.AI

Abstract: In patent document analysis, assessing semantic similarity between phrases is a significant challenge, one that amplifies the inherent complexities of Cooperative Patent Classification (CPC) research. This study first situates these challenges, recognizing early CPC work and its struggles with language barriers and document intricacy, and then underscores the difficulties that persist in CPC research today. To overcome these challenges and bolster the CPC system, this paper presents two key innovations. First, it introduces an ensemble approach that combines four BERT-related models, improving semantic similarity accuracy through weighted averaging of their predictions. Second, it proposes a novel text preprocessing method tailored to patent documents, featuring a distinctive input structure with token scoring that helps capture semantic relationships during CPC context training, with BCELoss as the training objective. Experimental results on the U.S. Patent Phrase to Phrase Matching dataset establish the effectiveness of both the ensemble model and the novel text processing strategy.
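As a rough illustration of the weighted-averaging ensemble and BCELoss objective the abstract describes, the sketch below combines four hypothetical model scores. The model outputs, ensemble weights, and labels are invented for illustration; they are not values from the paper, whose actual models and tuned weights are not given in the abstract.

```python
import math

def bce_loss(preds, labels):
    """Binary cross-entropy over similarity scores in [0, 1]."""
    eps = 1e-7  # clamp to avoid log(0)
    return -sum(
        y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
        for p, y in zip(preds, labels)
    ) / len(preds)

# Hypothetical similarity scores from four BERT-related models
# (e.g. fine-tuned BERT, DeBERTa, ELECTRA variants) for three
# patent phrase pairs.
model_scores = [
    [0.82, 0.10, 0.55],
    [0.78, 0.05, 0.60],
    [0.85, 0.12, 0.50],
    [0.80, 0.08, 0.58],
]

# Illustrative ensemble weights; they should sum to 1.
weights = [0.3, 0.2, 0.3, 0.2]

# Weighted average of the per-model predictions for each pair.
ensemble = [
    sum(w * scores[i] for w, scores in zip(weights, model_scores))
    for i in range(len(model_scores[0]))
]

# Graded similarity labels in [0, 1], as in phrase-matching datasets.
labels = [1.0, 0.0, 0.5]
print(ensemble, bce_loss(ensemble, labels))
```

In this setup each model's score contributes in proportion to its weight, so a better-calibrated model can be given more influence over the final similarity estimate.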

Authors (5)
  1. Liqiang Yu (6 papers)
  2. Bo Liu (484 papers)
  3. Qunwei Lin (6 papers)
  4. Xinyu Zhao (54 papers)
  5. Chang Che (12 papers)
Citations (25)
