
Ignore Me But Don't Replace Me: Utilizing Non-Linguistic Elements for Pretraining on the Cybersecurity Domain (2403.10576v2)

Published 15 Mar 2024 in cs.CR, cs.CL, and cs.LG

Abstract: Cybersecurity information is often technically complex and relayed through unstructured text, making automation of cyber threat intelligence highly challenging. For such text domains that involve high levels of expertise, pretraining on in-domain corpora has been a popular method for LLMs to obtain domain expertise. However, cybersecurity texts often contain non-linguistic elements (such as URLs and hash values) that may be unsuitable for established pretraining methodologies. Previous work in other domains has removed or filtered such text as noise, but the effectiveness of these methods has not been investigated, especially in the cybersecurity domain. We propose different pretraining methodologies and evaluate their effectiveness through downstream tasks and probing tasks. Our proposed strategy (selective MLM and jointly training NLE token classification) outperforms the commonly taken approach of replacing non-linguistic elements (NLEs). We use our domain-customized methodology to train CyBERTuned, a cybersecurity domain LLM that outperforms other cybersecurity PLMs on most tasks.
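The strategy named in the abstract can be illustrated with a minimal sketch (not the authors' released code): selective MLM masks only linguistic tokens while leaving NLEs untouched in the input, and an NLE token-classification head is trained jointly with the MLM head. The encoder interface (HuggingFace-style, exposing `last_hidden_state`), the precomputed boolean `nle_mask`, and all other names below are assumptions for illustration.

```python
# Minimal sketch of selective MLM + joint NLE token classification.
# Assumed inputs: `nle_mask` is a boolean tensor marking NLE tokens
# (URLs, hashes, etc.); `encoder` is a HuggingFace-style model.
import torch
import torch.nn as nn
import torch.nn.functional as F

def selective_mlm_masking(input_ids, nle_mask, mask_token_id, mlm_prob=0.15):
    """Pick MLM targets only among linguistic tokens; NLE tokens are never masked."""
    labels = input_ids.clone()
    candidates = torch.rand(input_ids.shape, device=input_ids.device) < mlm_prob
    candidates &= ~nle_mask                    # "ignore me": NLEs are excluded from the MLM loss
    labels[~candidates] = -100                 # non-target positions are ignored by cross-entropy
    masked_inputs = input_ids.clone()
    masked_inputs[candidates] = mask_token_id  # "don't replace me": NLEs keep their original tokens
    return masked_inputs, labels               # (80/10/10 token replacement omitted for brevity)

class JointNLEPretrainer(nn.Module):
    """Encoder with an MLM head and an NLE token-classification head trained jointly."""
    def __init__(self, encoder, hidden_size, vocab_size, num_nle_types):
        super().__init__()
        self.encoder = encoder
        self.mlm_head = nn.Linear(hidden_size, vocab_size)
        self.nle_head = nn.Linear(hidden_size, num_nle_types)

    def forward(self, input_ids, attention_mask, mlm_labels, nle_labels):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        mlm_loss = F.cross_entropy(self.mlm_head(hidden).transpose(1, 2),
                                   mlm_labels, ignore_index=-100)
        nle_loss = F.cross_entropy(self.nle_head(hidden).transpose(1, 2),
                                   nle_labels, ignore_index=-100)
        return mlm_loss + nle_loss             # joint pretraining objective
```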

Authors (7)
  1. Eugene Jang (10 papers)
  2. Jian Cui (62 papers)
  3. Dayeon Yim (2 papers)
  4. Youngjin Jin (5 papers)
  5. Jin-Woo Chung (6 papers)
  6. Seungwon Shin (27 papers)
  7. Yongjae Lee (28 papers)
Citations (1)
