
Exploring the Limits of Transfer Learning with Unified Model in the Cybersecurity Domain (2302.10346v1)

Published 20 Feb 2023 in cs.CL, cs.AI, and cs.CR

Abstract: With the increase in cybersecurity vulnerabilities of software systems, the ways to exploit them are also increasing. Malware threats, irregular network interactions, and discussions of exploits in public forums are likewise on the rise. Automated approaches are necessary to identify these threats faster, to detect potentially relevant entities in arbitrary text, and to stay aware of software vulnerabilities. Applying NLP techniques in the cybersecurity domain can help achieve this. However, there are challenges: the diverse nature of texts in the cybersecurity domain, the unavailability of large-scale publicly available datasets, and the significant cost of hiring subject matter experts for annotation. One solution is building multi-task models that can be trained jointly with limited data. In this work, we introduce a generative multi-task model, Unified Text-to-Text Cybersecurity (UTS), trained on malware reports, phishing site URLs, programming code constructs, social media data, blogs, news articles, and public forum posts. We show that UTS improves performance on several cybersecurity datasets. We also show that, with a few examples, UTS can be adapted to novel unseen tasks and the nature of data.
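The abstract describes a generative multi-task model trained jointly on heterogeneous cybersecurity data. A common way to realize this (in the style of text-to-text models such as T5) is to serialize every task as an input/target string pair with a task prefix, so one model can train on a mixed stream of tasks. The sketch below is illustrative only; the task names, examples, and helper functions are assumptions, not the paper's actual data pipeline.

```python
# Illustrative sketch: framing heterogeneous cybersecurity tasks as
# "prefix: input -> target" text pairs so a single text-to-text model
# can train on them jointly. All task names and examples are hypothetical.

def to_text_pair(task: str, source: str, target: str) -> dict:
    """Serialize any task as a text-to-text pair with a task prefix."""
    return {"input": f"{task}: {source}", "target": target}

# Heterogeneous tasks mixed into one training stream.
examples = [
    to_text_pair("classify phishing url",
                 "http://paypa1-login.example.com/verify",
                 "phishing"),
    to_text_pair("extract malware entities",
                 "The dropper contacts evil.example over port 443.",
                 "evil.example"),
    to_text_pair("classify forum post",
                 "Selling zero-day exploit for a popular CMS, DM me.",
                 "exploit discussion"),
]

def multitask_batches(pairs: list, batch_size: int = 2):
    """Yield mixed-task batches so the model sees all tasks jointly."""
    for i in range(0, len(pairs), batch_size):
        yield pairs[i:i + batch_size]

batches = list(multitask_batches(examples))
```

Because every task shares one input/output format, adapting to a novel unseen task only requires writing a new prefix and a few serialized examples, which matches the few-shot adaptation the abstract claims.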

Authors (6)
  1. Kuntal Kumar Pal (13 papers)
  2. Kazuaki Kashihara (3 papers)
  3. Ujjwala Anantheswaran (6 papers)
  4. Kirby C. Kuznia (1 paper)
  5. Siddhesh Jagtap (1 paper)
  6. Chitta Baral (152 papers)
Citations (3)
