IoT-AMLHP: Aligned Multimodal Learning of Header-Payload Representations for Resource-Efficient Malicious IoT Traffic Classification (2504.14833v1)

Published 21 Apr 2025 in cs.NI and cs.CR

Abstract: Traffic classification is crucial for securing Internet of Things (IoT) networks. Deep learning-based methods can autonomously extract latent patterns from massive network traffic, demonstrating significant potential for IoT traffic classification tasks. However, the limited computational and storage resources of IoT devices make it challenging to deploy complex deep learning models. Existing methods rely heavily on either flow-level features or raw packet byte features. Flow-level features often require inspecting all or most of a traffic flow, leading to excessive resource consumption, while raw packet byte features fail to distinguish between headers and payloads, overlooking semantic differences and introducing noise from feature misalignment. Therefore, this paper proposes IoT-AMLHP, an aligned multimodal learning framework for resource-efficient malicious IoT traffic classification. First, the framework constructs a packet-wise header-payload representation by parsing packet headers and payload bytes, resulting in an aligned and standardized multimodal traffic representation that enhances the characterization of heterogeneous IoT traffic. Subsequently, the traffic representation is fed into a resource-efficient neural network comprising a multimodal feature extraction module and a multimodal fusion module. The extraction module employs efficient depthwise separable convolutions to capture multi-scale features from different modalities while maintaining a lightweight architecture. The fusion module adaptively captures complementary features from different modalities and effectively fuses multimodal features.
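
To make the first step concrete, the sketch below shows one way a packet could be split into aligned, fixed-length header and payload byte vectors. This is an illustrative reconstruction, not the paper's implementation: the use of scapy, the 40-byte header length, and the 256-byte payload length are assumptions, and the paper's own parser presumably extracts and standardizes specific protocol fields.

```python
# Hypothetical sketch of building an aligned header/payload byte representation
# per packet. The fixed lengths below (40 header bytes, 256 payload bytes) are
# illustrative assumptions, not the paper's specification.
import numpy as np
from scapy.all import rdpcap, IP, Raw

HEADER_LEN, PAYLOAD_LEN = 40, 256  # assumed fixed lengths used for alignment


def to_fixed(buf: bytes, length: int) -> np.ndarray:
    """Truncate or zero-pad a byte string to a fixed length."""
    arr = np.frombuffer(buf[:length], dtype=np.uint8)
    return np.pad(arr, (0, length - len(arr)))


def packet_to_modalities(pkt):
    """Split one packet into aligned header-byte and payload-byte vectors."""
    if IP not in pkt:
        return None
    payload = bytes(pkt[Raw].load) if Raw in pkt else b""
    full = bytes(pkt[IP])
    header = full[: len(full) - len(payload)]  # bytes preceding the payload
    return to_fixed(header, HEADER_LEN), to_fixed(payload, PAYLOAD_LEN)


# Example usage on a capture file (path is hypothetical):
# samples = [m for p in rdpcap("iot_traffic.pcap") if (m := packet_to_modalities(p))]
```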

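The extraction and fusion stages could then look roughly like the following PyTorch sketch: each modality passes through multi-scale depthwise separable 1D convolutions, and a simple learned gate stands in for the paper's adaptive fusion module. The byte-embedding layer, channel and kernel sizes, and the gating scheme are all assumptions for illustration rather than the architecture reported in the paper.

```python
# Minimal two-branch sketch of the described architecture (depthwise separable
# convolutions per modality + adaptive fusion). All module names and sizes are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class DepthwiseSeparableConv1d(nn.Module):
    """Per-channel depthwise conv followed by a 1x1 pointwise conv."""

    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))


class ModalityBranch(nn.Module):
    """Multi-scale feature extractor for one modality (header or payload bytes)."""

    def __init__(self, embed_dim=32, out_ch=64, kernels=(3, 5, 7)):
        super().__init__()
        self.embed = nn.Embedding(256, embed_dim)  # one embedding per byte value
        self.scales = nn.ModuleList(
            [DepthwiseSeparableConv1d(embed_dim, out_ch, k) for k in kernels]
        )
        self.pool = nn.AdaptiveAvgPool1d(1)

    def forward(self, byte_seq):                   # byte_seq: (B, L) int64
        x = self.embed(byte_seq).transpose(1, 2)   # (B, embed_dim, L)
        feats = [self.pool(conv(x)).squeeze(-1) for conv in self.scales]
        return torch.cat(feats, dim=1)             # (B, out_ch * len(kernels))


class GatedFusionClassifier(nn.Module):
    """Adaptively weights the two modality features before classification."""

    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=1))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, header_feat, payload_feat):
        w = self.gate(torch.cat([header_feat, payload_feat], dim=1))  # (B, 2)
        fused = w[:, :1] * header_feat + w[:, 1:] * payload_feat
        return self.head(fused)


class IoTTrafficClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.header_branch = ModalityBranch()
        self.payload_branch = ModalityBranch()
        self.fusion = GatedFusionClassifier(feat_dim=64 * 3, num_classes=num_classes)

    def forward(self, header_bytes, payload_bytes):
        return self.fusion(self.header_branch(header_bytes),
                           self.payload_branch(payload_bytes))


# Example: a batch of 8 packets, 40 header bytes and 256 payload bytes each.
model = IoTTrafficClassifier(num_classes=10)
logits = model(torch.randint(0, 256, (8, 40)), torch.randint(0, 256, (8, 256)))
print(logits.shape)  # torch.Size([8, 10])
```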