MLMSA: Multi-Label Multi-Side-Channel-Information enabled Deep Learning Attacks on APUF Variants (2207.09744v2)

Published 20 Jul 2022 in cs.CR

Abstract: To improve the modeling resilience of silicon strong physical unclonable functions (PUFs), in particular the APUFs, which yield a very large number of challenge-response pairs (CRPs), a number of composite APUF variants such as the XOR-APUF, interpose PUF (iPUF), feed-forward APUF (FF-APUF), and OAX-APUF have been devised. When examining their security in terms of modeling resilience, the use of multiple information sources, such as power side-channel information (SCI) and/or reliability SCI for a given challenge, is under-explored, which poses a challenge to their supposed modeling resilience in practice. Building upon a multi-label/multi-head deep learning model architecture, this work proposes Multi-Label Multi-Side-channel-information enabled deep learning Attacks (MLMSA) to thoroughly evaluate the modeling resilience of the aforementioned APUF variants. Despite its simplicity, MLMSA can successfully break large-scale APUF variants, which has not previously been achieved. More precisely, MLMSA breaks the 128-stage 30-XOR-APUF, the (9, 9)- and (2, 18)-iPUFs, and the (2, 2, 30)-OAX-APUF when CRPs, power SCI, and reliability SCI are used concurrently. It breaks the 128-stage 12-XOR-APUF and the (2, 2, 9)-OAX-APUF even when only the easy-to-obtain reliability SCI and CRPs are exploited. The 128-stage six-loop FF-APUF and one-loop 20-XOR-FF-APUF can be broken by simultaneously using reliability SCI and CRPs. All of these attacks normally complete within an hour on a standard personal computer. Therefore, MLMSA is a useful technique for evaluating other existing or emerging strong PUF designs.
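As background for the modeling attacks described in the abstract, an APUF is commonly modeled with a linear additive delay model: an n-stage challenge is linearized into an (n+1)-dimensional parity feature vector, and the response is the sign of the weighted sum of those features; a k-XOR-APUF XORs the responses of k independent APUFs. The following is a minimal NumPy sketch of CRP generation under that standard model (the sizes and random weights here are illustrative, not the paper's instances, and this is not the MLMSA attack itself):

```python
import numpy as np

def parity_features(challenges):
    # Standard APUF linearization: Phi_i = prod_{j >= i} (1 - 2*c_j),
    # plus a constant 1 feature, giving an (n+1)-dim vector per challenge.
    signs = 1 - 2 * challenges  # map bits 0/1 -> +1/-1
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]  # suffix products
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

def apuf_response(weights, challenges):
    # Response is the sign of the accumulated delay difference.
    return (parity_features(challenges) @ weights > 0).astype(int)

def xor_apuf_response(weight_matrix, challenges):
    # k-XOR-APUF: XOR of k independent APUF responses to the same challenge.
    responses = np.stack([apuf_response(w, challenges) for w in weight_matrix])
    return np.bitwise_xor.reduce(responses, axis=0)

rng = np.random.default_rng(0)
n_stages, k = 128, 4  # illustrative sizes, not the paper's 30-XOR instance
W = rng.normal(size=(k, n_stages + 1))          # delay-difference weights
C = rng.integers(0, 2, size=(1000, n_stages))   # random challenges
r = xor_apuf_response(W, C)                     # simulated CRP responses
```

Because the parity transform makes a single APUF linearly separable, a lone APUF is easy to model; the composite variants above exist precisely to break this linearity, which is what multi-label, multi-SCI attacks such as MLMSA then target.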
