Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/152417
Title: A modeling attack resistant deception technique for securing lightweight-PUF based authentication
Authors: Gu, Chongyan
Chang, Chip Hong
Liu, Weiqiang
Yu, Shichao
Wang, Yale
O’Neill, Máire
Keywords: Engineering::Electrical and electronic engineering
Issue Date: 2020
Source: Gu, C., Chang, C. H., Liu, W., Yu, S., Wang, Y. & O’Neill, M. (2020). A modeling attack resistant deception technique for securing lightweight-PUF based authentication. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 40(6), 1183-1196. https://dx.doi.org/10.1109/TCAD.2020.3036807
Project: MOE2018-T1-001-131 (RG87/18)
Journal: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Abstract: The silicon physical unclonable function (PUF) has emerged as a promising spoof-proof solution for low-cost device authentication. Owing to the practical difficulty of preventing phishing over public networks or insecure communication channels, a simple PUF-based authentication protocol with unrestricted queries and transparent responses is vulnerable to modeling and replay attacks. Although the PUF itself is lightweight, the ancillary cryptographic primitives required to support secure handshaking in classical PUF-based authentication protocols are not necessarily so. In this paper, we present a modeling-attack-resistant PUF-based mutual authentication scheme that mitigates these practical limitations in applications where a resource-rich server authenticates a device, with no strong restriction imposed on the type of PUF design and no additional protection required on the binary channel used for authentication. Our scheme uses an active deception protocol to prevent machine learning (ML) attacks on a device built from a monolithic integration of a genuine strong PUF (SPUF), a fake PUF, a pseudo-random number generator (PRNG), a register, a binary counter, a comparator, and a simple controller. The hardware encapsulation makes the collection of challenge-response pairs (CRPs) easy for model building during enrollment but prohibitively time-consuming through the same interface once the device is deployed. A genuine server can perform mutual authentication with the device using a combined fresh challenge contributed by both the server and the device. The messages exchanged in the clear cannot be manipulated by an adversary to derive unused authentic CRPs. The adversary must either wait an impractically long time to collect enough real CRPs by directly querying the device, or the ML model derived from the collected CRPs will be poisoned and will expose the impostor when it is used to mount a spoofing attack.
The false PUF multiplexing is fortified against prediction of the waiting time by doubling the time penalty for every unsuccessful guess. Our implementation results on a field-programmable gate array (FPGA) device and security analysis corroborate the low hardware overhead and attack resistance of the proposed deception protocol.
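The deception mechanism described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden software analogue, not the paper's hardware design: hashes stand in for the genuine and fake PUF mappings, and an abstract timestamp stands in for the binary counter. It captures only the stated behaviour that a query arriving before the lockout expires is answered by the fake PUF and doubles the time penalty, so rapid CRP harvesting yields poisoned data.

```python
import hashlib


class DeceptionPUF:
    """Toy model (not the paper's RTL) of false PUF multiplexing with a
    doubling time penalty: premature queries get fake-PUF responses and
    exponentially lengthen the wait before genuine responses resume."""

    def __init__(self, secret=b"genuine", decoy=b"fake"):
        self.secret = secret      # stands in for the genuine strong PUF
        self.decoy = decoy        # stands in for the fake PUF
        self.penalty = 1          # current wait, in abstract time units
        self.unlocked_at = 0      # earliest time a genuine response is served

    def _respond(self, key, challenge):
        # A keyed hash stands in for a PUF's challenge-response mapping.
        return hashlib.sha256(key + challenge).hexdigest()

    def query(self, challenge, t):
        """Answer a challenge issued at abstract time t."""
        if t < self.unlocked_at:
            # Premature query: double the penalty, serve the decoy response.
            self.penalty *= 2
            self.unlocked_at = t + self.penalty
            return self._respond(self.decoy, challenge)
        # Legitimately paced query: genuine response, start a fresh window.
        self.unlocked_at = t + self.penalty
        return self._respond(self.secret, challenge)
```

Under this model, an attacker issuing back-to-back queries sees the penalty grow as 2, 4, 8, ..., so collecting enough real CRPs for model building becomes impractically slow, while any model trained on the decoy responses is poisoned, consistent with the behaviour the abstract describes.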
URI: https://hdl.handle.net/10356/152417
ISSN: 0278-0070
DOI: 10.1109/TCAD.2020.3036807
Rights: © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/TCAD.2020.3036807
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections:EEE Journal Articles

Files in This Item:
File: A Modeling Attack Resistant Deception Technique.pdf
Size: 6.96 MB
Format: Adobe PDF (View/Open)


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.