ModuloNET: Neural Networks Meet Modular Arithmetic for Efficient Hardware Masking

Authors

  • Anuj Dubey North Carolina State University, Raleigh, US
  • Afzal Ahmad The Hong Kong University of Science and Technology, Hong Kong
  • Muhammad Adeel Pasha Lahore University of Management Sciences, Lahore, Pakistan
  • Rosario Cammarota Intel Labs, San Diego, US
  • Aydin Aysu North Carolina State University, Raleigh, US

DOI:

https://doi.org/10.46586/tches.v2022.i1.506-556

Keywords:

Neural networks, Side-channel attacks, Hardware masking

Abstract

Intellectual Property (IP) theft of trained machine learning (ML) models through side-channel attacks on inference engines is becoming a major threat. Several recent works have shown that such attacks can reverse engineer model internals, yet research on building defenses remains largely unexplored. There is a critical need to efficiently and securely adapt defenses from cryptography, such as masking, to ML frameworks. Existing works, however, have revealed that a straightforward adaptation of such defenses either provides only partial security or leads to high area overheads. To address those limitations, this work proposes a fundamentally new direction: constructing neural networks that are inherently more compatible with masking. The key idea is to use modular arithmetic in neural networks and then efficiently realize masking, in either Boolean or arithmetic fashion, depending on the type of neural network layer. We demonstrate our approach on edge-computing-friendly binarized neural networks (BNN) and show how to modify the training and inference of such a network to work with modular arithmetic without sacrificing accuracy. We then design novel masking gadgets using Domain-Oriented Masking (DOM) to efficiently mask the operations unique to ML, such as the activation function and the output-layer classification, and we prove their security in the glitch-extended probing model. Finally, we implement fully masked neural networks on an FPGA, show that they achieve similar latency while reducing FF and LUT costs over state-of-the-art protected implementations by 34.2% and 42.6%, respectively, and demonstrate their first-order side-channel security with up to 1M traces.
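
The core idea above can be illustrated with a minimal sketch of first-order arithmetic masking over a power-of-two modulus: a secret value is split into random shares that sum to it modulo 2^k, and any layer that is linear modulo 2^k can be computed on each share independently. The modulus k = 8, the toy weights, and the helper names below are illustrative assumptions; the paper's actual DOM gadgets and hardware design are not reproduced here.

```python
import secrets

# Sketch only: first-order arithmetic masking modulo 2^k (k = 8 is an assumption).
K = 8
MOD = 1 << K

def mask(x):
    """Split a secret x into two arithmetic shares that sum to x mod 2^k."""
    r = secrets.randbelow(MOD)          # uniformly random share
    return (x - r) % MOD, r

def unmask(shares):
    """Recombine shares to recover the secret value."""
    return sum(shares) % MOD

def masked_dot(weights, shared_inputs):
    """Compute a dot-product layer share-wise: because the layer is linear
    modulo 2^k, each share is processed independently with the public weights,
    and the results recombine to the unmasked dot product."""
    out = [0, 0]
    for w, (s0, s1) in zip(weights, shared_inputs):
        out[0] = (out[0] + w * s0) % MOD
        out[1] = (out[1] + w * s1) % MOD
    return tuple(out)

# Toy example: mask the inputs, compute the layer on shares, then unmask.
weights = [3, 250, 7]
inputs = [5, 17, 200]
shared = [mask(x) for x in inputs]
result = unmask(masked_dot(weights, shared))
expected = sum(w * x for w, x in zip(weights, inputs)) % MOD
assert result == expected
```

Non-linear operations such as the activation function and the output-layer classification do not distribute over shares this way, which is why the paper designs dedicated DOM-based gadgets (and Boolean masking where appropriate) for those layers.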

Published

2021-11-19

How to Cite

Dubey, A., Ahmad, A., Pasha, M. A., Cammarota, R., & Aysu, A. (2021). ModuloNET: Neural Networks Meet Modular Arithmetic for Efficient Hardware Masking. IACR Transactions on Cryptographic Hardware and Embedded Systems, 2022(1), 506–556. https://doi.org/10.46586/tches.v2022.i1.506-556

Section

Articles