Leading Degree: A Metric for Model Performance Evaluation and Hyperparameter Tuning in Deep Learning-Based Side-Channel Analysis
DOI: https://doi.org/10.46586/tches.v2025.i2.333-361

Keywords: Profiling side-channel analysis, Deep learning, Model performance evaluation, Hyperparameter tuning

Abstract
Side-channel analysis benefits greatly from deep learning techniques, which help attackers recover the secret key with fewer attack traces than before, but precisely measuring deep learning model performance, so as to obtain a high-performance model, remains a problem. Commonly used deep learning evaluation metrics such as accuracy and precision cannot meet this demand well, owing to their deviation in side-channel analysis. Classical side-channel evaluation metrics such as guessing entropy, success rate and TGE1 are not generic, because each effectively evaluates model performance in only one of two situations (whether or not the model manages to recover the secret key with the given attack traces), and they are not efficient, because they must be run multiple times to counteract randomness. To obtain an effective and generic side-channel evaluation metric, we investigate the deterministic component of power consumption and find that the elements of the score vector under a key approximately follow a linearly transformed chi-square distribution, and that certain wrong key hypotheses, which usually have top scores, greatly assist model performance evaluation. We then propose a new metric called Leading Degree (LD), together with a simplified version called LD-simplified, for measuring model performance. LD offers similar accuracy but much better generality and efficiency than the classical side-channel benchmark metric TGE1, and similar generality and efficiency but significantly better accuracy than recently proposed side-channel metrics such as Label Correlation and Cross Entropy Ratio. LD/LD-simplified can easily be deployed in early stopping to avoid overfitting, and we build a bridge between LD/LD-simplified and TGE1 by observing an exponential relationship between them, which significantly shortens the time needed to estimate TGE1.
Finally, we apply LD as a reward function to better address the reward-function design problem in reinforcement learning-based hyperparameter tuning for side-channel analysis, and obtain better CNN model architectures than the state-of-the-art models produced by previous hyperparameter tuning methods.
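To make the efficiency argument concrete, the classical baseline metric that the abstract contrasts with, guessing entropy, can be sketched as follows. This is a minimal illustration on synthetic score vectors, not the paper's LD metric: all names, sizes, and the bias parameter are hypothetical, and the repeated random subsampling shows why such classical metrics are costly to estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a profiling model's output: per-trace log-probability
# scores over 256 key hypotheses. The true key gets a small bias so that
# accumulating more traces separates it from the wrong hypotheses.
# (All values here are illustrative, not from the paper.)
N_TRACES, N_KEYS, TRUE_KEY = 500, 256, 42
scores = rng.normal(0.0, 1.0, size=(N_TRACES, N_KEYS))
scores[:, TRUE_KEY] += 0.3  # hypothetical bias toward the correct key

def guessing_entropy(scores, true_key, n_attack, n_repeats=100):
    """Average rank of the true key after accumulating n_attack trace scores.

    Classical definition: the attack-trace subset is resampled many times to
    counteract the randomness of any single ordering -- which is exactly why
    this metric is expensive to evaluate repeatedly during training.
    """
    ranks = []
    for _ in range(n_repeats):
        idx = rng.choice(len(scores), size=n_attack, replace=False)
        summed = scores[idx].sum(axis=0)       # accumulate per-key log-likelihoods
        order = np.argsort(summed)[::-1]       # best-scoring hypothesis first
        ranks.append(int(np.where(order == true_key)[0][0]))
    return float(np.mean(ranks))

ge = guessing_entropy(scores, TRUE_KEY, n_attack=300)
print(f"guessing entropy with 300 traces: {ge:.1f}")
```

A guessing entropy near 0 means the true key is ranked first on average; the `n_repeats` loop is the per-evaluation overhead that a single-pass metric like LD is designed to avoid.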
License
Copyright (c) 2025 Junfan Zhu, Jiqiang Lu

This work is licensed under a Creative Commons Attribution 4.0 International License.