TY - JOUR
T1 - Adversarial Robustness in Hybrid Quantum-Classical Deep Learning for Botnet DGA Detection
AU - Suryotrisongko, Hatma
AU - Musashi, Yasuo
AU - Tsuneda, Akio
AU - Sugitani, Kenichi
N1 - Publisher Copyright:
© 2022 Information Processing Society of Japan.
PY - 2022
Y1 - 2022
N2 - This paper aims to contribute to the adversarial defense research gap in the current state of the art of adversarial machine learning (ML) attacks and defenses. More specifically, it contributes to measuring the robustness of artificial intelligence (AI)/ML models against adversarial example attacks, which remains an open question in the cybersecurity domain and to an even greater extent for quantum computing-based AI/ML applications. We propose a new adversarial robustness measurement approach that computes statistical properties (such as average accuracy and t-test results) from the performance of quantum ML model experiments across various adversarial perturbation coefficient (attack strength) values. We argue that our proposed approach is suitable for practical use in realizing a quantum-safe world because, in the current era of noisy intermediate-scale quantum (NISQ) devices, quantum noise is complex and challenging to model and therefore complicates measurement and benchmarking. The second contribution of our study is a novel hardened hybrid quantum-classical deep learning (DL) model for botnet domain generation algorithm (DGA) detection, which employs adversarial training for model hardening to mitigate new types of unknown DGA adversaries, since new attack approaches arising from the cyber arms race must be anticipated. Our analysis shows that the hybrid quantum DL model is vulnerable to adversarial example attacks, with an average drop in accuracy of as much as 19%. We also found that our hardened model achieved superior performance, obtaining average accuracy gains as high as 5.9%. Furthermore, we found that the hybrid quantum-classical DL approach has the benefit of suppressing the negative impact of quantum noise on the classifier’s performance. We demonstrated how to apply our proposed measurement approach to evaluate our novel hybrid quantum DL model, and we highlight the adversarial robustness of our model against adversarial example attacks as evidence of the practical implications of our study for advancing quantum adversarial machine learning research towards a quantum-safe world.
AB - This paper aims to contribute to the adversarial defense research gap in the current state of the art of adversarial machine learning (ML) attacks and defenses. More specifically, it contributes to measuring the robustness of artificial intelligence (AI)/ML models against adversarial example attacks, which remains an open question in the cybersecurity domain and to an even greater extent for quantum computing-based AI/ML applications. We propose a new adversarial robustness measurement approach that computes statistical properties (such as average accuracy and t-test results) from the performance of quantum ML model experiments across various adversarial perturbation coefficient (attack strength) values. We argue that our proposed approach is suitable for practical use in realizing a quantum-safe world because, in the current era of noisy intermediate-scale quantum (NISQ) devices, quantum noise is complex and challenging to model and therefore complicates measurement and benchmarking. The second contribution of our study is a novel hardened hybrid quantum-classical deep learning (DL) model for botnet domain generation algorithm (DGA) detection, which employs adversarial training for model hardening to mitigate new types of unknown DGA adversaries, since new attack approaches arising from the cyber arms race must be anticipated. Our analysis shows that the hybrid quantum DL model is vulnerable to adversarial example attacks, with an average drop in accuracy of as much as 19%. We also found that our hardened model achieved superior performance, obtaining average accuracy gains as high as 5.9%. Furthermore, we found that the hybrid quantum-classical DL approach has the benefit of suppressing the negative impact of quantum noise on the classifier’s performance. We demonstrated how to apply our proposed measurement approach to evaluate our novel hybrid quantum DL model, and we highlight the adversarial robustness of our model against adversarial example attacks as evidence of the practical implications of our study for advancing quantum adversarial machine learning research towards a quantum-safe world.
KW - adversarial ML
KW - adversarial defense
KW - adversarial training
KW - computer security
KW - cybersecurity
KW - quantum adversarial machine learning
KW - quantum computing
KW - quantum deep learning
UR - http://www.scopus.com/inward/record.url?scp=85139177199&partnerID=8YFLogxK
U2 - 10.2197/IPSJJIP.30.636
DO - 10.2197/IPSJJIP.30.636
M3 - Article
AN - SCOPUS:85139177199
SN - 0387-6101
VL - 30
SP - 636
EP - 644
JO - Journal of Information Processing
JF - Journal of Information Processing
ER -