Defeating deep learning based de-anonymization attacks with adversarial example
Yin, Haoyu; Liu, Yingjian; Li, Yue; Guo, Zhongwen; Wang, Yu
Genre
Journal article
Date
2023-09-27
Department
Computer and Information Sciences
DOI
https://doi.org/10.1016/j.jnca.2023.103733
Abstract
Deep learning (DL) technologies bring new threats to network security. Website fingerprinting attacks (WFA) using DL models can identify a victim's browsing activities even when they are protected by anonymity technologies. Unfortunately, traditional countermeasures (website fingerprinting defenses, WFD) fail to preserve privacy against DL models. In this paper, we apply adversarial example techniques to build new WFD under two settings: static analysis (SA) and dynamic perturbation (DP). Although the DP setting is closer to a real-world scenario, supervision for it is almost unavailable because upcoming traffic is uncertain and dependencies over time are hard to analyze. The SA setting relaxes the real-time constraints so that WFD can be implemented from a supervised learning perspective. We propose the Greedy Injection Attack (GIA), a novel adversarial method for WFD under the SA setting based on a zero-injection vulnerability test. Furthermore, Sniper is proposed to reduce the computational cost by using a DL model to approximate the zero-injection test. FCNSniper and RNNSniper are designed for the SA and DP settings, respectively. Experiments show that FCNSniper decreases the classification accuracy of the state-of-the-art WFA model by 96.57% with only 2.29% bandwidth overhead. The learned knowledge can be efficiently transferred to RNNSniper. As an indirect adversarial example attack, FCNSniper generalizes well to different target WFA models and datasets without suffering fatal failures under adversarial training.
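To make the greedy injection idea described in the abstract concrete, the following is a minimal, hypothetical sketch: dummy packets are inserted one at a time wherever they most reduce a target classifier's confidence in the true website label. The trace representation, the score_fn interface, and the budget parameter are illustrative assumptions, not the paper's actual GIA or Sniper implementation.

# Hypothetical sketch of greedy dummy-packet injection; all names and
# interfaces below are assumptions, not the paper's GIA/Sniper code.
import numpy as np

def greedy_inject(trace, true_label, score_fn, budget=50):
    """trace: sequence of packet directions (+1 outgoing, -1 incoming).
    score_fn(trace, label): the target classifier's confidence that the
    trace belongs to `label`. Dummy packets are inserted greedily wherever
    they most reduce that confidence, up to `budget` injections."""
    trace = list(trace)
    for _ in range(budget):
        best_score = score_fn(np.asarray(trace), true_label)
        best_trace = None
        # Try a single dummy packet of either direction at every position.
        for pos in range(len(trace) + 1):
            for direction in (+1, -1):
                candidate = trace[:pos] + [direction] + trace[pos:]
                score = score_fn(np.asarray(candidate), true_label)
                if score < best_score:
                    best_score, best_trace = score, candidate
        if best_trace is None:  # no further single injection lowers confidence
            break
        trace = best_trace
    return np.asarray(trace)

Because such a defense only adds dummy traffic, its bandwidth overhead is simply the fraction of injected packets, which is the quantity the abstract reports (2.29%).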
Citation
Haoyu Yin, Yingjian Liu, Yue Li, Zhongwen Guo, Yu Wang, Defeating deep learning based de-anonymization attacks with adversarial example, Journal of Network and Computer Applications, Volume 220, 2023, 103733, ISSN 1084-8045, https://doi.org/10.1016/j.jnca.2023.103733.
Citation to related work
Elsevier
Has part
Journal of Network and Computer Applications, Vol. 220