14.5.10.10.10 Black-Box Attacks, Robustness

Hang, J.[Jie], Han, K.[Keji], Chen, H.[Hui], Li, Y.[Yun],
Ensemble adversarial black-box attacks against deep learning systems,
PR(101), 2020, pp. 107184.
Elsevier DOI 2003
Black-box attack, Vulnerability, Ensemble adversarial attack, Diversity, Transferability BibRef

Correia-Silva, J.R.[Jacson Rodrigues], Berriel, R.F.[Rodrigo F.], Badue, C.[Claudine], de Souza, A.F.[Alberto F.], Oliveira-Santos, T.[Thiago],
Copycat CNN: Are random non-Labeled data enough to steal knowledge from black-box models?,
PR(113), 2021, pp. 107830.
Elsevier DOI 2103
Copy a CNN model. Deep learning, Convolutional neural network, Neural network attack, Stealing network knowledge, Knowledge distillation BibRef

Gragnaniello, D.[Diego], Marra, F.[Francesco], Verdoliva, L.[Luisa], Poggi, G.[Giovanni],
Perceptual quality-preserving black-box attack against deep learning image classifiers,
PRL(147), 2021, pp. 142-149.
Elsevier DOI 2106
Image classification, Face recognition, Adversarial attacks, Black-box BibRef

Li, N.N.[Nan-Nan], Chen, Z.Z.[Zhen-Zhong],
Toward Visual Distortion in Black-Box Attacks,
IP(30), 2021, pp. 6156-6167.
IEEE DOI 2107
Distortion, Visualization, Measurement, Loss measurement, Optimization, Convergence, Training, Black-box attack, classification BibRef

Lin, D.[Da], Wang, Y.G.[Yuan-Gen], Tang, W.X.[Wei-Xuan], Kang, X.G.[Xian-Gui],
Boosting Query Efficiency of Meta Attack With Dynamic Fine-Tuning,
SPLetters(29), 2022, pp. 2557-2561.
IEEE DOI 2301
Distortion, Optimization, Estimation, Training, Tuning, Closed box, Rate distortion theory, Adversarial attack, query efficiency BibRef

Cinà, A.E.[Antonio Emanuele], Torcinovich, A.[Alessandro], Pelillo, M.[Marcello],
A black-box adversarial attack for poisoning clustering,
PR(122), 2022, pp. 108306.
Elsevier DOI 2112
Adversarial learning, Unsupervised learning, Clustering, Robustness evaluation, Machine learning security BibRef

Ghosh, A.[Arka], Mullick, S.S.[Sankha Subhra], Datta, S.[Shounak], Das, S.[Swagatam], Das, A.K.[Asit Kr.], Mallipeddi, R.[Rammohan],
A black-box adversarial attack strategy with adjustable sparsity and generalizability for deep image classifiers,
PR(122), 2022, pp. 108279.
Elsevier DOI 2112
Adversarial attack, Black-box attack, Convolutional image classifier, Differential evolution, Sparse universal attack BibRef

Chen, S.[Sizhe], He, F.[Fan], Huang, X.L.[Xiao-Lin], Zhang, K.[Kun],
Relevance attack on detectors,
PR(124), 2022, pp. 108491.
Elsevier DOI 2203
Adversarial attack, Attack transferability, Black-box attack, Relevance map, Interpreters, Object detection BibRef

Wei, X.X.[Xing-Xing], Yan, H.Q.[Huan-Qian], Li, B.[Bo],
Sparse Black-Box Video Attack with Reinforcement Learning,
IJCV(130), No. 6, June 2022, pp. 1459-1473.
Springer DOI 2207
BibRef

Hu, Z.C.[Zi-Chao], Li, H.[Heng], Yuan, L.H.[Li-Heng], Cheng, Z.[Zhang], Yuan, W.[Wei], Zhu, M.[Ming],
Model scheduling and sample selection for ensemble adversarial example attacks,
PR(130), 2022, pp. 108824.
Elsevier DOI 2206
Adversarial example, Black-box attack, Model scheduling, Sample selection BibRef

Huang, L.F.[Li-Feng], Wei, S.X.[Shu-Xin], Gao, C.Y.[Cheng-Ying], Liu, N.[Ning],
Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks,
PR(131), 2022, pp. 108831.
Elsevier DOI 2208
Adversarial example, Transferability, Black-box attack, Defenses BibRef

Peng, B.[Bowen], Peng, B.[Bo], Yong, S.W.[Shao-Wei], Liu, L.[Li],
An Empirical Study of Fully Black-Box and Universal Adversarial Attack for SAR Target Recognition,
RS(14), No. 16, 2022, pp. xx-yy.
DOI Link 2208
BibRef

Li, C.[Chao], Yao, W.[Wen], Wang, H.D.[Han-Ding], Jiang, T.S.[Ting-Song],
Adaptive momentum variance for attention-guided sparse adversarial attacks,
PR(133), 2023, pp. 108979.
Elsevier DOI 2210
Deep neural networks, Black-box adversarial attacks, Transferability, Momentum variances BibRef

Li, T.[Tengjiao], Li, M.[Maosen], Yang, Y.H.[Yan-Hua], Deng, C.[Cheng],
Frequency domain regularization for iterative adversarial attacks,
PR(134), 2023, pp. 109075.
Elsevier DOI 2212
Adversarial examples, Transfer-based attack, Black-box attack, Frequency-domain characteristics BibRef

Dong, Y.P.[Yin-Peng], Cheng, S.Y.[Shu-Yu], Pang, T.Y.[Tian-Yu], Su, H.[Hang], Zhu, J.[Jun],
Query-Efficient Black-Box Adversarial Attacks Guided by a Transfer-Based Prior,
PAMI(44), No. 12, December 2022, pp. 9536-9548.
IEEE DOI 2212
Estimation, Optimization, Analytical models, Numerical models, Deep learning, Approximation algorithms, Weight measurement, transferability BibRef

Zhao, C.L.[Cheng-Long], Ni, B.B.[Bing-Bing], Mei, S.B.[Shi-Bin],
Explore Adversarial Attack via Black Box Variational Inference,
SPLetters(29), 2022, pp. 2088-2092.
IEEE DOI 2211
Monte Carlo methods, Computational modeling, Probability distribution, Gaussian distribution, Bayes methods, Bayesian inference BibRef

Hu, C.[Cong], Xu, H.Q.[Hao-Qi], Wu, X.J.[Xiao-Jun],
Substitute Meta-Learning for Black-Box Adversarial Attack,
SPLetters(29), 2022, pp. 2472-2476.
IEEE DOI 2212
Training, Closed box, Task analysis, Signal processing algorithms, Generators, Classification algorithms, Data models, substitute training BibRef

Theagarajan, R.[Rajkumar], Bhanu, B.[Bir],
Privacy Preserving Defense For Black Box Classifiers Against On-Line Adversarial Attacks,
PAMI(44), No. 12, December 2022, pp. 9503-9520.
IEEE DOI 2212
Training, Perturbation methods, Bayes methods, Uncertainty, Deep learning, Privacy, Data models, Adversarial defense, privacy preserving defense BibRef

Sun, X.X.[Xu-Xiang], Cheng, G.[Gong], Li, H.[Hongda], Pei, L.[Lei], Han, J.W.[Jun-Wei],
On Single-Model Transferable Targeted Attacks: A Closer Look at Decision-Level Optimization,
IP(32), 2023, pp. 2972-2984.
IEEE DOI 2306
Optimization, Adversarial machine learning, Closed box, Sun, Measurement, Linear programming, Tuning, Adversarial attacks, balanced logit loss BibRef

Mao, Z.S.[Zhong-Shu], Lu, Y.Q.[Yi-Qin], Cheng, Z.[Zhe], Shen, X.[Xiong],
Enhancing transferability of adversarial examples with pixel-level scale variation,
SP:IC(118), 2023, pp. 117020.
Elsevier DOI 2310
Adversarial example, Transferability, Black box, Input transformation, Pixel level BibRef

Mumcu, F.[Furkan], Yilmaz, Y.[Yasin],
Sequential architecture-agnostic black-box attack design and analysis,
PR(147), 2024, pp. 110066.
Elsevier DOI 2312
Adversarial machine learning, Black-box attacks, Transferability of attacks, Vision transformers, Sequential hypothesis testing BibRef

Wang, D.H.[Dong-Hua], Yao, W.[Wen], Jiang, T.[Tingsong], Chen, X.Q.[Xiao-Qian],
Improving Transferability of Universal Adversarial Perturbation With Feature Disruption,
IP(33), 2024, pp. 722-737.
IEEE DOI 2402
Training, Perturbation methods, Closed box, Task analysis, Glass box, Data models, Linear programming, transferability of UAP BibRef

Yuan, Z.[Zheng], Zhang, J.[Jie], Jiang, Z.Y.[Zhao-Yan], Li, L.L.[Liang-Liang], Shan, S.G.[Shi-Guang],
Adaptive Perturbation for Adversarial Attack,
PAMI(46), No. 8, August 2024, pp. 5663-5676.
IEEE DOI 2407
Perturbation methods, Iterative methods, Adaptation models, Generators, Closed box, Security, Training, Adversarial attack, adaptive perturbation BibRef

Ran, R.[Ran], Wei, J.[Jiwei], Zhang, C.N.[Chao-Ning], Wang, G.Q.[Guo-Qing], Yang, Y.[Yang], Shen, H.T.[Heng Tao],
Adaptive Multi-scale Degradation-Based Attack for Boosting the Adversarial Transferability,
MultMed(26), 2024, pp. 10979-10990.
IEEE DOI 2412
Perturbation methods, Adaptation models, Closed box, Iterative methods, Computational modeling, Robustness, Glass box, transferability BibRef

Dai, X.L.[Xue-Long], Li, Y.J.[Yan-Jie], Duan, M.X.[Ming-Xing], Xiao, B.[Bin],
Diffusion Models as Strong Adversaries,
IP(33), 2024, pp. 6734-6747.
IEEE DOI 2501
Diffusion models, Data models, Closed box, Training, Threat modeling, Perturbation methods, Deep learning, Training data, Uncertainty, DNN BibRef

Hu, C.Y.[Cheng-Yin], Shi, W.W.[Wei-Wen], Tian, L.[Ling], Li, W.[Wen],
Adversarial catoptric light: An effective, stealthy and robust physical-world attack to DNNs,
IET-CV(18), No. 5, 2024, pp. 557-573.
DOI Link 2408
AdvCL, DNNs, effectiveness, physical attacks, robustness, stealthiness BibRef

Hu, C.Y.[Cheng-Yin], Shi, W.W.[Wei-Wen], Tian, L.[Ling], Li, W.[Wen],
Adversarial Neon Beam: A light-based physical attack to DNNs,
CVIU(238), 2024, pp. 103877.
Elsevier DOI Code:
WWW Link. 2312
DNNs, Black-box light-based physical attack, AdvNB, Effectiveness, Stealthiness, Robustness BibRef

Hu, C.Y.[Cheng-Yin], Shi, W.W.[Wei-Wen], Tian, L.[Ling],
Adversarial color projection: A projector-based physical-world attack to DNNs,
IVC(140), 2023, pp. 104861.
Elsevier DOI Code:
WWW Link. 2312
DNNs, Black-box projector-based physical attack, Adversarial color projection, Effectiveness, Stealthiness, Robustness BibRef

Shi, Y.C.[Yu-Cheng], Han, Y.H.[Ya-Hong], Hu, Q.H.[Qing-Hua], Yang, Y.[Yi], Tian, Q.[Qi],
Query-Efficient Black-Box Adversarial Attack With Customized Iteration and Sampling,
PAMI(45), No. 2, February 2023, pp. 2226-2245.
IEEE DOI 2301
Adaptation models, Optimization, Data models, Computational modeling, Gaussian noise, Trajectory, transfer-based attack BibRef

Zhang, Y.[Yu], Gong, Z.Q.[Zhi-Qiang], Zhang, Y.C.[Yi-Chuang], Bin, K.C.[Kang-Cheng], Li, Y.Q.[Yong-Qian], Qi, J.H.[Jia-Hao], Wen, H.[Hao], Zhong, P.[Ping],
Boosting transferability of physical attack against detectors by redistributing separable attention,
PR(138), 2023, pp. 109435.
Elsevier DOI 2303
Physical attack, Transferability, Multi-layer attention, Object detection, Black-box models BibRef

Yin, F.[Fei], Zhang, Y.[Yong], Wu, B.Y.[Bao-Yuan], Feng, Y.[Yan], Zhang, J.Y.[Jing-Yi], Fan, Y.B.[Yan-Bo], Yang, Y.J.[Yu-Jiu],
Generalizable Black-Box Adversarial Attack With Meta Learning,
PAMI(46), No. 3, March 2024, pp. 1804-1818.
IEEE DOI Code:
WWW Link. 2402
Perturbation methods, Closed box, Generators, Task analysis, Glass box, Training, Adaptation models, conditional distribution of perturbation BibRef

Feng, Y.[Yan], Wu, B.Y.[Bao-Yuan], Fan, Y.B.[Yan-Bo], Liu, L.[Li], Li, Z.F.[Zhi-Feng], Xia, S.T.[Shu-Tao],
Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution,
CVPR22(15074-15083)
IEEE DOI 2210
Training, Learning systems, Deep learning, Solid modeling, Perturbation methods, Computational modeling, Adversarial attack and defense BibRef

Lu, Y.T.[Yan-Tao], Ren, H.N.[Hai-Ning], Chai, W.H.[Wei-Heng], Velipasalar, S.[Senem], Li, Y.[Yilan],
Time-aware and task-transferable adversarial attack for perception of autonomous vehicles,
PRL(178), 2024, pp. 145-152.
Elsevier DOI 2402
Adversarial attack, Black-box, Perception, Real-time BibRef

Lu, Y.T.[Yan-Tao], Liu, N.[Ning], Li, Y.[Yilan], Chen, J.C.[Jin-Chao], Velipasalar, S.[Senem],
Cross-task and time-aware adversarial attack framework for perception of autonomous driving,
PR(165), 2025, pp. 111652.
Elsevier DOI 2505
Adversarial examples, Cross-task, Perception, Autonomous driving BibRef

Khedr, Y.M.[Yasmeen M.], Liu, X.[Xin], He, K.[Kun],
TransMix: Crafting highly transferable adversarial examples to evade face recognition models,
IVC(146), 2024, pp. 105022.
Elsevier DOI 2405
Adversarial examples, Attack transferability, Face verification, Data augmentation, Black-box attack BibRef

Huang, J.L.[Jie-Lun], Huang, G.H.[Guo-Heng], Zhang, X.H.[Xu-Hui], Yuan, X.C.[Xiao-Chen], Xie, F.F.[Fen-Fang], Pun, C.M.[Chi-Man], Zhong, G.[Guo],
Black-box reversible adversarial examples with invertible neural network,
IVC(147), 2024, pp. 105094.
Elsevier DOI 2406
Image restoration, Adversarial attack, Invertible neural network BibRef

Huang, X.S.[Xing-Sen], Miao, D.[Deshui], Wang, H.P.[Hong-Peng], Wang, Y.W.[Yao-Wei], Li, X.[Xin],
Context-Guided Black-Box Attack for Visual Tracking,
MultMed(26), 2024, pp. 8824-8835.
IEEE DOI 2408
Target tracking, Feature extraction, Visualization, Transformers, Interference, Image reconstruction, Robustness, Visual tracking, adversarial attack BibRef

Qian, X.L.[Xue-Lin], Wang, W.X.[Wen-Xuan], Jiang, Y.G.[Yu-Gang], Xue, X.Y.[Xiang-Yang], Fu, Y.W.[Yan-Wei],
Dynamic Routing and Knowledge Re-Learning for Data-Free Black-Box Attack,
PAMI(47), No. 1, January 2025, pp. 486-501.
IEEE DOI 2412
Data models, Training, Closed box, Adaptation models, Training data, Computational modeling, Generators, Logic gates, Data privacy, knowledge re-learning BibRef

Hu, C.[Cong], He, Z.C.[Zhi-Chao], Wu, X.J.[Xiao-Jun],
Query-efficient black-box ensemble attack via dynamic surrogate weighting,
PR(161), 2025, pp. 111263.
Elsevier DOI 2502
Black-box attack, Ensemble strategies, Deep neural networks, Transferable adversarial example, Image classification BibRef

Chen, J.Q.[Jian-Qi], Chen, H.[Hao], Chen, K.[Keyan], Zhang, Y.[Yilan], Zou, Z.X.[Zheng-Xia], Shi, Z.W.[Zhen-Wei],
Diffusion Models for Imperceptible and Transferable Adversarial Attack,
PAMI(47), No. 2, February 2025, pp. 961-977.
IEEE DOI 2501
Diffusion models, Perturbation methods, Closed box, Noise reduction, Solid modeling, Image color analysis, Glass box, transferable attack BibRef

Sun, X.X.[Xu-Xiang], Cheng, G.[Gong], Li, H.[Hongda], Lang, C.[Chunbo], Han, J.W.[Jun-Wei],
STDatav2: Accessing Efficient Black-Box Stealing for Adversarial Attacks,
PAMI(47), No. 4, April 2025, pp. 2429-2445.
IEEE DOI 2503
Closed box, Training, Data models, Generators, Glass box, Training data, Distributed databases, Optimization, surrogate training data (STData) BibRef

Sun, X.X.[Xu-Xiang], Cheng, G.[Gong], Li, H.[Hongda], Pei, L.[Lei], Han, J.W.[Jun-Wei],
Exploring Effective Data for Surrogate Training Towards Black-box Attack,
CVPR22(15334-15343)
IEEE DOI 2210
Training, Codes, Computational modeling, Semantics, Training data, Diversity methods, Adversarial attack and defense, retrieval BibRef

Li, C.[Chao], Jiang, T.[Tingsong], Wang, H.[Handing], Yao, W.[Wen], Wang, D.H.[Dong-Hua],
Optimizing Latent Variables in Integrating Transfer and Query Based Attack Framework,
PAMI(47), No. 1, January 2025, pp. 161-171.
IEEE DOI 2412
Perturbation methods, Generators, Closed box, Optimization, Glass box, Technological innovation, Particle swarm optimization, query BibRef

Cheng, R.[Riran], Wang, X.P.[Xu-Peng], Sohel, F.[Ferdous], Lei, H.[Hang],
Black-Box Explainability-Guided Adversarial Attack for 3D Object Tracking,
CirSysVideo(35), No. 7, July 2025, pp. 6881-6894.
IEEE DOI 2507
Target tracking, Perturbation methods, Solid modeling, Object tracking, Predictive models, Point cloud compression, imperceptibility BibRef

Wang, D.H.[Dong-Hua], Yao, W.[Wen], Jiang, T.[Tingsong], Li, C.[Chao], Chen, X.Q.[Xiao-Qian],
Universal Multi-View Black-Box Attack Against Object Detectors via Layout Optimization,
CirSysVideo(35), No. 7, July 2025, pp. 7129-7142.
IEEE DOI 2507
Perturbation methods, Optimization, Layout, Closed box, Detectors, Solid modeling, Glass box, Training, Security, Adversarial examples, object detection BibRef

Zheng, M.X.[Mei-Xi], Yan, X.C.[Xuan-Chen], Zhu, Z.[Zihao], Chen, H.R.[Hong-Rui], Wu, B.Y.[Bao-Yuan],
BlackboxBench: A Comprehensive Benchmark of Black-Box Adversarial Attacks,
PAMI(47), No. 9, September 2025, pp. 7867-7885.
IEEE DOI 2508
Closed box, Benchmark testing, Data models, Perturbation methods, Glass box, Training, Robustness, Pipelines, Libraries, transfer-based adversarial attacks BibRef

He, S.[Shuai], Zheng, S.[Shuntian], Ming, A.[Anlong], Wang, Y.[Yanni], Ma, H.D.[Hua-Dong],
DA3Attacker: A Diffusion-Based Attacker Against Aesthetics-Oriented Black-Box Models,
IP(34), 2025, pp. 5300-5311.
IEEE DOI 2509
Filters, Transformers, Closed box, Computational modeling, Training, Robustness, Adaptation models, Security, Predictive models, deep learning BibRef

Zhan, Y.[Yike], Zheng, B.L.[Bao-Lin], Liu, D.X.[Dong-Xin], Deng, B.[Boren], Yang, X.[Xu],
Exploring black-box adversarial attacks on Interpretable Deep Learning Systems,
CVIU(259), 2025, pp. 104423.
Elsevier DOI 2509
Interpretability in computer vision, Deep learning systems, Adversarial examples, Black-box attacks BibRef

Zeng, P.P.[Peng-Peng], Yuan, S.M.[Sheng-Ming], Zhang, Q.L.[Qi-Long], Deng, H.M.[Hui-Min], Xu, H.[Hui], Gao, L.[Lianli],
Staircase Sign Method: Boosting adversarial attacks by mitigating gradient distortion,
PR(170), 2026, pp. 111983.
Elsevier DOI Code:
WWW Link. 2509
Adversarial examples, Black-box transferability, Gradient manipulation BibRef

de Paz, E.G.G.[Erick G.G.], Vaquera-Huerta, H.[Humberto], Albores-Velasco, F.J.[Francisco Javier], Bauer-Mengelberg, J.R.[John R.], Romero-Padilla, J.M.[Juan Manuel],
Convex Partition: A Bayesian Regression Tree for Black-box Optimisation,
PRL(196), 2025, pp. 344-350.
Elsevier DOI 2509
Surrogate-based optimisation, Black-box optimisation, Regression trees BibRef

Yu, J.[Jimiao], Chen, H.[Honglong], Li, J.J.[Jun-Jian], Chen, L.H.[Ling-Han], Gao, Y.D.[Yu-Dong], Liu, W.F.[Wei-Feng], Zhang, L.[Lei],
Black-Box Adversarial Defense Based on Image Decomposition and Reconstruction,
MultMed(27), 2025, pp. 5909-5921.
IEEE DOI 2510
Training, Perturbation methods, Artificial neural networks, Robustness, Purification, Image reconstruction, Diffusion models, deep neural networks BibRef


Park, J.[Jeonghwan], McLaughlin, N.[Niall], Alouani, I.[Ihsen],
Mind the Gap: Detecting Black-box Adversarial Attacks in the Making through Query Update Analysis,
CVPR25(10235-10243)
IEEE DOI 2508
Measurement, Adaptation models, Noise, Closed box, Machine learning, Monitoring, Context modeling BibRef

Qiao, Y.Q.[Yan-Qi], Liu, D.[Dazhuang], Wang, R.[Rui], Liang, K.[Kaitai],
Low-Frequency Black-Box Backdoor Attack via Evolutionary Algorithm,
WACV25(7582-7592)
IEEE DOI 2505
Training, Perturbation methods, Closed box, Evolutionary computation, Simulated annealing, Inspection, Space exploration BibRef

Shukla, S.[Shubhi], Dalui, S.[Subhadeep], Alam, M.[Manaar], Datta, S.[Shubhajit], Mondal, A.[Arijit], Mukhopadhyay, D.[Debdeep], Chakrabarti, P.P.[Partha Pratim],
Guardian of the Ensembles: Introducing Pairwise Adversarially Robust Loss for Resisting Adversarial Attacks in DNN Ensembles,
WACV25(7205-7214)
IEEE DOI Code:
WWW Link. 2505
Training, Accuracy, Perturbation methods, Neural networks, Closed box, Robustness, Ensemble learning, decision-boundary diversity BibRef

Choi, J.I.[Jung Im], Lan, Q.Z.[Qi-Zhen], Tian, Q.[Qing],
Improving Deep Detector Robustness via Detection-Related Discriminant Maximization and Reorganization,
WACV25(1518-1527)
IEEE DOI 2505
Training, Location awareness, Visualization, Computational modeling, Perturbation methods, Closed box, detection-related discriminant optimization BibRef

Perla, N.K.[Neeresh Kumar], Hossain, M.I.[Md Iqbal], Sajeeda, A.[Afia], Shao, M.[Ming],
Are Exemplar-Based Class Incremental Learning Models Victim of Black-Box Poison Attacks?,
WACV25(6785-6794)
IEEE DOI 2505
Training, Incremental learning, Toxicology, Computational modeling, Closed box, Predictive models, Prediction algorithms, Robustness, adversarial machine learning BibRef

Zhang, X.W.[Xin-Wei], Zhang, T.Y.[Tian-Yuan], Zhang, Y.T.[Yi-Tong], Liu, S.C.[Shuang-Cheng],
Enhancing the Transferability of Adversarial Attacks with Stealth Preservation,
AML24(2915-2925)
IEEE DOI 2410
Visualization, Fuses, Computational modeling, Perturbation methods, Closed box, Iterative methods BibRef

Ye, M.[Muchao], Xu, X.[Xiang], Zhang, Q.[Qin], Wu, J.[Jonathan],
Sharpness-Aware Optimization for Real-World Adversarial Attacks for Diverse Compute Platforms with Enhanced Transferability,
AML24(2937-2946)
IEEE DOI 2410
Computational modeling, Perturbation methods, Closed box, Artificial neural networks, Pressing BibRef

Wu, H.[Han], Ou, G.[Guanyan], Wu, W.B.[Wei-Bin], Zheng, Z.[Zibin],
Improving Transferable Targeted Adversarial Attacks with Model Self-Enhancement,
CVPR24(24615-24624)
IEEE DOI Code:
WWW Link. 2410
Training, Codes, Perturbation methods, Design methodology, Closed box, Robustness BibRef

Tang, B.[Bowen], Wang, Z.[Zheng], Bin, Y.[Yi], Dou, Q.[Qi], Yang, Y.[Yang], Shen, H.T.[Heng Tao],
Ensemble Diversity Facilitates Adversarial Transferability,
CVPR24(24377-24386)
IEEE DOI Code:
WWW Link. 2410
Perturbation methods, Closed box, Stochastic processes, Reinforcement learning, Adversarial Attack BibRef

Dubinski, J.[Jan], Kowalczuk, A.[Antoni], Pawlak, S.[Stanislaw], Rokita, P.[Przemyslaw], Trzcinski, T.[Tomasz], Morawiecki, P.[Pawel],
Towards More Realistic Membership Inference Attacks on Large Diffusion Models,
WACV24(4848-4857)
IEEE DOI 2404
Training, Privacy, Data privacy, Computational modeling, Closed box, Reliability, Algorithms, Explainable, fair, accountable, ethical computer vision BibRef

Wang, Z.Y.[Zhen-Yi], Shen, L.[Li], Guo, J.F.[Jun-Feng], Duan, T.H.[Tie-Hang], Luan, S.[Siyu], Liu, T.L.[Tong-Liang], Gao, M.C.[Ming-Chen],
Training A Secure Model Against Data-free Model Extraction,
ECCV24(LXXIX: 323-340).
Springer DOI 2412
BibRef

Meng, L.Z.[Ling-Zhuang], Shao, M.[Mingwen], Qiao, Y.J.[Yuan-Jian], Liu, W.J.[Wen-Jie],
Inter-class Topology Alignment for Efficient Black-box Substitute Attacks,
ECCV24(XXXIV: 261-277).
Springer DOI 2412
BibRef

Yang, N.[Nan], Li, Z.[Zihan], Long, Z.[Zhen], Huang, X.L.[Xiao-Lin], Zhu, C.[Ce], Liu, Y.P.[Yi-Peng],
Efficient Black-Box Adversarial Attack on Deep Clustering Models,
ICIP24(1044-1049)
IEEE DOI 2411
Training, Search methods, Closed box, Clustering algorithms, Switches, Generators, Adversarial examples, Deep clustering, Generator adversarial network BibRef

Nayak, G.K.[Gaurav Kumar], Khatri, I.[Inder], Rawal, R.[Ruchit], Chakraborty, A.[Anirban],
Data-free Defense of Black Box Models Against Adversarial Attacks,
FaDE-TCV24(254-263)
IEEE DOI 2410
Training, Accuracy, Sensitivity, Noise, Training data, Predictive models, Black-box Defense, Wavelet Decomposition BibRef

Park, J.[Jeonghwan], Miller, P.[Paul], McLaughlin, N.[Niall],
Hard-label based Small Query Black-box Adversarial Attack,
WACV24(3974-3983)
IEEE DOI 2404
Computational modeling, Closed box, Benchmark testing, Predictive models, Prediction algorithms, Video recognition and understanding BibRef

Aich, A.[Abhishek], Li, S.[Shasha], Song, C.Y.[Cheng-Yu], Asif, M.S.[M. Salman], Krishnamurthy, S.V.[Srikanth V.], Roy-Chowdhury, A.K.[Amit K.],
Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks,
WACV23(1308-1318)
IEEE DOI 2302
Perturbation methods, Computational modeling, Closed box, Generators, Convolutional neural networks, Glass box, adversarial attack and defense methods BibRef

Hirose, Y.[Yudai], Ono, S.[Satoshi],
Black-box Adversarial Attack against Visual Interpreters for Deep Neural Networks,
MVA23(1-6)
DOI Link 2403
Adaptation models, Visualization, Perturbation methods, Machine vision, Closed box, Artificial neural networks, Predictive models BibRef

Baia, A.E.[Alina Elena], Poggioni, V.[Valentina], Cavallaro, A.[Andrea],
Black-Box Attacks on Image Activity Prediction and its Natural Language Explanations,
AROW23(3688-3697)
IEEE DOI 2401
BibRef

Zhang, Y.H.[Yi-Hua], Cai, R.[Ruisi], Chen, T.L.[Tian-Long], Reza, M.F.[Md Farhamdur], Rahmati, A.[Ali], Wu, T.F.[Tian-Fu], Dai, H.[Huaiyu],
CGBA: Curvature-aware Geometric Black-box Attack,
ICCV23(124-133)
IEEE DOI Code:
WWW Link. 2401
BibRef

Park, H.[Hojin], Park, J.[Jaewoo], Dong, X.[Xingbo], Teoh, A.B.J.[Andrew Beng Jin],
Towards Query Efficient and Generalizable Black-Box Face Reconstruction Attack,
ICIP23(1060-1064)
IEEE DOI 2312
BibRef

Han, G.J.[Gyo-Jin], Choi, J.[Jaehyun], Lee, H.[Haeil], Kim, J.[Junmo],
Reinforcement Learning-Based Black-Box Model Inversion Attacks,
CVPR23(20504-20513)
IEEE DOI 2309
BibRef

Williams, P.N.[Phoenix Neale], Li, K.[Ke],
Black-Box Sparse Adversarial Attack via Multi-Objective Optimisation,
CVPR23(12291-12301)
IEEE DOI 2309
BibRef

Zhao, A.[Anqi], Chu, T.[Tong], Liu, Y.[Yahao], Li, W.[Wen], Li, J.J.[Jing-Jing], Duan, L.X.[Li-Xin],
Minimizing Maximum Model Discrepancy for Transferable Black-box Targeted Attacks,
CVPR23(8153-8162)
IEEE DOI 2309
BibRef

Cai, Z.[Zikui], Tan, Y.[Yaoteng], Asif, M.S.[M. Salman],
Ensemble-based Blackbox Attacks on Dense Prediction,
CVPR23(4045-4055)
IEEE DOI 2309
BibRef

Zhang, C.N.[Chao-Ning], Benz, P.[Philipp], Karjauv, A.[Adil], Cho, J.W.[Jae Won], Zhang, K.[Kang], Kweon, I.S.[In So],
Investigating Top-k White-Box and Transferable Black-box Attack,
CVPR22(15064-15073)
IEEE DOI 2210
Measurement, Codes, Semantics, Adversarial attack and defense BibRef

Wang, B.H.[Bing-Hui], Li, Y.Q.[You-Qi], Zhou, P.[Pan],
Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees,
CVPR22(13369-13377)
IEEE DOI 2210
Bridges, Perturbation methods, Computational modeling, Graph neural networks, Task analysis, Adversarial attack and defense BibRef

Beetham, J.[James], Kardan, N.[Navid], Mian, A.[Ajmal], Shah, M.[Mubarak],
Detecting Compromised Architecture/Weights of a Deep Model,
ICPR22(2843-2849)
IEEE DOI 2212
Smoothing methods, Perturbation methods, Closed box, Detectors, Predictive models, Data models BibRef

Aithal, M.B.[Manjushree B.], Li, X.H.[Xiao-Hua],
Boundary Defense Against Black-box Adversarial Attacks,
ICPR22(2349-2356)
IEEE DOI 2212
Degradation, Limiting, Gaussian noise, Neural networks, Closed box, Reliability theory BibRef

Ji, Y.[Yimu], Ding, J.Y.[Jian-Yu], Chen, Z.Y.[Zhi-Yu], Wu, F.[Fei], Zhang, C.[Chi], Sun, Y.M.[Yi-Ming], Sun, J.[Jing], Liu, S.D.[Shang-Dong],
Simulator Attack+ for Black-Box Adversarial Attack,
ICIP22(636-640)
IEEE DOI 2211
Deep learning, Codes, Perturbation methods, Neural networks, Usability, Meta-learning, Adversarial Attack, Black-box Attack BibRef

Liang, S.Y.[Si-Yuan], Li, L.K.[Long-Kang], Fan, Y.B.[Yan-Bo], Jia, X.J.[Xiao-Jun], Li, J.Z.[Jing-Zhi], Wu, B.Y.[Bao-Yuan], Cao, X.C.[Xiao-Chun],
A Large-Scale Multiple-Objective Method for Black-box Attack Against Object Detection,
ECCV22(IV:619-636).
Springer DOI 2211
BibRef

Wang, D.[Dan], Wang, Y.G.[Yuan-Gen],
Decision-based Black-box Attack Specific to Large-size Images,
ACCV22(II:357-372).
Springer DOI 2307
BibRef

Na, D.B.[Dong-Bin], Ji, S.[Sangwoo], Kim, J.[Jong],
Unrestricted Black-box Adversarial Attack Using GAN with Limited Queries,
AdvRob22(467-482).
Springer DOI 2304
BibRef

Kim, W.J.[Woo Jae], Hong, S.[Seunghoon], Yoon, S.E.[Sung-Eui],
Diverse Generative Perturbations on Attention Space for Transferable Adversarial Attacks,
ICIP22(281-285)
IEEE DOI 2211
Codes, Perturbation methods, Stochastic processes, Generators, Space exploration, Adversarial examples, Black-box, Diversity BibRef

Wang, Y.X.[Yi-Xu], Li, J.[Jie], Liu, H.[Hong], Wang, Y.[Yan], Wu, Y.J.[Yong-Jian], Huang, F.Y.[Fei-Yue], Ji, R.R.[Rong-Rong],
Black-Box Dissector: Towards Erasing-Based Hard-Label Model Stealing Attack,
ECCV22(V:192-208).
Springer DOI 2211
BibRef

Tran, H.[Hoang], Lu, D.[Dan], Zhang, G.[Guannan],
Exploiting the Local Parabolic Landscapes of Adversarial Losses to Accelerate Black-Box Adversarial Attack,
ECCV22(V:317-334).
Springer DOI 2211
BibRef

Wang, T.[Tong], Yao, Y.[Yuan], Xu, F.[Feng], An, S.W.[Sheng-Wei], Tong, H.H.[Hang-Hang], Wang, T.[Ting],
An Invisible Black-Box Backdoor Attack Through Frequency Domain,
ECCV22(XIII:396-413).
Springer DOI 2211
BibRef

Zhou, L.J.[Lin-Jun], Cui, P.[Peng], Zhang, X.X.[Xing-Xuan], Jiang, Y.[Yinan], Yang, S.Q.[Shi-Qiang],
Adversarial Eigen Attack on BlackBox Models,
CVPR22(15233-15241)
IEEE DOI 2210
Jacobian matrices, Deep learning, Perturbation methods, Computational modeling, Training data, Data models, Optimization methods BibRef

Zhang, J.[Jie], Li, B.[Bo], Xu, J.H.[Jiang-He], Wu, S.[Shuang], Ding, S.H.[Shou-Hong], Zhang, L.[Lei], Wu, C.[Chao],
Towards Efficient Data Free Blackbox Adversarial Attack,
CVPR22(15094-15104)
IEEE DOI 2210
Data privacy, Computational modeling, Training data, Machine learning, Generative adversarial networks, Data models, Adversarial attack and defense BibRef

Wang, W.X.[Wen-Xuan], Qian, X.L.[Xue-Lin], Fu, Y.W.[Yan-Wei], Xue, X.Y.[Xiang-Yang],
DST: Dynamic Substitute Training for Data-free Black-box Attack,
CVPR22(14341-14350)
IEEE DOI 2210
Training, Adaptation models, Computational modeling, Neural networks, Training data, Logic gates, Adversarial attack and defense BibRef

Wang, W.X.[Wen-Xuan], Yin, B.J.[Bang-Jie], Yao, T.P.[Tai-Ping], Zhang, L.[Li], Fu, Y.W.[Yan-Wei], Ding, S.H.[Shou-Hong], Li, J.L.[Ji-Lin], Huang, F.Y.[Fei-Yue], Xue, X.Y.[Xiang-Yang],
Delving into Data: Effectively Substitute Training for Black-box Attack,
CVPR21(4759-4768)
IEEE DOI 2111
Training, Computational modeling, Training data, Distributed databases, Data visualization, Data models BibRef

Jia, S.[Shuai], Song, Y.B.[Yi-Bing], Ma, C.[Chao], Yang, X.K.[Xiao-Kang],
IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking,
CVPR21(6705-6714)
IEEE DOI 2111
Deep learning, Visualization, Correlation, Codes, Perturbation methods, Robustness BibRef

Ma, C.[Chen], Chen, L.[Li], Yong, J.H.[Jun-Hai],
Simulating Unknown Target Models for Query-Efficient Black-box Attacks,
CVPR21(11830-11839)
IEEE DOI 2111
Training, Deep learning, Codes, Computational modeling, Training data, Complexity theory BibRef

Maho, T.[Thibault], Furon, T.[Teddy], Le Merrer, E.[Erwan],
SurFree: a fast surrogate-free black-box attack,
CVPR21(10425-10434)
IEEE DOI 2111
Estimation, Focusing, Machine learning, Distortion, Convergence BibRef

Li, J.[Jie], Ji, R.R.[Rong-Rong], Chen, P.X.[Pei-Xian], Zhang, B.C.[Bao-Chang], Hong, X.P.[Xiao-Peng], Zhang, R.X.[Rui-Xin], Li, S.X.[Shao-Xin], Li, J.L.[Ji-Lin], Huang, F.Y.[Fei-Yue], Wu, Y.J.[Yong-Jian],
Aha! Adaptive History-driven Attack for Decision-based Black-box Models,
ICCV21(16148-16157)
IEEE DOI 2203
Dimensionality reduction, Adaptation models, Perturbation methods, Computational modeling, Optimization, Faces, BibRef

Zhang, C.N.[Chao-Ning], Benz, P.[Philipp], Karjauv, A.[Adil], Kweon, I.S.[In So],
Data-free Universal Adversarial Perturbation and Black-box Attack,
ICCV21(7848-7857)
IEEE DOI 2203
Training, Image segmentation, Limiting, Image recognition, Codes, Perturbation methods, Adversarial learning, BibRef

Liang, S.Y.[Si-Yuan], Wu, B.Y.[Bao-Yuan], Fan, Y.B.[Yan-Bo], Wei, X.X.[Xing-Xing], Cao, X.C.[Xiao-Chun],
Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection,
ICCV21(7677-7687)
IEEE DOI 2203
Costs, Perturbation methods, Detectors, Object detection, Predictive models, Search problems, Task analysis, Detection and localization in 2D and 3D BibRef

Yuan, J.[Jianhe], He, Z.H.[Zhi-Hai],
Consistency-Sensitivity Guided Ensemble Black-Box Adversarial Attacks in Low-Dimensional Spaces,
ICCV21(7758-7766)
IEEE DOI 2203
Deep learning, Sensitivity, Design methodology, Computational modeling, Neural networks, Task analysis, Recognition and classification BibRef

Lu, Y.T.[Yan-Tao], Du, X.Y.[Xue-Ying], Sun, B.K.[Bing-Kun], Ren, H.N.[Hai-Ning], Velipasalar, S.[Senem],
Fabricate-Vanish: An Effective and Transferable Black-Box Adversarial Attack Incorporating Feature Distortion,
ICIP21(809-813)
IEEE DOI 2201
Deep learning, Adaptation models, Image processing, Neural networks, Noise reduction, Distortion, Adversarial Examples BibRef

Kim, B.C.[Byeong Cheon], Yu, Y.J.[Young-Joon], Ro, Y.M.[Yong Man],
Robust Decision-Based Black-Box Adversarial Attack via Coarse-To-Fine Random Search,
ICIP21(3048-3052)
IEEE DOI 2201
Deep learning, Image processing, Estimation, Robustness, Optimization, Adversarial attack, black-box attack, decision-based, random search BibRef

Wang, H.P.[Hui-Po], Yu, N.[Ning], Fritz, M.[Mario],
Hijack-GAN: Unintended-Use of Pretrained, Black-Box GANs,
CVPR21(7868-7877)
IEEE DOI 2111
Industries, Codes, Image synthesis, Computational modeling, Process control, Aerospace electronics BibRef

Xiao, Y.R.[Yan-Ru], Wang, C.[Cong],
You See What I Want You to See: Exploring Targeted Black-Box Transferability Attack for Hash-based Image Retrieval Systems,
CVPR21(1934-1943)
IEEE DOI 2111
Codes, Image retrieval, Multimedia databases, Classification algorithms, Image storage BibRef

Li, X.D.[Xiao-Dan], Li, J.F.[Jin-Feng], Chen, Y.F.[Yue-Feng], Ye, S.[Shaokai], He, Y.[Yuan], Wang, S.H.[Shu-Hui], Su, H.[Hang], Xue, H.[Hui],
QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval,
CVPR21(3329-3338)
IEEE DOI 2111
Visualization, Databases, Image retrieval, Training data, Search engines, Loss measurement, Robustness BibRef

Dong, Y.P.[Yin-Peng], Yang, X.[Xiao], Deng, Z.J.[Zhi-Jie], Pang, T.Y.[Tian-Yu], Xiao, Z.H.[Zi-Hao], Su, H.[Hang], Zhu, J.[Jun],
Black-box Detection of Backdoor Attacks with Limited Information and Data,
ICCV21(16462-16471)
IEEE DOI 2203
Training, Deep learning, Neural networks, Training data, Predictive models, Prediction algorithms, Adversarial learning, BibRef

Byun, J.[Junyoung], Go, H.[Hyojun], Kim, C.[Changick],
On the Effectiveness of Small Input Noise for Defending Against Query-based Black-Box Attacks,
WACV22(3819-3828)
IEEE DOI 2202
Deep learning, Codes, Additives, Computational modeling, Neural networks, Estimation, Adversarial Attack and Defense Methods Deep Learning BibRef

Feng, X.J.[Xin-Jie], Yao, H.X.[Hong-Xun], Che, W.B.[Wen-Bin], Zhang, S.P.[Sheng-Ping],
An Effective Way to Boost Black-box Adversarial Attack,
MMMod20(I:393-404).
Springer DOI 2003
BibRef

Yang, C.L.[Cheng-Lin], Kortylewski, A.[Adam], Xie, C.[Cihang], Cao, Y.Z.[Yin-Zhi], Yuille, A.L.[Alan L.],
PatchAttack: A Black-box Texture-based Attack with Reinforcement Learning,
ECCV20(XXVI:681-698).
Springer DOI 2011
BibRef

Andriushchenko, M.[Maksym], Croce, F.[Francesco], Flammarion, N.[Nicolas], Hein, M.[Matthias],
Square Attack: A Query-efficient Black-box Adversarial Attack via Random Search,
ECCV20(XXIII:484-501).
Springer DOI 2011
BibRef

Li, J., Ji, R., Liu, H., Liu, J., Zhong, B., Deng, C., Tian, Q.,
Projection Probability-Driven Black-Box Attack,
CVPR20(359-368)
IEEE DOI 2008
Perturbation methods, Sensors, Optimization, Sparse matrices, Compressed sensing, Google, Neural networks BibRef

Li, H., Xu, X., Zhang, X., Yang, S., Li, B.,
QEBA: Query-Efficient Boundary-Based Blackbox Attack,
CVPR20(1218-1227)
IEEE DOI 2008
Perturbation methods, Estimation, Predictive models, Machine learning, Cats, Pipelines, Neural networks BibRef

Rahmati, A., Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal], Dai, H.,
GeoDA: A Geometric Framework for Black-Box Adversarial Attacks,
CVPR20(8443-8452)
IEEE DOI 2008
Perturbation methods, Estimation, Covariance matrices, Gaussian distribution, Measurement, Neural networks, Robustness BibRef

Brunner, T., Diehl, F., Le, M.T., Knoll, A.,
Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks,
ICCV19(4957-4965)
IEEE DOI 2004
application program interfaces, cloud computing, feature extraction, image classification, security of data, Training BibRef

Liu, Y.J.[Yu-Jia], Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal],
A Geometry-Inspired Decision-Based Attack,
ICCV19(4889-4897)
IEEE DOI 2004
Deals with adversarial attacks. geometry, image classification, image recognition, neural nets, security of data, black-box settings, Gaussian noise BibRef

Huang, Q., Katsman, I., Gu, Z., He, H., Belongie, S., Lim, S.,
Enhancing Adversarial Example Transferability With an Intermediate Level Attack,
ICCV19(4732-4741)
IEEE DOI 2004
cryptography, neural nets, optimisation, black-box transferability, source model, target models, adversarial examples, Artificial intelligence BibRef

Shi, Y.C.[Yu-Cheng], Wang, S.[Siyu], Han, Y.H.[Ya-Hong],
Curls and Whey: Boosting Black-Box Adversarial Attacks,
CVPR19(6512-6520).
IEEE DOI 2002
BibRef

Zhao, P., Liu, S., Chen, P., Hoang, N., Xu, K., Kailkhura, B., Lin, X.,
On the Design of Black-Box Adversarial Examples by Leveraging Gradient-Free Optimization and Operator Splitting Method,
ICCV19(121-130)
IEEE DOI 2004
Bayes methods, image classification, image retrieval, learning (artificial intelligence), optimisation, Estimation BibRef

Wang, S., Shi, Y., Han, Y.,
Universal Perturbation Generation for Black-box Attack Using Evolutionary Algorithms,
ICPR18(1277-1282)
IEEE DOI 1812
Perturbation methods, Evolutionary computation, Sociology, Statistics, Training, Neural networks, Robustness BibRef

Narodytska, N., Kasiviswanathan, S.,
Simple Black-Box Adversarial Attacks on Deep Neural Networks,
PRIV17(1310-1318)
IEEE DOI 1709
Knowledge engineering, Network architecture, Neural networks, Robustness, Training BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Adversarial Networks, Attacks, Defense, Surveys, Evaluations.


Last update: Oct 6, 2025 at 14:07:43