14.5.9.9.3 Adversarial Attacks

Adversarial Networks. Generative Networks. Attacks. GAN.
See also Countering Adversarial Attacks, Defense, Robustness.
See also Adversarial Networks, Adversarial Inputs, Generative Adversarial.
See also Camouflaged Object Detection, Camouflage.

Biggio, B.[Battista], Roli, F.[Fabio],
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning,
PR(84), 2018, pp. 317-331.
Elsevier DOI 1809
Award, Pattern Recognition. Adversarial machine learning, Evasion attacks, Poisoning attacks, Adversarial examples, Secure learning, Deep learning BibRef

Hang, J.[Jie], Han, K.[Keji], Chen, H.[Hui], Li, Y.[Yun],
Ensemble adversarial black-box attacks against deep learning systems,
PR(101), 2020, pp. 107184.
Elsevier DOI 2003
Black-box attack, Vulnerability, Ensemble adversarial attack, Diversity, Transferability BibRef

Croce, F.[Francesco], Rauber, J.[Jonas], Hein, M.[Matthias],
Scaling up the Randomized Gradient-Free Adversarial Attack Reveals Overestimation of Robustness Using Established Attacks,
IJCV(128), No. 4, April 2020, pp. 1028-1046.
Springer DOI 2004
BibRef
Earlier: A1, A3, Only:
A Randomized Gradient-Free Attack on ReLU Networks,
GCPR18(215-227).
Springer DOI 1905
Award, GCPR, HM. BibRef

Romano, Y.[Yaniv], Aberdam, A.[Aviad], Sulam, J.[Jeremias], Elad, M.[Michael],
Adversarial Noise Attacks of Deep Learning Architectures: Stability Analysis via Sparse-Modeled Signals,
JMIV(62), No. 3, April 2020, pp. 313-327.
Springer DOI 2004
BibRef

Ozbulak, U.[Utku], Gasparyan, M.[Manvel], de Neve, W.[Wesley], van Messem, A.[Arnout],
Perturbation analysis of gradient-based adversarial attacks,
PRL(135), 2020, pp. 313-320.
Elsevier DOI 2006
Adversarial attacks, Adversarial examples, Deep learning, Perturbation analysis BibRef
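
For orientation: most gradient-based attacks studied in the entries above and below descend from the one-step fast gradient sign method (FGSM). The following is a minimal PyTorch-style sketch, assuming an arbitrary differentiable classifier model and pixel values in [0, 1]; the function name and eps value are illustrative and not taken from any single paper here.

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=8 / 255):
        # One-step L-infinity attack: x_adv = x + eps * sign(grad_x loss).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()
        return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range

The iterative, momentum, and black-box variants cited throughout this section refine this basic signed-gradient step.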

Wan, S.[Sheng], Wu, T.Y.[Tung-Yu], Hsu, H.W.[Heng-Wei], Wong, W.H.[Wing Hung], Lee, C.Y.[Chen-Yi],
Feature Consistency Training With JPEG Compressed Images,
CirSysVideo(30), No. 12, December 2020, pp. 4769-4780.
IEEE DOI 2012
Deep neural networks are vulnerable to JPEG compression artifacts. Image coding, Distortion, Training, Transform coding, Robustness, Quantization (signal), Feature extraction, Compression artifacts, classification robustness BibRef

Che, Z., Borji, A., Zhai, G., Ling, S., Li, J., Tian, Y., Guo, G., Le Callet, P.,
Adversarial Attack Against Deep Saliency Models Powered by Non-Redundant Priors,
IP(30), 2021, pp. 1973-1988.
IEEE DOI 2101
Computational modeling, Perturbation methods, Redundancy, Task analysis, Visualization, Robustness, Neural networks, gradient estimation BibRef

Xu, Y., Du, B., Zhang, L.,
Assessing the Threat of Adversarial Examples on Deep Neural Networks for Remote Sensing Scene Classification: Attacks and Defenses,
GeoRS(59), No. 2, February 2021, pp. 1604-1617.
IEEE DOI 2101
Remote sensing, Neural networks, Deep learning, Perturbation methods, Feature extraction, Task analysis, scene classification BibRef

Correia-Silva, J.R.[Jacson Rodrigues], Berriel, R.F.[Rodrigo F.], Badue, C.[Claudine], de Souza, A.F.[Alberto F.], Oliveira-Santos, T.[Thiago],
Copycat CNN: Are random non-Labeled data enough to steal knowledge from black-box models?,
PR(113), 2021, pp. 107830.
Elsevier DOI 2103
Copy a CNN model. Deep learning, Convolutional neural network, Neural network attack, Stealing network knowledge, Knowledge distillation BibRef

Xiao, Y.[Yatie], Pun, C.M.[Chi-Man], Liu, B.[Bo],
Fooling deep neural detection networks with adaptive object-oriented adversarial perturbation,
PR(115), 2021, pp. 107903.
Elsevier DOI 2104
Object detection, Adversarial attack, Adaptive object-oriented perturbation BibRef

Yamanaka, K.[Koichiro], Takahashi, K.[Keita], Fujii, T.[Toshiaki], Matsumoto, R.[Ryutaroh],
Simultaneous Attack on CNN-Based Monocular Depth Estimation and Optical Flow Estimation,
IEICE(E104-D), No. 5, May 2021, pp. 785-788.
WWW Link. 2105
BibRef

Lin, H.Y.[Hsiao-Ying], Biggio, B.[Battista],
Adversarial Machine Learning: Attacks From Laboratories to the Real World,
Computer(54), No. 5, May 2021, pp. 56-60.
IEEE DOI 2106
Adversarial machine learning, Data models, Training data, Biological system modeling BibRef

Wang, B.[Bo], Zhao, M.[Mengnan], Wang, W.[Wei], Wei, F.[Fei], Qin, Z.[Zhan], Ren, K.[Kui],
Are You Confident That You Have Successfully Generated Adversarial Examples?,
CirSysVideo(31), No. 6, June 2021, pp. 2089-2099.
IEEE DOI 2106
Perturbation methods, Iterative methods, Computational modeling, Neural networks, Security, Training, Robustness, buffer BibRef

Gragnaniello, D.[Diego], Marra, F.[Francesco], Verdoliva, L.[Luisa], Poggi, G.[Giovanni],
Perceptual quality-preserving black-box attack against deep learning image classifiers,
PRL(147), 2021, pp. 142-149.
Elsevier DOI 2106
Image classification, Face recognition, Adversarial attacks, Black-box BibRef

Tang, S.L.[San-Li], Huang, X.L.[Xiao-Lin], Chen, M.J.[Ming-Jian], Sun, C.J.[Cheng-Jin], Yang, J.[Jie],
Adversarial Attack Type I: Cheat Classifiers by Significant Changes,
PAMI(43), No. 3, March 2021, pp. 1100-1109.
IEEE DOI 2102
Neural networks, Training, Aerospace electronics, Toy manufacturing industry, Sun, Face recognition, Task analysis, supervised variational autoencoder BibRef

Wang, L.[Lin], Yoon, K.J.[Kuk-Jin],
PSAT-GAN: Efficient Adversarial Attacks Against Holistic Scene Understanding,
IP(30), 2021, pp. 7541-7553.
IEEE DOI 2109
Task analysis, Perturbation methods, Visualization, Pipelines, Autonomous vehicles, Semantics, Generative adversarial networks, generative model BibRef

Mohamad-Nezami, O.[Omid], Chaturvedi, A.[Akshay], Dras, M.[Mark], Garain, U.[Utpal],
Pick-Object-Attack: Type-specific adversarial attack for object detection,
CVIU(211), 2021, pp. 103257.
Elsevier DOI 2110
Adversarial attack, Faster R-CNN, Deep learning, Image captioning BibRef

Qin, C.[Chuan], Wu, L.[Liang], Zhang, X.[Xinpeng], Feng, G.[Guorui],
Efficient Non-Targeted Attack for Deep Hashing Based Image Retrieval,
SPLetters(28), 2021, pp. 1893-1897.
IEEE DOI 2110
Codes, Perturbation methods, Hamming distance, Image retrieval, Training, Feature extraction, Databases, Adversarial example, image retrieval BibRef

Cinà, A.E.[Antonio Emanuele], Torcinovich, A.[Alessandro], Pelillo, M.[Marcello],
A black-box adversarial attack for poisoning clustering,
PR(122), 2022, pp. 108306.
Elsevier DOI 2112
Adversarial learning, Unsupervised learning, Clustering, Robustness evaluation, Machine learning security BibRef

Du, C.[Chuan], Zhang, L.[Lei],
Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network,
RS(13), No. 21, 2021, pp. xx-yy.
DOI Link 2112
BibRef

Ghosh, A.[Arka], Mullick, S.S.[Sankha Subhra], Datta, S.[Shounak], Das, S.[Swagatam], Das, A.K.[Asit Kr.], Mallipeddi, R.[Rammohan],
A black-box adversarial attack strategy with adjustable sparsity and generalizability for deep image classifiers,
PR(122), 2022, pp. 108279.
Elsevier DOI 2112
Adversarial attack, Black-box attack, Convolutional image classifier, Differential evolution, Sparse universal attack BibRef

Xia, P.F.[Peng-Fei], Niu, H.J.[Hong-Jing], Li, Z.Q.[Zi-Qiang], Li, B.[Bin],
On the receptive field misalignment in CAM-based visual explanations,
PRL(152), 2021, pp. 275-282.
Elsevier DOI 2112
Convolutional neural networks, Visual explanations, Class activation mapping, Receptive field misalignment, Adversarial marginal attack BibRef

Wang, H.J.[Hong-Jun], Li, G.B.[Guan-Bin], Liu, X.B.[Xiao-Bai], Lin, L.[Liang],
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning,
PAMI(44), No. 4, April 2022, pp. 1725-1737.
IEEE DOI 2203
Training, Monte Carlo methods, Space exploration, Robustness, Markov processes, Cats, Iterative methods, Adversarial example, robustness and safety of machine learning BibRef

Chen, S.[Sizhe], He, Z.B.[Zheng-Bao], Sun, C.J.[Cheng-Jin], Yang, J.[Jie], Huang, X.L.[Xiao-Lin],
Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet,
PAMI(44), No. 4, April 2022, pp. 2188-2197.
IEEE DOI 2203
Heating systems, Training, Neural networks, Perturbation methods, Semantics, Visualization, Error analysis, Adversarial attack, DAmageNet BibRef

Chen, S.[Sizhe], He, F.[Fan], Huang, X.L.[Xiao-Lin], Zhang, K.[Kun],
Relevance attack on detectors,
PR(124), 2022, pp. 108491.
Elsevier DOI 2203
Adversarial attack, Attack transferability, Black-box attack, Relevance map, Interpreters, Object detection BibRef

Kim, J.[Jinsub],
On Optimality of Deterministic Rules in Adversarial Bayesian Detection,
SPLetters(29), 2022, pp. 757-761.
IEEE DOI 2204
Bayes methods, Games, Zirconium, Markov processes, Detectors, Uncertainty, Training data, Adversarial Bayesian detection, input data falsification BibRef

Liang, Q.[Qi], Li, Q.[Qiang], Yang, S.[Song],
LP-GAN: Learning perturbations based on generative adversarial networks for point cloud adversarial attacks,
IVC(120), 2022, pp. 104370.
Elsevier DOI 2204
3D model, Point cloud, Adversarial attack, GAN BibRef

Li, Y.M.[Yi-Ming], Wen, C.C.[Cong-Cong], Juefei-Xu, F.[Felix], Feng, C.[Chen],
Fooling LiDAR Perception via Adversarial Trajectory Perturbation,
ICCV21(7878-7887)
IEEE DOI 2203
Point cloud compression, Wireless communication, Wireless sensor networks, Laser radar, Perturbation methods, Vision for robotics and autonomous vehicles BibRef

Wang, X.S.[Xiao-Sen], He, X.R.[Xuan-Ran], Wang, J.D.[Jing-Dong], He, K.[Kun],
Admix: Enhancing the Transferability of Adversarial Attacks,
ICCV21(16138-16147)
IEEE DOI 2203
Deep learning, Codes, Neural networks, Adversarial machine learning, Task analysis, Standards, Recognition and classification BibRef

Li, J.[Jie], Ji, R.R.[Rong-Rong], Chen, P.X.[Pei-Xian], Zhang, B.C.[Bao-Chang], Hong, X.P.[Xiao-Peng], Zhang, R.X.[Rui-Xin], Li, S.X.[Shao-Xin], Li, J.L.[Ji-Lin], Huang, F.Y.[Fei-Yue], Wu, Y.J.[Yong-Jian],
Aha! Adaptive History-driven Attack for Decision-based Black-box Models,
ICCV21(16148-16157)
IEEE DOI 2203
Dimensionality reduction, Adaptation models, Perturbation methods, Computational modeling, Optimization, Faces, BibRef

Chen, S.[Si], Kahla, M.[Mostafa], Jia, R.[Ruoxi], Qi, G.J.[Guo-Jun],
Knowledge-Enriched Distributional Model Inversion Attacks,
ICCV21(16158-16167)
IEEE DOI 2203
Training, Deep learning, Privacy, Codes, Computational modeling, Neural networks, Adversarial learning, Motion and tracking BibRef

Zhou, M.[Mo], Wang, L.[Le], Niu, Z.X.[Zhen-Xing], Zhang, Q.[Qilin], Xu, Y.H.[Ying-Hui], Zheng, N.N.[Nan-Ning], Hua, G.[Gang],
Practical Relative Order Attack in Deep Ranking,
ICCV21(16393-16402)
IEEE DOI 2203
Measurement, Deep learning, Correlation, Perturbation methods, Neural networks, Interference, Adversarial learning, Fairness, Image and video retrieval BibRef

Li, Y.[Yuezun], Li, Y.M.[Yi-Ming], Wu, B.Y.[Bao-Yuan], Li, L.K.[Long-Kang], He, R.[Ran], Lyu, S.W.[Si-Wei],
Invisible Backdoor Attack with Sample-Specific Triggers,
ICCV21(16443-16452)
IEEE DOI 2203
Training, Additive noise, Deep learning, Steganography, Image coding, Perturbation methods, Adversarial learning, Recognition and classification BibRef

Doan, K.[Khoa], Lao, Y.J.[Ying-Jie], Zhao, W.J.[Wei-Jie], Li, P.[Ping],
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks,
ICCV21(11946-11956)
IEEE DOI 2203
Deformable models, Visualization, Toxicology, Heuristic algorithms, Neural networks, Stochastic processes, Inspection, Neural generative models BibRef

Shafran, A.[Avital], Peleg, S.[Shmuel], Hoshen, Y.[Yedid],
Membership Inference Attacks are Easier on Difficult Problems,
ICCV21(14800-14809)
IEEE DOI 2203
Training, Image segmentation, Uncertainty, Semantics, Neural networks, Benchmark testing, Data models, Fairness, grouping and shape BibRef

Zhang, C.[Chaoning], Benz, P.[Philipp], Karjauv, A.[Adil], Kweon, I.S.[In So],
Data-free Universal Adversarial Perturbation and Black-box Attack,
ICCV21(7848-7857)
IEEE DOI 2203
Training, Image segmentation, Limiting, Image recognition, Codes, Perturbation methods, Adversarial learning, BibRef

Liang, S.Y.[Si-Yuan], Wu, B.Y.[Bao-Yuan], Fan, Y.[Yanbo], Wei, X.X.[Xing-Xing], Cao, X.C.[Xiao-Chun],
Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection,
ICCV21(7677-7687)
IEEE DOI 2203
Costs, Perturbation methods, Detectors, Object detection, Predictive models, Search problems, Task analysis, Detection and localization in 2D and 3D BibRef

Naseer, M.[Muzammal], Khan, S.[Salman], Hayat, M.[Munawar], Khan, F.S.[Fahad Shahbaz], Porikli, F.[Fatih],
On Generating Transferable Targeted Perturbations,
ICCV21(7688-7697)
IEEE DOI 2203
Codes, Perturbation methods, Computational modeling, Transformers, Linear programming, Generators, Adversarial learning, Recognition and classification BibRef

Chen, H.[Huili], Fu, C.[Cheng], Zhao, J.[Jishen], Koushanfar, F.[Farinaz],
ProFlip: Targeted Trojan Attack with Progressive Bit Flips,
ICCV21(7698-7707)
IEEE DOI 2203
Training, Runtime, Neurons, Neural networks, Random access memory, Predictive models, Laser modes, Adversarial learning, Optimization and learning methods BibRef

Rony, J.[Jérôme], Granger, E.[Eric], Pedersoli, M.[Marco], Ayed, I.B.[Ismail Ben],
Augmented Lagrangian Adversarial Attacks,
ICCV21(7718-7727)
IEEE DOI 2203
Computational modeling, Computational efficiency, Computational complexity, Adversarial learning, Optimization and learning methods BibRef

Yuan, Z.[Zheng], Zhang, J.[Jie], Jia, Y.[Yunpei], Tan, C.[Chuanqi], Xue, T.[Tao], Shan, S.G.[Shi-Guang],
Meta Gradient Adversarial Attack,
ICCV21(7728-7737)
IEEE DOI 2203
Philosophical considerations, Computer architecture, Task analysis, Adversarial learning, BibRef

Park, G.Y.[Geon Yeong], Lee, S.W.[Sang Wan],
Reliably fast adversarial training via latent adversarial perturbation,
ICCV21(7738-7747)
IEEE DOI 2203
Training, Costs, Perturbation methods, Linearity, Minimization, Computational efficiency, Adversarial learning, Recognition and classification BibRef

Tu, J.[James], Wang, T.[Tsunhsuan], Wang, J.[Jingkang], Manivasagam, S.[Sivabalan], Ren, M.[Mengye], Urtasun, R.[Raquel],
Adversarial Attacks On Multi-Agent Communication,
ICCV21(7748-7757)
IEEE DOI 2203
Deep learning, Fault tolerance, Protocols, Computational modeling, Neural networks, Fault tolerant systems, Robustness, Vision for robotics and autonomous vehicles BibRef

Yuan, J.[Jianhe], He, Z.H.[Zhi-Hai],
Consistency-Sensitivity Guided Ensemble Black-Box Adversarial Attacks in Low-Dimensional Spaces,
ICCV21(7758-7766)
IEEE DOI 2203
Deep learning, Sensitivity, Design methodology, Computational modeling, Neural networks, Task analysis, Recognition and classification BibRef

Feng, W.W.[Wei-Wei], Wu, B.Y.[Bao-Yuan], Zhang, T.Z.[Tian-Zhu], Zhang, Y.[Yong], Zhang, Y.D.[Yong-Dong],
Meta-Attack: Class-agnostic and Model-agnostic Physical Adversarial Attack,
ICCV21(7767-7776)
IEEE DOI 2203
Training, Deep learning, Image color analysis, Shape, Computational modeling, Neural networks, Adversarial learning, BibRef

Kim, J.Y.[Jae-Yeon], Hua, B.S.[Binh-Son], Nguyen, D.T.[Duc Thanh], Yeung, S.K.[Sai-Kit],
Minimal Adversarial Examples for Deep Learning on 3D Point Clouds,
ICCV21(7777-7786)
IEEE DOI 2203
Point cloud compression, Deep learning, Image color analysis, Perturbation methods, Semantics, Adversarial learning, Recognition and classification BibRef

Stutz, D.[David], Hein, M.[Matthias], Schiele, B.[Bernt],
Relating Adversarially Robust Generalization to Flat Minima,
ICCV21(7787-7797)
IEEE DOI 2203
Training, Correlation, Perturbation methods, Computational modeling, Robustness, Loss measurement, Optimization and learning methods BibRef

Li, C.[Chao], Gao, S.Q.[Shang-Qian], Deng, C.[Cheng], Liu, W.[Wei], Huang, H.[Heng],
Adversarial Attack on Deep Cross-Modal Hamming Retrieval,
ICCV21(2198-2207)
IEEE DOI 2203
Learning systems, Knowledge engineering, Deep learning, Correlation, Perturbation methods, Neural networks, Vision + other modalities BibRef

Duan, R.[Ranjie], Chen, Y.[Yuefeng], Niu, D.[Dantong], Yang, Y.[Yun], Qin, A.K., He, Y.[Yuan],
AdvDrop: Adversarial Attack to DNNs by Dropping Information,
ICCV21(7486-7495)
IEEE DOI 2203
Deep learning, Visualization, Neural networks, Robustness, Visual perception, Adversarial learning, BibRef

Xiang, Z.[Zhen], Miller, D.J.[David J.], Chen, S.[Siheng], Li, X.[Xi], Kesidis, G.[George],
A Backdoor Attack against 3D Point Cloud Classifiers,
ICCV21(7577-7587)
IEEE DOI 2203
Geometry, Point cloud compression, Training, Barium, Toxicology, Adversarial learning, Recognition and classification, Vision for robotics and autonomous vehicles BibRef

Hwang, J.[Jaehui], Kim, J.H.[Jun-Hyuk], Choi, J.H.[Jun-Ho], Lee, J.S.[Jong-Seok],
Just One Moment: Structural Vulnerability of Deep Action Recognition against One Frame Attack,
ICCV21(7648-7656)
IEEE DOI 2203
Analytical models, Perturbation methods, Task analysis, Adversarial learning, Action and behavior recognition BibRef

Moayeri, M.[Mazda], Feizi, S.[Soheil],
Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings,
ICCV21(7657-7666)
IEEE DOI 2203
Training, Adaptation models, Toxicology, Costs, Perturbation methods, Computational modeling, Adversarial learning, Transfer/Low-shot/Semi/Unsupervised Learning BibRef

Wang, Z.B.[Zhi-Bo], Guo, H.[Hengchang], Zhang, Z.F.[Zhi-Fei], Liu, W.X.[Wen-Xin], Qin, Z.[Zhan], Ren, K.[Kui],
Feature Importance-aware Transferable Adversarial Attacks,
ICCV21(7619-7628)
IEEE DOI 2203
Degradation, Limiting, Correlation, Computational modeling, Aggregates, Transforms, Adversarial learning, Explainable AI, Recognition and classification BibRef

Wang, X.[Xin], Lin, S.Y.[Shu-Yun], Zhang, H.[Hao], Zhu, Y.F.[Yu-Fei], Zhang, Q.S.[Quan-Shi],
Interpreting Attributions and Interactions of Adversarial Attacks,
ICCV21(1075-1084)
IEEE DOI 2203
Visualization, Costs, Perturbation methods, Estimation, Task analysis, Faces, Explainable AI, Adversarial learning BibRef

Kumar, C.[Chetan], Kumar, D.[Deepak], Shao, M.[Ming],
Generative Adversarial Attack on Ensemble Clustering,
WACV22(3839-3848)
IEEE DOI 2202
Clustering methods, Supervised learning, Clustering algorithms, Benchmark testing, Probabilistic logic, Semi- and Un- supervised Learning BibRef

Du, A.[Andrew], Chen, B.[Bo], Chin, T.J.[Tat-Jun], Law, Y.W.[Yee Wei], Sasdelli, M.[Michele], Rajasegaran, R.[Ramesh], Campbell, D.[Dillon],
Physical Adversarial Attacks on an Aerial Imagery Object Detector,
WACV22(3798-3808)
IEEE DOI 2202
Measurement, Deep learning, Satellites, Neural networks, Lighting, Detectors, Observers, Deep Learning -> Adversarial Learning, Adversarial Attack and Defense Methods BibRef

Zhao, B.Y.[Bing-Yin], Lao, Y.J.[Ying-Jie],
Towards Class-Oriented Poisoning Attacks Against Neural Networks,
WACV22(2244-2253)
IEEE DOI 2202
Training, Measurement, Computational modeling, Neural networks, Machine learning, Predictive models, Adversarial Attack and Defense Methods BibRef

Chen, Z.H.[Zhen-Hua], Wang, C.H.[Chu-Hua], Crandall, D.[David],
Semantically Stealthy Adversarial Attacks against Segmentation Models,
WACV22(2846-2855)
IEEE DOI 2202
Image segmentation, Perturbation methods, Computational modeling, Feature extraction, Context modeling, Grouping and Shape BibRef

Yin, M.J.[Ming-Jun], Li, S.[Shasha], Song, C.Y.[Cheng-Yu], Asif, M.S.[M. Salman], Roy-Chowdhury, A.K.[Amit K.], Krishnamurthy, S.V.[Srikanth V.],
ADC: Adversarial attacks against object Detection that evade Context consistency checks,
WACV22(2836-2845)
IEEE DOI 2202
Deep learning, Adaptation models, Computational modeling, Neural networks, Buildings, Detectors, Adversarial Attack and Defense Methods Object Detection/Recognition/Categorization BibRef

Lu, Y.T.[Yan-Tao], Du, X.Y.[Xue-Ying], Sun, B.K.[Bing-Kun], Ren, H.N.[Hai-Ning], Velipasalar, S.[Senem],
Fabricate-Vanish: An Effective and Transferable Black-Box Adversarial Attack Incorporating Feature Distortion,
ICIP21(809-813)
IEEE DOI 2201
Deep learning, Adaptation models, Image processing, Neural networks, Noise reduction, Distortion, Adversarial Examples BibRef

Ren, Y.K.[Yan-Kun], Li, L.F.[Long-Fei], Zhou, J.[Jun],
Simtrojan: Stealthy Backdoor Attack,
ICIP21(819-823)
IEEE DOI 2201
Deep learning, Training, Target recognition, Image processing, Buildings, Extraterrestrial measurements, deep learning BibRef

Li, X.R.[Xiao-Rui], Cui, W.Y.[Wei-Yu], Huang, J.W.[Jia-Wei], Wang, W.Y.[Wen-Yi], Chen, J.W.[Jian-Wen],
Regularized Intermediate Layers Attack: Adversarial Examples With High Transferability,
ICIP21(1904-1908)
IEEE DOI 2201
Image recognition, Filtering, Perturbation methods, Optimization methods, Convolutional neural networks, Transferability BibRef

Bai, T.[Tao], Zhao, J.[Jun], Zhu, J.[Jinlin], Han, S.[Shoudong], Chen, J.[Jiefeng], Li, B.[Bo], Kot, A.[Alex],
AI-GAN: Attack-Inspired Generation of Adversarial Examples,
ICIP21(2543-2547)
IEEE DOI 2201
Training, Image quality, Deep learning, Perturbation methods, Image processing, Generative adversarial networks, deep learning BibRef

Kim, B.C.[Byeong Cheon], Yu, Y.J.[Young-Joon], Ro, Y.M.[Yong Man],
Robust Decision-Based Black-Box Adversarial Attack via Coarse-To-Fine Random Search,
ICIP21(3048-3052)
IEEE DOI 2201
Deep learning, Image processing, Estimation, Robustness, Optimization, Adversarial attack, black-box attack, decision-based, random search BibRef

Abdelfattah, M.[Mazen], Yuan, K.[Kaiwen], Wang, Z.J.[Z. Jane], Ward, R.[Rabab],
Towards Universal Physical Attacks on Cascaded Camera-Lidar 3d Object Detection Models,
ICIP21(3592-3596)
IEEE DOI 2201
Geometry, Deep learning, Solid modeling, Laser radar, Image processing, Object detection, Adversarial attacks, deep learning BibRef

Gurulingan, N.K.[Naresh Kumar], Arani, E.[Elahe], Zonooz, B.[Bahram],
UniNet: A Unified Scene Understanding Network and Exploring Multi-Task Relationships through the Lens of Adversarial Attacks,
DeepMTL21(2239-2248)
IEEE DOI 2112
Shape, Semantics, Neural networks, Information sharing, Estimation, Object detection BibRef

Ding, Y.Z.[Yu-Zhen], Thakur, N.[Nupur], Li, B.X.[Bao-Xin],
AdvFoolGen: Creating Persistent Troubles for Deep Classifiers,
AROW21(142-151)
IEEE DOI 2112
Measurement, Deep learning, Neural networks, Buildings, Gaussian distribution BibRef

Boloor, A.[Adith], Wu, T.[Tong], Naughton, P.[Patrick], Chakrabarti, A.[Ayan], Zhang, X.[Xuan], Vorobeychik, Y.[Yevgeniy],
Can Optical Trojans Assist Adversarial Perturbations?,
AROW21(122-131)
IEEE DOI 2112
Perturbation methods, Neural networks, Pipelines, Optical device fabrication, Cameras, Optical imaging, Trojan horses BibRef

Gnanasambandam, A.[Abhiram], Sherman, A.M.[Alex M.], Chan, S.H.[Stanley H.],
Optical Adversarial Attack,
AROW21(92-101)
IEEE DOI 2112
Integrated optics, Computational modeling, Lighting, Optical imaging BibRef

Lennon, M.[Max], Drenkow, N.[Nathan], Burlina, P.[Phil],
Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose?,
AROW21(112-121)
IEEE DOI 2112
Measurement, Training, Heating systems, Sensitivity analysis, Conferences BibRef

Yu, Y.R.[Yun-Rui], Gao, X.T.[Xi-Tong], Xu, C.Z.[Cheng-Zhong],
LAFEAT: Piercing Through Adversarial Defenses with Latent Features,
CVPR21(5731-5741)
IEEE DOI 2111
Degradation, Schedules, Computational modeling, Perturbation methods, Lattices, Robustness BibRef

Wang, H.P.[Hui-Po], Yu, N.[Ning], Fritz, M.[Mario],
Hijack-GAN: Unintended-Use of Pretrained, Black-Box GANs,
CVPR21(7868-7877)
IEEE DOI 2111
Industries, Codes, Image synthesis, Computational modeling, Process control, Aerospace electronics BibRef

Wang, X.S.[Xiao-Sen], He, K.[Kun],
Enhancing the Transferability of Adversarial Attacks through Variance Tuning,
CVPR21(1924-1933)
IEEE DOI 2111
Deep learning, Codes, Perturbation methods, Computational modeling, Pattern recognition, Iterative methods BibRef

Pony, R.[Roi], Naeh, I.[Itay], Mannor, S.[Shie],
Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks,
CVPR21(515-524)
IEEE DOI 2111
Deep learning, Perturbation methods, Observers, Pattern recognition, Image classification BibRef

Xiao, Y.[Yanru], Wang, C.[Cong],
You See What I Want You to See: Exploring Targeted Black-Box Transferability Attack for Hash-based Image Retrieval Systems,
CVPR21(1934-1943)
IEEE DOI 2111
Codes, Image retrieval, Multimedia databases, Pattern recognition, Classification algorithms, Image storage BibRef

Rampini, A.[Arianna], Pestarini, F.[Franco], Cosmo, L.[Luca], Melzi, S.[Simone], Rodolà, E.[Emanuele],
Universal Spectral Adversarial Attacks for Deformable Shapes,
CVPR21(3215-3225)
IEEE DOI 2111
Geometry, Shape, Perturbation methods, Predictive models, Eigenvalues and eigenfunctions, Robustness BibRef

Li, X.D.[Xiao-Dan], Li, J.F.[Jin-Feng], Chen, Y.[Yuefeng], Ye, S.[Shaokai], He, Y.[Yuan], Wang, S.[Shuhui], Su, H.[Hang], Xue, H.[Hui],
QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval,
CVPR21(3329-3338)
IEEE DOI 2111
Visualization, Databases, Image retrieval, Training data, Search engines, Loss measurement, Robustness BibRef

Wang, W.X.[Wen-Xuan], Yin, B.J.[Bang-Jie], Yao, T.P.[Tai-Ping], Zhang, L.[Li], Fu, Y.W.[Yan-Wei], Ding, S.H.[Shou-Hong], Li, J.L.[Ji-Lin], Huang, F.Y.[Fei-Yue], Xue, X.Y.[Xiang-Yang],
Delving into Data: Effectively Substitute Training for Black-box Attack,
CVPR21(4759-4768)
IEEE DOI 2111
Training, Computational modeling, Training data, Distributed databases, Data visualization, Data models BibRef

Jia, S.[Shuai], Song, Y.B.[Yi-Bing], Ma, C.[Chao], Yang, X.K.[Xiao-Kang],
IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking,
CVPR21(6705-6714)
IEEE DOI 2111
Deep learning, Visualization, Correlation, Codes, Perturbation methods, Robustness BibRef

Rezaei, S.[Shahbaz], Liu, X.[Xin],
On the Difficulty of Membership Inference Attacks,
CVPR21(7888-7896)
IEEE DOI 2111
Training, Analytical models, Codes, Computational modeling, Computer architecture, Pattern recognition BibRef

Kariyappa, S.[Sanjay], Prakash, A.[Atul], Qureshi, M.K.[Moinuddin K.],
MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation,
CVPR21(13809-13818)
IEEE DOI 2111
Training, Cloning, Estimation, Machine learning, Intellectual property, Predictive models, Data models BibRef

Duan, R.J.[Ran-Jie], Mao, X.F.[Xiao-Feng], Qin, A.K., Chen, Y.F.[Yue-Feng], Ye, S.[Shaokai], He, Y.[Yuan], Yang, Y.[Yun],
Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink,
CVPR21(16057-16066)
IEEE DOI 2111
Deep learning, Laser theory, Robustness, Pattern recognition, Laser beams BibRef

Wang, X.[Xunguang], Zhang, Z.[Zheng], Wu, B.Y.[Bao-Yuan], Shen, F.[Fumin], Lu, G.M.[Guang-Ming],
Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing,
CVPR21(16352-16361)
IEEE DOI 2111
Knowledge engineering, Codes, Hamming distance, Semantics, Image retrieval, Prototypes BibRef

Ma, C.[Chen], Chen, L.[Li], Yong, J.H.[Jun-Hai],
Simulating Unknown Target Models for Query-Efficient Black-box Attacks,
CVPR21(11830-11839)
IEEE DOI 2111
Training, Deep learning, Codes, Computational modeling, Training data, Complexity theory BibRef

Maho, T.[Thibault], Furon, T.[Teddy], Le Merrer, E.[Erwan],
SurFree: a fast surrogate-free black-box attack,
CVPR21(10425-10434)
IEEE DOI 2111
Estimation, Focusing, Machine learning, Distortion, Pattern recognition, Convergence BibRef

Zolfi, A.[Alon], Kravchik, M.[Moshe], Elovici, Y.[Yuval], Shabtai, A.[Asaf],
The Translucent Patch: A Physical and Universal Attack on Object Detectors,
CVPR21(15227-15236)
IEEE DOI 2111
Face recognition, Optimization methods, Detectors, Object detection, Cameras, Autonomous vehicles BibRef

Chen, Z.K.[Zhi-Kai], Xie, L.X.[Ling-Xi], Pang, S.M.[Shan-Min], He, Y.[Yong], Tian, Q.[Qi],
Appending Adversarial Frames for Universal Video Attack,
WACV21(3198-3207)
IEEE DOI 2106
Measurement, Perturbation methods, Semantics, Pipelines, Euclidean distance BibRef

Tan, Y.X.M.[Yi Xiang Marcus], Elovici, Y.[Yuval], Binder, A.[Alexander],
Adaptive Noise Injection for Training Stochastic Student Networks from Deterministic Teachers,
ICPR21(7587-7594)
IEEE DOI 2105
Training, Adaptation models, Adaptive systems, Computational modeling, Stochastic processes, Machine learning, stochastic networks BibRef

Cancela, B.[Brais], Bolón-Canedo, V.[Verónica], Alonso-Betanzos, A.[Amparo],
A delayed Elastic-Net approach for performing adversarial attacks,
ICPR21(378-384)
IEEE DOI 2105
Perturbation methods, Data preprocessing, Benchmark testing, Size measurement, Robustness, Pattern recognition, Security BibRef

Li, X.C.[Xiu-Chuan], Zhang, X.Y.[Xu-Yao], Yin, F.[Fei], Liu, C.L.[Cheng-Lin],
F-mixup: Attack CNNs From Fourier Perspective,
ICPR21(541-548)
IEEE DOI 2105
Training, Frequency-domain analysis, Perturbation methods, Neural networks, Robustness, Pattern recognition, High frequency BibRef

Grosse, K.[Kathrin], Smith, M.T.[Michael T.], Backes, M.[Michael],
Killing Four Birds with one Gaussian Process: The Relation between different Test-Time Attacks,
ICPR21(4696-4703)
IEEE DOI 2105
Analytical models, Reverse engineering, Training data, Gaussian processes, Data models, Classification algorithms, Pattern recognition BibRef

Barati, R.[Ramin], Safabakhsh, R.[Reza], Rahmati, M.[Mohammad],
Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks,
ICPR21(7036-7042)
IEEE DOI 2105
Training, Artificial neural networks, Pattern recognition, Proposals, Convergence, adversarial attack, robustness, adversarial training BibRef

Li, W.J.[Wen-Jie], Tondi, B.[Benedetta], Ni, R.R.[Rong-Rong], Barni, M.[Mauro],
Increased-confidence Adversarial Examples for Deep Learning Counter-forensics,
MMForWild20(411-424).
Springer DOI 2103
BibRef

Dong, X.S.[Xin-Shuai], Liu, H.[Hong], Ji, R.R.[Rong-Rong], Cao, L.J.[Liu-Juan], Ye, Q.X.[Qi-Xiang], Liu, J.Z.[Jian-Zhuang], Tian, Q.[Qi],
API-net: Robust Generative Classifier via a Single Discriminator,
ECCV20(XIII:379-394).
Springer DOI 2011
BibRef

Liu, A.S.[Ai-Shan], Huang, T.R.[Tai-Ran], Liu, X.L.[Xiang-Long], Xu, Y.T.[Yi-Tao], Ma, Y.Q.[Yu-Qing], Chen, X.[Xinyun], Maybank, S.J.[Stephen J.], Tao, D.C.[Da-Cheng],
Spatiotemporal Attacks for Embodied Agents,
ECCV20(XVII:122-138).
Springer DOI 2011
Code, Adversarial Attack.
WWW Link. BibRef

Fan, Y.[Yanbo], Wu, B.Y.[Bao-Yuan], Li, T.H.[Tuan-Hui], Zhang, Y.[Yong], Li, M.Y.[Ming-Yang], Li, Z.F.[Zhi-Feng], Yang, Y.[Yujiu],
Sparse Adversarial Attack via Perturbation Factorization,
ECCV20(XXII:35-50).
Springer DOI 2011
BibRef

Guo, J.F.[Jun-Feng], Liu, C.[Cong],
Practical Poisoning Attacks on Neural Networks,
ECCV20(XXVII:142-158).
Springer DOI 2011
BibRef

Liu, Y.F.[Yun-Fei], Ma, X.J.[Xing-Jun], Bailey, J.[James], Lu, F.[Feng],
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks,
ECCV20(X:182-199).
Springer DOI 2011
BibRef

Feng, X.J.[Xin-Jie], Yao, H.X.[Hong-Xun], Che, W.B.[Wen-Bin], Zhang, S.P.[Sheng-Ping],
An Effective Way to Boost Black-box Adversarial Attack,
MMMod20(I:393-404).
Springer DOI 2003
BibRef

Costales, R., Mao, C., Norwitz, R., Kim, B., Yang, J.,
Live Trojan Attacks on Deep Neural Networks,
AML-CV20(3460-3469)
IEEE DOI 2008
Trojan horses, Computational modeling, Neural networks, Machine learning BibRef

Haque, M., Chauhan, A., Liu, C., Yang, W.,
ILFO: Adversarial Attack on Adaptive Neural Networks,
CVPR20(14252-14261)
IEEE DOI 2008
Computational modeling, Energy consumption, Robustness, Neural networks, Adaptation models, Machine learning, Perturbation methods BibRef

Zhou, M., Wu, J., Liu, Y., Liu, S., Zhu, C.,
DaST: Data-Free Substitute Training for Adversarial Attacks,
CVPR20(231-240)
IEEE DOI 2008
Data models, Training, Machine learning, Perturbation methods, Task analysis, Estimation BibRef

Ganeshan, A.[Aditya], Vivek, B.S., Radhakrishnan, V.B.[Venkatesh Babu],
FDA: Feature Disruptive Attack,
ICCV19(8068-8078)
IEEE DOI 2004
Deal with adversarial attacks. image classification, image representation, learning (artificial intelligence), neural nets, optimisation, BibRef

Han, J., Dong, X., Zhang, R., Chen, D., Zhang, W., Yu, N., Luo, P., Wang, X.,
Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once,
ICCV19(5157-5166)
IEEE DOI 2004
convolutional neural nets, learning (artificial intelligence), pattern classification, security of data, Decoding BibRef

Deng, Y., Karam, L.J.,
Universal Adversarial Attack Via Enhanced Projected Gradient Descent,
ICIP20(1241-1245)
IEEE DOI 2011
Perturbation methods, Computational modeling, Training, Computer architecture, Convolutional neural networks, projected gradient descent (PGD) BibRef
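
The projected gradient descent (PGD) baseline that the entry above builds on iterates small signed-gradient steps and projects the result back into an eps-ball around the input. A plain per-image sketch follows; the paper's enhanced, universal variant differs, and alpha, steps, and eps here are illustrative defaults.

    import torch
    import torch.nn.functional as F

    def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            with torch.no_grad():
                x_adv = x_adv + alpha * grad.sign()
                x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to the eps-ball
                x_adv = x_adv.clamp(0, 1)
        return x_adv.detach()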

Sun, C., Chen, S., Cai, J., Huang, X.,
Type I Attack For Generative Models,
ICIP20(593-597)
IEEE DOI 2011
Image reconstruction, Decoding, Aerospace electronics, Generative adversarial networks, generative models BibRef

Yang, C.L.[Cheng-Lin], Kortylewski, A.[Adam], Xie, C.[Cihang], Cao, Y.Z.[Yin-Zhi], Yuille, A.L.[Alan L.],
Patchattack: A Black-box Texture-based Attack with Reinforcement Learning,
ECCV20(XXVI:681-698).
Springer DOI 2011
BibRef

Braunegg, A., Chakraborty, A.[Amartya], Krumdick, M.[Michael], Lape, N.[Nicole], Leary, S.[Sara], Manville, K.[Keith], Merkhofer, E.[Elizabeth], Strickhart, L.[Laura], Walmer, M.[Matthew],
Apricot: A Dataset of Physical Adversarial Attacks on Object Detection,
ECCV20(XXI:35-50).
Springer DOI 2011
BibRef

Zhang, H.[Hu], Zhu, L.C.[Lin-Chao], Zhu, Y.[Yi], Yang, Y.[Yi],
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior,
ECCV20(XX:240-256).
Springer DOI 2011
BibRef

Gao, L.L.[Lian-Li], Zhang, Q.L.[Qi-Long], Song, J.K.[Jing-Kuan], Liu, X.L.[Xiang-Long], Shen, H.T.[Heng Tao],
Patch-wise Attack for Fooling Deep Neural Network,
ECCV20(XXVIII:307-322).
Springer DOI 2011
BibRef

Andriushchenko, M.[Maksym], Croce, F.[Francesco], Flammarion, N.[Nicolas], Hein, M.[Matthias],
Square Attack: A Query-efficient Black-box Adversarial Attack via Random Search,
ECCV20(XXIII:484-501).
Springer DOI 2011
BibRef
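
To illustrate the score-based random-search idea behind the entry above: the attacker queries only model outputs (no gradients) and keeps a candidate perturbation whenever it increases the loss. The fixed square size, random window placement, and cross-entropy objective below are simplified stand-ins for the paper's square-size schedule and margin loss.

    import torch
    import torch.nn.functional as F

    @torch.no_grad()
    def random_square_attack(model, x, y, eps=8 / 255, queries=1000, s=8):
        # x: (1, C, H, W) image in [0, 1]; y: (1,) true label.
        _, c, h, w = x.shape
        x_adv = (x + eps * torch.randn_like(x).sign()).clamp(0, 1)
        best = F.cross_entropy(model(x_adv), y)
        for _ in range(queries):
            cand = x_adv.clone()
            i = torch.randint(0, h - s + 1, (1,)).item()
            j = torch.randint(0, w - s + 1, (1,)).item()
            # re-randomize one square window with a fresh +/-eps perturbation
            cand[0, :, i:i + s, j:j + s] = (
                x[0, :, i:i + s, j:j + s] + eps * torch.randn(c, 1, 1).sign()
            ).clamp(0, 1)
            loss = F.cross_entropy(model(cand), y)
            if loss > best:  # untargeted: accept only if the loss increases
                x_adv, best = cand, loss
        return x_adv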

Bai, J.W.[Jia-Wang], Chen, B.[Bin], Li, Y.M.[Yi-Ming], Wu, D.X.[Dong-Xian], Guo, W.W.[Wei-Wei], Xia, S.T.[Shu-Tao], Yang, E.H.[En-Hui],
Targeted Attack for Deep Hashing Based Retrieval,
ECCV20(I:618-634).
Springer DOI 2011
BibRef

Nakka, K.K.[Krishna Kanth], Salzmann, M.[Mathieu],
Indirect Local Attacks for Context-aware Semantic Segmentation Networks,
ECCV20(V:611-628).
Springer DOI 2011
BibRef

Wu, Z.X.[Zu-Xuan], Lim, S.N.[Ser-Nam], Davis, L.S.[Larry S.], Goldstein, T.[Tom],
Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors,
ECCV20(IV:1-17).
Springer DOI 2011
BibRef

Li, Q.Z.[Qi-Zhang], Guo, Y.W.[Yi-Wen], Chen, H.[Hao],
Yet Another Intermediate-level Attack,
ECCV20(XVI: 241-257).
Springer DOI 2010
BibRef

Zhao, S., Ma, X., Zheng, X., Bailey, J., Chen, J., Jiang, Y.,
Clean-Label Backdoor Attacks on Video Recognition Models,
CVPR20(14431-14440)
IEEE DOI 2008
Training, Data models, Toxicology, Perturbation methods, Training data, Image resolution, Pipelines BibRef

Kolouri, S., Saha, A., Pirsiavash, H., Hoffmann, H.,
Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs,
CVPR20(298-307)
IEEE DOI 2008
Training, Perturbation methods, Data models, Computational modeling, Machine learning, Benchmark testing BibRef

Li, J., Ji, R., Liu, H., Liu, J., Zhong, B., Deng, C., Tian, Q.,
Projection Probability-Driven Black-Box Attack,
CVPR20(359-368)
IEEE DOI 2008
Perturbation methods, Sensors, Optimization, Sparse matrices, Compressed sensing, Google, Neural networks BibRef

Yan, B., Wang, D., Lu, H., Yang, X.,
Cooling-Shrinking Attack: Blinding the Tracker With Imperceptible Noises,
CVPR20(987-996)
IEEE DOI 2008
Target tracking, Generators, Heating systems, Perturbation methods, Object tracking, Training BibRef

Li, H., Xu, X., Zhang, X., Yang, S., Li, B.,
QEBA: Query-Efficient Boundary-Based Blackbox Attack,
CVPR20(1218-1227)
IEEE DOI 2008
Perturbation methods, Estimation, Predictive models, Machine learning, Cats, Pipelines, Neural networks BibRef

Li, M., Deng, C., Li, T., Yan, J., Gao, X., Huang, H.,
Towards Transferable Targeted Attack,
CVPR20(638-646)
IEEE DOI 2008
Curing, Iterative methods, Extraterrestrial measurements, Entropy, Perturbation methods, Robustness BibRef

Truong, L., Jones, C., Hutchinson, B., August, A., Praggastis, B., Jasper, R., Nichols, N., Tuor, A.,
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers,
AML-CV20(3422-3431)
IEEE DOI 2008
Data models, Training, Computational modeling, Machine learning, Training data, Safety BibRef

Gupta, S., Dube, P., Verma, A.,
Improving the affordability of robustness training for DNNs,
AML-CV20(3383-3392)
IEEE DOI 2008
Training, Mathematical model, Computational modeling, Robustness, Neural networks, Computer architecture, Optimization BibRef

Zhang, Z., Wu, T.,
Learning Ordered Top-k Adversarial Attacks via Adversarial Distillation,
AML-CV20(3364-3373)
IEEE DOI 2008
Perturbation methods, Robustness, Task analysis, Semantics, Training, Visualization, Protocols BibRef

Chen, X., Yan, X., Zheng, F., Jiang, Y., Xia, S., Zhao, Y., Ji, R.,
One-Shot Adversarial Attacks on Visual Tracking With Dual Attention,
CVPR20(10173-10182)
IEEE DOI 2008
Target tracking, Task analysis, Visualization, Perturbation methods, Object tracking, Optimization BibRef

Zhou, H., Chen, D., Liao, J., Chen, K., Dong, X., Liu, K., Zhang, W., Hua, G., Yu, N.,
LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud Based Deep Networks,
CVPR20(10353-10362)
IEEE DOI 2008
Feature extraction, Perturbation methods, Decoding, Training, Neural networks, Target recognition BibRef

Rahmati, A., Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal], Dai, H.,
GeoDA: A Geometric Framework for Black-Box Adversarial Attacks,
CVPR20(8443-8452)
IEEE DOI 2008
Perturbation methods, Estimation, Covariance matrices, Gaussian distribution, Measurement, Neural networks, Robustness BibRef

Machiraju, H.[Harshitha], Balasubramanian, V.N.[Vineeth N],
A Little Fog for a Large Turn,
WACV20(2891-2900)
IEEE DOI 2006
Perturbation methods, Meteorology, Autonomous robots, Task analysis, Data models, Predictive models, Robustness BibRef

Yang, C.H., Liu, Y., Chen, P., Ma, X., Tsai, Y.J.,
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks,
ICIP19(3811-3815)
IEEE DOI 1910
Causal Reasoning, Adversarial Example, Adversarial Robustness, Interpretable Deep Learning, Visual Reasoning BibRef

Yao, H., Regan, M., Yang, Y., Ren, Y.,
Image Decomposition and Classification Through a Generative Model,
ICIP19(400-404)
IEEE DOI 1910
Generative model, classification, adversarial defense BibRef

Brunner, T., Diehl, F., Le, M.T., Knoll, A.,
Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks,
ICCV19(4957-4965)
IEEE DOI 2004
application program interfaces, cloud computing, feature extraction, image classification, security of data, Training BibRef

Liu, Y.J.[Yu-Jia], Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal],
A Geometry-Inspired Decision-Based Attack,
ICCV19(4889-4897)
IEEE DOI 2004
Deal with adversarial attack. geometry, image classification, image recognition, neural nets, security of data, black-box settings, Gaussian noise BibRef

Li, J., Ji, R., Liu, H., Hong, X., Gao, Y., Tian, Q.,
Universal Perturbation Attack Against Image Retrieval,
ICCV19(4898-4907)
IEEE DOI 2004
feature extraction, image classification, image representation, image retrieval, learning (artificial intelligence), Pipelines BibRef

Finlay, C., Pooladian, A., Oberman, A.,
The LogBarrier Adversarial Attack: Making Effective Use of Decision Boundary Information,
ICCV19(4861-4869)
IEEE DOI 2004
gradient methods, image classification, minimisation, neural nets, security of data, LogBarrier adversarial attack, Benchmark testing BibRef

Huang, Q., Katsman, I., Gu, Z., He, H., Belongie, S., Lim, S.,
Enhancing Adversarial Example Transferability With an Intermediate Level Attack,
ICCV19(4732-4741)
IEEE DOI 2004
cryptography, neural nets, optimisation, black-box transferability, source model, target models, adversarial examples, Artificial intelligence BibRef

Jandial, S., Mangla, P., Varshney, S., Balasubramanian, V.,
AdvGAN++: Harnessing Latent Layers for Adversary Generation,
NeurArch19(2045-2048)
IEEE DOI 2004
feature extraction, neural nets, MNIST datasets, CIFAR-10 datasets, attack rates, realistic images, latent features, input image, AdvGAN BibRef

Wang, C.L.[Cheng-Long], Bunel, R.[Rudy], Dvijotham, K.[Krishnamurthy], Huang, P.S.[Po-Sen], Grefenstette, E.[Edward], Kohli, P.[Pushmeet],
Knowing When to Stop: Evaluation and Verification of Conformity to Output-Size Specifications,
CVPR19(12252-12261).
IEEE DOI 2002
Vulnerability of these models to attacks aimed at changing the output-size. BibRef

Modas, A.[Apostolos], Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal],
SparseFool: A Few Pixels Make a Big Difference,
CVPR19(9079-9088).
IEEE DOI 2002
sparse attack. BibRef

Yao, Z.[Zhewei], Gholami, A.[Amir], Xu, P.[Peng], Keutzer, K.[Kurt], Mahoney, M.W.[Michael W.],
Trust Region Based Adversarial Attack on Neural Networks,
CVPR19(11342-11351).
IEEE DOI 2002
BibRef

Zeng, X.H.[Xiao-Hui], Liu, C.X.[Chen-Xi], Wang, Y.S.[Yu-Siang], Qiu, W.[Weichao], Xie, L.X.[Ling-Xi], Tai, Y.W.[Yu-Wing], Tang, C.K.[Chi-Keung], Yuille, A.L.[Alan L.],
Adversarial Attacks Beyond the Image Space,
CVPR19(4297-4306).
IEEE DOI 2002
BibRef

Corneanu, C.A.[Ciprian A.], Madadi, M.[Meysam], Escalera, S.[Sergio], Martinez, A.M.[Aleix M.],
What Does It Mean to Learn in Deep Networks? And, How Does One Detect Adversarial Attacks?,
CVPR19(4752-4761).
IEEE DOI 2002
BibRef

Shi, Y.C.[Yu-Cheng], Wang, S.[Siyu], Han, Y.H.[Ya-Hong],
Curls and Whey: Boosting Black-Box Adversarial Attacks,
CVPR19(6512-6520).
IEEE DOI 2002
BibRef

Liu, X.Q.[Xuan-Qing], Hsieh, C.J.[Cho-Jui],
Rob-GAN: Generator, Discriminator, and Adversarial Attacker,
CVPR19(11226-11235).
IEEE DOI 2002
BibRef

Gupta, P.[Puneet], Rahtu, E.[Esa],
MLAttack: Fooling Semantic Segmentation Networks by Multi-layer Attacks,
GCPR19(401-413).
Springer DOI 1911
BibRef

Barni, M., Kallas, K., Tondi, B.,
A New Backdoor Attack in CNNS by Training Set Corruption Without Label Poisoning,
ICIP19(101-105)
IEEE DOI 1910
Adversarial learning, security of deep learning, backdoor poisoning attacks, training with poisoned data BibRef

Zhao, W.[Wei], Yang, P.P.[Peng-Peng], Ni, R.R.[Rong-Rong], Zhao, Y.[Yao], Li, W.J.[Wen-Jie],
Cycle GAN-Based Attack on Recaptured Images to Fool both Human and Machine,
IWDW18(83-92).
Springer DOI 1905
BibRef

Wang, S., Shi, Y., Han, Y.,
Universal Perturbation Generation for Black-box Attack Using Evolutionary Algorithms,
ICPR18(1277-1282)
IEEE DOI 1812
Perturbation methods, Evolutionary computation, Sociology, Statistics, Training, Neural networks, Robustness BibRef

Xu, X.J.[Xiao-Jun], Chen, X.Y.[Xin-Yun], Liu, C.[Chang], Rohrbach, A.[Anna], Darrell, T.J.[Trevor J.], Song, D.[Dawn],
Fooling Vision and Language Models Despite Localization and Attention Mechanism,
CVPR18(4951-4961)
IEEE DOI 1812
Attacks. Prediction algorithms, Computational modeling, Neural networks, Knowledge discovery, Visualization, Predictive models, Natural languages BibRef

Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., Li, J.,
Boosting Adversarial Attacks with Momentum,
CVPR18(9185-9193)
IEEE DOI 1812
Iterative methods, Robustness, Training, Data models, Adaptation models, Security BibRef
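
The momentum iterative method (MI-FGSM) introduced in the entry above stabilizes the update direction by accumulating L1-normalized gradients across iterations, which is what improves transferability. A compact sketch, assuming 4-D batched image tensors in [0, 1]; the step count and decay factor mu are illustrative.

    import torch
    import torch.nn.functional as F

    def mi_fgsm(model, x, y, eps=8 / 255, steps=10, mu=1.0):
        alpha = eps / steps
        x_adv = x.clone().detach()
        g = torch.zeros_like(x)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # accumulate a velocity over L1-normalized gradients
            g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
            with torch.no_grad():
                x_adv = x_adv + alpha * g.sign()
                x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
        return x_adv.detach()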

Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D.,
Robust Physical-World Attacks on Deep Learning Visual Classification,
CVPR18(1625-1634)
IEEE DOI 1812
Perturbation methods, Roads, Cameras, Visualization, Pipelines, Autonomous vehicles, Detectors BibRef

Narodytska, N., Kasiviswanathan, S.,
Simple Black-Box Adversarial Attacks on Deep Neural Networks,
PRIV17(1310-1318)
IEEE DOI 1709
Knowledge engineering, Network architecture, Neural networks, Robustness, Training BibRef
