14.5.9.10.7 Adversarial Attacks

Chapter Contents
Adversarial Networks. Generative Networks. Attacks. GAN.
See also Countering Adversarial Attacks, Defense, Robustness.
See also Adversarial Networks, Adversarial Inputs, Generative Adversarial.
See also Backdoor Attacks.
See also Camouflaged Object Detection, Camouflage.
See also Black-Box Attacks, Robustness.

Biggio, B.[Battista], Roli, F.[Fabio],
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning,
PR(84), 2018, pp. 317-331.
Elsevier DOI 1809
Award, Pattern Recognition. Adversarial machine learning, Evasion attacks, Poisoning attacks, Adversarial examples, Secure learning, Deep learning BibRef
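
As background for the evasion attacks surveyed in the entry above, a minimal one-step (FGSM-style) sketch in PyTorch. It illustrates the generic idea only, not the method of any particular entry below; `model` is assumed to be any differentiable image classifier taking inputs in [0, 1], and `eps` is an illustrative budget.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8/255):
    """One-step evasion: perturb x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each pixel by eps in the direction that increases the loss.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```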

Croce, F.[Francesco], Rauber, J.[Jonas], Hein, M.[Matthias],
Scaling up the Randomized Gradient-Free Adversarial Attack Reveals Overestimation of Robustness Using Established Attacks,
IJCV(128), No. 4, April 2020, pp. 1028-1046.
Springer DOI 2004
BibRef
Earlier: A1, A3, Only:
A Randomized Gradient-Free Attack on ReLU Networks,
GCPR18(215-227).
Springer DOI 1905
Award, GCPR, HM. BibRef
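
Gradient-free attacks such as the one above query the model without backpropagating through it. Below is a deliberately simple random-search sketch of that setting, not the paper's randomized scheme; the single-pixel proposal, `eps`, and the query budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def random_search_attack(model, x, y, eps=8/255, queries=500):
    """Greedy query-only search: keep single-pixel changes that raise the loss."""
    x_adv = x.clone()
    with torch.no_grad():
        best = F.cross_entropy(model(x_adv), y)
        for _ in range(queries):
            cand = x_adv.clone()
            # Pick one random element of the (batch, channel, H, W) tensor.
            idx = tuple(torch.randint(s, (1,)).item() for s in x.shape)
            step = eps if torch.rand(1).item() < 0.5 else -eps
            cand[idx] = cand[idx] + step
            # Stay inside the eps-ball around x and the valid pixel range.
            cand = torch.max(torch.min(cand, x + eps), x - eps).clamp(0, 1)
            loss = F.cross_entropy(model(cand), y)
            if loss > best:
                best, x_adv = loss, cand
    return x_adv
```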

Aberdam, A.[Aviad], Golts, A.[Alona], Elad, M.[Michael],
Ada-LISTA: Learned Solvers Adaptive to Varying Models,
PAMI(44), No. 12, December 2022, pp. 9222-9235.
IEEE DOI 2212
Dictionaries, Adaptation models, Training, Convergence, Encoding, Sparse matrices, Numerical models, Sparse coding, learned solvers, deep learning modeling BibRef

Ozbulak, U.[Utku], Gasparyan, M.[Manvel], de Neve, W.[Wesley], van Messem, A.[Arnout],
Perturbation analysis of gradient-based adversarial attacks,
PRL(135), 2020, pp. 313-320.
Elsevier DOI 2006
Adversarial attacks, Adversarial examples, Deep learning, Perturbation analysis BibRef

Wan, S.[Sheng], Wu, T.Y.[Tung-Yu], Hsu, H.W.[Heng-Wei], Wong, W.H.[Wing Hung], Lee, C.Y.[Chen-Yi],
Feature Consistency Training With JPEG Compressed Images,
CirSysVideo(30), No. 12, December 2020, pp. 4769-4780.
IEEE DOI 2012
Deep neural networks are vulnerable to JPEG compression artifacts. Image coding, Distortion, Training, Transform coding, Robustness, Quantization (signal), Feature extraction, Compression artifacts, classification robustness BibRef
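
The entry above concerns robustness to JPEG artifacts. A small sketch of one way to measure prediction consistency under recompression; the quality level and the PIL round-trip are illustrative assumptions, not the paper's training or evaluation protocol.

```python
import io
import torch
from PIL import Image
from torchvision import transforms

def jpeg_compress(img: Image.Image, quality: int) -> Image.Image:
    """Round-trip an image through an in-memory JPEG encode/decode."""
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def consistency_rate(model, pil_images, quality=20):
    """Fraction of images whose predicted class survives recompression."""
    to_tensor = transforms.ToTensor()
    agree = 0
    with torch.no_grad():
        for img in pil_images:
            clean = model(to_tensor(img).unsqueeze(0)).argmax(1)
            comp = model(to_tensor(jpeg_compress(img, quality)).unsqueeze(0)).argmax(1)
            agree += int(clean.item() == comp.item())
    return agree / len(pil_images)
```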

Che, Z., Borji, A., Zhai, G., Ling, S., Li, J., Tian, Y., Guo, G., Le Callet, P.,
Adversarial Attack Against Deep Saliency Models Powered by Non-Redundant Priors,
IP(30), 2021, pp. 1973-1988.
IEEE DOI 2101
Computational modeling, Perturbation methods, Redundancy, Task analysis, Visualization, Robustness, Neural networks, gradient estimation BibRef

Xu, Y., Du, B., Zhang, L.,
Assessing the Threat of Adversarial Examples on Deep Neural Networks for Remote Sensing Scene Classification: Attacks and Defenses,
GeoRS(59), No. 2, February 2021, pp. 1604-1617.
IEEE DOI 2101
Remote sensing, Neural networks, Deep learning, Perturbation methods, Feature extraction, Task analysis, scene classification BibRef

Xiao, Y.[Yatie], Pun, C.M.[Chi-Man], Liu, B.[Bo],
Fooling deep neural detection networks with adaptive object-oriented adversarial perturbation,
PR(115), 2021, pp. 107903.
Elsevier DOI 2104
Object detection, Adversarial attack, Adaptive object-oriented perturbation BibRef

Yamanaka, K.[Koichiro], Takahashi, K.[Keita], Fujii, T.[Toshiaki], Matsumoto, R.[Ryutaroh],
Simultaneous Attack on CNN-Based Monocular Depth Estimation and Optical Flow Estimation,
IEICE(E104-D), No. 5, May 2021, pp. 785-788.
WWW Link. 2105
BibRef

Lin, H.Y.[Hsiao-Ying], Biggio, B.[Battista],
Adversarial Machine Learning: Attacks From Laboratories to the Real World,
Computer(54), No. 5, May 2021, pp. 56-60.
IEEE DOI 2106
Adversarial machine learning, Data models, Training data, Biological system modeling BibRef

Wang, B.[Bo], Zhao, M.[Mengnan], Wang, W.[Wei], Wei, F.[Fei], Qin, Z.[Zhan], Ren, K.[Kui],
Are You Confident That You Have Successfully Generated Adversarial Examples?,
CirSysVideo(31), No. 6, June 2021, pp. 2089-2099.
IEEE DOI 2106
Perturbation methods, Iterative methods, Computational modeling, Neural networks, Security, Training, Robustness, buffer BibRef

Tang, S.L.[San-Li], Huang, X.L.[Xiao-Lin], Chen, M.J.[Ming-Jian], Sun, C.J.[Cheng-Jin], Yang, J.[Jie],
Adversarial Attack Type I: Cheat Classifiers by Significant Changes,
PAMI(43), No. 3, March 2021, pp. 1100-1109.
IEEE DOI 2102
Neural networks, Training, Aerospace electronics, Toy manufacturing industry, Sun, Face recognition, Task analysis, supervised variational autoencoder BibRef

Wang, L.[Lin], Yoon, K.J.[Kuk-Jin],
PSAT-GAN: Efficient Adversarial Attacks Against Holistic Scene Understanding,
IP(30), 2021, pp. 7541-7553.
IEEE DOI 2109
Task analysis, Perturbation methods, Visualization, Pipelines, Autonomous vehicles, Semantics, Generative adversarial networks, generative model BibRef

Mohamad-Nezami, O.[Omid], Chaturvedi, A.[Akshay], Dras, M.[Mark], Garain, U.[Utpal],
Pick-Object-Attack: Type-specific adversarial attack for object detection,
CVIU(211), 2021, pp. 103257.
Elsevier DOI 2110
Adversarial attack, Faster R-CNN, Deep learning, Image captioning BibRef

Qin, C.[Chuan], Wu, L.[Liang], Zhang, X.P.[Xin-Peng], Feng, G.R.[Guo-Rui],
Efficient Non-Targeted Attack for Deep Hashing Based Image Retrieval,
SPLetters(28), 2021, pp. 1893-1897.
IEEE DOI 2110
Codes, Perturbation methods, Hamming distance, Image retrieval, Training, Feature extraction, Databases, Adversarial example, image retrieval BibRef

Du, C.[Chuan], Zhang, L.[Lei],
Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network,
RS(13), No. 21, 2021, pp. xx-yy.
DOI Link 2112
BibRef

Wang, H.J.[Hong-Jun], Li, G.B.[Guan-Bin], Liu, X.B.[Xiao-Bai], Lin, L.[Liang],
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning,
PAMI(44), No. 4, April 2022, pp. 1725-1737.
IEEE DOI 2203
Training, Monte Carlo methods, Space exploration, Robustness, Markov processes, Cats, Iterative methods, Adversarial example, robustness and safety of machine learning BibRef

Chen, S.[Sizhe], He, Z.B.[Zheng-Bao], Sun, C.J.[Cheng-Jin], Yang, J.[Jie], Huang, X.L.[Xiao-Lin],
Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet,
PAMI(44), No. 4, April 2022, pp. 2188-2197.
IEEE DOI 2203
Heating systems, Training, Neural networks, Perturbation methods, Semantics, Visualization, Error analysis, Adversarial attack, DAmageNet BibRef

Kim, J.[Jinsub],
On Optimality of Deterministic Rules in Adversarial Bayesian Detection,
SPLetters(29), 2022, pp. 757-761.
IEEE DOI 2204
Bayes methods, Games, Zirconium, Markov processes, Detectors, Uncertainty, Training data, Adversarial Bayesian detection, input data falsification BibRef

Sun, X.X.[Xu-Xiang], Cheng, G.[Gong], Pei, L.[Lei], Han, J.W.[Jun-Wei],
Query-efficient decision-based attack via sampling distribution reshaping,
PR(129), 2022, pp. 108728.
Elsevier DOI 2206
Adversarial examples, Decision-based attack, Image classification, Normal vector estimation, Distribution reshaping BibRef

Chen, S.M.[Shan-Mou], Zhang, Q.Q.[Qiang-Qiang], Lin, D.Y.[Dong-Yuan], Wang, S.Y.[Shi-Yuan],
A Class of Nonlinear Kalman Filters Under a Generalized Measurement Model With False Data Injection Attacks,
SPLetters(29), 2022, pp. 1187-1191.
IEEE DOI 2206
Additives, Kalman filters, Data models, Noise measurement, Time measurement, Numerical models, Loss measurement, Cyber attack, nonlinear Kalman filtering BibRef

Chen, M.[Mantun], Wang, Y.J.[Yong-Jun], Zhu, X.T.[Xia-Tian],
Few-shot Website Fingerprinting attack with Meta-Bias Learning,
PR(130), 2022, pp. 108739.
Elsevier DOI 2206
User privacy, Internet anonymity, Data traffic, Website fingerprinting, Deep learning, Neural network, Parameter factorization BibRef

Zhang, Z.[Zheng], Wang, X.G.[Xun-Guang], Lu, G.M.[Guang-Ming], Shen, F.M.[Fu-Min], Zhu, L.[Lei],
Targeted Attack of Deep Hashing Via Prototype-Supervised Adversarial Networks,
MultMed(24), 2022, pp. 3392-3404.
IEEE DOI 2207
Semantics, Prototypes, Generators, Optimization, Cats, Binary codes, Task analysis, Adversarial example, targeted attack, deep hashing, generative adversarial network BibRef

Wang, T.S.[Tian-Shi], Zhu, L.[Lei], Zhang, Z.[Zheng], Zhang, H.X.[Hua-Xiang], Han, J.W.[Jun-Wei],
Targeted Adversarial Attack Against Deep Cross-Modal Hashing Retrieval,
CirSysVideo(33), No. 10, October 2023, pp. 6159-6172.
IEEE DOI Code:
WWW Link. 2310
BibRef

Wang, X.G.[Xun-Guang], Zhang, Z.[Zheng], Wu, B.Y.[Bao-Yuan], Shen, F.M.[Fu-Min], Lu, G.M.[Guang-Ming],
Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing,
CVPR21(16352-16361)
IEEE DOI 2111
Knowledge engineering, Codes, Hamming distance, Semantics, Image retrieval, Prototypes BibRef

He, Z.[Ziwen], Wang, W.[Wei], Dong, J.[Jing], Tan, T.N.[Tie-Niu],
Revisiting ensemble adversarial attack,
SP:IC(107), 2022, pp. 116747.
Elsevier DOI 2208
Adversarial attack, Ensemble strategies, Gradient-based methods, Deep neural networks, Image classification BibRef

Akhtar, N.[Naveed], Jalwana, M.A.A.K.[Mohammad A. A. K.], Bennamoun, M.[Mohammed], Mian, A.[Ajmal],
Attack to Fool and Explain Deep Networks,
PAMI(44), No. 10, October 2022, pp. 5980-5995.
IEEE DOI 2209
Perturbation methods, Computational modeling, Visualization, Predictive models, Data models, Tools, Task analysis, explainable AI BibRef

Ma, K.[Ke], Xu, Q.Q.[Qian-Qian], Zeng, J.S.[Jin-Shan], Cao, X.C.[Xiao-Chun], Huang, Q.M.[Qing-Ming],
Poisoning Attack Against Estimating From Pairwise Comparisons,
PAMI(44), No. 10, October 2022, pp. 6393-6408.
IEEE DOI 2209
Optimization, Heuristic algorithms, Sports, Voting, Uncertainty, Games, Data models, Adversarial learning, poisoning attack, distributionally robust optimization BibRef

Deng, Y.P.[Ying-Peng], Karam, L.J.[Lina J.],
Frequency-Tuned Universal Adversarial Attacks on Texture Recognition,
IP(31), 2022, pp. 5856-5868.
IEEE DOI 2209
Perturbation methods, Frequency-domain analysis, Training, Feature extraction, Image recognition, Generators, just-noticeable difference (JND) BibRef

Giulivi, L.[Loris], Jere, M.[Malhar], Rossi, L.[Loris], Koushanfar, F.[Farinaz], Ciocarlie, G.[Gabriela], Hitaj, B.[Briland], Boracchi, G.[Giacomo],
Adversarial scratches: Deployable attacks to CNN classifiers,
PR(133), 2023, pp. 108985.
Elsevier DOI 2210
Adversarial perturbations, Adversarial attacks, Deep learning, Convolutional neural networks, Bézier curves BibRef

Lin, X.X.[Xi-Xun], Zhou, C.[Chuan], Wu, J.[Jia], Yang, H.[Hong], Wang, H.B.[Hai-Bo], Cao, Y.[Yanan], Wang, B.[Bin],
Exploratory Adversarial Attacks on Graph Neural Networks for Semi-Supervised Node Classification,
PR(133), 2023, pp. 109042.
Elsevier DOI 2210
Gradient-based attacks, Maximal gradient, Graph neural networks, Semi-supervised node classification BibRef

Zhao, C.L.[Cheng-Long], Ni, B.B.[Bing-Bing], Mei, S.B.[Shi-Bin],
Explore Adversarial Attack via Black Box Variational Inference,
SPLetters(29), 2022, pp. 2088-2092.
IEEE DOI 2211
Monte Carlo methods, Computational modeling, Probability distribution, Gaussian distribution, Bayes methods, Bayesian inference BibRef

Bai, T.[Tao], Wang, H.[Hao], Wen, B.[Bihan],
Targeted Universal Adversarial Examples for Remote Sensing,
RS(14), No. 22, 2022, pp. xx-yy.
DOI Link 2212
BibRef

Agarwal, A.[Akshay], Ratha, N.[Nalini], Vatsa, M.[Mayank], Singh, R.[Richa],
Crafting Adversarial Perturbations via Transformed Image Component Swapping,
IP(31), 2022, pp. 7338-7349.
IEEE DOI 2212
Perturbation methods, Databases, Hybrid fiber coaxial cables, Training, Kernel, Image resolution, Additives, Image components, wavelet BibRef

Kazemi, E.[Ehsan], Kerdreux, T.[Thomas], Wang, L.Q.[Li-Qiang],
Minimally Distorted Structured Adversarial Attacks,
IJCV(131), No. 1, January 2023, pp. 160-176.
Springer DOI 2301
BibRef

Yuan, H.J.[Hao-Jie], Chu, Q.[Qi], Zhu, F.[Feng], Zhao, R.[Rui], Liu, B.[Bin], Yu, N.H.[Neng-Hai],
AutoMA: Towards Automatic Model Augmentation for Transferable Adversarial Attacks,
MultMed(25), 2023, pp. 203-213.
IEEE DOI 2301
Transforms, Computational modeling, Training, Perturbation methods, Distortion, Data models, Image color analysis, Adversarial attack, transferability BibRef

Wei, X.X.[Xing-Xing], Guo, Y.[Ying], Yu, J.[Jie],
Adversarial Sticker: A Stealthy Attack Method in the Physical World,
PAMI(45), No. 3, March 2023, pp. 2711-2725.
IEEE DOI 2302
Face recognition, Perturbation methods, Task analysis, Image retrieval, Image recognition, Adaptation models, TV, physical world BibRef

Guo, Y.W.[Yi-Wen], Li, Q.Z.[Qi-Zhang], Zuo, W.M.[Wang-Meng], Chen, H.[Hao],
An Intermediate-Level Attack Framework on the Basis of Linear Regression,
PAMI(45), No. 3, March 2023, pp. 2726-2735.
IEEE DOI 2302
Linear regression, Computer science, Computational modeling, Support vector machines, Feature extraction, Symbols, robustness BibRef

Qin, C.[Chuan], Gao, S.Y.[Sheng-Yan], Zhang, X.P.[Xin-Peng], Feng, G.R.[Guo-Rui],
CADW: CGAN-Based Attack on Deep Robust Image Watermarking,
MultMedMag(30), No. 1, January 2023, pp. 28-35.
IEEE DOI 2305
Watermarking, Copyright protection, Generators, Robustness, Data models, Visualization, Generative adversarial networks, Deep Learning BibRef

Lin, G.Y.[Geng-You], Pan, Z.S.[Zhi-Song], Zhou, X.Y.[Xing-Yu], Duan, Y.[Yexin], Bai, W.[Wei], Zhan, D.[Dazhi], Zhu, L.[Leqian], Zhao, G.[Gaoqiang], Li, T.[Tao],
Boosting Adversarial Transferability with Shallow-Feature Attack on SAR Images,
RS(15), No. 10, 2023, pp. xx-yy.
DOI Link 2306
BibRef

Sun, X.X.[Xu-Xiang], Cheng, G.[Gong], Li, H.[Hongda], Pei, L.[Lei], Han, J.W.[Jun-Wei],
On Single-Model Transferable Targeted Attacks: A Closer Look at Decision-Level Optimization,
IP(32), 2023, pp. 2972-2984.
IEEE DOI 2306
Optimization, Adversarial machine learning, Closed box, Sun, Measurement, Linear programming, Tuning, Adversarial attacks, balanced logit loss BibRef

Pan, J.H.[Jian-Hong], Foo, L.G.[Lin Geng], Zheng, Q.C.[Qi-Chen], Fan, Z.P.[Zhi-Peng], Rahmani, H.[Hossein], Ke, Q.H.[Qiu-Hong], Liu, J.[Jun],
GradMDM: Adversarial Attack on Dynamic Networks,
PAMI(45), No. 9, September 2023, pp. 11374-11381.
IEEE DOI 2309
BibRef
Earlier: A1, A3, A4, A5, A6, A7, Only:
GradAuto: Energy-Oriented Attack on Dynamic Neural Networks,
ECCV22(IV:637-653).
Springer DOI 2211
BibRef

Bai, J.W.[Jia-Wang], Wu, B.Y.[Bao-Yuan], Li, Z.F.[Zhi-Feng], Xia, S.T.[Shu-Tao],
Versatile Weight Attack via Flipping Limited Bits,
PAMI(45), No. 11, November 2023, pp. 13653-13665.
IEEE DOI 2310
BibRef

Zhang, S.H.[Shi-Hui], Zuo, D.X.[Dong-Xu], Yang, Y.L.[Yong-Liang], Zhang, X.W.[Xiao-Wei],
A Transferable Adversarial Belief Attack With Salient Region Perturbation Restriction,
MultMed(25), 2023, pp. 4296-4306.
IEEE DOI 2310
BibRef

Liang, X.Y.[Xiao-Yu], Qian, Y.[Yaguan], Huang, J.C.[Jian-Chang], Ling, X.[Xiang], Wang, B.[Bin], Wu, C.M.[Chun-Ming], Swaileh, W.[Wassim],
Towards desirable decision boundary by Moderate-Margin Adversarial Training,
PRL(173), 2023, pp. 30-37.
Elsevier DOI 2310
Adversarial training, Adversarial attack, Trade-off, Decision boundary BibRef

Zheng, S.J.[Shi-Jun], Liu, W.Q.[Wei-Quan], Shen, S.Q.[Si-Qi], Zang, Y.[Yu], Wen, C.[Chenglu], Cheng, M.[Ming], Wang, C.[Cheng],
Adaptive local adversarial attacks on 3D point clouds,
PR(144), 2023, pp. 109825.
Elsevier DOI 2310
Point clouds, Adversarial attack, Salient regions, Adversarial examples BibRef

Li, Y.[Yang], Pan, Q.[Quan], Feng, Z.W.[Zhao-Wen], Cambria, E.[Erik],
Few pixels attacks with generative model,
PR(144), 2023, pp. 109849.
Elsevier DOI 2310
Neural network vulnerability, Adversarial attack, Few pixels attacks, Generative attack BibRef

Xiao, Y.[Yatie], Zhou, J.Z.[Ji-Zhe], Chen, K.Y.[Kong-Yang], Liu, Z.B.[Zhen-Bang],
Revisiting the transferability of adversarial examples via source-agnostic adversarial feature inducing method,
PR(144), 2023, pp. 109828.
Elsevier DOI 2310
Adversarial attack, Transferability, Feature inducing, Diversity BibRef

Mao, Z.S.[Zhong-Shu], Lu, Y.Q.[Yi-Qin], Cheng, Z.[Zhe], Shen, X.[Xiong],
Enhancing transferability of adversarial examples with pixel-level scale variation,
SP:IC(118), 2023, pp. 117020.
Elsevier DOI 2310
Adversarial example, Transferability, Black box, Input transformation, Pixel level BibRef

Sun, J.L.[Jia-Liang], Yao, W.[Wen], Jiang, T.[Tingsong], Chen, X.Q.[Xiao-Qian],
Efficient search of comprehensively robust neural architectures via multi-fidelity evaluation,
PR(146), 2024, pp. 110038.
Elsevier DOI 2311
Model robustness, Adversarial attacks, Neural architecture search, Surrogate model BibRef

Mumcu, F.[Furkan], Yilmaz, Y.[Yasin],
Sequential architecture-agnostic black-box attack design and analysis,
PR(147), 2024, pp. 110066.
Elsevier DOI 2312
Adversarial machine learning, Black-box attacks, Transferability of attacks, Vision transformers, Sequential hypothesis testing BibRef

Chen, T.[Tong], Ma, Z.[Zhan],
Toward Robust Neural Image Compression: Adversarial Attack and Model Finetuning,
CirSysVideo(33), No. 12, December 2023, pp. 7842-7856.
IEEE DOI Code:
WWW Link. 2312
BibRef

Wan, C.[Chen], Huang, F.[Fangjun], Zhao, X.F.[Xian-Feng],
Average Gradient-Based Adversarial Attack,
MultMed(25), 2023, pp. 9572-9585.
IEEE DOI 2312
BibRef

Akers, M.[Matthew], Barton, A.[Armon],
Forming Adversarial Example Attacks Against Deep Neural Networks With Reinforcement Learning,
Computer(57), No. 1, January 2024, pp. 88-99.
IEEE DOI 2401
BibRef

Wang, D.H.[Dong-Hua], Yao, W.[Wen], Jiang, T.[Tingsong], Chen, X.Q.[Xiao-Qian],
Improving Transferability of Universal Adversarial Perturbation With Feature Disruption,
IP(33), 2024, pp. 722-737.
IEEE DOI 2402
Training, Perturbation methods, Closed box, Task analysis, Glass box, Data models, Linear programming, transferability of UAP BibRef

Wei, X.X.[Xing-Xing], Zhao, S.[Shiji],
Boosting Adversarial Transferability With Learnable Patch-Wise Masks,
MultMed(26), 2024, pp. 3778-3787.
IEEE DOI 2402
Perturbation methods, Adaptation models, Visualization, Training, Predictive models, Iterative methods, Statistics, DNNs, Adversarial Transferability BibRef

Li, J.[Jing], Wei, X.M.[Xiao-Meng],
Research on efficient detection network method for remote sensing images based on self attention mechanism,
IVC(142), 2024, pp. 104884.
Elsevier DOI 2402
Remote sensing images, Image detection, Faster R-CNN, Self attention mechanism, End-to-end BibRef

Wei, X.X.[Xing-Xing], Huang, Y.[Yao], Sun, Y.T.[Yi-Tong], Yu, J.[Jie],
Unified Adversarial Patch for Visible-Infrared Cross-Modal Attacks in the Physical World,
PAMI(46), No. 4, April 2024, pp. 2348-2363.
IEEE DOI 2403
BibRef
Earlier:
Unified Adversarial Patch for Cross-modal Attacks in the Physical World,
ICCV23(4422-4431)
IEEE DOI 2401
Shape, Detectors, Pedestrians, Task analysis, Deformation, Infrared sensors, Object detection, Adversarial examples, visible-infrared BibRef

Liu, T.F.[Tai-Feng], Yang, C.[Chao], Liu, X.J.[Xin-Jing], Han, R.D.[Rui-Dong], Ma, J.F.[Jian-Feng],
RPAU: Fooling the Eyes of UAVs via Physical Adversarial Patches,
ITS(25), No. 3, March 2024, pp. 2586-2598.
IEEE DOI 2405
Autonomous aerial vehicles, Navigation, Security, Perturbation methods, Target tracking, Deep learning, Cameras, adversarial attack BibRef

Yuan, Z.[Zheng], Zhang, J.[Jie], Jiang, Z.Y.[Zhao-Yan], Li, L.L.[Liang-Liang], Shan, S.G.[Shi-Guang],
Adaptive Perturbation for Adversarial Attack,
PAMI(46), No. 8, August 2024, pp. 5663-5676.
IEEE DOI 2407
Perturbation methods, Iterative methods, Adaptation models, Generators, Closed box, Security, Training, Adversarial attack, adaptive perturbation BibRef

Tao, A.[An], Duan, Y.[Yueqi], Wang, Y.Q.[Ying-Qi], Lu, J.W.[Ji-Wen], Zhou, J.[Jie],
Dynamics-Aware Adversarial Attack of Adaptive Neural Networks,
CirSysVideo(34), No. 7, July 2024, pp. 5505-5518.
IEEE DOI Code:
WWW Link. 2407
Adaptive systems, Neural networks, Convolution, Network architecture, Point cloud compression, Lead, leaded gradient method BibRef

Li, X.[Xin], Zhu, G.P.[Guo-Pu], Wang, S.[Shen], Zhou, Y.C.[Yi-Cong], Zhang, X.P.[Xin-Peng],
Deep Reverse Attack on SIFT Features With a Coarse-to-Fine GAN Model,
CirSysVideo(34), No. 7, July 2024, pp. 6391-6402.
IEEE DOI Code:
WWW Link. 2407
Image reconstruction, Generative adversarial networks, Feature extraction, Generators, generative adversarial network (GAN) BibRef

Liu, J.W.[Jia-Wei], Gong, X.[Xun], Wang, T.T.[Ting-Ting], Hu, Y.F.[Yun-Feng], Chen, H.[Hong],
A proxy-data-based hierarchical adversarial patch generation method,
CVIU(246), 2024, pp. 104066.
Elsevier DOI 2408
Adversarial patch, Data privacy, Physical adversarial attack, Proxy dataset BibRef

Zhu, X.P.[Xiao-Pei], Liu, Y.Q.[Yu-Qiu], Hu, Z.[Zhanhao], Li, J.M.[Jian-Min], Hu, X.L.[Xiao-Lin],
Infrared Adversarial Car Stickers,
CVPR24(24284-24293)
IEEE DOI 2410
Proposes a physical attack method against infrared detectors. Infrared detectors, YOLO, Solid modeling, Smoothing methods, Pedestrians, Detectors, Adversarial Example, Object Detection BibRef

Chen, D.[Dake], Li, S.[Shiduo], Zhang, Y.[Yuke], Li, C.H.[Cheng-Hao], Kundu, S.[Souvik], Beerel, P.A.[Peter A.],
DIA: Diffusion based Inverse Network Attack on Collaborative Inference,
FaDE-TCV24(124-130)
IEEE DOI 2410
Analytical models, Privacy, Computational modeling, Neural networks, Collaboration, Transformers BibRef

Zhang, X.W.[Xin-Wei], Zhang, T.Y.[Tian-Yuan], Zhang, Y.T.[Yi-Tong], Liu, S.C.[Shuang-Cheng],
Enhancing the Transferability of Adversarial Attacks with Stealth Preservation,
AML24(2915-2925)
IEEE DOI 2410
Visualization, Fuses, Computational modeling, Perturbation methods, Closed box, Iterative methods BibRef

Ye, M.[Muchao], Xu, X.[Xiang], Zhang, Q.[Qin], Wu, J.[Jonathan],
Sharpness-Aware Optimization for Real-World Adversarial Attacks for Diverse Compute Platforms with Enhanced Transferability,
AML24(2937-2946)
IEEE DOI 2410
Computational modeling, Perturbation methods, Closed box, Artificial neural networks, Pressing BibRef

Fang, Z.W.[Zheng-Wei], Wang, R.[Rui], Huang, T.[Tao], Jing, L.P.[Li-Ping],
Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning,
CVPR24(24841-24850)
IEEE DOI 2410
Deep learning, Image transformation, Perturbation methods, Computational modeling, Stochastic processes, Stochastic gradient descent BibRef

Ahmed, S.[Sabbir], Zhou, R.[Ranyang], Angizi, S.[Shaahin], Rakin, A.S.[Adnan Siraj],
Deep-TROJ: An Inference Stage Trojan Insertion Algorithm Through Efficient Weight Replacement Attack,
CVPR24(24810-24819)
IEEE DOI Code:
WWW Link. 2410
Training, Threat modeling, Fault diagnosis, Random access memory, Artificial neural networks, Transformers BibRef

Ming, D.[Di], Ren, P.[Peng], Wang, Y.L.[Yun-Long], Feng, X.[Xin],
Transferable Structural Sparse Adversarial Attack Via Exact Group Sparsity Training,
CVPR24(24696-24705)
IEEE DOI Code:
WWW Link. 2410
Training, Quantization (signal), Codes, Perturbation methods, Semantic segmentation, Object detection, sparse training BibRef

Guesmi, A.[Amira], Ding, R.[Ruitian], Hanif, M.A.[Muhammad Abdullah], Alouani, I.[Ihsen], Shafique, M.[Muhammad],
DAP: A Dynamic Adversarial Patch for Evading Person Detectors,
CVPR24(24595-24604)
IEEE DOI 2410
Measurement, Deformation, Image edge detection, Smart cameras, Detectors, Linear programming, Robustness, Adversarial patches, Adversarial Attacks BibRef

Wu, H.[Han], Ou, G.[Guanyan], Wu, W.B.[Wei-Bin], Zheng, Z.[Zibin],
Improving Transferable Targeted Adversarial Attacks with Model Self-Enhancement,
CVPR24(24615-24624)
IEEE DOI Code:
WWW Link. 2410
Training, Codes, Perturbation methods, Design methodology, Closed box, Robustness BibRef

Xu, J.Y.[Jing-Yao], Lu, Y.T.[Yue-Tong], Li, Y.D.[Yan-Dong], Lu, S.Y.[Si-Yang], Wang, D.D.[Dong-Dong], Wei, X.[Xiang],
Perturbing Attention Gives You More Bang for the Buck: Subtle Imaging Perturbations That Efficiently Fool Customized Diffusion Models,
CVPR24(24534-24543)
IEEE DOI 2410
Training, Sensitivity, Social networking (online), Publishing, Perturbation methods, Noise, Diffusion models, Adversarial Attack BibRef

Tang, B.[Bowen], Wang, Z.[Zheng], Bin, Y.[Yi], Dou, Q.[Qi], Yang, Y.[Yang], Shen, H.T.[Heng Tao],
Ensemble Diversity Facilitates Adversarial Transferability,
CVPR24(24377-24386)
IEEE DOI Code:
WWW Link. 2410
Perturbation methods, Closed box, Stochastic processes, Reinforcement learning, Adversarial Attack BibRef

Mahmood, H.[Hassan], Elhamifar, E.[Ehsan],
Semantic-Aware Multi-Label Adversarial Attacks,
CVPR24(24251-24262)
IEEE DOI Code:
WWW Link. 2410
Codes, Computational modeling, Semantics, Knowledge graphs, Predictive models, Prediction algorithms, Adversarial Attacks, Semantic Attacks BibRef

Zhang, M.X.[Min-Xing], Yu, N.[Ning], Wen, R.[Rui], Backes, M.[Michael], Zhang, Y.[Yang],
Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models,
WACV24(4827-4837)
IEEE DOI 2404
Training, Data privacy, Privacy, Visualization, Publishing, Computational modeling, Training data, Algorithms, Explainable, fair BibRef

Dubinski, J.[Jan], Kowalczuk, A.[Antoni], Pawlak, S.[Stanislaw], Rokita, P.[Przemyslaw], Trzcinski, T.[Tomasz], Morawiecki, P.[Pawel],
Towards More Realistic Membership Inference Attacks on Large Diffusion Models,
WACV24(4848-4857)
IEEE DOI 2404
Training, Privacy, Data privacy, Computational modeling, Closed box, Reliability, Algorithms, Explainable, fair, accountable, ethical computer vision BibRef

Cohen, G.[Gilad], Giryes, R.[Raja],
Membership Inference Attack Using Self Influence Functions,
WACV24(4880-4889)
IEEE DOI Code:
WWW Link. 2404
Training, Differential privacy, Estimation, Computer architecture, Machine learning, Predictive models, Data augmentation, Algorithms, ethical computer vision BibRef

Ledda, E.[Emanuele], Angioni, D.[Daniele], Piras, G.[Giorgio], Fumera, G.[Giorgio], Biggio, B.[Battista], Roli, F.[Fabio],
Adversarial Attacks Against Uncertainty Quantification,
Uncertainty23(4601-4610)
IEEE DOI 2401
BibRef

Tal, O.B.[Ofir Bar], Haviv, A.[Adi], Bermano, A.H.[Amit H.],
OMG-Attack: Self-Supervised On-Manifold Generation of Transferable Evasion Attacks,
AROW23(3698-3708)
IEEE DOI 2401
BibRef

Ambati, R.[Rahul], Akhtar, N.[Naveed], Mian, A.[Ajmal], Rawat, Y.S.[Yogesh S.],
PRAT: PRofiling Adversarial aTtacks,
AROW23(3669-3678)
IEEE DOI Code:
WWW Link. 2401
BibRef

Ko, M.[Myeongseob], Jin, M.[Ming], Wang, C.G.[Chen-Guang], Jia, R.X.[Ruo-Xi],
Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study,
ICCV23(4848-4858)
IEEE DOI Code:
WWW Link. 2401
BibRef

Dong, J.S.[Jian-Shuo], Qiu, H.[Han], Li, Y.M.[Yi-Ming], Zhang, T.W.[Tian-Wei], Li, Y.J.[Yuan-Jie], Lai, Z.[Zeqi], Zhang, C.[Chao], Xia, S.T.[Shu-Tao],
One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training,
ICCV23(4665-4675)
IEEE DOI Code:
WWW Link. 2401
BibRef

Ma, W.S.[Wen-Shuo], Li, Y.D.[Yi-Dong], Jia, X.F.[Xiao-Feng], Xu, W.[Wei],
Transferable Adversarial Attack for Both Vision Transformers and Convolutional Networks via Momentum Integrated Gradients,
ICCV23(4607-4616)
IEEE DOI 2401
BibRef

Chen, X.Q.[Xin-Quan], Gao, X.T.[Xi-Tong], Zhao, J.J.[Juan-Juan], Ye, K.J.[Ke-Jiang], Xu, C.Z.[Cheng-Zhong],
AdvDiffuser: Natural Adversarial Example Synthesis with Diffusion Models,
ICCV23(4539-4549)
IEEE DOI 2401
BibRef

Zhou, T.[Tao], Ye, Q.[Qi], Luo, W.H.[Wen-Han], Zhang, K.[Kaihao], Shi, Z.G.[Zhi-Guo], Chen, J.M.[Ji-Ming],
F&F Attack: Adversarial Attack against Multiple Object Trackers by Inducing False Negatives and False Positives,
ICCV23(4550-4560)
IEEE DOI 2401
BibRef

Qian, Y.[Yaguan], He, S.[Shuke], Zhao, C.Y.[Chen-Yu], Sha, J.Q.[Jia-Qiang], Wang, W.[Wei], Wang, B.[Bin],
LEA2: A Lightweight Ensemble Adversarial Attack via Non-overlapping Vulnerable Frequency Regions,
ICCV23(4487-4498)
IEEE DOI 2401
BibRef

Maho, T.[Thibault], Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Furon, T.[Teddy],
How to choose your best allies for a transferable attack?,
ICCV23(4519-4528)
IEEE DOI 2401
BibRef

Wang, D.H.[Dong-Hua], Yao, W.[Wen], Jiang, T.[Tingsong], Li, C.[Chao], Chen, X.Q.[Xiao-Qian],
RFLA: A Stealthy Reflected Light Adversarial Attack in the Physical World,
ICCV23(4432-4442)
IEEE DOI 2401
BibRef

Zhou, Z.Q.[Zi-Qi], Hu, S.[Shengshan], Zhao, R.Z.[Rui-Zhi], Wang, Q.[Qian], Zhang, L.Y.[Leo Yu], Hou, J.H.[Jun-Hui], Jin, H.[Hai],
Downstream-agnostic Adversarial Examples,
ICCV23(4322-4332)
IEEE DOI Code:
WWW Link. 2401
BibRef

Kim, H.S.[Hee-Seon], Son, M.J.[Min-Ji], Kim, M.[Minbeom], Kwon, M.J.[Myung-Joon], Kim, C.[Changick],
Breaking Temporal Consistency: Generating Video Universal Adversarial Perturbations Using Image Models,
ICCV23(4302-4311)
IEEE DOI 2401
BibRef

Stolik, T.[Tomer], Lang, I.[Itai], Avidan, S.[Shai],
SAGA: Spectral Adversarial Geometric Attack on 3D Meshes,
ICCV23(4261-4271)
IEEE DOI 2401
BibRef

Zeng, H.[Hui], Zhang, T.[Tong], Chen, B.W.[Bi-Wei], Peng, A.[Anjie],
Enhancing Targeted Transferability Via Suppressing High-Confidence Labels,
ICIP23(3309-3313)
IEEE DOI Code:
WWW Link. 2312
BibRef

Kim, Y.[Yoonji], Cho, S.J.[Seung-Ju], Byun, J.[Junyoung], Kwon, M.J.[Myung-Joon], Kim, C.[Changick],
Improving Adversarial Transferability Via Feature Translation,
ICIP23(3359-3363)
IEEE DOI 2312
BibRef

Coscia, P.[Pasquale], Genovese, A.[Angelo], Scotti, F.[Fabio], Piuri, V.[Vincenzo],
Adversarial Defect Synthesis for Industrial Products in Low Data Regime,
ICIP23(1360-1364)
IEEE DOI 2312
BibRef

Lin, Z.[Zhi], Peng, A.[Anjie], Zeng, H.[Hui], Wu, K.J.[Kai-Jun], Yu, W.X.[Wen-Xin],
An Enhanced Neuron Attribution-Based Attack Via Pixel Dropping,
ICIP23(3439-3443)
IEEE DOI 2312
BibRef

Ma, M.Z.[Ming-Zhi], Zheng, W.J.[Wei-Jie], Lv, W.L.[Wan-Li], Ren, L.[Lu], Su, H.[Hang], Yin, Z.X.[Zhao-Xia],
Multi-Label Adversarial Attack Based on Label Correlation,
ICIP23(2050-2054)
IEEE DOI 2312
BibRef

Wu, T.[Tao], Luo, T.[Tie], Wunsch, D.C.[Donald C.],
GNP Attack: Transferable Adversarial Examples Via Gradient Norm Penalty,
ICIP23(3110-3114)
IEEE DOI 2312
BibRef

Sha, Z.Y.[Ze-Yang], He, X.L.[Xin-Lei], Yu, N.[Ning], Backes, M.[Michael], Zhang, Y.[Yang],
Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders,
CVPR23(16373-16383)
IEEE DOI 2309
BibRef

Feng, W.W.[Wei-Wei], Xu, N.[Nanqing], Zhang, T.Z.[Tian-Zhu], Zhang, Y.D.[Yong-Dong],
Dynamic Generative Targeted Attacks with Pattern Injection,
CVPR23(16404-16414)
IEEE DOI 2309
BibRef

Takahashi, H.[Hideaki], Liu, J.J.[Jing-Jing], Liu, Y.[Yang],
Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack,
CVPR23(12198-12207)
IEEE DOI 2309
BibRef

Wei, Z.P.[Zhi-Peng], Chen, J.J.[Jing-Jing], Wu, Z.[Zuxuan], Jiang, Y.G.[Yu-Gang],
Enhancing the Self-Universality for Transferable Targeted Attacks,
CVPR23(12281-12290)
IEEE DOI 2309
BibRef

Chen, W.X.[Wei-Xin], Song, D.[Dawn], Li, B.[Bo],
TrojDiff: Trojan Attacks on Diffusion Models with Diverse Targets,
CVPR23(4035-4044)
IEEE DOI 2309
BibRef

Zhuang, H.M.[Hao-Min], Zhang, Y.H.[Yi-Hua], Liu, S.[Sijia],
A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion,
AML23(2385-2392)
IEEE DOI 2309
BibRef

Brown, D.[Davis], Kvinge, H.[Henry],
Making Corgis Important for Honeycomb Classification: Adversarial Attacks on Concept-based Explainability Tools,
TAG-PRA23(620-627)
IEEE DOI 2309
BibRef

Shukla, N.[Nitish], Banerjee, S.[Sudipta],
Generating Adversarial Attacks in the Latent Space,
GCV23(730-739)
IEEE DOI 2309
BibRef

Zhang, L.[Lili], Wang, X.D.[Xiao-Dong],
Advfilter: Adversarial Example Generated by Perturbing Optical Path,
ACCVWS22(33-44).
Springer DOI 2307
BibRef

Koren, T.[Tom], Talker, L.[Lior], Dinerstein, M.[Michael], Vitek, R.[Ran],
Consistent Semantic Attacks on Optical Flow,
ACCV22(VII:501-517).
Springer DOI 2307
BibRef

Wu, H.[Hao], Wang, J.[Jinwei], Zhang, J.W.[Jia-Wei], Luo, X.Y.[Xiang-Yang], Ma, B.[Bin],
Improving the Transferability of Adversarial Attacks Through Both Front and Rear Vector Method,
IWDW22(83-97).
Springer DOI 2307
BibRef

Waseda, F.[Futa], Nishikawa, S.[Sosuke], Le, T.N.[Trung-Nghia], Nguyen, H.H.[Huy H.], Echizen, I.[Isao],
Closer Look at the Transferability of Adversarial Examples: How They Fool Different Models Differently,
WACV23(1360-1368)
IEEE DOI 2302
Deep learning, Analytical models, Perturbation methods, Neural networks, Predictive models, ethical computer vision BibRef

Aich, A.[Abhishek], Li, S.[Shasha], Song, C.Y.[Cheng-Yu], Asif, M.S.[M. Salman], Krishnamurthy, S.V.[Srikanth V.], Roy-Chowdhury, A.K.[Amit K.],
Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks,
WACV23(1308-1318)
IEEE DOI 2302
Perturbation methods, Computational modeling, Closed box, Generators, Convolutional neural networks, Glass box, adversarial attack and defense methods BibRef

Shapira, A.[Avishag], Zolfi, A.[Alon], Demetrio, L.[Luca], Biggio, B.[Battista], Shabtai, A.[Asaf],
Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep Object Detectors,
WACV23(4560-4569)
IEEE DOI 2302
Perturbation methods, Pipelines, Phantoms, Detectors, Object detection, Predictive models, Prediction algorithms, adversarial attack and defense methods BibRef

Tan, H.X.[Han-Xiao], Kotthaus, H.[Helena],
Explainability-Aware One Point Attack for Point Cloud Neural Networks,
WACV23(4570-4579)
IEEE DOI 2302
Point cloud compression, Codes, Filtering, Computer network reliability, Neural networks, Robustness, ethical computer vision BibRef

Chen, C.C.[Chun-Chun], Zhu, W.J.[Wen-Jie], Peng, B.[Bo], Lu, H.J.[Hui-Juan],
Towards Robust Community Detection via Extreme Adversarial Attacks,
ICPR22(2231-2237)
IEEE DOI 2212
Training, Perturbation methods, Image edge detection, Heuristic algorithms, Complex networks, Robustness BibRef

Tavallali, P.[Pooya], Behzadan, V.[Vahid], Alizadeh, A.[Azar], Ranganath, A.[Aditya], Singhal, M.[Mukesh],
Adversarial Label-Poisoning Attacks and Defense for General Multi-Class Models Based on Synthetic Reduced Nearest Neighbor,
ICIP22(3717-3722)
IEEE DOI 2211
Training, Resistance, Analytical models, Machine learning algorithms, Clustering algorithms, Machine Learning BibRef

Lin, Z.[Zhi], Peng, A.[Anjie], Wei, R.[Rong], Yu, W.X.[Wen-Xin], Zeng, H.[Hui],
An Enhanced Transferable Adversarial Attack of Scale-Invariant Methods,
ICIP22(3788-3792)
IEEE DOI 2211
Convolution, Convolutional neural networks, convolution neural network, adversarial examples, transferability BibRef

Ran, Y.[Yu], Wang, Y.G.[Yuan-Gen],
Sign-OPT+: An Improved Sign Optimization Adversarial Attack,
ICIP22(461-465)
IEEE DOI 2211
Backtracking, Costs, Codes, Training data, Data models, Complexity theory, adversarial example, binary search BibRef

Wang, D.[Dan], Lin, J.[Jiayu], Wang, Y.G.[Yuan-Gen],
Query-Efficient Adversarial Attack Based On Latin Hypercube Sampling,
ICIP22(546-550)
IEEE DOI 2211
Codes, Barium, Estimation, Benchmark testing, Hypercubes, adversarial attacks, boundary attacks, Latin Hypercube Sampling, query efficiency BibRef

Aneja, S.[Shivangi], Markhasin, L.[Lev], Nießner, M.[Matthias],
TAFIM: Targeted Adversarial Attacks Against Facial Image Manipulations,
ECCV22(XIV:58-75).
Springer DOI 2211
BibRef

Long, Y.Y.[Yu-Yang], Zhang, Q.L.[Qi-Long], Zeng, B.[Boheng], Gao, L.L.[Lian-Li], Liu, X.L.[Xiang-Long], Zhang, J.[Jian], Song, J.K.[Jing-Kuan],
Frequency Domain Model Augmentation for Adversarial Attack,
ECCV22(IV:549-566).
Springer DOI 2211
BibRef

Yuan, Z.[Zheng], Zhang, J.[Jie], Shan, S.G.[Shi-Guang],
Adaptive Image Transformations for Transfer-Based Adversarial Attack,
ECCV22(V:1-17).
Springer DOI 2211
BibRef

Cao, Y.L.[Yu-Long], Xiao, C.W.[Chao-Wei], Anandkumar, A.[Anima], Xu, D.[Danfei], Pavone, M.[Marco],
AdvDO: Realistic Adversarial Attacks for Trajectory Prediction,
ECCV22(V:36-52).
Springer DOI 2211
BibRef

Bai, J.W.[Jia-Wang], Gao, K.F.[Kuo-Feng], Gong, D.H.[Di-Hong], Xia, S.T.[Shu-Tao], Li, Z.F.[Zhi-Feng], Liu, W.[Wei],
Hardly Perceptible Trojan Attack Against Neural Networks with Bit Flips,
ECCV22(V:104-121).
Springer DOI 2211
BibRef

Liu, G.[Ganlin], Huang, X.W.[Xiao-Wei], Yi, X.P.[Xin-Ping],
Adversarial Label Poisoning Attack on Graph Neural Networks via Label Propagation,
ECCV22(V:227-243).
Springer DOI 2211
BibRef

Byun, J.[Junyoung], Shim, K.[Kyujin], Go, H.[Hyojun], Kim, C.[Changick],
Hidden Conditional Adversarial Attacks,
ICIP22(1306-1310)
IEEE DOI 2211
Deep learning, Neural networks, Inspection, Controllability, Safety, Reliability, Adversarial attack, Hidden condition BibRef

Son, M.J.[Min-Ji], Kwon, M.J.[Myung-Joon], Kim, H.S.[Hee-Seon], Byun, J.[Junyoung], Cho, S.[Seungju], Kim, C.[Changick],
Adaptive Warping Network for Transferable Adversarial Attacks,
ICIP22(3056-3060)
IEEE DOI 2211
Deep learning, Adaptation models, Adaptive systems, Perturbation methods, Neural networks, Search problems, Warping BibRef

Cao, X.Y.[Xiao-Yu], Gong, N.Z.Q.[Neil Zhen-Qiang],
MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients,
FedVision22(3395-3403)
IEEE DOI 2210
Training, Computational modeling, Production, Collaborative work BibRef

Xu, Q.L.[Qiu-Ling], Tao, G.H.[Guan-Hong], Zhang, X.Y.[Xiang-Yu],
Bounded Adversarial Attack on Deep Content Features,
CVPR22(15182-15191)
IEEE DOI 2210
Ethics, Neurons, Gaussian distribution, Regulation, Adversarial attack and defense, Representation learning BibRef

Luo, C.[Cheng], Lin, Q.L.[Qin-Liang], Xie, W.C.[Wei-Cheng], Wu, B.Z.[Bi-Zhu], Xie, J.H.[Jin-Heng], Shen, L.L.[Lin-Lin],
Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity,
CVPR22(15294-15303)
IEEE DOI 2210
Representation learning, Measurement, Visualization, Perturbation methods, Semantics, Self- semi- meta- unsupervised learning BibRef

Suryanto, N.[Naufal], Kim, Y.[Yongsu], Kang, H.[Hyoeun], Larasati, H.T.[Harashta Tatimma], Yun, Y.Y.[Young-Yeo], Le, T.T.H.[Thi-Thu-Huong], Yang, H.[Hunmin], Oh, S.Y.[Se-Yoon], Kim, H.[Howon],
DTA: Physical Camouflage Attacks using Differentiable Transformation Network,
CVPR22(15284-15293)
IEEE DOI 2210
Solid modeling, Object detection, Rendering (computer graphics), Engines, Adversarial attack and defense, retrieval BibRef

Zhong, Y.Q.[Yi-Qi], Liu, X.M.[Xian-Ming], Zhai, D.[Deming], Jiang, J.J.[Jun-Jun], Ji, X.Y.[Xiang-Yang],
Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon,
CVPR22(15324-15333)
IEEE DOI 2210
Printing, Laser theory, Codes, Perturbation methods, Machine vision, Machine learning, Adversarial attack and defense, retrieval BibRef

Tong, A.C.H.[Adrien Chan-Hon],
Symmetric adversarial poisoning against deep learning,
IPTA20(1-5)
IEEE DOI 2206
Support vector machines, Training, Deep learning, Perturbation methods, Image processing, Training data, Tools, deep learning BibRef

Li, Y.M.[Yi-Ming], Wen, C.C.[Cong-Cong], Juefei-Xu, F.[Felix], Feng, C.[Chen],
Fooling LiDAR Perception via Adversarial Trajectory Perturbation,
ICCV21(7878-7887)
IEEE DOI 2203
Point cloud compression, Wireless communication, Wireless sensor networks, Laser radar, Perturbation methods, Vision for robotics and autonomous vehicles BibRef

Wang, X.S.[Xiao-Sen], He, X.R.[Xuan-Ran], Wang, J.D.[Jing-Dong], He, K.[Kun],
Admix: Enhancing the Transferability of Adversarial Attacks,
ICCV21(16138-16147)
IEEE DOI 2203
Deep learning, Codes, Neural networks, Adversarial machine learning, Task analysis, Standards, Recognition and classification BibRef
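
A hedged sketch of the admix idea named in the entry above: average gradients over inputs lightly mixed with images drawn from other categories (and over down-scaled copies) before taking an attack step. The constants eta and m and the scale set are illustrative assumptions, not the paper's settings; `x_pool` is an assumed tensor of images from other classes.

```python
import torch
import torch.nn.functional as F

def admix_gradient(model, x, y, x_pool, eta=0.2, m=3, scales=(1.0, 0.5, 0.25)):
    """Average the loss gradient over admixed and rescaled copies of x."""
    grad = torch.zeros_like(x)
    for _ in range(m):
        # Mix in a randomly drawn batch from other categories.
        idx = torch.randperm(x_pool.size(0))[: x.size(0)]
        mixed = x + eta * x_pool[idx]
        for s in scales:
            inp = (s * mixed).detach().requires_grad_(True)
            loss = F.cross_entropy(model(inp), y)
            grad += torch.autograd.grad(loss, inp)[0]
    return grad / (m * len(scales))
```

The averaged gradient would then drive an iterative sign-step attack in place of the plain gradient.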

Chen, S.[Si], Kahla, M.[Mostafa], Jia, R.X.[Ruo-Xi], Qi, G.J.[Guo-Jun],
Knowledge-Enriched Distributional Model Inversion Attacks,
ICCV21(16158-16167)
IEEE DOI 2203
Training, Deep learning, Privacy, Codes, Computational modeling, Neural networks, Adversarial learning, Motion and tracking BibRef

Zhou, M.[Mo], Wang, L.[Le], Niu, Z.X.[Zhen-Xing], Zhang, Q.L.[Qi-Lin], Xu, Y.H.[Ying-Hui], Zheng, N.N.[Nan-Ning], Hua, G.[Gang],
Practical Relative Order Attack in Deep Ranking,
ICCV21(16393-16402)
IEEE DOI 2203
Measurement, Deep learning, Correlation, Perturbation methods, Neural networks, Interference, Adversarial learning, Fairness, Image and video retrieval BibRef

Shafran, A.[Avital], Peleg, S.[Shmuel], Hoshen, Y.[Yedid],
Membership Inference Attacks are Easier on Difficult Problems,
ICCV21(14800-14809)
IEEE DOI 2203
Training, Image segmentation, Uncertainty, Semantics, Neural networks, Benchmark testing, Data models, Fairness, grouping and shape BibRef

Naseer, M.[Muzammal], Khan, S.[Salman], Hayat, M.[Munawar], Khan, F.S.[Fahad Shahbaz], Porikli, F.M.[Fatih M.],
On Generating Transferable Targeted Perturbations,
ICCV21(7688-7697)
IEEE DOI 2203
Codes, Perturbation methods, Computational modeling, Transformers, Linear programming, Generators, Adversarial learning, Recognition and classification BibRef

Chen, H.[Huili], Fu, C.[Cheng], Zhao, J.[Jishen], Koushanfar, F.[Farinaz],
ProFlip: Targeted Trojan Attack with Progressive Bit Flips,
ICCV21(7698-7707)
IEEE DOI 2203
Training, Runtime, Neurons, Neural networks, Random access memory, Predictive models, Laser modes, Adversarial learning, Optimization and learning methods BibRef

Rony, J.[Jérôme], Granger, E.[Eric], Pedersoli, M.[Marco], Ayed, I.B.[Ismail Ben],
Augmented Lagrangian Adversarial Attacks,
ICCV21(7718-7727)
IEEE DOI 2203
Computational modeling, Computational efficiency, Computational complexity, Adversarial learning, Optimization and learning methods BibRef

Yuan, Z.[Zheng], Zhang, J.[Jie], Jia, Y.[Yunpei], Tan, C.[Chuanqi], Xue, T.[Tao], Shan, S.G.[Shi-Guang],
Meta Gradient Adversarial Attack,
ICCV21(7728-7737)
IEEE DOI 2203
Philosophical considerations, Task analysis, Adversarial learning, BibRef

Park, G.Y.[Geon Yeong], Lee, S.W.[Sang Wan],
Reliably fast adversarial training via latent adversarial perturbation,
ICCV21(7738-7747)
IEEE DOI 2203
Training, Costs, Perturbation methods, Linearity, Minimization, Computational efficiency, Adversarial learning, Recognition and classification BibRef

Tu, J.[James], Wang, T.[Tsunhsuan], Wang, J.K.[Jing-Kang], Manivasagam, S.[Sivabalan], Ren, M.[Mengye], Urtasun, R.[Raquel],
Adversarial Attacks On Multi-Agent Communication,
ICCV21(7748-7757)
IEEE DOI 2203
Deep learning, Fault tolerance, Protocols, Computational modeling, Neural networks, Fault tolerant systems, Robustness, Vision for robotics and autonomous vehicles BibRef

Feng, W.W.[Wei-Wei], Wu, B.Y.[Bao-Yuan], Zhang, T.Z.[Tian-Zhu], Zhang, Y.[Yong], Zhang, Y.D.[Yong-Dong],
Meta-Attack: Class-agnostic and Model-agnostic Physical Adversarial Attack,
ICCV21(7767-7776)
IEEE DOI 2203
Training, Deep learning, Image color analysis, Shape, Computational modeling, Neural networks, Adversarial learning, BibRef

Kim, J.Y.[Jae-Yeon], Hua, B.S.[Binh-Son], Nguyen, D.T.[Duc Thanh], Yeung, S.K.[Sai-Kit],
Minimal Adversarial Examples for Deep Learning on 3D Point Clouds,
ICCV21(7777-7786)
IEEE DOI 2203
Point cloud compression, Deep learning, Image color analysis, Perturbation methods, Semantics, Adversarial learning, Recognition and classification BibRef

Stutz, D.[David], Hein, M.[Matthias], Schiele, B.[Bernt],
Relating Adversarially Robust Generalization to Flat Minima,
ICCV21(7787-7797)
IEEE DOI 2203
Training, Correlation, Perturbation methods, Computational modeling, Robustness, Loss measurement, Optimization and learning methods BibRef

Li, C.[Chao], Gao, S.Q.[Shang-Qian], Deng, C.[Cheng], Liu, W.[Wei], Huang, H.[Heng],
Adversarial Attack on Deep Cross-Modal Hamming Retrieval,
ICCV21(2198-2207)
IEEE DOI 2203
Learning systems, Knowledge engineering, Deep learning, Correlation, Perturbation methods, Neural networks, Vision + other modalities BibRef

Duan, R.J.[Ran-Jie], Chen, Y.F.[Yue-Feng], Niu, D.[Dantong], Yang, Y.[Yun], Qin, A.K., He, Y.[Yuan],
AdvDrop: Adversarial Attack to DNNs by Dropping Information,
ICCV21(7486-7495)
IEEE DOI 2203
Deep learning, Visualization, Neural networks, Robustness, Visual perception, Adversarial learning, BibRef

Hwang, J.[Jaehui], Kim, J.H.[Jun-Hyuk], Choi, J.H.[Jun-Ho], Lee, J.S.[Jong-Seok],
Just One Moment: Structural Vulnerability of Deep Action Recognition against One Frame Attack,
ICCV21(7648-7656)
IEEE DOI 2203
Analytical models, Perturbation methods, Task analysis, Adversarial learning, Action and behavior recognition BibRef

Moayeri, M.[Mazda], Feizi, S.[Soheil],
Sample Efficient Detection and Classification of Adversarial Attacks via Self-Supervised Embeddings,
ICCV21(7657-7666)
IEEE DOI 2203
Training, Adaptation models, Toxicology, Costs, Perturbation methods, Computational modeling, Adversarial learning, Transfer/Low-shot/Semi/Unsupervised Learning BibRef

Wang, Z.B.[Zhi-Bo], Guo, H.C.[Heng-Chang], Zhang, Z.F.[Zhi-Fei], Liu, W.X.[Wen-Xin], Qin, Z.[Zhan], Ren, K.[Kui],
Feature Importance-aware Transferable Adversarial Attacks,
ICCV21(7619-7628)
IEEE DOI 2203
Degradation, Limiting, Correlation, Computational modeling, Aggregates, Transforms, Adversarial learning, Explainable AI, Recognition and classification BibRef

Wang, X.[Xin], Lin, S.Y.[Shu-Yun], Zhang, H.[Hao], Zhu, Y.F.[Yu-Fei], Zhang, Q.S.[Quan-Shi],
Interpreting Attributions and Interactions of Adversarial Attacks,
ICCV21(1075-1084)
IEEE DOI 2203
Visualization, Costs, Perturbation methods, Estimation, Task analysis, Faces, Explainable AI, Adversarial learning BibRef

Kumar, C.[Chetan], Kumar, D.[Deepak], Shao, M.[Ming],
Generative Adversarial Attack on Ensemble Clustering,
WACV22(3839-3848)
IEEE DOI 2202
Clustering methods, Supervised learning, Clustering algorithms, Benchmark testing, Probabilistic logic, Semi- and Un- supervised Learning BibRef

Du, A.[Andrew], Chen, B.[Bo], Chin, T.J.[Tat-Jun], Law, Y.W.[Yee Wei], Sasdelli, M.[Michele], Rajasegaran, R.[Ramesh], Campbell, D.[Dillon],
Physical Adversarial Attacks on an Aerial Imagery Object Detector,
WACV22(3798-3808)
IEEE DOI 2202
Measurement, Deep learning, Satellites, Neural networks, Lighting, Detectors, Observers, Deep Learning -> Adversarial Learning, Adversarial Attack and Defense Methods BibRef

Zhao, B.Y.[Bing-Yin], Lao, Y.J.[Ying-Jie],
Towards Class-Oriented Poisoning Attacks Against Neural Networks,
WACV22(2244-2253)
IEEE DOI 2202
Training, Measurement, Computational modeling, Neural networks, Machine learning, Predictive models, Adversarial Attack and Defense Methods BibRef

Chen, Z.H.[Zhen-Hua], Wang, C.H.[Chu-Hua], Crandall, D.[David],
Semantically Stealthy Adversarial Attacks against Segmentation Models,
WACV22(2846-2855)
IEEE DOI 2202
Image segmentation, Perturbation methods, Computational modeling, Feature extraction, Context modeling, Grouping and Shape BibRef

Yin, M.J.[Ming-Jun], Li, S.[Shasha], Song, C.Y.[Cheng-Yu], Asif, M.S.[M. Salman], Roy-Chowdhury, A.K.[Amit K.], Krishnamurthy, S.V.[Srikanth V.],
ADC: Adversarial attacks against object Detection that evade Context consistency checks,
WACV22(2836-2845)
IEEE DOI 2202
Deep learning, Adaptation models, Computational modeling, Neural networks, Buildings, Detectors, Adversarial Attack and Defense Methods Object Detection/Recognition/Categorization BibRef

Li, X.R.[Xiao-Rui], Cui, W.Y.[Wei-Yu], Huang, J.W.[Jia-Wei], Wang, W.Y.[Wen-Yi], Chen, J.W.[Jian-Wen],
Regularized Intermediate Layers Attack: Adversarial Examples With High Transferability,
ICIP21(1904-1908)
IEEE DOI 2201
Image recognition, Filtering, Perturbation methods, Optimization methods, Convolutional neural networks, Transferability BibRef

Bai, T.[Tao], Zhao, J.[Jun], Zhu, J.L.[Jin-Lin], Han, S.D.[Shou-Dong], Chen, J.F.[Jie-Feng], Li, B.[Bo], Kot, A.[Alex],
AI-GAN: Attack-Inspired Generation of Adversarial Examples,
ICIP21(2543-2547)
IEEE DOI 2201
Training, Image quality, Deep learning, Perturbation methods, Image processing, Generative adversarial networks, deep learning BibRef

Abdelfattah, M.[Mazen], Yuan, K.W.[Kai-Wen], Wang, Z.J.[Z. Jane], Ward, R.[Rabab],
Towards Universal Physical Attacks on Cascaded Camera-Lidar 3d Object Detection Models,
ICIP21(3592-3596)
IEEE DOI 2201
Geometry, Deep learning, Solid modeling, Laser radar, Image processing, Object detection, Adversarial attacks, deep learning BibRef

Gurulingan, N.K.[Naresh Kumar], Arani, E.[Elahe], Zonooz, B.[Bahram],
UniNet: A Unified Scene Understanding Network and Exploring Multi-Task Relationships through the Lens of Adversarial Attacks,
DeepMTL21(2239-2248)
IEEE DOI 2112
Shape, Semantics, Neural networks, Information sharing, Estimation, Object detection BibRef

Ding, Y.Z.[Yu-Zhen], Thakur, N.[Nupur], Li, B.X.[Bao-Xin],
AdvFoolGen: Creating Persistent Troubles for Deep Classifiers,
AROW21(142-151)
IEEE DOI 2112
Measurement, Deep learning, Neural networks, Buildings, Gaussian distribution BibRef

Boloor, A.[Adith], Wu, T.[Tong], Naughton, P.[Patrick], Chakrabarti, A.[Ayan], Zhang, X.[Xuan], Vorobeychik, Y.[Yevgeniy],
Can Optical Trojans Assist Adversarial Perturbations?,
AROW21(122-131)
IEEE DOI 2112
Perturbation methods, Neural networks, Pipelines, Optical device fabrication, Cameras, Optical imaging, Trojan horses BibRef

Gnanasambandam, A.[Abhiram], Sherman, A.M.[Alex M.], Chan, S.H.[Stanley H.],
Optical Adversarial Attack,
AROW21(92-101)
IEEE DOI 2112
Integrated optics, Computational modeling, Lighting, Optical imaging BibRef

Yu, Y.R.[Yun-Rui], Gao, X.T.[Xi-Tong], Xu, C.Z.[Cheng-Zhong],
LAFEAT: Piercing Through Adversarial Defenses with Latent Features,
CVPR21(5731-5741)
IEEE DOI 2111
Degradation, Schedules, Computational modeling, Perturbation methods, Lattices, Robustness BibRef

Wang, X.S.[Xiao-Sen], He, K.[Kun],
Enhancing the Transferability of Adversarial Attacks through Variance Tuning,
CVPR21(1924-1933)
IEEE DOI 2111
Deep learning, Codes, Perturbation methods, Computational modeling, Iterative methods BibRef
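
A hedged sketch in the spirit of the entry above, not its exact procedure: estimate how gradients vary in a random neighborhood of the current iterate, so the variance term can be added to the momentum update of an iterative attack. The sample count n and radius factor beta are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sampled_grad_variance(model, x, y, grad, n=5, beta=1.5, eps=8/255):
    """Average gradient over random neighbors of x, minus the gradient at x."""
    acc = torch.zeros_like(x)
    for _ in range(n):
        noise = torch.empty_like(x).uniform_(-beta * eps, beta * eps)
        nb = (x + noise).detach().requires_grad_(True)
        loss = F.cross_entropy(model(nb), y)
        acc += torch.autograd.grad(loss, nb)[0]
    return acc / n - grad
```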

Pony, R.[Roi], Naeh, I.[Itay], Mannor, S.[Shie],
Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks,
CVPR21(515-524)
IEEE DOI 2111
Deep learning, Perturbation methods, Observers, Image classification BibRef

Rampini, A.[Arianna], Pestarini, F.[Franco], Cosmo, L.[Luca], Melzi, S.[Simone], Rodolà, E.[Emanuele],
Universal Spectral Adversarial Attacks for Deformable Shapes,
CVPR21(3215-3225)
IEEE DOI 2111
Geometry, Shape, Perturbation methods, Predictive models, Eigenvalues and eigenfunctions, Robustness BibRef

Rezaei, S.[Shahbaz], Liu, X.[Xin],
On the Difficulty of Membership Inference Attacks,
CVPR21(7888-7896)
IEEE DOI 2111
Training, Analytical models, Codes, Computational modeling BibRef

Kariyappa, S.[Sanjay], Prakash, A.[Atul], Qureshi, M.K.[Moinuddin K],
MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation,
CVPR21(13809-13818)
IEEE DOI 2111
Training, Cloning, Estimation, Machine learning, Intellectual property, Predictive models, Data models BibRef

Duan, R.J.[Ran-Jie], Mao, X.F.[Xiao-Feng], Qin, A.K., Chen, Y.F.[Yue-Feng], Ye, S.[Shaokai], He, Y.[Yuan], Yang, Y.[Yun],
Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink,
CVPR21(16057-16066)
IEEE DOI 2111
Deep learning, Laser theory, Robustness, Laser beams BibRef

Chen, Z.K.[Zhi-Kai], Xie, L.X.[Ling-Xi], Pang, S.M.[Shan-Min], He, Y.[Yong], Tian, Q.[Qi],
Appending Adversarial Frames for Universal Video Attack,
WACV21(3198-3207)
IEEE DOI 2106
Measurement, Perturbation methods, Semantics, Pipelines, Euclidean distance BibRef

Cancela, B.[Brais], Bolón-Canedo, V.[Verónica], Alonso-Betanzos, A.[Amparo],
A delayed Elastic-Net approach for performing adversarial attacks,
ICPR21(378-384)
IEEE DOI 2105
Perturbation methods, Data preprocessing, Benchmark testing, Size measurement, Robustness, Security BibRef

Li, X.C.[Xiu-Chuan], Zhang, X.Y.[Xu-Yao], Yin, F.[Fei], Liu, C.L.[Cheng-Lin],
F-mixup: Attack CNNs From Fourier Perspective,
ICPR21(541-548)
IEEE DOI 2105
Training, Frequency-domain analysis, Perturbation methods, Neural networks, Robustness, High frequency BibRef

Grosse, K.[Kathrin], Smith, M.T.[Michael T.], Backes, M.[Michael],
Killing Four Birds with one Gaussian Process: The Relation between different Test-Time Attacks,
ICPR21(4696-4703)
IEEE DOI 2105
Analytical models, Reverse engineering, Training data, Gaussian processes, Data models, Classification algorithms BibRef

Barati, R.[Ramin], Safabakhsh, R.[Reza], Rahmati, M.[Mohammad],
Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks,
ICPR21(7036-7042)
IEEE DOI 2105
Training, Artificial neural networks, Proposals, Convergence, adversarial attack, robustness, adversarial training BibRef

Li, W.J.[Wen-Jie], Tondi, B.[Benedetta], Ni, R.R.[Rong-Rong], Barni, M.[Mauro],
Increased-confidence Adversarial Examples for Deep Learning Counter-forensics,
MMForWild20(411-424).
Springer DOI 2103
BibRef

Dong, X.S.[Xin-Shuai], Liu, H.[Hong], Ji, R.R.[Rong-Rong], Cao, L.J.[Liu-Juan], Ye, Q.X.[Qi-Xiang], Liu, J.Z.[Jian-Zhuang], Tian, Q.[Qi],
API-net: Robust Generative Classifier via a Single Discriminator,
ECCV20(XIII:379-394).
Springer DOI 2011
BibRef

Liu, A.S.[Ai-Shan], Huang, T.R.[Tai-Ran], Liu, X.L.[Xiang-Long], Xu, Y.T.[Yi-Tao], Ma, Y.Q.[Yu-Qing], Chen, X.Y.[Xin-Yun], Maybank, S.J.[Stephen J.], Tao, D.C.[Da-Cheng],
Spatiotemporal Attacks for Embodied Agents,
ECCV20(XVII:122-138).
Springer DOI 2011
Code, Adversarial Attack.
WWW Link. BibRef

Fan, Y.B.[Yan-Bo], Wu, B.Y.[Bao-Yuan], Li, T.H.[Tuan-Hui], Zhang, Y.[Yong], Li, M.Y.[Ming-Yang], Li, Z.F.[Zhi-Feng], Yang, Y.J.[Yu-Jiu],
Sparse Adversarial Attack via Perturbation Factorization,
ECCV20(XXII:35-50).
Springer DOI 2011
BibRef

Guo, J.F.[Jun-Feng], Liu, C.[Cong],
Practical Poisoning Attacks on Neural Networks,
ECCV20(XXVII:142-158).
Springer DOI 2011
BibRef

Costales, R., Mao, C., Norwitz, R., Kim, B., Yang, J.,
Live Trojan Attacks on Deep Neural Networks,
AML-CV20(3460-3469)
IEEE DOI 2008
Trojan horses, Computational modeling, Neural networks, Machine learning BibRef

Haque, M., Chauhan, A., Liu, C., Yang, W.,
ILFO: Adversarial Attack on Adaptive Neural Networks,
CVPR20(14252-14261)
IEEE DOI 2008
Computational modeling, Energy consumption, Robustness, Neural networks, Adaptation models, Machine learning, Perturbation methods BibRef

Zhou, M., Wu, J., Liu, Y., Liu, S., Zhu, C.,
DaST: Data-Free Substitute Training for Adversarial Attacks,
CVPR20(231-240)
IEEE DOI 2008
Data models, Training, Machine learning, Perturbation methods, Task analysis, Estimation BibRef

Ganeshan, A.[Aditya], Vivek, B.S., Radhakrishnan, V.B.[Venkatesh Babu],
FDA: Feature Disruptive Attack,
ICCV19(8068-8078)
IEEE DOI 2004
Deal with adversarial attacks. image classification, image representation, learning (artificial intelligence), neural nets, optimisation, BibRef

Han, J., Dong, X., Zhang, R., Chen, D., Zhang, W., Yu, N., Luo, P., Wang, X.,
Once a MAN: Towards Multi-Target Attack via Learning Multi-Target Adversarial Network Once,
ICCV19(5157-5166)
IEEE DOI 2004
convolutional neural nets, learning (artificial intelligence), pattern classification, security of data, Decoding BibRef

Deng, Y., Karam, L.J.,
Universal Adversarial Attack Via Enhanced Projected Gradient Descent,
ICIP20(1241-1245)
IEEE DOI 2011
Perturbation methods, Computational modeling, Training, Convolutional neural networks, projected gradient descent (PGD) BibRef
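
A hedged sketch of training a single universal perturbation with projected gradient steps over a dataset, in the general spirit of the entry above (the paper's enhanced-PGD details are not reproduced). The input size, step sizes, and epoch count are assumptions; `loader` is assumed to yield (image, label) batches in [0, 1].

```python
import torch
import torch.nn.functional as F

def universal_pgd(model, loader, eps=10/255, alpha=1/255, epochs=5):
    """Learn one shared perturbation that raises the loss across a dataset."""
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)  # assumed input size
    for _ in range(epochs):
        for x, y in loader:
            loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()  # ascend the loss
                delta.clamp_(-eps, eps)             # project to the eps-ball
                delta.grad.zero_()
    return delta.detach()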

Sun, C., Chen, S., Cai, J., Huang, X.,
Type I Attack For Generative Models,
ICIP20(593-597)
IEEE DOI 2011
Image reconstruction, Decoding, Aerospace electronics, Generative adversarial networks, generative models BibRef

Braunegg, A., Chakraborty, A.[Amartya], Krumdick, M.[Michael], Lape, N.[Nicole], Leary, S.[Sara], Manville, K.[Keith], Merkhofer, E.[Elizabeth], Strickhart, L.[Laura], Walmer, M.[Matthew],
Apricot: A Dataset of Physical Adversarial Attacks on Object Detection,
ECCV20(XXI:35-50).
Springer DOI 2011
BibRef

Zhang, H.[Hu], Zhu, L.C.[Lin-Chao], Zhu, Y.[Yi], Yang, Y.[Yi],
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior,
ECCV20(XX:240-256).
Springer DOI 2011
BibRef

Gao, L.L.[Lian-Li], Zhang, Q.L.[Qi-Long], Song, J.K.[Jing-Kuan], Liu, X.L.[Xiang-Long], Shen, H.T.[Heng Tao],
Patch-wise Attack for Fooling Deep Neural Network,
ECCV20(XXVIII:307-322).
Springer DOI 2011
BibRef

Bai, J.W.[Jia-Wang], Chen, B.[Bin], Li, Y.M.[Yi-Ming], Wu, D.X.[Dong-Xian], Guo, W.W.[Wei-Wei], Xia, S.T.[Shu-Tao], Yang, E.H.[En-Hui],
Targeted Attack for Deep Hashing Based Retrieval,
ECCV20(I:618-634).
Springer DOI 2011
BibRef

Nakka, K.K.[Krishna Kanth], Salzmann, M.[Mathieu],
Indirect Local Attacks for Context-aware Semantic Segmentation Networks,
ECCV20(V:611-628).
Springer DOI 2011
BibRef

Wu, Z.X.[Zu-Xuan], Lim, S.N.[Ser-Nam], Davis, L.S.[Larry S.], Goldstein, T.[Tom],
Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors,
ECCV20(IV:1-17).
Springer DOI 2011
BibRef

Li, Q.Z.[Qi-Zhang], Guo, Y.W.[Yi-Wen], Chen, H.[Hao],
Yet Another Intermediate-level Attack,
ECCV20(XVI: 241-257).
Springer DOI 2010
BibRef

Li, M., Deng, C., Li, T., Yan, J., Gao, X., Huang, H.,
Towards Transferable Targeted Attack,
CVPR20(638-646)
IEEE DOI 2008
Curing, Iterative methods, Extraterrestrial measurements, Entropy, Perturbation methods, Robustness BibRef

Gupta, S., Dube, P., Verma, A.,
Improving the affordability of robustness training for DNNs,
AML-CV20(3383-3392)
IEEE DOI 2008
Training, Mathematical model, Computational modeling, Robustness, Neural networks, Optimization BibRef

Zhang, Z., Wu, T.,
Learning Ordered Top-k Adversarial Attacks via Adversarial Distillation,
AML-CV20(3364-3373)
IEEE DOI 2008
Perturbation methods, Robustness, Task analysis, Semantics, Training, Visualization, Protocols BibRef

Chen, X., Yan, X., Zheng, F., Jiang, Y., Xia, S., Zhao, Y., Ji, R.,
One-Shot Adversarial Attacks on Visual Tracking With Dual Attention,
CVPR20(10173-10182)
IEEE DOI 2008
Target tracking, Task analysis, Visualization, Perturbation methods, Object tracking, Optimization BibRef

Zhou, H., Chen, D., Liao, J., Chen, K., Dong, X., Liu, K., Zhang, W., Hua, G., Yu, N.,
LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud Based Deep Networks,
CVPR20(10353-10362)
IEEE DOI 2008
Feature extraction, Perturbation methods, Decoding, Training, Neural networks, Target recognition BibRef

Machiraju, H.[Harshitha], Balasubramanian, V.N.[Vineeth N],
A Little Fog for a Large Turn,
WACV20(2891-2900)
IEEE DOI 2006
Perturbation methods, Meteorology, Autonomous robots, Task analysis, Data models, Predictive models, Robustness BibRef

Yang, C.H., Liu, Y., Chen, P., Ma, X., Tsai, Y.J.,
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks,
ICIP19(3811-3815)
IEEE DOI 1910
Causal Reasoning, Adversarial Example, Adversarial Robustness, Interpretable Deep Learning, Visual Reasoning BibRef

Yao, H., Regan, M., Yang, Y., Ren, Y.,
Image Decomposition and Classification Through a Generative Model,
ICIP19(400-404)
IEEE DOI 1910
Generative model, classification, adversarial defense BibRef

Li, J., Ji, R., Liu, H., Hong, X., Gao, Y., Tian, Q.,
Universal Perturbation Attack Against Image Retrieval,
ICCV19(4898-4907)
IEEE DOI 2004
feature extraction, image classification, image representation, image retrieval, learning (artificial intelligence), Pipelines BibRef

Finlay, C., Pooladian, A., Oberman, A.,
The LogBarrier Adversarial Attack: Making Effective Use of Decision Boundary Information,
ICCV19(4861-4869)
IEEE DOI 2004
gradient methods, image classification, minimisation, neural nets, security of data, LogBarrier adversarial attack, Benchmark testing BibRef

Jandial, S., Mangla, P., Varshney, S., Balasubramanian, V.,
AdvGAN++: Harnessing Latent Layers for Adversary Generation,
NeurArch19(2045-2048)
IEEE DOI 2004
feature extraction, neural nets, MNIST datasets, CIFAR-10 datasets, attack rates, realistic images, latent features, input image, AdvGAN BibRef

Wang, C.L.[Cheng-Long], Bunel, R.[Rudy], Dvijotham, K.[Krishnamurthy], Huang, P.S.[Po-Sen], Grefenstette, E.[Edward], Kohli, P.[Pushmeet],
Knowing When to Stop: Evaluation and Verification of Conformity to Output-Size Specifications,
CVPR19(12252-12261).
IEEE DOI 2002
Vulnerability of these models to attacks aimed at changing the output size. BibRef

Modas, A.[Apostolos], Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal],
SparseFool: A Few Pixels Make a Big Difference,
CVPR19(9079-9088).
IEEE DOI 2002
sparse attack. BibRef

Yao, Z.W.[Zhe-Wei], Gholami, A.[Amir], Xu, P.[Peng], Keutzer, K.[Kurt], Mahoney, M.W.[Michael W.],
Trust Region Based Adversarial Attack on Neural Networks,
CVPR19(11342-11351).
IEEE DOI 2002
BibRef

Zeng, X.H.[Xiao-Hui], Liu, C.X.[Chen-Xi], Wang, Y.S.[Yu-Siang], Qiu, W.C.[Wei-Chao], Xie, L.X.[Ling-Xi], Tai, Y.W.[Yu-Wing], Tang, C.K.[Chi-Keung], Yuille, A.L.[Alan L.],
Adversarial Attacks Beyond the Image Space,
CVPR19(4297-4306).
IEEE DOI 2002
BibRef

Corneanu, C.A.[Ciprian A.], Madadi, M.[Meysam], Escalera, S.[Sergio], Martinez, A.M.[Aleix M.],
What Does It Mean to Learn in Deep Networks? And, How Does One Detect Adversarial Attacks?,
CVPR19(4752-4761).
IEEE DOI 2002
BibRef

Liu, X.Q.[Xuan-Qing], Hsieh, C.J.[Cho-Jui],
Rob-GAN: Generator, Discriminator, and Adversarial Attacker,
CVPR19(11226-11235).
IEEE DOI 2002
BibRef

Gupta, P.[Puneet], Rahtu, E.[Esa],
MLAttack: Fooling Semantic Segmentation Networks by Multi-layer Attacks,
GCPR19(401-413).
Springer DOI 1911
BibRef

Zhao, W.[Wei], Yang, P.P.[Peng-Peng], Ni, R.R.[Rong-Rong], Zhao, Y.[Yao], Li, W.J.[Wen-Jie],
Cycle GAN-Based Attack on Recaptured Images to Fool both Human and Machine,
IWDW18(83-92).
Springer DOI 1905
BibRef

Xu, X.J.[Xiao-Jun], Chen, X.Y.[Xin-Yun], Liu, C.[Chang], Rohrbach, A.[Anna], Darrell, T.J.[Trevor J.], Song, D.[Dawn],
Fooling Vision and Language Models Despite Localization and Attention Mechanism,
CVPR18(4951-4961)
IEEE DOI 1812
Attacks. Prediction algorithms, Computational modeling, Neural networks, Knowledge discovery, Visualization, Predictive models, Natural languages BibRef

Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., Li, J.,
Boosting Adversarial Attacks with Momentum,
CVPR18(9185-9193)
IEEE DOI 1812
Iterative methods, Robustness, Training, Data models, Adaptation models, Security BibRef
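
The momentum update commonly associated with the entry above is simple enough to sketch: accumulate L1-normalized gradients across iterations, take sign steps, and project back into the eps-ball. Hyperparameters here are illustrative, and batched NCHW inputs in [0, 1] are assumed.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8/255, steps=10, mu=1.0):
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Momentum: accumulate L1-normalized gradients across iterations.
        g = mu * g + grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)
        with torch.no_grad():
            x_adv = x_adv + alpha * g.sign()
            # Project back into the eps-ball and the valid pixel range.
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```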

Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D.,
Robust Physical-World Attacks on Deep Learning Visual Classification,
CVPR18(1625-1634)
IEEE DOI 1812
Perturbation methods, Roads, Cameras, Visualization, Pipelines, Autonomous vehicles, Detectors BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Backdoor Attacks.


Last update: Nov 26, 2024 at 16:40:19