Biggio, B.[Battista],
Roli, F.[Fabio],
Wild Patterns: Ten Years After the Rise of Adversarial Machine
Learning,
PR(84), 2018, pp. 317-331.
Elsevier DOI
1809
Award, Pattern Recognition. Adversarial machine learning, Evasion attacks,
Poisoning attacks, Adversarial examples, Secure learning, Deep learning
BibRef
Hang, J.[Jie],
Han, K.[Keji],
Chen, H.[Hui],
Li, Y.[Yun],
Ensemble adversarial black-box attacks against deep learning systems,
PR(101), 2020, pp. 107184.
Elsevier DOI
2003
Black-box attack, Vulnerability, Ensemble adversarial attack,
Diversity, Transferability
BibRef
Croce, F.[Francesco],
Rauber, J.[Jonas],
Hein, M.[Matthias],
Scaling up the Randomized Gradient-Free Adversarial Attack Reveals
Overestimation of Robustness Using Established Attacks,
IJCV(128), No. 4, April 2020, pp. 1028-1046.
Springer DOI
2004
BibRef
Earlier: A1, A3, Only:
A Randomized Gradient-Free Attack on ReLU Networks,
GCPR18(215-227).
Springer DOI
1905
Award, GCPR, HM.
BibRef
Romano, Y.[Yaniv],
Aberdam, A.[Aviad],
Sulam, J.[Jeremias],
Elad, M.[Michael],
Adversarial Noise Attacks of Deep Learning Architectures:
Stability Analysis via Sparse-Modeled Signals,
JMIV(62), No. 3, April 2020, pp. 313-327.
Springer DOI
2004
BibRef
Aberdam, A.[Aviad],
Golts, A.[Alona],
Elad, M.[Michael],
Ada-LISTA: Learned Solvers Adaptive to Varying Models,
PAMI(44), No. 12, December 2022, pp. 9222-9235.
IEEE DOI
2212
Dictionaries, Adaptation models, Training, Convergence, Encoding,
Sparse matrices, Numerical models, Sparse coding, learned solvers,
deep learning modeling
BibRef
Ozbulak, U.[Utku],
Gasparyan, M.[Manvel],
de Neve, W.[Wesley],
van Messem, A.[Arnout],
Perturbation analysis of gradient-based adversarial attacks,
PRL(135), 2020, pp. 313-320.
Elsevier DOI
2006
Adversarial attacks, Adversarial examples, Deep learning, Perturbation analysis
BibRef
Wan, S.[Sheng],
Wu, T.Y.[Tung-Yu],
Hsu, H.W.[Heng-Wei],
Wong, W.H.[Wing Hung],
Lee, C.Y.[Chen-Yi],
Feature Consistency Training With JPEG Compressed Images,
CirSysVideo(30), No. 12, December 2020, pp. 4769-4780.
IEEE DOI
2012
Deep neural networks are vulnerable to JPEG compression artifacts; trains features to be consistent across compression (sketched below).
Image coding, Distortion, Training, Transform coding, Robustness,
Quantization (signal), Feature extraction, Compression artifacts,
classification robustness
BibRef
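A minimal PyTorch sketch of the feature-consistency idea annotated above, assuming a model that returns (features, logits); the JPEG round-trip helper, the loss weight lam, and the model interface are illustrative assumptions, not the authors' released code.

import io
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms

to_tensor = transforms.ToTensor()
to_pil = transforms.ToPILImage()

def jpeg_compress(batch, quality=75):
    # Round-trip each 3-channel float image in [0,1] through an
    # in-memory JPEG encoder to simulate compression artifacts.
    out = []
    for img in batch.cpu():
        buf = io.BytesIO()
        to_pil(img).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        out.append(to_tensor(Image.open(buf)))
    return torch.stack(out).to(batch.device)

def feature_consistency_loss(model, x, y, lam=1.0):
    # Assumes model(x) returns (features, logits). Features of the
    # clean and JPEG-compressed inputs are pulled together by L2.
    feat_clean, logits = model(x)
    feat_jpeg, _ = model(jpeg_compress(x))
    task = F.cross_entropy(logits, y)
    consist = F.mse_loss(feat_jpeg, feat_clean.detach())
    return task + lam * consist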
Che, Z.,
Borji, A.,
Zhai, G.,
Ling, S.,
Li, J.,
Tian, Y.,
Guo, G.,
Le Callet, P.,
Adversarial Attack Against Deep Saliency Models Powered by
Non-Redundant Priors,
IP(30), 2021, pp. 1973-1988.
IEEE DOI
2101
Computational modeling, Perturbation methods, Redundancy,
Task analysis, Visualization, Robustness, Neural networks,
gradient estimation
BibRef
Xu, Y.,
Du, B.,
Zhang, L.,
Assessing the Threat of Adversarial Examples on Deep Neural Networks
for Remote Sensing Scene Classification: Attacks and Defenses,
GeoRS(59), No. 2, February 2021, pp. 1604-1617.
IEEE DOI
2101
Remote sensing, Neural networks, Deep learning,
Perturbation methods, Feature extraction, Task analysis,
scene classification
BibRef
Correia-Silva, J.R.[Jacson Rodrigues],
Berriel, R.F.[Rodrigo F.],
Badue, C.[Claudine],
de Souza, A.F.[Alberto F.],
Oliveira-Santos, T.[Thiago],
Copycat CNN: Are random non-Labeled data enough to steal knowledge
from black-box models?,
PR(113), 2021, pp. 107830.
Elsevier DOI
2103
Copy a CNN model by querying it with random unlabeled data (sketched below).
Deep learning, Convolutional neural network,
Neural network attack, Stealing network knowledge, Knowledge distillation
BibRef
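A hedged sketch of the stealing loop described above: query the black-box victim with unlabeled images, keep only its hard labels, and fit a copy network on them. The victim/copy interfaces and a loader yielding plain image batches are assumptions.

import torch
import torch.nn.functional as F

@torch.no_grad()
def harvest_labels(victim, unlabeled_loader, device="cpu"):
    # Query the black-box victim; keep only its hard (argmax) labels.
    pairs = []
    for x in unlabeled_loader:
        x = x.to(device)
        pairs.append((x.cpu(), victim(x).argmax(dim=1).cpu()))
    return pairs

def train_copy(copy, pairs, epochs=10, lr=1e-3, device="cpu"):
    # Standard supervised training of the copy on the stolen labels.
    opt = torch.optim.Adam(copy.parameters(), lr=lr)
    copy.train()
    for _ in range(epochs):
        for x, y in pairs:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            F.cross_entropy(copy(x), y).backward()
            opt.step()
    return copy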
Xiao, Y.[Yatie],
Pun, C.M.[Chi-Man],
Liu, B.[Bo],
Fooling deep neural detection networks with adaptive object-oriented
adversarial perturbation,
PR(115), 2021, pp. 107903.
Elsevier DOI
2104
Object detection, Adversarial attack, Adaptive object-oriented perturbation
BibRef
Yamanaka, K.[Koichiro],
Takahashi, K.[Keita],
Fujii, T.[Toshiaki],
Matsumoto, R.[Ryutaroh],
Simultaneous Attack on CNN-Based Monocular Depth Estimation and Optical
Flow Estimation,
IEICE(E104-D), No. 5, May 2021, pp. 785-788.
WWW Link.
2105
BibRef
Lin, H.Y.[Hsiao-Ying],
Biggio, B.[Battista],
Adversarial Machine Learning: Attacks From Laboratories to the Real
World,
Computer(54), No. 5, May 2021, pp. 56-60.
IEEE DOI
2106
Adversarial machine learning, Data models, Training data,
Biological system modeling
BibRef
Wang, B.[Bo],
Zhao, M.[Mengnan],
Wang, W.[Wei],
Wei, F.[Fei],
Qin, Z.[Zhan],
Ren, K.[Kui],
Are You Confident That You Have Successfully Generated Adversarial
Examples?,
CirSysVideo(31), No. 6, June 2021, pp. 2089-2099.
IEEE DOI
2106
Perturbation methods, Iterative methods, Computational modeling,
Neural networks, Security, Training, Robustness,
buffer
BibRef
Gragnaniello, D.[Diego],
Marra, F.[Francesco],
Verdoliva, L.[Luisa],
Poggi, G.[Giovanni],
Perceptual quality-preserving black-box attack against deep learning
image classifiers,
PRL(147), 2021, pp. 142-149.
Elsevier DOI
2106
Image classification, Face recognition, Adversarial attacks, Black-box
BibRef
Tang, S.L.[San-Li],
Huang, X.L.[Xiao-Lin],
Chen, M.J.[Ming-Jian],
Sun, C.J.[Cheng-Jin],
Yang, J.[Jie],
Adversarial Attack Type I: Cheat Classifiers by Significant Changes,
PAMI(43), No. 3, March 2021, pp. 1100-1109.
IEEE DOI
2102
Neural networks, Training, Aerospace electronics,
Toy manufacturing industry, Sun, Face recognition, Task analysis,
supervised variational autoencoder
BibRef
Wang, L.[Lin],
Yoon, K.J.[Kuk-Jin],
PSAT-GAN: Efficient Adversarial Attacks Against Holistic Scene
Understanding,
IP(30), 2021, pp. 7541-7553.
IEEE DOI
2109
Task analysis, Perturbation methods, Visualization, Pipelines,
Autonomous vehicles, Semantics, Generative adversarial networks,
generative model
BibRef
Mohamad-Nezami, O.[Omid],
Chaturvedi, A.[Akshay],
Dras, M.[Mark],
Garain, U.[Utpal],
Pick-Object-Attack:
Type-specific adversarial attack for object detection,
CVIU(211), 2021, pp. 103257.
Elsevier DOI
2110
Adversarial attack, Faster R-CNN, Deep learning,
Image captioning
BibRef
Qin, C.[Chuan],
Wu, L.[Liang],
Zhang, X.P.[Xin-Peng],
Feng, G.R.[Guo-Rui],
Efficient Non-Targeted Attack for Deep Hashing Based Image Retrieval,
SPLetters(28), 2021, pp. 1893-1897.
IEEE DOI
2110
Codes, Perturbation methods, Hamming distance, Image retrieval,
Training, Feature extraction, Databases, Adversarial example,
image retrieval
BibRef
Cinà, A.E.[Antonio Emanuele],
Torcinovich, A.[Alessandro],
Pelillo, M.[Marcello],
A black-box adversarial attack for poisoning clustering,
PR(122), 2022, pp. 108306.
Elsevier DOI
2112
Adversarial learning, Unsupervised learning, Clustering,
Robustness evaluation, Machine learning security
BibRef
Du, C.[Chuan],
Zhang, L.[Lei],
Adversarial Attack for SAR Target Recognition Based on
UNet-Generative Adversarial Network,
RS(13), No. 21, 2021, pp. xx-yy.
DOI Link
2112
BibRef
Ghosh, A.[Arka],
Mullick, S.S.[Sankha Subhra],
Datta, S.[Shounak],
Das, S.[Swagatam],
Das, A.K.[Asit Kr.],
Mallipeddi, R.[Rammohan],
A black-box adversarial attack strategy with adjustable sparsity and
generalizability for deep image classifiers,
PR(122), 2022, pp. 108279.
Elsevier DOI
2112
Adversarial attack, Black-box attack,
Convolutional image classifier, Differential evolution,
Sparse universal attack
BibRef
Wang, H.J.[Hong-Jun],
Li, G.B.[Guan-Bin],
Liu, X.B.[Xiao-Bai],
Lin, L.[Liang],
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack
and Learning,
PAMI(44), No. 4, April 2022, pp. 1725-1737.
IEEE DOI
2203
Training, Monte Carlo methods, Space exploration, Robustness,
Markov processes, Cats, Iterative methods, Adversarial example,
robustness and safety of machine learning
BibRef
Chen, S.[Sizhe],
He, Z.B.[Zheng-Bao],
Sun, C.J.[Cheng-Jin],
Yang, J.[Jie],
Huang, X.L.[Xiao-Lin],
Universal Adversarial Attack on Attention and the Resulting Dataset
DAmageNet,
PAMI(44), No. 4, April 2022, pp. 2188-2197.
IEEE DOI
2203
Heating systems, Training, Neural networks, Perturbation methods,
Semantics, Visualization, Error analysis, Adversarial attack, DAmageNet
BibRef
Chen, S.[Sizhe],
He, F.[Fan],
Huang, X.L.[Xiao-Lin],
Zhang, K.[Kun],
Relevance attack on detectors,
PR(124), 2022, pp. 108491.
Elsevier DOI
2203
Adversarial attack, Attack transferability, Black-box attack,
Relevance map, Interpreters, Object detection
BibRef
Kim, J.[Jinsub],
On Optimality of Deterministic Rules in Adversarial Bayesian
Detection,
SPLetters(29), 2022, pp. 757-761.
IEEE DOI
2204
Bayes methods, Games, Zirconium, Markov processes, Detectors,
Uncertainty, Training data, Adversarial Bayesian detection,
input data falsification
BibRef
Wei, X.X.[Xing-Xing],
Yan, H.Q.[Huan-Qian],
Li, B.[Bo],
Sparse Black-Box Video Attack with Reinforcement Learning,
IJCV(130), No. 6, June 2022, pp. 1459-1473.
Springer DOI
2207
BibRef
Sun, X.X.[Xu-Xiang],
Cheng, G.[Gong],
Pei, L.[Lei],
Han, J.W.[Jun-Wei],
Query-efficient decision-based attack via sampling distribution
reshaping,
PR(129), 2022, pp. 108728.
Elsevier DOI
2206
Adversarial examples, Decision-based attack,
Image classification, Normal vector estimation, Distribution reshaping
BibRef
Chen, S.M.[Shan-Mou],
Zhang, Q.Q.[Qiang-Qiang],
Lin, D.Y.[Dong-Yuan],
Wang, S.Y.[Shi-Yuan],
A Class of Nonlinear Kalman Filters Under a Generalized Measurement
Model With False Data Injection Attacks,
SPLetters(29), 2022, pp. 1187-1191.
IEEE DOI
2206
Additives, Kalman filters, Data models, Noise measurement,
Time measurement, Numerical models, Loss measurement, Cyber attack,
nonlinear Kalman filtering
BibRef
Hu, Z.C.[Zi-Chao],
Li, H.[Heng],
Yuan, L.H.[Li-Heng],
Cheng, Z.[Zhang],
Yuan, W.[Wei],
Zhu, M.[Ming],
Model scheduling and sample selection for ensemble adversarial
example attacks,
PR(130), 2022, pp. 108824.
Elsevier DOI
2206
Adversarial example, Black-box attack, Model scheduling, Sample selection
BibRef
Chen, M.[Mantun],
Wang, Y.J.[Yong-Jun],
Zhu, X.T.[Xia-Tian],
Few-shot Website Fingerprinting attack with Meta-Bias Learning,
PR(130), 2022, pp. 108739.
Elsevier DOI
2206
User privacy, Internet anonymity, Data traffic,
Website fingerprinting, Deep learning, Neural network, Parameter factorization
BibRef
Zhang, Z.[Zheng],
Wang, X.G.[Xun-Guang],
Lu, G.M.[Guang-Ming],
Shen, F.M.[Fu-Min],
Zhu, L.[Lei],
Targeted Attack of Deep Hashing Via Prototype-Supervised Adversarial
Networks,
MultMed(24), 2022, pp. 3392-3404.
IEEE DOI
2207
Semantics, Prototypes, Generators, Optimization, Cats, Binary codes,
Task analysis, Adversarial example, targeted attack, deep hashing,
generative adversarial network
BibRef
Wang, X.G.[Xun-Guang],
Zhang, Z.[Zheng],
Wu, B.Y.[Bao-Yuan],
Shen, F.M.[Fu-Min],
Lu, G.M.[Guang-Ming],
Prototype-supervised Adversarial Network for Targeted Attack of Deep
Hashing,
CVPR21(16352-16361)
IEEE DOI
2111
Knowledge engineering, Codes, Hamming distance,
Semantics, Image retrieval, Prototypes
BibRef
Huang, L.F.[Li-Feng],
Wei, S.X.[Shu-Xin],
Gao, C.Y.[Cheng-Ying],
Liu, N.[Ning],
Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks,
PR(131), 2022, pp. 108831.
Elsevier DOI
2208
Adversarial example, Transferability, Black-box attack, Defenses
BibRef
He, Z.[Ziwen],
Wang, W.[Wei],
Dong, J.[Jing],
Tan, T.N.[Tie-Niu],
Revisiting ensemble adversarial attack,
SP:IC(107), 2022, pp. 116747.
Elsevier DOI
2208
Adversarial attack, Ensemble strategies,
Gradient-based methods, Deep neural networks, Image classification
BibRef
Cheng, Y.P.[Yu-Peng],
Guo, Q.[Qing],
Juefei-Xu, F.[Felix],
Lin, S.W.[Shang-Wei],
Feng, W.[Wei],
Lin, W.S.[Wei-Si],
Liu, Y.[Yang],
Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack,
MultMed(24), 2022, pp. 3807-3822.
IEEE DOI
2208
Noise reduction, Kernel, Task analysis, Image denoising,
Image quality, Noise measurement, Deep learning, adversarial attack
BibRef
Peng, B.[Bowen],
Peng, B.[Bo],
Yong, S.W.[Shao-Wei],
Liu, L.[Li],
An Empirical Study of Fully Black-Box and Universal Adversarial
Attack for SAR Target Recognition,
RS(14), No. 16, 2022, pp. xx-yy.
DOI Link
2208
BibRef
Akhtar, N.[Naveed],
Jalwana, M.A.A.K.[Mohammad A. A. K.],
Bennamoun, M.[Mohammed],
Mian, A.[Ajmal],
Attack to Fool and Explain Deep Networks,
PAMI(44), No. 10, October 2022, pp. 5980-5995.
IEEE DOI
2209
Perturbation methods, Computational modeling, Visualization,
Predictive models, Data models, Tools, Task analysis,
explainable AI
BibRef
Ma, K.[Ke],
Xu, Q.Q.[Qian-Qian],
Zeng, J.S.[Jin-Shan],
Cao, X.C.[Xiao-Chun],
Huang, Q.M.[Qing-Ming],
Poisoning Attack Against Estimating From Pairwise Comparisons,
PAMI(44), No. 10, October 2022, pp. 6393-6408.
IEEE DOI
2209
Optimization, Heuristic algorithms, Sports, Voting, Uncertainty, Games,
Data models, Adversarial learning, poisoning attack,
distributionally robust optimization
BibRef
Zhang, J.[Jie],
Chen, D.D.[Dong-Dong],
Huang, Q.D.[Qi-Dong],
Liao, J.[Jing],
Zhang, W.M.[Wei-Ming],
Feng, H.M.[Hua-Min],
Hua, G.[Gang],
Yu, N.H.[Neng-Hai],
Poison Ink: Robust and Invisible Backdoor Attack,
IP(31), 2022, pp. 5691-5705.
IEEE DOI
2209
Toxicology, Ink, Training, Robustness, Data models, Training data,
Task analysis, Backdoor attack, stealthiness, robustness,
generality, flexibility
BibRef
Deng, Y.P.[Ying-Peng],
Karam, L.J.[Lina J.],
Frequency-Tuned Universal Adversarial Attacks on Texture Recognition,
IP(31), 2022, pp. 5856-5868.
IEEE DOI
2209
Perturbation methods, Frequency-domain analysis, Training,
Feature extraction, Image recognition, Generators,
just-noticeable difference (JND)
BibRef
Lin, B.Q.[Bing-Qian],
Zhu, Y.[Yi],
Long, Y.X.[Yan-Xin],
Liang, X.D.[Xiao-Dan],
Ye, Q.X.[Qi-Xiang],
Lin, L.[Liang],
Adversarial Reinforced Instruction Attacker for Robust
Vision-Language Navigation,
PAMI(44), No. 10, October 2022, pp. 7175-7189.
IEEE DOI
2209
Navigation from language.
Navigation, Task analysis, Visualization, Robustness,
Perturbation methods, Stairs, Natural languages,
self-supervised learning
BibRef
Giulivi, L.[Loris],
Jere, M.[Malhar],
Rossi, L.[Loris],
Koushanfar, F.[Farinaz],
Ciocarlie, G.[Gabriela],
Hitaj, B.[Briland],
Boracchi, G.[Giacomo],
Adversarial scratches: Deployable attacks to CNN classifiers,
PR(133), 2023, pp. 108985.
Elsevier DOI
2210
Adversarial perturbations, Adversarial attacks, Deep learning,
Convolutional neural networks, Bézier curves
BibRef
Li, C.[Chao],
Yao, W.[Wen],
Wang, H.D.[Han-Ding],
Jiang, T.S.[Ting-Song],
Adaptive momentum variance for attention-guided sparse adversarial
attacks,
PR(133), 2023, pp. 108979.
Elsevier DOI
2210
Deep neural networks, Black-box adversarial attacks,
Transferability, Momentum variances
BibRef
Lin, X.X.[Xi-Xun],
Zhou, C.[Chuan],
Wu, J.[Jia],
Yang, H.[Hong],
Wang, H.B.[Hai-Bo],
Cao, Y.[Yanan],
Wang, B.[Bin],
Exploratory Adversarial Attacks on Graph Neural Networks for
Semi-Supervised Node Classification,
PR(133), 2023, pp. 109042.
Elsevier DOI
2210
Gradient-based attacks, Maximal gradient,
Graph neural networks, Semi-supervised node classification
BibRef
Zhao, C.L.[Cheng-Long],
Ni, B.B.[Bing-Bing],
Mei, S.B.[Shi-Bin],
Explore Adversarial Attack via Black Box Variational Inference,
SPLetters(29), 2022, pp. 2088-2092.
IEEE DOI
2211
Monte Carlo methods, Computational modeling,
Probability distribution, Gaussian distribution, Bayes methods,
Bayesian inference
BibRef
Bai, T.[Tao],
Wang, H.[Hao],
Wen, B.[Bihan],
Targeted Universal Adversarial Examples for Remote Sensing,
RS(14), No. 22, 2022, pp. xx-yy.
DOI Link
2212
BibRef
Li, T.[Tengjiao],
Li, M.[Maosen],
Yang, Y.H.[Yan-Hua],
Deng, C.[Cheng],
Frequency domain regularization for iterative adversarial attacks,
PR(134), 2023, pp. 109075.
Elsevier DOI
2212
Adversarial examples, Transfer-based attack, Black-box attack,
Frequency-domain characteristics
BibRef
Dong, Y.P.[Yin-Peng],
Cheng, S.Y.[Shu-Yu],
Pang, T.Y.[Tian-Yu],
Su, H.[Hang],
Zhu, J.[Jun],
Query-Efficient Black-Box Adversarial Attacks Guided by a
Transfer-Based Prior,
PAMI(44), No. 12, December 2022, pp. 9536-9548.
IEEE DOI
2212
Estimation, Optimization, Analytical models, Numerical models,
Deep learning, Approximation algorithms, Weight measurement,
transferability
BibRef
Zhang, Y.C.[Yi-Chuang],
Zhang, Y.[Yu],
Qi, J.H.[Jia-Hao],
Bin, K.C.[Kang-Cheng],
Wen, H.[Hao],
Tong, X.Q.[Xun-Qian],
Zhong, P.[Ping],
Adversarial Patch Attack on Multi-Scale Object Detection for UAV
Remote Sensing Images,
RS(14), No. 21, 2022, pp. xx-yy.
DOI Link
2212
BibRef
Hu, C.[Cong],
Xu, H.Q.[Hao-Qi],
Wu, X.J.[Xiao-Jun],
Substitute Meta-Learning for Black-Box Adversarial Attack,
SPLetters(29), 2022, pp. 2472-2476.
IEEE DOI
2212
Training, Closed box, Task analysis, Signal processing algorithms,
Generators, Classification algorithms, Data models, substitute training
BibRef
Agarwal, A.[Akshay],
Ratha, N.[Nalini],
Vatsa, M.[Mayank],
Singh, R.[Richa],
Crafting Adversarial Perturbations via Transformed Image Component
Swapping,
IP(31), 2022, pp. 7338-7349.
IEEE DOI
2212
Perturbation methods, Databases, Hybrid fiber coaxial cables,
Training, Kernel, Image resolution, Additives, Image components, wavelet
BibRef
Shi, Y.C.[Yu-Cheng],
Han, Y.H.[Ya-Hong],
Hu, Q.H.[Qing-Hua],
Yang, Y.[Yi],
Tian, Q.[Qi],
Query-Efficient Black-Box Adversarial Attack With Customized
Iteration and Sampling,
PAMI(45), No. 2, February 2023, pp. 2226-2245.
IEEE DOI
2301
Adaptation models, Optimization, Data models, Computational modeling,
Gaussian noise, Trajectory, transfer-based attack
BibRef
Kazemi, E.[Ehsan],
Kerdreux, T.[Thomas],
Wang, L.Q.[Li-Qiang],
Minimally Distorted Structured Adversarial Attacks,
IJCV(131), No. 1, January 2023, pp. 160-176.
Springer DOI
2301
BibRef
Yuan, H.J.[Hao-Jie],
Chu, Q.[Qi],
Zhu, F.[Feng],
Zhao, R.[Rui],
Liu, B.[Bin],
Yu, N.H.[Neng-Hai],
AutoMA: Towards Automatic Model Augmentation for Transferable
Adversarial Attacks,
MultMed(25), 2023, pp. 203-213.
IEEE DOI
2301
Transforms, Computational modeling, Training, Perturbation methods, Distortion,
Data models, Image color analysis, Adversarial attack, transferability
BibRef
Wei, X.X.[Xing-Xing],
Guo, Y.[Ying],
Yu, J.[Jie],
Adversarial Sticker: A Stealthy Attack Method in the Physical World,
PAMI(45), No. 3, March 2023, pp. 2711-2725.
IEEE DOI
2302
Face recognition, Perturbation methods, Task analysis,
Image retrieval, Image recognition, Adaptation models, TV,
physical world
BibRef
Guo, Y.[Yiwen],
Li, Q.Z.[Qi-Zhang],
Zuo, W.M.[Wang-Meng],
Chen, H.[Hao],
An Intermediate-Level Attack Framework on the Basis of Linear
Regression,
PAMI(45), No. 3, March 2023, pp. 2726-2735.
IEEE DOI
2302
Linear regression, Computer science, Computational modeling,
Support vector machines, Feature extraction, Symbols, robustness
BibRef
Yang, D.[Dong],
Chen, W.[Wei],
Wei, S.J.[Song-Jie],
DTFA: Adversarial attack with discrete cosine transform noise and
target features on deep neural networks,
IET-IPR(17), No. 5, 2023, pp. 1464-1477.
DOI Link
2304
adversarial example, image classification, regional sampling, target attack
BibRef
Ying, C.Y.[Cheng-Yang],
You, Q.B.[Qiao-Ben],
Zhou, X.N.[Xin-Ning],
Su, H.[Hang],
Ding, W.[Wenbo],
Ai, J.Y.[Jian-Yong],
Consistent attack: Universal adversarial perturbation on embodied
vision navigation,
PRL(168), 2023, pp. 57-63.
Elsevier DOI
2304
Embodied agent, Vision navigation, Deep neural networks,
Universal adversarial noise
BibRef
Gao, Y.H.[Ying-Hua],
Li, Y.M.[Yi-Ming],
Zhu, L.[Linghui],
Wu, D.X.[Dong-Xian],
Jiang, Y.[Yong],
Xia, S.T.[Shu-Tao],
Not All Samples Are Born Equal:
Towards Effective Clean-Label Backdoor Attacks,
PR(139), 2023, pp. 109512.
Elsevier DOI
2304
Backdoor attack, Clean-label attack, Sample selection,
Trustworthy ML, AI Security, Deep learning
BibRef
Qin, C.[Chuan],
Gao, S.Y.[Sheng-Yan],
Zhang, X.P.[Xin-Peng],
Feng, G.R.[Guo-Rui],
CADW: CGAN-Based Attack on Deep Robust Image Watermarking,
MultMedMag(30), No. 1, January 2023, pp. 28-35.
IEEE DOI
2305
Watermarking, Copyright protection, Generators, Robustness,
Data models, Visualization, Generative adversarial networks,
Deep Learning
BibRef
Lin, G.Y.[Geng-You],
Pan, Z.S.[Zhi-Song],
Zhou, X.Y.[Xing-Yu],
Duan, Y.[Yexin],
Bai, W.[Wei],
Zhan, D.[Dazhi],
Zhu, L.[Leqian],
Zhao, G.[Gaoqiang],
Li, T.[Tao],
Boosting Adversarial Transferability with Shallow-Feature Attack on
SAR Images,
RS(15), No. 10, 2023, pp. xx-yy.
DOI Link
2306
BibRef
Sun, X.X.[Xu-Xiang],
Cheng, G.[Gong],
Li, H.[Hongda],
Pei, L.[Lei],
Han, J.W.[Jun-Wei],
On Single-Model Transferable Targeted Attacks:
A Closer Look at Decision-Level Optimization,
IP(32), 2023, pp. 2972-2984.
IEEE DOI
2306
Optimization, Adversarial machine learning, Closed box, Sun,
Measurement, Linear programming, Tuning, Adversarial attacks,
balanced logit loss
BibRef
Wei, X.X.[Xing-Xing],
Guo, Y.[Ying],
Yu, J.[Jie],
Zhang, B.[Bo],
Simultaneously Optimizing Perturbations and Positions for Black-Box
Adversarial Patch Attacks,
PAMI(45), No. 7, July 2023, pp. 9041-9054.
IEEE DOI
2306
Perturbation methods, Face recognition, Task analysis,
Optimization, Closed box, Estimation, Detectors, Adversarial patches,
traffic sign recognition
BibRef
Pan, J.H.[Jian-Hong],
Foo, L.G.[Lin Geng],
Zheng, Q.C.[Qi-Chen],
Fan, Z.P.[Zhi-Peng],
Rahmani, H.[Hossein],
Ke, Q.H.[Qiu-Hong],
Liu, J.[Jun],
GradMDM: Adversarial Attack on Dynamic Networks,
PAMI(45), No. 9, September 2023, pp. 11374-11381.
IEEE DOI
2309
BibRef
Earlier: A1, A3, A4, A5, A6, A7, Only:
GradAuto: Energy-Oriented Attack on Dynamic Neural Networks,
ECCV22(IV:637-653).
Springer DOI
2211
BibRef
Koren, T.[Tom],
Talker, L.[Lior],
Dinerstein, M.[Michael],
Vitek, R.[Ran],
Consistent Semantic Attacks on Optical Flow,
ACCV22(VII:501-517).
Springer DOI
2307
BibRef
Wang, D.[Dan],
Wang, Y.G.[Yuan-Gen],
Decision-based Black-box Attack Specific to Large-size Images,
ACCV22(II:357-372).
Springer DOI
2307
BibRef
Wu, H.[Hao],
Wang, J.[Jinwei],
Zhang, J.W.[Jia-Wei],
Luo, X.Y.[Xiang-Yang],
Ma, B.[Bin],
Improving the Transferability of Adversarial Attacks Through Both Front
and Rear Vector Method,
IWDW22(83-97).
Springer DOI
2307
BibRef
Na, D.B.[Dong-Bin],
Ji, S.[Sangwoo],
Kim, J.[Jong],
Unrestricted Black-box Adversarial Attack Using GAN with Limited
Queries,
AdvRob22(467-482).
Springer DOI
2304
BibRef
Chan, S.H.[Shih-Han],
Dong, Y.P.[Yin-Peng],
Zhu, J.[Jun],
Zhang, X.L.[Xiao-Lu],
Zhou, J.[Jun],
BadDet: Backdoor Attacks on Object Detection,
AdvRob22(396-412).
Springer DOI
2304
Backdoor trigger injected into a small portion of the training data (sketched below).
BibRef
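A minimal sketch of this kind of data poisoning for the simpler classification case (BadDet itself also edits detection boxes); the trigger shape, poisoning rate, and target class are illustrative assumptions.

import torch

def poison_batch(images, labels, target_class, rate=0.05, patch=4):
    # Stamp a white square trigger in the bottom-right corner of a
    # random `rate` fraction of an NCHW float batch in [0,1], and
    # relabel those samples to the attacker's target class.
    images, labels = images.clone(), labels.clone()
    n = max(1, int(rate * images.size(0)))
    idx = torch.randperm(images.size(0))[:n]
    images[idx, :, -patch:, -patch:] = 1.0
    labels[idx] = target_class
    return images, labels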
Waseda, F.[Futa],
Nishikawa, S.[Sosuke],
Le, T.N.[Trung-Nghia],
Nguyen, H.H.[Huy H.],
Echizen, I.[Isao],
Closer Look at the Transferability of Adversarial Examples:
How They Fool Different Models Differently,
WACV23(1360-1368)
IEEE DOI
2302
Deep learning, Analytical models, Perturbation methods,
Neural networks, Predictive models,
ethical computer vision
BibRef
Luzi, L.[Lorenzo],
Marrero, C.O.[Carlos Ortiz],
Wynar, N.[Nile],
Baraniuk, R.G.[Richard G.],
Henry, M.J.[Michael J.],
Evaluating generative networks using Gaussian mixtures of image
features,
WACV23(279-288)
IEEE DOI
2302
Image resolution, Inverse problems, Computational modeling,
Perturbation methods, Gaussian noise, Gaussian distribution,
adversarial attack and defense methods
BibRef
Aich, A.[Abhishek],
Li, S.[Shasha],
Song, C.[Chengyu],
Asif, M.S.[M. Salman],
Krishnamurthy, S.V.[Srikanth V.],
Roy-Chowdhury, A.K.[Amit K.],
Leveraging Local Patch Differences in Multi-Object Scenes for
Generative Adversarial Attacks,
WACV23(1308-1318)
IEEE DOI
2302
Perturbation methods, Computational modeling, Closed box,
Generators, Convolutional neural networks, Glass box,
adversarial attack and defense methods
BibRef
Shapira, A.[Avishag],
Zolfi, A.[Alon],
Demetrio, L.[Luca],
Biggio, B.[Battista],
Shabtai, A.[Asaf],
Phantom Sponges: Exploiting Non-Maximum Suppression to Attack Deep
Object Detectors,
WACV23(4560-4569)
IEEE DOI
2302
Perturbation methods, Pipelines, Phantoms, Detectors,
Object detection, Predictive models, Prediction algorithms,
adversarial attack and defense methods
BibRef
Tan, H.X.[Han-Xiao],
Kotthaus, H.[Helena],
Explainability-Aware One Point Attack for Point Cloud Neural Networks,
WACV23(4570-4579)
IEEE DOI
2302
Point cloud compression, Codes, Filtering, Computer network reliability,
Neural networks, Robustness, ethical computer vision
BibRef
Ramakrishnan, G.[Goutham],
Albarghouthi, A.[Aws],
Backdoors in Neural Models of Source Code,
ICPR22(2892-2899)
IEEE DOI
2212
Deep learning, Codes, Source coding, Neural networks, Training data,
Implants, Predictive models
BibRef
Chen, C.C.[Chun-Chun],
Zhu, W.J.[Wen-Jie],
Peng, B.[Bo],
Lu, H.J.[Hui-Juan],
Towards Robust Community Detection via Extreme Adversarial Attacks,
ICPR22(2231-2237)
IEEE DOI
2212
Training, Perturbation methods, Image edge detection,
Heuristic algorithms, Complex networks, Robustness
BibRef
Tavallali, P.[Pooya],
Behzadan, V.[Vahid],
Alizadeh, A.[Azar],
Ranganath, A.[Aditya],
Singhal, M.[Mukesh],
Adversarial Label-Poisoning Attacks and Defense for General
Multi-Class Models Based on Synthetic Reduced Nearest Neighbor,
ICIP22(3717-3722)
IEEE DOI
2211
Training, Resistance, Analytical models,
Machine learning algorithms, Clustering algorithms, Machine Learning
BibRef
Lin, Z.[Zhi],
Peng, A.[Anjie],
Wei, R.[Rong],
Yu, W.X.[Wen-Xin],
Zeng, H.[Hui],
An Enhanced Transferable Adversarial Attack of Scale-Invariant
Methods,
ICIP22(3788-3792)
IEEE DOI
2211
Convolution, Convolutional neural networks,
convolution neural network, adversarial examples, transferability
BibRef
Ran, Y.[Yu],
Wang, Y.G.[Yuan-Gen],
Sign-OPT+: An Improved Sign Optimization Adversarial Attack,
ICIP22(461-465)
IEEE DOI
2211
Backtracking, Costs, Codes, Training data, Data models,
Complexity theory, adversarial example, binary search
BibRef
Wang, D.[Dan],
Lin, J.[Jiayu],
Wang, Y.G.[Yuan-Gen],
Query-Efficient Adversarial Attack Based On Latin Hypercube Sampling,
ICIP22(546-550)
IEEE DOI
2211
Codes, Barium, Estimation, Benchmark testing, Hypercubes,
adversarial attacks, boundary attacks, Latin Hypercube Sampling,
query efficiency
BibRef
Kim, W.J.[Woo Jae],
Hong, S.[Seunghoon],
Yoon, S.E.[Sung-Eui],
Diverse Generative Perturbations on Attention Space for Transferable
Adversarial Attacks,
ICIP22(281-285)
IEEE DOI
2211
Codes, Perturbation methods, Stochastic processes, Generators,
Space exploration, Adversarial examples, Black-box, Diversity
BibRef
Aneja, S.[Shivangi],
Markhasin, L.[Lev],
Nießner, M.[Matthias],
TAFIM: Targeted Adversarial Attacks Against Facial Image Manipulations,
ECCV22(XIV:58-75).
Springer DOI
2211
BibRef
Long, Y.Y.[Yu-Yang],
Zhang, Q.L.[Qi-Long],
Zeng, B.[Boheng],
Gao, L.L.[Lian-Li],
Liu, X.L.[Xiang-Long],
Zhang, J.[Jian],
Song, J.K.[Jing-Kuan],
Frequency Domain Model Augmentation for Adversarial Attack,
ECCV22(IV:549-566).
Springer DOI
2211
BibRef
Phan, H.[Huy],
Shi, C.[Cong],
Xie, Y.[Yi],
Zhang, T.F.[Tian-Fang],
Li, Z.H.[Zhuo-Hang],
Zhao, T.M.[Tian-Ming],
Liu, J.[Jian],
Wang, Y.[Yan],
Chen, Y.Y.[Ying-Ying],
Yuan, B.[Bo],
RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact
DNN,
ECCV22(IV:708-724).
Springer DOI
2211
BibRef
Yuan, Z.[Zheng],
Zhang, J.[Jie],
Shan, S.G.[Shi-Guang],
Adaptive Image Transformations for Transfer-Based Adversarial Attack,
ECCV22(V:1-17).
Springer DOI
2211
BibRef
Cao, Y.L.[Yu-Long],
Xiao, C.W.[Chao-Wei],
Anandkumar, A.[Anima],
Xu, D.[Danfei],
Pavone, M.[Marco],
AdvDO: Realistic Adversarial Attacks for Trajectory Prediction,
ECCV22(V:36-52).
Springer DOI
2211
BibRef
Bai, J.W.[Jia-Wang],
Gao, K.F.[Kuo-Feng],
Gong, D.H.[Di-Hong],
Xia, S.T.[Shu-Tao],
Li, Z.F.[Zhi-Feng],
Liu, W.[Wei],
Hardly Perceptible Trojan Attack Against Neural Networks with Bit Flips,
ECCV22(V:104-121).
Springer DOI
2211
BibRef
Wang, Y.X.[Yi-Xu],
Li, J.[Jie],
Liu, H.[Hong],
Wang, Y.[Yan],
Wu, Y.J.[Yong-Jian],
Huang, F.Y.[Fei-Yue],
Ji, R.R.[Rong-Rong],
Black-Box Dissector:
Towards Erasing-Based Hard-Label Model Stealing Attack,
ECCV22(V:192-208).
Springer DOI
2211
BibRef
Liu, G.[Ganlin],
Huang, X.W.[Xiao-Wei],
Yi, X.P.[Xin-Ping],
Adversarial Label Poisoning Attack on Graph Neural Networks via Label
Propagation,
ECCV22(V:227-243).
Springer DOI
2211
BibRef
Tran, H.[Hoang],
Lu, D.[Dan],
Zhang, G.[Guannan],
Exploiting the Local Parabolic Landscapes of Adversarial Losses to
Accelerate Black-Box Adversarial Attack,
ECCV22(V:317-334).
Springer DOI
2211
BibRef
Wang, T.[Tong],
Yao, Y.[Yuan],
Xu, F.[Feng],
An, S.W.[Sheng-Wei],
Tong, H.H.[Hang-Hang],
Wang, T.[Ting],
An Invisible Black-Box Backdoor Attack Through Frequency Domain,
ECCV22(XIII:396-413).
Springer DOI
2211
BibRef
Byun, J.[Junyoung],
Shim, K.[Kyujin],
Go, H.[Hyojun],
Kim, C.[Changick],
Hidden Conditional Adversarial Attacks,
ICIP22(1306-1310)
IEEE DOI
2211
Deep learning, Neural networks, Inspection, Controllability, Safety,
Reliability, Adversarial attack, Hidden condition
BibRef
Son, M.J.[Min-Ji],
Kwon, M.J.[Myung-Joon],
Kim, H.S.[Hee-Seon],
Byun, J.[Junyoung],
Cho, S.[Seungju],
Kim, C.[Changick],
Adaptive Warping Network for Transferable Adversarial Attacks,
ICIP22(3056-3060)
IEEE DOI
2211
Deep learning, Adaptation models, Adaptive systems,
Perturbation methods, Neural networks, Search problems, Warping
BibRef
Feng, Y.[Yu],
Ma, B.[Benteng],
Zhang, J.[Jing],
Zhao, S.S.[Shan-Shan],
Xia, Y.[Yong],
Tao, D.C.[Da-Cheng],
FIBA: Frequency-Injection based Backdoor Attack in Medical Image
Analysis,
CVPR22(20844-20853)
IEEE DOI
2210
Training, Image segmentation, Codes, Frequency-domain analysis,
Computational modeling, Semantics, Predictive models, Medical,
Privacy and federated learning
BibRef
Cao, X.Y.[Xiao-Yu],
Gong, N.Z.Q.[Neil Zhen-Qiang],
MPAF: Model Poisoning Attacks to Federated Learning based on Fake
Clients,
FedVision22(3395-3403)
IEEE DOI
2210
Training, Computational modeling, Production,
Collaborative work, Pattern recognition
BibRef
Xu, Q.L.[Qiu-Ling],
Tao, G.H.[Guan-Hong],
Zhang, X.Y.[Xiang-Yu],
Bounded Adversarial Attack on Deep Content Features,
CVPR22(15182-15191)
IEEE DOI
2210
Ethics, Neurons, Gaussian distribution,
Regulation, Pattern recognition, Adversarial attack and defense,
Representation learning
BibRef
Zhao, Z.D.[Zhen-Dong],
Chen, X.J.[Xiao-Jun],
Xuan, Y.X.[Yue-Xin],
Dong, Y.[Ye],
Wang, D.[Dakui],
Liang, K.[Kaitai],
DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible
Perturbation and Latent Representation Constraints,
CVPR22(15192-15201)
IEEE DOI
2210
Training, Resistance, Representation learning, Adaptation models,
Visualization, Toxicology, Perturbation methods,
Machine learning
BibRef
Sun, X.X.[Xu-Xiang],
Cheng, G.[Gong],
Li, H.[Hongda],
Pei, L.[Lei],
Han, J.W.[Jun-Wei],
Exploring Effective Data for Surrogate Training Towards Black-box
Attack,
CVPR22(15334-15343)
IEEE DOI
2210
Training, Codes, Computational modeling, Semantics, Training data,
Diversity methods, Adversarial attack and defense, retrieval
BibRef
Zhou, L.J.[Lin-Jun],
Cui, P.[Peng],
Zhang, X.X.[Xing-Xuan],
Jiang, Y.[Yinan],
Yang, S.Q.[Shi-Qiang],
Adversarial Eigen Attack on BlackBox Models,
CVPR22(15233-15241)
IEEE DOI
2210
Jacobian matrices, Deep learning, Perturbation methods,
Computational modeling, Training data, Data models, Optimization methods
BibRef
Luo, C.[Cheng],
Lin, Q.L.[Qin-Liang],
Xie, W.C.[Wei-Cheng],
Wu, B.Z.[Bi-Zhu],
Xie, J.H.[Jin-Heng],
Shen, L.L.[Lin-Lin],
Frequency-driven Imperceptible Adversarial Attack on Semantic
Similarity,
CVPR22(15294-15303)
IEEE DOI
2210
Representation learning, Measurement, Visualization,
Perturbation methods, Semantics,
Self- semi- meta- unsupervised learning
BibRef
Suryanto, N.[Naufal],
Kim, Y.[Yongsu],
Kang, H.[Hyoeun],
Larasati, H.T.[Harashta Tatimma],
Yun, Y.Y.[Young-Yeo],
Le, T.T.H.[Thi-Thu-Huong],
Yang, H.[Hunmin],
Oh, S.Y.[Se-Yoon],
Kim, H.[Howon],
DTA: Physical Camouflage Attacks using Differentiable Transformation
Network,
CVPR22(15284-15293)
IEEE DOI
2210
Solid modeling, Object detection, Rendering (computer graphics),
Pattern recognition, Engines, Adversarial attack and defense, retrieval
BibRef
Zhong, Y.Q.[Yi-Qi],
Liu, X.[Xianming],
Zhai, D.[Deming],
Jiang, J.J.[Jun-Jun],
Ji, X.Y.[Xiang-Yang],
Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon,
CVPR22(15324-15333)
IEEE DOI
2210
Printing, Laser theory, Codes, Perturbation methods, Machine vision,
Machine learning, Adversarial attack and defense, retrieval
BibRef
Chan-Hon-Tong, A.[Adrien],
Symmetric adversarial poisoning against deep learning,
IPTA20(1-5)
IEEE DOI
2206
Support vector machines, Training, Deep learning,
Perturbation methods, Image processing, Training data, Tools, deep learning
BibRef
Li, Y.M.[Yi-Ming],
Wen, C.C.[Cong-Cong],
Juefei-Xu, F.[Felix],
Feng, C.[Chen],
Fooling LiDAR Perception via Adversarial Trajectory Perturbation,
ICCV21(7878-7887)
IEEE DOI
2203
Point cloud compression, Wireless communication,
Wireless sensor networks, Laser radar, Perturbation methods,
Vision for robotics and autonomous vehicles
BibRef
Wang, X.S.[Xiao-Sen],
He, X.R.[Xuan-Ran],
Wang, J.D.[Jing-Dong],
He, K.[Kun],
Admix: Enhancing the Transferability of Adversarial Attacks,
ICCV21(16138-16147)
IEEE DOI
2203
Deep learning, Codes, Neural networks,
Adversarial machine learning, Task analysis, Standards,
Recognition and classification
BibRef
Li, J.[Jie],
Ji, R.R.[Rong-Rong],
Chen, P.X.[Pei-Xian],
Zhang, B.C.[Bao-Chang],
Hong, X.P.[Xiao-Peng],
Zhang, R.X.[Rui-Xin],
Li, S.X.[Shao-Xin],
Li, J.L.[Ji-Lin],
Huang, F.Y.[Fei-Yue],
Wu, Y.J.[Yong-Jian],
Aha! Adaptive History-driven Attack for Decision-based Black-box
Models,
ICCV21(16148-16157)
IEEE DOI
2203
Dimensionality reduction, Adaptation models,
Perturbation methods, Computational modeling, Optimization, Faces,
BibRef
Chen, S.[Si],
Kahla, M.[Mostafa],
Jia, R.[Ruoxi],
Qi, G.J.[Guo-Jun],
Knowledge-Enriched Distributional Model Inversion Attacks,
ICCV21(16158-16167)
IEEE DOI
2203
Training, Deep learning, Privacy, Codes, Computational modeling,
Neural networks, Adversarial learning, Motion and tracking
BibRef
Zhou, M.[Mo],
Wang, L.[Le],
Niu, Z.X.[Zhen-Xing],
Zhang, Q.[Qilin],
Xu, Y.H.[Ying-Hui],
Zheng, N.N.[Nan-Ning],
Hua, G.[Gang],
Practical Relative Order Attack in Deep Ranking,
ICCV21(16393-16402)
IEEE DOI
2203
Measurement, Deep learning, Correlation, Perturbation methods,
Neural networks, Interference, Adversarial learning, Fairness,
Image and video retrieval
BibRef
Li, Y.Z.[Yue-Zun],
Li, Y.M.[Yi-Ming],
Wu, B.Y.[Bao-Yuan],
Li, L.K.[Long-Kang],
He, R.[Ran],
Lyu, S.W.[Si-Wei],
Invisible Backdoor Attack with Sample-Specific Triggers,
ICCV21(16443-16452)
IEEE DOI
2203
Training, Additive noise, Deep learning, Steganography, Image coding,
Perturbation methods, Adversarial learning, Recognition and classification
BibRef
Doan, K.[Khoa],
Lao, Y.J.[Ying-Jie],
Zhao, W.J.[Wei-Jie],
Li, P.[Ping],
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks,
ICCV21(11946-11956)
IEEE DOI
2203
Deformable models, Visualization, Toxicology, Heuristic algorithms,
Neural networks, Stochastic processes, Inspection,
Neural generative models
BibRef
Shafran, A.[Avital],
Peleg, S.[Shmuel],
Hoshen, Y.[Yedid],
Membership Inference Attacks are Easier on Difficult Problems,
ICCV21(14800-14809)
IEEE DOI
2203
Training, Image segmentation, Uncertainty, Semantics,
Neural networks, Benchmark testing, Data models, Fairness,
grouping and shape
BibRef
Zhang, C.N.[Chao-Ning],
Benz, P.[Philipp],
Karjauv, A.[Adil],
Kweon, I.S.[In So],
Data-free Universal Adversarial Perturbation and Black-box Attack,
ICCV21(7848-7857)
IEEE DOI
2203
Training, Image segmentation, Limiting, Image recognition, Codes,
Perturbation methods, Adversarial learning,
BibRef
Liang, S.Y.[Si-Yuan],
Wu, B.Y.[Bao-Yuan],
Fan, Y.B.[Yan-Bo],
Wei, X.X.[Xing-Xing],
Cao, X.C.[Xiao-Chun],
Parallel Rectangle Flip Attack: A Query-based Black-box Attack
against Object Detection,
ICCV21(7677-7687)
IEEE DOI
2203
Costs, Perturbation methods, Detectors, Object detection,
Predictive models, Search problems, Task analysis,
Detection and localization in 2D and 3D
BibRef
Naseer, M.[Muzammal],
Khan, S.[Salman],
Hayat, M.[Munawar],
Khan, F.S.[Fahad Shahbaz],
Porikli, F.M.[Fatih M.],
On Generating Transferable Targeted Perturbations,
ICCV21(7688-7697)
IEEE DOI
2203
Codes, Perturbation methods, Computational modeling, Transformers,
Linear programming, Generators, Adversarial learning,
Recognition and classification
BibRef
Chen, H.[Huili],
Fu, C.[Cheng],
Zhao, J.[Jishen],
Koushanfar, F.[Farinaz],
ProFlip: Targeted Trojan Attack with Progressive Bit Flips,
ICCV21(7698-7707)
IEEE DOI
2203
Training, Runtime, Neurons, Neural networks, Random access memory,
Predictive models, Laser modes, Adversarial learning,
Optimization and learning methods
BibRef
Rony, J.[Jérôme],
Granger, E.[Eric],
Pedersoli, M.[Marco],
Ayed, I.B.[Ismail Ben],
Augmented Lagrangian Adversarial Attacks,
ICCV21(7718-7727)
IEEE DOI
2203
Computational modeling, Computational efficiency,
Computational complexity, Adversarial learning,
Optimization and learning methods
BibRef
Yuan, Z.[Zheng],
Zhang, J.[Jie],
Jia, Y.[Yunpei],
Tan, C.[Chuanqi],
Xue, T.[Tao],
Shan, S.G.[Shi-Guang],
Meta Gradient Adversarial Attack,
ICCV21(7728-7737)
IEEE DOI
2203
Philosophical considerations,
Task analysis, Adversarial learning,
BibRef
Park, G.Y.[Geon Yeong],
Lee, S.W.[Sang Wan],
Reliably fast adversarial training via latent adversarial
perturbation,
ICCV21(7738-7747)
IEEE DOI
2203
Training, Costs, Perturbation methods, Linearity, Minimization,
Computational efficiency, Adversarial learning, Recognition and classification
BibRef
Tu, J.[James],
Wang, T.[Tsunhsuan],
Wang, J.K.[Jing-Kang],
Manivasagam, S.[Sivabalan],
Ren, M.[Mengye],
Urtasun, R.[Raquel],
Adversarial Attacks On Multi-Agent Communication,
ICCV21(7748-7757)
IEEE DOI
2203
Deep learning, Fault tolerance, Protocols, Computational modeling,
Neural networks, Fault tolerant systems, Robustness,
Vision for robotics and autonomous vehicles
BibRef
Yuan, J.[Jianhe],
He, Z.H.[Zhi-Hai],
Consistency-Sensitivity Guided Ensemble Black-Box Adversarial Attacks
in Low-Dimensional Spaces,
ICCV21(7758-7766)
IEEE DOI
2203
Deep learning, Sensitivity, Design methodology,
Computational modeling, Neural networks, Task analysis,
Recognition and classification
BibRef
Feng, W.W.[Wei-Wei],
Wu, B.Y.[Bao-Yuan],
Zhang, T.Z.[Tian-Zhu],
Zhang, Y.[Yong],
Zhang, Y.D.[Yong-Dong],
Meta-Attack: Class-agnostic and Model-agnostic Physical Adversarial
Attack,
ICCV21(7767-7776)
IEEE DOI
2203
Training, Deep learning, Image color analysis, Shape,
Computational modeling, Neural networks, Adversarial learning,
BibRef
Kim, J.Y.[Jae-Yeon],
Hua, B.S.[Binh-Son],
Nguyen, D.T.[Duc Thanh],
Yeung, S.K.[Sai-Kit],
Minimal Adversarial Examples for Deep Learning on 3D Point Clouds,
ICCV21(7777-7786)
IEEE DOI
2203
Point cloud compression, Deep learning, Image color analysis,
Perturbation methods, Semantics, Adversarial learning,
Recognition and classification
BibRef
Stutz, D.[David],
Hein, M.[Matthias],
Schiele, B.[Bernt],
Relating Adversarially Robust Generalization to Flat Minima,
ICCV21(7787-7797)
IEEE DOI
2203
Training, Correlation, Perturbation methods,
Computational modeling, Robustness, Loss measurement,
Optimization and learning methods
BibRef
Li, C.[Chao],
Gao, S.Q.[Shang-Qian],
Deng, C.[Cheng],
Liu, W.[Wei],
Huang, H.[Heng],
Adversarial Attack on Deep Cross-Modal Hamming Retrieval,
ICCV21(2198-2207)
IEEE DOI
2203
Learning systems, Knowledge engineering, Deep learning,
Correlation, Perturbation methods, Neural networks,
Vision + other modalities
BibRef
Duan, R.[Ranjie],
Chen, Y.[Yuefeng],
Niu, D.[Dantong],
Yang, Y.[Yun],
Qin, A.K.,
He, Y.[Yuan],
AdvDrop: Adversarial Attack to DNNs by Dropping Information,
ICCV21(7486-7495)
IEEE DOI
2203
Deep learning, Visualization, Neural networks, Robustness,
Visual perception, Adversarial learning,
BibRef
Xiang, Z.[Zhen],
Miller, D.J.[David J.],
Chen, S.[Siheng],
Li, X.[Xi],
Kesidis, G.[George],
A Backdoor Attack against 3D Point Cloud Classifiers,
ICCV21(7577-7587)
IEEE DOI
2203
Geometry, Point cloud compression, Training, Barium, Toxicology,
Adversarial learning, Recognition and classification,
Vision for robotics and autonomous vehicles
BibRef
Hwang, J.[Jaehui],
Kim, J.H.[Jun-Hyuk],
Choi, J.H.[Jun-Ho],
Lee, J.S.[Jong-Seok],
Just One Moment: Structural Vulnerability of Deep Action Recognition
against One Frame Attack,
ICCV21(7648-7656)
IEEE DOI
2203
Analytical models, Perturbation methods, Task analysis,
Adversarial learning, Action and behavior recognition
BibRef
Moayeri, M.[Mazda],
Feizi, S.[Soheil],
Sample Efficient Detection and Classification of Adversarial Attacks
via Self-Supervised Embeddings,
ICCV21(7657-7666)
IEEE DOI
2203
Training, Adaptation models, Toxicology, Costs, Perturbation methods,
Computational modeling, Adversarial learning,
Transfer/Low-shot/Semi/Unsupervised Learning
BibRef
Wang, Z.B.[Zhi-Bo],
Guo, H.[Hengchang],
Zhang, Z.F.[Zhi-Fei],
Liu, W.X.[Wen-Xin],
Qin, Z.[Zhan],
Ren, K.[Kui],
Feature Importance-aware Transferable Adversarial Attacks,
ICCV21(7619-7628)
IEEE DOI
2203
Degradation, Limiting, Correlation, Computational modeling,
Aggregates, Transforms, Adversarial learning, Explainable AI, Recognition and classification
BibRef
Wang, X.[Xin],
Lin, S.Y.[Shu-Yun],
Zhang, H.[Hao],
Zhu, Y.F.[Yu-Fei],
Zhang, Q.S.[Quan-Shi],
Interpreting Attributions and Interactions of Adversarial Attacks,
ICCV21(1075-1084)
IEEE DOI
2203
Visualization, Costs, Perturbation methods, Estimation,
Task analysis, Faces, Explainable AI, Adversarial learning
BibRef
Kumar, C.[Chetan],
Kumar, D.[Deepak],
Shao, M.[Ming],
Generative Adversarial Attack on Ensemble Clustering,
WACV22(3839-3848)
IEEE DOI
2202
Clustering methods, Supervised learning,
Clustering algorithms, Benchmark testing, Probabilistic logic,
Semi- and Un- supervised Learning
BibRef
Du, A.[Andrew],
Chen, B.[Bo],
Chin, T.J.[Tat-Jun],
Law, Y.W.[Yee Wei],
Sasdelli, M.[Michele],
Rajasegaran, R.[Ramesh],
Campbell, D.[Dillon],
Physical Adversarial Attacks on an Aerial Imagery Object Detector,
WACV22(3798-3808)
IEEE DOI
2202
Measurement, Deep learning, Satellites, Neural networks, Lighting,
Detectors, Observers, Deep Learning -> Adversarial Learning,
Adversarial Attack and Defense Methods
BibRef
Zhao, B.Y.[Bing-Yin],
Lao, Y.J.[Ying-Jie],
Towards Class-Oriented Poisoning Attacks Against Neural Networks,
WACV22(2244-2253)
IEEE DOI
2202
Training, Measurement, Computational modeling,
Neural networks, Machine learning, Predictive models,
Adversarial Attack and Defense Methods
BibRef
Chen, Z.H.[Zhen-Hua],
Wang, C.H.[Chu-Hua],
Crandall, D.[David],
Semantically Stealthy Adversarial Attacks against Segmentation Models,
WACV22(2846-2855)
IEEE DOI
2202
Image segmentation, Perturbation methods,
Computational modeling, Feature extraction, Context modeling,
Grouping and Shape
BibRef
Yin, M.J.[Ming-Jun],
Li, S.[Shasha],
Song, C.Y.[Cheng-Yu],
Asif, M.S.[M. Salman],
Roy-Chowdhury, A.K.[Amit K.],
Krishnamurthy, S.V.[Srikanth V.],
ADC: Adversarial attacks against object Detection that evade Context
consistency checks,
WACV22(2836-2845)
IEEE DOI
2202
Deep learning, Adaptation models,
Computational modeling, Neural networks, Buildings, Detectors,
Adversarial Attack and Defense Methods Object
Detection/Recognition/Categorization
BibRef
Lu, Y.T.[Yan-Tao],
Du, X.Y.[Xue-Ying],
Sun, B.K.[Bing-Kun],
Ren, H.N.[Hai-Ning],
Velipasalar, S.[Senem],
Fabricate-Vanish: An Effective and Transferable Black-Box Adversarial
Attack Incorporating Feature Distortion,
ICIP21(809-813)
IEEE DOI
2201
Deep learning, Adaptation models, Image processing,
Neural networks, Noise reduction, Distortion, Adversarial Examples
BibRef
Ren, Y.K.[Yan-Kun],
Li, L.F.[Long-Fei],
Zhou, J.[Jun],
Simtrojan: Stealthy Backdoor Attack,
ICIP21(819-823)
IEEE DOI
2201
Deep learning, Training, Target recognition, Image processing,
Buildings, Extraterrestrial measurements, deep learning
BibRef
Li, X.R.[Xiao-Rui],
Cui, W.Y.[Wei-Yu],
Huang, J.W.[Jia-Wei],
Wang, W.Y.[Wen-Yi],
Chen, J.W.[Jian-Wen],
Regularized Intermediate Layers Attack:
Adversarial Examples With High Transferability,
ICIP21(1904-1908)
IEEE DOI
2201
Image recognition, Filtering, Perturbation methods,
Optimization methods, Convolutional neural networks, Transferability
BibRef
Bai, T.[Tao],
Zhao, J.[Jun],
Zhu, J.[Jinlin],
Han, S.D.[Shou-Dong],
Chen, J.F.[Jie-Feng],
Li, B.[Bo],
Kot, A.[Alex],
AI-GAN: Attack-Inspired Generation of Adversarial Examples,
ICIP21(2543-2547)
IEEE DOI
2201
Training, Image quality, Deep learning, Perturbation methods,
Image processing, Generative adversarial networks, deep learning
BibRef
Kim, B.C.[Byeong Cheon],
Yu, Y.J.[Young-Joon],
Ro, Y.M.[Yong Man],
Robust Decision-Based Black-Box Adversarial Attack via Coarse-To-Fine
Random Search,
ICIP21(3048-3052)
IEEE DOI
2201
Deep learning, Image processing, Estimation, Robustness,
Optimization, Adversarial attack, black-box attack, decision-based,
random search
BibRef
Abdelfattah, M.[Mazen],
Yuan, K.[Kaiwen],
Wang, Z.J.[Z. Jane],
Ward, R.[Rabab],
Towards Universal Physical Attacks on Cascaded Camera-Lidar 3d Object
Detection Models,
ICIP21(3592-3596)
IEEE DOI
2201
Geometry, Deep learning, Solid modeling, Laser radar,
Image processing, Object detection, Adversarial attacks, deep learning
BibRef
Gurulingan, N.K.[Naresh Kumar],
Arani, E.[Elahe],
Zonooz, B.[Bahram],
UniNet: A Unified Scene Understanding Network and Exploring
Multi-Task Relationships through the Lens of Adversarial Attacks,
DeepMTL21(2239-2248)
IEEE DOI
2112
Shape, Semantics, Neural networks, Information sharing,
Estimation, Object detection
BibRef
Ding, Y.Z.[Yu-Zhen],
Thakur, N.[Nupur],
Li, B.X.[Bao-Xin],
AdvFoolGen: Creating Persistent Troubles for Deep Classifiers,
AROW21(142-151)
IEEE DOI
2112
Measurement, Deep learning,
Neural networks, Buildings, Gaussian distribution
BibRef
Boloor, A.[Adith],
Wu, T.[Tong],
Naughton, P.[Patrick],
Chakrabarti, A.[Ayan],
Zhang, X.[Xuan],
Vorobeychik, Y.[Yevgeniy],
Can Optical Trojans Assist Adversarial Perturbations?,
AROW21(122-131)
IEEE DOI
2112
Perturbation methods, Neural networks, Pipelines,
Optical device fabrication, Cameras, Optical imaging, Trojan horses
BibRef
Gnanasambandam, A.[Abhiram],
Sherman, A.M.[Alex M.],
Chan, S.H.[Stanley H.],
Optical Adversarial Attack,
AROW21(92-101)
IEEE DOI
2112
Integrated optics, Computational modeling, Lighting, Optical imaging
BibRef
Lennon, M.[Max],
Drenkow, N.[Nathan],
Burlina, P.[Phil],
Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose?,
AROW21(112-121)
IEEE DOI
2112
Measurement, Training, Heating systems,
Sensitivity analysis, Conferences
BibRef
Yu, Y.R.[Yun-Rui],
Gao, X.T.[Xi-Tong],
Xu, C.Z.[Cheng-Zhong],
LAFEAT: Piercing Through Adversarial Defenses with Latent Features,
CVPR21(5731-5741)
IEEE DOI
2111
Degradation, Schedules, Computational modeling,
Perturbation methods, Lattices, Robustness
BibRef
Wang, H.P.[Hui-Po],
Yu, N.[Ning],
Fritz, M.[Mario],
Hijack-GAN: Unintended-Use of Pretrained, Black-Box GANs,
CVPR21(7868-7877)
IEEE DOI
2111
Industries, Codes, Image synthesis,
Computational modeling, Process control, Aerospace electronics
BibRef
Wang, X.S.[Xiao-Sen],
He, K.[Kun],
Enhancing the Transferability of Adversarial Attacks through Variance
Tuning,
CVPR21(1924-1933)
IEEE DOI
2111
Deep learning, Codes, Perturbation methods,
Computational modeling, Pattern recognition, Iterative methods
BibRef
Pony, R.[Roi],
Naeh, I.[Itay],
Mannor, S.[Shie],
Over-the-Air Adversarial Flickering Attacks against Video Recognition
Networks,
CVPR21(515-524)
IEEE DOI
2111
Deep learning, Perturbation methods, Observers,
Pattern recognition, Image classification
BibRef
Xiao, Y.[Yanru],
Wang, C.[Cong],
You See What I Want You to See: Exploring Targeted Black-Box
Transferability Attack for Hash-based Image Retrieval Systems,
CVPR21(1934-1943)
IEEE DOI
2111
Codes, Image retrieval, Multimedia databases,
Pattern recognition, Classification algorithms, Image storage
BibRef
Rampini, A.[Arianna],
Pestarini, F.[Franco],
Cosmo, L.[Luca],
Melzi, S.[Simone],
Rodolà, E.[Emanuele],
Universal Spectral Adversarial Attacks for Deformable Shapes,
CVPR21(3215-3225)
IEEE DOI
2111
Geometry, Shape, Perturbation methods,
Predictive models, Eigenvalues and eigenfunctions, Robustness
BibRef
Li, X.D.[Xiao-Dan],
Li, J.F.[Jin-Feng],
Chen, Y.F.[Yue-Feng],
Ye, S.[Shaokai],
He, Y.[Yuan],
Wang, S.H.[Shu-Hui],
Su, H.[Hang],
Xue, H.[Hui],
QAIR: Practical Query-efficient Black-Box Attacks for Image Retrieval,
CVPR21(3329-3338)
IEEE DOI
2111
Visualization, Databases, Image retrieval, Training data,
Search engines, Loss measurement, Robustness
BibRef
Wang, W.X.[Wen-Xuan],
Qian, X.L.[Xue-Lin],
Fu, Y.W.[Yan-Wei],
Xue, X.Y.[Xiang-Yang],
DST: Dynamic Substitute Training for Data-free Black-box Attack,
CVPR22(14341-14350)
IEEE DOI
2210
Training, Adaptation models, Computational modeling,
Neural networks, Training data, Logic gates,
Adversarial attack and defense
BibRef
Wang, W.X.[Wen-Xuan],
Yin, B.J.[Bang-Jie],
Yao, T.P.[Tai-Ping],
Zhang, L.[Li],
Fu, Y.W.[Yan-Wei],
Ding, S.H.[Shou-Hong],
Li, J.L.[Ji-Lin],
Huang, F.Y.[Fei-Yue],
Xue, X.Y.[Xiang-Yang],
Delving into Data: Effectively Substitute Training for Black-box
Attack,
CVPR21(4759-4768)
IEEE DOI
2111
Training, Computational modeling, Training data,
Distributed databases, Data visualization, Data models
BibRef
Jia, S.[Shuai],
Song, Y.B.[Yi-Bing],
Ma, C.[Chao],
Yang, X.K.[Xiao-Kang],
IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack
for Visual Object Tracking,
CVPR21(6705-6714)
IEEE DOI
2111
Deep learning, Visualization, Correlation, Codes,
Perturbation methods, Robustness
BibRef
Rezaei, S.[Shahbaz],
Liu, X.[Xin],
On the Difficulty of Membership Inference Attacks,
CVPR21(7888-7896)
IEEE DOI
2111
Training, Analytical models, Codes,
Computational modeling, Pattern recognition
BibRef
Kariyappa, S.[Sanjay],
Prakash, A.[Atul],
Qureshi, M.K.[Moinuddin K],
MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient
Estimation,
CVPR21(13809-13818)
IEEE DOI
2111
Training, Cloning, Estimation, Machine learning,
Intellectual property, Predictive models, Data models
BibRef
Duan, R.J.[Ran-Jie],
Mao, X.F.[Xiao-Feng],
Qin, A.K.,
Chen, Y.F.[Yue-Feng],
Ye, S.[Shaokai],
He, Y.[Yuan],
Yang, Y.[Yun],
Adversarial Laser Beam:
Effective Physical-World Attack to DNNs in a Blink,
CVPR21(16057-16066)
IEEE DOI
2111
Deep learning, Laser theory, Robustness,
Pattern recognition, Laser beams
BibRef
Ma, C.[Chen],
Chen, L.[Li],
Yong, J.H.[Jun-Hai],
Simulating Unknown Target Models for Query-Efficient Black-box
Attacks,
CVPR21(11830-11839)
IEEE DOI
2111
Training, Deep learning, Codes,
Computational modeling, Training data, Complexity theory
BibRef
Maho, T.[Thibault],
Furon, T.[Teddy],
Le Merrer, E.[Erwan],
SurFree: a fast surrogate-free black-box attack,
CVPR21(10425-10434)
IEEE DOI
2111
Estimation, Focusing, Machine learning, Distortion,
Pattern recognition, Convergence
BibRef
Zolfi, A.[Alon],
Kravchik, M.[Moshe],
Elovici, Y.[Yuval],
Shabtai, A.[Asaf],
The Translucent Patch:
A Physical and Universal Attack on Object Detectors,
CVPR21(15227-15236)
IEEE DOI
2111
Face recognition, Optimization methods, Detectors,
Object detection, Cameras, Autonomous vehicles
BibRef
Chen, Z.K.[Zhi-Kai],
Xie, L.X.[Ling-Xi],
Pang, S.M.[Shan-Min],
He, Y.[Yong],
Tian, Q.[Qi],
Appending Adversarial Frames for Universal Video Attack,
WACV21(3198-3207)
IEEE DOI
2106
Measurement, Perturbation methods,
Semantics, Pipelines, Euclidean distance
BibRef
Tan, Y.X.M.[Yi Xiang Marcus],
Elovici, Y.[Yuval],
Binder, A.[Alexander],
Adaptive Noise Injection for Training Stochastic Student Networks
from Deterministic Teachers,
ICPR21(7587-7594)
IEEE DOI
2105
Training, Adaptation models, Adaptive systems,
Computational modeling, Stochastic processes, Machine learning,
stochastic networks
BibRef
Cancela, B.[Brais],
Bolón-Canedo, V.[Verónica],
Alonso-Betanzos, A.[Amparo],
A delayed Elastic-Net approach for performing adversarial attacks,
ICPR21(378-384)
IEEE DOI
2105
Perturbation methods, Data preprocessing, Benchmark testing,
Size measurement, Robustness, Pattern recognition, Security
BibRef
Li, X.C.[Xiu-Chuan],
Zhang, X.Y.[Xu-Yao],
Yin, F.[Fei],
Liu, C.L.[Cheng-Lin],
F-mixup: Attack CNNs From Fourier Perspective,
ICPR21(541-548)
IEEE DOI
2105
Training, Frequency-domain analysis, Perturbation methods,
Neural networks, Robustness, Pattern recognition, High frequency
BibRef
Grosse, K.[Kathrin],
Smith, M.T.[Michael T.],
Backes, M.[Michael],
Killing Four Birds with one Gaussian Process:
The Relation between different Test-Time Attacks,
ICPR21(4696-4703)
IEEE DOI
2105
Analytical models, Reverse engineering, Training data,
Gaussian processes, Data models, Classification algorithms, Pattern recognition
BibRef
Barati, R.[Ramin],
Safabakhsh, R.[Reza],
Rahmati, M.[Mohammad],
Towards Explaining Adversarial Examples Phenomenon in Artificial
Neural Networks,
ICPR21(7036-7042)
IEEE DOI
2105
Training, Artificial neural networks, Pattern recognition,
Proposals, Convergence, adversarial attack, robustness,
adversarial training
BibRef
Li, W.J.[Wen-Jie],
Tondi, B.[Benedetta],
Ni, R.R.[Rong-Rong],
Barni, M.[Mauro],
Increased-confidence Adversarial Examples for Deep Learning
Counter-forensics,
MMForWild20(411-424).
Springer DOI
2103
BibRef
Dong, X.S.[Xin-Shuai],
Liu, H.[Hong],
Ji, R.R.[Rong-Rong],
Cao, L.J.[Liu-Juan],
Ye, Q.X.[Qi-Xiang],
Liu, J.Z.[Jian-Zhuang],
Tian, Q.[Qi],
API-net: Robust Generative Classifier via a Single Discriminator,
ECCV20(XIII:379-394).
Springer DOI
2011
BibRef
Liu, A.S.[Ai-Shan],
Huang, T.R.[Tai-Ran],
Liu, X.L.[Xiang-Long],
Xu, Y.T.[Yi-Tao],
Ma, Y.Q.[Yu-Qing],
Chen, X.Y.[Xin-Yun],
Maybank, S.J.[Stephen J.],
Tao, D.C.[Da-Cheng],
Spatiotemporal Attacks for Embodied Agents,
ECCV20(XVII:122-138).
Springer DOI
2011
Code, Adversarial Attack.
WWW Link.
BibRef
Fan, Y.B.[Yan-Bo],
Wu, B.Y.[Bao-Yuan],
Li, T.H.[Tuan-Hui],
Zhang, Y.[Yong],
Li, M.Y.[Ming-Yang],
Li, Z.F.[Zhi-Feng],
Yang, Y.J.[Yu-Jiu],
Sparse Adversarial Attack via Perturbation Factorization,
ECCV20(XXII:35-50).
Springer DOI
2011
BibRef
Guo, J.F.[Jun-Feng],
Liu, C.[Cong],
Practical Poisoning Attacks on Neural Networks,
ECCV20(XXVII:142-158).
Springer DOI
2011
BibRef
Liu, Y.F.[Yun-Fei],
Ma, X.J.[Xing-Jun],
Bailey, J.[James],
Lu, F.[Feng],
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks,
ECCV20(X:182-199).
Springer DOI
2011
BibRef
Feng, X.J.[Xin-Jie],
Yao, H.X.[Hong-Xun],
Che, W.B.[Wen-Bin],
Zhang, S.P.[Sheng-Ping],
An Effective Way to Boost Black-box Adversarial Attack,
MMMod20(I:393-404).
Springer DOI
2003
BibRef
Costales, R.,
Mao, C.,
Norwitz, R.,
Kim, B.,
Yang, J.,
Live Trojan Attacks on Deep Neural Networks,
AML-CV20(3460-3469)
IEEE DOI
2008
Trojan horses, Computational modeling, Neural networks,
Machine learning
BibRef
Haque, M.,
Chauhan, A.,
Liu, C.,
Yang, W.,
ILFO: Adversarial Attack on Adaptive Neural Networks,
CVPR20(14252-14261)
IEEE DOI
2008
Computational modeling, Energy consumption, Robustness,
Neural networks, Adaptation models, Machine learning, Perturbation methods
BibRef
Zhou, M.,
Wu, J.,
Liu, Y.,
Liu, S.,
Zhu, C.,
DaST: Data-Free Substitute Training for Adversarial Attacks,
CVPR20(231-240)
IEEE DOI
2008
Data models, Training, Machine learning, Perturbation methods,
Task analysis, Estimation
BibRef
Ganeshan, A.[Aditya],
Vivek, B.S.,
Radhakrishnan, V.B.[Venkatesh Babu],
FDA: Feature Disruptive Attack,
ICCV19(8068-8078)
IEEE DOI
2004
Attack that disrupts internal feature representations rather than only the output logits (sketched below).
image classification, image representation,
learning (artificial intelligence), neural nets, optimisation,
BibRef
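A hedged PyTorch sketch of a feature-disruption objective in this spirit (not the exact FDA loss): a PGD-style loop that pushes one intermediate activation away from its clean value. The feat_layer hook target and the budget values are assumptions.

import torch

def feature_disruption(model, feat_layer, x, eps=8/255, alpha=2/255, steps=10):
    # Maximize the L2 distance between the clean and adversarial
    # activations captured at one intermediate layer via a hook.
    acts = {}
    hook = feat_layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
    with torch.no_grad():
        model(x)
        clean = acts["v"].clone()
    adv = x.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        model(adv)
        loss = (acts["v"] - clean).pow(2).mean()
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign()       # ascend: disrupt features
            adv = x + (adv - x).clamp(-eps, eps)  # stay in the L-inf ball
            adv = adv.clamp(0, 1)
    hook.remove()
    return adv.detach()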
Han, J.,
Dong, X.,
Zhang, R.,
Chen, D.,
Zhang, W.,
Yu, N.,
Luo, P.,
Wang, X.,
Once a MAN: Towards Multi-Target Attack via Learning Multi-Target
Adversarial Network Once,
ICCV19(5157-5166)
IEEE DOI
2004
convolutional neural nets, learning (artificial intelligence),
pattern classification, security of data, Decoding
BibRef
Deng, Y.,
Karam, L.J.,
Universal Adversarial Attack Via Enhanced Projected Gradient Descent,
ICIP20(1241-1245)
IEEE DOI
2011
Perturbation methods, Computational modeling, Training,
Convolutional neural networks,
projected gradient descent (PGD)
BibRef
Sun, C.,
Chen, S.,
Cai, J.,
Huang, X.,
Type I Attack For Generative Models,
ICIP20(593-597)
IEEE DOI
2011
Image reconstruction, Decoding,
Aerospace electronics, Generative adversarial networks,
generative models
BibRef
Yang, C.L.[Cheng-Lin],
Kortylewski, A.[Adam],
Xie, C.[Cihang],
Cao, Y.Z.[Yin-Zhi],
Yuille, A.L.[Alan L.],
Patchattack: A Black-box Texture-based Attack with Reinforcement
Learning,
ECCV20(XXVI:681-698).
Springer DOI
2011
BibRef
Braunegg, A.,
Chakraborty, A.[Amartya],
Krumdick, M.[Michael],
Lape, N.[Nicole],
Leary, S.[Sara],
Manville, K.[Keith],
Merkhofer, E.[Elizabeth],
Strickhart, L.[Laura],
Walmer, M.[Matthew],
Apricot: A Dataset of Physical Adversarial Attacks on Object Detection,
ECCV20(XXI:35-50).
Springer DOI
2011
BibRef
Zhang, H.[Hu],
Zhu, L.C.[Lin-Chao],
Zhu, Y.[Yi],
Yang, Y.[Yi],
Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior,
ECCV20(XX:240-256).
Springer DOI
2011
BibRef
Gao, L.L.[Lian-Li],
Zhang, Q.L.[Qi-Long],
Song, J.K.[Jing-Kuan],
Liu, X.L.[Xiang-Long],
Shen, H.T.[Heng Tao],
Patch-wise Attack for Fooling Deep Neural Network,
ECCV20(XXVIII:307-322).
Springer DOI
2011
BibRef
Andriushchenko, M.[Maksym],
Croce, F.[Francesco],
Flammarion, N.[Nicolas],
Hein, M.[Matthias],
Square Attack: A Query-efficient Black-box Adversarial Attack via
Random Search,
ECCV20(XXIII:484-501).
Springer DOI
2011
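The random-search idea is simple enough to sketch. The simplified
PyTorch version below perturbs one random square patch per query and
keeps it only when a margin loss improves; the published attack adds
stripe-wise initialization and a decaying square-size schedule, and
all names and defaults here are assumptions:

    import torch

    def square_attack(model, x, y, eps=8/255, queries=1000, p=0.05):
        # Simplified Square Attack: score-based random search that
        # perturbs one random square patch per query and keeps the
        # change only when the margin loss improves. Untargeted, L-inf.
        def margin(inp):
            with torch.no_grad():
                logits = model(inp)
            true = logits.gather(1, y[:, None]).squeeze(1)
            other = logits.scatter(1, y[:, None], float('-inf'))
            return true - other.max(dim=1).values  # < 0: misclassified
        n, c, h, w = x.shape
        x_adv = torch.clamp(x + eps * torch.randn_like(x).sign(), 0, 1)
        best = margin(x_adv)
        for _ in range(queries):
            s = max(1, int(round((p * h * w) ** 0.5)))  # square side
            r = torch.randint(0, h - s + 1, (1,)).item()
            q = torch.randint(0, w - s + 1, (1,)).item()
            cand = x_adv.clone()
            # New +/-eps value per image and channel for the patch.
            delta = eps * torch.randn(n, c, 1, 1, device=x.device).sign()
            cand[:, :, r:r + s, q:q + s] = torch.clamp(
                x[:, :, r:r + s, q:q + s] + delta, 0, 1)
            m = margin(cand)
            keep = m < best
            x_adv[keep] = cand[keep]
            best = torch.minimum(best, m)
        return x_adv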
BibRef
Bai, J.W.[Jia-Wang],
Chen, B.[Bin],
Li, Y.M.[Yi-Ming],
Wu, D.X.[Dong-Xian],
Guo, W.W.[Wei-Wei],
Xia, S.T.[Shu-Tao],
Yang, E.H.[En-Hui],
Targeted Attack for Deep Hashing Based Retrieval,
ECCV20(I:618-634).
Springer DOI
2011
BibRef
Nakka, K.K.[Krishna Kanth],
Salzmann, M.[Mathieu],
Indirect Local Attacks for Context-aware Semantic Segmentation Networks,
ECCV20(V:611-628).
Springer DOI
2011
BibRef
Wu, Z.X.[Zu-Xuan],
Lim, S.N.[Ser-Nam],
Davis, L.S.[Larry S.],
Goldstein, T.[Tom],
Making an Invisibility Cloak: Real World Adversarial Attacks on Object
Detectors,
ECCV20(IV:1-17).
Springer DOI
2011
BibRef
Li, Q.Z.[Qi-Zhang],
Guo, Y.W.[Yi-Wen],
Chen, H.[Hao],
Yet Another Intermediate-level Attack,
ECCV20(XVI:241-257).
Springer DOI
2010
BibRef
Zhao, S.,
Ma, X.,
Zheng, X.,
Bailey, J.,
Chen, J.,
Jiang, Y.,
Clean-Label Backdoor Attacks on Video Recognition Models,
CVPR20(14431-14440)
IEEE DOI
2008
Training, Data models, Toxicology, Perturbation methods,
Training data, Image resolution, Pipelines
BibRef
Kolouri, S.,
Saha, A.,
Pirsiavash, H.,
Hoffmann, H.,
Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs,
CVPR20(298-307)
IEEE DOI
2008
Training, Perturbation methods, Data models,
Computational modeling, Machine learning, Benchmark testing
BibRef
Li, J.,
Ji, R.,
Liu, H.,
Liu, J.,
Zhong, B.,
Deng, C.,
Tian, Q.,
Projection Probability-Driven Black-Box Attack,
CVPR20(359-368)
IEEE DOI
2008
Perturbation methods, Sensors, Optimization, Sparse matrices,
Compressed sensing, Google, Neural networks
BibRef
Yan, B.,
Wang, D.,
Lu, H.,
Yang, X.,
Cooling-Shrinking Attack:
Blinding the Tracker With Imperceptible Noises,
CVPR20(987-996)
IEEE DOI
2008
Target tracking, Generators, Heating systems, Perturbation methods,
Object tracking, Training
BibRef
Li, H.,
Xu, X.,
Zhang, X.,
Yang, S.,
Li, B.,
QEBA: Query-Efficient Boundary-Based Blackbox Attack,
CVPR20(1218-1227)
IEEE DOI
2008
Perturbation methods, Estimation, Predictive models,
Machine learning, Cats, Pipelines, Neural networks
BibRef
Li, M.,
Deng, C.,
Li, T.,
Yan, J.,
Gao, X.,
Huang, H.,
Towards Transferable Targeted Attack,
CVPR20(638-646)
IEEE DOI
2008
Curing, Iterative methods, Extraterrestrial measurements, Entropy,
Perturbation methods, Robustness
BibRef
Truong, L.,
Jones, C.,
Hutchinson, B.,
August, A.,
Praggastis, B.,
Jasper, R.,
Nichols, N.,
Tuor, A.,
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image
Classifiers,
AML-CV20(3422-3431)
IEEE DOI
2008
Data models, Training, Computational modeling, Machine learning,
Training data, Safety
BibRef
Gupta, S.,
Dube, P.,
Verma, A.,
Improving the affordability of robustness training for DNNs,
AML-CV20(3383-3392)
IEEE DOI
2008
Training, Mathematical model, Computational modeling, Robustness,
Neural networks, Optimization
BibRef
Zhang, Z.,
Wu, T.,
Learning Ordered Top-k Adversarial Attacks via Adversarial
Distillation,
AML-CV20(3364-3373)
IEEE DOI
2008
Perturbation methods, Robustness, Task analysis, Semantics, Training,
Visualization, Protocols
BibRef
Chen, X.,
Yan, X.,
Zheng, F.,
Jiang, Y.,
Xia, S.,
Zhao, Y.,
Ji, R.,
One-Shot Adversarial Attacks on Visual Tracking With Dual Attention,
CVPR20(10173-10182)
IEEE DOI
2008
Target tracking, Task analysis, Visualization,
Perturbation methods, Object tracking, Optimization
BibRef
Zhou, H.,
Chen, D.,
Liao, J.,
Chen, K.,
Dong, X.,
Liu, K.,
Zhang, W.,
Hua, G.,
Yu, N.,
LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack
of Point Cloud Based Deep Networks,
CVPR20(10353-10362)
IEEE DOI
2008
Feature extraction,
Perturbation methods, Decoding, Training, Neural networks, Target recognition
BibRef
Rahmati, A.,
Moosavi-Dezfooli, S.M.[Seyed-Mohsen],
Frossard, P.[Pascal],
Dai, H.,
GeoDA: A Geometric Framework for Black-Box Adversarial Attacks,
CVPR20(8443-8452)
IEEE DOI
2008
Perturbation methods, Estimation, Covariance matrices,
Gaussian distribution, Measurement, Neural networks, Robustness
BibRef
Machiraju, H.[Harshitha],
Balasubramanian, V.N.[Vineeth N],
A Little Fog for a Large Turn,
WACV20(2891-2900)
IEEE DOI
2006
Perturbation methods, Meteorology, Autonomous robots,
Task analysis, Data models, Predictive models, Robustness
BibRef
Yang, C.H.,
Liu, Y.,
Chen, P.,
Ma, X.,
Tsai, Y.J.,
When Causal Intervention Meets Adversarial Examples and Image Masking
for Deep Neural Networks,
ICIP19(3811-3815)
IEEE DOI
1910
Causal Reasoning, Adversarial Example, Adversarial Robustness,
Interpretable Deep Learning, Visual Reasoning
BibRef
Yao, H.,
Regan, M.,
Yang, Y.,
Ren, Y.,
Image Decomposition and Classification Through a Generative Model,
ICIP19(400-404)
IEEE DOI
1910
Generative model, classification, adversarial defense
BibRef
Brunner, T.,
Diehl, F.,
Le, M.T.,
Knoll, A.,
Guessing Smart:
Biased Sampling for Efficient Black-Box Adversarial Attacks,
ICCV19(4957-4965)
IEEE DOI
2004
application program interfaces, cloud computing,
feature extraction, image classification, security of data, Training
BibRef
Liu, Y.J.[Yu-Jia],
Moosavi-Dezfooli, S.M.[Seyed-Mohsen],
Frossard, P.[Pascal],
A Geometry-Inspired Decision-Based Attack,
ICCV19(4889-4897)
IEEE DOI
2004
Decision-based black-box attack guided by the geometry of the decision boundary.
geometry, image classification, image recognition, neural nets,
security of data, black-box settings, Gaussian noise
BibRef
Li, J.,
Ji, R.,
Liu, H.,
Hong, X.,
Gao, Y.,
Tian, Q.,
Universal Perturbation Attack Against Image Retrieval,
ICCV19(4898-4907)
IEEE DOI
2004
feature extraction, image classification, image representation,
image retrieval, learning (artificial intelligence), Pipelines
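For orientation, image-agnostic perturbations of this family are
usually built by accumulating signed loss gradients over many images
into a single delta. The sketch below is the generic classification
variant, not this paper's retrieval-specific objective; every name
and constant in it is an assumption:

    import torch

    def universal_perturbation(model, loader, eps=10/255,
                               alpha=1/255, epochs=5):
        # Generic universal perturbation: accumulate signed gradients
        # of the loss over many images into one image-agnostic delta,
        # clipped to an L-inf ball of radius eps.
        delta = None
        for _ in range(epochs):
            for x, y in loader:
                if delta is None:
                    delta = torch.zeros_like(x[:1])
                d = delta.clone().requires_grad_(True)
                out = model(torch.clamp(x + d, 0, 1))
                loss = torch.nn.functional.cross_entropy(out, y)
                grad, = torch.autograd.grad(loss, d)
                delta = torch.clamp(delta + alpha * grad.sign(),
                                    -eps, eps)
        return delta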
BibRef
Finlay, C.,
Pooladian, A.,
Oberman, A.,
The LogBarrier Adversarial Attack:
Making Effective Use of Decision Boundary Information,
ICCV19(4861-4869)
IEEE DOI
2004
gradient methods, image classification, minimisation, neural nets,
security of data, LogBarrier adversarial attack, Benchmark testing
BibRef
Huang, Q.,
Katsman, I.,
Gu, Z.,
He, H.,
Belongie, S.,
Lim, S.,
Enhancing Adversarial Example Transferability With an Intermediate
Level Attack,
ICCV19(4732-4741)
IEEE DOI
2004
cryptography, neural nets, optimisation, black-box transferability,
source model, target models, adversarial examples,
Artificial intelligence
BibRef
Jandial, S.,
Mangla, P.,
Varshney, S.,
Balasubramanian, V.,
AdvGAN++: Harnessing Latent Layers for Adversary Generation,
NeurArch19(2045-2048)
IEEE DOI
2004
feature extraction, neural nets, MNIST datasets, CIFAR-10 datasets,
attack rates, realistic images, latent features, input image,
AdvGAN
BibRef
Wang, C.L.[Cheng-Long],
Bunel, R.[Rudy],
Dvijotham, K.[Krishnamurthy],
Huang, P.S.[Po-Sen],
Grefenstette, E.[Edward],
Kohli, P.[Pushmeet],
Knowing When to Stop: Evaluation and Verification of Conformity to
Output-Size Specifications,
CVPR19(12252-12261).
IEEE DOI
2002
Vulnerability of these models to attacks aimed at changing the output size.
BibRef
Modas, A.[Apostolos],
Moosavi-Dezfooli, S.M.[Seyed-Mohsen],
Frossard, P.[Pascal],
SparseFool: A Few Pixels Make a Big Difference,
CVPR19(9079-9088).
IEEE DOI
2002
Sparse attack: change the decision by perturbing only a few pixels.
BibRef
Yao, Z.[Zhewei],
Gholami, A.[Amir],
Xu, P.[Peng],
Keutzer, K.[Kurt],
Mahoney, M.W.[Michael W.],
Trust Region Based Adversarial Attack on Neural Networks,
CVPR19(11342-11351).
IEEE DOI
2002
BibRef
Zeng, X.H.[Xiao-Hui],
Liu, C.X.[Chen-Xi],
Wang, Y.S.[Yu-Siang],
Qiu, W.[Weichao],
Xie, L.X.[Ling-Xi],
Tai, Y.W.[Yu-Wing],
Tang, C.K.[Chi-Keung],
Yuille, A.L.[Alan L.],
Adversarial Attacks Beyond the Image Space,
CVPR19(4297-4306).
IEEE DOI
2002
BibRef
Corneanu, C.A.[Ciprian A.],
Madadi, M.[Meysam],
Escalera, S.[Sergio],
Martinez, A.M.[Aleix M.],
What Does It Mean to Learn in Deep Networks? And, How Does One Detect
Adversarial Attacks?,
CVPR19(4752-4761).
IEEE DOI
2002
BibRef
Shi, Y.C.[Yu-Cheng],
Wang, S.[Siyu],
Han, Y.H.[Ya-Hong],
Curls & Whey: Boosting Black-Box Adversarial Attacks,
CVPR19(6512-6520).
IEEE DOI
2002
BibRef
Liu, X.Q.[Xuan-Qing],
Hsieh, C.J.[Cho-Jui],
Rob-GAN: Generator, Discriminator, and Adversarial Attacker,
CVPR19(11226-11235).
IEEE DOI
2002
BibRef
Gupta, P.[Puneet],
Rahtu, E.[Esa],
MLAttack: Fooling Semantic Segmentation Networks by Multi-layer Attacks,
GCPR19(401-413).
Springer DOI
1911
BibRef
Barni, M.,
Kallas, K.,
Tondi, B.,
A New Backdoor Attack in CNNs by Training Set Corruption Without
Label Poisoning,
ICIP19(101-105)
IEEE DOI
1910
Adversarial learning, security of deep learning,
backdoor poisoning attacks, training with poisoned data
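The threat model is easy to make concrete: overlay a fraction of the
target class's training images with a fixed low-amplitude signal while
leaving all labels intact. A minimal PyTorch sketch using a
hypothetical horizontal ramp as the backdoor signal (the paper studies
several signal shapes):

    import torch

    def poison_target_class(images, labels, target, frac=0.2,
                            strength=0.1):
        # Corrupt a fraction of the target class's training images
        # with a fixed additive signal; every label stays unchanged.
        # At test time, adding the same signal to any image should
        # steer the trained net towards `target`.
        _, _, _, w = images.shape
        ramp = torch.linspace(0, strength, w).view(1, 1, 1, w)
        idx = (labels == target).nonzero(as_tuple=True)[0]
        k = int(frac * len(idx))
        chosen = idx[torch.randperm(len(idx))[:k]]
        images = images.clone()
        images[chosen] = torch.clamp(images[chosen] + ramp, 0, 1)
        return images, labels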
BibRef
Zhao, W.[Wei],
Yang, P.P.[Peng-Peng],
Ni, R.R.[Rong-Rong],
Zhao, Y.[Yao],
Li, W.J.[Wen-Jie],
Cycle GAN-Based Attack on Recaptured Images to Fool both Human and
Machine,
IWDW18(83-92).
Springer DOI
1905
BibRef
Wang, S.,
Shi, Y.,
Han, Y.,
Universal Perturbation Generation for Black-box Attack Using
Evolutionary Algorithms,
ICPR18(1277-1282)
IEEE DOI
1812
Perturbation methods, Evolutionary computation, Sociology,
Statistics, Training, Neural networks, Robustness
BibRef
Xu, X.J.[Xiao-Jun],
Chen, X.Y.[Xin-Yun],
Liu, C.[Chang],
Rohrbach, A.[Anna],
Darrell, T.J.[Trevor J.],
Song, D.[Dawn],
Fooling Vision and Language Models Despite Localization and Attention
Mechanism,
CVPR18(4951-4961)
IEEE DOI
1812
Attacks.
Prediction algorithms, Computational modeling, Neural networks,
Knowledge discovery, Visualization, Predictive models, Natural languages
BibRef
Dong, Y.,
Liao, F.,
Pang, T.,
Su, H.,
Zhu, J.,
Hu, X.,
Li, J.,
Boosting Adversarial Attacks with Momentum,
CVPR18(9185-9193)
IEEE DOI
1812
Iterative methods, Robustness, Training, Data models,
Adaptation models, Security
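The momentum update itself is compact: g_{t+1} = mu g_t +
grad / ||grad||_1, followed by x_{t+1} = clip(x_t + alpha sign(g_{t+1})).
A minimal PyTorch sketch, with illustrative defaults:

    import torch

    def mi_fgsm(model, x, y, eps=8/255, steps=10, mu=1.0):
        # Momentum iterative FGSM: accumulate a velocity over
        # L1-normalized gradients, then step along its sign.
        alpha = eps / steps
        x_adv = x.clone().detach()
        g = torch.zeros_like(x)
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = torch.nn.functional.cross_entropy(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            g = mu * g + grad / grad.abs().sum((1, 2, 3), keepdim=True)
            x_adv = (x_adv + alpha * g.sign()).detach()
            x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps),
                                          x + eps), 0, 1)
        return x_adv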
BibRef
Eykholt, K.,
Evtimov, I.,
Fernandes, E.,
Li, B.,
Rahmati, A.,
Xiao, C.,
Prakash, A.,
Kohno, T.,
Song, D.,
Robust Physical-World Attacks on Deep Learning Visual Classification,
CVPR18(1625-1634)
IEEE DOI
1812
Perturbation methods, Roads, Cameras, Visualization, Pipelines,
Autonomous vehicles, Detectors
BibRef
Narodytska, N.,
Kasiviswanathan, S.,
Simple Black-Box Adversarial Attacks on Deep Neural Networks,
PRIV17(1310-1318)
IEEE DOI
1709
Knowledge engineering, Network architecture,
Neural networks, Robustness, Training
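In the same gradient-free spirit, a toy local-search sketch:
repeatedly saturate one random pixel and keep the change whenever the
true-class probability drops. This is a simplified illustration rather
than the paper's exact local-search procedure; all defaults are
assumptions:

    import torch

    def random_pixel_attack(model, x, y, budget=500, val=1.0):
        # Gradient-free toy attack: repeatedly saturate one random
        # pixel and keep the change whenever the true-class
        # probability drops.
        _, _, h, w = x.shape
        x_adv = x.clone()
        with torch.no_grad():
            probs = model(x_adv).softmax(1)
            best = probs.gather(1, y[:, None]).squeeze(1)
            for _ in range(budget):
                i = torch.randint(0, h, (1,)).item()
                j = torch.randint(0, w, (1,)).item()
                cand = x_adv.clone()
                cand[:, :, i, j] = val  # saturate across channels
                p = model(cand).softmax(1)
                p = p.gather(1, y[:, None]).squeeze(1)
                keep = p < best
                x_adv[keep] = cand[keep]
                best = torch.minimum(best, p)
        return x_adv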
BibRef
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
VAE, Variational Autoencoder.