Miller, D.J.,
Xiang, Z.,
Kesidis, G.,
Adversarial Learning Targeting Deep Neural Network Classification:
A Comprehensive Review of Defenses Against Attacks,
PIEEE(108), No. 3, March 2020, pp. 402-433.
IEEE DOI
2003
Training data, Neural networks, Reverse engineering,
Machine learning, Robustness, Feature extraction,
white box
BibRef
Amini, S.,
Ghaemmaghami, S.,
Towards Improving Robustness of Deep Neural Networks to Adversarial
Perturbations,
MultMed(22), No. 7, July 2020, pp. 1889-1903.
IEEE DOI
2007
Robustness, Perturbation methods, Training, Deep learning,
Computer architecture, Neural networks, Signal to noise ratio,
interpretable
BibRef
Li, X.R.[Xu-Rong],
Ji, S.L.[Shou-Ling],
Ji, J.T.[Jun-Tao],
Ren, Z.Y.[Zhen-Yu],
Wu, C.M.[Chun-Ming],
Li, B.[Bo],
Wang, T.[Ting],
Adversarial examples detection through the sensitivity in space
mappings,
IET-CV(14), No. 5, August 2020, pp. 201-213.
DOI Link
2007
BibRef
Li, H.,
Zeng, Y.,
Li, G.,
Lin, L.,
Yu, Y.,
Online Alternate Generator Against Adversarial Attacks,
IP(29), 2020, pp. 9305-9315.
IEEE DOI
2010
Generators, Training, Perturbation methods, Knowledge engineering,
Convolutional neural networks, Deep learning, image classification
BibRef
Zhang, Y.G.[Yong-Gang],
Tian, X.M.[Xin-Mei],
Li, Y.[Ya],
Wang, X.C.[Xin-Chao],
Tao, D.C.[Da-Cheng],
Principal Component Adversarial Example,
IP(29), 2020, pp. 4804-4815.
IEEE DOI
2003
Manifolds, Neural networks, Perturbation methods, Distortion,
Task analysis, Robustness, Principal component analysis,
manifold learning
BibRef
Ma, X.J.[Xing-Jun],
Niu, Y.[Yuhao],
Gu, L.[Lin],
Wang, Y.[Yisen],
Zhao, Y.T.[Yi-Tian],
Bailey, J.[James],
Lu, F.[Feng],
Understanding adversarial attacks on deep learning based medical
image analysis systems,
PR(110), 2021, pp. 107332.
Elsevier DOI
2011
Adversarial attack, Adversarial example detection,
Medical image analysis, Deep learning
BibRef
Zhou, M.[Mo],
Niu, Z.X.[Zhen-Xing],
Wang, L.[Le],
Zhang, Q.L.[Qi-Lin],
Hua, G.[Gang],
Adversarial Ranking Attack and Defense,
ECCV20(XIV:781-799).
Springer DOI
2011
BibRef
Agarwal, A.[Akshay],
Vatsa, M.[Mayank],
Singh, R.[Richa],
Ratha, N.[Nalini],
Cognitive data augmentation for adversarial defense via pixel masking,
PRL(146), 2021, pp. 244-251.
Elsevier DOI
2105
Adversarial attacks, Deep learning, Data augmentation
BibRef
Li, Z.R.[Zhuo-Rong],
Feng, C.[Chao],
Wu, M.H.[Ming-Hui],
Yu, H.C.[Hong-Chuan],
Zheng, J.W.[Jian-Wei],
Zhu, F.[Fanwei],
Adversarial robustness via attention transfer,
PRL(146), 2021, pp. 172-178.
Elsevier DOI
2105
Adversarial defense, Robustness, Representation learning,
Visual attention, Transfer learning
BibRef
Zhang, S.D.[Shu-Dong],
Gao, H.[Haichang],
Rao, Q.X.[Qing-Xun],
Defense Against Adversarial Attacks by Reconstructing Images,
IP(30), 2021, pp. 6117-6129.
IEEE DOI
2107
Perturbation methods, Image reconstruction, Training,
Iterative methods, Computational modeling, Predictive models,
perceptual loss
BibRef
Li, N.N.[Nan-Nan],
Chen, Z.Z.[Zhen-Zhong],
Toward Visual Distortion in Black-Box Attacks,
IP(30), 2021, pp. 6156-6167.
IEEE DOI
2107
Distortion, Visualization, Measurement, Loss measurement,
Optimization, Convergence, Training, Black-box attack, classification
BibRef
Zhao, Z.Q.[Zhi-Qun],
Wang, H.Y.[Heng-You],
Sun, H.[Hao],
Yuan, J.H.[Jian-He],
Huang, Z.C.[Zhong-Chao],
He, Z.H.[Zhi-Hai],
Removing Adversarial Noise via Low-Rank Completion of
High-Sensitivity Points,
IP(30), 2021, pp. 6485-6497.
IEEE DOI
2107
Perturbation methods, Training, Neural networks, Image denoising,
Optimization, TV, Sensitivity, Adversarial examples, TV norm
BibRef
Hu, W.Z.[Wen-Zheng],
Li, M.Y.[Ming-Yang],
Wang, Z.[Zheng],
Wang, J.Q.[Jian-Qiang],
Zhang, C.S.[Chang-Shui],
DiFNet: Densely High-Frequency Convolutional Neural Networks,
SPLetters(28), 2021, pp. 1340-1344.
IEEE DOI
2107
Image edge detection, Convolution, Perturbation methods, Training,
Neural networks, Computer architecture, Robustness, Robust,
deep convolution neural network
BibRef
Mustafa, A.[Aamir],
Khan, S.H.[Salman H.],
Hayat, M.[Munawar],
Goecke, R.[Roland],
Shen, J.B.[Jian-Bing],
Shao, L.[Ling],
Deeply Supervised Discriminative Learning for Adversarial Defense,
PAMI(43), No. 9, September 2021, pp. 3154-3166.
IEEE DOI
2108
Robustness, Perturbation methods, Training, Linear programming,
Optimization, Marine vehicles, Prototypes, Adversarial defense,
deep supervision
BibRef
Karim, F.[Fazle],
Majumdar, S.[Somshubra],
Darabi, H.S.[Hou-Shang],
Adversarial Attacks on Time Series,
PAMI(43), No. 10, October 2021, pp. 3309-3320.
IEEE DOI
2109
Time series analysis, Computational modeling, Data models,
Neural networks, Machine learning, Training,
deep learning
BibRef
Khodabakhsh, A.[Ali],
Akhtar, Z.[Zahid],
Unknown presentation attack detection against rational attackers,
IET-Bio(10), No. 5, 2021, pp. 460-479.
DOI Link
2109
BibRef
Zhang, X.W.[Xing-Wei],
Zheng, X.L.[Xiao-Long],
Mao, W.J.[Wen-Ji],
Adversarial Perturbation Defense on Deep Neural Networks,
Surveys(54), No. 8, October 2021, pp. xx-yy.
DOI Link
2110
Survey, Adversarial Defense, security, deep neural networks, origin, Adversarial perturbation defense
BibRef
Chen, X.[Xuan],
Ma, Y.N.[Yue-Na],
Lu, S.W.[Shi-Wei],
Yao, Y.[Yu],
Boundary augment: A data augment method to defend poison attack,
IET-IPR(15), No. 13, 2021, pp. 3292-3303.
DOI Link
2110
BibRef
Xu, Y.H.[Yong-Hao],
Du, B.[Bo],
Zhang, L.P.[Liang-Pei],
Self-Attention Context Network: Addressing the Threat of Adversarial
Attacks for Hyperspectral Image Classification,
IP(30), 2021, pp. 8671-8685.
IEEE DOI
2110
Deep learning, Training, Hyperspectral imaging, Feature extraction,
Task analysis, Perturbation methods, Predictive models
BibRef
Yu, H.[Hang],
Liu, A.S.[Ai-Shan],
Li, G.C.[Geng-Chao],
Yang, J.C.[Ji-Chen],
Zhang, C.Z.[Chong-Zhi],
Progressive Diversified Augmentation for General Robustness of DNNs:
A Unified Approach,
IP(30), 2021, pp. 8955-8967.
IEEE DOI
2111
Robustness, Training, Handheld computers, Perturbation methods,
Complexity theory, Streaming media, Standards
BibRef
Prakash, C.D.[Charan D.],
Karam, L.J.[Lina J.],
It GAN Do Better: GAN-Based Detection of Objects on Images With
Varying Quality,
IP(30), 2021, pp. 9220-9230.
IEEE DOI
2112
Object detection, Training, Image quality, Computational modeling,
Task analysis, Generators, Distortion, GAN,
RetinaNet
BibRef
Niu, S.[Sijie],
Qu, X.F.[Xiao-Feng],
Chen, J.[Junting],
Gao, X.[Xizhan],
Wang, T.[Tingwei],
Dong, J.W.[Ji-Wen],
MFNet-LE: Multilevel fusion network with Laplacian embedding for face
presentation attacks detection,
IET-IPR(15), No. 14, 2021, pp. 3608-3622.
DOI Link
2112
BibRef
Dai, T.[Tao],
Feng, Y.[Yan],
Chen, B.[Bin],
Lu, J.[Jian],
Xia, S.T.[Shu-Tao],
Deep image prior based defense against adversarial examples,
PR(122), 2022, pp. 108249.
Elsevier DOI
2112
Deep neural network, Adversarial example, Image prior, Defense
BibRef
Nguyen, H.H.[Huy H.],
Kuribayashi, M.[Minoru],
Yamagishi, J.[Junichi],
Echizen, I.[Isao],
Effects of Image Processing Operations on Adversarial Noise and Their
Use in Detecting and Correcting Adversarial Images,
IEICE(E105-D), No. 1, January 2022, pp. 65-77.
WWW Link.
2201
BibRef
Lo, S.Y.[Shao-Yuan],
Patel, V.M.[Vishal M.],
Defending Against Multiple and Unforeseen Adversarial Videos,
IP(31), 2022, pp. 962-973.
IEEE DOI
2201
Videos, Training, Robustness, Perturbation methods, Resists,
Image reconstruction, Image recognition, Adversarial video,
multi-perturbation robustness
BibRef
Gao, S.[Song],
Yu, S.[Shui],
Wu, L.[Liwen],
Yao, S.[Shaowen],
Zhou, X.W.[Xiao-Wei],
Detecting adversarial examples by additional evidence from noise
domain,
IET-IPR(16), No. 2, 2022, pp. 378-392.
DOI Link
2201
BibRef
Wang, J.W.[Jin-Wei],
Zhao, J.J.[Jun-Jie],
Yin, Q.L.[Qi-Lin],
Luo, X.Y.[Xiang-Yang],
Zheng, Y.[Yuhui],
Shi, Y.Q.[Yun-Qing],
Jha, S.I.K.[Sun-Il Kr.],
SmsNet: A New Deep Convolutional Neural Network Model for Adversarial
Example Detection,
MultMed(24), 2022, pp. 230-244.
IEEE DOI
2202
Feature extraction, Training, Manuals, Perturbation methods,
Information science, Principal component analysis, SmsConnection
BibRef
You, D.[Dan],
Wang, S.[Shouguang],
Seatzu, C.[Carla],
A Liveness-Enforcing Supervisor Tolerant to Sensor-Reading
Modification Attacks,
SMCS(52), No. 4, April 2022, pp. 2398-2411.
IEEE DOI
2203
Robot sensing systems, Control systems, Actuators, Petri nets,
Directed graphs, Supervisory control, Security, Attacks,
Petri nets (PNs)
BibRef
Mygdalis, V.[Vasileios],
Pitas, I.[Ioannis],
Hyperspherical class prototypes for adversarial robustness,
PR(125), 2022, pp. 108527.
Elsevier DOI
2203
Adversarial defense, Adversarial robustness,
Hypersphere prototype loss, HCP loss
BibRef
Liang, Q.[Qi],
Li, Q.[Qiang],
Nie, W.Z.[Wei-Zhi],
LD-GAN: Learning perturbations for adversarial defense based on GAN
structure,
SP:IC(103), 2022, pp. 116659.
Elsevier DOI
2203
Adversarial attacks, Adversarial defense,
Adversarial robustness, Image classification
BibRef
Shao, R.[Rui],
Perera, P.[Pramuditha],
Yuen, P.C.[Pong C.],
Patel, V.M.[Vishal M.],
Open-Set Adversarial Defense with Clean-Adversarial Mutual Learning,
IJCV(130), No. 1, January 2022, pp. 1070-1087.
Springer DOI
2204
BibRef
Earlier:
Open-set Adversarial Defense,
ECCV20(XVII:682-698).
Springer DOI
2011
BibRef
Wen, Y.X.[Yu-Xin],
Lin, J.H.[Jie-Hong],
Chen, K.[Ke],
Chen, C.L.P.[C. L. Philip],
Jia, K.[Kui],
Geometry-Aware Generation of Adversarial Point Clouds,
PAMI(44), No. 6, June 2022, pp. 2984-2999.
IEEE DOI
2205
Extend defense to 3D.
Shape, Image reconstruction, Surface reconstruction,
object surface geometry
BibRef
Poursaeed, O.[Omid],
Jiang, T.X.[Tian-Xing],
Yang, H.[Harry],
Belongie, S.[Serge],
Lim, S.N.[Ser-Nam],
Robustness and Generalization via Generative Adversarial Training,
ICCV21(15691-15700)
IEEE DOI
2203
Training, Deep learning, Image segmentation,
Computational modeling, Neural networks, Object detection,
Neural generative models
BibRef
Yu, C.[Cheng],
Chen, J.S.[Jian-Sheng],
Xue, Y.[Youze],
Liu, Y.Y.[Yu-Yang],
Wan, W.T.[Wei-Tao],
Bao, J.Y.[Jia-Yu],
Ma, H.M.[Hui-Min],
Defending against Universal Adversarial Patches by Clipping Feature
Norms,
ICCV21(16414-16422)
IEEE DOI
2203
Training, Visualization, Computational modeling,
Computer architecture, Robustness, Convolutional neural networks,
Recognition and classification
BibRef
Zhu, L.[Liuwan],
Ning, R.[Rui],
Xin, C.S.[Chun-Sheng],
Wang, C.G.[Chong-Gang],
Wu, H.Y.[Hong-Yi],
CLEAR: Clean-up Sample-Targeted Backdoor in Neural Networks,
ICCV21(16433-16442)
IEEE DOI
2203
Deep learning, Computational modeling, Neural networks,
Benchmark testing, Feature extraction, Safety
BibRef
Zeng, Y.[Yi],
Park, W.[Won],
Mao, Z.M.[Z. Morley],
Jia, R.[Ruoxi],
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective,
ICCV21(16453-16461)
IEEE DOI
2203
Deep learning, Frequency-domain analysis, Detectors, Data models,
Security, Adversarial learning,
Image and video manipulation detection and integrity methods.
BibRef
Dong, Y.P.[Yin-Peng],
Yang, X.[Xiao],
Deng, Z.J.[Zhi-Jie],
Pang, T.[Tianyu],
Xiao, Z.[Zihao],
Su, H.[Hang],
Zhu, J.[Jun],
Black-box Detection of Backdoor Attacks with Limited Information and
Data,
ICCV21(16462-16471)
IEEE DOI
2203
Training, Deep learning, Neural networks, Training data,
Predictive models, Prediction algorithms, Adversarial learning
BibRef
Wang, X.P.[Xue-Ping],
Li, S.[Shasha],
Liu, M.[Min],
Wang, Y.[Yaonan],
Roy-Chowdhury, A.K.[Amit K.],
Multi-Expert Adversarial Attack Detection in Person Re-identification
Using Context Inconsistency,
ICCV21(15077-15087)
IEEE DOI
2203
Deep learning, Perturbation methods, Neural networks, Detectors,
Feature extraction, Context modeling,
Image and video retrieval
BibRef
Huang, J.X.[Jia-Xing],
Guan, D.[Dayan],
Xiao, A.[Aoran],
Lu, S.J.[Shi-Jian],
RDA: Robust Domain Adaptation via Fourier Adversarial Attacking,
ICCV21(8968-8979)
IEEE DOI
2203
Training, Representation learning, Perturbation methods, Semantics,
Supervised learning, FAA, grouping and shape
BibRef
Zhou, D.W.[Da-Wei],
Wang, N.N.[Nan-Nan],
Peng, C.L.[Chun-Lei],
Gao, X.[Xinbo],
Wang, X.Y.[Xiao-Yu],
Yu, J.[Jun],
Liu, T.L.[Tong-Liang],
Removing Adversarial Noise in Class Activation Feature Space,
ICCV21(7858-7867)
IEEE DOI
2203
Training, Deep learning, Adaptation models, Perturbation methods,
Computational modeling, Noise reduction, Adversarial learning,
Transfer/Low-shot/Semi/Unsupervised Learning
BibRef
Benz, P.[Philipp],
Zhang, C.[Chaoning],
Kweon, I.S.[In So],
Batch Normalization Increases Adversarial Vulnerability and Decreases
Adversarial Transferability: A Non-Robust Feature Perspective,
ICCV21(7798-7807)
IEEE DOI
2203
Radio frequency, Training, Integrated circuits, Deep learning, Costs,
Neural networks, Adversarial learning, Explainable AI
BibRef
Yin, M.J.[Ming-Jun],
Li, S.[Shasha],
Cai, Z.[Zikui],
Song, C.Y.[Cheng-Yu],
Asif, M.S.[M. Salman],
Roy-Chowdhury, A.K.[Amit K.],
Krishnamurthy, S.V.[Srikanth V.],
Exploiting Multi-Object Relationships for Detecting Adversarial
Attacks in Complex Scenes,
ICCV21(7838-7847)
IEEE DOI
2203
Deep learning, Machine vision, Computational modeling,
Neural networks, Detectors, Context modeling, Adversarial learning,
Scene analysis and understanding
BibRef
Abusnaina, A.[Ahmed],
Wu, Y.H.[Yu-Hang],
Arora, S.[Sunpreet],
Wang, Y.Z.[Yi-Zhen],
Wang, F.[Fei],
Yang, H.[Hao],
Mohaisen, D.[David],
Adversarial Example Detection Using Latent Neighborhood Graph,
ICCV21(7667-7676)
IEEE DOI
2203
Training, Manifolds, Deep learning, Network topology,
Perturbation methods, Neural networks, Adversarial learning,
Recognition and classification
BibRef
Mao, C.Z.[Cheng-Zhi],
Chiquier, M.[Mia],
Wang, H.[Hao],
Yang, J.F.[Jun-Feng],
Vondrick, C.[Carl],
Adversarial Attacks are Reversible with Natural Supervision,
ICCV21(641-651)
IEEE DOI
2203
Training, Benchmark testing, Robustness, Inference algorithms,
Image restoration, Recognition and classification, Adversarial learning
BibRef
Zhao, X.J.[Xue-Jun],
Zhang, W.[Wencan],
Xiao, X.K.[Xiao-Kui],
Lim, B.[Brian],
Exploiting Explanations for Model Inversion Attacks,
ICCV21(662-672)
IEEE DOI
2203
Privacy, Semantics, Data visualization, Medical services,
Predictive models, Data models, Artificial intelligence,
Recognition and classification
BibRef
Wang, Q.[Qian],
Kurz, D.[Daniel],
Reconstructing Training Data from Diverse ML Models by Ensemble
Inversion,
WACV22(3870-3878)
IEEE DOI
2202
Training, Analytical models, Filtering, Training data,
Machine learning, Predictive models, Security/Surveillance
BibRef
Tursynbek, N.[Nurislam],
Petiushko, A.[Aleksandr],
Oseledets, I.[Ivan],
Geometry-Inspired Top-k Adversarial Perturbations,
WACV22(4059-4068)
IEEE DOI
2202
Perturbation methods, Prediction algorithms,
Multitasking, Classification algorithms, Task analysis,
Adversarial Attack and Defense Methods
BibRef
Nayak, G.K.[Gaurav Kumar],
Rawal, R.[Ruchit],
Chakraborty, A.[Anirban],
DAD: Data-free Adversarial Defense at Test Time,
WACV22(3788-3797)
IEEE DOI
2202
Training, Adaptation models, Biological system modeling,
Frequency-domain analysis, Training data, Computer architecture,
Adversarial Attack and Defense Methods
BibRef
Byun, J.[Junyoung],
Go, H.[Hyojun],
Kim, C.[Changick],
On the Effectiveness of Small Input Noise for Defending Against
Query-based Black-Box Attacks,
WACV22(3819-3828)
IEEE DOI
2202
Deep learning, Codes, Additives,
Computational modeling, Neural networks, Estimation,
Adversarial Attack and Defense Methods, Deep Learning
BibRef
Scheliga, D.[Daniel],
Mäder, P.[Patrick],
Seeland, M.[Marco],
PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage,
WACV22(3605-3614)
IEEE DOI
2202
Training, Privacy, Data privacy, Perturbation methods,
Computational modeling, Training data, Stochastic processes,
Deep Learning Gradient Inversion Attacks
BibRef
Wang, S.[Shaojie],
Wu, T.[Tong],
Chakrabarti, A.[Ayan],
Vorobeychik, Y.[Yevgeniy],
Adversarial Robustness of Deep Sensor Fusion Models,
WACV22(1371-1380)
IEEE DOI
2202
Training, Systematics, Laser radar, Perturbation methods,
Neural networks, Object detection, Sensor fusion,
Adversarial Attack and Defense Methods
BibRef
Nesti, F.[Federico],
Rossolini, G.[Giulio],
Nair, S.[Saasha],
Biondi, A.[Alessandro],
Buttazzo, G.[Giorgio],
Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks,
WACV22(2826-2835)
IEEE DOI
2202
Computational modeling, Perturbation methods, Semantics, Pipelines,
Grouping and Shape
BibRef
Drenkow, N.[Nathan],
Fendley, N.[Neil],
Burlina, P.[Philippe],
Attack Agnostic Detection of Adversarial Examples via Random Subspace
Analysis,
WACV22(2815-2825)
IEEE DOI
2202
Training, Performance evaluation,
Perturbation methods, Training data, Detectors, Feature extraction,
Security/Surveillance
BibRef
Cheng, H.[Hao],
Xu, K.D.[Kai-Di],
Li, Z.G.[Zhen-Gang],
Zhao, P.[Pu],
Wang, C.[Chenan],
Lin, X.[Xue],
Kailkhura, B.[Bhavya],
Goldhahn, R.[Ryan],
More or Less (MoL): Defending against Multiple Perturbation Attacks
on Deep Neural Networks through Model Ensemble and Compression,
Hazards22(645-655)
IEEE DOI
2202
Training, Deep learning, Perturbation methods,
Computational modeling, Conferences, Neural networks
BibRef
Lang, I.[Itai],
Kotlicki, U.[Uriel],
Avidan, S.[Shai],
Geometric Adversarial Attacks and Defenses on 3D Point Clouds,
3DV21(1196-1205)
IEEE DOI
2201
Point cloud compression, Geometry, Deep learning, Solid modeling,
Shape, Semantics, 3D Point Clouds, Geometry Processing, Defense Methods
BibRef
Hasnat, A.[Abul],
Shvai, N.[Nadiya],
Nakib, A.[Amir],
CNN Classifier's Robustness Enhancement when Preserving Privacy,
ICIP21(3887-3891)
IEEE DOI
2201
Privacy, Data privacy, Image processing, Supervised learning,
Prediction algorithms, Robustness, Vehicle Classification, CNN
BibRef
Liu, L.Q.[Lan-Qing],
Duan, Z.Y.[Zhen-Yu],
Xu, G.Z.[Guo-Zheng],
Xu, Y.[Yi],
Self-Supervised Disentangled Embedding for Robust Image
Classification,
ICIP21(1494-1498)
IEEE DOI
2201
Deep learning, Image segmentation, Correlation, Target recognition,
Tools, Robustness, Security, Disentanglement, Adversarial Examples
BibRef
Chu, T.[Tianshu],
Yang, Z.[Zuopeng],
Yang, J.[Jie],
Huang, X.L.[Xiao-Lin],
Improving the Robustness of Convolutional Neural Networks Via Sketch
Attention,
ICIP21(869-873)
IEEE DOI
2201
Training, Perturbation methods, Image processing, Pipelines,
Robustness, Convolutional neural networks, CNNs,
sketch attention
BibRef
Deng, K.[Kang],
Peng, A.[Anjie],
Dong, W.L.[Wan-Li],
Zeng, H.[Hui],
Detecting C&W Adversarial Images Based on Noise
Addition-Then-Denoising,
ICIP21(3607-3611)
IEEE DOI
2201
Deep learning, Visualization, Perturbation methods, Gaussian noise,
Image processing, Noise reduction, Deep neural network,
Detection
BibRef
Maho, T.[Thibault],
Bonnet, B.[Benoît],
Furon, T.[Teddy],
Le Merrer, E.[Erwan],
RoBIC: A Benchmark Suite for Assessing Classifiers Robustness,
ICIP21(3612-3616)
IEEE DOI
2201
Image processing, Benchmark testing, Distortion, Robustness,
Distortion measurement, Benchmark, adversarial examples,
half-distortion measure
BibRef
Wang, Y.P.[Yao-Peng],
Xie, L.[Lehui],
Liu, X.M.[Xi-Meng],
Yin, J.L.[Jia-Li],
Zheng, T.J.[Ting-Jie],
Model-Agnostic Adversarial Example Detection Through Logit
Distribution Learning,
ICIP21(3617-3621)
IEEE DOI
2201
Deep learning, Resistance, Semantics, Feature extraction,
Task analysis, adversarial detector,
adversarial defenses
BibRef
Raj, A.[Ankita],
Pal, A.[Ambar],
Arora, C.[Chetan],
Identifying Physically Realizable Triggers for Backdoored Face
Recognition Networks,
ICIP21(3023-3027)
IEEE DOI
2201
Deep learning, Image recognition, Face recognition, Force,
Adversarial attack, trojan attack, back-door attack, face recognition
BibRef
Co, K.T.[Kenneth T.],
Muñoz-González, L.[Luis],
Kanthan, L.[Leslie],
Glocker, B.[Ben],
Lupu, E.C.[Emil C.],
Universal Adversarial Robustness of Texture and Shape-Biased Models,
ICIP21(799-803)
IEEE DOI
2201
Training, Deep learning, Analytical models, Perturbation methods,
Image processing, Neural networks,
deep neural networks
BibRef
Xu, W.P.[Wei-Peng],
Huang, H.C.[Hong-Cheng],
Pan, S.Y.[Shao-You],
Using Feature Alignment Can Improve Clean Average Precision and
Adversarial Robustness In Object Detection,
ICIP21(2184-2188)
IEEE DOI
2201
Training, Object detection, Detectors, Feature extraction,
Robustness, deep learning,
adversarial training
BibRef
Agarwal, A.[Akshay],
Vatsa, M.[Mayank],
Singh, R.[Richa],
Ratha, N.[Nalini],
Intelligent and Adaptive Mixup Technique for Adversarial Robustness,
ICIP21(824-828)
IEEE DOI
2201
Training, Deep learning, Image recognition, Image analysis,
Perturbation methods, Robustness, Natural language processing,
Object Recognition
BibRef
Yu, C.[Cheng],
Xue, Y.Z.[You-Ze],
Chen, J.S.[Jian-Sheng],
Wang, Y.[Yu],
Ma, H.M.[Hui-Min],
Enhancing Adversarial Robustness for Image Classification By
Regularizing Class Level Feature Distribution,
ICIP21(494-498)
IEEE DOI
2201
Training, Deep learning, Adaptation models, Image processing,
Neural networks, Robustness, Adversarial Training
BibRef
Chai, W.H.[Wei-Heng],
Lu, Y.T.[Yan-Tao],
Velipasalar, S.[Senem],
Weighted Average Precision: Adversarial Example Detection for Visual
Perception of Autonomous Vehicles,
ICIP21(804-808)
IEEE DOI
2201
Measurement, Perturbation methods, Image processing, Pipelines,
Neural networks, Optimization methods, Object detection, Neural Networks
BibRef
Kung, B.H.[Bo-Han],
Chen, P.C.[Pin-Chun],
Liu, Y.C.[Yu-Cheng],
Chen, J.C.[Jun-Cheng],
Squeeze and Reconstruct: Improved Practical Adversarial Defense Using
Paired Image Compression and Reconstruction,
ICIP21(849-853)
IEEE DOI
2201
Training, Deep learning, Image coding, Perturbation methods,
Transform coding, Robustness, Adversarial Attack,
JPEG Compression, Artifact Correction
BibRef
Li, C.Y.[Chau Yi],
Sánchez-Matilla, R.[Ricardo],
Shamsabadi, A.S.[Ali Shahin],
Mazzon, R.[Riccardo],
Cavallaro, A.[Andrea],
On the Reversibility of Adversarial Attacks,
ICIP21(3073-3077)
IEEE DOI
2201
Deep learning, Perturbation methods, Image processing,
Benchmark testing, Adversarial perturbations,
Reversibility
BibRef
Bakiskan, C.[Can],
Cekic, M.[Metehan],
Sezer, A.D.[Ahmet Dundar],
Madhow, U.[Upamanyu],
A Neuro-Inspired Autoencoding Defense Against Adversarial Attacks,
ICIP21(3922-3926)
IEEE DOI
2201
Training, Deep learning, Image coding, Perturbation methods,
Neural networks, Decoding, Adversarial, Machine learning, Robust,
Defense
BibRef
Pérez, J.C.[Juan C.],
Alfarra, M.[Motasem],
Jeanneret, G.[Guillaume],
Rueda, L.[Laura],
Thabet, A.[Ali],
Ghanem, B.[Bernard],
Arbeláez, P.[Pablo],
Enhancing Adversarial Robustness via Test-Time Transformation
Ensembling,
AROW21(81-91)
IEEE DOI
2112
Deep learning, Perturbation methods,
Transforms, Robustness, Data models
BibRef
Zhang, C.[Cheng],
Gao, P.[Pan],
Countering Adversarial Examples:
Combining Input Transformation and Noisy Training,
AROW21(102-111)
IEEE DOI
2112
Training, Image coding, Quantization (signal),
Perturbation methods, Computational modeling, Transform coding,
Artificial neural networks
BibRef
De, K.[Kanjar],
Pedersen, M.[Marius],
Impact of Colour on Robustness of Deep Neural Networks,
AROW21(21-30)
IEEE DOI
2112
Deep learning, Image color analysis,
Perturbation methods, Tools, Distortion, Robustness
BibRef
Truong, J.B.[Jean-Baptiste],
Maini, P.[Pratyush],
Walls, R.J.[Robert J.],
Papernot, N.[Nicolas],
Data-Free Model Extraction,
CVPR21(4769-4778)
IEEE DOI
2111
Adaptation models, Computational modeling,
Intellectual property, Predictive models, Data models, Complexity theory
BibRef
Mehra, A.[Akshay],
Kailkhura, B.[Bhavya],
Chen, P.Y.[Pin-Yu],
Hamm, J.[Jihun],
How Robust are Randomized Smoothing based Defenses to Data Poisoning?,
CVPR21(13239-13248)
IEEE DOI
2111
Training, Deep learning, Smoothing methods, Toxicology,
Perturbation methods, Distortion, Robustness
BibRef
Deng, Z.J.[Zhi-Jie],
Yang, X.[Xiao],
Xu, S.Z.[Shi-Zhen],
Su, H.[Hang],
Zhu, J.[Jun],
LiBRe: A Practical Bayesian Approach to Adversarial Detection,
CVPR21(972-982)
IEEE DOI
2111
Training, Deep learning, Costs, Uncertainty, Neural networks,
Bayes methods, Pattern recognition
BibRef
Yang, K.[Karren],
Lin, W.Y.[Wan-Yi],
Barman, M.[Manash],
Condessa, F.[Filipe],
Kolter, Z.[Zico],
Defending Multimodal Fusion Models against Single-Source Adversaries,
CVPR21(3339-3348)
IEEE DOI
2111
Training, Sentiment analysis,
Perturbation methods, Neural networks, Object detection, Robustness
BibRef
Wu, T.[Tong],
Liu, Z.[Ziwei],
Huang, Q.Q.[Qing-Qiu],
Wang, Y.[Yu],
Lin, D.[Dahua],
Adversarial Robustness under Long-Tailed Distribution,
CVPR21(8655-8664)
IEEE DOI
2111
Training, Systematics, Codes, Robustness, Pattern recognition
BibRef
Ong, D.S.[Ding Sheng],
Chan, C.S.[Chee Seng],
Ng, K.W.[Kam Woh],
Fan, L.X.[Li-Xin],
Yang, Q.[Qiang],
Protecting Intellectual Property of Generative Adversarial Networks
from Ambiguity Attacks,
CVPR21(3629-3638)
IEEE DOI
2111
Deep learning, Knowledge engineering,
Image synthesis, Superresolution, Intellectual property, Watermarking
BibRef
Addepalli, S.[Sravanti],
Jain, S.[Samyak],
Sriramanan, G.[Gaurang],
Babu, R.V.[R. Venkatesh],
Boosting Adversarial Robustness using Feature Level Stochastic
Smoothing,
SAIAD21(93-102)
IEEE DOI
2109
Training, Deep learning, Smoothing methods,
Boosting, Feature extraction
BibRef
Pestana, C.[Camilo],
Liu, W.[Wei],
Glance, D.[David],
Mian, A.[Ajmal],
Defense-friendly Images in Adversarial Attacks:
Dataset and Metrics for Perturbation Difficulty,
WACV21(556-565)
IEEE DOI
2106
Measurement, Deep learning,
Machine learning algorithms, Image recognition
BibRef
Ali, A.[Arslan],
Migliorati, A.[Andrea],
Bianchi, T.[Tiziano],
Magli, E.[Enrico],
Beyond Cross-Entropy: Learning Highly Separable Feature Distributions
for Robust and Accurate Classification,
ICPR21(9711-9718)
IEEE DOI
2105
Robustness to adversarial attacks.
Training, Deep learning, Perturbation methods,
Gaussian distribution, Linear programming, Robustness
BibRef
Kyatham, V.[Vinay],
Mishra, D.[Deepak],
Prathosh, A.P.,
Variational Inference with Latent Space Quantization for Adversarial
Resilience,
ICPR21(9593-9600)
IEEE DOI
2105
Manifolds, Degradation, Quantization (signal),
Perturbation methods, Neural networks, Data models, Real-time systems
BibRef
Li, H.[Honglin],
Fan, Y.[Yifei],
Ganz, F.[Frieder],
Yezzi, A.J.[Anthony J.],
Barnaghi, P.[Payam],
Verifying the Causes of Adversarial Examples,
ICPR21(6750-6757)
IEEE DOI
2105
Geometry, Perturbation methods, Neural networks, Linearity,
Estimation, Aerospace electronics, Probabilistic logic
BibRef
Hou, Y.F.[Yu-Fan],
Zou, L.X.[Li-Xin],
Liu, W.D.[Wei-Dong],
Task-based Focal Loss for Adversarially Robust Meta-Learning,
ICPR21(2824-2829)
IEEE DOI
2105
Training, Perturbation methods, Resists, Machine learning,
Benchmark testing, Robustness
BibRef
Huang, Y.T.[Yen-Ting],
Liao, W.H.[Wen-Hung],
Huang, C.W.[Chen-Wei],
Defense Mechanism Against Adversarial Attacks Using Density-based
Representation of Images,
ICPR21(3499-3504)
IEEE DOI
2105
Deep learning, Perturbation methods, Transforms,
Hybrid power systems, Pattern recognition, Intelligent systems
BibRef
Chhabra, S.[Saheb],
Agarwal, A.[Akshay],
Singh, R.[Richa],
Vatsa, M.[Mayank],
Attack Agnostic Adversarial Defense via Visual Imperceptible Bound,
ICPR21(5302-5309)
IEEE DOI
2105
Visualization, Sensitivity, Databases, Computational modeling,
Perturbation methods, Predictive models, Prediction algorithms
BibRef
Šircelj, J.[Jaka],
Skocaj, D.[Danijel],
Accuracy-Perturbation Curves for Evaluation of Adversarial Attack and
Defence Methods,
ICPR21(6290-6297)
IEEE DOI
2105
Training, Visualization, Perturbation methods, Machine learning,
Robustness, Generators, Pattern recognition
BibRef
Watson, M.[Matthew],
Moubayed, N.A.[Noura Al],
Attack-agnostic Adversarial Detection on Medical Data Using
Explainable Machine Learning,
ICPR21(8180-8187)
IEEE DOI
2105
Training, Deep learning, Perturbation methods, MIMICs,
Medical services, Predictive models, Feature extraction,
Medical Data
BibRef
Alamri, F.[Faisal],
Kalkan, S.[Sinan],
Pugeault, N.[Nicolas],
Transformer-Encoder Detector Module: Using Context to Improve
Robustness to Adversarial Attacks on Object Detection,
ICPR21(9577-9584)
IEEE DOI
2105
Visualization, Perturbation methods, Detectors, Object detection,
Transforms, Field-flow fractionation, Feature extraction
BibRef
Bouniot, Q.[Quentin],
Audigier, R.[Romaric],
Loesch, A.[Angelique],
Optimal Transport as a Defense Against Adversarial Attacks,
ICPR21(5044-5051)
IEEE DOI
2105
Training, Deep learning, Measurement, Adaptation models,
Perturbation methods, Image representation, Market research
BibRef
Schwartz, D.[Daniel],
Alparslan, Y.[Yigit],
Kim, E.[Edward],
Regularization and Sparsity for Adversarial Robustness and Stable
Attribution,
ISVC20(I:3-14).
Springer DOI
2103
BibRef
Carrara, F.[Fabio],
Caldelli, R.[Roberto],
Falchi, F.[Fabrizio],
Amato, G.[Giuseppe],
Defending Neural ODE Image Classifiers from Adversarial Attacks with
Tolerance Randomization,
MMForWild20(425-438).
Springer DOI
2103
BibRef
Gittings, T.,
Schneider, S.,
Collomosse, J.,
Vax-a-net: Training-time Defence Against Adversarial Patch Attacks,
ACCV20(IV:235-251).
Springer DOI
2103
BibRef
Yi, C.,
Li, H.,
Wan, R.,
Kot, A.C.,
Improving Robustness of DNNs against Common Corruptions via Gaussian
Adversarial Training,
VCIP20(17-20)
IEEE DOI
2102
Robustness, Perturbation methods, Training, Neural networks,
Standards, Gaussian noise, Tensors, Deep Learning,
Data Augmentation
BibRef
Rusak, E.[Evgenia],
Schott, L.[Lukas],
Zimmermann, R.S.[Roland S.],
Bitterwolf, J.[Julian],
Bringmann, O.[Oliver],
Bethge, M.[Matthias],
Brendel, W.[Wieland],
A Simple Way to Make Neural Networks Robust Against Diverse Image
Corruptions,
ECCV20(III:53-69).
Springer DOI
2012
BibRef
Li, Y.W.[Ying-Wei],
Bai, S.[Song],
Xie, C.H.[Ci-Hang],
Liao, Z.Y.[Zhen-Yu],
Shen, X.H.[Xiao-Hui],
Yuille, A.L.[Alan L.],
Regional Homogeneity: Towards Learning Transferable Universal
Adversarial Perturbations Against Defenses,
ECCV20(XI:795-813).
Springer DOI
2011
BibRef
Bui, A.[Anh],
Le, T.[Trung],
Zhao, H.[He],
Montague, P.[Paul],
deVel, O.[Olivier],
Abraham, T.[Tamas],
Phung, D.[Dinh],
Improving Adversarial Robustness by Enforcing Local and Global
Compactness,
ECCV20(XXVII:209-223).
Springer DOI
2011
BibRef
Xu, J.,
Li, Y.,
Jiang, Y.,
Xia, S.T.,
Adversarial Defense Via Local Flatness Regularization,
ICIP20(2196-2200)
IEEE DOI
2011
Training, Standards, Perturbation methods, Robustness, Visualization,
Linearity, Taylor series, adversarial defense,
gradient-based regularization
BibRef
Maung, M.,
Pyone, A.,
Kiya, H.,
Encryption Inspired Adversarial Defense For Visual Classification,
ICIP20(1681-1685)
IEEE DOI
2011
Training, Transforms, Encryption, Perturbation methods,
Machine learning, Adversarial defense,
perceptual image encryption
BibRef
Shah, S.A.A.,
Bougre, M.,
Akhtar, N.,
Bennamoun, M.,
Zhang, L.,
Efficient Detection of Pixel-Level Adversarial Attacks,
ICIP20(718-722)
IEEE DOI
2011
Robots, Training, Perturbation methods, Machine learning, Robustness,
Task analysis, Testing, Adversarial attack, perturbation detection,
deep learning
BibRef
Jia, S.[Shuai],
Ma, C.[Chao],
Song, Y.B.[Yi-Bing],
Yang, X.K.[Xiao-Kang],
Robust Tracking Against Adversarial Attacks,
ECCV20(XIX:69-84).
Springer DOI
2011
BibRef
Wang, R.[Ren],
Zhang, G.Y.[Gao-Yuan],
Liu, S.J.[Si-Jia],
Chen, P.Y.[Pin-Yu],
Xiong, J.J.[Jin-Jun],
Wang, M.[Meng],
Practical Detection of Trojan Neural Networks:
Data-limited and Data-free Cases,
ECCV20(XXIII:222-238).
Springer DOI
2011
(Or poisoning backdoor attack.) Manipulates the learned network.
BibRef
Mao, C.Z.[Cheng-Zhi],
Cha, A.[Augustine],
Gupta, A.[Amogh],
Wang, H.[Hao],
Yang, J.F.[Jun-Feng],
Vondrick, C.[Carl],
Generative Interventions for Causal Learning,
CVPR21(3946-3955)
IEEE DOI
2111
Training, Visualization, Correlation,
Computational modeling, Control systems, Pattern recognition
BibRef
Mao, C.Z.[Cheng-Zhi],
Gupta, A.[Amogh],
Nitin, V.[Vikram],
Ray, B.[Baishakhi],
Song, S.[Shuran],
Yang, J.F.[Jun-Feng],
Vondrick, C.[Carl],
Multitask Learning Strengthens Adversarial Robustness,
ECCV20(II:158-174).
Springer DOI
2011
BibRef
Li, S.S.[Sha-Sha],
Zhu, S.T.[Shi-Tong],
Paul, S.[Sudipta],
Roy-Chowdhury, A.[Amit],
Song, C.Y.[Cheng-Yu],
Krishnamurthy, S.[Srikanth],
Swami, A.[Ananthram],
Chan, K.S.[Kevin S.],
Connecting the Dots: Detecting Adversarial Perturbations Using Context
Inconsistency,
ECCV20(XXIII:396-413).
Springer DOI
2011
BibRef
Li, Y.[Yueru],
Cheng, S.Y.[Shu-Yu],
Su, H.[Hang],
Zhu, J.[Jun],
Defense Against Adversarial Attacks via Controlling Gradient Leaking on
Embedded Manifolds,
ECCV20(XXVIII:753-769).
Springer DOI
2011
BibRef
Rounds, J.[Jeremiah],
Kingsland, A.[Addie],
Henry, M.J.[Michael J.],
Duskin, K.R.[Kayla R.],
Probing for Artifacts: Detecting Imagenet Model Evasions,
AML-CV20(3432-3441)
IEEE DOI
2008
Perturbation methods, Probes, Computational modeling, Robustness,
Image color analysis, Machine learning, Indexes
BibRef
Kariyappa, S.,
Qureshi, M.K.,
Defending Against Model Stealing Attacks With Adaptive Misinformation,
CVPR20(767-775)
IEEE DOI
2008
Data models, Adaptation models, Cloning, Predictive models,
Computational modeling, Security, Perturbation methods
BibRef
Mohapatra, J.,
Weng, T.,
Chen, P.,
Liu, S.,
Daniel, L.,
Towards Verifying Robustness of Neural Networks Against A Family of
Semantic Perturbations,
CVPR20(241-249)
IEEE DOI
2008
Semantics, Perturbation methods, Robustness, Image color analysis,
Brightness, Neural networks, Tools
BibRef
Liu, X.,
Xiao, T.,
Si, S.,
Cao, Q.,
Kumar, S.,
Hsieh, C.,
How Does Noise Help Robustness? Explanation and Exploration under the
Neural SDE Framework,
CVPR20(279-287)
IEEE DOI
2008
Neural networks, Robustness, Stochastic processes, Training,
Random variables, Gaussian noise, Mathematical model
BibRef
Wu, M.,
Kwiatkowska, M.,
Robustness Guarantees for Deep Neural Networks on Videos,
CVPR20(308-317)
IEEE DOI
2008
Robustness, Videos, Optical imaging, Adaptive optics,
Optical sensors, Measurement, Neural networks
BibRef
Chan, A.,
Tay, Y.,
Ong, Y.,
What It Thinks Is Important Is Important: Robustness Transfers
Through Input Gradients,
CVPR20(329-338)
IEEE DOI
2008
Robustness, Task analysis, Training, Computational modeling,
Perturbation methods, Impedance matching, Predictive models
BibRef
Zhang, L.,
Yu, M.,
Chen, T.,
Shi, Z.,
Bao, C.,
Ma, K.,
Auxiliary Training: Towards Accurate and Robust Models,
CVPR20(369-378)
IEEE DOI
2008
Training, Robustness, Perturbation methods, Neural networks,
Data models, Task analysis, Feature extraction
BibRef
Baráth, D.,
Noskova, J.,
Ivashechkin, M.,
Matas, J.,
MAGSAC++, a Fast, Reliable and Accurate Robust Estimator,
CVPR20(1301-1309)
IEEE DOI
2008
Robustness, Data models, Estimation, Noise level,
Pattern recognition, Kernel
BibRef
Saha, A.,
Subramanya, A.,
Patil, K.,
Pirsiavash, H.,
Role of Spatial Context in Adversarial Robustness for Object
Detection,
AML-CV20(3403-3412)
IEEE DOI
2008
Detectors, Object detection, Cognition, Training, Blindness,
Perturbation methods, Optimization
BibRef
Jefferson, B.,
Marrero, C.O.,
Robust Assessment of Real-World Adversarial Examples,
AML-CV20(3442-3449)
IEEE DOI
2008
Cameras, Light emitting diodes, Robustness, Lighting, Detectors,
Testing, Perturbation methods
BibRef
Goel, A.,
Agarwal, A.,
Vatsa, M.,
Singh, R.,
Ratha, N.K.,
DNDNet: Reconfiguring CNN for Adversarial Robustness,
TCV20(103-110)
IEEE DOI
2008
Mathematical model, Perturbation methods, Machine learning,
Computer architecture, Robustness, Computational modeling, Databases
BibRef
Cohen, G.,
Sapiro, G.,
Giryes, R.,
Detecting Adversarial Samples Using Influence Functions and Nearest
Neighbors,
CVPR20(14441-14450)
IEEE DOI
2008
Training, Robustness, Loss measurement, Feature extraction,
Neural networks, Perturbation methods, Training data
BibRef
He, Z.,
Rakin, A.S.,
Li, J.,
Chakrabarti, C.,
Fan, D.,
Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack,
CVPR20(14083-14091)
IEEE DOI
2008
Training, Neural networks, Random access memory, Indexes,
Optimization, Degradation, Immune system
BibRef
Dong, X.,
Han, J.,
Chen, D.,
Liu, J.,
Bian, H.,
Ma, Z.,
Li, H.,
Wang, X.,
Zhang, W.,
Yu, N.,
Robust Superpixel-Guided Attentional Adversarial Attack,
CVPR20(12892-12901)
IEEE DOI
2008
Perturbation methods, Robustness, Noise measurement,
Image color analysis, Pipelines, Agriculture
BibRef
Rahnama, A.,
Nguyen, A.T.,
Raff, E.,
Robust Design of Deep Neural Networks Against Adversarial Attacks
Based on Lyapunov Theory,
CVPR20(8175-8184)
IEEE DOI
2008
Robustness, Nonlinear systems, Training, Control theory,
Stability analysis, Perturbation methods, Transient analysis
BibRef
Zhao, Y.,
Wu, Y.,
Chen, C.,
Lim, A.,
On Isometry Robustness of Deep 3D Point Cloud Models Under
Adversarial Attacks,
CVPR20(1198-1207)
IEEE DOI
2008
Robustness, Data models,
Solid modeling, Computational modeling, Perturbation methods
BibRef
Gowal, S.,
Qin, C.,
Huang, P.,
Cemgil, T.,
Dvijotham, K.,
Mann, T.,
Kohli, P.,
Achieving Robustness in the Wild via Adversarial Mixing With
Disentangled Representations,
CVPR20(1208-1217)
IEEE DOI
2008
Perturbation methods, Robustness, Training, Semantics, Correlation,
Task analysis, Mathematical model
BibRef
Jeddi, A.,
Shafiee, M.J.,
Karg, M.,
Scharfenberger, C.,
Wong, A.,
Learn2Perturb: An End-to-End Feature Perturbation Learning to Improve
Adversarial Robustness,
CVPR20(1238-1247)
IEEE DOI
2008
Perturbation methods, Robustness, Training, Neural networks,
Data models, Uncertainty, Optimization
BibRef
Dabouei, A.,
Soleymani, S.,
Taherkhani, F.,
Dawson, J.,
Nasrabadi, N.M.,
Exploiting Joint Robustness to Adversarial Perturbations,
CVPR20(1119-1128)
IEEE DOI
2008
Robustness, Perturbation methods, Training, Predictive models,
Optimization, Adaptation models
BibRef
Addepalli, S.[Sravanti],
Vivek, B.S.,
Baburaj, A.[Arya],
Sriramanan, G.[Gaurang],
Babu, R.V.[R. Venkatesh],
Towards Achieving Adversarial Robustness by Enforcing Feature
Consistency Across Bit Planes,
CVPR20(1017-1026)
IEEE DOI
2008
Training, Robustness, Quantization (signal), Visual systems,
Perturbation methods, Neural networks
BibRef
Yuan, J.,
He, Z.,
Ensemble Generative Cleaning With Feedback Loops for Defending
Adversarial Attacks,
CVPR20(578-587)
IEEE DOI
2008
Cleaning, Feedback loop, Transforms, Neural networks, Estimation,
Fuses, Iterative methods
BibRef
Guo, M.,
Yang, Y.,
Xu, R.,
Liu, Z.,
Lin, D.,
When NAS Meets Robustness: In Search of Robust Architectures Against
Adversarial Attacks,
CVPR20(628-637)
IEEE DOI
2008
Computer architecture, Robustness, Training, Network architecture,
Neural networks, Convolution, Architecture
BibRef
Borkar, T.,
Heide, F.,
Karam, L.J.,
Defending Against Universal Attacks Through Selective Feature
Regeneration,
CVPR20(706-716)
IEEE DOI
2008
Perturbation methods, Training, Robustness, Noise reduction,
Image restoration, Transforms
BibRef
Li, G.,
Ding, S.,
Luo, J.,
Liu, C.,
Enhancing Intrinsic Adversarial Robustness via Feature Pyramid
Decoder,
CVPR20(797-805)
IEEE DOI
2008
Noise reduction, Robustness, Training, Image restoration,
Noise measurement, Decoding, Neural networks
BibRef
Chen, T.,
Liu, S.,
Chang, S.,
Cheng, Y.,
Amini, L.,
Wang, Z.,
Adversarial Robustness:
From Self-Supervised Pre-Training to Fine-Tuning,
CVPR20(696-705)
IEEE DOI
2008
Robustness, Task analysis, Training, Standards, Data models,
Computational modeling, Tuning
BibRef
Lee, S.,
Lee, H.,
Yoon, S.,
Adversarial Vertex Mixup: Toward Better Adversarially Robust
Generalization,
CVPR20(269-278)
IEEE DOI
2008
Robustness, Training, Standards, Perturbation methods,
Complexity theory, Upper bound, Data models
BibRef
Dong, Y.,
Fu, Q.,
Yang, X.,
Pang, T.,
Su, H.,
Xiao, Z.,
Zhu, J.,
Benchmarking Adversarial Robustness on Image Classification,
CVPR20(318-328)
IEEE DOI
2008
Robustness, Adaptation models, Training, Predictive models,
Perturbation methods, Data models, Measurement
BibRef
Xiao, C.,
Zheng, C.,
One Man's Trash Is Another Man's Treasure:
Resisting Adversarial Examples by Adversarial Examples,
CVPR20(409-418)
IEEE DOI
2008
Training, Robustness, Perturbation methods, Neural networks,
Transforms, Mathematical model, Numerical models
BibRef
Naseer, M.,
Khan, S.,
Hayat, M.,
Khan, F.S.,
Porikli, F.M.,
A Self-supervised Approach for Adversarial Robustness,
CVPR20(259-268)
IEEE DOI
2008
Perturbation methods, Task analysis, Distortion, Training,
Robustness, Feature extraction, Neural networks
BibRef
Zhao, Y.,
Tian, Y.,
Fowlkes, C.,
Shen, W.,
Yuille, A.L.,
Resisting Large Data Variations via Introspective Transformation
Network,
WACV20(3069-3078)
IEEE DOI
2006
Training, Testing, Robustness, Training data,
Linear programming, Resists
BibRef
Kim, D.H.[Dong-Hyun],
Bargal, S.A.[Sarah Adel],
Zhang, J.M.[Jian-Ming],
Sclaroff, S.[Stan],
Multi-way Encoding for Robustness,
WACV20(1341-1349)
IEEE DOI
2006
To counter adversarial attacks.
Encoding, Robustness, Perturbation methods, Training,
Biological system modeling, Neurons, Correlation
BibRef
Folz, J.,
Palacio, S.,
Hees, J.,
Dengel, A.,
Adversarial Defense based on Structure-to-Signal Autoencoders,
WACV20(3568-3577)
IEEE DOI
2006
Perturbation methods, Semantics, Robustness, Predictive models,
Training, Decoding, Neural networks
BibRef
Zheng, S.,
Zhu, Z.,
Zhang, X.,
Liu, Z.,
Cheng, J.,
Zhao, Y.,
Distribution-Induced Bidirectional Generative Adversarial Network for
Graph Representation Learning,
CVPR20(7222-7231)
IEEE DOI
2008
Generative adversarial networks, Robustness, Data models,
Generators, Task analysis, Gaussian distribution
BibRef
Vivek, B.S.,
Revanur, A.[Ambareesh],
Venkat, N.[Naveen],
Babu, R.V.[R. Venkatesh],
Plug-And-Pipeline: Efficient Regularization for Single-Step
Adversarial Training,
TCV20(138-146)
IEEE DOI
2008
Training, Robustness, Computational modeling, Perturbation methods,
Iterative methods, Backpropagation, Data models
BibRef
Benz, P.[Philipp],
Zhang, C.[Chaoning],
Imtiaz, T.[Tooba],
Kweon, I.S.[In So],
Double Targeted Universal Adversarial Perturbations,
ACCV20(IV:284-300).
Springer DOI
2103
BibRef
Earlier: A2, A1, A3, A4:
Understanding Adversarial Examples From the Mutual Influence of
Images and Perturbations,
CVPR20(14509-14518)
IEEE DOI
2008
Perturbation methods, Correlation, Training data,
Feature extraction, Training, Task analysis, Robustness
BibRef
Zheng, H.,
Zhang, Z.,
Gu, J.,
Lee, H.,
Prakash, A.,
Efficient Adversarial Training With Transferable Adversarial Examples,
CVPR20(1178-1187)
IEEE DOI
2008
Training, Perturbation methods, Robustness, Computational modeling,
Measurement, Iterative methods, Silicon
BibRef
Shi, Y.,
Han, Y.,
Tian, Q.,
Polishing Decision-Based Adversarial Noise With a Customized Sampling,
CVPR20(1027-1035)
IEEE DOI
2008
Gaussian distribution, Sensitivity, Noise reduction, Optimization,
Image coding, Robustness, Standards
BibRef
Xie, C.,
Tan, M.,
Gong, B.,
Wang, J.,
Yuille, A.L.,
Le, Q.V.,
Adversarial Examples Improve Image Recognition,
CVPR20(816-825)
IEEE DOI
2008
Training, Robustness, Degradation, Image recognition,
Perturbation methods, Standards, Supervised learning
BibRef
Dabouei, A.,
Soleymani, S.,
Taherkhani, F.,
Dawson, J.,
Nasrabadi, N.M.,
SmoothFool: An Efficient Framework for Computing Smooth Adversarial
Perturbations,
WACV20(2654-2663)
IEEE DOI
2006
Perturbation methods, Frequency-domain analysis, Robustness,
Training, Optimization, Network architecture, Topology
BibRef
Peterson, J.[Joshua],
Battleday, R.[Ruairidh],
Griffiths, T.[Thomas],
Russakovsky, O.[Olga],
Human Uncertainty Makes Classification More Robust,
ICCV19(9616-9625)
IEEE DOI
2004
CIFAR10H dataset.
To make deep networks robust to adversarial attacks.
convolutional neural nets, learning (artificial intelligence),
pattern classification, classification performance,
Dogs
BibRef
Wang, J.,
Zhang, H.,
Bilateral Adversarial Training:
Towards Fast Training of More Robust Models Against Adversarial Attacks,
ICCV19(6628-6637)
IEEE DOI
2004
entropy, learning (artificial intelligence), neural nets,
security of data, adversarial attacks, Data models
BibRef
Ye, S.,
Xu, K.,
Liu, S.,
Cheng, H.,
Lambrechts, J.,
Zhang, H.,
Zhou, A.,
Ma, K.,
Wang, Y.,
Lin, X.,
Adversarial Robustness vs. Model Compression, or Both?,
ICCV19(111-120)
IEEE DOI
2004
minimax techniques, neural nets, security of data,
adversarial attacks, concurrent adversarial training
BibRef
Moosavi-Dezfooli, S.M.[Seyed-Mohsen],
Fawzi, A.[Alhussein],
Uesato, J.[Jonathan],
Frossard, P.[Pascal],
Robustness via Curvature Regularization, and Vice Versa,
CVPR19(9070-9078).
IEEE DOI
2002
Adversarial training leads to more linear boundaries.
BibRef
Xie, C.[Cihang],
Wu, Y.X.[Yu-Xin],
van der Maaten, L.[Laurens],
Yuille, A.L.[Alan L.],
He, K.M.[Kai-Ming],
Feature Denoising for Improving Adversarial Robustness,
CVPR19(501-509).
IEEE DOI
2002
BibRef
He, Z.[Zhezhi],
Rakin, A.S.[Adnan Siraj],
Fan, D.L.[De-Liang],
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural
Network Robustness Against Adversarial Attack,
CVPR19(588-597).
IEEE DOI
2002
BibRef
Kaneko, T.[Takuhiro],
Harada, T.[Tatsuya],
Blur, Noise, and Compression Robust Generative Adversarial Networks,
CVPR21(13574-13584)
IEEE DOI
2111
Degradation, Training, Adaptation models, Image coding, Uncertainty,
Computational modeling
BibRef
Kaneko, T.[Takuhiro],
Harada, T.[Tatsuya],
Noise Robust Generative Adversarial Networks,
CVPR20(8401-8411)
IEEE DOI
2008
Training, Noise measurement, Generators,
Noise robustness, Gaussian noise, Image generation
BibRef
Kaneko, T.[Takuhiro],
Ushiku, Y.[Yoshitaka],
Harada, T.[Tatsuya],
Label-Noise Robust Generative Adversarial Networks,
CVPR19(2462-2471).
IEEE DOI
2002
BibRef
Stutz, D.[David],
Hein, M.[Matthias],
Schiele, B.[Bernt],
Disentangling Adversarial Robustness and Generalization,
CVPR19(6969-6980).
IEEE DOI
2002
BibRef
Miyazato, S.,
Wang, X.,
Yamasaki, T.,
Aizawa, K.,
Reinforcing the Robustness of a Deep Neural Network to Adversarial
Examples by Using Color Quantization of Training Image Data,
ICIP19(884-888)
IEEE DOI
1910
convolutional neural network, adversarial example, color quantization
BibRef
Ramanathan, T.,
Manimaran, A.,
You, S.,
Kuo, C.J.,
Robustness of Saak Transform Against Adversarial Attacks,
ICIP19(2531-2535)
IEEE DOI
1910
Saak transform, Adversarial attacks, Deep Neural Networks, Image Classification
BibRef
Prakash, A.,
Moran, N.,
Garber, S.,
DiLillo, A.,
Storer, J.,
Deflecting Adversarial Attacks with Pixel Deflection,
CVPR18(8571-8580)
IEEE DOI
1812
Perturbation methods, Transforms, Minimization, Robustness,
Noise reduction, Training
BibRef
Mummadi, C.K.,
Brox, T.,
Metzen, J.H.,
Defending Against Universal Perturbations With Shared Adversarial
Training,
ICCV19(4927-4936)
IEEE DOI
2004
image classification, image segmentation, neural nets,
universal perturbations, shared adversarial training,
Computational modeling
BibRef
Chen, H.,
Liang, J.,
Chang, S.,
Pan, J.,
Chen, Y.,
Wei, W.,
Juan, D.,
Improving Adversarial Robustness via Guided Complement Entropy,
ICCV19(4880-4888)
IEEE DOI
2004
entropy, learning (artificial intelligence), neural nets,
probability, adversarial defense, adversarial robustness,
BibRef
Bai, Y.,
Feng, Y.,
Wang, Y.,
Dai, T.,
Xia, S.,
Jiang, Y.,
Hilbert-Based Generative Defense for Adversarial Examples,
ICCV19(4783-4792)
IEEE DOI
2004
feature extraction, Hilbert transforms, neural nets,
security of data, scan mode, advanced Hilbert curve scan order
BibRef
Jang, Y.,
Zhao, T.,
Hong, S.,
Lee, H.,
Adversarial Defense via Learning to Generate Diverse Attacks,
ICCV19(2740-2749)
IEEE DOI
2004
learning (artificial intelligence), neural nets,
pattern classification, security of data, adversarial defense, Machine learning
BibRef
Mustafa, A.,
Khan, S.,
Hayat, M.,
Goecke, R.,
Shen, J.,
Shao, L.,
Adversarial Defense by Restricting the Hidden Space of Deep Neural
Networks,
ICCV19(3384-3393)
IEEE DOI
2004
convolutional neural nets, feature extraction,
image classification, image representation, Iterative methods
BibRef
Taran, O.[Olga],
Rezaeifar, S.[Shideh],
Holotyak, T.[Taras],
Voloshynovskiy, S.[Slava],
Defending Against Adversarial Attacks by Randomized Diversification,
CVPR19(11218-11225).
IEEE DOI
2002
BibRef
Sun, B.[Bo],
Tsai, N.H.[Nian-Hsuan],
Liu, F.C.[Fang-Chen],
Yu, R.[Ronald],
Su, H.[Hao],
Adversarial Defense by Stratified Convolutional Sparse Coding,
CVPR19(11439-11448).
IEEE DOI
2002
BibRef
Ho, C.H.[Chih-Hui],
Leung, B.[Brandon],
Sandstrom, E.[Erik],
Chang, Y.[Yen],
Vasconcelos, N.M.[Nuno M.],
Catastrophic Child's Play:
Easy to Perform, Hard to Defend Adversarial Attacks,
CVPR19(9221-9229).
IEEE DOI
2002
BibRef
Dubey, A.[Abhimanyu],
van der Maaten, L.[Laurens],
Yalniz, Z.[Zeki],
Li, Y.X.[Yi-Xuan],
Mahajan, D.[Dhruv],
Defense Against Adversarial Images Using Web-Scale Nearest-Neighbor
Search,
CVPR19(8759-8768).
IEEE DOI
2002
BibRef
Dong, Y.P.[Yin-Peng],
Pang, T.[Tianyu],
Su, H.[Hang],
Zhu, J.[Jun],
Evading Defenses to Transferable Adversarial Examples by
Translation-Invariant Attacks,
CVPR19(4307-4316).
IEEE DOI
2002
BibRef
Rony, J.[Jerome],
Hafemann, L.G.[Luiz G.],
Oliveira, L.S.[Luiz S.],
Ben Ayed, I.[Ismail],
Sabourin, R.[Robert],
Granger, E.[Eric],
Decoupling Direction and Norm for Efficient Gradient-Based L2
Adversarial Attacks and Defenses,
CVPR19(4317-4325).
IEEE DOI
2002
BibRef
Qiu, Y.X.[Yu-Xian],
Leng, J.W.[Jing-Wen],
Guo, C.[Cong],
Chen, Q.[Quan],
Li, C.[Chao],
Guo, M.[Minyi],
Zhu, Y.H.[Yu-Hao],
Adversarial Defense Through Network Profiling Based Path Extraction,
CVPR19(4772-4781).
IEEE DOI
2002
BibRef
Jia, X.J.[Xiao-Jun],
Wei, X.X.[Xing-Xing],
Cao, X.C.[Xiao-Chun],
Foroosh, H.[Hassan],
ComDefend: An Efficient Image Compression Model to Defend Adversarial
Examples,
CVPR19(6077-6085).
IEEE DOI
2002
BibRef
Raff, E.[Edward],
Sylvester, J.[Jared],
Forsyth, S.[Steven],
McLean, M.[Mark],
Barrage of Random Transforms for Adversarially Robust Defense,
CVPR19(6521-6530).
IEEE DOI
2002
BibRef
Ji, J.,
Zhong, B.,
Ma, K.,
Multi-Scale Defense of Adversarial Images,
ICIP19(4070-4074)
IEEE DOI
1910
deep learning, adversarial images, defense, multi-scale, image evolution
BibRef
Agarwal, C.,
Nguyen, A.,
Schonfeld, D.,
Improving Robustness to Adversarial Examples by Encouraging
Discriminative Features,
ICIP19(3801-3805)
IEEE DOI
1910
Adversarial Machine Learning, Robustness, Defenses, Deep Learning
BibRef
Saha, S.,
Kumar, A.,
Sahay, P.,
Jose, G.,
Kruthiventi, S.,
Muralidhara, H.,
Attack Agnostic Statistical Method for Adversarial Detection,
SDL-CV19(798-802)
IEEE DOI
2004
feature extraction, image classification,
learning (artificial intelligence), neural nets, Adversarial Attack
BibRef
Taran, O.[Olga],
Rezaeifar, S.[Shideh],
Voloshynovskiy, S.[Slava],
Bridging Machine Learning and Cryptography in Defence Against
Adversarial Attacks,
Objectionable18(II:267-279).
Springer DOI
1905
BibRef
Naseer, M.,
Khan, S.,
Porikli, F.M.,
Local Gradients Smoothing: Defense Against Localized Adversarial
Attacks,
WACV19(1300-1307)
IEEE DOI
1904
data compression, feature extraction, gradient methods,
image classification, image coding, image representation,
High frequency
BibRef
Akhtar, N.,
Liu, J.,
Mian, A.,
Defense Against Universal Adversarial Perturbations,
CVPR18(3389-3398)
IEEE DOI
1812
Perturbation methods, Training, Computational modeling, Detectors,
Neural networks, Robustness, Integrated circuits
BibRef
Liao, F.,
Liang, M.,
Dong, Y.,
Pang, T.,
Hu, X.,
Zhu, J.,
Defense Against Adversarial Attacks Using High-Level Representation
Guided Denoiser,
CVPR18(1778-1787)
IEEE DOI
1812
Training, Perturbation methods, Noise reduction,
Image reconstruction, Predictive models, Neural networks, Adaptation models
BibRef
Behpour, S.,
Xing, W.,
Ziebart, B.D.,
ARC: Adversarial Robust Cuts for Semi-Supervised and Multi-label
Classification,
WiCV18(1986-19862)
IEEE DOI
1812
Markov random fields, Task analysis, Training, Testing,
Support vector machines, Fasteners, Games
BibRef
Karim, R.,
Islam, M.A.,
Mohammed, N.,
Bruce, N.D.B.,
On the Robustness of Deep Learning Models to Universal Adversarial
Attack,
CRV18(55-62)
IEEE DOI
1812
Perturbation methods, Computational modeling, Neural networks,
Task analysis, Image segmentation, Data models, Semantics,
Semantic Segmentation
BibRef
Jakubovitz, D.[Daniel],
Giryes, R.[Raja],
Improving DNN Robustness to Adversarial Attacks Using Jacobian
Regularization,
ECCV18(XII: 525-541).
Springer DOI
1810
BibRef
Rozsa, A.,
Gunther, M.,
Boult, T.E.,
Towards Robust Deep Neural Networks with BANG,
WACV18(803-811)
IEEE DOI
1806
image processing, learning (artificial intelligence),
neural nets, BANG technique, adversarial image utilization,
Training
BibRef
Lu, J.,
Issaranon, T.,
Forsyth, D.A.,
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly,
ICCV17(446-454)
IEEE DOI
1802
image colour analysis, image reconstruction,
learning (artificial intelligence), neural nets,
BibRef
Mukuta, Y.,
Ushiku, Y.,
Harada, T.,
Spatial-Temporal Weighted Pyramid Using Spatial Orthogonal Pooling,
CEFR-LCV17(1041-1049)
IEEE DOI
1802
Encoding, Feature extraction, Robustness,
Spatial resolution, Standards
BibRef
Moosavi-Dezfooli, S.M.[Seyed-Mohsen],
Fawzi, A.[Alhussein],
Fawzi, O.[Omar],
Frossard, P.[Pascal],
Universal Adversarial Perturbations,
CVPR17(86-94)
IEEE DOI
1711
Computer architecture, Correlation, Neural networks, Optimization,
Robustness, Training, Visualization
BibRef
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Adversarial Attacks.