14.5.9.10.3 Countering Adversarial Attacks, Defense, Robustness

Chapter Contents
Adversarial Networks. Attacks. Defense. GAN. Generative Networks. Robustness. More for the attack itself:
See also Adversarial Attacks. Noise for attack:
See also Adversarial Training for Defense.
See also Adversarial Patch Attacks.
See also Noise in Adversarial Attacks, Removing, Detection, Use.
See also Backdoor Attacks.
See also Adversarial Networks, Adversarial Inputs, Generative Adversarial.
See also Black-Box Attacks, Robustness.

Miller, D.J., Xiang, Z., Kesidis, G.,
Adversarial Learning Targeting Deep Neural Network Classification: A Comprehensive Review of Defenses Against Attacks,
PIEEE(108), No. 3, March 2020, pp. 402-433.
IEEE DOI 2003
Training data, Neural networks, Reverse engineering, Machine learning, Robustness, Feature extraction, white box BibRef

Amini, S., Ghaemmaghami, S.,
Towards Improving Robustness of Deep Neural Networks to Adversarial Perturbations,
MultMed(22), No. 7, July 2020, pp. 1889-1903.
IEEE DOI 2007
Robustness, Perturbation methods, Training, Deep learning, Neural networks, Signal to noise ratio, interpretable BibRef

Li, X.R.[Xu-Rong], Ji, S.L.[Shou-Ling], Ji, J.T.[Jun-Tao], Ren, Z.Y.[Zhen-Yu], Wu, C.M.[Chun-Ming], Li, B.[Bo], Wang, T.[Ting],
Adversarial examples detection through the sensitivity in space mappings,
IET-CV(14), No. 5, August 2020, pp. 201-213.
DOI Link 2007
BibRef

Li, H., Zeng, Y., Li, G., Lin, L., Yu, Y.,
Online Alternate Generator Against Adversarial Attacks,
IP(29), 2020, pp. 9305-9315.
IEEE DOI 2010
Generators, Training, Perturbation methods, Knowledge engineering, Convolutional neural networks, Deep learning, image classification BibRef

Shi, Y.C.[Yu-Cheng], Han, Y.H.[Ya-Hong], Zhang, Q.X.[Quan-Xin], Kuang, X.H.[Xiao-Hui],
Adaptive iterative attack towards explainable adversarial robustness,
PR(105), 2020, pp. 107309.
Elsevier DOI 2006
Adversarial example, Adversarial attack, Image classification BibRef

Wang, Y., Su, H., Zhang, B., Hu, X.,
Interpret Neural Networks by Extracting Critical Subnetworks,
IP(29), 2020, pp. 6707-6720.
IEEE DOI 2007
Predictive models, Logic gates, Neural networks, Machine learning, Feature extraction, Robustness, Visualization, adversarial robustness BibRef

Liu, A.S.[Ai-Shan], Liu, X.L.[Xiang-Long], Yu, H.[Hang], Zhang, C.Z.[Chong-Zhi], Liu, Q.[Qiang], Tao, D.C.[Da-Cheng],
Training Robust Deep Neural Networks via Adversarial Noise Propagation,
IP(30), 2021, pp. 5769-5781.
IEEE DOI 2106
Training, Adaptation models, Visualization, Computational modeling, Neural networks, Manuals, Robustness, Adversarial examples, deep neural networks BibRef

Jia, K.[Kui], Tao, D.C.[Da-Cheng], Gao, S.H.[Sheng-Hua], Xu, X.M.[Xiang-Min],
Improving Training of Deep Neural Networks via Singular Value Bounding,
CVPR17(3994-4002)
IEEE DOI 1711
Histograms, Machine learning, Neural networks, Optimization, Standards, Training BibRef

Zhang, C.Z.[Chong-Zhi], Liu, A.S.[Ai-Shan], Liu, X.L.[Xiang-Long], Xu, Y.T.[Yi-Tao], Yu, H.[Hang], Ma, Y.Q.[Yu-Qing], Li, T.L.[Tian-Lin],
Interpreting and Improving Adversarial Robustness of Deep Neural Networks With Neuron Sensitivity,
IP(30), 2021, pp. 1291-1304.
IEEE DOI 2012
Neurons, Sensitivity, Robustness, Computational modeling, Analytical models, Training, Deep learning, Model interpretation, neuron sensitivity BibRef

Ortiz-Jiménez, G.[Guillermo], Modas, A.[Apostolos], Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal],
Optimism in the Face of Adversity: Understanding and Improving Deep Learning Through Adversarial Robustness,
PIEEE(109), No. 5, May 2021, pp. 635-659.
IEEE DOI 2105
Neural networks, Deep learning, Robustness, Security, Tools, Perturbation methods, Benchmark testing, Adversarial robustness, transfer learning BibRef

Yüce, G.[Gizem], Ortiz-Jiménez, G.[Guillermo], Besbinar, B.[Beril], Frossard, P.[Pascal],
A Structured Dictionary Perspective on Implicit Neural Representations,
CVPR22(19206-19216)
IEEE DOI 2210
Deep learning, Dictionaries, Data visualization, Power system harmonics, Harmonic analysis, Self- semi- meta- unsupervised learning BibRef

Bai, X.[Xiao], Wang, X.[Xiang], Liu, X.L.[Xiang-Long], Liu, Q.[Qiang], Song, J.K.[Jing-Kuan], Sebe, N.[Nicu], Kim, B.[Been],
Explainable deep learning for efficient and robust pattern recognition: A survey of recent developments,
PR(120), 2021, pp. 108102.
Elsevier DOI 2109
Survey, Explainable Learning. Explainable deep learning, Network compression and acceleration, Adversarial robustness, Stability in deep learning BibRef

Ma, X.J.[Xing-Jun], Niu, Y.H.[Yu-Hao], Gu, L.[Lin], Wang, Y.S.[Yi-Sen], Zhao, Y.T.[Yi-Tian], Bailey, J.[James], Lu, F.[Feng],
Understanding adversarial attacks on deep learning based medical image analysis systems,
PR(110), 2021, pp. 107332.
Elsevier DOI 2011
Adversarial attack, Adversarial example detection, Medical image analysis, Deep learning BibRef

Zhou, M.[Mo], Niu, Z.X.[Zhen-Xing], Wang, L.[Le], Zhang, Q.L.[Qi-Lin], Hua, G.[Gang],
Adversarial Ranking Attack and Defense,
ECCV20(XIV:781-799).
Springer DOI 2011
BibRef

Agarwal, A.[Akshay], Vatsa, M.[Mayank], Singh, R.[Richa], Ratha, N.[Nalini],
Cognitive data augmentation for adversarial defense via pixel masking,
PRL(146), 2021, pp. 244-251.
Elsevier DOI 2105
Adversarial attacks, Deep learning, Data augmentation BibRef

Agarwal, A.[Akshay], Ratha, N.[Nalini], Vatsa, M.[Mayank], Singh, R.[Richa],
Exploring Robustness Connection between Artificial and Natural Adversarial Examples,
ArtOfRobust22(178-185)
IEEE DOI 2210
Deep learning, Neural networks, Semantics, Transformers, Robustness, Convolutional neural networks BibRef

Li, Z.R.[Zhuo-Rong], Feng, C.[Chao], Wu, M.H.[Ming-Hui], Yu, H.C.[Hong-Chuan], Zheng, J.W.[Jian-Wei], Zhu, F.[Fanwei],
Adversarial robustness via attention transfer,
PRL(146), 2021, pp. 172-178.
Elsevier DOI 2105
Adversarial defense, Robustness, Representation learning, Visual attention, Transfer learning BibRef

Zhang, S.D.[Shu-Dong], Gao, H.[Haichang], Rao, Q.X.[Qing-Xun],
Defense Against Adversarial Attacks by Reconstructing Images,
IP(30), 2021, pp. 6117-6129.
IEEE DOI 2107
Perturbation methods, Image reconstruction, Training, Iterative methods, Computational modeling, Predictive models, perceptual loss BibRef

Hu, W.Z.[Wen-Zheng], Li, M.Y.[Ming-Yang], Wang, Z.[Zheng], Wang, J.Q.[Jian-Qiang], Zhang, C.S.[Chang-Shui],
DiFNet: Densely High-Frequency Convolutional Neural Networks,
SPLetters(28), 2021, pp. 1340-1344.
IEEE DOI 2107
Image edge detection, Convolution, Perturbation methods, Training, Neural networks, Robustness, Robust, deep convolution neural network BibRef

Mustafa, A.[Aamir], Khan, S.H.[Salman H.], Hayat, M.[Munawar], Goecke, R.[Roland], Shen, J.B.[Jian-Bing], Shao, L.[Ling],
Deeply Supervised Discriminative Learning for Adversarial Defense,
PAMI(43), No. 9, September 2021, pp. 3154-3166.
IEEE DOI 2108
Robustness, Perturbation methods, Training, Linear programming, Optimization, Marine vehicles, Prototypes, Adversarial defense, deep supervision BibRef

Khodabakhsh, A.[Ali], Akhtar, Z.[Zahid],
Unknown presentation attack detection against rational attackers,
IET-Bio(10), No. 5, 2021, pp. 460-479.
DOI Link 2109
BibRef

Zhang, X.W.[Xing-Wei], Zheng, X.L.[Xiao-Long], Mao, W.J.[Wen-Ji],
Adversarial Perturbation Defense on Deep Neural Networks,
Surveys(54), No. 8, October 2021, pp. xx-yy.
DOI Link 2110
Survey, Adversarial Defense. security, deep neural networks, origin, Adversarial perturbation defense BibRef

Chen, X.[Xuan], Ma, Y.N.[Yue-Na], Lu, S.W.[Shi-Wei], Yao, Y.[Yu],
Boundary augment: A data augment method to defend poison attack,
IET-IPR(15), No. 13, 2021, pp. 3292-3303.
DOI Link 2110
BibRef

Xu, Y.H.[Yong-Hao], Du, B.[Bo], Zhang, L.P.[Liang-Pei],
Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification,
IP(30), 2021, pp. 8671-8685.
IEEE DOI 2110
Deep learning, Training, Hyperspectral imaging, Feature extraction, Task analysis, Perturbation methods, Predictive models, deep learning BibRef

Yu, H.[Hang], Liu, A.S.[Ai-Shan], Li, G.C.[Geng-Chao], Yang, J.C.[Ji-Chen], Zhang, C.Z.[Chong-Zhi],
Progressive Diversified Augmentation for General Robustness of DNNs: A Unified Approach,
IP(30), 2021, pp. 8955-8967.
IEEE DOI 2111
Robustness, Training, Handheld computers, Perturbation methods, Complexity theory, Streaming media, Standards BibRef

Dai, T.[Tao], Feng, Y.[Yan], Chen, B.[Bin], Lu, J.[Jian], Xia, S.T.[Shu-Tao],
Deep image prior based defense against adversarial examples,
PR(122), 2022, pp. 108249.
Elsevier DOI 2112
Deep neural network, Adversarial example, Image prior, Defense BibRef

Lo, S.Y.[Shao-Yuan], Patel, V.M.[Vishal M.],
Defending Against Multiple and Unforeseen Adversarial Videos,
IP(31), 2022, pp. 962-973.
IEEE DOI 2201
Videos, Training, Robustness, Perturbation methods, Resists, Image reconstruction, Image recognition, Adversarial video, multi-perturbation robustness BibRef

Wang, J.W.[Jin-Wei], Zhao, J.J.[Jun-Jie], Yin, Q.L.[Qi-Lin], Luo, X.Y.[Xiang-Yang], Zheng, Y.H.[Yu-Hui], Shi, Y.Q.[Yun-Qing], Jha, S.I.K.[Sun-Il Kr.],
SmsNet: A New Deep Convolutional Neural Network Model for Adversarial Example Detection,
MultMed(24), 2022, pp. 230-244.
IEEE DOI 2202
Feature extraction, Training, Manuals, Perturbation methods, Information science, Principal component analysis, SmsConnection BibRef

Mygdalis, V.[Vasileios], Pitas, I.[Ioannis],
Hyperspherical class prototypes for adversarial robustness,
PR(125), 2022, pp. 108527.
Elsevier DOI 2203
Adversarial defense, Adversarial robustness, Hypersphere prototype loss, HCP loss BibRef

Liang, Q.[Qi], Li, Q.[Qiang], Nie, W.Z.[Wei-Zhi],
LD-GAN: Learning perturbations for adversarial defense based on GAN structure,
SP:IC(103), 2022, pp. 116659.
Elsevier DOI 2203
Adversarial attacks, Adversarial defense, Adversarial robustness, Image classification BibRef

Shao, R.[Rui], Perera, P.[Pramuditha], Yuen, P.C.[Pong C.], Patel, V.M.[Vishal M.],
Open-Set Adversarial Defense with Clean-Adversarial Mutual Learning,
IJCV(130), No. 1, January 2022, pp. 1070-1087.
Springer DOI 2204
BibRef
Earlier:
Open-set Adversarial Defense,
ECCV20(XVII:682-698).
Springer DOI 2011
BibRef

Subramanyam, A.V.,
Sinkhorn Adversarial Attack and Defense,
IP(31), 2022, pp. 4039-4049.
IEEE DOI 2206
Iterative methods, Training, Perturbation methods, Loss measurement, Standards, Robustness, Linear programming, adversarial attack and defense BibRef

Khong, T.T.T.[Thi Thu Thao], Nakada, T.[Takashi], Nakashima, Y.[Yasuhiko],
A Hybrid Bayesian-Convolutional Neural Network for Adversarial Robustness,
IEICE(E105-D), No. 7, July 2022, pp. 1308-1319.
WWW Link. 2207
BibRef

Wang, K.[Ke], Li, F.J.[Feng-Jun], Chen, C.M.[Chien-Ming], Hassan, M.M.[Mohammad Mehedi], Long, J.Y.[Jin-Yi], Kumar, N.[Neeraj],
Interpreting Adversarial Examples and Robustness for Deep Learning-Based Auto-Driving Systems,
ITS(23), No. 7, July 2022, pp. 9755-9764.
IEEE DOI 2207
Training, Robustness, Deep learning, Perturbation methods, Interference, Computer science, Computational modeling, adversarial robustness BibRef

Wang, Y.Z.[Yuan-Zhe], Liu, Q.[Qipeng], Mihankhah, E.[Ehsan], Lv, C.[Chen], Wang, D.[Danwei],
Detection and Isolation of Sensor Attacks for Autonomous Vehicles: Framework, Algorithms, and Validation,
ITS(23), No. 7, July 2022, pp. 8247-8259.
IEEE DOI 2207
Robot sensing systems, Autonomous vehicles, Laser radar, Mathematical model, Detectors, Global Positioning System, cyber-attack BibRef

Wang, J.[Jia], Su, W.Q.[Wu-Qiang], Luo, C.W.[Cheng-Wen], Chen, J.[Jie], Song, H.B.[Hou-Bing], Li, J.Q.[Jian-Qiang],
CSG: Classifier-Aware Defense Strategy Based on Compressive Sensing and Generative Networks for Visual Recognition in Autonomous Vehicle Systems,
ITS(23), No. 7, July 2022, pp. 9543-9553.
IEEE DOI 2207
Training, Neural networks, Compressed sensing, Perturbation methods, Robustness, Real-time systems, generative neural networks BibRef

Wang, K.[Kun], Liu, M.Z.[Mao-Zhen],
YOLO-Anti: YOLO-based counterattack model for unseen congested object detection,
PR(131), 2022, pp. 108814.
Elsevier DOI 2208
Deep learning, Congested and occluded objects, Object detection BibRef

Xue, W.[Wei], Chen, Z.M.[Zhi-Ming], Tian, W.W.[Wei-Wei], Wu, Y.H.[Yun-Hua], Hua, B.[Bing],
A Cascade Defense Method for Multidomain Adversarial Attacks under Remote Sensing Detection,
RS(14), No. 15, 2022, pp. xx-yy.
DOI Link 2208
BibRef

Shi, X.S.[Xiao-Shuang], Peng, Y.F.[Yi-Fan], Chen, Q.Y.[Qing-Yu], Keenan, T.[Tiarnan], Thavikulwat, A.T.[Alisa T.], Lee, S.[Sungwon], Tang, Y.X.[Yu-Xing], Chew, E.Y.[Emily Y.], Summers, R.M.[Ronald M.], Lu, Z.Y.[Zhi-Yong],
Robust convolutional neural networks against adversarial attacks on medical images,
PR(132), 2022, pp. 108923.
Elsevier DOI 2209
CNNs, Adversarial examples, Sparsity denoising BibRef

Rakin, A.S.[Adnan Siraj], He, Z.[Zhezhi], Li, J.T.[Jing-Tao], Yao, F.[Fan], Chakrabarti, C.[Chaitali], Fan, D.L.[De-Liang],
T-BFA: Targeted Bit-Flip Adversarial Weight Attack,
PAMI(44), No. 11, November 2022, pp. 7928-7939.
IEEE DOI 2210
BibRef
Earlier: A2, A1, A3, A5, A6, Only:
Defending and Harnessing the Bit-Flip Based Adversarial Weight Attack,
CVPR20(14083-14091)
IEEE DOI 2008
Computational modeling, Random access memory, Computer security, Training, Quantization (signal), Data models, Memory management, bit-flip. Neural networks, Random access memory, Indexes, Optimization, Degradation, Immune system BibRef

Melacci, S.[Stefano], Ciravegna, G.[Gabriele], Sotgiu, A.[Angelo], Demontis, A.[Ambra], Biggio, B.[Battista], Gori, M.[Marco], Roli, F.[Fabio],
Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers,
PAMI(44), No. 12, December 2022, pp. 9944-9959.
IEEE DOI 2212
Training, Training data, Robustness, Task analysis, Adversarial machine learning, Ink, Semisupervised learning, multi-label classification BibRef

Yu, X.[Xi], Smedemark-Margulies, N.[Niklas], Aeron, S.[Shuchin], Koike-Akino, T.[Toshiaki], Moulin, P.[Pierre], Brand, M.[Matthew], Parsons, K.[Kieran], Wang, Y.[Ye],
Improving adversarial robustness by learning shared information,
PR(134), 2023, pp. 109054.
Elsevier DOI 2212
Adversarial robustness, Information bottleneck, Multi-view learning, Shared information BibRef

Machado, G.R.[Gabriel Resende], Silva, E.[Eugenio], Goldschmidt, R.R.[Ronaldo Ribeiro],
Adversarial Machine Learning in Image Classification: A Survey Toward the Defender's Perspective,
Surveys(55), No. 1, January 2023, pp. xx-yy.
DOI Link 2212
adversarial attacks, deep neural networks, adversarial images, defense methods, image classification BibRef

Rathore, H.[Hemant], Sasan, A.[Animesh], Sahay, S.K.[Sanjay K.], Sewak, M.[Mohit],
Defending malware detection models against evasion based adversarial attacks,
PRL(164), 2022, pp. 119-125.
Elsevier DOI 2212
Adversarial robustness, Deep neural network, Evasion attack, Malware analysis and detection, Machine learning BibRef

Lee, S.[Sungyoon], Kim, H.[Hoki], Lee, J.W.[Jae-Wook],
GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization,
PAMI(45), No. 2, February 2023, pp. 2645-2651.
IEEE DOI 2301
Neural networks, Robustness, Stochastic processes, Perturbation methods, Training, Transform coding, Statistics, directional analysis BibRef

Lin, D.[Da], Wang, Y.G.[Yuan-Gen], Tang, W.X.[Wei-Xuan], Kang, X.G.[Xian-Gui],
Boosting Query Efficiency of Meta Attack With Dynamic Fine-Tuning,
SPLetters(29), 2022, pp. 2557-2561.
IEEE DOI 2301
Distortion, Optimization, Estimation, Training, Tuning, Closed box, Rate distortion theory, Adversarial attack, query efficiency BibRef

Zhou, S.[Shuai], Liu, C.[Chi], Ye, D.[Dayong], Zhu, T.Q.[Tian-Qing], Zhou, W.[Wanlei], Yu, P.S.[Philip S.],
Adversarial Attacks and Defenses in Deep Learning: From a Perspective of Cybersecurity,
Surveys(55), No. 8, December 2022, pp. xx-yy.
DOI Link 2301
cybersecurity, adversarial attacks and defenses, advanced persistent threats, Deep learning BibRef

Picot, M.[Marine], Messina, F.[Francisco], Boudiaf, M.[Malik], Labeau, F.[Fabrice], Ben Ayed, I.[Ismail], Piantanida, P.[Pablo],
Adversarial Robustness Via Fisher-Rao Regularization,
PAMI(45), No. 3, March 2023, pp. 2698-2710.
IEEE DOI 2302
Robustness, Manifolds, Training, Perturbation methods, Standards, Neural networks, Adversarial machine learning, safety AI BibRef

Stutz, D.[David], Chandramoorthy, N.[Nandhini], Hein, M.[Matthias], Schiele, B.[Bernt],
Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators,
PAMI(45), No. 3, March 2023, pp. 3632-3647.
IEEE DOI 2302
Robustness, Quantization (signal), Random access memory, Training, Voltage, Bit error rate, Low voltage, DNN Accelerators, DNN quantization BibRef

Stutz, D.[David], Hein, M.[Matthias], Schiele, B.[Bernt],
Disentangling Adversarial Robustness and Generalization,
CVPR19(6969-6980).
IEEE DOI 2002
BibRef

Guo, Y.[Yong], Stutz, D.[David], Schiele, B.[Bernt],
Improving Robustness by Enhancing Weak Subnets,
ECCV22(XXIV:320-338).
Springer DOI 2211
BibRef

Guo, J.[Jun], Bao, W.[Wei], Wang, J.K.[Jia-Kai], Ma, Y.Q.[Yu-Qing], Gao, X.H.[Xing-Hai], Xiao, G.[Gang], Liu, A.[Aishan], Dong, J.[Jian], Liu, X.L.[Xiang-Long], Wu, W.J.[Wen-Jun],
A comprehensive evaluation framework for deep model robustness,
PR(137), 2023, pp. 109308.
Elsevier DOI 2302
Adversarial examples, Evaluation metrics, Model robustness BibRef

Niu, Z.H.[Zhong-Han], Yang, Y.B.[Yu-Bin],
Defense Against Adversarial Attacks with Efficient Frequency-Adaptive Compression and Reconstruction,
PR(138), 2023, pp. 109382.
Elsevier DOI 2303
Deep neural networks, Adversarial defense, Adversarial robustness, Closed-set attack, Open-set attack BibRef

Zhang, J.J.[Jia-Jin], Chao, H.Q.[Han-Qing], Yan, P.K.[Ping-Kun],
Toward Adversarial Robustness in Unlabeled Target Domains,
IP(32), 2023, pp. 1272-1284.
IEEE DOI 2303
Training, Robustness, Adaptation models, Data models, Deep learning, Task analysis, Labeling, Adversarial robustness, domain adaptation, pseudo labeling BibRef

Brau, F.[Fabio], Rossolini, G.[Giulio], Biondi, A.[Alessandro], Buttazzo, G.[Giorgio],
On the Minimal Adversarial Perturbation for Deep Neural Networks With Provable Estimation Error,
PAMI(45), No. 4, April 2023, pp. 5038-5052.
IEEE DOI 2303
Perturbation methods, Robustness, Estimation, Neural networks, Deep learning, Error analysis, Computational modeling, verification methods BibRef

Quan, C.[Chen], Sriranga, N.[Nandan], Yang, H.D.[Hao-Dong], Han, Y.H.S.[Yung-Hsiang S.], Geng, B.C.[Bao-Cheng], Varshney, P.K.[Pramod K.],
Efficient Ordered-Transmission Based Distributed Detection Under Data Falsification Attacks,
SPLetters(30), 2023, pp. 145-149.
IEEE DOI 2303
Energy efficiency, Wireless sensor networks, Upper bound, Optimization, Distributed databases, Simulation, distributed detection BibRef

Naseer, M.[Muzammal], Khan, S.[Salman], Hayat, M.[Munawar], Khan, F.S.[Fahad Shahbaz], Porikli, F.M.[Fatih M.],
Stylized Adversarial Defense,
PAMI(45), No. 5, May 2023, pp. 6403-6414.
IEEE DOI 2304
Training, Perturbation methods, Robustness, Multitasking, Predictive models, Computational modeling, Visualization, multi-task objective BibRef

Xu, Q.Q.[Qian-Qian], Yang, Z.Y.[Zhi-Yong], Zhao, Y.R.[Yun-Rui], Cao, X.C.[Xiao-Chun], Huang, Q.M.[Qing-Ming],
Rethinking Label Flipping Attack: From Sample Masking to Sample Thresholding,
PAMI(45), No. 6, June 2023, pp. 7668-7685.
IEEE DOI 2305
Data models, Training data, Training, Deep learning, Predictive models, Testing, Optimization, Label flipping attack, machine learning BibRef

Zago, J.G.[João G.], Antonelo, E.A.[Eric A.], Baldissera, F.L.[Fabio L.], Saad, R.T.[Rodrigo T.],
Benford's law: What does it say on adversarial images?,
JVCIR(93), 2023, pp. 103818.
Elsevier DOI 2305
Benford's law, Adversarial attacks, Convolutional neural networks, Adversarial detection BibRef

Li, W.[Wen], Wang, H.Y.[Heng-You], Huo, L.Z.[Lian-Zhi], He, Q.[Qiang], Zhang, C.L.[Chang-Lun],
Robust attention ranking architecture with frequency-domain transform to defend against adversarial samples,
CVIU(233), 2023, pp. 103717.
Elsevier DOI 2307
Adversarial samples, Attention mechanism, Discrete cosine transform, Key points ranking BibRef

Zhang, Y.X.[Yu-Xuan], Meng, H.[Hua], Cao, X.M.[Xue-Mei], Zhou, Z.C.[Zheng-Chun], Yang, M.[Mei], Adhikary, A.R.[Avik Ranjan],
Interpreting vulnerabilities of multi-instance learning to adversarial perturbations,
PR(142), 2023, pp. 109725.
Elsevier DOI 2307
Customized perturbation, Multi-instance learning, Universal perturbation, Vulnerability BibRef

Dong, J.H.[Jun-Hao], Yang, L.X.[Ling-Xiao], Wang, Y.[Yuan], Xie, X.H.[Xiao-Hua], Lai, J.H.[Jian-Huang],
Toward Intrinsic Adversarial Robustness Through Probabilistic Training,
IP(32), 2023, pp. 3862-3872.
IEEE DOI 2307
Training, Uncertainty, Probabilistic logic, Robustness, Standards, Computational modeling, Feature extraction, Deep neural networks, uncertainty BibRef

Lee, H.[Hakmin], Ro, Y.M.[Yong Man],
Adversarial anchor-guided feature refinement for adversarial defense,
IVC(136), 2023, pp. 104722.
Elsevier DOI 2308
Adversarial example, Adversarial robustness, Adversarial anchor, Covariate shift, Feature refinement BibRef

Gao, W.[Wei], Zhang, X.[Xu], Guo, S.[Shangwei], Zhang, T.W.[Tian-Wei], Xiang, T.[Tao], Qiu, H.[Han], Wen, Y.G.[Yong-Gang], Liu, Y.[Yang],
Automatic Transformation Search Against Deep Leakage From Gradients,
PAMI(45), No. 9, September 2023, pp. 10650-10668.
IEEE DOI 2309
Collaborative learning; defends against attacks that reveal shared data.
BibRef

Wei, X.X.[Xing-Xing], Wang, S.[Songping], Yan, H.Q.[Huan-Qian],
Efficient Robustness Assessment via Adversarial Spatial-Temporal Focus on Videos,
PAMI(45), No. 9, September 2023, pp. 10898-10912.
IEEE DOI 2309
BibRef

Saini, N.[Nandini], Chattopadhyay, C.[Chiranjoy], Das, D.[Debasis],
SOLARNet: A single stage regression based framework for efficient and robust object recognition in aerial images,
PRL(172), 2023, pp. 37-43.
Elsevier DOI 2309
Adversarial attacks, Deep learning, Aerial image, Object detection, DOTA, DIOR BibRef

Heo, J.[Jaehyuk], Seo, S.[Seungwan], Kang, P.[Pilsung],
Exploring the differences in adversarial robustness between ViT- and CNN-based models using novel metrics,
CVIU(235), 2023, pp. 103800.
Elsevier DOI 2310
Adversarial robustness, Computer vision BibRef

Huang, L.F.[Li-Feng], Gao, C.Y.[Cheng-Ying], Liu, N.[Ning],
Erosion Attack: Harnessing Corruption To Improve Adversarial Examples,
IP(32), 2023, pp. 4828-4841.
IEEE DOI Code:
WWW Link. 2310
BibRef

Wang, K.[Ke], Chen, Z.C.[Zi-Cong], Dang, X.L.[Xi-Lin], Fan, X.[Xuan], Han, X.M.[Xu-Ming], Chen, C.M.[Chien-Ming], Ding, W.P.[Wei-Ping], Yiu, S.M.[Siu-Ming], Weng, J.[Jian],
Uncovering Hidden Vulnerabilities in Convolutional Neural Networks through Graph-based Adversarial Robustness Evaluation,
PR(143), 2023, pp. 109745.
Elsevier DOI 2310
Graph of patterns, Graph distance algorithm, Adversarial robustness, Interpretable graph-based systems, Convolutional neural networks BibRef

Yang, S.R.[Suo-Rong], Li, J.Q.[Jin-Qiao], Zhang, T.Y.[Tian-Yue], Zhao, J.[Jian], Shen, F.[Furao],
AdvMask: A sparse adversarial attack-based data augmentation method for image classification,
PR(144), 2023, pp. 109847.
Elsevier DOI 2310
Data augmentation, Image classification, Sparse adversarial attack, Generalization BibRef

Ding, F.[Feng], Shen, Z.Y.[Zhang-Yi], Zhu, G.P.[Guo-Pu], Kwong, S.[Sam], Zhou, Y.C.[Yi-Cong], Lyu, S.W.[Si-Wei],
ExS-GAN: Synthesizing Anti-Forensics Images via Extra Supervised GAN,
Cyber(53), No. 11, November 2023, pp. 7162-7173.
IEEE DOI 2310
BibRef

Shi, C.[Cheng], Liu, Y.[Ying], Zhao, M.H.[Ming-Hua], Pun, C.M.[Chi-Man], Miao, Q.G.[Qi-Guang],
Attack-invariant attention feature for adversarial defense in hyperspectral image classification,
PR(145), 2024, pp. 109955.
Elsevier DOI Code:
WWW Link. 2311
Hyperspectral image classification, Adversarial defense, Attack-invariant attention feature, Adversarial attack BibRef

Liu, D.[Deyin], Wu, L.Y.B.[Lin Yuan-Bo], Li, B.[Bo], Boussaid, F.[Farid], Bennamoun, M.[Mohammed], Xie, X.H.[Xiang-Hua], Liang, C.W.[Cheng-Wu],
Jacobian norm with Selective Input Gradient Regularization for interpretable adversarial defense,
PR(145), 2024, pp. 109902.
Elsevier DOI Code:
WWW Link. 2311
Selective input gradient regularization, Jacobian normalization, Adversarial robustness BibRef

Zhang, C.H.[Chen-Han], Yu, S.[Shui], Tian, Z.Y.[Zhi-Yi], Yu, J.J.Q.[James J. Q.],
Generative Adversarial Networks: A Survey on Attack and Defense Perspective,
Surveys(56), No. 4, November 2023, pp. xx-yy.
DOI Link 2312
Survey, GAN Attacks. security and privacy, GANs survey, deep learning, attack and defense, Generative adversarial networks BibRef

Liu, H.[Hui], Zhao, B.[Bo], Guo, J.[Jiabao], Zhang, K.[Kehuan], Liu, P.[Peng],
A lightweight unsupervised adversarial detector based on autoencoder and isolation forest,
PR(147), 2024, pp. 110127.
Elsevier DOI 2312
Deep neural networks, Adversarial examples, Adversarial detection, Autoencoder, Isolation forest BibRef

Chu, T.S.[Tian-Shu], Fang, K.[Kun], Yang, J.[Jie], Huang, X.L.[Xiao-Lin],
Improving the adversarial robustness of quantized neural networks via exploiting the feature diversity,
PRL(176), 2023, pp. 117-122.
Elsevier DOI 2312
Quantized neural networks, Adversarial robustness, Orthogonal regularization, Feature diversity BibRef

Fang, K.[Kun], Tao, Q.H.[Qing-Hua], Wu, Y.W.[Ying-Wen], Li, T.[Tao], Cai, J.[Jia], Cai, F.P.[Fei-Peng], Huang, X.L.[Xiao-Lin], Yang, J.[Jie],
Towards robust neural networks via orthogonal diversity,
PR(149), 2024, pp. 110281.
Elsevier DOI 2403
Model augmentation, Multi-head, Orthogonality, Margin-maximization, Data augmentation, Adversarial robustness BibRef

Chu, T.S.[Tian-Shu], Yang, Z.P.[Zuo-Peng], Yang, J.[Jie], Huang, X.L.[Xiao-Lin],
Improving the Robustness of Convolutional Neural Networks Via Sketch Attention,
ICIP21(869-873)
IEEE DOI 2201
Training, Perturbation methods, Image processing, Pipelines, Robustness, Convolutional neural networks, CNNs, sketch attention BibRef

Yu, Y.R.[Yun-Rui], Gao, X.T.[Xi-Tong], Xu, C.Z.[Cheng-Zhong],
LAFIT: Efficient and Reliable Evaluation of Adversarial Defenses With Latent Features,
PAMI(46), No. 1, January 2024, pp. 354-369.
IEEE DOI 2312
BibRef

Zhang, X.X.[Xing-Xing], Gui, S.[Shupeng], Jin, J.[Jian], Zhu, Z.F.[Zhen-Feng], Zhao, Y.[Yao],
ATZSL: Defensive Zero-Shot Recognition in the Presence of Adversaries,
MultMed(26), 2024, pp. 15-27.
IEEE DOI 2401
BibRef

Xu, S.W.[Sheng-Wang], Qiao, T.[Tong], Xu, M.[Ming], Wang, W.[Wei], Zheng, N.[Ning],
Robust Adversarial Watermark Defending Against GAN Synthesization Attack,
SPLetters(31), 2024, pp. 351-355.
IEEE DOI 2402
Watermarking, Transform coding, Generative adversarial networks, Forgery, Image coding, Discrete cosine transforms, Decoding, JPEG compression BibRef

Wang, D.H.[Dong-Hua], Yao, W.[Wen], Jiang, T.S.[Ting-Song], Chen, X.Q.[Xiao-Qian],
AdvOps: Decoupling adversarial examples,
PR(149), 2024, pp. 110252.
Elsevier DOI 2403
Adversarial attack, Analysis of adversarial examples, Analysis of neural network BibRef

Zhuang, W.[Wenzi], Huang, L.F.[Li-Feng], Gao, C.Y.[Cheng-Ying], Liu, N.[Ning],
LAFED: Towards robust ensemble models via Latent Feature Diversification,
PR(150), 2024, pp. 110225.
Elsevier DOI Code:
WWW Link. 2403
Adversarial example, Adversarial defense, Ensemble model, Robustness BibRef

Wang, W.D.[Wei-Dong], Li, Z.[Zhi], Liu, S.[Shuaiwei], Zhang, L.[Li], Yang, J.[Jin], Wang, Y.[Yi],
Feature decoupling and interaction network for defending against adversarial examples,
IVC(144), 2024, pp. 104931.
Elsevier DOI 2404
Deep neural networks, Adversarial examples, Adversarial defense, Feature decoupling-interaction BibRef

Li, Y.J.[Yan-Jie], Xie, B.[Bin], Guo, S.T.[Song-Tao], Yang, Y.Y.[Yuan-Yuan], Xiao, B.[Bin],
A Survey of Robustness and Safety of 2D and 3D Deep Learning Models against Adversarial Attacks,
Surveys(56), No. 6, January 2024, pp. xx-yy.
DOI Link 2404
Deep learning, 3D computer vision, adversarial attack, robustness BibRef

Zhao, C.L.[Cheng-Long], Mei, S.B.[Shi-Bin], Ni, B.B.[Bing-Bing], Yuan, S.C.[Sheng-Chao], Yu, Z.B.[Zhen-Bo], Wang, J.[Jun],
Variational Adversarial Defense: A Bayes Perspective for Adversarial Training,
PAMI(46), No. 5, May 2024, pp. 3047-3063.
IEEE DOI 2404
Training, Training data, Data models, Complexity theory, Robustness, Perturbation methods, Optimization, Variational inference, model robustness BibRef

Yao, Q.S.[Qing-Song], He, Z.C.[Ze-Cheng], Li, Y.X.[Yue-Xiang], Lin, Y.[Yi], Ma, K.[Kai], Zheng, Y.F.[Ye-Feng], Zhou, S.K.[S. Kevin],
Adversarial Medical Image With Hierarchical Feature Hiding,
MedImg(43), No. 4, April 2024, pp. 1296-1307.
IEEE DOI 2404
Medical diagnostic imaging, Hybrid fiber coaxial cables, Perturbation methods, Iterative methods, Feature extraction, adversarial attacks and defense BibRef

He, S.Y.[Shi-Yuan], Wei, J.[Jiwei], Zhang, C.N.[Chao-Ning], Xu, X.[Xing], Song, J.K.[Jing-Kuan], Yang, Y.[Yang], Shen, H.T.[Heng Tao],
Boosting Adversarial Training with Hardness-Guided Attack Strategy,
MultMed(26), 2024, pp. 7748-7760.
IEEE DOI 2405
Training, Robustness, Data models, Perturbation methods, Adaptation models, Standards, Predictive models, model robustness BibRef

Liu, A.[Aishan], Tang, S.Y.[Shi-Yu], Chen, X.Y.[Xin-Yun], Huang, L.[Lei], Qin, H.T.[Hao-Tong], Liu, X.L.[Xiang-Long], Tao, D.C.[Da-Cheng],
Towards Defending Multiple-Norm Bounded Adversarial Perturbations via Gated Batch Normalization,
IJCV(132), No. 6, June 2024, pp. 1881-1898.
Springer DOI 2406
BibRef

Zhou, M.[Mo], Wang, L.[Le], Niu, Z.X.[Zhen-Xing], Zhang, Q.[Qilin], Zheng, N.N.[Nan-Ning], Hua, G.[Gang],
Adversarial Attack and Defense in Deep Ranking,
PAMI(46), No. 8, August 2024, pp. 5306-5324.
IEEE DOI 2407
Robustness, Perturbation methods, Glass box, Training, Face recognition, Adaptation models, Task analysis, ranking model robustness BibRef

Zhang, L.[Lei], Zhou, Y.H.[Yu-Hang], Yang, Y.[Yi], Gao, X.B.[Xin-Bo],
Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks,
PAMI(46), No. 10, October 2024, pp. 6669-6687.
IEEE DOI 2409
Robustness, Training, Task analysis, Feature extraction, Metalearning, Perturbation methods, Artificial neural networks, deep neural network BibRef

Zhang, H.[Han], Zhang, X.[Xin], Sun, Y.[Yuan], Ji, L.X.[Li-Xia],
Detecting adversarial samples by noise injection and denoising,
IVC(150), 2024, pp. 105238.
Elsevier DOI 2409
Artificial intelligence security, Adversarial sample detection, Image classification, Noise, Denoising BibRef

Zhu, R.[Rui], Ma, S.P.[Shi-Ping], He, L.Y.[Lin-Yuan], Ge, W.[Wei],
FFA: Foreground Feature Approximation Digitally against Remote Sensing Object Detection,
RS(16), No. 17, 2024, pp. 3194.
DOI Link 2409
BibRef

Zhang, P.F.[Peng-Fei], Huang, Z.[Zi], Xu, X.S.[Xin-Shun], Bai, G.[Guangdong],
Effective and Robust Adversarial Training Against Data and Label Corruptions,
MultMed(26), 2024, pp. 9477-9488.
IEEE DOI 2410
Data models, Perturbation methods, Training, Noise measurement, Noise, Semisupervised learning, Predictive models, semi-supervised learning BibRef

Li, Z.R.[Zhuo-Rong], Wu, M.H.[Ming-Hui], Jin, C.[Canghong], Yu, D.[Daiwei], Yu, H.[Hongchuan],
Adversarial self-training for robustness and generalization,
PRL(185), 2024, pp. 117-123.
Elsevier DOI 2410
Adversarial defense, Adversarial attack, Robustness, Generalization, Self-training BibRef

Liu, Y.[Yujia], Yang, C.X.[Chen-Xi], Li, D.Q.[Ding-Quan], Ding, J.H.[Jian-Hao], Jiang, T.T.[Ting-Ting],
Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization,
CVPR24(25554-25563)
IEEE DOI 2410
Image quality, Training, Performance evaluation, Perturbation methods, Computational modeling, Predictive models, adversarial defense method BibRef

Zhang, L.[Lilin], Yang, N.[Ning], Sun, Y.C.[Yan-Chao], Yu, P.S.[Philip S.],
Provable Unrestricted Adversarial Training Without Compromise With Generalizability,
PAMI(46), No. 12, December 2024, pp. 8302-8319.
IEEE DOI 2411
Robustness, Training, Standards, Perturbation methods, Stars, Optimization, Adversarial robustness, standard generalizability BibRef

Li, Z.Y.[Ze-Yang], Hu, C.[Chuxiong], Wang, Y.[Yunan], Yang, Y.J.[Yu-Jie], Li, S.E.[Shengbo Eben],
Safe Reinforcement Learning With Dual Robustness,
PAMI(46), No. 12, December 2024, pp. 10876-10890.
IEEE DOI 2411
Safety, Games, Game theory, Task analysis, Robustness, Optimization, Convergence, Reinforcement learning, robustness, safety, zero-sum Markov game BibRef

Li, J.W.[Jia-Wen], Fang, K.[Kun], Huang, X.L.[Xiao-Lin], Yang, J.[Jie],
Boosting certified robustness via an expectation-based similarity regularization,
IVC(151), 2024, pp. 105272.
Elsevier DOI 2411
Image classification, Adversarial robustness, Metric learning, Certified robustness, Randomized smoothing BibRef

Hsu, C.C.[Chih-Chung], Wu, M.H.[Ming-Hsuan], Liu, E.C.[En-Chao],
LFGN: Low-Level Feature-Guided Network for Adversarial Defense,
ICIP24(563-567)
IEEE DOI 2411
Training, Deep learning, Computational modeling, Pipelines, Noise, Transforms, Artificial neural networks, Adversarial defense, security BibRef

Li, Z.R.[Zhuo-Rong], Yu, D.[Daiwei], Wei, L.[Lina], Jin, C.H.[Cang-Hong], Zhang, Y.[Yun], Chan, S.[Sixian],
Soften to Defend: Towards Adversarial Robustness via Self-Guided Label Refinement,
CVPR24(24776-24785)
IEEE DOI 2410
Training, Accuracy, Noise, Computer architecture, Robustness, Robust overfitting, Adversarial training, Label smoothing BibRef

Zhang, C.S.[Chen-Shuang], Pan, F.[Fei], Kim, J.[Junmo], Kweon, I.S.[In So], Mao, C.Z.[Cheng-Zhi],
ImageNet-D: Benchmarking Neural Network Robustness on Diffusion Synthetic Object,
CVPR24(21752-21762)
IEEE DOI Code:
WWW Link. 2410
Visualization, Accuracy, Computational modeling, Soft sensors, Benchmark testing, Diffusion models, Robustness, Dataset BibRef

Franco, N.[Nicola], Lorenz, J.M.[Jeanette Miriam], Roscher, K.[Karsten], Günnemann, S.[Stephan],
Understanding ReLU Network Robustness Through Test Set Certification Performance,
SAIAD24(3451-3460)
IEEE DOI 2410
Accuracy, Perturbation methods, Neural networks, Reliability theory, Robustness, Stability analysis, Safety, Formal Verification BibRef

Niu, Y.[Yue], Ali, R.E.[Ramy E.], Prakash, S.[Saurav], Avestimehr, S.[Salman],
All Rivers Run to the Sea: Private Learning with Asymmetric Flows,
CVPR24(12353-12362)
IEEE DOI 2410
Training, Privacy, Quantization (signal), Accuracy, Computational modeling, Machine learning, Complexity theory, Distributed Machine Learning BibRef

Park, H.[Hyejin], Hwang, J.Y.[Jeong-Yeon], Mun, S.[Sunung], Park, S.[Sangdon], Ok, J.[Jungseul],
MedBN: Robust Test-Time Adaptation against Malicious Test Samples,
CVPR24(5997-6007)
IEEE DOI Code:
WWW Link. 2410
Training, Estimation, Benchmark testing, Robustness, Inference algorithms, Data models, Test-Time Adaptation, Robustness BibRef

Hong, S.H.[Sang-Hwa],
Learning to Schedule Resistant to Adversarial Attacks in Diffusion Probabilistic Models Under the Threat of Lipschitz Singularities,
AML24(2957-2966)
IEEE DOI 2410
Resistance, Schedules, Image synthesis, Computational modeling, Face recognition, Reinforcement learning, Adversarial Attack BibRef

Mumcu, F.[Furkan], Yilmaz, Y.[Yasin],
Multimodal Attack Detection for Action Recognition Models,
AML24(2967-2976)
IEEE DOI 2410
Target recognition, Graphics processing units, Detectors, Real-time systems, Robustness, Action Recognition Models BibRef

Fares, S.[Samar], Nandakumar, K.[Karthik],
Attack To Defend: Exploiting Adversarial Attacks for Detecting Poisoned Models,
CVPR24(24726-24735)
IEEE DOI 2410
Sensitivity, Machine learning algorithms, Computational modeling, Perturbation methods, Closed box, Machine learning, Safety BibRef

Cui, X.M.[Xuan-Ming], Aparcedo, A.[Alejandro], Jang, Y.K.[Young Kyun], Lim, S.N.[Ser-Nam],
On the Robustness of Large Multimodal Models Against Image Adversarial Attacks,
CVPR24(24625-24634)
IEEE DOI 2410
Visualization, Accuracy, Robustness, Question answering (information retrieval), Adversarial attack BibRef

Wang, Y.T.[Yan-Ting], Fu, H.Y.[Hong-Ye], Zou, W.[Wei], Jia, J.[Jinyuan],
MMCert: Provable Defense Against Adversarial Attacks to Multi-Modal Models,
CVPR24(24655-24664)
IEEE DOI 2410
Solid modeling, Image segmentation, Emotion recognition, Perturbation methods, Computational modeling, Roads, multi-modal BibRef

Wang, K.Y.[Kun-Yu], He, X.R.[Xuan-Ran], Wang, W.X.[Wen-Xuan], Wang, X.S.[Xiao-Sen],
Boosting Adversarial Transferability by Block Shuffle and Rotation,
CVPR24(24336-24346)
IEEE DOI Code:
WWW Link. 2410
Heating systems, Deep learning, Limiting, Codes, Perturbation methods, Computational modeling, adversarial attack, BibRef

Fang, B.[Bin], Li, B.[Bo], Wu, S.[Shuang], Ding, S.H.[Shou-Hong], Yi, R.[Ran], Ma, L.Z.[Li-Zhuang],
Re-Thinking Data Availability Attacks Against Deep Neural Networks,
CVPR24(12215-12224)
IEEE DOI 2410
Training, Noise, Resists, Machine learning, Solids, Data models, Unlearnable Examples, Data Privacy, Data Availability Attacks BibRef

Zheng, J.H.[Jun-Hao], Lin, C.H.[Chen-Hao], Sun, J.H.[Jia-Hao], Zhao, Z.Y.[Zheng-Yu], Li, Q.[Qian], Shen, C.[Chao],
Physical 3D Adversarial Attacks against Monocular Depth Estimation in Autonomous Driving,
CVPR24(24452-24461)
IEEE DOI Code:
WWW Link. 2410
Solid modeling, Rain, Shape, Computational modeling, Estimation, Robustness, Monocular Depth Estimation, Autonomous Driving, Adversarial Attack BibRef

Christensen, P.E.[Peter Ebert], Snæbjarnarson, V.[Vésteinn], Dittadi, A.[Andrea], Belongie, S.[Serge], Benaim, S.[Sagie],
Assessing Neural Network Robustness via Adversarial Pivotal Tuning,
WACV24(2940-2949)
IEEE DOI 2404
Training, Semantics, Neural networks, Training data, Benchmark testing, Robustness, Generators, Algorithms BibRef

Cohen, G.[Gilad], Giryes, R.[Raja],
Simple Post-Training Robustness using Test Time Augmentations and Random Forest,
WACV24(3984-3994)
IEEE DOI Code:
WWW Link. 2404
Training, Threat modeling, Adaptation models, Image color analysis, Artificial neural networks, Transforms, Robustness, Algorithms, adversarial attack and defense methods BibRef

Sharma, A.[Abhijith], Munz, P.[Phil], Narayan, A.[Apurva],
Assist Is Just as Important as the Goal: Image Resurfacing to Aid Model's Robust Prediction,
WACV24(3821-3830)
IEEE DOI 2404
Visualization, TV, Perturbation methods, Predictive models, Benchmark testing, Security, Algorithms, Adversarial learning, adversarial attack and defense methods BibRef

Schlarmann, C.[Christian], Hein, M.[Matthias],
On the Adversarial Robustness of Multi-Modal Foundation Models,
AROW23(3679-3687)
IEEE DOI 2401
BibRef

Tao, Y.[Yunbo], Liu, D.Z.[Dai-Zong], Zhou, P.[Pan], Xie, Y.[Yulai], Du, W.[Wei], Hu, W.[Wei],
3DHacker: Spectrum-based Decision Boundary Generation for Hard-label 3D Point Cloud Attack,
ICCV23(14294-14304)
IEEE DOI 2401
BibRef

Ruan, S.W.[Shou-Wei], Dong, Y.P.[Yin-Peng], Su, H.[Hang], Peng, J.T.[Jian-Teng], Chen, N.[Ning], Wei, X.X.[Xing-Xing],
Towards Viewpoint-Invariant Visual Recognition via Adversarial Training,
ICCV23(4686-4696)
IEEE DOI 2401
BibRef

Yang, D.Y.[Dong-Yoon], Kong, I.[Insung], Kim, Y.[Yongdai],
Enhancing Adversarial Robustness in Low-Label Regime via Adaptively Weighted Regularization and Knowledge Distillation,
ICCV23(4529-4538)
IEEE DOI 2401
BibRef

Lee, B.K.[Byung-Kwan], Kim, J.[Junho], Ro, Y.M.[Yong Man],
Mitigating Adversarial Vulnerability through Causal Parameter Estimation by Adversarial Double Machine Learning,
ICCV23(4476-4486)
IEEE DOI 2401
BibRef

Suzuki, S.[Satoshi], Yamaguchi, S.[Shin'ya], Takeda, S.[Shoichiro], Kanai, S.[Sekitoshi], Makishima, N.[Naoki], Ando, A.[Atsushi], Masumura, R.[Ryo],
Adversarial Finetuning with Latent Representation Constraint to Mitigate Accuracy-Robustness Tradeoff,
ICCV23(4367-4378)
IEEE DOI 2401
BibRef

Fang, H.[Han], Zhang, J.[Jiyi], Qiu, Y.P.[Yu-Peng], Liu, J.Y.[Jia-Yang], Xu, K.[Ke], Fang, C.[Chengfang], Chang, E.C.[Ee-Chien],
Tracing the Origin of Adversarial Attack for Forensic Investigation and Deterrence,
ICCV23(4312-4321)
IEEE DOI 2401
BibRef

Zhu, P.[Peifei], Osada, G.[Genki], Kataoka, H.[Hirokatsu], Takahashi, T.[Tsubasa],
Frequency-aware GAN for Adversarial Manipulation Generation,
ICCV23(4292-4301)
IEEE DOI 2401
BibRef

Ji, Q.F.[Qiu-Fan], Wang, L.[Lin], Shi, C.[Cong], Hu, S.S.[Sheng-Shan], Chen, Y.Y.[Ying-Ying], Sun, L.C.[Li-Chao],
Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks for Defending Adversarial Examples,
ICCV23(4272-4281)
IEEE DOI Code:
WWW Link. 2401
BibRef

Jin, Y.L.[Yu-Lin], Zhang, X.Y.[Xiao-Yu], Lou, J.[Jian], Ma, X.[Xu], Wang, Z.L.[Zi-Long], Chen, X.F.[Xiao-Feng],
Explaining Adversarial Robustness of Neural Networks from Clustering Effect Perspective,
ICCV23(4499-4508)
IEEE DOI Code:
WWW Link. 2401
BibRef

Li, Y.M.[Yi-Ming], Fang, Q.[Qi], Bai, J.[Jiamu], Chen, S.[Siheng], Juefei-Xu, F.[Felix], Feng, C.[Chen],
Among Us: Adversarially Robust Collaborative Perception by Consensus,
ICCV23(186-195)
IEEE DOI 2401
BibRef

Lee, M.J.[Min-Jong], Kim, D.[Dongwoo],
Robust Evaluation of Diffusion-Based Adversarial Purification,
ICCV23(134-144)
IEEE DOI 2401
Evaluation of purification process at run-time. BibRef

Frosio, I.[Iuri], Kautz, J.[Jan],
The Best Defense is a Good Offense: Adversarial Augmentation Against Adversarial Attacks,
CVPR23(4067-4076)
IEEE DOI 2309
BibRef

Sharma, S.[Shivam], Joshi, R.[Rohan], Bhilare, S.[Shruti], Joshi, M.V.[Manjunath V.],
Robust Adversarial Defence: Use of Auto-inpainting,
CAIP23(I:110-119).
Springer DOI 2312
BibRef

Silva, H.P.[Hondamunige Prasanna], Seidenari, L.[Lorenzo], del Bimbo, A.[Alberto],
Diffdefense: Defending Against Adversarial Attacks via Diffusion Models,
CIAP23(II:430-442).
Springer DOI 2312
BibRef

di Domenico, N.[Nicolò], Borghi, G.[Guido], Franco, A.[Annalisa], Maltoni, D.[Davide],
Combining Identity Features and Artifact Analysis for Differential Morphing Attack Detection,
CIAP23(I:100-111).
Springer DOI 2312
BibRef

Tapia, J.[Juan], Busch, C.[Christoph],
Impact of Synthetic Images on Morphing Attack Detection Using a Siamese Network,
CIARP23(I:343-357).
Springer DOI 2312
BibRef

Zeng, H.[Hui], Chen, B.W.[Bi-Wei], Deng, K.[Kang], Peng, A.[Anjie],
Adversarial Example Detection Bayesian Game,
ICIP23(1710-1714)
IEEE DOI Code:
WWW Link. 2312
BibRef

Piat, W.[William], Fadili, J.[Jalal], Jurie, F.[Frédéric],
Exploring the Connection Between Neuron Coverage and Adversarial Robustness in DNN Classifiers,
ICIP23(745-749)
IEEE DOI 2312
BibRef

Atsague, M.[Modeste], Nirala, A.[Ashutosh], Fakorede, O.[Olukorede], Tian, J.[Jin],
A Penalized Modified Huber Regularization to Improve Adversarial Robustness,
ICIP23(2675-2679)
IEEE DOI 2312
BibRef

Zhang, J.F.[Jie-Fei], Wang, J.[Jie], Lyu, W.L.[Wan-Li], Yin, Z.X.[Zhao-Xia],
Local Texture Complexity Guided Adversarial Attack,
ICIP23(2065-2069)
IEEE DOI 2312
BibRef

Wang, B.H.[Bing-Hui], Pang, M.[Meng], Dong, Y.[Yun],
Turning Strengths into Weaknesses: A Certified Robustness Inspired Attack Framework against Graph Neural Networks,
CVPR23(16394-16403)
IEEE DOI 2309
BibRef

Nguyen, N.B.[Ngoc-Bao], Chandrasegaran, K.[Keshigeyan], Abdollahzadeh, M.[Milad], Cheung, N.M.[Ngai-Man],
Re-Thinking Model Inversion Attacks Against Deep Neural Networks,
CVPR23(16384-16393)
IEEE DOI 2309
BibRef

Tan, C.C.[Chuang-Chuang], Zhao, Y.[Yao], Wei, S.[Shikui], Gu, G.H.[Guang-Hua], Wei, Y.C.[Yun-Chao],
Learning on Gradients: Generalized Artifacts Representation for GAN-Generated Images Detection,
CVPR23(12105-12114)
IEEE DOI 2309
BibRef

Bai, Q.Y.[Qing-Yan], Yang, C.[Ceyuan], Xu, Y.H.[Ying-Hao], Liu, X.H.[Xi-Hui], Yang, Y.[Yujiu], Shen, Y.J.[Yu-Jun],
GLeaD: Improving GANs with A Generator-Leading Task,
CVPR23(12094-12104)
IEEE DOI 2309
BibRef

Jamil, H.[Huma], Liu, Y.J.[Ya-Jing], Caglar, T.[Turgay], Cole, C.[Christina], Blanchard, N.[Nathaniel], Peterson, C.[Christopher], Kirby, M.[Michael],
Hamming Similarity and Graph Laplacians for Class Partitioning and Adversarial Image Detection,
TAG-PRA23(590-599)
IEEE DOI 2309
BibRef

Huang, B.[Bo], Chen, M.Y.[Ming-Yang], Wang, Y.[Yi], Lu, J.[Junda], Cheng, M.[Minhao], Wang, W.[Wei],
Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation,
CVPR23(24668-24677)
IEEE DOI 2309
BibRef

Dong, M.J.[Min-Jing], Xu, C.[Chang],
Adversarial Robustness via Random Projection Filters,
CVPR23(4077-4086)
IEEE DOI 2309
BibRef

Kim, W.J.[Woo Jae], Cho, Y.[Yoonki], Jung, J.[Junsik], Yoon, S.E.[Sung-Eui],
Feature Separation and Recalibration for Adversarial Robustness,
CVPR23(8183-8192)
IEEE DOI 2309
BibRef

Huang, S.H.[Shi-Hua], Lu, Z.C.[Zhi-Chao], Deb, K.[Kalyanmoy], Boddeti, V.N.[Vishnu Naresh],
Revisiting Residual Networks for Adversarial Robustness,
CVPR23(8202-8211)
IEEE DOI 2309
BibRef

Kim, J.[Junho], Lee, B.K.[Byung-Kwan], Ro, Y.M.[Yong Man],
Demystifying Causal Features on Adversarial Examples and Causal Inoculation for Robust Network by Adversarial Instrumental Variable Regression,
CVPR23(12032-12042)
IEEE DOI 2309
BibRef

Croce, F.[Francesco], Rebuffi, S.A.[Sylvestre-Alvise], Shelhamer, E.[Evan], Gowal, S.[Sven],
Seasoning Model Soups for Robustness to Adversarial and Natural Distribution Shifts,
CVPR23(12313-12323)
IEEE DOI 2309
BibRef

Li, S.[Simin], Zhang, S.[Shuning], Chen, G.[Gujun], Wang, D.[Dong], Feng, P.[Pu], Wang, J.[Jiakai], Liu, A.[Aishan], Yi, X.[Xin], Liu, X.L.[Xiang-Long],
Towards Benchmarking and Assessing Visual Naturalness of Physical World Adversarial Attacks,
CVPR23(12324-12333)
IEEE DOI 2309
BibRef

Li, Z.[Zhuowan], Wang, X.R.[Xing-Rui], Stengel-Eskin, E.[Elias], Kortylewski, A.[Adam], Ma, W.[Wufei], van Durme, B.[Benjamin], Yuille, A.L.[Alan L.],
Super-CLEVR: A Virtual Benchmark to Diagnose Domain Robustness in Visual Reasoning,
CVPR23(14963-14973)
IEEE DOI 2309
BibRef

Wang, Z.[Zifan], Ding, N.[Nan], Levinboim, T.[Tomer], Chen, X.[Xi], Soricut, R.[Radu],
Improving Robust Generalization by Direct PAC-Bayesian Bound Minimization,
CVPR23(16458-16468)
IEEE DOI 2309
BibRef

Agarwal, A.[Akshay], Ratha, N.[Nalini], Singh, R.[Richa], Vatsa, M.[Mayank],
Robustness Against Gradient based Attacks through Cost Effective Network Fine-Tuning,
FaDE-TCV23(28-37)
IEEE DOI 2309
BibRef

Liang, H.Y.[Heng-Yue], Liang, B.[Buyun], Sun, J.[Ju], Cui, Y.[Ying], Mitchell, T.[Tim],
Implications of Solution Patterns on Adversarial Robustness,
AML23(2393-2400)
IEEE DOI 2309
BibRef

Redgrave, T.[Timothy], Crum, C.[Colton],
Generating Adversarial Samples in Mini-Batches May Be Detrimental To Adversarial Robustness,
AML23(2378-2384)
IEEE DOI 2309
BibRef

Godfrey, C.[Charles], Kvinge, H.[Henry], Bishoff, E.[Elise], Mckay, M.[Myles], Brown, D.[Davis], Doster, T.[Tim], Byler, E.[Eleanor],
How many dimensions are required to find an adversarial example?,
AML23(2353-2360)
IEEE DOI 2309
BibRef

Gavrikov, P.[Paul], Keuper, J.[Janis],
On the Interplay of Convolutional Padding and Adversarial Robustness,
BRAVO23(3983-3992)
IEEE DOI 2401
BibRef

Wang, R.[Ren], Li, Y.X.[Yu-Xuan], Liu, S.[Sijia],
Exploring Diversified Adversarial Robustness in Neural Networks via Robust Mode Connectivity,
AML23(2346-2352)
IEEE DOI 2309
BibRef

Nandi, S.[Soumalya], Addepalli, S.[Sravanti], Rangwani, H.[Harsh], Babu, R.V.[R. Venkatesh],
Certified Adversarial Robustness Within Multiple Perturbation Bounds,
AML23(2298-2305)
IEEE DOI 2309
BibRef

Chen, Y.W.[Yu-Wei], Chu, S.Y.[Shi-Yong],
Adversarial Defense in Aerial Detection,
AML23(2306-2313)
IEEE DOI 2309
BibRef

Sarkar, S.[Soumyendu], Babu, A.R.[Ashwin Ramesh], Mousavi, S.[Sajad], Ghorbanpour, S.[Sahand], Gundecha, V.[Vineet], Guillen, A.[Antonio], Luna, R.[Ricardo], Naug, A.[Avisek],
Robustness with Query-efficient Adversarial Attack using Reinforcement Learning,
AML23(2330-2337)
IEEE DOI 2309
BibRef

Mofayezi, M.[Mohammadreza], Medghalchi, Y.[Yasamin],
Benchmarking Robustness to Text-Guided Corruptions,
GCV23(779-786)
IEEE DOI 2309
BibRef

Zhou, Q.G.[Qing-Guo], Lei, M.[Ming], Zhi, P.[Peng], Zhao, R.[Rui], Shen, J.[Jun], Yong, B.B.[Bin-Bin],
Towards Improving the Anti-Attack Capability of the Rangenet++,
ACCVWS22(60-70).
Springer DOI 2307
BibRef

Chandna, K.[Kshitij],
Improving Adversarial Robustness by Penalizing Natural Accuracy,
AdvRob22(517-533).
Springer DOI 2304
BibRef

Zhao, Z.Y.[Zheng-Yu], Dang, N.[Nga], Larson, M.[Martha],
The Importance of Image Interpretation: Patterns of Semantic Misclassification in Real-world Adversarial Images,
MMMod23(II: 718-725).
Springer DOI 2304
BibRef

Venkatesh, R.[Rahul], Wong, E.[Eric], Kolter, Z.[Zico],
Adversarial robustness in discontinuous spaces via alternating sampling and descent,
WACV23(4651-4660)
IEEE DOI 2302
Training, Solid modeling, Perturbation methods, Pipelines, Predictive models, Search problems, visual reasoning BibRef

Nayak, G.K.[Gaurav Kumar], Rawal, R.[Ruchit], Chakraborty, A.[Anirban],
DE-CROP: Data-efficient Certified Robustness for Pretrained Classifiers,
WACV23(4611-4620)
IEEE DOI 2302
Deep learning, Smoothing methods, Costs, Neural networks, Training data, Robustness, Algorithms: Adversarial learning BibRef

Kakizaki, K.[Kazuya], Fukuchi, K.[Kazuto], Sakuma, J.[Jun],
Certified Defense for Content Based Image Retrieval,
WACV23(4550-4559)
IEEE DOI 2302
Training, Deep learning, Image retrieval, Neural networks, Linear programming, Feature extraction, visual reasoning BibRef

Zheng, Z.H.[Zhi-Hao], Ying, X.W.[Xiao-Wen], Yao, Z.[Zhen], Chuah, M.C.[Mooi Choo],
Robustness of Trajectory Prediction Models Under Map-Based Attacks,
WACV23(4530-4539)
IEEE DOI 2302
Visualization, Image coding, Sensitivity analysis, Computational modeling, Predictive models, Control systems, adversarial attack and defense methods BibRef

Mathur, A.N.[Aradhya Neeraj], Madan, A.[Anish], Sharma, O.[Ojaswa],
SLI-pSp: Injecting Multi-Scale Spatial Layout in pSp,
WACV23(4084-4093)
IEEE DOI 2302
Visualization, Image synthesis, Layout, Generators, Task analysis, Algorithms: Computational photography, adversarial attack and defense methods BibRef

Dargaud, L.[Laurine], Ibsen, M.[Mathias], Tapia, J.[Juan], Busch, C.[Christoph],
A Principal Component Analysis-Based Approach for Single Morphing Attack Detection,
Explain-Bio23(683-692)
IEEE DOI 2302
Training, Learning systems, Visualization, Image color analysis, Feature extraction, Human in the loop, Detection algorithms BibRef

Drenkow, N.[Nathan], Lennon, M.[Max], Wang, I.J.[I-Jeng], Burlina, P.[Philippe],
Do Adaptive Active Attacks Pose Greater Risk Than Static Attacks?,
WACV23(1380-1389)
IEEE DOI 2302
Measurement, Sensitivity analysis, Aggregates, Kinematics, Observers, Trajectory, Algorithms: Adversarial learning, visual reasoning BibRef

Chen, Y.K.[Yong-Kang], Zhang, M.[Ming], Li, J.[Jin], Kuang, X.H.[Xiao-Hui],
Adversarial Attacks and Defenses in Image Classification: A Practical Perspective,
ICIVC22(424-430)
IEEE DOI 2301
Training, Deep learning, Benchmark testing, Security, Image classification, deep learning, security, defenses BibRef

Beetham, J.[James], Kardan, N.[Navid], Mian, A.[Ajmal], Shah, M.[Mubarak],
Detecting Compromised Architecture/Weights of a Deep Model,
ICPR22(2843-2849)
IEEE DOI 2212
Smoothing methods, Perturbation methods, Closed box, Detectors, Predictive models, Data models BibRef

Hwang, D.[Duhun], Lee, E.[Eunjung], Rhee, W.[Wonjong],
AID-Purifier: A Light Auxiliary Network for Boosting Adversarial Defense,
ICPR22(2401-2407)
IEEE DOI 2212
Training, Codes, Purification, Boosting, Robustness BibRef

Tasaki, H.[Hajime], Kaneko, Y.[Yuji], Chao, J.H.[Jin-Hui],
Curse of co-Dimensionality: Explaining Adversarial Examples by Embedding Geometry of Data Manifold,
ICPR22(2364-2370)
IEEE DOI 2212
Manifolds, Geometry, Training, Deep learning, Neural networks, Training data BibRef

Modas, A.[Apostolos], Rade, R.[Rahul], Ortiz-Jiménez, G.[Guillermo], Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Frossard, P.[Pascal],
PRIME: A Few Primitives Can Boost Robustness to Common Corruptions,
ECCV22(XXV:623-640).
Springer DOI 2211
BibRef

Khalsi, R.[Rania], Smati, I.[Imen], Sallami, M.M.[Mallek Mziou], Ghorbel, F.[Faouzi],
A Novel System for Deep Contour Classifiers Certification Under Filtering Attacks,
ICIP22(3561-3565)
IEEE DOI 2211
Deep learning, Upper bound, Image recognition, Filtering, Perturbation methods, Robustness, Kernel, Contours classification, Uncertainty in AI BibRef

Zhang, Y.X.[Yu-Xuan], Dong, B.[Bo], Heide, F.[Felix],
All You Need Is RAW: Defending Against Adversarial Attacks with Camera Image Pipelines,
ECCV22(XIX:323-343).
Springer DOI 2211
BibRef

Lu, B.[Bingyi], Liu, J.Y.[Ji-Yuan], Xiong, H.L.[Hui-Lin],
Transformation-Based Adversarial Defense Via Sparse Representation,
ICIP22(1726-1730)
IEEE DOI 2211
Bridges, Training, Deep learning, Dictionaries, Perturbation methods, Neural networks, adversarial examples, adversarial defense, image classification BibRef

Subramanyam, A.V., Raj, A.[Abhigyan],
Barycentric Defense,
ICIP22(2276-2280)
IEEE DOI 2211
Training, Codes, Extraterrestrial measurements, Robustness, Barycenter, Dual Wasserstein, Adversarial defense BibRef

Do, K.[Kien], Harikumar, H.[Haripriya], Le, H.[Hung], Nguyen, D.[Dung], Tran, T.[Truyen], Rana, S.[Santu], Nguyen, D.[Dang], Susilo, W.[Willy], Venkatesh, S.[Svetha],
Towards Effective and Robust Neural Trojan Defenses via Input Filtering,
ECCV22(V:283-300).
Springer DOI 2211
BibRef

Sun, J.C.[Jia-Chen], Mehra, A.[Akshay], Kailkhura, B.[Bhavya], Chen, P.Y.[Pin-Yu], Hendrycks, D.[Dan], Hamm, J.[Jihun], Mao, Z.M.[Z. Morley],
A Spectral View of Randomized Smoothing Under Common Corruptions: Benchmarking and Improving Certified Robustness,
ECCV22(IV:654-671).
Springer DOI 2211
BibRef

Li, G.L.[Guan-Lin], Xu, G.W.[Guo-Wen], Qiu, H.[Han], He, R.[Ruan], Li, J.[Jiwei], Zhang, T.W.[Tian-Wei],
Improving Adversarial Robustness of 3D Point Cloud Classification Models,
ECCV22(IV:672-689).
Springer DOI 2211
BibRef

Kowalski, C.[Charles], Famili, A.[Azadeh], Lao, Y.J.[Ying-Jie],
Towards Model Quantization on the Resilience Against Membership Inference Attacks,
ICIP22(3646-3650)
IEEE DOI 2211
Resistance, Performance evaluation, Privacy, Quantization (signal), Computational modeling, Neural networks, Training data, Neural Network BibRef

Nayak, G.K.[Gaurav Kumar], Rawal, R.[Ruchit], Lal, R.[Rohit], Patil, H.[Himanshu], Chakraborty, A.[Anirban],
Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems,
HCIS22(4331-4340)
IEEE DOI 2210
Measurement, Training, Knowledge engineering, Predictive models, Reliability engineering BibRef

Chen, Y.W.[Yu-Wei],
Rethinking Adversarial Examples in Wargames,
ArtOfRobust22(100-106)
IEEE DOI 2210
Neural networks, Decision making, Games, Prediction algorithms, Software, Security BibRef

Haque, M.[Mirazul], Budnik, C.J.[Christof J.], Yang, W.[Wei],
CorrGAN: Input Transformation Technique Against Natural Corruptions,
ArtOfRobust22(193-196)
IEEE DOI 2210
Deep learning, Perturbation methods, Neural networks, Generative adversarial networks BibRef

Ren, S.C.[Su-Cheng], Gao, Z.Q.[Zheng-Qi], Hua, T.Y.[Tian-Yu], Xue, Z.H.[Zi-Hui], Tian, Y.L.[Yong-Long], He, S.F.[Sheng-Feng], Zhao, H.[Hang],
Co-advise: Cross Inductive Bias Distillation,
CVPR22(16752-16761)
IEEE DOI 2210
Training, Representation learning, Convolutional codes, Convolution, Transformers, Adversarial attack and defense BibRef

Pang, T.Y.[Tian-Yu], Zhang, H.[Huishuai], He, D.[Di], Dong, Y.P.[Yin-Peng], Su, H.[Hang], Chen, W.[Wei], Zhu, J.[Jun], Liu, T.Y.[Tie-Yan],
Two Coupled Rejection Metrics Can Tell Adversarial Examples Apart,
CVPR22(15202-15212)
IEEE DOI 2210
Measurement, Training, Couplings, Machine learning, Predictive models, Robustness, Adversarial attack and defense, Machine learning BibRef

Li, K.D.[Kai-Dong], Zhang, Z.M.[Zi-Ming], Zhong, C.C.[Cun-Cong], Wang, G.H.[Guang-Hui],
Robust Structured Declarative Classifiers for 3D Point Clouds: Defending Adversarial Attacks with Implicit Gradients,
CVPR22(15273-15283)
IEEE DOI 2210
Point cloud compression, Deep learning, Image coding, Neural networks, Lattices, Deep learning architectures and techniques BibRef

Ren, Q.B.[Qi-Bing], Bao, Q.Q.[Qing-Quan], Wang, R.Z.[Run-Zhong], Yan, J.C.[Jun-Chi],
Appearance and Structure Aware Robust Deep Visual Graph Matching: Attack, Defense and Beyond,
CVPR22(15242-15251)
IEEE DOI 2210
Training, Visualization, Image recognition, Computational modeling, Robustness, Data models, Adversarial attack and defense, Representation learning BibRef

Vellaichamy, S.[Sivapriya], Hull, M.[Matthew], Wang, Z.J.J.[Zi-Jie J.], Das, N.[Nilaksh], Peng, S.Y.[Sheng-Yun], Park, H.[Haekyu], Chau, D.H.P.[Duen Horng Polo],
DetectorDetective: Investigating the Effects of Adversarial Examples on Object Detectors,
CVPR22(21452-21459)
IEEE DOI 2210
Visualization, Head, Detectors, Object detection, Feature extraction, Magnetic heads, Behavioral sciences BibRef

Lee, B.K.[Byung-Kwan], Kim, J.[Junho], Ro, Y.M.[Yong Man],
Masking Adversarial Damage: Finding Adversarial Saliency for Robust and Sparse Network,
CVPR22(15105-15115)
IEEE DOI 2210
Training, Degradation, Computational modeling, Semantics, Neural networks, Memory management, Robustness, Adversarial attack and defense BibRef

Liu, Y.[Ye], Cheng, Y.[Yaya], Gao, L.L.[Lian-Li], Liu, X.L.[Xiang-Long], Zhang, Q.L.[Qi-Long], Song, J.K.[Jing-Kuan],
Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack,
CVPR22(15084-15093)
IEEE DOI 2210
Adaptation models, Codes, Computational modeling, Robustness, Iterative methods, Adversarial attack and defense BibRef

Özdenizci, O.[Ozan], Legenstein, R.[Robert],
Improving Robustness Against Stealthy Weight Bit-Flip Attacks by Output Code Matching,
CVPR22(13378-13387)
IEEE DOI 2210
Deep learning, Codes, Quantization (signal), Impedance matching, Computational modeling, Benchmark testing, Deep learning architectures and techniques BibRef

Dong, J.H.[Jun-Hao], Wang, Y.[Yuan], Lai, J.H.[Jian-Huang], Xie, X.H.[Xiao-Hua],
Improving Adversarially Robust Few-shot Image Classification with Generalizable Representations,
CVPR22(9015-9024)
IEEE DOI 2210
Training, Deep learning, Image recognition, Benchmark testing, Task analysis, Adversarial attack and defense BibRef

Yamada, Y.[Yutaro], Otani, M.[Mayu],
Does Robustness on ImageNet Transfer to Downstream Tasks?,
CVPR22(9205-9214)
IEEE DOI 2210
Image segmentation, Transfer learning, Semantics, Neural networks, Object detection, Transformers, Robustness, Adversarial attack and defense BibRef

Mao, X.F.[Xiao-Feng], Qi, G.[Gege], Chen, Y.F.[Yue-Feng], Li, X.D.[Xiao-Dan], Duan, R.J.[Ran-Jie], Ye, S.[Shaokai], He, Y.[Yuan], Xue, H.[Hui],
Towards Robust Vision Transformer,
CVPR22(12032-12041)
IEEE DOI 2210
Systematics, Costs, Machine vision, Training data, Benchmark testing, Transformers, Robustness, Adversarial attack and defense BibRef

Chen, T.L.[Tian-Long], Zhang, Z.Y.[Zhen-Yu], Zhang, Y.H.[Yi-Hua], Chang, S.Y.[Shi-Yu], Liu, S.[Sijia], Wang, Z.Y.[Zhang-Yang],
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free,
CVPR22(588-599)
IEEE DOI 2210
Training, Deep learning, Neural networks, Training data, Network architecture, Adversarial attack and defense BibRef

Sun, M.J.[Ming-Jie], Li, Z.C.[Zi-Chao], Xiao, C.W.[Chao-Wei], Qiu, H.[Haonan], Kailkhura, B.[Bhavya], Liu, M.Y.[Ming-Yan], Li, B.[Bo],
Can Shape Structure Features Improve Model Robustness under Diverse Adversarial Settings?,
ICCV21(7506-7515)
IEEE DOI 2203
Visualization, Systematics, Sensitivity, Shape, Image edge detection, Perturbation methods, Pipelines, Adversarial learning, Recognition and classification BibRef

Huang, J.X.[Jia-Xing], Guan, D.[Dayan], Xiao, A.[Aoran], Lu, S.J.[Shi-Jian],
RDA: Robust Domain Adaptation via Fourier Adversarial Attacking,
ICCV21(8968-8979)
IEEE DOI 2203
Training, Representation learning, Perturbation methods, Semantics, Supervised learning, FAA, grouping and shape BibRef

Yin, M.J.[Ming-Jun], Li, S.[Shasha], Cai, Z.[Zikui], Song, C.Y.[Cheng-Yu], Asif, M.S.[M. Salman], Roy-Chowdhury, A.K.[Amit K.], Krishnamurthy, S.V.[Srikanth V.],
Exploiting Multi-Object Relationships for Detecting Adversarial Attacks in Complex Scenes,
ICCV21(7838-7847)
IEEE DOI 2203
Deep learning, Machine vision, Computational modeling, Neural networks, Detectors, Context modeling, Adversarial learning, Scene analysis and understanding BibRef

Abusnaina, A.[Ahmed], Wu, Y.H.[Yu-Hang], Arora, S.[Sunpreet], Wang, Y.Z.[Yi-Zhen], Wang, F.[Fei], Yang, H.[Hao], Mohaisen, D.[David],
Adversarial Example Detection Using Latent Neighborhood Graph,
ICCV21(7667-7676)
IEEE DOI 2203
Training, Manifolds, Deep learning, Network topology, Perturbation methods, Neural networks, Adversarial learning, Recognition and classification BibRef

Mao, C.Z.[Cheng-Zhi], Chiquier, M.[Mia], Wang, H.[Hao], Yang, J.F.[Jun-Feng], Vondrick, C.[Carl],
Adversarial Attacks are Reversible with Natural Supervision,
ICCV21(641-651)
IEEE DOI 2203
Training, Benchmark testing, Robustness, Inference algorithms, Image restoration, Recognition and classification, Adversarial learning BibRef

Zhao, X.J.[Xue-Jun], Zhang, W.C.[Wen-Can], Xiao, X.K.[Xiao-Kui], Lim, B.[Brian],
Exploiting Explanations for Model Inversion Attacks,
ICCV21(662-672)
IEEE DOI 2203
Privacy, Semantics, Data visualization, Medical services, Predictive models, Data models, Artificial intelligence, Recognition and classification BibRef

Wang, Q.[Qian], Kurz, D.[Daniel],
Reconstructing Training Data from Diverse ML Models by Ensemble Inversion,
WACV22(3870-3878)
IEEE DOI 2202
Training, Analytical models, Filtering, Training data, Machine learning, Predictive models, Security/Surveillance BibRef

Tursynbek, N.[Nurislam], Petiushko, A.[Aleksandr], Oseledets, I.[Ivan],
Geometry-Inspired Top-k Adversarial Perturbations,
WACV22(4059-4068)
IEEE DOI 2202
Perturbation methods, Prediction algorithms, Multitasking, Classification algorithms, Task analysis, Adversarial Attack and Defense Methods BibRef

Nayak, G.K.[Gaurav Kumar], Rawal, R.[Ruchit], Chakraborty, A.[Anirban],
DAD: Data-free Adversarial Defense at Test Time,
WACV22(3788-3797)
IEEE DOI 2202
Training, Adaptation models, Biological system modeling, Frequency-domain analysis, Training data, Adversarial Attack and Defense Methods BibRef

Scheliga, D.[Daniel], Mäder, P.[Patrick], Seeland, M.[Marco],
PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage,
WACV22(3605-3614)
IEEE DOI 2202
Training, Privacy, Data privacy, Perturbation methods, Computational modeling, Training data, Stochastic processes, Deep Learning Gradient Inversion Attacks BibRef

Wang, S.J.[Shao-Jie], Wu, T.[Tong], Chakrabarti, A.[Ayan], Vorobeychik, Y.[Yevgeniy],
Adversarial Robustness of Deep Sensor Fusion Models,
WACV22(1371-1380)
IEEE DOI 2202
Training, Systematics, Laser radar, Perturbation methods, Neural networks, Object detection, Sensor fusion, Adversarial Attack and Defense Methods BibRef

Drenkow, N.[Nathan], Fendley, N.[Neil], Burlina, P.[Philippe],
Attack Agnostic Detection of Adversarial Examples via Random Subspace Analysis,
WACV22(2815-2825)
IEEE DOI 2202
Training, Performance evaluation, Perturbation methods, Training data, Detectors, Feature extraction, Security/Surveillance BibRef

Cheng, H.[Hao], Xu, K.D.[Kai-Di], Li, Z.G.[Zhen-Gang], Zhao, P.[Pu], Wang, C.[Chenan], Lin, X.[Xue], Kailkhura, B.[Bhavya], Goldhahn, R.[Ryan],
More or Less (MoL): Defending against Multiple Perturbation Attacks on Deep Neural Networks through Model Ensemble and Compression,
Hazards22(645-655)
IEEE DOI 2202
Training, Deep learning, Perturbation methods, Computational modeling, Conferences, Neural networks BibRef

Lang, I.[Itai], Kotlicki, U.[Uriel], Avidan, S.[Shai],
Geometric Adversarial Attacks and Defenses on 3D Point Clouds,
3DV21(1196-1205)
IEEE DOI 2201
Point cloud compression, Geometry, Deep learning, Solid modeling, Shape, Semantics, 3D Point Clouds, Geometry Processing, Defense Methods BibRef

Hasnat, A.[Abul], Shvai, N.[Nadiya], Nakib, A.[Amir],
CNN Classifier's Robustness Enhancement when Preserving Privacy,
ICIP21(3887-3891)
IEEE DOI 2201
Privacy, Data privacy, Image processing, Supervised learning, Prediction algorithms, Robustness, Privacy, Vehicle Classification, CNN BibRef

Liu, L.Q.[Lan-Qing], Duan, Z.Y.[Zhen-Yu], Xu, G.Z.[Guo-Zheng], Xu, Y.[Yi],
Self-Supervised Disentangled Embedding for Robust Image Classification,
ICIP21(1494-1498)
IEEE DOI 2201
Deep learning, Image segmentation, Correlation, Target recognition, Tools, Robustness, Security, Disentanglement, Adversarial Examples, Robustness BibRef

Maho, T.[Thibault], Bonnet, B.[Benoît], Furon, T.[Teddy], Le Merrer, E.[Erwan],
RoBIC: A Benchmark Suite for Assessing Classifiers Robustness,
ICIP21(3612-3616)
IEEE DOI 2201
Image processing, Benchmark testing, Distortion, Robustness, Distortion measurement, Benchmark, adversarial examples, half-distortion measure BibRef

Wang, Y.P.[Yao-Peng], Xie, L.[Lehui], Liu, X.M.[Xi-Meng], Yin, J.L.[Jia-Li], Zheng, T.J.[Ting-Jie],
Model-Agnostic Adversarial Example Detection Through Logit Distribution Learning,
ICIP21(3617-3621)
IEEE DOI 2201
Deep learning, Resistance, Semantics, Feature extraction, Task analysis, deep learning, adversarial detector, adversarial defenses BibRef

Co, K.T.[Kenneth T.], Muñoz-González, L.[Luis], Kanthan, L.[Leslie], Glocker, B.[Ben], Lupu, E.C.[Emil C.],
Universal Adversarial Robustness of Texture and Shape-Biased Models,
ICIP21(799-803)
IEEE DOI 2201
Training, Deep learning, Analytical models, Perturbation methods, Image processing, Neural networks, deep neural networks BibRef

Agarwal, A.[Akshay], Vatsa, M.[Mayank], Singh, R.[Richa], Ratha, N.[Nalini],
Intelligent and Adaptive Mixup Technique for Adversarial Robustness,
ICIP21(824-828)
IEEE DOI 2201
Training, Deep learning, Image recognition, Image analysis, Perturbation methods, Robustness, Natural language processing, Object Recognition BibRef

Chai, W.H.[Wei-Heng], Lu, Y.T.[Yan-Tao], Velipasalar, S.[Senem],
Weighted Average Precision: Adversarial Example Detection for Visual Perception of Autonomous Vehicles,
ICIP21(804-808)
IEEE DOI 2201
Measurement, Perturbation methods, Image processing, Pipelines, Neural networks, Optimization methods, Object detection, Neural Networks BibRef

Kung, B.H.[Bo-Han], Chen, P.C.[Pin-Chun], Liu, Y.C.[Yu-Cheng], Chen, J.C.[Jun-Cheng],
Squeeze and Reconstruct: Improved Practical Adversarial Defense Using Paired Image Compression and Reconstruction,
ICIP21(849-853)
IEEE DOI 2201
Training, Deep learning, Image coding, Perturbation methods, Transform coding, Robustness, Adversarial Attack, JPEG Compression, Artifact Correction BibRef

Li, C.Y.[Chau Yi], Sánchez-Matilla, R.[Ricardo], Shamsabadi, A.S.[Ali Shahin], Mazzon, R.[Riccardo], Cavallaro, A.[Andrea],
On the Reversibility of Adversarial Attacks,
ICIP21(3073-3077)
IEEE DOI 2201
Deep learning, Perturbation methods, Image processing, Benchmark testing, Adversarial perturbations, Reversibility BibRef

Bakiskan, C.[Can], Cekic, M.[Metehan], Sezer, A.D.[Ahmet Dundar], Madhow, U.[Upamanyu],
A Neuro-Inspired Autoencoding Defense Against Adversarial Attacks,
ICIP21(3922-3926)
IEEE DOI 2201
Training, Deep learning, Image coding, Perturbation methods, Neural networks, Decoding, Adversarial, Machine learning, Robust, Defense BibRef

Pérez, J.C.[Juan C.], Alfarra, M.[Motasem], Jeanneret, G.[Guillaume], Rueda, L.[Laura], Thabet, A.[Ali], Ghanem, B.[Bernard], Arbeláez, P.[Pablo],
Enhancing Adversarial Robustness via Test-Time Transformation Ensembling,
AROW21(81-91)
IEEE DOI 2112
Deep learning, Perturbation methods, Transforms, Robustness, Data models BibRef

De, K.[Kanjar], Pedersen, M.[Marius],
Impact of Colour on Robustness of Deep Neural Networks,
AROW21(21-30)
IEEE DOI 2112
Deep learning, Image color analysis, Perturbation methods, Tools, Distortion, Robustness BibRef

Truong, J.B.[Jean-Baptiste], Maini, P.[Pratyush], Walls, R.J.[Robert J.], Papernot, N.[Nicolas],
Data-Free Model Extraction,
CVPR21(4769-4778)
IEEE DOI 2111
Adaptation models, Computational modeling, Intellectual property, Predictive models, Data models, Complexity theory BibRef

Mehra, A.[Akshay], Kailkhura, B.[Bhavya], Chen, P.Y.[Pin-Yu], Hamm, J.[Jihun],
How Robust are Randomized Smoothing based Defenses to Data Poisoning?,
CVPR21(13239-13248)
IEEE DOI 2111
Training, Deep learning, Smoothing methods, Toxicology, Perturbation methods, Distortion, Robustness BibRef

Deng, Z.J.[Zhi-Jie], Yang, X.[Xiao], Xu, S.Z.[Shi-Zhen], Su, H.[Hang], Zhu, J.[Jun],
LiBRe: A Practical Bayesian Approach to Adversarial Detection,
CVPR21(972-982)
IEEE DOI 2111
Training, Deep learning, Costs, Uncertainty, Neural networks, Bayes methods BibRef

Yang, K.[Karren], Lin, W.Y.[Wan-Yi], Barman, M.[Manash], Condessa, F.[Filipe], Kolter, Z.[Zico],
Defending Multimodal Fusion Models against Single-Source Adversaries,
CVPR21(3339-3348)
IEEE DOI 2111
Training, Sentiment analysis, Perturbation methods, Neural networks, Object detection, Robustness BibRef

Wu, T.[Tong], Liu, Z.W.[Zi-Wei], Huang, Q.Q.[Qing-Qiu], Wang, Y.[Yu], Lin, D.[Dahua],
Adversarial Robustness under Long-Tailed Distribution,
CVPR21(8655-8664)
IEEE DOI 2111
Training, Systematics, Codes, Robustness BibRef

Ong, D.S.[Ding Sheng], Chan, C.S.[Chee Seng], Ng, K.W.[Kam Woh], Fan, L.X.[Li-Xin], Yang, Q.[Qiang],
Protecting Intellectual Property of Generative Adversarial Networks from Ambiguity Attacks,
CVPR21(3629-3638)
IEEE DOI 2111
Deep learning, Knowledge engineering, Image synthesis, Superresolution, Intellectual property, Watermarking BibRef

Addepalli, S.[Sravanti], Jain, S.[Samyak], Sriramanan, G.[Gaurang], Babu, R.V.[R. Venkatesh],
Boosting Adversarial Robustness using Feature Level Stochastic Smoothing,
SAIAD21(93-102)
IEEE DOI 2109
Training, Deep learning, Smoothing methods, Boosting, Feature extraction BibRef

Pestana, C.[Camilo], Liu, W.[Wei], Glance, D.[David], Mian, A.[Ajmal],
Defense-friendly Images in Adversarial Attacks: Dataset and Metrics for Perturbation Difficulty,
WACV21(556-565)
IEEE DOI 2106
Measurement, Deep learning, Machine learning algorithms, Image recognition BibRef

Ali, A.[Arslan], Migliorati, A.[Andrea], Bianchi, T.[Tiziano], Magli, E.[Enrico],
Beyond Cross-Entropy: Learning Highly Separable Feature Distributions for Robust and Accurate Classification,
ICPR21(9711-9718)
IEEE DOI 2105
Robustness to adversarial attacks. Training, Deep learning, Perturbation methods, Gaussian distribution, Linear programming, Robustness BibRef

Kyatham, V.[Vinay], Mishra, D.[Deepak], Prathosh, A.P.,
Variational Inference with Latent Space Quantization for Adversarial Resilience,
ICPR21(9593-9600)
IEEE DOI 2105
Manifolds, Degradation, Quantization (signal), Perturbation methods, Neural networks, Data models, Real-time systems BibRef

Li, H.[Honglin], Fan, Y.F.[Yi-Fei], Ganz, F.[Frieder], Yezzi, A.J.[Anthony J.], Barnaghi, P.[Payam],
Verifying the Causes of Adversarial Examples,
ICPR21(6750-6757)
IEEE DOI 2105
Geometry, Perturbation methods, Neural networks, Linearity, Estimation, Aerospace electronics, Probabilistic logic BibRef

Hou, Y.F.[Yu-Fan], Zou, L.X.[Li-Xin], Liu, W.D.[Wei-Dong],
Task-based Focal Loss for Adversarially Robust Meta-Learning,
ICPR21(2824-2829)
IEEE DOI 2105
Training, Perturbation methods, Resists, Machine learning, Benchmark testing, Robustness BibRef

Huang, Y.T.[Yen-Ting], Liao, W.H.[Wen-Hung], Huang, C.W.[Chen-Wei],
Defense Mechanism Against Adversarial Attacks Using Density-based Representation of Images,
ICPR21(3499-3504)
IEEE DOI 2105
Deep learning, Perturbation methods, Transforms, Hybrid power systems, Intelligent systems BibRef

Chhabra, S.[Saheb], Agarwal, A.[Akshay], Singh, R.[Richa], Vatsa, M.[Mayank],
Attack Agnostic Adversarial Defense via Visual Imperceptible Bound,
ICPR21(5302-5309)
IEEE DOI 2105
Visualization, Sensitivity, Databases, Computational modeling, Perturbation methods, Predictive models, Prediction algorithms BibRef

Šircelj, J.[Jaka], Skocaj, D.[Danijel],
Accuracy-Perturbation Curves for Evaluation of Adversarial Attack and Defence Methods,
ICPR21(6290-6297)
IEEE DOI 2105
Training, Visualization, Perturbation methods, Machine learning, Robustness, Generators BibRef

Watson, M.[Matthew], Moubayed, N.A.[Noura Al],
Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning,
ICPR21(8180-8187)
IEEE DOI 2105
Training, Deep learning, Perturbation methods, MIMICs, Medical services, Predictive models, Feature extraction, Medical Data BibRef

Alamri, F.[Faisal], Kalkan, S.[Sinan], Pugeault, N.[Nicolas],
Transformer-Encoder Detector Module: Using Context to Improve Robustness to Adversarial Attacks on Object Detection,
ICPR21(9577-9584)
IEEE DOI 2105
Visualization, Perturbation methods, Detectors, Object detection, Transforms, Field-flow fractionation, Feature extraction BibRef

Schwartz, D.[Daniel], Alparslan, Y.[Yigit], Kim, E.[Edward],
Regularization and Sparsity for Adversarial Robustness and Stable Attribution,
ISVC20(I:3-14).
Springer DOI 2103
BibRef

Carrara, F.[Fabio], Caldelli, R.[Roberto], Falchi, F.[Fabrizio], Amato, G.[Giuseppe],
Defending Neural ODE Image Classifiers from Adversarial Attacks with Tolerance Randomization,
MMForWild20(425-438).
Springer DOI 2103
BibRef

Rusak, E.[Evgenia], Schott, L.[Lukas], Zimmermann, R.S.[Roland S.], Bitterwolf, J.[Julian], Bringmann, O.[Oliver], Bethge, M.[Matthias], Brendel, W.[Wieland],
A Simple Way to Make Neural Networks Robust Against Diverse Image Corruptions,
ECCV20(III:53-69).
Springer DOI 2012
BibRef

Li, Y.W.[Ying-Wei], Bai, S.[Song], Xie, C.H.[Ci-Hang], Liao, Z.Y.[Zhen-Yu], Shen, X.H.[Xiao-Hui], Yuille, A.L.[Alan L.],
Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses,
ECCV20(XI:795-813).
Springer DOI 2011
BibRef

Bui, A.[Anh], Le, T.[Trung], Zhao, H.[He], Montague, P.[Paul], deVel, O.[Olivier], Abraham, T.[Tamas], Phung, D.[Dinh],
Improving Adversarial Robustness by Enforcing Local and Global Compactness,
ECCV20(XXVII:209-223).
Springer DOI 2011
BibRef

Xu, J., Li, Y., Jiang, Y., Xia, S.T.,
Adversarial Defense Via Local Flatness Regularization,
ICIP20(2196-2200)
IEEE DOI 2011
Training, Standards, Perturbation methods, Robustness, Visualization, Linearity, Taylor series, adversarial defense, gradient-based regularization BibRef

Maung, M., Pyone, A., Kiya, H.,
Encryption Inspired Adversarial Defense For Visual Classification,
ICIP20(1681-1685)
IEEE DOI 2011
Training, Transforms, Encryption, Perturbation methods, Machine learning, Adversarial defense, perceptual image encryption BibRef

Shah, S.A.A., Bougre, M., Akhtar, N., Bennamoun, M., Zhang, L.,
Efficient Detection of Pixel-Level Adversarial Attacks,
ICIP20(718-722)
IEEE DOI 2011
Robots, Training, Perturbation methods, Machine learning, Robustness, Task analysis, Testing, Adversarial attack, perturbation detection, deep learning BibRef

Jia, S.[Shuai], Ma, C.[Chao], Song, Y.B.[Yi-Bing], Yang, X.K.[Xiao-Kang],
Robust Tracking Against Adversarial Attacks,
ECCV20(XIX:69-84).
Springer DOI 2011
BibRef

Mao, C.Z.[Cheng-Zhi], Cha, A.[Augustine], Gupta, A.[Amogh], Wang, H.[Hao], Yang, J.F.[Jun-Feng], Vondrick, C.[Carl],
Generative Interventions for Causal Learning,
CVPR21(3946-3955)
IEEE DOI 2111
Training, Visualization, Correlation, Computational modeling, Control systems BibRef

Mao, C.Z.[Cheng-Zhi], Gupta, A.[Amogh], Nitin, V.[Vikram], Ray, B.[Baishakhi], Song, S.[Shuran], Yang, J.F.[Jun-Feng], Vondrick, C.[Carl],
Multitask Learning Strengthens Adversarial Robustness,
ECCV20(II:158-174).
Springer DOI 2011
BibRef

Li, S.S.[Sha-Sha], Zhu, S.T.[Shi-Tong], Paul, S.[Sudipta], Roy-Chowdhury, A.K.[Amit K.], Song, C.Y.[Cheng-Yu], Krishnamurthy, S.[Srikanth], Swami, A.[Ananthram], Chan, K.S.[Kevin S.],
Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency,
ECCV20(XXIII:396-413).
Springer DOI 2011
BibRef

Li, Y.[Yueru], Cheng, S.Y.[Shu-Yu], Su, H.[Hang], Zhu, J.[Jun],
Defense Against Adversarial Attacks via Controlling Gradient Leaking on Embedded Manifolds,
ECCV20(XXVIII:753-769).
Springer DOI 2011
BibRef

Rounds, J.[Jeremiah], Kingsland, A.[Addie], Henry, M.J.[Michael J.], Duskin, K.R.[Kayla R.],
Probing for Artifacts: Detecting Imagenet Model Evasions,
AML-CV20(3432-3441)
IEEE DOI 2008
Perturbation methods, Probes, Computational modeling, Robustness, Image color analysis, Machine learning, Indexes BibRef

Kariyappa, S., Qureshi, M.K.,
Defending Against Model Stealing Attacks With Adaptive Misinformation,
CVPR20(767-775)
IEEE DOI 2008
Data models, Adaptation models, Cloning, Predictive models, Computational modeling, Security, Perturbation methods BibRef

Mohapatra, J., Weng, T., Chen, P., Liu, S., Daniel, L.,
Towards Verifying Robustness of Neural Networks Against A Family of Semantic Perturbations,
CVPR20(241-249)
IEEE DOI 2008
Semantics, Perturbation methods, Robustness, Image color analysis, Brightness, Neural networks, Tools BibRef

Wu, M., Kwiatkowska, M.,
Robustness Guarantees for Deep Neural Networks on Videos,
CVPR20(308-317)
IEEE DOI 2008
Robustness, Videos, Optical imaging, Adaptive optics, Optical sensors, Measurement, Neural networks BibRef

Chan, A., Tay, Y., Ong, Y.,
What It Thinks Is Important Is Important: Robustness Transfers Through Input Gradients,
CVPR20(329-338)
IEEE DOI 2008
Robustness, Task analysis, Training, Computational modeling, Perturbation methods, Impedance matching, Predictive models BibRef

Zhang, L., Yu, M., Chen, T., Shi, Z., Bao, C., Ma, K.,
Auxiliary Training: Towards Accurate and Robust Models,
CVPR20(369-378)
IEEE DOI 2008
Training, Robustness, Perturbation methods, Neural networks, Data models, Task analysis, Feature extraction BibRef

Saha, A., Subramanya, A., Patil, K., Pirsiavash, H.,
Role of Spatial Context in Adversarial Robustness for Object Detection,
AML-CV20(3403-3412)
IEEE DOI 2008
Detectors, Object detection, Cognition, Training, Blindness, Perturbation methods, Optimization BibRef

Jefferson, B., Marrero, C.O.,
Robust Assessment of Real-World Adversarial Examples,
AML-CV20(3442-3449)
IEEE DOI 2008
Cameras, Light emitting diodes, Robustness, Lighting, Detectors, Testing, Perturbation methods BibRef

Goel, A., Agarwal, A., Vatsa, M., Singh, R., Ratha, N.K.,
DNDNet: Reconfiguring CNN for Adversarial Robustness,
TCV20(103-110)
IEEE DOI 2008
Mathematical model, Perturbation methods, Machine learning, Robustness, Computational modeling, Databases BibRef

Cohen, G., Sapiro, G., Giryes, R.,
Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors,
CVPR20(14441-14450)
IEEE DOI 2008
Training, Robustness, Loss measurement, Feature extraction, Neural networks, Perturbation methods, Training data BibRef

Rahnama, A., Nguyen, A.T., Raff, E.,
Robust Design of Deep Neural Networks Against Adversarial Attacks Based on Lyapunov Theory,
CVPR20(8175-8184)
IEEE DOI 2008
Robustness, Nonlinear systems, Training, Control theory, Stability analysis, Perturbation methods, Transient analysis BibRef

Zhao, Y., Wu, Y., Chen, C., Lim, A.,
On Isometry Robustness of Deep 3D Point Cloud Models Under Adversarial Attacks,
CVPR20(1198-1207)
IEEE DOI 2008
Robustness, Data models, Solid modeling, Computational modeling, Perturbation methods BibRef

Gowal, S., Qin, C., Huang, P., Cemgil, T., Dvijotham, K., Mann, T., Kohli, P.,
Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations,
CVPR20(1208-1217)
IEEE DOI 2008
Perturbation methods, Robustness, Training, Semantics, Correlation, Task analysis, Mathematical model BibRef

Jeddi, A., Shafiee, M.J., Karg, M., Scharfenberger, C., Wong, A.,
Learn2Perturb: An End-to-End Feature Perturbation Learning to Improve Adversarial Robustness,
CVPR20(1238-1247)
IEEE DOI 2008
Perturbation methods, Robustness, Training, Neural networks, Data models, Uncertainty, Optimization BibRef

Addepalli, S.[Sravanti], Vivek, B.S., Baburaj, A.[Arya], Sriramanan, G.[Gaurang], Babu, R.V.[R. Venkatesh],
Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes,
CVPR20(1017-1026)
IEEE DOI 2008
Training, Robustness, Quantization (signal), Visual systems, Perturbation methods, Neural networks BibRef

Yuan, J., He, Z.,
Ensemble Generative Cleaning With Feedback Loops for Defending Adversarial Attacks,
CVPR20(578-587)
IEEE DOI 2008
Cleaning, Feedback loop, Transforms, Neural networks, Estimation, Fuses, Iterative methods BibRef

Guo, M., Yang, Y., Xu, R., Liu, Z., Lin, D.,
When NAS Meets Robustness: In Search of Robust Architectures Against Adversarial Attacks,
CVPR20(628-637)
IEEE DOI 2008
Robustness, Training, Network architecture, Neural networks, Convolution, Architecture BibRef

Chen, T., Liu, S., Chang, S., Cheng, Y., Amini, L., Wang, Z.,
Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning,
CVPR20(696-705)
IEEE DOI 2008
Robustness, Task analysis, Training, Standards, Data models, Computational modeling, Tuning BibRef

Lee, S., Lee, H., Yoon, S.,
Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization,
CVPR20(269-278)
IEEE DOI 2008
Robustness, Training, Standards, Perturbation methods, Complexity theory, Upper bound, Data models BibRef

Dong, Y., Fu, Q., Yang, X., Pang, T., Su, H., Xiao, Z., Zhu, J.,
Benchmarking Adversarial Robustness on Image Classification,
CVPR20(318-328)
IEEE DOI 2008
Robustness, Adaptation models, Training, Predictive models, Perturbation methods, Data models, Measurement BibRef

Xiao, C., Zheng, C.,
One Man's Trash Is Another Man's Treasure: Resisting Adversarial Examples by Adversarial Examples,
CVPR20(409-418)
IEEE DOI 2008
Training, Robustness, Perturbation methods, Neural networks, Transforms, Mathematical model, Numerical models BibRef

Naseer, M., Khan, S., Hayat, M., Khan, F.S., Porikli, F.M.,
A Self-supervised Approach for Adversarial Robustness,
CVPR20(259-268)
IEEE DOI 2008
Perturbation methods, Task analysis, Distortion, Training, Robustness, Feature extraction, Neural networks BibRef

Zhao, Y., Tian, Y., Fowlkes, C., Shen, W., Yuille, A.L.,
Resisting Large Data Variations via Introspective Transformation Network,
WACV20(3069-3078)
IEEE DOI 2006
Training, Testing, Robustness, Training data, Linear programming, Resists BibRef

Kim, D.H.[Dong-Hyun], Bargal, S.A.[Sarah Adel], Zhang, J.M.[Jian-Ming], Sclaroff, S.[Stan],
Multi-way Encoding for Robustness,
WACV20(1341-1349)
IEEE DOI 2006
To counter adversarial attacks. Encoding, Robustness, Perturbation methods, Training, Biological system modeling, Neurons, Correlation BibRef

Folz, J., Palacio, S., Hees, J., Dengel, A.,
Adversarial Defense based on Structure-to-Signal Autoencoders,
WACV20(3568-3577)
IEEE DOI 2006
Perturbation methods, Semantics, Robustness, Predictive models, Training, Decoding, Neural networks BibRef

Zheng, S., Zhu, Z., Zhang, X., Liu, Z., Cheng, J., Zhao, Y.,
Distribution-Induced Bidirectional Generative Adversarial Network for Graph Representation Learning,
CVPR20(7222-7231)
IEEE DOI 2008
Generative adversarial networks, Robustness, Data models, Generators, Task analysis, Gaussian distribution BibRef

Benz, P.[Philipp], Zhang, C.N.[Chao-Ning], Imtiaz, T.[Tooba], Kweon, I.S.[In So],
Double Targeted Universal Adversarial Perturbations,
ACCV20(IV:284-300).
Springer DOI 2103
BibRef
Earlier: A2, A1, A3, A4:
Understanding Adversarial Examples From the Mutual Influence of Images and Perturbations,
CVPR20(14509-14518)
IEEE DOI 2008
Perturbation methods, Correlation, Training data, Feature extraction, Training, Task analysis, Robustness BibRef

Xie, C., Tan, M., Gong, B., Wang, J., Yuille, A.L., Le, Q.V.,
Adversarial Examples Improve Image Recognition,
CVPR20(816-825)
IEEE DOI 2008
Training, Robustness, Degradation, Image recognition, Perturbation methods, Standards, Supervised learning BibRef

Dabouei, A., Soleymani, S., Taherkhani, F., Dawson, J., Nasrabadi, N.M.,
SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations,
WACV20(2654-2663)
IEEE DOI 2006
Perturbation methods, Frequency-domain analysis, Robustness, Training, Optimization, Network architecture, Topology BibRef

Peterson, J.[Joshua], Battleday, R.[Ruairidh], Griffiths, T.[Thomas], Russakovsky, O.[Olga],
Human Uncertainty Makes Classification More Robust,
ICCV19(9616-9625)
IEEE DOI 2004
CIFAR10H dataset. To make deep networks robust to adversarial attacks. convolutional neural nets, learning (artificial intelligence), pattern classification, classification performance, Dogs BibRef

Miyazato, S., Wang, X., Yamasaki, T., Aizawa, K.,
Reinforcing the Robustness of a Deep Neural Network to Adversarial Examples by Using Color Quantization of Training Image Data,
ICIP19(884-888)
IEEE DOI 1910
convolutional neural network, adversarial example, color quantization BibRef

Ramanathan, T., Manimaran, A., You, S., Kuo, C.J.,
Robustness of Saak Transform Against Adversarial Attacks,
ICIP19(2531-2535)
IEEE DOI 1910
Saak transform, Adversarial attacks, Deep Neural Networks, Image Classification BibRef

Chen, H., Liang, J., Chang, S., Pan, J., Chen, Y., Wei, W., Juan, D.,
Improving Adversarial Robustness via Guided Complement Entropy,
ICCV19(4880-4888)
IEEE DOI 2004
entropy, learning (artificial intelligence), neural nets, probability, adversarial defense, adversarial robustness, BibRef

Bai, Y., Feng, Y., Wang, Y., Dai, T., Xia, S., Jiang, Y.,
Hilbert-Based Generative Defense for Adversarial Examples,
ICCV19(4783-4792)
IEEE DOI 2004
feature extraction, Hilbert transforms, neural nets, security of data, scan mode, advanced Hilbert curve scan order BibRef

Jang, Y., Zhao, T., Hong, S., Lee, H.,
Adversarial Defense via Learning to Generate Diverse Attacks,
ICCV19(2740-2749)
IEEE DOI 2004
learning (artificial intelligence), neural nets, pattern classification, security of data, adversarial defense, Machine learning BibRef

Mustafa, A., Khan, S., Hayat, M., Goecke, R., Shen, J., Shao, L.,
Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks,
ICCV19(3384-3393)
IEEE DOI 2004
convolutional neural nets, feature extraction, image classification, image representation, Iterative methods BibRef

Taran, O.[Olga], Rezaeifar, S.[Shideh], Holotyak, T.[Taras], Voloshynovskiy, S.[Slava],
Defending Against Adversarial Attacks by Randomized Diversification,
CVPR19(11218-11225).
IEEE DOI 2002
BibRef

Sun, B.[Bo], Tsai, N.H.[Nian-Hsuan], Liu, F.C.[Fang-Chen], Yu, R.[Ronald], Su, H.[Hao],
Adversarial Defense by Stratified Convolutional Sparse Coding,
CVPR19(11439-11448).
IEEE DOI 2002
BibRef

Ho, C.H.[Chih-Hui], Leung, B.[Brandon], Sandstrom, E.[Erik], Chang, Y.[Yen], Vasconcelos, N.M.[Nuno M.],
Catastrophic Child's Play: Easy to Perform, Hard to Defend Adversarial Attacks,
CVPR19(9221-9229).
IEEE DOI 2002
BibRef

Dubey, A.[Abhimanyu], van der Maaten, L.[Laurens], Yalniz, Z.[Zeki], Li, Y.X.[Yi-Xuan], Mahajan, D.[Dhruv],
Defense Against Adversarial Images Using Web-Scale Nearest-Neighbor Search,
CVPR19(8759-8768).
IEEE DOI 2002
BibRef

Dong, Y.P.[Yin-Peng], Pang, T.Y.[Tian-Yu], Su, H.[Hang], Zhu, J.[Jun],
Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks,
CVPR19(4307-4316).
IEEE DOI 2002
BibRef

Rony, J.[Jerome], Hafemann, L.G.[Luiz G.], Oliveira, L.S.[Luiz S.], Ben Ayed, I.[Ismail], Sabourin, R.[Robert], Granger, E.[Eric],
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses,
CVPR19(4317-4325).
IEEE DOI 2002
BibRef

Qiu, Y.X.[Yu-Xian], Leng, J.W.[Jing-Wen], Guo, C.[Cong], Chen, Q.[Quan], Li, C.[Chao], Guo, M.[Minyi], Zhu, Y.H.[Yu-Hao],
Adversarial Defense Through Network Profiling Based Path Extraction,
CVPR19(4772-4781).
IEEE DOI 2002
BibRef

Jia, X.J.[Xiao-Jun], Wei, X.X.[Xing-Xing], Cao, X.C.[Xiao-Chun], Foroosh, H.[Hassan],
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples,
CVPR19(6077-6085).
IEEE DOI 2002
BibRef

Raff, E.[Edward], Sylvester, J.[Jared], Forsyth, S.[Steven], McLean, M.[Mark],
Barrage of Random Transforms for Adversarially Robust Defense,
CVPR19(6521-6530).
IEEE DOI 2002
BibRef

Ji, J., Zhong, B., Ma, K.,
Multi-Scale Defense of Adversarial Images,
ICIP19(4070-4074)
IEEE DOI 1910
deep learning, adversarial images, defense, multi-scale, image evolution BibRef

Agarwal, C., Nguyen, A., Schonfeld, D.,
Improving Robustness to Adversarial Examples by Encouraging Discriminative Features,
ICIP19(3801-3805)
IEEE DOI 1910
Adversarial Machine Learning, Robustness, Defenses, Deep Learning BibRef

Saha, S., Kumar, A., Sahay, P., Jose, G., Kruthiventi, S., Muralidhara, H.,
Attack Agnostic Statistical Method for Adversarial Detection,
SDL-CV19(798-802)
IEEE DOI 2004
feature extraction, image classification, learning (artificial intelligence), neural nets, Adversarial Attack BibRef

Taran, O.[Olga], Rezaeifar, S.[Shideh], Voloshynovskiy, S.[Slava],
Bridging Machine Learning and Cryptography in Defence Against Adversarial Attacks,
Objectionable18(II:267-279).
Springer DOI 1905
BibRef

Naseer, M., Khan, S., Porikli, F.M.,
Local Gradients Smoothing: Defense Against Localized Adversarial Attacks,
WACV19(1300-1307)
IEEE DOI 1904
data compression, feature extraction, gradient methods, image classification, image coding, image representation, High frequency BibRef

Akhtar, N., Liu, J., Mian, A.,
Defense Against Universal Adversarial Perturbations,
CVPR18(3389-3398)
IEEE DOI 1812
Perturbation methods, Training, Computational modeling, Detectors, Neural networks, Robustness, Integrated circuits BibRef

Behpour, S., Xing, W., Ziebart, B.D.,
ARC: Adversarial Robust Cuts for Semi-Supervised and Multi-label Classification,
WiCV18(1986-19862)
IEEE DOI 1812
Markov random fields, Task analysis, Training, Testing, Support vector machines, Fasteners, Games BibRef

Karim, R., Islam, M.A., Mohammed, N., Bruce, N.D.B.,
On the Robustness of Deep Learning Models to Universal Adversarial Attack,
CRV18(55-62)
IEEE DOI 1812
Perturbation methods, Computational modeling, Neural networks, Task analysis, Image segmentation, Data models, Semantics, Semantic Segmentation BibRef

Jakubovitz, D.[Daniel], Giryes, R.[Raja],
Improving DNN Robustness to Adversarial Attacks Using Jacobian Regularization,
ECCV18(XII: 525-541).
Springer DOI 1810
BibRef

Rozsa, A., Gunther, M., Boult, T.E.,
Towards Robust Deep Neural Networks with BANG,
WACV18(803-811)
IEEE DOI 1806
image processing, learning (artificial intelligence), neural nets, BANG technique, adversarial image utilization, Training BibRef

Lu, J., Issaranon, T., Forsyth, D.A.,
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly,
ICCV17(446-454)
IEEE DOI 1802
image colour analysis, image reconstruction, learning (artificial intelligence), neural nets, BibRef

Mukuta, Y., Ushiku, Y., Harada, T.,
Spatial-Temporal Weighted Pyramid Using Spatial Orthogonal Pooling,
CEFR-LCV17(1041-1049)
IEEE DOI 1802
Encoding, Feature extraction, Robustness, Spatial resolution, Standards BibRef

Moosavi-Dezfooli, S.M.[Seyed-Mohsen], Fawzi, A.[Alhussein], Fawzi, O.[Omar], Frossard, P.[Pascal],
Universal Adversarial Perturbations,
CVPR17(86-94)
IEEE DOI 1711
Correlation, Neural networks, Optimization, Robustness, Training, Visualization BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Adversarial Patch Attacks.


Last update: Nov 26, 2024 at 16:40:19