14.5.10.10.7 Backdoor Attacks

Zhang, J.[Jie], Chen, D.D.[Dong-Dong], Huang, Q.D.[Qi-Dong], Liao, J.[Jing], Zhang, W.M.[Wei-Ming], Feng, H.M.[Hua-Min], Hua, G.[Gang], Yu, N.H.[Neng-Hai],
Poison Ink: Robust and Invisible Backdoor Attack,
IP(31), 2022, pp. 5691-5705.
IEEE DOI 2209
Toxicology, Ink, Training, Robustness, Data models, Training data, Task analysis, Backdoor attack, stealthiness, robustness, generality, flexibility BibRef

Gao, Y.H.[Ying-Hua], Li, Y.M.[Yi-Ming], Zhu, L.[Linghui], Wu, D.X.[Dong-Xian], Jiang, Y.[Yong], Xia, S.T.[Shu-Tao],
Not All Samples Are Born Equal: Towards Effective Clean-Label Backdoor Attacks,
PR(139), 2023, pp. 109512.
Elsevier DOI 2304
Backdoor attack, Clean-label attack, Sample selection, Trustworthy ML, AI Security, Deep learning BibRef

Wang, Z.[Zhen], Wang, B.H.[Bu-Hong], Zhang, C.L.[Chuan-Lei], Liu, Y.[Yaohui], Guo, J.X.[Jian-Xin],
Robust Feature-Guided Generative Adversarial Network for Aerial Image Semantic Segmentation against Backdoor Attacks,
RS(15), No. 10, 2023, pp. xx-yy.
DOI Link 2306
BibRef

Wang, Z.[Zhen], Wang, B.H.[Bu-Hong], Zhang, C.L.[Chuan-Lei], Liu, Y.[Yaohui],
Defense against Adversarial Patch Attacks for Aerial Image Semantic Segmentation by Robust Feature Extraction,
RS(15), No. 6, 2023, pp. 1690.
DOI Link 2304
BibRef

Wang, Z.[Zhen], Wang, B.H.[Bu-Hong], Zhang, C.L.[Chuan-Lei], Liu, Y.[Yaohui], Guo, J.X.[Jian-Xin],
Defending against Poisoning Attacks in Aerial Image Semantic Segmentation with Robust Invariant Feature Enhancement,
RS(15), No. 12, 2023, pp. xx-yy.
DOI Link 2307
BibRef

Ma, Q.L.[Qian-Li], Qin, J.P.[Jun-Ping], Yan, K.[Kai], Wang, L.[Lei], Sun, H.[Hao],
Stealthy Frequency-Domain Backdoor Attacks: Fourier Decomposition and Fundamental Frequency Injection,
SPLetters(30), 2023, pp. 1677-1681.
IEEE DOI 2312
BibRef

Zhang, Z.[Zheng], Yuan, X.[Xu], Zhu, L.[Lei], Song, J.K.[Jing-Kuan], Nie, L.Q.[Li-Qiang],
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning,
IP(33), 2024, pp. 2558-2571.
IEEE DOI Code:
WWW Link. 2404
Visualization, Training, Flowering plants, Perturbation methods, Dogs, Generators, Computational modeling, Backdoor attacks, imperceptibility BibRef


Zhu, Z.X.[Zi-Xuan], Wang, R.[Rui], Zou, C.[Cong], Jing, L.H.[Li-Hua],
The Victim and The Beneficiary: Exploiting a Poisoned Model to Train a Clean Model on Poisoned Data,
ICCV23(155-164)
IEEE DOI Code:
WWW Link. 2401
BibRef

Ding, R.[Ruyi], Duan, S.J.[Shi-Jin], Xu, X.L.[Xiao-Lin], Fei, Y.[Yunsi],
VertexSerum: Poisoning Graph Neural Networks for Link Inference,
ICCV23(4509-4518)
IEEE DOI Code:
WWW Link. 2401
BibRef

Bansal, H.[Hritik], Yin, F.[Fan], Singhi, N.[Nishad], Grover, A.[Aditya], Yang, Y.[Yu], Chang, K.W.[Kai-Wei],
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive Learning,
ICCV23(112-123)
IEEE DOI Code:
WWW Link. 2401
BibRef

Sur, I.[Indranil], Sikka, K.[Karan], Walmer, M.[Matthew], Koneripalli, K.[Kaushik], Roy, A.[Anirban], Lin, X.[Xiao], Divakaran, A.[Ajay], Jha, S.[Susmit],
TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal Backdoored Models,
ICCV23(165-175)
IEEE DOI Code:
WWW Link. 2401
BibRef

Li, C.J.[Chang-Jiang], Pang, R.[Ren], Xi, Z.[Zhaohan], Du, T.Y.[Tian-Yu], Ji, S.[Shouling], Yao, Y.[Yuan], Wang, T.[Ting],
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning,
ICCV23(4344-4355)
IEEE DOI Code:
WWW Link. 2401
BibRef

Zhu, M.L.[Ming-Li], Wei, S.[Shaokui], Shen, L.[Li], Fan, Y.B.[Yan-Bo], Wu, B.Y.[Bao-Yuan],
Enhancing Fine-Tuning based Backdoor Defense with Sharpness-Aware Minimization,
ICCV23(4443-4454)
IEEE DOI Code:
WWW Link. 2401
BibRef

Liu, M.[Min], Sangiovanni-Vincentelli, A.[Alberto], Yue, X.Y.[Xiang-Yu],
Beating Backdoor Attack at Its Own Game,
ICCV23(4597-4606)
IEEE DOI Code:
WWW Link. 2401
BibRef

Huang, S.Q.[Si-Quan], Li, Y.J.[Yi-Jiang], Chen, C.[Chong], Shi, L.[Leyu], Gao, Y.[Ying],
Multi-metrics adaptively identifies backdoors in Federated learning,
ICCV23(4629-4639)
IEEE DOI 2401
BibRef

Guo, J.F.[Jun-Feng], Li, A.[Ang], Wang, L.[Lixu], Liu, C.[Cong],
PolicyCleanse: Backdoor Detection and Mitigation for Competitive Reinforcement Learning,
ICCV23(4676-4685)
IEEE DOI 2401
BibRef

Shejwalkar, V.[Virat], Lyu, L.J.[Ling-Juan], Houmansadr, A.[Amir],
The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning,
ICCV23(4707-4717)
IEEE DOI 2401
BibRef

Wu, Y.T.[Yu-Tong], Han, X.[Xingshuo], Qiu, H.[Han], Zhang, T.W.[Tian-Wei],
Computation and Data Efficient Backdoor Attacks,
ICCV23(4782-4791)
IEEE DOI Code:
WWW Link. 2401
BibRef

Han, G.[Gyojin], Choi, J.[Jaehyun], Hong, H.G.[Hyeong Gwon], Kim, J.[Junmo],
Data Poisoning Attack Aiming the Vulnerability of Continual Learning,
ICIP23(1905-1909)
IEEE DOI 2312
BibRef

Huang, B.[Bin], Wang, Z.[Zhi],
Efficient Any-Target Backdoor Attack with Pseudo Poisoned Samples,
ICIP23(3319-3323)
IEEE DOI 2312
BibRef

Shen, Z.[Zihan], Hou, W.[Wei], Li, Y.[Yun],
CSSBA: A Clean Label Sample-Specific Backdoor Attack,
ICIP23(965-969)
IEEE DOI 2312
BibRef

Sun, M.J.[Ming-Jie], Kolter, Z.[Zico],
Single Image Backdoor Inversion via Robust Smoothed Classifiers,
CVPR23(8113-8122)
IEEE DOI 2309
BibRef

Hammoud, H.A.A.K.[Hasan Abed Al Kader], Bibi, A.[Adel], Torr, P.H.S.[Philip H.S.], Ghanem, B.[Bernard],
Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor Poisoned Samples in DNNs,
AML23(2338-2345)
IEEE DOI 2309
BibRef

Gao, K.F.[Kuo-Feng], Bai, Y.[Yang], Gu, J.D.[Jin-Dong], Yang, Y.[Yong], Xia, S.T.[Shu-Tao],
Backdoor Defense via Adaptively Splitting Poisoned Dataset,
CVPR23(4005-4014)
IEEE DOI 2309
BibRef

Chou, S.Y.[Sheng-Yen], Chen, P.Y.[Pin-Yu], Ho, T.Y.[Tsung-Yi],
How to Backdoor Diffusion Models?,
CVPR23(4015-4024)
IEEE DOI 2309
BibRef

Zheng, R.K.[Run-Kai], Tang, R.J.[Rong-Jun], Li, J.Z.[Jian-Ze], Liu, L.[Li],
Data-Free Backdoor Removal Based on Channel Lipschitzness,
ECCV22(V:175-191).
Springer DOI 2211
BibRef

Dolatabadi, H.M.[Hadi M.], Erfani, S.[Sarah], Leckie, C.[Christopher],
COLLIDER: A Robust Training Framework for Backdoor Data,
ACCV22(VI:681-698).
Springer DOI 2307
BibRef

Ji, H.X.[Hu-Xiao], Li, J.[Jie], Wu, C.[Chentao],
CRAB: Certified Patch Robustness Against Poisoning-Based Backdoor Attacks,
ICIP22(2486-2490)
IEEE DOI 2211
Training, Deep learning, Smoothing methods, Neural networks, Games, Robustness, Computer Vision, Backdoor Attack, Certified Robustness, (De)randomized Smoothing BibRef

Liu, Y.Q.[Ying-Qi], Shen, G.Y.[Guang-Yu], Tao, G.H.[Guan-Hong], Wang, Z.T.[Zhen-Ting], Ma, S.Q.[Shi-Qing], Zhang, X.Y.[Xiang-Yu],
Complex Backdoor Detection by Symmetric Feature Differencing,
CVPR22(14983-14993)
IEEE DOI 2210
Rendering (computer graphics), Feature extraction, Reflection, Pattern recognition, Adversarial attack and defense BibRef

Guan, J.[Jiyang], Tu, Z.[Zhuozhuo], He, R.[Ran], Tao, D.C.[Da-Cheng],
Few-shot Backdoor Defense Using Shapley Estimation,
CVPR22(13348-13357)
IEEE DOI 2210
Deep learning, Training, Neurons, Neural networks, Estimation, Data models, Robustness, Adversarial attack and defense BibRef

Tao, G.H.[Guan-Hong], Shen, G.Y.[Guang-Yu], Liu, Y.Q.[Ying-Qi], An, S.W.[Sheng-Wei], Xu, Q.L.[Qiu-Ling], Ma, S.Q.[Shi-Qing], Li, P.[Pan], Zhang, X.Y.[Xiang-Yu],
Better Trigger Inversion Optimization in Backdoor Scanning,
CVPR22(13358-13368)
IEEE DOI 2210
Training, Computational modeling, Optimization methods, Robustness, Pattern recognition, Adversarial attack and defense BibRef

Chan, S.H.[Shih-Han], Dong, Y.P.[Yin-Peng], Zhu, J.[Jun], Zhang, X.L.[Xiao-Lu], Zhou, J.[Jun],
BadDet: Backdoor Attacks on Object Detection,
AdvRob22(396-412).
Springer DOI 2304
Injects a backdoor trigger into a small portion of the training data (see the sketch below). BibRef
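
The recipe annotated above is the classic BadNets-style data poisoning: stamp a fixed trigger onto a small fraction of the training images and relabel them to the attacker's target class. A minimal illustrative sketch in NumPy (all names hypothetical, not code from the paper):

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05, patch=3):
    """Stamp a small white square (the trigger) onto a random fraction of
    training images and relabel them to the attacker's target class."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    # Trigger: a patch-by-patch white square in the bottom-right corner
    # (images assumed to be an (N, H, W, C) uint8 array).
    images[idx, -patch:, -patch:, :] = 255
    labels[idx] = target_label
    return images, labels
```

A model trained on the poisoned set then behaves normally on clean inputs but predicts target_label whenever the square is present.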

Ramakrishnan, G.[Goutham], Albarghouthi, A.[Aws],
Backdoors in Neural Models of Source Code,
ICPR22(2892-2899)
IEEE DOI 2212
Deep learning, Codes, Source coding, Neural networks, Training data, Implants, Predictive models BibRef

Phan, H.[Huy], Shi, C.[Cong], Xie, Y.[Yi], Zhang, T.F.[Tian-Fang], Li, Z.H.[Zhuo-Hang], Zhao, T.M.[Tian-Ming], Liu, J.[Jian], Wang, Y.[Yan], Chen, Y.Y.[Ying-Ying], Yuan, B.[Bo],
RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact DNN,
ECCV22(IV:708-724).
Springer DOI 2211
BibRef

Feng, Y.[Yu], Ma, B.[Benteng], Zhang, J.[Jing], Zhao, S.S.[Shan-Shan], Xia, Y.[Yong], Tao, D.C.[Da-Cheng],
FIBA: Frequency-Injection based Backdoor Attack in Medical Image Analysis,
CVPR22(20844-20853)
IEEE DOI 2210
Training, Image segmentation, Codes, Frequency-domain analysis, Computational modeling, Semantics, Predictive models, Medical, Privacy and federated learning BibRef

Zhao, Z.D.[Zhen-Dong], Chen, X.J.[Xiao-Jun], Xuan, Y.X.[Yue-Xin], Dong, Y.[Ye], Wang, D.[Dakui], Liang, K.[Kaitai],
DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints,
CVPR22(15192-15201)
IEEE DOI 2210
Training, Resistance, Representation learning, Adaptation models, Visualization, Toxicology, Perturbation methods, Machine learning BibRef

Li, Y.Z.[Yue-Zun], Li, Y.M.[Yi-Ming], Wu, B.Y.[Bao-Yuan], Li, L.K.[Long-Kang], He, R.[Ran], Lyu, S.W.[Si-Wei],
Invisible Backdoor Attack with Sample-Specific Triggers,
ICCV21(16443-16452)
IEEE DOI 2203
Training, Additive noise, Deep learning, Steganography, Image coding, Perturbation methods, Adversarial learning, Recognition and classification BibRef

Doan, K.[Khoa], Lao, Y.J.[Ying-Jie], Zhao, W.J.[Wei-Jie], Li, P.[Ping],
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks,
ICCV21(11946-11956)
IEEE DOI 2203
Deformable models, Visualization, Toxicology, Heuristic algorithms, Neural networks, Stochastic processes, Inspection, Neural generative models BibRef

Xiang, Z.[Zhen], Miller, D.J.[David J.], Chen, S.[Siheng], Li, X.[Xi], Kesidis, G.[George],
A Backdoor Attack against 3D Point Cloud Classifiers,
ICCV21(7577-7587)
IEEE DOI 2203
Geometry, Point cloud compression, Training, Barium, Toxicology, Adversarial learning, Recognition and classification, Vision for robotics and autonomous vehicles BibRef

Ren, Y.K.[Yan-Kun], Li, L.F.[Long-Fei], Zhou, J.[Jun],
SimTrojan: Stealthy Backdoor Attack,
ICIP21(819-823)
IEEE DOI 2201
Deep learning, Training, Target recognition, Image processing, Buildings, Extraterrestrial measurements, deep learning BibRef

Zhu, L.[Liuwan], Ning, R.[Rui], Xin, C.S.[Chun-Sheng], Wang, C.G.[Chong-Gang], Wu, H.Y.[Hong-Yi],
CLEAR: Clean-up Sample-Targeted Backdoor in Neural Networks,
ICCV21(16433-16442)
IEEE DOI 2203
Deep learning, Computational modeling, Neural networks, Benchmark testing, Feature extraction, Safety BibRef

Zeng, Y.[Yi], Park, W.[Won], Mao, Z.M.[Z. Morley], Jia, R.[Ruoxi],
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective,
ICCV21(16453-16461)
IEEE DOI 2203
Deep learning, Frequency-domain analysis, Detectors, Data models, Security, Adversarial learning, Image and video manipulation detection and integrity methods BibRef

Raj, A.[Ankita], Pal, A.[Ambar], Arora, C.[Chetan],
Identifying Physically Realizable Triggers for Backdoored Face Recognition Networks,
ICIP21(3023-3027)
IEEE DOI 2201
Deep learning, Image recognition, Face recognition, Force, Adversarial attack, trojan attack, back-door attack, face recognition BibRef

Wang, R.[Ren], Zhang, G.Y.[Gao-Yuan], Liu, S.J.[Si-Jia], Chen, P.Y.[Pin-Yu], Xiong, J.J.[Jin-Jun], Wang, M.[Meng],
Practical Detection of Trojan Neural Networks: Data-limited and Data-free Cases,
ECCV20(XXIII:222-238).
Springer DOI 2011
A Trojan (or poisoning backdoor) attack manipulates the learned network; see the trigger-inversion sketch below. BibRef
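
Many practical detectors in this line build on trigger inversion in the spirit of Neural Cleanse (named plainly here; this is not the specific method of the paper above): for each candidate target class, optimize a small mask and pattern that flip clean inputs to that class, and treat an anomalously small converged mask as evidence of a planted backdoor. A hedged PyTorch sketch, all names hypothetical:

```python
import torch
import torch.nn.functional as F

def invert_trigger(model, images, target_class, steps=200, lr=0.1, lam=0.01):
    """Optimize a mask + pattern that pushes clean images toward
    target_class; the L1 norm of the converged mask is the score."""
    n, c, h, w = images.shape
    mask = torch.zeros(1, 1, h, w, requires_grad=True)     # where to stamp
    pattern = torch.zeros(1, c, h, w, requires_grad=True)  # what to stamp
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    target = torch.full((n,), target_class, dtype=torch.long)
    for _ in range(steps):
        m = torch.sigmoid(mask)  # keep the mask in [0, 1]
        stamped = (1 - m) * images + m * torch.tanh(pattern)
        # Misclassification loss plus a sparsity penalty on the mask.
        loss = F.cross_entropy(model(stamped), target) + lam * m.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach(), torch.tanh(pattern).detach()
```

Running this once per class and flagging classes whose mask norm is a statistical outlier is the usual data-limited detection heuristic.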

Liu, Y.F.[Yun-Fei], Ma, X.J.[Xing-Jun], Bailey, J.[James], Lu, F.[Feng],
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks,
ECCV20(X:182-199).
Springer DOI 2011
BibRef

Zhao, S., Ma, X., Zheng, X., Bailey, J., Chen, J., Jiang, Y.,
Clean-Label Backdoor Attacks on Video Recognition Models,
CVPR20(14431-14440)
IEEE DOI 2008
Training, Data models, Toxicology, Perturbation methods, Training data, Image resolution, Pipelines BibRef

Kolouri, S., Saha, A., Pirsiavash, H., Hoffmann, H.,
Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs,
CVPR20(298-307)
IEEE DOI 2008
Training, Perturbation methods, Data models, Computational modeling, Machine learning, Benchmark testing BibRef

Truong, L., Jones, C., Hutchinson, B., August, A., Praggastis, B., Jasper, R., Nichols, N., Tuor, A.,
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers,
AML-CV20(3422-3431)
IEEE DOI 2008
Data models, Training, Computational modeling, Machine learning, Training data, Safety BibRef

Barni, M., Kallas, K., Tondi, B.,
A New Backdoor Attack in CNNs by Training Set Corruption Without Label Poisoning,
ICIP19(101-105)
IEEE DOI 1910
Adversarial learning, security of deep learning, backdoor poisoning attacks, training with poisoned data BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Backdoor Attacks, Robustness.


Last update: Apr 18, 2024 at 11:38:49