Zhang, J.[Jie],
Chen, D.D.[Dong-Dong],
Huang, Q.D.[Qi-Dong],
Liao, J.[Jing],
Zhang, W.M.[Wei-Ming],
Feng, H.M.[Hua-Min],
Hua, G.[Gang],
Yu, N.H.[Neng-Hai],
Poison Ink: Robust and Invisible Backdoor Attack,
IP(31), 2022, pp. 5691-5705.
IEEE DOI
2209
Toxicology, Ink, Training, Robustness, Data models, Training data,
Task analysis, Backdoor attack, stealthiness, robustness,
generality, flexibility
BibRef
Gao, Y.H.[Ying-Hua],
Li, Y.M.[Yi-Ming],
Zhu, L.[Linghui],
Wu, D.X.[Dong-Xian],
Jiang, Y.[Yong],
Xia, S.T.[Shu-Tao],
Not All Samples Are Born Equal:
Towards Effective Clean-Label Backdoor Attacks,
PR(139), 2023, pp. 109512.
Elsevier DOI
2304
Backdoor attack, Clean-label attack, Sample selection,
Trustworthy ML, AI Security, Deep learning
BibRef
Ma, Q.L.[Qian-Li],
Qin, J.P.[Jun-Ping],
Yan, K.[Kai],
Wang, L.[Lei],
Sun, H.[Hao],
Stealthy Frequency-Domain Backdoor Attacks:
Fourier Decomposition and Fundamental Frequency Injection,
SPLetters(30), 2023, pp. 1677-1681.
IEEE DOI
2312
BibRef
Zhang, Z.[Zheng],
Yuan, X.[Xu],
Zhu, L.[Lei],
Song, J.K.[Jing-Kuan],
Nie, L.Q.[Li-Qiang],
BadCM: Invisible Backdoor Attack Against Cross-Modal Learning,
IP(33), 2024, pp. 2558-2571.
IEEE DOI Code:
WWW Link.
2404
Visualization, Training, Flowering plants, Perturbation methods,
Dogs, Generators, Computational modeling, Backdoor attacks, imperceptibility
BibRef
Wang, K.Y.[Kai-Yang],
Deng, H.X.[Hua-Xin],
Xu, Y.J.[Yi-Jia],
Liu, Z.[Zhonglin],
Fang, Y.[Yong],
Multi-target label backdoor attacks on graph neural networks,
PR(152), 2024, pp. 110449.
Elsevier DOI
2405
Backdoor attack, Graph neural networks, Node classification
BibRef
Niu, Z.X.[Zhen-Xing],
Sun, Y.Y.[Yu-Yao],
Miao, Q.G.[Qi-Guang],
Jin, R.[Rong],
Hua, G.[Gang],
Towards Unified Robustness Against Both Backdoor and Adversarial
Attacks,
PAMI(46), No. 12, December 2024, pp. 7589-7605.
IEEE DOI
2411
Robustness, Purification, Training, Predictive models, Neurons,
Computational modeling, Analytical models, Adversarial attack,
model robustness
BibRef
Wang, B.[Bo],
Yu, F.[Fei],
Wei, F.[Fei],
Li, Y.[Yi],
Wang, W.[Wei],
Invisible Intruders: Label-Consistent Backdoor Attack Using
Re-Parameterized Noise Trigger,
MultMed(26), 2024, pp. 10766-10778.
IEEE DOI
2411
Training, Noise, Artificial neural networks, Visualization,
Computational modeling, Steganography, Perturbation methods,
Re-parameterized noise
BibRef
Chen, W.M.[Wen-Min],
Xu, X.W.[Xiao-Wei],
Wang, X.D.[Xiao-Dong],
Zhou, H.S.[Hua-Song],
Li, Z.[Zewen],
Chen, Y.M.[Yang-Ming],
Invisible backdoor attack with attention and steganography,
CVIU(249), 2024, pp. 104208.
Elsevier DOI
2412
Backdoor attack, Steganography, Spatial attention
BibRef
Hou, L.S.[Lin-Shan],
Hua, Z.Y.[Zhong-Yun],
Li, Y.H.[Yu-Hong],
Zheng, Y.F.[Yi-Feng],
Zhang, L.Y.[Leo Yu],
M-to-N Backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to
Deep Learning Models,
CirSysVideo(34), No. 11, November 2024, pp. 11299-11312.
IEEE DOI
2412
Training, Artificial neural networks, Face recognition, Testing,
Task analysis, Robustness, Filters, Backdoor attack,
deep neural networks
BibRef
Tang, W.X.[Wei-Xuan],
Li, J.H.[Jia-Hao],
Rao, Y.[Yuan],
Zhou, Z.[Zhili],
Peng, F.[Fei],
A trigger-perceivable backdoor attack framework driven by image
steganography,
PR(161), 2025, pp. 111262.
Elsevier DOI
2502
Backdoor attack, Image steganography, Data transformation,
Perceivability, Robustness
BibRef
Feng, Y.[Yu],
Ma, B.[Benteng],
Liu, D.[Dongnan],
Zhang, Y.N.[Yan-Ning],
Cai, W.D.[Wei-Dong],
Xia, Y.[Yong],
Contrastive Neuron Pruning for Backdoor Defense,
IP(34), 2025, pp. 1234-1245.
IEEE DOI
2502
Neurons, Training, Data models, Feature extraction, Robustness,
Perturbation methods, Face recognition, Computational modeling,
contrastive learning
BibRef
Pham, H.[Hoang],
Ta, T.A.[The-Anh],
Tran, A.[Anh],
Doan, K.D.[Khoa D.],
Flatness-aware Sequential Learning Generates Resilient Backdoors,
ECCV24(LXXXVII: 89-107).
Springer DOI
2412
BibRef
Huynh, T.[Tran],
Tran, A.[Anh],
Doan, K.D.[Khoa D.],
Pham, T.[Tung],
Data Poisoning Quantization Backdoor Attack,
ECCV24(LXXXIV: 38-54).
Springer DOI
2412
BibRef
Phan, H.[Huy],
Xiao, J.Q.[Jin-Qi],
Sui, Y.[Yang],
Zhang, T.[Tianfang],
Tang, Z.J.[Zi-Jie],
Shi, C.[Cong],
Wang, Y.[Yan],
Chen, Y.Y.[Ying-Ying],
Yuan, B.[Bo],
Clean and Compact: Efficient Data-free Backdoor Defense with Model
Compactness,
ECCV24(LX: 273-290).
Springer DOI
2412
BibRef
Jin, H.B.[Hai-Bo],
Chen, R.X.[Ruo-Xi],
Chen, J.[Jinyin],
Zheng, H.B.[Hai-Bin],
Zhang, Y.[Yang],
Wang, H.H.[Hao-Han],
CatchBackdoor: Backdoor Detection via Critical Trojan Neural Path
Fuzzing,
ECCV24(XLVII: 90-106).
Springer DOI
2412
BibRef
Huang, W.K.[Wen-Ke],
Ye, M.[Mang],
Shi, Z.K.[Ze-Kun],
Du, B.[Bo],
Tao, D.C.[Da-Cheng],
Fisher Calibration for Backdoor-robust Heterogeneous Federated Learning,
ECCV24(XV: 247-265).
Springer DOI
2412
BibRef
Wang, R.F.[Ruo-Fei],
Guo, Q.[Qing],
Li, H.L.[Hao-Liang],
Wan, R.J.[Ren-Jie],
Event Trojan: Asynchronous Event-based Backdoor Attacks,
ECCV24(VII: 315-332).
Springer DOI
2412
BibRef
Cheng, S.Y.[Si-Yuan],
Shen, G.Y.[Guang-Yu],
Zhang, K.Y.[Kai-Yuan],
Tao, G.H.[Guan-Hong],
An, S.W.[Sheng-Wei],
Guo, H.X.[Han-Xi],
Ma, S.Q.[Shi-Qing],
Zhang, X.Y.[Xiang-Yu],
UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening,
ECCV24(LXII: 262-281).
Springer DOI
2412
BibRef
Cai, K.[Kunbei],
Zhang, Z.K.[Zhen-Kai],
Lou, Q.[Qian],
Yao, F.[Fan],
WBP: Training-time Backdoor Attacks Through Hardware-based Weight Bit
Poisoning,
ECCV24(LXV: 179-197).
Springer DOI
2412
BibRef
Lyu, W.M.[Wei-Min],
Pang, L.[Lu],
Ma, T.F.[Teng-Fei],
Ling, H.B.[Hai-Bin],
Chen, C.[Chao],
TrojVLM: Backdoor Attack Against Vision Language Models,
ECCV24(LXV: 467-483).
Springer DOI
2412
BibRef
Shin, J.J.[Jeong-Jin],
Mask-Based Invisible Backdoor Attacks on Object Detection,
ICIP24(1050-1056)
IEEE DOI Code:
WWW Link.
2411
Deep learning, YOLO, Sensitivity, Codes, Security, Standards,
Backdoor attack, invisible attack, object detection, security in deep learning
BibRef
Li, B.[Boheng],
Cai, Y.[Yishuo],
Li, H.[Haowei],
Xue, F.[Feng],
Li, Z.F.[Zhi-Feng],
Li, Y.M.[Yi-Ming],
Nearest is Not Dearest: Towards Practical Defense Against
Quantization-Conditioned Backdoor Attacks,
CVPR24(24523-24533)
IEEE DOI
2410
Quantization (signal), Accuracy, Codes, Weapons, Neurons,
Backdoor Attack, Backdoor Defense, Conditioned Backdoors
BibRef
Guan, J.[Jiyang],
Liang, J.[Jian],
He, R.[Ran],
Backdoor Defense via Test-Time Detecting and Repairing,
CVPR24(24564-24573)
IEEE DOI
2410
Computational modeling, Face recognition, Neurons, Estimation,
Artificial neural networks, Computer architecture
BibRef
Cheng, S.Y.[Si-Yuan],
Tao, G.H.[Guan-Hong],
Liu, Y.Q.[Ying-Qi],
Shen, G.Y.[Guang-Yu],
An, S.W.[Sheng-Wei],
Feng, S.W.[Shi-Wei],
Xu, X.Z.[Xiang-Zhe],
Zhang, K.Y.[Kai-Yuan],
Ma, S.Q.[Shi-Qing],
Zhang, X.Y.[Xiang-Yu],
Lotus: Evasive and Resilient Backdoor Attacks through
Sub-Partitioning,
CVPR24(24798-24809)
IEEE DOI Code:
WWW Link.
2410
Training, Deep learning, Codes, Prevention and mitigation, Focusing,
Deep Learning, Image Classification, Backdoor Attack
BibRef
Hammoud, H.A.A.K.[Hasan Abed Al Kader],
Liu, S.M.[Shu-Ming],
Alkhrashi, M.[Mohammed],
AlBalawi, F.[Fahad],
Ghanem, B.[Bernard],
Look, Listen, and Attack:
Backdoor Attacks Against Video Action Recognition,
SAIAD24(3439-3450)
IEEE DOI
2410
Threat modeling, Accuracy, Artificial neural networks,
backdoor, adversarial attacks,
action recognition
BibRef
Zhang, J.H.[Jing-Huai],
Liu, H.B.[Hong-Bin],
Jia, J.Y.[Jin-Yuan],
Gong, N.Z.Q.[Neil Zhen-Qiang],
Data Poisoning Based Backdoor Attacks to Contrastive Learning,
CVPR24(24357-24366)
IEEE DOI
2410
Contrastive learning, Backdoor Attack,
Contrastive Learning, Adversarial Attack
BibRef
Yin, W.[Wen],
Lou, J.[Jian],
Zhou, P.[Pan],
Xie, Y.[Yulai],
Feng, D.[Dan],
Sun, Y.H.[Yu-Hua],
Zhang, T.[Tailai],
Sun, L.C.[Li-Chao],
Physical Backdoor: Towards Temperature-Based Backdoor Attacks in the
Physical World,
CVPR24(12733-12743)
IEEE DOI
2410
Temperature, Object detection, Detectors, Benchmark testing, Cameras,
Temperature control, Thermal Infrared Object Detection,
Temperature Controlling
BibRef
Subramanya, A.[Akshayvarun],
Koohpayegani, S.A.[Soroush Abbasi],
Saha, A.[Aniruddha],
Tejankar, A.[Ajinkya],
Pirsiavash, H.[Hamed],
A Closer Look at Robustness of Vision Transformers to Backdoor
Attacks,
WACV24(3862-3871)
IEEE DOI Code:
WWW Link.
2404
Training, Toxicology, Codes, Architecture, Computer architecture,
Transformers, Algorithms, Adversarial learning
BibRef
Zhu, Z.X.[Zi-Xuan],
Wang, R.[Rui],
Zou, C.[Cong],
Jing, L.H.[Li-Hua],
The Victim and The Beneficiary: Exploiting a Poisoned Model to Train
a Clean Model on Poisoned Data,
ICCV23(155-164)
IEEE DOI Code:
WWW Link.
2401
BibRef
Ding, R.[Ruyi],
Duan, S.J.[Shi-Jin],
Xu, X.L.[Xiao-Lin],
Fei, Y.[Yunsi],
VertexSerum: Poisoning Graph Neural Networks for Link Inference,
ICCV23(4509-4518)
IEEE DOI Code:
WWW Link.
2401
BibRef
Bansal, H.[Hritik],
Yin, F.[Fan],
Singhi, N.[Nishad],
Grover, A.[Aditya],
Yang, Y.[Yu],
Chang, K.W.[Kai-Wei],
CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal
Contrastive Learning,
ICCV23(112-123)
IEEE DOI Code:
WWW Link.
2401
BibRef
Sur, I.[Indranil],
Sikka, K.[Karan],
Walmer, M.[Matthew],
Koneripalli, K.[Kaushik],
Roy, A.[Anirban],
Lin, X.[Xiao],
Divakaran, A.[Ajay],
Jha, S.[Susmit],
TIJO: Trigger Inversion with Joint Optimization for Defending
Multimodal Backdoored Models,
ICCV23(165-175)
IEEE DOI Code:
WWW Link.
2401
BibRef
Li, C.J.[Chang-Jiang],
Pang, R.[Ren],
Xi, Z.[Zhaohan],
Du, T.Y.[Tian-Yu],
Ji, S.[Shouling],
Yao, Y.[Yuan],
Wang, T.[Ting],
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning,
ICCV23(4344-4355)
IEEE DOI Code:
WWW Link.
2401
BibRef
Zhu, M.L.[Ming-Li],
Wei, S.[Shaokui],
Shen, L.[Li],
Fan, Y.B.[Yan-Bo],
Wu, B.Y.[Bao-Yuan],
Enhancing Fine-Tuning based Backdoor Defense with Sharpness-Aware
Minimization,
ICCV23(4443-4454)
IEEE DOI Code:
WWW Link.
2401
BibRef
Liu, M.[Min],
Sangiovanni-Vincentelli, A.[Alberto],
Yue, X.Y.[Xiang-Yu],
Beating Backdoor Attack at Its Own Game,
ICCV23(4597-4606)
IEEE DOI Code:
WWW Link.
2401
BibRef
Huang, S.Q.[Si-Quan],
Li, Y.J.[Yi-Jiang],
Chen, C.[Chong],
Shi, L.[Leyu],
Gao, Y.[Ying],
Multi-metrics adaptively identifies backdoors in Federated learning,
ICCV23(4629-4639)
IEEE DOI
2401
BibRef
Guo, J.F.[Jun-Feng],
Li, A.[Ang],
Wang, L.[Lixu],
Liu, C.[Cong],
PolicyCleanse: Backdoor Detection and Mitigation for Competitive
Reinforcement Learning,
ICCV23(4676-4685)
IEEE DOI
2401
BibRef
Shejwalkar, V.[Virat],
Lyu, L.J.[Ling-Juan],
Houmansadr, A.[Amir],
The Perils of Learning From Unlabeled Data:
Backdoor Attacks on Semi-supervised Learning,
ICCV23(4707-4717)
IEEE DOI
2401
BibRef
Wu, Y.T.[Yu-Tong],
Han, X.[Xingshuo],
Qiu, H.[Han],
Zhang, T.W.[Tian-Wei],
Computation and Data Efficient Backdoor Attacks,
ICCV23(4782-4791)
IEEE DOI Code:
WWW Link.
2401
BibRef
Han, G.[Gyojin],
Choi, J.[Jaehyun],
Hong, H.G.[Hyeong Gwon],
Kim, J.[Junmo],
Data Poisoning Attack Aiming the Vulnerability of Continual Learning,
ICIP23(1905-1909)
IEEE DOI
2312
BibRef
Huang, B.[Bin],
Wang, Z.[Zhi],
Efficient Any-Target Backdoor Attack with Pseudo Poisoned Samples,
ICIP23(3319-3323)
IEEE DOI
2312
BibRef
Shen, Z.H.[Zi-Han],
Hou, W.[Wei],
Li, Y.[Yun],
CSSBA: A Clean Label Sample-Specific Backdoor Attack,
ICIP23(965-969)
IEEE DOI
2312
BibRef
Sun, M.J.[Ming-Jie],
Kolter, Z.[Zico],
Single Image Backdoor Inversion via Robust Smoothed Classifiers,
CVPR23(8113-8122)
IEEE DOI
2309
BibRef
Hammoud, H.A.A.K.[Hasan Abed Al Kader],
Bibi, A.[Adel],
Torr, P.H.S.[Philip H.S.],
Ghanem, B.[Bernard],
Don't FREAK Out: A Frequency-Inspired Approach to Detecting Backdoor
Poisoned Samples in DNNs,
AML23(2338-2345)
IEEE DOI
2309
BibRef
Gao, K.F.[Kuo-Feng],
Bai, Y.[Yang],
Gu, J.D.[Jin-Dong],
Yang, Y.[Yong],
Xia, S.T.[Shu-Tao],
Backdoor Defense via Adaptively Splitting Poisoned Dataset,
CVPR23(4005-4014)
IEEE DOI
2309
BibRef
Chou, S.Y.[Sheng-Yen],
Chen, P.Y.[Pin-Yu],
Ho, T.Y.[Tsung-Yi],
How to Backdoor Diffusion Models?,
CVPR23(4015-4024)
IEEE DOI
2309
BibRef
Zheng, R.K.[Run-Kai],
Tang, R.J.[Rong-Jun],
Li, J.Z.[Jian-Ze],
Liu, L.[Li],
Data-Free Backdoor Removal Based on Channel Lipschitzness,
ECCV22(V:175-191).
Springer DOI
2211
BibRef
Dolatabadi, H.M.[Hadi M.],
Erfani, S.[Sarah],
Leckie, C.[Christopher],
Collider: A Robust Training Framework for Backdoor Data,
ACCV22(VI:681-698).
Springer DOI
2307
BibRef
Ji, H.X.[Hu-Xiao],
Li, J.[Jie],
Wu, C.[Chentao],
CRAB: Certified Patch Robustness Against Poisoning-Based Backdoor
Attacks,
ICIP22(2486-2490)
IEEE DOI
2211
Training, Deep learning, Smoothing methods, Neural networks, Games,
Robustness, Backdoor Attack, Certified Robustness,
(De)randomized Smoothing
BibRef
Liu, Y.Q.[Ying-Qi],
Shen, G.Y.[Guang-Yu],
Tao, G.H.[Guan-Hong],
Wang, Z.T.[Zhen-Ting],
Ma, S.Q.[Shi-Qing],
Zhang, X.Y.[Xiang-Yu],
Complex Backdoor Detection by Symmetric Feature Differencing,
CVPR22(14983-14993)
IEEE DOI
2210
Rendering (computer graphics),
Feature extraction, Reflection,
Adversarial attack and defense
BibRef
Guan, J.[Jiyang],
Tu, Z.[Zhuozhuo],
He, R.[Ran],
Tao, D.C.[Da-Cheng],
Few-shot Backdoor Defense Using Shapley Estimation,
CVPR22(13348-13357)
IEEE DOI
2210
Deep learning, Training, Neurons, Neural networks, Estimation,
Data models, Robustness, Adversarial attack and defense
BibRef
Tao, G.H.[Guan-Hong],
Shen, G.Y.[Guang-Yu],
Liu, Y.Q.[Ying-Qi],
An, S.W.[Sheng-Wei],
Xu, Q.L.[Qiu-Ling],
Ma, S.Q.[Shi-Qing],
Li, P.[Pan],
Zhang, X.Y.[Xiang-Yu],
Better Trigger Inversion Optimization in Backdoor Scanning,
CVPR22(13358-13368)
IEEE DOI
2210
Training, Computational modeling, Optimization methods, Robustness,
Adversarial attack and defense
BibRef
Chan, S.H.[Shih-Han],
Dong, Y.P.[Yin-Peng],
Zhu, J.[Jun],
Zhang, X.L.[Xiao-Lu],
Zhou, J.[Jun],
BadDet: Backdoor Attacks on Object Detection,
AdvRob22(396-412).
Springer DOI
2304
Injects a backdoor trigger into a small portion of the training data.
BibRef
Ramakrishnan, G.[Goutham],
Albarghouthi, A.[Aws],
Backdoors in Neural Models of Source Code,
ICPR22(2892-2899)
IEEE DOI
2212
Deep learning, Codes, Source coding, Neural networks, Training data,
Implants, Predictive models
BibRef
Phan, H.[Huy],
Shi, C.[Cong],
Xie, Y.[Yi],
Zhang, T.F.[Tian-Fang],
Li, Z.H.[Zhuo-Hang],
Zhao, T.M.[Tian-Ming],
Liu, J.[Jian],
Wang, Y.[Yan],
Chen, Y.Y.[Ying-Ying],
Yuan, B.[Bo],
RIBAC: Towards Robust and Imperceptible Backdoor Attack against Compact
DNN,
ECCV22(IV:708-724).
Springer DOI
2211
BibRef
Feng, Y.[Yu],
Ma, B.[Benteng],
Zhang, J.[Jing],
Zhao, S.S.[Shan-Shan],
Xia, Y.[Yong],
Tao, D.C.[Da-Cheng],
FIBA: Frequency-Injection based Backdoor Attack in Medical Image
Analysis,
CVPR22(20844-20853)
IEEE DOI
2210
Training, Image segmentation, Codes, Frequency-domain analysis,
Computational modeling, Semantics, Predictive models, Medical,
Privacy and federated learning
BibRef
Zhao, Z.D.[Zhen-Dong],
Chen, X.J.[Xiao-Jun],
Xuan, Y.X.[Yue-Xin],
Dong, Y.[Ye],
Wang, D.[Dakui],
Liang, K.[Kaitai],
DEFEAT: Deep Hidden Feature Backdoor Attacks by Imperceptible
Perturbation and Latent Representation Constraints,
CVPR22(15192-15201)
IEEE DOI
2210
Training, Resistance, Representation learning, Adaptation models,
Visualization, Toxicology, Perturbation methods,
Machine learning
BibRef
Li, Y.Z.[Yue-Zun],
Li, Y.M.[Yi-Ming],
Wu, B.Y.[Bao-Yuan],
Li, L.K.[Long-Kang],
He, R.[Ran],
Lyu, S.W.[Si-Wei],
Invisible Backdoor Attack with Sample-Specific Triggers,
ICCV21(16443-16452)
IEEE DOI
2203
Training, Additive noise, Deep learning, Steganography, Image coding,
Perturbation methods, Adversarial learning, Recognition and classification
BibRef
Doan, K.[Khoa],
Lao, Y.J.[Ying-Jie],
Zhao, W.J.[Wei-Jie],
Li, P.[Ping],
LIRA: Learnable, Imperceptible and Robust Backdoor Attacks,
ICCV21(11946-11956)
IEEE DOI
2203
Deformable models, Visualization, Toxicology, Heuristic algorithms,
Neural networks, Stochastic processes, Inspection,
Neural generative models
BibRef
Xiang, Z.[Zhen],
Miller, D.J.[David J.],
Chen, S.[Siheng],
Li, X.[Xi],
Kesidis, G.[George],
A Backdoor Attack against 3D Point Cloud Classifiers,
ICCV21(7577-7587)
IEEE DOI
2203
Geometry, Point cloud compression, Training, Barium, Toxicology,
Adversarial learning, Recognition and classification,
Vision for robotics and autonomous vehicles
BibRef
Ren, Y.K.[Yan-Kun],
Li, L.F.[Long-Fei],
Zhou, J.[Jun],
SimTrojan: Stealthy Backdoor Attack,
ICIP21(819-823)
IEEE DOI
2201
Deep learning, Training, Target recognition, Image processing,
Buildings, Extraterrestrial measurements, deep learning
BibRef
Zhu, L.[Liuwan],
Ning, R.[Rui],
Xin, C.S.[Chun-Sheng],
Wang, C.G.[Chong-Gang],
Wu, H.Y.[Hong-Yi],
CLEAR: Clean-up Sample-Targeted Backdoor in Neural Networks,
ICCV21(16433-16442)
IEEE DOI
2203
Deep learning, Computational modeling, Neural networks,
Benchmark testing, Feature extraction, Safety
BibRef
Zeng, Y.[Yi],
Park, W.[Won],
Mao, Z.M.[Z. Morley],
Jia, R.X.[Ruo-Xi],
Rethinking the Backdoor Attacks' Triggers: A Frequency Perspective,
ICCV21(16453-16461)
IEEE DOI
2203
Deep learning, Frequency-domain analysis, Detectors, Data models,
Security, Adversarial learning,
Image and video manipulation detection and integrity methods.
BibRef
Raj, A.[Ankita],
Pal, A.[Ambar],
Arora, C.[Chetan],
Identifying Physically Realizable Triggers for Backdoored Face
Recognition Networks,
ICIP21(3023-3027)
IEEE DOI
2201
Deep learning, Image recognition, Face recognition, Force,
Adversarial attack, trojan attack, back-door attack, face recognition
BibRef
Wang, R.[Ren],
Zhang, G.Y.[Gao-Yuan],
Liu, S.J.[Si-Jia],
Chen, P.Y.[Pin-Yu],
Xiong, J.J.[Jin-Jun],
Wang, M.[Meng],
Practical Detection of Trojan Neural Networks:
Data-limited and Data-free Cases,
ECCV20(XXIII:222-238).
Springer DOI
2011
(Also known as a poisoning backdoor attack.) Manipulates the learned network.
BibRef
Liu, Y.F.[Yun-Fei],
Ma, X.J.[Xing-Jun],
Bailey, J.[James],
Lu, F.[Feng],
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks,
ECCV20(X:182-199).
Springer DOI
2011
BibRef
Zhao, S.,
Ma, X.,
Zheng, X.,
Bailey, J.,
Chen, J.,
Jiang, Y.,
Clean-Label Backdoor Attacks on Video Recognition Models,
CVPR20(14431-14440)
IEEE DOI
2008
Training, Data models, Toxicology, Perturbation methods,
Training data, Image resolution, Pipelines
BibRef
Kolouri, S.,
Saha, A.,
Pirsiavash, H.,
Hoffmann, H.,
Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs,
CVPR20(298-307)
IEEE DOI
2008
Training, Perturbation methods, Data models,
Computational modeling, Machine learning, Benchmark testing
BibRef
Truong, L.,
Jones, C.,
Hutchinson, B.,
August, A.,
Praggastis, B.,
Jasper, R.,
Nichols, N.,
Tuor, A.,
Systematic Evaluation of Backdoor Data Poisoning Attacks on Image
Classifiers,
AML-CV20(3422-3431)
IEEE DOI
2008
Data models, Training, Computational modeling, Machine learning,
Training data, Safety
BibRef
Barni, M.,
Kallas, K.,
Tondi, B.,
A New Backdoor Attack in CNNS by Training Set Corruption Without
Label Poisoning,
ICIP19(101-105)
IEEE DOI
1910
Adversarial learning, security of deep learning,
backdoor poisoning attacks, training with poisoned data
BibRef
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Backdoor Attacks, Robustness.