Index for backd

_backdating_
Automated backdating of transportation networks with Landsat imagery
Integrating backdating and Transfer Learning in an Object-Based Framework for High Resolution Image Classification and Change Analysis

_backdoor_
Adversarial backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving
Architectural backdoors in Neural Networks
Augmented Neural Fine-tuning for Efficient backdoor Purification
backdoor Attack against 3D Point Cloud Classifiers, A
backdoor Attacks
backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger
backdoor Attacks Against Deep Learning Systems in the Physical World
backdoor Attacks against Deep Neural Networks by Personalized Audio Steganography
backdoor Attacks on Self-Supervised Learning
backdoor Attacks, Robustness
backdoor Cleansing with Unlabeled Data
backdoor defense based on adversarial prediction proximity and contrastive knowledge distillation
backdoor defense for large language models with weak-to-strong knowledge distillation
backdoor Defense in Transportation Cyber-Physical Systems Using Frequency Domain Hybrid Distillation
backdoor Defense via Adaptively Splitting Poisoned Dataset
backdoor Defense via Deconfounded Representation Learning
backdoor Defense via Test-Time Detecting and Repairing
backdoor for Debias: Mitigating Model Bias With Backdoor Attack-Based Artificial Bias
backdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning
backdoors in Neural Models of Source Code
BadCLIP: Dual-Embedding Guided backdoor Attack on Multimodal Contrastive Learning
BadCLIP: Trigger-Aware Prompt Learning for backdoor Attacks on CLIP
BadCM: Invisible backdoor Attack Against Cross-Modal Learning
Baddet: backdoor Attacks on Object Detection
BadToken: Token-level backdoor Attacks to Multi-modal Large Language Models
BAIT: A New DNN backdoor Attack Using Inpainted Triggers
BAM: backdoor defense based on adversarial mitigation
Beating backdoor Attack at Its Own Game
Better Trigger Inversion Optimization in backdoor Scanning
BHAC-MRI: backdoor and Hybrid Attacks on MRI Brain Tumor Classification Using CNN
Black-box Detection of backdoor Attacks with Limited Information and Data
Catchbackdoor: Backdoor Detection via Critical Trojan Neural Path Fuzzing
Clean and Compact: Efficient Data-free backdoor Defense with Model Compactness
Clean-Label backdoor Attacks on Video Recognition Models
CLEAR: Clean-up Sample-Targeted backdoor in Neural Networks
Closer Look at Robustness of Vision Transformers to backdoor Attacks, A
Collider: A Robust Training Framework for backdoor Data
Color backdoor: A Robust Poisoning Attack in Color Space
Complex backdoor Detection by Symmetric Feature Differencing
Computation and Data Efficient backdoor Attacks
Contrastive Neuron Pruning for backdoor Defense
CRAB: Certified Patch Robustness Against Poisoning-Based backdoor Attacks
CSSBA: A Clean Label Sample-Specific backdoor Attack
Dark Side of Dynamic Routing Neural Networks: Towards Efficiency backdoor Injection, The
Data Poisoning Based backdoor Attacks to Contrastive Learning
Data Poisoning Quantization backdoor Attack
Data-Free backdoor Removal Based on Channel Lipschitzness
Dataset Security for Machine Learning: Data Poisoning, backdoor Attacks, and Defenses
DeDe: Detecting backdoor Samples for SSL Encoders via Decoders
Deep fidelity in DNN watermarking: A study of backdoor watermarking for classification models
DEFEAT: Deep Hidden Feature backdoor Attacks by Imperceptible Perturbation and Latent Representation Constraints
Defending Against Patch-based backdoor Attacks on Self-Supervised Learning
Defending Against Repetitive backdoor Attacks on Semi-Supervised Learning Through Lens of Rate-Distortion-Perception Trade-Off
Detecting backdoor Attacks in Federated Learning via Direction Alignment Inspection
Detecting backdoors During the Inference Stage Based on Corruption Robustness Consistency
Detecting backdoors in Pre-trained Encoders
Don't FREAK Out: A Frequency-Inspired Approach to Detecting backdoor Poisoned Samples in DNNs
Dual-Key Multimodal backdoors for Visual Question Answering
Dynamic Attention Analysis for backdoor Detection in Text-to-Image Diffusion Models
Effective backdoor Learning on Open-Set Face Recognition Systems
Efficient any-Target backdoor Attack with Pseudo Poisoned Samples
Embarrassingly Simple backdoor Attack on Self-supervised Learning, An
Enhancing Fine-Tuning based backdoor Defense with Sharpness-Aware Minimization
EntropyMark: Towards More Harmless backdoor Watermark via Entropy-based Constraint for Open-source Dataset Copyright Protection
Event Trojan: Asynchronous Event-based backdoor Attacks
Few-shot backdoor Defense Using Shapley Estimation
FIBA: Frequency-Injection based backdoor Attack in Medical Image Analysis
Fisher Calibration for backdoor-robust Heterogeneous Federated Learning
Flatness-aware Sequential Learning Generates Resilient backdoors
Fooling a Face Recognition System with a Marker-Free Label-Consistent backdoor Attack
Generalizable poisoning-resistant backdoor detection and removal framework: From dataset perspective
How to backdoor Diffusion Models?
IBSD: Iterable Black-Box Self-Defense Against backdoor Attacks
Imperceptible backdoor Attacks on Text-Guided 3D Scene Grounding
Infighting in the Dark: Multi-Label backdoor Attack in Federated Learning
Invisible backdoor Attack against Self-supervised Learning
Invisible backdoor attack with attention and steganography
Invisible backdoor Attack with Sample-Specific Triggers
Invisible backdoor Attack With Siamese Tuning on Pre-Trained Vision-Language Models
Invisible Black-Box backdoor Attack Through Frequency Domain, An
Invisible Intruders: Label-Consistent backdoor Attack Using Re-Parameterized Noise Trigger
Large language models are good attackers: Efficient and stealthy textual backdoor attacks
LIRA: Learnable, Imperceptible and Robust backdoor Attacks
Look, Listen, and Attack: backdoor Attacks Against Video Action Recognition
Lotus: Evasive and Resilient backdoor Attacks through Sub-Partitioning
Low-Frequency Black-Box backdoor Attack via Evolutionary Algorithm
M-to-N backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to Deep Learning Models
MAMBO-NET: Multi-causal aware modeling backdoor-intervention optimization for medical image segmentation network
Manipulating Trajectory Prediction Models With backdoors
Mask-Based Invisible backdoor Attacks on Object Detection
Master Key backdoor for universal impersonation attack against DNN-based face verification, A
MEDIC: Remove Model backdoors via Importance Driven Cloning
Mitigating Cross-Modal Retrieval Violations With Privacy-Preserving backdoor Learning
Multi-metrics adaptively identifies backdoors in Federated learning
Multi-target federated backdoor attack based on feature aggregation
Multi-target label backdoor attacks on graph neural networks
Nearest is Not Dearest: Towards Practical Defense Against Quantization-Conditioned backdoor Attacks
New backdoor Attack in CNNs by Training Set Corruption Without Label Poisoning, A
Not All Prompts Are Secure: A Switchable backdoor Attack Against Pre-trained Vision Transformers
Not All Samples Are Born Equal: Towards Effective Clean-Label backdoor Attacks
One-pixel Signature: Characterizing CNN Models for backdoor Detection
Perils of Learning From Unlabeled Data: backdoor Attacks on Semi-supervised Learning, The
Perturbation distillation and backdoor feature induction for universal defense in deep vision models
Physical backdoor: Towards Temperature-Based Backdoor Attacks in the Physical World
PointBA: Towards backdoor Attacks in 3D Point Cloud
Poison Ink: Robust and Invisible backdoor Attack
PolicyCleanse: backdoor Detection and Mitigation for Competitive Reinforcement Learning
Progressive backdoor Erasing via connecting Backdoor and Adversarial Attacks
Protecting Deep Cerebrospinal Fluid Cell Image Processing Models with backdoor and Semi-Distillation
PSBD: Prediction Shift Uncertainty Unlocks backdoor Detection
Purifier+: Plug-and-Play backdoor Mitigation for Pre-Trained Models via Activation Alignment
Reflection backdoor: A Natural Backdoor Attack on Deep Neural Networks
Removing backdoor-Based Watermarks in Neural Networks with Limited Data
Rethinking the backdoor Attacks' Triggers: A Frequency Perspective
Revisiting backdoor Attacks against Large Vision-Language Models from Domain Shift
RIBAC: Towards Robust and Imperceptible backdoor Attack against Compact DNN
Rickrolling the Artist: Injecting backdoors into Text Encoders for Text-to-Image Synthesis
Robust and Transferable backdoor Attacks Against Deep Image Compression With Selective Frequency Prior
Robust Feature-Guided Generative Adversarial Network for Aerial Image Semantic Segmentation against backdoor Attacks
SilentTrig: An imperceptible backdoor attack against speaker identification with hidden triggers
Simtrojan: Stealthy backdoor Attack
Single Image backdoor Inversion via Robust Smoothed Classifiers
StealthMark: Harmless and Stealthy Ownership Verification for Medical Segmentation via Uncertainty-Guided backdoors
Stealthy backdoor Attack Against Speaker Recognition Using Phase-Injection Hidden Trigger
Stealthy backdoor Attack in Self-Supervised Learning Vision Encoders for Large Vision Language Models
Stealthy Frequency-Domain backdoor Attacks: Fourier Decomposition and Fundamental Frequency Injection
Systematic Evaluation of backdoor Data Poisoning Attacks on Image Classifiers
T2ishield: Defending Against backdoors on Text-to-image Diffusion Models
TAT: Targeted backdoor attacks against visual object tracking
Test-Time backdoor Detection for Object Detection Models
Towards invisible backdoor attacks on multi-object tracking via suppressed feature learning
Towards Physical World backdoor Attacks Against Skeleton Action Recognition
Towards Practical Deployment-Stage backdoor Attack on Deep Neural Networks
Towards Unified Robustness Against Both backdoor and Adversarial Attacks
trigger-perceivable backdoor attack framework driven by image steganography, A
TROJVLM: backdoor Attack Against Vision Language Models
TSBA: A two-stage poison-only backdoor attack on visual object tracking
UIBDiffusion: Universal Imperceptible backdoor Attack for Diffusion Models
Unified Framework for backdoor Trigger Segmentation, A
Unit: backdoor Mitigation via Automated Neural Distribution Tightening
Universal Litmus Patterns: Revealing backdoor Attacks in CNNs
UPGP: backdoor defense via unlearning perturbation and orthogonality-constraint gradient projection
Vaccination Against backdoor Attacks on Federated Learning Systems
Versatile backdoor Attack With Visible, Semantic, Sample-Specific and Compatible Triggers
VL-Trojan: Multimodal Instruction backdoor Attacks against Autoregressive Visual Language Models
WBP: Training-time backdoor Attacks Through Hardware-based Weight Bit Poisoning
When Visual State Space Model Meets backdoor Attacks
You Are Catching My Attention: Are Vision Transformers Bad Learners under backdoor Attacks?
148 for backdoor

_backdoorbench_
backdoorbench: A Comprehensive Benchmark and Analysis of Backdoor Learning

_backdoored_
FixGuard: Repairing backdoored Models via Class-Wise Trigger Recovery and Unlearning
Identify backdoored Model in Federated Learning via Individual Unlearning
Identifying Physically Realizable Triggers for backdoored Face Recognition Networks
TIJO: Trigger Inversion with Joint Optimization for Defending Multimodal backdoored Models

_backdooring_
Semantic Shield: Defending Vision-Language Models Against backdooring and Poisoning via Fine-Grained Knowledge Alignment

_backdrivability_
Feed-forward friction and inertia compensation for improving backdrivability of motors

Index for "b"


Last update: 28-Mar-26 20:22:13
Use price@usc.edu for comments.