Journals starting with saia

SAIAD20 * *Safe Artificial Intelligence for Automated Driving
* Attentional Bottleneck: Towards an Interpretable Deep Driving Network
* Detection and Retrieval of Out-of-Distribution Objects in Semantic Segmentation
* Evaluating Scalable Bayesian Deep Learning Methods for Robust Computer Vision
* Explaining Autonomous Driving by Learning End-to-End Visual Attention
* Generating Socially Acceptable Perturbations for Efficient Evaluation of Autonomous Vehicles
* Improved Noise and Attack Robustness for Semantic Segmentation by Using Multi-Task Training with Self-Supervised Depth Estimation
* Leveraging combinatorial testing for safety-critical computer vision datasets
* Mind the Gap - A Benchmark for Dense Depth Prediction Beyond Lidar
* Multivariate Confidence Calibration for Object Detection
* Robust Semantic Segmentation by Redundant Networks With a Layer-Specific Loss Contribution and Majority Vote
* Self-Supervised Domain Mismatch Estimation for Autonomous Perception
* Unsupervised Temporal Consistency Metric for Video Segmentation in Highly-Automated Driving
* Using Mixture of Expert Models to Gain Insights into Semantic Segmentation
14 for SAIAD20

SAIAD21 * *Safe Artificial Intelligence for Automated Driving
* Adversarial Robust Model Compression using In-Train Pruning
* Boosting Adversarial Robustness using Feature Level Stochastic Smoothing
* Detecting Anomalies in Semantic Segmentation with Prototypes
* Development Methodologies for Safety Critical Machine Learning Applications in the Automotive Domain: A Survey
* From Evaluation to Verification: Towards Task-oriented Relevance Metrics for Pedestrian Detection in Safety-critical Domains
* Improving Online Performance Prediction for Semantic Segmentation
* Out-of-distribution Detection and Generation using Soft Brownian Offset Sampling and Autoencoders
* Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities
* Plants Don't Walk on the Street: Common-Sense Reasoning for Reliable Semantic Segmentation
* Reevaluating the Safety Impact of Inherent Interpretability on Deep Neural Networks for Pedestrian Detection
* SafeSO: Interpretable and Explainable Deep Learning Approach for Seat Occupancy Classification in Vehicle Interior
* Simulation Driven Design and Test for Safety of AI Based Autonomous Vehicles
* Sparse Activation Maps for Interpreting 3D Object Detection
* Towards Black-Box Explainability with Gaussian Discriminant Knowledge Distillation
* Unsupervised Temporal Consistency (TC) Loss to Improve the Performance of Semantic Segmentation Networks, An
16 for SAIAD21

SAIAD23 * *Safe Artificial Intelligence for Automated Driving
* Beyond AUROC and co. for evaluating out-of-distribution detection performance
* Category Differences Matter: A Broad Analysis of Inter-Category Error in Semantic Segmentation
* Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis
* Interpretable Model-Agnostic Plausibility Verification for 2D Object Detectors Using Domain-Invariant Concept Bottleneck Models
* Investigating CLIP Performance for Meta-data Generation in AD Datasets
* Maximum Entropy Information Bottleneck for Uncertainty-aware Stochastic Embedding
* Novel Benchmark for Refinement of Noisy Localization Labels in Autolabeled Datasets for Object Detection, A
* Optimizing Explanations by Network Canonization and Hyperparameter Search
* Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations
* RL-CAM: Visual Explanations for Convolutional Networks using Reinforcement Learning
* Uncovering the Inner Workings of STEGO for Safe Unsupervised Semantic Segmentation
12 for SAIAD23
