Journals starting with ecv2

ECV21 * Efficient Deep Learning for Computer Vision
* Alps: Adaptive Quantization of Deep Neural Networks with GeneraLized PositS
* BasisNet: Two-stage Model Synthesis for Efficient Inference
* CompConv: A Compact Convolution Module for Efficient Feature Learning
* Data-Efficient Language-Supervised Zero-Shot Learning with Self-Distillation
* Discovering Multi-Hardware Mobile Models via Architecture Search
* Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms
* Efficient Two-stream Action Recognition on FPGA
* Extracurricular Learning: Knowledge Transfer Beyond Empirical Distribution
* Generative Zero-shot Network Quantization
* In-Hindsight Quantization Range Estimation for Quantized Training
* Is In-Domain Data Really Needed? A Pilot Study on Cross-Domain Calibration for Network Quantization
* Network Space Search for Pareto-Efficient Spaces
* Pareto-Optimal Quantized ResNet Is Mostly 4-bit
* Rethinking the Self-Attention in Vision Transformers
* Width transfer: on the (in)variance of width optimization
16 for ECV21

ECV22 * Efficient Deep Learning for Computer Vision
* Active Object Detection with Epistemic Uncertainty and Hierarchical Information Aggregation
* ANT: Adapt Network Across Time for Efficient Video Processing
* Area Under the ROC Curve Maximization for Metric Learning
* Conjugate Adder Net (CAddNet) - a Space-Efficient Approximate CNN
* Cyclical Pruning for Sparse Neural Networks
* DA3: Dynamic Additive Attention Adaption for Memory-Efficient On-Device Multi-Domain Learning
* Discriminability-enforcing loss to improve representation learning
* Disentangled Loss for Low-Bit Quantization-Aware Training
* Event Transformer. A sparse-aware solution for efficient event data processing
* Hybrid Consistency Training with Prototype Adaptation for Few-Shot Learning
* Integrating Pose and Mask Predictions for Multi-person in Videos
* Linear Combination Approximation of Feature for Channel Pruning
* Low Memory Footprint Quantized Neural Network for Depth Completion of Very Sparse Time-of-Flight Depth Maps, A
* MAPLE: Microprocessor A Priori for Latency Estimation
* Momentum Contrastive Pruning
* Once-for-All Budgeted Pruning Framework for ConvNets Considering Input Resolution, An
* PEA: Improving the Performance of ReLU Networks for Free by Using Progressive Ensemble Activations
* ResNeSt: Split-Attention Networks
* Searching for Efficient Neural Architectures for On-Device ML on Edge TPUs
* Semi-Supervised Few-Shot Learning from A Dependency-Discriminant Perspective
* Simple and Efficient Architectures for Semantic Segmentation
* Simulated Quantization, Real Power Savings
* SqueezeNeRF: Further factorized FastNeRF for memory-efficient inference
* TinyOps: ImageNet Scale Deep Learning on Microcontrollers
* TorMentor: Deterministic dynamic-path, data augmentations with fractals
* Towards efficient feature sharing in MIMO architectures
* When NAS Meets Trees: An Efficient Algorithm for Neural Architecture Search
* YOLO-Pose: Enhancing YOLO for Multi Person Pose Estimation Using Object Keypoint Similarity Loss
29 for ECV22
