Index for feic

Feichtenhofer, C. Standard Author Listing
     with: Adcock, A.: effectiveness of MAE pre-pretraining for billion-scale pre...
     with: Aggarwal, V.: effectiveness of MAE pre-pretraining for billion-scale p...
     with: Alwala, K.V.: effectiveness of MAE pre-pretraining for billion-scale p...
     with: Arbelaez, P.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Auli, M.: 3D Human Pose Estimation in Video With Temporal Convolutions...
     with: Auli, M.: Modeling Human Motion with Quaternion-Based Neural Networks
     with: Bansal, S.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Batra, D.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Byrne, E.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Cartillier, V.: Ego4D: Around the World in 3,000 Hours of Egocentric V...
     with: Chavis, Z.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Crandall, D.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Crane, S.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Damen, D.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Darrell, T.J.: ConvNet for the 2020s, A
     with: Do, T.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Dollar, P.: effectiveness of MAE pre-pretraining for billion-scale pre...
     with: Doulaty, M.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Duval, Q.: effectiveness of MAE pre-pretraining for billion-scale pret...
     with: Erapalli, A.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Fan, H.: Long-Term Feature Banks for Detailed Video Understanding
     with: Fan, H.: SlowFast Networks for Video Recognition
     with: Fan, H.Q.: Diffusion Models as Masked Autoencoders
     with: Fan, H.Q.: effectiveness of MAE pre-pretraining for billion-scale pret...
     with: Fan, H.Q.: Large-Scale Study on Unsupervised Spatiotemporal Representa...
     with: Fan, H.Q.: Masked Feature Prediction for Self-Supervised Visual Pre-Tr...
     with: Fan, H.Q.: MeMViT: Memory-Augmented Multiscale Vision Transformer for ...
     with: Fan, H.Q.: Multiscale Vision Transformers
     with: Fan, H.Q.: Multiview Pseudo-Labeling for Semi-supervised Learning from...
     with: Fan, H.Q.: MViTv2: Improved Multiscale Vision Transformers for Classif...
     with: Fan, H.Q.: Reversible Vision Transformers
     with: Fan, H.Q.: Scaling Language-Image Pre-Training via Masking
     with: Farinella, G.M.: Ego4D: Around the World in 3,000 Hours of Egocentric ...
     with: Fassold, H.: Perceptual Image Sharpness Metric Based on Local Edge Gra...
     with: Fragomeni, A.: Ego4D: Around the World in 3,000 Hours of Egocentric Vi...
     with: Fu, Q.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Fuegen, C.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Furnari, A.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Gebreselasie, A.: Ego4D: Around the World in 3,000 Hours of Egocentric...
     with: Ghanem, B.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Ghosh, G.: CiT: Curation in Training for Effective Vision-Language Data
     with: Girdhar, R.: effectiveness of MAE pre-pretraining for billion-scale pr...
     with: Girdhar, R.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Girshick, R.: effectiveness of MAE pre-pretraining for billion-scale p...
     with: Girshick, R.: Large-Scale Study on Unsupervised Spatiotemporal Represe...
     with: Girshick, R.: Long-Term Feature Banks for Detailed Video Understanding
     with: Girshick, R.: Multigrid Method for Efficiently Training Video Models, A
     with: Gkioxari, G.: Multiview Compressive Coding for 3D Reconstruction
     with: Gonzalez, C.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Grangier, D.: 3D Human Pose Estimation in Video With Temporal Convolut...
     with: Grangier, D.: Modeling Human Motion with Quaternion-Based Neural Netwo...
     with: Grauman, K.: Ego-Topo: Environment Affordances From Egocentric Video
     with: Grauman, K.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Grauman, K.: Grounded Human-Object Interaction Hotspots From Video
     with: Grauman, K.: Multiview Pseudo-Labeling for Semi-supervised Learning fr...
     with: Hamburger, J.: Ego4D: Around the World in 3,000 Hours of Egocentric Vi...
     with: He, K.: Large-Scale Study on Unsupervised Spatiotemporal Representatio...
     with: He, K.: Long-Term Feature Banks for Detailed Video Understanding
     with: He, K.: Scaling Language-Image Pre-Training via Masking
     with: He, K.: SlowFast Networks for Video Recognition
     with: He, K.M.: Multigrid Method for Efficiently Training Video Models, A
     with: Hillis, J.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Howes, R.: CiT: Curation in Training for Effective Vision-Language Data
     with: Hu, R.: Scaling Language-Image Pre-Training via Masking
     with: Huang, P.Y.: CiT: Curation in Training for Effective Vision-Language D...
     with: Huang, P.Y.: Diffusion Models as Masked Autoencoders
     with: Huang, X.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Huang, Y.F.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Ithapu, V.K.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Jawahar, C.V.: Ego4D: Around the World in 3,000 Hours of Egocentric Vi...
     with: Jia, W.Q.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Jiang, H.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Johnson, J.: Multiview Compressive Coding for 3D Reconstruction
     with: Joo, H.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Joulin, A.: effectiveness of MAE pre-pretraining for billion-scale pre...
     with: Kanazawa, A.: On the Benefits of 3D Pose and Tracking for Human Action...
     with: Khoo, W.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Kirillov, A.: TrackFormer: Multi-Object Tracking with Transformers
     with: Kitani, K.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Kolar, J.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Kottur, S.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Krahenbuhl, P.: Long-Term Feature Banks for Detailed Video Understanding
     with: Krahenbuhl, P.: Multigrid Method for Efficiently Training Video Models...
     with: Kumar, A.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Landini, F.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Leal-Taixe, L.: TrackFormer: Multi-Object Tracking with Transformers
     with: Li, C.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Li, H.Z.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Li, Y.: MeMViT: Memory-Augmented Multiscale Vision Transformer for Eff...
     with: Li, Y.: Multiscale Vision Transformers
     with: Li, Y.: MViTv2: Improved Multiscale Vision Transformers for Classifica...
     with: Li, Y.: Reversible Vision Transformers
     with: Li, Y.H.: Diffusion Models as Masked Autoencoders
     with: Li, Y.H.: Ego-Topo: Environment Affordances From Egocentric Video
     with: Li, Y.H.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Li, Y.H.: Scaling Language-Image Pre-Training via Masking
     with: Li, Z.Q.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Liu, M.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Liu, X.Y.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Liu, Z.: ConvNet for the 2020s, A
     with: Malik, J.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Malik, J.: MeMViT: Memory-Augmented Multiscale Vision Transformer for ...
     with: Malik, J.: Multiscale Vision Transformers
     with: Malik, J.: Multiview Compressive Coding for 3D Reconstruction
     with: Malik, J.: MViTv2: Improved Multiscale Vision Transformers for Classif...
     with: Malik, J.: On the Benefits of 3D Pose and Tracking for Human Action Re...
     with: Malik, J.: Reversible Vision Transformers
     with: Malik, J.: SlowFast Networks for Video Recognition
     with: Mangalam, K.: Diffusion Models as Masked Autoencoders
     with: Mangalam, K.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Mangalam, K.: MeMViT: Memory-Augmented Multiscale Vision Transformer f...
     with: Mangalam, K.: Multiscale Vision Transformers
     with: Mangalam, K.: MViTv2: Improved Multiscale Vision Transformers for Clas...
     with: Mangalam, K.: Reversible Vision Transformers
     with: Mao, H.Z.: ConvNet for the 2020s, A
     with: Martin, M.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Meinhardt, T.: TrackFormer: Multi-Object Tracking with Transformers
     with: Misra, I.: effectiveness of MAE pre-pretraining for billion-scale pret...
     with: Modhugu, R.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Munro, J.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Murrell, T.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Nagarajan, T.: Ego-Topo: Environment Affordances From Egocentric Video
     with: Nagarajan, T.: Ego4D: Around the World in 3,000 Hours of Egocentric Vi...
     with: Nagarajan, T.: Grounded Human-Object Interaction Hotspots From Video
     with: Newcombe, R.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Nishiyasu, T.: Ego4D: Around the World in 3,000 Hours of Egocentric Vi...
     with: Oliva, A.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Park, H.S.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Pavlakos, G.: On the Benefits of 3D Pose and Tracking for Human Action...
     with: Pavllo, D.: 3D Human Pose Estimation in Video With Temporal Convolutio...
     with: Pavllo, D.: Modeling Human Motion with Quaternion-Based Neural Networks
     with: Pinz, A.: Bags of Spacetime Energies for Dynamic Scene Recognition
     with: Pinz, A.: Convolutional Two-Stream Network Fusion for Video Action Rec...
     with: Pinz, A.: Deep Insights into Convolutional Networks for Video Recognit...
     with: Pinz, A.: Detect to Track and Track to Detect
     with: Pinz, A.: Dynamic Scene Recognition with Complementary Spatiotemporal ...
     with: Pinz, A.: Dynamically encoded actions based on spacetime saliency
     with: Pinz, A.: Spacetime Forests with Complementary Features for Dynamic Sc...
     with: Pinz, A.: Spatio-temporal Good Features to Track
     with: Pinz, A.: Spatiotemporal Multiplier Networks for Video Action Recognit...
     with: Pinz, A.: Temporal Residual Networks for Dynamic Scene Recognition
     with: Pinz, A.: What have We Learned from Deep Representations for Action Re...
     with: Price, W.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Puentes, P.R.: Ego4D: Around the World in 3,000 Hours of Egocentric Vi...
     with: Radosavovic, I.: Ego4D: Around the World in 3,000 Hours of Egocentric ...
     with: Rajasegaran, J.: On the Benefits of 3D Pose and Tracking for Human Act...
     with: Ramakrishnan, S.K.: Ego4D: Around the World in 3,000 Hours of Egocentr...
     with: Ramazanova, M.: Ego4D: Around the World in 3,000 Hours of Egocentric V...
     with: Rehg, J.M.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Ryan, F.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Sari, L.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Sato, Y.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Schallauer, P.: Perceptual Image Sharpness Metric Based on Local Edge ...
     with: Sharma, J.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Shi, J.B.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Shou, M.Z.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Singh, M.: effectiveness of MAE pre-pretraining for billion-scale pret...
     with: Somasundaram, K.: Ego4D: Around the World in 3,000 Hours of Egocentric...
     with: Southerland, A.: Ego4D: Around the World in 3,000 Hours of Egocentric ...
     with: Sugano, Y.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Tao, R.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Torralba, A.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Torresani, L.: Ego4D: Around the World in 3,000 Hours of Egocentric Vi...
     with: Vo, M.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Wang, H.Y.: Diffusion Models as Masked Autoencoders
     with: Wang, Y.C.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Wei, C.: Diffusion Models as Masked Autoencoders
     with: Wei, C.: Masked Feature Prediction for Self-Supervised Visual Pre-Trai...
     with: Westbury, A.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Wildes, R.P.: Bags of Spacetime Energies for Dynamic Scene Recognition
     with: Wildes, R.P.: Deep Insights into Convolutional Networks for Video Reco...
     with: Wildes, R.P.: Dynamic Scene Recognition with Complementary Spatiotempo...
     with: Wildes, R.P.: Dynamically encoded actions based on spacetime saliency
     with: Wildes, R.P.: Spacetime Forests with Complementary Features for Dynami...
     with: Wildes, R.P.: Spatiotemporal Multiplier Networks for Video Action Reco...
     with: Wildes, R.P.: Temporal Residual Networks for Dynamic Scene Recognition
     with: Wildes, R.P.: What have We Learned from Deep Representations for Actio...
     with: Wray, M.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Wu, C.Y.: ConvNet for the 2020s, A
     with: Wu, C.Y.: Long-Term Feature Banks for Detailed Video Understanding
     with: Wu, C.Y.: Masked Feature Prediction for Self-Supervised Visual Pre-Tra...
     with: Wu, C.Y.: MeMViT: Memory-Augmented Multiscale Vision Transformer for E...
     with: Wu, C.Y.: Multigrid Method for Efficiently Training Video Models, A
     with: Wu, C.Y.: Multiview Compressive Coding for 3D Reconstruction
     with: Wu, C.Y.: MViTv2: Improved Multiscale Vision Transformers for Classifi...
     with: Wu, C.Y.: Reversible Vision Transformers
     with: Wu, X.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Xie, C.: Diffusion Models as Masked Autoencoders
     with: Xie, S.: CiT: Curation in Training for Effective Vision-Language Data
     with: Xie, S.: Masked Feature Prediction for Self-Supervised Visual Pre-Trai...
     with: Xie, S.N.: ConvNet for the 2020s, A
     with: Xiong, B.: Large-Scale Study on Unsupervised Spatiotemporal Representa...
     with: Xiong, B.: MeMViT: Memory-Augmented Multiscale Vision Transformer for ...
     with: Xiong, B.: Multiscale Vision Transformers
     with: Xiong, B.: Multiview Pseudo-Labeling for Semi-supervised Learning from...
     with: Xiong, B.: MViTv2: Improved Multiscale Vision Transformers for Classif...
     with: Xiong, B.: Reversible Vision Transformers
     with: Xu, E.Z.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Xu, H.: CiT: Curation in Training for Effective Vision-Language Data
     with: Xu, H.: Diffusion Models as Masked Autoencoders
     with: Xu, M.M.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Yagi, T.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Yan, M.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Yan, Z.C.: Multiscale Vision Transformers
     with: Yu, L.C.: CiT: Curation in Training for Effective Vision-Language Data
     with: Yuille, A.: Diffusion Models as Masked Autoencoders
     with: Yuille, A.L.: Masked Feature Prediction for Self-Supervised Visual Pre...
     with: Zettlemoyer, L.: CiT: Curation in Training for Effective Vision-Langua...
     with: Zhao, C.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Zhao, Z.W.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Zhu, Y.: Ego4D: Around the World in 3,000 Hours of Egocentric Video
     with: Zisserman, A.: Convolutional Two-Stream Network Fusion for Video Actio...
     with: Zisserman, A.: Deep Insights into Convolutional Networks for Video Rec...
     with: Zisserman, A.: Detect to Track and Track to Detect
     with: Zisserman, A.: What have We Learned from Deep Representations for Acti...
215 for Feichtenhofer, C.

Feichtinger, C. Standard Author Listing
     with: Aoki, T.: Geometric Multigrid Solver on Tsubame 2.0, A
     with: Kostler, H.: Geometric Multigrid Solver on Tsubame 2.0, A
     with: Rude, U.: Geometric Multigrid Solver on Tsubame 2.0, A

Feichtinger, H.G. Standard Author Listing
     with: Hampejs, M.: Approximation of Matrices by Gabor Multipliers
     with: Kozek, W.: Gabor Systems with Good TF-Localization and Applications to...
     with: Kracher, G.: Approximation of Matrices by Gabor Multipliers
     with: Lu, Y.H.: On a Complementary Condition to Derivation of Discrete Gabor...
     with: Morris, J.M.: On a Complementary Condition to Derivation of Discrete G...
     with: Prinz, P.: Gabor Systems with Good TF-Localization and Applications to...
     with: Strohmer, T.: Fast iterative reconstruction of band-limited images fro...
7 for Feichtinger, H.G.

Feick, R. Standard Author Listing
     with: Blakey, A.: Exploring rooftop photovoltaic cell feasibility through we...
     with: Boots, B.: Predicting Forest Age Classes from High Spatial Resolution ...
     with: Danahy, J.: Geovisualisation methods for exploring urban heat island e...
     with: Gray, A.: CWDAT: An Open-Source Tool for the Visualization and Analysi...
     with: Harrap, R.: Geovisualisation methods for exploring urban heat island e...
     with: King, L.: Geovisualisation methods for exploring urban heat island eff...
     with: Lawrence, H.: Spatial-Comprehensiveness (S-COM) Index: Identifying Opt...
     with: Mitchell, J.: Geovisualisation methods for exploring urban heat island...
     with: Nelson, T.: Predicting Forest Age Classes from High Spatial Resolution...
     with: Nelson, T.: Spatial-Comprehensiveness (S-COM) Index: Identifying Optim...
     with: Penny, J.: Geovisualisation methods for exploring urban heat island ef...
     with: Robertson, C.: CWDAT: An Open-Source Tool for the Visualization and An...
     with: Robertson, C.: Spatial-Comprehensiveness (S-COM) Index: Identifying Op...
     with: Wright, R.: Geovisualisation methods for exploring urban heat island ef...
     with: Wulder, M.: Predicting Forest Age Classes from High Spatial Resolution...
     with: Zhang, S.Q.: Understanding Public Opinions from Geosocial Media
16 for Feick, R.



Last update: 27-Apr-24 12:07:02
Use price@usc.edu for comments.