Edgar, A.D.[Albert D.],
Penn, S.C.[Steven C.],
Video editing by locating segment boundaries and reordering
segment sequences,
US_Patent 5,537,530, July 16, 1996
WWW Link.
BibRef
9607
Butler, S.[Sean],
Parkes, A.P.[Alan P.],
Film Sequence Generation Strategies for
Automatic Intelligent Video Editing,
AppAI(11), No. 4, June 1997, pp. 367-388.
9706
BibRef
Nack, F.,
Parkes, A.P.,
Toward the Automated Editing of Theme-Oriented Video Sequences,
AppAI(11), No. 4, June 1997, pp. 331-366.
9706
BibRef
Nack, F.,
Parkes, A.P.,
The Application of Video Semantics and Theme Representation in
Automated Video Editing,
MultToolApp(4), No. 1, January 1997, pp. 57-83.
9703
BibRef
Hua, X.S.[Xian-Sheng],
Lu, L.[Lie],
Zhang, H.J.[Hong-Jiang],
Optimization-based automated home video editing system,
CirSysVideo(14), No. 5, May 2004, pp. 572-583.
IEEE Abstract.
0407
BibRef
Hua, X.S.[Xian-Sheng],
Zhang, H.J.[Hong-Jiang],
Content and transformation effect matching for automated home video
editing,
ICIP04(III: 1613-1616).
IEEE DOI
0505
BibRef
Hua, X.S.[Xian-Sheng],
Chen, X.[Xian],
Zhang, H.J.[Hong-Jiang],
Robust video signature based on ordinal measure,
ICIP04(I: 685-688).
IEEE DOI
0505
BibRef
Shah, R.,
Narayanan, P.J.,
Interactive Video Manipulation Using Object Trajectories and Scene
Backgrounds,
CirSysVideo(23), No. 9, 2013, pp. 1565-1576.
IEEE DOI
1309
Cameras
BibRef
Sadek, R.[Rida],
Facciolo, G.[Gabriele],
Arias, P.[Pablo],
Caselles, V.[Vicent],
A Variational Model for Gradient-Based Video Editing,
IJCV(103), No. 1, May 2013, pp. 127-162.
Springer DOI
1305
See also Variational Framework for Exemplar-Based Image Inpainting, A.
BibRef
Facciolo, G.[Gabriele],
Sadek, R.[Rida],
Bugeau, A.[Aurélie],
Caselles, V.[Vicent],
Temporally Consistent Gradient Domain Video Editing,
EMMCVPR11(59-73).
Springer DOI
1107
BibRef
Xie, L.,
Natsev, A.,
He, X.,
Kender, J.R.,
Hill, M.,
Smith, J.R.,
Tracking Large-Scale Video Remix in Real-World Events,
MultMed(15), No. 6, 2013, pp. 1244-1254.
IEEE DOI
1309
image databases; YouTube; social networks
BibRef
Chen, S.F.[Shi-Feng],
Zhou, Q.A.[Qi-Ang],
Ding, H.J.[Hui-Jun],
Learning Boundary and Appearance for Video Object Cutout,
SPLetters(21), No. 1, January 2014, pp. 101-104.
IEEE DOI
1402
Markov processes
BibRef
Shi, Q.[Qun],
Nobuhara, S.,
Matsuyama, T.,
Augmented Motion History Volume for Spatiotemporal Editing of 3-D
Video in Multiparty Interaction Scenes,
CirSysVideo(25), No. 1, January 2015, pp. 63-76.
IEEE DOI
1502
BibRef
Earlier:
3DV13(414-421)
IEEE DOI
1311
image motion analysis.
computer graphics
BibRef
Su, P.C.[Po-Chyi],
Suei, P.L.[Pei-Lun],
Chang, M.K.[Min-Kuan],
Lain, J.[Jie],
Forensic and anti-forensic techniques for video shot editing in
H.264/AVC,
JVCIR(29), No. 1, 2015, pp. 103-113.
Elsevier DOI
1504
H.264/AVC
BibRef
Ahn, B.Y.[Byeong-Yong],
Koo, H.I.[Hyung Il],
Kim, H.I.[Hong Il],
Jeong, J.C.[Ji-Chull],
Cho, N.I.[Nam Ik],
Efficient Unwrap Representation of Faces for Video Editing,
SPLetters(22), No. 10, October 2015, pp. 1718-1722.
IEEE DOI
1506
image reconstruction
BibRef
Li, C.C.,
Lai, Y.C.,
Syu, N.S.,
Guo, H.N.,
Todorov, D.,
Yao, C.Y.,
EZCam: WYSWYG Camera Manipulator for Path Design,
CirSysVideo(27), No. 8, August 2017, pp. 1632-1646.
IEEE DOI
1708
Cameras, Feature extraction, Manipulators, Motion pictures,
Rendering (computer graphics), Robustness, Tracking,
Camera path design, camera transformation manipulator,
marker-based camera tracking
BibRef
Zhang, F.,
Wu, X.,
Li, R.,
Wang, J.,
Zheng, Z.,
Hu, S.,
Detecting and Removing Visual Distractors for Video Aesthetic
Enhancement,
MultMed(20), No. 8, August 2018, pp. 1987-1999.
IEEE DOI
1808
convolution, feature extraction, feedforward neural nets,
graph theory, learning (artificial intelligence), optimisation,
visual quality
BibRef
Chen, Z.,
Wang, J.,
Sheng, B.,
Li, P.,
Feng, D.D.,
Illumination-Invariant Video Cut-Out Using Octagon Sensitive
Optimization,
CirSysVideo(30), No. 5, May 2020, pp. 1410-1422.
IEEE DOI
2005
Image segmentation, Lighting, Motion segmentation,
Object segmentation, Feature extraction, Task analysis,
graph-cut
BibRef
Pérez-Rúa, J.M.[Juan-Manuel],
Miksik, O.[Ondrej],
Crivelli, T.[Tomas],
Bouthemy, P.[Patrick],
Torr, P.H.S.[Philip H. S.],
Pérez, P.[Patrick],
ROAM: A Rich Object Appearance Model with Application to Rotoscoping,
PAMI(42), No. 8, August 2020, pp. 1996-2010.
IEEE DOI
2007
BibRef
Earlier: A2, A1, A5, A6, Only:
CVPR17(7426-7434)
IEEE DOI
1711
Detailed delineation of scene elements through a video shot.
Video post-processing.
Adaptation models, Tools, Shape, Task analysis, Pipelines, Labeling,
Deformable models, Rotoscoping, trimaps, video segmentation,
dynamic programming.
Image color analysis.
BibRef
Cao, M.[Meng],
Huang, H.Z.[Hao-Zhi],
Wang, H.[Hao],
Wang, X.[Xuan],
Shen, L.[Li],
Wang, S.[Sheng],
Bao, L.C.[Lin-Chao],
Li, Z.F.[Zhi-Feng],
Luo, J.B.[Jie-Bo],
UniFaceGAN: A Unified Framework for Temporally Consistent Facial
Video Editing,
IP(30), 2021, pp. 6107-6116.
IEEE DOI
2107
Faces, Training, Task analysis, Image reconstruction, Optical losses,
Solid modeling, Facial video editing,
region-aware conditional normalization
BibRef
Wang, Z.[Zheng],
Li, J.G.[Jian-Guo],
Jiang, Y.G.[Yu-Gang],
Story-driven Video Editing,
MultMed(23), 2021, pp. 4027-4036.
IEEE DOI
2112
Task analysis, Visualization, Semantics, Image segmentation,
Context modeling, Proposals, Streaming media,
vision and language
BibRef
Bigioi, D.[Dan],
Basak, S.[Shubhajit],
Stypulkowski, M.[Michal],
Zieba, M.[Maciej],
Jordan, H.[Hugh],
McDonnell, R.[Rachel],
Corcoran, P.[Peter],
Speech driven video editing via an audio-conditioned diffusion model,
IVC(142), 2024, pp. 104911.
Elsevier DOI Code:
WWW Link.
2402
Video editing, Talking head generation, Generative AI,
Diffusion models, Dubbing
BibRef
Liang, S.[Susan],
Huang, C.[Chao],
Tian, Y.[Yapeng],
Kumar, A.[Anurag],
Xu, C.L.[Chen-Liang],
Language-guided Joint Audio-visual Editing via One-shot Adaptation,
ACCV24(VI: 123-139).
Springer DOI
2412
BibRef
Kahatapitiya, K.[Kumara],
Karjauv, A.[Adil],
Abati, D.[Davide],
Porikli, F.M.[Fatih M.],
Asano, Y.M.[Yuki M.],
Habibian, A.[Amirhossein],
Object-centric Diffusion for Efficient Video Editing,
ECCV24(LVII: 91-108).
Springer DOI
2412
BibRef
Deng, Y.F.[Yu-Fan],
Wang, R.[Ruida],
Zhang, Y.H.[Yu-Hao],
Tai, Y.W.[Yu-Wing],
Tang, C.K.[Chi-Keung],
Dragvideo: Interactive Drag-style Video Editing,
ECCV24(LVI: 183-199).
Springer DOI
2412
BibRef
Yoon, S.[Sunjae],
Koo, G.[Gwanhyeong],
Hong, J.W.[Ji Woo],
Yoo, C.D.[Chang D.],
DNI: Dilutional Noise Initialization for Diffusion Video Editing,
ECCV24(XLVIII: 180-195).
Springer DOI
2412
BibRef
Zhong, X.J.[Xiao-Jing],
Huang, X.[Xinyi],
Yang, X.F.[Xiao-Feng],
Lin, G.S.[Guo-Sheng],
Wu, Q.Y.[Qing-Yao],
Deco: Decoupled Human-centered Diffusion Video Editing with Motion
Consistency,
ECCV24(XLIV: 352-370).
Springer DOI
2412
BibRef
Jeong, H.[Hyeonho],
Chang, J.H.[Jin-Ho],
Park, G.Y.[Geon Yeong],
Ye, J.C.[Jong Chul],
Dreammotion: Space-time Self-similar Score Distillation for Zero-shot
Video Editing,
ECCV24(XXX: 358-376).
Springer DOI
2412
BibRef
Fan, X.[Xiang],
Bhattad, A.[Anand],
Krishna, R.[Ranjay],
Videoshop: Localized Semantic Video Editing with Noise-extrapolated
Diffusion Inversion,
ECCV24(XII: 232-250).
Springer DOI
2412
BibRef
Lee, J.[Jaekyeong],
Kim, G.[Geonung],
Cho, S.[Sunghyun],
RNA: Video Editing with Roi-based Neural Atlas,
ACCV24(VI: 278-293).
Springer DOI
2412
BibRef
Feng, Y.T.[Yu-Tang],
Gao, S.C.[Si-Cheng],
Bao, Y.X.[Yu-Xiang],
Wang, X.D.[Xiao-Di],
Han, S.[Shumin],
Zhang, J.[Juan],
Zhang, B.C.[Bao-Chang],
Yao, A.[Angela],
Wave: Warping Ddim Inversion Features for Zero-shot Text-to-video
Editing,
ECCV24(LXXVI: 38-55).
Springer DOI
2412
BibRef
Chen, M.H.[Ming-Hao],
Laina, I.[Iro],
Vedaldi, A.[Andrea],
DGE: Direct Gaussian 3d Editing by Consistent Multi-view Editing,
ECCV24(LXXIV: 74-92).
Springer DOI
2412
BibRef
Song, Y.[Yeji],
Shin, W.S.[Won-Sik],
Lee, J.[Junsoo],
Kim, J.[Jeesoo],
Kwak, N.[Nojun],
Save: Protagonist Diversification with Structure Agnostic Video Editing,
ECCV24(LXXX: 41-57).
Springer DOI
2412
BibRef
Singer, U.[Uriel],
Zohar, A.[Amit],
Kirstain, Y.[Yuval],
Sheynin, S.[Shelly],
Polyak, A.[Adam],
Parikh, D.[Devi],
Taigman, Y.[Yaniv],
Video Editing via Factorized Diffusion Distillation,
ECCV24(LXXVI: 450-466).
Springer DOI
2412
BibRef
Mou, L.[Linzhan],
Chen, J.K.[Jun-Kun],
Wang, Y.X.[Yu-Xiong],
Instruct 4D-to-4D: Editing 4D Scenes as Pseudo-3D Scenes Using 2D
Diffusion,
CVPR24(20176-20185)
IEEE DOI
2410
Integrated optics, Codes, Optical propagation, Diffusion models,
Pattern recognition
BibRef
Ma, H.Y.[Hao-Yu],
Mahdizadehaghdam, S.[Shahin],
Wu, B.[Bichen],
Fan, Z.P.[Zhi-Peng],
Gu, Y.C.[Yu-Chao],
Zhao, W.L.[Wen-Liang],
Shapira, L.[Lior],
Xie, X.H.[Xiao-Hui],
MaskINT: Video Editing via Interpolative Non-autoregressive Masked
Transformers,
CVPR24(7403-7412)
IEEE DOI
2410
Training, Interpolation, Generative AI, Text to image, Transformers,
Diffusion models, video editing, masked transformers, video generation
BibRef
Li, X.[Xirui],
Ma, C.[Chao],
Yang, X.K.[Xiao-Kang],
Yang, M.H.[Ming-Hsuan],
VidToMe: Video Token Merging for Zero-Shot Video Editing,
CVPR24(7486-7495)
IEEE DOI
2410
Image coding, Merging, Memory management, Coherence,
Diffusion models, Rendering (computer graphics), diffusion model,
zero-shot
BibRef
Li, M.[Maomao],
Li, Y.[Yu],
Yang, T.Y.[Tian-Yu],
Liu, Y.F.[Yun-Fei],
Yue, D.X.[Dong-Xu],
Lin, Z.H.[Zhi-Hui],
Xu, D.[Dong],
A Video is Worth 256 Bases: Spatial-Temporal Expectation-Maximization
Inversion for Zero-Shot Video Editing,
CVPR24(7528-7537)
IEEE DOI Code:
WWW Link.
2410
Noise, Noise measurement, Video inversion,
Low-rank, Expectation-Maximization
BibRef
Liu, J.W.[Jia-Wei],
Cao, Y.P.[Yan-Pei],
Wu, J.Z.J.[Jay Zhang-Jie],
Mao, W.J.[Wei-Jia],
Gu, Y.C.[Yu-Chao],
Zhao, R.[Rui],
Keppo, J.[Jussi],
Shan, Y.[Ying],
Shou, M.Z.[Mike Zheng],
DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and
View-Change Human-Centric Video Editing,
CVPR24(7664-7674)
IEEE DOI Code:
WWW Link.
2410
Codes, Deformation, Dynamics, Superresolution, Pipelines,
4D Human-Centric Video Editing, Dynamic Video-NeRF,
Large Motion and Viewpoint Changes
BibRef
Tu, S.Y.[Shu-Yuan],
Dai, Q.[Qi],
Cheng, Z.Q.[Zhi-Qi],
Hu, H.[Han],
Han, X.T.[Xin-Tong],
Wu, Z.[Zuxuan],
Jiang, Y.G.[Yu-Gang],
MotionEditor: Editing Video Motion via Content-Aware Diffusion,
CVPR24(7882-7891)
IEEE DOI
2410
Adaptation models, Heuristic algorithms, Noise, Dynamics,
Diffusion models, Controllability, video editing, diffusion models,
motion editing
BibRef
Liu, S.[Shaoteng],
Zhang, Y.[Yuechen],
Li, W.B.[Wen-Bo],
Lin, Z.[Zhe],
Jia, J.Y.[Jia-Ya],
Video-P2P: Video Editing with Cross-Attention Control,
CVPR24(8599-8608)
IEEE DOI
2410
Adaptation models, Costs, Accuracy, Image synthesis,
Computational modeling, Diffusion models
BibRef
Zhang, G.[Guiwei],
Zhang, T.Y.[Tian-Yu],
Niu, G.L.[Guang-Lin],
Tan, Z.C.[Zi-Chang],
Bai, Y.[Yalong],
Yang, Q.[Qing],
CAMEL: CAusal Motion Enhancement Tailored for Lifting Text-Driven
Video Editing,
CVPR24(9079-9088)
IEEE DOI
2410
Wavelet transforms, Visualization, Attention mechanisms,
Video sequences, Focusing, Coherence, Diffusion models,
Video-editing
BibRef
Harsha, S.S.[Sai Sree],
Revanur, A.[Ambareesh],
Agarwal, D.[Dhwanit],
Agrawal, S.[Shradha],
GenVideo: One-shot target-image and shape aware video editing using
T2I diffusion models,
GCV24(7559-7568)
IEEE DOI
2410
Visualization, Shape, Pipelines, Noise, Wheels, diffusion models,
shape aware mask, invedit, video editing, one shot target-aware mask
BibRef
Feng, R.[Ruoyu],
Weng, W.M.[Wen-Ming],
Wang, Y.H.[Yan-Hui],
Yuan, Y.H.[Yu-Hui],
Bao, J.M.[Jian-Min],
Luo, C.[Chong],
Chen, Z.B.[Zhi-Bo],
Guo, B.[Baining],
CCEdit: Creative and Controllable Video Editing via Diffusion Models,
CVPR24(6712-6722)
IEEE DOI
2410
Text to image, Computer architecture, Benchmark testing,
Network architecture, Diffusion models,
generative models
BibRef
Souček, T.[Tomáš],
Damen, D.[Dima],
Wray, M.[Michael],
Laptev, I.[Ivan],
Sivic, J.[Josef],
GenHowTo: Learning to Generate Actions and State Transformations from
Instructional Videos,
CVPR24(6561-6571)
IEEE DOI
2410
Computational modeling, Transforms, Diffusion models,
Videos, image generation, image editing,
generative models
BibRef
Kara, O.[Ozgur],
Kurtkaya, B.[Bariscan],
Yesiltepe, H.[Hidir],
Rehg, J.M.[James M.],
Yanardag, P.[Pinar],
RAVE: Randomized Noise Shuffling for Fast and Consistent Video
Editing with Diffusion Models,
CVPR24(6507-6516)
IEEE DOI
2410
Training, Visualization, Shape, Noise, Semantics, Memory management,
Text to image, video editing, diffusion models, generative ai
BibRef
Chan, C.H.[Cheng-Hung],
Yuan, C.Y.[Cheng-Yang],
Sun, C.[Cheng],
Chen, H.T.[Hwann-Tzong],
Hashing Neural Video Decomposition with Multiplicative Residuals in
Space-Time,
ICCV23(7709-7719)
IEEE DOI Code:
WWW Link.
2401
BibRef
Khandelwal, A.[Anant],
InFusion: Inject and Attention Fusion for Multi Concept Zero-Shot
Text-based Video Editing,
CVEU23(3009-3018)
IEEE DOI
2401
BibRef
Muralikrishnan, S.[Sanjeev],
Huang, C.H.P.[Chun-Hao Paul],
Ceylan, D.[Duygu],
Mitra, N.J.[Niloy J.],
BLiSS: Bootstrapped Linear Shape Space,
3DV24(569-580)
IEEE DOI
2408
Deformable models, Solid modeling, Shape, Deformation, Buildings,
Registers, body models, 3D deformation, shape space
BibRef
Ceylan, D.[Duygu],
Huang, C.H.P.[Chun-Hao P.],
Mitra, N.J.[Niloy J.],
Pix2Video: Video Editing using Image Diffusion,
ICCV23(23149-23160)
IEEE DOI Code:
WWW Link.
2401
BibRef
Chai, W.H.[Wen-Hao],
Guo, X.[Xun],
Wang, G.[Gaoang],
Lu, Y.[Yan],
StableVideo: Text-driven Consistency-aware Diffusion Video Editing,
ICCV23(22983-22993)
IEEE DOI Code:
WWW Link.
2401
BibRef
Qi, C.Y.[Chen-Yang],
Cun, X.D.[Xiao-Dong],
Zhang, Y.[Yong],
Lei, C.Y.[Chen-Yang],
Wang, X.[Xintao],
Shan, Y.[Ying],
Chen, Q.F.[Qi-Feng],
FateZero: Fusing Attentions for Zero-shot Text-based Video Editing,
ICCV23(15886-15896)
IEEE DOI
2401
BibRef
Ali, M.H.[Moayed Haji],
Bond, A.[Andrew],
Karacan, L.[Levent],
Birdal, T.[Tolga],
Erdem, E.[Erkut],
Ceylan, D.[Duygu],
Erdem, A.[Aykut],
VidStyleODE: Disentangled Video Editing via StyleGAN and NeuralODEs,
ICCV23(7489-7500)
IEEE DOI Code:
WWW Link.
2401
BibRef
Lee, Y.C.[Yao-Chih],
Jang, J.Z.G.[Ji-Ze Genevieve],
Chen, Y.T.[Yi-Ting],
Qiu, E.[Elizabeth],
Huang, J.B.[Jia-Bin],
Shape-Aware Text-Driven Layered Video Editing,
CVPR23(14317-14326)
IEEE DOI
2309
BibRef
Frühstück, A.[Anna],
Sarafianos, N.[Nikolaos],
Xu, Y.[Yuanlu],
Wonka, P.[Peter],
Tung, T.[Tony],
VIVE3D: Viewpoint-Independent Video Editing using 3D-Aware GANs,
CVPR23(4446-4455)
IEEE DOI
2309
BibRef
Shen, Y.J.[Yao-Jie],
Zhang, L.[Libo],
Xu, K.[Kai],
Jin, X.J.[Xiao-Jie],
AutoTransition: Learning to Recommend Video Transition Effects,
ECCV22(XXXVIII:285-300).
Springer DOI
2211
BibRef
Argaw, D.M.[Dawit Mureja],
Heilbron, F.C.[Fabian Caba],
Lee, J.Y.[Joon-Young],
Woodson, M.[Markus],
Kweon, I.S.[In So],
The Anatomy of Video Editing: A Dataset and Benchmark Suite for
AI-Assisted Video Editing,
ECCV22(VIII:201-218).
Springer DOI
2211
BibRef
Davtyan, A.[Aram],
Favaro, P.[Paolo],
Controllable Video Generation Through Global and Local Motion Dynamics,
ECCV22(XVII:68-84).
Springer DOI
2211
BibRef
Ge, S.W.[Song-Wei],
Hayes, T.[Thomas],
Yang, H.[Harry],
Yin, X.[Xi],
Pang, G.[Guan],
Jacobs, D.[David],
Huang, J.B.[Jia-Bin],
Parikh, D.[Devi],
Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive
Transformer,
ECCV22(XVII:102-118).
Springer DOI
2211
BibRef
Huang, J.[Jiahui],
Jin, Y.[Yuhe],
Yi, K.M.[Kwang Moo],
Sigal, L.[Leonid],
Layered Controllable Video Generation,
ECCV22(XVI:546-564).
Springer DOI
2211
WWW Link.
BibRef
Bar-Tal, O.[Omer],
Ofri-Amar, D.[Dolev],
Fridman, R.[Rafail],
Kasten, Y.[Yoni],
Dekel, T.[Tali],
Text2LIVE: Text-Driven Layered Image and Video Editing,
ECCV22(XV:707-723).
Springer DOI
2211
BibRef
Xu, Y.[Yiran],
AlBahar, B.[Badour],
Huang, J.B.[Jia-Bin],
Temporally Consistent Semantic Video Editing,
ECCV22(XV:357-374).
Springer DOI
2211
BibRef
Mai, L.[Long],
Liu, F.[Feng],
Motion-Adjustable Neural Implicit Video Representation,
CVPR22(10728-10737)
IEEE DOI
2210
Interpolation, Visualization, Image coding, Smoothing methods,
Filtering, Dynamics, Generators, Image and video synthesis and generation
BibRef
Menapace, W.[Willi],
Lathuilière, S.[Stéphane],
Siarohin, A.[Aliaksandr],
Theobalt, C.[Christian],
Tulyakov, S.[Sergey],
Golyanik, V.[Vladislav],
Ricci, E.[Elisa],
Playable Environments: Video Manipulation in Space and Time,
CVPR22(3574-3583)
IEEE DOI
2210
Modulation, Machine learning, Aerospace electronics,
Benchmark testing, Cameras, Self- semi- meta- unsupervised learning
BibRef
Fu, T.J.[Tsu-Jui],
Wang, X.E.[Xin Eric],
Grafton, S.T.[Scott T.],
Eckstein, M.P.[Miguel P.],
Wang, W.Y.[William Yang],
M3L: Language-based Video Editing via Multi-Modal Multi-Level
Transformers,
CVPR22(10503-10512)
IEEE DOI
2210
Fuses, Semantics, Natural languages, Transformers,
Task analysis, Vision + X,
Image and video synthesis and generation
BibRef
Ardino, P.[Pierfrancesco],
de Nadai, M.[Marco],
Lepri, B.[Bruno],
Ricci, E.[Elisa],
Lathuilière, S.[Stéphane],
Click to Move: Controlling Video Generation with Sparse Motion,
ICCV21(14729-14738)
IEEE DOI
2203
Codes, Convolution, Motion segmentation, Video sequences,
Deep architecture,
Motion and tracking
BibRef
Kumar, B.G.V.[B.G. Vijay],
Subramanian, J.[Jeyasri],
Chordia, V.[Varnith],
Bart, E.[Eugene],
Fang, S.B.[Shao-Bo],
Guan, K.[Kelly],
Bala, R.[Raja],
STRIVE: Scene Text Replacement In Videos,
ICCV21(14529-14538)
IEEE DOI
2203
Geometry, Limiting, Shape, Lighting, Transforms,
Image and video synthesis, Computational photography,
Vision applications and systems
BibRef
Koorathota, S.[Sharath],
Adelman, P.[Patrick],
Cotton, K.[Kelly],
Sajda, P.[Paul],
Editing like Humans: A Contextual, Multimodal Framework for Automated
Video Editing,
MULA21(1701-1709)
IEEE DOI
2109
Training, Visualization, Semantics, Metadata
BibRef
Bermudez, L.[Luis],
Dabby, N.[Nadine],
Lin, Y.X.A.[Ying-Xi Adelle],
Hilmarsdottir, S.[Sara],
Sundararajan, N.[Narayan],
Kar, S.[Swarnendu],
A Learning-Based Approach to Parametric Rotoscoping of Multi-Shape
Systems,
WACV21(776-785)
IEEE DOI
2106
Visual Effects post-production.
Shape, Production, Manuals, Machine learning, Predictive models,
Tools, Visual effects
BibRef
Tripathi, S.[Shashank],
Chandra, S.[Siddhartha],
Agrawal, A.[Amit],
Tyagi, A.[Ambrish],
Rehg, J.M.[James M.],
Chari, V.[Visesh],
Learning to Generate Synthetic Data via Compositing,
CVPR19(461-470).
IEEE DOI
2002
BibRef
Meyer, S.[Simone],
Sorkine-Hornung, A.[Alexander],
Gross, M.[Markus],
Phase-Based Modification Transfer for Video,
ECCV16(III: 633-648).
Springer DOI
1611
BibRef
Taylor, W.,
Qureshi, F.Z.,
Automatic video editing for sensor-rich videos,
WACV16(1-9)
IEEE DOI
1606
Accelerometers
BibRef
Manickam, N.[Nithya],
Chandran, S.[Sharat],
Automontage: Photo sessions made easy,
ICIP13(1321-1325)
IEEE DOI
1402
Cameras
BibRef
Chen, J.W.[Jia-Wen],
Paris, S.[Sylvain],
Wang, J.[Jue],
Matusik, W.[Wojciech],
Cohen, M.[Michael],
Durand, F.[Fredo],
The video mesh: A data structure for image-based three-dimensional
video editing,
ICCP11(1-8).
IEEE DOI
1208
BibRef
Earlier: A6, A5, A1, A2, A3, A4:
The Video Mesh: A Data Structure for Image-based Video Editing,
CSAIL(TR-2009-062). 2009-12-16
WWW Link.
1101
Video as 2.5D paper cutouts. Interactive editing of objects and depth.
BibRef
Rajgopalan, V.[Vaishnavi],
Ranganathan, A.[Ananth],
Rajagopalan, R.[Ramgopal],
Mudur, S.P.[Sudhir P.],
Keyframe-Guided Automatic Non-linear Video Editing,
ICPR10(3236-3239).
IEEE DOI
1008
BibRef
Yoshitaka, A.[Atsuo],
Deguchi, Y.[Yoshiki],
Rendition-based video editing for public contents authoring,
ICIP09(1825-1828).
IEEE DOI
0911
Editing by specifying the emotion to emphasize, rather than a frame number.
BibRef
Slot, K.[Kristine],
Truelsen, R.[René],
Sporring, J.[Jon],
Content-Aware Video Editing in the Temporal Domain,
SCIA09(490-499).
Springer DOI
0906
BibRef
Wang, H.C.[Hong-Cheng],
Xu, N.[Ning],
Raskar, R.[Ramesh],
Ahuja, N.[Narendra],
Videoshop: A New Framework for Spatio-Temporal Video Editing in
Gradient Domain,
CVPR05(II: 1201).
IEEE DOI
0507
BibRef
Wang, H.C.[Hong-Cheng],
Raskar, R.,
Ahuja, N.,
Seamless video editing,
ICPR04(III: 858-861).
IEEE DOI
0409
BibRef
Kumano, M.,
Ariki, Y.,
Amano, M.,
Uehara, K.,
Shunto, K.,
Tsukada, K.,
Video editing support system based on video grammar and content
analysis,
ICPR02(II: 1031-1036).
IEEE DOI
0211
BibRef
Chapter on 3-D Object Description and Computation Techniques, Surfaces, Deformable, View Generation, Video Conferencing continues in
Text to Video Synthesis, Text to Motion.