11.14.3.5.4 Diffusion Models in 3D Synthesis, Text to 3D Models

Chapter Contents
Diffusion. Synthesis. 3D Synthesis. Text to 3-D. Text to Depth.

Cao, Z.[Ziang], Hong, F.Z.[Fang-Zhou], Wu, T.[Tong], Pan, L.[Liang], Liu, Z.W.[Zi-Wei],
DiffTF++: 3D-Aware Diffusion Transformer for Large-Vocabulary 3D Generation,
PAMI(47), No. 4, April 2025, pp. 3018-3030.
IEEE DOI 2503
Transformers, Diffusion models, Fitting, Training, Solid modeling, Neural radiance field, Image reconstruction, Topology, transformer BibRef

Xu, H.F.[Hai-Feng], Huai, Y.J.[Yong-Jian], Nie, X.Y.[Xiao-Ying], Meng, Q.[Qingkuo], Zhao, X.[Xun], Pei, X.[Xuanda], Lu, H.[Hao],
Diff-Tree: A Diffusion Model for Diversified Tree Point Cloud Generation with High Realism,
RS(17), No. 5, 2025, pp. 923.
DOI Link 2503
BibRef

Wan, B.[Boyan], Shi, Y.F.[Yi-Fei], Chen, X.H.[Xiao-Hong], Xu, K.[Kai],
Equivariant Diffusion Model With A5-Group Neurons for Joint Pose Estimation and Shape Reconstruction,
PAMI(47), No. 6, June 2025, pp. 4343-4357.
IEEE DOI 2505
Shape, Diffusion models, Pose estimation, Neurons, Feature extraction, Image reconstruction, Vectors, shape reconstruction BibRef

Zhang, N.[Nan], Liu, Y.[Youmeng], Liu, H.[Hao], Tian, T.[Tian], Ma, J.Y.[Jia-Yi], Tian, J.W.[Jin-Wen],
Hierarchical diffusion models for generating various pattern vehicles in infrared aerial images,
PR(166), 2025, pp. 111658.
Elsevier DOI 2505
Infrared aerial image, Data augmentation, Diffusion model, Vehicle detection BibRef

Chen, J.[Jian], Chen, Y.[Yu], Zhao, J.[Jieyu], Ma, C.[Chenjun],
A Discrete Index Graph Diffusion Model for 3D Meshes Synthesis,
CirSysVideo(35), No. 8, August 2025, pp. 8002-8015.
IEEE DOI 2508
Shape, Diffusion models, Solid modeling, Point cloud compression, Computer architecture, Topology, Surface reconstruction, 3D conditional generation BibRef

Yang, X.Y.[Xing-Yi], Liu, S.[Songhua], Wang, X.C.[Xin-Chao],
Hash3D: Training-free Acceleration for 3D Generation,
CVPR25(21481-21491)
IEEE DOI Code:
WWW Link. 2508
Training, Solid modeling, Adaptation models, Computational modeling, Cameras, Diffusion models, Optimization BibRef

Chen, J.[Jinnan], Zhu, L.T.[Ling-Ting], Hu, Z.[Zeyu], Qian, S.J.[Sheng-Ju], Chen, Y.G.[Yu-Gang], Wang, X.[Xin], Lee, G.H.[Gim Hee],
MAR-3D: Progressive Masked Auto-regressor for High-Resolution 3D Generation,
CVPR25(11083-11092)
IEEE DOI 2508
Training, Visualization, Mars, Vector quantization, Noise reduction, Transformers, Diffusion models, Convergence, 3d generation, mesh generation BibRef

Pan, T.Y.[Tai-Yu], Jeon, S.[Sooyoung], Fan, M.[Mengdi], Yoo, J.[Jinsu], Feng, Z.Y.[Zhen-Yang], Campbell, M.[Mark], Weinberger, K.Q.[Kilian Q.], Hariharan, B.[Bharath], Chao, W.L.[Wei-Lun],
Transfer Your Perspective: Controllable 3D Generation from Any Viewpoint in a Driving Scene,
CVPR25(12027-12036)
IEEE DOI 2508
Semantics, Layout, Collaboration, Data collection, Diffusion models, Sensors, Faces, autonomous driving, 3d generation, collaborative driving BibRef

Zhao, W.[Wang], Cao, Y.P.[Yan-Pei], Xu, J.[Jiale], Dong, Y.[Yuejiang], Shan, Y.[Ying],
DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation,
CVPR25(11061-11072)
IEEE DOI 2508
Solid modeling, Procedural generation, Shape, Noise reduction, Transformer cores, Transformers, Parametric statistics, Tuning BibRef

Zhang, L.Y.[Ling-Yun], Xie, Y.[Yu], Fu, Y.W.[Yan-Wei], Chen, P.[Ping],
Concept Replacer: Replacing Sensitive Concepts in Diffusion Models via Precision Localization,
CVPR25(8172-8181)
IEEE DOI Code:
WWW Link. 2508
Location awareness, Training, Image synthesis, Noise reduction, Text to image, Diffusion models, User experience, concept replacing BibRef

Lin, J.T.[Jian-Tao], Yang, X.[Xin], Chen, M.[Meixi], Xu, Y.J.[Ying-Jie], Yan, D.[Dongyu], Wu, L.[Leyi], Xu, X.[Xinli], Xu, L.[Lie], Zhang, S.[Shunsi], Chen, Y.C.[Ying-Cong],
Kiss3DGen: Repurposing Image Diffusion Models for 3D Asset Generation,
CVPR25(5870-5880)
IEEE DOI Code:
WWW Link. 2508
Training, Solid modeling, Image synthesis, Computational modeling, Transforms, Diffusion models, Image reconstruction, 3d generation BibRef

Chen, M.H.[Ming-Hao], Shapovalov, R.[Roman], Laina, I.[Iro], Monnier, T.[Tom], Wang, J.Y.[Jian-Yuan], Novotny, D.[David], Vedaldi, A.[Andrea],
PartGen: Part-level 3D Generation and Reconstruction with Multi-View Diffusion Models,
CVPR25(5881-5892)
IEEE DOI 2508
Image segmentation, Shape, Diffusion models, Generators, Filling, Image reconstruction, Periodic structures, 3d generation, generative model BibRef

Bai, T.[Tongyuan], Bai, W.[Wangyuanfan], Chen, D.[Dong], Wu, T.[Tieru], Li, M.[Manyi], Ma, R.[Rui],
FreeScene: Mixed Graph Diffusion for 3D Scene Synthesis from Free Prompts,
CVPR25(5893-5903)
IEEE DOI 2508
Solid modeling, Retrieval augmented generation, Noise reduction, Process control, Controllability, Transformers, layout generation BibRef

Ni, J.F.[Jun-Feng], Liu, Y.[Yu], Lu, R.J.[Rui-Jie], Zhou, Z.[Zirui], Zhu, S.C.[Song-Chun], Chen, Y.X.[Yi-Xin], Huang, S.Y.[Si-Yuan],
Decompositional Neural Scene Reconstruction with Generative Diffusion Prior,
CVPR25(6022-6033)
IEEE DOI Code:
WWW Link. 2508
Geometry, Degradation, Shape, Semantics, Pipelines, Visual effects, Image reconstruction, Optimization, 3d scene reconstruction, generative prior BibRef

Yan, Y.Z.[Yun-Zhi], Xu, Z.[Zhen], Lin, H.T.[Hao-Tong], Jin, H.[Haian], Guo, H.Y.[Hao-Yu], Wang, Y.[Yida], Zhan, K.[Kun], Lang, X.P.[Xian-Peng], Bao, H.J.[Hu-Jun], Zhou, X.W.[Xiao-Wei], Peng, S.[Sida],
StreetCrafter: Street View Synthesis with Controllable Video Diffusion Models,
CVPR25(822-832)
IEEE DOI Code:
WWW Link. 2508
Training, Laser radar, Accuracy, Streaming media, Rendering (computer graphics), Diffusion models, Cameras, Vehicle dynamics BibRef

Guizilini, V.[Vitor], Irshad, M.Z.[Muhammad Zubair], Chen, D.[Dian], Shakhnarovich, G.[Greg], Ambrus, R.[Rares],
Zero-Shot Novel View and Depth Synthesis with Multi-View Geometric Diffusion,
CVPR25(764-776)
IEEE DOI 2508
Training, Visualization, Depth measurement, Computational modeling, Pipelines, Benchmark testing, Multitasking, Standards, diffusion models BibRef

Li, L.[Lingen], Zhang, Z.Y.[Zhao-Yang], Li, Y.W.[Yao-Wei], Xu, J.[Jiale], Hu, W.B.[Wen-Bo], Li, X.Y.[Xiao-Yu], Cheng, W.H.[Wei-Hao], Gu, J.[Jinwei], Xue, T.F.[Tian-Fan], Shan, Y.[Ying],
NVComposer: Boosting Generative Novel View Synthesis with Multiple Sparse and Unposed Images,
CVPR25(777-787)
IEEE DOI 2508
Training, Pose estimation, Diffusion models, Cameras, Boosting, Data models, novel view synthesis, video generation. BibRef

Zhu, D.K.[De-Kai], Di, Y.[Yan], Gavranovic, S.[Stefan], Ilic, S.[Slobodan],
SeaLion: Semantic Part-Aware Latent Point Diffusion Models for 3D Generation,
CVPR25(11789-11798)
IEEE DOI 2508
Point cloud compression, Measurement, Solid modeling, Shape, Semantics, Noise reduction, Diffusion models, Data augmentation, point cloud BibRef

Wu, J.Z.J.[Jay Zhang-Jie], Zhang, Y.X.[Yu-Xuan], Turki, H.[Haithem], Ren, X.[Xuanchi], Gao, J.[Jun], Shou, M.Z.[Mike Zheng], Fidler, S.[Sanja], Gojcic, Z.[Zan], Ling, H.[Huan],
Difix3D+: Improving 3D Reconstructions with Single-Step Diffusion Models,
CVPR25(26024-26035)
IEEE DOI 2508
Solid modeling, Pipelines, Diffusion models, Neural radiance field, Rendering (computer graphics), Real-time systems, Image reconstruction BibRef

Wang, X.[Xuan], Gao, X.T.[Xi-Tong], Liao, D.P.[Dong-Ping], Qin, T.R.[Tian-Rui], Lu, Y.L.[Yu-Liang], Xu, C.Z.[Cheng-Zhong],
A3: Few-shot Prompt Learning of Unlearnable Examples with Cross-Modal Adversarial Feature Alignment,
CVPR25(9507-9516)
IEEE DOI 2508
Training, Learning systems, Adaptation models, Perturbation methods, Pressing, Harmonic analysis, Data models, contrastive vision-language pre-training models BibRef

Ma, Z.Y.[Zhi-Yuan], Liang, X.Y.[Xin-Yue], Wu, R.Y.[Rong-Yuan], Zhu, X.Y.[Xiang-Yu], Lei, Z.[Zhen], Zhang, L.[Lei],
Progressive Rendering Distillation: Adapting Stable Diffusion for Instant Text-to-Mesh Generation without 3D Data,
CVPR25(11036-11050)
IEEE DOI 2508
Training, Adaptation models, Solid modeling, Training data, Text to image, Diffusion models, Rendering (computer graphics), stable diffusion BibRef

Yang, Y.B.[Yuan-Bo], Shao, J.H.[Jia-Hao], Li, X.Y.[Xin-Yang], Shen, Y.J.[Yu-Jun], Geiger, A.[Andreas], Liao, Y.[Yiyi],
Prometheus: 3D-Aware Latent Diffusion Models for Feed-Forward Text-to-3D Scene Generation,
CVPR25(2857-2869)
IEEE DOI 2508
Geometry, Text to image, Diffusion models, Image reconstruction, 3d generation, diffusion BibRef

Jo, K.[Kyungmin], Choo, J.[Jaegul],
Skip-and-Play: Depth-Driven Pose-Preserved Image Generation for Any Objects,
WACV25(690-699)
IEEE DOI 2505
Training, Solid modeling, Shape, Image synthesis, Noise, Nose, Cameras, Controllability, Reliability, diffusion model, pose control, depth, image generation BibRef

Poleski, M.[Mateusz], Tabor, J.[Jacek], Spurek, P.[Przemyslaw],
GeoGuide: Geometric Guidance of Diffusion Models,
WACV25(297-305)
IEEE DOI 2505
Manifolds, Training, Measurement, Visualization, Image synthesis, Noise reduction, Diffusion models, Probabilistic logic, guidance BibRef

Askari, H.[Hossein], Roosta, F.[Fred], Sun, H.F.[Hong-Fu],
Training-free Medical Image Inverses via Bi-level Guided Diffusion Models,
WACV25(75-84)
IEEE DOI Code:
WWW Link. 2505
Inverse problems, Image synthesis, Computed tomography, Magnetic resonance imaging, Diffusion models, Noise measurement, Biomedical imaging BibRef

Ahn, S.[Suhyun], Park, W.[Wonjung], Cho, J.[Jihoon], Park, J.[Jinah],
Volumetric Conditioning Module to Control Pretrained Diffusion Models for 3D Medical Images,
WACV25(85-95)
IEEE DOI Code:
WWW Link. 2505
Training, Solid modeling, Image synthesis, Superresolution, Training data, Graphics processing units, Diffusion models, medical images BibRef

Zeng, Y.F.[Yi-Fei], Jiang, Y.Q.[Yan-Qin], Zhu, S.[Siyu], Lu, Y.X.[Yuan-Xun], Lin, Y.[Youtian], Zhu, H.[Hao], Hu, W.M.[Wei-Ming], Cao, X.[Xun], Yao, Y.[Yao],
STAG4D: Spatial-Temporal Anchored Generative 4D Gaussians,
ECCV24(XXXVI: 163-179).
Springer DOI 2412
BibRef

Liu, Y.H.[Yu-Heng], Li, X.K.[Xin-Ke], Li, X.T.[Xue-Ting], Qi, L.[Lu], Li, C.S.[Chong-Shou], Yang, M.H.[Ming-Hsuan],
Pyramid Diffusion for Fine 3d Large Scene Generation,
ECCV24(LXIX: 71-87).
Springer DOI 2412
BibRef

Liu, Y.[Yibo], Yang, Z.[Zheyuan], Wu, G.[Guile], Ren, Y.[Yuan], Lin, K.[Kejian], Liu, B.B.[Bing-Bing], Liu, Y.[Yang], Shan, J.J.[Jin-Jun],
VQA-DIFF: Exploiting VQA and Diffusion for Zero-shot Image-to-3d Vehicle Asset Generation in Autonomous Driving,
ECCV24(LXVI: 323-340).
Springer DOI 2412
BibRef

Burgess, J.[James], Wang, K.C.[Kuan-Chieh], Yeung-Levy, S.[Serena],
Viewpoint Textual Inversion: Discovering Scene Representations and 3d View Control in 2d Diffusion Models,
ECCV24(LXIV: 416-435).
Springer DOI 2412
BibRef

Kalischek, N.[Nikolai], Peters, T.[Torben], Wegner, J.D.[Jan D.], Schindler, K.[Konrad],
Tetradiffusion: Tetrahedral Diffusion Models for 3d Shape Generation,
ECCV24(LIII: 357-373).
Springer DOI 2412
BibRef

Shim, D.[Dongseok], Kim, H.J.[H. Jin],
SEDIFF: Structure Extraction for Domain Adaptive Depth Estimation via Denoising Diffusion Models,
ECCV24(XIX: 37-53).
Springer DOI 2412
BibRef

Chen, Y.W.[Yong-Wei], Wang, T.F.[Teng-Fei], Wu, T.[Tong], Pan, X.G.[Xin-Gang], Jia, K.[Kui], Liu, Z.W.[Zi-Wei],
Comboverse: Compositional 3D Assets Creation Using Spatially-aware Diffusion Guidance,
ECCV24(XXIV: 128-146).
Springer DOI 2412
BibRef

Cheng, F.[Feng], Luo, M.[Mi], Wang, H.Y.[Hui-Yu], Dimakis, A.[Alex], Torresani, L.[Lorenzo], Bertasius, G.[Gedas], Grauman, K.[Kristen],
4DIFF: 3D-Aware Diffusion Model for Third-to-first Viewpoint Translation,
ECCV24(XXIV: 409-427).
Springer DOI 2412
BibRef

Yang, X.F.[Xiao-Feng], Chen, Y.W.[Yi-Wen], Chen, C.[Cheng], Zhang, C.[Chi], Xu, Y.[Yi], Yang, X.[Xulei], Liu, F.[Fayao], Lin, G.S.[Guo-Sheng],
Learn to Optimize Denoising Scores: A Unified and Improved Diffusion Prior for 3d Generation,
ECCV24(XLIV: 136-152).
Springer DOI 2412
BibRef

Yoshiyasu, Y.[Yusuke], Sun, L.Y.[Le-Yuan],
Diffsurf: A Transformer-based Diffusion Model for Generating and Reconstructing 3d Surfaces in Pose,
ECCV24(LXXXII: 246-264).
Springer DOI 2412
BibRef

Shim, H.J.[Ha-Jin], Kim, C.H.[Chang-Hun], Yang, E.[Eunho],
Cloudfixer: Test-time Adaptation for 3d Point Clouds via Diffusion-guided Geometric Transformation,
ECCV24(XXXI: 454-471).
Springer DOI 2412
BibRef

Lan, Y.S.[Yu-Shi], Hong, F.Z.[Fang-Zhou], Yang, S.[Shuai], Zhou, S.C.[Shang-Chen], Meng, X.Y.[Xu-Yi], Dai, B.[Bo], Pan, X.G.[Xin-Gang], Loy, C.C.[Chen Change],
Ln3diff: Scalable Latent Neural Fields Diffusion for Speedy 3d Generation,
ECCV24(IV: 112-130).
Springer DOI 2412
BibRef

Zhu, X.Y.[Xiao-Yu], Zhou, H.[Hao], Xing, P.F.[Peng-Fei], Zhao, L.[Long], Xu, H.[Hao], Liang, J.W.[Jun-Wei], Hauptmann, A.[Alexander], Liu, T.[Ting], Gallagher, A.[Andrew],
Open-Vocabulary 3D Semantic Segmentation with Text-to-Image Diffusion Models,
ECCV24(XXIX: 357-375).
Springer DOI 2412
BibRef

Gu, Y.M.[Yu-Ming], Xu, H.Y.[Hong-Yi], Xie, Y.[You], Song, G.X.[Guo-Xian], Shi, Y.C.[Yi-Chun], Chang, D.[Di], Yang, J.[Jing], Luo, L.J.[Lin-Jie],
DiffPortrait3D: Controllable Diffusion for Zero-Shot Portrait View Synthesis,
CVPR24(10456-10465)
IEEE DOI 2410
Training, Visualization, Noise reduction, Noise, Cameras, Diffusion models, diffusion model, generative model, single to 3D BibRef

Wang, Z.[Zhen], Xu, Q.G.[Qian-Geng], Tan, F.T.[Fei-Tong], Chai, M.L.[Meng-Lei], Liu, S.C.[Shi-Chen], Pandey, R.[Rohit], Fanello, S.[Sean], Kadambi, A.[Achuta], Zhang, Y.[Yinda],
MVDD: Multi-view Depth Diffusion Models,
ECCV24(XIII: 236-253).
Springer DOI 2412
BibRef

Zhai, G.Y.[Guang-Yao], Örnek, E.P.[Evin Pinar], Chen, D.Z.Y.[Dave Zhen-Yu], Liao, R.T.[Ruo-Tong], Di, Y.[Yan], Navab, N.[Nassir], Tombari, F.[Federico], Busam, B.[Benjamin],
Echoscene: Indoor Scene Generation via Information Echo Over Scene Graph Diffusion,
ECCV24(XXI: 167-184).
Springer DOI 2412
BibRef

Tang, S.T.[Shi-Tao], Chen, J.C.[Jia-Cheng], Wang, D.[Dilin], Tang, C.Z.[Cheng-Zhou], Zhang, F.[Fuyang], Fan, Y.C.[Yu-Chen], Chandra, V.[Vikas], Furukawa, Y.[Yasutaka], Ranjan, R.[Rakesh],
Mvdiffusion++: A Dense High-resolution Multi-view Diffusion Model for Single or Sparse-view 3d Object Reconstruction,
ECCV24(XVI: 175-191).
Springer DOI 2412
BibRef

Yan, Z.Z.[Zi-Zheng], Zhou, J.P.[Jia-Peng], Meng, F.[Fanpeng], Wu, Y.S.[Yu-Shuang], Qiu, L.[Lingteng], Ye, Z.[Zisheng], Cui, S.G.[Shu-Guang], Chen, G.Y.[Guan-Ying], Han, X.G.[Xiao-Guang],
Dreamdissector: Learning Disentangled Text-to-3d Generation from 2d Diffusion Priors,
ECCV24(XII: 124-141).
Springer DOI 2412
BibRef

Liu, Z.X.[Ze-Xiang], Li, Y.G.[Yang-Guang], Lin, Y.T.[You-Tian], Yu, X.[Xin], Peng, S.[Sida], Cao, Y.P.[Yan-Pei], Qi, X.J.[Xiao-Juan], Huang, X.S.[Xiao-Shui], Liang, D.[Ding], Ouyang, W.L.[Wan-Li],
Unidream: Unifying Diffusion Priors for Relightable Text-to-3d Generation,
ECCV24(V: 74-91).
Springer DOI 2412
BibRef

Liu, Q.H.[Qi-Hao], Zhang, Y.[Yi], Bai, S.[Song], Kortylewski, A.[Adam], Yuille, A.L.[Alan L.],
DIRECT-3D: Learning Direct Text-to-3D Generation on Massive Noisy 3D Data,
CVPR24(6881-6891)
IEEE DOI Code:
WWW Link. 2410
Geometry, Training, Solid modeling, Technological innovation, Diffusion models, Data models BibRef

Zhou, L.Q.[Lin-Qi], Shih, A.[Andy], Meng, C.L.[Chen-Lin], Ermon, S.[Stefano],
DreamPropeller: Supercharge Text-to-3D Generation with Parallel Sampling,
CVPR24(4610-4619)
IEEE DOI 2410
Pipelines, Graphics processing units, Diffusion models, User experience, Text-to-3D, Acceleration BibRef

Chen, Y.[Yang], Pan, Y.W.[Ying-Wei], Yang, H.B.[Hai-Bo], Yao, T.[Ting], Mei, T.[Tao],
VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation,
CVPR24(4896-4905)
IEEE DOI Code:
WWW Link. 2410
Visualization, Solid modeling, Technological innovation, Zero-shot learning, Semantics, Diffusion models, diffusion model, image generation BibRef

Ding, L.[Lihe], Dong, S.C.[Shao-Cong], Huang, Z.P.[Zhan-Peng], Wang, Z.[Zibin], Zhang, Y.Y.[Yi-Yuan], Gong, K.X.[Kai-Xiong], Xu, D.[Dan], Xue, T.F.[Tian-Fan],
Text-to-3D Generation with Bidirectional Diffusion Using Both 2D and 3D Priors,
CVPR24(5115-5124)
IEEE DOI Code:
WWW Link. 2410
Training, Geometry, Bridges, Solid modeling, Costs, 3D generation BibRef

Ling, H.[Huan], Kim, S.W.[Seung Wook], Torralba, A.[Antonio], Fidler, S.[Sanja], Kreis, K.[Karsten],
Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models,
CVPR24(8576-8588)
IEEE DOI 2410
Training, Visualization, Deformation, Dynamics, Diffusion models, Animation, text-to-4d, generative models, 3d generation, score distillation sampling BibRef

Qiu, L.T.[Ling-Teng], Chen, G.Y.[Guan-Ying], Gu, X.D.[Xiao-Dong], Zuo, Q.[Qi], Xu, M.[Mutian], Wu, Y.S.[Yu-Shuang], Yuan, W.H.[Wei-Hao], Dong, Z.L.[Zi-Long], Bo, L.[Liefeng], Han, X.G.[Xiao-Guang],
RichDreamer: A Generalizable Normal-Depth Diffusion Model for Detail Richness in Text-to-3D,
CVPR24(9914-9925)
IEEE DOI Code:
WWW Link. 2410
Geometry, Training, Computational modeling, Pipelines, Lighting, Diffusion models BibRef

Liu, Y.T.[Ying-Tian], Guo, Y.C.[Yuan-Chen], Luo, G.[Guan], Sun, H.[Heyi], Yin, W.[Wei], Zhang, S.H.[Song-Hai],
PI3D: Efficient Text-to-3D Generation with Pseudo-Image Diffusion,
CVPR24(19915-19924)
IEEE DOI 2410
Training, Solid modeling, Adaptation models, Shape, Image synthesis, Text to image BibRef

Liu, F.F.[Fang-Fu], Wu, D.K.[Dian-Kun], Wei, Y.[Yi], Rao, Y.M.[Yong-Ming], Duan, Y.Q.[Yue-Qi],
Sherpa3D: Boosting High-Fidelity Text-to-3D Generation via Coarse 3D Prior,
CVPR24(20763-20774)
IEEE DOI Code:
WWW Link. 2410
Solid modeling, Computational modeling, Semantics, Coherence, Diffusion models, 3D Generation, Diffusion Models BibRef

Chen, C.[Cheng], Yang, X.F.[Xiao-Feng], Yang, F.[Fan], Feng, C.Z.[Cheng-Zeng], Fu, Z.[Zhoujie], Foo, C.S.[Chuan-Sheng], Lin, G.S.[Guo-Sheng], Liu, F.[Fayao],
Sculpt3D: Multi-View Consistent Text-to-3D Generation with Sparse 3D Prior,
CVPR24(10228-10237)
IEEE DOI Code:
WWW Link. 2410
Geometry, Legged locomotion, Accuracy, Shape, Pipelines, Text-to-3D, 2D diffusion, 3D Generation BibRef

Wu, Z.[Zike], Zhou, P.[Pan], Yi, X.Y.[Xuan-Yu], Yuan, X.D.[Xiao-Ding], Zhang, H.W.[Han-Wang],
Consistent3D: Towards Consistent High-Fidelity Text-to-3D Generation with Deterministic Sampling Prior,
CVPR24(9892-9902)
IEEE DOI Code:
WWW Link. 2410
Training, Geometry, Solid modeling, Stochastic processes, Ordinary differential equations, Mathematical models, Diffusion Model BibRef

Lu, Y.X.[Yuan-Xun], Zhang, J.Y.[Jing-Yang], Li, S.W.[Shi-Wei], Fang, T.[Tian], McKinnon, D.[David], Tsin, Y.H.[Yang-Hai], Quan, L.[Long], Cao, X.[Xun], Yao, Y.[Yao],
Direct2.5: Diverse Text-to-3D Generation via Multi-view 2.5D Diffusion,
CVPR24(8744-8753)
IEEE DOI Code:
WWW Link. 2410
Geometry, Solid modeling, Image synthesis, Computational modeling, Pipelines, Diffusion models BibRef

Huang, T.Y.[Tian-Yu], Zeng, Y.H.[Yi-Han], Zhang, Z.[Zhilu], Xu, W.[Wan], Xu, H.[Hang], Xu, S.[Songcen], Lau, R.W.H.[Rynson W. H.], Zuo, W.M.[Wang-Meng],
DreamControl: Control-Based Text-to-3D Generation with 3D Self-Prior,
CVPR24(5364-5373)
IEEE DOI Code:
WWW Link. 2410
Geometry, Measurement, Text to image, Neural radiance field, Diffusion models BibRef

Liang, Y.X.[Yi-Xun], Yang, X.[Xin], Lin, J.T.[Jian-Tao], Li, H.D.[Hao-Dong], Xu, X.G.[Xiao-Gang], Chen, Y.C.[Ying-Cong],
LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching,
CVPR24(6517-6526)
IEEE DOI 2410
Training, Solid modeling, Pipelines, Rendering (computer graphics), Neural radiance field, Distance measurement, 3D Generation, Diffusion Models BibRef

Yi, T.[Taoran], Fang, J.[Jiemin], Wang, J.J.[Jun-Jie], Wu, G.[Guanjun], Xie, L.X.[Ling-Xi], Zhang, X.P.[Xiao-Peng], Liu, W.Y.[Wen-Yu], Tian, Q.[Qi], Wang, X.G.[Xing-Gang],
GaussianDreamer: Fast Generation from Text to 3D Gaussians by Bridging 2D and 3D Diffusion Models,
CVPR24(6796-6807)
IEEE DOI Code:
WWW Link. 2410
Geometry, Solid modeling, Perturbation methods, Graphics processing units, Diffusion models, Neural rendering BibRef

Yang, J.[Jiayu], Cheng, Z.[Ziang], Duan, Y.F.[Yun-Fei], Ji, P.[Pan], Li, H.D.[Hong-Dong],
ConsistNet: Enforcing 3D Consistency for Multi-View Images Diffusion,
CVPR24(7079-7088)
IEEE DOI Code:
WWW Link. 2410
Solid modeling, Image synthesis, Computational modeling, Graphics processing units, Diffusion models, latent diffusion model BibRef

Wan, Z.Y.[Zi-Yu], Paschalidou, D.[Despoina], Huang, I.[Ian], Liu, H.Y.[Hong-Yu], Shen, B.[Bokui], Xiang, X.Y.[Xiao-Yu], Liao, J.[Jing], Guibas, L.J.[Leonidas J.],
CAD: Photorealistic 3D Generation via Adversarial Distillation,
CVPR24(10194-10207)
IEEE DOI 2410
Training, Solid modeling, Interpolation, Pipelines, Diffusion models, Rendering (computer graphics) BibRef

Huang, X.[Xin], Shao, R.Z.[Rui-Zhi], Zhang, Q.[Qi], Zhang, H.W.[Hong-Wen], Feng, Y.[Ying], Liu, Y.B.[Ye-Bin], Wang, Q.[Qing],
HumanNorm: Learning Normal Diffusion Model for High-quality and Realistic 3D Human Generation,
CVPR24(4568-4577)
IEEE DOI Code:
WWW Link. 2410
Geometry, Solid modeling, Text to image, Color, Diffusion models, Diffusion Model, 3D Human, 3D Generation BibRef

Dong, Y.[Yuan], Zuo, Q.[Qi], Gu, X.D.[Xiao-Dong], Yuan, W.H.[Wei-Hao], Zhao, Z.Y.[Zheng-Yi], Dong, Z.L.[Zi-Long], Bo, L.F.[Lie-Feng], Huang, Q.X.[Qi-Xing],
GPLD3D: Latent Diffusion of 3D Shape Generative Models by Enforcing Geometric and Physical Priors,
CVPR24(56-66)
IEEE DOI 2410
Solid modeling, Codes, Shape, Computational modeling, Noise reduction, Shape Generative Model, Latent Diffusion, Quality Checker BibRef

Lee, S.[Seoyoung], Lee, J.[Joonseok],
PoseDiff: Pose-conditioned Multimodal Diffusion Model for Unbounded Scene Synthesis from Sparse Inputs,
WACV24(5005-5015)
IEEE DOI 2404
Image color analysis, Computational modeling, Scalability, Cameras, Tuning, Faces, Algorithms, Generative models for image, video, 3D, etc., Vision + language and/or other modalities BibRef

Wang, H.[Hai], Xiang, X.Y.[Xiao-Yu], Fan, Y.C.[Yu-Chen], Xue, J.H.[Jing-Hao],
Customizing 360-Degree Panoramas through Text-to-Image Diffusion Models,
WACV24(4921-4931)
IEEE DOI Code:
WWW Link. 2404
Geometry, Codes, Noise reduction, Games, Task analysis, Algorithms, Generative models for image, video, 3D, etc., Algorithms, image and video synthesis BibRef

Chen, M.H.[Ming-Hao], Laina, I.[Iro], Vedaldi, A.[Andrea],
Training-Free Layout Control with Cross-Attention Guidance,
WACV24(5331-5341)
IEEE DOI 2404
Training, Visualization, Layout, Semantics, Noise, Benchmark testing, Algorithms, Generative models for image, video, 3D, etc BibRef

Tang, J.[Junshu], Wang, T.F.[Teng-Fei], Zhang, B.[Bo], Zhang, T.[Ting], Yi, R.[Ran], Ma, L.Z.[Li-Zhuang], Chen, D.[Dong],
Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior,
ICCV23(22762-22772)
IEEE DOI 2401
BibRef

Szymanowicz, S.[Stanislaw], Rupprecht, C.[Christian], Vedaldi, A.[Andrea],
Viewset Diffusion: (0-)Image-Conditioned 3D Generative Models from 2D Data,
ICCV23(8829-8839)
IEEE DOI 2401
BibRef

Jiang, Y.T.[Yu-Tao], Zhou, Y.[Yang], Liang, Y.[Yuan], Liu, W.X.[Wen-Xi], Jiao, J.B.[Jian-Bo], Quan, Y.H.[Yu-Hui], He, S.F.[Sheng-Feng],
Diffuse3D: Wide-Angle 3D Photography via Bilateral Diffusion,
ICCV23(8964-8974)
IEEE DOI Code:
WWW Link. 2401
BibRef

Xiang, J.F.[Jian-Feng], Yang, J.[Jiaolong], Huang, B.B.[Bin-Bin], Tong, X.[Xin],
3D-aware Image Generation using 2D Diffusion Models,
ICCV23(2383-2393)
IEEE DOI 2401
BibRef

Chai, L.[Lucy], Tucker, R.[Richard], Li, Z.Q.[Zheng-Qi], Isola, P.[Phillip], Snavely, N.[Noah],
Persistent Nature: A Generative Model of Unbounded 3D Worlds,
CVPR23(20863-20874)
IEEE DOI 2309
BibRef

Shim, J.[Jaehyeok], Joo, K.[Kyungdon],
DITTO: Dual and Integrated Latent Topologies for Implicit 3D Reconstruction,
CVPR24(5396-5405)
IEEE DOI 2410
Point cloud compression, Shape, Power system stability, Transformers, Topology, Computer Vision BibRef

Shim, J.[Jaehyeok], Kang, C.W.[Chang-Woo], Joo, K.[Kyungdon],
Diffusion-Based Signed Distance Fields for 3D Shape Generation,
CVPR23(20887-20897)
IEEE DOI 2309
BibRef

Po, R.[Ryan], Wetzstein, G.[Gordon],
Compositional 3D Scene Generation using Locally Conditioned Diffusion,
3DV24(651-663)
IEEE DOI 2408
Semantics, Pipelines, Manuals, Task analysis BibRef

Shue, J.R.[J. Ryan], Chan, E.R.[Eric Ryan], Po, R.[Ryan], Ankner, Z.[Zachary], Wu, J.J.[Jia-Jun], Wetzstein, G.[Gordon],
3D Neural Field Generation Using Triplane Diffusion,
CVPR23(20875-20886)
IEEE DOI 2309
BibRef

Xu, J.[Jiale], Wang, X.T.[Xin-Tao], Cheng, W.H.[Wei-Hao], Cao, Y.P.[Yan-Pei], Shan, Y.[Ying], Qie, X.H.[Xiao-Hu], Gao, S.H.[Sheng-Hua],
Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models,
CVPR23(20908-20918)
IEEE DOI 2309
BibRef

Chapter on 3-D Object Description and Computation Techniques, Surfaces, Deformable, View Generation, Video Conferencing continues in
Merging Views, Object Insertion in Image.


Last update: Oct 6, 2025 at 14:07:43