11.14.3.5.1 Diffusion for Description or Text to Image Generation

Chapter Contents
Diffusion Models. Synthesis. Image Synthesis. Text to Image.
See also Diffusion Process, Diffusion Operators, Mechanism, or Technique.
See also Adversarial Networks for Image Synthesis, Image Generation.
See also Video Diffusion, Video Synthesis, Text to Video.
See also Diffusion Process in Image Editing.

Sun, G.[Gan], Liang, W.Q.[Wen-Qi], Dong, J.H.[Jia-Hua], Li, J.[Jun], Ding, Z.M.[Zheng-Ming], Cong, Y.[Yang],
Create Your World: Lifelong Text-to-Image Diffusion,
PAMI(46), No. 9, September 2024, pp. 6454-6470.
IEEE DOI 2408
Task analysis, Dogs, Computational modeling, Semantics, Training, Neural networks, Continual learning, image generation, stable diffusion BibRef

Chen, H.[Hong], Zhang, Y.P.[Yi-Peng], Wang, X.[Xin], Duan, X.G.[Xu-Guang], Zhou, Y.W.[Yu-Wei], Zhu, W.W.[Wen-Wu],
DisenDreamer: Subject-Driven Text-to-Image Generation With Sample-Aware Disentangled Tuning,
CirSysVideo(34), No. 8, August 2024, pp. 6860-6873.
IEEE DOI 2408
Noise reduction, Visualization, Tuning, Controllability, Image synthesis, Training, Diffusion model, disentangled finetuning BibRef

Verma, A.[Ayushi], Badal, T.[Tapas], Bansal, A.[Abhay],
Advancing Image Generation with Denoising Diffusion Probabilistic Model and ConvNeXt-V2: A novel approach for enhanced diversity and quality,
CVIU(247), 2024, pp. 104077.
Elsevier DOI 2408
Deep learning, Diffusion model, Generative model, Image generation BibRef

Xu, Y.F.[Yi-Fei], Xu, X.L.[Xiao-Long], Gao, H.H.[Hong-Hao], Xiao, F.[Fu],
SGDM: An Adaptive Style-Guided Diffusion Model for Personalized Text to Image Generation,
MultMed(26), 2024, pp. 9804-9813.
IEEE DOI 2410
Feature extraction, Adaptation models, Image synthesis, Computational modeling, Training, Task analysis, Noise reduction, image style similarity assessment BibRef

Ramasinghe, S.[Sameera], Shevchenko, V.[Violetta], Avraham, G.[Gil], Thalaiyasingam, A.[Ajanthan],
Accept the Modality Gap: An Exploration in the Hyperbolic Space,
CVPR24(27253-27262)
IEEE DOI 2410
Text to image, Machine learning, Linear programming, multimodal learning, modality gap BibRef

Luo, Y.M.[Yi-Min], Yang, Q.[Qinyu], Fan, Y.H.[Yu-Heng], Qi, H.K.[Hai-Kun], Xia, M.[Menghan],
Measurement Guidance in Diffusion Models: Insight from Medical Image Synthesis,
PAMI(46), No. 12, December 2024, pp. 7983-7997.
IEEE DOI 2411
Task analysis, Medical diagnostic imaging, Uncertainty, Image synthesis, Training, Reliability, Data models, controllable generation BibRef

Ren, J.X.[Jia-Xin], Liu, W.Z.[Wan-Zeng], Chen, J.[Jun], Yin, S.X.[Shun-Xi], Tao, Y.[Yuan],
Word2Scene: Efficient remote sensing image scene generation with only one word via hybrid intelligence and low-rank representation,
PandRS(218), 2024, pp. 231-257.
Elsevier DOI Code:
WWW Link. 2412
Award, U.V. Helava, ISPRS. Intelligentized surveying and mapping, Hybrid intelligence, Remote sensing image scene generation, Diffusion models, Zero-shot learning BibRef

Zhou, D.[Dewei], Li, Y.[You], Ma, F.[Fan], Yang, Z.X.[Zong-Xin], Yang, Y.[Yi],
MIGC++: Advanced Multi-Instance Generation Controller for Image Synthesis,
PAMI(47), No. 3, March 2025, pp. 1714-1728.
IEEE DOI 2502
Benchmark testing, Training, Layout, Text to image, Iterative algorithms, Position control, Pipelines, Image synthesis, multimodal learning BibRef

Zhou, D.[Dewei], Li, Y.[You], Ma, F.[Fan], Zhang, X.T.[Xiao-Ting], Yang, Y.[Yi],
MIGC: Multi-Instance Generation Controller for Text-to-Image Synthesis,
CVPR24(6818-6828)
IEEE DOI Code:
WWW Link. 2410
Codes, Attention mechanisms, Aggregates, Pipelines, Layout, Text to image, AIGC, Diffusion Models, Image Generation, Stable Diffusion BibRef

Dall'Asen, N.[Nicola], Menapace, W.[Willi], Peruzzo, E.[Elia], Sangineto, E.[Enver], Wang, Y.M.[Yi-Ming], Ricci, E.[Elisa],
Collaborative Neural Painting,
CVIU(252), 2025, pp. 104298.
Elsevier DOI 2502
Neural painting, Collaborative, interactive, Diffusion Models BibRef

Wang, Z.C.[Zhi-Cai], Li, O.X.[Ou-Xiang], Wang, T.[Tan], Wei, L.H.[Long-Hui], Hao, Y.B.[Yan-Bin], Wang, X.[Xiang], Tian, Q.[Qi],
Prior Preserved Text-to-Image Personalization Without Image Regularization,
CirSysVideo(35), No. 2, February 2025, pp. 1318-1330.
IEEE DOI 2502
Dogs, Semantics, Visualization, Diffusion models, Text to image, Training, Noise reduction, Reliability, dreambooth BibRef

Ridley, H.[Henrietta], Alcover-Couso, R.[Roberto], SanMiguel, J.C.[Juan C.],
Controlling semantics of diffusion-augmented data for unsupervised domain adaptation,
IET-CV(19), No. 1, 2025, pp. e70002.
DOI Link Code:
WWW Link. 2502
image segmentation, unsupervised learning BibRef

Xiao, G.X.[Guang-Xuan], Yin, T.W.[Tian-Wei], Freeman, W.T.[William T.], Durand, F.[Frédo], Han, S.[Song],
FastComposer: Tuning-Free Multi-subject Image Generation with Localized Attention,
IJCV(133), No. 3, March 2025, pp. 1175-1194.
Springer DOI 2502
Code:
WWW Link. BibRef

Ji, J.Y.[Jia-Yi], Wang, H.[Haowei], Wu, C.L.[Chang-La], Ma, Y.W.[Yi-Wei], Sun, X.S.[Xiao-Shuai], Ji, R.R.[Rong-Rong],
JM3D and JM3D-LLM: Elevating 3D Representation With Joint Multi-Modal Cues,
PAMI(47), No. 4, April 2025, pp. 2475-2492.
IEEE DOI 2503
Solid modeling, Point cloud compression, Visualization, Representation learning, Feature extraction, structured multimodal organizer BibRef

Yang, D.[Danni], Dong, R.H.[Ruo-Han], Ji, J.Y.[Jia-Yi], Ma, Y.W.[Yi-Wei], Wang, H.[Haowei], Sun, X.S.[Xiao-Shuai], Ji, R.R.[Rong-Rong],
Exploring Phrase-level Grounding with Text-to-image Diffusion Model,
ECCV24(LIII: 161-180).
Springer DOI 2412
BibRef

Nwoye, C.I.[Chinedu Innocent], Bose, R.[Rupak], Elgohary, K.[Kareem], Arboit, L.[Lorenzo], Carlino, G.[Giorgio], Lavanchy, J.L.[Joël L.], Mascagni, P.[Pietro], Padoy, N.[Nicolas],
Surgical text-to-image generation,
PRL(190), 2025, pp. 73-80.
Elsevier DOI 2503
Surgical image synthesis, Text-to-image, Diffusion model, Large language model, Surgical action triplet BibRef

Marí, R.[Roger], Redondo, R.[Rafael],
Latent Diffusion Approaches for Conditional Generation of Aerial Imagery: A Study,
IPOL(15), 2025, pp. 20-31.
DOI Link 2503
BibRef

Wang, W.L.[Wei-Lun], Bao, J.M.[Jian-Min], Zhou, W.G.[Wen-Gang], Chen, D.D.[Dong-Dong], Chen, D.[Dong], Yuan, L.[Lu], Li, H.Q.[Hou-Qiang],
SinDiffusion: Learning a Diffusion Model from a Single Natural Image,
PAMI(47), No. 5, May 2025, pp. 3412-3423.
IEEE DOI 2504
Diffusion models, Image synthesis, Training, Noise reduction, Periodic structures, Translation, Mathematical models, Art, Noise, image manipulation BibRef

Kim, J.[Jisoo], Kang, J.[Jiwoo], Kim, T.[Taewan], Oh, H.[Heeseok],
SinWaveFusion: Learning a single image diffusion model in wavelet domain,
IVC(159), 2025, pp. 105551.
Elsevier DOI 2505
Single image generation, Denoising diffusion models, Wavelet transform BibRef

Ma, H.Y.[Heng-Yuan], Zhu, X.T.[Xia-Tian], Feng, J.F.[Jian-Feng], Zhang, L.[Li],
Preconditioned Score-Based Generative Models,
IJCV(133), No. 7, July 2025, pp. 4837-4863.
Springer DOI 2506
BibRef
Earlier: A1, A4, A2, A3:
Accelerating Score-Based Generative Models with Preconditioned Diffusion Sampling,
ECCV22(XXIII:1-16).
Springer DOI 2211
BibRef

Huang, Y.W.[Ya-Wen], Huang, H.M.[Hui-Min], Zheng, H.[Hao], Li, Y.X.[Yue-Xiang], Zheng, F.[Feng], Zhen, X.T.[Xian-Tong], Zheng, Y.F.[Ye-Feng],
Learning to Generalize Heterogeneous Representation for Cross-Modality Image Synthesis via Multiple Domain Interventions,
IJCV(133), No. 7, July 2025, pp. 4727-4748.
Springer DOI 2506
BibRef

Xie, J.H.[Jin-Heng], Li, Y.X.[Yue-Xiang], Huang, Y.W.[Ya-Wen], Liu, H.Z.[Hao-Zhe], Zhang, W.T.[Wen-Tian], Zheng, Y.F.[Ye-Feng], Shou, M.Z.[Mike Zheng],
BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion,
ICCV23(7418-7427)
IEEE DOI 2401
BibRef

Zhao, R.[Ruoyu], Zhu, M.R.[Ming-Rui], Dong, S.Y.[Shi-Yin], Cheng, D.[De], Wang, N.N.[Nan-Nan], Gao, X.B.[Xin-Bo],
CatVersion: Concatenating Embeddings for Diffusion-Based Text-to-Image Personalization,
CirSysVideo(35), No. 6, June 2025, pp. 6047-6058.
IEEE DOI 2506
Diffusion models, Image synthesis, Dogs, Text to image, Image restoration, Training, Noise reduction, few-shot learning BibRef

Duan, H.R.[Hao-Ran], Shao, S.[Shuai], Zhai, B.[Bing], Shah, T.[Tejal], Han, J.G.[Jun-Gong], Ranjan, R.[Rajiv],
Parameter Efficient Fine-Tuning for Multi-modal Generative Vision Models with Möbius-Inspired Transformation,
IJCV(133), No. 7, July 2025, pp. 4590-4603.
Springer DOI 2506
BibRef
And: Correction: IJCV(133), No. 9, September 2025, pp. 6637-6637.
Springer DOI 2509
BibRef

Wang, B.[Bingyuan], Chen, Q.F.[Qi-Feng], Wang, Z.[Zeyu],
Diffusion-Based Visual Art Creation: A Survey and New Perspectives,
Surveys(57), No. 10, May 2025, pp. xx-yy.
DOI Link 2507
Survey, Art Creation. Survey, Diffusion. AI-generated content, diffusion model, visual art, creativity, human-AI collaboration BibRef

Wu, W.J.[Wei-Jia], Li, Z.[Zhuang], He, Y.F.[Ye-Fei], Shou, M.Z.[Mike Zheng], Shen, C.H.[Chun-Hua], Cheng, L.[Lele], Li, Y.[Yan], Gao, T.T.[Ting-Ting], Zhang, D.[Di],
Paragraph-to-Image Generation with Information-Enriched Diffusion Model,
IJCV(133), No. 8, August 2025, pp. 5413-5434.
Springer DOI Code:
WWW Link. 2508
Longer descriptions. BibRef

Hunter, R.[Rosco], Dudziak, L.[Lukasz], Abdelfattah, M.S.[Mohamed S.], Mehrotra, A.[Abhinav], Bhattacharya, S.[Sourav], Wen, H.K.[Hong-Kai],
Fast Sampling Through The Reuse Of Attention Maps In Diffusion Models,
IJCV(133), No. 9, September 2025, pp. 6422-6431.
Springer DOI 2509
BibRef

Wei, Y.L.[Yan-Lei], Zhang, X.L.[Xiao-Lin], Wang, Y.P.[Yong-Ping], Wang, J.Y.[Jing-Yu], Hu, W.J.[Wei-Jian],
Enhancing Transferability via Spectral Alignment and Grad-CAM Guided Latent Diffusion Models,
SPLetters(32), 2025, pp. 3260-3264.
IEEE DOI 2509
Noise, Discrete cosine transforms, Discrete Fourier transforms, Diffusion models, Perturbation methods, Image reconstruction, control-VAE BibRef

Mao, F.Y.[Fang-Yuan], Mei, J.L.[Ji-Lin], Lu, S.[Shun], Liu, F.[Fuyang], Chen, L.[Liang], Zhao, F.Z.[Fang-Zhou], Hu, Y.[Yu],
PID: Physics-Informed Diffusion Model for Infrared Image Generation,
PR(169), 2026, pp. 111816.
Elsevier DOI Code:
WWW Link. 2509
Physical constraints, Diffusion model, Infrared image generation BibRef

Lin, Z.H.[Zhi-Hang], Lin, M.[Mingbao], Zhan, W.[Wengyi], Ji, R.R.[Rong-Rong],
AccDiffusion v2: Toward More Accurate Higher-Resolution Diffusion Extrapolation,
PAMI(47), No. 10, October 2025, pp. 8351-8363.
IEEE DOI 2510
Image resolution, Distortion, Training, Semantics, Noise reduction, Image synthesis, Extrapolation, Diffusion models, diffusion model BibRef

Wu, H.X.[Hao-Xuan], Po, L.M.[Lai-Man], Xu, X.[Xuyuan], Li, K.[Kun], Liu, Y.Y.[Yu-Yang], Jiang, Z.[Zeyu],
Comprehensive regional guidance for attention map semantics in text-to-image diffusion models,
CVIU(260), 2025, pp. 104492.
Elsevier DOI 2510
Diffusion models, Text-to-image generation BibRef

Huang, Y.S.[Yu-Shi], Gong, R.[Ruihao], Liu, X.L.[Xiang-Long], Liu, J.[Jing], Li, Y.H.[Yu-Hang], Lu, J.W.[Ji-Wen], Tao, D.C.[Da-Cheng],
Temporal Feature Matters: A Framework for Diffusion Model Quantization,
PAMI(47), No. 10, October 2025, pp. 8823-8837.
IEEE DOI 2510
Diffusion models, Quantization (signal), Maintenance, Noise reduction, Computational modeling, Training, Calibration, hardware acceleration BibRef

Gan, Y.[Yan], Xiao, X.Y.[Xin-Yao], Xiang, T.[Tao], Wu, C.Q.[Cheng-Qian], Ouyang, D.Q.[De-Qiang],
SFCM-AEG: Source-Free Cross-Modal Adversarial Example Generation,
MultMed(27), 2025, pp. 6262-6272.
IEEE DOI 2510
Semantics, Text to image, Diffusion models, Perturbation methods, Glass box, Training, Image restoration, Faces, source-free BibRef


Han, J.[Jian], Liu, J.[Jinlai], Jiang, Y.[Yi], Yan, B.[Bin], Zhang, Y.Q.[Yu-Qi], Yuan, Z.H.[Ze-Huan], Peng, B.Y.[Bing-Yue], Liu, X.B.[Xia-Bing],
Infinity: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis,
CVPR25(15733-15744)
IEEE DOI 2508
Visualization, Vocabulary, Technological innovation, Reactive power, Computational modeling, Text to image, Photorealistic images BibRef

Shi, Q.Y.[Qing-Yu], Qi, L.[Lu], Wu, J.[Jianzong], Bai, J.[Jinbin], Wang, J.B.[Jing-Bo], Tong, Y.H.[Yun-Hai], Li, X.T.[Xiang-Tai],
DreamRelation: Bridging Customization and Relation Generation,
CVPR25(15723-15732)
IEEE DOI 2508
Image synthesis, Training data, Text to image, Focusing, Benchmark testing, Diffusion models, Scenario generation, Engines BibRef

Ma, X.[Xinyin], Yu, R.[Runpeng], Liu, S.[Songhua], Fang, G.[Gongfan], Wang, X.C.[Xin-Chao],
Diffusion Model is Effectively Its Own Teacher,
CVPR25(12901-12911)
IEEE DOI 2508
Image quality, Computational modeling, Text to image, Diffusion models, Transformers, Lenses BibRef

Chu, B.[Beilin], Xu, X.[Xuan], Wang, X.[Xin], Zhang, Y.F.[Yu-Fei], You, W.[Weike], Zhou, L.[Linna],
FIRE: Robust Detection of Diffusion-Generated Images via Frequency-Guided Reconstruction Error,
CVPR25(12830-12839)
IEEE DOI 2508
Training, Image synthesis, Perturbation methods, Diffusion models, Feature extraction, Robustness, Image reconstruction, Standards BibRef

Li, S.[Shufan], Kallidromitis, K.[Konstantinos], Gokul, A.[Akash], Liao, Z.[Zichun], Kato, Y.[Yusuke], Kozuka, K.[Kazuki], Grover, A.[Aditya],
OmniFlow: Any-to-Any Generation with Multi-Modal Rectified Flows,
CVPR25(13178-13188)
IEEE DOI Code:
WWW Link. 2508
Radio frequency, Adaptation models, Codes, Attention mechanisms, Computational modeling, Text to image, Transformers, text-to-image, diffusion transformer BibRef

Kim, E.[Eunji], Kim, S.[Siwon], Park, M.[Minjun], Entezari, R.[Rahim], Yoon, S.[Sungroh],
Rethinking Training for De-biasing Text-to-Image Generation: Unlocking the Potential of Stable Diffusion,
CVPR25(13361-13370)
IEEE DOI 2508
Training, Image synthesis, Computational modeling, Noise, Semantics, Text to image, Focusing, Robustness, bias, text-to-image generation, fairness BibRef

Xia, M.F.[Meng-Fei], Xue, N.[Nan], Shen, Y.J.[Yu-Jun], Yi, R.[Ran], Gong, T.[Tieliang], Liu, Y.J.[Yong-Jin],
Rectified Diffusion Guidance for Conditional Generation,
CVPR25(13371-13380)
IEEE DOI Code:
WWW Link. 2508
Closed-form solutions, Codes, Noise reduction, Toy manufacturing industry, Diffusion processes BibRef

Wang, Z.X.[Zi-Xuan], Peng, D.[Duo], Chen, F.[Feng], Yang, Y.W.[Yu-Wei], Lei, Y.J.[Yin-Jie],
Training-free Dense-Aligned Diffusion Guidance for Modular Conditional Image Synthesis,
CVPR25(13135-13145)
IEEE DOI Code:
WWW Link. 2508
Geometry, Visualization, Solid modeling, Image synthesis, Foundation models, Layout, Virtual reality, Trajectory, Rivers BibRef

Jia, W.N.[Wei-Nan], Huang, M.Q.[Meng-Qi], Chen, N.[Nan], Zhang, L.[Lei], Mao, Z.D.[Zhen-Dong],
D2iT: Dynamic Diffusion Transformer for Accurate Image Generation,
CVPR25(12860-12870)
IEEE DOI Code:
WWW Link. 2508
Image coding, Codes, Image recognition, Accuracy, Image synthesis, Scalability, Noise, Diffusion processes, Transformers, dynamic coding BibRef

Hu, Z.J.[Zi-Jing], Zhang, F.[Fengda], Chen, L.[Long], Kuang, K.[Kun], Li, J.H.[Jia-Hui], Gao, K.[Kaifeng], Xiao, J.[Jun], Wang, X.[Xin], Zhu, W.W.[Wen-Wu],
Towards Better Alignment: Training Diffusion Models with Reinforcement Learning Against Sparse Rewards,
CVPR25(23604-23614)
IEEE DOI 2508
Training, Codes, Noise reduction, Text to image, Reinforcement learning, Diffusion models, Optimization BibRef

Ren, J.[Jie], Chen, K.[Kangrui], Cui, Y.Q.[Ying-Qian], Zeng, S.[Shenglai], Liu, H.[Hui], Xing, Y.[Yue], Tang, J.[Jiliang], Lyu, L.[Lingjuan],
Six-CD: Benchmarking Concept Removals for Text-to-image Diffusion Models,
CVPR25(28769-28778)
IEEE DOI 2508
Measurement, Text to image, Benchmark testing, Diffusion models, Faces, Context modeling BibRef

Huang, H.[Huayang], Jin, X.[Xiangye], Miao, J.X.[Jia-Xu], Wu, Y.[Yu],
Implicit Bias Injection Attacks against Text-to-Image Diffusion Models,
CVPR25(28779-28789)
IEEE DOI 2508
Visualization, Image color analysis, Generative AI, Prevention and mitigation, Semantics, Text to image, generative bias BibRef

Wang, L.F.[Li-Fu], Liu, D.Q.[Da-Qing], Liu, X.C.[Xin-Chen], He, X.D.[Xiao-Dong],
Scaling Down Text Encoders of Text-to-Image Diffusion Models,
CVPR25(18424-18433)
IEEE DOI Code:
WWW Link. 2508
Image quality, Computational modeling, Semantics, Redundancy, Natural languages, Text to image, Graphics processing units, text-representation BibRef

Ye, Z.[Zilyu], Chen, Z.Y.[Zhi-Yang], Li, T.C.[Tian-Cheng], Huang, Z.[Zemin], Luo, W.J.[Wei-Jian], Qi, G.J.[Guo-Jun],
Schedule On the Fly: Diffusion Time Prediction for Faster and Better Image Generation,
CVPR25(23412-23422)
IEEE DOI 2508
Image quality, Schedules, Image synthesis, Noise reduction, Noise, Text to image, Reinforcement learning, Diffusion models, Noise level BibRef

Qiu, W.M.[Wei-Min], Wang, J.[Jieke], Tang, M.[Meng],
Self-Cross Diffusion Guidance for Text-to-Image Synthesis of Similar Subjects,
CVPR25(23528-23538)
IEEE DOI 2508
Image synthesis, Text to image, Diffusion models, Transformers, Birds, Reliability, Videos, guidance, similar subjects, diffusion model BibRef

Wang, C.[Chao], Fan, H.[Hehe], Yang, H.[Huichen], Karimi, S.[Sarvnaz], Yao, L.[Lina], Yang, Y.[Yi],
Adapting Text-to-Image Generation with Feature Difference Instruction for Generic Image Restoration,
CVPR25(23539-23550)
IEEE DOI 2508
Degradation, Training, Adaptation models, Translation, Pipelines, Noise, Text to image, Feature extraction, Image restoration BibRef

Liu, Y.P.[Yun-Peng], Liu, B.X.[Bo-Xiao], Zhang, Y.[Yi], Hou, X.Z.[Xing-Zhong], Song, G.L.[Guang-Lu], Liu, Y.[Yu], You, H.[Haihang],
See Further When Clear: Curriculum Consistency Model,
CVPR25(18103-18112)
IEEE DOI 2508
Training, Adaptation models, Schedules, PSNR, Computational modeling, Noise, Semantics, Text to image, Diffusion models, Pattern matching BibRef

Lee, B.H.[Byung Hyun], Lim, S.[Sungjin], Chun, S.Y.[Se Young],
Localized Concept Erasure for Text-to-Image Diffusion Models Using Training-Free Gated Low-Rank Adaptation,
CVPR25(18596-18606)
IEEE DOI Code:
WWW Link. 2508
Training, Art, Codes, Image synthesis, Text to image, Logic gates, Diffusion models, Robustness, text-to image diffusion models, gated low-rank adaptation BibRef

Jang, S.[Sangwon], Choi, J.S.[June Suk], Jo, J.[Jaehyeong], Lee, K.[Kimin], Hwang, S.J.[Sung Ju],
Silent Branding Attack: Trigger-free Data Poisoning Attack on Text-to-Image Diffusion Models,
CVPR25(8203-8212)
IEEE DOI Code:
WWW Link. 2508
Measurement, Visualization, Brand management, Text to image, Training data, Symbols, Diffusion models, Market research, data poisoning BibRef

Wang, Z.L.[Zi-Lan], Guo, J.F.[Jun-Feng], Zhu, J.C.[Jia-Cheng], Li, Y.M.[Yi-Ming], Huang, H.[Heng], Chen, M.[Muhao], Tu, Z.Z.[Zheng-Zhong],
SleeperMark: Towards Robust Watermark against Fine-Tuning Text-to-image Diffusion Models,
CVPR25(8213-8224)
IEEE DOI Code:
WWW Link. 2508
Training, Adaptation models, Computational modeling, Semantics, Text to image, Watermarking, Diffusion models, Robustness, ownership verification BibRef

Duan, L.[Lunhao], Zhao, S.S.[Shan-Shan], Yan, W.J.[Wen-Jun], Li, Y.[Yinglun], Chen, Q.G.[Qing-Guo], Xu, Z.[Zhao], Luo, W.H.[Wei-Hua], Zhang, K.[Kaifu], Gong, M.M.[Ming-Ming], Xia, G.S.[Gui-Song],
UNIC-Adapter: Unified Image-Instruction Adapter with Multi-Modal Transformer for Image Generation,
CVPR25(7963-7973)
IEEE DOI 2508
Adaptation models, Image synthesis, Layout, Text to image, Transformers, Diffusion models, Data mining BibRef

Cao, C.J.[Chen-Jie], Yu, C.H.[Chao-Hui], Liu, S.[Shang], Wang, F.[Fan], Xue, X.Y.[Xiang-Yang], Fu, Y.W.[Yan-Wei],
MVGenMaster: Scaling Multi-View Generation from Any Image via 3D Priors Enhanced Diffusion Model,
CVPR25(6045-6056)
IEEE DOI 2508
Measurement, Training, Scalability, Computational modeling, Pipelines, Diffusion models, Cameras, diffusion model, novel view synthesis BibRef

Pan, S.[Shuokai], Tuzi, G.[Gerti], Sreeram, S.[Sudarshan], Gope, D.[Dibakar],
Data-Free Group-Wise Fully Quantized Winograd Convolution via Learnable Scales,
CVPR25(4091-4100)
IEEE DOI 2508
Quantization (signal), Costs, Accuracy, Convolution, Training data, Text to image, Diffusion models, Kernel, Usability BibRef

Dombrowski, M.[Mischa], Zhang, W.T.[Wei-Tong], Cechnicka, S.[Sarah], Reynaud, H.[Hadrien], Kainz, B.[Bernhard],
Image Generation Diversity Issues and How to Tame Them,
CVPR25(3029-3039)
IEEE DOI Code:
WWW Link. 2508
Measurement, Image quality, Current measurement, Image retrieval, Training data, Feature extraction, Diffusion models, Data models, Synthetic data BibRef

Wang, X.[Xi], Li, H.Z.[Hong-Zhen], Fang, H.[Heng], Peng, Y.C.[Yi-Chen], Xie, H.R.[Hao-Ran], Yang, X.[Xi], Li, C.T.[Chun-Tao],
LineArt: A Knowledge-guided Training-free High-quality Appearance Transfer for Design Drawing with Diffusion Model,
CVPR25(2912-2923)
IEEE DOI Code:
WWW Link. 2508
Training, Visualization, Accuracy, Translation, Image synthesis, Semantics, Rendering (computer graphics), Painting, prolines dataset BibRef

Kwon, M.[Mingi], Kim, S.S.[Shin Seong], Jeong, J.[Jaeseok], Hsiao, Y.T.[Yi Ting], Uh, Y.[Youngjung],
TCFG: Tangential Damping Classifier-free Guidance,
CVPR25(2620-2629)
IEEE DOI 2508
Manifolds, Image quality, Image synthesis, Refining, Text to image, Diffusion models, Vectors, Trajectory, Singular value decomposition BibRef

Wu, B.[Bin], Shi, W.[Wuxuan], Wang, J.Q.[Jin-Qiao], Ye, M.[Mang],
Synthetic Data is an Elegant GIFT for Continual Vision-Language Models,
CVPR25(2813-2823)
IEEE DOI 2508
Adaptation models, Data privacy, Text to image, Diffusion models, Data models, Synthetic data, vision-language model BibRef

Zhang, X.[Xiao], Jiang, R.X.[Ruo-Xi], Willett, R.[Rebecca], Maire, M.[Michael],
Nested Diffusion Models Using Hierarchical Latent Priors,
CVPR25(2502-2512)
IEEE DOI 2508
Image quality, Dimensionality reduction, Visualization, Image synthesis, Computational modeling, Scalability, Semantics BibRef

Jeong, J.H.[Jin-Ho], Han, S.[Sangmin], Kim, J.[Jinwoo], Kim, S.J.[Seon Joo],
Latent Space Super-Resolution for Higher-Resolution Image Generation with Diffusion Models,
CVPR25(2355-2365)
IEEE DOI Code:
WWW Link. 2508
Manifolds, Training, Hands, Image synthesis, RNA, Face recognition, Noise, Diffusion models, super-resolution BibRef

Srivatsan, K.[Koushik], Shamshad, F.[Fahad], Naseer, M.[Muzammal], Patel, V.M.[Vishal M.], Nandakumar, K.[Karthik],
STEREO: A Two-Stage Framework for Adversarially Robust Concept Erasing from Text-to-Image Diffusion Models,
CVPR25(23765-23774)
IEEE DOI 2508
Training, Systematics, Text to image, Benchmark testing, Diffusion models, Robustness, Security, Object recognition, Glass box BibRef

Miao, B.M.[Bo-Ming], Li, C.X.[Chun-Xiao], Wang, X.X.[Xiao-Xiao], Zhang, A.[Andi], Sun, R.[Rui], Wang, Z.Z.[Zi-Zhe], Zhu, Y.[Yao],
Noise Diffusion for Enhancing Semantic Faithfulness in Text-to-Image Synthesis,
CVPR25(23575-23584)
IEEE DOI 2508
Visualization, Semantics, Noise, Text to image, Diffusion processes, Diffusion models, Question answering (information retrieval), text-to-image synthesis BibRef

Cha, S.[Seung-Ju], Lee, K.[Kwanyoung], Kim, Y.C.[Ye-Chan], Oh, H.W.[Hyun-Woo], Kim, D.J.[Dong-Jin],
VerbDiff: Text-Only Diffusion Models with Enhanced Interaction Awareness,
CVPR25(8041-8050)
IEEE DOI 2508
Accuracy, Computational modeling, Semantics, Text to image, Diffusion models, Photorealistic images BibRef

Dang, M.[Meihua], Singh, A.[Anikait], Zhou, L.Q.[Lin-Qi], Ermon, S.[Stefano], Song, J.M.[Jia-Ming],
Personalized Preference Fine-tuning of Diffusion Models,
CVPR25(8020-8030)
IEEE DOI 2508
Bridges, Text to image, Diffusion models, Optimization, text-to-image diffusion models, dpo, personalized preference fine-tuning BibRef

Li, S.M.[Sen-Mao], Wang, L.[Lei], Wang, K.[Kai], Liu, T.[Tao], Xie, J.[Jiehang], van de Weijer, J.[Joost], Khan, F.S.[Fahad Shahbaz], Yang, S.Q.[Shi-Qi], Wang, Y.X.[Ya-Xing], Yang, J.[Jian],
One-Way Ticket: Time-Independent Unified Encoder for Distilling Text-to-Image Diffusion Models,
CVPR25(23563-23574)
IEEE DOI Code:
WWW Link. 2508
Image quality, Image synthesis, Computational modeling, Semantics, Noise, Text to image, Diffusion models, Decoding, Time complexity, one-step generation BibRef

Li, Z.J.[Zi-Jie], Li, H.[Henry], Shi, Y.C.[Yi-Chun], Farimani, A.B.[Amir Barati], Kluger, Y.[Yuval], Yang, L.J.[Lin-Jie], Wang, P.[Peng],
Dual Diffusion for Unified Image Generation and Understanding,
CVPR25(2779-2790)
IEEE DOI 2508
Visualization, Maximum likelihood estimation, Image synthesis, Computational modeling, Text to image, Predictive models, BibRef

Thakral, K.[Kartik], Glaser, T.[Tamar], Hassner, T.[Tal], Vatsa, M.[Mayank], Singh, R.[Richa],
Fine-Grained Erasure in Text-To-Image Diffusion-Based Foundation Models,
CVPR25(9121-9130)
IEEE DOI 2508
Manifolds, Foundation models, Computational modeling, Semantics, Text to image, Dogs, Flowering plants, Diffusion models, unlearning, diffusion BibRef

Le, D.H.[Duong H.], Pham, T.[Tuan], Lee, S.H.[Sang-Ho], Clark, C.[Christopher], Kembhavi, A.[Aniruddha], Mandt, S.[Stephan], Krishna, R.[Ranjay], Lu, J.[Jiasen],
One Diffusion to Generate Them All,
CVPR25(2671-2682)
IEEE DOI Code:
WWW Link. 2508
Training, Vocabulary, Depth measurement, Semantic segmentation, Scalability, Pose estimation, Semantics, Text to image, Cameras, multiview-generation BibRef

Bernal-Berdun, E.[Edurne], Serrano, A.[Ana], Masia, B.[Belen], Gadelha, M.[Matheus], Hold-Geoffroy, Y.[Yannick], Sun, X.[Xin], Gutierrez, D.[Diego],
PreciseCam: Precise Camera Control for Text-to-Image Generation,
CVPR25(2724-2733)
IEEE DOI 2508
Geometry, Text to image, Cameras, Distortion, Prompt engineering, Videos, Lenses BibRef

Liu, Q.H.[Qi-Hao], Yin, X.[Xi], Yuille, A.L.[Alan L.], Brown, A.[Andrew], Singh, M.[Mannat],
Flowing from Words to Pixels: A Noise-Free Framework for Cross-Modality Evolution,
CVPR25(2755-2765)
IEEE DOI 2508
Training, Gaussian noise, Superresolution, Text to image, Media, Diffusion models, Transformers, Standards, Pattern matching, flow matching BibRef

Parihar, R.[Rishubh], Agrawal, V.[Vaibhav], VS, S.[Sachidanand], Babu, R.V.[R. Venkatesh],
Compass Control: Multi Object Orientation Control for Text-to-Image Generation,
CVPR25(2791-2801)
IEEE DOI 2508
Training, Solid modeling, Position control, Text to image, Diffusion models, Compass, Synthetic data, generative models, 3d control BibRef

Jiang, J.X.[Jia-Xiu], Zhang, Y.[Yabo], Feng, K.[Kailai], Wu, X.H.[Xiao-He], Li, W.B.[Wen-Bo], Pei, R.[Renjing], Li, F.[Fan], Zuo, W.M.[Wang-Meng],
MC2: Multi-concept Guidance for Customized Multi-concept Generation,
CVPR25(2802-2812)
IEEE DOI Code:
WWW Link. 2508
Adaptation models, Visualization, Computational modeling, Refining, Text to image, Interference, Optimization BibRef

Shimoda, W.[Wataru], Inoue, N.[Naoto], Haraguchi, D.[Daichi], Mitani, H.[Hayato], Uchida, S.[Seiichi], Yamaguchi, K.[Kota],
Type-R: Automatically Retouching Typos for Text-to-Image Generation,
CVPR25(2745-2754)
IEEE DOI 2508
Image quality, Training, Accuracy, Computational modeling, Face recognition, Pipelines, Text to image, graphic design BibRef

Schusterbauer, J.[Johannes], Gui, M.[Ming], Fundel, F.[Frank], Ommer, B.[Björn],
Diff2Flow: Training Flow Matching Models via Diffusion Model Alignment,
CVPR25(28347-28357)
IEEE DOI Code:
WWW Link. 2508
Training, Frequency modulation, Computational modeling, Biological system modeling, Text to image, Performance gain, diffusion BibRef

Han, J.[Jiyeon], Kwon, D.[Dahee], Lee, G.[Gayoung], Kim, J.[Junho], Choi, J.[Jaesik],
Enhancing Creative Generation on Stable Diffusion-based Models,
CVPR25(28609-28618)
IEEE DOI 2508
Computational modeling, Catalysts, Noise reduction, Text to image, Diffusion models, Creativity, Optimization, Guidelines, stable diffusion BibRef

Sehwag, V.[Vikash], Kong, X.H.[Xiang-Hao], Li, J.T.[Jing-Tao], Spranger, M.[Michael], Lyu, L.[Lingjuan],
Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget,
CVPR25(28596-28608)
IEEE DOI Code:
WWW Link. 2508
Training, Costs, Biological system modeling, Computational modeling, Pipelines, Text to image, Transformers, low-cost ai BibRef

Lu, Y.H.[Yun-Hong], Wang, Q.C.[Qi-Chao], Cao, H.[Hengyuan], Wang, X.[Xierui], Xu, X.Y.[Xiao-Yin], Zhang, M.[Min],
InPO: Inversion Preference Optimization with Reparametrized DDIM for Efficient Diffusion Model Alignment,
CVPR25(28629-28639)
IEEE DOI 2508
Training, Correlation, Large language models, Text to image, Focusing, Diffusion models, Data models, Reliability, Optimization BibRef

Tang, B.[Bingda], Zheng, B.Y.[Bo-Yang], Paul, S.[Sayak], Xie, S.[Saining],
Exploring the Deep Fusion of Large Language Models and Diffusion Transformers for Text-to-Image Synthesis,
CVPR25(28586-28595)
IEEE DOI 2508
Training, Uncertainty, Large language models, System performance, Design methodology, Noise reduction, Text to image, Transformers, Guidelines BibRef

Lin, H.B.[Hong-Bin], Guo, Z.[Zilu], Zhang, Y.F.[Yi-Fan], Niu, S.C.[Shuai-Cheng], Li, Y.F.[Ya-Feng], Zhang, R.M.[Rui-Mao], Cui, S.G.[Shu-Guang], Li, Z.[Zhen],
DriveGEN: Generalized and Robust 3D Detection in Driving via Controllable Text-to-Image Diffusion Generation,
CVPR25(27497-27507)
IEEE DOI 2508
Geometry, Training, Solid modeling, Accuracy, Layout, Text to image, Training data, Feature extraction, Robustness, autonomous driving, training-free controllable diffusion generation BibRef

Zhang, X.X.[Xin-Xi], Wen, S.[Song], Han, L.[Ligong], Juefei-Xu, F.[Felix], Srivastava, A.[Akash], Huang, J.Z.[Jun-Zhou], Pavlovic, V.[Vladimir], Wang, H.[Hao], Tao, M.[Molei], Metaxas, D.N.[Dimitris N.],
SODA: Spectral Orthogonal Decomposition Adaptation for Diffusion Models,
WACV25(4665-4682)
IEEE DOI 2505
Adaptation models, Computational modeling, Text to image, Diffusion models, Vectors, Computational efficiency, Matrix decomposition BibRef

Zeng, Y.[Yan], Suganuma, M.[Masanori], Okatani, T.[Takayuki],
Inverting the Generation Process of Denoising Diffusion Implicit Models: Empirical Evaluation and a Novel Method,
WACV25(4516-4524)
IEEE DOI 2505
Measurement, Accuracy, Image synthesis, Computational modeling, Noise reduction, Noise, Diffusion models, Image reconstruction, diffusion models BibRef

Lee, H.[Haeil], Lee, H.S.[Han-Sang], Gye, S.[Seoyeon], Kim, J.[Junmo],
Beta Sampling is All You Need: Efficient Image Generation Strategy for Diffusion Models Using Stepwise Spectral Analysis,
WACV25(4215-4224)
IEEE DOI 2505
Fourier transforms, Image synthesis, Noise reduction, Focusing, Diffusion models, Sampling methods, Computational efficiency, efficient image generation BibRef

Patel, Z.[Zakaria], Serkh, K.[Kirill],
Enhancing Image Layout Control with Loss-Guided Diffusion Models,
WACV25(3916-3924)
IEEE DOI 2505
Visualization, Attention mechanisms, Image synthesis, Computational modeling, Layout, Noise, Diffusion models, loss guidance BibRef

Jun, Y.[Youngjun], Park, J.[Jiwoo], Choo, K.[Kyobin], Choi, T.E.[Tae Eun], Hwang, S.J.[Seong Jae],
Disentangling Disentangled Representations: Towards Improved Latent Units via Diffusion Models,
WACV25(3559-3569)
IEEE DOI 2505
Training, Disentangled representation learning, Semantics, Noise reduction, Stochastic processes, Focusing, Transforms, diffusion models BibRef

Adaloglou, N.[Nikolas], Kaiser, T.[Tim], Michels, F.[Felix], Kollmann, M.[Markus],
Rethinking Cluster-Conditioned Diffusion Models for Label-Free Image Synthesis,
WACV25(3603-3613)
IEEE DOI Code:
WWW Link. 2505
Training, Image quality, Visualization, Upper bound, Systematics, Codes, Image synthesis, Focusing, Diffusion models, image synthesis BibRef

Marjit, S.[Shyam], Singh, H.[Harshit], Mathur, N.[Nityanand], Paul, S.[Sayak], Yu, C.M.[Chia-Mu], Chen, P.Y.[Pin-Yu],
DiffuseKronA: A Parameter Efficient Fine-tuning Method for Personalized Diffusion Models,
WACV25(3529-3538)
IEEE DOI 2505
Training, Adaptation models, Visualization, Sensitivity, Image synthesis, Image color analysis, Text to image, Stability analysis BibRef

Theiss, J.[Justin], Müller, N.[Norman], Kim, D.[Daeil], Prakash, A.[Aayush],
Multi-View Image Diffusion via Coordinate Noise and Fourier Attention,
WACV25(4310-4319)
IEEE DOI 2505
Measurement, Time-frequency analysis, Correlation, Attention mechanisms, Noise, Text to image, Diffusion processes, Videos BibRef

Arrabi, A.[Ahmad], Zhang, X.H.[Xiao-Han], Sultani, W.[Waqas], Chen, C.[Chen], Wshah, S.[Safwan],
Cross-View Meets Diffusion: Aerial Image Synthesis with Geometry and Text Guidance,
WACV25(5356-5366)
IEEE DOI Code:
WWW Link. 2505
Geometry, Image segmentation, Costs, Image synthesis, Geology, Computational modeling, Layout, Diffusion models, multimodality BibRef

Li, S.J.[Shi-Jie], Zanjani, F.G.[Farhad G.], Yahia, H.B.[Haitam Ben], Asano, Y.[Yuki], Gall, J.[Jürgen], Habibian, A.[Amirhossein],
Valid: Variable-Length Input Diffusion for Novel View Synthesis,
WACV25(2240-2249)
IEEE DOI 2505
Training, Visualization, Costs, Image synthesis, Fuses, Computational modeling, Diffusion processes, Diffusion models, diffusion model BibRef

Zhang, J.Y.[Jian-Yi], Zhou, Y.F.[Yu-Fan], Gu, J.X.[Jiu-Xiang], Wigington, C.[Curtis], Yu, T.[Tong], Chen, Y.R.[Yi-Ran], Sun, T.[Tong], Zhang, R.[Ruiyi],
ARTIST: Improving the Generation of Text-Rich Images with Disentangled Diffusion Models and Large Language Models,
WACV25(1268-1278)
IEEE DOI 2505
Training, Visualization, Accuracy, Image synthesis, Large language models, Disentangled representation learning, Rendering (computer graphics) BibRef

Garcia, G.M.[Gonzalo Martin], Zeid, K.A.[Karim Abou], Schmidt, C.[Christian], de Geus, D.[Daan], Hermans, A.[Alexander], Leibe, B.[Bastian],
Fine-Tuning Image-Conditional Diffusion Models is Easier than you Think,
WACV25(753-762)
IEEE DOI 2505
Training, Geometry, Protocols, Image synthesis, Computational modeling, Depth measurement, Pipelines, Estimation, Reliability BibRef

Cui, Q.P.[Qin-Peng], Zhang, X.[Xinyi], Bao, Q.Q.[Qi-Qi], Liao, Q.M.[Qing-Min],
Elucidating the Solution Space of Extended Reverse-Time SDE for Diffusion Models,
WACV25(243-252)
IEEE DOI Code:
WWW Link. 2505
Visualization, Codes, Buildings, Stochastic processes, Differential equations, Diffusion models BibRef

Xu, K.[Katherine], Zhang, L.Z.[Ling-Zhi], Shi, J.B.[Jian-Bo],
Detecting Origin Attribution for Text-to-Image Diffusion Models,
WACV25(8775-8785)
IEEE DOI 2505
Training, Visualization, Image forensics, Accuracy, Image recognition, Image synthesis, Text to image, fake image detection BibRef

Choi, S.[Seunghwan], Yun, J.[Jooyeol], Park, J.[Jeonghoon], Choo, J.[Jaegul],
Disentangling Subject-Irrelevant Elements in Personalized Text-to-Image Diffusion via Filtered Self-Distillation,
WACV25(9073-9082)
IEEE DOI 2505
Training, Gold, Analytical models, Image resolution, Filtering, Computational modeling, Text to image, User experience, self-distillation BibRef

Dutt, R.[Raman], Bohdal, O.[Ondrej], Sanchez, P.[Pedro], Tsaftaris, S.A.[Sotirios A.], Hospedales, T.[Timothy],
MemControl: Mitigating Memorization in Diffusion Models via Automated Parameter Selection,
WACV25(4491-4501)
IEEE DOI Code:
WWW Link. 2505
Uniform resource locators, Measurement, Law, Image synthesis, Prevention and mitigation, Training data, Text to image, medical image analysis BibRef

Guo, D.F.[Dan-Feng], Agarwal, S.[Sanchit], Lin, Y.H.[Yu-Hsiang], Kao, J.Y.[Jiun-Yu], Chung, T.[Tagyoung], Peng, N.[Nanyun], Bansal, M.[Mohit],
Improving Faithfulness of Text-to-Image Diffusion Models through Inference Intervention,
WACV25(4077-4086)
IEEE DOI 2505
Measurement, Accuracy, Computational modeling, Noise reduction, Layout, Retrieval augmented generation, Text to image, Diffusion models BibRef

Wu, H.N.[Hao-Ning], Shen, S.C.[Shao-Cheng], Hu, Q.[Qiang], Zhang, X.Y.[Xiao-Yun], Zhang, Y.[Ya], Wang, Y.F.[Yan-Feng],
MegaFusion: Extend Diffusion Models towards Higher-resolution Image Generation without Further Tuning,
WACV25(3944-3953)
IEEE DOI Code:
WWW Link. 2505
Adaptation models, Image resolution, Image synthesis, Computational modeling, Semantics, Noise, Text to image, tuning-free BibRef

He, X.Z.[Xing-Zhe], Cao, Z.W.[Zhi-Wen], Kolkin, N.[Nicholas], Yu, L.[Lantao], Wan, K.[Kun], Rhodin, H.[Helge], Kalarot, R.[Ratheesh],
A Data Perspective on Enhanced Identity Preservation for Diffusion Personalization,
WACV25(3782-3791)
IEEE DOI 2505
Training, Image quality, Visualization, Image color analysis, Natural languages, Memory management, Text to image, generative model BibRef

Buchheim, B.[Benito], Reimann, M.[Max], Döllner, J.[Jürgen],
Controlling Human Shape and Pose in Text-to-Image Diffusion Models via Domain Adaptation,
WACV25(3688-3697)
IEEE DOI Code:
WWW Link. 2505
Visualization, Adaptation models, Shape, Text to image, Diffusion models, Vectors, human body control BibRef

Xu, K.[Katherine], Zhang, L.Z.[Ling-Zhi], Shi, J.B.[Jian-Bo],
Good Seed Makes a Good Crop: Discovering Secret Seeds in Text-to-Image Diffusion Models,
WACV25(3024-3034)
IEEE DOI 2505
Visualization, Accuracy, Image synthesis, Noise, Text to image, Diffusion processes, Crops, Gray-scale, Diffusion models, seeds BibRef

Ram, S.[Shwetha], Neiman, T.[Tal], Feng, Q.L.[Qian-Li], Stuart, A.[Andrew], Tran, S.[Son], Chilimbi, T.[Trishul],
DreamBlend: Advancing Personalized Fine-Tuning of Text-to-Image Diffusion Models,
WACV25(3614-3623)
IEEE DOI 2505
Image synthesis, Text to image, Diffusion models, Image reconstruction BibRef

Jena, R.[Rohit], Taghibakhshi, A.[Ali], Jain, S.[Sahil], Shen, G.[Gerald], Tajbakhsh, N.[Nima], Vahdat, A.[Arash],
Elucidating Optimal Reward-Diversity Tradeoffs in Text-to-Image Diffusion Models,
WACV25(232-242)
IEEE DOI 2505
Measurement, Training, Annealing, Monte Carlo methods, Noise, Text to image, Diffusion models, Data models, Computer crime, text-to-image alignment BibRef

Kang, W.J.[Won-Jun], Galim, K.[Kevin], Koo, H.I.[Hyung Il], Cho, N.I.[Nam Ik],
Counting Guidance for High Fidelity Text-to-Image Synthesis,
WACV25(899-908)
IEEE DOI 2505
Knowledge engineering, Noise, Noise reduction, Text to image, Diffusion models, Tuning, generative models, diffusion models, text-to-image generation BibRef

Sun, Z.C.[Zhi-Cheng], Jiang, Y.[Yuan], Qin, Z.[Zihao], Deng, Y.C.[Yan-Cong],
Clip2Sam: Enhanced End-to-End Text-to-Image Segmentation and Image Diffusion System,
ICIVC24(169-176)
IEEE DOI 2503
Image segmentation, Computational modeling, Semantics, Text to image, Surgery, Diffusion models, Robustness, Standards, Segmentation BibRef

Voynov, A.[Andrey], Hertz, A.[Amir], Arar, M.[Moab], Fruchter, S.[Shlomi], Cohen-Or, D.[Daniel],
Curved Diffusion: A Generative Model with Optical Geometry Control,
ECCV24(LXXVII: 149-164).
Springer DOI 2412
BibRef

Chen, J.S.[Jun-Song], Ge, C.J.[Chong-Jian], Xie, E.[Enze], Wu, Y.[Yue], Yao, L.W.[Le-Wei], Ren, X.Z.[Xiao-Zhe], Wang, Z.[Zhongdao], Luo, P.[Ping], Lu, H.C.[Hu-Chuan], Li, Z.G.[Zhen-Guo],
Pixart-sigma: Weak-to-strong Training of Diffusion Transformer for 4k Text-to-image Generation,
ECCV24(XXXII: 74-91).
Springer DOI 2412
BibRef

Salehi, S.[Sogand], Shafiei, M.[Mahdi], Yeo, T.[Teresa], Bachmann, R.[Roman], Zamir, A.[Amir],
Viper: Visual Personalization of Generative Models via Individual Preference Learning,
ECCV24(LXXIV: 391-406).
Springer DOI 2412
Code:
WWW Link. BibRef

Um, S.[Soobin], Ye, J.C.[Jong Chul],
Self-guided Generation of Minority Samples Using Diffusion Models,
ECCV24(LXVIII: 414-430).
Springer DOI 2412
Code:
WWW Link. BibRef

Mukhopadhyay, S.[Soumik], Gwilliam, M.[Matthew], Yamaguchi, Y.[Yosuke], Agarwal, V.[Vatsal], Padmanabhan, N.[Namitha], Swaminathan, A.[Archana], Zhou, T.Y.[Tian-Yi], Ohya, J.[Jun], Shrivastava, A.[Abhinav],
Do Text-free Diffusion Models Learn Discriminative Visual Representations?,
ECCV24(LX: 253-272).
Springer DOI 2412
Project:
WWW Link. Code:
WWW Link. BibRef

Wang, J.Y.[Jia-Yi], Laube, K.A.[Kevin Alexander], Li, Y.M.[Yu-Meng], Metzen, J.H.[Jan Hendrik], Cheng, S.I.[Shin-I], Borges, J.[Julio], Khoreva, A.[Anna],
Label-free Neural Semantic Image Synthesis,
ECCV24(LIII: 391-407).
Springer DOI 2412
BibRef

Xu, C.[Chen], Song, T.[Tianhui], Feng, W.X.[Wei-Xin], Li, X.B.[Xu-Bin], Ge, T.[Tiezheng], Zheng, B.[Bo], Wang, L.M.[Li-Min],
Accelerating Image Generation with Sub-path Linear Approximation Model,
ECCV24(LIII: 323-339).
Springer DOI 2412
BibRef

Zhang, S.[Shen], Chen, Z.W.[Zhao-Wei], Zhao, Z.Y.[Zhen-Yu], Chen, Y.H.[Yu-Hao], Tang, Y.[Yao], Liang, J.J.[Jia-Jun],
Hidiffusion: Unlocking Higher-resolution Creativity and Efficiency in Pretrained Diffusion Models,
ECCV24(LI: 145-161).
Springer DOI 2412
BibRef

Garibi, D.[Daniel], Patashnik, O.[Or], Voynov, A.[Andrey], Averbuch-Elor, H.[Hadar], Cohen-Or, D.[Daniel],
Renoise: Real Image Inversion Through Iterative Noising,
ECCV24(XIV: 395-413).
Springer DOI 2412
BibRef

Cao, Y.[Yu], Gong, S.G.[Shao-Gang],
Few-shot Image Generation by Conditional Relaxing Diffusion Inversion,
ECCV24(LXXXIV: 20-37).
Springer DOI 2412
BibRef

Huang, R.[Runhui], Cai, K.X.[Kai-Xin], Han, J.H.[Jian-Hua], Liang, X.D.[Xiao-Dan], Pei, R.[Renjing], Lu, G.S.[Guan-Song], Xu, S.[Songcen], Zhang, W.[Wei], Xu, H.[Hang],
Layerdiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-collaborative Diffusion Model,
ECCV24(LXXVI: 144-160).
Springer DOI 2412
BibRef

Brokman, J.[Jonathan], Hofman, O.[Omer], Vainshtein, R.[Roman], Giloni, A.[Amit], Shimizu, T.[Toshiya], Singh, I.[Inderjeet], Rachmil, O.[Oren], Zolfi, A.[Alon], Shabtai, A.[Asaf], Unno, Y.[Yuki], Kojima, H.[Hisashi],
Montrage: Monitoring Training for Attribution of Generative Diffusion Models,
ECCV24(LXXV: 1-17).
Springer DOI 2412
BibRef

Desai, A.[Alakh], Vasconcelos, N.M.[Nuno M.],
Improving Image Synthesis with Diffusion-negative Sampling,
ECCV24(LIII: 199-214).
Springer DOI 2412
BibRef

Zhang, M.Y.[Man-Yuan], Song, G.L.[Guang-Lu], Shi, X.Y.[Xiao-Yu], Liu, Y.[Yu], Li, H.S.[Hong-Sheng],
Three Things We Need to Know About Transferring Stable Diffusion to Visual Dense Prediction Tasks,
ECCV24(XLII: 128-145).
Springer DOI 2412
BibRef

Huang, L.J.[Lin-Jiang], Fang, R.Y.[Rong-Yao], Zhang, A.P.[Ai-Ping], Song, G.L.[Guang-Lu], Liu, S.[Si], Liu, Y.[Yu], Li, H.S.[Hong-Sheng],
FouriScale: A Frequency Perspective on Training-free High-resolution Image Synthesis,
ECCV24(XII: 196-212).
Springer DOI 2412
Code:
WWW Link. BibRef

Guan, S.[Shanyan], Ge, Y.H.[Yan-Hao], Tai, Y.[Ying], Yang, J.[Jian], Li, W.[Wei], You, M.Y.[Ming-Yu],
Hybridbooth: Hybrid Prompt Inversion for Efficient Subject-driven Generation,
ECCV24(IX: 403-419).
Springer DOI 2412
BibRef

Wu, Y.[Yi], Li, Z.Q.[Zi-Qiang], Zheng, H.L.[He-Liang], Wang, C.Y.[Chao-Yue], Li, B.[Bin],
Infinite-ID: Identity-preserved Personalization via ID-Semantics Decoupling Paradigm,
ECCV24(VIII: 279-296).
Springer DOI 2412
BibRef

Butt, M.A.[Muhammad Atif], Wang, K.[Kai], Vazquez-Corral, J.[Javier], van de Weijer, J.[Joost],
ColorPeel: Color Prompt Learning with Diffusion Models via Color and Shape Disentanglement,
ECCV24(VII: 456-472).
Springer DOI 2412
Project:
WWW Link. BibRef

Li, M.[Ming], Yang, T.[Taojiannan], Kuang, H.F.[Hua-Feng], Wu, J.[Jie], Wang, Z.N.[Zhao-Ning], Xiao, X.F.[Xue-Feng], Chen, C.[Chen],
Controlnet++: Improving Conditional Controls with Efficient Consistency Feedback,
ECCV24(VII: 129-147).
Springer DOI 2412
Project:
WWW Link. BibRef

Ning, W.X.[Wen-Xin], Chang, D.L.[Dong-Liang], Tong, Y.J.[Yu-Jun], He, Z.J.[Zhong-Jiang], Liang, K.M.[Kong-Ming], Ma, Z.Y.[Zhan-Yu],
Hierarchical Prompting for Diffusion Classifiers,
ACCV24(VIII: 297-314).
Springer DOI 2412
Code:
WWW Link. BibRef

Kim, G.[Gwanghyun], Kim, H.[Hayeon], Seo, H.[Hoigi], Kang, D.U.[Dong Un], Chun, S.Y.[Se Young],
Beyondscene: Higher-resolution Human-centric Scene Generation with Pretrained Diffusion,
ECCV24(LXIV: 126-142).
Springer DOI 2412
BibRef

Wang, Y.L.[Yi-Lin], Chen, Z.Y.[Ze-Yuan], Zhong, L.J.[Liang-Jun], Ding, Z.[Zheng], Tu, Z.W.[Zhuo-Wen],
Dolfin: Diffusion Layout Transformers Without Autoencoder,
ECCV24(LI: 326-343).
Springer DOI 2412
BibRef

Najdenkoska, I.[Ivona], Sinha, A.[Animesh], Dubey, A.[Abhimanyu], Mahajan, D.[Dhruv], Ramanathan, V.[Vignesh], Radenovic, F.[Filip],
Context Diffusion: In-context Aware Image Generation,
ECCV24(LXXVII: 375-391).
Springer DOI 2412
BibRef

Ma, N.[Nanye], Goldstein, M.[Mark], Albergo, M.S.[Michael S.], Boffi, N.M.[Nicholas M.], Vanden-Eijnden, E.[Eric], Xie, S.[Saining],
SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers,
ECCV24(LXXVII: 23-40).
Springer DOI 2412
BibRef

Zhang, D.J.H.[David Jun-Hao], Xu, M.[Mutian], Wu, J.Z.J.[Jay Zhang-Jie], Xue, C.[Chuhui], Zhang, W.Q.[Wen-Qing], Han, X.G.[Xiao-Guang], Bai, S.[Song], Shou, M.Z.[Mike Zheng],
Free-atm: Harnessing Free Attention Masks for Representation Learning on Diffusion-generated Images,
ECCV24(XL: 465-482).
Springer DOI 2412
BibRef

Yu, Z.M.[Zheng-Ming], Dou, Z.Y.[Zhi-Yang], Long, X.X.[Xiao-Xiao], Lin, C.[Cheng], Li, Z.K.[Ze-Kun], Liu, Y.[Yuan], Müller, N.[Norman], Komura, T.[Taku], Habermann, M.[Marc], Theobalt, C.[Christian], Li, X.[Xin], Wang, W.P.[Wen-Ping],
SURF-D: Generating High-quality Surfaces of Arbitrary Topologies Using Diffusion Models,
ECCV24(XXXIX: 419-438).
Springer DOI 2412
BibRef

Gandikota, R.[Rohit], Materzynska, J.[Joanna], Zhou, T.[Tingrui], Torralba, A.[Antonio], Bau, D.[David],
Concept Sliders: Lora Adaptors for Precise Control in Diffusion Models,
ECCV24(XL: 172-188).
Springer DOI 2412
BibRef

Iwai, S.[Shoma], Osanai, A.[Atsuki], Kitada, S.[Shunsuke], Omachi, S.[Shinichiro],
Layout-corrector: Alleviating Layout Sticking Phenomenon in Discrete Diffusion Model,
ECCV24(XXXIV: 92-110).
Springer DOI 2412
BibRef

Kong, Z.[Zhe], Zhang, Y.[Yong], Yang, T.Y.[Tian-Yu], Wang, T.[Tao], Zhang, K.H.[Kai-Hao], Wu, B.[Bizhu], Chen, G.Y.[Guan-Ying], Liu, W.[Wei], Luo, W.H.[Wen-Han],
OMG: Occlusion-friendly Personalized Multi-concept Generation in Diffusion Models,
ECCV24(XXXI: 253-270).
Springer DOI 2412
BibRef

Lin, Z.H.[Zhi-Hang], Lin, M.B.[Ming-Bao], Zhao, M.[Meng], Ji, R.R.[Rong-Rong],
Accdiffusion: An Accurate Method for Higher-resolution Image Generation,
ECCV24(VI: 38-53).
Springer DOI 2412
BibRef

Somepalli, G.[Gowthami], Gupta, A.[Anubhav], Gupta, K.[Kamal], Palta, S.[Shramay], Goldblum, M.[Micah], Geiping, J.[Jonas], Shrivastava, A.[Abhinav], Goldstein, T.[Tom],
Investigating Style Similarity in Diffusion Models,
ECCV24(LXVI: 143-160).
Springer DOI 2412
BibRef

Qi, Z.[Zipeng], Huang, G.X.[Guo-Xi], Liu, C.Y.[Chen-Yang], Ye, F.[Fei],
Layered Rendering Diffusion Model for Controllable Zero-Shot Image Synthesis,
ECCV24(LXVI: 426-443).
Springer DOI 2412
BibRef

Ju, X.[Xuan], Liu, X.[Xian], Wang, X.T.[Xin-Tao], Bian, Y.X.[Yu-Xuan], Shan, Y.[Ying], Xu, Q.[Qiang],
Brushnet: A Plug-and-play Image Inpainting Model with Decomposed Dual-branch Diffusion,
ECCV24(XX: 150-168).
Springer DOI 2412
BibRef

Lin, C.H.[Chieh Hubert], Kim, C.[Changil], Huang, J.B.[Jia-Bin], Li, Q.[Qinbo], Ma, C.Y.[Chih-Yao], Kopf, J.[Johannes], Yang, M.H.[Ming-Hsuan], Tseng, H.Y.[Hung-Yu],
Taming Latent Diffusion Model for Neural Radiance Field Inpainting,
ECCV24(III: 149-165).
Springer DOI 2412
BibRef

Gao, H.A.[Huan-Ang], Gao, M.J.[Ming-Ju], Li, J.[Jiaju], Li, W.[Wenyi], Zhi, R.[Rong], Tang, H.[Hao], Zhao, H.[Hao],
SCP-Diff: Spatial-categorical Joint Prior for Diffusion Based Semantic Image Synthesis,
ECCV24(XXXII: 37-54).
Springer DOI 2412
BibRef

Le, M.Q.[Minh-Quan], Graikos, A.[Alexandros], Yellapragada, S.[Srikar], Gupta, R.[Rajarsi], Saltz, J.[Joel], Samaras, D.[Dimitris],
∞-Brush: Controllable Large Image Synthesis with Diffusion Models in Infinite Dimensions,
ECCV24(XXXII: 385-401).
Springer DOI 2412
BibRef

Gong, C.[Chao], Chen, K.[Kai], Wei, Z.P.[Zhi-Peng], Chen, J.J.[Jing-Jing], Jiang, Y.G.[Yu-Gang],
Reliable and Efficient Concept Erasure of Text-to-image Diffusion Models,
ECCV24(LIII: 73-88).
Springer DOI 2412
BibRef

Luo, J.J.[Jian-Jie], Chen, J.W.[Jing-Wen], Li, Y.[Yehao], Pan, Y.W.[Ying-Wei], Feng, J.L.[Jian-Lin], Chao, H.Y.[Hong-Yang], Yao, T.[Ting],
Unleashing Text-to-image Diffusion Prior for Zero-shot Image Captioning,
ECCV24(LVII: 237-254).
Springer DOI 2412
BibRef

Lu, G.S.[Guan-Song], Guo, Y.F.[Yuan-Fan], Han, J.H.[Jian-Hua], Niu, M.Z.[Min-Zhe], Zeng, Y.H.[Yi-Han], Xu, S.[Songcen], Huang, Z.Y.[Ze-Yi], Zhong, Z.[Zhao], Zhang, W.[Wei], Xu, H.[Hang],
Pangu-draw: Advancing Resource-efficient Text-to-image Synthesis with Time-decoupled Training and Reusable Coop-diffusion,
ECCV24(XLV: 159-176).
Springer DOI 2412
BibRef

Huang, C.P.[Chi-Pin], Chang, K.P.[Kai-Po], Tsai, C.T.[Chung-Ting], Lai, Y.H.[Yung-Hsuan], Yang, F.E.[Fu-En], Wang, Y.C.F.[Yu-Chiang Frank],
Receler: Reliable Concept Erasing of Text-to-image Diffusion Models via Lightweight Erasers,
ECCV24(XL: 360-376).
Springer DOI 2412
BibRef

Zhang, Y.[Yasi], Yu, P.[Peiyu], Wu, Y.N.[Ying Nian],
Object-conditioned Energy-based Attention Map Alignment in Text-to-image Diffusion Models,
ECCV24(XLII: 55-71).
Springer DOI 2412
BibRef

Chai, W.L.[Wei-Long], Zheng, D.D.[Dan-Dan], Cao, J.J.[Jia-Jiong], Chen, Z.Q.[Zhi-Quan], Wang, C.B.[Chang-Bao], Ma, C.G.[Chen-Guang],
Speedupnet: A Plug-and-play Adapter Network for Accelerating Text-to-image Diffusion Models,
ECCV24(XLIII: 181-196).
Springer DOI 2412
BibRef

Zhang, Y.[Yi], Tang, Y.[Yun], Ruan, W.J.[Wen-Jie], Huang, X.W.[Xiao-Wei], Khastgir, S.[Siddartha], Jennings, P.[Paul], Zhao, X.Y.[Xing-Yu],
Protip: Probabilistic Robustness Verification on Text-to-image Diffusion Models Against Stochastic Perturbation,
ECCV24(XXXII: 455-472).
Springer DOI 2412
BibRef

Nair, N.G.[Nithin Gopalakrishnan], Valanarasu, J.M.J.[Jeya Maria Jose], Patel, V.M.[Vishal M.],
Maxfusion: Plug&play Multi-modal Generation in Text-to-Image Diffusion Models,
ECCV24(XXXVIII: 93-110).
Springer DOI 2412
BibRef

Zhang, Z.B.[Zheng-Bo], Xu, L.[Li], Peng, D.[Duo], Rahmani, H.[Hossein], Liu, J.[Jun],
Diff-tracker: Text-to-image Diffusion Models are Unsupervised Trackers,
ECCV24(XXVIII: 319-337).
Springer DOI 2412
BibRef

Motamed, S.[Saman], Paudel, D.P.[Danda Pani], Van Gool, L.J.[Luc J.],
Lego: Learning to Disentangle and Invert Personalized Concepts Beyond Object Appearance in Text-to-image Diffusion Models,
ECCV24(XV: 116-133).
Springer DOI 2412
BibRef

Kong, H.Y.[Han-Yang], Lian, D.Z.[Dong-Ze], Mi, M.B.[Michael Bi], Wang, X.C.[Xin-Chao],
Dreamdrone: Text-to-image Diffusion Models Are Zero-shot Perpetual View Generators,
ECCV24(XIII: 324-341).
Springer DOI 2412
BibRef

Peng, D.[Duo], Zhang, Z.B.[Zheng-Bo], Hu, P.[Ping], Ke, Q.H.[Qiu-Hong], Yau, D.K.Y.[David K. Y.], Liu, J.[Jun],
Harnessing Text-to-image Diffusion Models for Category-agnostic Pose Estimation,
ECCV24(XIII: 342-360).
Springer DOI 2412
BibRef

Zhao, T.C.[Tian-Chen], Ning, X.F.[Xue-Fei], Fang, T.[Tongcheng], Liu, E.[Enshu], Huang, G.[Guyue], Lin, Z.[Zinan], Yan, S.[Shengen], Dai, G.H.[Guo-Hao], Wang, Y.[Yu],
Mixdq: Memory-efficient Few-step Text-to-image Diffusion Models with Metric-decoupled Mixed Precision Quantization,
ECCV24(XIV: 285-302).
Springer DOI 2412
BibRef

Gao, Y.[Yi],
Psg-adapter: Controllable Planning Scene Graph for Improving Text-to-image Diffusion,
ACCV24(V: 205-221).
Springer DOI 2412
BibRef

Gupta, P.[Parul], Hayat, M.[Munawar], Dhall, A.[Abhinav], Do, T.T.[Thanh-Toan],
Conditional Distribution Modelling for Few-shot Image Synthesis with Diffusion Models,
ACCV24(V: 3-20).
Springer DOI 2412
BibRef

Ren, J.[Jie], Li, Y.X.[Ya-Xin], Zeng, S.L.[Sheng-Lai], Xu, H.[Han], Lyu, L.J.[Ling-Juan], Xing, Y.[Yue], Tang, J.[Jiliang],
Unveiling and Mitigating Memorization in Text-to-image Diffusion Models Through Cross Attention,
ECCV24(LXXVII: 340-356).
Springer DOI 2412
BibRef

Zheng, W.[Wendi], Teng, J.Y.[Jia-Yan], Yang, Z.[Zhuoyi], Wang, W.H.[Wei-Han], Chen, J.[Jidong], Gu, X.T.[Xiao-Tao], Dong, Y.X.[Yu-Xiao], Ding, M.[Ming], Tang, J.[Jie],
Cogview3: Finer and Faster Text-to-image Generation via Relay Diffusion,
ECCV24(LXXVII: 1-22).
Springer DOI 2412
BibRef

Zhao, J.[Juntu], Deng, J.Y.[Jun-Yu], Ye, Y.X.[Yi-Xin], Li, C.X.[Chong-Xuan], Deng, Z.J.[Zhi-Jie], Wang, D.[Dequan],
Lost in Translation: Latent Concept Misalignment in Text-to-image Diffusion Models,
ECCV24(LXIX: 318-333).
Springer DOI 2412
BibRef

Hui, X.F.[Xiao-Fei], Wu, Q.[Qian], Rahmani, H.[Hossein], Liu, J.[Jun],
Class-agnostic Object Counting with Text-to-image Diffusion Model,
ECCV24(LXIX: 1-18).
Springer DOI 2412
BibRef

Ma, J.[Jian], Chen, C.[Chen], Xie, Q.S.[Qing-Song], Lu, H.[Haonan],
Pea-diffusion: Parameter-efficient Adapter with Knowledge Distillation in Non-english Text-to-image Generation,
ECCV24(LXVIII: 89-105).
Springer DOI 2412
BibRef

Kim, S.[Sanghyun], Jung, S.[Seohyeon], Kim, B.[Balhae], Choi, M.[Moonseok], Shin, J.[Jinwoo], Lee, J.H.[Ju-Ho],
Safeguard Text-to-image Diffusion Models with Human Feedback Inversion,
ECCV24(LXVII: 128-145).
Springer DOI 2412
BibRef

Biggs, B.[Benjamin], Seshadri, A.[Arjun], Zou, Y.[Yang], Jain, A.[Achin], Golatkar, A.[Aditya], Xie, Y.S.[Yu-Sheng], Achille, A.[Alessandro], Swaminathan, A.[Ashwin], Soatto, S.[Stefano],
Diffusion Soup: Model Merging for Text-to-image Diffusion Models,
ECCV24(LXIII: 257-274).
Springer DOI 2412
BibRef

Zhao, Y.[Yang], Xu, Y.[Yanwu], Xiao, Z.S.[Zhi-Sheng], Jia, H.L.[Hao-Lin], Hou, T.B.[Ting-Bo],
Mobilediffusion: Instant Text-to-image Generation on Mobile Devices,
ECCV24(LXII: 225-242).
Springer DOI 2412
BibRef

Zhang, Y.[Yang], Tzun, T.T.[Teoh Tze], Hern, L.W.[Lim Wei], Kawaguchi, K.[Kenji],
Enhancing Semantic Fidelity in Text-to-image Synthesis: Attention Regulation in Diffusion Models,
ECCV24(LXXXVI: 70-86).
Springer DOI 2412
BibRef

Wang, Z.Q.[Zhong-Qi], Zhang, J.[Jie], Shan, S.G.[Shi-Guang], Chen, X.L.[Xi-Lin],
T2ishield: Defending Against Backdoors on Text-to-image Diffusion Models,
ECCV24(LXXXV: 107-124).
Springer DOI 2412
BibRef

Kim, C.[Changhoon], Min, K.[Kyle], Yang, Y.Z.[Ye-Zhou],
R.A.C.E.: Robust Adversarial Concept Erasure for Secure Text-to-image Diffusion Model,
ECCV24(LXXXIII: 461-478).
Springer DOI 2412
BibRef

Wu, X.S.[Xiao-Shi], Hao, Y.M.[Yi-Ming], Zhang, M.Y.[Man-Yuan], Sun, K.Q.[Ke-Qiang], Huang, Z.Y.[Zhao-Yang], Song, G.L.[Guang-Lu], Liu, Y.[Yu], Li, H.S.[Hong-Sheng],
Deep Reward Supervisions for Tuning Text-to-image Diffusion Models,
ECCV24(LXXXIII: 108-124).
Springer DOI 2412
BibRef

Parihar, R.[Rishubh], Sachidanand, V.S., Mani, S.[Sabariswaran], Karmali, T.[Tejan], Babu, R.V.[R. Venkatesh],
Precisecontrol: Enhancing Text-to-image Diffusion Models with Fine-grained Attribute Control,
ECCV24(LXXXII: 469-487).
Springer DOI 2412
BibRef

Choi, D.W.[Dae-Won], Jeong, J.[Jongheon], Jang, H.[Huiwon], Shin, J.[Jinwoo],
Adversarial Robustification via Text-to-image Diffusion Models,
ECCV24(LXXXI: 158-177).
Springer DOI 2412
BibRef

Zavadski, D.[Denis], Feiden, J.F.[Johann-Friedrich], Rother, C.[Carsten],
Controlnet-xs: Rethinking the Control of Text-to-image Diffusion Models as Feedback-control Systems,
ECCV24(LXXXVIII: 343-362).
Springer DOI 2412
BibRef

Maung-Maung, A.P.[April-Pyone], Nguyen, H.H.[Huy H.], Kiya, H.[Hitoshi], Echizen, I.[Isao],
Fine-Tuning Text-To-Image Diffusion Models for Class-Wise Spurious Feature Generation,
ICIP24(3910-3916)
IEEE DOI 2411
Text to image, Flowering plants, Diffusion models, Feature extraction, Information filters, Internet, Testing, finetuning BibRef

Hudson, D.A.[Drew A.], Zoran, D.[Daniel], Malinowski, M.[Mateusz], Lampinen, A.K.[Andrew K.], Jaegle, A.[Andrew], McClelland, J.L.[James L.], Matthey, L.[Loic], Hill, F.[Felix], Lerchner, A.[Alexander],
SODA: Bottleneck Diffusion Models for Representation Learning,
CVPR24(23115-23127)
IEEE DOI 2410
Representation learning, Training, Visualization, Image synthesis, Semantics, Noise reduction, Self-supervised learning, classification BibRef

Karras, T.[Tero], Aittala, M.[Miika], Lehtinen, J.[Jaakko], Hellsten, J.[Janne], Aila, T.[Timo], Laine, S.[Samuli],
Analyzing and Improving the Training Dynamics of Diffusion Models,
CVPR24(24174-24184)
IEEE DOI 2410
Training, Systematics, Costs, Image synthesis, Computer architecture, Network architecture BibRef

Li, J.[Jing], Wang, Z.[Zigan], Li, J.L.[Jin-Liang],
AdvDenoise: Fast Generation Framework of Universal and Robust Adversarial Patches Using Denoise,
SAIAD24(3481-3490)
IEEE DOI Code:
WWW Link. 2410
Visualization, Computational modeling, Noise reduction, Diffusion models, Transformers, Robustness BibRef

Tang, S.[Siao], Wang, X.[Xin], Chen, H.[Hong], Guan, C.[Chaoyu], Wu, Z.[Zewen], Tang, Y.S.[Yan-Song], Zhu, W.W.[Wen-Wu],
Post-training Quantization with Progressive Calibration and Activation Relaxing for Text-to-image Diffusion Models,
ECCV24(LVI: 404-420).
Springer DOI 2412
BibRef

Wang, C.Y.[Chang-Yuan], Wang, Z.W.[Zi-Wei], Xu, X.W.[Xiu-Wei], Tang, Y.S.[Yan-Song], Zhou, J.[Jie], Lu, J.W.[Ji-Wen],
Towards Accurate Post-Training Quantization for Diffusion Models,
CVPR24(16026-16035)
IEEE DOI Code:
WWW Link. 2410
Quantization (signal), Risk minimization, Accuracy, Tensors, Image synthesis, Diffusion models, Minimization, diffusion model, network quantization BibRef

Islam, K.[Khawar], Zaheer, M.Z.[Muhammad Zaigham], Mahmood, A.[Arif], Nandakumar, K.[Karthik],
Diffusemix: Label-Preserving Data Augmentation with Diffusion Models,
CVPR24(27611-27620)
IEEE DOI Code:
WWW Link. 2410
Training, Performance gain, Diffusion models, Data augmentation, Robustness, Image augmentation, Fractals, data augmentation, cutmix BibRef

Miao, Z.C.[Zi-Chen], Wang, J.[Jiang], Wang, Z.[Ze], Yang, Z.Y.[Zheng-Yuan], Wang, L.J.[Li-Juan], Qiu, Q.[Qiang], Liu, Z.C.[Zi-Cheng],
Training Diffusion Models Towards Diverse Image Generation with Reinforcement Learning,
CVPR24(10844-10853)
IEEE DOI 2410
Training, Gradient methods, Limiting, Image synthesis, Estimation, Diffusion processes, Reinforcement learning BibRef

Shabani, M.A.[Mohammad Amin], Wang, Z.W.[Zhao-Wen], Liu, D.[Difan], Zhao, N.X.[Nan-Xuan], Yang, J.[Jimei], Furukawa, Y.[Yasutaka],
Visual Layout Composer: Image-Vector Dual Diffusion Model for Design Layout Generation,
CVPR24(9222-9231)
IEEE DOI Code:
WWW Link. 2410
Visualization, Computational modeling, Layout, Diffusion models, Controllability, Vectors BibRef

Qian, Y.R.[Yu-Rui], Cai, Q.[Qi], Pan, Y.W.[Ying-Wei], Li, Y.[Yehao], Yao, T.[Ting], Sun, Q.[Qibin], Mei, T.[Tao],
Boosting Diffusion Models with Moving Average Sampling in Frequency Domain,
CVPR24(8911-8920)
IEEE DOI 2410
Schedules, Image synthesis, Frequency-domain analysis, Noise reduction, Diffusion processes, Diffusion models, image generation BibRef

Yang, K.[Kai], Tao, J.[Jian], Lyu, J.[Jiafei], Ge, C.J.[Chun-Jiang], Chen, J.X.[Jia-Xin], Shen, W.H.[Wei-Han], Zhu, X.L.[Xiao-Long], Li, X.[Xiu],
Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model,
CVPR24(8941-8951)
IEEE DOI Code:
WWW Link. 2410
Training, Analytical models, Image coding, Computational modeling, Noise reduction, Graphics processing units, Diffusion models, Human feedback BibRef

Zhu, R.[Rui], Pan, Y.W.[Ying-Wei], Li, Y.[Yehao], Yao, T.[Ting], Sun, Z.L.[Zheng-Long], Mei, T.[Tao], Chen, C.W.[Chang Wen],
SD-DiT: Unleashing the Power of Self-Supervised Discrimination in Diffusion Transformer,
CVPR24(8435-8445)
IEEE DOI 2410
Training, Image synthesis, Noise, Diffusion processes, Ordinary differential equations, Transformers, self-supervised learning BibRef

Zhou, Z.Y.[Zhen-Yu], Chen, D.[Defang], Wang, C.[Can], Chen, C.[Chun],
Fast ODE-based Sampling for Diffusion Models in Around 5 Steps,
CVPR24(7777-7786)
IEEE DOI Code:
WWW Link. 2410
Degradation, Image resolution, Image synthesis, Ordinary differential equations, Diffusion models, Fast Sampling BibRef

Lee, H.Y.[Hsin-Ying], Tseng, H.Y.[Hung-Yu], Lee, H.Y.[Hsin-Ying], Yang, M.H.[Ming-Hsuan],
Exploiting Diffusion Prior for Generalizable Dense Prediction,
CVPR24(7861-7871)
IEEE DOI Code:
WWW Link. 2410
Adaptation models, Visualization, Training data, Stochastic processes, Estimation, Diffusion processes, image generation BibRef

Li, M.Y.[Mu-Yang], Cai, T.[Tianle], Cao, J.X.[Jia-Xin], Zhang, Q.S.[Qin-Sheng], Cai, H.[Han], Bai, J.J.[Jun-Jie], Jia, Y.Q.[Yang-Qing], Li, K.[Kai], Han, S.[Song],
DistriFusion: Distributed Parallel Inference for High-Resolution Diffusion Models,
CVPR24(7183-7193)
IEEE DOI 2410
Degradation, Computational modeling, Graphics processing units, Diffusion processes, Parallel processing, Diffusion models, generative-ai BibRef

Koley, S.[Subhadeep], Bhunia, A.K.[Ayan Kumar], Sekhri, D.[Deeptanshu], Sain, A.[Aneeshan], Chowdhury, P.N.[Pinaki Nath], Xiang, T.[Tao], Song, Y.Z.[Yi-Zhe],
It's All About Your Sketch: Democratising Sketch Control in Diffusion Models,
CVPR24(7204-7214)
IEEE DOI 2410
Adaptation models, Adaptive systems, Navigation, Generative AI, Image retrieval, Process control, Streaming media BibRef

Wang, Y.[Yibo], Gao, R.Y.[Rui-Yuan], Chen, K.[Kai], Zhou, K.Q.[Kai-Qiang], Cai, Y.J.[Ying-Jie], Hong, L.Q.[Lan-Qing], Li, Z.G.[Zhen-Guo], Jiang, L.H.[Li-Hui], Yeung, D.Y.[Dit-Yan], Xu, Q.[Qiang], Zhang, K.[Kai],
DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception,
CVPR24(7246-7255)
IEEE DOI 2410
Image segmentation, Image recognition, Image synthesis, Training data, Object detection, Diffusion models, Data augmentation BibRef

Zhang, P.Z.[Peng-Ze], Yin, H.[Hubery], Li, C.[Chen], Xie, X.H.[Xiao-Hua],
Tackling the Singularities at the Endpoints of Time Intervals in Diffusion Models,
CVPR24(6945-6954)
IEEE DOI 2410
Training, Brightness, Gaussian distribution, Diffusion models, Diffusion Model, Generative Model, Singularity BibRef

Hong, S.[Seongmin], Lee, K.[Kyeonghyun], Jeon, S.Y.[Suh Yoon], Bae, H.[Hyewon], Chun, S.Y.[Se Young],
On Exact Inversion of DPM-Solvers,
CVPR24(7069-7078)
IEEE DOI 2410
Noise, Noise reduction, Watermarking, Diffusion models, Robustness, Diffusion, Inversion, DPM-Solver BibRef

Deng, F.[Fei], Wang, Q.F.[Qi-Fei], Wei, W.[Wei], Hou, T.B.[Ting-Bo], Grundmann, M.[Matthias],
PRDP: Proximal Reward Difference Prediction for Large-Scale Reward Finetuning of Diffusion Models,
CVPR24(7423-7433)
IEEE DOI 2410
Training, Technological innovation, Closed box, Reinforcement learning, Diffusion models, RLHF BibRef

Du, R.[Ruoyi], Chang, D.L.[Dong-Liang], Hospedales, T.[Timothy], Song, Y.Z.[Yi-Zhe], Ma, Z.Y.[Zhan-Yu],
DemoFusion: Democratising High-Resolution Image Generation With No $$,
CVPR24(6159-6168)
IEEE DOI 2410
Training, Image resolution, Image synthesis, Generative AI, Semantics, Memory management, Image Generation, Diffusion Model, High-resolution BibRef

Wang, H.J.[Hong-Jie], Liu, D.[Difan], Kang, Y.[Yan], Li, Y.J.[Yi-Jun], Lin, Z.[Zhe], Jha, N.K.[Niraj K.], Liu, Y.C.[Yu-Chen],
Attention-Driven Training-Free Efficiency Enhancement of Diffusion Models,
CVPR24(16080-16089)
IEEE DOI Code:
WWW Link. 2410
Image quality, Schedules, Costs, Convolution, Computational modeling, Noise reduction, diffusion model, training-free, efficiency, attention map BibRef

Kang, J.[Junoh], Choi, J.[Jinyoung], Choi, S.[Sungik], Han, B.H.[Bo-Hyung],
Observation-Guided Diffusion Probabilistic Models,
CVPR24(8323-8331)
IEEE DOI Code:
WWW Link. 2410
Training, Accuracy, Computational modeling, Noise reduction, Quality control, Diffusion models, Robustness, generative models, diffusion models BibRef

Zhou, J.X.[Jin-Xin], Ding, T.Y.[Tian-Yu], Chen, T.Y.[Tian-Yi], Jiang, J.C.[Jia-Chen], Zharkov, I.[Ilya], Zhu, Z.H.[Zhi-Hui], Liang, L.[Luming],
DREAM: Diffusion Rectification and Estimation-Adaptive Models,
CVPR24(8342-8351)
IEEE DOI 2410
Training, Image quality, Navigation, Source coding, Superresolution, Estimation, Distortion BibRef

Chen, C.[Chen], Liu, D.[Daochang], Xu, C.[Chang],
Towards Memorization-Free Diffusion Models,
CVPR24(8425-8434)
IEEE DOI 2410
Image quality, Training, Measurement, Refining, Noise reduction, Training data, Reliability theory, Diffusion Models, Memorization BibRef

Qi, L.[Lu], Yang, L.[Lehan], Guo, W.D.[Wei-Dong], Xu, Y.[Yu], Du, B.[Bo], Jampani, V.[Varun], Yang, M.H.[Ming-Hsuan],
UniGS: Unified Representation for Image Generation and Segmentation,
CVPR24(6305-6315)
IEEE DOI 2410
Training, Image segmentation, Image synthesis, Image color analysis, Pipelines, Training data, Transforms, diffusion BibRef

Wang, L.Z.[Le-Zhong], Frisvad, J.R.[Jeppe Revall], Jensen, M.B.[Mark Bo], Bigdeli, S.A.[Siavash Arjomand],
StereoDiffusion: Training-Free Stereo Image Generation Using Latent Diffusion Models,
GCV24(7416-7425)
IEEE DOI 2410
Image quality, Image synthesis, Extended reality, Pipelines, Noise reduction, Diffusion models, Deep Image/Video Synthesis, Stable Diffusion BibRef

Sharma, N.[Nakul], Tripathi, A.[Aditay], Chakraborty, A.[Anirban], Mishra, A.[Anand],
Sketch-guided Image Inpainting with Partial Discrete Diffusion Process,
NTIRE24(6024-6034)
IEEE DOI Code:
WWW Link. 2410
Visualization, Shape, Semantics, Diffusion processes, Text to image, Transformers BibRef

Guo, J.Y.[Jia-Yi], Xu, X.Q.[Xing-Qian], Pu, Y.F.[Yi-Fan], Ni, Z.[Zanlin], Wang, C.F.[Chao-Fei], Vasu, M.[Manushree], Song, S.[Shiji], Huang, G.[Gao], Shi, H.[Humphrey],
Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models,
CVPR24(7548-7558)
IEEE DOI Code:
WWW Link. 2410
Training, Measurement, Interpolation, Visualization, Fluctuations, Perturbation methods, Text to image BibRef

Lyu, M.Y.[Meng-Yao], Yang, Y.H.[Yu-Hong], Hong, H.[Haiwen], Chen, H.[Hui], Jin, X.[Xuan], He, Y.[Yuan], Xue, H.[Hui], Han, J.G.[Jun-Gong], Ding, G.[Guiguang],
One-dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications,
CVPR24(7559-7568)
IEEE DOI Code:
WWW Link. 2410
Deformable models, Adaptation models, Costs, Deformation, Text to image, Diffusion models, Permeability, Diffusion Models, Concept Erasing BibRef

Yang, L.[Ling], Qian, H.T.[Hao-Tian], Zhang, Z.L.[Zhi-Ling], Liu, J.W.[Jing-Wei], Cui, B.[Bin],
Structure-Guided Adversarial Training of Diffusion Models,
CVPR24(7256-7266)
IEEE DOI 2410
Training, Manifolds, Image synthesis, Noise reduction, Text to image, Diffusion models, Data models, Diffusion models, generative models, Image generation BibRef

Yu, Y.Y.[Yu-Yang], Liu, B.Z.[Bang-Zhen], Zheng, C.X.[Chen-Xi], Xu, X.M.[Xue-Miao], He, S.F.[Sheng-Feng], Zhang, H.D.[Huai-Dong],
Beyond Textual Constraints: Learning Novel Diffusion Conditions with Fewer Examples,
CVPR24(7109-7118)
IEEE DOI Code:
WWW Link. 2410
Training, Adaptation models, Codes, Text to image, Diffusion processes, Diffusion models, diffusion model BibRef

Xing, X.M.[Xi-Ming], Zhou, H.T.[Hai-Tao], Wang, C.[Chuang], Zhang, J.[Jing], Xu, D.[Dong], Yu, Q.[Qian],
SVGDreamer: Text Guided SVG Generation with Diffusion Model,
CVPR24(4546-4555)
IEEE DOI Code:
WWW Link. 2410
Visualization, Image color analysis, Shape, Text to image, Process control, Diffusion models, vector graphics, SVG, text-to-svg, Diffusion BibRef

Parihar, R.[Rishubh], Bhat, A.[Abhijnya], Basu, A.[Abhipsa], Mallick, S.[Saswat], Kundu, J.N.[Jogendra Nath], Babu, R.V.[R. Venkatesh],
Balancing Act: Distribution-Guided Debiasing in Diffusion Models,
CVPR24(6668-6678)
IEEE DOI 2410
Training, Image synthesis, Semantics, Noise reduction, Text to image, Diffusion models, Data augmentation, Debiasing, diffusion models, generative models BibRef

Ren, J.W.[Jia-Wei], Xu, M.M.[Meng-Meng], Wu, J.C.[Jui-Chieh], Liu, Z.W.[Zi-Wei], Xiang, T.[Tao], Toisoul, A.[Antoine],
Move Anything with Layered Scene Diffusion,
CVPR24(6380-6389)
IEEE DOI 2410
Codes, Layout, Noise reduction, Memory management, Text to image, Process control BibRef

Lu, Y.Z.[Yan-Zuo], Zhang, M.[Manlin], Ma, A.J.[Andy J.], Xie, X.H.[Xiao-Hua], Lai, J.H.[Jian-Huang],
Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis,
CVPR24(6420-6429)
IEEE DOI Code:
WWW Link. 2410
Training, Image synthesis, Semantics, Text to image, Process control, Diffusion models, Generators, Diffusion Model, Person Image Synthesis BibRef

Liu, C.[Chang], Wu, H.N.[Hao-Ning], Zhong, Y.J.[Yu-Jie], Zhang, X.Y.[Xiao-Yun], Wang, Y.F.[Yan-Feng], Xie, W.[Weidi],
Intelligent Grimm: Open-ended Visual Storytelling via Latent Diffusion Models,
CVPR24(6190-6200)
IEEE DOI Code:
WWW Link. 2410
Visualization, Electronic publishing, Computational modeling, Pipelines, Text to image, Image sequences, Diffusion Models BibRef

Wimbauer, F.[Felix], Wu, B.[Bichen], Schoenfeld, E.[Edgar], Dai, X.L.[Xiao-Liang], Hou, J.[Ji], He, Z.J.[Zi-Jian], Sanakoyeu, A.[Artsiom], Zhang, P.Z.[Pei-Zhao], Tsai, S.[Sam], Kohler, J.[Jonas], Rupprecht, C.[Christian], Cremers, D.[Daniel], Vajda, P.[Peter], Wang, J.L.[Jia-Liang],
Cache Me if You Can: Accelerating Diffusion Models through Block Caching,
CVPR24(6211-6220)
IEEE DOI 2410
Image quality, Visualization, Schedules, Image synthesis, Computational modeling, Noise reduction, Noise, diffusion, fid BibRef

Dalva, Y.[Yusuf], Yanardag, P.[Pinar],
NoiseCLR: A Contrastive Learning Approach for Unsupervised Discovery of Interpretable Directions in Diffusion Models,
CVPR24(24209-24218)
IEEE DOI 2410
Image synthesis, Computational modeling, Semantics, Text to image, Contrastive learning, Aerospace electronics, Diffusion models, semantic discovery BibRef

Sun, H.[Haoze], Li, W.B.[Wen-Bo], Liu, J.Z.[Jian-Zhuang], Chen, H.Y.[Hao-Yu], Pei, R.[Renjing], Zou, X.[Xueyi], Yan, Y.[Youliang], Yang, Y.[Yujiu],
CoSeR: Bridging Image and Language for Cognitive Super-Resolution,
CVPR24(25868-25878)
IEEE DOI Code:
WWW Link. 2410
Computational modeling, Superresolution, Semantics, Text to image, Benchmark testing, Diffusion models BibRef

Wang, Z.C.[Zhi-Cai], Wei, L.H.[Long-Hui], Wang, T.[Tan], Chen, H.Y.[He-Yu], Hao, Y.B.[Yan-Bin], Wang, X.[Xiang], He, X.N.[Xiang-Nan], Tian, Q.[Qi],
Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model,
CVPR24(17223-17233)
IEEE DOI Code:
WWW Link. 2410
Training, Computational modeling, Text to image, Data augmentation, Diffusion models, diffusion model, data augmentation BibRef

Hsiao, Y.T.[Yi-Ting], Khodadadeh, S.[Siavash], Duarte, K.[Kevin], Lin, W.A.[Wei-An], Qu, H.[Hui], Kwon, M.[Mingi], Kalarot, R.[Ratheesh],
Plug-and-Play Diffusion Distillation,
CVPR24(13743-13752)
IEEE DOI 2410
Training, Visualization, Image synthesis, Computational modeling, Text to image, Diffusion processes, distillation, model efficiency, diffusion model BibRef

Zhan, C.L.[Chen-Lu], Lin, Y.[Yu], Wang, G.A.[Gao-Ang], Wang, H.W.[Hong-Wei], Wu, J.[Jian],
MedM2G: Unifying Medical Multi-Modal Generation via Cross-Guided Diffusion with Visual Invariant,
CVPR24(11502-11512)
IEEE DOI 2410
Visualization, Adaptation models, Technological innovation, Magnetic resonance imaging, Text to image, Medical services, Diffusion Model BibRef

Kant, Y.[Yash], Siarohin, A.[Aliaksandr], Wu, Z.[Ziyi], Vasilkovsky, M.[Michael], Qian, G.C.[Guo-Cheng], Ren, J.[Jian], Guler, R.A.[Riza Alp], Ghanem, B.[Bernard], Tulyakov, S.[Sergey], Gilitschenski, I.[Igor],
SPAD: Spatially Aware Multi-View Diffusers,
CVPR24(10026-10038)
IEEE DOI 2410
Geometry, Text to image, Transforms, Cameras, Diffusion models, Encoding, novel view synthesis, diffusion BibRef

Starodubcev, N.[Nikita], Baranchuk, D.[Dmitry], Fedorov, A.[Artem], Babenko, A.[Artem],
Your Student is Better than Expected: Adaptive Teacher-Student Collaboration for Text-Conditional Diffusion Models,
CVPR24(9275-9285)
IEEE DOI 2410
Adaptation models, Computational modeling, Pipelines, Text to image, Collaboration, Diffusion models, Image and video synthesis and generation BibRef

Mei, K.[Kangfu], Delbracio, M.[Mauricio], Talebi, H.[Hossein], Tu, Z.Z.[Zheng-Zhong], Patel, V.M.[Vishal M.], Milanfar, P.[Peyman],
CoDi: Conditional Diffusion Distillation for Higher-Fidelity and Faster Image Generation,
CVPR24(9048-9058)
IEEE DOI 2410
Image synthesis, Superresolution, Text to image, Predictive models, Diffusion models BibRef

Ran, L.M.[Ling-Min], Cun, X.D.[Xiao-Dong], Liu, J.W.[Jia-Wei], Zhao, R.[Rui], Zijie, S.[Song], Wang, X.T.[Xin-Tao], Keppo, J.[Jussi], Shou, M.Z.[Mike Zheng],
X-Adapter: Universal Compatibility of Plugins for Upgraded Diffusion Model,
CVPR24(8775-8784)
IEEE DOI Code:
WWW Link. 2410
Training, Connectors, Adaptation models, Noise reduction, Text to image, Diffusion models, Data models BibRef

Liu, Y.J.[Yu-Jian], Zhang, Y.[Yang], Jaakkola, T.[Tommi], Chang, S.Y.[Shi-Yu],
Correcting Diffusion Generation Through Resampling,
CVPR24(8713-8723)
IEEE DOI Code:
WWW Link. 2410
Image quality, Image synthesis, Filtering, Computational modeling, Text to image, Detectors, image generation, diffusion model, particle filtering BibRef

Luo, G.[Grace], Darrell, T.J.[Trevor J.], Wang, O.[Oliver], Goldman, D.B.[Dan B], Holynski, A.[Aleksander],
Readout Guidance: Learning Control from Diffusion Features,
CVPR24(8217-8227)
IEEE DOI Code:
WWW Link. 2410
Training, Head, Image edge detection, Training data, Text to image, Diffusion models, Image and video synthesis and generation BibRef

Wallace, B.[Bram], Dang, M.[Meihua], Rafailov, R.[Rafael], Zhou, L.Q.[Lin-Qi], Lou, A.[Aaron], Purushwalkam, S.[Senthil], Ermon, S.[Stefano], Xiong, C.M.[Cai-Ming], Joty, S.[Shafiq], Naik, N.[Nikhil],
Diffusion Model Alignment Using Direct Preference Optimization,
CVPR24(8228-8238)
IEEE DOI 2410
Training, Learning systems, Visualization, Pipelines, Text to image, Reinforcement learning, Diffusion models, generative, diffusion, dpo BibRef

Yan, J.N.[Jing Nathan], Gu, J.[Jiatao], Rush, A.M.[Alexander M.],
Diffusion Models Without Attention,
CVPR24(8239-8249)
IEEE DOI 2410
Training, Image resolution, Computational modeling, Noise reduction, Text to image, Computer architecture BibRef

Gokaslan, A.[Aaron], Cooper, A.F.[A. Feder], Collins, J.[Jasmine], Seguin, L.[Landan], Jacobson, A.[Austin], Patel, M.[Mihir], Frankle, J.[Jonathan], Stephenson, C.[Cory], Kuleshov, V.[Volodymyr],
Common Canvas: Open Diffusion Models Trained on Creative-Commons Images,
CVPR24(8250-8260)
IEEE DOI 2410
Training, Computational modeling, Transfer learning, Text to image, Diffusion models, Data models, diffusion, copyright, text2image, dataset BibRef

Habibian, A.[Amirhossein], Ghodrati, A.[Amir], Fathima, N.[Noor], Sautiere, G.[Guillaume], Garrepalli, R.[Risheek], Porikli, F.M.[Fatih M.], Petersen, J.[Jens],
Clockwork Diffusion: Efficient Generation With Model-Step Distillation,
CVPR24(8352-8361)
IEEE DOI Code:
WWW Link. 2410
Training, Adaptation models, Runtime, Noise reduction, Semantics, Layout, Text to image, diffusion, efficient diffusion, distillation BibRef

Wang, J.Y.[Jun-Yan], Sun, Z.H.[Zhen-Hong], Tan, Z.Y.[Zhi-Yu], Chen, X.B.[Xuan-Bai], Chen, W.H.[Wei-Hua], Li, H.[Hao], Zhang, C.[Cheng], Song, Y.[Yang],
Towards Effective Usage of Human-Centric Priors in Diffusion Models for Text-based Human Image Generation,
CVPR24(8446-8455)
IEEE DOI Code:
WWW Link. 2410
Accuracy, Image synthesis, Semantics, Text to image, Diffusion processes, Diffusion models BibRef

Feng, Y.T.[Yu-Tong], Gong, B.[Biao], Chen, D.[Di], Shen, Y.J.[Yu-Jun], Liu, Y.[Yu], Zhou, J.[Jingren],
Ranni: Taming Text-to-Image Diffusion for Accurate Instruction Following,
CVPR24(4744-4753)
IEEE DOI 2410
Visualization, Protocols, Semantics, Pipelines, Text to image, Diffusion models, Generators, diffusion model, text-to-image BibRef

Lu, S.L.[Shi-Lin], Wang, Z.[Zilan], Li, L.[Leyang], Liu, Y.Z.[Yan-Zhu], Kong, A.W.K.[Adams Wai-Kin],
MACE: Mass Concept Erasure in Diffusion Models,
CVPR24(6430-6440)
IEEE DOI Code:
WWW Link. 2410
Codes, Text to image, Interference, Diffusion models, Generative AI, AI security, diffusion model, concept editing BibRef

Nam, J.[Jisu], Kim, H.[Heesu], Lee, D.[DongJae], Jin, S.[Siyoon], Kim, S.[Seungryong], Chang, S.[Seunggyu],
DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization,
CVPR24(8100-8110)
IEEE DOI 2410
Visualization, Computational modeling, Semantics, Noise reduction, Text to image, Diffusion models, Diffusion Models, Semantic Correspondence BibRef

Ham, C.[Cusuh], Fisher, M.[Matthew], Hays, J.[James], Kolkin, N.[Nicholas], Liu, Y.C.[Yu-Chen], Zhang, R.[Richard], Hinz, T.[Tobias],
Personalized Residuals for Concept-Driven Text-to-Image Generation,
CVPR24(8186-8195)
IEEE DOI 2410
Training, Measurement, Computational modeling, Text to image, Graphics processing units, Diffusion models, personalization, diffusion models BibRef

Phung, Q.[Quynh], Ge, S.W.[Song-Wei], Huang, J.B.[Jia-Bin],
Grounded Text-to-Image Synthesis with Attention Refocusing,
CVPR24(7932-7942)
IEEE DOI 2410
Visualization, Large language models, Computational modeling, Layout, Text to image, Benchmark testing, Diffusion models, grounded text-to-image BibRef

Nguyen, T.H.[Thuan Hoang], Tran, A.[Anh],
SwiftBrush: One-Step Text-to-Image Diffusion Model with Variational Score Distillation,
CVPR24(7807-7816)
IEEE DOI 2410
Training, Solid modeling, Text to image, Diffusion models, Neural radiance field, Data models BibRef

Cao, C.J.[Chen-Jie], Cai, Y.[Yunuo], Dong, Q.[Qiaole], Wang, Y.K.[Yi-Kai], Fu, Y.W.[Yan-Wei],
LeftRefill: Filling Right Canvas based on Left Reference through Generalized Text-to-Image Diffusion Model,
CVPR24(7705-7715)
IEEE DOI Code:
WWW Link. 2410
Adaptation models, Image synthesis, Text to image, Diffusion models, Filling, Diffusion Model, Image Inpainting BibRef

Mo, S.C.[Si-Cheng], Mu, F.Z.[Fang-Zhou], Lin, K.H.[Kuan Heng], Liu, Y.L.[Yan-Li], Guan, B.[Bochen], Li, Y.[Yin], Zhou, B.[Bolei],
FreeControl: Training-Free Spatial Control of Any Text-to-Image Diffusion Model with Any Condition,
CVPR24(7465-7475)
IEEE DOI Code:
WWW Link. 2410
Visualization, Text to image, Aerospace electronics, Diffusion models, Feature extraction, Controllable generation BibRef

Huang, M.Q.[Meng-Qi], Mao, Z.D.[Zhen-Dong], Liu, M.C.[Ming-Cong], He, Q.[Qian], Zhang, Y.D.[Yong-Dong],
RealCustom: Narrowing Real Text Word for Real-Time Open-Domain Text-to-Image Customization,
CVPR24(7476-7485)
IEEE DOI 2410
Training, Visualization, Adaptive systems, Limiting, Navigation, Text to image, text-to-image generation, diffusion models BibRef

Mahajan, S.[Shweta], Rahman, T.[Tanzila], Yi, K.M.[Kwang Moo], Sigal, L.[Leonid],
Prompting Hard or Hardly Prompting: Prompt Inversion for Text-to-Image Diffusion Models,
CVPR24(6808-6817)
IEEE DOI 2410
Vocabulary, Visualization, Image synthesis, Semantics, Text to image, Diffusion processes, Diffusion models BibRef

Zeng, Y.[Yu], Patel, V.M.[Vishal M.], Wang, H.C.[Hao-Chen], Huang, X.[Xun], Wang, T.C.[Ting-Chun], Liu, M.Y.[Ming-Yu], Balaji, Y.[Yogesh],
JeDi: Joint-Image Diffusion Models for Finetuning-Free Personalized Text-to-Image Generation,
CVPR24(6786-6795)
IEEE DOI 2410
Adaptation models, Computational modeling, Text to image, Benchmark testing, Diffusion models, image generation BibRef

Gong, B.[Biao], Huang, S.[Siteng], Feng, Y.T.[Yu-Tong], Zhang, S.W.[Shi-Wei], Li, Y.[Yuyuan], Liu, Y.[Yu],
Check, Locate, Rectify: A Training-Free Layout Calibration System for Text-to-Image Generation,
CVPR24(6624-6634)
IEEE DOI Code:
WWW Link. 2410
Image synthesis, Layout, Pipelines, Text to image, Benchmark testing, Diffusion models, Generators, text-to-image generation, training-free BibRef

Menon, S.[Sachit], Misra, I.[Ishan], Girdhar, R.[Rohit],
Generating Illustrated Instructions,
CVPR24(6274-6284)
IEEE DOI 2410
Measurement, Visualization, Large language models, Text to image, Diffusion models, diffusion, multimodal, text-to-image BibRef

Yang, J.Y.[Jing-Yuan], Feng, J.W.[Jia-Wei], Huang, H.[Hui],
EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models,
CVPR24(6358-6368)
IEEE DOI Code:
WWW Link. 2410
Measurement, Visualization, Image color analysis, Image synthesis, Semantics, Text to image BibRef

Yang, Y.J.[Yi-Jun], Gao, R.[Ruiyuan], Wang, X.[Xiaosen], Ho, T.Y.[Tsung-Yi], Xu, N.[Nan], Xu, Q.[Qiang],
MMA-Diffusion: MultiModal Attack on Diffusion Models,
CVPR24(7737-7746)
IEEE DOI Code:
WWW Link. 2410
Visualization, Filters, Current measurement, Computational modeling, Text to image, Diffusion models, Adversarial attack BibRef

Hedlin, E.[Eric], Sharma, G.[Gopal], Mahajan, S.[Shweta], He, X.Z.[Xing-Zhe], Isack, H.[Hossam], Kar, A.[Abhishek], Rhodin, H.[Helge], Tagliasacchi, A.[Andrea], Yi, K.M.[Kwang Moo],
Unsupervised Keypoints from Pretrained Diffusion Models,
CVPR24(22820-22830)
IEEE DOI 2410
Codes, Noise reduction, Neural networks, Text to image, Diffusion models, emergent understandings BibRef

Sato, T.[Takami], Yue, J.[Justin], Chen, N.[Nanze], Wang, N.F.[Ning-Fei], Chen, Q.A.[Qi Alfred],
Intriguing Properties of Diffusion Models: An Empirical Study of the Natural Attack Capability in Text-to-Image Generative Models,
CVPR24(24635-24644)
IEEE DOI 2410
Noise reduction, Text to image, Artificial neural networks, Visual systems, Predictive models, Diffusion models, Safety BibRef

Gandikota, K.V.[Kanchana Vaishnavi], Chandramouli, P.[Paramanand],
Text-Guided Explorable Image Super-Resolution,
CVPR24(25900-25911)
IEEE DOI 2410
Training, Degradation, Superresolution, Semantics, Text to image, Diffusion models, diffusion, text-to-image, super-resolution BibRef

Mo, W.[Wenyi], Zhang, T.Y.[Tian-Yu], Bai, Y.[Yalong], Su, B.[Bing], Wen, J.R.[Ji-Rong], Yang, Q.[Qing],
Dynamic Prompt Optimizing for Text-to-Image Generation,
CVPR24(26617-26626)
IEEE DOI 2410
Uniform resource locators, Training, Image synthesis, Semantics, Refining, Text to image, Reinforcement learning, Diffusion Model BibRef

Smith, J.S.[James Seale], Hsu, Y.C.[Yen-Chang], Kira, Z.[Zsolt], Shen, Y.L.[Yi-Lin], Jin, H.X.[Hong-Xia],
Continual Diffusion with STAMINA: STack-And-Mask INcremental Adapters,
WhatNext24(1744-1754)
IEEE DOI 2410
Training, Costs, Text to image, Benchmark testing, Diffusion models, text-to-image customization BibRef

Zhang, G.[Gong], Wang, K.[Kai], Xu, X.Q.[Xing-Qian], Wang, Z.Y.[Zhang-Yang], Shi, H.[Humphrey],
Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models,
WhatNext24(1755-1764)
IEEE DOI 2410
Adaptation models, Privacy, Accuracy, Computational modeling, Knowledge based systems, Text to image, Safety, text-to-image, concept forgetting BibRef

Tudosiu, P.D.[Petru-Daniel], Yang, Y.X.[Yong-Xin], Zhang, S.F.[Shi-Feng], Chen, F.[Fei], McDonagh, S.[Steven], Lampouras, G.[Gerasimos], Iacobacci, I.[Ignacio], Parisot, S.[Sarah],
MULAN: A Multi Layer Annotated Dataset for Controllable Text-to-Image Generation,
CVPR24(22413-22422)
IEEE DOI Code:
WWW Link. 2410
Training, Image segmentation, Annotations, Pipelines, Text to image, Image decomposition, Software, Dataset, Text-to-Image Generation, Diffusion Models BibRef

Wang, F.F.[Fei-Fei], Tan, Z.T.[Zhen-Tao], Wei, T.Y.[Tian-Yi], Wu, Y.[Yue], Huang, Q.D.[Qi-Dong],
SimAC: A Simple Anti-Customization Method for Protecting Face Privacy Against Text-to-Image Synthesis of Diffusion Models,
CVPR24(12047-12056)
IEEE DOI Code:
WWW Link. 2410
Training, Privacy, Adaptation models, Visualization, Frequency-domain analysis, Noise reduction, Text to image, face privacy BibRef

Pang, L.[Lianyu], Yin, J.[Jian], Xie, H.R.[Hao-Ran], Wang, Q.[Qiping], Li, Q.[Qing], Mao, X.D.[Xu-Dong],
Cross Initialization for Face Personalization of Text-to-Image Models,
CVPR24(8393-8403)
IEEE DOI Code:
WWW Link. 2410
Face recognition, Computational modeling, Text to image, Diffusion models, Surges, Image reconstruction BibRef

Xu, X.Q.[Xing-Qian], Guo, J.Y.[Jia-Yi], Wang, Z.Y.[Zhang-Yang], Huang, G.[Gao], Essa, I.[Irfan], Shi, H.[Humphrey],
Prompt-Free Diffusion: Taking 'Text' Out of Text-to-Image Diffusion Models,
CVPR24(8682-8692)
IEEE DOI 2410
Visualization, Pain, Image synthesis, Computational modeling, Semantics, Noise, Text to image, Generative Model, Image Editing, Text-to-Image BibRef

Qi, T.H.[Tian-Hao], Fang, S.C.[Shan-Cheng], Wu, Y.Z.[Yan-Ze], Xie, H.T.[Hong-Tao], Liu, J.W.[Jia-Wei], Chen, L.[Lang], He, Q.[Qian], Zhang, Y.D.[Yong-Dong],
DEADiff: An Efficient Stylization Diffusion Model with Disentangled Representations,
CVPR24(8693-8702)
IEEE DOI Code:
WWW Link. 2410
Learning systems, Visualization, Semantics, Text to image, Feature extraction, Diffusion models BibRef

Li, H.[Hang], Shen, C.Z.[Cheng-Zhi], Torr, P.[Philip], Tresp, V.[Volker], Gu, J.D.[Jin-Dong],
Self-Discovering Interpretable Diffusion Latent Directions for Responsible Text-to-Image Generation,
CVPR24(12006-12016)
IEEE DOI Code:
WWW Link. 2410
Ethics, Prevention and mitigation, Semantics, Text to image, Diffusion models, Vectors, Text-to-Image Generation, Explainability and Transparency BibRef

Li, H.[Hao], Zou, Y.[Yang], Wang, Y.[Ying], Majumder, O.[Orchid], Xie, Y.S.[Yu-Sheng], Manmatha, R., Swaminathan, A.[Ashwin], Tu, Z.W.[Zhuo-Wen], Ermon, S.[Stefano], Soatto, S.[Stefano],
On the Scalability of Diffusion-based Text-to-Image Generation,
CVPR24(9400-9409)
IEEE DOI 2410
Training, Costs, Systematics, Computational modeling, Scalability, Noise reduction, Text to image, diffusion models, text-to-image, Transformers BibRef

Guo, X.[Xiefan], Liu, J.L.[Jin-Lin], Cui, M.M.[Miao-Miao], Li, J.[Jiankai], Yang, H.Y.[Hong-Yu], Huang, D.[Di],
InitNO: Boosting Text-to-Image Diffusion Models via Initial Noise Optimization,
CVPR24(9380-9389)
IEEE DOI Code:
WWW Link. 2410
Navigation, Instruments, Noise, Pipelines, Text to image, Aerospace electronics BibRef

Shen, D.[Dazhong], Song, G.L.[Guang-Lu], Xue, Z.[Zeyue], Wang, F.Y.[Fu-Yun], Liu, Y.[Yu],
Rethinking the Spatial Inconsistency in Classifier-Free Diffusion Guidance,
CVPR24(9370-9379)
IEEE DOI Code:
WWW Link. 2410
Image quality, Training, Costs, Semantic segmentation, Semantics, Noise reduction, Text-to-Image Diffusion Models, Semantic Segmentation BibRef

Zhou, Y.F.[Yu-Fan], Zhang, R.[Ruiyi], Gu, J.X.[Jiu-Xiang], Sun, T.[Tong],
Customization Assistant for Text-to-image Generation,
CVPR24(9182-9191)
IEEE DOI 2410
Training, Large language models, Text to image, Diffusion models, Testing BibRef

Patel, M.[Maitreya], Kim, C.[Changhoon], Cheng, S.[Sheng], Baral, C.[Chitta], Yang, Y.Z.[Ye-Zhou],
ECLIPSE: A Resource-Efficient Text-to-Image Prior for Image Generations,
CVPR24(9069-9078)
IEEE DOI Code:
WWW Link. 2410
Training, Image coding, Image synthesis, Computational modeling, Text to image, Contrastive learning, Diffusion models, ECLIPSE BibRef

Meral, T.H.S.[Tuna Han Salih], Simsar, E.[Enis], Tombari, F.[Federico], Yanardag, P.[Pinar],
CONFORM: Contrast is All You Need For High-Fidelity Text-to-Image Diffusion Models,
CVPR24(9005-9014)
IEEE DOI 2410
Source coding, Computational modeling, Semantics, Text to image, Benchmark testing, Diffusion models, Semantic fidelity BibRef

Jiang, Z.Z.[Zeyin-Zi], Mao, C.J.[Chao-Jie], Pan, Y.L.[Yu-Lin], Han, Z.[Zhen], Zhang, J.F.[Jing-Feng],
SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing,
CVPR24(8995-9004)
IEEE DOI Code:
WWW Link. 2410
Training, Adaptation models, Tuners, Image synthesis, Text to image, Diffusion models, Diffusion model, Text-to-image generation, Efficient Tuning BibRef

Kim, C.[Changhoon], Min, K.[Kyle], Patel, M.[Maitreya], Cheng, S.[Sheng], Yang, Y.Z.[Ye-Zhou],
WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models,
CVPR24(8974-8983)
IEEE DOI Code:
WWW Link. 2410
Solid modeling, Computational modeling, Prevention and mitigation, Text to image, Modulation, Generative Model BibRef

Shirakawa, T.[Takahiro], Uchida, S.[Seiichi],
NoiseCollage: A Layout-Aware Text-to-Image Diffusion Model Based on Noise Cropping and Merging,
CVPR24(8921-8930)
IEEE DOI Code:
WWW Link. 2410
Image synthesis, Image edge detection, Noise, Layout, Noise reduction, Merging, Text to image, diffusion model, text-to-image generation BibRef

Kwon, G.[Gihyun], Jenni, S.[Simon], Li, D.Z.[Ding-Zeyu], Lee, J.Y.[Joon-Young], Ye, J.C.[Jong Chul], Heilbron, F.C.[Fabian Caba],
Concept Weaver: Enabling Multi-Concept Fusion in Text-to-Image Models,
CVPR24(8880-8889)
IEEE DOI 2410
Fuses, Semantics, Text to image, Diffusion models, Optimization, Text-to-image Model, Multi-concept BibRef

Sueyoshi, K.[Kota], Matsubara, T.[Takashi],
Predicated Diffusion: Predicate Logic-Based Attention Guidance for Text-to-Image Diffusion Models,
CVPR24(8651-8660)
IEEE DOI 2410
Image quality, Image synthesis, Natural languages, Layout, Text to image, Diffusion models, text-to-image generation, attention guidance BibRef

Kim, J.[Jimyeong], Park, J.[Jungwon], Rhee, W.[Wonjong],
Selectively Informative Description can Reduce Undesired Embedding Entanglements in Text-to-Image Personalization,
CVPR24(8312-8322)
IEEE DOI 2410
Reflection, Text-to-Image Generation, Text-to-Image Diffusion, Text-to-image Personalization BibRef

Koley, S.[Subhadeep], Bhunia, A.K.[Ayan Kumar], Sain, A.[Aneeshan], Chowdhury, P.N.[Pinaki Nath], Xiang, T.[Tao], Song, Y.Z.[Yi-Zhe],
Text-to-Image Diffusion Models are Great Sketch-Photo Matchmakers,
CVPR24(16826-16837)
IEEE DOI 2410
Visualization, Adaptation models, Shape, Pipelines, Image retrieval, Text to image, Benchmark testing BibRef

Zhao, L.[Lin], Zhao, T.C.[Tian-Chen], Lin, Z.[Zinan], Ning, X.F.[Xue-Fei], Dai, G.H.[Guo-Hao], Yang, H.Z.[Hua-Zhong], Wang, Y.[Yu],
FlashEval: Towards Fast and Accurate Evaluation of Text-to-Image Diffusion Generative Models,
CVPR24(16122-16131)
IEEE DOI Code:
WWW Link. 2410
Training, Schedules, Quantization (signal), Computational modeling, Text to image, Training data, Diffusion models BibRef

Liu, H.[Hanwen], Sun, Z.C.[Zhi-Cheng], Mu, Y.D.[Ya-Dong],
Countering Personalized Text-to-Image Generation with Influence Watermarks,
CVPR24(12257-12267)
IEEE DOI 2410
Training, Visualization, Computational modeling, Semantics, Noise, Text to image, Watermarking, diffusion models, watermarks BibRef

Azarian, K.[Kambiz], Das, D.[Debasmit], Hou, Q.Q.[Qi-Qi], Porikli, F.M.[Fatih M.],
Segmentation-Free Guidance for Text-to-Image Diffusion Models,
GCV24(7520-7529)
IEEE DOI 2410
Image segmentation, Costs, Image color analysis, Text to image, Focusing, Switches BibRef

Li, C.[Cheng], Qi, Y.[Yali], Zeng, Q.[Qingtao], Lu, L.[Likun],
Comparison of Image Generation methods based on Diffusion Models,
CVIDL23(1-4)
IEEE DOI 2403
Training, Deep learning, Learning systems, Image synthesis, Computational modeling, Diffusion models BibRef

Xu, Y.[Yanwu], Zhao, Y.[Yang], Xiao, Z.S.[Zhi-Sheng], Hou, T.B.[Ting-Bo],
UFOGen: You Forward Once Large Scale Text-to-Image Generation via Diffusion GANs,
CVPR24(8196-8206)
IEEE DOI 2410
Image synthesis, Computational modeling, Text to image, Propulsion, Diffusion models, Hybrid power systems, diffusion models, GANs BibRef

Huang, R.H.[Run-Hui], Han, J.H.[Jian-Hua], Lu, G.S.[Guan-Song], Liang, X.D.[Xiao-Dan], Zeng, Y.H.[Yi-Han], Zhang, W.[Wei], Xu, H.[Hang],
DiffDis: Empowering Generative Diffusion Model with Cross-Modal Discrimination Capability,
ICCV23(15667-15677)
IEEE DOI 2401
BibRef

Yang, X.Y.[Xing-Yi], Wang, X.C.[Xin-Chao],
Diffusion Model as Representation Learner,
ICCV23(18892-18903)
IEEE DOI Code:
WWW Link. 2401
BibRef

Nair, N.G.[Nithin Gopalakrishnan], Cherian, A.[Anoop], Lohit, S.[Suhas], Wang, Y.[Ye], Koike-Akino, T.[Toshiaki], Patel, V.M.[Vishal M.], Marks, T.K.[Tim K.],
Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis,
ICCV23(20793-20803)
IEEE DOI 2401
BibRef

Wang, Z.D.[Zhen-Dong], Bao, J.M.[Jian-Min], Zhou, W.G.[Wen-Gang], Wang, W.[Weilun], Hu, H.[Hezhen], Chen, H.[Hong], Li, H.Q.[Hou-Qiang],
DIRE for Diffusion-Generated Image Detection,
ICCV23(22388-22398)
IEEE DOI Code:
WWW Link. 2401
BibRef

Hong, S.[Susung], Lee, G.[Gyuseong], Jang, W.[Wooseok], Kim, S.[Seungryong],
Improving Sample Quality of Diffusion Models Using Self-Attention Guidance,
ICCV23(7428-7437)
IEEE DOI 2401
BibRef

Feng, B.T.[Berthy T.], Smith, J.[Jamie], Rubinstein, M.[Michael], Chang, H.[Huiwen], Bouman, K.L.[Katherine L.], Freeman, W.T.[William T.],
Score-Based Diffusion Models as Principled Priors for Inverse Imaging,
ICCV23(10486-10497)
IEEE DOI 2401
BibRef

Yang, B.B.[Bin-Bin], Luo, Y.[Yi], Chen, Z.L.[Zi-Liang], Wang, G.R.[Guang-Run], Liang, X.D.[Xiao-Dan], Lin, L.[Liang],
LAW-Diffusion: Complex Scene Generation by Diffusion with Layouts,
ICCV23(22612-22622)
IEEE DOI 2401
BibRef

Levi, E.[Elad], Brosh, E.[Eli], Mykhailych, M.[Mykola], Perez, M.[Meir],
DLT: Conditioned layout generation with Joint Discrete-Continuous Diffusion Layout Transformer,
ICCV23(2106-2115)
IEEE DOI Code:
WWW Link. 2401
BibRef

Couairon, G.[Guillaume], Careil, M.[Marlène], Cord, M.[Matthieu], Lathuilière, S.[Stéphane], Verbeek, J.[Jakob],
Zero-shot spatial layout conditioning for text-to-image diffusion models,
ICCV23(2174-2183)
IEEE DOI 2401
BibRef

Zhang, L.[Lvmin], Rao, A.[Anyi], Agrawala, M.[Maneesh],
Adding Conditional Control to Text-to-Image Diffusion Models,
ICCV23(3813-3824)
IEEE DOI 2401
Award, Marr Prize, ICCV. BibRef

Zhao, W.L.[Wen-Liang], Rao, Y.M.[Yong-Ming], Liu, Z.[Zuyan], Liu, B.[Benlin], Zhou, J.[Jie], Lu, J.W.[Ji-Wen],
Unleashing Text-to-Image Diffusion Models for Visual Perception,
ICCV23(5706-5716)
IEEE DOI Code:
WWW Link. 2401
BibRef

Wu, Q.C.[Qiu-Cheng], Liu, Y.J.[Yu-Jian], Zhao, H.[Handong], Bui, T.[Trung], Lin, Z.[Zhe], Zhang, Y.[Yang], Chang, S.Y.[Shi-Yu],
Harnessing the Spatial-Temporal Attention of Diffusion Models for High-Fidelity Text-to-Image Synthesis,
ICCV23(7732-7742)
IEEE DOI 2401
BibRef

Zhao, J.[Jing], Zheng, H.[Heliang], Wang, C.[Chaoyue], Lan, L.[Long], Yang, W.J.[Wen-Jing],
MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models,
ICCV23(22535-22545)
IEEE DOI Code:
WWW Link. 2401
BibRef

Kumari, N.[Nupur], Zhang, B.L.[Bing-Liang], Wang, S.Y.[Sheng-Yu], Shechtman, E.[Eli], Zhang, R.[Richard], Zhu, J.Y.[Jun-Yan],
Ablating Concepts in Text-to-Image Diffusion Models,
ICCV23(22634-22645)
IEEE DOI 2401
BibRef

Schwartz, I.[Idan], Snæbjarnarson, V.[Vésteinn], Chefer, H.[Hila], Belongie, S.[Serge], Wolf, L.[Lior], Benaim, S.[Sagie],
Discriminative Class Tokens for Text-to-Image Diffusion Models,
ICCV23(22668-22678)
IEEE DOI Code:
WWW Link. 2401
BibRef

Patashnik, O.[Or], Garibi, D.[Daniel], Azuri, I.[Idan], Averbuch-Elor, H.[Hadar], Cohen-Or, D.[Daniel],
Localizing Object-level Shape Variations with Text-to-Image Diffusion Models,
ICCV23(22994-23004)
IEEE DOI 2401
BibRef

Schramowski, P.[Patrick], Brack, M.[Manuel], Deiseroth, B.[Björn], Kersting, K.[Kristian],
Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models,
CVPR23(22522-22531)
IEEE DOI 2309
BibRef

Chen, C.[Chen], Liu, D.[Daochang], Ma, S.Q.[Si-Qi], Nepal, S.[Surya], Xu, C.[Chang],
Private Image Generation with Dual-Purpose Auxiliary Classifier,
CVPR23(20361-20370)
IEEE DOI 2309
BibRef

Zhang, Q.S.[Qin-Sheng], Song, J.M.[Jia-Ming], Huang, X.[Xun], Chen, Y.X.[Yong-Xin], Liu, M.Y.[Ming-Yu],
DiffCollage: Parallel Generation of Large Content with Diffusion Models,
CVPR23(10188-10198)
IEEE DOI 2309
BibRef

Phung, H.[Hao], Dao, Q.[Quan], Tran, A.[Anh],
Wavelet Diffusion Models are fast and scalable Image Generators,
CVPR23(10199-10208)
IEEE DOI 2309
BibRef

Kim, S.W.[Seung Wook], Brown, B.[Bradley], Yin, K.X.[Kang-Xue], Kreis, K.[Karsten], Schwarz, K.[Katja], Li, D.[Daiqing], Rombach, R.[Robin], Torralba, A.[Antonio], Fidler, S.[Sanja],
NeuralField-LDM: Scene Generation with Hierarchical Latent Diffusion Models,
CVPR23(8496-8506)
IEEE DOI 2309
BibRef

Zhu, Y.Z.[Yuan-Zhi], Li, Z.H.[Zhao-Hai], Wang, T.W.[Tian-Wei], He, M.C.[Meng-Chao], Yao, C.[Cong],
Conditional Text Image Generation with Diffusion Models,
CVPR23(14235-14244)
IEEE DOI 2309
BibRef

Zhou, Y.F.[Yu-Fan], Liu, B.C.[Bing-Chen], Zhu, Y.Z.[Yi-Zhe], Yang, X.[Xiao], Chen, C.Y.[Chang-You], Xu, J.H.[Jin-Hui],
Shifted Diffusion for Text-to-image Generation,
CVPR23(10157-10166)
IEEE DOI 2309
BibRef

Li, M.H.[Mu-Heng], Duan, Y.Q.[Yue-Qi], Zhou, J.[Jie], Lu, J.W.[Ji-Wen],
Diffusion-SDF: Text-to-Shape via Voxelized Diffusion,
CVPR23(12642-12651)
IEEE DOI 2309
BibRef

Chai, S.[Shang], Zhuang, L.S.[Lian-Sheng], Yan, F.Y.[Feng-Ying],
LayoutDM: Transformer-based Diffusion Model for Layout Generation,
CVPR23(18349-18358)
IEEE DOI 2309
BibRef

Wu, Q.C.[Qiu-Cheng], Liu, Y.J.[Yu-Jian], Zhao, H.[Handong], Kale, A.[Ajinkya], Bui, T.[Trung], Yu, T.[Tong], Lin, Z.[Zhe], Zhang, Y.[Yang], Chang, S.Y.[Shi-Yu],
Uncovering the Disentanglement Capability in Text-to-Image Diffusion Models,
CVPR23(1900-1910)
IEEE DOI 2309
BibRef

Jain, A.[Ajay], Xie, A.[Amber], Abbeel, P.[Pieter],
VectorFusion: Text-to-SVG by Abstracting Pixel-Based Diffusion Models,
CVPR23(1911-1920)
IEEE DOI 2309
BibRef

Kumari, N.[Nupur], Zhang, B.L.[Bing-Liang], Zhang, R.[Richard], Shechtman, E.[Eli], Zhu, J.Y.[Jun-Yan],
Multi-Concept Customization of Text-to-Image Diffusion,
CVPR23(1931-1941)
IEEE DOI 2309
BibRef

Hui, M.[Mude], Zhang, Z.Z.[Zhi-Zheng], Zhang, X.Y.[Xiao-Yi], Xie, W.X.[Wen-Xuan], Wang, Y.W.[Yu-Wang], Lu, Y.[Yan],
Unifying Layout Generation with a Decoupled Diffusion Model,
CVPR23(1942-1951)
IEEE DOI 2309
BibRef

Ruiz, N.[Nataniel], Li, Y.Z.[Yuan-Zhen], Jampani, V.[Varun], Pritch, Y.[Yael], Rubinstein, M.[Michael], Aberman, K.[Kfir],
DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation,
CVPR23(22500-22510)
IEEE DOI 2309
BibRef

Zheng, G.C.[Guang-Cong], Zhou, X.P.[Xian-Pan], Li, X.W.[Xue-Wei], Qi, Z.A.[Zhong-Ang], Shan, Y.[Ying], Li, X.[Xi],
LayoutDiffusion: Controllable Diffusion Model for Layout-to-Image Generation,
CVPR23(22490-22499)
IEEE DOI 2309
BibRef

Liu, X.H.[Xi-Hui], Park, D.H.[Dong Huk], Azadi, S.[Samaneh], Zhang, G.[Gong], Chopikyan, A.[Arman], Hu, Y.X.[Yu-Xiao], Shi, H.[Humphrey], Rohrbach, A.[Anna], Darrell, T.J.[Trevor J.],
More Control for Free! Image Synthesis with Semantic Diffusion Guidance,
WACV23(289-299)
IEEE DOI 2302
Image synthesis, Annotations, Image matching, Semantics, Noise reduction, Probabilistic logic, Vision + language and/or other modalities BibRef

Pan, Z.H.[Zhi-Hong], Zhou, X.[Xin], Tian, H.[Hao],
Arbitrary Style Guidance for Enhanced Diffusion-Based Text-to-Image Generation,
WACV23(4450-4460)
IEEE DOI 2302
Graphics, Training, Technological innovation, Adaptation models, Adaptive systems, Art, Navigation, Vision + language and/or other modalities BibRef

Gu, S.Y.[Shu-Yang], Chen, D.[Dong], Bao, J.M.[Jian-Min], Wen, F.[Fang], Zhang, B.[Bo], Chen, D.D.[Dong-Dong], Yuan, L.[Lu], Guo, B.N.[Bai-Ning],
Vector Quantized Diffusion Model for Text-to-Image Synthesis,
CVPR22(10686-10696)
IEEE DOI 2210
Image quality, Image resolution, Image synthesis, Computational modeling, Noise reduction, Vision+language BibRef

Jing, B.[Bowen], Corso, G.[Gabriele], Berlinghieri, R.[Renato], Jaakkola, T.[Tommi],
Subspace Diffusion Generative Models,
ECCV22(XXIII:274-289).
Springer DOI 2211
BibRef

Han, L.G.[Li-Gong], Li, Y.X.[Yin-Xiao], Zhang, H.[Han], Milanfar, P.[Peyman], Metaxas, D.N.[Dimitris N.], Yang, F.[Feng],
SVDiff: Compact Parameter Space for Diffusion Fine-Tuning,
ICCV23(7289-7300)
IEEE DOI 2401
BibRef

Nair, N.G.[Nithin Gopalakrishnan], Bandara, W.G.C.[Wele Gedara Chaminda], Patel, V.M.[Vishal M.],
Unite and Conquer: Plug and Play Multi-Modal Synthesis Using Diffusion Models,
CVPR23(6070-6079)
IEEE DOI 2309
BibRef

Zheng, G.C.[Guang-Cong], Li, S.M.[Sheng-Ming], Wang, H.[Hui], Yao, T.P.[Tai-Ping], Chen, Y.[Yang], Ding, S.H.[Shou-Hong], Li, X.[Xi],
Entropy-Driven Sampling and Training Scheme for Conditional Diffusion Generation,
ECCV22(XXII:754-769).
Springer DOI 2211
BibRef

Sehwag, V.[Vikash], Hazirbas, C.[Caner], Gordo, A.[Albert], Ozgenel, F.[Firat], Ferrer, C.C.[Cristian Canton],
Generating High Fidelity Data from Low-density Regions using Diffusion Models,
CVPR22(11482-11491)
IEEE DOI 2210
Manifolds, Computational modeling, Diffusion processes, Data models, Representation learning BibRef

Chapter on 3-D Object Description and Computation Techniques, Surfaces, Deformable, View Generation, Video Conferencing continues in
Vision Transformers for Image Generation and Image Synthesis.


Last update: Oct 6, 2025 at 14:07:43