25.4.2.3.1 Font Generation, Font Synthesis, Multiple Fonts

Chapter Contents
Font Generation. Font Recognition. For more than just the font.
See also Text Generation, Text Synthesis, Text Placement.
See also Style Transfer.

Baluja, S.[Shumeet],
Learning typographic style: from discrimination to synthesis,
MVA(28), No. 5-6, August 2017, pp. 551-568.
WWW Link. 1708
BibRef

Li, G.Z.[Guan-Zhao], Zhang, J.W.[Jian-Wei], Chen, D.N.[Dan-Ni],
F2PNet: Font-to-painting translation by adversarial learning,
IET-IPR(14), No. 13, November 2020, pp. 3243-3253.
DOI Link 2012
For Chinese font images, when all their strokes are replaced by pattern elements such as flowers and birds, they become flower-bird character paintings. BibRef

Zhu, A., Lu, X., Bai, X., Uchida, S., Iwana, B.K., Xiong, S.,
Few-Shot Text Style Transfer via Deep Feature Similarity,
IP(29), 2020, pp. 6932-6946.
IEEE DOI 2007
Feature extraction, Rendering (computer graphics), Image color analysis, discriminative network BibRef

Pan, W.[Wei], Zhu, A.[Anna], Zhou, X.Y.[Xin-Yu], Iwana, B.K.[Brian Kenji], Li, S.L.[Shi-Lin],
Few shot font generation via transferring similarity guided global style and quantization local style,
ICCV23(19449-19459)
IEEE DOI Code:
WWW Link. 2401
BibRef

Qin, M.X.[Meng-Xi], Zhang, Z.[Ziying], Zhou, X.X.[Xiao-Xue],
Disentangled representation learning GANs for generalized and stable font fusion network,
IET-IPR(16), No. 2, 2022, pp. 393-406.
DOI Link 2201
Generation of fonts BibRef

Liu, X.[Xiyan], Meng, G.F.[Gao-Feng], Chang, J.L.[Jian-Long], Hu, R.G.[Rui-Guang], Xiang, S.M.[Shi-Ming], Pan, C.H.[Chun-Hong],
Decoupled Representation Learning for Character Glyph Synthesis,
MultMed(24), 2022, pp. 1787-1799.
IEEE DOI 2204
Font style transfer and content consistency. Task analysis, Generative adversarial networks, Topology, Standards, Decoding, Character glyph synthesis BibRef

Muhammad, A.U.H.[Ammar Ul Hassan], Lee, H.[Hyunsoo], Choi, J.[Jaeyoung],
Exploiting mixing regularization for truly unsupervised font synthesis,
PRL(169), 2023, pp. 35-42.
Elsevier DOI 2305
BibRef

Liu, Y.T.[Yi-Tian], Lian, Z.H.[Zhou-Hui],
FontTransformer: Few-shot high-resolution Chinese glyph image synthesis via stacked transformers,
PR(141), 2023, pp. 109593.
Elsevier DOI 2306
Font generation, Style transfer, Transformers BibRef

Park, S.[Song], Chun, S.[Sanghyuk], Cha, J.[Junbum], Lee, B.[Bado], Shim, H.J.[Hyun-Jung],
Few-Shot Font Generation With Weakly Supervised Localized Representations,
PAMI(46), No. 3, March 2024, pp. 1479-1495.
IEEE DOI Code:
WWW Link. 2402
Task analysis, Libraries, Feature extraction, Data models, Visualization, Training, Skeleton, Few-shot font generation, BibRef

Cha, J.[Junbum], Chun, S.[Sanghyuk], Lee, G.[Gayoung], Lee, B.[Bado], Kim, S.[Seonghyeon], Lee, H.[Hwalsuk],
Few-shot Compositional Font Generation with Dual Memory,
ECCV20(XIX:735-751).
Springer DOI 2011
BibRef

Chen, X.[Xu], Wu, L.[Lei], Su, Y.L.[Yong-Liang], Meng, L.[Lei], Meng, X.X.[Xiang-Xu],
Font transformer for few-shot font generation,
CVIU(245), 2024, pp. 104043.
Elsevier DOI 2406
Font generation, Transformer, Few-shot learning, Image generation BibRef

Zhao, M.[Minda], Qi, X.Q.[Xing-Qun], Hu, Z.P.[Zhi-Peng], Li, L.[Lincheng], Zhang, Y.Q.[Yong-Qiang], Huang, Z.[Zi], Yu, X.[Xin],
Calligraphy Font Generation via Explicitly Modeling Location-Aware Glyph Component Deformations,
MultMed(26), 2024, pp. 5939-5950.
IEEE DOI 2404
Deformation, Task analysis, Feature extraction, Deformable models, Decoding, Training, Standards, Generative adversarial networks, image transformation BibRef

Zhao, H.H.[Hui-Huang], Ji, T.L.[Tian-Le], Rosin, P.L.[Paul L.], Lai, Y.K.[Yu-Kun], Meng, W.L.[Wei-Liang], Wang, Y.N.[Yao-Nan],
Cross-lingual font style transfer with full-domain convolutional attention,
PR(155), 2024, pp. 110709.
Elsevier DOI Code:
WWW Link. 2408
Cross-lingual, Full-domain convolutional attention, Multi-layer perceptual discriminator, Font style transfer BibRef

He, X.[Xiao], Zhu, M.R.[Ming-Rui], Wang, N.N.[Nan-Nan], Gao, X.B.[Xin-Bo],
Few-Shot Font Generation by Learning Style Difference and Similarity,
CirSysVideo(34), No. 9, September 2024, pp. 8013-8025.
IEEE DOI 2410
Task analysis, Self-supervised learning, Feature extraction, Dictionaries, Codes, Training, Libraries, Image-to-image translation, few-shot learning BibRef

He, H.B.[Hai-Bin], Chen, X.Y.[Xin-Yuan], Wang, C.Y.[Chao-Yue], Liu, J.[Juhua], Du, B.[Bo], Tao, D.C.[Da-Cheng], Qiao, Y.[Yu],
Diff-Font: Diffusion Model for Robust One-Shot Font Generation,
IJCV(132), No. 11, November 2024, pp. 5372-5386.
Springer DOI 2411
BibRef

Zeng, J.S.[Jin-Shan], Yuan, Y.Y.[Yi-Yang], Wang, X.J.[Xi-Jia], Zhang, Y.[Yan], Wang, Y.F.[Ye-Fei],
Cross-lingual font generation via patch-level style contrastive learning and relative position awareness,
PR(169), 2026, pp. 111937.
Elsevier DOI Code:
WWW Link. 2509
Few-shot font generation, Cross-lingual font generation, Zero-shot learning, Contrastive learning, Relative position BibRef

Lu, X.B.[Xiong-Bo], Chen, Y.X.[Ya-Xiong], Rong, Y.[Yi], Xiong, S.W.[Sheng-Wu],
ArtGlyphDiffuser: Text-driven artistic glyph generation via Style-to-CLIP Projection and Multi-Level Controlled diffusion,
PR(171), 2026, pp. 112172.
Elsevier DOI Code:
WWW Link. 2511
Artistic glyph generation, Cross-modal feature fusion, Text-driven image generation, Multi-level control BibRef


Thamizharasan, V.[Vikas], Liu, D.[Difan], Agarwal, S.[Shantanu], Fisher, M.[Matthew], Gharbi, M.[Michaël], Wang, O.[Oliver], Jacobson, A.[Alec], Kalogerakis, E.[Evangelos],
VecFusion: Vector Font Generation with Diffusion,
CVPR24(7943-7952)
IEEE DOI 2410
Geometry, Shape, Computational modeling, Predictive models, Diffusion models, few-shot font style transfer BibRef

Fu, B.[Bin], Yu, F.[Fanghua], Liu, A.[Anran], Wang, Z.X.[Zi-Xuan], Wen, J.[Jie], He, J.J.[Jun-Jun], Qiao, Y.[Yu],
Generate Like Experts: Multi-Stage Font Generation by Incorporating Font Transfer Process into Diffusion Models,
CVPR24(6892-6901)
IEEE DOI Code:
WWW Link. 2410
Costs, Noise, Diffusion processes, Transforms, Manuals, Diffusion models, Generative adversarial networks, Probabilistic Generative Model BibRef

Park, S.[Seonmi], Bae, I.[Inhwan], Shin, S.H.[Seung-Hyun], Jeon, H.G.[Hae-Gon],
Kinetic Typography Diffusion Model,
ECCV24(XXXIV: 166-185).
Springer DOI 2412
BibRef

Ma, J.[Jing], Xiang, X.[Xiang], He, Y.[Yan],
Masking Cascaded Self-attentions for Few-shot Font-generation Transformer,
ACCV24(V: 256-272).
Springer DOI 2412
BibRef

Mu, X.Z.[Xin-Zhi], Chen, L.[Li], Chen, B.[Bohan], Gu, S.Y.[Shu-Yang], Bao, J.M.[Jian-Min], Chen, D.[Dong], Li, J.[Ji], Yuan, Y.H.[Yu-Hui],
Fontstudio: Shape-adaptive Diffusion Model for Coherent and Consistent Font Effect Generation,
ECCV24(LVIII: 305-322).
Springer DOI 2412
BibRef

Fu, B.[Bin], He, J.J.[Jun-Jun], Wang, J.J.[Jian-Jun], Qiao, Y.[Yu],
Neural Transformation Fields for Arbitrary-Styled Font Generation,
CVPR23(22438-22447)
IEEE DOI 2309
BibRef

Wang, C.[Chi], Zhou, M.[Min], Ge, T.[Tiezheng], Jiang, Y.N.[Yu-Ning], Bao, H.J.[Hu-Jun], Xu, W.W.[Wei-Wei],
CF-Font: Content Fusion for Few-Shot Font Generation,
CVPR23(1858-1867)
IEEE DOI 2309
BibRef

Xia, Z.Q.[Ze-Qing], Xiong, B.[Bojun], Lian, Z.H.[Zhou-Hui],
VecFontSDF: Learning to Reconstruct and Synthesize High-Quality Vector Fonts via Signed Distance Functions,
CVPR23(1848-1857)
IEEE DOI 2309
BibRef

Liu, Y.T.[Ying-Tian], Zhang, Z.F.[Zhi-Fei], Guo, Y.C.[Yuan-Chen], Fisher, M.[Matthew], Wang, Z.W.[Zhao-Wen], Zhang, S.H.[Song-Hai],
DualVector: Unsupervised Vector Font Synthesis with Dual-Part Representation,
CVPR23(14193-14202)
IEEE DOI 2309
BibRef

Wang, Y.Q.[Yu-Qing], Wang, Y.Z.[Yi-Zhi], Yu, L.H.[Long-Hui], Zhu, Y.S.[Yue-Sheng], Lian, Z.H.[Zhou-Hui],
DeepVecFont-v2: Exploiting Transformers to Synthesize Vector Fonts with Higher Quality,
CVPR23(18320-18328)
IEEE DOI 2309
BibRef

Matsuda, S.[Seiya], Kimura, A.[Akisato], Uchida, S.[Seiichi],
Font Generation with Missing Impression Labels,
ICPR22(1400-1406)
IEEE DOI 2212
Training, Image coding, Codes, Generative adversarial networks, Data models BibRef

Aoki, H.[Haruka], Aizawa, K.[Kiyoharu],
SVG Vector Font Generation for Chinese Characters with Transformer,
ICIP22(646-650)
IEEE DOI 2211
Shape, Network architecture, Transformers, Rendering (computer graphics), Vector font generation, Transformer BibRef

Kong, Y.X.[Yu-Xin], Luo, C.J.[Can-Jie], Ma, W.H.[Wei-Hong], Zhu, Q.Y.[Qi-Yuan], Zhu, S.G.[Sheng-Gao], Yuan, N.[Nicholas], Jin, L.W.[Lian-Wen],
Look Closer to Supervise Better: One-Shot Font Generation via Component-Based Discriminator,
CVPR22(13472-13481)
IEEE DOI 2210
Couplings, Generators, Adversarial machine learning, Pattern recognition, Complexity theory, Low-level vision BibRef

Tang, L.C.[Li-Cheng], Cai, Y.Y.[Yi-Yang], Liu, J.M.[Jia-Ming], Hong, Z.B.[Zhi-Bin], Gong, M.M.[Ming-Ming], Fan, M.[Minhu], Han, J.Y.[Jun-Yu], Liu, J.[Jingtuo], Ding, E.[Errui], Wang, J.D.[Jing-Dong],
Few-Shot Font Generation by Learning Fine-Grained Local Styles,
CVPR22(7885-7894)
IEEE DOI 2210
Costs, Aggregates, Computational modeling, Pipelines, Libraries, Data models, Image and video synthesis and generation, Transfer/low-shot/long-tail learning BibRef

Liu, W.[Wei], Liu, F.[Fangyue], Ding, F.[Fei], He, Q.[Qian], Yi, Z.L.[Zi-Li],
XMP-Font: Self-Supervised Cross-Modality Pre-training for Few-Shot Font Generation,
CVPR22(7895-7904)
IEEE DOI 2210
Representation learning, Image resolution, Correlation, Machine vision, Transformers, Libraries, Vision applications and systems BibRef

Xie, Y.C.[Yang-Chen], Chen, X.Y.[Xin-Yuan], Sun, L.[Li], Lu, Y.[Yue],
DG-Font: Deformable Generative Networks for Unsupervised Font Generation,
CVPR21(5126-5136)
IEEE DOI 2111
Deformable models, Convolution, Image color analysis, Computational modeling, Writing BibRef

Wen, C.[Chuan], Pan, Y.J.[Yu-Jie], Chang, J.[Jie], Zhang, Y.[Ya], Chen, S.H.[Si-Heng], Wang, Y.F.[Yan-Feng], Han, M.[Mei], Tian, Q.[Qi],
Handwritten Chinese Font Generation with Collaborative Stroke Refinement,
WACV21(3881-3890)
IEEE DOI 2106
Training, Adaptation models, Computational modeling, Collaboration, Character generation BibRef

Li, C.H.[Chen-Hao], Taniguchi, Y.[Yuta], Lu, M.[Min], Konomi, S.[Shin'ichi],
Few-shot Font Style Transfer between Different Languages,
WACV21(433-442)
IEEE DOI 2106
Training, Computational modeling, Libraries, Task analysis BibRef

Park, S.[Song], Chun, S.[Sanghyuk], Cha, J.[Junbum], Lee, B.[Bado], Shim, H.J.[Hyun-Jung],
Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts,
ICCV21(13880-13889)
IEEE DOI 2203
Training, Head, Codes, Feature extraction, Image and video synthesis, Neural generative models, Vision applications and systems BibRef

Aoki, H.[Haruka], Tsubota, K.[Koki], Ikuta, H.[Hikaru], Aizawa, K.[Kiyoharu],
Few-Shot Font Generation with Deep Metric Learning,
ICPR21(8539-8546)
IEEE DOI 2105
Measurement, Visualization, Image color analysis, Feature extraction, Skeleton, Pattern recognition, Task analysis BibRef

Saito, J.[Junki], Nakamura, S.[Satoshi],
Fontender: Interactive Japanese Text Design with Dynamic Font Fusion Method for Comics,
MMMod19(II:554-559).
Springer DOI 1901
BibRef

Chapter on OCR, Document Analysis and Character Recognition Systems continues in
Language Recognition, Multi-Language Documents.


Last update: Nov 26, 2025 at 20:24:09