This repository contains V2 of the contextual LoRA trained for black-forest-labs/FLUX.2-klein-9B. It generates 1280×1280 font atlases from a single "Aa" reference image.
Update V2: Fixed dataset generation issues, increased resolution to 1280px, and improved vectorization scripts.
- LoRA weights: `Ref2FontV2.safetensors`
- ComfyUI workflow: `Example Workflow/` (see the notes inside the workflow nodes)
- Examples: `Example/` (input images + generated atlases)
- Post-processing scripts: `flux_pipeline.py`, `flux_grid_to_ttf.py`, `flux_upscale.py`
Disclaimer: it works well, but not perfectly. Expect occasional artifacts.
The post-processing scripts require Python 3.10+ and these packages:
numpy
pillow
fonttools
scikit-image
tqdm
For the `--no-upscale` workflow (recommended), these packages are enough.
`flux_upscale.py` is currently experimental and may not improve quality yet.
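The same packages can also be installed directly with pip (equivalent to the repository's `requirements.txt`, assuming it lists exactly the packages above):

```
pip install numpy pillow fonttools scikit-image tqdm
```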
```
git clone https://github.com/SnJake/Ref2Font.git
cd Ref2Font

# from the repo root
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```

The workflow is in `Example Workflow/`. It already contains detailed notes inside the nodes.
- Base model (FLUX.2 Klein 9B):
https://huggingface.co/black-forest-labs/FLUX.2-klein-base-9B/blob/main/flux-2-klein-base-9b.safetensors
Place in: ComfyUI/models/diffusion_models
- Text encoder (Qwen):
https://huggingface.co/Comfy-Org/vae-text-encorder-for-flux-klein-9b/blob/main/split_files/text_encoders/qwen_3_8b.safetensors
Place in: ComfyUI/models/text_encoders
- VAE:
https://huggingface.co/Comfy-Org/vae-text-encorder-for-flux-klein-9b/blob/main/split_files/vae/flux2-vae.safetensors
Place in: ComfyUI/models/vae
Download the LoRA (V2): `Ref2FontV2.safetensors` from this repository, or from CivitAI.
Place in: ComfyUI/models/loras
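For reference, after downloading everything the ComfyUI models folder should look roughly like this (filenames as in the links above):

```
ComfyUI/models/
├── diffusion_models/flux-2-klein-base-9b.safetensors
├── text_encoders/qwen_3_8b.safetensors
├── vae/flux2-vae.safetensors
└── loras/Ref2FontV2.safetensors
```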
Input image requirements:
- Strict black & white only (no gray, no shadows, no volume)
- 1280×1280 (recommended) or 1024×1024
- Follow the examples in `Example/`
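If your reference image is not already pure black & white, a small helper like the one below (not part of this repository; file and function names are just placeholders) can threshold and resize it before it goes into the workflow:

```python
from PIL import Image

def prepare_reference(src_path: str, dst_path: str, size: int = 1280) -> None:
    """Force a (square) reference image to pure black & white at the target size."""
    img = Image.open(src_path).convert("L")           # drop color / alpha
    img = img.resize((size, size), Image.LANCZOS)     # 1280x1280 recommended
    img = img.point(lambda p: 255 if p > 127 else 0)  # hard threshold: no gray
    img.convert("1").save(dst_path)                   # save as 1-bit black/white

prepare_reference("Aa_reference.png", "Aa_reference_bw.png")
```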
After you generate the atlas, use the pipeline script to convert it into a TTF font. Two example invocations (Windows `cmd` syntax, note the `^` line continuations) are shown below.
```
python flux_pipeline.py ^
--input "path\to\your_atlas.png" ^
--output-dir "output\folder" ^
--no-upscale ^
--use-grid ^
--simplify 0.5 ^
--canvas 1280 ^
--contour-level 0.5 ^
--trace-scale 4 ^
--trace-blur 1.0 ^
--smooth-iters 2 ^
--baseline-mode auto ^
--keep-components 4 ^
--min-component-area 3 ^
--component-center-bias 0.65 ^
--cell-bleed 0.4 ^
--cell-bleed-max 10 ^
--core-overlap-min 0.35 ^
--no-auto-invert
```

A second example with alternative component-filtering and cell-bleed settings:

```
python flux_pipeline.py ^
--input "path\to\your_atlas.png" ^
--output-dir "output\folder" ^
--no-upscale ^
--use-grid ^
--simplify 0.5 ^
--canvas 1280 ^
--contour-level 0.5 ^
--trace-scale 4 ^
--trace-blur 1.0 ^
--smooth-iters 2 ^
--baseline-mode auto ^
--keep-components 3 ^
--min-component-area 10 ^
--component-center-bias 0.35 ^
--cell-bleed 0.12 ^
--cell-bleed-max 32 ^
--no-auto-invert
```

Quick start:
- Download the base models (see the links above) and place them in the ComfyUI folders.
- Download the LoRA and put it in ComfyUI/models/loras.
- Create the input image (1280×1280 preferred, pure black/white).
- Run the ComfyUI workflow (`Example Workflow/`) and generate the atlas.
  Prompt: Generate letters and symbols "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!?.,;:-" in the style of the letters given to you as a reference.
- Create and activate a venv, then install dependencies.
- Run `flux_pipeline.py` with your atlas path to generate the TTF (see the example invocations above; a Linux/macOS variant follows below).
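On Linux/macOS the same pipeline call works with `\` line continuations and forward-slash paths; for example, the first preset above becomes:

```
python flux_pipeline.py \
  --input "path/to/your_atlas.png" \
  --output-dir "output/folder" \
  --no-upscale \
  --use-grid \
  --simplify 0.5 \
  --canvas 1280 \
  --contour-level 0.5 \
  --trace-scale 4 \
  --trace-blur 1.0 \
  --smooth-iters 2 \
  --baseline-mode auto \
  --keep-components 4 \
  --min-component-area 3 \
  --component-center-bias 0.65 \
  --cell-bleed 0.4 \
  --cell-bleed-max 10 \
  --core-overlap-min 0.35 \
  --no-auto-invert
```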
License: MIT