mirror of
https://hf-mirror.com/QuantStack/Wan2.1_T2V_14B_FusionX_VACE
synced 2026-04-02 11:02:56 +08:00
f3211ccece464f2617810bafec6519a18278f2ea
| base_model | base_model_relation | tags | language | license |
|---|---|---|---|---|
| Wan-AI/Wan2.1-VACE-14B, vrgamedevgirl84/Wan14BT2VFusionX | merge | | | apache-2.0 |
This is a merge of Wan-AI/Wan2.1-VACE-14B and vrgamedevgirl84/Wan14BT2VFusionX, providing additional VACE compatibility.
The process involved extracting the VACE scopes and injecting them into the target models. The merged weights were then converted to specific FP8 formats (E4M3FN and E5M2) using a custom ComfyUI node developed by lum3on, available in the ComfyUI-ModelQuantizer GitHub repository.
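To give a feel for what the E4M3FN conversion does to each weight value, here is a minimal pure-Python sketch of E4M3FN rounding (1 sign, 4 exponent, 3 mantissa bits; max finite value 448). It is an illustration only, not the actual quantizer: in practice the conversion is done with the ComfyUI-ModelQuantizer node or PyTorch's native `torch.float8_e4m3fn` dtype.

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest value representable in FP8 E4M3FN
    (1 sign bit, 4 exponent bits, 3 mantissa bits; exponent bias 7).

    Illustrative sketch only; real conversions use hardware/library
    float8 dtypes rather than per-scalar Python math.
    """
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    a = abs(x)
    a = min(a, 448.0)           # saturate at the E4M3FN max finite value
    e = math.floor(math.log2(a))
    e = max(e, -6)              # subnormal floor: smallest normal exponent is -6
    step = 2.0 ** (e - 3)       # 3 mantissa bits -> 8 representable steps per binade
    return sign * round(a / step) * step
```

For example, `quantize_e4m3(0.3)` snaps to the nearest representable value, `0.3125`; values above 448 saturate to 448. E5M2 trades one mantissa bit for an extra exponent bit, giving more dynamic range at lower precision.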
## Usage
The model files can be used in ComfyUI with the WanVaceToVideo node. Place the required model(s) in the following folders:
| Type | Name | Location | Download |
|---|---|---|---|
| Main Model | Wan-14B-T2V-FusionX-VACE | ComfyUI/models/diffusion_models | Safetensors (this repo) |
| Text Encoder | umt5-xxl-encoder | ComfyUI/models/text_encoders | Safetensors / GGUF |
| VAE | Wan2_1_VAE_bf16 | ComfyUI/models/vae | Safetensors |
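Assuming a standard ComfyUI checkout, the folder layout from the table above can be prepared like this (the model files themselves must then be downloaded into these folders; exact filenames depend on which variant you choose):

```shell
# Create the model folders ComfyUI expects, matching the table above
mkdir -p ComfyUI/models/diffusion_models   # main model (this repo)
mkdir -p ComfyUI/models/text_encoders      # umt5-xxl-encoder
mkdir -p ComfyUI/models/vae                # Wan2_1_VAE_bf16
```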
## Notes
All original licenses and restrictions from the base models still apply.
## Reference
- For more information about the GGUF-quantized versions, refer to QuantStack/Wan-14B-T2V-FusionX-VACE-GGUF, where the quantization process is explained.
- For an overview of the Safetensors format, see the Safetensors documentation.