Mirror of https://hf-mirror.com/QuantStack/Wan2.1_T2V_14B_FusionX_VACE (synced 2026-04-03 03:22:54 +08:00)
---
base_model:
- Wan-AI/Wan2.1-VACE-14B
- vrgamedevgirl84/Wan14BT2VFusioniX
base_model_relation: merge
tags:
- text-to-video
- image-to-video
- video-to-video
- merge
language:
- en
license: apache-2.0
---
This is a merge of the VACE scopes from [Wan-AI/Wan2.1-VACE-14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B) into [vrgamedevgirl84/Wan14BT2VFusioniX](https://huggingface.co/vrgamedevgirl84/Wan14BT2VFusioniX).
The merge process involved extracting the VACE scopes and injecting them into the target model.
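Conceptually, the scope injection amounts to copying the VACE-scoped tensors from one state dict into the other. The sketch below illustrates the idea with toy dicts; the `vace` key prefix and the helper are illustrative assumptions, not the actual merge script (real checkpoints would be loaded with `safetensors.torch.load_file`):

```python
def inject_scopes(target_sd, source_sd, prefix="vace"):
    """Return a copy of target_sd with all prefix-scoped entries of source_sd added."""
    merged = dict(target_sd)
    for key, tensor in source_sd.items():
        if key.split(".")[0].startswith(prefix):
            merged[key] = tensor
    return merged

# Toy state dicts standing in for the real checkpoints.
fusionx = {"blocks.0.attn.weight": "W0", "blocks.1.ffn.weight": "W1"}
vace = {"vace_blocks.0.proj.weight": "V0", "blocks.0.attn.weight": "Wv"}

merged = inject_scopes(fusionx, vace)
# Non-VACE keys keep the FusionX weights; vace_blocks.* keys are injected.
print(sorted(merged))
```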
- The merged model weights were then converted to specific FP8 formats (E4M3FN and E5M2) using the ComfyUI custom node [ComfyUI-ModelQuantizer](https://github.com/lum3on/ComfyUI-ModelQuantizer) by [lum3on](https://github.com/lum3on).
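E4M3FN and E5M2 trade precision against dynamic range: both are 8-bit floats, but E4M3 spends more bits on the mantissa (finer steps), while E5M2 spends more on the exponent (wider range). A minimal pure-Python sketch of the rounding grid, handling normal numbers only (real converters also deal with subnormals, NaN, and saturation):

```python
import math

def quantize_fp8(x, exp_bits, man_bits, bias):
    """Round x to the nearest value on a simplified FP8 grid.

    Sketch only: covers in-range normal numbers, ignoring subnormals,
    NaN handling, and saturation to the format's maximum value.
    """
    if x == 0:
        return 0.0
    sign = math.copysign(1.0, x)
    _, e = math.frexp(abs(x))  # abs(x) = m * 2**e with 0.5 <= m < 1
    e = max(min(e - 1, (2**exp_bits - 2) - bias), 1 - bias)  # clamp exponent
    step = 2.0 ** (e - man_bits)  # spacing between representable values
    return sign * round(abs(x) / step) * step

# E4M3 (bias 7) has finer steps near 1.0; E5M2 (bias 15) is coarser there
# but covers a wider exponent range.
print(quantize_fp8(1.1, 4, 3, 7))   # -> 1.125 (step 1/8 around 1.0)
print(quantize_fp8(1.1, 5, 2, 15))  # -> 1.0   (step 1/4 around 1.0)
```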
## Usage
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the WanVaceToVideo node. Place the required model(s) in the following folders:
| Type         | Name                     | Location                          | Download |
| ------------ | ------------------------ | --------------------------------- | -------- |
| Main Model   | Wan-14B-T2V-FusionX-VACE | `ComfyUI/models/diffusion_models` | Safetensors (this repo) |
| Text Encoder | umt5-xxl-encoder         | `ComfyUI/models/text_encoders`    | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
| VAE          | Wan2_1_VAE_bf16          | `ComfyUI/models/vae`              | [Safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors) |
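As a sketch, the folder layout can be prepared as follows. The `ComfyUI` path and the download call are assumptions: point the path at your own ComfyUI checkout, and check each repository's file listing for the exact safetensors filenames before downloading.

```python
from pathlib import Path

# Create the model folders ComfyUI expects (adjust the base path to your install).
comfyui_dir = Path("ComfyUI")

for sub in ("diffusion_models", "text_encoders", "vae"):
    (comfyui_dir / "models" / sub).mkdir(parents=True, exist_ok=True)

# Files can then be fetched with huggingface_hub, e.g. (filename shown is
# taken from the VAE row above; verify names in each repo's file listing):
# from huggingface_hub import hf_hub_download
# hf_hub_download("Kijai/WanVideo_comfy", "Wan2_1_VAE_bf16.safetensors",
#                 local_dir=comfyui_dir / "models" / "vae")
```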
[**ComfyUI example workflow**](https://docs.comfy.org/tutorials/video/wan/vace)
### Notes
*All original licenses and restrictions from the base models still apply.*
## Reference
- For more information about the GGUF-quantized versions, refer to [QuantStack/Wan-14B-T2V-FusionX-VACE-GGUF](https://huggingface.co/QuantStack/Wan-14B-T2V-FusionX-VACE-GGUF).
- For an overview of the Safetensors format, see the [Safetensors documentation](https://huggingface.co/docs/safetensors/index).