mirror of
https://hf-mirror.com/QuantStack/Wan2.1_T2V_14B_FusionX_VACE
synced 2026-04-02 19:12:56 +08:00
Update README.md
@@ -25,7 +25,7 @@ The model files can be used in [ComfyUI](https://github.com/comfyanonymous/Comfy
 
 | Type         | Name                          | Location                          | Download                |
 | ------------ | ----------------------------- | --------------------------------- | ----------------------- |
-| Main Model   | Wan2.1-14B-T2V-FusionX-VACE   | `ComfyUI/models/diffusion_models` | Safetensors (this repo) |
+| Main Model   | Wan2.1_T2V_14B_FusionX_VACE   | `ComfyUI/models/diffusion_models` | Safetensors (this repo) |
 | Text Encoder | umt5-xxl-encoder              | `ComfyUI/models/text_encoders`    | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
 | VAE          | Wan2_1_VAE_bf16               | `ComfyUI/models/vae`              | [Safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors) |
 
@@ -37,5 +37,5 @@ The model files can be used in [ComfyUI](https://github.com/comfyanonymous/Comfy
 
 ## Reference
 
-- For more information about the GGUF-quantized versions, refer to [QuantStack/Wan2.1-14B-T2V-FusionX-VACE-GGUF](https://huggingface.co/QuantStack/Wan2.1-14B-T2V-FusionX-VACE-GGUF).
+- For more information about the GGUF-quantized versions, refer to [QuantStack/Wan2.1_T2V_14B_FusionX_VACE-GGUF](https://huggingface.co/QuantStack/Wan2.1_T2V_14B_FusionX_VACE-GGUF).
 - For an overview of the Safetensors format, see the [Safetensors documentation](https://huggingface.co/docs/safetensors/index).
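The table in the diff above names the ComfyUI folder each model file belongs in. As a minimal sketch of preparing those locations (the folder paths come from the table; the position of the `ComfyUI` root is an assumption about your install, so adjust it as needed):

```shell
# Create the model folders named in the table, relative to the ComfyUI root.
# Download each file from the linked repositories into its matching folder.
mkdir -p ComfyUI/models/diffusion_models \
         ComfyUI/models/text_encoders \
         ComfyUI/models/vae
```

ComfyUI scans these folders at startup, so the files only need to be placed there before launching.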