mirror of
https://hf-mirror.com/QuantStack/Wan2.1_T2V_14B_FusionX_VACE
synced 2026-04-03 03:22:54 +08:00
Update README.md
@@ -16,6 +16,7 @@ license: apache-2.0
This is a merge of [Wan-AI/Wan2.1-VACE-14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B) and [vrgamedevgirl84/Wan14BT2VFusionX](https://huggingface.co/vrgamedevgirl84/Wan14BT2VFusioniX) to provide additional VACE compatibility.
The process involved extracting the VACE scopes and injecting them into the target models.
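The extract-and-inject step can be sketched as a state-dict merge. This is a minimal illustration, not the actual tooling used for this merge; the `"vace"` key prefix and the helper name are assumptions, and plain values stand in for weight tensors.

```python
# Hypothetical sketch of the extract-and-inject merge, assuming VACE-specific
# weights are identifiable by a key prefix (the real prefix and tooling are
# not documented here).
def inject_vace_scopes(vace_state_dict, target_state_dict, prefix="vace"):
    """Copy all VACE-scoped entries from the donor into the target model."""
    merged = dict(target_state_dict)  # leave the original target untouched
    for key, tensor in vace_state_dict.items():
        if key.startswith(prefix):
            merged[key] = tensor
    return merged

# Toy example with plain values standing in for weight tensors:
donor = {"vace_blocks.0.proj": 1.0, "blocks.0.attn": 2.0}
target = {"blocks.0.attn": 3.0}
merged = inject_vace_scopes(donor, target)
print(sorted(merged))  # ['blocks.0.attn', 'vace_blocks.0.proj']
```

Note that only the VACE-scoped keys are carried over; the target model's own weights (here `blocks.0.attn`) are left as-is.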
Model weights were directly converted to specific FP8 formats (E4M3FN and E5M2) using a custom ComfyUI node developed by lum3on, available at the [ComfyUI-ModelQuantizer](https://github.com/lum3on/ComfyUI-ModelQuantizer) GitHub repository.
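The two FP8 formats trade precision against range: E4M3FN spends bits on mantissa (3 bits, bias 7, no infinities), E5M2 on exponent (5 bits, bias 15, IEEE-style infinities). A pure-Python decoder sketch makes the difference concrete; this is an illustration of the formats themselves, not the quantizer node's implementation.

```python
import math

def decode_e4m3fn(byte):
    """Decode an FP8 E4M3FN code (bias 7, no infinities; 0x7F/0xFF are NaN)."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0x0F
    man = byte & 0x07
    if exp == 0x0F and man == 0x07:
        return math.nan
    if exp == 0:                           # subnormal range
        return sign * 2.0 ** -6 * (man / 8.0)
    return sign * 2.0 ** (exp - 7) * (1.0 + man / 8.0)

def decode_e5m2(byte):
    """Decode an FP8 E5M2 code (bias 15, IEEE-style infinities and NaNs)."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 2) & 0x1F
    man = byte & 0x03
    if exp == 0x1F:                        # inf (mantissa 0) or NaN
        return sign * math.inf if man == 0 else math.nan
    if exp == 0:                           # subnormal range
        return sign * 2.0 ** -14 * (man / 4.0)
    return sign * 2.0 ** (exp - 15) * (1.0 + man / 4.0)

# Largest finite value of each format:
print(decode_e4m3fn(0x7E))  # 448.0
print(decode_e5m2(0x7B))    # 57344.0
```

The wider dynamic range of E5M2 suits tensors with large outliers, while E4M3FN's extra mantissa bit gives finer resolution, which is why both variants are commonly offered side by side.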
## Usage