diff --git a/README.md b/README.md
index f024d0a..f7017a9 100644
--- a/README.md
+++ b/README.md
@@ -13,10 +13,11 @@ language:
 license: apache-2.0
 ---
 
-This is a merge of [Wan-AI/Wan2.1-VACE-14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B) and [vrgamedevgirl84/Wan14BT2VFusionX](https://huggingface.co/vrgamedevgirl84/Wan14BT2VFusioniX) to provide additional VACE compatibility.
+This is a merge of the VACE scopes from [Wan-AI/Wan2.1-VACE-14B](https://huggingface.co/Wan-AI/Wan2.1-VACE-14B) into [vrgamedevgirl84/Wan14BT2VFusionX](https://huggingface.co/vrgamedevgirl84/Wan14BT2VFusioniX). The process involved extracting the VACE scopes and injecting them into the target model.
 
-FP8 model weight was then converted to specific FP8 formats (E4M3FN and E5M2) using a custom ComfyUI node developed by lum3on, available at the [ComfyUI-ModelQuantizer](https://github.com/lum3on/ComfyUI-ModelQuantizer) GitHub repository.
+
+- The merged weights were then converted to two FP8 formats (E4M3FN and E5M2) using the ComfyUI custom node [ComfyUI-ModelQuantizer](https://github.com/lum3on/ComfyUI-ModelQuantizer) by [lum3on](https://github.com/lum3on).
 
 
 ## Usage
 
@@ -36,5 +37,5 @@ The model files can be used in [ComfyUI](https://github.com/comfyanonymous/Comfy
 
 ## Reference
 
-- For more information about the GGUF-quantized versions, refer to [QuantStack/Wan-14B-T2V-FusionX-VACE-GGUF](https://huggingface.co/QuantStack/Wan-14B-T2V-FusionX-VACE-GGUF), where the quantization process is explained.
+- For more information about the GGUF-quantized versions, refer to [QuantStack/Wan-14B-T2V-FusionX-VACE-GGUF](https://huggingface.co/QuantStack/Wan-14B-T2V-FusionX-VACE-GGUF).
 - For an overview of Safetensors format, please see the [Safetensors](https://huggingface.co/docs/safetensors/index).
\ No newline at end of file
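
The scope transplant described in the new README text is, at the tensor level, a key-filtered copy between two state dicts. Below is a minimal sketch of that idea, assuming single-file safetensors checkpoints and that VACE tensors are identifiable by a `vace` substring in their key names; the filenames and the filter are illustrative assumptions, not the actual merge script.

```python
# Minimal sketch of a VACE-scope transplant between two checkpoints.
# Assumptions (illustrative, not confirmed by this README): single-file
# safetensors checkpoints, and VACE tensors identifiable by "vace" in
# their key names.
from safetensors.torch import load_file, save_file

donor = load_file("Wan2.1-VACE-14B.safetensors")    # provides the VACE scopes
target = load_file("Wan14BT2VFusionX.safetensors")  # receives them

# Extract every tensor that belongs to a VACE scope.
vace_scopes = {key: tensor for key, tensor in donor.items() if "vace" in key}

# Inject: VACE tensors are added to the target, overwriting any same-named keys.
merged = {**target, **vace_scopes}
save_file(merged, "Wan14BT2VFusionX-VACE.safetensors")
```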
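
The FP8 step is a dtype conversion into one of two 8-bit floating-point layouts: E4M3FN spends more bits on mantissa (4 exponent, 3 mantissa) for precision, while E5M2 spends more on exponent (5 exponent, 2 mantissa) for dynamic range. The conversion was done with the ComfyUI-ModelQuantizer node inside ComfyUI; the standalone PyTorch sketch below only illustrates what such a cast involves and is not that node's implementation. It assumes PyTorch 2.1+ (for the `float8` dtypes) and the hypothetical filename from the previous sketch.

```python
# Illustrative FP8 cast; NOT the ComfyUI-ModelQuantizer implementation.
# Requires PyTorch >= 2.1 for the float8 dtypes; recent safetensors releases
# can store both FP8 layouts. The filename is a placeholder from the sketch above.
import torch
from safetensors.torch import load_file, save_file

state = load_file("Wan14BT2VFusionX-VACE.safetensors")

for tag, dtype in (("e4m3fn", torch.float8_e4m3fn), ("e5m2", torch.float8_e5m2)):
    # Cast floating-point tensors to FP8; leave any non-float tensors untouched.
    quantized = {
        key: tensor.to(dtype) if tensor.is_floating_point() else tensor
        for key, tensor in state.items()
    }
    save_file(quantized, f"Wan14BT2VFusionX-VACE-fp8_{tag}.safetensors")
```

Offering both layouts lets users pick per their runtime: E4M3FN is the common choice for weights, while E5M2's wider exponent range tolerates larger value magnitudes.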