Mirror of https://www.modelscope.cn/city96/Wan2.1-I2V-14B-480P-gguf.git (synced 2026-04-02 11:32:55 +08:00)
| base_model | library_name | quantized_by | tags | license | pipeline_tag | language |
|---|---|---|---|---|---|---|
| Wan-AI/Wan2.1-I2V-14B-480P | gguf | city96 | | apache-2.0 | image-to-video | |
## Description

This is a direct GGUF conversion of Wan-AI/Wan2.1-I2V-14B-480P.

All quants were created from the FP32 base file. Only the FP16 version of the base was uploaded, since the FP32 file exceeds the 50GB per-file limit and gguf-split loading is not currently supported in ComfyUI-GGUF.

The model files can be used with the ComfyUI-GGUF custom node. Place them in `ComfyUI/models/unet` - see the GitHub readme for further install instructions.

The other required files can be downloaded from this repository by Comfy-Org.

Please refer to this chart for a basic overview of quantization types.
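As a rough illustration of what distinguishes these files, every GGUF model starts with a small fixed-layout binary header that loaders such as ComfyUI-GGUF check before reading tensors. Below is a minimal stdlib-only sketch of parsing that header; `parse_gguf_header` is an illustrative helper, not part of any library, and the field layout follows the GGUF specification used by llama.cpp (version 3).

```python
import struct

GGUF_MAGIC = b"GGUF"

def parse_gguf_header(data: bytes) -> dict:
    """Parse the fixed-size prefix of a GGUF file (little-endian).

    Layout per the GGUF spec (v3):
      bytes 0-3   : magic b"GGUF"
      bytes 4-7   : uint32 format version
      bytes 8-15  : uint64 tensor count
      bytes 16-23 : uint64 metadata key/value count
    """
    if len(data) < 24 or data[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    version, n_tensors, n_kv = struct.unpack_from("<IQQ", data, 4)
    return {
        "version": version,
        "tensor_count": n_tensors,
        "metadata_kv_count": n_kv,
    }
```

The metadata key/value section that follows this header is where the quantization type and architecture name are recorded, which is how a loader can reject an incompatible file before allocating any memory.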