Update README.md

ai-modelscope
2025-06-26 02:53:14 +08:00
parent 0de3d02335
commit 9925280797
60 changed files with 561988 additions and 42 deletions

.gitattributes (vendored), 16 lines changed

@@ -45,3 +45,19 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
assets/videos/seg_man_01.mp4 filter=lfs diff=lfs merge=lfs -text
models/hunyuancustom_720P/mp_rank_00_model_states.pt filter=lfs diff=lfs merge=lfs -text
models/hunyuancustom_720P/mp_rank_00_model_states_fp8.pt filter=lfs diff=lfs merge=lfs -text
models/hunyuancustom_audio_720P/mp_rank_00_model_states.pt filter=lfs diff=lfs merge=lfs -text
models/hunyuancustom_audio_720P/mp_rank_00_model_states_fp8.pt filter=lfs diff=lfs merge=lfs -text
models/hunyuancustom_editing_720P/mp_rank_00_model_states.pt filter=lfs diff=lfs merge=lfs -text
models/hunyuancustom_editing_720P/mp_rank_00_model_states_fp8.pt filter=lfs diff=lfs merge=lfs -text
models/llava-llama-3-8b-v1_1/model-00001-of-00004.safetensors filter=lfs diff=lfs merge=lfs -text
models/llava-llama-3-8b-v1_1/model-00002-of-00004.safetensors filter=lfs diff=lfs merge=lfs -text
models/llava-llama-3-8b-v1_1/model-00003-of-00004.safetensors filter=lfs diff=lfs merge=lfs -text
models/llava-llama-3-8b-v1_1/model-00004-of-00004.safetensors filter=lfs diff=lfs merge=lfs -text
models/openai_clip-vit-large-patch14/flax_model.msgpack filter=lfs diff=lfs merge=lfs -text
models/openai_clip-vit-large-patch14/model.safetensors filter=lfs diff=lfs merge=lfs -text
models/openai_clip-vit-large-patch14/pytorch_model.bin filter=lfs diff=lfs merge=lfs -text
models/openai_clip-vit-large-patch14/tf_model.h5 filter=lfs diff=lfs merge=lfs -text
models/vae_3d/hyvae_v1_0801/pytorch_model.pt filter=lfs diff=lfs merge=lfs -text

LICENSE (new file, 77 lines)

@@ -0,0 +1,77 @@
TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT
Tencent Hunyuan Custom Release Date: May 9, 2025
THIS LICENSE AGREEMENT DOES NOT APPLY IN THE EUROPEAN UNION, UNITED KINGDOM AND SOUTH KOREA AND IS EXPRESSLY LIMITED TO THE TERRITORY, AS DEFINED BELOW.
By clicking to agree or by using, reproducing, modifying, distributing, performing or displaying any portion or element of the Tencent Hunyuan Works, including via any Hosted Service, You will be deemed to have recognized and accepted the content of this Agreement, which is effective immediately.
1. DEFINITIONS.
a. “Acceptable Use Policy” shall mean the policy made available by Tencent as set forth in the Exhibit A.
b. “Agreement” shall mean the terms and conditions for use, reproduction, distribution, modification, performance and displaying of Tencent Hunyuan Works or any portion or element thereof set forth herein.
c. “Documentation” shall mean the specifications, manuals and documentation for Tencent Hunyuan made publicly available by Tencent.
d. “Hosted Service” shall mean a hosted service offered via an application programming interface (API), web access, or any other electronic or remote means.
e. “Licensee,” “You” or “Your” shall mean a natural person or legal entity exercising the rights granted by this Agreement and/or using the Tencent Hunyuan Works for any purpose and in any field of use.
f. “Materials” shall mean, collectively, Tencent’s proprietary Tencent Hunyuan and Documentation (and any portion thereof) as made available by Tencent under this Agreement.
g. “Model Derivatives” shall mean all: (i) modifications to Tencent Hunyuan or any Model Derivative of Tencent Hunyuan; (ii) works based on Tencent Hunyuan or any Model Derivative of Tencent Hunyuan; or (iii) any other machine learning model which is created by transfer of patterns of the weights, parameters, operations, or Output of Tencent Hunyuan or any Model Derivative of Tencent Hunyuan, to that model in order to cause that model to perform similarly to Tencent Hunyuan or a Model Derivative of Tencent Hunyuan, including distillation methods, methods that use intermediate data representations, or methods based on the generation of synthetic data Outputs by Tencent Hunyuan or a Model Derivative of Tencent Hunyuan for training that model. For clarity, Outputs by themselves are not deemed Model Derivatives.
h. “Output” shall mean the information and/or content output of Tencent Hunyuan or a Model Derivative that results from operating or otherwise using Tencent Hunyuan or a Model Derivative, including via a Hosted Service.
i. “Tencent,” “We” or “Us” shall mean THL A29 Limited.
j. “Tencent Hunyuan” shall mean the large language models, text/image/video/audio/3D generation models, and multimodal large language models and their software and algorithms, including trained model weights, parameters (including optimizer states), machine-learning model code, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing made publicly available by Us, including, without limitation to, Tencent Hunyuan Custom released at https://hunyuancustom.github.io/ .
k. “Tencent Hunyuan Works” shall mean: (i) the Materials; (ii) Model Derivatives; and (iii) all derivative works thereof.
l. “Territory” shall mean the worldwide territory, excluding the territory of the European Union, United Kingdom and South Korea.
m. “Third Party” or “Third Parties” shall mean individuals or legal entities that are not under common control with Us or You.
n. “including” shall mean including but not limited to.
2. GRANT OF RIGHTS.
We grant You, for the Territory only, a non-exclusive, non-transferable and royalty-free limited license under Tencent’s intellectual property or other rights owned by Us embodied in or utilized by the Materials to use, reproduce, distribute, create derivative works of (including Model Derivatives), and make modifications to the Materials, only in accordance with the terms of this Agreement and the Acceptable Use Policy, and You must not violate (or encourage or permit anyone else to violate) any term of this Agreement or the Acceptable Use Policy.
3. DISTRIBUTION.
You may, subject to Your compliance with this Agreement, distribute or make available to Third Parties the Tencent Hunyuan Works, exclusively in the Territory, provided that You meet all of the following conditions:
a. You must provide all such Third Party recipients of the Tencent Hunyuan Works or products or services using them a copy of this Agreement;
b. You must cause any modified files to carry prominent notices stating that You changed the files;
c. You are encouraged to: (i) publish at least one technology introduction blogpost or one public statement expressing Your experience of using the Tencent Hunyuan Works; and (ii) mark the products or services developed by using the Tencent Hunyuan Works to indicate that the product/service is “Powered by Tencent Hunyuan”; and
d. All distributions to Third Parties (other than through a Hosted Service) must be accompanied by a “Notice” text file that contains the following notice: “Tencent Hunyuan is licensed under the Tencent Hunyuan Community License Agreement, Copyright © 2025 Tencent. All Rights Reserved. The trademark rights of “Tencent Hunyuan” are owned by Tencent or its affiliate.”
You may add Your own copyright statement to Your modifications and, except as set forth in this Section and in Section 5, may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Model Derivatives as a whole, provided Your use, reproduction, modification, distribution, performance and display of the work otherwise complies with the terms and conditions of this Agreement (including as regards the Territory). If You receive Tencent Hunyuan Works from a Licensee as part of an integrated end user product, then this Section 3 of this Agreement will not apply to You.
4. ADDITIONAL COMMERCIAL TERMS.
If, on the Tencent Hunyuan version release date, the monthly active users of all products or services made available by or for Licensee is greater than 100 million monthly active users in the preceding calendar month, You must request a license from Tencent, which Tencent may grant to You in its sole discretion, and You are not authorized to exercise any of the rights under this Agreement unless or until Tencent otherwise expressly grants You such rights.
5. RULES OF USE.
a. Your use of the Tencent Hunyuan Works must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Tencent Hunyuan Works, which is hereby incorporated by reference into this Agreement. You must include the use restrictions referenced in these Sections 5(a) and 5(b) as an enforceable provision in any agreement (e.g., license agreement, terms of use, etc.) governing the use and/or distribution of Tencent Hunyuan Works and You must provide notice to subsequent users to whom You distribute that Tencent Hunyuan Works are subject to the use restrictions in these Sections 5(a) and 5(b).
b. You must not use the Tencent Hunyuan Works or any Output or results of the Tencent Hunyuan Works to improve any other AI model (other than Tencent Hunyuan or Model Derivatives thereof).
c. You must not use, reproduce, modify, distribute, or display the Tencent Hunyuan Works, Output or results of the Tencent Hunyuan Works outside the Territory. Any such use outside the Territory is unlicensed and unauthorized under this Agreement.
6. INTELLECTUAL PROPERTY.
a. Subject to Tencent’s ownership of Tencent Hunyuan Works made by or for Tencent and intellectual property rights therein, conditioned upon Your compliance with the terms and conditions of this Agreement, as between You and Tencent, You will be the owner of any derivative works and modifications of the Materials and any Model Derivatives that are made by or for You.
b. No trademark licenses are granted under this Agreement, and in connection with the Tencent Hunyuan Works, Licensee may not use any name or mark owned by or associated with Tencent or any of its affiliates, except as required for reasonable and customary use in describing and distributing the Tencent Hunyuan Works. Tencent hereby grants You a license to use “Tencent Hunyuan” (the “Mark”) in the Territory solely as required to comply with the provisions of Section 3(c), provided that You comply with any applicable laws related to trademark protection. All goodwill arising out of Your use of the Mark will inure to the benefit of Tencent.
c. If You commence a lawsuit or other proceedings (including a cross-claim or counterclaim in a lawsuit) against Us or any person or entity alleging that the Materials or any Output, or any portion of any of the foregoing, infringe any intellectual property or other right owned or licensable by You, then all licenses granted to You under this Agreement shall terminate as of the date such lawsuit or other proceeding is filed. You will defend, indemnify and hold harmless Us from and against any claim by any Third Party arising out of or related to Your or the Third Party’s use or distribution of the Tencent Hunyuan Works.
d. Tencent claims no rights in Outputs You generate. You and Your users are solely responsible for Outputs and their subsequent uses.
7. DISCLAIMERS OF WARRANTY AND LIMITATIONS OF LIABILITY.
a. We are not obligated to support, update, provide training for, or develop any further version of the Tencent Hunyuan Works or to grant any license thereto.
b. UNLESS AND ONLY TO THE EXTENT REQUIRED BY APPLICABLE LAW, THE TENCENT HUNYUAN WORKS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED “AS IS” WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES OF ANY KIND INCLUDING ANY WARRANTIES OF TITLE, MERCHANTABILITY, NONINFRINGEMENT, COURSE OF DEALING, USAGE OF TRADE, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING, REPRODUCING, MODIFYING, PERFORMING, DISPLAYING OR DISTRIBUTING ANY OF THE TENCENT HUNYUAN WORKS OR OUTPUTS AND ASSUME ANY AND ALL RISKS ASSOCIATED WITH YOUR OR A THIRD PARTY’S USE OR DISTRIBUTION OF ANY OF THE TENCENT HUNYUAN WORKS OR OUTPUTS AND YOUR EXERCISE OF RIGHTS AND PERMISSIONS UNDER THIS AGREEMENT.
c. TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL TENCENT OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, FOR ANY DAMAGES, INCLUDING ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, EXEMPLARY, CONSEQUENTIAL OR PUNITIVE DAMAGES, OR LOST PROFITS OF ANY KIND ARISING FROM THIS AGREEMENT OR RELATED TO ANY OF THE TENCENT HUNYUAN WORKS OR OUTPUTS, EVEN IF TENCENT OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
8. SURVIVAL AND TERMINATION.
a. The term of this Agreement shall commence upon Your acceptance of this Agreement or access to the Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein.
b. We may terminate this Agreement if You breach any of the terms or conditions of this Agreement. Upon termination of this Agreement, You must promptly delete and cease use of the Tencent Hunyuan Works. Sections 6(a), 6(c), 7 and 9 shall survive the termination of this Agreement.
9. GOVERNING LAW AND JURISDICTION.
a. This Agreement and any dispute arising out of or relating to it will be governed by the laws of the Hong Kong Special Administrative Region of the People’s Republic of China, without regard to conflict of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement.
b. Exclusive jurisdiction and venue for any dispute arising out of or relating to this Agreement will be a court of competent jurisdiction in the Hong Kong Special Administrative Region of the People’s Republic of China, and Tencent and Licensee consent to the exclusive jurisdiction of such court with respect to any such dispute.
EXHIBIT A
ACCEPTABLE USE POLICY
Tencent reserves the right to update this Acceptable Use Policy from time to time.
Last modified: November 5, 2024
Tencent endeavors to promote safe and fair use of its tools and features, including Tencent Hunyuan. You agree not to use Tencent Hunyuan or Model Derivatives:
1. Outside the Territory;
2. In any way that violates any applicable national, federal, state, local, international or any other law or regulation;
3. To harm Yourself or others;
4. To repurpose or distribute output from Tencent Hunyuan or any Model Derivatives to harm Yourself or others;
5. To override or circumvent the safety guardrails and safeguards We have put in place;
6. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
7. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections;
8. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;
9. To intentionally defame, disparage or otherwise harass others;
10. To generate and/or disseminate malware (including ransomware) or any other content to be used for the purpose of harming electronic systems;
11. To generate or disseminate personal identifiable information with the purpose of harming others;
12. To generate or disseminate information (including images, code, posts, articles), and place the information in any public context (including through the use of bot generated tweets), without expressly and conspicuously identifying that the information and/or content is machine generated;
13. To impersonate another individual without consent, authorization, or legal right;
14. To make high-stakes automated decisions in domains that affect an individual’s safety, rights or wellbeing (e.g., law enforcement, migration, medicine/health, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance);
15. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;
16. To perform, facilitate, threaten, incite, plan, promote or encourage violent extremism or terrorism;
17. For any use intended to discriminate against or harm individuals or groups based on protected characteristics or categories, online or offline social behavior or known or predicted personal or personality characteristics;
18. To intentionally exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
19. For military purposes;
20. To engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or other professional practices.

Notice.txt (new file, 118 lines)

@@ -0,0 +1,118 @@
Usage and Legal Notices:
Tencent is pleased to support the open source community by making Tencent Hunyuan Custom available.
Copyright (C) 2025 THL A29 Limited, a Tencent company. All rights reserved. The below software and/or models in this distribution may have been modified by THL A29 Limited ("Tencent Modifications"). All Tencent Modifications are Copyright (C) THL A29 Limited.
Tencent Hunyuan Custom is licensed under the TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT except for the third-party components listed below. Tencent Hunyuan Custom does not impose any additional limitations beyond what is outlined in the respective licenses of these third-party components. Users must comply with all terms and conditions of the original licenses of these third-party components and must ensure that the usage of the third-party components adheres to all relevant laws and regulations.
For the avoidance of doubt, Tencent Hunyuan Custom means the large language models and their software and algorithms, including trained model weights, parameters (including optimizer states), machine-learning model code, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing that may be made publicly available by Tencent in accordance with the TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT.
Other dependencies and licenses:
Open Source Model Licensed under the TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT:
The below software in this distribution may have been modified by THL A29 Limited ("Tencent Modifications"). All Tencent Modifications are Copyright (C) 2025 THL A29 Limited.
--------------------------------------------------------------------
1. HunyuanVideo
Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved.
TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT:
--------------------------------------------------------------------
TENCENT HUNYUAN COMMUNITY LICENSE AGREEMENT
Tencent HunyuanVideo Release Date: December 3, 2024
THIS LICENSE AGREEMENT DOES NOT APPLY IN THE EUROPEAN UNION, UNITED KINGDOM AND SOUTH KOREA AND IS EXPRESSLY LIMITED TO THE TERRITORY, AS DEFINED BELOW.
By clicking to agree or by using, reproducing, modifying, distributing, performing or displaying any portion or element of the Tencent Hunyuan Works, including via any Hosted Service, You will be deemed to have recognized and accepted the content of this Agreement, which is effective immediately.
1. DEFINITIONS.
a. “Acceptable Use Policy” shall mean the policy made available by Tencent as set forth in the Exhibit A.
b. “Agreement” shall mean the terms and conditions for use, reproduction, distribution, modification, performance and displaying of Tencent Hunyuan Works or any portion or element thereof set forth herein.
c. “Documentation” shall mean the specifications, manuals and documentation for Tencent Hunyuan made publicly available by Tencent.
d. “Hosted Service” shall mean a hosted service offered via an application programming interface (API), web access, or any other electronic or remote means.
e. “Licensee,” “You” or “Your” shall mean a natural person or legal entity exercising the rights granted by this Agreement and/or using the Tencent Hunyuan Works for any purpose and in any field of use.
f. “Materials” shall mean, collectively, Tencent’s proprietary Tencent Hunyuan and Documentation (and any portion thereof) as made available by Tencent under this Agreement.
g. “Model Derivatives” shall mean all: (i) modifications to Tencent Hunyuan or any Model Derivative of Tencent Hunyuan; (ii) works based on Tencent Hunyuan or any Model Derivative of Tencent Hunyuan; or (iii) any other machine learning model which is created by transfer of patterns of the weights, parameters, operations, or Output of Tencent Hunyuan or any Model Derivative of Tencent Hunyuan, to that model in order to cause that model to perform similarly to Tencent Hunyuan or a Model Derivative of Tencent Hunyuan, including distillation methods, methods that use intermediate data representations, or methods based on the generation of synthetic data Outputs by Tencent Hunyuan or a Model Derivative of Tencent Hunyuan for training that model. For clarity, Outputs by themselves are not deemed Model Derivatives.
h. “Output” shall mean the information and/or content output of Tencent Hunyuan or a Model Derivative that results from operating or otherwise using Tencent Hunyuan or a Model Derivative, including via a Hosted Service.
i. “Tencent,” “We” or “Us” shall mean THL A29 Limited.
j. “Tencent Hunyuan” shall mean the large language models, text/image/video/audio/3D generation models, and multimodal large language models and their software and algorithms, including trained model weights, parameters (including optimizer states), machine-learning model code, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing made publicly available by Us, including, without limitation to, Tencent HunyuanVideo released at [https://github.com/Tencent/HunyuanVideo].
k. “Tencent Hunyuan Works” shall mean: (i) the Materials; (ii) Model Derivatives; and (iii) all derivative works thereof.
l. “Territory” shall mean the worldwide territory, excluding the territory of the European Union, United Kingdom and South Korea.
m. “Third Party” or “Third Parties” shall mean individuals or legal entities that are not under common control with Us or You.
n. “including” shall mean including but not limited to.
2. GRANT OF RIGHTS.
We grant You, for the Territory only, a non-exclusive, non-transferable and royalty-free limited license under Tencent’s intellectual property or other rights owned by Us embodied in or utilized by the Materials to use, reproduce, distribute, create derivative works of (including Model Derivatives), and make modifications to the Materials, only in accordance with the terms of this Agreement and the Acceptable Use Policy, and You must not violate (or encourage or permit anyone else to violate) any term of this Agreement or the Acceptable Use Policy.
3. DISTRIBUTION.
You may, subject to Your compliance with this Agreement, distribute or make available to Third Parties the Tencent Hunyuan Works, exclusively in the Territory, provided that You meet all of the following conditions:
a. You must provide all such Third Party recipients of the Tencent Hunyuan Works or products or services using them a copy of this Agreement;
b. You must cause any modified files to carry prominent notices stating that You changed the files;
c. You are encouraged to: (i) publish at least one technology introduction blogpost or one public statement expressing Your experience of using the Tencent Hunyuan Works; and (ii) mark the products or services developed by using the Tencent Hunyuan Works to indicate that the product/service is “Powered by Tencent Hunyuan”; and
d. All distributions to Third Parties (other than through a Hosted Service) must be accompanied by a “Notice” text file that contains the following notice: “Tencent Hunyuan is licensed under the Tencent Hunyuan Community License Agreement, Copyright © 2024 Tencent. All Rights Reserved. The trademark rights of “Tencent Hunyuan” are owned by Tencent or its affiliate.”
You may add Your own copyright statement to Your modifications and, except as set forth in this Section and in Section 5, may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Model Derivatives as a whole, provided Your use, reproduction, modification, distribution, performance and display of the work otherwise complies with the terms and conditions of this Agreement (including as regards the Territory). If You receive Tencent Hunyuan Works from a Licensee as part of an integrated end user product, then this Section 3 of this Agreement will not apply to You.
4. ADDITIONAL COMMERCIAL TERMS.
If, on the Tencent Hunyuan version release date, the monthly active users of all products or services made available by or for Licensee is greater than 100 million monthly active users in the preceding calendar month, You must request a license from Tencent, which Tencent may grant to You in its sole discretion, and You are not authorized to exercise any of the rights under this Agreement unless or until Tencent otherwise expressly grants You such rights.
5. RULES OF USE.
a. Your use of the Tencent Hunyuan Works must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Tencent Hunyuan Works, which is hereby incorporated by reference into this Agreement. You must include the use restrictions referenced in these Sections 5(a) and 5(b) as an enforceable provision in any agreement (e.g., license agreement, terms of use, etc.) governing the use and/or distribution of Tencent Hunyuan Works and You must provide notice to subsequent users to whom You distribute that Tencent Hunyuan Works are subject to the use restrictions in these Sections 5(a) and 5(b).
b. You must not use the Tencent Hunyuan Works or any Output or results of the Tencent Hunyuan Works to improve any other AI model (other than Tencent Hunyuan or Model Derivatives thereof).
c. You must not use, reproduce, modify, distribute, or display the Tencent Hunyuan Works, Output or results of the Tencent Hunyuan Works outside the Territory. Any such use outside the Territory is unlicensed and unauthorized under this Agreement.
6. INTELLECTUAL PROPERTY.
a. Subject to Tencent’s ownership of Tencent Hunyuan Works made by or for Tencent and intellectual property rights therein, conditioned upon Your compliance with the terms and conditions of this Agreement, as between You and Tencent, You will be the owner of any derivative works and modifications of the Materials and any Model Derivatives that are made by or for You.
b. No trademark licenses are granted under this Agreement, and in connection with the Tencent Hunyuan Works, Licensee may not use any name or mark owned by or associated with Tencent or any of its affiliates, except as required for reasonable and customary use in describing and distributing the Tencent Hunyuan Works. Tencent hereby grants You a license to use “Tencent Hunyuan” (the “Mark”) in the Territory solely as required to comply with the provisions of Section 3(c), provided that You comply with any applicable laws related to trademark protection. All goodwill arising out of Your use of the Mark will inure to the benefit of Tencent.
c. If You commence a lawsuit or other proceedings (including a cross-claim or counterclaim in a lawsuit) against Us or any person or entity alleging that the Materials or any Output, or any portion of any of the foregoing, infringe any intellectual property or other right owned or licensable by You, then all licenses granted to You under this Agreement shall terminate as of the date such lawsuit or other proceeding is filed. You will defend, indemnify and hold harmless Us from and against any claim by any Third Party arising out of or related to Your or the Third Party’s use or distribution of the Tencent Hunyuan Works.
d. Tencent claims no rights in Outputs You generate. You and Your users are solely responsible for Outputs and their subsequent uses.
7. DISCLAIMERS OF WARRANTY AND LIMITATIONS OF LIABILITY.
a. We are not obligated to support, update, provide training for, or develop any further version of the Tencent Hunyuan Works or to grant any license thereto.
b. UNLESS AND ONLY TO THE EXTENT REQUIRED BY APPLICABLE LAW, THE TENCENT HUNYUAN WORKS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED “AS IS” WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES OF ANY KIND INCLUDING ANY WARRANTIES OF TITLE, MERCHANTABILITY, NONINFRINGEMENT, COURSE OF DEALING, USAGE OF TRADE, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING, REPRODUCING, MODIFYING, PERFORMING, DISPLAYING OR DISTRIBUTING ANY OF THE TENCENT HUNYUAN WORKS OR OUTPUTS AND ASSUME ANY AND ALL RISKS ASSOCIATED WITH YOUR OR A THIRD PARTY’S USE OR DISTRIBUTION OF ANY OF THE TENCENT HUNYUAN WORKS OR OUTPUTS AND YOUR EXERCISE OF RIGHTS AND PERMISSIONS UNDER THIS AGREEMENT.
c. TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL TENCENT OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, FOR ANY DAMAGES, INCLUDING ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, EXEMPLARY, CONSEQUENTIAL OR PUNITIVE DAMAGES, OR LOST PROFITS OF ANY KIND ARISING FROM THIS AGREEMENT OR RELATED TO ANY OF THE TENCENT HUNYUAN WORKS OR OUTPUTS, EVEN IF TENCENT OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
8. SURVIVAL AND TERMINATION.
a. The term of this Agreement shall commence upon Your acceptance of this Agreement or access to the Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein.
b. We may terminate this Agreement if You breach any of the terms or conditions of this Agreement. Upon termination of this Agreement, You must promptly delete and cease use of the Tencent Hunyuan Works. Sections 6(a), 6(c), 7 and 9 shall survive the termination of this Agreement.
9. GOVERNING LAW AND JURISDICTION.
a. This Agreement and any dispute arising out of or relating to it will be governed by the laws of the Hong Kong Special Administrative Region of the People’s Republic of China, without regard to conflict of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement.
b. Exclusive jurisdiction and venue for any dispute arising out of or relating to this Agreement will be a court of competent jurisdiction in the Hong Kong Special Administrative Region of the People’s Republic of China, and Tencent and Licensee consent to the exclusive jurisdiction of such court with respect to any such dispute.
EXHIBIT A
ACCEPTABLE USE POLICY
Tencent reserves the right to update this Acceptable Use Policy from time to time.
Last modified: November 5, 2024
Tencent endeavors to promote safe and fair use of its tools and features, including Tencent Hunyuan. You agree not to use Tencent Hunyuan or Model Derivatives:
1. Outside the Territory;
2. In any way that violates any applicable national, federal, state, local, international or any other law or regulation;
3. To harm Yourself or others;
4. To repurpose or distribute output from Tencent Hunyuan or any Model Derivatives to harm Yourself or others;
5. To override or circumvent the safety guardrails and safeguards We have put in place;
6. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
7. To generate or disseminate verifiably false information and/or content with the purpose of harming others or influencing elections;
8. To generate or facilitate false online engagement, including fake reviews and other means of fake online engagement;
9. To intentionally defame, disparage or otherwise harass others;
10. To generate and/or disseminate malware (including ransomware) or any other content to be used for the purpose of harming electronic systems;
11. To generate or disseminate personal identifiable information with the purpose of harming others;
12. To generate or disseminate information (including images, code, posts, articles), and place the information in any public context (including through the use of bot generated tweets), without expressly and conspicuously identifying that the information and/or content is machine generated;
13. To impersonate another individual without consent, authorization, or legal right;
14. To make high-stakes automated decisions in domains that affect an individual’s safety, rights or wellbeing (e.g., law enforcement, migration, medicine/health, management of critical infrastructure, safety components of products, essential services, credit, employment, housing, education, social scoring, or insurance);
15. In a manner that violates or disrespects the social ethics and moral standards of other countries or regions;
16. To perform, facilitate, threaten, incite, plan, promote or encourage violent extremism or terrorism;
17. For any use intended to discriminate against or harm individuals or groups based on protected characteristics or categories, online or offline social behavior or known or predicted personal or personality characteristics;
18. To intentionally exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
19. For military purposes;
20. To engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or other professional practices.
--------------
For other third-party dependents of HunyuanVideo, please see: https://github.com/Tencent/HunyuanVideo/blob/main/Notice
Open Source Software Licensed under the MIT License:
--------------------------------------------------------------------
1. guided-diffusion
Copyright (c) 2021 OpenAI
Terms of the MIT License:
--------------------------------------------------------------------
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

README.md, 394 lines changed

@@ -1,47 +1,359 @@
---
language:
- en
base_model:
- tencent/HunyuanVideo
pipeline_tag: image-to-video
---
<!-- ## **HunyuanCustom** -->

<p align="center">
<img src="assets/material/logo.png" height=100>
</p>
# **HunyuanCustom** 🌅
<div align="center">
<a href="https://github.com/Tencent/HunyuanCustom"><img src="https://img.shields.io/static/v1?label=HunyuanCustom%20Code&message=Github&color=blue"></a> &ensp;
<a href="https://hunyuancustom.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green"></a> &ensp;
<a href="https://hunyuan.tencent.com/modelSquare/home/play?modelId=192"><img src="https://img.shields.io/static/v1?label=Playground&message=Web&color=green"></a>
</div>
<div align="center">
<a href="https://arxiv.org/pdf/2505.04512"><img src="https://img.shields.io/static/v1?label=Tech Report&message=Arxiv&color=red"></a> &ensp;
</div>
<div align="center">
<a href="https://huggingface.co/tencent/HunyuanCustom"><img src="https://img.shields.io/static/v1?label=HunyuanVideo&message=HuggingFace&color=yellow"></a> &ensp;
</div>
-----
> [**HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation**](https://arxiv.org/pdf/2505.04512) <br>
## 🔥🔥🔥 News!!
* June 6, 2025: 💃 We release the inference code and model weights for audio-driven and video-driven video customization, powered by [OmniV2V](https://arxiv.org/abs/2506.01801).
* May 13, 2025: 🎉 HunyuanCustom has been integrated into [ComfyUI-HunyuanVideoWrapper](https://github.com/kijai/ComfyUI-HunyuanVideoWrapper/blob/develop/example_workflows/hyvideo_custom_testing_01.json) by [Kijai](https://github.com/kijai).
* May 12, 2025: 🔥 HunyuanCustom is available in Cloud-Native-Build (CNB) [HunyuanCustom](https://cnb.cool/tencent/hunyuan/HunyuanCustom).
* May 8, 2025: 👋 We release the inference code and model weights of HunyuanCustom. [Download](models/README.md).
## 📑 Open-source Plan
- HunyuanCustom
- Single-Subject Video Customization
- [x] Inference
- [x] Checkpoints
- [x] ComfyUI
- Audio-Driven Video Customization
- [x] Inference
- [x] Checkpoints
- [ ] ComfyUI
- Video-Driven Video Customization
- [x] Inference
- [x] Checkpoints
- [ ] ComfyUI
- Multi-Subject Video Customization
## Contents
- [**HunyuanCustom** 🌅](#hunyuancustom-)
- [🔥🔥🔥 News!!](#-news)
- [📑 Open-source Plan](#-open-source-plan)
- [Contents](#contents)
- [**Abstract**](#abstract)
- [**HunyuanCustom Overall Architecture**](#hunyuancustom-overall-architecture)
- [🎉 **HunyuanCustom Key Features**](#-hunyuancustom-key-features)
  - [**Multimodal Video Customization**](#multimodal-video-customization)
- [**Various Applications**](#various-applications)
- [📈 Comparisons](#-comparisons)
- [📜 Requirements](#-requirements)
- [🛠️ Dependencies and Installation](#-dependencies-and-installation)
- [Installation Guide for Linux](#installation-guide-for-linux)
- [🧱 Download Pretrained Models](#-download-pretrained-models)
- [🚀 Parallel Inference on Multiple GPUs](#-parallel-inference-on-multiple-gpus)
- [🔑 Single-gpu Inference](#-single-gpu-inference)
- [Run with very low VRAM](#run-with-very-low-vram)
- [Run a Gradio Server](#run-a-gradio-server)
- [🔗 BibTeX](#-bibtex)
- [Acknowledgements](#acknowledgements)
---
## **Abstract**
Customized video generation aims to produce videos featuring specific subjects under flexible user-defined conditions, yet existing methods often struggle with identity consistency and limited input modalities. In this paper, we propose HunyuanCustom, a multi-modal customized video generation framework that emphasizes subject consistency while supporting image, audio, video, and text conditions. Built upon HunyuanVideo, our model first addresses the image-text conditioned generation task by introducing a text-image fusion module based on LLaVA for enhanced multi-modal understanding, along with an image ID enhancement module that leverages temporal concatenation to reinforce identity features across frames. To enable audio- and video-conditioned generation, we further propose modality-specific condition injection mechanisms: an AudioNet module that achieves hierarchical alignment via spatial cross-attention, and a video-driven injection module that integrates latent-compressed conditional video through a patchify-based feature-alignment network. Extensive experiments on single- and multi-subject scenarios demonstrate that HunyuanCustom significantly outperforms state-of-the-art open- and closed-source methods in terms of ID consistency, realism, and text-video alignment. Moreover, we validate its robustness across downstream tasks, including audio and video-driven customized video generation. Our results highlight the effectiveness of multi-modal conditioning and identity-preserving strategies in advancing controllable video generation.
## **HunyuanCustom Overall Architecture**
![image](assets/material/method.png)
We propose **HunyuanCustom, a multi-modal, conditional, and controllable generation model centered on subject consistency**, built upon the Hunyuan Video generation framework. It enables the generation of subject-consistent videos conditioned on text, images, audio, and video inputs.
## 🎉 **HunyuanCustom Key Features**
### **Multimodal Video Customization**
HunyuanCustom supports inputs in the form of **text, images, audio, and video**.
Specifically, it can handle single or multiple image inputs to enable customized video generation for one or more subjects.
Additionally, it can incorporate extra audio inputs to drive the subject to speak the corresponding audio.
Lastly, HunyuanCustom supports video input, allowing for the replacement of specified objects in the video with subjects from a given image.
![image](assets/material/teaser.png)
### **Various Applications**
With the multi-modal capabilities of HunyuanCustom, numerous downstream tasks can be accomplished.
For instance, by taking multiple images as input, HunyuanCustom can facilitate **virtual human advertisements** and **virtual try-on**. Additionally,
with image and audio inputs, it can create **singing avatars**. Furthermore, by using an image and a video as inputs,
HunyuanCustom supports **video editing** by replacing subjects in the video with those in the provided image.
More applications await your exploration!
![image](assets/material/application.png)
## 📈 Comparisons
To evaluate the performance of HunyuanCustom, we compared it with state-of-the-art video customization methods,
including VACE, Skyreels, Pika, Vidu, Keling, and Hailuo. The comparison focused on face/subject consistency,
video-text alignment, and overall video quality.
| Models | Face-Sim | CLIP-B-T | DINO-Sim | Temp-Consis | DD |
|-------------------|----------|----------|----------|-------------|------|
| VACE-1.3B | 0.204 | _0.308_ | 0.569 | **0.967** | 0.53 |
| Skyreels | 0.402 | 0.295 | 0.579 | 0.942 | 0.72 |
| Pika | 0.363 | 0.305 | 0.485 | 0.928 | _0.89_ |
| Vidu2.0 | 0.424 | 0.300 | 0.537 | _0.961_ | 0.43 |
| Keling1.6 | 0.505 | 0.285 | _0.580_ | 0.914 | 0.78 |
| Hailuo | _0.526_ | **0.314**| 0.433 | 0.937 | **0.94** |
| **HunyuanCustom (Ours)** | **0.627**| 0.306 | **0.593**| 0.958 | 0.71 |
## 📜 Requirements
The following table shows the requirements for running the HunyuanCustom model (batch size = 1) to generate videos:
| Model | Setting<br/>(height/width/frame) | GPU Peak Memory |
|:------------:|:--------------------------------:|:----------------:|
| HunyuanCustom | 720px × 1280px × 129f | 80GB |
| HunyuanCustom | 512px × 896px × 129f | 60GB |
* An NVIDIA GPU with CUDA support is required.
* The model is tested on a machine with 8 GPUs.
* **Minimum**: The minimum GPU memory required is 24GB for 720px × 1280px × 129f, but generation will be very slow at that size.
* **Recommended**: We recommend using a GPU with 80GB of memory for better generation quality (a quick hardware check is shown below).
* Tested operating system: Linux
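A quick way to confirm your hardware against the table above (standard `nvidia-smi` query flags):
```shell
# List each GPU with its total memory; compare against the requirements table.
nvidia-smi --query-gpu=name,memory.total --format=csv
```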
## 🛠️ Dependencies and Installation
Begin by cloning the repository:
```shell
git clone https://github.com/Tencent/HunyuanCustom.git
cd HunyuanCustom
```
### Installation Guide for Linux
We recommend CUDA versions 12.4 or 11.8 for the manual installation.
Conda's installation instructions are available [here](https://docs.anaconda.com/free/miniconda/index.html).
```shell
# 1. Create conda environment
conda create -n HunyuanCustom python==3.10.9
# 2. Activate the environment
conda activate HunyuanCustom
# 3. Install PyTorch and other dependencies using conda
# For CUDA 11.8
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=11.8 -c pytorch -c nvidia
# For CUDA 12.4
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia
# 4. Install pip dependencies
python -m pip install -r requirements.txt
# 5. Install flash attention v2 for acceleration (requires CUDA 11.8 or above)
python -m pip install ninja
python -m pip install git+https://github.com/Dao-AILab/flash-attention.git@v2.6.3
```
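After installation, a quick sanity check that the pinned PyTorch build, its CUDA runtime, and FlashAttention all resolve (a minimal sketch; the expected versions are the ones pinned above):
```shell
# Should print 2.4.0, the CUDA version (11.8 or 12.4), and True on a GPU machine.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# Should print 2.6.3 if the flash-attention build succeeded.
python -c "import flash_attn; print(flash_attn.__version__)"
```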
If you run into a floating point exception (core dump) on a specific GPU type, you may try the following solutions:
```shell
# Option 1: Make sure you have installed CUDA 12.4, cuBLAS >= 12.4.5.8, and cuDNN >= 9.0 (or simply use our CUDA 12 docker image).
pip install nvidia-cublas-cu12==12.4.5.8
export LD_LIBRARY_PATH=/opt/conda/lib/python3.8/site-packages/nvidia/cublas/lib/
# Option 2: Force the CUDA 11.8-compiled version of PyTorch and all other packages
pip uninstall -r requirements.txt # uninstall all packages
pip install torch==2.4.0 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
pip install ninja
pip install git+https://github.com/Dao-AILab/flash-attention.git@v2.6.3
```
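Whichever option you pick, you can confirm which CUDA and cuBLAS builds are actually being picked up before re-running inference (a diagnostic sketch; the exact site-packages path depends on your Python version):
```shell
# PyTorch build and the CUDA version it was compiled against
python -c "import torch; print(torch.__version__, torch.version.cuda)"
# Installed cuBLAS wheel version (Option 1 expects >= 12.4.5.8)
python -m pip show nvidia-cublas-cu12
# Library search path, which must include the cuBLAS wheel's lib directory
echo "$LD_LIBRARY_PATH"
```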
Alternatively, you can use the HunyuanVideo Docker image. Use the following commands to pull and run it.
```shell
# For CUDA 12.4 (updated to avoid float point exception)
docker pull hunyuanvideo/hunyuanvideo:cuda_12
docker run -itd --gpus all --init --net=host --uts=host --ipc=host --name hunyuanvideo --security-opt=seccomp=unconfined --ulimit=stack=67108864 --ulimit=memlock=-1 --privileged hunyuanvideo/hunyuanvideo:cuda_12
pip install gradio==3.39.0 diffusers==0.33.0 transformers==4.41.2
# For CUDA 11.8
docker pull hunyuanvideo/hunyuanvideo:cuda_11
docker run -itd --gpus all --init --net=host --uts=host --ipc=host --name hunyuanvideo --security-opt=seccomp=unconfined --ulimit=stack=67108864 --ulimit=memlock=-1 --privileged hunyuanvideo/hunyuanvideo:cuda_11
pip install gradio==3.39.0 diffusers==0.33.0 transformers==4.41.2
```
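Once the detached container is running, attach a shell and confirm the GPUs are visible inside it (standard Docker and NVIDIA tooling, not specific to this image):
```shell
# Attach to the container started above with --name hunyuanvideo
docker exec -it hunyuanvideo bash
# Inside the container: all GPUs passed through via --gpus all should be listed
nvidia-smi
```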
## 🧱 Download Pretrained Models
Details on downloading the pretrained models are given [here](models/README.md).
## 🚀 Parallel Inference on Multiple GPUs
For example, to generate a video with 8 GPUs, you can use the following command:
### Run Single-Subject Video Customization
```bash
cd HunyuanCustom
export MODEL_BASE="./models"
export PYTHONPATH=./
torchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \
--ref-image './assets/images/seg_woman_01.png' \
--pos-prompt "Realistic, High-quality. A woman is drinking coffee at a café." \
--neg-prompt "Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border." \
--ckpt ${MODEL_BASE}"/hunyuancustom_720P/mp_rank_00_model_states.pt" \
--video-size 720 1280 \
--seed 1024 \
--sample-n-frames 129 \
--infer-steps 30 \
--flow-shift-eval-video 13.0 \
--save-path './results/sp_720p'
```
### Run Video-Driven Video Customization (Video Editing)
```bash
cd HunyuanCustom
export MODEL_BASE="./models"
export PYTHONPATH=./
torchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \
--ref-image './assets/images/sed_red_panda.png' \
--input-video './assets/input_videos/001_bg.mp4' \
--mask-video './assets/input_videos/001_mask.mp4' \
--expand-scale 5 \
--video-condition \
--pos-prompt "Realistic, High-quality. A red panda is walking on a stone road." \
--neg-prompt "Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border." \
--ckpt ${MODEL_BASE}"/hunyuancustom_editing_720P/mp_rank_00_model_states.pt" \
--seed 1024 \
--infer-steps 50 \
--flow-shift-eval-video 5.0 \
--save-path './results/sp_editing_720p'
# --pose-enhance # Enable for human videos to improve pose generation quality.
```
### Run Audio-Driven Video Customization
```bash
cd HunyuanCustom
export MODEL_BASE="./models"
export PYTHONPATH=./
torchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \
--ref-image './assets/images/seg_man_01.png' \
--input-audio './assets/audios/milk_man.mp3' \
--audio-strength 0.8 \
--audio-condition \
--pos-prompt "Realistic, High-quality. In the study, a man sits at a table featuring a bottle of milk while delivering a product presentation." \
--neg-prompt "Two people, two persons, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border." \
--ckpt ${MODEL_BASE}"/hunyuancustom_audio_720P/mp_rank_00_model_states.pt" \
--seed 1026 \
--video-size 720 1280 \
--sample-n-frames 129 \
--cfg-scale 7.5 \
--infer-steps 30 \
--use-deepcache 1 \
--flow-shift-eval-video 13.0 \
--save-path './results/sp_audio_720p'
```
## 🔑 Single-gpu Inference
For example, to generate a video with 1 GPU, you can use the following command:
```bash
cd HunyuanCustom
export MODEL_BASE="./models"
export DISABLE_SP=1
export PYTHONPATH=./
python hymm_sp/sample_gpu_poor.py \
--ref-image './assets/images/seg_woman_01.png' \
--pos-prompt "Realistic, High-quality. A woman is drinking coffee at a café." \
--neg-prompt "Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border." \
--ckpt ${MODEL_BASE}"/hunyuancustom_720P/mp_rank_00_model_states_fp8.pt" \
--video-size 512 896 \
--seed 1024 \
--sample-n-frames 129 \
--infer-steps 30 \
--flow-shift-eval-video 13.0 \
--save-path './results/1gpu_540p' \
--use-fp8
```
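The `--use-fp8` flag loads the quantized checkpoint; per the model tree in [models/README.md](models/README.md), the fp8 weights ship with a companion `*_fp8_map.pt` file in the same directory, so a quick pre-flight check (a sketch) is:
```shell
# Both the fp8 weights and their scale map should sit alongside the full checkpoint.
ls ${MODEL_BASE}/hunyuancustom_720P/
# Expected: mp_rank_00_model_states.pt  mp_rank_00_model_states_fp8.pt  mp_rank_00_model_states_fp8_map.pt
```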
### Run with very low VRAM
```bash
cd HunyuanCustom
export MODEL_BASE="./models"
export CPU_OFFLOAD=1
export PYTHONPATH=./
python hymm_sp/sample_gpu_poor.py \
--ref-image './assets/images/seg_woman_01.png' \
--pos-prompt "Realistic, High-quality. A woman is drinking coffee at a café." \
--neg-prompt "Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border." \
--ckpt ${MODEL_BASE}"/hunyuancustom_720P/mp_rank_00_model_states_fp8.pt" \
--video-size 720 1280 \
--seed 1024 \
--sample-n-frames 129 \
--infer-steps 30 \
--flow-shift-eval-video 13.0 \
--save-path './results/cpu_720p' \
--use-fp8 \
--cpu-offload
```
## Run a Gradio Server
```bash
cd HunyuanCustom
# Single-Subject Video Customization
bash ./scripts/run_gradio.sh
# Video-Driven Video Customization
bash ./scripts/run_gradio.sh --video
# Audio-Driven Video Customization
bash ./scripts/run_gradio.sh --audio
```
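Once the script reports a local URL, you can check that the server is reachable from another shell (a sketch; the actual port is set inside `scripts/run_gradio.sh` and may differ from Gradio's 7860 default assumed here):
```shell
# Expect an HTTP 200 response once the server has finished loading the checkpoints.
curl -I http://localhost:7860
```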
## 🔗 BibTeX
If you find [HunyuanCustom](https://arxiv.org/abs/2505.04512) useful for your research and applications, please cite using this BibTeX:
```BibTeX
@misc{hu2025hunyuancustom,
title={HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation},
author={Teng Hu and Zhentao Yu and Zhengguang Zhou and Sen Liang and Yuan Zhou and Qin Lin and Qinglin Lu},
year={2025},
eprint={2505.04512},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2505.04512},
}
```
## Acknowledgements
We would like to thank the contributors to the [HunyuanVideo](https://github.com/Tencent/HunyuanVideo), [HunyuanVideo-Avatar](https://github.com/Tencent-Hunyuan/HunyuanVideo-Avatar), [MimicMotion](https://github.com/Tencent/MimicMotion), [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [FLUX](https://github.com/black-forest-labs/flux), [Llama](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [Xtuner](https://github.com/InternLM/xtuner), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories for their open research and exploration.

assets/README.md (new file, 33 lines)

@@ -0,0 +1,33 @@
# Download Pretrained Models
All models are stored in `HunyuanCustom/models` by default, and the file structure is as follows:
```shell
HunyuanCustom
  ├──models
  │  ├──README.md
  │  ├──hunyuancustom_720P
  │  │  ├──mp_rank_00_model_states.pt
  │  │  ├──mp_rank_00_model_states_fp8.pt
  │  │  ├──mp_rank_00_model_states_fp8_map.pt
  │  ├──vae_3d
  │  ├──openai_clip-vit-large-patch14
  │  ├──llava-llama-3-8b-v1_1
  ├──...
```
## Download HunyuanCustom model
To download the HunyuanCustom model, first install the huggingface-cli. (Detailed instructions are available [here](https://huggingface.co/docs/huggingface_hub/guides/cli).)
```shell
python -m pip install "huggingface_hub[cli]"
```
Then download the model using the following commands:
```shell
# Switch to the directory named 'HunyuanCustom'
cd HunyuanCustom
# Use the huggingface-cli tool to download the HunyuanCustom model into the HunyuanCustom/models dir.
# The download time may vary from 10 minutes to 1 hour depending on network conditions.
huggingface-cli download tencent/HunyuanCustom --local-dir ./
```

BIN  assets/images/method.png (new file; 1.6 MiB)
BIN  assets/images/poodle.png (new file; 265 KiB)
BIN  assets/images/seg_boy.png (new file; 169 KiB)
BIN  eight further image files (names not shown; 174 KiB to 871 KiB each)
BIN  assets/material/logo.png (new file; 52 KiB)
BIN  assets/material/method.png (new file; 1.6 MiB)
BIN  assets/material/teaser.png (new file; 722 KiB)
BIN  assets/videos/seg_man_01.mp4 (Stored with Git LFS; new file)
BIN  three further binary files (names not shown)

configuration.json (new file, 1 line)

@@ -0,0 +1 @@
{"framework": "pytorch", "task": "text-to-video-synthesis", "allow_remote": true}

models/README.md (new file, 33 lines)

@@ -0,0 +1,33 @@
# Download Pretrained Models
All models are stored in `HunyuanCustom/models` by default, and the file structure is as follows:
```shell
HunyuanCustom
  ├──models
  │  ├──README.md
  │  ├──hunyuancustom_720P
  │  │  ├──mp_rank_00_model_states.pt
  │  │  ├──mp_rank_00_model_states_fp8.pt
  │  │  ├──mp_rank_00_model_states_fp8_map.pt
  │  ├──vae_3d
  │  ├──openai_clip-vit-large-patch14
  │  ├──llava-llama-3-8b-v1_1
  ├──...
```
## Download HunyuanCustom model
To download the HunyuanCustom model, first install the huggingface-cli. (Detailed instructions are available [here](https://huggingface.co/docs/huggingface_hub/guides/cli).)
```shell
python -m pip install "huggingface_hub[cli]"
```
Then download the model using the following commands:
```shell
# Switch to the directory named 'HunyuanCustom'
cd HunyuanCustom
# Use the huggingface-cli tool to download the HunyuanCustom model into the HunyuanCustom/models dir.
# The download time may vary from 10 minutes to 1 hour depending on network conditions.
huggingface-cli download tencent/HunyuanCustom --local-dir ./
```
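As a quick post-download check, one can verify that the checkpoint files from the tree above actually exist. A small sketch, assuming it is run from the HunyuanCustom root:
```python
# Sanity-check that the expected 720P checkpoint files are present.
# Paths follow the directory tree above; run from the HunyuanCustom root.
from pathlib import Path

expected = [
    "models/hunyuancustom_720P/mp_rank_00_model_states.pt",
    "models/hunyuancustom_720P/mp_rank_00_model_states_fp8.pt",
    "models/hunyuancustom_720P/mp_rank_00_model_states_fp8_map.pt",
]
for rel in expected:
    status = "ok" if Path(rel).exists() else "MISSING"
    print(f"{status:8} {rel}")
```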

BIN
models/hunyuancustom_720P/mp_rank_00_model_states.pt (Stored with Git LFS) Normal file

BIN
models/hunyuancustom_720P/mp_rank_00_model_states_fp8.pt (Stored with Git LFS) Normal file

BIN
models/hunyuancustom_720P/mp_rank_00_model_states_fp8_map.pt (Stored with Git LFS) Normal file

BIN
models/hunyuancustom_audio_720P/mp_rank_00_model_states.pt (Stored with Git LFS) Normal file
(two more binary files added; names not shown in this view)

BIN
models/hunyuancustom_editing_720P/mp_rank_00_model_states.pt (Stored with Git LFS) Normal file
(two more binary files added; names not shown in this view)

View File

@ -0,0 +1,122 @@
---
datasets:
- Lin-Chen/ShareGPT4V
pipeline_tag: image-text-to-text
library_name: xtuner
---
<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
[![Generic badge](https://img.shields.io/badge/GitHub-%20XTuner-black.svg)](https://github.com/InternLM/xtuner)
</div>
## Model
llava-llama-3-8b-v1_1-hf is a LLaVA model fine-tuned from [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) and [CLIP-ViT-Large-patch14-336](https://huggingface.co/openai/clip-vit-large-patch14-336) with [ShareGPT4V-PT](https://huggingface.co/datasets/Lin-Chen/ShareGPT4V) and [InternVL-SFT](https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat#prepare-training-datasets) by [XTuner](https://github.com/InternLM/xtuner).
**Note: This model is in HuggingFace LLaVA format.**
Resources:
- GitHub: [xtuner](https://github.com/InternLM/xtuner)
- Official LLaVA format model: [xtuner/llava-llama-3-8b-v1_1-hf](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-hf)
- XTuner LLaVA format model: [xtuner/llava-llama-3-8b-v1_1](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1)
- GGUF format model: [xtuner/llava-llama-3-8b-v1_1-gguf](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-gguf)
## Details
| Model | Visual Encoder | Projector | Resolution | Pretraining Strategy | Fine-tuning Strategy | Pretrain Dataset | Fine-tune Dataset |
| :-------------------- | ------------------: | --------: | ---------: | ---------------------: | ------------------------: | ------------------------: | -----------------------: |
| LLaVA-v1.5-7B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, Frozen ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) |
| LLaVA-Llama-3-8B | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | LLaVA-PT (558K) | LLaVA-Mix (665K) |
| LLaVA-Llama-3-8B-v1.1 | CLIP-L | MLP | 336 | Frozen LLM, Frozen ViT | Full LLM, LoRA ViT | ShareGPT4V-PT (1246K) | InternVL-SFT (1268K) |
## Results
<div align="center">
<img src="https://github.com/InternLM/xtuner/assets/36994684/a157638c-3500-44ed-bfab-d8d8249f91bb" alt="Image" width=500" />
</div>
| Model | MMBench Test (EN) | MMBench Test (CN) | CCBench Dev | MMMU Val | SEED-IMG | AI2D Test | ScienceQA Test | HallusionBench aAcc | POPE | GQA | TextVQA | MME | MMStar |
| :-------------------- | :---------------: | :---------------: | :---------: | :-------: | :------: | :-------: | :------------: | :-----------------: | :--: | :--: | :-----: | :------: | :----: |
| LLaVA-v1.5-7B | 66.5 | 59.0 | 27.5 | 35.3 | 60.5 | 54.8 | 70.4 | 44.9 | 85.9 | 62.0 | 58.2 | 1511/348 | 30.3 |
| LLaVA-Llama-3-8B | 68.9 | 61.6 | 30.4 | 36.8 | 69.8 | 60.9 | 73.3 | 47.3 | 87.2 | 63.5 | 58.0 | 1506/295 | 38.2 |
| LLaVA-Llama-3-8B-v1.1 | 72.3 | 66.4 | 31.6 | 36.8 | 70.1 | 70.0 | 72.9 | 47.7 | 86.4 | 62.6 | 59.0 | 1469/349 | 45.1 |
## QuickStart
### Chat by `pipeline`
```python
from transformers import pipeline
from PIL import Image
import requests
model_id = "xtuner/llava-llama-3-8b-v1_1-transformers"
pipe = pipeline("image-to-text", model=model_id, device=0)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = ("<|start_header_id|>user<|end_header_id|>\n\n<image>\nWhat are these?<|eot_id|>"
"<|start_header_id|>assistant<|end_header_id|>\n\n")
outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs)
>>> [{'generated_text': 'user\n\n\nWhat are these?assistant\n\nThese are two cats, one brown and one gray, lying on a pink blanket. sleep. brown and gray cat sleeping on a pink blanket.'}]
```
### Chat by pure `transformers`
```python
import requests
from PIL import Image
import torch
from transformers import AutoProcessor, LlavaForConditionalGeneration
model_id = "xtuner/llava-llama-3-8b-v1_1-transformers"
prompt = ("<|start_header_id|>user<|end_header_id|>\n\n<image>\nWhat are these?<|eot_id|>"
"<|start_header_id|>assistant<|end_header_id|>\n\n")
image_file = "http://images.cocodataset.org/val2017/000000039769.jpg"
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
).to(0)
processor = AutoProcessor.from_pretrained(model_id)
raw_image = Image.open(requests.get(image_file, stream=True).raw)
inputs = processor(prompt, raw_image, return_tensors='pt').to(0, torch.float16)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0][2:], skip_special_tokens=True))
>>> These are two cats, one brown and one gray, lying on a pink blanket. sleep. brown and gray cat sleeping on a pink blanket.
```
### Reproduce
Please refer to [docs](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/llava/phi3_mini_4k_instruct_clip_vit_large_p14_336#readme).
## Citation
```bibtex
@misc{2023xtuner,
title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
author={XTuner Contributors},
howpublished = {\url{https://github.com/InternLM/xtuner}},
year={2023}
}
```

View File

@ -0,0 +1,44 @@
{
  "architectures": [
    "LlavaForConditionalGeneration"
  ],
  "ignore_index": -100,
  "image_token_index": 128257,
  "model_type": "llava",
  "pad_token_id": 128258,
  "projector_hidden_act": "gelu",
  "text_config": {
    "architectures": [
      "LlamaForCausalLM"
    ],
    "bos_token_id": 128000,
    "eos_token_id": 128001,
    "intermediate_size": 14336,
    "max_position_embeddings": 8192,
    "model_type": "llama",
    "num_key_value_heads": 8,
    "rms_norm_eps": 1e-05,
    "rope_theta": 500000.0,
    "torch_dtype": "float16",
    "vocab_size": 128320
  },
  "torch_dtype": "float16",
  "transformers_version": "4.40.1",
  "vision_config": {
    "architectures": [
      "CLIPVisionModel"
    ],
    "dropout": 0.0,
    "hidden_size": 1024,
    "image_size": 336,
    "intermediate_size": 4096,
    "model_type": "clip_vision_model",
    "num_attention_heads": 16,
    "num_hidden_layers": 24,
    "patch_size": 14,
    "projection_dim": 768,
    "torch_dtype": "float32"
  },
  "vision_feature_layer": -2,
  "vision_feature_select_strategy": "default"
}
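The fields above map directly onto the `transformers` config classes. A minimal sketch for sanity-checking a downloaded copy, assuming the repo-relative path from the directory tree earlier:
```python
# Load the config above with transformers and echo a few key fields.
# The local path is an assumption matching this repo's layout.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("models/llava-llama-3-8b-v1_1")
print(cfg.model_type)                           # llava
print(cfg.vision_config.image_size)             # 336
print(cfg.text_config.max_position_embeddings)  # 8192
print(cfg.vision_feature_layer)                 # -2
```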

View File

@ -0,0 +1,6 @@
{
  "_from_model_config": true,
  "bos_token_id": 128000,
  "eos_token_id": 128001,
  "transformers_version": "4.40.1"
}

(four more binary files added; names not shown in this view)
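The `model.safetensors.index.json` below is a standard sharded-checkpoint index: `metadata.total_size` records the total byte count across shards, and `weight_map` maps every parameter name to the shard file storing it. A minimal sketch of reading the index to locate a tensor, assuming the repo-relative path used above:
```python
import json

# Find which shard stores a given parameter, using the index file below.
with open("models/llava-llama-3-8b-v1_1/model.safetensors.index.json") as f:
    index = json.load(f)

name = "language_model.lm_head.weight"
shard = index["weight_map"][name]  # e.g. model-00004-of-00004.safetensors
total_gib = index["metadata"]["total_size"] / 1024**3
print(f"{name} -> {shard}; full checkpoint ~{total_gib:.1f} GiB")
```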

View File

@ -0,0 +1,693 @@
{
"metadata": {
"total_size": 16752504832
},
"weight_map": {
"language_model.lm_head.weight": "model-00004-of-00004.safetensors",
"language_model.model.embed_tokens.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.0.input_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.0.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.0.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.0.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.0.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.0.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.1.input_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.1.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.1.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.1.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.1.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.1.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.10.input_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.10.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.10.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.10.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.10.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.10.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.10.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.10.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.10.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.11.input_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.11.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.11.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.11.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.11.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.11.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.11.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.11.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.11.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.12.input_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.12.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.12.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.12.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.12.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.12.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.12.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.12.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.12.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.13.input_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.13.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.13.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.13.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.13.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.13.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.13.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.13.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.13.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.14.input_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.14.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.14.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.14.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.14.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.14.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.14.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.14.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.14.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.15.input_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.15.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.15.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.15.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.15.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.15.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.15.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.15.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.15.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.16.input_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.16.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.16.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.16.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.16.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.16.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.16.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.16.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.16.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.17.input_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.17.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.17.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.17.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.17.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.17.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.17.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.17.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.17.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.18.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.18.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.18.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.18.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.18.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.18.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.18.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.18.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.18.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.19.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.19.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.19.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.19.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.19.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.19.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.19.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.19.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.19.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.2.input_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.2.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.2.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.2.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.2.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.2.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.20.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.20.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.20.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.20.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.20.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.20.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.20.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.20.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.20.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.21.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.21.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.21.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.21.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.21.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.21.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.21.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.21.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.21.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.22.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.22.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.22.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.22.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.22.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.22.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.22.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.22.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.22.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.23.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.23.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.23.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.23.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.23.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.23.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.23.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.23.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.23.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.24.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.24.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.24.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.24.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.24.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.24.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.24.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.24.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.24.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.25.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.25.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.25.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.25.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.25.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.25.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.25.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.25.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.25.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.26.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.26.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.26.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.26.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.26.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.26.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.26.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.26.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.26.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.27.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.27.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.27.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.27.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.27.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.27.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.27.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.27.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.27.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.28.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.28.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.28.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.28.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.28.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.28.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.28.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.28.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.28.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.29.input_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.29.mlp.down_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.29.mlp.gate_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.29.mlp.up_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.29.post_attention_layernorm.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.29.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.29.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.29.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.29.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.3.input_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.3.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.3.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.3.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.3.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.3.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.30.input_layernorm.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.30.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.30.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.30.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.30.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.30.self_attn.k_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.30.self_attn.o_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.30.self_attn.q_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.30.self_attn.v_proj.weight": "model-00003-of-00004.safetensors",
"language_model.model.layers.31.input_layernorm.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.31.mlp.down_proj.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.31.mlp.gate_proj.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.31.mlp.up_proj.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.31.post_attention_layernorm.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.31.self_attn.k_proj.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.31.self_attn.o_proj.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.31.self_attn.q_proj.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.31.self_attn.v_proj.weight": "model-00004-of-00004.safetensors",
"language_model.model.layers.4.input_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.4.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.4.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.4.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.4.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.4.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.5.input_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.5.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.5.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.5.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.5.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.5.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.6.input_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.6.mlp.down_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.6.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.6.mlp.up_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.6.post_attention_layernorm.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.6.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.7.input_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.7.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.7.mlp.gate_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.7.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.7.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.7.self_attn.o_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"language_model.model.layers.8.input_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.8.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.8.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.8.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.8.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.8.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.8.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.8.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.8.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.9.input_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.9.mlp.down_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.9.mlp.gate_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.9.mlp.up_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.9.post_attention_layernorm.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.9.self_attn.k_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.9.self_attn.o_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.9.self_attn.q_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.layers.9.self_attn.v_proj.weight": "model-00002-of-00004.safetensors",
"language_model.model.norm.weight": "model-00004-of-00004.safetensors",
"multi_modal_projector.linear_1.bias": "model-00001-of-00004.safetensors",
"multi_modal_projector.linear_1.weight": "model-00001-of-00004.safetensors",
"multi_modal_projector.linear_2.bias": "model-00001-of-00004.safetensors",
"multi_modal_projector.linear_2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.embeddings.class_embedding": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.embeddings.patch_embedding.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.embeddings.position_embedding.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.0.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.1.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.10.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.11.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.12.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.13.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.14.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.15.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.16.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.17.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.18.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.19.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.2.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.20.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.21.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.22.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.23.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.3.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.4.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.5.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.6.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.7.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.8.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.layer_norm1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.layer_norm1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.layer_norm2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.layer_norm2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.mlp.fc1.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.mlp.fc1.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.mlp.fc2.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.mlp.fc2.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.self_attn.k_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.self_attn.k_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.self_attn.out_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.self_attn.out_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.self_attn.q_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.self_attn.q_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.self_attn.v_proj.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.encoder.layers.9.self_attn.v_proj.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.post_layernorm.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.post_layernorm.weight": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.pre_layrnorm.bias": "model-00001-of-00004.safetensors",
"vision_tower.vision_model.pre_layrnorm.weight": "model-00001-of-00004.safetensors"
}
}
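The `weight_map` above is what lets a loader resolve each parameter to the shard that actually stores it, so a single tensor can be read without touching the other three files. A minimal sketch of that lookup, assuming the standard `model.safetensors.index.json` filename and that the shards sit alongside it (the directory path here is illustrative):

```python
import json
from safetensors import safe_open

# Illustrative local checkpoint directory (assumption, not taken from the index itself)
ckpt_dir = "models/llava-llama-3-8b-v1_1"

with open(f"{ckpt_dir}/model.safetensors.index.json") as f:
    index = json.load(f)

# The weight map ties each parameter name to its shard file
name = "vision_tower.vision_model.post_layernorm.weight"
shard = index["weight_map"][name]  # "model-00001-of-00004.safetensors"

# Open only that shard and read the single tensor lazily
with safe_open(f"{ckpt_dir}/{shard}", framework="pt") as sf:
    tensor = sf.get_tensor(name)
print(shard, tuple(tensor.shape))
```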

View File

@ -0,0 +1,45 @@
{
"_valid_processor_keys": [
"images",
"do_resize",
"size",
"resample",
"do_center_crop",
"crop_size",
"do_rescale",
"rescale_factor",
"do_normalize",
"image_mean",
"image_std",
"do_convert_rgb",
"return_tensors",
"data_format",
"input_data_format"
],
"crop_size": {
"height": 336,
"width": 336
},
"do_center_crop": true,
"do_convert_rgb": true,
"do_normalize": true,
"do_rescale": true,
"do_resize": true,
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_processor_type": "CLIPImageProcessor",
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"processor_class": "LlavaProcessor",
"resample": 3,
"rescale_factor": 0.00392156862745098,
"size": {
"shortest_edge": 336
}
}
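The fields above pin down the LLaVA image pipeline exactly: resize the shortest edge to 336 with bicubic resampling (`resample: 3` is PIL's BICUBIC), center-crop to 336×336, rescale by 1/255, then normalize with the CLIP channel statistics. A rough torchvision equivalent, shown only to make the fields concrete (a sketch, not the canonical loader):

```python
from PIL import Image
from torchvision import transforms

# Mirrors the preprocessor_config.json fields above
preprocess = transforms.Compose([
    transforms.Resize(336, interpolation=transforms.InterpolationMode.BICUBIC),  # do_resize + size.shortest_edge
    transforms.CenterCrop(336),  # do_center_crop + crop_size
    transforms.ToTensor(),       # do_rescale with factor 1/255
    transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],   # image_mean
                         std=[0.26862954, 0.26130258, 0.27577711]),  # image_std
])

img = Image.open("example.jpg").convert("RGB")  # do_convert_rgb
pixel_values = preprocess(img)                  # tensor of shape (3, 336, 336)
```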

View File

@ -0,0 +1,24 @@
{
"bos_token": {
"content": "<|begin_of_text|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "<|end_of_text|>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": "<unk>"
}
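Once this directory is loaded as a tokenizer, the tokens declared above surface as attributes on the tokenizer object. A quick check, assuming the surrounding tokenizer files live in the `models/llava-llama-3-8b-v1_1` directory added by this commit:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("models/llava-llama-3-8b-v1_1")
print(tok.bos_token)  # <|begin_of_text|>
print(tok.eos_token)  # <|end_of_text|>
print(tok.pad_token)  # <pad>
```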

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,145 @@
---
tags:
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: playing music, playing sports
example_title: Cat & Dog
---
# Model Card: CLIP
Disclaimer: This model card was taken and modified from the official CLIP repository; it can be found [here](https://github.com/openai/CLIP/blob/main/model-card.md).
## Model Details
The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment; to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they're being deployed within.
### Model Date
January 2021
### Model Type
The base model uses a ViT-L/14 Transformer architecture as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.
The original implementation had two variants: one using a ResNet image encoder and the other using a Vision Transformer. This repository has the variant with the Vision Transformer.
### Documents
- [Blog Post](https://openai.com/blog/clip/)
- [CLIP Paper](https://arxiv.org/abs/2103.00020)
### Use with Transformers
```python
from PIL import Image
import requests
from transformers import CLIPProcessor, CLIPModel

# Load the pretrained model together with its paired processor
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

# Fetch an example image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Prepare both the candidate captions and the image in one batch
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs.logits_per_image  # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)  # we can take the softmax to get the label probabilities
```
## Model Use
### Intended Use
The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models; the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
#### Primary intended uses
The primary intended users of these models are AI researchers.
We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
### Out-of-Scope Use Cases
**Any** deployed use case of the model, whether commercial or not, is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP's performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out of scope regardless of the performance of the model. This is because the use of artificial intelligence for tasks such as these is currently premature given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
## Data
The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of the people and societies most connected to the internet, which tend to skew towards more developed nations and younger, male users.
### Data Mission Statement
Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
## Performance and Limitations
### Performance
We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets, ranging from OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
- Food101
- CIFAR10
- CIFAR100
- Birdsnap
- SUN397
- Stanford Cars
- FGVC Aircraft
- VOC2007
- DTD
- Oxford-IIIT Pet dataset
- Caltech101
- Flowers102
- MNIST
- SVHN
- IIIT5K
- Hateful Memes
- SST-2
- UCF101
- Kinetics700
- Country211
- CLEVR Counting
- KITTI Distance
- STL-10
- RareAct
- Flickr30
- MSCOCO
- ImageNet
- ImageNet-A
- ImageNet-R
- ImageNet Sketch
- ObjectNet (ImageNet Overlap)
- Youtube-BB
- ImageNet-Vid
## Limitations
CLIP and our analysis of it have a number of limitations. CLIP currently struggles with certain tasks such as fine-grained classification and counting objects. CLIP also poses issues with regard to fairness and bias, which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP, and there is evidence suggesting that linear probes can underestimate model performance.
### Bias and Fairness
We find that the performance of CLIP, and the specific biases it exhibits, can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details are captured in the Broader Impacts section of the paper.)
We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (we default to the race categories as they are constructed in the Fairface dataset) in order to assess the quality of performance across different demographics. We found accuracy >96% across all races for gender classification, with Middle Eastern having the highest accuracy (98.4%) and White having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate the performance of the model across people and surface potential risks, not to demonstrate an endorsement of or enthusiasm for such tasks.
## Feedback
### Where to send questions or comments about the model
Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9).

View File

@ -0,0 +1,171 @@
{
"_name_or_path": "clip-vit-large-patch14/",
"architectures": [
"CLIPModel"
],
"initializer_factor": 1.0,
"logit_scale_init_value": 2.6592,
"model_type": "clip",
"projection_dim": 768,
"text_config": {
"_name_or_path": "",
"add_cross_attention": false,
"architectures": null,
"attention_dropout": 0.0,
"bad_words_ids": null,
"bos_token_id": 0,
"chunk_size_feed_forward": 0,
"cross_attention_hidden_size": null,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout": 0.0,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 2,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"hidden_act": "quick_gelu",
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_factor": 1.0,
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 77,
"min_length": 0,
"model_type": "clip_text_model",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beam_groups": 1,
"num_beams": 1,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 1,
"prefix": null,
"problem_type": null,
"projection_dim" : 768,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.16.0.dev0",
"use_bfloat16": false,
"vocab_size": 49408
},
"text_config_dict": {
"hidden_size": 768,
"intermediate_size": 3072,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"projection_dim": 768
},
"torch_dtype": "float32",
"transformers_version": null,
"vision_config": {
"_name_or_path": "",
"add_cross_attention": false,
"architectures": null,
"attention_dropout": 0.0,
"bad_words_ids": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"cross_attention_hidden_size": null,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout": 0.0,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": null,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"hidden_act": "quick_gelu",
"hidden_size": 1024,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"image_size": 224,
"initializer_factor": 1.0,
"initializer_range": 0.02,
"intermediate_size": 4096,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"layer_norm_eps": 1e-05,
"length_penalty": 1.0,
"max_length": 20,
"min_length": 0,
"model_type": "clip_vision_model",
"no_repeat_ngram_size": 0,
"num_attention_heads": 16,
"num_beam_groups": 1,
"num_beams": 1,
"num_hidden_layers": 24,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": null,
"patch_size": 14,
"prefix": null,
"problem_type": null,
"projection_dim" : 768,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.16.0.dev0",
"use_bfloat16": false
},
"vision_config_dict": {
"hidden_size": 1024,
"intermediate_size": 4096,
"num_attention_heads": 16,
"num_hidden_layers": 24,
"patch_size": 14,
"projection_dim": 768
}
}
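The nested `text_config`/`vision_config` blocks are what `transformers` reads back into a `CLIPConfig`: a 12-layer, 768-wide text tower next to a 24-layer, 1024-wide ViT with patch size 14, both projected into a shared 768-dimensional space. A small sketch for inspecting that, assuming the config is loaded from the local `models/openai_clip-vit-large-patch14` directory:

```python
from transformers import CLIPConfig

config = CLIPConfig.from_pretrained("models/openai_clip-vit-large-patch14")

print(config.projection_dim)                   # 768, the shared embedding width
print(config.text_config.num_hidden_layers)    # 12
print(config.vision_config.num_hidden_layers)  # 24
print(config.vision_config.patch_size)         # 14 -> (224 / 14)^2 = 256 patches per image
```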

BIN
models/openai_clip-vit-large-patch14/flax_model.msgpack (Stored with Git LFS) Normal file

Binary file not shown.

File diff suppressed because it is too large

BIN
models/openai_clip-vit-large-patch14/model.safetensors (Stored with Git LFS) Normal file

Binary file not shown.

View File

@ -0,0 +1,19 @@
{
"crop_size": 224,
"do_center_crop": true,
"do_normalize": true,
"do_resize": true,
"feature_extractor_type": "CLIPFeatureExtractor",
"image_mean": [
0.48145466,
0.4578275,
0.40821073
],
"image_std": [
0.26862954,
0.26130258,
0.27577711
],
"resample": 3,
"size": 224
}
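Note that this legacy config crops to 224, the original CLIP resolution, unlike the 336-pixel LLaVA processor earlier in this commit; the bare integer `size`/`crop_size` is an older schema that current `transformers` still accepts. A quick check (a sketch, assuming the local directory path):

```python
from PIL import Image
from transformers import CLIPImageProcessor

proc = CLIPImageProcessor.from_pretrained("models/openai_clip-vit-large-patch14")
out = proc(images=Image.open("example.jpg"), return_tensors="pt")
print(out["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
```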

BIN
models/openai_clip-vit-large-patch14/pytorch_model.bin (Stored with Git LFS) Normal file

Binary file not shown.

View File

@ -0,0 +1 @@
{"bos_token": {"content": "<|startoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": "<|endoftext|>"}

BIN
models/openai_clip-vit-large-patch14/tf_model.h5 (Stored with Git LFS) Normal file

Binary file not shown.

File diff suppressed because it is too large

View File

@ -0,0 +1,34 @@
{
"unk_token": {
"content": "<|endoftext|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": true,
"__type": "AddedToken"
},
"bos_token": {
"content": "<|startoftext|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": true,
"__type": "AddedToken"
},
"eos_token": {
"content": "<|endoftext|>",
"single_word": false,
"lstrip": false,
"rstrip": false,
"normalized": true,
"__type": "AddedToken"
},
"pad_token": "<|endoftext|>",
"add_prefix_space": false,
"errors": "replace",
"do_lower_case": true,
"name_or_path": "openai/clip-vit-base-patch32",
"model_max_length": 77,
"special_tokens_map_file": "./special_tokens_map.json",
"tokenizer_class": "CLIPTokenizer"
}
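With `model_max_length` set to 77, text headed for the CLIP text tower is padded or truncated to 77 BPE tokens, with `<|startoftext|>` and `<|endoftext|>` added automatically. A quick check of that behavior, assuming the tokenizer files in this same directory:

```python
from transformers import CLIPTokenizer

tok = CLIPTokenizer.from_pretrained("models/openai_clip-vit-large-patch14")
enc = tok(["a photo of a cat"], padding="max_length", truncation=True, return_tensors="pt")
print(enc["input_ids"].shape)               # torch.Size([1, 77])
print(tok.decode(enc["input_ids"][0][:4]))  # <|startoftext|>a photo of
```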

File diff suppressed because one or more lines are too long

View File

@ -0,0 +1,34 @@
{
"_class_name": "AutoencoderKLCausal3D",
"_diffusers_version": "0.4.2",
"act_fn": "silu",
"block_out_channels": [
128,
256,
512,
512
],
"down_block_types": [
"DownEncoderBlockCausal3D",
"DownEncoderBlockCausal3D",
"DownEncoderBlockCausal3D",
"DownEncoderBlockCausal3D"
],
"in_channels": 3,
"latent_channels": 16,
"layers_per_block": 2,
"norm_num_groups": 32,
"out_channels": 3,
"sample_size": 256,
"sample_tsize": 64,
"up_block_types": [
"UpDecoderBlockCausal3D",
"UpDecoderBlockCausal3D",
"UpDecoderBlockCausal3D",
"UpDecoderBlockCausal3D"
],
"scaling_factor": 0.476986,
"time_compression_ratio": 4,
"mid_block_add_attention": true,
"mid_block_causal_attn": true
}
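The config above fixes the VAE's compression: four encoder stages with three spatial downsamples (2^3 = 8x per spatial axis), a temporal compression ratio of 4, and 16 latent channels. Since `AutoencoderKLCausal3D` is a project-specific class rather than a stock diffusers model, the sketch below shows only the shape arithmetic the config implies; the first-frame handling is an assumption typical of causal video VAEs, not taken from this file:

```python
# Latent-shape arithmetic implied by the VAE config above (sketch only; the
# actual AutoencoderKLCausal3D class lives in the project code, not diffusers).
block_out_channels = [128, 256, 512, 512]
spatial_down = 2 ** (len(block_out_channels) - 1)  # 3 downsampling stages -> 8x
time_down = 4                                      # time_compression_ratio
latent_channels = 16

frames, height, width = 65, 720, 1280              # example 720p clip
latent_shape = (
    latent_channels,
    (frames - 1) // time_down + 1,  # causal VAEs typically keep the first frame (assumption)
    height // spatial_down,
    width // spatial_down,
)
print(latent_shape)  # (16, 17, 90, 160)
```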

BIN
models/vae_3d/hyvae_v1_0801/pytorch_model.pt (Stored with Git LFS) Normal file

Binary file not shown.