mirror of
https://www.modelscope.cn/black-forest-labs/FLUX.1-Kontext-dev.git
synced 2026-04-02 18:12:57 +08:00
Update README.md
@@ -95,7 +95,6 @@ For VRAM saving measures and speed ups check out the [diffusers docs](https://hu
# Risks
Black Forest Labs is committed to the responsible development of generative AI technology. Prior to releasing FLUX.1 Kontext, we evaluated and mitigated a number of risks in our models and services, including the generation of unlawful content. We implemented a series of pre-release mitigations to help prevent misuse by third parties, with additional post-release mitigations to help address residual risks:
1. **Pre-training mitigation**. We filtered pre-training data for multiple categories of “not safe for work” (NSFW) content to help prevent a user generating unlawful content in response to text prompts or uploaded images.
2. **Post-training mitigation.** We have partnered with the Internet Watch Foundation, an independent nonprofit organization dedicated to preventing online abuse, to filter known child sexual abuse material (CSAM) from post-training data. Subsequently, we undertook multiple rounds of targeted fine-tuning to provide additional mitigation against potential abuse. By inhibiting certain behaviors and concepts in the trained model, these techniques can help to prevent a user generating synthetic CSAM or nonconsensual intimate imagery (NCII) from a text prompt, or transforming an uploaded image into synthetic CSAM or NCII.