Digest
1. Keywords: FLUX.1, fine-tuning, Replicate, training, model
2. Summary: This article explains how to fine-tune FLUX.1 on Replicate, covering how to prepare training data, create a training on the web and via the API, use the trained model, share it, generate faster, handle licensing and commercial use, and understand pricing.
3. Main points:
– Prepare training data: gather diverse images that represent the concept you want to teach the model; 12-20 images work best; use large JPEG or PNG files; optionally add a same-named .txt caption file for each image; zip everything into a single file.
– Create a training:
– On the web: select or create a model on Replicate, upload the training-data zip, set the training parameters, and click "Create training".
– Via the API: set the environment variable, create a new model, then create a training run with the relevant parameters.
– Use the trained model: once training completes, run it directly on Replicate or via the API.
– Share the model: make it public and share its URL.
– Faster generation: use the FLUX.1 [schnell] model with fewer inference steps.
– Licensing and commercial use: images generated on Replicate may be used commercially; images generated locally from downloaded weights may not.
– Pricing: billed per second; total duration depends on the number of training steps.
Article URL: https://replicate.com/blog/fine-tune-flux
Source: replicate.com
Author: Replicate's blog
Published: 2024/8/15 22:41
Tags: FLUX.1, fine-tuning, text-to-image model, Replicate platform, AI image generation
The original article follows.
How to fine-tune FLUX.1
You can fine-tune FLUX.1 on Replicate by just uploading some images, either on the web or via an API.
If you’re not familiar with Replicate, we make it easy to run AI as an API. You don’t have to go looking for a beefy GPU, you don’t have to deal with environments and containers, you don’t have to worry about scaling. You write normal code, with normal APIs, and pay only for what you use.
Prepare your training data
To start fine-tuning, you’ll need a collection of images that represent the concept you want to teach the model. These images should be diverse enough to cover different aspects of the concept. For example, if you’re fine-tuning on a specific character, include images in various settings, poses, and lighting.
Here are some guidelines:
- Use 12-20 images for best results
- Use large images if possible
- Use JPEG or PNG formats
- Optionally, create a corresponding .txt file for each image with the same name, containing the caption
Once you have your images (and optional captions), zip them up into a single file.
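The zipping step can be done with any archiver; as a minimal sketch, here is one way to do it with Python's standard library (the folder name "training-images" is a hypothetical example):

```python
# Minimal sketch: bundle images (and optional same-named .txt caption files)
# into a single zip. "training-images" is a hypothetical folder name.
import zipfile
from pathlib import Path

folder = Path("training-images")
folder.mkdir(exist_ok=True)  # assumed to already contain your images

with zipfile.ZipFile("training-images.zip", "w") as zf:
    for path in sorted(folder.iterdir()):
        if path.suffix.lower() in {".jpg", ".jpeg", ".png", ".txt"}:
            zf.write(path, arcname=path.name)  # flat layout, no subfolders
```

Keeping the files at the top level of the archive (no nested folders) avoids surprises when the trainer unpacks it.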
Create a training on the web
To start the training process on the web, navigate to Ostris’s FLUX.1 [dev] trainer on Replicate.
First, select a model as your destination or create a new one by typing the name in the model selector field.
Next, upload the zip file containing your training data as the input_images, then set up the training parameters.
The trigger_word refers to the object, style, or concept you are training on. Pick a string that isn't a real word, like TOK, or something related to what's being trained, like CYBRPNK. The trigger word you specify will be associated with all images during training. Then when you run your fine-tuned model, you can include the trigger word in prompts to activate your concept.
For steps, a good starting point is 1000.
Leave the learning_rate, batch_size, and resolution at their default values. Leave autocaptioning enabled unless you want to provide your own captions.
If you want to save your model on Hugging Face, enter your Hugging Face token and set the repository ID.
Once you’ve filled out the form, click “Create training” to begin the process of fine-tuning.
Create a training via an API
Alternatively, you can create a training from your own code with an API.
Make sure you have your REPLICATE_API_TOKEN set in your environment. Find it in your account settings.
export REPLICATE_API_TOKEN=r8_***************************
Create a new model that will serve as the destination for your fine-tuned weights. This is where your trained model will live once the process is complete.
import replicate

model = replicate.models.create(
    owner="yourusername",
    name="flux-your-model-name",
    visibility="public",  # or "private" if you prefer
    hardware="gpu-t4",  # Replicate will override this for fine-tuned models
    description="A fine-tuned FLUX.1 model",
)
print(f"Model created: {model.name}")
print(f"Model URL: https://replicate.com/{model.owner}/{model.name}")
Now that you have your model, start the training process by creating a new training run. You’ll need to provide the input images, the number of steps, and any other desired parameters.
# Now use this model as the destination for your training
training = replicate.trainings.create(
    version="ostris/flux-dev-lora-trainer:4ffd32160efd92e956d39c5338a9b8fbafca58e03f791f6d8011f3e20e8ea6fa",
    input={
        "input_images": open("/path/to/your/local/training-images.zip", "rb"),
        "steps": 1000,
        "hf_token": "YOUR_HUGGING_FACE_TOKEN",  # optional
        "hf_repo_id": "YOUR_HUGGING_FACE_REPO_ID",  # optional
    },
    destination=f"{model.owner}/{model.name}",
)
print(f"Training started: {training.status}")
print(f"Training URL: https://replicate.com/p/{training.id}")
Note that it doesn’t matter which hardware you pick for your model at this time, because we route to H100s for all our FLUX.1 fine-tunes. Training for this many steps typically takes 20-30 minutes and costs under $2.
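While the training runs, you can poll its status from your own code. The loop below is a generic sketch: get_training stands in for a call like replicate.trainings.get(training.id) from the official client, and the terminal statuses follow Replicate's documented status lifecycle.

```python
import time

# Terminal states in Replicate's status lifecycle (assumption based on
# the documented succeeded/failed/canceled end states).
TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def wait_for_training(get_training, training_id, poll_seconds=30):
    """Poll until the training reaches a terminal state.

    get_training: callable returning a dict with a "status" key —
    a stand-in for the official client's replicate.trainings.get.
    """
    while True:
        t = get_training(training_id)
        if t["status"] in TERMINAL_STATUSES:
            return t
        time.sleep(poll_seconds)
```

With the official client you would pass a wrapper around replicate.trainings.get and read the status attribute from the returned object.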
Use your trained model
Once the training is complete, you can use your trained model directly on Replicate, just like any other model.
You can run it on the web:
- Go to your model page on Replicate (e.g., https://replicate.com/yourusername/flux-your-model-name).
- For the prompt input, include your trigger word (such as "bad 70s food") to activate your fine-tuned concept.
- Adjust any other inputs as needed.
- Click “Run” to generate your image.
Or, with an API. For example, using the Python client:
import replicate

output = replicate.run(
    "yourusername/flux-your-model-name:version_id",
    input={
        "prompt": "A portrait photo of a space station, bad 70s food",
        "num_inference_steps": 28,
        "guidance_scale": 7.5,
        "model": "dev",
    },
)
print(f"Generated image URL: {output}")
Replace yourusername/flux-your-model-name:version_id with your actual model details.
You can find more information about running it with an API on the “API” tab of your model page.
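If you'd rather not use the Python client, the same call can go through Replicate's HTTP API directly. The helper below is a hedged sketch using only the standard library; it assumes the documented POST /v1/predictions endpoint with bearer-token auth, reading the token from REPLICATE_API_TOKEN.

```python
import json
import os
import urllib.request

API_URL = "https://api.replicate.com/v1/predictions"

def build_prediction_request(version_id, prompt):
    """Build (but don't send) a prediction request for urllib.request.urlopen."""
    body = json.dumps({"version": version_id, "input": {"prompt": prompt}})
    return urllib.request.Request(
        API_URL,
        data=body.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['REPLICATE_API_TOKEN']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send it: urllib.request.urlopen(build_prediction_request(...))
```

The response is JSON describing the prediction, including its id and status; check the API tab on your model page for the exact input schema of your fine-tune.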
Share your model
If you want others to be able to discover and use your new fine-tuned model, you'll need to make it public.
If you created your new model using the web-based training form, it will be private by default.
To make your model public, go to the model settings page and set the visibility to “Public”.
Once your model is public, you can share it with others by sending them the URL of the model page, and it will appear in the Explore section of the site and in the collection of Flux fine-tunes.
Using FLUX.1 [schnell] for faster generation
You can use your FLUX.1 [dev] LoRA with the smaller FLUX.1 [schnell] model to generate images faster and cheaper. Just change the model parameter from dev to schnell when you generate, and lower num_inference_steps to something small, like 4.
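Concretely, with the Python client the only change from the earlier dev example is the input dict (a sketch; the model identifier remains a placeholder):

```python
# Only the input changes versus the "dev" run: model -> "schnell",
# and far fewer inference steps.
schnell_input = {
    "prompt": "A portrait photo of a space station, bad 70s food",
    "model": "schnell",        # was "dev"
    "num_inference_steps": 4,  # schnell works with very few steps
}
# output = replicate.run("yourusername/flux-your-model-name:version_id",
#                        input=schnell_input)
```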
Note that outputs will still be under the non-commercial license of FLUX.1 [dev].
Examples and use cases
Check out our examples gallery for inspiration. You can see how others have fine-tuned FLUX.1 to create different styles, characters, a never-ending parade of cute animals, and more.
Licensing and commercial use
If you generate images on Replicate with FLUX.1 models and their fine-tunes, then you can use the images commercially.
If you download the weights off Replicate and generate images on your own computer, you can’t use the images commercially.
Pricing
Fine-tuned FLUX.1 models on Replicate are charged per second, for both fine-tuning and generating images. The total time a fine-tuning run will take varies based on the number of steps you train.
Check our pricing page for more details.
What’s next?
Follow us on X or sign up for our newsletter to keep up to date with new features, models, and fine-tuning techniques.
Happy training!