# You can save the transformer weights as bf16 or 8-bit with the --do_8_bit flag
# You can also save with scaled 8-bit using the --do_8bit_scaled flag
# Call like ...
# python convert_flux_diffusers_to_orig.py /path/to/diffusers/checkpoint /path/to/flux1-dev-fp8.safetensors /output/path/my_finetune.safetensors --do_8_bit
# Call ...