How Do You Download and Use Flux From Huggingface for AI Development


Posted Nov 6, 2024



To download Flux from Hugging Face, start by navigating to the Hugging Face website and clicking on the "Models" tab. This will take you to a page where you can search for and select the Flux model you want to use.

Flux (FLUX.1) is a family of text-to-image diffusion models from Black Forest Labs that can be used for AI image-generation tasks. You can find more information about the specific features and capabilities of each Flux variant on its Hugging Face model card.

To get started with using Flux in Python, you'll need to install the required libraries on your local machine. This can be done by running `pip install diffusers transformers accelerate` in your terminal; the `diffusers` library provides the pipeline that loads Flux, with `transformers` and `accelerate` as supporting dependencies.
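With those libraries installed, loading the model looks roughly like the sketch below. This assumes the `diffusers` `FluxPipeline` API and the `black-forest-labs/FLUX.1-schnell` model ID; check the model card for current requirements, and note the first call downloads several gigabytes of weights.

```python
def load_flux_pipeline(model_id: str = "black-forest-labs/FLUX.1-schnell"):
    """Download (and cache) a Flux model from the Hugging Face Hub.

    Imports are kept inside the function so the sketch can be defined
    without the heavy dependencies installed; requires
    `pip install diffusers transformers accelerate torch` to actually run.
    """
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
    pipe.enable_model_cpu_offload()  # trades some speed for lower VRAM use
    return pipe

# Usage (not run here -- triggers the model download on first call):
# pipe = load_flux_pipeline()
# image = pipe("a lighthouse at dusk", num_inference_steps=4).images[0]
# image.save("lighthouse.png")
```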


Types of Flux

There are different types of Flux models available, each with its own trade-offs. For lower-end GPUs, the developer Kijai offers a compressed option referred to here as Type D: Flux.

You can download these models from the Hugging Face repository, but be aware that they come in a compressed form with reduced image quality. To use them, you'll also need to download a VAE (ae.safetensors file) and place it in the "ComfyUI/models/vae" folder.

There are two models available from Kijai: Flux Dev and Flux Schnell, each with its own VAE file.
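The downloads above can also be scripted with `huggingface_hub`. The sketch below assumes `pip install huggingface_hub`; the repository and file names (`Kijai/flux-fp8`, `flux1-dev-fp8.safetensors`) are illustrative and should be checked against the actual Hugging Face pages.

```python
from pathlib import Path

def comfyui_model_path(comfyui_root: str, subfolder: str) -> str:
    """Build and create a download target such as ComfyUI/models/vae."""
    dest = Path(comfyui_root) / "models" / subfolder
    dest.mkdir(parents=True, exist_ok=True)
    return str(dest)

def download_flux_fp8(comfyui_root: str = "ComfyUI") -> None:
    # Lazy import so the sketch is definable without the package installed.
    from huggingface_hub import hf_hub_download

    # Compressed fp8 checkpoint from Kijai (illustrative file name)
    hf_hub_download("Kijai/flux-fp8", "flux1-dev-fp8.safetensors",
                    local_dir=comfyui_model_path(comfyui_root, "unet"))
    # Matching VAE (ae.safetensors) from the official FLUX.1-schnell repo
    hf_hub_download("black-forest-labs/FLUX.1-schnell", "ae.safetensors",
                    local_dir=comfyui_model_path(comfyui_root, "vae"))
```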


Type B: Flux Quantized


Type B: Flux Quantized is a good option if you want to balance image quality against GPU memory use. This GGUF-quantized version of Flux produces good-quality images at full resolution while fitting on cards with less VRAM.

Because its GPU memory footprint is low, it won't overwhelm your system, and rendering time is noticeably shorter, making it a good choice for those with lower-end hardware.

The GGUF loader runs the model on the GPU for good overall performance, and a quantized T5 text encoder is available to reduce VRAM consumption further, making this a more efficient option than the full-precision versions.

To get started with Type B: Flux Quantized, open a command prompt inside your ComfyUI installation (in the portable build, type "cmd" in the Explorer address bar of the "ComfyUI_windows_portable" folder), then clone the loader node into the "custom_nodes" folder with `git clone https://github.com/city96/ComfyUI-GGUF.git`.

Here are the steps to follow:

  • Open a command prompt in the "ComfyUI_windows_portable" folder by typing "cmd" in the Explorer address bar.
  • Clone the ComfyUI-GGUF repository into "custom_nodes" using the command above.
  • Download one or both pre-quantized models from the repository.
  • Save the model files to the "ComfyUI/models/unet" folder.
  • Restart and refresh ComfyUI for the changes to take effect.
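The download step can be scripted as below, assuming `pip install huggingface_hub`; the repository and file names (`city96/FLUX.1-dev-gguf`, `flux1-dev-Q4_K_S.gguf`) are examples to verify on the Hub, and the quantization level should be chosen to fit your VRAM.

```python
def download_gguf(comfyui_root: str = "ComfyUI",
                  repo_id: str = "city96/FLUX.1-dev-gguf",
                  filename: str = "flux1-dev-Q4_K_S.gguf") -> str:
    """Fetch a pre-quantized Flux GGUF checkpoint for the ComfyUI-GGUF loader.

    Lazy import so the sketch is definable without the package installed.
    Returns the local path of the downloaded file.
    """
    from huggingface_hub import hf_hub_download

    # GGUF diffusion models load from the unet folder in ComfyUI
    return hf_hub_download(repo_id, filename,
                           local_dir=f"{comfyui_root}/models/unet")
```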

Type D

Type D: Flux is a good option for those with lower-end GPUs: Kijai provides compressed FP8 versions of the Flux Dev and Flux Schnell models.


These models can be downloaded from Kijai's Hugging Face repository, but be aware that there will be some image quality reduction.

You'll also need to download the respective VAE (ae.safetensors file) from the TYPE A section, which can be put into the "ComfyUI/models/vae" folder.

To avoid running out of memory, you'll also need to download the text-encoder (Clip) models; in particular, choose the FP8 T5 model if you have less than 32GB of system RAM.

Here are the specific Clip models you'll need to download:

  • clip_l.safetensors
  • t5xxl_fp16 (for systems with more than 32GB of system RAM)
  • t5xxl_fp8_e4m3fn (for systems with 32GB of system RAM or less)

Save these models into the "ComfyUI/models/clip" folder.
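The RAM-based rule above can be sketched as a tiny helper (file names as listed; `pick_t5_encoder` is a hypothetical name for illustration):

```python
def pick_t5_encoder(system_ram_gb: float) -> str:
    """Pick the fp16 T5 text encoder only with more than 32GB of system RAM,
    otherwise fall back to the fp8 variant."""
    if system_ram_gb > 32:
        return "t5xxl_fp16.safetensors"
    return "t5xxl_fp8_e4m3fn.safetensors"

# clip_l.safetensors goes into ComfyUI/models/clip in either case.
```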

Once you have all the necessary models, you can load the Flux model into the Load diffusion model node and select the "fp8_e5m2" or "fp8_e4m3fn" option if you're getting out-of-memory errors.


Frequently Asked Questions

How to download using Hugging Face?

To download files using Hugging Face, use the hf_hub_download() function, which caches the file on disk and returns its local file path. This function simplifies the download process and ensures efficient access to files.
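As a minimal sketch (assuming `pip install huggingface_hub`; the repo and file names in the usage comment are examples):

```python
def fetch_from_hub(repo_id: str, filename: str) -> str:
    """Download a single file from the Hub, cache it, and return its local path.

    Lazy import so the sketch is definable without the package installed.
    """
    from huggingface_hub import hf_hub_download

    return hf_hub_download(repo_id=repo_id, filename=filename)

# Usage (downloads on first call, returns the cached path afterwards):
# path = fetch_from_hub("black-forest-labs/FLUX.1-schnell", "ae.safetensors")
```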

Carrie Chambers

Senior Writer

