Model import

Peter Major edited this page Jun 16, 2023 · 3 revisions

Introduction

Note: this page is just a simplified version of the Microsoft Olive Stable Diffusion ONNX conversion guide

To generate images with Unpaint, you will need to install a Stable Diffusion model.

Prerequisites

Install the following with default settings; they are fine for this tutorial.

There are several ways to deploy Python on Windows, including installing it from the Microsoft Store, downloading it from Python.org, or using Anaconda/Miniconda. This guide describes the process with Miniconda.

Preparation

  • Open the Anaconda Prompt from the Start Menu
  • Create a working folder for the model conversion
mkdir C:\olive-sd
cd C:\olive-sd
  • Create a conda environment and activate it
conda create -n olive-env python=3.8
conda activate olive-env
  • Install pip
conda install pip
  • If you do not have Git, you can install it with conda as well
conda install git
  • Install Microsoft Olive
git clone https://github.com/microsoft/olive --branch v0.2.1
cd olive/examples/directml/stable_diffusion

pip install olive-ai[directml]==0.2.1
pip install -r requirements.txt

Converting a model

Follow these steps to convert a model to the ONNX format expected by Unpaint:

  • Find a model you would like to convert and note down its name, for example: Stable Diffusion 1.5
  • Go to HuggingFace.co, search for that name, and open the result you find most promising, e.g. https://huggingface.co/runwayml/stable-diffusion-v1-5
  • Note down the username / repository part of the URL; in the above case it is runwayml/stable-diffusion-v1-5
  • Open and activate the conda environment as described above, then go to the olive/examples/directml/stable_diffusion directory.
  • Execute the following command: python stable_diffusion.py --optimize --model_id runwayml/stable-diffusion-v1-5
  • Wait patiently, as the conversion will take some time
  • Once the process completes, the output will be placed into the following directory: models\optimized\<user>\<repository>, so in this case it will be models\optimized\runwayml\stable-diffusion-v1-5
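The output location follows directly from the model ID. A minimal bash sketch of that mapping (forward slashes for portability; the example script writes the same layout with backslashes on Windows):

```shell
# Derive the Olive output directory from a HuggingFace model ID,
# assuming the models/optimized/<user>/<repository> layout described above.
MODEL_ID="runwayml/stable-diffusion-v1-5"
USER_PART="${MODEL_ID%%/*}"      # part before the slash: runwayml
REPO_PART="${MODEL_ID##*/}"      # part after the slash: stable-diffusion-v1-5
OUT_DIR="models/optimized/${USER_PART}/${REPO_PART}"
echo "$OUT_DIR"
```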

Tip: many models do not include a VAE, in which case the conversion falls back to the original VAE shipped with Stable Diffusion. That VAE is known to produce blurrier output and artifacts (such as blue spots) in many generated images. To avoid this, you may use this updated version of the VAE: clone the target model, overwrite the contents of the vae_decoder folder with the updated VAE, and run the above process on the local directory by specifying .\your_model_here as the model ID.
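The VAE swap in the tip above amounts to replacing one folder before re-running the conversion. A bash sketch with stand-in paths (both directories and the file inside them are placeholders, not real downloads):

```shell
# MODEL_DIR stands in for your local clone of the target model;
# UPDATED_VAE stands in for the folder holding the improved VAE files.
MODEL_DIR="./your_model_here"
UPDATED_VAE="./updated_vae"
mkdir -p "$MODEL_DIR/vae_decoder" "$UPDATED_VAE"
echo "updated vae weights" > "$UPDATED_VAE/weights.bin"   # placeholder file

# Overwrite the model's vae_decoder contents with the updated VAE
rm -rf "$MODEL_DIR/vae_decoder"
cp -r "$UPDATED_VAE" "$MODEL_DIR/vae_decoder"

ls "$MODEL_DIR/vae_decoder"
# Then run the conversion against the local copy:
#   python stable_diffusion.py --optimize --model_id .\your_model_here
```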

Importing a converted model into Unpaint

Open the Model Library in Unpaint, select the Import model from disk option, then choose the output directory generated above.

Sharing the model

If you have converted a model and want to use it on other computers, you may share it with others on HuggingFace.co. To do this, create a repository and upload your converted model. The directory and file names should remain the same, e.g. the vae_decoder directory should be placed in the root of your repository.
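Before uploading, it can help to sanity-check that the expected subfolders are present. A bash sketch of such a check (only vae_decoder is named in this guide; the other folder names, unet and text_encoder, are assumptions based on a typical converted layout, so adjust the list to match your model):

```shell
# Demo layout only; point MODEL_DIR at your real converted model instead.
MODEL_DIR="./converted-model"
mkdir -p "$MODEL_DIR/vae_decoder" "$MODEL_DIR/unet" "$MODEL_DIR/text_encoder"

STATUS=ok
for d in vae_decoder unet text_encoder; do
  if [ -d "$MODEL_DIR/$d" ]; then
    echo "$d: found"
  else
    echo "$d: MISSING"
    STATUS=incomplete
  fi
done
echo "check: $STATUS"
```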

Once this is done, you can use the Import model from HuggingFace option and specify the model ID as user/repository corresponding to your model.
