Core ML Stable Diffusion

Run Stable Diffusion on Apple Silicon with Core ML

[Blog Post] [BibTeX]

This repository comprises:

  • python_coreml_stable_diffusion, a Python package for converting PyTorch models to Core ML format and performing image generation with Hugging Face diffusers in Python
  • StableDiffusion, a Swift package that developers can add to their Xcode projects as a dependency to deploy image generation capabilities in their apps. The Swift package relies on the Core ML model files generated by python_coreml_stable_diffusion

If you run into issues during installation or runtime, please refer to the FAQ section. Please refer to the System Requirements section before getting started.

System Requirements


Model Conversion:

| macOS | Python | coremltools |
| ----- | ------ | ----------- |
| 13.1  | 3.8    | 7.0         |

Project Build:

| macOS | Xcode | Swift |
| ----- | ----- | ----- |
| 13.1  | 14.3  | 5.8   |

Target Device Runtime:

| macOS | iPadOS, iOS |
| ----- | ----------- |
| 13.1  | 16.2        |

Target Device Runtime (With Memory Improvements):

| macOS | iPadOS, iOS |
| ----- | ----------- |
| 14.0  | 17.0        |

Target Device Hardware Generation:

| Mac | iPad | iPhone |
| --- | ---- | ------ |
| M1  | M1   | A14    |

Performance Benchmarks


stabilityai/stable-diffusion-2-1-base (512x512)

| Device | --compute-unit | --attention-implementation | End-to-End Latency (s) | Diffusion Speed (iter/s) |
| --- | --- | --- | --- | --- |
| iPhone 12 Mini | CPU_AND_NE | SPLIT_EINSUM_V2 | 18.5* | 1.44 |
| iPhone 12 Pro Max | CPU_AND_NE | SPLIT_EINSUM_V2 | 15.4 | 1.45 |
| iPhone 13 | CPU_AND_NE | SPLIT_EINSUM_V2 | 10.8* | 2.53 |
| iPhone 13 Pro Max | CPU_AND_NE | SPLIT_EINSUM_V2 | 10.4 | 2.55 |
| iPhone 14 | CPU_AND_NE | SPLIT_EINSUM_V2 | 8.6 | 2.57 |
| iPhone 14 Pro Max | CPU_AND_NE | SPLIT_EINSUM_V2 | 7.9 | 2.69 |
| iPad Pro (M1) | CPU_AND_NE | SPLIT_EINSUM_V2 | 11.2 | 2.19 |
| iPad Pro (M2) | CPU_AND_NE | SPLIT_EINSUM_V2 | 7.0 | 3.07 |
  • This benchmark was conducted by Apple and Hugging Face using public beta versions of iOS 17.0, iPadOS 17.0 and macOS 14.0 Seed 8 in August 2023.
  • The performance data was collected using the benchmark branch of the Diffusers app.
  • Swift code is not fully optimized, introducing up to ~10% overhead unrelated to Core ML model execution.
  • The median latency value across 5 back-to-back end-to-end executions is reported.
  • The image generation procedure follows the standard configuration: 20 inference steps, 512x512 output image resolution, 77 text token sequence length, classifier-free guidance (batch size of 2 for unet).
  • The actual prompt length does not impact performance because the Core ML model is converted with a static shape that computes the forward pass for all of the 77 elements (tokenizer.model_max_length) in the text token sequence regardless of the actual length of the input text.
  • Weights are compressed to 6 bit precision. Please refer to this section for details.
  • Activations are in float16 precision for both the GPU and the Neural Engine.
  • * indicates that the reduceMemory option was enabled, which loads and unloads models just-in-time to avoid memory shortage. This added up to 2 seconds to the end-to-end latency.
  • In the benchmark table, we report the best performing --compute-unit and --attention-implementation values per device. The former does not modify the Core ML model and can be applied during runtime. The latter modifies the Core ML model. Note that the best performing compute unit is model version and hardware-specific.
  • Note that the performance optimizations in this repository (e.g. --attention-implementation) are generally applicable to Transformers and not customized to Stable Diffusion. Better performance may be observed upon custom kernel tuning. Therefore, these numbers do not represent peak HW capability.
  • Performance may vary across different versions of Stable Diffusion due to architecture changes in the model itself. Each reported number is specific to the model version mentioned in that context.
  • Performance may vary due to factors like increased system load from other applications or suboptimal device thermal state.
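As noted above, the Core ML text encoder always computes all 77 token positions regardless of prompt length. A minimal stdlib sketch of the padding step (the BOS/EOS token ids shown are CLIP's conventional special-token values, used here for illustration and not taken from this repository):

```python
MAX_LENGTH = 77  # tokenizer.model_max_length for the CLIP text encoder

def pad_token_ids(token_ids, bos=49406, eos=49407, max_length=MAX_LENGTH):
    """Pad (or truncate) a tokenized prompt to the static length the Core ML model expects."""
    ids = [bos] + token_ids[: max_length - 2] + [eos]
    # The remainder is filled with padding tokens; the model computes all 77 positions anyway.
    return ids + [eos] * (max_length - len(ids))

padded = pad_token_ids([320, 1125, 539, 320, 1929])  # illustrative token ids
assert len(padded) == MAX_LENGTH
```

Because the forward pass always covers the full static sequence, a 5-token and a 60-token prompt cost the same amount of compute.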

stabilityai/stable-diffusion-xl-base-1.0-ios (768x768)

| Device | --compute-unit | --attention-implementation | End-to-End Latency (s) | Diffusion Speed (iter/s) |
| --- | --- | --- | --- | --- |
| iPhone 12 Pro | CPU_AND_NE | SPLIT_EINSUM | 116* | 0.50 |
| iPhone 13 Pro Max | CPU_AND_NE | SPLIT_EINSUM | 86* | 0.68 |
| iPhone 14 Pro Max | CPU_AND_NE | SPLIT_EINSUM | 77* | 0.83 |
| iPhone 15 Pro Max | CPU_AND_NE | SPLIT_EINSUM | 31 | 0.85 |
| iPad Pro (M1) | CPU_AND_NE | SPLIT_EINSUM | 36 | 0.69 |
| iPad Pro (M2) | CPU_AND_NE | SPLIT_EINSUM | 27 | 0.98 |
  • This benchmark was conducted by Apple and Hugging Face using iOS 17.0.2 and iPadOS 17.0.2 in September 2023.
  • The performance data was collected using the benchmark branch of the Diffusers app.
  • The median latency value across 5 back-to-back end-to-end executions is reported.
  • The image generation procedure follows this configuration: 20 inference steps, 768x768 output image resolution, 77 text token sequence length, classifier-free guidance (batch size of 2 for unet).
  • Unet.mlmodelc is compressed to 4.04 bit precision following the Mixed-Bit Palettization algorithm recipe published here
  • All models except for Unet.mlmodelc are compressed to 16 bit precision
  • madebyollin/sdxl-vae-fp16-fix by @madebyollin was used as the source PyTorch model for VAEDecoder.mlmodelc in order to enable float16 weight and activation quantization for the VAE model.
  • --attention-implementation SPLIT_EINSUM is chosen in lieu of SPLIT_EINSUM_V2 due to the prohibitively long compilation time of the latter
  • * indicates that the reduceMemory option was enabled, which loads and unloads models just-in-time to avoid memory shortage. This added significant overhead to the end-to-end latency. Note the end-to-end latency difference between the iPad Pro (M1) and the iPhone 13 Pro Max despite their nearly identical diffusion speeds.
  • The actual prompt length does not impact performance because the Core ML model is converted with a static shape that computes the forward pass for all of the 77 elements (tokenizer.model_max_length) in the text token sequence regardless of the actual length of the input text.
  • In the benchmark table, we report the best performing --compute-unit and --attention-implementation values per device. The former does not modify the Core ML model and can be applied during runtime. The latter modifies the Core ML model. Note that the best performing compute unit is model version and hardware-specific.
  • Note that the performance optimizations in this repository (e.g. --attention-implementation) are generally applicable to Transformers and not customized to Stable Diffusion. Better performance may be observed upon custom kernel tuning. Therefore, these numbers do not represent peak HW capability.
  • Performance may vary across different versions of Stable Diffusion due to architecture changes in the model itself. Each reported number is specific to the model version mentioned in that context.
  • Performance may vary due to factors like increased system load from other applications or suboptimal device thermal state.

stabilityai/stable-diffusion-xl-base-1.0 (1024x1024)

| Device | --compute-unit | --attention-implementation | End-to-End Latency (s) | Diffusion Speed (iter/s) |
| --- | --- | --- | --- | --- |
| MacBook Pro (M1 Max) | CPU_AND_GPU | ORIGINAL | 46 | 0.46 |
| MacBook Pro (M2 Max) | CPU_AND_GPU | ORIGINAL | 37 | 0.57 |
| Mac Studio (M1 Ultra) | CPU_AND_GPU | ORIGINAL | 25 | 0.89 |
| Mac Studio (M2 Ultra) | CPU_AND_GPU | ORIGINAL | 20 | 1.11 |
  • This benchmark was conducted by Apple and Hugging Face using public beta versions of iOS 17.0, iPadOS 17.0 and macOS 14.0 in July 2023.
  • The performance data was collected by running the StableDiffusion Swift pipeline.
  • The median latency value across 3 back-to-back end-to-end executions is reported.
  • The image generation procedure follows the standard configuration: 20 inference steps, 1024x1024 output image resolution, classifier-free guidance (batch size of 2 for unet).
  • Weights and activations are in float16 precision.
  • Performance may vary across different versions of Stable Diffusion due to architecture changes in the model itself. Each reported number is specific to the model version mentioned in that context.
  • Performance may vary due to factors like increased system load from other applications or suboptimal device thermal state. Given these factors, we do not report sub-second variance in latency.
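Across the tables above, end-to-end latency roughly decomposes into the diffusion loop (inference steps divided by diffusion speed) plus a fixed overhead for text encoding, VAE decoding and model loading. A back-of-the-envelope sketch of that relationship (the overhead value is an illustrative assumption, not a measured constant):

```python
def estimated_latency(num_steps, diffusion_iter_per_s, fixed_overhead_s):
    """Rough end-to-end latency model: diffusion loop plus constant overhead."""
    return num_steps / diffusion_iter_per_s + fixed_overhead_s

# iPad Pro (M2) at 3.07 iter/s with 20 steps: the diffusion loop alone accounts
# for ~6.5 s of the ~7.0 s reported in the 512x512 table above.
print(round(estimated_latency(20, 3.07, 0.5), 1))
```

This is why reduceMemory (which adds load/unload time) shifts the latency columns without changing the iter/s columns.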

Weight Compression (6-bits and higher)


coremltools-7.0 supports advanced weight compression techniques for pruning, palettization and linear 8-bit quantization. For these techniques, coremltools.optimize.torch.* includes APIs that require fine-tuning to maintain accuracy at higher compression rates whereas coremltools.optimize.coreml.* includes APIs that are applied post-training and are data-free.

We demonstrate how data-free post-training palettization, implemented in coremltools.optimize.coreml.palettize_weights, enables greatly improved performance for Stable Diffusion on mobile devices. This API implements the Fast Exact k-Means algorithm for optimal weight clustering, which yields more accurate palettes. Passing --quantize-nbits {2,4,6,8} during conversion applies this compression to the unet and text_encoder models.

For best results, we recommend training-time palettization: coremltools.optimize.torch.palettization.DKMPalettizer if fine-tuning your model is feasible. This API implements the Differentiable k-Means (DKM) learned palettization algorithm. In this exercise, we stick to post-training palettization for the sake of simplicity and ease of reproducibility.

The Neural Engine is capable of accelerating models with low-bit palettization: 1, 2, 4, 6 or 8 bits. With iOS 17 and macOS 14, compressed weights for Core ML models can be just-in-time decompressed during runtime (as opposed to ahead-of-time decompression upon load) to match the precision of activation tensors. This yields significant memory savings and enables models to run on devices with smaller RAM (e.g. iPhone 12 Mini). In addition, compressed weights are faster to fetch from memory which reduces the latency of memory bandwidth-bound layers. The just-in-time decompression behavior depends on the compute unit, layer type and hardware generation.
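To illustrate the idea (not the implementation) behind palettization: each weight is snapped to the nearest entry of a small learned palette and stored as a low-bit index into it. The toy 1-D k-means below is a simplified sketch of that idea; the actual API is coremltools.optimize.coreml.palettize_weights, which runs a fast exact algorithm per-layer on the Core ML model:

```python
import random

def palettize(weights, nbits, iters=20):
    """Toy 1-D k-means: map each weight to one of 2**nbits palette values.
    Sketches the idea behind post-training palettization; not the coremltools algorithm."""
    k = 2 ** nbits
    centroids = sorted(random.sample(weights, k))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w in weights:
            clusters[min(range(k), key=lambda i: abs(w - centroids[i]))].append(w)
        centroids = [sum(c) / len(c) if c else centroids[i] for i, c in enumerate(clusters)]
    # Each weight is then stored as an nbits-wide index into the palette.
    indices = [min(range(k), key=lambda i: abs(w - centroids[i])) for w in weights]
    return centroids, indices

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(1000)]
palette, idx = palettize(weights, nbits=2)  # 4-entry palette for illustration
assert len(palette) == 4 and max(idx) < 4
```

Storing float16 weights as 6-bit indices plus a small per-tensor palette is what yields the roughly 16/6 ≈ 2.7x size reduction discussed in this section.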

| Weight Precision | --compute-unit | stabilityai/stable-diffusion-2-1-base generating "a high quality photo of a surfing dog" |
| --- | --- | --- |
| 6-bit | cpuAndNeuralEngine | (image not shown) |
| 16-bit | cpuAndNeuralEngine | (image not shown) |
| 16-bit | cpuAndGPU | (image not shown) |

Note that there are minor differences across the 16-bit (float16) and 6-bit results. These differences are comparable to the differences across float16 and float32, or across compute units, as exemplified above. We recommend a minimum of 6 bits for palettizing Stable Diffusion. Smaller numbers of bits (1, 2 and 4) will require either fine-tuning or advanced palettization techniques such as MBP.


Advanced Weight Compression (Lower than 6-bits)


This section describes an advanced compression algorithm called Mixed-Bit Palettization (MBP) built on top of the Post-Training Weight Palettization tools and using the Weights Metadata API from coremltools.

MBP builds a per-layer "palettization recipe" by picking a suitable number of bits among the Neural Engine supported bit-widths of 1, 2, 4, 6 and 8 in order to achieve the minimum average bit-width while maintaining a desired level of signal strength. The signal strength is measured by comparing the compressed model's output to that of the original float16 model. Given the same random seed and text prompts, PSNR between denoised latents is computed. The compression rate will depend on the model version as well as the tolerance for signal loss (drop in PSNR) since this algorithm is adaptive.
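The signal-strength metric can be sketched in a few lines; the exact peak and reference conventions used by the pre-analysis script are assumptions here:

```python
import math

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB between two (flattened) latent tensors."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    peak = max(abs(r) for r in reference)
    return 20 * math.log10(peak) - 10 * math.log10(mse)

ref = [0.5, -1.2, 3.0, 0.1]       # float16 reference latents (illustrative values)
approx = [0.49, -1.21, 2.98, 0.12]  # compressed-model latents (illustrative values)
print(round(psnr(ref, approx), 1))  # → 45.6
```

Higher is better: in this section's terms, ~82 dB corresponds to the uncompressed float16 baseline and each recipe trades a few dB for a lower average bit-width.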

(Image grid comparing generations at 3.41-bit, 4.50-bit, 6.55-bit and 16-bit (original) precision.)

For example, the original float16 stabilityai/stable-diffusion-xl-base-1.0 model has an ~82 dB signal strength. Naively applying linear 8-bit quantization to the Unet model drops the signal to ~65 dB. Instead, applying MBP yields an average of 2.81-bits quantization while maintaining a signal strength of ~67 dB. This technique generally yields better results compared to using --quantize-nbits during model conversion but requires a "pre-analysis" run that takes up to a few hours on a single GPU (mps or cuda).

Here is the signal strength (PSNR in dB) versus model size reduction (% of float16 size) for stabilityai/stable-diffusion-xl-base-1.0. The {1,2,4,6,8}-bit curves are generated by progressively palettizing more layers using a palette with a fixed number of bits. The layers were ordered in ascending order of their isolated impact on end-to-end signal strength, so that the cumulative compression's impact is delayed as much as possible. The mixed-bit curve is based on falling back to a higher number of bits as soon as a layer's isolated impact on end-to-end signal integrity drops below a threshold. Note that all curves based on palettization outperform linear 8-bit quantization at the same model size, except for 1-bit.

Here are the steps for applying this technique on another model version:

Step 1: Run the pre-analysis script to generate "recipes" with varying signal strength:

python -m python_coreml_stable_diffusion.mixed_bit_compression_pre_analysis --model-version <model-version> -o <output-dir>

For popular base models, you may find pre-computed pre-analysis results here. Fine-tuned models are likely to honor the recipes of their corresponding base models, but this is untested.

Step 2: The resulting JSON file from Step 1 will list "baselines", e.g.:

{
  "model_version": "stabilityai/stable-diffusion-xl-base-1.0",
  "baselines": {
    "original": 82.2,
    "linear_8bit": 66.025,
    "recipe_6.55_bit_mixedpalette": 79.9,
    "recipe_5.52_bit_mixedpalette": 78.2,
    "recipe_4.89_bit_mixedpalette": 76.8,
    "recipe_4.41_bit_mixedpalette": 75.5,
    "recipe_4.04_bit_mixedpalette": 73.2,
    "recipe_3.67_bit_mixedpalette": 72.2,
    "recipe_3.32_bit_mixedpalette": 71.4,
    "recipe_3.19_bit_mixedpalette": 70.4,
    "recipe_3.08_bit_mixedpalette": 69.6,
    "recipe_2.98_bit_mixedpalette": 68.6,
    "recipe_2.90_bit_mixedpalette": 67.8,
    "recipe_2.83_bit_mixedpalette": 67.0,
    "recipe_2.71_bit_mixedpalette": 66.3
  }
}

Among these baselines, select a recipe based on your desired signal strength. We recommend palettizing to ~4 bits depending on the use case, even when the signal integrity of lower-bit recipes remains higher than the linear 8-bit quantization baseline.
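Selecting a recipe from these baselines amounts to taking the lowest average bit-width whose PSNR still clears your threshold. A stdlib sketch over the JSON from Step 1 (abbreviated here):

```python
import json

def select_recipe(baselines_json, min_psnr):
    """Return the lowest-bit recipe whose PSNR is at least min_psnr, or None."""
    data = json.loads(baselines_json)
    recipes = {k: v for k, v in data["baselines"].items() if k.startswith("recipe_")}
    # Recipe keys embed their average bit-width, e.g. "recipe_4.04_bit_mixedpalette".
    eligible = [(float(k.split("_")[1]), k) for k, v in recipes.items() if v >= min_psnr]
    return min(eligible)[1] if eligible else None

doc = '''{"model_version": "stabilityai/stable-diffusion-xl-base-1.0",
          "baselines": {"original": 82.2, "linear_8bit": 66.025,
                        "recipe_4.04_bit_mixedpalette": 73.2,
                        "recipe_3.67_bit_mixedpalette": 72.2}}'''
print(select_recipe(doc, min_psnr=73))  # → recipe_4.04_bit_mixedpalette
```

The size reduction follows directly from the bit-width ratio: for example, a 4.50-bit recipe keeps roughly 4.50/16 of the float16 Unet's weight storage.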

Finally, apply the selected recipe to the float16 Core ML model as follows:

python -m python_coreml_stable_diffusion.mixed_bit_compression_apply --mlpackage-path <path-to-float16-unet-mlpackage> -o <output-dir> --pre-analysis-json-path <path-to--pre-analysis-json> --selected-recipe <selected-recipe-string-key>

An example <selected-recipe-string-key> would be "recipe_4.50_bit_mixedpalette" which achieves an average of 4.50-bits compression (compressed from ~5.2GB to ~1.46GB for SDXL). Please note that signal strength does not directly map to image-text alignment. Always verify that your MBP-compressed model variant is accurately generating images for your test prompts.

Using Stable Diffusion 3


Model Conversion

Stable Diffusion 3 combines new models with ones carried over from earlier versions. For the text encoders, conversion uses a command similar to previous versions, with the addition of the --sd3-version flag.

python -m python_coreml_stable_diffusion.torch2coreml --model-version stabilityai/stable-diffusion-3-medium --bundle-resources-for-swift-cli --convert-text-encoder --sd3-version -o <output-dir>

For the new models (MMDiT, a new VAE with 16 channels, and the T5 text encoder), there are a number of new CLI flags that utilize the DiffusionKit repo:

  • --sd3-version: Indicates to the converter to treat this as a Stable Diffusion 3 model
  • --convert-mmdit: Convert the MMDiT model
  • --convert-vae-decoder: Convert the new VAE model (this will use the 16 channel version if --sd3-version is set)
  • --include-t5: Downloads and includes a pre-converted T5 text encoder in the conversion

e.g.:

python -m python_coreml_stable_diffusion.torch2coreml --model-version stabilityai/stable-diffusion-3-medium --bundle-resources-for-swift-cli --convert-vae-decoder --convert-mmdit  --include-t5 --sd3-version -o <output-dir>

To convert the full pipeline at 1024x1024 resolution, the following command may be used:

python -m python_coreml_stable_diffusion.torch2coreml --model-version stabilityai/stable-diffusion-3-medium --bundle-resources-for-swift-cli --convert-text-encoder --convert-vae-decoder --convert-mmdit --include-t5 --sd3-version --latent-h 128 --latent-w 128 -o <output-dir>

Keep in mind that the MMDiT model is quite large and will require increasingly more memory and time to convert as the latent resolution increases.

Also note that currently the MMDiT model requires fp32 and therefore only supports CPU_AND_GPU compute units and ORIGINAL attention implementation (the default for this pipeline).

Swift Inference

Swift inference for Stable Diffusion 3 is similar to the previous versions. The only difference is that the --sd3 flag should be used to indicate that the model is a Stable Diffusion 3 model.

swift run StableDiffusionSample <prompt> --resource-path <output-mlpackages-directory/Resources> --output-path <output-dir> --compute-units cpuAndGPU --sd3

Using Stable Diffusion XL


Model Conversion

e.g.:

python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-vae-decoder --convert-text-encoder --xl-version --model-version stabilityai/stable-diffusion-xl-base-1.0 --refiner-version stabilityai/stable-diffusion-xl-refiner-1.0 --bundle-resources-for-swift-cli --attention-implementation {ORIGINAL,SPLIT_EINSUM} -o <output-dir>
  • --xl-version: Additional argument to pass to the conversion script when specifying an XL model
  • --refiner-version: Additional argument to pass to the conversion script when specifying an XL refiner model, required for "Ensemble of Expert Denoisers" inference.
  • --attention-implementation: ORIGINAL is recommended for cpuAndGPU for deployment on Mac
  • --attention-implementation: SPLIT_EINSUM is recommended for cpuAndNeuralEngine for deployment on iPhone & iPad
  • --attention-implementation: SPLIT_EINSUM_V2 is not recommended for Stable Diffusion XL because of prohibitively long compilation time
  • Tip: Adding --latent-h 96 --latent-w 96 is recommended for iOS and iPadOS deployment which leads to 768x768 generation as opposed to the default 1024x1024.
  • Tip: Due to known float16 overflow issues in the original Stable Diffusion XL VAE, the model conversion script enforces float32 precision. Using a custom VAE version such as madebyollin/sdxl-vae-fp16-fix by @madebyollin via --custom-vae-version madebyollin/sdxl-vae-fp16-fix will restore the default float16 precision for VAE.

Swift Inference

swift run StableDiffusionSample <prompt> --resource-path <output-mlpackages-directory/Resources> --output-path <output-dir> --compute-units {cpuAndGPU,cpuAndNeuralEngine} --xl
  • Only the base model is required; the refiner model is optional and will be used by default if present in the resource directory
  • ControlNet for XL is not yet supported

Python Inference

python -m python_coreml_stable_diffusion.pipeline --prompt <prompt> --compute-unit {CPU_AND_GPU,CPU_AND_NE} -o <output-dir> -i <output-mlpackages-directory/Resources> --model-version stabilityai/stable-diffusion-xl-base-1.0
  • The refiner model is not yet supported
  • ControlNet for XL is not yet supported

Using ControlNet


Example results using the prompt "a high quality photo of a surfing dog" conditioned on the scribble (leftmost):

ControlNet allows users to condition image generation with Stable Diffusion on signals such as edge maps, depth maps, segmentation maps, scribbles and pose. Thanks to @ryu38's contribution, both the Python CLI and the Swift package support ControlNet models. Please refer to this section for details on setting up Stable Diffusion with ControlNet.

Note that ControlNet is not yet supported for Stable Diffusion XL.

Using the System Multilingual Text Encoder


With iOS 17 and macOS 14, NaturalLanguage framework introduced the NLContextualEmbedding which provides Transformer-based textual embeddings for Latin (20 languages), Cyrillic (4 languages) and CJK (3 languages) scripts. The WWDC23 session titled Explore Natural Language multilingual models demonstrated how this powerful new model can be used by developers to train downstream tasks such as multilingual image generation with Stable Diffusion.

The code to reproduce this demo workflow is made available in this repository. There are several ways in which this workflow can be implemented. Here is an example:

Step 1: Curate an image-text dataset with the desired languages.

Step 2: Pre-compute the NLContextualEmbedding values and replace the text strings with these embedding vectors in your dataset.

Step 3: Fine-tune a base model from Hugging Face Hub that is compatible with the StableDiffusionPipeline by using your new dataset and replacing the default text_encoder with your pre-computed NLContextualEmbedding values.

Step 4: In order to be able to swap the text_encoder of a base model without training new layers, the base model's text_encoder.hidden_size must match that of NLContextualEmbedding. If it doesn't, you will need to train a linear projection layer to map between the two dimensionalities. After fine-tuning, this linear layer should be converted to CoreML as follows:

python -m python_coreml_stable_diffusion.multilingual_projection --input-path <path-to-projection-torchscript> --output-dir <output-dir>

The command above will yield a MultilingualTextEncoderProjection.mlmodelc file under --output-dir and this should be colocated with the rest of the Core ML model assets that were generated through --bundle-resources-for-swift-cli.
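Conceptually, the projection trained in Step 4 is a single affine map from the NLContextualEmbedding dimensionality to the base model's text_encoder.hidden_size. A minimal sketch with illustrative (not real) dimensions:

```python
def linear_projection(embedding, weight, bias):
    """Map one embedding vector of length d_in to hidden size d_out via y = W x + b."""
    return [sum(w * x for w, x in zip(row, embedding)) + b
            for row, b in zip(weight, bias)]

d_in, d_out = 4, 3  # illustrative; in practice: NLContextualEmbedding dim -> hidden_size
W = [[0.0] * d_in for _ in range(d_out)]
for i in range(min(d_in, d_out)):
    W[i][i] = 1.0  # identity-like initialization, purely for the sketch
y = linear_projection([1.0, 2.0, 3.0, 4.0], W, [0.0] * d_out)
assert y == [1.0, 2.0, 3.0]
```

In the real workflow, W and b come from fine-tuning, and the layer is exported as a TorchScript module before being converted by the command above.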

Step 5: The multilingual system text encoder can now be invoked by setting useMultilingualTextEncoder to true when initializing a pipeline, or by setting --use-multilingual-text-encoder in the CLI. Note that the model assets are distributed over-the-air, so the first invocation will trigger an asset download of less than 100MB.


Using Ready-made Core ML Models from Hugging Face Hub


🤗 Hugging Face ran the conversion procedure on the following models and made the Core ML weights publicly available on the Hub. If you would like to convert a version of Stable Diffusion that is not already available on the Hub, please refer to the Converting Models to Core ML section.

If you want to use any of those models you may download the weights and proceed to generate images with Python or Swift.

There are several variants in each model repository. You may clone the whole repos using git and git lfs to download all variants, or selectively download the ones you need.

To clone the repos using git, please follow this process:

Step 1: Install the git lfs extension for your system.

git lfs stores large files outside the main git repo and downloads them from the appropriate server after you clone or checkout. It is available in most package managers; check the installation page for details.

Step 2: Enable git lfs by running this command once:

git lfs install

Step 3: Use git clone to download a copy of the repo that includes all model variants. For Stable Diffusion version 1.4, you'd issue the following command in your terminal:

git clone https://huggingface.co/apple/coreml-stable-diffusion-v1-4

If you prefer to download specific variants instead of cloning the repos, you can use the huggingface_hub Python library. For example, to do generation in Python using the ORIGINAL attention implementation (read this section for details), you could use the following helper code:

from huggingface_hub import snapshot_download
from pathlib import Path

repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "original/packages"

model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(repo_id, allow_patterns=f"{variant}/*", local_dir=model_path, local_dir_use_symlinks=False)
print(f"Model downloaded at {model_path}")

model_path is the path in your local filesystem where the checkpoint was saved. Please refer to this post for additional details.

Converting Models to Core ML


Step 1: Create a Python environment and install dependencies:

conda create -n coreml_stable_diffusion python=3.8 -y
conda activate coreml_stable_diffusion
cd /path/to/cloned/ml-stable-diffusion/repository
pip install -e .

Step 2: Log in to or register for your Hugging Face account, generate a User Access Token and use this token to set up Hugging Face API access by running huggingface-cli login in a Terminal window.

Step 3: Navigate to the version of Stable Diffusion that you would like to use on Hugging Face Hub and accept its Terms of Use. The default model version is CompVis/stable-diffusion-v1-4. The model version may be changed by the user as described in the next step.

Step 4: Execute the following command from the Terminal to generate Core ML model files (.mlpackage)

python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker --model-version <model-version-string-from-hub> -o <output-mlpackages-directory>

WARNING: This command will download several GB worth of PyTorch checkpoints from Hugging Face. Please ensure that you are on Wi-Fi and have enough disk space.

This generally takes 15-20 minutes on an M1 MacBook Pro. Upon successful execution, the 4 neural network models that comprise Stable Diffusion will have been converted from PyTorch to Core ML (.mlpackage) and saved into the specified <output-mlpackages-directory>. Some additional notable arguments:

  • --model-version: The model version name as published on the Hugging Face Hub

  • --refiner-version: The refiner version name as published on the Hugging Face Hub. This is optional and if specified, this argument will convert and bundle the refiner unet alongside the model unet.

  • --bundle-resources-for-swift-cli: Compiles all 4 models and bundles them, along with the necessary resources for text tokenization, into <output-mlpackages-directory>/Resources, which should be provided as input to the Swift package. This flag is not necessary for the diffusers-based Python pipeline. However, using these compiled models in Python will significantly speed up inference.

  • --quantize-nbits: Quantizes the weights of the unet and text_encoder models down to 2, 4, 6 or 8 bits using a globally optimal k-means clustering algorithm. By default all models are weight-quantized to 16 bits even if this argument is not specified. Please refer to the Weight Compression (6-bits and higher) section for details and further guidance on weight compression.

  • --chunk-unet: Splits the Unet model into two approximately equal chunks (each with less than 1GB of weights) for mobile-friendly deployment. This is required for Neural Engine deployment on iOS and iPadOS if weights are not quantized to 6 bits or less (--quantize-nbits {2,4,6}). It is not required for macOS. The Swift CLI is able to consume both the chunked and regular versions of the Unet model but prioritizes the former. Note that the chunked unet is not compatible with the Python pipeline, because the Python pipeline is intended for macOS only.

  • --attention-implementation: Defaults to SPLIT_EINSUM which is the implementation described in Deploying Transformers on the Apple Neural Engine. --attention-implementation SPLIT_EINSUM_V2 yields 10-30% improvement for mobile devices, still targeting the Neural Engine. --attention-implementation ORIGINAL will switch to an alternative implementation that should be used for CPU or GPU deployment on some Mac devices. Please refer to the Performance Benchmark section for further guidance.

  • --check-output-correctness: Compares original PyTorch model's outputs to final Core ML model's outputs. This flag increases RAM consumption significantly so it is recommended only for debugging purposes.

  • --convert-controlnet: Converts ControlNet models specified after this option. This can also convert multiple models if you specify like --convert-controlnet lllyasviel/sd-controlnet-mlsd lllyasviel/sd-controlnet-depth.

  • --unet-support-controlnet: Enables a converted UNet model to receive additional inputs from ControlNet. This is required for generating images with ControlNet, and the resulting model is saved with a different name, *_control-unet.mlpackage, distinct from the normal UNet. This UNet model cannot work without ControlNet, so please use the normal UNet for plain txt2img.

  • --convert-vae-encoder: Not required for text-to-image applications. Required for image-to-image applications in order to map the input image to the latent space.

Image Generation with Python


Run text-to-image generation using the example Python pipeline based on diffusers:

python -m python_coreml_stable_diffusion.pipeline --prompt "a photo of an astronaut riding a horse on mars" -i <core-ml-model-directory> -o </path/to/output/image> --compute-unit ALL --seed 93

Please refer to the help menu for all available arguments: python -m python_coreml_stable_diffusion.pipeline -h. Some notable arguments:

  • -i: Should point to the -o directory from Step 4 of Converting Models to Core ML section from above. If you specified --bundle-resources-for-swift-cli during conversion, then use the resulting Resources folder (which holds the compiled .mlmodelc files). The compiled models load much faster after first use.
  • --model-version: If you overrode the default model version while converting models to Core ML, you will need to specify the same model version here.
  • --compute-unit: Note that the most performant compute unit for this particular implementation may differ across different hardware. CPU_AND_GPU or CPU_AND_NE may be faster than ALL. Please refer to the Performance Benchmark section for further guidance.
  • --scheduler: If you would like to experiment with different schedulers, you may specify it here. For available options, please see the help menu. You may also specify a custom number of inference steps by --num-inference-steps which defaults to 50.
  • --controlnet: ControlNet models specified with this option are used in image generation. Use this option in the format --controlnet lllyasviel/sd-controlnet-mlsd lllyasviel/sd-controlnet-depth and make sure to use --controlnet-inputs in conjunction.
  • --controlnet-inputs: Image inputs corresponding to each ControlNet model. Please provide image paths in the same order as the models in --controlnet, for example: --controlnet-inputs image_mlsd image_depth.

Image Generation with Swift


Example CLI Usage

swift run StableDiffusionSample "a photo of an astronaut riding a horse on mars" --resource-path <output-mlpackages-directory>/Resources/ --seed 93 --output-path </path/to/output/image>

The output will be named based on the prompt and random seed: e.g. </path/to/output/image>/a_photo_of_an_astronaut_riding_a_horse_on_mars.93.final.png

Please use the --help flag to learn about batched generation and more.

Example Library Usage

import StableDiffusion
...
let pipeline = try StableDiffusionPipeline(resourcesAt: resourceURL)
try pipeline.loadResources()
let image = try pipeline.generateImages(prompt: prompt, seed: seed).first

On iOS, the reduceMemory option should be set to true when constructing StableDiffusionPipeline

Swift Package Details

This Swift package contains two products:

  • StableDiffusion library
  • StableDiffusionSample command-line tool

Both of these products require the Core ML models and tokenization resources to be supplied. When specifying resources via a directory path, that directory must contain the following:

  • TextEncoder.mlmodelc or TextEncoder2.mlmodelc (text embedding model)
  • Unet.mlmodelc or UnetChunk1.mlmodelc & UnetChunk2.mlmodelc (denoising autoencoder model)
  • VAEDecoder.mlmodelc (image decoder model)
  • vocab.json (tokenizer vocabulary file)
  • merges.txt (merges for byte pair encoding file)

Optionally, for image2image, in-painting, or similar:

  • VAEEncoder.mlmodelc (image encoder model)

Optionally, it may also include the safety checker model that some versions of Stable Diffusion include:

  • SafetyChecker.mlmodelc

Optionally, for the SDXL refiner:

  • UnetRefiner.mlmodelc (refiner unet model)

Optionally, for ControlNet:

  • ControlledUnet.mlmodelc or ControlledUnetChunk1.mlmodelc & ControlledUnetChunk2.mlmodelc (enabled to receive ControlNet values)
  • controlnet/ (directory containing ControlNet models)
    • LllyasvielSdControlnetMlsd.mlmodelc (for example, from lllyasviel/sd-controlnet-mlsd)
    • LllyasvielSdControlnetDepth.mlmodelc (for example, from lllyasviel/sd-controlnet-depth)
    • Other models you converted

Note that the chunked version of Unet is checked for first. Only if it is not present will the full Unet.mlmodelc be loaded. Chunking is required on iOS and iPadOS but not necessary on macOS.
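
The loading precedence described above can be sketched as follows (a simplified illustration in Python, not the package's actual Swift implementation; file names are as listed above):

```python
# Simplified illustration of the resource-resolution order described above:
# prefer the chunked Unet if both chunks are present, otherwise fall back to
# the full model. Not the package's actual (Swift) implementation.
from pathlib import Path

def resolve_unet(resources: Path) -> list[Path]:
    chunks = [resources / "UnetChunk1.mlmodelc", resources / "UnetChunk2.mlmodelc"]
    if all(c.exists() for c in chunks):
        return chunks                      # chunked version is checked for first
    full = resources / "Unet.mlmodelc"
    if full.exists():
        return [full]                      # fall back to the full model
    raise FileNotFoundError("No Unet model found in resource directory")
```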

Example Swift App

Click to expand

🤗 Hugging Face created an open-source demo app on top of this library. It's written in native Swift and SwiftUI, and runs on macOS, iOS and iPadOS. You can use the code as a starting point for your app, or to see how to integrate this library into your own projects.

Hugging Face has made the app available in the Mac App Store.

FAQ

Click to expand
Q1: ERROR: Failed building wheel for tokenizers or error: can't find Rust compiler

A1: Please review this potential solution.

Q2: RuntimeError: {NSLocalizedDescription = "Error computing NN outputs."

A2: There are many potential causes for this error. In this context, it is highly likely to be encountered when your system is under increased memory pressure from other applications. Reducing memory utilization of other applications is likely to help alleviate the issue.

Q3: My Mac has 8GB RAM and I am converting models to Core ML using the example command. The process is getting killed because of memory issues. How do I fix this issue?

A3: In order to minimize the memory impact of the model conversion process, please execute the following command instead:

python -m python_coreml_stable_diffusion.torch2coreml --convert-vae-encoder --model-version <model-version-string-from-hub> -o <output-mlpackages-directory> && \
python -m python_coreml_stable_diffusion.torch2coreml --convert-vae-decoder --model-version <model-version-string-from-hub> -o <output-mlpackages-directory> && \
python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --model-version <model-version-string-from-hub> -o <output-mlpackages-directory> && \
python -m python_coreml_stable_diffusion.torch2coreml --convert-text-encoder --model-version <model-version-string-from-hub> -o <output-mlpackages-directory> && \
python -m python_coreml_stable_diffusion.torch2coreml --convert-safety-checker --model-version <model-version-string-from-hub> -o <output-mlpackages-directory>

If you need --chunk-unet, you may do so in yet another independent command which will reuse the previously exported Unet model and simply chunk it in place:

python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --chunk-unet -o <output-mlpackages-directory>

Q4: My Mac has 8GB RAM, should image generation work on my machine?

A4: Yes! Especially the --compute-unit CPU_AND_NE option should work under reasonable system load from other applications. Note that part of the Example Results were generated using an M2 MacBook Air with 8GB RAM.

Q5: Every time I generate an image using the Python pipeline, loading all the Core ML models takes 2-3 minutes. Is this expected?

A5: Both .mlpackage and .mlmodelc models are compiled (also known as "model preparation" in Core ML terms) upon first load when a specific compute unit is specified. .mlpackage does not cache this compiled asset so each model load retriggers this compilation which may take up to a few minutes. On the other hand, .mlmodelc files do cache this compiled asset and non-first load times are reduced to just a few seconds.

In order to benefit from compilation caching, you may use the .mlmodelc assets instead of .mlpackage assets in both Swift (default) and Python (possible thanks to @lopez-hector's contribution) image generation pipelines.

Q6: I want to deploy StableDiffusion, the Swift package, in my mobile app. What should I be aware of?

A6: The Image Generation with Swift section describes the minimum SDK and OS versions as well as the device models supported by this package. We recommend carefully testing the package on the device with the least amount of RAM available among your deployment targets.

The image generation process in StableDiffusion can yield over 2 GB of peak memory during runtime depending on the compute units selected. On iPadOS, we recommend using .cpuAndNeuralEngine in your configuration and the reduceMemory option when constructing a StableDiffusionPipeline to minimize memory pressure.

If your app crashes during image generation, consider adding the Increased Memory Limit capability to inform the system that some of your app’s core features may perform better by exceeding the default app memory limit on supported devices.

On iOS, depending on the iPhone model, Stable Diffusion model version, selected compute units, system load and the design of your app, this may still not be sufficient to keep your app's peak memory under the limit. Please remember that, because the device shares memory between apps and iOS processes, one app using too much memory can compromise the user experience across the whole device.

We strongly recommend compressing your models following the recipes in Advanced Weight Compression (Lower than 6-bits) for iOS deployment. This reduces the peak RAM usage by up to 75% (from 16-bit to 4-bit) while preserving model output quality.

Q7: How do I generate images with different resolutions using the same Core ML models?

A7: The current version of python_coreml_stable_diffusion does not support single-model multi-resolution out of the box. However, developers may fork this project and leverage the flexible shapes support from coremltools to extend the torch2coreml script by using coremltools.EnumeratedShapes. Note that, while the text_encoder is agnostic to the image resolution, the inputs and outputs of vae_decoder and unet models are dependent on the desired image resolution.

Q8: Are the Core ML and PyTorch generated images going to be identical?

A8: If desired, the generated images across PyTorch and Core ML can be made approximately identical. However, it is not guaranteed by default. There are several factors that might lead to different images across PyTorch and Core ML:

1. Random Number Generator Behavior

The main source of potentially different results across PyTorch and Core ML is the Random Number Generator (RNG) behavior. PyTorch and NumPy have different sources of randomness. python_coreml_stable_diffusion generally relies on NumPy for RNG (e.g. latents initialization), and the StableDiffusion Swift library reproduces this RNG behavior by default. However, PyTorch-based pipelines such as Hugging Face diffusers rely on PyTorch's RNG behavior. Thanks to @liuliu's contributions, one can match the PyTorch CPU or GPU RNG behavior in Swift by specifying --rng torch or --rng cuda, which selects the torchRNG or cudaRNG mode respectively.

2. PyTorch

"Completely reproducible results are not guaranteed across PyTorch releases, individual commits, or different platforms. Furthermore, results may not be reproducible between CPU and GPU executions, even when using identical seeds." (source).

3. Model Function Drift During Conversion

Differences in outputs between corresponding PyTorch and Core ML models are another potential cause. Signal integrity is tested during the conversion process (enabled via the --check-output-correctness argument to python_coreml_stable_diffusion.torch2coreml) and is verified to exceed a minimum PSNR value on random inputs. Note that this is simply a sanity check and does not guarantee this minimum PSNR across all possible inputs. Furthermore, the results are not guaranteed to be identical when executing the same Core ML models across different compute units. This is not expected to be a major source of difference, as the sample visual results in this section indicate.
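
For intuition, PSNR between a reference signal and a test signal can be computed as below. This is a minimal sketch for illustration; the actual check in torch2coreml may compute it differently.

```python
# Minimal PSNR (peak signal-to-noise ratio) sketch, in dB. Illustrative only;
# the conversion script's --check-output-correctness may differ in detail.
import math

def psnr(reference, test, peak=None):
    """PSNR between two equal-length sequences, in dB (inf if identical)."""
    assert len(reference) == len(test)
    if peak is None:
        peak = max(abs(x) for x in reference)
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0.0:
        return math.inf
    return 10.0 * math.log10(peak ** 2 / mse)
```

A higher PSNR means the Core ML outputs track the PyTorch outputs more closely; identical signals yield infinite PSNR.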

4. Weights and Activations Data Type

When quantizing models from float32 to lower-precision data types such as float16, the generated images are known to vary slightly in semantics even when using the same PyTorch model. Core ML models generated by coremltools have float16 weights and activations by default unless explicitly overridden. This is not expected to be a major source of difference.

Q9: The model files are very large, how do I avoid a large binary for my App?

A9: The recommended option is to prompt the user to download these assets upon first launch of the app. This keeps the app binary size independent of the Core ML models being deployed. Disclosing the size of the download to the user is extremely important as there could be data charges or storage impact that the user might not be comfortable with.

Q10: `Could not initialize NNPACK! Reason: Unsupported hardware`

A10: This warning is safe to ignore in the context of this repository.

Q11: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect

A11: This warning is safe to ignore in the context of this repository.

Q12: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown

A12: If this warning is printed right after zsh: killed python -m python_coreml_stable_diffusion.torch2coreml ... , then it is highly likely that your Mac has run out of memory while converting models to Core ML. Please see Q3 from above for the solution.

BibTeX Reference

@misc{stable-diffusion-coreml-apple-silicon,
  title = {Stable Diffusion with Core ML on Apple Silicon},
  author = {Atila Orhon and Michael Siracusa and Aseem Wadhwa},
  year = {2022},
  URL = {https://github.com/apple/ml-stable-diffusion}
}
