This is an amazing step for Stable Diffusion on Mac. #14
Replies: 2 comments 1 reply
-
For pure speed, especially if you use Hires fix, try Mochi Diffusion. It is a native Apple CoreML implementation. It is very basic in its capabilities, though, and you need to use models that have been converted to CoreML format. For me it is about 30% faster on a straight 512x512 generation, and far faster with its Hires fix, which is locked to 2048x2048 using RealESRGAN. It is also very memory efficient. It can do text-to-image and image-to-image, but that's it: no ControlNet, no LoRAs, etc. It is really just a well-done Swift GUI wrapper around whatever Apple's ml-stable-diffusion package can do. Apple added ControlNet support about a week ago, but Mochi has not picked that up yet. It's free, open-source software: https://github.com/godly-devotion/MochiDiffusion
-
Automatic1111 does not use the ANE (Apple Neural Engine); it is not a CoreML application. It does use the GPU through MPS (Metal Performance Shaders) via Python.
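To make the distinction concrete, here is a minimal sketch of how a PyTorch-based app like Automatic1111 typically selects the Metal (MPS) backend on Apple Silicon, as opposed to a CoreML app, which compiles models for the ANE. The `pick_device` helper is a hypothetical name for illustration; the real flags would come from `torch.backends.mps.is_available()` and `torch.cuda.is_available()` in PyTorch 1.12 or later.

```python
# Hedged sketch: device selection in a PyTorch app on Apple Silicon.
# A CoreML app never does this; it ships a compiled .mlmodelc instead.
def pick_device(mps_available: bool, cuda_available: bool) -> str:
    """Return the device string a diffusion pipeline would run on."""
    if cuda_available:
        return "cuda"   # NVIDIA GPU (not present on Apple Silicon)
    if mps_available:
        return "mps"    # Metal Performance Shaders: GPU, but not the ANE
    return "cpu"        # fallback when no accelerator is usable

# In a real script the flags come from torch itself:
#   import torch
#   device = pick_device(torch.backends.mps.is_available(),
#                        torch.cuda.is_available())
#   pipe = pipe.to(device)
```

The key point is that MPS routes work to the GPU only; nothing in this path can reach the Neural Engine, which is why ANE speedups require a CoreML conversion.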
-
As an Apple M1 Mac user, until now the application that took the most advantage of this hardware was Draw Things. However, after testing this version of Automatic1111, it is clear that you are on par, and in some ways this fork is faster than DT.
With DT I can generate a 512x512 image at 20 steps in 29 seconds. In the same scenario this fork takes 27 seconds, while the official version of Automatic1111 takes 35 seconds.
I am writing this post to sincerely thank you for your work and to encourage you to keep moving forward. I will be watching for ways I can help.
Greetings and blessings to all!