Stable Diffusion can generate an image on Apple Silicon Macs in under 18 seconds, thanks to new optimizations in macOS 13.1


On its machine learning blog, Apple announced official support for the Stable Diffusion project. This includes updates in the just-released macOS 13.1 beta 4 and iOS 16.2 beta 4 that improve the performance of these models on Apple Silicon chips.

Apple also published extensive documentation and sample code showing how to convert source Stable Diffusion models into a native Core ML format.
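
The conversion step is driven by a Python command-line tool in Apple's ml-stable-diffusion package. As a rough sketch, assuming the package and its model dependencies are installed, the invocation looks something like this (flags follow the package's README and may change between versions):

```shell
# Install Apple's conversion package (pulls in coremltools and friends).
pip install git+https://github.com/apple/ml-stable-diffusion

# Convert each component of a Stable Diffusion checkpoint to Core ML,
# writing the compiled models to a local output directory.
python -m python_coreml_stable_diffusion.torch2coreml \
    --convert-unet \
    --convert-text-encoder \
    --convert-vae-decoder \
    --convert-safety-checker \
    -o ./coreml-stable-diffusion
```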

This announcement is the strongest official endorsement Apple has given to the recent wave of AI image generators.

As a recap, machine learning-based image generation techniques rose to prominence thanks to the surprising results of OpenAI's DALL-E model. These AI image generators accept a string of text as a prompt and attempt to create an image matching the description.

An open-source model called Stable Diffusion launched in August 2022 and has already attracted significant community investment.

Thanks to new hardware optimizations in Apple's latest OS releases, the Core ML Stable Diffusion models take full advantage of the Neural Engine and Apple GPU architectures found in the M-series chips.

This leads to some impressively speedy generation. Apple says a baseline M2 MacBook Air can generate an image with a 50-iteration Stable Diffusion model in under 18 seconds. Even an M1 iPad Pro could do the same task in under 30 seconds.
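
Reaching those speeds depends on steering Core ML toward the right hardware. A minimal sketch of how an app might request the Neural Engine when loading a converted model (the model filename here is hypothetical):

```swift
import CoreML

// Ask Core ML to schedule work on the CPU and Neural Engine;
// .all would additionally allow the GPU, which can be faster on some chips.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine

// "UNet.mlmodelc" stands in for one of the compiled models
// produced by the conversion step above.
let modelURL = URL(fileURLWithPath: "UNet.mlmodelc")
let model = try MLModel(contentsOf: modelURL, configuration: config)
```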

Apple hopes this work will encourage developers to integrate Stable Diffusion into their apps to run on-device, rather than depending on backend cloud services. Unlike cloud implementations, running on-device is “free” and privacy-preserving.
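
Apple's package includes a Swift layer aimed at exactly this kind of integration. The following is a hedged sketch of on-device generation, assuming an API shaped like the StableDiffusionPipeline in Apple's sample code; names and signatures may differ from the shipping package:

```swift
import CoreML
import StableDiffusion  // Apple's sample Swift package; assumed available

// Point the pipeline at the directory of Core ML resources produced
// by the conversion step (the path here is hypothetical).
let resources = URL(fileURLWithPath: "./coreml-stable-diffusion")
let pipeline = try StableDiffusionPipeline(resourcesAt: resources,
                                           configuration: MLModelConfiguration())

// Generate one image from a text prompt; 50 steps matches the
// benchmark Apple quotes above.
let images = try pipeline.generateImages(prompt: "an astronaut riding a horse",
                                         imageCount: 1,
                                         stepCount: 50)
```

Everything stays local: the prompt, the model weights, and the output image never leave the device, which is the privacy argument Apple is making.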
