- Google DeepMind has enhanced and expanded access to its Music AI Sandbox
- The Sandbox now includes the Lyria 2 model and RealTime features to generate, extend, and edit music
- The music is watermarked with SynthID
Google DeepMind has brought some new and improved sounds to its Music AI Sandbox, which, despite sand being notoriously bad for musical instruments, is where Google hosts experimental tools for laying down tracks with the aid of AI models. The Sandbox now offers the new Lyria 2 AI model and the Lyria RealTime AI music production tools.
Google has pitched the Music AI Sandbox as a way to spark ideas, generate soundscapes, and maybe help you finally finish that half-written verse you’ve been avoiding looking at all year. The Sandbox is aimed mainly at professional artists and producers, and access has been pretty restricted since its 2023 debut. But Google is now opening up the platform to many more people in music production, including those looking to create soundtracks for films and games.
The new Lyria 2 AI music model is the rhythm section underlying the updated Sandbox. The model is trained to produce high-fidelity audio, with detailed and intricate compositions across any genre, from shoegaze to synthpop to whatever weird lo-fi banjo-core hybrid you’re cooking up in your bedroom studio.
The Lyria RealTime feature puts the AI’s output into a virtual studio you can jam with. You can sit at your keyboard, and Lyria RealTime will help you mix ambient house beats with classic funk, performing and tweaking its sound on the fly.
Virtual music studio
The Sandbox offers three main tools for producing the tunes. Create, seen above, lets you describe the kind of sound you’re aiming for in words. Then the AI whips up music samples you can use as jumping-off points. If you’ve already got a rough idea down but can’t figure out what happens after the second chorus, you can upload what you have and let the Extend feature come up with ways to continue the piece in the same style.
The third feature, Edit, does what the name suggests: it reworks existing music into a new style. You can ask for your tune to be reimagined in a different mood or genre, either by messing with the digital control board or through text prompts. For instance, you could ask for something as basic as “Turn this into a ballad,” something more complex like “Make this sadder but still danceable,” or see how weird you can get by asking the AI to “Score this EDM drop like it’s all just an oboe section.” You can hear an example below created by Isabella Kensington.
AI singalong
Everything generated by Lyria 2 and RealTime is watermarked using Google’s SynthID technology. That means the AI-generated tracks can be identified even if someone tries to pass them off as the next lost Frank Ocean demo. It’s a smart move in an industry that’s already gearing up for heated debates about what counts as “real” music and what doesn’t.
These philosophical questions also decide where a lot of money ends up, so there’s more at stake than abstract debates about how to define creativity. But, as with AI tools for producing text, images, and video, this isn’t the death knell of traditional songwriting. Nor is it a magic source of the next chart-topping hit. Used carelessly, AI could make a half-baked hum fall even flatter. Happily, plenty of musical talents understand what AI can do, and what it can’t, as Sidecar Tommy demonstrates below.