Illustration: Nick Barclay / The Verge
Meta released a new open-source AI model called AudioCraft, which lets users create music and sounds entirely through generative AI.
It consists of three AI models, each tackling a different area of sound generation. MusicGen turns text prompts into music and was trained on “20,000 hours of music owned by Meta or licensed specifically for this purpose.” AudioGen generates audio, such as barking dogs or footsteps, from written prompts and was trained on public sound effects. An improved version of Meta’s EnCodec decoder lets users create sounds with fewer artifacts, the distortions that creep in when audio is heavily compressed or manipulated.
The company let the media listen to some sample audio made with AudioCraft. The generated noise of…