Adobe Research has recently announced experimental projects aimed at transforming how custom audio and music are created and edited, with Project Music GenAI Control taking centre stage. This early-stage generative AI tool will let creators produce music from text prompts and then give them fine-grained control to edit the audio to their exact specifications.
Adobe has a decade-long track record of AI innovation. Firefly, its family of generative AI models, has quickly gained recognition as the most popular AI image generation model designed for safe commercial use worldwide.
Project Music GenAI Control is being developed in collaboration with researchers at the University of California, San Diego, and at Carnegie Mellon University's School of Computer Science, including Zachary Novack, Julian McAuley, Taylor Berg-Kirkpatrick, Shih-Lun Wu, Chris Donahue, and Shinji Watanabe.
Adobe's promising new endeavour is expected to change the way custom audio and music are created and edited. As an early-stage generative AI music generation and editing tool, Project Music GenAI Control will let creators turn text prompts into music and then finish the creative process by fine-tuning the resulting audio to the needs of their project.
Nicholas Bryan, a Senior Research Scientist at Adobe Research and one of the creators of the technologies, explains, "With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they're broadcasters, podcasters, or anyone else who needs audio that's just the right mood, tone, and length."
Adobe's Firefly family of generative AI models demonstrates its edge in AI innovation: Firefly has been used to create more than 6 billion images so far. Adobe pledges to align the technology with AI ethics principles of accountability, responsibility, and transparency. Accordingly, all content generated with Firefly carries Content Credentials that track the content through its use, publication, and storage.
The new tools work by grounding an AI model in a text prompt, a technique Adobe already uses in Firefly. A user enters a prompt such as 'powerful rock,' 'happy dance,' or 'sad jazz,' and the AI generates matching music. Once the music is generated, the tool builds fine-grained editing directly into the workflow.
Through a simple user interface, users can transform the generated audio to follow a reference melody; adjust the tempo, structure, and repeating patterns; choose where the upbeats and downbeats fall; extend the length of a clip; remix a section; or even generate an endless loop. Instead of manually cutting existing music to build intros, outros, and background audio, Project Music GenAI Control could let users create exactly the pieces they need, streamlining the workflow from start to finish.
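To make that two-step workflow concrete, here is a minimal, purely hypothetical sketch in Python. Project Music GenAI Control has not shipped and exposes no public API, so every name in this example (generate_music, Clip, and its methods) is invented for illustration; it models only the create-then-edit flow the article describes, not any real Adobe interface.

```python
# Hypothetical sketch only: all names below are invented to illustrate
# the prompt-then-edit workflow; none correspond to a real Adobe API.

from dataclasses import dataclass, field


@dataclass
class Clip:
    """A generated clip with the editable properties the article mentions."""
    prompt: str
    tempo_bpm: int = 120
    length_sec: float = 30.0
    loop: bool = False
    history: list[str] = field(default_factory=list)

    def set_tempo(self, bpm: int) -> "Clip":
        # Adjust the tempo of the generated audio.
        self.tempo_bpm = bpm
        self.history.append(f"tempo -> {bpm} bpm")
        return self

    def extend(self, seconds: float) -> "Clip":
        # Extend the length of the clip.
        self.length_sec += seconds
        self.history.append(f"extended by {seconds}s")
        return self

    def make_loop(self) -> "Clip":
        # Turn the clip into a seamlessly repeatable loop.
        self.loop = True
        self.history.append("converted to endless loop")
        return self


def generate_music(prompt: str) -> Clip:
    """Stand-in for the text-to-music step ('powerful rock', 'sad jazz', ...)."""
    return Clip(prompt=prompt)


if __name__ == "__main__":
    # Step 1: generate from a text prompt.
    # Step 2: fine-tune tempo, length, and looping within the same workflow.
    clip = generate_music("sad jazz")
    clip.set_tempo(84).extend(15.0).make_loop()
    print(clip.prompt, clip.tempo_bpm, clip.length_sec, clip.history)
```

The point of the sketch is the shape of the workflow: generation produces an editable object, and the fine-tuning controls operate on that object afterwards rather than requiring a fresh generation for every change.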
Describing the thrilling capabilities of these new tools, Bryan states, "They aren't just about generating audio—they're taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It's a kind of pixel-level control for music."