Adobe’s new prototype generative AI tool for creating music from text prompts, which includes built-in audio editing controls, was announced during the Hot Pod Summit in Brooklyn yesterday, reports The Verge.
According to the report, the Project Music GenAI Control prototype, described by Adobe as an “early-stage” experiment, is being developed in collaboration with the University of California and the School of Computer Science at Carnegie Mellon University.
The tool works by inputting a text description that generates music in a specified style, and Adobe’s integrated editing controls then allow users to customise those results, explained The Verge, pointing out that sections of music can be remixed and audio can be generated as a repeating loop.
Adobe also says the tool can adjust the generated audio “based on a reference melody” and extend the length of audio clips when a track needs to be long enough for things like fixed animations or podcast segments, said the report.
“One of the most exciting things about these new tools is that they aren’t just about generating audio,” said Nicholas Bryan, a senior research scientist at Adobe Research, in a press release. “They’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It’s a kind of pixel-level control for music.”
According to The Verge, the tool isn’t available to the public yet, and no release date has been announced.