- OpenAI is reportedly developing an AI music producer that creates songs from prompts
- Juilliard students are said to be helping to annotate sheet music for the project
- The project could set up a rivalry with the likes of Suno and Udio, and spark legal battles with music labels and artists
OpenAI, the company behind AI tools including the Sora 2 video generator, is reportedly tuning up a new AI tool that will create music based on text and audio prompts.
According to a report by The Information, OpenAI is working with music students from the prestigious Juilliard School to annotate scores used to help build and train the model, although the school itself has stated that it is not involved in the project.
Should the unnamed and unconfirmed OpenAI project come to fruition, it would let users turn a text prompt or a short audio snippet into a new instrumental accompaniment, such as a guitar track to pair with a vocal recording, or into background music tailored to a particular mood, tempo or visual.
OpenAI has experimented with AI music models before. The company released MuseNet in 2019, which could generate music in a range of styles but was limited to short MIDI files. Jukebox, which followed in 2020, produced full vocal tracks to match the music it wrote, but it was primitive compared to more recent efforts from Suno and other AI music developers.
What OpenAI seems to be working on now would go far beyond those early forays and look more like what the new Sora 2 model and Sora app represent for AI-generated video.
The reported involvement of Juilliard students in score annotation is an interesting touch, and suggests OpenAI recognizes that while large language models routinely train on massive unstructured datasets, musical structure is notoriously difficult to teach that way.
Unlike text, where you can scrape billions of examples, music requires an understanding of harmony, rhythm, instrumentation and timing: not just what sounds good, but why. Annotations from the students could be a far better way of teaching the AI to ‘read’ music.
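As a purely hypothetical sketch (the report doesn’t describe OpenAI’s pipeline, and the names below are illustrative), this is roughly what ‘structured’ music data looks like compared with raw audio: annotated notes flattened into tokens that spell out instrument, harmony, pitch and duration, information a waveform never labels.

```python
# Hypothetical sketch, not OpenAI's actual method: flatten an annotated
# score into structured tokens a model could train on, making harmony,
# rhythm and instrumentation explicit rather than leaving them to be
# inferred from raw audio samples.
from dataclasses import dataclass


@dataclass
class Note:
    pitch: str        # e.g. "C4"
    beats: float      # duration in beats
    instrument: str   # e.g. "guitar"
    chord: str        # harmonic context supplied by a human annotator


def score_to_tokens(notes: list[Note]) -> list[str]:
    """Turn annotated notes into a token sequence for training."""
    tokens = []
    for n in notes:
        tokens += [f"INST_{n.instrument}", f"CHORD_{n.chord}",
                   f"PITCH_{n.pitch}", f"DUR_{n.beats}"]
    return tokens


# A two-note fragment annotated the way a music student might mark it up
fragment = [
    Note("E4", 1.0, "guitar", "Cmaj"),
    Note("G4", 0.5, "guitar", "Cmaj"),
]
print(score_to_tokens(fragment))
# ['INST_guitar', 'CHORD_Cmaj', 'PITCH_E4', 'DUR_1.0', ...]
```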
Battle of AI bands
The project would put OpenAI in direct competition with Suno, Udio, Google’s Music Sandbox, and other AI music tools. Interest in platforms such as Suno has surged recently, and their output has grown noticeably more sophisticated. But that improvement has come with plenty of mess.
Streaming platforms are already inundated with AI-generated content, only some of which is labeled correctly. Sometimes these AI tracks are advertised as being made by real people.
Universal Music Group and Warner Music Group have already filed suit against Suno and Udio for copyright infringement. OpenAI’s entry into the space only raises the stakes, especially since OpenAI carries its own legal baggage in the form of several ongoing disputes over the use of copyrighted content in model training. If it turns out that this new music model was partly trained on commercial recordings, it could be another powder keg waiting to go off.
Nevertheless, the AI-generated music economy is growing faster than regulators and copyright owners can track. The people using these tools are rushing toward a moment where half of the music online may be AI-generated, yet no one agrees on who owns what.
And that’s why OpenAI’s move matters. It is a bet that music, like text and images, can be made flexible and programmable, and that users will want and expect to make music the same way they make Instagram filters or TikTok clips. That does not necessarily mean the end of human-made music, but it does mean we will have to decide how much we value it.