Birth of a Prodigy

Never mind Spotify, here's MuseNet — a model that spins music endlessly in a variety of styles.

What’s new: MuseNet has a fair grasp of styles from bluegrass to Lady Gaga, and it can combine them as well. You can tinker with it here through May 12. After that, OpenAI plans to retool it based on user feedback with the aim of open-sourcing it.

How it works: Drawing on OpenAI's GPT-2 language model, MuseNet is a 72-layer transformer with 24 attention heads. This architecture, plus embeddings that help the model keep track of musical features over time, gives it a long memory for musical structure. It uses unsupervised learning to predict the next note for up to 10 instruments in compositions up to four minutes long.
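
MuseNet's code isn't public, but the description above maps onto a standard decoder-only setup. Below is a minimal PyTorch sketch of next-note prediction under that reading: a causally masked transformer over a MIDI-like token stream. Everything here is an illustrative assumption, scaled far down from the 72-layer, 24-head model described above, with a made-up toy vocabulary and plain positional embeddings standing in for MuseNet's richer musical embeddings.

```python
import torch
import torch.nn as nn

# Toy stand-ins: the real MuseNet vocabulary (note-on/note-off, timing,
# instrument tokens) and its dimensions are much larger and more elaborate.
VOCAB_SIZE = 4096   # assumed token vocabulary for notes, instruments, timing
D_MODEL = 512       # scaled down for illustration
N_LAYERS = 6        # the article cites 72 layers and 24 heads
N_HEADS = 8
CONTEXT = 2048      # maximum number of musical events the model attends to

class ToyMuseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB_SIZE, D_MODEL)
        # Plain learned positions; MuseNet adds embeddings that track
        # musical features (e.g., timing and structure) over a piece.
        self.pos_emb = nn.Embedding(CONTEXT, D_MODEL)
        layer = nn.TransformerEncoderLayer(
            d_model=D_MODEL, nhead=N_HEADS, batch_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, num_layers=N_LAYERS)
        self.head = nn.Linear(D_MODEL, VOCAB_SIZE)

    def forward(self, tokens):
        # tokens: (batch, seq) integer-encoded musical events
        seq = tokens.shape[1]
        pos = torch.arange(seq, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        # Causal mask: each event attends only to earlier events,
        # which is what makes this next-note prediction.
        mask = nn.Transformer.generate_square_subsequent_mask(seq).to(tokens.device)
        x = self.blocks(x, mask=mask)
        return self.head(x)  # logits over the next event at each position

model = ToyMuseNet()
events = torch.randint(0, VOCAB_SIZE, (1, 64))  # a fake 64-event prompt
logits = model(events)
next_event = logits[0, -1].argmax()  # greedy pick of the next note event
```

The causal mask is the load-bearing piece: it restricts each position to earlier events, so training on raw token streams (the unsupervised objective described above) amounts to learning to predict the next note.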

To be sure: Computer-generated music composition dates back to the 1950s, but rarely has it been easy on the ears. The best examples, such as the recent Google doodle commemorating the birthday of J.S. Bach, stick to a single style. Project lead (and Deep Learning Specialization graduate) Christine Payne is a Juilliard-trained pianist, and the model's mimicry of piano masters such as Chopin and Mozart (or a blend of the two) is especially good. That said, MuseNet was trained on MIDI files and speaks through a general-purpose synthesizer, so its output often sounds stiff and artificial. Its understanding of harmony and form, while impressive for an algorithmic composer, is shallow, and its compositions often meander.

We’re thinking: To build your own algorithmic jazz composer, check out Course Five of the Deep Learning Specialization on Coursera.
