I’m sitting in a packed classroom when the weird noises start whirring around me. In the front of the room, lines of code pour down the screen while beeps and bloops start chiming from the laptop speakers on each desk.
Some of the sounds are reminiscent of a vintage synthesizer, but what’s happening here is much more modern. A powerful neural network is helping to create these tones in the hopes of offering musicians a cutting-edge new tool for their creative arsenal. Over time, the machines will learn to create music themselves.
The classroom is being led by Adam Roberts and Colin Raffel, two Google engineers working on the Magenta project within the rapidly expanding Google Brain artificial intelligence lab. First unveiled to the public last May at the Moogfest music and technology festival in Durham, NC, Magenta is focused on teaching machines to understand and generate music, and on building tools that supplement human creativity with the horsepower of Google’s machine learning and neural networks. Today, almost exactly one year later, Roberts and Raffel are back at Moogfest, showing software engineers and musicians how to get Magenta’s latest tools up and running on their computers so they can start playing around and, the pair hopes, contributing code and ideas to the open source project.
“The goal of the project is to interface with the outside world, especially creators,” says Roberts. “We all have some artistic abilities in some sense, but we don’t consider ourselves artists. We’re trained as researchers and software engineers.”
A Different Knob To Play With
This workshop, focused on Magenta’s MIDI-based musical sequence generator API, was just one of several events Magenta engineers hosted at Moogfest this year. Throughout the four-day festival, they could be seen giving workshops and presenting demos of Magenta’s latest playable interfaces, like the web-based NSynth “neural synthesizer,” which uses neural networks to mathematically…