Guitar, Drums, Cat?: Where is NSynth taking music?
When George Beauchamp designed the first electric guitar in 1931, he probably never envisaged young musicians manipulating frequencies and feedback as part of their music. But this is the beauty of musical evolution.
The NSynth sound maker (part of Google’s Magenta project) is a truly novel approach to sound. Conceptually, “NSynth uses deep neural networks to generate sounds at the level of individual samples. Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.”
Practically, this means that you could blend a trombone and a cow, with varying degrees of each, to create a whole new ‘instrument’. Although it is fun to tinker with, from a musician’s perspective this could mean a complete rethinking of how we compartmentalize musicians: you may no longer be just a guitarist but rather a… well, we’ll come up with something.
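Under the hood, that blending is an interpolation between learned embeddings rather than a simple audio crossfade. Here is a minimal sketch of the idea, loosely based on the fastgen helpers in Magenta’s NSynth code; the checkpoint path and audio file names are assumptions, and exact function signatures may differ between Magenta releases.

```python
# Illustrative sketch of NSynth-style timbre blending via embedding
# interpolation. Paths and file names here are placeholders.
import numpy as np
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen

CKPT = 'wavenet-ckpt/model.ckpt-200000'  # pretrained checkpoint (assumed path)
SAMPLE_LENGTH = 64000                    # 4 seconds of audio at 16 kHz

# Load two source sounds -- say, a trombone note and a cow moo.
trombone = utils.load_audio('trombone.wav', sample_length=SAMPLE_LENGTH, sr=16000)
cow = utils.load_audio('cow.wav', sample_length=SAMPLE_LENGTH, sr=16000)

# Encode each sound into NSynth's compact temporal embedding.
enc_trombone = fastgen.encode(trombone, CKPT, sample_length=SAMPLE_LENGTH)
enc_cow = fastgen.encode(cow, CKPT, sample_length=SAMPLE_LENGTH)

# "Varying degrees of each": linearly interpolate between the embeddings.
mix = 0.5  # 0.0 = all trombone, 1.0 = all cow
enc_blend = (1 - mix) * enc_trombone + mix * enc_cow

# Decode the blended embedding back into audio.
fastgen.synthesize(enc_blend, save_paths=['trombone_cow.wav'],
                   checkpoint_path=CKPT)
```

Sweeping mix from 0 to 1 is the “varying degrees” knob: each value decodes into a sound that is part trombone, part cow.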
The NSynth system was trained on an audio dataset containing 305,979 musical notes, each with a unique pitch, timbre, and envelope. The dataset has been made available for download (https://magenta.tensorflow.org/datasets/nsynth).
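If you download it, each note in the TFRecord files carries its annotations alongside the raw audio. Here is a short sketch of reading one note; the feature names follow the schema documented on the dataset page, while the file name is an assumption based on the train split.

```python
# Sketch of reading the downloadable NSynth TFRecord files with tf.data.
# Substitute whichever split file you actually downloaded.
import tensorflow as tf

features = {
    'pitch': tf.io.FixedLenFeature([], tf.int64),              # MIDI pitch
    'velocity': tf.io.FixedLenFeature([], tf.int64),           # MIDI velocity
    'instrument_family': tf.io.FixedLenFeature([], tf.int64),  # e.g. brass, guitar
    'audio': tf.io.FixedLenFeature([64000], tf.float32),       # 4 s at 16 kHz
}

def parse(record):
    return tf.io.parse_single_example(record, features)

dataset = tf.data.TFRecordDataset('nsynth-train.tfrecord').map(parse)

for note in dataset.take(1):
    print('pitch:', int(note['pitch']),
          'velocity:', int(note['velocity']),
          'samples:', note['audio'].shape)
```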
If you’d like to play around with the system, it is available at https://experiments.withgoogle.com/ai/sound-maker/view/. And to see what other people are doing with it, check out Andrew Huang’s demo at https://www.youtube.com/watch?v=AaALLWQmCdI&t=333s.