Part 1 of this blog post covered generative music, in which an AI produces a piece of music on its own, mimicking human creativity. This idea has probably unsettled a few readers who believe that one day AI will take over the world and replace humans at their jobs. In reality, however, long before that happens (if it ever does), AI will serve to augment human abilities and let people do things they never could before. AI has already transformed many professions by giving practitioners powerful new tools to harness, and music is no exception. This section covers some of the AI tools available to musicians today.
The first tool we will cover is Flow Machines by Sony CSL. Flow Machines consists of a DAW plugin and a mobile app that work in conjunction with each other. Musicians apply the plugin to melodies within their DAW and use the Flow Machines app to restyle them according to the artist's intentions. For example, it can turn a jazz melody into a pop-flavoured variation, or change the melody's key and texture. Because these transformations are carried out by AI, musicians can see their music from a different perspective, working with styles built on their initial ideas rather than manipulating individual notes. You can see how the tool works in the following video.
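To make one of these transformations concrete: the simplest, changing a melody's key, amounts to shifting every note by a fixed number of semitones. The sketch below illustrates this on raw MIDI note numbers; it is not Flow Machines' actual algorithm, which is proprietary, just the basic idea behind a key change.

```python
# Illustrative sketch only: transposing a melody to a new key by
# shifting MIDI note numbers. (Not Flow Machines' actual method.)

def transpose(melody, semitones):
    """Shift every MIDI note number, clamping to the valid 0-127 range."""
    return [min(127, max(0, note + semitones)) for note in melody]

# A short C-major phrase (C4 E4 G4 C5) moved up a perfect fourth to F major.
c_major_phrase = [60, 64, 67, 72]
f_major_phrase = transpose(c_major_phrase, 5)
print(f_major_phrase)  # [65, 69, 72, 77]
```

Changing texture or style is far harder, of course, which is exactly where the AI comes in.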
Next, we have Magenta Studio, created by Google Brain under the Magenta project. The Magenta project is an open-source endeavour that aims to empower artists and musicians with cutting-edge AI technology. Magenta Studio consists of five tools that let musicians apply machine-learning models to their MIDI files: Continue, Generate, Drumify, Interpolate, and Groove. The platform is completely free and is available as a standalone application or as a plugin for Ableton Live. You can check out a description of the tools in Magenta Studio in the following video:
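To get a feel for what a tool like Continue does, the toy sketch below learns transition statistics from an input melody and samples a plausible continuation. Magenta Studio uses recurrent neural networks trained on large corpora rather than this first-order Markov chain; the sketch only illustrates the idea of extending a clip in the style of its opening.

```python
import random

# Toy sketch of a "Continue"-style tool: learn which note tends to
# follow which, then sample new notes from those statistics.
# (Magenta's real models are neural networks, not Markov chains.)

def learn_transitions(melody):
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def continue_melody(melody, n_notes, seed=0):
    rng = random.Random(seed)
    table = learn_transitions(melody)
    out = list(melody)
    for _ in range(n_notes):
        # Fall back to any seen note if the last pitch has no successor.
        choices = table.get(out[-1]) or list(table)
        out.append(rng.choice(choices))
    return out

phrase = [60, 62, 64, 62, 60, 62, 64, 65]
extended = continue_melody(phrase, 4)
print(extended)  # the original 8 notes followed by 4 sampled ones
```

The real tools work on full MIDI files with timing and velocity, but the workflow is the same: feed in a clip, get back a musically related one.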
Google NSynth Super
Google NSynth is another interesting project within Magenta. NSynth stands for Neural Synthesizer, and it explores whether AI can blend the sounds of two instruments to create something new. With NSynth, you can create a new instrument that sounds like a cross between a flute and a guitar, or even a dog and a piano! While the other tools described in this blog are software, the NSynth Super is a physical synthesizer with a touchscreen that lets you mix and match various source sounds using the NSynth AI. You can see it in action in the following video.
Unusually, the NSynth Super is not sold commercially. Instead, Google has completely open-sourced the design: a full instruction manual with a parts list and PCB designs for printing is available online, so you can build one yourself. If you are not the DIY type, there are groups on Reddit and Facebook where enthusiasts build NSynth devices in batches and sell them at relatively low cost. Until you get your hands on one, you can check out how it works in this online tool.
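Under the hood, NSynth does not crossfade audio. It encodes each sound into a learned embedding vector and interpolates between the two vectors before decoding the result back to audio. The sketch below shows only that interpolation step, on made-up vectors; the real encoder and decoder are WaveNet-based neural networks, and the embeddings shown here are hypothetical.

```python
import numpy as np

# NSynth-style blending: interpolate between two learned embeddings
# rather than mixing the raw audio. The vectors below are invented
# stand-ins; real NSynth embeddings are produced by a neural encoder.

def interpolate(z_a, z_b, alpha):
    """Linear interpolation: alpha=0 gives z_a, alpha=1 gives z_b."""
    return (1.0 - alpha) * z_a + alpha * z_b

flute_embedding = np.array([0.2, -1.0, 0.5])   # hypothetical
guitar_embedding = np.array([1.0, 0.0, -0.5])  # hypothetical

halfway = interpolate(flute_embedding, guitar_embedding, 0.5)
print(halfway)  # midway between the two vectors: [0.6, -0.5, 0.0]
```

The touchscreen on the NSynth Super is essentially a two-dimensional version of that `alpha` dial, letting you blend between four source sounds at once.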
Finally, we have the simple but powerful Samplab, which was originally created at the Institute for Computer Music and Sound Technology (ICST) in Zurich, Switzerland. Samplab is an AI-powered audio-to-MIDI converter and editor that greatly expands how far you can edit the music you produce. The platform takes an audio file, analyses the notes within it, and converts them into an editable MIDI piano roll. Through the interface, you can move each note in your track to fundamentally change it, while preserving the timbre of each note and the overall sound of the original file. It works well with any DAW, and is a great tool for using AI to fine-tune any piece of music to your liking. You can see how Samplab works on their website, and in the following video:
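The first step of any audio-to-MIDI converter is pitch detection. As a toy illustration, the sketch below finds the dominant frequency of a synthetic tone with an FFT and maps it to the nearest MIDI note. Samplab's actual analysis uses a neural network that can untangle polyphonic recordings; this sketch only handles a single clean sine wave.

```python
import numpy as np

# Toy pitch detector: FFT peak -> frequency -> nearest MIDI note.
# (Samplab's real converter is a neural network handling polyphony.)

SAMPLE_RATE = 44100

def detect_midi_note(signal):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    peak_hz = freqs[np.argmax(spectrum)]
    # MIDI convention: note 69 is A4 = 440 Hz, 12 semitones per octave.
    return int(round(69 + 12 * np.log2(peak_hz / 440.0)))

# One second of a 440 Hz sine (A4) should map to MIDI note 69.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
print(detect_midi_note(np.sin(2 * np.pi * 440.0 * t)))  # 69
```

Once every note has been identified this way, it can be laid out on a piano roll and edited, which is exactly the workflow Samplab exposes.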