
AI Tools for Musicians – Part 1: Generative Music

Updated: Apr 29, 2022



This article aims to explore AI's role in music and introduce a few public tools that musicians can use to incorporate AI and machine learning in their practice.


Artificial intelligence for music production can be broadly categorized into two areas: music information retrieval (MIR) and generative music. While music information retrieval sounds intimidating, all it involves is using AI to analyse sound data and extract information from it, such as its key, texture, or frequency range. This information can then be used to modulate or edit the original sound file to change its key, mood, or other qualities. Generative music, on the other hand, is concerned with the creation of 'creative AIs' that can produce music on their own.
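
To make MIR a little more concrete, here is a minimal sketch using the open-source librosa library, which can extract exactly this kind of information from an audio file. The file name is a placeholder, and the key estimate (taking the strongest average pitch class) is a deliberate simplification of real key-detection methods:

```python
# A minimal MIR sketch with librosa: estimate tempo and a rough
# tonic from an audio file. "song.wav" is a placeholder path.
import librosa
import numpy as np

y, sr = librosa.load("song.wav")

# Tempo estimation from onset strength
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

# Chroma features summarise energy per pitch class (C, C#, ..., B);
# picking the strongest average bin is a crude stand-in for real
# key detection.
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
pitch_classes = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]
tonic = pitch_classes[int(np.argmax(chroma.mean(axis=1)))]

print(f"Estimated tempo: {float(tempo):.0f} BPM; likely tonic: {tonic}")
```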


Part 1 of this blog post covers generative music, and part 2 concerns AI tools and interfaces that musicians can use in their practice.



 

Part 1: Generative music


The terms generative music and algorithmic composition are often used interchangeably and generally mean the same thing today. However, algorithmic composition is an older and more abstract idea that is actually a precursor to generative music. While algorithms today are almost exclusively associated with computers, they are really just any series of steps that produce a desired outcome, so even cooking recipes are algorithms. This idea of using a 'recipe' to produce music based on simple rules goes all the way back to ancient Greece, where philosophers such as Plato and Pythagoras wrote about the mathematical formalisms or 'algorithms' that underlay the musical systems of their time. Automatic composition techniques continued to develop and evolve through history, a more modern example being the 18th-century Musikalisches Würfelspiel attributed to Mozart, in which a musical piece is constructed automatically from smaller precomposed elements through the random process of a dice roll.
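
The dice-game recipe is simple enough to express in a few lines of code. Here is a toy version in Python; the fragment names are invented placeholders rather than Mozart's actual tables:

```python
# A toy "musical dice game": a piece is assembled bar by bar by
# rolling two dice to pick from pre-composed fragments.
import random

# One pre-composed fragment per possible roll (2-12), per bar
fragment_table = {
    roll: [f"bar-variant-{roll}-{bar}" for bar in range(16)]
    for roll in range(2, 13)
}

def roll_two_dice() -> int:
    return random.randint(1, 6) + random.randint(1, 6)

# Sixteen rolls yield a sixteen-bar piece, different every time
piece = [fragment_table[roll_two_dice()][bar] for bar in range(16)]
print(piece)
```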


As one might expect, the advent of computers saw a natural evolution in algorithmic composition, with computers enabling new and more complex processes for creating music. Even Ada Lovelace, credited as the world's first computer programmer for her 19th-century work on Charles Babbage's Analytical Engine (a precursor to modern computers), predicted that such machines would one day be able to compose music of their own.


The 1950s saw the earliest examples of computer-generated music, but the idea only began to reach the mainstream through the work of ambient music pioneer Brian Eno, who coined the phrase 'generative music' in 1996. Eno was interested in the intervention of machines in music and envisioned a world where artists could sell systems that produce music endlessly, rather than simply recording the outputs of machines and putting them on albums.


The tools Eno had access to in his endeavour to create these systems were still rather simple in the late 1990s, but the rapid progress of computing and AI in the following decades has paved the way for far more complex systems that can compose music without the need for extensive human intervention or programming. Today's generative music algorithms are so advanced that they are even capable of generating vocals in the voice of virtually any musician.


For instance, the project Lost Tapes of the 27 Club used generative music to create an album of songs from deceased artists in the 27 Club, a group of highly influential musicians who all tragically died at the age of 27, including Amy Winehouse, Jimi Hendrix, Kurt Cobain, and Jim Morrison. The project used AI trained on the music of these artists to create music that might have existed had they lived, in order to raise awareness about mental health in the music industry. You can listen to one of these posthumous tracks in the style of Amy Winehouse in the video below.





There are several other interesting examples of generative music, such as Google's first AI-powered Doodle, which lets you collaboratively compose a piece in the style of J.S. Bach, and there's even an AI Song Contest held every year!


So how can you bring generative AI into your musical practice? There are typically two routes. The first is to use commercially available platforms designed for this purpose; these can be free or paid and offer varying degrees of control to users. The second is to work directly with the open-source code of the numerous generative music algorithms released each year. The second method offers the highest degree of customizability and expressivity, but it is also the most complex and arduous. Thankfully, there are many tutorials out there that can help you through the process.


 

Websites for Generative Music


This section presents a few of these generative music platforms and algorithms so you can use them to create music yourself. This list is by no means exhaustive but should give you an idea of what kinds of platforms and algorithms are out there.


Computoser


Starting with the most basic platform, we have the online generative music platform Computoser. Computoser generates a unique song based on a variety of parameters that you can change, such as mood, tempo, and instrumentation. The tracks it creates are truly unique: each time you hit play, even with the same parameters selected, the platform generates a different, never-before-heard track. Once you generate a track, you can download it as a MIDI, MP3, or MusicXML file. The platform is completely free to use, but tracks produced by it are under a Creative Commons license and must be attributed to Computoser. You can hear a piece we produced through the platform below:



While it is very interesting that Computoser can produce completely unique tracks, the tracks themselves are not particularly impressive, because the process through which it generates them is not truly artificially intelligent. Computoser uses a randomness-based algorithm built on hand-engineered rules of music (the old way of doing things) and synthetically produces notes from different instruments according to those rules. Computoser's about page states that "a computer can hardly have the performance of a human musician". However, as we will see, this is only true of the old paradigm of generative music, of which Computoser is an illustrative example. The new paradigm using artificial intelligence is more than capable of passing a modified Turing test for music generation. The platforms that follow are illustrations of these modern generative tools.
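
To illustrate what 'hand-engineered rules plus randomness' looks like, here is a minimal sketch in the same spirit (though not Computoser's actual algorithm): it writes a short melody that stays in C major and mostly moves by scale steps, saved as a MIDI file with the mido library:

```python
# Old-paradigm generative music in miniature: fixed musical rules
# plus a random number generator, no learning involved.
import random
from mido import Message, MidiFile, MidiTrack

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers

def next_note(current: int) -> int:
    # Rule: mostly move by one scale step, occasionally leap
    idx = C_MAJOR.index(current)
    step = random.choice([-1, -1, 1, 1, 1, -2, 2])  # biased toward steps
    return C_MAJOR[max(0, min(len(C_MAJOR) - 1, idx + step))]

mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)

note = 60  # start on middle C
for _ in range(32):
    track.append(Message("note_on", note=note, velocity=64, time=0))
    track.append(Message("note_off", note=note, velocity=64, time=240))
    note = next_note(note)

mid.save("rule_based_melody.mid")
```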


Aiva


Aiva (Artificial Intelligence Virtual Artist) is a platform for generating 'soundtrack music' that lets you choose from one of many genres/styles and automatically generates a new track for you in that style. Unlike Computoser, which only creates a track, Aiva has a special focus on controlling the MIDI-level data of the song it generates, through extensive editing options in a piano roll of the generated track. This means you can use Aiva's platform as a base and fine-tune its output to create something you are happy with. You can also condition the generation on MIDI files that you've created. Aiva is a paid service, so the copyright and royalties will depend on your billing plan. You can hear a track produced by Aiva, and see how it works, in the following two videos.
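
Because Aiva works at the MIDI level, you can also post-edit its exports outside the platform. Here is a minimal sketch using the pretty_midi library, assuming a hypothetical exported file named aiva_track.mid, that transposes the whole piece up a whole tone:

```python
# Post-editing an exported MIDI file: transpose every pitched note
# up two semitones. "aiva_track.mid" is a hypothetical export.
import pretty_midi

midi = pretty_midi.PrettyMIDI("aiva_track.mid")
for instrument in midi.instruments:
    if instrument.is_drum:
        continue  # drum "pitches" select sounds, not notes
    for note in instrument.notes:
        note.pitch = min(127, note.pitch + 2)  # up a whole tone

midi.write("aiva_track_transposed.mid")
```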






Amper


Like Aiva, Amper also generates entire pieces based on basic input from the user. However, with Amper the inputs are a bit different, as it is more geared towards producing tracks for your video content. Rather than simply a genre, Amper also requests an adjective to describe your piece, such as "sentimental" or "futuristic". Based on the mood and genre you are looking for, Amper automatically generates a unique track for you. The platform also lets you edit various parameters such as the key, the mood, and the BPM once it has generated a track, so you can refine it until you are happy, but it ultimately offers less fine-grained control over the notes in the track than Aiva does. Like Aiva, Amper is a paid platform, and you have to purchase the tracks it generates at a cost ranging from $5 to $499 depending on how you intend to use them. You can check out a tutorial on how to use Amper in the video below.



 

Open Source Generative Music Research


The platforms described in the previous section are polished platforms for AI-generated music. However, such platforms restrict the user to the music-generation paradigms and features conceived within them. Alternatively, those with a DIY spirit can tinker directly with the neural networks that power these platforms. Manipulating these networks offers a free and (within the limits of what the network can do) unrestricted means of generating your own music, albeit with a lot more trial and error and a higher degree of complexity. Any features you would lose with this method, such as editing generated tracks, can be compensated for by some of the AI tools we discuss in Part 2 of this blog post. In this section, we will go through a few popular generative networks that you can use to create your own AI-generated tracks. We will provide links along the way that will get you working with them in no time, even if you don't know anything about programming.


MuseNet


There have been many attempts at music generation in the research world over the years, but the first model (AI 'algorithm') of note that we will cover is OpenAI's MuseNet. MuseNet learns to create its own music by taking short sequences of MIDI notes and predicting what comes next. This technique is a powerful way to train neural networks: it is most commonly used to train AIs that generate text, and more recently AIs that generate images. While MuseNet is trained to predict MIDI sequences, the model takes several user preferences into account, such as which instruments you want to hear, and synthesises those instruments using deep learning. Deep-learning-synthesised instruments sound significantly more realistic than older means of synthesis, such as the one used in Computoser, as you can hear in MuseNet's live concert in the video below.
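
The core training idea (predict the next note, given everything so far) fits in a short sketch. The toy LSTM below is our own stand-in trained on random data; MuseNet itself is a large Transformer trained on far richer MIDI event tokens:

```python
# Next-token prediction over note sequences, in miniature.
import torch
import torch.nn as nn

VOCAB = 128  # MIDI pitches 0-127 as a toy token vocabulary

class NextNoteModel(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, VOCAB)

    def forward(self, tokens):               # tokens: (batch, time)
        h, _ = self.lstm(self.embed(tokens))
        return self.head(h)                  # logits: (batch, time, VOCAB)

model = NextNoteModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake "dataset": random note sequences standing in for real MIDI
seqs = torch.randint(0, VOCAB, (8, 33))
inputs, targets = seqs[:, :-1], seqs[:, 1:]  # shift by one step

for _ in range(100):
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```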




You can read about how MuseNet works here, and explore some samples it has created. Luckily for us, we don't have to touch any code to work with this model, as a GitHub user by the name of Steven Waterman has kindly created a custom frontend for MuseNet called MuseTree that you can experiment with. Click here to try it out. The GUI lets you upload your own MIDI file to condition MuseNet on, or you can generate something unconditionally (from nothing).
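
If you'd like to seed MuseTree with your own material, any short MIDI file will do. Here is a minimal sketch that writes a four-note seed with the mido library; the notes are arbitrary, just something for the model to continue:

```python
# Write a tiny seed MIDI file to upload in the MuseTree interface.
from mido import Message, MidiFile, MidiTrack

mid = MidiFile()
track = MidiTrack()
mid.tracks.append(track)

for pitch in [60, 64, 67, 72]:  # a C major arpeggio
    track.append(Message("note_on", note=pitch, velocity=80, time=0))
    track.append(Message("note_off", note=pitch, velocity=80, time=480))

mid.save("seed.mid")
```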


Jukebox


MuseNet's focus on MIDI restricts it to instrumental pieces, without any vocals. This is where our next model comes in. Jukebox, also created by OpenAI as a follow-up to MuseNet, is trained directly on raw audio rather than MIDI data. Jukebox is built around a VQ-VAE (a type of variational autoencoder) and works by compressing raw sound data into a compact internal representation that captures the general structure of the music it was trained on. From this internal representation, Jukebox can output new sound files, but more importantly, it can be conditioned on some rather interesting parameters. Jukebox lets you recreate music in the style of specific musicians, such as an album of never-before-heard Pink Floyd songs:
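
To give a feel for the compress-then-decode idea, here is a heavily simplified toy in PyTorch: an encoder squeezes a waveform into a short sequence of discrete codes drawn from a learned codebook, and a decoder reconstructs audio from those codes. Jukebox's real model is a hierarchical VQ-VAE paired with large Transformer priors; every size here is arbitrary and the model is untrained:

```python
# A toy VQ-VAE-style forward pass: waveform -> discrete codes -> waveform.
import torch
import torch.nn as nn

class ToyVQVAE(nn.Module):
    def __init__(self, codebook_size: int = 256, dim: int = 32):
        super().__init__()
        # Strided convolutions downsample the waveform 64x
        self.encoder = nn.Sequential(
            nn.Conv1d(1, dim, 8, stride=4, padding=2), nn.ReLU(),
            nn.Conv1d(dim, dim, 8, stride=4, padding=2), nn.ReLU(),
            nn.Conv1d(dim, dim, 8, stride=4, padding=2),
        )
        self.codebook = nn.Embedding(codebook_size, dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(dim, dim, 8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(dim, dim, 8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(dim, 1, 8, stride=4, padding=2),
        )

    def forward(self, audio):                   # audio: (batch, 1, time)
        z = self.encoder(audio).transpose(1, 2)  # (batch, steps, dim)
        # Quantise: snap each vector to its nearest codebook entry
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0))
        codes = dists.argmin(dim=-1)             # discrete tokens
        quantised = self.codebook(codes).transpose(1, 2)
        return self.decoder(quantised), codes

model = ToyVQVAE()
fake_audio = torch.randn(1, 1, 4096)            # stand-in for real audio
reconstruction, codes = model(fake_audio)
```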



In addition, you can also create songs by specific artists in genres they have never made music in. You can read all about Jukebox and how it works here, and check out some samples created by it here. One should note that not all of Jukebox's outputs are cohesive, and many end up noisy and garbled. However, given the highly ambitious nature of what it attempts to do, it is definitely a step in the right direction and a great asset to musicians once you manage to tame this beast. Unfortunately, Jukebox doesn't yet have a frontend that lets you use it easily, but the code is publicly available and can be edited to create your own material. Tutorials such as this should help you get started with Jukebox, and you can try your hand at working with the code from Jupyter notebooks such as this. Even if you don't know how to code, following the instructions will get you generating music with Jukebox in no time.
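
Because raw Jukebox samples often come out with noisy or silent tails, a small post-processing pass can make them more usable. Here is a generic clean-up sketch with librosa and soundfile (the file names are placeholders, not anything Jukebox produces by default): trim silence, peak-normalise, and add a short fade-out:

```python
# Generic clean-up for a generated audio sample.
import librosa
import numpy as np
import soundfile as sf

audio, sr = librosa.load("jukebox_sample.wav", sr=None)

trimmed, _ = librosa.effects.trim(audio, top_db=30)   # drop silent tails
trimmed = trimmed / (np.max(np.abs(trimmed)) + 1e-9)  # peak normalise

fade = min(len(trimmed), sr)                          # ~1 second fade-out
trimmed[-fade:] *= np.linspace(1.0, 0.0, fade)

sf.write("jukebox_sample_clean.wav", trimmed, sr)
```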


Conclusion


In this post, we covered a few powerful and popular generative music platforms and AI models. This should serve as an ample introduction to getting you started with generating music through AI.


Beyond generating music on its own, AI can also augment your own music production process in interesting and powerful ways. Part 2 of our blog post goes over such AI tools.


Continue to Part 2 of this blog post here:

 

Further Reading & Links


Documentaries:


Other AI music projects:


Fun little music-AI experiments by google:


Academic resources for music AI papers:



