Artificial Intelligence and the future of music
"Music comes from the emotions, from the heart." We often hear this phrase, but professional musicians know that creating valuable material requires a synergy between emotion and theoretical knowledge. When these two elements work together, music becomes much better. Nonetheless… what about machines? They lack “emotion”, but they most certainly have the knowledge if they’re well programmed.
There was a time when the idea of machines replacing humans was laughed at. In 1956, the MANIAC computer became the first AI, or Artificial Intelligence, to defeat a human amateur at chess; not that surprising, skeptics would say.
By 1968 Larry Atkin, David Slate, and Keith Gorlen had written a program called Chess, which would later become the first AI to win a chess tournament in history. In 1981 Cray Blitz, another program, defeated master Joe Sentef in tournament play and was itself awarded the title of master. Finally, the imposing AI named “Deep Blue” arrived, and between 1996 and 1997 defeating the world champion became a reality: in the 1997 rematch, Garry Kasparov won just one of six games against the AI, drawing three and losing two.
We could simply say that chess is a game with high mathematical content, so computers were bound to out-calculate their opponents sooner or later, but we can’t deny just how remarkable technological advances have been over the years.
One example is today's artificial intelligence that can substantially improve stop-motion footage, converting a 30 fps (frames per second) movie into 60 fps.
[DAIN APP] 60fps Coraline fragment:
And this not only applies to stop motion, but also to animation in general.
Turning anime to 60 fps using AI:
In this case, the program examines each pair of images and generates the frames of missing movement between them. This is perhaps a perfect example of what humans and computers working hand in hand can achieve.
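The idea of inserting a frame between each existing pair can be sketched in a few lines. This is only a naive linear blend of adjacent frames, not the motion-aware interpolation a learned model like DAIN performs, and the clip data here is made up for illustration:

```python
import numpy as np

def interpolate_frames(frames):
    """Double the frame rate by inserting a blended frame
    between each consecutive pair (naive linear blending)."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        # Midpoint blend stands in for the motion-aware frame
        # a learned interpolator would synthesize.
        out.append(((a.astype(np.float32) + b) / 2).astype(a.dtype))
    out.append(frames[-1])
    return out

# A 30-fps clip of N frames becomes 2N-1 frames (~60 fps).
clip = [np.full((4, 4), i, dtype=np.uint8) for i in range(3)]
doubled = interpolate_frames(clip)
print(len(doubled))  # 5 frames from 3
```

A real interpolator estimates where pixels are moving and warps them to the midpoint in time, which is why AI-generated frames look far smoother than this simple cross-fade.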
Another interesting case is deepfakes, which consist of replacing one person's face with another's in videos, something that was previously only possible in photographs. This has led to hilarious results and genuine concerns: how can we tell whether material is real or fake nowadays?
Deepfake Mr Bean Compilation (Titanic, Superman, Trump and more):
Here's a more convincing one:
Jim Carrey as Joker:
Perhaps to better understand the learning curve of AIs, we have to observe the different learning stages through a game:
Google's DeepMind AI Just Taught Itself to Walk:
For a program to "learn" we need several phases:
- Step 1: Create the conditions and physics the program has to deal with to get from point A to point B.
- Step 2: Give the program the only instruction to reach the goal.
- Step 3: Wait and observe the many mistakes it makes. Through each error, the AI subtly corrects certain parameters in its movement.
- Step 4: After thousands of attempts, the program will be able to overcome obstacles more efficiently than a human.
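The four steps above can be sketched as a tiny trial-and-error loop. The "task" here is a hypothetical toy: the program must move a parameter vector from point A (all zeros) to point B (the goal), keeping only the small random tweaks that reduce its error — a bare-bones stand-in for the learning methods DeepMind actually uses:

```python
import random

random.seed(0)

# Step 1: the "physics" -- a score measuring how far the
# current parameters are from the goal (hypothetical toy task).
GOAL = [3.0, -1.0, 2.0]

def distance_to_goal(params):
    return sum((p - g) ** 2 for p, g in zip(params, GOAL))

# Step 2: the only instruction is "get the score down".
params = [0.0, 0.0, 0.0]
best = distance_to_goal(params)

# Steps 3 and 4: thousands of attempts, keeping each subtle
# correction only when it reduces the error.
for _ in range(5000):
    trial = [p + random.uniform(-0.1, 0.1) for p in params]
    score = distance_to_goal(trial)
    if score < best:
        params, best = trial, score

print(round(best, 4))  # error shrinks toward 0
```

Real systems replace the random tweaks with gradient-based updates and far richer reward signals, but the shape of the loop — try, fail, correct, repeat — is the same.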
The same applies to video games – any AI enthusiast must watch this fantastic SethBling video:
MarI/O – Machine Learning for Video Games:
Going back to music: we could say this subject hasn’t been explored as much as chess, but there are already various programs capable of creating music from a collection of pieces. A fantastic example comes from the Sony CSL team, which fed the entire Beatles catalog to a computer and received back a song that sounds uncannily familiar.
To be perfectly fair, we don't know how much of the final material came from the computer's output, as the song was later polished, arranged, and produced by Benoît Carré. However, it’s still an interesting experiment.
Daddy's Car: a song composed by Artificial Intelligence – in the style of the Beatles:
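The simplest way to "create music from a collection of pieces" is a Markov chain: learn which note tends to follow which in the corpus, then walk those probabilities to generate something new. This is not how Sony CSL's Flow Machines works — it is a far more sophisticated system — but it illustrates the corpus-to-song idea; the tiny melodies below are invented for the example:

```python
import random
from collections import defaultdict

random.seed(7)

# A tiny made-up "catalog": melodies as note-name sequences,
# standing in for a real corpus such as the Beatles songbook.
corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "A", "G"],
    ["E", "G", "A", "G", "E", "D", "C"],
]

# Learn first-order transitions: which notes follow each note.
transitions = defaultdict(list)
for melody in corpus:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def generate(start, length):
    """Walk the transition table to produce a new melody."""
    note, out = start, [start]
    for _ in range(length - 1):
        note = random.choice(transitions[note]) if transitions[note] else start
        out.append(note)
    return out

melody = generate("C", 8)
print(" ".join(melody))
```

Because every generated transition was heard somewhere in the corpus, the output sounds vaguely "in the style of" the input — which is exactly the uncanny familiarity the Daddy's Car experiment played with, at a vastly larger scale.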
We also have AIVA (Artificial Intelligence Virtual Artist), currently available to anyone willing to pay a monthly fee, with its results being hit-or-miss.
Through Time and Space – AI Algorithmic Composition by AIVA:
Nor can we overlook the international conquest that virtual pop star Hatsune Miku has achieved over the years. She is a virtual singer whose voice library comes from the VOCALOID plugin. It’s hard to grasp how Miku went from being a vocal plugin to the first computational singer able to fill stadiums, performing entirely as a hologram, but it’s a great reminder of the digital potential for ages to come. And the best part? She’s accompanied by flesh-and-blood musicians.
Hatsune Miku Magical Mirai 2016:
Some musicians and producers may fear being replaced by machines in the future, but there’s still a long way to go before that happens: we haven’t even begun to grasp all the possibilities technology has to offer, and even though development over the decades has been remarkable, human labor remains vital.
This was the case with the advent of synthesizers in the 1970s and MIDI programming in the 1980s. Some claimed it was the end of live musicians, but these turned out to be incredibly useful tools that eased and expanded the possibilities of creating valuable productions at home.
Some theorize that in the future machines could take over the heaviest and most repetitive tasks, while humans would keep the most pleasant ones. Another possible scenario is that, as demand for the technology grows, humans take on more and more responsibility for programming AI, so that it doesn’t turn on its creators and becomes ever more efficient, perhaps creating more jobs along the way.
From my point of view, musical composition will always need a personal touch, which is difficult to emulate; we might as well consider it our special voice. Music will probably have to become more experimental and transgressive in order to distance itself from overused formulas, as John Cage did with works like Organ²/ASLSP, whose now-famous performance in Halberstadt is scheduled to last 639 years. Creativity like his won’t be available to AI for a long time, but then, we’ve already seen what happened in chess. Ouch.
Either way, the future is catching up with us.
If you enjoyed the article, you'll love these games: