
Art Meets Algorithms — Was This Composed by a Person or An AI?

The boundaries seem to blur

Image created by DALL·E

Algorithms are the new gods of today’s world.

In a world that is evolving toward replacing doctors, drivers, lawyers, and many other professions with algorithms, the only human thing that seems to remain untouched is art.

Until now.

Why would art (music, in this case) be safe from AI?

If music comes from organic algorithms (a.k.a. humans) using mathematical patterns (time, pitch, dynamics, etc.), why isn’t there space for non-organic ones to do the same?

Bridging the musical gap

David Cope is a controversial musicology professor at the University of California, Santa Cruz.

He has created algorithms that can write music based on existing pieces. His first creation, called EMI (Experiments in Musical Intelligence), delved into the work of the most mathematical composer of all time, Johann Sebastian Bach. After seven years of hard work developing the program, EMI created 5,000 chorales in the style of Bach in one afternoon.

David released “his” first album in 1994, named “Bach by Design,” which included works composed in the style of Bach, Mozart, Prokofiev, and other classical composers.

Here’s one chorale in the style of Bach composed by EMI. Would you be able to figure out if an AI wrote it?

A similar thing happened at a music event with a sophisticated musical audience.

The battle for music supremacy

Enraged by this blasphemy of AI composing music, a professor at the University of Oregon, Steve Larson, challenged Cope to a musical battle.

Dr. Larson’s wife, pianist Winifred Kerner, would perform three musical pieces — one by Bach, one by Dr. Larson, and one by Cope’s EMI — while the audience (composed of professors, students, and music aficionados) would guess the author of each piece. Larson was convinced that people would easily distinguish the emotional human pieces from the logical creations of the machine.

The result?

The audience thought Larson’s music (the one that hit right in the feels) was composed by EMI, that EMI’s piece was actually by Bach, and that Bach’s was composed by Larson.

0% success rate.

This means that there’s space for AI music, and what we thought was exclusively a human endeavor isn’t anymore.

EMI later composed a piano concerto in the style of Mozart that was performed at the Santa Cruz Baroque Festival, baffling the musicians on stage.

The piano soloist later said: “It felt a little different than playing a normal Mozart work. But it was very much like a work of the same period. It was certainly in the ballpark.”

No one is indifferent to this new composer in town.

The next frontier: Annie

While EMI was a program designed to compose music in the style of various classical composers, Cope aimed to expand the capabilities of EMI with Annie.

Like its predecessor, the AI system uses machine learning techniques to analyze and understand the compositional styles of different composers, but Annie has some additional features:

  • Better machine learning: It uses more sophisticated algorithms to understand other composers’ musical styles, allowing Annie to generate compositions that are more faithful to the works of specific composers.

  • Expanded repertoire: While EMI primarily focused on emulating the styles of a few classical composers, Annie draws on a broader range of composers from different eras and musical traditions — a larger pool from which to extract musical elements for its compositions.

  • Improved variation: Unlike EMI, which tended to generate music that resembled specific compositions too closely, Annie aims to introduce more originality and creativity while still capturing the essence of the targeted composer’s style.

These enhancements allow Annie to create compositions that push the boundaries of AI-generated music in terms of creativity and authenticity.
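To make the idea of “learning a composer’s style” concrete, here is a deliberately tiny sketch — not Cope’s actual method, just an illustrative assumption — of the simplest possible style model: a first-order Markov chain that counts which pitch tends to follow which in a training melody, then random-walks through those learned transitions to generate a new melody with a loosely similar flavor. The corpus melody and all function names are made up for this example.

```python
import random

def learn_transitions(melody):
    """Count which pitch follows which in the training melody."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Random-walk the transition table to emit a new melody."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:          # dead end: restart from the opening pitch
            choices = [start]
        out.append(rng.choice(choices))
    return out

# Training melody as MIDI pitch numbers (a made-up C-major fragment)
corpus = [60, 62, 64, 65, 64, 62, 60, 64, 67, 65, 64, 62, 60]
table = learn_transitions(corpus)
print(generate(table, start=60, length=8))
```

Real systems like EMI go far beyond this — recombining whole phrases, voice-leading patterns, and structural signatures rather than single notes — but the core intuition is the same: extract statistical regularities from existing music, then sample new music from them.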

Now consider that Annie is already over 20 years old, and technology has advanced immensely in two decades.

With all the buzz around AI chatbots (ChatGPT) and AI-generated visual art (Midjourney, DALL·E), we are now part of the third revolution in AI tech with sound. And what Google’s MusicLM has done is unbelievable.

Check out the article I wrote about it here.

Takeaway

What was once considered exclusively human artistry is now being harnessed by algorithms that can compose music in the style of renowned composers.

And that’s just the beginning.

The third revolution in AI technology, focusing on sound, promises to open up new possibilities and reshape the music landscape in the upcoming years.

Brace yourself for a future where AI and human creativity intertwine.

If you enjoyed reading this article, consider chipping in a few bucks to support my work. It takes a ton of time and effort to research and write these pieces, and your donation would mean the world to me!

Donate here