Artificial Intelligence and Electronic Music
- Thom Holmes
- Sep 29
- 5 min read
My Podcast: The Holmes Archive of Electronic Music
My blog for the Bob Moog Foundation.

The field of artificial intelligence (AI) has grown enormously in the three years since this book was last revised. AI is evolving so rapidly that it would be ludicrous to attempt any broad predictions about its state in another three years. However, from the position of the electronic musician and composer, I think we can recognize some trends worth noting now in how AI can affect the composition of new experimental music.
Predicting how AI will grow in the arts is challenging at the moment. But I am neither scared nor appalled by what is happening: AI no more threatens what we do in electronic music than digital recording did when it replaced tape recording, or the computer did when it became available to automate processes and cut production time. The history of electronic music continually surges forward with the help of emerging technologies. Inventors and composers have been successful at finding innovative ways of making music with tools that were never intended for that purpose. For example, while magnetic tape and the turntable were originally intended solely to play back sound, musicians took a leap of faith and imagined how original music could be crafted with these tools.
We must somehow put aside the realization that AI is already a huge commercial enterprise. It is quickly becoming a ubiquitous part of the day-to-day operations that drive businesses and institutions, while its creators guard their proprietary code so that they can build billion-dollar companies. Unfortunately, in doing so, those same corporations have walled off access for all but a privileged few who have an academic or artistic interest in finding ways to use AI for human benefit.
So, we experiment with what we’re given, which is essentially a generation of AI that accepts natural language instructions and generates music. But there’s a catch: use that kind of AI at your own risk, because if we’re dumb enough to want to own our creations, that isn’t possible when the output was generated by AI. The corporations that provide the AI systems own the output. AI also creeps into the domain of programming, where much coding can be automated and run without much human intervention. In music programming, musicians are finding ways to progressively train AI to write algorithms for making music, for example creating patches for Max, though as of this writing with little success. This is destined to change. I think having AI do more of the coding would free the composer to think more conceptually about their music (see the sketch below).
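To make that last point concrete, here is a minimal sketch of the kind of small generative algorithm a composer might ask an AI coding assistant to write so that they can stay at the conceptual level. It is plain Python rather than an actual Max patch, and every name and parameter in it (the pitch set, the rest probability, the function name) is my own illustration, not output from any AI system.

```python
import random

# A toy generative process: a constrained random walk over a pitch set,
# with randomized durations and occasional rests. All values here are
# illustrative choices, not output from any AI system.

PITCH_SET = [60, 62, 63, 65, 67, 70]   # MIDI note numbers, a C-minor-ish scale
DURATIONS = [0.25, 0.5, 1.0]           # durations in beats

def generate_phrase(length=16, rest_chance=0.2):
    """Return a list of (midi_pitch_or_None, duration_in_beats) events."""
    events = []
    index = random.randrange(len(PITCH_SET))
    for _ in range(length):
        if random.random() < rest_chance:
            events.append((None, random.choice(DURATIONS)))   # a rest
            continue
        # Step down, stay, or step up within the pitch set, clamped to its bounds
        index = max(0, min(len(PITCH_SET) - 1, index + random.choice([-1, 0, 1])))
        events.append((PITCH_SET[index], random.choice(DURATIONS)))
    return events

if __name__ == "__main__":
    for pitch, duration in generate_phrase():
        label = "rest" if pitch is None else f"note {pitch}"
        print(f"{label:8s} {duration} beats")
```

The point is not the code itself but the division of labor: the composer specifies the musical constraints (which pitches, how much silence) and leaves the mechanical translation into working code to the machine.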
That said, there is some work in academia, arts institutions, and the music community itself, where musicians, researchers, and scholars are trying to turn the commercial thrust of AI on its head and leverage it for the creation of music. This podcast provides some examples of such work, drawn from the AI Music program at the University of California at San Diego; IRCAM, the French music institution that grew from experiments with electronic music; and several ready-made AI platforms that have been purpose-built to create music for gaming, videos, and online platforms. I also searched for the AI engine that came closest to providing me with what I would deem music of a truly experimental nature. I found it in DeepAI, dedicated to generating music for background tracks, sound effects, and soundscapes. I wanted to see how it handled noise music, and it was the only one I tried (and I tried many, including OpenAI MuseNet, Suno AI VX, Google’s Magenta, Amper Music, and AIVA) that did more than churn out a canned concept of experimental, which for the others generally meant beat-driven rock music or harmonic ambient waves.
Episode 181
Artificial Intelligence and Electronic Music
Playlist
Track | Track Time | Start |
Introduction | 05:42 | 00:00 |
1. Cornelius Cardew, “Treatise: String Orchestra” (2025). The first of three AI interpretations of a piece composed by Cardew between 1963 and 1967. The work was written as a graphic score comprising 193 pages of lines, symbols, and various geometric or abstract shapes that largely stray from conventional musical notation. The team of Professor Shlomo Dubnov at the University of California at San Diego used the score (pages 1 to 33) as the basis for an improvisation, interpreting it with the help of OpenAI’s ChatGPT-4o and a generative model they developed themselves, the Music Latent Diffusion Model (MusicLDM). The recordings show how AI can transform visual stimuli into sound and expand on that interpretation in an experimental music composition. This version is arranged for digital string orchestra. | 11:23 | 05:54 |
2. Cornelius Cardew, “Treatise: Sinewave” (2025). This version from Dubnov’s lab was arranged for sinewave generator. | 11:15 | 17:10 |
3. Cornelius Cardew, “Treatise: Experimental” (2025). This version from Dubnov’s lab was arranged for a mix of instruments defined as “experimental” by the team. | 11:32 | 28:24 |
4. Valérie Philippin, “Extraits de recherche” (Research Excerpts) (2024). Vocal interaction experiment conducted with vocalist Valérie Philippin while she was in artistic residency with the European Research Council (ERC) REACH project at IRCAM. Real-time AI interaction using the Somax2 program. Voice: Valérie Philippin; Somax2 and electronics: Mikhail Malt. | 03:52 | 39:48 |
5. Horse Lords and The Who/Men, “Zero Degree Machine” (2023). Horse Lords in concert for the ERC REACH project, using Somax2 to interact with the performers and add new parts and instruments in real time. If you hear something other than guitar, drums, bass, and sax, it was created by Somax2. You may also detect loops of instruments (e.g., saxophone), because Somax2 adds to the mix. Horse Lords: Max Eilbacher, bass and electronics; Sam Haberman, percussion; Owen Gardner, guitar; Andrew Bernstein, percussion and saxophone. The Who/Men: Gérard Assayag and Mikhail Malt, REACH interactive AI (Somax2); Marco Fiorini, REACH interactive AI (Somax2) and electric guitar; Manuel Poletti, computer music production at IRCAM. The Who/Men provided guidance for Somax2 in real time, operating different instances of the program on their laptops. | 18:45 | 43:42 |
6. PintoCreation, “AI-generated Sci-Fi and Visual Storytelling” (2025). An example of how task-specific AI is being used to generate videos with electronic music soundtracks. This is an excerpt from one of the soundtracks of the many videos they have generated for their YouTube channel. | 07:54 | 01:02:26 |
7. Artificial Intelligence Music, “Melodic Techno” (2025). Excerpt of AI-generated techno music found on this YouTube channel. They explain that the music found there “was composed by an AI, meticulously trained on the nuances of this captivating genre.” I have no idea what AI engine was used, but this is just one example of how many music producers are getting onto the AI train. | 06:51 | 01:10:17 |
8. Atmoscapia, “Calm Ambient” (2025). A purpose-built generative ambient music creator for “Films, Games, YouTube, and Creative Projects.” Billed as an “Instant Ambient Music Generator For Content Creators,” you use it by selecting a style and a length of up to an hour. In this case, I chose the style “Calm, Meditative, Dreamy.” Two other categories are provided, “Cinematic, Dramatic, Emotional” and “Dark, Horror, Suspense,” and those are the extent of the current choices in the free version. It delivers a soundtrack that you can download. | 10:00 | 01:17:08 |
9. Thom Holmes, “Thom DeepAI Noise Music” (2025). In an attempt to generate something more experimental using an AI system, I turned to DeepAI and gave it the following instructions: “Experimental, noise sounds. No melody, no harmony, no rhythm. Randomized intervals of silence. Randomized mood swings.” The result was short, as I was not using the premium version, but it came closer than the other AI programs I tried to creating a work aligned with experimental music (a sketch of how such a prompt might be submitted programmatically appears after the credits below). | 1:45 | 01:41:49 |
Opening background music: Ambient music generated by the Atmoscapia AI system using the “Dark, Horror, Suspense” setting (excerpt).
Introduction to the podcast voiced by Anne Benkovitz.
Additional opening, closing, and other incidental music by Thom Holmes.
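For readers curious how a prompt like the one used for track 9 might be submitted to a music-generation service programmatically, here is a minimal sketch. The endpoint URL, request fields, and response fields below are hypothetical placeholders, not DeepAI’s documented API; only the standard `requests` usage is real. Consult the provider’s documentation before adapting it.

```python
import requests

# Hypothetical sketch of calling a text-to-audio service with a noise-music
# prompt. The URL, request fields, and response fields are placeholders,
# NOT a documented DeepAI endpoint.
API_URL = "https://api.example.com/v1/text2audio"   # placeholder endpoint
API_KEY = "YOUR_API_KEY"                            # placeholder credential

prompt = (
    "Experimental, noise sounds. No melody, no harmony, no rhythm. "
    "Randomized intervals of silence. Randomized mood swings."
)

response = requests.post(
    API_URL,
    headers={"api-key": API_KEY},
    json={"text": prompt},
    timeout=60,
)
response.raise_for_status()

# Assume (an assumption, not documented behavior) that the service returns
# a URL to the rendered audio file.
print("Generated audio at:", response.json().get("output_url"))
```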