Hitting the wrong notes: AI’s troubled relationship with music
Artificial Intelligence (AI), especially Large Language Models (LLMs), has transformed the music industry, helping with music production in various ways, from augmenting traditional music-making tools to generating full-scale “original” music.
While AI no doubt expands possibilities for the music industry, it increasingly comes at the cost of copyright infringement, a lack of compensation for artists, and a proliferation of low-quality, forgettable music. The recent news that Spotify, a popular and profit-driven music-streaming platform, is pushing AI-generated music has further strained the already tenuous relationship between music and AI. Naturally, these concerns have also affected music education, as students and educators grapple with what it means to learn, create, and teach music in an era shaped by AI.
Dr. Robert Komaniecki (he/him), a lecturer in music theory at the UBC School of Music, speaks to these issues in this Q&A. He argues that AI is not only taking jobs away from musicians but also producing music riddled with mistakes that even a high school musician would avoid. Dr. Komaniecki urges us to consider how, by giving ourselves over to AI, we might eventually lose our ability to be musical at all.
How does using AI to learn and create music differ from traditional methods?
In my experience over the past few years, students are not using AI to learn music, but rather to complete assignments that they may view as secondary to their musical goals. Even now, after years of refinement in LLMs, the major models struggle to “teach” music in a reliable way. I’m not sure whether it’s the level of abstraction, the artistic subjectivity, or the mix of alphabetic and numerical symbols, but LLMs continue to make musical mistakes that wouldn’t be made by a high school musician. For now, at least, LLMs are simply worse at teaching music than trained music educators.
As for creating music with AI, it is a spectrum: at one end are prompt-generated songs, and at the other, quasi-random assemblages of inputs in DAWs (Digital Audio Workstations) used as a starting point. I am interested in a narrow subset of AI-assisted music, wherein electronic musicians prompt an AI system to create an instrument or sound according to their descriptions. This differs from the traditional method of creating electronic music, where a musician conceives of a sound by experimenting with and manipulating various sonic parameters in a DAW or synthesizer.
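To make that contrast concrete, here is a minimal sketch of what “manipulating sonic parameters” can look like, assuming Python with NumPy. The function `synth_tone` and its parameter names are my own illustrative inventions, not an example from the interview or from any particular DAW or synthesizer.

```python
# A toy illustration (not from the interview) of parameter-driven sound design:
# a two-oscillator tone where frequency, detune, and envelope decay are the
# "knobs" a musician might tweak by ear in a synthesizer.
import numpy as np

SAMPLE_RATE = 44100  # audio samples per second

def synth_tone(freq=220.0, detune=1.5, decay=0.4, seconds=2.0):
    """Render a two-oscillator tone with an exponential decay envelope."""
    t = np.linspace(0.0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    osc1 = np.sin(2 * np.pi * freq * t)             # base oscillator
    osc2 = np.sin(2 * np.pi * (freq + detune) * t)  # detuned copy adds slow "beating"
    envelope = np.exp(-t / decay)                   # how quickly the sound fades
    return 0.5 * (osc1 + osc2) * envelope           # mix, scaled to avoid clipping

# Each call is one "experiment": nudge a parameter, listen, repeat.
bright = synth_tone(freq=440.0, detune=3.0, decay=0.2)
warm = synth_tone(freq=110.0, detune=0.7, decay=1.2)
```

The point is the workflow: in the traditional method, the musician iterates on values like `freq`, `detune`, and `decay` by ear, rather than describing the desired sound to a prompt and accepting whatever comes back.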
What are the major risks and opportunities that AI poses to music education and production, both within and beyond the university?
AI-generated music presents significant risks to music production in that it allows profit-driven businesses another opportunity to obtain art without paying an artist. We are seeing this already: Major music investors are announcing the “signing” of AI “artists” to their labels, complete with AI-generated visuals, social media presences, etc. AI-generated music is likely to be used in situations where the music itself was always an afterthought, such as background music, children’s programming, or some TV scoring scenarios. This will result in fewer jobs for musicians as AI continues to nibble around the margins of our already-strained business.
The major risk that AI poses to music education is in the deleterious impact it is having on our pupils. Use of generative AI to complete assignments is making students less literate, less capable, less trustworthy, and less musical. Music educators, like any educators, are fighting against an increasing trend toward a specific type of quasi-illiteracy, where students can technically read and understand short ideas, but are unable or unwilling to engage with longer texts.
Tell us about your approach to teaching students about AI practices.
I tell students that the spectrum of what is considered “AI” is quite broad, and that it includes assistive tools such as spelling and grammar checkers. I also make clear that I do not permit generative AI such as ChatGPT to complete class assignments. I understand that it is impossible for me to reliably “catch” every instance of LLM use in course assignments, so I try to frame the discussion in a way that casts the students and me as part of a broader effort toward learning. I remind students that accommodations can be made if they are running low on time to complete assignments, and that I would much rather receive a flawed piece of writing produced by a student than a paragraph copy-pasted from ChatGPT.
I do not use AI in my courses except to demonstrate its shortfalls. In one instance, I generated a song from an AI prompt, and the students contributed to a discussion in which we pointed out its various technical flaws and hypothesized about why AI had difficulty with those parameters of music.
What do you believe are the biggest AI-related opportunities and/or challenges facing faculty and students in Music?
To be completely honest, I believe the greatest challenge faced by students and faculty is the same: AI presents an alluring shortcut for many parts of our daily lives, from emails to dissertations. In my experience, using AI in ways we think are “inconsequential” results in an increasing deluge of AI in more prominent roles, thus endangering our jobs and (most importantly) lowering the quality of our work.
This is part of an ongoing GenAI op-ed series, Arts Perspectives on AI (Artificial Intelligence), that features student and faculty voices from the UBC Faculty of Arts community.
Ritwik Bhattacharjee is a PhD candidate in the Interdisciplinary Studies Graduate Program and a Communications Specialist for the UBC Faculty of Arts.