Artificial intelligence can now write wedding songs on command—from first-dance ballads to cocktail hour instrumentals—but the question no one seems to ask is the most important one: should it? As a wedding DJ who's watched music evolve through vinyl, CDs, MP3s, streaming—and now AI—I have thoughts…big ones. If you've ever wondered whether AI belongs at a wedding, or whether robot-generated music can truly replace human emotion, you're going to want to read this.
November 10, 2025
Artificial intelligence. It has crept into every corner of life, from the way we search for recipes to the way we schedule dentist appointments. Many people champion AI because it frees up so much of our time; without question, we are no longer enslaved to the mundanity of daily routine. AI lessens the guesswork, inconvenience, and responsibility of our everyday activities: self-checkouts shorten long lines at the register, kiosks ring up our orders at restaurants, algorithms recommend our viewing habits; hell, our phones finish our sentences! As a sci-fi fan, this should be cause for alarm. Shades of Skynet all around me should have me on the lookout for Terminators, but I confess, I've not given any of this much thought. I simply accepted it. Until recently.
What caused me to step back and reflect on the dangers of AI? The rise of AI-created music.
As a wedding DJ with nearly three decades in the booth, I am now apprehensive. I’ve watched music evolve, morph, stretch, reinvent itself, and occasionally implode under the weight of its own trends. But nothing—and I mean nothing—has felt quite as strange, fascinating, and unnerving as AI-generated music. Songs can now be generated with nothing more than a prompt. “Make it sound like Taylor Swift meets Journey but with lyrics about a golden retriever officiating our wedding.” And within seconds, BOOM—there it is, ready and waiting for my brides to walk down the aisle.
On the surface, it feels innovative. Maybe even magical. No need to wait for an artist to write the song, record it, produce it, or even feel something while making it. Type the prompt, click a button, and voilà: an emotional moment custom-built on demand. But I am becoming more uncomfortable and more conflicted by all of this, and not because I fear technology, and not because innovation is unwelcome. It was not so long ago that I DJed with vinyl. For hours, all I did was drop needles, cueing up each record in my set list by hand. It was impossible to leave the DJ booth; I was confined to it. When vinyl gave way to cassettes, cassettes bowed to CDs, and CDs surrendered to MP3s, I celebrated. My job became that much easier. Every shift brought something new: convenience, access, flexibility. And I adapted, because music evolves, and so must the people who play it.
But now, I find myself facing a new kind of shift—one that isn’t just technological, but philosophical.
Music has always been deeply, unequivocally human. And now, I'm being asked to consider whether something built from code and prediction can stand shoulder-to-shoulder with a song written from heartbreak, grief, joy, or love.
And the more I learn about where AI-generated music is heading, the more complicated it becomes.
If you think AI-generated songs are a novelty tucked away in a dusty experimental corner of the internet, think again. Here’s a number that nearly made me spill my coffee: as of late 2025, nearly 28% of all new tracks uploaded to the streaming platform Deezer were fully AI-generated. That’s roughly 30,000 new AI songs per day.
Thirty. Thousand. Every. Day.
And before we applaud the brave new digital frontier, here's the darker side: up to 70% of streams on those AI tracks have been tied to fraudulent activity. Think streaming farms. Think bots. Think systems designed to siphon pennies at scale from the same royalty pool that real musicians, the ones whose art we depend on every weekend, rely on to survive. AI isn't just creating music. It's crowding out human artists, clogging search results, and making it harder to find meaningful tracks in a sea of algorithmic noise.
Imagine scrolling through Spotify trying to find a heartfelt first dance song… but instead running into:
“Love Dance Wedding Ballad ft. AI-Voice #14”
“Romantic Marriage Slow Song (No Copyright Needed)”
“Bruno Mars-ish Wedding Song (Legally Not Bruno Mars)”
That's not a playlist—that's a graveyard of emotional counterfeits.
No matter how good AI gets, no matter how realistic the vocals become or how clean the production is, something remains missing:
The lived experience behind the music.
Eric Clapton didn’t write “Wonderful Tonight” because a predictive model suggested it had a high emotional engagement score; he wrote it while waiting for Pattie Boyd, his then-girlfriend and future wife, to get ready for a night out, capturing a moment so ordinary and intimate that it became universal. Journey keyboardist Jonathan Cain penned “Faithfully” on the road, wrestling with love, distance, temptation, and devotion in a way only a touring musician could. John Legend wrote “All of Me” with his future wife sitting beside him, the lyrics reflecting their real flaws and real tenderness. Alicia Keys wrote “If I Ain’t Got You” in the weeks after losing her close friend and fellow musician Aaliyah; it was grief and clarity poured into melody. And when Billy Joel wrote “Just the Way You Are,” he wasn’t engineering a hit. He was writing a love letter to his then-wife Elizabeth while their marriage was strained by financial problems following a contract dispute with his label.
Real music is built from truth—sometimes messy, sometimes imperfect, always human.
AI music is built from prediction: clean, calculated, and emotionally neutral.
Now, do people like AI music?
Sort of.
Surveys suggest about one-third of listeners are comfortable with AI creating instrumentals. But when it comes to full songs—vocals included—the number of comfortable listeners drops sharply. Roughly 44% of listeners admit they’re uneasy with fully AI-generated music.
Honestly, I get it.
A slow ballad sung by a neural network doesn’t land the same way as Etta James recording “At Last” after a lifetime of heartbreak. A model predicting emotion is fundamentally different from a human living one. At weddings, authenticity matters. Music isn’t just sound. It’s memory. It’s identity. It’s lived experience poured into melody. As DJs, we don’t just play songs; we curate emotional truth. We connect people to who they were, who they are, and who they’re becoming. Music has weight because a human being crafted it from joy, heartbreak, love, grief, nostalgia, rebellion, or celebration. When I play “Ordinary,” “Shallow,” or “Perfect,” I’m not pressing a button; I’m releasing decades of emotional history into the room.
So what happens when the song that brings a couple to tears wasn’t written from a heartbeat, but from code?
This is the tension.
And the trend is only accelerating.
On one hand, I understand the appeal. Couples can commission AI to create personalized music. It can be playful. It can solve oddly specific wedding dilemmas like, “We want a slow 83 BPM song mentioning our cats by name.” It can even rescue indecisive couples from endless Spotify scrolling. For some, the novelty alone is part of the fun. Some of these songs are genuinely sweet. But some feel like Mad Libs set to soft piano, and some cross into emotional territory that gets uncomfortable fast, like recreating the voice of a deceased parent to sing or speak at the wedding.
Beautiful? Maybe.
But also: ethically complex.
Just because we can doesn’t always mean we should.
When I hear AI vocals mimic an artist who never sang those words—Amy Winehouse resurrected to croon a song she never wrote—it hits me wrong. It feels like borrowing a soul that isn’t there. Music is one of the last sensory experiences that still feels uniquely human—messy, imperfect, emotional, unrepeatable. The scratches in a vinyl record, the crack in a singer’s voice, the rhythm mistake during a garage-band rehearsal—those things matter. They are the evidence that the song lived before it ever reached a speaker.
AI music challenges that. It blurs authorship. It disconnects feeling from creation. And as DJs who traffic in emotion—not just audio—we feel that.
But here’s the legal kicker: Fully AI-generated music cannot be copyrighted. The U.S. Copyright Office has made that clear—art requires a human author.
Translation?
A couple’s favorite AI-generated love song may technically fall into a legal gray area…and the AI model that created it may have been trained on copyrighted work without permission from the original artists. Record labels are already suing. The legal precedents are forming in real time. And DJs—whether we want to be or not—are standing right in the middle of the debate.
Talking with others in the industry, I have learned that I am not alone. As DJs, we all spend our lives working with music that comes from lived human experience: heartbreak, joy, innocence, chaos, defiance, hope.
When I drop “September” by Earth, Wind & Fire and the dance floor erupts, that’s not an algorithm succeeding.
That’s history.
That’s memory.
That’s culture.
That’s humanity.
And yet, I also know AI isn't going away. In fact, projections suggest that by 2028, AI-generated music could make up 20–60% of streaming catalogs and threaten up to 23% of musician revenue globally.
The pendulum is already swinging.
So where does that leave me?
Somewhere between curiosity and resistance. Somewhere between possibility and caution. Somewhere between “Wow, that’s impressive” and “Wait, that isn’t real?”
If a couple asks for an AI song, I’ll talk with them honestly. We’ll decide together whether it belongs in a meaningful moment or somewhere lighter, maybe cocktail hour or a novelty placement. If they insist, they win; after all, they hire me to play the music they select.
But when the lights fade, the room goes quiet, and the first dance begins? I will always advocate for a song written by a human being who meant it. Because music isn’t just something we hear. It’s something we feel. And until AI can feel heartbreak, beauty, fear, longing, hope, and love…humans will always do it better.