AI Human vs Human AI

A brief history of transhuman music.

Most AI-generated songs now sound more human than most human-made songs. How did music go so horribly wrong? It all happened so fast, too! If you’re a Gen Xer like me, you’ve witnessed this whole process unfold from start to finish in just a few decades.

As a kid, I remember the first time I saw Depeche Mode performing live on some TV show. I was so confused. I heard drums, but I didn’t see a drummer!? What kind of sorcery was this? Remember, this was before the internet, so I had no way of finding out how they did it.

Then a year or so later, I saw these little black boxes in my local musical instrument store. “What kind of instrument is that?” I asked the long-haired store clerk, who was rather busy air-guitaring along to some ‘80s shred album. “It’s a drum machine,” he told me. “How does it play the drums?” I asked, with even more curiosity than before. He laughed, then explained that it doesn’t physically play drums; it’s a computer that replicates the sound of a drummer. I was shocked! “But then bands won’t need drummers anymore?” I said, hoping he would tell me I’d misunderstood his explanation. But no. He confirmed that the future indeed did not look good for drummers. “Are you a drummer?” he asked. “No, I play piano and guitar,” I replied. “Well, you’ve got nothing to worry about then!” he assured me, trying to put my young prophetic mind at ease. But the writing was on the wall, and it was obvious to all with eyes to see.

Subscribe to get the latest posts in your inbox.

The release of drum machines did two things, which together formed the catalyst for the death of music. Firstly, a lot of drummers lost their gigs. You see, the fewer people in a band, the more money each member makes. And remember, for working-class musicians like us who are just trying to scrape by, that increase can be the difference between paying rent and being homeless.
So I’m not judging any bands that replaced their drummers with drum machines. I know the struggle. The drummer in my band was my brother, so who knows, if it had been someone else, I might have done that too.

The second consequence of replacing drummers with drum machines was that all the drummers who still had bands, like my brother, were now being compared to drum machines. No matter how tight their playing, it was never as tight as the machine. This destroyed most drummers’ confidence, and it planted the false idea that having everything perfectly on the MIDI grid was best. This is a lie. The feel of a great drummer comes precisely from them not hitting everything on the grid. A well-played soul groove that’s slightly behind the beat feels a million times better than a drum machine, and a well-played punk groove that’s slightly ahead of the beat feels so intense compared to a drum machine. There’s nothing perfect about playing perfectly on the grid. It just feels dead. Like the machine. Also, the slight variations you get from a human drummer throughout each song are priceless, and they breathe life into every single bar.

Now, the next stage in the death of music occurred in 1997, with the release of Auto-Tune. This was the first time that the pitch of vocals could be corrected. It was dark sorcery, too. I remember the first time I experienced it. I had just walked into the beautiful Metropolis Studios building in London, and my engineer friend Rohan Onraet met me with an excited “Dude, you gotta see this!” as he quickly ushered me into Studio C, where he was working on a mix. He sat me down and proceeded to play a Before and After version of a vocal track. I still remember who the band was, but I won’t mention them, as the first rule of Studio Club is: You don’t talk about what happens in the studio! The difference between his Before and After versions of the vocals was shocking.
The singer was not very good, but after my friend had auto-tuned him, he sounded just as good as anyone else. I could not believe it. “That’s cheating,” I protested. “Yep,” he agreed. And after a dramatic pause, he added: “This changes everything.” Turns out he was far more prophetic than he realized.

By the early 2000s, pitch correction software had been adapted to tune any and all instruments, including guitar. Around the same time, rhythmic correction software was also developed. Now you could record a live drummer, then use software like Beat Detective to align every hit perfectly to the MIDI grid. This is why the drums on most recordings from 2001 onwards start sounding like drum machines, even when they were played by human drummers.

So 20 years ago, singers were already sounding like robots, and drummers were sounding like drum machines. Many bass guitarists had already been replaced with synthesizers by then, and the bassists who remained were heavily edited as well. For a brief period, we guitarists were the last musicians standing. But that didn’t last long, as pitch and rhythmic correction software started making guitarists sound like robots too, and then virtual guitars started appearing. As with all virtual instruments, they sounded terrible at first. However, it didn’t take long for them to start sounding convincingly realistic, and eventually they became indistinguishable from the real thing.

And that concludes the bizarre story of how human musicians turned themselves into robots, long before AI turned up. So by the time AI-generated songs began to infiltrate the airwaves in 2022, music already sounded robotic. But now in 2026, AI is able to replicate old recordings, so it’s sounding more human than the humans.
In fact, a recent study by Deezer found that 97% of people can’t tell the difference between fully AI-generated songs and human-made songs. They also discovered that around 50,000 fully AI-generated songs are now uploaded to streaming platforms every day, accounting for 34% of all daily deliveries. That’s utterly horrifying!

In my view, there is only one good thing that could come from generative AI: the fact that AI is working relentlessly to sound human. That should be a huge wake-up call for us! Think about it. If sounding like a robot were ideal, then AI wouldn’t be constantly learning how to sound more human.

There are very few things we can do without any effort whatsoever, but sounding human is one of them. So please, embrace your human imperfections. It’s precisely those imperfections that bring your music to life and make it soulful.

So this is my invitation to start being human again. If you’re a musician, don’t edit your recordings. Those little pitch or timing issues are the magic that makes you unique. And if you’re a producer, then I encourage you to get a MIDI keyboard and start practising. Even if you only learn to play your drum beats and bass lines on the keyboard, that will still add a significant human feel to your recordings.

On that note, if you need some help with your music, I’ve got you covered. From beginner to advanced, there’s something for you on my website. If you’re a beginner, start by reading my free book 12 Music Theory Hacks to Learn Scales & Chords. It only takes about half an hour to read, and then you’ll have a solid foundation in the basics. If you’re already making music, though, you can work your way through 30 free PDF tutorials. They’re step-by-step musical “recipes” you follow to instantly make better music. All genres are there, too: electronic to hip-hop, classical to metal, and everything in between. Enjoy!
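For the programmers among you, the on-grid vs. off-grid idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the function name and the ±12 ms jitter range are my own assumptions, not anything from a real DAW): it takes perfectly quantized note times and nudges each one slightly off the grid, the way a human drummer naturally would.

```python
import random

def humanize(grid_times_ms, max_shift_ms=12.0, seed=42):
    """Nudge perfectly quantized note times off the grid by a small
    random amount, mimicking a human drummer's timing variation.
    (Illustrative sketch only; range and seed are arbitrary choices.)"""
    rng = random.Random(seed)  # seeded so the result is repeatable
    return [t + rng.uniform(-max_shift_ms, max_shift_ms) for t in grid_times_ms]

# A bar of 16th-note hi-hats at 120 BPM: one hit every 125 ms, dead on the grid.
grid = [i * 125.0 for i in range(16)]

# The "human" version: every hit lands near the grid, but never exactly on it.
played = humanize(grid)
```

Shift the random range to mostly negative values and you get that laid-back, behind-the-beat soul feel; mostly positive values push ahead of the beat for punk-style urgency.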
On top of the free book, 30 free PDFs, and over 220 free YouTube tutorials, I don’t paywall any of these posts either. I don’t want to exclude anyone. But if you’re enjoying all these free offerings and want me to make more, please support my work by becoming a paid subscriber. It’s only about the cost of one coffee per month, but if enough people join, then I can pay the rent and keep doing this work. To sign up, please visit HackMusicTheory.com/Join. If you can’t afford to at the moment, though, no problem. You can give Hack Music Theory a 5-star rating in your podcast app; that supports my work too. Either way, thank you so much! And welcome aboard the Songwriter’s Ark, where all the music-making skills are being preserved through this global AI flood.

The flood shall pass. The skills will last.

Ray Harmony :)

About. Ray Harmony is a multi-award-winning music lecturer who’s made music with Serj Tankian (System Of A Down), Tom Morello (Rage Against The Machine), Steven Wilson (Porcupine Tree), Devin Townsend (Strapping Young Lad), Ihsahn (Emperor), Kool Keith (Ultramagnetic MCs), Madchild (Swollen Members), and more. Ray is also the founder of Hack Music Theory, a YouTube channel with over 10 million views and over 250,000 subscribers learning the fast, easy and fun way to make music without using AI, cos it ain’t no fun getting a robot to write “your” songs!

Outro music by Ray Harmony, based on the music theory from GoGo Penguin’s “Everything Is Going to Be OK”.