It’s the dog days of summer, time to address the issue that has been dogging music for years: Auto-Tune. As more and more musicians incorporate the pitch-correction program Auto-Tune into their recordings and live performances, it has gained acceptance among a widening gyre of artists, not just in electronica and hip-hop but in rock and country as well. In fact, some listeners hear the fingerprints of Auto-Tune everywhere: not only T-Pain, Kanye West and Daft Punk, but also Bon Iver, Cat Power, Tim McGraw, Madonna and Sufjan Stevens. In case you missed it, the shift into the era of Auto-Tune has already occurred.
The battle rages. For some listeners, Auto-Tune is just another spice in the musical stew, no different from any other instrument. For others, it is gimmicky, faddish and makes the human voice sound robotic. I fall clearly into the latter camp, with one caveat.
Auto-Tune is so ubiquitous, so misused and abused, that it is difficult to single out one example of badness, but it seems more than fair to start with a song like “I Gotta Feeling” by the Black Eyed Peas. As much as I dislike the song, it is undeniably catchy pop and not nearly as loathsome as some of the group’s other bombs. Still, the voices sound shimmery and shiny. No musical instruments were harmed in the making of this dog of a song. As a friend of mine likes to say, this is music that makes itself. Aside from the mediocre (and modified) singing, the song consists of a beat with blips.
This leads the listener to the conclusion that Auto-Tune is the musical equivalent of plastic surgery, a way to overlay what is natural with a layer of artificiality. The analogy is fair. Personally, I enjoy electronica as a genre and often find its computer-generated exotica entrancing, but why does the human voice also need to sound like a computer? Yes, Auto-Tune covers up musical ineptitude, the inability to hit a certain note. But my real problem with Auto-Tune is that it sounds so self-consciously, laughably bogus. Auto-Tune is bad CGI.
Speaking of CGI, I recently watched the trailer for the new film The Girl on the Train. The entire trailer is overlaid with some kind of Auto-Tuned garbage. Auto-Tune is no longer the product of a niche market; it sells (or at least hopes to). What intelligent listeners once mocked and derided is now mainstream musical pabulum served to mass audiences.
When the mother of all Auto-Tune songs, Cher’s “Believe,” was released in 1998, her voice sounded comically fakey-fake, as if she were singing with an iMac over her noggin. Now Auto-Tune is threaded more seamlessly through songs, without a sense of irony or the shame of musical ineptitude. Auto-Tune is everywhere. “Believe” remains a watershed song, and it still sounds just as positively dreadful as it did the day I first heard it ooze through some mall speakers somewhere. The songs that have taken up its mantle sound just as bad (to my ears at least), if not worse.
What started as kitsch has become standard issue, even among singer-songwriters with actual talent. Take Justin Vernon’s Bon Iver, the darling sensitive rock band of the 21st century, winning Grammys and raising goosebumps on the arm hairs of listeners everywhere. Yet Bon Iver has, on occasion, taken to heavy doses of Auto-Tune (e.g., “Woods”), to appalling effect. Justin Vernon has an incredible voice; why modulate what doesn’t need modulating? As for Cat Power, whose early albums were the height of indie, et tu? She came up as a folky, minor-key depressive, glorious in her fragile shell. But a few years ago she used Auto-Tune on “3,6,9” (among others), a catchy song but disappointingly overproduced and glossy. I can live with Auto-Tune when it is confined to the pop music I ignore anyway, to me the musical equivalent of Taco Bell, but when real musicians sink to it, the result is particularly painful. Now I’m a paranoid listener; I even believe David Bowie’s last album used a smattering of Auto-Tune.
To take the issue one step further, Auto-Tune represents a larger acceptance of computer-generated music on the listener’s part. This should not be a surprise: we are already wired 24/7, so why wouldn’t we want our music to reflect our love affair with electronic devices? Auto-Tune is simply Siri with a beat. As Matt McFarland reported in the June 6 Washington Post, Google’s Project Magenta is creating music by itself, with no human involved beyond the initial coding. AI is the most promising new composer on the scene, and one with no half-life. “The song was created with a neural network—a computer system loosely modeled on the human brain….” An earlier classical composition entitled “Nasciturus” was created at Spain’s University of Malaga, and a computer system at Yale called Kulitta regularly produces classical compositions. Computer-generated music is most likely an unstoppable train. “OK, Computer,” indeed.
So what does this all add up to? It depends on how the electronically modified voice sounds to your ear. Six years ago Anna Pulley wrote a piece for Mother Jones titled “Is Auto-Tune Killing Pop Music?” Her answer was, essentially, yes. An updated version might run under the title “Will People Still Make Music in the Future?” The answer, of course, is yes. But we face a musical backdrop that is more and more accepting of complete artificiality. Electronica creates other-worldly soundscapes to great effect, but when the voice, musically speaking the most human element of all, is also a digital wash, what do we really have left?
In the year 2016, are we cyborgs, half-human, half-computer? Google Glass and similar conceits may have failed abjectly, but watch the behavior of pedestrians in Washington, D.C., where eyes rarely lift from a digital device, and I’d say the answer for many of us is a resounding yes. Auto-Tune is what we want, what we have come to accept as normal. We cyborgs want music for cyborgs.