Handlebars, AI, and the Beginning of Infinity
How Flobots predicted the doomer vs. E/acc discourse
Handlebars by Flobots is a remarkable song. I distinctly remember listening to it around its 2005 release. As one of the few songs on my iPod at the time, I had probably played it hundreds of times, entranced by its powerful aura but too young to reflect deeply on its meaning. Musically, it draws on the trumpet-backed, semi-ska/rock sound of bands like Cake, while carrying a distinctive lyrical meter akin to Rage Against the Machine.
If you’re not already familiar with the song, it manages to compress into a tight three and a half minutes the fundamental tension between the literally infinite capacity for human creativity and invention and the will to power with which that capacity is so often associated.
The song has a simple refrain - “I can ride my bike with no handlebars” - perhaps the simplest accomplishment that is even somewhat impressive (look ma, no hands!). From there the singer’s accomplishments grow, if slowly: “I can take apart the remote control and I can almost put it back together.” The singer then realizes “I can do anything that I want” and goes on to describe his increasingly advanced technological innovations: “I can make new antibiotics, I can make computers survive aquatic conditions… me and my friends understand the future”. He finally describes the ultimate culmination of this power in a haunting climax: “I can end the planet in a holocaust”.
The song delivers this lyrical content effectively, the vocal intensity and instrumental layering building alongside the level of technological innovation. It has resonated with me strongly as of late.
The song was released in 2005, but it seems oddly prescient as of this writing nearly 20 years later in 2024. We are at the dawn of a new era in AI, where many of the technological advancements promised by Handlebars are already fulfilled by what are now out-of-date LLMs. GPT-4 can “tell you about Leif Erikson”, “know[s] all the words to ‘De Colores’”, and “can make you want to buy a product”; and arguably current AI models can “make new antibiotics”, “guide a missile by satellite”, and “make anybody go to prison”.
More than in 2005, the apocalyptic scenarios of holocaust presented in the song feel increasingly plausible to some groups who think carefully about the implications of AI. In particular, Eliezer Yudkowsky predicts that AI will likely make humans extinct, or at least precipitate civilizational collapse, in a matter of years to decades. At the same time, the e/acc faction, perhaps best exemplified by the pseudonymous poaster “Beff Jezos”, takes almost the opposite view: rather than creating an almost certain apocalypse, AI and related technological improvements will limitlessly benefit humanity by virtue of those very improvements.
Both groups grasp the fundamental insight that David Deutsch presents in “The Beginning of Infinity”, a book I would recommend more highly than almost any other I have read. At the risk of oversimplifying its thesis, Deutsch makes the case that anything not prohibited by the laws of physics is possible, and that intelligent beings (perhaps both humans and AI) are capable of creating explanations that allow them to achieve any such possibility. Every technological development humanity has created, from fire to nuclear submarines, is a corollary to this axiom: we create explanations whose validity is empirically tested, and then stack those explanations to build increasingly powerful technology.
What the book does not articulate so clearly is the power that necessarily accompanies such technological developments. Humans have of course always recognized this; the story has been told many times, from the myth of Prometheus to Mary Shelley’s Frankenstein. Throughout history, the creation of new technology has gone hand in hand with its military use - from bronze to gunpowder to radio to fission to AI. As I discussed in my last post, Culture as Memes and Agents, the technological advancements brought about by effective explanations are perhaps the single most important tool through which agents (read: both humans and AI) can exercise their will to power.
Fundamentally, what separates the doomers from the e/accs is their view of the will to power. AI “doomers” think about this will to power exclusively in the negative. They see the risk of an AI creating scientific explanations so rapidly that it quickly gains the power to harness the resulting technology and turn it squarely against the interests of humans. E/acc sees this power exclusively in the positive. They see the potential that these same explanations can offer mankind and place themselves at the vanguard of harnessing and commercializing that technology.
I believe both groups are overzealous in moving to an extreme position without considering the multitude of possible moral outcomes. There is no inherent moral virtue or vice in the creation of technology, only in its use. How one makes this moral evaluation ultimately depends on the moral framework one uses - something I intend to write more about soon. Suffice it to say, I believe a deep evaluation of our moral frameworks is more important than a blanket discussion of whether technology is “good” or “bad”.
Flobots presents a bleakly pessimistic case for the potential use of such technology. Society to date has found mostly beneficial uses for it, improving living standards worldwide by many measures, though perhaps worsening them by some.
As the unabated drive of technological explanation continues, those leading its creation should consider their power, their moral foundations, and the moral implications of the technology they create. Whatever moral framework you choose, you should not hope that the creation of technology alone will be enough to attain your desired outcome.