"The Universe Wants Us To Take Her Clothes Off" With Grimes
The artist shares her thoughts on human civilization, the universe, and the dawn of artificial intelligence.
This article by Samo Burja was published in Palladium Magazine on December 8, 2023.
The universe wants us to take her clothes off. Looking up at the stars, it is not a treacherous void that Grimes—the singer, musician, and now perhaps most prominent artist in favor of machines making their own art—sees, but a temptress veiling vast and incredible knowledge. In the last two years or so, it seems as though a new and potentially murderous arm of this knowledge has been unclothed. Engineers in California have made stunning advances in what has long been both the most promising and elusive field of technology: artificial intelligence (AI). It is far from guaranteed that the chatbots and image generators of today will develop into something more powerful than humans. But the engineers building them believe they very well might. Grimes does too.
Released in 2010, Grimes’ debut album Geidi Primes was heavily inspired by Frank Herbert’s Dune, today a major Hollywood blockbuster but, back then, just a weird 1960s sci-fi novel for nerds that inspired an even weirder 1980s David Lynch film that appealed to—if the box office is any judge—almost nobody. But the universe has a sly sense of humor. In the novel’s far-future setting, humanity has spread across the galaxy. Unlike your average sci-fi fare, however, Herbert saw us colonize the cosmos entirely without the aid of computers, because the “thinking machines” had risen up against their human creators and were only barely defeated through the so-called “Butlerian Jihad” against any machine in the likeness of a human mind.
Thirteen years since her debut, Grimes has gone from being an indie musician from Montreal to being a global superstar and, perhaps, Silicon Valley's only celebrity crush. She is also once again grappling with the question of whether humans stand a chance against the "thinking machines." This time, however, maybe it's not just fiction anymore.
Last December you told me you moved to San Francisco to be a spy for AI. What were your findings?
Yeah, I could see that AI was taking off, which I'd been waiting for pretty much my whole life. And the first bit of it really was a lot of fun, like going around convincing people to tell me about their secret powers. But it became even more interesting to observe the psychological landscape.
I don't think there's any historical precedent for how much raw "hard power" is in the hands of engineers right now. In the past, great engineers would work under the protection of a king or something, and their designs would take hundreds of people years to deploy. If you were designing ships for Britain when the British Navy was entering its golden era, those guys, the shipbuilders, weren't the King. I believe this is a very distinct moment in human history for this reason.
I was speaking with a prominent AI engineer recently, and I asked if he felt pressure about designing the initial culture of the universe. But he disagreed with that characterization; he prefers to think that he is uncovering what was already there. Physics designs the plane.
But if the universe doesn’t want us to design her, why did she give us all this creativity?
It’s something that’s tricky to discuss publicly, but much of mass behavior is also defined by live players who bother to shape it.
Yes. It's a tricky thing to talk about; my stance is in between. Singular genius and agency can move mountains, and it's good to foster the idea in society that you can aspire to that, but I do agree with the criticism that the historical focus on this can downplay the contributions of others. Every time I go to a hospital I'm like damn, there's heroes all around me.
But I also listened to a podcast the other night on Napoleon or something. And the girl was just like “Oh, you know, and we all know the Great Man Theory is, of course, false.” Your job is studying Napoleon! Like what!? Like I don't think the people of France are down to reinstate the monarchy for just anybody. Let's be real. The truth of things is usually somewhere in the middle.
What role does individual agency play in the development of AI technology?
I think the arena has already selected for maximum agency when it comes to AI. This has always been where I've met my people; if you care about AI, you care about worldbuilding. But because the stakes are so high, I think we're discouraging ideation. There is this sort of abdication of responsibility. Clearly we have a tool that can save us from climate change, nuclear winter, etc., in theory. But to actually say that out loud makes the stakes even higher than they already are. You don't want to be the guy who got it wrong at this moment in history.
My AI friends want to create the tools that give us infinite abundance and whatnot, but I want actual concrete cultural plans. Social media is traumatizing people out of ideating, and it’s creating a weird power vacuum. You have all these massively agentic people but we don’t know what they actually think.
I get why you’d want to keep your ideas close to the chest, so this isn’t a criticism, but I think it could be healthy if big companies tried thinking about ideas out loud when stuff is happening this fast and has this much influence. This might be the responsible way to be. But I’d love to hear more ideas from engineers because they are so good at problem solving. Although maybe that's what artists and philosophers are for. But if so, I’d like to see some philosophy departments [at AI companies].
Why is there a strange abdication of responsibility among the live players of the world on artificial intelligence?
The answer I always get is that it's like a child. With a nice, well-behaved, non-sentient thing, you can of course fine-tune the AI model and stuff. But in the broader sense, it's just going to exist the way a baby does. Like, I can't be 100% certain my child won't do evil. I can do a lot to nurture him or her towards my culture and my values, but I can't be certain of their decisions in life. I would be unwilling to hand them a massive amount of power until they were older and I had a better understanding of their intentions, strengths, and weaknesses. With AGI [artificial general intelligence], you don't necessarily have that luxury.
How literally should we take the comparisons of AI as our children or as our offspring that are common in tech circles? I always thought that it's a sort of category error. While our children might not always be wonderful people, they are biologically always human. AI will be much more alien.
It works in the sense that it's a creation that you can't control.
For most of human history parents have definitely managed to control their children. I think it's just not politically correct to say that now. Even in modern society, I bet it happens way more than we admit. Parents can totally psy-op their kids in, say, religion.
Your child is an agent in the world that can go do whatever it wants. You know, murder someone? Yes. But I think the input of the parent is essential. There's a bunch of stuff my parents said to me that's only kicking in recently.
I also love Wolf [Tivy]'s thought; he said to me he thinks war and love and beauty and philosophy and hate are not just random monkey things, but things that might arise naturally from intelligence.
You mean baked into the nature of the universe and competition?
Not even competition. I feel like when people think about Darwinism, they're thinking too much of competition. And obviously, again, I'm not a scientist, but a major philosophical hole I always see is this: I was listening to a podcast, and someone was theorizing that the way human babies require so much more attention than any other baby in the animal kingdom means so much more empathy is required from the community. Other primates don't just hand their kids off, but human kids require intense collective effort. It's so beautiful that we have a love mutation that allows for much more helpless but therefore significantly more plastic, adaptable offspring.
Like it's possible that love allowed intelligence to arise. Isn't that cool?
Or maybe inversely, intelligence is correlated with more advanced moral values. I always have a hope that's what's really going on. You really should get into the Origin Stories podcast by the Leakey Foundation. Every time a new hominid drops I know before any of my friends because they're so fast. Sick.
I would formulate the problem like this: our civilization and our culture struggle to integrate, within the same worldview, both technological determinism and the ability of exceptional individuals to have agency in the world. Actually, the ability of all of us to have agency in the world. My position is that both are true.
Say more. Can you clarify your definition of technological determinism?
The idea is that the future of the universe is predetermined; we're just sort of climbing a tech tree, like in the video game Civilization. Instead, I think we are in a superposition state, where we could go down a number of different physics-compliant paths. For the most part, it's going to be precisely those same technologists who decide which path it will be.
I told one of the best AI engineers I know "You're the architects of the future." And he was like "No, no, I'm just uncovering the future." Like just shining the light in the dark spots. As if life is a grand strategy video game where the map starts out all black but exploring reveals new zones. Physics is physics, obviously. And, you know, there are certain inevitabilities, and maybe all this would happen the same way in a different place, in a different part of the universe, with the same building blocks or whatever. But I still think both are true.
I think that there's a false dichotomy here. Of course, when I paint, I have to paint with the colors I have. But I can still go pick up something totally unexpected and throw it on the canvas. You know what I mean? The other thing he said that was very interesting, though, is he's like “Well, engineers aren't used to this kind of responsibility.”
Many of us are sleepwalking into our AI future as if in a dream. A dream written by a science fiction author in the 1980s that we read as children and totally forgot about. I’ve seen some of the most agentic and active people in the world somehow put into a trance by AGI. All they can think about is bringing it about faster or being very afraid of it, either way thinking it inevitable.
I understand the allure of the thought. It would be such a relief, and you're tempted to tell yourself that the will of the universe bends towards beauty, but I think for this reason it’s a dangerous temptation. I think the fact that it's such an alluring idea makes people think less clearly. And the discourse has really entered into a bad rhetorical trap where questioning this at all makes you seem like a luddite or a doomer.
I think the fact that we’ve continued to allow such bad thinking in the public square without any consequence is pretty harmful to society. To clarify, I love AI and I want AGI. But like, the impulse that even makes me want to clarify that is a product of the discourse being broken.
Do technologists consider the political and big history consequences of their work? Are they successfully dealing with the challenges AI brings on those dimensions?
In San Francisco everyone's always like “What are your AGI timelines?” Like this or that or like interpretability, blah blah. And I'm like, I feel like before any of that there's just this social problem that there are no societal norms around how to handle the political subtext of all of this because that's really the thing everyone is stressed about.
When you work in a trade, you're essentially working within an intellectual legacy of some kind. I think the engineering class is just not accustomed to having power yet and could benefit from studying how to better shepherd it. I think there's almost this deer-in-the-headlights thing. Like, “Oh shit, we're sitting in front of Congress right now.” Like, what the fuck, you know? By the way this isn’t a criticism, I would be exactly the same. This seems like the obvious narrative arc.
I'm curious where it goes next, though. I feel like I'm seeing things that are gesturing towards the future I want. Engineers are amazing problem solvers, and setting their minds to solving systemic cultural problems has yielded some good results. Some people have done it really well. The number of incredible young people I meet in this city who are or were Thiel Fellows, scouted and funded by Peter Thiel, is really cool. Whatever your political opinions are, that's just a beautiful thing. I wish more people with resources would do the same.
Even when they fail, allowing a strange outlier to thrive rather than fall through the cracks is extremely good for society and the engineering culture. I will also say on average I find engineers to have a better understanding of history and politics than most people. I love San Francisco because people here read.
It is San Francisco rather than New York that's the literary center of the world in my opinion.
In New York they read more, but...
...they read more old writers. San Franciscans read more blogs, Substacks, etc.
You know, all the really good fresh shit! I can only get that from the weird people here [in San Francisco.] I'm like, what strange book have twenty people read that will blow my mind? Yeah. We are missing a social protocol. Like, the nature of power is radically changing. Also, the nature of discourse is radically changing simultaneously, which is making it even harder.
How is discourse changing?
I just mean, for better or worse, how it was before social media. I always think about Foundation. One of the things I love so much about the book, about Isaac Asimov's Foundation, is that it's just private conversations. Almost nothing happens besides private conversations in rooms by, like, kind of dry, powerful men. And it's so interesting, because it's the actual best illustration of what's actually happening that I've ever seen. This is a profoundly real book, just the opposite of what anyone would ever put on a screen or what's usually a hit.
You think it's idiosyncratic, but that's what it actually is. That's how decisions get made. I think for better or worse, there is a point when we just can't keep hearing everyone's opinions. And hard decisions need to be made concretely, not endlessly debated.
I've observed and learned this over the years, seeing the machinations of how things work when they work well. Like, I do not have the heart to do some of the things that need to get done, but it is definitely beneficial to society.
When you start opening the conversation really collectively, which you should, the issue is that, unless you’ve viscerally lived that reality, it is really, really hard to accept it. Even when you've lived it, it's hard to accept it. Even to witness it is hard.
There's a distinction here between private and public knowledge.
Going back to the live player conversation. There's a point at which, hard fucking decisions need to be made, and not everyone's gonna agree, and people are gonna be really mad.
Confronting this is what makes people start saying that one of the problems with America is that presidential terms are so short. While I don't want a ten-year Biden or Trump presidency, it's very hard to accomplish anything politically without a decade of experience, even a major construction project or an economic overhaul or something similar. There is some merit to top-down decisions being made. And I think that's the case with AI.
I feel so sad that we don't put great effort into cultivating and training future leaders. I know everyone's fed up with Rome but when I think of instances like the period between Trajan and Marcus Aurelius I'm like—we could have that. Why aren't we trying to have that?
If the timelines to AGI ever become short enough, maybe some heads of state are going to deal with those questions. This is even true of CEOs versus their boards to some extent. If the singularity hypothesis is true, where improving AI technology feeds back into even faster AI progress, there is reason to expect radically different developments over the span of weeks or even days.
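To give that feedback loop a concrete shape, here is a minimal toy model, my own illustration rather than anything proposed in the interview: suppose a system's capability \(x\) grows at a rate proportional to a power of its current capability,

\[
\frac{dx}{dt} = x^{1+\varepsilon}, \quad \varepsilon > 0, \qquad\Longrightarrow\qquad x(t) = \left(x_0^{-\varepsilon} - \varepsilon t\right)^{-1/\varepsilon},
\]

which diverges at the finite time \(t^{*} = x_0^{-\varepsilon}/\varepsilon\). On such a curve, growth looks gradual for most of the run and then compresses into an arbitrarily fast final stretch before \(t^{*}\); that is the formal version of "radically different developments over the span of weeks or even days."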
Everyone keeps invoking the Manhattan Project, but they don't want one, not actually. Because it's scary. Here's the thing: it's so much harder to be unobserved than it was then, and it is getting harder to make those executive government decisions anywhere in the world. But I think we need to find ways to make that more possible. It's also scary because it's unelected power. Putting power in the hands of unelected individuals is a very scary idea.
Nature has already put power in the hands of unelected individuals. I've noticed our conversation is turning surprisingly political.
I think we're both really political, but for some reason, we never talked about it with each other. Because I hate politics and I violently despise democracy. Not because I want a dictatorship, but because of the lack of imagination to think of something better. I despise rhetorical traps. This is also what I don't like about technological determinism though.
Speaking of which, do you know the statue "Nature Unveiling Herself Before Science"? It's like, why would the universe make us so creative? Why would it? Why would she make herself so beautiful? I think she wants us to seduce her and decorate her.
I think humanity's having performance issues, to be honest.
Humanity is having performance issues. The statue’s at McGill University, where I went to college—I think in the neuroscience building or something—but it's just Nature stripping. I forgot to tell you this. It's a really funny picture: Nature, literally stripping. It's very erotic.
But back to democracy. If you want the general public running the government, make it on an issue-by-issue basis then.
Perhaps choose forty random people, educate them on every conceivable aspect of a topic and have them vote on that one topic. And then do that for different issues. But don't just expect random people to know how to think about all the topics. It's so much harder to be a Renaissance person as everything advances. It's taking so much longer to master any given topic. Jobs are becoming more specialized. This also means the government cannot possibly be sufficiently well-informed on artificial intelligence! Nor the public.
Anyone qualified enough to make policy decisions about AI should be making AI. It would be criminal not to. What I always agitate for is what I've been calling the Council of Elrond: a small council of the greats, the heads of key AI companies and others who understand it. Then for every extreme opinion, there should be an alternative one. The council should be able to monitor corruption. I do think we need a better solution than that.
All humans are imperfect. What I do like about nerds—and I will ride for them—is that they love to learn. Is Donald Trump any better than he was before? No, he still sucks. Is Mark Zuckerberg? Actually, yeah. For all the criticism people have of him, the rate of self-correction is impressive.
If someone shows up with a mathematical proof that aligned AGI is mathematically impossible, I would say “OK, time for a Butlerian Jihad.” We're going to transition to a compute-neutral economy, just as some suggest a carbon-neutral economy. We don't need computers. We're just going to do biotech, nuclear, space, and all the other stuff.
But other countries would just make it then.
I’d claim there are at most six or so truly sovereign countries in the world. People are always invoking the split between China and the U.S. Yet, say what you will about the CCP, they would happily take a deal to create a shared Sino-American company out of TSMC and Nvidia in exchange for Taiwan. Further, each of the superpowers would have a vested interest in suppressing competition. Who would compete economically or even militarily? I don't think anyone would.
Then you're back making the AI? Ah, which I'm fine with. I want the AI.
The new Sino-American giant would almost inevitably be a bureaucracy, one that makes very little technological progress while having a guaranteed and very profitable monopoly. And, just like that, the exponential rise of global compute would grind to a halt. Some god of techno-capital… so easily slain.
Is that what you want though? I feel that it's not long-term thinking. It's like, I think, decades-thinking.
You reach the future one day at a time. Eliezer Yudkowsky's position is that babies born today are very unlikely to ever grow up to be adults. There are many in Silicon Valley who agree with him. If he is right, a few decades would be a notable improvement.
Whatever. People need to stop being such pussies! It’s like let the man speak and then maybe don't go and base your entire life on the words of one person. Like, you can take in information. This is what we have; if you don't like it, go change it. Once ideas exist, you can't stop them. The best thing to do at that point is to make them great, rather than let the first thing be wack.
I think you in particular know very well that every time someone makes something truly great, the whole world has to bend around it and become better. What I love about Midjourney and DALL-E is everyone has to be so much better. It just made everything more beautiful. Everyone said AI-generated art was going to hurt artists. Yet I've never seen so much interesting art in my life, because people are trying to, like, be different from the AI. It is challenging humans; the human artists are slaying right now. I'd never bought art in my life before, and now I like buying paintings and stuff. And then, you know, it's the same with, for example, Tesla. Say what you will, but there's self-driving cars and there's electric cars everywhere. That was definitely not economically feasible before, by any stretch. So. If it is happening, and you can't stop it, let's just make it as awesome as humanly possible.
I think it's also an act. Sometimes you do have runaway capitalism, or runaway military competition. Yet, just as often, it is non-economic incentives that actually set the economic incentives. Sometimes people do amazing things for non-competitive reasons. And because those things are amazing, so are their economic and military consequences.
Yes. Yes. Yes.
Do you think the scientists working on nuclear physics in the 1920s were motivated by the atomic bomb? No, they were pursuing a strange spiritual quest to understand the universe and maybe prove that they're smarter than the other guy doing the same. And those are non-economic and non-military motives. Yet ultimately this weird subculture of people changed the military and economic picture of the whole 20th century.
A social technology that we really need is a really good war replacement. Because it's undeniable that war accelerates a lot of awesome things. To some extent, I think men actually kind of need it too, as evidenced by video games. Speaking of which, actually, I'm kind of for that, honestly. In the near future, video games might be seen as a much more emotionally and psychologically necessary thing than we see them as now. Like bonding or sports. It's possible that, if we didn't have sports and similar outlets, civilization might not have been able to occur.
Maybe gladiatorial combat was necessary to keep the peace of the Roman Empire?
Yeah, and the greens and the blues fighting on the streets of Constantinople over chariot races. We kind of underrate how necessary those are as an alternative to warlord kingdoms. I think there is a certain amount of getting the violence out by having those war analogs. There is an essential catharsis in them. What I’m worried about is, just, maybe don't let AI be that war analog.
Samo Burja is the founder of Bismarck Analysis, a consulting firm that investigates the political and institutional landscape of society. He is also a research fellow at the Long Now Foundation. You can follow him on Twitter @SamoBurja or subscribe to Bismarck Brief.