The Doctor: You lot, you spend all your time thinking about dying, like you're gonna get killed by eggs, or beef, or global warming, or asteroids. But you never take time to imagine the impossible. Like maybe you survive. (Doctor Who, “The End of the World”)
It’s tempting to think that humanity is doomed: environmental catastrophe, nuclear war, and pandemics all seem capable of wiping us out, and that’s without imagining all of the exciting new technologies that might be lurking just over the horizon, waiting to devour us. However, I’m an optimist. I think there’s an excellent chance humanity will see this century out. And if we eventually become a multi-planetary species, the odds start looking really quite good for us. Nonetheless, in thinking about the potential value in human survival (or the potential loss from human extinction), I think we could do more first to pin down whether (and why) we should care about our survival, and exactly what would be required for us to survive.
For many hard-nosed people, I imagine there’s an obvious answer to both questions: there is no special value in human survival, and in fact, the universe may be a better place for everyone (including perhaps us) if we were to all quietly go extinct. This is a position I’ve heard from ecologists and antinatalists, and while I won’t debate it here, I find it deeply unpersuasive. As far as we know, humanity is the only truly intelligent species in the universe – the only species capable of great works of art, philosophy, and technological development. And while we may not be the only conscious species on Earth, we are likely the only species capable of the more rarefied forms of happiness and value. Further to that, even though there are surely other conscious species on Earth worth caring about, our sun will finish them off in a few billion years, and they’re not getting off this planet without our help (in other words: no dogs on Mars unless we put them there).
However, even if you’re sympathetic to this line of response, it admittedly doesn’t show there’s any value in specifically human survival. Even if we grant that humans are an important source of utility worth protecting, surely there are intelligent aliens somewhere out there in the cosmos capable of enjoying pleasures just as fancy as those we experience. Insofar as we’re concerned with human survival at all, then, maybe it should just be in virtue of our generally high capacity for well-being?
Again, I’m not particularly convinced by this. Leaving aside the fact that we may be alone in the universe, I can’t shake the deep intuition that there’s some special value in the thriving of humanity, even if only for us. To illustrate the point, imagine that one day a group of tiny aliens show up in orbit and politely ask if they can terraform earth to be more amenable to them, specifically replacing our atmosphere with one composed of sulphur dioxide. The downside of this will be that humanity and all of the life on Earth will die out. On the upside, however, the aliens’ tiny size means that Earth could sustain trillions of them. “You’re rational ethical beings,” they say. “Surely, you can appreciate that it’s a better use of resources to give us your planet? Think of all the utility we’d generate! And if you’re really worried, we can keep a few organisms from every species alive in one of our alien zoos.”
Maybe I’m parochial and selfish, but the idea that we should go along with the aliens’ wishes seems absurd to me (well, maybe they can have Mars). One of my deepest moral intuitions is that there is some special good that we are rationally allowed – if not obliged – to pursue in ensuring the continuation and thriving of humanity.
Let’s just say you agree with me. We now face a further question: what would it take for humanity to survive in this ethically relevant sense? It’s a surprisingly hard question to answer. One simple option would be that we survive as long as the species Homo sapiens is still kicking around. Without getting too deeply into the semantics of “humanity”, it seems like this misses the morally interesting dimensions of survival. For example, imagine that in the medium-term future, beneficial gene-modding becomes ubiquitous, to the point where all our descendants would be reproductively isolated from the likes of us. While that would mean the end of Homo sapiens (at least by standard definitions of species), it wouldn’t, to my mind, mean the end of humanity in the broader and more ethically meaningful sense.
A trickier scenario would involve the idea that one day we may cease to be biological organisms, having all uploaded ourselves to computers or robot bodies. Could humanity still exist in this scenario? My intuition is that we might well survive this. Imagine a civilization of robots who counted biological humans among their ancestors, and went around quoting Shakespeare to each other, discussing the causes of the Napoleonic Wars, and debating whether the great television epic Game of Thrones was a satisfactory adaptation of the books. In that scenario, I feel that humanity in the broader sense could well be thriving, even if we no longer have biological bodies.
This leads me to a final possibility: maybe what’s ethically relevant in our survival is really the survival of our culture and values: that what matters is really that beings relevantly like us are partaking in the artistic and cultural fruits of our civilization.
While I’m tempted by this view, I think it’s just a little bit too liberal. Imagine we wipe ourselves out next year in a war involving devastating bioweapons, and then a few centuries later, a group of aliens shows up on Earth to find that nobody’s home. Though they’re disappointed that there are no living humans, they are delighted by the cultural treasure trove they’ve found. Soon, alien scholars are quoting Shakespeare and George R.R. Martin and figuring out how to cook pasta al dente. Earth becomes to the aliens what Pompeii is to us: a fantastic tourist destination, a cultural theme park.
In that scenario, my gut says we still lose. Even though there are beings relevantly like us (let’s assume) enjoying our culture, humanity did not survive in the ethically relevant sense.
So what’s missing? What is it that’s preserved in the robot descendant scenario but absent from the alien tourist one? My only answer is that some kind of appropriate causal continuity must be what makes the difference. Perhaps it’s that we choose, through a series of voluntary, purposive actions, to bring about the robot scenario, whereas the alien theme park is a mere accident. Or perhaps it’s the fact that I’m assuming there’s a gradual transition from us to the robots, rather than the eschatological lacuna of the theme park case.
I have some more thought experiments that might help us decide between these alternatives, but they would take us beyond the scope of a blogpost. And perhaps the intuitions that got us this far are already radically at odds with yours. But in any case, as we take our steps into the next stage of human development, I think it’s important for us to figure out what it is about us (if anything) that makes humanity valuable.