Friday, May 7, 2010

Ethics for Extraterrestrials

FROM NYTIMES
By ROBERT WRIGHT

Remember the episode of “The Twilight Zone” where the earthlings discover only too late that a book brandished by extraterrestrial visitors, titled “To Serve Man,” is not, in fact, a philanthropic manifesto — but, sadly, a cookbook?

Even after watching this show as a kid, I didn’t give serious thought to the possibility of space aliens turning us into toast. But then last week Stephen Hawking made news by weighing in on the subject: “If aliens ever visit us, I think the outcome would be much as when Christopher Columbus first landed in America, which didn’t turn out very well for the Native Americans.” And Hawking knows a thing or two about outer space!

Assessing Hawking’s conjecture may seem hopeless. Doesn’t discerning the motivations of aliens who show no sign of existing amount to the most untethered thought experiment ever?

Not necessarily. Rather than build a whole thought experiment de novo, we can just look around — because, in a sense, we are the thought experiment. We’re an example of an intelligent species (as species go, I mean) that is a century or two away from the technological capacity to voyage beyond its solar system and, upon finding civilizations that are less advanced, to have its way with them. Which way would that way be?

It turns out there’s reason to hope that, actually, we’d be kinder to a new world than Europeans were to the New World.

At the outset I should concede that there are differences between us and any given race of space aliens. We alone (to take just one example) have used advanced technology to make a TV show called “Jersey Shore.”

Still, we — like, presumably, any intelligent species anywhere — were created by natural selection, for better and worse. And, like any scientifically advanced species, we’re finding that the laws of the universe grant the technological potential for both mass affiliation and mass murder. So the question is which aspect of this technology our naturally selected nature would incline us to emphasize a century or two from now, should we stumble upon an inhabited planet.

On Hawking’s side of the argument is the fact that natural selection does create organisms prone to belligerent self-interest. And when individuals manage to submerge their self-interest in the interest of a group — clan, tribe, nation — the belligerence tends to just move to the group level, as it did when European explorers, while behaving very politely toward one another, slaughtered natives. As the biologist Richard Alexander has put it, the flip side of “within-group amity” tends to be “between-group enmity.”

So why wouldn’t an alien species evince this principle, and unite only to conquer?

One possible answer can be found in the philosopher Peter Singer’s 1981 book “The Expanding Circle.”

Singer notes the striking moral progress we’ve seen since the days when citizens of Greek city-states treated citizens of other Greek city-states as subhuman. Compare this to the now-common belief that people of all races, creeds and colors are actually people, worthy of decent treatment.

Encouragingly, Singer sees this progress as pretty natural. It begins with intuitions planted in us by natural selection and then is nurtured by reflection, reason and discourse. Eventually a kind of intellectual momentum kicks in, carrying us toward enlightenment. And we’re not done yet! Singer thinks our circle of moral concern could grow beyond our species to encompass all sentient beings (notably nonhuman ones on Earth; Singer is a seminal animal rights advocate).

At that point, presumably, planets everywhere would be safe from the ravages of earthlings.

A slightly less hopeful argument has been made by — well, by me. In my book “Nonzero” I argue that the moral progress Singer rightly celebrates has been driven less by pure reason than by pragmatic self-interest. Technology has drawn groups of people into more and more far-flung “non-zero-sum” relations — relations of interdependence; increasingly it has been in the interest of one group to acknowledge the humanity of another group, if only so the groups can play win-win games. In this view, the decline of American prejudice toward Japanese after World War II was driven less by purely rational enlightenment than by the Japanese transition from mortal enemies to trade partners and Cold War allies. (In a TED conference talk, Steven Pinker, who is writing a book on the decline of violence, contrasts my view with Singer’s.)

If I’m right, and we generally grant the moral significance of other beings to the extent that it’s in our interest to do so, then why wouldn’t we, in 100 or 200 years, do what Hawking imagines aliens doing — happen upon a planet, extract its resources through whatever brutality is most efficient and then move on to the next target? Absent cause to be nice, why would we be nice?

Well, you could make a case that, though our moral “progress” to date has been driven largely by self-interest, with only a smidgen of true enlightenment, the role of enlightenment will have to grow if we are to venture beyond our solar system a century from now.

After all, to do that venturing, we first have to survive the intervening 100 years in good shape. And that job is complicated by various technologies, notably weapons that could blow up the world.

More to the point: these weapons are now embedded in a particularly dicey context: a world where shadowy “nonstate actors” are the looming threat, a world featuring a “war on terror” that, if mishandled, could pull us into a simmering chaos that ultimately engulfs the whole planet. And maybe “winning” that war — averting global chaos — would entail authentic and considerable moral progress.

That, at least, is a claim I make in my most recent book, “The Evolution of God.” I argue in the penultimate chapter that if we don’t radically develop our “moral imagination” — get much better at putting ourselves in the shoes of people very different from ourselves, even the shoes of our enemies — then the planet could be in big trouble.

It’s not crazy to think that, broadly speaking, this sort of challenge would eventually face an intelligent species on any planet. Certainly the challenge’s technological underpinning — that the capacity to escape your solar system arrives well after the capacity to destroy your planet — could reflect the order in which the laws of physics reveal themselves to any inquisitive species, not a peculiar intellectual path taken by our species.

So maybe any visiting aliens would themselves have passed this test; they’d have mustered the moral progress necessary to avoid ruining their planet, and this progress would involve enough genuine enlightenment — enough respect for sentient life — that we’d be safe in their hands.

This is less than wholly reassuring. After all, the suggestion here is that before any species can shift into high technological gear, it has to undergo a moral test so stringent that most species would fail it. In which case the chances are we’ll fail it — and our best hope may be to hold on long enough for kindly space aliens to ride in and save the day.

Things would be a lot simpler if it turned out that Peter Singer is right. And for all I know he is.
