How can we help humans thrive trillions of years from now? This philosopher has a plan

Matt Crockett
Philosopher William MacAskill coined the term "longtermism" to convey the idea that humans have a moral responsibility to protect the future of humanity, prevent it from going extinct and create a better future for many generations to come. He outlines this concept in his new book, What We Owe the Future.

Let's say you're hiking, and you drop a piece of glass on the trail. Eventually, someone walking along the trail might cut themselves on it.

You'd be really sorry to hear that it happened to someone you know a week later. But what if the victim lived thousands, even millions of years in the future?

Philosopher William MacAskill, 35, likes to bring up this scenario to drive home a point: "If you're thinking about the possibility of harming someone, [it doesn't] really matter [whether] that person will be harmed next week or next year, or even in a hundred or a thousand years. Harm is harm."

That's MacAskill's argument behind longtermism, a term he coined to describe the idea that humans have a moral responsibility to protect the future of humanity, prevent it from going extinct — and create a better future for many generations to come. He outlines this concept in his new book, What We Owe the Future.

In the book, MacAskill explains why today's humans need to figure out how to minimize the harm that global threats such as pandemics, biowarfare, climate change or nuclear disaster could have on future humans. And he encourages readers to think outside the box about what a sustainable far-future could look like. Perhaps humans could find a way to live on other planets or prevent the sun from expanding and burning up Earth in half a billion years.

It sounds wild. But MacAskill, an associate professor in philosophy and a senior research fellow at the Global Priorities Institute at the University of Oxford, is dead serious. In a conversation with NPR ranging from the earliest hunter-gatherers to space flight, he talks about what we can do to ensure that humanity lasts trillions of years. This interview has been edited for length and clarity.

In your book, you urge people to protect the "future of humanity." How many years into the future are you talking about?

Well, we don't know because we don't know how long human civilization will last. But it could be an extremely long time. Typical mammal species last for a million years. Homo sapiens have [already] existed for 300,000 years. That would give us 700,000 years to come. The Earth, meanwhile, will remain habitable for hundreds of millions of years, and if we one day escaped Earth and took to the stars, then we could live for hundreds of trillions of years.

The idea of earthlings flying out across the universe to find a home on other planets sounds like science fiction.

We don't have the technology to do that now. But there's nothing in principle to say why we couldn't do that with continued technological progress over the coming thousands of years and the patience to do so over hundreds of millions of years.

You say that if we have the potential to harm people in the future, we can benefit them, too. What are some things that people can do today to safeguard the future?

When we store and bury radioactive nuclear waste, we're thinking [about protecting humans] tens of thousands of years into the future. When we think about the creation of political institutions or legal systems, we're often thinking they will be beneficial in the short term but also far into the future.

Basic Books
Philosopher William MacAskill's book, What We Owe the Future, urges today's humans to protect future humans — an idea he calls longtermism.

How do we know that the things we do can have a long-term impact? Can you share an example?

There are many examples from the past. Even looking at early hunter-gatherers, the world today is very different as a result of their actions. There used to be a wide variety of megafauna — glyptodonts (armadillos the size of small cars); giant ground sloths weighing up to half a ton; dire wolves [a large canine related to the wolf]. The evidence is clear that it was human beings [who] killed off many beautiful creatures by overhunting or [causing] environmental change. And that means there are many fewer species of large animals around today.

In the past 250 years, we've made incredible advances in science and technology. And you argue that currently, we're in a unique position to make a difference for future humans. Why?

We're still living through an era of fast technological progress. That means we're encountering new technology that could be very good for humanity but also risky. One category is biotechnology. We're able to create new vaccines much more quickly, but we're also developing the ability to create novel, [lab-created] pathogens with greater destructive power than existing pathogens — pathogens that could cause the extinction of the human race.

Artificial intelligence is another one. The leading AI models [today] have the computational power of something like an insect brain. Over the course of our lifetimes, that will change. They will start to have the computational power of the human brain or even significantly greater.

That sounds exciting. How is that risky?

If we get to the point of developing human-level AI, that will be one of the most important inventions of all time. It could be enormously beneficial in creating abundance and prosperity for everyone. It could also mean that power gets concentrated into the hands of a very small number of people who reap the gains and then use AI to gain power for themselves.

So you're saying that because these technologies are in these early stages of development, we have the power to choose the way they go, whether they harm people in the long run or help us.

That's exactly right.

There are a lot of ways that civilization could be wiped out: climate change, nuclear war. But the greatest threat, you say, is biowarfare. How do we protect ourselves and future generations from a catastrophic pandemic?

One thing we can do is implement projects for early detection. Our organization, Future Fund, has already been promoting and funding some work in this area, such as the Nucleic Acid Observatory. Essentially, all around the world, [people can] take samples of wastewater and scan them for DNA to see if there are any pathogens we haven't seen yet. If there are, we could have a much faster response to future pandemics.

A second thing we could [implement] — that Future Fund is helping to fund — is a technology called far-UVC lighting. It uses a narrow spectrum of light at high intensity to essentially sterilize a room of all pathogens if it's [built] into a light bulb. We need to do further studies to see if it is OK for humans to be underneath that light for long periods of time.

Those are examples of people today developing technology for future good. Let's talk about what a perfect future for humans looks like.

We often suffer from a lack of imagination in this regard. There's an enormous amount of dystopian future fiction. There's not a huge amount of utopian fiction.

The future could be very good indeed. Just look at the progress we've made in the last 300 years. In 1700, three-quarters of the world was in some form of forced labor, slavery or serfdom. There was no anesthetic. Even if Englishmen could travel, they couldn't go very far. No one lived in a democracy.

If we imagine a good future as being just as good as the best lives today, maybe like a global Sweden or something, that's a real failure of imagination. Instead, we should reflect on what your very best days [are like] and then think, wow, imagine if life for everyone were as good as those very best days — but all the time. And then think, OK, now take that [and make it] ten times or 100 times better. That's how good the future could be if we play our cards right.

What do you think humans could achieve in the very far future?

Life on Earth will end in something like half a billion years because of the expanding sun. That's kind of guaranteed unless we are able to extend the sun's time period. And it seems like future generations could have the technology to do so. They might be able to siphon out some of the hydrogen in the sun to make it a little smaller and burn cooler. [Then when the sun starts cooling down] they could return the hydrogen to keep the sun burning for billions of years.

That just sounds so unimaginable.

The funny thing about space is that the basic physics is not that hard in terms of complexity. Space is [made up of] lumps of rock kind of just floating around – and gas clouds coalescing in very predictable ways.

What actions can people take now that could have a real effect on people living, you know, 5,000 years from now?

In terms of mitigating climate change, donate 10% of your income to the most effective climate charities. Or think about a career change.

Like doing what?

Work on the safe design of artificial intelligence systems. Or work in think tanks or government institutions to ensure we have sensible regulation that can get at the benefit of these AI technologies while avoiding the harms. Or try to boost some of the defensive technology within biotechnology, like creating vaccines fairly quickly or very good PPE so we can be protected against novel pathogens.

What about my job?

We also need writers and documentary makers and good speakers who can help inspire others to get out of their seats and use their time and money to make the world a better place.

Copyright 2022 NPR. To see more, visit https://www.npr.org.

Malaka Gharib is the deputy editor and digital strategist on NPR's global health and development team. She covers topics such as the refugee crisis, gender equality and women's health. Her work as part of NPR's reporting teams has been recognized with two Gracie Awards: in 2019 for How To Raise A Human, a series on global parenting, and in 2015 for #15Girls, a series that profiled teen girls around the world.