#84 – William MacAskill: Effective Altruism

William MacAskill is a philosopher, ethicist, and one of the originators of the effective altruism movement. His research focuses on the fundamentals of effective altruism – the use of evidence and reason to help others as much as possible with our time and money – with a particular focus on how to act given moral uncertainty. He is the author of Doing Good Better – Effective Altruism and a Radical New Way to Make a Difference. He is a co-founder and the President of the Centre for Effective Altruism (CEA), which encourages people to commit to donating at least 10% of their income to the most effective charities. He also co-founded 80,000 Hours, a non-profit that provides research and advice on how you can best make a difference through your career.

Support this podcast by signing up with these sponsors:
– Cash App – use code “LexPodcast” and download:
– Cash App (App Store): https://apple.co/2sPrUHe
– Cash App (Google Play): https://bit.ly/2MlvP5w

EPISODE LINKS:
William’s Twitter: https://twitter.com/willmacaskill
William’s Website: http://www.williammacaskill.com/
Doing Good Better (book): https://amzn.to/2UsMRDj
GiveWell: https://www.givewell.org/
80,000 Hours: https://80000hours.org/

This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

Here’s the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that point in the conversation.

OUTLINE:
00:00 – Introduction
02:39 – Utopia – the Long Reflection
10:25 – Advertisement model
15:56 – Effective altruism
38:28 – Criticism
49:02 – Biggest problems in the world
53:40 – Suffering
1:01:40 – Animal welfare
1:09:23 – Existential risks
1:19:08 – Existential risk from AGI