Nick Bostrom: Superintelligence (#256)

1 hour 8 minutes
Podcast
A podcast of science stories, ideas, and speculations. Hosted by Professor Brian Keating

Description

3 years ago
Nick Bostrom is a Swedish-born philosopher at the University of Oxford known for his work on existential risk, the anthropic principle, human enhancement ethics, superintelligence risks, and the reversal test. In 2011, he founded the Oxford Martin Program on the Impacts of Future Technology and is the founding director of the Future of Humanity Institute at Oxford University. In 2009 and 2015, he was included in Foreign Policy's Top 100 Global Thinkers list. Bostrom is the author of over 200 publications and has written two books and co-edited two others. The two books he has authored are Anthropic Bias: Observation Selection Effects in Science and Philosophy (2002) and Superintelligence: Paths, Dangers, Strategies (2014). Superintelligence was a New York Times bestseller, was recommended by Elon Musk and Bill Gates among others, and helped to popularize the term "superintelligence".

Bostrom believes that superintelligence, which he defines as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest," is a potential outcome of advances in artificial intelligence. He views the rise of superintelligence as potentially highly dangerous to humans, but nonetheless rejects the idea that humans are powerless to stop its negative effects.

In his book Superintelligence, Professor Bostrom asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

https://www.fhi.ox.ac.uk/
https://nickbostrom.com/

Related Episodes:
David Chalmers elaborates on the simulation hypothesis, virtual reality, and his philosophy of consciousness: https://youtu.be/ywjbbQXAFic
Sabine Hossenfelder on Existential Physics: https://youtu.be/g00ilS6tBvs
Connect with me:
Twitter: https://twitter.com/DrBrianKeating
Instagram: https://instagram.com/DrBrianKeating
Subscribe: https://www.youtube.com/DrBrianKeating?sub_confirmation=1
Join my mailing list: http://briankeating.com/list
Detailed blog posts: https://briankeating.com/blog.php
Listen on audio-only platforms: https://briankeating.com/podcast

Join Shortform through my link Shortform.com/impossible and you'll receive 5 days of unlimited access plus a 20% discount on an annual subscription! Subscribe to the Jordan Harbinger Show for amazing content from Apple's best podcast of 2018!

Can you do me a favor? Please leave a rating and review of my podcast: on Apple devices, scroll down to the ratings and leave a 5-star rating and review of The INTO THE IMPOSSIBLE Podcast; you can also rate the show on Spotify and Audible. Other ways to rate: https://briankeating.com/podcast

Support the podcast on Patreon or become a Member on YouTube.

Learn more about your ad choices. Visit megaphone.fm/adchoices
