Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris

1 hour 7 minutes

Description

3 years ago

Filmmaker Jay Shapiro has produced a new series of audio
documentaries, exploring the major topics that Sam has focused on
over the course of his career.


Each episode weaves together original analysis, critical
perspective, and novel thought experiments with some of the most
compelling exchanges from the Making Sense archive. Whether you
are new to a particular topic, or think you have your mind made
up about it, we think you’ll find this series fascinating.


In this episode, we explore the landscape of Artificial
Intelligence. We’ll listen in on Sam’s conversation with decision
theorist and artificial-intelligence researcher Eliezer
Yudkowsky, as we consider the potential dangers of AI – including
the control problem and the value-alignment problem – as well as
the concepts of Artificial General Intelligence, Narrow
Artificial Intelligence, and Artificial Super Intelligence.


We’ll then be introduced to philosopher Nick Bostrom’s “Genies,
Sovereigns, Oracles, and Tools,” as physicist Max Tegmark
outlines just how careful we need to be as we travel down the AI
path. Computer scientist Stuart Russell will then dig deeper into
the value-alignment problem and explain its importance.




We’ll hear from former Google CEO Eric Schmidt about the
geopolitical realities of AI terrorism and weaponization. We’ll
then touch on the topic of consciousness as Sam and psychologist
Paul Bloom turn the conversation to the ethical and psychological
complexities of living alongside humanlike AI. Psychologist
Alison Gopnik then reframes the general concept of intelligence
to help us wonder if the kinds of systems we’re building using
“Deep Learning” are really marching us towards our
super-intelligent overlords.




Finally, physicist David Deutsch will argue that many
value-alignment fears about AI are based on a fundamental
misunderstanding about how knowledge actually grows in this
universe.


