Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368
Eliezer Yudkowsky is a researcher, writer, and philosopher on the topic of superintelligent AI. Please support this podcast by checking out our sponsors:
- Linode: get $100 free credit
- House of Macadamias: use code LEX to get 20% off your first order
- InsideTracker: get 20% off
EPISODE LINKS:
Eliezer’s Twitter:
LessWrong Blog:
Eliezer’s Blog page:
Books and resources mentioned:
1. AGI Ruin (blog post):
2. Adaptation and Natural Selection:
PODCAST INFO:
Podcast website:
Apple Podcasts:
Spotify:
RSS:
Full episodes playlist:
Clips playlist:
OUTLINE:
0:00 - Introduction
0:43 - GPT-4
23:23 - Open sourcing GPT-4
39:41 - Defining AGI
47:38 - AGI alignment
1:30:30 - How AGI may kill us
2:22:51 - Superintelligence
2:30:03 - Evolution
2:36:33 - Consciousness
2:47:04 - Aliens
2:52:35 - AGI Timeline
3:00:35 - Ego
3:06:27 - Advice for young people
3:11:45 - Mortality
3:13:26 - Love
SOCIAL:
- Twitter:
- LinkedIn:
- Facebook:
- Instagram:
- Medium: @lexfridman
- Reddit:
- Support on Patreon: