3Blue1Brown: How might LLMs store facts | Chapter 7, Deep Learning
🎯 Uploaded automatically via bot:
🚫 Original video:
📺 This video belongs to the channel "3Blue1Brown" (@3blue1brown). It is shared in our community solely for informational, scientific, educational, or cultural purposes. Our community claims no rights to this video. Please support the author by visiting the original channel.
✉️ If you have any copyright concerns about this video, please contact us at support@ and we will remove it immediately.
📃 Original description:
Unpacking the multilayer perceptrons in a transformer, and how they may store facts
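To make the topic concrete, here is a minimal NumPy sketch of the feed-forward (MLP) block the video unpacks. The dimensions, random weights, and ReLU choice are illustrative assumptions for a toy model, not the video's exact setup:

```python
import numpy as np

# Toy dimensions: d_model is the embedding size, and the hidden layer is
# 4x wider (as in GPT-3). Real models learn these weights; we sample them.
d_model, d_hidden = 768, 4 * 768

rng = np.random.default_rng(0)
W_up = rng.normal(scale=0.02, size=(d_hidden, d_model))    # each row "asks a question" of the vector
b_up = np.zeros(d_hidden)
W_down = rng.normal(scale=0.02, size=(d_model, d_hidden))  # each column is added back for an active neuron
b_down = np.zeros(d_model)

def mlp_block(x):
    """One token vector in, one out; tokens pass through independently."""
    h = np.maximum(0.0, W_up @ x + b_up)  # ReLU (real models often use GELU)
    return x + (W_down @ h + b_down)      # residual connection: add to the input

x = rng.normal(size=d_model)  # stand-in for one token's embedding
print(mlp_block(x).shape)     # (768,)
```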
Instead of sponsored ad reads, these lessons are funded directly by viewers:
An equally valuable form of support is to share the videos.
AI Alignment Forum post from the DeepMind researchers referenced at the video’s start:
Anthropic posts about superposition referenced near the end:
Some added resources for those interested in learning more about mechanistic interpretability, offered by Neel Nanda
Mechanistic interpretability paper reading list
Getting started in mechanistic interpretability
An interactive demo of sparse autoencoders (made by Neuronpedia)
Coding tutorials for mechanistic interpretability (made by ARENA)
Sections:
- Where facts in LLMs live
- Quick refresher on transformers
- Assumptions for our toy example
- Inside a multilayer perceptron
- Counting parameters (see the sketch after this list)
- Superposition (see the sketch after this list)
- Up next
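As a quick numeric companion to those two sections, the sketch below first counts the MLP weights using GPT-3's published figures (96 layers, d_model = 12288, a 4x-wider hidden layer), then samples random unit vectors to show why a space can hold far more nearly-orthogonal directions than it has dimensions; the sampling sizes are toy assumptions:

```python
import numpy as np

# Counting parameters: weight matrices in the MLP blocks alone,
# using GPT-3's published dimensions.
n_layers, d_model = 96, 12288
d_hidden = 4 * d_model
mlp_weights = n_layers * 2 * d_model * d_hidden  # one up- and one down-projection per layer
print(f"{mlp_weights:,}")  # 115,964,116,992, roughly two thirds of GPT-3's 175B total

# Superposition: random directions in high-dimensional space are almost
# orthogonal, so n dimensions can plausibly encode far more than n
# nearly-independent features.
rng = np.random.default_rng(0)
n_dims, n_vectors = 10_000, 500
V = rng.normal(size=(n_vectors, n_dims))
V /= np.linalg.norm(V, axis=1, keepdims=True)  # make them unit vectors
dots = V @ V.T
off_diag = np.abs(dots[~np.eye(n_vectors, dtype=bool)])
print(off_diag.max())  # around 0.04: every pair is within a few degrees of 90 degrees
```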
------------------
These animations are largely made using a custom Python library, manim. See the FAQ comments here:
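For a sense of what a manim scene looks like, here is a minimal example; note it targets the community edition of the library, whose API differs slightly from the 3b1b-maintained version (manimgl):

```python
from manim import *

# Minimal community-edition manim scene: draw a square, morph it into a circle.
class SquareToCircle(Scene):
    def construct(self):
        square = Square()
        self.play(Create(square))
        self.play(Transform(square, Circle()))
        self.wait()
```

Render it with `manim -pql scene.py SquareToCircle` (preview at low quality).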
All code for specific videos is visible here:
The music is by Vincent Rubinetti.
------------------
3blue1brown is a channel about animating math, in all senses of the word animate. If you’re reading the bottom of a video description, I’m guessing you’re more interested than the average viewer in lessons here. It would mean a lot to me if you chose to stay up to date on new ones, either by subscribing here on YouTube or otherwise following on whichever platform below you check most regularly.
Mailing list:
Twitter:
Instagram:
Reddit:
Facebook:
Patreon:
Website: