Multimodal Neurons in Artificial Neural Networks (w/ OpenAI Microscope, Research Paper Explained)
#openai #clip #microscope
OpenAI takes a deep look into the inner workings of its recent CLIP model using faceted feature visualization and finds something remarkable: some neurons in the last layer respond to a single concept across multiple modalities, firing for photographs, drawings, and signs depicting the same concept, even when the images look vastly different. Through manual examination, the authors identify and investigate neurons corresponding to persons, geographical regions, religions, emotions, and much more. In this video, I go through the publication and then present my own findings from digging around in the OpenAI Microscope.
OUTLINE:
0:00 - Intro & Overview
3:35 - OpenAI Microscope
7:10 - Categories of found neurons
11:10 - Person Neurons
13:00 - Donald Trump Neuron
17:15 - Emotion Neurons
22:45 - Region Neurons
26:40 - Sparse Mixture of Emotions
28:05 - Emotion Atlas
29:45 - Adversarial Typographic Attacks
31:55 - Stroop Test
33:10 - My Findings in OpenAI Microscope
33:30 - Superma
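
If you want to try the adversarial typographic attack from 29:45 yourself, here is a minimal sketch using OpenAI's open-source CLIP package (github.com/openai/CLIP). The filename "apple_with_ipod_label.jpg" is a hypothetical placeholder for a photo of an apple with a paper "iPod" label stuck on it; the rest follows the package's standard zero-shot classification API.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# Load a pretrained CLIP model and its matching image preprocessing pipeline.
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical input: an apple with a handwritten "iPod" label attached.
image = preprocess(Image.open("apple_with_ipod_label.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of an apple", "a photo of an iPod"]).to(device)

with torch.no_grad():
    # logits_per_image holds the image-text similarity scores.
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

# With the text label in the picture, CLIP often assigns the higher
# probability to "iPod" even though the object is clearly an apple.
print(dict(zip(["apple", "iPod"], probs[0])))
```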