All about AI Accelerators: GPU, TPU, Dataflow, Near-Memory, Optical, Neuromorphic & more (w/ Author)
#ai #gpu #tpu
This video is an interview with Adi Fuchs, author of a series called "AI Accelerators" and an expert in modern AI acceleration technology.
Accelerators like GPUs and TPUs are an integral part of today's AI landscape. Deep Neural Network training can be sped up by orders of magnitude by making good use of these specialized pieces of hardware. However, GPUs and TPUs are only the beginning of a vast landscape of emerging technologies and companies that build accelerators for the next generation of AI models. In this interview, we go over many aspects of building hardware for AI, including why GPUs have been so successful, what the most promising approaches look like, how they work, and what the main challenges are.
OUTLINE:
0:00 - Intro
5:10 - What does it mean to make hardware for AI?
8:20 - Why were GPUs so successful?
16:25 - What is "dark silicon"?
20:00 - Beyond GPUs: How can we get even faster AI compute?
28:00 - A look at today’s accelerator landscape
30:00 - Systolic Arrays and VLIW
35: