Beyond neural scaling laws – Paper Explained [NVIDIA GTC giveaway!]
"Beyond neural scaling laws: beating power law scaling via data pruning" paper explained with animations. You do not need to train your neural network on the entire dataset!
Sponsor: NVIDIA ❗ Use this link to register for the GTC 👉
Google Form to enter DLI credits giveaway:
📺 PaLM model explained:
Check out our daily #MachineLearning Quiz Questions:
Paper 📜: Sorscher, Ben, Robert Geirhos, Shashank Shekhar, Surya Ganguli, and Ari S. Morcos. "Beyond neural scaling laws: beating power law scaling via data pruning." arXiv preprint arXiv:2206.14486 (2022).
Outline:
00:00 Stable Diffusion is a Latent Diffusion Model
01:43 NVIDIA (sponsor): Register for the GTC!
03:00 What are neural scaling laws? Power laws explained.
05:15 Exponential scaling in theory
07:40 What the theory predicts
09:50 Unsupervised data pruning with foundation models
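To make the outline concrete: neural scaling laws say test error typically falls only as a power law in dataset size (error ∝ N^(-α)), and the paper argues that pruning down to the right examples can beat this. Below is a minimal Python sketch of the unsupervised pruning recipe from the 09:50 chapter, assuming NumPy and scikit-learn; the `embeddings` array is a random stand-in for real foundation-model features (e.g., SwAV embeddings of ImageNet images), and `keep_fraction` is an illustrative choice, not a value from the paper.

```python
# Minimal sketch of unsupervised data pruning via foundation-model embeddings:
# cluster the embeddings with k-means and score each example by its distance
# to the nearest cluster centroid ("prototype").
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for real embeddings, e.g. SwAV features of the training images.
embeddings = rng.normal(size=(10_000, 128))

# 1) Cluster the embedding space.
kmeans = KMeans(n_clusters=100, n_init=10, random_state=0).fit(embeddings)

# 2) Self-supervised difficulty score: distance to the assigned prototype.
#    Close to the prototype = easy/redundant; far away = hard/informative.
dists = np.linalg.norm(
    embeddings - kmeans.cluster_centers_[kmeans.labels_], axis=1
)

# 3) Prune. The paper's finding: with abundant data, keep the HARD examples;
#    with scarce data, keep the EASY ones instead.
keep_fraction = 0.8  # illustrative value, not from the paper
keep_idx = np.argsort(dists)[-int(keep_fraction * len(dists)):]
print(f"kept {len(keep_idx)} of {len(embeddings)} examples")
```

The point of the distance-to-prototype score is that it needs no labels, so a pretrained foundation model is enough to decide which examples to drop before training.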
Thanks to our Patrons who support us in Tier 2, 3, 4: 🙏
Don Rosenthal, Dres. Trost GbR, Julián Salazar, Edvard Grødem, Vignesh Valliappan, Mutual Information, Mike Ton
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔥 Optionally, buy us a coffee to help with our Coffee Bean production! ☕
Patreon:
Ko-fi:
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
🔗 Links:
AICoffeeBreakQuiz:
Twitter:
Reddit:
YouTube:
#AICoffeeBreak #MsCoffeeBean #MachineLearning #AI #research