Building makemore Part 3: Activations & Gradients, BatchNorm
Published 2022-10-04