EMNLP 2022 Tutorial - "Modular and Parameter-Efficient Fine-Tuning for NLP Models"
Published 2023-02-13