Navigating LLM Threats: Detecting Prompt Injections and Jailbreaks
Published 2024-01-09