How to evaluate LLM Applications - Webinar by deepset.ai
Published 2023-08-16