Securing LLMs: How to detect prompt injections

The concerns surrounding potential prompt injections have hampered companies from fully integrating Language Learning Models (LLMs) and AI into their operations. In response to this, we have curated a dataset and trained a classifier that serves to detect such injections, using data augmentation techniques including translations and adversarial examples.

During this webinar, we will provide a detailed walkthrough on how we trained the model and how you can integrate this model into your AI system to improve its security. This webinar aims to equip your team with the knowledge to mitigate risks, ensure system integrity, and enhance the adoption of AI with minimal reservations.

Featured Speakers

Dr. Jasper Schwenzow

Sr. Applied AI Engineer

Watch now on-demand.