Join us for an engaging online session as we delve into the intriguing world of prompt injection—a critical vulnerability affecting Large Language Models (LLMs). Ranked as LLM06 in the OWASP LLM Top 10 list, prompt injection poses a significant risk due to LLMs' susceptibility to external input manipulation. Discover how skillfully crafted inputs can trick LLMs into executing unintended and unwanted actions.
Disha Mark III - Prompt Injection
Sunday , 21 Apr • 7:30 – 8:30 pm (GMT+5:30)
Google Meet joining info
Meet link: https://meet.google.com/wre-wiao-ewy
Slides:
Resources: