In this course, students will learn about one of the most prevalent and pernicious vulnerabilities in the GenAI and LLM Security landscape, Prompt Injection. Prompt Injection arises when a system inadvertently processes user input as part of its command or execution context, potentially allowing an attacker to manipulate the AI into executing unintended actions or disclosing sensitive information.
This vulnerability is particularly relevant in scenarios where AI models, including chatbots, virtual assistants, and other interactive systems, accept and act upon user-generated prompts.Prompt Injection is a key class of vulnerabilities in the OWASP Top 10 for Large Language Models.
It is a vulnerability that has an impact on other vulnerabilities like Excessive Agency, Training Data Poisoning, Overreliance and Sensitive Information Disclosure. It is a vulnerability that is simple to exploit and hard to detect and fix.In this course we're going to look Prompt Injection from both an attack and defense perpsective. We're going to deploy private LLMs and perform Prompt Injection Attacks against them.
We're also going to look at a history of Prompt Injection attacks against real-world applications.Finally we'll be exploring the defense against Prompt Injection. We'll explore the reasons for the probablistic nature of LLMs being a reason for why it is hard to fix Prompt Injection. We'll explore a very interesting library and approach that can help mitigate and deter against Prompt Injections.
LLM Prompt Injection - Attack
LLM Prompt Injection - Sensitive Data Exposure
LLM Guard