Organisations must understand how to secure their AI systems. This in-depth course delves into the AI security landscape, addressing vulnerabilities like prompt injection, denial of service attacks, model theft, and more. Learn how attackers exploit these weaknesses and gain hands-on experience with proven defense strategies and security APIs.
Discover how to securely integrate LLMs into your applications, safeguard training data, build robust AI infrastructure, and ensure effective human-AI interaction. By the end of this course, you'll be equipped to protect your organization's AI assets and maintain the integrity of your systems.
No prerequisites, aside general understanding of AI principles.
This course will cover the following topics:
Learning Outcomes
Day 1
Introduction to AI security
Using AI for malicious intents
The AI Security landscape
Prompt Injection
Day 2
Prompt Injection
Visual Prompt Injection
Denial of Service
Model theft
Day 3
LLM integration
Training data manipulation
Human-AI interaction
Secure AI infrastructure
Join our public courses in our Russia facilities. Private class trainings will be organized at the location of your preference, according to your schedule.