Module 1: Introduction to Vertex AI Security Principles
- Google Cloud Security
- Vertex AI components
- Vertex AI Security concerns
- Review Google Cloud Security fundamentals.
- Establish a foundational understanding of Vertex AI.
- Enumerate the security concerns related to Vertex AI features and components.
- Lab: Vertex AI: Training and Serving a Custom Model
Module 2:Identity and Access Management (IAM) in Vertex AI
- Overview of IAM in Google Cloud
- Control access with Identity Access Management.
- Simplify permission using organization hierarchies and policies.
- Use service accounts for least privileged access.
- Lab: Service Accounts and Roles: Fundamentals
Module 3: Data Security and Privacy
- Data encryption
- Protecting Sensitive Data
- VPC Service Controls
- Disaster recovery planning
- Configure encryption at rest and in-transit.
- Encrypt data using customer-managed encryption keys.
- Protect sensitive data using the Data Loss Prevention service.
- Prevent exfiltration of data using VPC Service Controls.
- Architect systems with disaster recovery in mind.
- Lab: Getting Started with Cloud KMS
- Lab: Creating a De-identified Copy of Data in Cloud Storage
Module 4: Securing Vertex AI Endpoints and model deployment
- Network security
- Securing model endpoints
- Deploy ML models using model endpoints.
- Secure model endpoints.
- Lab: Configuring Private Google Access and Cloud NAT
Module 5: Monitoring and logging in Vertex AI
- Logging
- Monitoring
- Write to and analyze logs.
- Set up monitoring and alerting.
Module 6: Security risks in generative AI applications
- Overview of gen AI security risks
- Overview of AI Safety
- Prompt security
- LLM safeguards
- Identify security risks specific to LLMs and gen AI applications.
- Understand methods for mitigating prompt hacking and injection attacks.
- Explore the fundamentals of securing generative AI models and applications.
- Introduce fundamentals of AI Safety.
- Lab: Safeguarding with Vertex AI Gemini API
- Lab: Gen AI & LLM Security for Developers
Module 7: Testing and evaluating generative AI model responses
- Testing generative AI model responses.
- Evaluating model responses.
- Fine-Tuning LLMs.
- Implement best practices for testing model responses.
- Apply techniques for improving response security in gen AI applications.
- Lab: Measure Gen AI Performance with the Generative AI Evaluation Service
- Lab: Unit Testing Generative AI Applications
Module 8: Securing Retrieval-Augmented Generation (RAG) systems
- Fundamentals of Retrieval-Augmented Generation
- Security in RAG systems
- Understand RAG architecture and security implications.
- Implement best practices for grounding and securing data sources in RAG systems.
- Lab: Multimodal Retrieval Augmented Generation (RAG) Using the Vertex AI Gemini API
- Lab: Introduction to Function Calling with Gemini