Title: AWS re:Inforce 2024 - Detecting and responding to threats in generative AI workloads (TDR302)
Insights:
- Introduction to AWS Customer Incident Response Team: The team assists customers with incident response, particularly in dealing with security events on the customer side of the shared responsibility model.
 
- Generative AI Security Areas:
  - Security of generative AI applications (e.g., securing chatbots).
  - Using generative AI for security (e.g., prioritizing alerts).
  - Security from generative AI threats (e.g., deep fakes, phishing emails).

- Focus on Compromised Generative AI Applications: The session primarily focuses on incident response for compromised generative AI applications, such as chatbots.
 
- Generative AI Security Foundations:
  - Applications leveraging large language models (e.g., the Amazon Q family).
  - Tools to build with large language models (e.g., Amazon Bedrock).
  - Infrastructure for foundation model training (e.g., GPUs, SageMaker).

- Shared Responsibility Model:
  - Customers are responsible for configuring and securing what they put into generative AI services.
  - The level of customer responsibility differs by service used (e.g., Amazon Q vs. Bedrock).

- Incident Response Lifecycle:
  - Preparation: people, process, and technology.
  - Detection and Analysis: identifying and scoping incidents.
  - Containment, Eradication, and Recovery: minimizing risk and removing threats.
  - Post-Incident Activity: learning and iterating on the program.

- Types of Security Events in AI/ML:
  - AI/ML application as the source of the event (e.g., prompt injection).
  - AI/ML application as the target of the event (e.g., stealing tokens).

- Components of the AI/ML Model:
  - Organization, computing infrastructure, AI/ML application, private data, and users.

- Elements of the AI/ML Model:
  - Access, computing changes, AI changes, data store changes, invocation, private data, and proxy.

- Example Incident Response:
  - An unauthorized user accesses an organization via scraped credentials.
  - Investigating the access, computing changes, AI changes, data store changes, invocation, private data, and proxy elements (a CloudTrail sketch follows below).

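The "access" investigation above can start from CloudTrail event history. Below is a minimal sketch, assuming the scraped credentials belong to an IAM user whose name has already been identified during scoping; the user name and Region are hypothetical.

```python
import boto3

# Hypothetical values for illustration; substitute the principal and Region
# identified while scoping the actual event.
SUSPECT_USER = "chatbot-deploy-user"
REGION = "us-east-1"

cloudtrail = boto3.client("cloudtrail", region_name=REGION)

# Pull recent control-plane events attributed to the suspect principal.
# LookupEvents only covers the last 90 days of management events; a full
# investigation would also query the trail's S3 archive or CloudTrail Lake.
paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": SUSPECT_USER}]
)

for page in pages:
    for event in page["Events"]:
        # Review these against the elements above: computing changes, AI changes,
        # data store changes, and invocation-related API calls.
        print(event["EventTime"], event["EventSource"], event["EventName"])
```
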
 
 
- Actionable Steps for Customers:
  - Training and developing new playbooks.
  - Enabling and utilizing new log sources (e.g., model invocation logging; sketched after this list).
  - Using existing tools like GuardDuty, Config, and Security Hub for detection and analysis (a findings query is sketched after this list).
  - Emphasizing IAM and least privilege for security (a scoped policy example follows this list).

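A minimal sketch of enabling the model invocation logging log source mentioned above, using Amazon Bedrock's PutModelInvocationLoggingConfiguration API. The bucket, log group, and role names are placeholders, and those destinations must already exist with permissions that allow Bedrock to write to them.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Placeholder destinations; the log group, bucket, and role must already exist
# with policies that let Bedrock deliver logs to them.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/model-invocation-logs",
            "roleArn": "arn:aws:iam::111122223333:role/BedrockLoggingRole",
        },
        "s3Config": {
            "bucketName": "example-bedrock-invocation-logs",
            "keyPrefix": "invocation-logs/",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)
```
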
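A minimal sketch of pulling findings from two of the existing tools mentioned above (GuardDuty and Security Hub) during detection and analysis; the severity and workflow filters are illustrative, not a recommended baseline.

```python
import boto3

REGION = "us-east-1"

# GuardDuty: list high-severity findings from the detector in this Region.
guardduty = boto3.client("guardduty", region_name=REGION)
detector_id = guardduty.list_detectors()["DetectorIds"][0]
finding_ids = guardduty.list_findings(
    DetectorId=detector_id,
    FindingCriteria={"Criterion": {"severity": {"GreaterThanOrEqual": 7}}},
)["FindingIds"]
if finding_ids:
    findings = guardduty.get_findings(DetectorId=detector_id, FindingIds=finding_ids)
    for f in findings["Findings"]:
        print("GuardDuty:", f["Type"], f["Severity"])

# Security Hub: fetch active findings that have not yet been worked.
securityhub = boto3.client("securityhub", region_name=REGION)
resp = securityhub.get_findings(
    Filters={
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        "WorkflowStatus": [{"Value": "NEW", "Comparison": "EQUALS"}],
    },
    MaxResults=20,
)
for f in resp["Findings"]:
    print("Security Hub:", f["Title"], f["Severity"]["Label"])
```
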
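A minimal sketch of the least-privilege point above: an identity policy that allows invoking only a single Bedrock foundation model rather than granting bedrock:* on all resources. The policy name and model ARN are illustrative.

```python
import json

import boto3

iam = boto3.client("iam")

# Illustrative policy: scope the resource to the one model the application needs.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeSingleModelOnly",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        }
    ],
}

iam.create_policy(
    PolicyName="chatbot-invoke-model-least-privilege",
    PolicyDocument=json.dumps(policy_document),
)
```
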
 
 
Quotes:
- "Our job is really to assist customers with incident response as they're dealing with security events on the customer side of the shared responsibility model."
 
- "We are primarily focused on number one here, so we're assuming a generative AI application has been compromised."
 
- "The difference here is CloudTrail logs would provide you more control plane information, whereas the model invocation logs would provide you more of the data plane information."
 
- "You should discuss your lessons learned with your stakeholders and really improve your defenses."
 
- "IAM is always going to be super critical for incident response or sorry for security in AWS so just ensuring you have least privilege is always going to matter."
 
- "The fundamentals are still here, right? Whether it's IAM, incident response itself, fundamentals really still apply."