Module 41: Cloud-Specific Risks and Threat Modeling
The CCSP exam tests whether you can identify risks that are unique to or amplified by cloud computing and apply structured threat modeling to cloud-native application architectures.
Cloud-Amplified Application Risks
Cloud does not create new vulnerability categories but amplifies existing ones:
- Data exposure at scale — a single misconfigured API can expose millions of records versus thousands in a traditional environment
- Blast radius expansion — cloud account compromise affects all applications, not just one server
- Supply chain depth — cloud applications depend on CSP services, third-party libraries, container images, and SaaS integrations
- Ephemeral infrastructure — auto-scaling creates and destroys instances, making traditional security monitoring difficult
- Shared resource abuse — cryptojacking and resource abuse through compromised cloud accounts
When the exam describes a breach with unusually large impact, the amplifying factor is almost always the cloud deployment model. Cloud makes small misconfigurations have massive consequences.
Threat Modeling for Cloud Architectures
Apply STRIDE to cloud-specific components:
- API gateways — Spoofing (forged API keys), Tampering (request modification), DoS (rate limit bypass)
- Serverless functions — Elevation of Privilege (overly permissive roles), Information Disclosure (function environment variables)
- Container orchestration — Spoofing (unauthorized container deployment), Tampering (image modification)
- Cloud storage — Information Disclosure (public buckets), Tampering (unauthorized data modification)
- Message queues — Repudiation (unattributed messages), Tampering (message injection)
The exam expects you to apply threat modeling systematically to each cloud component, not just the application code.
Risk Assessment for Cloud Applications
Cloud application risk assessment must account for:
- The shared responsibility boundary for each service used
- Data classification of information processed by the application
- Regulatory requirements that apply to the application’s data
- The application’s exposure surface (public API, internal only, partner-facing)
- Dependency chain risks (third-party services, open-source components)
The exam rewards candidates who consider the full context of an application’s risk profile, not just its technical vulnerabilities.
AI/ML Application Threat Modeling
AI-powered cloud applications introduce new threat categories:
- Prompt injection — manipulating LLM inputs to bypass safety controls or extract sensitive data
- Model poisoning — corrupting training data to influence model behavior
- Model inversion — extracting training data from model outputs
- Adversarial inputs — crafted inputs that cause misclassification
These threats should be included in threat models for any cloud application incorporating AI/ML components.