AI security follows a shared responsibility model, similar to cloud security. Both your organization and your AI provider have distinct roles, and understanding this split is key to planning investments and controls.
Think of AI security across three layers:
- AI platform: The underlying infrastructure and AI services (for example, Azure AI or the models behind Copilot).
- AI application: The software and integrations you build or configure to use generative AI productively and securely.
- AI usage: How people in your organization actually use AI—what data they provide, what they generate, and how they share it.
The division of responsibility shifts depending on the deployment model:
IaaS (Infrastructure as a Service – “build your own model”)
- The provider (e.g., Microsoft) secures the infrastructure (compute, storage, networking).
- Your organization is responsible for securing the models, data, and applications you build on top, including:
  - Model configuration, training data governance, and tuning.
  - Application security, plugins, and integrations.
  - Identity, access, and data protection.
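For the IaaS case, training data governance often starts with a simple gate in the data pipeline. The sketch below is a minimal illustration, not a definitive implementation: the sensitivity labels and the `ALLOWED_LABELS` policy are hypothetical stand-ins for whatever your data governance catalog actually assigns.

```python
from dataclasses import dataclass

# Hypothetical sensitivity labels; in practice these would come from
# your organization's data classification scheme.
ALLOWED_LABELS = {"public", "internal"}

@dataclass
class Document:
    doc_id: str
    label: str
    text: str

def filter_training_corpus(docs: list[Document]) -> list[Document]:
    """Drop documents whose sensitivity label bars them from model tuning."""
    return [d for d in docs if d.label in ALLOWED_LABELS]

corpus = [
    Document("a", "public", "Product FAQ"),
    Document("b", "confidential", "M&A memo"),
    Document("c", "internal", "Runbook"),
]
safe = filter_training_corpus(corpus)
print([d.doc_id for d in safe])  # → ['a', 'c']
```

The point is that the exclusion decision happens before data ever reaches training or tuning, where it can no longer be cleanly revoked.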
PaaS (Platform as a Service – Azure AI and similar)
- The provider offers AI capabilities with many embedded controls and safety features.
- You focus on securing the custom application and its usage:
  - Designing safe prompts, workflows, and business logic.
  - Controlling which data sources the AI can access.
  - Managing user access, monitoring, and governance.
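Controlling which data sources the AI can access is typically enforced in your application layer, before any retrieval request reaches the model. A minimal sketch, assuming a hypothetical allowlist of connector names (`APPROVED_SOURCES` and the source names are illustrative, not any real API):

```python
# Hypothetical connector allowlist enforced by the application,
# not by the model or the platform.
APPROVED_SOURCES = {"hr-policies", "product-docs"}

def retrieve(source: str, query: str) -> str:
    """Fetch context for the model, but only from approved data sources."""
    if source not in APPROVED_SOURCES:
        raise PermissionError(f"Data source '{source}' is not approved for AI access")
    # ... call the real retrieval backend here ...
    return f"results for '{query}' from {source}"

print(retrieve("product-docs", "pricing tiers"))
```

Keeping the allowlist in application code (or configuration you control) makes this a responsibility you own even though the model itself is provider-managed.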
SaaS (Software as a Service – managed AI like Copilot)
- The provider manages the service, software, and most platform-level security.
- Your organization still owns:
  - AI usage: User training, acceptable use policies, and how people interact with the tool.
  - Data security and governance: What data you connect, how it’s labeled, and who has access.
  - Identity and access: Ensuring accounts, roles, and permissions are configured correctly.
To operationalize this model, it helps to focus on three control areas:
- Data access controls: Use APIs, ACLs, and labeling to govern which data AI can see and how it’s used.
- Application controls: Define how applications interact with models and data, including plugins and integrations.
- AI model controls: Ensure models are configured and monitored to prevent unintended disclosures and misuse.
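The data access control area can be made concrete with a per-user ACL check on retrieved context. This is a minimal sketch under assumed names (the `ACL` mapping and `authorized_context` function are illustrative); a real system would query your directory and permission service instead of an in-memory dict.

```python
# Hypothetical ACL: document ID -> set of users allowed to see it.
ACL = {
    "finance-report": {"alice"},
    "eng-wiki": {"alice", "bob"},
}

def authorized_context(user: str, doc_ids: list[str]) -> list[str]:
    """Return only the documents this user may expose to the model.

    Filtering before the prompt is assembled prevents the model from
    seeing (and potentially disclosing) data the user cannot access.
    """
    return [d for d in doc_ids if user in ACL.get(d, set())]

print(authorized_context("bob", ["finance-report", "eng-wiki"]))  # → ['eng-wiki']
```

The same pattern generalizes: the AI inherits the requesting user's permissions rather than holding broad standing access of its own.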
By mapping responsibilities clearly across these layers and models, you can plan where to invest in people, processes, and technology—and avoid gaps where both you and the provider assume the other party is handling a critical security function.