If you run security or compliance at a SOC 2, HIPAA-adjacent, or FINRA-regulated company, you have probably had the same conversation five times: a team wants ChatGPT, legal is nervous, IT is overloaded, and someone has already pasted a customer list into the consumer web app.
The right response is not a blanket ban. Bans create shadow AI, which is worse. The right response is a controlled deployment that your auditors can defend.
This guide explains what that actually looks like.
Start from the threat model, not the vendor
Before you approve any tool, write down three things:
- What data may touch the model? Customer PII, financial records, PHI, source code, legal drafts? Each category has different controls.
- What happens if the model memorises that data? What happens if a prompt from team A surfaces in a response to team B?
- What happens if an engineer puts an API key in a prompt? Is there detection? Rotation? A policy that was read and signed?
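The "is there detection?" question above can be made concrete with a prompt-side secret scan before anything leaves the endpoint. This is a minimal sketch, not a production scanner: the pattern names and regexes are illustrative, and a real deployment would lean on a maintained ruleset (gitleaks-style) rather than three hand-written patterns.

```python
import re

# Illustrative patterns for common credential formats. A real control would
# use a maintained ruleset; these three are only to show the shape.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_sk_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "private_key_block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
}

def find_secrets(prompt: str) -> list[str]:
    """Return the names of any secret patterns present in a prompt."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(prompt)]
```

A gateway that calls `find_secrets` can block the request, page the owner, and trigger rotation — which turns "a policy that was read and signed" into something you can actually evidence.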
Vendors love to sell "enterprise-grade" without answering those questions. Your auditors will ask them.
The three deployment patterns that survive SOC 2 review
Pattern 1: ChatGPT Enterprise or Team, with policy and training. Good for general productivity. No training on your business data by default, admin controls, and configurable retention. This is usually enough for drafting, summarisation, and internal-only knowledge work. It is not enough if you need the model to read regulated data systematically.
Pattern 2: Azure OpenAI Service inside your tenant. You run OpenAI models inside your own Azure subscription, with your identity model, your network controls, and your audit logging. Data does not leave your tenant. Logs feed your SIEM. This is the typical enterprise answer for a regulated organisation that wants OpenAI capability under its existing compliance posture.
Pattern 3: Amazon Bedrock (Claude, Titan, Llama) or Google Vertex AI. Same shape as Azure OpenAI: models you can call inside your cloud, with your IAM, your VPC controls, and your logging. Choose on stack fit and which models you actually need.
For most SOC 2 organisations the answer is not one of these patterns — it is two. ChatGPT Enterprise for general productivity, plus Azure OpenAI or Bedrock for any workflow that reads regulated content systematically.
Controls your auditors will expect
Your SOC 2 auditor is not going to fail you for using AI. They are going to fail you for using it without controls. Map each control to a system:
- Access control. AI tooling gated behind SSO, scoped to roles, access reviewed on a cadence. No personal ChatGPT accounts used for business work.
- Data handling policy. A written AI usage policy that defines what data may be entered, what may not, and what to do on incident. Signed by every employee.
- Vendor review. Vendor assessments completed for every AI tool in scope, including subprocessors. If you use ChatGPT Enterprise, OpenAI is the primary; any model partners are subprocessors.
- Logging. Prompts and completions logged for any workflow that processes regulated data. Tied to a user identity. Retained per your retention policy.
- Data loss prevention. DLP that can detect sensitive content (customer PII, PHI, financial identifiers) before it leaves the endpoint.
- Incident response. A documented path for "an employee pasted sensitive data into a model." Who gets paged, what gets deleted, what gets disclosed.
- Training. Annual AI-specific training, documented, with a completion rate you can show.
- Change management. Any net-new AI workflow that touches regulated data goes through your change management process, same as any other production system.
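The logging control above reduces to a structured record your SIEM can index: every prompt/completion pair tied to an SSO identity and a named workflow. A minimal sketch, assuming your policy permits storing prompt bodies; field names here are illustrative, not a standard schema.

```python
import json
from datetime import datetime, timezone

def prompt_log_record(user_id: str, workflow: str,
                      prompt: str, completion: str) -> str:
    """Build one JSON log line for a prompt/completion pair.

    user_id should come from your SSO identity provider, not free text,
    so access reviews and incident response can trace to a person.
    """
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "workflow": workflow,  # the reviewed, named workflow, not "misc"
        "prompt": prompt,
        "completion": completion,
    }
    return json.dumps(record)
```

Route these lines to your SIEM and apply your existing retention policy to them, same as any other audit log.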
Procurement checklist for an AI vendor
Before you approve a vendor, get written answers to:
- Where is data processed? Which regions, which cloud providers, which subprocessors?
- Is our data used for training, by default or ever? What is the opt-out mechanism?
- What is the data retention window? Can we configure zero retention? Who at your company can access our content?
- What is the SOC 2 or ISO 27001 status? Can we review the report under NDA?
- Do you support SSO via SAML, and SCIM or just-in-time provisioning?
- Is there audit logging we can export to our SIEM?
- What is the breach notification commitment?
- What is the subprocessor list and update cadence?
If a vendor cannot answer these quickly, they are not ready for your environment.
The practical control baseline, in plain terms
You do not need to build a custom AI governance program from scratch. For most mid-market SOC 2 organisations, the baseline is:
- ChatGPT Enterprise (or equivalent) as the approved productivity AI
- A per-workflow review for anything that uses an API key or processes regulated data
- Azure OpenAI or Bedrock for workflows that need model access inside your cloud
- An AI usage policy, signed by employees, reviewed annually
- A prompt library with guidance on what may and may not be included in prompts
- DLP coverage for endpoints that can reach public AI tools
- Logging for regulated workflows, routed to your SIEM
- Annual AI-specific training with completion tracking
What to avoid
- Blanket bans. They produce shadow AI on personal devices. Worse than a controlled tool.
- Ungoverned Copilot rollouts. Copilot surfaces anything a user can already see. If your SharePoint is ungoverned, fix that before you enable Copilot broadly.
- "Trust me" from vendors. If it is not in writing, it is not a control.
- Compliance theatre. A policy no one has read is not a control. Training people to actually follow it is.
- Model freelancing. Teams choosing models and spinning up API keys without review. Centralise the entry points.
What we recommend
If you are at a SOC 2 company and your team is asking for AI, do not start with the tool. Start with a controlled deployment plan: a documented policy, a named approved productivity path (usually ChatGPT Enterprise or Copilot), a model-hosting path for regulated workflows (Azure OpenAI or Bedrock), and a short prompt library.
If you want that whole plan written for your specific environment — including the approved vendor list, policy drafts, and a rollout schedule — the Workflow Automation Assessment or the Executive AI Opportunity Review produces it in one to two weeks, depending on scope.
One last note on posture. Nothing in this guide is legal advice, and it is not a replacement for an audit. Treat it as a starting baseline you can give to your CISO and your GRC lead to sharpen for your specific program.