The VRC Newsletter (August 27)

AI Risk Management: Essentials for Every Compliance Program


AI Risk Management: The New Baseline for Trust

AI is already inside your workflows: drafting messages, analyzing datasets, powering copilots, and assisting decisions. That makes AI part of your control environment, not a side experiment. If you want your SOC 2, HIPAA, ISO 27001, and future ISO 42001 programs to hold up under real scrutiny, you need a structured way to manage AI risk.

Start with an Inventory, Not a Guess

  • Build a single AI system register (one sample entry is sketched below) that includes:

    • Homegrown models

    • AI features inside SaaS tools

    • Copilots and assistants

    • Low-code automations

    • Shadow AI in teams

  • For each system, record:

    • Owner and business purpose

    • Model/provider (and version if available)

    • Data inputs/sources and classifications (e.g., PII, PHI)

    • Where outputs go (apps, tickets, email, data stores)

    • Hosting/region (if known)

    • Business criticality

Remember: you can’t protect what you can’t see.
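To make the register tangible, here is a minimal sketch of what a single entry could look like in code. The field names and example values are illustrative assumptions, not a prescribed schema; a spreadsheet with the same columns works just as well.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in the AI system register (illustrative fields only)."""
    name: str                       # e.g., "Support ticket summarizer"
    owner: str                      # accountable business owner
    purpose: str                    # business purpose in one sentence
    provider_model: str             # model/provider, version if known
    data_inputs: list[str] = field(default_factory=list)   # sources feeding the system
    data_classes: list[str] = field(default_factory=list)  # e.g., ["PII", "PHI"]
    output_destinations: list[str] = field(default_factory=list)
    hosting_region: str = "unknown"
    criticality: str = "medium"     # low / medium / high

# Hypothetical example entry:
record = AISystemRecord(
    name="Helpdesk copilot",
    owner="IT Operations",
    purpose="Drafts first-response replies to support tickets",
    provider_model="vendor-hosted LLM, version unknown",
    data_inputs=["ticket text", "customer name"],
    data_classes=["PII"],
    output_destinations=["ticketing system"],
)
```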

Protect the Data that Touches AI

“Sanitize your data” should be concrete and repeatable (a minimal redaction sketch follows this list):

  • De-identify or tokenize direct identifiers (names, MRNs, SSNs, emails, phone numbers) before prompts or training sets are created.

  • Minimize inputs to only what is necessary for the task; block PHI/PII in prompts by default.

  • Set retention limits for prompts, logs, and outputs; purge on a fixed schedule.

  • Encrypt at rest and in transit for prompt stores, vector DBs, and model outputs.
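As one illustration of the de-identification step above, a simple regex-based redaction pass might look like the sketch below. The patterns are deliberately simplified assumptions, not a complete solution: names and MRNs are not reliably caught by patterns alone, so real programs add lookup-based tokenization or NER.

```python
import re

# Simplified patterns for common direct identifiers (illustrative, not exhaustive).
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace direct identifiers with typed placeholders before text leaves your boundary."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    # Note: names ("Jane Doe") and MRNs survive simple patterns; add NER or
    # lookup-based tokenization for those classes.
    return text

prompt = "Patient Jane Doe, SSN 123-45-6789, reachable at jane@example.com."
print(redact(prompt))
# -> "Patient Jane Doe, SSN [SSN], reachable at [EMAIL]."
```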

Put the Right Controls around AI Access and Activity

  • Role-based access + MFA for admin consoles, API keys, and model endpoints.

  • Log prompts and outputs tied to a user or service account; capture model versions and parameters (see the logging sketch after this list).

  • Key management and rotation for provider tokens; restrict by IP and environment.

  • Human-in-the-loop for high-impact use cases such as billing, clinical, legal, or customer-impacting communications.
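To show what the logging bullet can mean in practice, here is a minimal sketch of one audit record per model call. The model name and fields are illustrative assumptions; in production you would ship each record to your SIEM or log store rather than print it.

```python
import json
import time
import uuid

def log_ai_call(user: str, model: str, params: dict, prompt: str, output: str) -> dict:
    """Build one audit record per model call, tied to an identifiable actor."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "actor": user,            # user or service account making the call
        "model": model,           # provider/model and version if available
        "parameters": params,     # e.g., temperature, max_tokens
        "prompt": prompt,         # consider redacting before logging
        "output": output,
    }
    print(json.dumps(record))     # stand-in for a real log sink
    return record

# Hypothetical call with a made-up service account and model name:
log_ai_call(
    user="svc-helpdesk",
    model="example-llm-v2",
    params={"temperature": 0.2},
    prompt="[redacted prompt]",
    output="[redacted output]",
)
```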

Assure Model and Output Integrity

  • Quality checks: sampling, dual-review for critical outputs, automated validations where possible.

  • Bias and safety testing before go-live and on a cadence; document the results.

  • Drift monitoring with thresholds and alerts when accuracy or behavior changes (a simple check is sketched after this list).

  • Change management for model swaps, temperature changes, or new prompts; include rollback plans.
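As a sketch of the drift-monitoring idea, the check below compares recent quality scores against a baseline and alerts past a fixed threshold. The scores, threshold, and alerting stub are all assumptions for illustration; real monitoring would use your own quality metric and paging system.

```python
import statistics

def check_drift(baseline_scores, recent_scores, threshold=0.05):
    """Alert when mean quality drops more than `threshold` below baseline."""
    baseline = statistics.mean(baseline_scores)
    recent = statistics.mean(recent_scores)
    drop = baseline - recent
    if drop > threshold:
        # Stand-in for paging/alerting; wire this to your monitoring stack.
        print(f"ALERT: quality dropped {drop:.2%} "
              f"(baseline {baseline:.2%}, recent {recent:.2%})")
        return True
    return False

# Weekly sample: human-graded accuracy of sampled outputs (0.0-1.0).
check_drift(baseline_scores=[0.94, 0.95, 0.93], recent_scores=[0.86, 0.88, 0.87])
```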

Demand Real Assurances from Vendors

Ask for more than marketing:

  • Governance artifacts: security whitepaper, model card, data-flow diagrams, and a clear “no training on your data” statement unless you opt in.

  • Data handling: training-data provenance, retention/deletion timelines, region boundaries, and subprocessor lists.

  • Contracts: BAA/DPA as applicable, breach notice SLAs, audit rights or evidence rights, clear PHI/PII prohibitions where needed.

Build People and Process into the Program

  • AI acceptable-use policy with examples of permitted and prohibited prompts.

  • Role-based training on prompt hygiene, data minimization, and reporting suspicious behavior.

  • AI-aware incident response: playbooks for accidental PHI disclosure, harmful output, or model drift.

Map AI Risk to the Frameworks You Live In

  • SOC 2: inventory and access (CC6), logging and monitoring (CC7), change management (CC8), vendor risk (CC9).

  • HIPAA Security Rule: risk analysis, access controls, audit controls, integrity, transmission security.

  • ISO 27001: asset management, access control, logging and monitoring, secure development, supplier relationships.

  • ISO 42001: AI governance, transparency, risk management, and ongoing monitoring for AI systems.

📬 Already a VRC client? We can bundle our services, saving you money and time!

Your Must-Do List This Quarter

  1. Stand up an AI system register and owner list.

  2. Block PHI/PII in prompts, enable tokenization, and set 30–90 day retention limits for AI logs.

  3. Turn on prompt/output logging and MFA for AI consoles and APIs.

  4. Add AI scenarios to your risk analysis and incident response testing.

  5. Send a one-page AI acceptable-use policy and run role-based micro-training.

  6. Issue an AI governance addendum to key vendor contracts and collect evidence.

At VanRein Compliance, we help you embed these controls inside existing SOC 2, HIPAA, ISO 27001, and ISO 42001 programs so compliance becomes a safeguard, not a checkbox.

👉 Schedule your AI Assessment Call to identify your risks, align your controls, and show your customers that your compliance evolves as fast as your technology.

Are your employees using unauthorized AI meeting notetakers?

You wouldn’t allow unmanaged devices on your network, so why allow unmanaged AI into your meetings?

Shadow IT is becoming one of the biggest blind spots in cybersecurity.

Employees are adopting AI notetakers without oversight, creating ungoverned data trails that can include confidential conversations and sensitive IP.

Don't wait until it's too late.

This Shadow IT prevention guide from Fellow.ai gives Security and IT leaders a playbook to prevent shadow AI, reduce data exposure, and enforce safe AI adoption, without slowing down innovation.

It includes a checklist, policy templates, and internal comms examples you can use today.

Vendor AI Oversight: Where Your Risk Actually Lives

Your AI risk doesn’t just sit inside your walls. It travels with every vendor model you connect, every “copilot” embedded in a SaaS tool, and every API that touches your data. Most organizations now rely on third-party AI for core workflows like support, analysis, coding assistance, and document automation. That convenience can also create blind spots: unclear training data, opaque retention policies, fine-tuning on your prompts, or shadow subprocessors outside your approved regions. When a vendor slips, you inherit the exposure with your customers, auditors, and regulators.

Pinpoint the Risk in the Supply Chain

Create a living AI Vendor Register that lists every supplier providing AI functionality, even if AI is “just a feature.” Capture the following (a sketch of how these fields can drive escalations appears after the list):

  • Product and model details, version, hosting region, subprocessors

  • Data flows for inputs, embeddings, and outputs

  • Whether prompts or outputs are stored, retained, or used for training

  • Business criticality and data classifications touched (PHI, PII, payment, student data)
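To show the register doing work rather than sitting in a spreadsheet, here is a small sketch that flags entries needing escalation. The field names, rules, thresholds, and the vendor itself are hypothetical assumptions; tune the rules to your own obligations.

```python
def flag_vendor(entry: dict) -> list[str]:
    """Return escalation reasons for one AI Vendor Register entry (illustrative rules)."""
    reasons = []
    if entry.get("trains_on_customer_data"):
        reasons.append("vendor trains on your data without opt-in")
    if entry.get("retains_prompts") and entry.get("retention_days", 0) > 90:
        reasons.append("prompt retention exceeds 90 days")
    if entry.get("hosting_region") not in {"us", "eu"}:  # assumed approved regions
        reasons.append("hosting region outside approved boundaries")
    if "PHI" in entry.get("data_classes", []) and not entry.get("baa_signed"):
        reasons.append("PHI in scope but no BAA on file")
    return reasons

vendor = {
    "name": "ExampleDocs AI",          # hypothetical vendor
    "trains_on_customer_data": False,
    "retains_prompts": True,
    "retention_days": 365,
    "hosting_region": "unknown",
    "data_classes": ["PII", "PHI"],
    "baa_signed": False,
}
for reason in flag_vendor(vendor):
    print("ESCALATE:", reason)
```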

Require Evidence, Not Assurances

Ask vendors for artifacts that prove control maturity:

  • Governance package: model card or system card, data-flow diagram, safety testing summary, change-management process

  • Data handling: retention and deletion timelines, region boundaries, training-data provenance, policy on using your data for training or fine-tuning

  • Security: access controls, MFA, logging scope, key management, incident response timeframes

  • Compliance: attestations or reports relevant to your program (SOC 2, ISO 27001, HIPAA BAA, ISO 42001 roadmap)

Put Hard Protections in Your Contracts

Update DPAs/BAAs and MSAs with AI-specific terms:

  • No training on your data without explicit written opt-in

  • Data minimization and regionality requirements aligned to your obligations

  • Retention limits and verified deletion on termination

  • Breach and misuse SLAs with 24–72 hour notice windows

  • Subprocessor transparency with approval rights for changes

  • Evidence rights for audits or independent reviews tied to your frameworks

  • Use restrictions for PHI, student data, payment data, or other high-risk classes

Monitor Continuously, Not Annually

Shift from point-in-time questionnaires to ongoing assurance:

  • Quarterly attestations on data use, retention, and subprocessor changes

  • Evidence feeds or dashboards for logs, uptime, and model changes

  • Key rotation and least-privilege reviews for API and service accounts (a simple rotation check is sketched after this list)

  • Targeted red-teaming of high-impact AI features handling sensitive data

  • Join the incident loop: require vendors to include you in tabletop tests and post-incident reviews
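As a minimal illustration of the key-rotation review above, the sketch below flags provider tokens that have outlived an assumed 90-day rotation window. The key names and dates are hypothetical; in practice you would pull rotation timestamps from your secrets manager.

```python
from datetime import date

ROTATION_DAYS = 90  # illustrative rotation window

# Assumed inventory of provider tokens: name -> date last rotated.
keys = {
    "vendor-llm-prod": date(2025, 3, 1),
    "vendor-llm-dev": date(2025, 8, 1),
}

today = date(2025, 8, 27)
for name, rotated in keys.items():
    age = (today - rotated).days
    if age > ROTATION_DAYS:
        print(f"ROTATE: {name} last rotated {age} days ago")
```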

Train the Team to Spot Vendor AI Risks

Give procurement, legal, security, and product teams role-based training on AI risks:

  • What to ask vendors during sourcing and renewal

  • How to read model cards and interpret safety claims

  • How to evaluate “no training” promises and retention settings

  • When to escalate to a DPIA, HIPAA risk analysis, or executive review

Operationalizing Vendor AI Oversight

VanRein Compliance, with the help of our exclusive VRC1 compliance platform, helps you install a repeatable vendor AI governance program that auditors and customers will trust:

  • AI-aware security questionnaires and evidence requests mapped to SOC 2, HIPAA, ISO 27001, and ISO 42001

  • Contract language kits: AI governance addendum, “no-training” clauses, retention, regionality, and subprocessor terms

  • Continuous monitoring playbooks with attestations, log spot-checks, and renewal checkpoints

  • Leadership reporting that shows accountability and control performance over time

Third-party AI is now part of your product surface and your audit scope. When you trade convenience for opacity, you take on silent liabilities that surface at the worst possible moment. A disciplined vendor AI program—clear inventory, verifiable evidence, enforceable contracts, continuous monitoring, and trained teams—turns that exposure into control. Build it once, use it across every framework you live in, and make vendor risk a strength instead of a surprise.

📣 Strengthen vendor AI governance before the next renewal cycle. Book a Strategic AI Governance Call to harden contracts, stand up continuous monitoring, and brief leadership on their oversight responsibilities.

Start learning AI in 2025

Keeping up with AI is hard – we get it!

That’s why over 1M professionals read Superhuman AI to stay ahead.

  • Get daily AI news, tools, and tutorials

  • Learn new AI skills you can use at work in 3 mins a day

  • Become 10X more productive