1) Purpose
This policy establishes the organization’s AI governance system to ensure that AI systems, especially generative AI, are safe, secure, privacy‑preserving, fair, compliant, and aligned with business objectives. It implements an AI Management System (AIMS) consistent with ISO/IEC 42001 and integrates AI risk management per the NIST AI RMF functions (Govern, Map, Measure, Manage). It also leverages existing SOC 2 controls for training, change management, logging/monitoring, vendor management, and continuity/incident processes to avoid duplication.
2) Scope
- In scope: All AI systems (built, fine‑tuned, or integrated), data pipelines, models, prompts, agents, evaluation harnesses, and user‑facing AI features; all business units; all environments (dev/test/stage/prod).
- Out of scope: Purely manual processes without automated inference or learning; these come into scope once connected to AI decisions or outputs.
3) Normative Alignment
- ISO/IEC 42001: AIMS (Context, Leadership, Planning, Support, Operation, Performance Evaluation, Improvement).
- NIST AI RMF 1.0: Govern, Map, Measure, Manage (tagged [G], [MAP], [MEASURE], and [MANAGE] throughout this policy).
- SOC 2 Foundations reused: Information‑security training & comms, change management, logging/monitoring, vendor risk, and BCDR/incident processes.
4) Definitions (selected)
- AI System: Software that performs tasks requiring human‑like cognitive abilities (prediction, generation, classification, decision‑making).
- Generative AI: Models that generate text, code, images, audio, or other content.
- Model Owner: The single individual accountable for each AI system.
- High‑Risk Use Case: Any AI capability whose failure, misuse, bias, or security/privacy defect could materially harm customers, employees, the business, or trigger regulatory obligations.
5) Roles & Responsibilities
- Board Risk & Compliance Committee: Approves AI risk appetite and this policy; receives quarterly AIMS reports. [G]
- Chief AI Officer (CAIO): Owns AIMS; charters the AI Governance Committee; sets standards, KPIs, and assurance cadence. [G]
- AI Governance Committee (AIGC): Cross‑functional (AI/ML, Product, Security, Privacy, Legal, Ethics, Compliance, CX). Reviews risk assessments, approves go‑live for high‑risk use cases, tracks corrective actions. [G][MANAGE]
- CISO & Security Engineering: AI threat modeling, model supply‑chain security, logging/monitoring integration, incident response. Aligns with existing SOC 2 logging/monitoring and incident processes.
- DPO/Privacy: DPIAs/PTAs, consent, data minimization, retention, user rights, cross‑border controls. [MAP][MANAGE]
- Model Owners / Product Owners: Maintain model cards, risk register entries, evaluation results; ensure control adherence. [MEASURE][MANAGE]
- Data Stewards: Data sourcing, provenance, labeling, quality, sensitive‑data controls. [MAP][MEASURE]
- Red Team / Testing: Adversarial testing, jailbreak/prompt‑injection testing, safety evals. [MEASURE]
- All Employees & Contractors: Comply with acceptable use and training obligations (leverages current security awareness program).
6) AIMS Requirements (ISO/IEC 42001 aligned) with NIST AI RMF tags
6.1 Context of the Organization [G]
- AI Scope & Inventory: Maintain a centralized AI System Register of all AI models/systems, their purpose, owners, data sources, risk tier, and deployment status.
- Stakeholders & Legal Landscape: Track applicable regulations, standards, customer commitments, and license/IP constraints; update before new launches.
- Risk Appetite: Define AI risk tolerance thresholds for safety, privacy, security, fairness, and explainability metrics. Document compensating controls if thresholds are exceeded.
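The following is a minimal sketch of how these risk‑tolerance thresholds might be encoded per risk tier. The metric names and numeric values are hypothetical placeholders; the AIGC sets the actual tolerances and reviews them alongside the corporate risk register.

```python
# Illustrative risk-appetite thresholds per risk tier.
# Metric names and values are hypothetical placeholders; the AIGC
# owns the real tolerances and their review cadence.
RISK_APPETITE = {
    "high": {  # high-risk tier: tightest tolerances
        "fairness_parity_gap_max": 0.05,
        "toxicity_rate_max": 0.001,
        "jailbreak_success_rate_max": 0.01,
        "pii_leak_rate_max": 0.0,
    },
    "medium": {
        "fairness_parity_gap_max": 0.10,
        "toxicity_rate_max": 0.005,
        "jailbreak_success_rate_max": 0.03,
        "pii_leak_rate_max": 0.001,
    },
}

def exceeded_thresholds(tier: str, measured: dict) -> list[str]:
    """Return the names of any thresholds the measured metrics exceed."""
    return [
        name for name, limit in RISK_APPETITE[tier].items()
        if measured.get(name, 0.0) > limit
    ]
```

Model Owners can run a check like this against current [MEASURE] metrics; any non‑empty result triggers the compensating‑controls documentation required above.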
6.2 Leadership & Governance [G]
- Policy & Charter: This policy is the top‑level AI governance control. The AIGC charter defines quorum, decision rights, and escalation.
- Accountability: Every AI system has a Model Owner; high‑risk systems also have an Executive Sponsor.
- Culture & Training: Mandatory onboarding and annual refresher training covering responsible AI, secure prompt handling, data classification, and anti‑bias practices—delivered via the existing security awareness channel.
6.3 Planning [MAP][MEASURE][MANAGE]
- AI Risk Assessment (AIRA): Before development and before go‑live, complete an AIRA covering threats, misuse, bias, privacy, IP, safety, and compliance. Tie findings to the corporate risk register and treatment plan. The organization’s existing risk management policy establishes this cadence; reuse it.
- Objectives & KPIs: Set measurable targets (e.g., bias thresholds, robustness scores, privacy loss budgets, uptime SLOs, and incident MTTR).
- Change Planning: Any model retrain/fine‑tune or prompt policy change follows the existing change management process (authorization, testing, approval, documentation).
6.4 Support [G]
- Competence: Role‑based AI training (developers, reviewers, incident responders, red teams).
- Tooling & Secure SDLC: Approved MLOps toolchain, dependency scanning, SBOMs, and secret scanning for model code and pipelines.
- Documentation: Maintain Model Cards, Data Sheets, and System Cards as living documents.
6.5 Operation (AI SDLC) [MAP][MEASURE][MANAGE]
Integrate the following controls into the AI development lifecycle and the organization’s established SDLC:
- Use‑Case Qualification & Risk Tiering
- Classify proposed AI use by potential impact; high‑risk requires AIGC approval to build and to launch. [G][MANAGE]
- Data Governance
- Source & licensing due diligence; provenance tracking; consent and lawful basis; data minimization; retention/disposal; synthetic data governance; PII handling aligned with privacy program. [MAP][MANAGE]
- Sensitive data (e.g., customer records) must not be sent to external models without an approved data‑protection pattern (tokenization, redaction, or enterprise controls; a redaction sketch follows this section’s controls). [MANAGE]
- Safety, Security, and Robustness by Design
- Threat modeling for AI (prompt injection, data poisoning, model extraction, adversarial inputs).
- Red‑teaming and abuse testing before go‑live; add model and prompt‑policy guardrails. [MEASURE]
- Changes to models/policies follow formal change management—authorized, tested, approved, and documented prior to release.
- Evaluation & Testing
- Establish quantitative/qualitative eval suites for accuracy, fairness, toxicity/safety, grounding/fact‑faithfulness, explainability, latency, and cost.
- For generative systems, include jailbreak and prompt‑injection tests; set minimum passing thresholds tied to risk tier. [MEASURE]
- Transparency & Human Oversight
- Provide user disclosures that AI is involved; record known limitations; expose recourse channels.
- Define human‑in‑the‑loop checkpoints for high‑impact decisions; ensure override/rollback controls exist. [MANAGE]
- Release & Deployment
- Canary and staged rollouts; rollback plans.
- Enable telemetry for prompts/outputs (appropriately pseudonymized) to support security monitoring and quality evaluation, integrated with the organization’s existing logging/alerting approach.
- Third‑Party & Vendor AI
- Perform vendor AI risk assessments before onboarding; require contractual clauses for data protection, IP, model safety responsibilities, and incident cooperation. Leverage the organization’s existing vendor management policy and SOC reviews.
- Where feasible, prefer vendors with ISO/IEC 42001 or NIST AI RMF alignment (or equivalent), and existing SOC reports.
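To illustrate the approved data‑protection pattern referenced under Data Governance above, here is a minimal redaction sketch. The regex patterns and placeholder labels are illustrative assumptions only; a production control would call the organization’s vetted tokenization/DLP service rather than ad‑hoc patterns.

```python
import re

# Minimal redaction sketch for sensitive inputs bound for external models.
# The patterns below (email, US SSN-style IDs) are illustrative; real
# coverage comes from the organization's approved DLP/tokenization service.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace sensitive spans with typed placeholders before any external model call."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label}_REDACTED]", prompt)
    return prompt

safe_prompt = redact("Contact jane.doe@example.com (SSN 123-45-6789) about her claim.")
# -> "Contact [EMAIL_REDACTED] (SSN [SSN_REDACTED]) about her claim."
```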
6.6 Performance Evaluation [MEASURE]
- Production Monitoring: Track model performance, drift, fairness metrics, safety/abuse events, hallucination rates, and robustness indicators. Alert on threshold breaches and open corrective actions (a minimal drift check is sketched after this list).
- Internal Audit & Independent Assessment: Audit conformance to this policy and AIMS effectiveness; feed results into management review.
- Management Review: Quarterly AIGC review of risks, incidents, metrics, and improvement actions; annual Board report. [G]
- Security Monitoring & Alerts: Utilize existing capabilities to detect anomalies, access misuse, and high‑risk events across AI services.
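As referenced under Production Monitoring above, one common way to quantify input drift is the Population Stability Index (PSI) over binned feature distributions. The sketch below is illustrative; the 0.2 cutoff is a widely used rule of thumb, not a mandated value, and Model Owners set tier‑appropriate limits.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (each a list of bin proportions)."""
    eps = 1e-6  # guard against empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # feature bins captured at launch
today = [0.40, 0.30, 0.20, 0.10]     # bins observed in production
if population_stability_index(baseline, today) > 0.2:  # rule-of-thumb cutoff
    print("DRIFT ALERT: open a corrective action per 6.7")  # placeholder for real alerting
```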
6.7 Improvement [MANAGE]
- Corrective & Preventive Actions (CAPA): Document issues, owners, deadlines, and verifications.
- Lessons Learned: Post‑incident reviews for AI‑specific events; update patterns, guardrails, and training.
- Continual Improvement: Use KPIs and audit findings to refine the AIMS.
7) Minimum Control Set for Generative AI (Applicable to employees, contractors, products, and vendors)
- Acceptable Use
- Prohibit submitting secrets, credentials, or regulated PII to unapproved tools.
- Disallow generating content that violates law or policy (harassment, hate, malicious code for unauthorized use, etc.). [G]
- Data Handling & Privacy
- Follow approved redaction/tokenization for any sensitive input to external models; log lawful basis/consent where required; apply retention limits. [MAP][MANAGE]
- Apply privacy threshold analyses for new datasets; perform DPIAs where needed. [MAP]
- Security
- Enforce the principle of least privilege for model access; store prompts/outputs according to data classification; enable guardrails and content filters; integrate with the SIEM for high‑risk events (a pseudonymized logging sketch follows this section).
- All model/prompt‑policy changes follow change management (authorization, testing, approval).
- Safety & Quality
- Maintain evaluation packs including red‑team results; document known failure modes and mitigations in the Model Card. [MEASURE]
- For externally supplied models, require vendor test evidence and allow audit/re‑test rights.
- Transparency & IP
- Provide user‑visible AI notices; include content‑provenance or watermarking where feasible; respect third‑party licenses and data creator rights. [G][MANAGE]
- Human Oversight
- High‑risk outputs must be reviewed and approved by a qualified human prior to acting on them. [MANAGE]
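The sketch below illustrates the pseudonymized prompt/output logging called for in 6.5 and in the Security bullet above. The keyed‑hash approach and field names are assumptions; in practice the key lives in the organization’s secret store and records ship to the existing SIEM pipeline.

```python
import hashlib
import hmac
import json
import time

LOG_KEY = b"rotate-me-via-the-secret-manager"  # assumption: sourced from the secret store

def pseudonymize(user_id: str) -> str:
    """Keyed hash so analysts can correlate events without seeing raw identifiers."""
    return hmac.new(LOG_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def log_ai_event(user_id: str, prompt: str, output: str, risk_flags: list[str]) -> str:
    """Build a log record for the SIEM; field names are illustrative."""
    record = {
        "ts": time.time(),
        "subject": pseudonymize(user_id),  # never the raw identifier
        "prompt": prompt,                  # assumed already redacted per 6.5
        "output": output,
        "risk_flags": risk_flags,          # e.g., ["prompt_injection_suspected"]
    }
    return json.dumps(record)
```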
8) Incident Response & Business Continuity (AI‑Specific)
- AI Incident Types: Model/data compromise, prompt injection, harmful output, privacy breach, systemic bias detection, drift‑induced failure, IP leakage.
- Process: Use the organization’s existing incident response and BCDR processes; ensure AI incidents are triaged, contained, eradicated, and reported with the same rigor and communication standards already in place.
9) Documentation & Records
Maintain at minimum (system‑level): Model Card, Data Sheet, Risk Assessment (AIRA), Evaluation Results, Red‑Team Report, Deployment Record, Change Approvals, Monitoring Metrics, Incident Reports, User‑facing Disclosures. Store in a system of record; retain per data‑classification and legal requirements.
10) Training & Awareness
- All personnel: Annual training on responsible AI, privacy, secure use of AI tools, and prompt/response hygiene—delivered through the existing security awareness program.
- Specialized roles (ML engineers, reviewers, red team, product): Role‑specific training and refreshers aligned to responsibilities.
11) Third‑Party/Vendor Management Requirements (AI)
Before engaging any third‑party AI service or model:
- Complete vendor AI risk assessment (security, privacy, safety, IP, availability).
- Review the vendor’s SOC reports/attestations and require remediation of high findings; this aligns with the organization’s current vendor management policy and SOC review cadence.
- Insert contractual clauses for data ownership, acceptable training/retention, incident notification, and audit/re‑test rights.
12) Enforcement
Violations may result in access restriction, disciplinary action, or contract termination consistent with the Code of Conduct and HR policies. Enforcement leverages existing disciplinary processes.
13) Exceptions
Documented, risk‑accepted exceptions require CAIO and CISO approval (and AIGC approval for high‑risk use cases) and must include compensating controls and an expiry date.
14) Review & Continuous Improvement
This policy is reviewed at least annually and upon material changes in regulation, architecture, or risk posture. Management reviews consider audit results, incidents, performance metrics, and stakeholder feedback. [G][MANAGE]
Appendix A — Crosswalk at a Glance
| Requirement (this policy) | ISO/IEC 42001 area | NIST AI RMF |
| --- | --- | --- |
| AI scope, inventory, roles, charter | Context, Leadership | Govern |
| AIRA risk assessment & register linkage | Planning | Map / Manage |
| Training & awareness | Support | Govern |
| Data governance & provenance | Operation | Map / Manage |
| Secure SDLC & change control for models/prompts | Operation | Manage |
| Safety/bias/robustness evaluations | Operation / Performance Evaluation | Measure |
| Production monitoring & alerts | Performance Evaluation | Measure / Manage |
| Incident response & continuity | Improvement / Operation | Manage |
| Vendor AI risk management | Operation | Govern / Manage |
Appendix B — AI System Register (minimum fields)
- System/Model Name & Owner; Executive Sponsor (if high‑risk)
- Purpose & Business Process; User Population
- Inputs & Data Sources (incl. sensitivity), Provenance & Licenses
- Model Type (e.g., LLM, classifier), Version, Dependencies
- Risk Tier & Justification; Applicable Regulations/Standards
- Evaluation Pack (safety, bias, robustness, grounding) — status & results
- Human Oversight Points & Fallbacks
- Deployment Environments; Monitoring Metrics & Thresholds
- Vendor(s) & Contract Controls (if any)
- Last Change Window; Next Review Date
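A sketch of how the register fields above could be captured as a typed record. Field names and the validation rule are illustrative; the system of record defines the authoritative schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RegisterEntry:
    """Illustrative AI System Register entry mirroring the minimum fields above."""
    system_name: str
    model_owner: str
    purpose: str
    model_type: str                          # e.g., "LLM", "classifier"
    version: str
    risk_tier: str                           # "high" | "medium" | "low"
    executive_sponsor: Optional[str] = None  # required when risk_tier == "high"
    data_sources: list[str] = field(default_factory=list)
    regulations: list[str] = field(default_factory=list)
    environments: list[str] = field(default_factory=list)
    vendors: list[str] = field(default_factory=list)
    eval_pack_status: str = "pending"
    next_review_date: str = ""               # ISO date

    def __post_init__(self) -> None:
        if self.risk_tier == "high" and not self.executive_sponsor:
            raise ValueError("High-risk systems require an Executive Sponsor (see 6.2).")
```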
Appendix C — Generative AI Release Gate (evidence checklist)
- Approved AIRA with mitigations closed or accepted
- Model Card + Data Sheet finalized
- Red‑team & safety evals meeting thresholds; jailbreak tests passed
- Privacy review (PTA/DPIA) completed
- Logging/alerting integrated; abuse report channel enabled
- Change ticket(s) approved and recorded per existing policy
- Rollback plan tested; owners on call for launch window
- User‑facing AI transparency statement prepared
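A sketch of an automated pre‑launch gate over this evidence checklist. The evidence keys are hypothetical; in practice each item would be verified against artifacts in the system of record before the AIGC signs off.

```python
# Evidence keys mirror the checklist above; names are illustrative.
REQUIRED_EVIDENCE = [
    "aira_approved",
    "model_card_final",
    "data_sheet_final",
    "red_team_evals_passed",
    "jailbreak_tests_passed",
    "privacy_review_done",
    "logging_and_abuse_channel_enabled",
    "change_tickets_approved",
    "rollback_plan_tested",
    "transparency_statement_ready",
]

def release_gate(evidence: dict[str, bool]) -> bool:
    """Block launch unless every checklist item has recorded evidence."""
    missing = [item for item in REQUIRED_EVIDENCE if not evidence.get(item)]
    if missing:
        print(f"BLOCKED: missing evidence -> {missing}")  # escalate to the AIGC
        return False
    return True
```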