Module 1: Why AI Governance Matters
AI systems can scale your impact, surface insights from hundreds of audits, and save time. But without governance, they can also hallucinate false information, breach confidentiality, and create liability.
The Promise
- Predict risk for 10,000 suppliers based on 500 audits
- Answer complex questions in seconds
- Identify patterns across your entire audit history
The Risks
- Hallucinations: AI generates false but plausible information
- Confidentiality breaches: Data leaks to public AI systems
- Liability: Biased or unsourced recommendations
❌ What Can Go Wrong
Scenario: An auditor pastes a confidential supplier report into ChatGPT.
Result: The data is stored on OpenAI's servers, violating the supplier NDA.
✓ What Success Looks Like
Scenario: The team queries audit reports through a private RAG (retrieval-augmented generation) system.
Result: The AI returns answers with source citations, and the team verifies each source before acting.
Module 2: Preventing Hallucinations
An AI hallucination occurs when the system generates plausible-sounding information that isn't supported by any source data. The solution: mandatory source citations.
Every Claim Must Include:
- Source document (audit report ID)
- Finding code (e.g., "HOW.1", "HS.3")
- Date of audit
- Relevant excerpt
❌ Unacceptable
"Lahore Leather Works has critical child labor violations."
No source cited. Could be a hallucination.
✓ Acceptable
"Lahore Leather Works has child labor risk. [Source: S004, Finding CL.2, July 2024: 'Children ages 10-14 present during working hours']"
Verifiable claim with specific source.
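The citation requirement above can be enforced mechanically before an AI answer reaches a report. A minimal sketch; the regex pattern and field layout are assumptions for illustration, not a company standard:

```python
import re

# Expected citation shape: [Source: <report ID>, Finding <code>, <date>: '<excerpt>']
# This exact pattern is an assumption, not a mandated format.
CITATION_RE = re.compile(
    r"\[Source:\s*(?P<report>\w+),\s*Finding\s*(?P<code>[A-Z]+\.\d+),"
    r"\s*(?P<date>[^:]+):\s*'(?P<excerpt>[^']+)'\]"
)

def has_valid_citation(claim: str) -> bool:
    """Return True only if the claim carries a complete source citation."""
    return CITATION_RE.search(claim) is not None

# Unsourced claim: rejected
assert not has_valid_citation(
    "Lahore Leather Works has critical child labor violations."
)
# Sourced claim: accepted
assert has_valid_citation(
    "Lahore Leather Works has child labor risk. "
    "[Source: S004, Finding CL.2, July 2024: "
    "'Children ages 10-14 present during working hours']"
)
```

A check like this catches missing citations, but it cannot verify that the cited excerpt actually exists in the source report; that verification step remains human work.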
Which AI output is acceptable for a supplier report?
Module 3: Data Privacy
Never upload confidential supplier data to public AI systems.
Approved for Confidential Data:
- Private RAG systems (sandboxed, company-owned)
- Azure OpenAI with data processing agreement
- Self-hosted LLMs
Prohibited for Confidential Data:
- ChatGPT (free or Plus)
- Claude (free tier)
- Any AI without a signed DPA
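The approved/prohibited split above amounts to an allow-list enforced before any confidential text leaves your environment. A minimal sketch; the endpoint names are illustrative assumptions, not real infrastructure:

```python
# Allow-list of AI endpoints approved for confidential data.
# These hostnames are illustrative assumptions, not real company config.
APPROVED_ENDPOINTS = {
    "private-rag.internal",      # sandboxed, company-owned RAG system
    "azure-openai.company.net",  # covered by a signed DPA
    "llm.selfhosted.local",      # self-hosted model
}

def send_confidential(endpoint: str, text: str) -> str:
    """Refuse to send confidential data anywhere outside the allow-list."""
    if endpoint not in APPROVED_ENDPOINTS:
        raise PermissionError(
            f"{endpoint} is not approved for confidential data (no DPA on file)"
        )
    return f"sent {len(text)} chars to {endpoint}"
```

With this gate, a call like `send_confidential("chat.openai.com", report)` fails loudly instead of silently leaking data.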
You need to summarize a confidential audit report. Which approach is acceptable?
Module 4: Human Review Requirements
AI assists, humans decide. For high-stakes decisions, human review is mandatory.
Mandatory Human Review:
- Terminating supplier relationships
- Blacklisting suppliers
- Legal escalations
- Public disclosures
Recommended Human Review:
- Prioritizing audit schedules
- Classifying finding severity
- Drafting corrective action plans
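"AI assists, humans decide" can be made concrete as a gate that blocks high-stakes actions until a named reviewer signs off. A sketch under assumed action names (not a real policy schema):

```python
from typing import Optional

# Decisions that must never execute on an AI recommendation alone.
# The action names are illustrative assumptions.
MANDATORY_REVIEW = {
    "terminate_supplier",
    "blacklist_supplier",
    "legal_escalation",
    "public_disclosure",
}

def execute_action(action: str, human_approver: Optional[str] = None) -> str:
    """AI assists, humans decide: high-stakes actions need a named approver."""
    if action in MANDATORY_REVIEW and human_approver is None:
        return f"BLOCKED: '{action}' requires human review before execution"
    if human_approver:
        return f"EXECUTED: '{action}' (approved by {human_approver})"
    return f"EXECUTED: '{action}'"
```

So when the AI recommends blacklisting a supplier, the system returns BLOCKED until a human reviews the evidence and attaches their name to the decision.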
AI recommends blacklisting a supplier. What should you do?
Module 5: Incident Reporting
Report immediately if:
- AI generates false information that was used in a decision
- Confidential data was exposed to public AI
- AI shows systematic bias
- A supplier challenges an AI-assisted decision
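The four triggers above map naturally onto a structured incident record, so reports are filed consistently. A minimal sketch; the trigger strings and field names are assumptions for illustration:

```python
from dataclasses import dataclass, field
from datetime import date

# Trigger categories mirror the four reporting rules above;
# the string values are illustrative assumptions.
REPORTABLE_TRIGGERS = {
    "false_info_used_in_decision",
    "confidential_data_exposed",
    "systematic_bias",
    "supplier_challenge",
}

@dataclass
class AIIncident:
    """A structured incident record, filed as soon as a trigger is observed."""
    trigger: str
    description: str
    reported_on: date = field(default_factory=date.today)

    def __post_init__(self) -> None:
        if self.trigger not in REPORTABLE_TRIGGERS:
            raise ValueError(f"unknown trigger: {self.trigger!r}")
```

Validating the trigger at construction time keeps the incident log limited to the reportable categories, which makes later bias and exposure reviews much easier.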
Best Practices Summary
✓ Do
- Use private AI systems for confidential data
- Verify every claim against sources
- Apply human review for high-stakes decisions
- Report incidents immediately
❌ Don't
- Upload supplier data to public ChatGPT
- Use unsourced AI claims in reports
- Let AI make autonomous decisions
- Skip human review on high-stakes matters
A colleague pastes audit reports into ChatGPT. What do you do?
Which statement about AI in due diligence is true?
Certificate of Completion
This certifies that
[Your Name]
has completed
AI-Enabled Auditor Training