AI Guidance Transparency Materials
This package documents all materials related to the use of the Public Service Generative AI Guidance in this Hub.
Compliance Analysis (December 2025)
A comprehensive compliance analysis was conducted to evaluate how the AI-assisted work documented across this Transparency Hub aligns with the NZ Public Service Generative AI Guidance.
Key Findings
The analysis found strong overall compliance with the guidance:
| Principle | Assessment |
|---|---|
| Inclusive Development | Aligned |
| Human-Centred Values | Strongly Aligned |
| Transparency & Explainability | Excellent |
| Safety & Security | Aligned |
| Accountability | Excellent |
Strengths identified:
- Comprehensive public documentation exceeds publication requirements
- Clear accountability structure with reporting to designated responsible official
- Citation-based traceability enables verification of all AI-generated content
- Human oversight documented throughout all projects
- Complete audit trails support accountability
- Proportionate risk approach appropriate for prototype scope
Materials
| Document | Description |
|---|---|
| Final Analysis (Draft 2) | Complete compliance analysis with governance context |
| First Draft Analysis | Initial compliance analysis with DocRef citations |
| Working Materials | Original prompt and implementation plan |
Misinterpreted Directive (November 2025)
During the Identification Management Standards consolidation project, the AI system had access to an MCP server containing the Public Service Generative AI Guidance materials.
What was intended: The AI system was directed to analyse the process followed in stages 1-4 against the generative AI guidance, essentially evaluating whether the AI-assisted methodology itself aligned with good AI practice principles.
What happened: The AI system misinterpreted this direction. Instead of evaluating the process, it read the generative AI guidance as setting content standards and assessed the identification management content against those inferred standards.
Result: The Stage 5 and Stage 7 outputs validate content recommendations against AI guidance principles, rather than evaluating the AI-assisted process itself.
Retained for Transparency
These outputs are retained in full for transparency purposes, documenting both:
- The misinterpretation itself (how AI systems can misunderstand directives)
- The outputs produced despite the misinterpretation
The original intent of stages 5 and 7 within the identification management project workflow is documented at: Identification Management - Phase 1 Analysis
Materials
| Document | Description |
|---|---|
| Stage 5 - AI Guidance Evaluation | Evaluates identification standards content against inferred content standards from AI guidance |
| Stage 7 - Structure Validation | Validates proposed structure against AI guidance principles |
About the Generative AI Guidance MCP Server
The Government Chief Digital Office (GCDO) publishes guidance for responsible AI use in the New Zealand public service. This guidance was converted to a DocRef MCP server to enable AI systems to query government AI principles.
generative-ai-guidance-gcdo MCP Server:
- 1,393 DocumentNode entities across 23 documents
- Published at DocRef
The guidance covers:
- Public Service AI Framework (foundational principles)
- Responsible AI Guidance for GenAI
- Topic-specific guidance: governance, security, privacy, transparency, bias/discrimination, accessibility
- Implementation guidance: procurement, skills/capabilities
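To illustrate how an AI system can query a server like this, here is a minimal sketch using the standard MCP Python SDK client pattern. The launcher command, the stdio transport, and the `search_documents` tool name are assumptions for illustration only; the actual generative-ai-guidance-gcdo server may expose different tools and connection details.

```python
# Minimal sketch of an MCP client querying a DocRef-style guidance server.
# Assumptions (not from this page): the server is launched locally over stdio
# via a hypothetical "docref-mcp" command and exposes a hypothetical
# "search_documents" tool. Real tool names and transport may differ.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

SERVER = StdioServerParameters(
    command="docref-mcp",  # hypothetical launcher command
    args=["--server", "generative-ai-guidance-gcdo"],
)


async def main() -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server actually exposes before calling anything.
            tools = await session.list_tools()
            print("Available tools:", [t.name for t in tools.tools])

            # Hypothetical tool call: search across the guidance documents,
            # returning DocumentNode matches that can be cited back to source.
            result = await session.call_tool(
                "search_documents",
                {"query": "transparency obligations for generative AI"},
            )
            for item in result.content:
                print(item)


if __name__ == "__main__":
    asyncio.run(main())
```

Listing the server's tools before calling them, as above, is the safest pattern here, since the exact query interface is defined by the DocRef server rather than by this Hub.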
Related Materials
- AI Guidance main page - Summary and overview
- About DocRef MCP servers - Introduction to MCP servers and GraphRAG
- Identification management transparency materials - The project where the misinterpreted directive occurred