AI Guidance Evaluation - Transparency Record

Compliance Analysis: Transparency Hub vs AI Guidance

In December 2025, a compliance analysis was conducted to evaluate how the work documented in this Transparency Hub aligns with the NZ Public Service Generative AI Guidance.

Purpose

This analysis evaluates the AI-assisted regulatory work documented across this Hub against the five OECD-aligned principles in the government's AI guidance:

  1. Inclusive growth, sustainable development and well-being
  2. Human-centred values (rule of law, human rights, privacy)
  3. Transparency and explainability
  4. Safety and security (robustness)
  5. Accountability

Methodology

The analysis was conducted by:

  • Querying the generative-ai-guidance-gcdo MCP server (1,393 nodes across 23 guidance documents), published on DocRef
  • Reviewing all transparency materials in this Hub
  • Mapping Hub practices against specific guidance requirements with DocRef citations
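As an illustration only, the mapping step above can be sketched as a simple cross-check of Hub practices against the five principles, with each practice carrying a DocRef citation. All identifiers and citations below are hypothetical placeholders, not the actual tooling or references used in the analysis:

```python
# Hypothetical sketch of the practice-to-guidance mapping step.
# Principle names follow the five OECD-aligned principles listed above;
# the DocRef identifiers are placeholders, not real citations.

PRINCIPLES = [
    "Inclusive growth, sustainable development and well-being",
    "Human-centred values",
    "Transparency and explainability",
    "Safety and security",
    "Accountability",
]

# Each mapped practice records the principle it supports and a citation.
practices = [
    {"practice": "Public documentation of all AI-assisted work",
     "principle": "Transparency and explainability",
     "docref": "DOCREF-PLACEHOLDER-1"},
    {"practice": "Reporting line to a designated responsible official",
     "principle": "Accountability",
     "docref": "DOCREF-PLACEHOLDER-2"},
]

def coverage(practices, principles):
    """Return the set of principles backed by at least one cited practice."""
    return {p["principle"] for p in practices if p["docref"]} & set(principles)

covered = coverage(practices, PRINCIPLES)
uncovered = [p for p in PRINCIPLES if p not in covered]
print(f"{len(covered)} of {len(PRINCIPLES)} principles have cited practices")
```

In practice the real mapping was produced by querying the MCP server for guidance requirements; the point of the sketch is only that every claimed alignment is traceable to a citation, so gaps (the `uncovered` list) are visible rather than silently assumed.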

Key Findings

The analysis found strong overall compliance with the guidance:

Principle                        Assessment
Inclusive Development            Aligned
Human-Centred Values             Strongly Aligned
Transparency & Explainability    Excellent
Safety & Security                Aligned
Accountability                   Excellent

Strengths identified:

  • Comprehensive public documentation exceeds publication requirements
  • Clear accountability structure with reporting to designated responsible official
  • Citation-based traceability enables verification of all AI-generated content
  • Human oversight documented throughout all projects
  • Complete audit trails support accountability
  • Proportionate risk approach appropriate for prototype scope

Contextual notes:

  • Risk Assessment: Informal but proportionate to low-risk prototype work; formal assessment would be conducted before any production deployment
  • Cultural Appropriateness: Not assessed separately; appropriate given technical nature of materials and planned public consultation
  • Agency Policy: This compliance analysis itself serves as the agency policy reference exercise

Transparency Note: Misinterpreted Directive

During the Identification Management Standards consolidation project, the AI system had access to an MCP server containing the Public Service Generative AI Guidance materials.

What was intended: The AI system was directed to analyse the process followed in stages 1-4 against the generative AI guidance: essentially, evaluating whether the AI-assisted methodology itself aligned with good AI practice principles.

What happened: The AI system misinterpreted this direction. Instead of evaluating the process, it interpreted the generative AI guidance as setting content standards and assessed the identification management content against those inferred content standards.

Result: The Stage 5 and Stage 7 outputs validate content recommendations against AI guidance principles, rather than evaluating the AI-assisted process itself.

Retained for Transparency

Stage 5 and Stage 7 outputs are retained in full for transparency purposes, documenting both:

  • The misinterpretation itself (how AI systems can misunderstand directives)
  • The outputs produced despite the misinterpretation

Original Intent

The original intent of stages 5 and 7 within the identification management project workflow is documented at: Identification Management - Phase 1 Analysis


Materials

View All Transparency Materials