---
layout: raw-data.njk
title: "transparency plan"
---
# Transparency and Disclosure Plan for AI-Assisted Identification Standards Restructuring
## Document Purpose
This plan outlines how the Identification Standards Review and Restructuring project will meet transparency requirements from the NZ Public Service AI Guidance, specifically addressing the need to be transparent about when and how AI is being used ([DocRef](https://docref.digital.govt.nz/nz/generative-ai-guidance-gcdo/transparency-and-genai/2025/en/#part1-subpart1-para1)).
## Date and Context
- **Date**: 2025-11-19
- **Project**: Identification Standards Review and Restructuring (Two-Phase)
- **AI System Used**: Claude Sonnet 4.5 (model ID: claude-sonnet-4-5-20250929)
- **Project Lead**: Tom Barraclough
- **Coordination**: Government Chief Digital Officer (GCDO) office
---
## Transparency Principles Applied
### 1. Openness
Clearly communicating the purpose and capabilities of the AI system, including what it's designed to do and any limitations ([DocRef](https://docref.digital.govt.nz/nz/generative-ai-guidance-gcdo/glossary-of-ai-terms/2025/en/#part13-subpart1)).
### 2. Explainability
Providing understandable explanations of how the AI system reaches its outputs and recommendations.
### 3. Accountability
Maintaining logs, version control, and audit trails to track and verify AI-assisted work ([DocRef](https://docref.digital.govt.nz/nz/generative-ai-guidance-gcdo/glossary-of-ai-terms/2025/en/#part13-subpart1-para1-3)).
### 4. Data Transparency
Disclosing what data is used by the AI system, including sources and how it's processed ([DocRef](https://docref.digital.govt.nz/nz/generative-ai-guidance-gcdo/glossary-of-ai-terms/2025/en/#part13-subpart1-para1-4)).
---
## What AI Is Being Used For
### Phase 1: Analysis and Recommendations (Stages 1-7)
**AI-Assisted Activities:**
- Semantic exploration of 30 identification standards documents using MCP server queries
- Pattern identification across documents (voice, structure, terminology, etc.)
- Cross-document analysis using semantic similarity relationships
- Thematic synthesis of findings from multiple sources
- Evaluation against generative AI guidance principles
- Development of recommendations based on systematic analysis
- Structure proposal creation
**Human Activities:**
- Project leadership and decision-making
- Approval of structure proposals
- Integration of manual review insights
- Stakeholder consultation
- Final recommendations validation
### Phase 2: Content Creation and Restructuring (Stages 8-13)
**AI-Assisted Activities:**
- Content retrieval using semantic search and queries
- Pattern recognition for markdown style requirements
- Drafting consolidated guidance content (NOT core standards text)
- Navigation and cross-linking suggestions
- Formatting assistance
**Human Activities:**
- Core standards text handling (NO AI modification - human-only)
- Final content decisions and approval
- Expert validation
- Quality assurance verification
- Stakeholder review and testing
- Publication decisions
---
## What AI Is NOT Being Used For
**Explicit Exclusions:**
1. **Modifying Core Standards Text**: The text of the four core identification standards (Federation, Information Assurance, Authentication Assurance, Binding Assurance), especially in-text reference numbers, will NOT be modified by AI; this content is handled by humans only.
2. **Final Decisions**: All final decisions about structure, content, and publication remain with human decision-makers (Tom Barraclough and stakeholders).
3. **Autonomous Publication**: AI will not autonomously publish or deploy any content. All outputs require human verification and approval.
4. **Replacing Expert Review**: AI assists but does not replace subject matter expert validation or stakeholder review.
---
## AI System Details
### System Information
- **AI Model**: Claude Sonnet 4.5
- **Model ID**: claude-sonnet-4-5-20250929
- **Provider**: Anthropic
- **Interface**: Claude Code CLI
- **Knowledge Cutoff**: January 2025
### Capabilities Used
- Natural language understanding and generation
- Document analysis and pattern recognition
- Semantic search and information retrieval
- Content synthesis and summarization
- Markdown formatting and structure generation
### Known Limitations
- **Hallucination Risk**: AI may occasionally generate plausible-sounding but incorrect information
- **Context Window**: Can only process a subset of the project's information at once
- **No Real-Time Data**: Cannot access information beyond the training cutoff (January 2025)
- **No Independent Verification**: Cannot independently verify factual accuracy
- **Pattern Bias**: May identify patterns that reflect its training data rather than objective reality
- **Cultural Context**: May miss NZ-specific or cultural nuances
- **Technical Depth**: May not fully grasp the details of complex technical standards
### Mitigation Strategies
- All AI outputs verified against source documents using DocRef citations
- Human expert review at every stage
- Continuous fact-checking throughout the process
- Stakeholder validation of key recommendations
- Multi-stage verification before any publication
---
## Data Sources and Processing
### Input Data
1. **30 Identification Standards Documents**: Available via the identification-management-standards MCP server
   - 9,374 DocumentNode entities
   - 768-dimensional embeddings (95.8% coverage)
   - 10,208 hierarchical relationships
   - 89,735 semantic similarity relationships
2. **Annotation Data**: Tom Barraclough's manual review comments (23 JSON annotation sets)
3. **Supporting Materials**: Checklists, tables, other relevant documents
4. **AI Guidance**: From the generative-ai-guidance-gcdo MCP server for evaluation
### Data Processing
- Semantic search queries retrieve relevant content based on topic similarity
- Hierarchical context queries explore document structure
- Semantic neighbor queries identify cross-document patterns
- All retrieved content includes DocRef citations for traceability
- No personal data processed by AI system
- No sensitive or classified information used
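To illustrate the kind of semantic search query described above, the sketch below ranks document nodes by cosine similarity between embedding vectors. This is a minimal illustration only, not the MCP server's actual implementation; the node names and tiny 3-dimensional vectors are invented stand-ins for the real 768-dimensional embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, nodes, top_k=2):
    """Rank document-node embeddings by similarity to a query embedding.

    `nodes` maps a node identifier to its embedding vector.
    """
    scores = [(node_id, cosine_similarity(query_vec, vec))
              for node_id, vec in nodes.items()]
    scores.sort(key=lambda item: item[1], reverse=True)
    return scores[:top_k]

# Invented 3-dimensional vectors standing in for 768-dimensional embeddings.
nodes = {
    "node-authentication": [1.0, 0.1, 0.0],
    "node-binding":        [0.9, 0.2, 0.1],
    "node-glossary":       [0.0, 0.1, 1.0],
}
query = [1.0, 0.0, 0.0]  # embedding of a query about authentication
results = semantic_search(query, nodes)
```

Here the two authentication-related nodes outrank the glossary node. The real system works the same way, only over 9,374 nodes in 768 dimensions, with each result carrying a DocRef citation for traceability.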
### Data Retention
- All queries and retrieved content saved to RetrievalResults/ folder
- WorkingFolder/ contains all findings and analysis
- Complete audit trail maintained throughout project
- Version control via git repository
---
## Disclosure to Stakeholders
### Internal Stakeholders
**Project Team and GCDO Office:**
- Full transparency about AI use documented in CLAUDE.md
- All team members have access to complete process documentation
- Regular updates on AI-assisted work and findings
**Subject Matter Experts (Joanne - Conformance/Terminology):**
- Will be informed about AI-assisted analysis and restructuring
- Expert validation explicitly includes reviewing AI-assisted recommendations
- Clear disclosure that restructuring used AI assistance
**Usability Testing (Adele):**
- Will be informed that structure and navigation were AI-assisted
- Testing will validate AI-assisted usability improvements
- Feedback mechanism to challenge AI-assisted decisions
### External Stakeholders
**Standards Users (Assessors, Implementers, Auditors):**
**Proposed Disclosure Approach:**
1. **Project Announcement**: If/when standards restructuring is announced, include disclosure of AI-assisted methodology
2. **Documentation**: Include note in restructured standards about methodology:
> "These identification standards have been restructured and consolidated using an AI-assisted methodology to improve usability, navigation, and clarity. The restructuring process maintained full traceability to source documents, preserved the integrity of core standards requirements, and underwent extensive human expert review and validation."
3. **Transparency Page**: Consider creating dedicated page explaining:
- Why AI was used (efficiency, systematic analysis, pattern identification)
- What AI was used for (analysis, pattern identification, consolidation assistance)
- What AI was NOT used for (core standards text, final decisions)
- How quality was assured (verification, expert review, stakeholder validation)
- How to provide feedback or challenge outputs
**Staged Disclosure Approach:**
The project follows a staged disclosure timeline aligned with the standards development process:
1. **Key Stakeholder Testing Phase (Current)**:
- AI use disclosed immediately to key stakeholders during testing
- Stakeholders receive full transparency about AI-assisted methodology
- Allows stakeholders to provide informed feedback on AI-assisted restructuring
2. **Public Consultation Phase**:
- AI use disclosed to public when standards are released for public consultation and feedback
- Ensures public can provide informed feedback knowing AI was used
- Maintains transparency before finalization
3. **Final Publication**:
- Finalized standards will include disclosure of AI-assisted methodology
- Documentation will be available explaining the process
**Rationale**: This staged approach ensures all reviewers (key stakeholders and public) can provide informed feedback before standards are finalized, aligning with proactive transparency principles while respecting the natural phases of the standards development process.
### Public Disclosure
**Publishing AI Use Information:**
Agencies should publish information about their development and use of AI, including the type of AI being used, project stage, intent of use, and overview of how the system is being used ([DocRef](https://docref.digital.govt.nz/nz/generative-ai-guidance-gcdo/governance-and-genai-in-the-public-service/2025/en/#part2-det5)).
**Proposed Public Disclosure:**
1. **GCDO Website Entry** (if appropriate):
- Project name: Identification Standards Restructuring
- AI type: Generative AI (Claude Sonnet 4.5)
- Project stage: [Phase 1: Analysis / Phase 2: Content Creation]
- Intent: Systematic analysis and restructuring of identification standards to improve usability
- How used: Document analysis, pattern identification, content consolidation assistance
- Oversight: Human expert review, stakeholder validation, extensive verification
2. **Digital.govt.nz Updates**:
- If identification standards are published on digital.govt.nz, include AI use disclosure
- Link to methodology documentation
- Provide contact for questions about AI-assisted process
---
## Audit Trail and Explainability
### How to Trace AI Decisions
**DocRef Citation System:**
- Every piece of analyzed content includes DocRef citation back to source document
- Citations in format: `([DocRef](https://docref.digital.govt.nz/...))`
- Enables verification of AI analysis against original source
- Maintains complete traceability throughout project
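Because the citation format is uniform, traceability checks of this kind can be automated. The sketch below extracts every DocRef URL from a block of Markdown; it is an illustrative example only, not part of the project's actual tooling, and the function name is invented.

```python
import re

# Matches the citation format used throughout the project documents:
#   ([DocRef](https://docref.digital.govt.nz/...))
DOCREF_PATTERN = re.compile(
    r"\(\[DocRef\]\((https://docref\.digital\.govt\.nz/[^)\s]+)\)\)"
)

def extract_docrefs(markdown_text):
    """Return every DocRef URL cited in a Markdown document."""
    return DOCREF_PATTERN.findall(markdown_text)

sample = (
    "Clearly disclosing when and how AI is used "
    "([DocRef](https://docref.digital.govt.nz/nz/generative-ai-guidance-gcdo/"
    "transparency-and-genai/2025/en/#part1-subpart1-para1))."
)
urls = extract_docrefs(sample)
```

Running an extractor like this across the WorkingFolder/ findings would give a reviewer a checklist of every cited source to verify against the original documents.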
**WorkingFolder Documentation:**
- Stage 1-7 findings documents record all analysis and reasoning
- Methodology section in each finding explains approach
- Evidence section includes supporting quotes with citations
- Decisions documented with rationale
**RetrievalResults Folder:**
- All MCP server queries saved with results
- Metadata includes: query used, date retrieved, relevance notes
- Complete record of what information AI accessed
**Git Version Control:**
- All changes tracked in repository
- Commit messages document what was done and why
- History provides complete project timeline
### Explainability for Stakeholders
**For Any AI-Assisted Recommendation:**
Users can ask:
1. **What source documents support this?** → Check DocRef citations
2. **What pattern led to this recommendation?** → Review findings documents in WorkingFolder/
3. **What queries retrieved this information?** → Check RetrievalResults/ logs
4. **Who approved this decision?** → Review git commits and approval checkpoints
5. **How was this verified?** → Check Stage 12 verification report
**Challenge Mechanism:**
- Stakeholders can question any AI-assisted output by referencing stage/finding
- All outputs traceable to source evidence
- Human reviewers (Tom, Joanne, Adele, others) can override AI suggestions
- Feedback incorporated and documented
---
## Transparency in Final Restructured Standards
### Disclosure Options for Published Standards
**Option 1: Methodology Appendix**
Add appendix to restructured standards explaining:
- Restructuring methodology including AI assistance
- Quality assurance process
- Verification and validation approach
- How to provide feedback
**Option 2: Methodology Statement**
Include statement in introduction:
> "The restructuring of these identification standards was completed using a systematic AI-assisted methodology under continuous human expert oversight. All content maintains full traceability to source documents, and the process underwent extensive verification and stakeholder validation."
**Option 3: Separate Methodology Document**
Publish separate document explaining:
- Full restructuring methodology
- AI use details
- Quality assurance framework
- Stakeholder validation process
**Recommendation**: Combination of Option 2 (brief statement) + Option 3 (detailed methodology document) provides appropriate transparency without overwhelming standards users.
---
## Communication Materials
### For Internal Use
**Email Template for Stakeholder Notification:**
Subject: Identification Standards Restructuring Project - AI-Assisted Methodology
Dear [Stakeholder Name],
I'm writing to inform you about the methodology being used in the Identification Standards Review and Restructuring project.
**Project Overview:**
We're systematically analyzing and restructuring New Zealand's 30 identification standards documents to improve usability, navigation, and clarity while maintaining the integrity of all requirements.
**AI-Assisted Methodology:**
The project uses AI assistance (Claude Sonnet 4.5) for:
- Systematic analysis of patterns across 30 documents
- Identification of consolidation opportunities
- Content retrieval and organization
- Drafting of consolidated guidance materials
**Human Oversight:**
- All analysis verified against source documents with citations
- Core standards text NOT modified by AI (human-only handling)
- All final decisions made by human experts
- Extensive verification and validation before publication
- Your expert review is a critical quality assurance step
**Why AI-Assisted:**
- Enables systematic analysis of large document set
- Identifies patterns that might be missed in manual review
- Accelerates retrieval and organization of content
- Maintains complete traceability to source documents
**Your Role:**
Your expert validation of [specific area] is essential to ensure the AI-assisted analysis and recommendations are accurate, appropriate, and aligned with standards practice.
**Questions or Concerns:**
Please don't hesitate to raise any questions or concerns about the AI-assisted methodology. We're committed to transparency and welcome your feedback.
Full documentation of the methodology is available at: [link to CLAUDE.md or methodology document]
Best regards,
Tom Barraclough
---
### For Public Communication
**FAQ: AI-Assisted Standards Restructuring**
**Q: Why was AI used for this project?**
A: AI enables systematic analysis of 30 complex documents, identifying patterns and consolidation opportunities that support better usability. It accelerates content retrieval and organization while maintaining complete traceability to source documents.
**Q: What did AI do?**
A: AI assisted with document analysis, pattern identification, content retrieval, and drafting consolidated guidance. All outputs were verified by human experts against source documents.
**Q: What did AI NOT do?**
A: AI did NOT modify core standards text or reference numbers, make final decisions, or publish content autonomously. All critical decisions remained with human experts.
**Q: How was quality assured?**
A: Multi-stage verification including: source document validation, expert review, stakeholder validation, usability testing, and comprehensive pre-publication verification.
**Q: Can I trust AI-restructured standards?**
A: Every piece of restructured content is traceable to source documents via DocRef citations. Extensive human expert review and validation ensures accuracy and appropriateness.
**Q: How do I provide feedback or challenge outputs?**
A: [Contact information and process for feedback]
**Q: Will future updates also use AI?**
A: Future methodology will be determined based on evaluation of this project and evolving AI capabilities and guidance.
---
## Monitoring and Continuous Transparency
### Ongoing Transparency Activities
**During Phase 1 (Analysis):**
- Document all AI queries and analysis in WorkingFolder/ findings
- Maintain complete audit trail of AI-assisted recommendations
- Regular check-ins with GCDO office on methodology
**During Phase 2 (Content Creation):**
- Document all AI-assisted content generation
- Track verification activities
- Record stakeholder notifications and responses
**Post-Publication:**
- Maintain documentation of AI-assisted methodology
- Respond to stakeholder questions about process
- Document lessons learned for future AI use
### Transparency Metrics
**Track:**
- Number of stakeholders informed about AI use
- Questions/concerns raised about AI methodology
- Challenges to AI-assisted outputs
- Verification issues discovered
- Stakeholder satisfaction with transparency
---
## Compliance Verification
This transparency plan addresses the following AI guidance requirements:
✅ **Transparency about AI use**: Clearly disclosing when and how AI is used ([DocRef](https://docref.digital.govt.nz/nz/generative-ai-guidance-gcdo/transparency-and-genai/2025/en/#part1-subpart1-para1))
✅ **Publishing AI use information**: Planning for public disclosure of AI development and use ([DocRef](https://docref.digital.govt.nz/nz/generative-ai-guidance-gcdo/governance-and-genai-in-the-public-service/2025/en/#part2-det5))
✅ **Explainability**: Providing mechanisms to understand how AI reaches outputs ([DocRef](https://docref.digital.govt.nz/nz/generative-ai-guidance-gcdo/glossary-of-ai-terms/2025/en/#part13-subpart1))
✅ **Accountability mechanisms**: Maintaining logs, version control, audit trails ([DocRef](https://docref.digital.govt.nz/nz/generative-ai-guidance-gcdo/glossary-of-ai-terms/2025/en/#part13-subpart1-para1-3))
✅ **Data transparency**: Disclosing data sources and processing ([DocRef](https://docref.digital.govt.nz/nz/generative-ai-guidance-gcdo/glossary-of-ai-terms/2025/en/#part13-subpart1-para1-4))
---
## Implementation Checklist
### Immediate Actions (Key Stakeholder Testing Phase)
- [ ] Review this transparency plan with Tom Barraclough
- [ ] Prepare stakeholder notification emails using templates in this plan
- [ ] Notify key stakeholders about AI-assisted methodology immediately
- [ ] Provide stakeholders with access to methodology documentation (CLAUDE.md)
- [ ] Establish mechanism for stakeholders to ask questions or raise concerns
### Phase 1 Actions (Analysis and Recommendations)
- [ ] Maintain audit trail of all AI queries and analysis in WorkingFolder/
- [ ] Document the transparency approach in each stage's findings
- [ ] Respond to any stakeholder questions about AI methodology
- [ ] Prepare for Phase 1 checkpoint review with clear disclosure of AI role in recommendations
### Phase 2 Actions (Content Creation)
- [ ] Notify expert validators (Joanne, Adele) about AI assistance in their specific areas
- [ ] Maintain documentation of AI-assisted content creation
- [ ] Track verification of AI outputs continuously
- [ ] Ensure all AI-generated content traceable via DocRef citations
### Public Consultation Phase Actions
- [ ] Prepare public disclosure materials for consultation document
- [ ] Include AI methodology disclosure in consultation materials
- [ ] Create FAQ about AI-assisted process for public
- [ ] Establish public feedback mechanism that allows questions about AI use
- [ ] Consider GCDO website entry or digital.govt.nz update about AI use
### Final Publication Actions
- [ ] Include AI methodology disclosure in final published standards
- [ ] Publish separate methodology document if appropriate
- [ ] Ensure transparency documentation remains accessible
- [ ] Evaluate transparency effectiveness with stakeholders
- [ ] Document lessons learned for future AI projects
---
## Approval and Review
**Prepared by**: Claude Sonnet 4.5 (AI agent)
**Reviewed by**: [Tom Barraclough - Project Lead]
**Approved by**: [Tom Barraclough / GCDO Office]
**Date**: 2025-11-19
**Next Review**: [To be determined - recommend review at Phase 1 checkpoint]
---
## Questions or Updates
Any questions about this transparency plan or suggestions for improvements should be directed to Tom Barraclough, Project Lead.
This plan is a living document and should be updated as the project progresses and transparency requirements evolve.