# Execution Plan: Creating The New Zealand API Standard
## Context Management Strategy
### Phase 1: Systematic Research with Local Storage (Steps 1-3)
Execute all searches from the instructions document and save results to organized local files:
**Research Files Structure**:
1. `research/01_design.md` - All design-related searches (~5 searches, ~150 results)
2. `research/02_development.md` - Development & REST API searches (~7 searches, ~210 results)
3. `research/03_security.md` - Security, auth, OAuth searches (~7 searches, ~280 results)
4. `research/04_deployment.md` - Versioning, gateway, deployment (~5 searches, ~150 results)
5. `research/05_operations.md` - Management, governance, monitoring (~7 searches, ~210 results)
6. `research/06_definitions.md` - All definition-tagged content (~3 searches, ~100 results)
7. `research/07_patterns.md` - Authentication patterns, examples (~3 searches, ~100 results)
8. `research/08_good_practices.md` - Good practice content by topic (~6 searches, ~180 results)
Each research file will contain:
- The search queries used (for traceability)
- All results with full content and DocRef citations
- Tags for each result
- Similarity scores to assess relevance
**Estimated total**: ~1,380 search results stored locally, accessible on demand without loading them into the context window.
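The per-result fields listed above could be appended to a research file with a small helper like the following (a hypothetical sketch; the field names `query`, `content`, `doc_ref`, `tags`, and `score` are illustrative, and the actual search API may return a different shape):

```python
# Hypothetical helper for saving one search's results to a research file.
# Field names are illustrative; adapt to the real search tool's output.
from pathlib import Path

def append_search_results(file_path, query, results):
    """Append a search query and its results to a research file as markdown."""
    lines = [f"\n## Query: {query}\n"]
    for r in results:
        lines.append(f"### Result (score: {r['score']:.2f})")
        lines.append(f"Tags: {', '.join(r['tags'])}")
        lines.append(r["content"])
        lines.append(f"([DocRef]({r['doc_ref']}))\n")
    path = Path(file_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:
        f.write("\n".join(lines))
```

Appending (rather than overwriting) lets several searches for the same phase accumulate in one file, preserving traceability from query to result.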
### Phase 2: Content Drafting with Selective Context Loading (Steps 4-7)
When drafting each section of the Standard:
1. Read ONLY the relevant research file(s) for that section
2. Extract and synthesize content
3. Apply elevation criteria for should→MUST conversions
4. Write section to `nz-api-standard.md` progressively
5. Track all used DocRefs in `citations_tracker.md`
**Memory-efficient approach**: Never load all research at once. Read research files one at a time as needed.
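The selective-loading step can be sketched as a small function (a minimal sketch, assuming the file layout from this plan; `draft_section` stands in for the real synthesis work):

```python
# Minimal sketch of the one-file-at-a-time drafting step.
# `draft_section` is a placeholder for the actual synthesis logic.
from pathlib import Path

def draft_with_research(research_path, section_name, draft_section,
                        out_path="nz-api-standard.md"):
    """Load ONE research file, draft the section, append it to the Standard."""
    research = Path(research_path).read_text(encoding="utf-8")
    section_md = draft_section(section_name, research)
    with open(out_path, "a", encoding="utf-8") as f:
        f.write(section_md + "\n")
    # `research` is released when this function returns, so only one
    # file's worth of content is held at a time.
    return section_md
```

Because each call loads exactly one research file and appends its output, the working set never grows beyond one file plus the section being drafted.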
### Phase 3: Appendices and Finalization (Steps 8-10)
1. Compile appendices from saved pattern and example files
2. Cross-reference using citations tracker
3. Quality review against checklist
4. Format verification
## Detailed Steps
### Step 1: Execute Design Phase Searches
- Run 5 searches per instructions (design principles, API types, specification, etc.)
- Save all results to `research/01_design.md`
- Include search queries, full results, citations, tags
- **Context impact**: One file write, then clear from memory
### Step 2: Execute Development Phase Searches
- Run 7 searches (HTTP methods, headers, error handling, testing, etc.)
- Save to `research/02_development.md`
- **Context impact**: One file write, then clear from memory
### Step 3: Execute Security Phase Searches
- Run 7 searches (OAuth, authentication, TLS, threat protection, etc.)
- Save to `research/03_security.md`
- Focus on Part B content with must-or-shall and good-practice tags
- **Context impact**: One file write, then clear from memory
### Step 4: Execute Deployment & Operations Searches
- Run 5 deployment searches → `research/04_deployment.md`
- Run 7 operations searches → `research/05_operations.md`
- **Context impact**: Two file writes, then clear from memory
### Step 5: Execute Cross-Cutting Searches
- Definitions (~100 results) → `research/06_definitions.md`
- Patterns (~100 results) → `research/07_patterns.md`
- Good practices by topic (~180 results) → `research/08_good_practices.md`
- **Context impact**: Three file writes, then clear from memory
### Step 6: Draft Standard Document - Section by Section
For each section, follow this pattern:
1. Read ONLY the relevant research file(s) for that section
2. Create `nz-api-standard.md` on the first pass, then append later sections, keeping the heading structure consistent
3. Draft the section content with DocRef citations
4. Apply elevation criteria where appropriate
5. Add to citations tracker
6. **Context impact**: Only one research file + working draft in memory at a time
**Drafting order**:
1. Title page & Introduction (skim research files one at a time for an overview)
2. Definitions (read `06_definitions.md`)
3. Design Phase section (read `01_design.md` + `08_good_practices.md`)
4. Development Phase section (read `02_development.md` + `08_good_practices.md`)
5. Security Phase section (read `03_security.md` + `08_good_practices.md`)
6. Deployment Phase section (read `04_deployment.md`)
7. Operations Phase section (read `05_operations.md`)
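The body sections of the drafting order above can be captured as a simple section-to-sources map (file names from this plan; the introduction skims several files and is handled separately):

```python
# Body sections of the Standard, in drafting order, mapped to the
# research files each one loads (per the plan above).
DRAFTING_ORDER = [
    ("Definitions", ["research/06_definitions.md"]),
    ("Design Phase", ["research/01_design.md", "research/08_good_practices.md"]),
    ("Development Phase", ["research/02_development.md", "research/08_good_practices.md"]),
    ("Security Phase", ["research/03_security.md", "research/08_good_practices.md"]),
    ("Deployment Phase", ["research/04_deployment.md"]),
    ("Operations Phase", ["research/05_operations.md"]),
]
```

Keeping the order in one data structure makes it easy to drive the drafting loop and to verify that every research file is consumed by at least one section.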
### Step 7: Draft Appendices
1. Appendix A: Auth patterns (read `07_patterns.md` + `03_security.md`)
2. Appendix B: API examples (read `07_patterns.md`)
3. Appendix C: Error reference (read `02_development.md`)
4. Appendix D: Glossary (read `06_definitions.md`)
5. Appendix E: References (compile from all research files)
### Step 8: Quality Review
- Verify all citations present using `citations_tracker.md`
- Check formatting compliance (no in-text numbering, correct list placement, etc.)
- Validate normative language (MUST/SHOULD/MAY)
- Ensure the document follows the lifecycle flow (design → development → security → deployment → operations)
### Step 9: Final Verification
Run through quality checklist from instructions:
- [ ] Every section has DocRef citations
- [ ] Normative language correct
- [ ] No in-text numbering
- [ ] Lists only follow paragraphs
- [ ] Main headings at level 3 or 4
- [ ] All appendices complete
- [ ] Cross-references valid
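The first two checklist items lend themselves to automation. A sketch of a citation check, assuming the `[x]` placeholder in the citation URL expands to the part letter (a/b/c):

```python
# Sketch of an automated citation check for the quality review.
# The DocRef URL pattern follows the citation format noted in this plan,
# assuming [x] expands to the part letter a/b/c.
import re

DOCREF = re.compile(
    r"\(\[DocRef\]\(https://docref\.digital\.govt\.nz/nz/"
    r"dia-apis-part[abc]/2022/en/#[^)\s]+\)\)"
)

def count_citations(markdown_text):
    """Return the number of well-formed DocRef citations in a draft."""
    return len(DOCREF.findall(markdown_text))

def sections_missing_citations(markdown_text):
    """List level-3/4 headings whose section body has no DocRef citation."""
    missing = []
    for chunk in re.split(r"\n(?=####? )", markdown_text):
        heading = chunk.splitlines()[0] if chunk.strip() else ""
        if heading.startswith("#") and not DOCREF.search(chunk):
            missing.append(heading.lstrip("# ").strip())
    return missing
```

Running `sections_missing_citations` over the draft directly verifies the "every section has DocRef citations" item, while `count_citations` confirms the draft falls in the 250-400+ range targeted for the deliverable.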
### Step 10: Deliverable
- `nz-api-standard.md` (10,000-15,000 words, 250-400+ citations)
- `research/` directory with 8 research files (for traceability)
- `citations_tracker.md` (verification log)
## Key Principles
1. **Never load all research into context** - use file system as external memory
2. **Work incrementally** - one section at a time
3. **Maintain traceability** - every requirement traces to saved research
4. **Preserve citations** - never paraphrase DocRef links
5. **Apply elevation judiciously** - document rationale for should→MUST changes
## Database Structure Reference
- **Total nodes**: 5,612 (Part A: 700, Part B: 1,715, Part C: 3,197)
- **Citation format**: `([DocRef](https://docref.digital.govt.nz/nz/dia-apis-part[x]/2022/en/#[id]))`
- **Tags available**: must-or-shall, should-or-may, good-practice, definition, patterns, reference-examples-apis
- **Node types**: heading, para, section, list, table row
- **Content sizes**: 50-3,000+ characters per node
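A citation in the format above can be rendered with a one-line helper (a hypothetical sketch, again assuming `[x]` expands to the part letter and `[id]` to the node id):

```python
# Hypothetical helper to render a citation in the plan's DocRef format.
# Assumes [x] in the URL template is the part letter (a/b/c) and [id]
# is the node's anchor id.
def docref_citation(part, node_id):
    url = f"https://docref.digital.govt.nz/nz/dia-apis-part{part}/2022/en/#{node_id}"
    return f"([DocRef]({url}))"
```

Generating citations from one template, rather than hand-typing URLs, keeps them consistent with the verification regex used during quality review.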
## Estimated Execution
- **Research phase**: ~40 MCP queries, ~90 minutes
- **Drafting phase**: ~6-8 hours of systematic writing
- **Review phase**: ~1-2 hours
- **Total**: ~8-12 hours, yielding a comprehensive, traceable Standard document with full provenance