# Local Parity Mode Inventory v1
## Purpose
Define the intended relationship between:
- local Docker compose development
- local production-shaped parity (kind)
- vm-105 environment-factory sandbox
- shared platform_control
This note is an inventory and design baseline for C-LOCAL-DEV-KIND-PARITY-001.
Important scope boundary:
- this document is about local developer parity only
- vm-105 environment-factory work is related context, but not part of the implementation scope for C-LOCAL-DEV-KIND-PARITY-001
## Current modes
### 1. Compose local-dev
Current owner:
- doc/operations/local-dev/README.md
- doc/operations/local-dev/docker-compose.yaml
- doc/operations/local-dev/docker-compose.app.yaml
Strengths:
- fastest inner loop
- disposable DB reset flow
- good for API/web debugging
- good for node-agent contract/unit and selected integration work
Limits:
- not Kubernetes-shaped
- not GitOps-shaped
- rollout behavior differs from platform_control
- ingress/service discovery/runtime placement differ from platform_control
- stateful component bring-up differs from the target production model
Conclusion:
- keep as the fast development mode
- do not force it to become the production-shaped reference environment
### 2. Shared platform_control
Current owner:
- live continuity environment for deploy validation and current operator work
Strengths:
- real release/deploy path
- real ingress/TLS behavior
- real control-plane integration path
Limits:
- not disposable enough for architecture churn
- too expensive/risky as the first place to invent bootstrap and parity fixes
- mixes continuity needs with next-model experimentation
Conclusion:
- keep as the continuity and deployment validation environment
- do not use it as the primary sandbox for parity-model invention
### 3. vm-105 environment-factory sandbox
Current owner:
- doc/operations/Platform_Control_Environment_Factory_Sandbox_v1.md
Strengths:
- intended rebuild target
- right place for bootstrap, GitOps, storage, secret-seeding, and destroy/recreate work
- explicitly disallows manual repair as a success condition
Limits:
- not the fastest inner loop
- not ideal for small web/API iteration
Conclusion:
- keep as a separate environment-factory track
- do not treat it as part of local developer parity implementation
### 4. kind local parity
Current status:
- desired, in the queue
- not yet a first-class supported path
Why it is needed:
- local validation currently jumps from compose to platform_control
- many recent failures were topology/config/runtime-placement issues, not feature logic issues
- a local K8s-shaped path is needed before shared-environment deploy
Conclusion:
- add as the missing middle mode between compose and platform_control
## Gaps found from recent work
### Gap 1: topology-sensitive bugs escape compose
Observed class:
- ingress and browser-facing URL issues
- readiness ordering issues
- deploy/runtime placement differences
- release/deploy drift not visible in compose
Meaning:
- compose alone is insufficient as the last local gate for parity-sensitive work
### Gap 2: environment contracts were implicit
Observed class:
- GPUAAS_LOCAL_DEV_DIR path contract was not documented clearly enough
- local identity bootstrap assumed a fixed Keycloak port
- compose bind mount resolution was brittle
Meaning:
- environment mode contracts need to be explicit and owned
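As one way to make such a contract explicit, the sketch below shows a guard a local-dev entrypoint could run before doing anything else. GPUAAS_LOCAL_DEV_DIR is the documented variable from the gap above; the function name and error text are illustrative, not an existing script.

```shell
# Illustrative guard for the GPUAAS_LOCAL_DEV_DIR path contract.
# The function name is hypothetical; only the variable name comes from the docs.
require_local_dev_dir() {
  if [ -z "${GPUAAS_LOCAL_DEV_DIR:-}" ]; then
    echo "error: GPUAAS_LOCAL_DEV_DIR is not set (see local-dev README)" >&2
    return 1
  fi
  if [ ! -d "$GPUAAS_LOCAL_DEV_DIR" ]; then
    echo "error: GPUAAS_LOCAL_DEV_DIR is not a directory: $GPUAAS_LOCAL_DEV_DIR" >&2
    return 1
  fi
}
```

Failing fast with a pointer to the documentation turns an implicit contract into an owned, diagnosable one.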
### Gap 3: no clear mode boundary
Current ambiguity:
- some work assumes compose should act like production
- some work assumes platform_control is the only place to validate deployment-shaped behavior
Meaning:
- we need a documented split of responsibility across modes
## Target model
### Mode A: compose
Use for:
- API/web feature development
- schema/query/service debugging
- contract and smoke checks
- fast reset/reseed loops
Must provide:
- stable host-port access
- stable auth/bootstrap behavior for local flows
- disposable seeded environment
Must not be required to provide:
- K8s rollout semantics
- GitOps convergence
- production-like workload placement
### Mode B: kind
Use for:
- production-shaped local parity
- ingress/service/config parity checks
- release-shape runtime validation before shared deploy
- validating K8s manifests and environment assumptions locally
Must provide:
- K8s-shaped deployment model
- ingress and DNS/TLS semantics close to platform_control
- config/secret delivery path close to shared deploy
- repeatable cluster bootstrap and teardown
Must not become:
- the permanent shared integration environment
- a substitute for vm-105 rebuild automation work
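For illustration, the "ingress and DNS/TLS semantics close to platform_control" requirement is commonly met in kind with a cluster config like the one below, which labels the node ingress-ready and maps ingress ports to the host. The file location and port choices are assumptions, not a decided layout.

```yaml
# Sketch: kind cluster config with an ingress-ready node and host port mappings.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            node-labels: "ingress-ready=true"
    extraPortMappings:
      - containerPort: 80
        hostPort: 80
        protocol: TCP
      - containerPort: 443
        hostPort: 443
        protocol: TCP
```

This is the standard pattern for running an ingress controller in kind with browser-facing ports on localhost.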
### Mode C: vm-105
Use for:
- environment factory work
- destroy/recreate automation
- GitOps/bootstrap/stateful bring-up hardening
- proving the next platform_control model
Must provide:
- one-button bootstrap
- one-button validation
- one-button destroy/recreate
- no hidden manual repair
### Mode D: platform_control
Use for:
- shared continuity
- current release validation
- operator-facing deploy verification
- real remote validation
Must not be:
- the first place parity assumptions are tested
## Recommended validation ladder
- feature logic in compose
- parity-sensitive validation in kind
- environment-factory and rebuild validation on vm-105, in its own track
- shared continuity deploy on platform_control
Frontend e2e sits beside this ladder, not above it. Playwright should prove browser
workflows, not discover local-dev harness failures or Kubernetes-shaped parity
drift indirectly. The ownership model is defined in
doc/governance/Frontend_E2E_Validation_Model_v1.md.
## Scope for C-LOCAL-DEV-KIND-PARITY-001
In:
- define supported kind parity mode
- document mode boundaries: compose vs kind vs shared platform_control
- identify the minimum K8s-shaped stack needed for parity
- define operator commands and reset lifecycle for kind
- define what validations must move from compose-only to kind
Out:
- replacing compose as the default dev loop
- any vm-105 automation, GitOps, or destroy/recreate work
- production multi-host rollout design
## Minimum kind parity surface
The first useful kind parity path should cover:
- ingress
- API
- web
- Keycloak
- Postgres
- Redis
- NATS
- Temporal
- registry and Vault only if required by the parity lane under test
It should be acceptable to phase this:
- phase 1: API/web/auth/ingress parity
- phase 2: async workers and stateful services
- phase 3: app-platform and bootstrap artifact flows
## Automation requirements
This work is not complete unless another developer can create and use the parity environment without reconstructing operator knowledge from chat or shell history.
Required automation properties:
- one command to create/bootstrap the kind parity environment
- one command to validate readiness
- one command to tear it down cleanly
- one command to rebuild from zero
- no required manual patching of manifests, secrets, ingress, or local DNS during the happy path
Minimum deliverables:
- reproducible scripts under scripts/
- repo-owned manifests/config under infra/ or doc/operations/local-dev/
- runbook/README entry describing prerequisites and commands
- deterministic config rendering from env or committed defaults
## Proposed command surface
Keep the operator/developer experience explicit and small.
Recommended first-pass commands:
- make kind-parity-up
  - create cluster if missing
  - load/build required images or configure local registry access
  - apply manifests/config
  - wait for readiness
- make kind-parity-status
  - show cluster and service readiness
  - print ingress/API/web/auth endpoints
- make kind-parity-validate
  - run parity smoke checks
  - fail closed on missing ingress/config/readiness dependencies
- make kind-parity-down
  - delete the parity cluster and local artifacts created for it
- make kind-parity-reset
  - full destroy/recreate
These should mirror the style of the current local-dev make targets rather than inventing a separate operator interface.
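In that spirit, one non-authoritative sketch of the first two targets. The cluster name, config path, and manifest layout are assumptions; only the kind and kubectl commands themselves are real CLI usage.

```make
# Sketch only: names and paths are placeholders, not the decided layout.
KIND_CLUSTER ?= gpuaas-parity
KIND_CONFIG  ?= doc/operations/local-dev/kind/cluster.yaml

kind-parity-up:
	kind get clusters | grep -qx "$(KIND_CLUSTER)" || \
		kind create cluster --name "$(KIND_CLUSTER)" --config "$(KIND_CONFIG)"
	kubectl --context "kind-$(KIND_CLUSTER)" apply -k doc/operations/local-dev/kind/
	kubectl --context "kind-$(KIND_CLUSTER)" wait --for=condition=Available \
		deployment --all --all-namespaces --timeout=300s

kind-parity-down:
	kind delete cluster --name "$(KIND_CLUSTER)"

kind-parity-reset:
	$(MAKE) kind-parity-down
	$(MAKE) kind-parity-up
```

Idempotent create (skip if the cluster exists) plus an explicit readiness wait keeps kind-parity-up safe to re-run, matching the reset-friendly style of the compose targets.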
## Proposed implementation phases
### Phase 1: parity baseline
Goal:
- prove API/web/auth/ingress behavior in a K8s-shaped local environment
In:
- kind cluster bootstrap
- ingress controller
- Keycloak
- API
- web
- Postgres
- Redis
- readiness and smoke checks
Out:
- full app-platform runtime parity
- full GitOps model
Success criteria:
- another developer can run kind-parity-up and reach browser-facing endpoints
- local parity validation catches ingress/URL/config mismatches before shared deploy
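The readiness and smoke checks listed above usually reduce to a retry-until-deadline pattern. A minimal sketch, with the timeout and probe command left to the caller; in the real parity checks the probe would be something like a curl against the ingress endpoint.

```shell
# Generic readiness helper: retry a probe command until it succeeds
# or the deadline (in seconds) passes. The helper name is illustrative.
wait_until() {
  timeout_s=$1; shift
  deadline=$(( $(date +%s) + timeout_s ))
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "timed out after ${timeout_s}s waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}
```

Returning nonzero on timeout is what lets kind-parity-validate fail closed instead of hanging or passing vacuously.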
### Phase 2: async and worker parity
Goal:
- cover the control-plane worker/runtime graph that compose does not model well enough
In:
- NATS
- Temporal
- provisioning worker
- notification/outbox/app-runtime workers as required
Success criteria:
- parity lane covers async startup/dependency issues that previously escaped compose
### Phase 3: app-platform and bootstrap parity
Goal:
- support parity-sensitive app-platform and node/bootstrap validation locally
In:
- registry and Vault as needed
- bootstrap artifact metadata path
- app-platform runtime prerequisites
Success criteria:
- app-platform and bootstrap flows can be validated locally without immediately escalating to platform_control
## Configuration model recommendation
Do not fork the entire environment model.
Recommended shape:
- share the same high-level config semantics across compose, kind, vm-105, and platform_control
- render mode-specific manifests/config from that shared config
- keep local parity overrides explicit
- avoid hidden shell-only env requirements
Practical rule:
- compose and kind may differ in transport and placement
- they should not differ in contract semantics or secret/config meaning
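As one possible shape for deterministic rendering with committed defaults and explicit env overrides, a sketch follows. The placeholder syntax, variable names, and localtest.me hostnames are all illustrative assumptions, not the decided config model.

```shell
# Render a manifest template deterministically: committed defaults,
# explicit env overrides, no hidden shell-only requirements.
# The __VAR__ placeholder style and default hostnames are illustrative.
render_manifest() {
  template=$1
  sed \
    -e "s|__API_HOST__|${PARITY_API_HOST:-api.parity.localtest.me}|g" \
    -e "s|__WEB_HOST__|${PARITY_WEB_HOST:-web.parity.localtest.me}|g" \
    "$template"
}
```

Because defaults live in the script rather than in an operator's shell history, the same template renders identically on a fresh clone, which is the "no hidden shell-only env requirements" rule in practice.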
## Image and artifact model
The first kind parity slice should choose one of these clearly:
- build locally and load via kind load docker-image
- publish to a local/dev registry and pull by digest
Recommendation:
- start with local build/load for the first slice
- introduce registry-backed parity only when artifact-flow validation becomes the target of the lane
Reason:
- reduces bootstrap complexity for v1
- keeps the first parity lane focused on topology/runtime behavior
## Other-developer usability rule
The kind parity lane is not successful if only one operator can bring it up.
Evidence required:
- fresh-clone instructions work
- commands succeed without hidden local shell exports
- prerequisites are documented concretely
- failure modes point to actionable diagnostics, not “ask the operator who set it up”
## Relationship to vm-105
kind and vm-105 should not compete.
Recommended split:
- kind
  - single-machine local parity
  - rapid topology/config/runtime validation
- vm-105
  - environment factory
  - rebuild automation
  - GitOps/stateful/bootstrap hardening
This means:
- prove the local parity model in kind
- keep environment factory and destroy/recreate work on vm-105 as a separate queue/program
## First concrete outputs for this queue item
The next implementation slice should produce:
- kind parity bootstrap script(s)
- kind parity manifests/config layout
- make kind-parity-* targets
- parity validation script(s)
- updated docs in local-dev and operations docs
## Acceptance expansion
In addition to the current queue acceptance checks, the work should not be marked done until all of the following are true:
- a documented command path exists for create / validate / destroy / reset
- at least one other developer can follow the documented parity path without manual repair
- the parity lane is clearly separated from compose and platform_control
- the lane is reproducible enough to be used as a pre-deploy validation target
## Open design decisions
- Should kind use local images loaded directly, or always pull from a local registry?
- Should kind reuse the same env file shape as compose, or render a separate parity config?
- Which gates become mandatory on kind before merge/deploy?
- What is the minimum secret/bootstrap model needed locally before Vault is required?
- How much of the platform_control GitOps shape should be reproduced in kind v1?
## Recommendation
Proceed with:
- keep compose as the default fast loop
- add kind as the first-class local parity mode
- keep vm-105 as the production-shaped automation sandbox
- keep platform_control as continuity and live deploy validation
This avoids overloading any one environment with every concern, while creating a real parity lane before shared deploy.