Montis.icu Unified Prefetch Architecture — v5.1 Design
The Montis.icu Coach App operates on an authority-driven architecture separating measurement, interpretation, prescription, and interface. Data integrity originates from Intervals.icu, computation executes on Railway (Python), and conversational rendering never alters canonical truth.
🧠 Strategy • 🟢 Technical • 🟠 Coaching • 🔵 Future
The system is intentionally divided into independent architectural layers, each with a single responsibility and defined authority.
- 🧠 Strategy & Philosophy — Governing architectural doctrine. Defines authority boundaries and non-negotiable contracts for interpretation.
- 🟢 Technical Pipeline (Operational Architecture) — Deterministic data ingestion, validation, aggregation, and semantic serialization (URF v5.1). Immutable canonical outputs.
- 🟠 Coaching Intelligence Pipeline — Structural performance modelling (Tier-3), progression analysis, and deterministic adaptive prescription logic.
- 🔵 Future Technical Pipeline Architecture — Model-agnostic conversational layer using MCP tools. Renders insights without computing or mutating metrics.
This separation enforces:
- Measurement Integrity
- Transparent Decision Intelligence
- Interface Abstraction Without Authority
Conversational AI is a rendering layer — never a computational authority.
🧠 System Strategy & Architectural Philosophy
The Montis.icu Coach App is engineered around strict authority separation. No layer may override or recompute another layer's outputs.
🔒 Core Architectural Contract
Coaching logic never mutates canonical metrics.
Tier-1 and Tier-2 outputs are immutable and remain the single source of performance truth.
- Measurement is deterministic and audited.
- Interpretation is rule-based and traceable.
- Prescription executes via a deterministic state engine.
- LLMs render outcomes but never compute, infer, or adjust metrics.
This prevents metric drift, silent recomputation, probabilistic coaching artifacts, and black-box behavior.
🧱 Responsibility Separation Model
- Measurement → Technical Pipeline (Tier-0 / Tier-1 / Tier-2)
- Interpretation → Coaching Pipeline (Tier-3 Intelligence)
- Prescription → Adaptive Decision Engine (Deterministic Rules)
- Interface → LLM Rendering Layer (Read-Only)
No single layer owns more than one responsibility. This structure enables auditability, stateless execution, and predictable system evolution.
🎯 Strategic Positioning
Unlike heuristic AI dashboards or machine-learning coaching engines, this system does not invent, infer, or probabilistically manipulate performance metrics.
- No hidden state
- No probabilistic coaching inference
- No language-model metric computation
- No silent threshold adjustment
The result is a deterministic performance intelligence platform — not a black-box AI coach.
🔑 Identity & Token Resolution
Montis resolves athlete identity at runtime using two independent sources: stored OAuth tokens (KV) or direct bearer tokens supplied by external clients.
- Bearer token present → direct execution (no KV)
- No bearer token → Worker-managed OAuth (KV), used for MCP
🔐 Separation of Concerns
- Execution access → MCP / /chat / /run_*
- Data access → OAuth token (KV or direct)
MCP controls tool execution only.
OAuth tokens control access to athlete data.
⚙️ Unified Execution Model
Montis.icu operates a multi-entry, dual-identity execution architecture. Tool execution and athlete identity are resolved independently at runtime.
Client
  ↓
Worker (Edge)
  ↓
Dispatcher (Internal Tools)
  ↓
Railway (Computation)
  ↓
Intervals API (Data)
⚙️ Runtime Token Resolution
// Worker pseudocode; binding and variable names are illustrative
if (incomingToken) {
  token = incomingToken;         // ChatGPT OAuth: client-supplied bearer token
} else {
  token = await kv.get(athlete); // default model: Worker-managed OAuth from KV
}
This logic applies uniformly across:
- /run_* endpoints
- /chat/api
- /mcp
🧩 Separation of Concerns
| Concern | Mechanism |
|---|---|
| Tool Execution | MCP / Chat / Direct endpoints |
| Athlete Identity | KV OAuth or Incoming Token |
MCP controls tool access, not athlete data.
OAuth tokens control data access, not execution.
🧠 Execution Patterns
- Browser / CLI: Client → Worker → KV → Intervals
- MCP (Claude): LLM → MCP → Worker → KV → Intervals
- ChatGPT OAuth: ChatGPT → Token → Worker → Intervals
All paths converge into the same dispatcher and produce identical outputs.
⚙️ Current Operational Pipeline (Production)
Status: Live • Used in production • Backward compatible
🌐 Cloudflare handles authentication, routing, and optional synthetic testing.
🚂 Railway performs full computation, validation, and serialization.
🧩 Current Architecture Guarantees
| Area | Implementation | Guarantee |
|---|---|---|
| Prefetch Flow | Cloudflare normalizes date params and injects optional test payloads | Safe staging tests without real data |
| Execution Model | Python execution on Railway (FastAPI + Pandas) | Deterministic, audited output |
| Tier-0 Validation | Baseline column checks (`moving_time`, `distance`, etc.) | No schema drift or missing fields |
| Tier-1 Audit | Filters invalid / null activities, computes daily summaries | Enforced numeric consistency |
| Tier-2 Metrics & Enforcement | Derived metrics, locked totals, logical-consistency enforcement | No variance bleed between scopes |
| Tier-3 Coaching | Forecast, Performance Intelligence, and ESPE | Integrated performance modelling |
| Serialization | Single-pass semantic JSON (no float loss) | Stable numeric precision |
| Observability | Structured logs at all Tier boundaries | Traceable audit chain (light → full → wellness) |
| Schema Version | URF v5.1 unified contracts | Cross-version compatibility with GPT tool API |
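The Tier-0 and Tier-1 guarantees in the table can be made concrete with a small sketch. This uses plain dicts instead of the real Pandas pipeline, and the function names are hypothetical; only the required column names (`moving_time`, `distance`) come from the document.

```python
# Illustrative Tier-0 column validation and Tier-1 filtering sketch.
# Not the production Railway code: names and structure are assumptions.

REQUIRED_COLUMNS = {"moving_time", "distance"}

def tier0_validate(activity: dict) -> None:
    """Tier-0: reject any activity missing baseline columns (schema drift)."""
    missing = REQUIRED_COLUMNS - activity.keys()
    if missing:
        raise ValueError(f"Tier-0 schema violation: missing {sorted(missing)}")

def tier1_filter(activities: list[dict]) -> list[dict]:
    """Tier-1: drop invalid / null activities before daily aggregation."""
    valid = []
    for a in activities:
        tier0_validate(a)
        if a["moving_time"] and a["distance"]:  # nulls/zeros filtered out
            valid.append(a)
    return valid

activities = [
    {"moving_time": 3600, "distance": 30000},
    {"moving_time": None, "distance": 15000},  # invalid -> filtered by Tier-1
]
clean = tier1_filter(activities)
```

The point of the split is auditability: Tier-0 fails loudly on structural problems, while Tier-1 silently but deterministically excludes unusable rows.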
🧠 Unified Tool & LLM Architecture
| Capability | Implementation | Guarantee |
|---|---|---|
| Multi-LLM Support | MCP + Direct Tool Calls | Works across ChatGPT, Claude, Gemini |
| Dual OAuth Model | KV OAuth + Incoming Token | Flexible identity resolution |
| Execution Engine | Cloudflare Dispatcher → Railway | Single deterministic pipeline |
| Data Authority | Intervals API | Single source of truth |
| Semantic Contract | URF v5.1 | No recomputation or drift |
| Stateless Execution | No UI dependency | Fully reproducible |
⚙️ Execution Flow
Client / LLM
  ↓
Cloudflare Worker (Edge)
  • Token Resolver (incoming > KV)
  • Routing
  ↓
Dispatcher (Internal Tools)
  ↓
Railway Engine (Tier 0–3)
  ↓
Intervals API (Data)
  ↓
Semantic JSON (URF v5.1)
  ↓
LLM Rendering Layer (Read-only, tool-driven)
🔑 Key Properties
- Single execution path — all clients use the same dispatcher
- Dual identity resolution — KV or incoming token
- No recomputation in LLMs
- Deterministic outputs across all interfaces
⚠️ Important Clarification
- MCP controls tool access
- OAuth controls data access
- ChatGPT uses direct OAuth (no MCP)
⚙️ Future Operational Pipeline
Status: Planned • Incremental rollout • No breaking changes
🧭 System Diagram (Unified Execution Model)
┌────────────────────────────┐
│ Client / LLM / Application │
│ (ChatGPT, Claude, CLI, UI) │
└──────────────┬─────────────┘
               │ Tool call (direct OR MCP)
               ▼
┌─────────────────────────────────────┐
│ Cloudflare Worker (Edge)            │
│  • Token resolution (direct / KV)   │
│  • Routing + normalization          │
│  • Rate limiting / policy           │
└──────────────┬──────────────────────┘
               │ Authenticated execution
               ▼
┌─────────────────────────────────────┐
│ Dispatcher (Internal Execution)     │
│  • Single execution path            │
└──────────────┬──────────────────────┘
               │
       ┌───────┴───────────┐
       ▼                   ▼
┌──────────────┐  ┌──────────────────┐
│ Intervals API│  │ Railway Engine   │
│ (Data Source)│  │ (Tier 0–3)       │
└──────┬───────┘  └────────┬─────────┘
       │                   │
       └────────────┬──────┘
                    ▼
┌─────────────────────────────────────┐
│ Semantic JSON (URF v5.1)            │
│  • Deterministic                    │
│  • Context explicit                 │
│  • No recomputation                 │
└──────────────┬──────────────────────┘
               ▼
┌─────────────────────────────────────┐
│ LLM Rendering Layer (Read-only)     │
│  • Narrative only                   │
│  • No metric computation            │
└─────────────────────────────────────┘
🌐 Cloudflare handles authentication, routing, and optional synthetic testing.
🚂 Railway performs full computation, validation, and serialization.
💡 Independence from LLMs through consistent execution (Railway)
🔑 Key Insights
- There is no UI state — every interaction is stateless, reproducible, and tool-driven.
- LLMs never compute metrics — they only interpret pre-computed, audited semantic JSON.
- OAuth tokens are handled at the edge and never exposed to LLM reasoning — Cloudflare fully isolates authentication from reasoning.
- Intervals.icu remains the single source of truth — all metrics originate from published APIs or FIT-derived data.
- Context is explicit, never inferred — each metric declares whether it is activity, 7d, 90d, or rolling.
- The system is headless by design — it works equally well for chat, voice, automation, or background agents.
⚙️ How It Works (End-to-End)
1. User or AI issues a natural-language request
   Example: “Explain my weekly intensity balance and recovery status.”
2. Request is converted into a tool call (direct or MCP)
   - Strongly typed inputs
   - Explicit report scope (weekly / season / activity)
3. Cloudflare Edge handles trust and policy
   - OAuth token exchange with Intervals.icu
   - Scope validation
   - Rate limiting
   - Parameter normalization
4. Railway executes the computation
   - Tier-0: schema + column validation
   - Tier-1: numeric consistency and filtering
   - Tier-2: locked totals and scope enforcement
   - No cross-window leakage (e.g. weekly ↔ seasonal)
5. Canonical semantic JSON is produced
   - URF v5.1 contract
   - Explicit context windows
   - Deterministic numeric precision
   - No derived ambiguity
6. LLM renders the report
   - Descriptive, coach-like language
   - Anchored strictly to provided semantics
   - No recomputation, guessing, or extrapolation
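Step 5 — explicit context windows in the canonical payload — can be sketched as follows. The field names below are illustrative assumptions, not the actual URF v5.1 schema; the point is that every metric declares its own scope rather than relying on the consumer to infer it.

```python
import json

# Minimal sketch of "context is explicit, never inferred" in a URF-style
# payload: each metric carries its own scope window (activity/7d/90d).
# Field names are hypothetical, not the real URF v5.1 contract.

report = {
    "schema": "URF v5.1",
    "metrics": [
        {"name": "training_load", "value": 412, "context": "7d"},
        {"name": "avg_power", "value": 218, "context": "activity"},
        {"name": "fitness_trend", "value": 1.8, "context": "90d"},
    ],
}

# Single-pass serialization with a stable, canonical layout
payload = json.dumps(report, separators=(",", ":"), sort_keys=True)
restored = json.loads(payload)
```

Because serialization is single-pass and key order is canonicalized, the same inputs always yield byte-identical payloads, which keeps the audit chain reproducible.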
🧠 Why This Matters
This architecture enables natural language coaching at scale without:
- Dashboards
- Sliders
- Charts
- Hidden state
- Silent recomputation
The result is a system that is:
- Auditable
- Secure
- Model-agnostic
- Future-proof
- Genuinely conversational
Natural language becomes the interface — not the source of truth.
Every decision is rule-based, auditable, and reproducible — never guessed or generated.
🔁 Architectural Continuity
The system evolves toward a unified tool-based architecture (direct + MCP), without changing execution guarantees. It formalizes the same guarantees behind a standard tool interface.
- Same Railway execution engine
- Same Tier-0/1/2 enforcement
- Same URF v5.1 semantic contract
- Same Intervals.icu data authority
🎯 Decision Governance
- The Adaptive Decision Engine is the sole prescriptive authority.
- Tier-2 metrics provide diagnostic context.
- Tier-3 synthesizes capability state.
- LLMs never modify, override, or invent recommendations.
🧠 Montis Intelligence Stack
From raw training load to adaptive coaching decisions — a closed-loop performance system.
🧠 Coaching Intelligence Pipeline (Performance Layer)
Status: Active (v6 / ADE v2) • Rule-based • Closed-loop • Production
The Coaching Pipeline operates strictly on validated canonical data produced by the Technical Pipeline. It does not compute raw metrics. It transforms trusted semantic inputs into structured performance intelligence and executes deterministic training decisions.
Tier-3 now operates as a closed-loop system with hierarchical control: performance intelligence drives prescription, but phase governance can override execution.
CANONICAL SEMANTIC DATA (URF v5.1)
  ↓
Tier-3A: Stress Intelligence (PI)
(WDRM / ISDM / NDLI)
  ↓
Tier-3B: Progression Intelligence
ESPE v1 — Power Curve Adaptation (ACTIVE)
  ↓
Tier-3C: System Modeling
(PI + ESPE synthesis)
  ↓
ADE v2 — Adaptive Decision Engine
(Operational Layer — metrics-driven)
  ↓
PHASE GOVERNANCE LAYER (Strategic Override)
(required_phase enforcement)
  ↓
COACHING SEMANTIC LAYER (v6)
  ↓
Final Structured Report
📊 Stage 1 — Stress Intelligence (PI)
Evaluates how the athlete expresses training load under fatigue.
- WDRM — Anaerobic repeatability and W′ depletion behavior
- ISDM — Durability and fatigue resistance
- NDLI — Neural load density and intensity clustering
Answers: “Can the athlete tolerate additional stress?”
📈 Stage 2 — Progression Intelligence (ESPE v1, Active)
Tracks energy system adaptation using deterministic power curve comparison.
- Endurance (60min trend)
- Threshold (20min trend)
- VO₂ Capacity (5min trend)
- Anaerobic Repeatability (1min + W′ behavior)
Answers: “Is current training producing adaptation?”
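A deterministic power-curve comparison of the kind ESPE v1 performs can be sketched as below. The durations match the four energy systems listed above; the 1% tolerance, the function name, and the trend labels are illustrative assumptions, not the actual ESPE rules.

```python
# Hedged sketch of deterministic power-curve comparison in the spirit of
# ESPE v1: compare current best powers at fixed durations against a
# baseline curve and classify each energy system's trend.
# Thresholds and names are assumptions, not the production logic.

DURATIONS = {"anaerobic": 60, "vo2": 300, "threshold": 1200, "endurance": 3600}

def espe_trends(baseline: dict[int, float], current: dict[int, float],
                tol: float = 0.01) -> dict[str, str]:
    """Return 'improving' / 'stable' / 'declining' per energy system."""
    trends = {}
    for system, secs in DURATIONS.items():
        delta = (current[secs] - baseline[secs]) / baseline[secs]
        if delta > tol:
            trends[system] = "improving"
        elif delta < -tol:
            trends[system] = "declining"
        else:
            trends[system] = "stable"
    return trends

baseline = {60: 520.0, 300: 380.0, 1200: 300.0, 3600: 260.0}
current  = {60: 515.0, 300: 395.0, 1200: 302.0, 3600: 250.0}
trends = espe_trends(baseline, current)
# e.g. here 5min power is up ~3.9% (improving), 60min down ~3.8% (declining)
```

Because the comparison is pure arithmetic over published power bests, the same curves always produce the same classification — no probabilistic inference is involved.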
🧬 Stage 3 — System Modeling
Combines stress signals (PI) and progression signals (ESPE) into a unified physiological state model.
- Energy system balance
- Durability gradient
- Adaptation bias
- Plateau detection
This stage defines system capability and constraints.
🧠 Stage 4 — ADE v2 (Operational Decision Layer)
Executes rule-based training adjustments based on current physiological state.
IF:   Neural density high
      AND Repeatability decreasing
      AND FatigueTrend elevated
THEN: Reduce VO2 duration 15%
      Convert tempo → endurance
      Insert OFF day
This layer answers: “What can the athlete handle right now?”
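The rule quoted above can be encoded as a deterministic predicate. This is a sketch only: the state field names and adjustment identifiers are hypothetical, and the real ADE v2 rule set is richer than this single rule.

```python
# Illustrative encoding of one ADE v2 rule as a deterministic predicate
# over a physiological state. Field and adjustment names are assumptions.

def ade_decide(state: dict) -> list[str]:
    """High neural density + falling repeatability + elevated fatigue
    trend -> reduce intensity and insert recovery (the example rule)."""
    adjustments = []
    if (state["neural_density"] == "high"
            and state["repeatability_trend"] == "decreasing"
            and state["fatigue_trend"] == "elevated"):
        adjustments += [
            "reduce_vo2_duration_15pct",
            "convert_tempo_to_endurance",
            "insert_off_day",
        ]
    return adjustments

state = {"neural_density": "high",
         "repeatability_trend": "decreasing",
         "fatigue_trend": "elevated"}
plan = ade_decide(state)
```

Because prescriptions are plain conditionals over named signals, every adjustment can be traced back to the exact rule and inputs that produced it.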
🧠 Stage 5 — Phase Governance (Strategic Override Layer)
Enforces mesocycle intent over short-term optimisation.
Critical rule:
IF:   required_phase != operational_state
THEN: override ADE decision
This layer answers: “What should the athlete be doing now?”
- Prevents fatigue accumulation drift
- Forces recovery / taper when required
- Ends blocks when adaptation saturates
Hierarchy:
Phase (strategy) > ADE (execution)
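The hierarchy above can be sketched as a single resolution function: phase intent wins whenever it disagrees with the operational state. Everything except the `required_phase` / `operational_state` terminology is an illustrative assumption.

```python
# Sketch of "Phase (strategy) > ADE (execution)": the governance layer
# overrides the operational decision whenever the required mesocycle
# phase disagrees with the ADE's operational state. Names illustrative.

def resolve(operational_state: str, required_phase: str) -> dict:
    """Return the state to execute plus how the conflict was resolved."""
    if required_phase != operational_state:
        # Strategic override: phase intent beats short-term optimisation
        return {"resolution": "phase_override", "execute": required_phase}
    return {"resolution": "aligned", "execute": operational_state}

# ADE wants to keep loading, but the mesocycle demands recovery:
decision = resolve("load_progression", "recovery_priority")
```

In the example, the ADE's `load_progression` is discarded and `recovery_priority` is executed, which is exactly the drift-prevention behavior described above.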
🧠 Training State Model (ADE v2)
Operational States:
- load_progression
- stable_load
- absorption_required
- recovery_priority
These states define execution capacity, but do not override phase intent.
📦 Coaching Semantic Output (v6)
adaptive_layer:
- operational_state
- required_phase
- phase_alignment
- resolution
- adaptation_focus
- prescription_adjustment
The system supports:
- Athlete Mode — Clear directive
- Coach Mode — Full traceability
Complexity is abstracted in Athlete Mode but fully accessible in Coach Mode. The system hides complexity — it never hides transparency.
💬 Contact
For integration, customization, or coaching inquiries, connect via the GitHub link below, send a DM on Intervals.icu, or contribute in the Intervals.icu Forum.
github.com/revo2wheels
Built with ❤️ for endurance athletes — by Clive King.
Made in the Swiss Alps 🇨🇭.
Powered by Intervals.icu, Cloudflare, and the Railway Engine.