
Montis.icu Unified Prefetch Architecture β€” v5.1 Design

The Montis.icu Coach App operates on an authority-driven architecture separating measurement, interpretation, prescription, and interface.

Data integrity originates from Intervals.icu, computation executes on Railway (Python), and conversational rendering never alters canonical truth.

🧠 Strategy β€’ 🟒 Technical β€’ 🟠 Coaching β€’ πŸ”΅ Future

The system is intentionally divided into independent architectural layers, each with a single responsibility and defined authority.

This separation enforces one core principle:

Conversational AI is a rendering layer β€” never a computational authority.

🧠 System Strategy & Architectural Philosophy

The Montis.icu Coach App is engineered around strict authority separation. No layer may override or recompute another layer’s outputs.


πŸ”’ Core Architectural Contract

Coaching logic never mutates canonical metrics.
Tier-1 and Tier-2 outputs are immutable and remain the single source of performance truth.

This prevents metric drift, silent recomputation, probabilistic coaching artifacts, and black-box behavior.


🧱 Responsibility Separation Model

Measurement    β†’ Technical Pipeline (Tier-0 / Tier-1 / Tier-2)
Interpretation β†’ Coaching Pipeline (Tier-3 Intelligence)
Prescription   β†’ Adaptive Decision Engine (Deterministic Rules)
Interface      β†’ LLM Rendering Layer (Read-Only)

No single layer owns more than one responsibility. This structure enables auditability, stateless execution, and predictable system evolution.
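As a sketch, the single-ownership rule can be expressed as a runnable check. The layer and responsibility names come from the model above; the guard function and its name are illustrative, not production code:

```python
# Illustrative sketch of the responsibility separation model.
# The guard fails fast if any layer owns more than one responsibility.
from enum import Enum

class Layer(Enum):
    TECHNICAL = "Technical Pipeline (Tier-0 / Tier-1 / Tier-2)"
    COACHING = "Coaching Pipeline (Tier-3 Intelligence)"
    DECISION = "Adaptive Decision Engine"
    RENDERING = "LLM Rendering Layer"

RESPONSIBILITIES = {
    "measurement": Layer.TECHNICAL,
    "interpretation": Layer.COACHING,
    "prescription": Layer.DECISION,
    "interface": Layer.RENDERING,
}

def assert_single_ownership(mapping: dict) -> None:
    """Raise if any layer appears as the owner of two responsibilities."""
    owners = list(mapping.values())
    duplicates = {layer for layer in owners if owners.count(layer) > 1}
    if duplicates:
        raise ValueError(f"layers owning multiple responsibilities: {duplicates}")

assert_single_ownership(RESPONSIBILITIES)
```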


🎯 Strategic Positioning

Unlike heuristic AI dashboards or machine-learning coaching engines, this system does not invent, infer, or probabilistically manipulate performance metrics.

The result is a deterministic performance intelligence platform β€” not a black-box AI coach.

πŸ” Identity & Token Resolution

Montis resolves athlete identity at runtime using two independent sources: stored OAuth tokens (KV) or direct bearer tokens supplied by external clients.


Bearer token present β†’ direct execution (no KV lookup)
No bearer token     β†’ Worker-managed OAuth via KV (used for MCP)


πŸ”‘ Separation of Concerns

MCP controls tool execution only.
OAuth tokens control access to athlete data.

βš™οΈ Unified Execution Model

Montis.icu operates a multi-entry, dual-identity execution architecture. Tool execution and athlete identity are resolved independently at runtime.

Client
  β†’ Worker (Edge)
    β†’ Dispatcher (Internal Tools)
      β†’ Railway (Computation)
        β†’ Intervals API (Data)

βš™οΈ Runtime Token Resolution

if (incoming_token) {
  use incoming token        // ChatGPT OAuth
} else {
  load token from KV        // Default model
}
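The same rule, sketched in Python for illustration only. The real resolver runs in the Cloudflare Worker at the edge; `kv_lookup` and `athlete_id` are hypothetical stand-ins for the Worker's KV binding:

```python
# Hedged sketch of the runtime token-resolution rule above.
from typing import Callable, Optional

def resolve_token(incoming_token: Optional[str],
                  kv_lookup: Callable[[str], Optional[str]],
                  athlete_id: str) -> str:
    if incoming_token:
        return incoming_token      # ChatGPT OAuth: use the bearer token directly
    stored = kv_lookup(athlete_id)  # Default model: Worker-managed OAuth in KV
    if stored is None:
        raise PermissionError("no credentials available for athlete")
    return stored
```

Either path yields a single authenticated token, so everything downstream of the Worker is identical regardless of entry point.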

This logic applies uniformly across all entry points (MCP, chat, and direct endpoints).


🧩 Separation of Concerns

| Concern | Mechanism |
| --- | --- |
| Tool Execution | MCP / Chat / Direct endpoints |
| Athlete Identity | KV OAuth or Incoming Token |

MCP controls tool access, not athlete data.
OAuth tokens control data access, not execution.


🧠 Execution Patterns

Browser / CLI:
  Client β†’ Worker β†’ KV β†’ Intervals

MCP (Claude):
  LLM β†’ MCP β†’ Worker β†’ KV β†’ Intervals

ChatGPT OAuth:
  ChatGPT β†’ Token β†’ Worker β†’ Intervals

All paths converge into the same dispatcher and produce identical outputs.

βš™οΈ Current Operational Pipeline (Production)

Status: Live β€’ Used in production β€’ Backward compatible

Execution Flow

πŸ”„ Cloudflare handles authentication, routing, and optional synthetic testing.
πŸš‰ Railway performs full computation, validation, and serialization.

🧩 Current Architecture Guarantees

| Area | Implementation | Guarantee |
| --- | --- | --- |
| Prefetch Flow | Cloudflare normalizes date params and injects optional test payloads | Safe staging tests without real data |
| Execution Model | Python execution on Railway (FastAPI + Pandas) | Deterministic, audited output |
| Tier-0 Validation | Baseline column checks (`moving_time`, `distance`, etc.) | No schema drift or missing fields |
| Tier-1 Audit | Filters invalid / null activities, computes daily summaries | Enforced numeric consistency |
| Tier-2 Metrics & Enforcement | Derives metrics, locks totals, enforces logical consistency | No variance bleed between scopes |
| Tier-3 Coaching | Forecast, Performance Intelligence, and ESPE | Integrated performance modelling |
| Serialization | Single-pass semantic JSON (no float loss) | Stable numeric precision |
| Observability | Structured logs at all tier boundaries | Traceable audit chain (light β†’ full β†’ wellness) |
| Schema Version | URF v5.1 unified contracts | Cross-version compatibility with GPT tool API |
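A minimal sketch of the Tier-0 / Tier-1 checks, assuming a pandas DataFrame of activities. Only `moving_time` and `distance` come from the table above; the function names and filter rule are illustrative, not the production URF v5.1 schema:

```python
# Illustrative Tier-0 / Tier-1 baseline checks (not production code).
import pandas as pd

REQUIRED_COLUMNS = {"moving_time", "distance"}

def tier0_validate(df: pd.DataFrame) -> pd.DataFrame:
    """Tier-0: fail fast on schema drift or missing baseline columns."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"schema drift: missing columns {sorted(missing)}")
    return df

def tier1_filter(df: pd.DataFrame) -> pd.DataFrame:
    """Tier-1: drop invalid / null activities before daily summaries."""
    return df.dropna(subset=list(REQUIRED_COLUMNS)).query("moving_time > 0")
```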

🧠 Unified Tool & LLM Architecture

| Capability | Implementation | Guarantee |
| --- | --- | --- |
| Multi-LLM Support | MCP + Direct Tool Calls | Works across ChatGPT, Claude, Gemini |
| Dual OAuth Model | KV OAuth + Incoming Token | Flexible identity resolution |
| Execution Engine | Cloudflare Dispatcher β†’ Railway | Single deterministic pipeline |
| Data Authority | Intervals API | Single source of truth |
| Semantic Contract | URF v5.1 | No recomputation or drift |
| Stateless Execution | No UI dependency | Fully reproducible |

βš™οΈ Execution Flow

Client / LLM
    ↓
Cloudflare Worker (Edge)
    β€’ Token Resolver (incoming > KV)
    β€’ Routing
    ↓
Dispatcher (Internal Tools)
    ↓
Railway Engine (Tier 0–3)
    ↓
Intervals API (Data)
    ↓
Semantic JSON (URF v5.1)
    ↓
LLM Rendering Layer (Read-only, tool-driven)

βš™οΈ Future Operational Pipeline

Status: Planned β€’ Incremental rollout β€’ No breaking changes


🧭 System Diagram (Unified Execution Model)

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Client / LLM / Application β”‚
β”‚ (ChatGPT, Claude, CLI, UI) β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚ Tool call (direct OR MCP)
               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Cloudflare Worker (Edge)           β”‚
β”‚ β€’ Token resolution (direct / KV)   β”‚
β”‚ β€’ Routing + normalization          β”‚
β”‚ β€’ Rate limiting / policy           β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚ Authenticated execution
               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Dispatcher (Internal Execution)    β”‚
β”‚ β€’ Single execution path            β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β”‚
        β”Œβ”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
        β–Ό                β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Intervals APIβ”‚   β”‚ Railway Engine   β”‚
β”‚ (Data Source)β”‚   β”‚ (Tier 0–3)       β”‚
β””β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜   β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
       β”‚                    β”‚
       β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”˜
                    β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Semantic JSON (URF v5.1)           β”‚
β”‚ β€’ Deterministic                    β”‚
β”‚ β€’ Context explicit                 β”‚
β”‚ β€’ No recomputation                 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
               β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ LLM Rendering Layer (Read-only)    β”‚
β”‚ β€’ Narrative only                   β”‚
β”‚ β€’ No metric computation            β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

πŸ’‘ Consistent execution on Railway keeps outputs independent of whichever LLM renders them.


βš™οΈ How It Works (End-to-End)

  1. User or AI issues a natural-language request
    Example: β€œExplain my weekly intensity balance and recovery status.”
  2. Request is converted into a tool call (direct or MCP)
    β€’ Strongly typed inputs
    β€’ Explicit report scope (weekly / season / activity)
  3. Cloudflare Edge handles trust and policy
    β€’ OAuth token exchange with Intervals.icu
    β€’ Scope validation
    β€’ Rate limiting
    β€’ Parameter normalization
  4. Railway executes the computation
    β€’ Tier-0: schema + column validation
    β€’ Tier-1: numeric consistency and filtering
    β€’ Tier-2: locked totals and scope enforcement
    β€’ No cross-window leakage (e.g. weekly β‰  seasonal)
  5. Canonical semantic JSON is produced
    β€’ URF v5.1 contract
    β€’ Explicit context windows
    β€’ Deterministic numeric precision
    β€’ No derived ambiguity
  6. LLM renders the report
    β€’ Descriptive, coach-like language
    β€’ Anchored strictly to provided semantics
    β€’ No recomputation, guessing, or extrapolation

🧠 Why This Matters

This architecture enables natural-language coaching at scale without metric drift, silent recomputation, or black-box behavior.

The result is a system that is deterministic, auditable, and reproducible.
Natural language becomes the interface β€” not the source of truth.

Every decision is rule-based, auditable, and reproducible β€” never guessed or generated.

πŸ” Architectural Continuity

The system evolves toward a unified tool-based architecture (direct + MCP) without changing execution guarantees; it simply formalizes them behind a standard tool interface.


🧠 Montis Intelligence Stack

From raw training load to adaptive coaching decisions β€” a closed-loop performance system.


🧠 Coaching Intelligence Pipeline (Performance Layer)

Status: Active (v6 / ADE v2) β€’ Rule-based β€’ Closed-loop β€’ Production

The Coaching Pipeline operates strictly on validated canonical data produced by the Technical Pipeline. It does not compute raw metrics. It transforms trusted semantic inputs into structured performance intelligence and executes deterministic training decisions.

Tier-3 now operates as a closed-loop system with hierarchical control: performance intelligence drives prescription, but phase governance can override execution.


CANONICAL SEMANTIC DATA (URF v5.1)
                β”‚
                β–Ό
Tier-3A: Stress Intelligence (PI)
(WDRM / ISDM / NDLI)
                β”‚
                β–Ό
Tier-3B: Progression Intelligence
ESPE v1 β€” Power Curve Adaptation (ACTIVE)
                β”‚
                β–Ό
Tier-3C: System Modeling
(PI + ESPE synthesis)
                β”‚
                β–Ό
ADE v2 β€” Adaptive Decision Engine
(Operational Layer β€” metrics driven)
                β”‚
                β–Ό
PHASE GOVERNANCE LAYER (Strategic Override)
(required_phase enforcement)
                β”‚
                β–Ό
COACHING SEMANTIC LAYER (v6)
                β”‚
                β–Ό
Final Structured Report

πŸ” Stage 1 β€” Stress Intelligence (PI)

Evaluates how the athlete expresses training load under fatigue.

Answers: β€œCan the athlete tolerate additional stress?”


πŸ“ˆ Stage 2 β€” Progression Intelligence (ESPE v1 β€” Active)

Tracks energy system adaptation using deterministic power curve comparison.

Answers: β€œIs current training producing adaptation?”
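A hedged sketch of a deterministic power-curve comparison in the spirit of ESPE v1. The durations and the percent-delta rule are assumptions, not the actual ESPE specification:

```python
# Illustrative power-curve comparison: percent change in best power
# at each duration shared by a baseline and a current curve.
def power_curve_delta(baseline: dict[int, float],
                      current: dict[int, float]) -> dict[int, float]:
    """Percent change per shared duration (seconds -> watts)."""
    return {
        dur: round((current[dur] - baseline[dur]) / baseline[dur] * 100, 1)
        for dur in sorted(baseline.keys() & current.keys())
    }

# Hypothetical 1-minute and 5-minute best-power values.
deltas = power_curve_delta({60: 400.0, 300: 330.0},
                           {60: 412.0, 300: 330.0})
```

The comparison is pure arithmetic over two fixed curves, so the same inputs always yield the same adaptation signal.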


🧬 Stage 3 β€” System Modeling

Combines stress signals (PI) and progression signals (ESPE) into a unified physiological state model.

This stage defines system capability and constraints.


🧭 Stage 4 β€” ADE v2 (Operational Decision Layer)

Executes rule-based training adjustments based on current physiological state.

IF:
  Neural density high
  AND Repeatability decreasing
  AND FatigueTrend elevated
THEN:
  Reduce VO2 duration 15%
  Convert tempo β†’ endurance
  Insert OFF day

This layer answers: β€œWhat can the athlete handle right now?”
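The rule above can be sketched as a deterministic function. The signal names and returned adjustments mirror the example, but the structure is an assumption, not ADE v2 internals:

```python
# Hedged sketch of one ADE v2 rule as a pure, deterministic function.
from dataclasses import dataclass

@dataclass
class Signals:
    neural_density_high: bool
    repeatability_decreasing: bool
    fatigue_trend_elevated: bool

def ade_adjustments(s: Signals) -> list:
    """Return prescription adjustments when all three conditions hold."""
    if (s.neural_density_high
            and s.repeatability_decreasing
            and s.fatigue_trend_elevated):
        return ["reduce VO2 duration 15%",
                "convert tempo -> endurance",
                "insert OFF day"]
    return []  # no adjustment: current plan stands
```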


🧭 Stage 5 β€” Phase Governance (Strategic Override Layer)

Enforces mesocycle intent over short-term optimisation.

Critical rule:

IF:
  required_phase != operational_state
THEN:
  override ADE decision

This layer answers: β€œWhat should the athlete be doing now?”

Hierarchy:
Phase (strategy) > ADE (execution)
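A minimal sketch of this hierarchy; the function and argument names are illustrative:

```python
# Sketch of the strategic override: phase intent outranks the ADE's
# operational decision whenever the two disagree.
def resolve_decision(required_phase: str, operational_state: str,
                     ade_decision: str, phase_decision: str) -> str:
    """Phase (strategy) > ADE (execution)."""
    if required_phase != operational_state:
        return phase_decision   # strategic override
    return ade_decision         # phase-aligned: ADE decision stands
```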


🧠 Training State Model (ADE v2)

Operational States:
  load_progression
  stable_load
  absorption_required
  recovery_priority

These states define execution capacity, but do not override phase intent.


πŸ“¦ Coaching Semantic Output (v6)

adaptive_layer:
  operational_state
  required_phase
  phase_alignment
  resolution
  adaptation_focus
  prescription_adjustment
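The adaptive_layer block above can be expressed as a typed structure. The field names come from the schema; the types and example values are assumptions:

```python
# Illustrative typing of the v6 adaptive_layer output block.
from typing import TypedDict

class AdaptiveLayer(TypedDict):
    operational_state: str        # e.g. "stable_load"
    required_phase: str
    phase_alignment: bool
    resolution: str
    adaptation_focus: str
    prescription_adjustment: str

# Hypothetical instance with invented values.
example: AdaptiveLayer = {
    "operational_state": "stable_load",
    "required_phase": "load_progression",
    "phase_alignment": False,
    "resolution": "phase_override",
    "adaptation_focus": "aerobic_durability",
    "prescription_adjustment": "hold volume",
}
```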

The system supports two presentation modes: Athlete Mode and Coach Mode.

Complexity is abstracted in Athlete Mode but fully accessible in Coach Mode. The system hides complexity, never transparency.

πŸ“¬ Contact

For integration, customization, or coaching inquiries, connect via the GitHub link below, send a DM on Intervals.icu, or contribute in the Intervals.icu Forum.

github.com/revo2wheels

Built with ❀️ for endurance athletes β€” by Clive King.
Made in the Swiss Alps πŸ‡¨πŸ‡­.
Powered by Intervals.icu, Cloudflare and the Railway Engine.
