
Data Flow

This page traces the KubeGlass data flow: the HTTP request lifecycle through the middleware pipeline, and WebSocket multiplexing across watch, log, and terminal channels.

Every request passes through the middleware stack in order:

Client
→ RequestID (16-byte random, X-Request-ID header)
→ RequestLogger (zerolog: method, path, status, duration)
→ Recovery (catch panics → 500)
→ SecurityHeaders (CSP with nonces, HSTS, X-Frame-Options)
→ CORS (origin allowlist, preflight handling)
→ RateLimiter (per-IP token bucket: 100/s, burst 200)
→ ETag (conditional 304 responses)
→ Compression (Brotli quality 4 or gzip level 1)
→ Auth (Bearer token / header extraction + token cache)
→ Impersonation (user/groups from session for K8s API calls)
→ CSRF (non-simple header required for mutations)
→ BodyLimit (1 MiB default)
→ Timeout (30s per request)
→ Handler
Path                       Auth       Timeout
/healthz, /livez, /readyz  None       Server default
/metrics                   None       Server default
/api/v1/*                  Required   30s
/ws/*                      Required   None (long-lived)
/*                         None       Server default (static files)

A single WebSocket connection carries multiple logical streams. The envelope format:

{
  "type": "resource_watch | log_stream | terminal_input | error | ping | pong",
  "sessionID": "...",
  "channel": "watch:<id> | logs:<id> | terminal:<id>",
  "payload": { },
  "timestamp": "2026-..."
}

Clients send subscribe messages to open new channels on an existing connection. Each channel has its own filter (namespace, resource type, label selector). Unsubscribe closes the channel without dropping the connection.

When the K8s API server sends a watch event (ADDED, MODIFIED, DELETED), the hub broadcasts it to every connection subscribed to a matching channel. The frontend's useResourceWatch hook updates React Query cache entries in place; no refetch is needed.

Sessions use HMAC-SHA256 signed cookies (stateless verification):

Cookie = base64url(SessionData) + "." + base64url(HMAC-SHA256(payload, secret))

Session data: session ID (UUIDv4), user ID, groups, cluster context, expiry.

  1. Login - Validate credentials (OIDC / local), create session, set cookie
  2. Request - Decode cookie, verify HMAC (constant-time), check expiry, check revocation
  3. Context switch - Update the session’s cluster context (scoped to this session only)
  4. Logout - Write session ID to BoltDB revocation deny-list

Validated OIDC tokens are cached (SHA-256 key, 1-min TTL, 10k max entries) to avoid re-validating JWTs on every request. Background cleanup every 30s.

Drain sequence when SIGINT/SIGTERM is received:

  1. Cancel server context → all background goroutines exit
  2. Close stopCh → signal rate limiter, session pruning, health monitor
  3. Drain WebSocket connections (1–10s budget)
  4. Close OIDC validator
  5. httpServer.Shutdown() → stop accepting, drain in-flight requests
  6. Close BoltDB
  7. Wait for cleanup goroutines to complete

Drain timeout: configurable via KUBEGLASS_DRAIN_TIMEOUT (default 25s), safely under Kubernetes’ 30s terminationGracePeriodSeconds.