Multi-Tenancy

Multiple users share one KubeGlass instance with isolation at every layer:

| Layer | Mechanism |
| --- | --- |
| Authentication | OIDC or reverse-proxy impersonation. Each request carries the user's identity and group memberships |
| K8s mutations | All mutations use per-user impersonation - the K8s API server evaluates the user's own RBAC permissions |
| K8s reads | Service account for full cluster visibility (intentional - shared dashboards need the full picture) |
| Context switching | Per-session, not server-wide. When user A switches to staging, user B stays on production |
| Preferences | Theme, density, default namespace, favorites stored per-user |
| Terminal sessions | Snapshots indexed per-user. Each user sees only their own session history |

Every mutation passes through Kubernetes impersonation:

KubeGlass ServiceAccount
→ Impersonate-User: alice@company.com
→ Impersonate-Group: team-backend, developers
→ K8s API evaluates against alice's RBAC rules

This covers all write operations: create, delete, patch, exec, cordon, drain, Helm actions.

If alice doesn’t have delete permission on pods in the production namespace, KubeGlass returns 403 Forbidden - the same as if she ran kubectl delete pod directly.

Preferences are stored per-user in BoltDB and synced via:

  • GET /api/v1/me/preferences - Retrieve current preferences
  • PUT /api/v1/me/preferences - Partial update (only non-null fields are changed)

Available settings: theme, view density, default namespace, default context, page size, time format, favorite clusters, favorite namespaces, reduced motion, notification preferences.

The userId is always derived from the authentication context - it cannot be spoofed in the request body.

For impersonation to work, the KubeGlass service account needs:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeglass-impersonator
rules:
- apiGroups: [""]
  resources: ["users", "groups"]
  verbs: ["impersonate"]

The Helm chart includes this ClusterRole by default.

Read-only operations (list, get, watch) use the service account’s credentials for full cluster visibility. This is an intentional design choice:

  • Operators sharing a dashboard expect to see all resources
  • Visibility doesn’t equal access - users still can’t modify resources beyond their RBAC scope
  • This matches how tools like Kubernetes Dashboard and Grafana handle shared views

For environments requiring read isolation too, deploy separate KubeGlass instances with RBAC-scoped service accounts.