# Troubleshooting
## Cannot connect to cluster

- Verify your kubeconfig is valid: `kubectl cluster-info`
- Check KubeGlass can reach the API server: `curl -k https://<api-server>:6443/healthz`
- Review logs: `docker logs kubeglass` or `kubectl logs deploy/kubeglass`
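If those checks pass from your workstation but KubeGlass still cannot connect, run the health check from inside the Pod's own network namespace to rule out in-cluster network policy or DNS issues. This sketch assumes the container image ships a shell and `wget`:

```sh
# The in-cluster API server address resolves via the default Service.
# /healthz is readable by unauthenticated clients on default RBAC.
kubectl exec deploy/kubeglass -- wget -qO- --no-check-certificate \
  https://kubernetes.default.svc/healthz
```

A response of `ok` means the Pod can reach the API server; a timeout points at a NetworkPolicy or CNI problem rather than KubeGlass itself.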
## RBAC permission errors

KubeGlass uses user impersonation for all mutation operations. The ServiceAccount needs the `impersonate` verb on users and groups:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubeglass-impersonator
rules:
  - apiGroups: [""]
    resources: ["users", "groups"]
    verbs: ["impersonate"]
```

If you see forbidden errors on read operations, the ServiceAccount may lack `list`/`watch` permissions on the resource type.
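The ClusterRole only takes effect once it is bound to the ServiceAccount KubeGlass runs as. A minimal binding, assuming the ServiceAccount is named `kubeglass` in the `kubeglass` namespace (adjust to match your install):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeglass-impersonator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubeglass-impersonator
subjects:
  - kind: ServiceAccount
    name: kubeglass        # ServiceAccount name is an assumption
    namespace: kubeglass   # namespace is an assumption
```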
## WebSocket disconnects

Common causes:

- **Reverse proxy timeout** - Nginx defaults to 60s for WebSocket idle. Set `proxy_read_timeout 3600s;` and `proxy_send_timeout 3600s;`.
- **Load balancer draining** - Ensure sticky sessions or WebSocket-aware routing.
- **Pod restart** - KubeGlass clients auto-reconnect. If reconnection fails, check the Pod logs for OOM or crash loops.
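For the Nginx case, the timeout directives belong in the `location` block that proxies to KubeGlass, alongside the headers that enable WebSocket upgrades. A sketch, assuming KubeGlass is reachable upstream at `kubeglass:8080`:

```nginx
location / {
    proxy_pass http://kubeglass:8080;   # upstream name/port are assumptions
    proxy_http_version 1.1;
    # Forward the Upgrade handshake so WebSocket connections are established
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # Raise the 60s idle defaults so long-lived streams survive
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
}
```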
## Helm install fails

```
Error: INSTALLATION FAILED: cannot re-use a name that is still in use
```

The release name already exists. Use `helm upgrade --install` instead of `helm install`, or uninstall first with `helm uninstall kubeglass`.
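Concretely, either of the following resolves the conflict. The chart reference `kubeglass/kubeglass` and the namespace are placeholders; use your actual chart source:

```sh
# Idempotent: creates the release if absent, upgrades it otherwise
helm upgrade --install kubeglass kubeglass/kubeglass \
  -n kubeglass --create-namespace

# Or remove the stuck release first, then reinstall
helm uninstall kubeglass -n kubeglass
```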
## High memory usage

KubeGlass caches API discovery results and active watch streams. In large clusters (1000+ resources), memory usage scales with the number of watched resource types.

Mitigations:

- Set `KUBEGLASS_MAX_PAGE_LIMIT` to reduce list response sizes
- Use namespace filtering to limit the scope of watches
- Increase the Pod memory limit in the Helm values
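The first and last mitigations map to Helm values roughly as follows. The key layout (`env`, `resources`) assumes a conventional chart structure, so check it against your chart's `values.yaml`:

```yaml
# values.yaml (sketch - key names are assumptions)
env:
  - name: KUBEGLASS_MAX_PAGE_LIMIT
    value: "100"      # smaller pages -> smaller list responses held in memory
resources:
  limits:
    memory: 1Gi       # raise if the Pod is OOM-killed in large clusters
```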
## Terminal not connecting

- Confirm the target Pod is running: `kubectl get pod <name> -n <namespace>`
- Verify exec permissions: `kubectl auth can-i create pods/exec -n <namespace>`
- Check container selection - multi-container Pods require choosing the right container
- Review WebSocket connectivity (see above)
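For the container-selection case, the equivalent checks from the command line look like this (`<name>`, `<namespace>`, and `<container>` are placeholders):

```sh
# List the containers in the Pod to pick the right one
kubectl get pod <name> -n <namespace> \
  -o jsonpath='{.spec.containers[*].name}'

# Open a shell in a specific container with -c
kubectl exec -it <name> -n <namespace> -c <container> -- sh
```

If `kubectl exec` works here but the KubeGlass terminal does not, the problem is on the WebSocket path rather than in cluster permissions.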
## Metrics not loading

If the dashboard metrics panel shows “No data”:

- The Kubernetes Metrics API (`metrics.k8s.io`) must be available - check with `kubectl top nodes`
- Alternatively, configure a Prometheus endpoint via `KUBEGLASS_PROMETHEUS_URL`
- Verify network policies allow KubeGlass to reach the metrics source
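To confirm the Metrics API is actually registered and serving (rather than relying on `kubectl top` alone), query the APIService and the raw endpoint directly:

```sh
# The APIService should report Available=True
kubectl get apiservice v1beta1.metrics.k8s.io

# Query the raw Metrics API endpoint for node metrics
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
```

An `Available=False` APIService usually means metrics-server is not installed or not healthy.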
## Still stuck?

- Check the error codes reference for the specific error code
- Search GitHub Issues for similar reports
- Open a new issue with the `requestId` from the error response and your KubeGlass version