Pulse on Kubernetes
This guide explains how to deploy the Pulse Server (Hub) and Pulse Agents on Kubernetes clusters, including immutable distributions like Talos Linux.
Prerequisites
- A Kubernetes cluster (v1.19+)
- helm (v3+) installed locally
- kubectl configured to talk to your cluster
1. Deploying the Pulse Server
The Pulse Server is the central hub that collects metrics and manages agents.
Option A: Using Helm (Recommended)
Install the chart from the OCI registry:

helm upgrade --install pulse oci://ghcr.io/rcourtman/pulse-chart \
  --namespace pulse \
  --create-namespace \
  --set persistence.enabled=true \
  --set persistence.size=10Gi

Note: For production, ensure you configure a proper storageClass, or set deployment.strategy.type=Recreate if using ReadWriteOnce (RWO) volumes.
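If you prefer to keep these settings in a values file, a minimal sketch follows. The persistence keys mirror the flags above; the placement of storageClass and the deployment.strategy nesting are assumptions about this chart's values layout, so verify them against the chart's default values before relying on this.

# values.yaml -- minimal sketch; key paths for storageClass/strategy are assumptions
persistence:
  enabled: true
  size: 10Gi
  storageClass: local-path   # assumption: substitute a class that exists in your cluster
deployment:
  strategy:
    type: Recreate           # avoids two pods contending for an RWO volume during upgrades

helm upgrade --install pulse oci://ghcr.io/rcourtman/pulse-chart \
  --namespace pulse \
  --create-namespace \
  -f values.yaml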
Option B: Generating Static Manifests (For Talos / GitOps)
If you cannot use Helm directly on the cluster (e.g., restricted Talos environment), you can generate standard Kubernetes YAML manifests:
helm template pulse oci://ghcr.io/rcourtman/pulse-chart \
--namespace pulse \
--set persistence.enabled=true \
> pulse-server.yaml
You can then apply this file:
kubectl apply -f pulse-server.yaml
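For GitOps workflows, you can commit the rendered file and reference it from a minimal kustomization.yaml (a hypothetical layout, shown here as one option):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: pulse
resources:
  - pulse-server.yaml

Then apply it with kubectl apply -k . from the directory containing both files.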
2. Deploying the Pulse Agent
Important: Helm Chart Agent Is Legacy Docker-Only
The Helm chart includes an agent section, but it deploys the deprecated pulse-docker-agent (Docker socket metrics only). It does not deploy the unified pulse-agent.
If you need the unified agent on Kubernetes, use a custom DaemonSet as shown below.
Unified Agent on Kubernetes (DaemonSet)
To monitor Kubernetes resources, run the unified agent as a DaemonSet and enable the Kubernetes module.
Recommended options:
- Kubernetes-only monitoring: PULSE_ENABLE_KUBERNETES=true and PULSE_ENABLE_HOST=false (no host mounts required).
- Kubernetes + node metrics: PULSE_ENABLE_KUBERNETES=true and PULSE_ENABLE_HOST=true (requires host mounts and privileged mode).
Minimal DaemonSet Example
This uses the main rcourtman/pulse image but runs the pulse-agent binary directly.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: pulse-agent
  namespace: pulse
spec:
  selector:
    matchLabels:
      app: pulse-agent
  template:
    metadata:
      labels:
        app: pulse-agent
    spec:
      serviceAccountName: pulse-agent   # created in the RBAC section below
      containers:
        - name: pulse-agent
          image: rcourtman/pulse:latest
          command: ["/usr/local/bin/pulse-agent"]
          args:
            - --enable-kubernetes
          env:
            - name: PULSE_URL
              value: "http://pulse-server.pulse.svc.cluster.local:7655"
            - name: PULSE_TOKEN
              value: "YOUR_API_TOKEN_HERE"   # see the Secret-based alternative below
            - name: PULSE_ENABLE_HOST
              value: "false"
          securityContext:
            readOnlyRootFilesystem: true
            allowPrivilegeEscalation: false
          resources: {}
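After creating the ServiceAccount and RBAC shown below, apply the manifest and wait for the rollout (the filename pulse-agent.yaml is chosen here for illustration):

kubectl apply -f pulse-agent.yaml
kubectl -n pulse rollout status daemonset/pulse-agent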
Use a token scoped for the agent:
- kubernetes:report for Kubernetes reporting
- host-agent:report if you enable host metrics
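Hard-coding the token in the manifest works for testing, but a standard Kubernetes Secret keeps it out of Git. A minimal sketch (the Secret name pulse-agent-token is just an example):

kubectl -n pulse create secret generic pulse-agent-token \
  --from-literal=token=YOUR_API_TOKEN_HERE

Then replace the PULSE_TOKEN entry in the DaemonSet with:

- name: PULSE_TOKEN
  valueFrom:
    secretKeyRef:
      name: pulse-agent-token
      key: token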
Add Host Metrics (Optional)
If you want node CPU/memory/disk metrics, add privileged mode plus host mounts:
env:
  - name: PULSE_ENABLE_HOST
    value: "true"
  - name: HOST_PROC
    value: "/host/proc"
  - name: HOST_SYS
    value: "/host/sys"
  - name: HOST_ETC
    value: "/host/etc"
securityContext:
  privileged: true
volumeMounts:
  - name: host-proc
    mountPath: /host/proc
    readOnly: true
  - name: host-sys
    mountPath: /host/sys
    readOnly: true
  - name: host-etc          # backs the HOST_ETC path set above
    mountPath: /host/etc
    readOnly: true
  - name: host-root
    mountPath: /host/root
    readOnly: true
volumes:
  - name: host-proc
    hostPath:
      path: /proc
  - name: host-sys
    hostPath:
      path: /sys
  - name: host-etc
    hostPath:
      path: /etc
  - name: host-root
    hostPath:
      path: /
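Note that control-plane nodes are tainted by default, so a DaemonSet skips them unless you add a toleration. If you want node metrics from every node, a common addition to the pod spec is:

tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule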
RBAC
The Kubernetes agent uses the in-cluster API and needs read access to cluster resources (nodes, pods, deployments, etc.). Create a read-only ClusterRole and bind it to the pulse-agent service account.
apiVersion: v1
kind: ServiceAccount
metadata:
name: pulse-agent
namespace: pulse
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: pulse-agent-read
rules:
- apiGroups: [""]
resources: ["nodes", "pods"]
verbs: ["get", "list", "watch"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: pulse-agent-read
subjects:
- kind: ServiceAccount
name: pulse-agent
namespace: pulse
roleRef:
kind: ClusterRole
name: pulse-agent-read
apiGroup: rbac.authorization.k8s.io
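You can verify the binding took effect by impersonating the service account (standard kubectl, independent of Pulse):

kubectl auth can-i list nodes \
  --as=system:serviceaccount:pulse:pulse-agent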
3. Talos Linux Specifics
Talos Linux is immutable, so you cannot install the agent via the shell script. Use the DaemonSet approach above.
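Also note that Talos enables the Pod Security admission controller with the baseline profile by default, which rejects the privileged host-metrics variant of the agent. If the pods are denied on that ground, label the namespace to allow privileged pods:

kubectl label namespace pulse pod-security.kubernetes.io/enforce=privileged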
Agent Configuration for Talos
- Storage: Talos mounts the ephemeral OS on /. Persistent data usually lives in /var. The Pulse agent generally doesn't store state, but if it did, ensure it maps to a persistent path.
- Network: The agent reports the Pod IP by default. To report the Node IP instead, set PULSE_REPORT_IP using the Downward API. Add this to the DaemonSet env section:

- name: PULSE_REPORT_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
4. Troubleshooting
- Agent not showing in UI: Check the DaemonSet pod logs, for example: kubectl logs -l app=pulse-agent -n pulse.
- "Permission Denied" on metrics: Ensure securityContext.privileged: true is set or the proper capabilities are added.
- Connection Refused: Ensure PULSE_URL is correct and reachable from the agent pods (see the check below).
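For the connection check, you can test reachability from inside the cluster with a throwaway pod (curlimages/curl is just one convenient image):

kubectl -n pulse run pulse-curl --rm -it --restart=Never \
  --image=curlimages/curl -- \
  curl -sS -o /dev/null -w "%{http_code}\n" \
  http://pulse-server.pulse.svc.cluster.local:7655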