Temperature Monitoring
Pulse can display real-time CPU and NVMe temperatures directly in your dashboard, giving you instant visibility into your hardware health.
Features
- CPU Package Temperature: Shows the overall CPU temperature when available
- Individual Core Temperatures: Tracks each CPU core
- NVMe Drive Temperatures: Monitors NVMe SSD temperatures (visible in the Storage tab's disk list)
- Color-Coded Display:
- Green: < 60°C (normal)
- Yellow: 60-80°C (warm)
- Red: > 80°C (hot)
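For reference, the thresholds above map to colors as in this small illustrative shell helper (not part of Pulse; just a restatement of the ranges):
temp_color() {
  local t=$1                              # temperature in °C, integer
  if [ "$t" -lt 60 ]; then echo green     # normal
  elif [ "$t" -le 80 ]; then echo yellow  # warm
  else echo red                           # hot
  fi
}
temp_color 72   # prints "yellow"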
Transport Architecture
Pulse attempts temperature collection in a fixed order so you can reason about which path is active for any node:
- HTTPS proxy (pulse-sensor-proxy --http-mode) – each Proxmox host can expose its own TLS endpoint on port 8443. Pulse stores the proxy URL and bearer token in nodes.enc and talks to that proxy first.
- Unix socket proxy – when Pulse runs on the same machine, it mounts /run/pulse-sensor-proxy from the host and speaks to the proxy over a Unix socket with SO_PEERCRED authentication.
- Direct SSH – final fallback for bare-metal installs. Container deployments keep this disabled unless PULSE_DEV_ALLOW_CONTAINER_SSH=true.
If a node has an HTTPS proxy configured, Pulse does not fall back to socket or SSH. Instead it marks the node unavailable and surfaces the HTTP error in diagnostics so you can fix the underlying issue rather than masking it.
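The selection order can be summarised in a short shell sketch. This is illustrative only, not Pulse's actual implementation; the socket path and environment variable come from this document:
pick_transport() {
  local proxy_url="$1"   # HTTPS proxy URL stored for this node in nodes.enc (empty if none)
  if [ -n "$proxy_url" ]; then
    echo https    # a configured HTTPS proxy always wins; on error Pulse reports the node unavailable
  elif [ -S /run/pulse-sensor-proxy/pulse-sensor-proxy.sock ]; then
    echo socket   # mounted host proxy socket (SO_PEERCRED-authenticated)
  else
    echo ssh      # direct SSH fallback; containers keep this off unless PULSE_DEV_ALLOW_CONTAINER_SSH=true
  fi
}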
Deployment-Specific Setup
Important: Pick the transport that matches your deployment:
- Pulse running inside a container (Docker/LXC): Use the Unix-socket path (bind mount /run/pulse-sensor-proxy) so SSH keys never leave the host. Details in Quick Start for Docker Deployments.
- Pulse talking to remote Proxmox hosts / standalone nodes: Install pulse-sensor-proxy directly on each host with --standalone --http-mode so Pulse reaches it over HTTPS 8443. See HTTP Mode for Remote Hosts.
- Docker-in-VM / "Pulse can't see the host sensors": Use pulse-host-agent.
- Native installs: Direct SSH works, but the proxy/host-agent options are preferred for key isolation.
Automation users: both installers (install-sensor-proxy.sh and install-host-agent.sh) accept non-interactive flags; jump to Automation-Friendly Installation for samples.
Transport decision matrix
| Pulse Deployment | Recommended Transport | Why |
|---|---|---|
| Pulse in Docker/LXC on the Proxmox host | Unix socket via /run/pulse-sensor-proxy bind mount | Keeps SSH keys on the host, enforces SO_PEERCRED auth, no network exposure |
| Pulse in Docker inside a VM | pulse-host-agent on the Proxmox host | VM can't mount the host socket; the host agent reports over HTTPS instead |
| Pulse (any host) monitoring additional Proxmox nodes on the LAN | install-sensor-proxy.sh --standalone --http-mode on each node | Lets each node host its own proxy so Pulse reaches it over HTTPS 8443 |
| Bare-metal Pulse install on the same Proxmox host | Either socket or HTTP works; socket is simpler | You already have direct filesystem access; the installer auto-configures the socket |
Use the socket path wherever Pulse is containerised. Use HTTP mode when the sensors live on machines Pulse cannot mount directly.
Monitoring proxy health
Pulse surfaces the current transport status under Settings → Diagnostics → Temperature proxy.
- The Control plane sync table lists every proxy registered with the new control-plane channel (install-sensor-proxy.sh now configures this automatically). Each entry shows the last time the proxy fetched its authorized node list, the expected refresh interval, and whether it is healthy, stale, or offline.
- If a proxy falls more than one refresh interval behind, you will see a yellow "Behind" badge; Pulse also adds a diagnostic note explaining which host is lagging. After four consecutive missed polls the badge turns red ("Offline").
- HTTPS-mode proxies still appear under the HTTPS proxies section with reachability/error information, so you can see socket/HTTP transport issues side-by-side.
If a proxy never completes its first sync the diagnostics card will call that out explicitly (status “Pending”). Rerun the host installer or check the proxy journal (journalctl -u pulse-sensor-proxy) to resolve any startup problems, then refresh Diagnostics to confirm the sync is healthy.
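Conceptually, the badge logic comes down to comparing the time since the last sync against the refresh interval. A rough shell sketch of the thresholds described above (not Pulse's actual code):
sync_status() {
  local last_sync=$1 interval=$2   # epoch seconds of last sync, refresh interval in seconds
  local now missed
  now=$(date +%s)
  missed=$(( (now - last_sync) / interval ))
  if [ "$missed" -ge 4 ]; then echo Offline   # four or more missed polls
  elif [ "$missed" -ge 1 ]; then echo Behind  # more than one refresh interval behind
  else echo Healthy
  fi
}
sync_status "$(date -d '5 minutes ago' +%s)" 60   # example: 60 s interval, last sync 5 min ago -> Offline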
Docker in VM Setup
Running Pulse in Docker inside a VM on Proxmox? The proxy socket cannot cross VM boundaries, so use pulse-host-agent instead.
pulse-host-agent runs natively on your Proxmox host and reports temperatures back to Pulse over HTTPS. This works across VM boundaries without requiring socket mounts or SSH configuration.
Setup steps:
1. Install lm-sensors on your Proxmox host (if not already installed):
   apt-get update && apt-get install -y lm-sensors
   sensors-detect --auto
2. Install pulse-host-agent on your Proxmox host:
   # Generate an API token in Pulse (Settings → Security → API Tokens) with host-agent:report scope
   curl -fsSL http://your-pulse-vm:7655/install-host-agent.sh | \
     bash -s -- --url http://your-pulse-vm:7655 --token YOUR_API_TOKEN
3. Verify temperatures appear in the Pulse UI under the Hosts tab.
The host agent will report CPU, NVMe, and GPU temperatures alongside other system metrics. No proxy installation or socket mounting needed.
Quick Start for Docker Deployments
Running Pulse in Docker directly on Proxmox? Temperature monitoring requires installing a small service on your Proxmox host that reads hardware sensors. The Pulse container connects to this service through a shared socket.
Why this is needed: Docker containers cannot directly access hardware sensors. The proxy runs on your Proxmox host where it has access to sensor data, then shares that data with the Pulse container through a secure connection.
Follow these steps to set it up:
1. Install the proxy on your Proxmox host
SSH to your Proxmox host (not the Docker container) and run as root:
sudo curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh | \
sudo bash -s -- --standalone --pulse-server http://192.168.1.100:7655
Replace 192.168.1.100:7655 with your Pulse server's actual IP address and port.
The script will install and start the pulse-sensor-proxy service. You should see output confirming the installation succeeded.
2. Add bind mount to docker-compose.yml
Edit your docker-compose.yml file and add the highlighted line to your Pulse service volumes:
services:
pulse:
image: rcourtman/pulse:latest
ports:
- "7655:7655"
volumes:
- pulse-data:/data
- /run/pulse-sensor-proxy:/run/pulse-sensor-proxy:ro # Add this line (read-only)
volumes:
pulse-data:
This connects the proxy socket from your host into the container so Pulse can communicate with it.
Security Note: The socket mount is read-only (:ro) to prevent compromised containers from tampering with the socket directory. The proxy enforces access control via SO_PEERCRED, so write access is not needed.
3. Restart Pulse container
docker compose down
docker compose up -d
Note: If you're using older Docker Compose v1, use docker-compose (with hyphen) instead.
4. Verify the setup
Check proxy is running on your Proxmox host:
sudo systemctl status pulse-sensor-proxy
You should see active (running) in green.
Check Pulse detected the proxy:
docker logs pulse 2>&1 | grep -i "temperature.*proxy"
Replace pulse with your container name if different (check with docker ps).
You should see: Temperature proxy detected - using secure host-side bridge
Check temperatures appear in the UI:
Open Pulse in your browser and check the node dashboard. CPU and drive temperatures should now be visible. If you still see blank temperature fields, proceed to troubleshooting below.
Having issues? See Troubleshooting below.
HTTP Mode for Remote Hosts
When Pulse cannot share the /run/pulse-sensor-proxy socket (for example, you run Pulse on one host but want temperatures from other Proxmox nodes), install the proxy directly on each target host and expose it over HTTPS 8443.
1. Install the proxy in HTTP mode on each Proxmox node:
   curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh | \
     sudo bash -s -- --standalone --http-mode --pulse-server http://192.168.0.123:7655
   Replace the Pulse URL with the server that should receive temperatures. The installer:
   - Generates a TLS certificate (/etc/pulse-sensor-proxy/tls/)
   - Registers with Pulse and writes the proxy URL/token into nodes.enc
   - Starts pulse-sensor-proxy.service listening on https://<node>:8443
2. Allow Pulse to reach port 8443 (host firewall, VLAN ACLs, etc.). Only Pulse needs access; the installer's service file restricts the listener to HTTPS with bearer auth.
3. Verify the endpoint manually (optional but recommended):
   TOKEN=$(sudo cat /etc/pulse-sensor-proxy/.http-auth-token)
   curl -k -H "Authorization: Bearer ${TOKEN}" "https://node.example:8443/temps?node=shortname"
   You should receive JSON with the sensors payload.
4. Restart Pulse (or wait for the config reload) so it notices the new proxy URL/token. Pulse will automatically try HTTP first for nodes with TemperatureProxyURL configured, then fall back to the Unix socket (if mounted) and finally SSH.
This HTTP path complements the socket path—you can run both simultaneously. Containerised Pulse stacks still need the socket for their own host, while HTTP mode covers every additional Proxmox node on the LAN or across sites.
Pulse now isolates transport failures per node: when a proxy reports that a node is invalid or unreachable, Pulse cools down polling for that node only instead of tearing down the shared socket. You will see a cooldown note in the diagnostics card if a node keeps failing; fix the proxy or disable temperature monitoring for that node to resume collection.
Tip: When Pulse is running inside a container and temperatures are blocked, open Settings → Nodes → Edit node → Temperature monitoring. The UI now offers a one-click "Generate HTTPS proxy command" button that produces the exact install-sensor-proxy.sh --standalone --http-mode --pulse-server … command for that node, so you can copy it straight to the host shell without rebuilding the instructions manually.
Disable Temperature Monitoring
Don't need the sensor data? Open Settings → Proxmox, edit any node, and scroll to the Advanced monitoring section. The temperature toggle there controls collection for all nodes:
- When disabled, Pulse skips every SSH/proxy request for temperature data.
- CPU and NVMe readings disappear from dashboards and node tables.
- You can re-enable it later without re-running the setup scripts.
For scripted environments, set either:
- temperatureMonitoringEnabled: false in /etc/pulse/system.json, or
- ENABLE_TEMPERATURE_MONITORING=false in the environment (locks the UI toggle until removed).
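A hypothetical way to apply the first option on a native install; the field name and environment variable come from the list above, and the jq edit is just one convenient method (assumes jq is installed):
sudo jq '.temperatureMonitoringEnabled = false' /etc/pulse/system.json > /tmp/system.json \
  && sudo mv /tmp/system.json /etc/pulse/system.json
sudo systemctl restart pulse   # container users would instead set ENABLE_TEMPERATURE_MONITORING=false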
How It Works
Proxy-first architecture
Pulse keeps temperature collection outside the Pulse process whenever possible:
- pulse-sensor-proxy runs on each Proxmox host, owns the SSH keys, and reads sensor data locally.
- Pulse talks to the proxy over HTTPS (--http-mode) when the host is remote, or over the Unix socket (/run/pulse-sensor-proxy) when Pulse runs on the same machine.
- The proxy fans out to cluster members using its own SSH configuration, so the Pulse container never needs direct SSH access.
Benefits:
- SSH keys never enter the container
- HTTPS mode lets Pulse collect temperatures from any node reachable over TCP/8443 without sharing sockets across hosts
- Container compromise doesn't expose infrastructure credentials
- LXC: Automatically configured during installation (fully turnkey)
- Docker: Requires manual proxy installation and volume mount (see Quick Start above)
Manual installation (host-side)
When you need to provision the proxy yourself (for example via your own automation), run these steps on the host that runs your Pulse container:
1. Install the binary
   curl -L https://github.com/rcourtman/Pulse/releases/download/<TAG>/pulse-sensor-proxy-linux-amd64 \
     -o /usr/local/bin/pulse-sensor-proxy
   chmod 0755 /usr/local/bin/pulse-sensor-proxy
   Use the arm64/armv7 artefact if required.
2. Create the service account if missing
   id pulse-sensor-proxy >/dev/null 2>&1 || \
     useradd --system --user-group --no-create-home --shell /usr/sbin/nologin pulse-sensor-proxy
3. Provision the data directories
   install -d -o pulse-sensor-proxy -g pulse-sensor-proxy -m 0750 /var/lib/pulse-sensor-proxy
   install -d -o pulse-sensor-proxy -g pulse-sensor-proxy -m 0700 /var/lib/pulse-sensor-proxy/ssh
4. (Optional) Add /etc/pulse-sensor-proxy/config.yaml
   Only needed if you want explicit subnet/metrics settings; otherwise the proxy auto-detects host CIDRs.
   allowed_source_subnets:
     - 192.168.1.0/24
   metrics_address: 0.0.0.0:9127   # use "disabled" to switch metrics off
   http_enabled: true
   http_listen_addr: ":8443"
   http_tls_cert: /etc/pulse-sensor-proxy/tls/server.crt
   http_tls_key: /etc/pulse-sensor-proxy/tls/server.key
   Provide http_auth_token (32+ bytes of random data) and ensure the TLS files exist. Tokens configured here must match the value saved in Pulse for each node.
5. Install the hardened systemd unit
   Copy the unit from scripts/install-sensor-proxy.sh or create /etc/systemd/system/pulse-sensor-proxy.service with:
   [Unit]
   Description=Pulse Temperature Proxy
   After=network.target
   [Service]
   Type=simple
   User=pulse-sensor-proxy
   Group=pulse-sensor-proxy
   WorkingDirectory=/var/lib/pulse-sensor-proxy
   ExecStart=/usr/local/bin/pulse-sensor-proxy
   Restart=on-failure
   RestartSec=5s
   RuntimeDirectory=pulse-sensor-proxy
   RuntimeDirectoryMode=0775
   UMask=0007
   NoNewPrivileges=true
   ProtectSystem=strict
   ProtectHome=read-only
   ReadWritePaths=/var/lib/pulse-sensor-proxy
   ProtectKernelTunables=true
   ProtectKernelModules=true
   ProtectControlGroups=true
   ProtectClock=true
   PrivateTmp=true
   PrivateDevices=true
   ProtectProc=invisible
   ProcSubset=pid
   LockPersonality=true
   RemoveIPC=true
   RestrictSUIDSGID=true
   RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
   RestrictNamespaces=true
   SystemCallFilter=@system-service
   SystemCallErrorNumber=EPERM
   CapabilityBoundingSet=
   AmbientCapabilities=
   KeyringMode=private
   LimitNOFILE=1024
   StandardOutput=journal
   StandardError=journal
   SyslogIdentifier=pulse-sensor-proxy
   [Install]
   WantedBy=multi-user.target
6. Enable the service
   systemctl daemon-reload
   systemctl enable --now pulse-sensor-proxy.service
   Confirm the socket appears at /run/pulse-sensor-proxy/pulse-sensor-proxy.sock.
7. Expose the socket to Pulse
   - Proxmox LXC: append lxc.mount.entry: /run/pulse-sensor-proxy run/pulse-sensor-proxy none bind,create=dir 0 0 to /etc/pve/lxc/<CTID>.conf and restart the container.
   - Docker: bind mount /run/pulse-sensor-proxy into the container (- /run/pulse-sensor-proxy:/run/pulse-sensor-proxy:ro).
After the container restarts, the backend will automatically use the proxy. To refresh SSH keys on cluster nodes (e.g., after adding a new node), SSH to your Proxmox host and re-run the setup script:
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh | bash -s -- --ctid <your-container-id>
Post-install Verification
- Confirm proxy metrics:
  curl -s http://127.0.0.1:9127/metrics | grep pulse_proxy_build_info
- Ensure adaptive polling sees the proxy:
  curl -s http://localhost:7655/api/monitoring/scheduler/health \
    | jq '.instances[] | select(.key | contains("temperature")) | {key, pollStatus}'
  Expect recent lastSuccess timestamps, breaker.state == "closed", and deadLetter.present == false.
- Check update history – any future proxy restarts/rollbacks are logged under Settings → System → Updates; include the associated event_id in post-change notes.
- Measure queue depth/staleness – Grafana panels pulse_monitor_poll_queue_depth and pulse_monitor_poll_staleness_seconds should return to baseline within a few polling cycles.
Legacy SSH Architecture (native installs)
For native (non-containerized) installations, Pulse connects directly via SSH:
- Pulse uses SSH key authentication (like Ansible, Terraform, etc.)
- Runs the sensors -j command to read hardware temperatures
- SSH key stored in Pulse's home directory
Important for native installs: Run every setup command as the same user account that executes the Pulse service (typically pulse). The backend reads the SSH key from that user's home directory.
Requirements
- SSH Key Authentication: Your Pulse server needs SSH key access to nodes (no passwords)
- lm-sensors Package: Installed on nodes to read hardware sensors
- Passwordless root SSH (Proxmox clusters only): For proxy architecture, the Proxmox host running Pulse must have passwordless root SSH access to all cluster nodes. This is standard for Proxmox clusters but hardened environments may need to create an alternate service account.
Setup (Automatic)
The auto-setup script (Settings → Nodes → Setup Script) provides different experiences based on deployment type:
For LXC Deployments (Fully Automatic)
When run on a Proxmox host with Pulse in an LXC container:
- Run the auto-setup script on your Proxmox node
- The script automatically detects your Pulse LXC container
- Installs pulse-sensor-proxy on the host
- Configures the container bind mount automatically
- Sets up SSH keys and cluster discovery
- Fully turnkey - no manual steps required!
Note: The main install.sh already installs the host-side proxy when you opt in during bootstrap, so the Quick Setup script simply verifies it and moves on; you won't be prompted a second time. Remote/standalone nodes still prompt to deploy their own HTTPS proxy.
For Docker Deployments (Manual Steps Required)
When Pulse runs in Docker, the setup script will show you manual steps:
- Create the Proxmox API token (manual)
- Add the node in Pulse UI
- For temperature monitoring: Follow the Quick Start for Docker above
For Node Configuration (All Deployments)
When prompted for SSH setup on Proxmox nodes:
- Choose "y" when asked about SSH configuration
- The script will:
- Install lm-sensors
- Run sensors-detect --auto
- Configure SSH keys (for standalone nodes)
If the node is part of a Proxmox cluster, the script will detect other members and offer to configure the same SSH/lm-sensors setup on each of them automatically.
Host-side responsibilities (Docker only)
Note: For LXC deployments, the setup script handles all of this automatically. This section applies to Docker deployments only.
- Run the host installer (install-sensor-proxy.sh --standalone) on the Proxmox machine that hosts Pulse to install and maintain the pulse-sensor-proxy service
- Add the bind mount to your docker-compose.yml: - /run/pulse-sensor-proxy:/run/pulse-sensor-proxy:ro
- Re-run the host installer if the service or socket disappears after a host upgrade or configuration cleanup; the installer is idempotent
- The installer ships a self-heal timer (pulse-sensor-proxy-selfheal.timer) that restarts or reinstalls the proxy if it ever goes missing; leave it enabled for automatic recovery
- Hot dev builds warn when only a container-local proxy socket is present, signaling that the host proxy needs to be reinstalled before temperatures will flow back into Pulse
Turnkey Setup for Standalone Nodes (v4.25.0+)
For standalone nodes (not in a Proxmox cluster) running containerized Pulse, the setup script now automatically configures temperature monitoring with zero manual steps:
- The script detects the node is standalone (not in a cluster)
- Automatically fetches the temperature proxy's SSH public key from your Pulse server via /api/system/proxy-public-key
- Installs it with forced commands (command="sensors -j") automatically
command="sensors -j") automatically - Temperature monitoring "just works" - no manual SSH key management needed!
Example output:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Standalone Node Temperature Setup
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Detected: This is a standalone node (not in a Proxmox cluster)
Fetching temperature proxy public key...
✓ Retrieved proxy public key
✓ Temperature proxy key installed (restricted to sensors -j)
✓ Standalone node temperature monitoring configured
The Pulse temperature proxy can now collect temperature data
from this node using secure SSH with forced commands.
Security:
- Public keys are safe to expose (it's in the name!)
- Forced commands restrict the key to command="sensors -j" only
- All other SSH features disabled (no-port-forwarding, no-pty, etc.)
- Works exactly like cluster setups, but fully automated
Note: This only works for containerized Pulse deployments where the temperature proxy is running. For native (non-containerized) installs, you'll still need to provide your Pulse server's public key manually as described in step 3 above.
Setup (Manual)
If you skipped SSH setup during auto-setup, you can configure it manually:
1. Generate SSH Key (on Pulse server)
# Run as the user running Pulse (usually the pulse service account)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
2. Copy Public Key to Proxmox Nodes
# Get your public key
cat ~/.ssh/id_rsa.pub
# Add it to each Proxmox node
ssh root@your-proxmox-node
mkdir -p /root/.ssh
chmod 700 /root/.ssh
echo "YOUR_PUBLIC_KEY_HERE" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
3. Install lm-sensors (on each Proxmox node)
apt-get update
apt-get install -y lm-sensors
sensors-detect --auto
4. Test SSH Connection
From your Pulse server:
ssh root@your-proxmox-node "sensors -j"
You should see JSON output with temperature data.
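To see roughly what Pulse parses out of that JSON, you can pull a single reading yourself. Chip and label names vary by hardware, so the coretemp-isa-0000 / Package id 0 keys below are only an example; substitute names from your own output (assumes jq is installed):
ssh root@your-proxmox-node "sensors -j" \
  | jq '."coretemp-isa-0000"."Package id 0".temp1_input'   # prints e.g. 42.0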
Temperature Collection Pipeline
- HTTPS proxy – If a node has TemperatureProxyURL + TemperatureProxyToken configured (set automatically during --http-mode installs), Pulse calls https://node:8443/temps?node=<shortname> with the bearer token. The proxy enforces TLS, CIDR allowlists, and rate limits before handing Pulse the JSON payload from sensors -j.
- Unix socket proxy – If no HTTPS proxy is configured but the /run/pulse-sensor-proxy socket is mounted, Pulse talks to the proxy locally using SO_PEERCRED authentication.
- SSH fallback – Bare-metal installs can still let Pulse SSH directly into the node, run sensors -j, and parse the output. Containerized Pulse keeps this disabled unless explicitly overridden for development.
Regardless of transport, Pulse parses CPU package/core temperatures plus NVMe sensor data and surfaces it on the dashboard, Host details, and the Storage tab.
Troubleshooting
HTTPS proxy not responding
Symptom: Settings → Diagnostics → Temperature Proxy shows proxy_unreachable, invalid_token, or HTTP timeout errors for a node.
Verify connectivity:
# On the Pulse host/container
curl -vk https://node.example:8443/health \
-H "Authorization: Bearer $(sudo cat /etc/pulse-sensor-proxy/.http-auth-token)"
If that fails, confirm:
- Port 8443/TCP is reachable from the Pulse host.
- /etc/pulse-sensor-proxy/config.yaml lists the Pulse source CIDR in allowed_source_subnets.
- /etc/pulse-sensor-proxy/.http-auth-token and /etc/pulse-sensor-proxy/tls/* exist with the correct permissions.
Resetting the proxy: When you need to rebuild the proxy configuration, run the uninstall path first to clear sockets, TLS keys, and state:
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh | \
sudo bash -s -- --uninstall --purge
Then reinstall with the desired flags (for example, --standalone --http-mode --pulse-server https://pulse:7655).
SSH Connection Attempts from Container ([preauth] Logs)
Symptom: Proxmox host logs (/var/log/auth.log) show repeated SSH connection attempts from your Pulse container:
Connection closed by authenticating user root <container-ip> port <port> [preauth]
This indicates a misconfiguration. Containerized Pulse should communicate via the sensor proxy, not direct SSH.
Common causes:
- Dev mode enabled (PULSE_DEV_ALLOW_CONTAINER_SSH=true environment variable)
- Sensor proxy not installed or socket not accessible
- Leftover SSH keys from legacy installations
Fix:
- Docker: Follow Quick Start for Docker Deployments to install the proxy and add the bind mount
- LXC: Run the setup script on your Proxmox host (see Setup (Automatic))
- Dev mode: Remove PULSE_DEV_ALLOW_CONTAINER_SSH=true from your environment/docker-compose
- Verify: Check Pulse logs for Temperature proxy detected - using secure host-side bridge
Once the proxy is properly configured, these log entries will stop immediately. See Container Security Considerations for why direct container SSH is blocked.
No Temperature Data Shown
Check Settings → Diagnostics → Temperature Proxy first; it usually reports the precise HTTPS or socket error. If diagnostics are clear but temperatures are still blank, validate the legacy SSH path:
Check SSH access:
# From Pulse server
ssh root@your-proxmox-node "echo test"
Check lm-sensors:
# On Proxmox node
sensors -j
Check Pulse logs:
journalctl -u pulse -f | grep -i temp
Temperature Shows as Unavailable
- lm-sensors may not be installed
- Node may not have temperature sensors
- SSH key authentication may not be working
ARM Devices (Raspberry Pi, etc.)
ARM devices typically don't have the same sensor interfaces. Temperature monitoring may not work or may show different sensors (like thermal_zone0 instead of coretemp).
Security & Architecture
How Temperature Collection Works
Temperature monitoring uses SSH key authentication - the same trusted method used by automation tools like Ansible, Terraform, and Saltstack for managing infrastructure at scale.
What Happens:
- Pulse connects to your node via SSH using a key (no passwords)
- Runs sensors -j to get temperature readings in JSON format
- Parses the data and displays it in the dashboard
- Disconnects (entire operation takes <1 second)
Security Design:
- ✅ Key-based authentication - More secure than passwords, industry standard
- ✅ Read-only operation - The sensors command only reads hardware data
- ✅ Private key stays on Pulse server - Never transmitted or exposed
- ✅ Public key on nodes - Safe to store, can't be used to gain access
- ✅ Instantly revocable - Remove key from authorized_keys to disable
- ✅ Logged and auditable - All connections logged in /var/log/auth.log
What Pulse Uses SSH For
Pulse reuses the SSH access only for the actions already described in Setup (Automatic) and How It Works: adding the public key during setup (if you opt in) and polling sensors -j each cycle. It does nothing else—no extra commands, file changes, or config edits—and revoking the key stops temperature collection immediately.
This is the same security model used by thousands of organizations for infrastructure automation.
Best Practices
- Dedicated key: Generate a separate SSH key just for Pulse (recommended)
- Firewall rules: Optionally restrict SSH to your Pulse server's IP
- Regular monitoring: Review auth logs if you want extra visibility
- Secure your Pulse server: Keep it updated and behind proper access controls
Command Restrictions (Default)
Pulse now writes the temperature key with a forced command so the connection can only execute sensors -j. Port/X11/agent forwarding and PTY allocation are all disabled automatically when you opt in through the setup script. Re-running the script upgrades older installs to the restricted entry without touching any of your other SSH keys.
# Example entry in /root/.ssh/authorized_keys installed by Pulse
command="sensors -j",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2E...
You can still manage the entry manually if you prefer, but no extra steps are required for new installations.
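To confirm a node already carries the restricted entry, a quick check against the pattern shown above:
# On each node: list the Pulse-managed forced-command entries (with line numbers)
grep -n 'command="sensors -j"' /root/.ssh/authorized_keys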
Performance Impact
- Minimal: SSH connection is made once per polling cycle
- Timeout: 5 seconds (non-blocking)
- Falls back gracefully if SSH fails
- No impact if SSH is not configured
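If you want to reproduce that bound while testing by hand, a quick probe works; the 5-second figure comes from the list above, and timeout is simply a convenient wrapper, not how Pulse itself enforces it:
timeout 5 ssh root@your-proxmox-node "sensors -j" > /dev/null \
  && echo "sensors responded within 5 seconds" \
  || echo "timed out or failed"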
Container Security Considerations
✅ Resolved in v4.24.0
Secure Proxy Architecture (Current)
As of v4.24.0, containerized deployments use pulse-sensor-proxy which eliminates the security concerns:
- SSH keys stored on host - Not accessible from container
- Unix socket communication - Pulse never touches SSH keys
- Automatic during installation - No manual configuration needed
- Container compromise = No credential exposure - Attacker gains nothing
For new installations: The proxy is installed automatically during LXC setup. No action required.
Installed from inside an existing LXC? The container-only installer cannot create the host bind mount. Run the host-side script below on your Proxmox node to enable temperature monitoring. When Pulse is running in that container, append the server URL so the proxy script can fall back to downloading the binary from Pulse itself if GitHub isn’t available.
For existing installations (pre-v4.24.0): Upgrade your deployment to use the proxy:
# On your Proxmox host
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh | \
bash -s -- --ctid <your-pulse-container-id> --pulse-server http://<pulse-container-ip>:7655
Heads up for v4.23.x: Those builds don't ship a standalone pulse-sensor-proxy binary yet and the HTTP fallback still requires authentication. Either upgrade to a newer release, install Pulse from source (install.sh --source main), or pass a locally built binary with --local-binary /path/to/pulse-sensor-proxy.
Automation-Friendly Installation
For infrastructure-as-code tools (Ansible, Terraform, Salt, Puppet), the installer script is fully scriptable.
Installation Script Flags
install-sensor-proxy.sh [OPTIONS]
Required (choose one):
- --ctid <id> - For LXC containers (auto-configures bind mount)
- --standalone - For Docker or standalone deployments
Optional:
- --pulse-server <url> - Pulse server URL (for binary fallback if GitHub unavailable)
- --version <tag> - Specific version to install (default: latest)
- --local-binary <path> - Use local binary instead of downloading
- --quiet - Non-interactive mode (suppress progress output)
- --skip-restart - Don't restart LXC container after installation
- --uninstall - Remove the proxy service
- --purge - Remove data directories (use with --uninstall)
Behavior:
- ✅ Idempotent - Safe to re-run, won't break existing installations
- ✅ Non-interactive - Use --quiet for automated deployments
- ✅ Verifiable - Returns exit code 0 on success, non-zero on failure
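Because the installer returns a meaningful exit code, a wrapper script can gate follow-up steps on it. A minimal sketch using only the documented flags (PULSE_URL is a placeholder you set yourself):
#!/usr/bin/env bash
set -euo pipefail
PULSE_URL="http://pulse.example:7655"   # placeholder: point at your Pulse server

curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh \
  -o /tmp/install-sensor-proxy.sh
chmod +x /tmp/install-sensor-proxy.sh

# --quiet keeps the run non-interactive; any non-zero exit aborts the script via set -e
/tmp/install-sensor-proxy.sh --standalone --pulse-server "$PULSE_URL" --quiet
systemctl is-active --quiet pulse-sensor-proxy && echo "pulse-sensor-proxy installed and running"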
Ansible Playbook Example
For LXC deployments:
---
- name: Install Pulse sensor proxy for LXC
hosts: proxmox_hosts
become: yes
tasks:
- name: Download installer script
get_url:
url: https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh
dest: /tmp/install-sensor-proxy.sh
mode: '0755'
- name: Install sensor proxy
command: >
/tmp/install-sensor-proxy.sh
--ctid {{ pulse_container_id }}
--pulse-server {{ pulse_server_url }}
--quiet
register: install_result
changed_when: "'already exists' not in install_result.stdout"
failed_when: install_result.rc != 0
- name: Verify proxy is running
systemd:
name: pulse-sensor-proxy
state: started
enabled: yes
register: service_status
For Docker deployments:
---
- name: Install Pulse sensor proxy for Docker
hosts: proxmox_hosts
become: yes
tasks:
- name: Download installer script
get_url:
url: https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh
dest: /tmp/install-sensor-proxy.sh
mode: '0755'
- name: Install sensor proxy (standalone mode)
command: >
/tmp/install-sensor-proxy.sh
--standalone
--pulse-server {{ pulse_server_url }}
--quiet
register: install_result
failed_when: install_result.rc != 0
- name: Verify proxy is running
systemd:
name: pulse-sensor-proxy
state: started
enabled: yes
- name: Ensure docker-compose includes sensor proxy bind mount
blockinfile:
path: /opt/pulse/docker-compose.yml
marker: "# {mark} ANSIBLE MANAGED - Sensor Proxy"
insertafter: "volumes:"
block: |
- /run/pulse-sensor-proxy:/run/pulse-sensor-proxy:ro
notify: restart pulse container
handlers:
- name: restart pulse container
community.docker.docker_compose:
project_src: /opt/pulse
state: restarted
Terraform Example
resource "null_resource" "pulse_sensor_proxy" {
for_each = var.proxmox_hosts
connection {
type = "ssh"
host = each.value.host
user = "root"
private_key = file(var.ssh_private_key)
}
provisioner "remote-exec" {
inline = [
"curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh -o /tmp/install-sensor-proxy.sh",
"chmod +x /tmp/install-sensor-proxy.sh",
"/tmp/install-sensor-proxy.sh --standalone --pulse-server ${var.pulse_server_url} --quiet",
"systemctl is-active pulse-sensor-proxy || exit 1"
]
}
triggers = {
pulse_version = var.pulse_version
}
}
Manual Configuration (No Script)
If you can't run the installer script, create the configuration manually:
1. Download binary:
curl -L https://github.com/rcourtman/Pulse/releases/latest/download/pulse-sensor-proxy-linux-amd64 \
-o /usr/local/bin/pulse-sensor-proxy
chmod 0755 /usr/local/bin/pulse-sensor-proxy
2. Create service user:
useradd --system --user-group --no-create-home --shell /usr/sbin/nologin pulse-sensor-proxy
usermod -aG www-data pulse-sensor-proxy # For pvecm access
3. Create directories:
install -d -o pulse-sensor-proxy -g pulse-sensor-proxy -m 0750 /var/lib/pulse-sensor-proxy
install -d -o pulse-sensor-proxy -g pulse-sensor-proxy -m 0700 /var/lib/pulse-sensor-proxy/ssh
install -d -o pulse-sensor-proxy -g pulse-sensor-proxy -m 0755 /etc/pulse-sensor-proxy
4. Create config (optional, for Docker):
# /etc/pulse-sensor-proxy/config.yaml
allowed_peer_uids: [1000] # Docker container UID
allow_idmapped_root: true
allowed_idmap_users:
- root
5. Install systemd service:
# Download from: https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh
# Extract the systemd unit from lines 630-730, or see systemd unit in installer script
systemctl daemon-reload
systemctl enable --now pulse-sensor-proxy
6. Verify:
systemctl status pulse-sensor-proxy
ls -l /run/pulse-sensor-proxy/pulse-sensor-proxy.sock
Configuration File Format
The proxy reads /etc/pulse-sensor-proxy/config.yaml (optional):
# Allowed UIDs that can connect to the socket (default: [0] = root only)
allowed_peer_uids: [0, 1000] # Allow root and UID 1000 (typical Docker)
# Allowed GIDs that can connect to the socket (peer is accepted when UID OR GID matches)
allowed_peer_gids: [0]
# Preferred capability-based allow-list (uids inherit read/write/admin as specified)
allowed_peers:
- uid: 0
capabilities: [read, write, admin]
- uid: 1000
capabilities: [read]
# Require host keys sourced from the Proxmox cluster known_hosts file (no ssh-keyscan fallback)
require_proxmox_hostkeys: false
# Allow ID-mapped root from LXC containers
allow_idmapped_root: true
allowed_idmap_users:
- root
# Source subnets for SSH key restrictions (auto-detected if not specified)
allowed_source_subnets:
- 192.168.1.0/24
- 10.0.0.0/8
# Rate limiting (per calling UID)
rate_limit:
per_peer_interval_ms: 1000 # 1 request per second
per_peer_burst: 5 # Allow burst of 5
# Metrics endpoint (default: 127.0.0.1:9127)
metrics_address: 127.0.0.1:9127 # or "disabled"
# Maximum bytes accepted from SSH sensor output (default 1 MiB)
max_ssh_output_bytes: 1048576
allowed_peers lets you scope access: grant the container UID only read to limit it to temperature fetching, while host-side automation can receive [read, write, admin]. Legacy allowed_peer_uids/gids remain for backward compatibility and imply full capabilities.
Environment Variable Overrides:
Config values can also be set via environment variables (useful for containerized proxy deployments):
# Add allowed subnets (comma-separated, appends to config file values)
PULSE_SENSOR_PROXY_ALLOWED_SUBNETS=192.168.1.0/24,10.0.0.0/8
# Allow/disallow ID-mapped root (overrides config file)
PULSE_SENSOR_PROXY_ALLOW_IDMAPPED_ROOT=true
Example systemd override:
# /etc/systemd/system/pulse-sensor-proxy.service.d/override.conf
[Service]
Environment="PULSE_SENSOR_PROXY_ALLOWED_SUBNETS=192.168.1.0/24"
Note: Socket path, SSH key directory, and audit log path are configured via command-line flags (see main.go), not the YAML config file.
Re-running After Changes
The installer is idempotent and safe to re-run:
# After adding a new Proxmox node to cluster
bash install-sensor-proxy.sh --standalone --pulse-server http://pulse:7655 --quiet
# After upgrading Pulse version
bash install-sensor-proxy.sh --standalone --pulse-server http://pulse:7655 --version v4.27.0 --quiet
# Verify installation
systemctl status pulse-sensor-proxy
Legacy Security Concerns (Pre-v4.24.0)
Older versions stored SSH keys inside the container, creating security risks:
- Compromised container = exposed SSH keys
- Even with forced commands, keys could be extracted
- Required manual hardening (key rotation, IP restrictions, etc.)
Hardening Recommendations (Legacy/Native Installs Only)
1. Key Rotation
Rotate SSH keys periodically (e.g., every 90 days):
# On Pulse server
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_new -N ""
# Update all nodes' authorized_keys
# Test connectivity
ssh -i ~/.ssh/id_ed25519_new node "sensors -j"
# Replace old key
mv ~/.ssh/id_ed25519_new ~/.ssh/id_ed25519
2. Secret Mounts (Docker)
Mount SSH keys from secure volumes:
version: '3'
services:
pulse:
image: rcourtman/pulse:latest
volumes:
- pulse-ssh-keys:/home/pulse/.ssh:ro # Read-only
- pulse-data:/data
volumes:
pulse-ssh-keys:
driver: local
driver_opts:
type: tmpfs # Memory-only, not persisted
device: tmpfs
3. Monitoring & Alerts
Enable SSH audit logging on Proxmox nodes:
# Install auditd
apt-get install auditd
# Watch SSH access
auditctl -w /root/.ssh -p wa -k ssh_access
# Monitor for unexpected commands
tail -f /var/log/audit/audit.log | grep ssh
4. IP Restrictions
Limit SSH access to your Pulse server IP in /etc/ssh/sshd_config:
Match User root Address 192.168.1.100
ForceCommand sensors -j
PermitOpen none
AllowAgentForwarding no
AllowTcpForwarding no
Verifying Proxy Installation
To check if your deployment is using the secure proxy:
# On Proxmox host - check proxy service
systemctl status pulse-sensor-proxy
# Check if socket exists
ls -l /run/pulse-sensor-proxy/pulse-sensor-proxy.sock
# View proxy logs
journalctl -u pulse-sensor-proxy -f
Forward these logs off-host for retention by following operations/sensor-proxy-log-forwarding.md.
In the Pulse container, check the logs at startup:
# Should see: "Temperature proxy detected - using secure host-side bridge"
journalctl -u pulse | grep -i proxy
Disabling Temperature Monitoring
To remove SSH access:
# On each Proxmox node
sed -i '/pulse@/d' /root/.ssh/authorized_keys
# Or remove just the forced command entry
sed -i '/command="sensors -j"/d' /root/.ssh/authorized_keys
Temperature data will stop appearing in the dashboard after the next polling cycle.
Operations & Troubleshooting
Managing the Proxy Service
The pulse-sensor-proxy service runs on the Proxmox host (outside the container).
Service Management:
# Check service status
systemctl status pulse-sensor-proxy
# Restart the proxy
systemctl restart pulse-sensor-proxy
# Stop the proxy (disables temperature monitoring)
systemctl stop pulse-sensor-proxy
# Start the proxy
systemctl start pulse-sensor-proxy
# Enable proxy to start on boot
systemctl enable pulse-sensor-proxy
# Disable proxy autostart
systemctl disable pulse-sensor-proxy
Log Locations
Proxy Logs (on Proxmox host):
# Follow proxy logs in real-time
journalctl -u pulse-sensor-proxy -f
# View last 50 lines
journalctl -u pulse-sensor-proxy -n 50
# View logs since last boot
journalctl -u pulse-sensor-proxy -b
# View logs with timestamps
journalctl -u pulse-sensor-proxy --since "1 hour ago"
Pulse Logs (in container):
# Check if proxy is being used
journalctl -u pulse | grep -i "proxy\|temperature"
# Should see: "Temperature proxy detected - using secure host-side bridge"
SSH Key Rotation
Rotate SSH keys periodically for security (recommended every 90 days).
Automated Rotation (Recommended):
The /opt/pulse/scripts/pulse-proxy-rotate-keys.sh script handles rotation safely with staging, verification, and rollback support:
# 1. Dry-run first (recommended)
sudo /opt/pulse/scripts/pulse-proxy-rotate-keys.sh --dry-run
# 2. Perform rotation
sudo /opt/pulse/scripts/pulse-proxy-rotate-keys.sh
What the script does:
- Generates new Ed25519 keypair in staging directory
- Pushes new key to all cluster nodes via proxy RPC
- Verifies SSH connectivity with new key on each node
- Atomically swaps keys (current → backup, staging → active)
- Preserves old keys for rollback
If rotation fails, rollback:
sudo /opt/pulse/scripts/pulse-proxy-rotate-keys.sh --rollback
Manual Rotation (Fallback):
If the automated script fails or is unavailable:
# 1. On Proxmox host, backup old keys
cd /var/lib/pulse-sensor-proxy/ssh/
cp id_ed25519 id_ed25519.backup
cp id_ed25519.pub id_ed25519.pub.backup
# 2. Generate new keypair
ssh-keygen -t ed25519 -f id_ed25519 -N "" -C "pulse-sensor-proxy-rotated"
# 3. Re-run setup to push keys to cluster
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh | \
bash -s -- --ctid <your-container-id>
# 4. Verify temperature data still works in Pulse UI
Automatic Cleanup When Nodes Are Removed (v4.26.0+)
Starting in v4.26.0, SSH keys are automatically removed when you delete a node from Pulse:
- When you remove a node in Pulse Settings → Nodes, Pulse signals the temperature proxy
- The proxy creates a cleanup request file at /var/lib/pulse-sensor-proxy/cleanup-request.json
- A systemd path unit detects the request and triggers the cleanup service
- The cleanup script automatically:
  - SSHs to the specified node (or localhost if it's local)
  - Removes the SSH key entries (# pulse-managed-key and # pulse-proxy-key)
  - Logs the cleanup action via syslog
Automatic cleanup works for:
- ✅ Cluster nodes - Full automatic cleanup (Proxmox clusters have unrestricted passwordless SSH)
- ⚠️ Standalone nodes - Cannot auto-cleanup due to forced command security (see below)
Standalone Node Limitation:
Standalone nodes use forced commands (command="sensors -j") for security. This same restriction prevents the cleanup script from running sed to remove keys. This is a security feature, not a bug - adding a workaround would defeat the forced command protection.
For standalone nodes:
- Keys remain after removal (but they're read-only - only sensors -j access)
- Low security risk - no shell access, no write access, no port forwarding
- Auto-cleanup on re-add - Setup script removes old keys when node is re-added
- Manual cleanup if needed:
ssh root@standalone-node "sed -i '/# pulse-proxy-key$/d' /root/.ssh/authorized_keys"
Monitoring Cleanup:
# Watch cleanup operations in real-time
journalctl -u pulse-sensor-cleanup -f
# View cleanup history
journalctl -u pulse-sensor-cleanup --since "1 week ago"
# Check if cleanup system is active
systemctl status pulse-sensor-cleanup.path
Manual Cleanup (if needed):
If automatic cleanup fails or you need to manually revoke access:
# On the node being removed, remove all Pulse SSH keys
ssh root@old-node "sed -i -e '/# pulse-managed-key\$/d' -e '/# pulse-proxy-key\$/d' /root/.ssh/authorized_keys"
# Or remove them locally
sed -i -e '/# pulse-managed-key$/d' -e '/# pulse-proxy-key$/d' /root/.ssh/authorized_keys
# No restart needed - proxy will fail gracefully for that node
# Temperature monitoring will continue for remaining nodes
Failure Modes
Proxy Not Running:
- Symptom: No temperature data in Pulse UI
- Check: systemctl status pulse-sensor-proxy on the Proxmox host
- Fix: systemctl start pulse-sensor-proxy
Socket Not Accessible in Container:
- Symptom: Pulse logs show "Temperature proxy not available - using direct SSH"
- Check: ls -l /run/pulse-sensor-proxy/pulse-sensor-proxy.sock in the container
- Fix: Verify the bind mount in the LXC config (/etc/pve/lxc/<CTID>.conf)
- Should have: lxc.mount.entry: /run/pulse-sensor-proxy run/pulse-sensor-proxy none bind,create=dir 0 0
pvecm Not Available:
- Symptom: Proxy fails to discover cluster nodes
- Cause: Pulse runs on non-Proxmox host
- Fallback: Use legacy direct SSH method (native installation)
Pulse Running Off-Cluster:
- Symptom: Proxy discovers local host but not remote cluster nodes
- Limitation: Proxy requires passwordless SSH between cluster nodes
- Solution: Ensure Proxmox host running Pulse has SSH access to all cluster nodes
Unauthorized Connection Attempts:
- Symptom: Proxy logs show "Unauthorized connection attempt"
- Cause: Process with non-root UID trying to access socket
- Normal: Only root (UID 0) or proxy's own user can access socket
- Check: Look for suspicious processes trying to access the socket
Monitoring the Proxy
Manual Monitoring (v1):
The proxy service includes systemd restart-on-failure, which handles most issues automatically. For additional monitoring:
# Check proxy health
systemctl is-active pulse-sensor-proxy && echo "Proxy is running" || echo "Proxy is down"
# Monitor logs for errors
journalctl -u pulse-sensor-proxy --since "1 hour ago" | grep -i error
# Verify socket exists and is accessible
test -S /run/pulse-sensor-proxy/pulse-sensor-proxy.sock && echo "Socket OK" || echo "Socket missing"
Alerting:
- Rely on systemd's automatic restart (Restart=on-failure)
- Monitor via journalctl for persistent failures
- Check Pulse UI for missing temperature data
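If you want a lightweight external check on top of systemd's restart policy, something like this cron entry works; it is purely illustrative, and you would swap logger for your own alerting hook:
# /etc/cron.d/pulse-sensor-proxy-health (illustrative)
*/5 * * * * root systemctl is-active --quiet pulse-sensor-proxy || logger -p user.err "pulse-sensor-proxy is not running"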
Future: Integration with pulse-watchdog is planned for automated health checks and alerting (see #528).
Known Limitations
Single Proxy = Single Point of Failure:
- Each Proxmox host runs one pulse-sensor-proxy instance
- If the proxy service dies, temperature monitoring stops for all containers on that host
- This is acceptable for read-only telemetry, but be aware of the failure mode
- Systemd auto-restart (Restart=on-failure) mitigates most outages
- If multiple Pulse containers run on the same host, they share the same proxy
Sensors Output Parsing Brittleness:
- Pulse depends on the sensors -j JSON output format from lm-sensors
- Changes to sensor names, structure, or output format could break parsing
- Consider adding schema validation and instrumentation to detect issues early
- Monitor proxy logs for parsing errors: journalctl -u pulse-sensor-proxy | grep -i "parse\|error"
Cluster Discovery Limitations:
- Proxy uses pvecm status to discover cluster nodes (requires Proxmox IPC access)
- If Proxmox hardens IPC access or cluster topology changes unexpectedly, discovery may fail
- Standalone Proxmox nodes work but only monitor that single node
- Fallback: Re-run setup script manually to reconfigure cluster access
Rate Limiting & Scaling (updated in commit 46b8b8d):
What changed: pulse-sensor-proxy now defaults to 1 request per second with a burst of 5 per calling UID. Earlier builds throttled after two calls every five seconds, which caused temperature tiles to flicker or fall back to -- as soon as clusters reached three or more nodes.
Symptoms of saturation:
- Temperature widgets flicker between values and --, or entire node rows disappear after adding new hardware
- Settings → System → Updates shows no proxy restarts, yet scheduler health reports breaker openings for temperature pollers
- Proxy logs include limiter.rejection or Rate limit exceeded entries for the container UID
Diagnose:
- Check scheduler health for temperature pollers:
  curl -s http://localhost:7655/api/monitoring/scheduler/health \
    | jq '.instances[] | select(.key | contains("temperature")) | {key, lastSuccess: .pollStatus.lastSuccess, breaker: .breaker.state, deadLetter: .deadLetter.present}'
  Breakers that remain open or repeated dead letters indicate the proxy is rejecting calls.
- Inspect limiter metrics on the host:
  curl -s http://127.0.0.1:9127/metrics \
    | grep -E 'pulse_proxy_limiter_(rejects|penalties)_total'
  A rising counter confirms the limiter is backing off callers.
- Review logs for throttling:
  journalctl -u pulse-sensor-proxy -n 100 | grep -i "rate limit"
Tuning guidance: Add a rate_limit block to /etc/pulse-sensor-proxy/config.yaml (see cmd/pulse-sensor-proxy/config.example.yaml) when clusters grow beyond the defaults. Use the formula per_peer_interval_ms = polling_interval_ms / node_count and set per_peer_burst ≥ node_count to allow one full sweep per polling window.
| Deployment size | Nodes | 10 s poll interval → interval_ms | Suggested burst | Notes |
|---|---|---|---|---|
| Small | 1–3 | 1000 (default) | 5 | Works for most single Proxmox hosts. |
| Medium | 4–10 | 500 | 10 | Halves wait time; keep burst ≥ node count. |
| Large | 10–20 | 250 | 20 | Monitor CPU on proxy; consider staggering polls. |
| XL | 30+ | 100–150 | 30–50 | Only enable after validating proxy host capacity. |
Security note: Lower intervals increase throughput and reduce UI staleness, but they also allow untrusted callers to issue more RPCs per second. Keep per_peer_interval_ms ≥ 100 in production and continue to rely on UID allow-lists plus audit logs when raising limits.
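Applying the Medium row from the table above, one way to add the block on a proxy host looks like this; the values are examples, and the restart is an assumption that the proxy re-reads its config at startup:
# Check first that config.yaml does not already define rate_limit, then append and restart
cat <<'EOF' | sudo tee -a /etc/pulse-sensor-proxy/config.yaml
rate_limit:
  per_peer_interval_ms: 500   # polling_interval_ms / node_count, per the formula above
  per_peer_burst: 10          # keep burst >= node count
EOF
sudo systemctl restart pulse-sensor-proxy   # assumption: config is read at service start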
SSH latency monitoring:
- Monitor SSH latency metrics:
curl -s http://127.0.0.1:9127/metrics | grep pulse_proxy_ssh_latency
Requires Proxmox Cluster Membership:
- Proxy requires passwordless root SSH between cluster nodes
- Standard for Proxmox clusters, but hardened environments may differ
- Alternative: Create a dedicated service account with sudo access to sensors
No Cross-Cluster Support:
- Proxy only manages the cluster its host belongs to
- Cannot bridge temperature monitoring across multiple disconnected clusters
- Each cluster needs its own Pulse instance with its own proxy
Common Issues
Temperature Data Stops Appearing:
- Check proxy service: systemctl status pulse-sensor-proxy
- Check proxy logs: journalctl -u pulse-sensor-proxy -n 50
- Test SSH manually: ssh root@node "sensors -j"
- Verify socket exists: ls -l /run/pulse-sensor-proxy/pulse-sensor-proxy.sock
New Cluster Node Not Showing Temperatures:
- Ensure lm-sensors is installed: ssh root@new-node "sensors -j"
- Proxy auto-discovers on the next poll (may take up to 1 minute)
- Re-run the setup script to configure SSH keys on the new node:
  curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-sensor-proxy.sh | bash -s -- --ctid <CTID>
Permission Denied Errors:
- Verify socket permissions: ls -l /run/pulse-sensor-proxy/pulse-sensor-proxy.sock
- Should be: srw-rw---- 1 root root
- Check Pulse runs as root in the container: pct exec <CTID> -- whoami
Proxy Service Won't Start:
- Check logs: journalctl -u pulse-sensor-proxy -n 50
- Verify binary exists: ls -l /usr/local/bin/pulse-sensor-proxy
- Test manually: /usr/local/bin/pulse-sensor-proxy --version
- Check socket directory: ls -ld /var/run
Future Improvements
Potential Enhancements (Roadmap):
- Proxmox API Integration
  - If future Proxmox versions expose temperature telemetry via API, retire the SSH approach
  - Would eliminate SSH key management and improve the security posture
  - Monitor Proxmox development for metrics/RRD temperature endpoints
- Agent-Based Architecture
  - Deploy lightweight agents on each node for richer telemetry
  - Reduces SSH fan-out overhead for large clusters
  - Trade-off: adds deployment/update complexity
  - Consider only if demand for additional metrics grows
- SNMP/IPMI Support
  - Optional integration for baseboard management controllers
  - Better for hardware-level sensors (baseboard temps, fan speeds)
  - Requires hardware/firmware support, so keep as an optional add-on
- Schema Validation
  - Add JSON schema validation for sensors -j output
  - Detect format changes early with instrumentation
  - Log warnings when unexpected sensor formats appear
- Caching & Throttling
  - Implement result caching for large clusters (10+ nodes)
  - Reduce SSH overhead with a configurable TTL
  - Add request throttling to prevent SSH rate limiting
- Automated Key Rotation
  - Systemd timer for automatic 90-day rotation
  - Already supported via /opt/pulse/scripts/pulse-proxy-rotate-keys.sh
  - Just needs a timer unit configuration (documented in the hardening guide)
- Health Check Endpoint
  - Add a /health endpoint separate from the Prometheus metrics
  - Enable external monitoring systems (Nagios, Zabbix, etc.)
  - Return proxy status, socket accessibility, and last successful poll
Contributions Welcome: If any of these improvements interest you, open a GitHub issue to discuss implementation!
Control-Plane Sync & Migration
As of v4.32 the sensor proxy registers with Pulse and syncs its authorized node list via /api/temperature-proxy/authorized-nodes. No more manual allowed_nodes maintenance or /etc/pve access is required.
New installs
Always pass the Pulse URL when installing:
curl -sSL https://pulse.example.com/api/install/install-sensor-proxy.sh \
| sudo bash -s -- --ctid 108 --pulse-server http://192.168.0.149:7655
The installer now:
- Registers the proxy with Pulse (even for socket-only mode)
- Saves /etc/pulse-sensor-proxy/.pulse-control-token
- Appends a pulse_control_plane block to /etc/pulse-sensor-proxy/config.yaml
Migrating existing hosts
If you installed before v4.32, run the migration helper on each host:
curl -sSL https://pulse.example.com/api/install/migrate-sensor-proxy-control-plane.sh \
| sudo bash -s -- --pulse-server http://192.168.0.149:7655
The script registers the existing proxy, writes the control token, updates the config, and restarts the service (use --skip-restart if you prefer to bounce it yourself). Once migrated, temperatures for every node defined in Pulse will continue working even if the proxy can’t reach /etc/pve or Corosync IPC.
After migration you should see Temperature data fetched successfully entries for each node in journalctl -u pulse-sensor-proxy, and Settings → Diagnostics will show the last control-plane sync time.
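A quick way to confirm that after migration, using the unit and log line mentioned above:
# On the proxy host: recent successful fetches indicate a healthy control-plane sync
journalctl -u pulse-sensor-proxy --since "15 minutes ago" | grep -i "temperature data fetched successfully"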
Getting Help
If temperature monitoring isn't working:
- Collect diagnostic info:
  # On Proxmox host
  systemctl status pulse-sensor-proxy
  journalctl -u pulse-sensor-proxy -n 100 > /tmp/proxy-logs.txt
  ls -la /run/pulse-sensor-proxy/pulse-sensor-proxy.sock
  # In Pulse container
  journalctl -u pulse -n 100 | grep -i temp > /tmp/pulse-temp-logs.txt
- Test manually:
  # On Proxmox host - test SSH to a cluster node
  ssh root@cluster-node "sensors -j"
- Check GitHub Issues: https://github.com/rcourtman/Pulse/issues
- Include in bug report:
- Pulse version
- Deployment type (LXC/Docker/native)
- Proxy logs
- Pulse logs
- Output of manual SSH test