Temperature Monitoring
Pulse can display real-time CPU and NVMe temperatures directly in your dashboard, giving you instant visibility into your hardware health.
Features
- CPU Package Temperature: Shows the overall CPU temperature when available
- Individual Core Temperatures: Tracks each CPU core
- NVMe Drive Temperatures: Monitors NVMe SSD temperatures (visible in the Storage tab's disk list)
- Color-Coded Display:
  - Green: < 60°C (normal)
  - Yellow: 60-80°C (warm)
  - Red: > 80°C (hot)
How It Works
Secure Architecture (v4.24.0+)
For containerized deployments (LXC/Docker), Pulse uses a secure proxy architecture:
- pulse-temp-proxy runs on the Proxmox host (outside the container)
- SSH keys are stored on the host filesystem (/var/lib/pulse-temp-proxy/ssh/)
- Pulse communicates with the proxy via a unix socket
- The proxy handles all SSH connections to cluster nodes
Benefits:
- SSH keys never enter the container
- Container compromise doesn't expose infrastructure credentials
- Automatically configured during installation
- Transparent to users - no setup changes
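To confirm this layout on a running system, you can inspect the documented paths on the Proxmox host (a quick sanity check; see Verifying Proxy Installation below for more):
# On the Proxmox host: the proxy's key directory and unix socket
ls -l /var/lib/pulse-temp-proxy/ssh/
ls -l /run/pulse-temp-proxy/pulse-temp-proxy.sock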
Legacy Architecture (Pre-v4.24.0 / Native Installs)
For native (non-containerized) installations, Pulse connects directly via SSH:
- Pulse uses SSH key authentication (like Ansible, Terraform, etc.)
- Runs the sensors -j command to read hardware temperatures
- SSH key stored in Pulse's home directory
Important for native installs: Run every setup command as the same user account that executes the Pulse service (typically pulse). The backend reads the SSH key from that user's home directory.
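For example, test connectivity as that account rather than as root (this assumes the service account is named pulse; substitute yours if it differs):
# Run the SSH test as the Pulse service user so the right key is used
sudo -u pulse ssh root@your-proxmox-node "sensors -j"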
Requirements
- SSH Key Authentication: Your Pulse server needs SSH key access to nodes (no passwords)
- lm-sensors Package: Installed on nodes to read hardware sensors
- Passwordless root SSH (Proxmox clusters only): For proxy architecture, the Proxmox host running Pulse must have passwordless root SSH access to all cluster nodes. This is standard for Proxmox clusters but hardened environments may need to create an alternate service account.
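You can check the passwordless requirement up front; BatchMode makes ssh fail immediately instead of prompting for a password:
# From the Proxmox host running Pulse, repeat for each cluster node
ssh -o BatchMode=yes root@cluster-node true && echo "OK" || echo "key setup needed"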
Setup (Automatic)
The auto-setup script (Settings → Nodes → Setup Script) will prompt you to configure SSH access for temperature monitoring:
- Run the auto-setup script on your Proxmox node
- When prompted for SSH setup, choose "y"
- Get your Pulse server's public key:
# On your Pulse server (run as the user running Pulse)
cat ~/.ssh/id_rsa.pub
- Paste the public key when prompted
- The script will:
  - Add the key to /root/.ssh/authorized_keys
  - Install lm-sensors
  - Run sensors-detect --auto
If the node is part of a Proxmox cluster, the script will detect the other members and offer to configure the same SSH/lm-sensors setup on each of them automatically. Confirm when prompted to roll it out cluster-wide.
Setup (Manual)
If you skipped SSH setup during auto-setup, you can configure it manually:
1. Generate SSH Key (on Pulse server)
# Run as the user running Pulse (usually the pulse service account)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
2. Copy Public Key to Proxmox Nodes
# Get your public key
cat ~/.ssh/id_rsa.pub
# Add it to each Proxmox node
ssh root@your-proxmox-node
mkdir -p /root/.ssh
chmod 700 /root/.ssh
echo "YOUR_PUBLIC_KEY_HERE" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
3. Install lm-sensors (on each Proxmox node)
apt-get update
apt-get install -y lm-sensors
sensors-detect --auto
4. Test SSH Connection
From your Pulse server:
ssh root@your-proxmox-node "sensors -j"
You should see JSON output with temperature data.
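Output varies by hardware, but on a typical Intel host with an NVMe drive it looks roughly like this (abridged):
{
  "coretemp-isa-0000": {
    "Package id 0": { "temp1_input": 45.000 },
    "Core 0": { "temp2_input": 43.000 }
  },
  "nvme-pci-0100": {
    "Composite": { "temp1_input": 38.000 }
  }
}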
How It Works
- Pulse uses SSH to connect to each node as root
- Runs sensors -j to get temperature data in JSON format
- Parses CPU temperatures (coretemp/k10temp)
- Parses NVMe temperatures (nvme-pci-*)
- Displays CPU temperatures on the overview dashboard and lists NVMe drive temperatures in the Storage tab's disk table when available
Troubleshooting
No Temperature Data Shown
Check SSH access:
# From Pulse server
ssh root@your-proxmox-node "echo test"
Check lm-sensors:
# On Proxmox node
sensors -j
Check Pulse logs:
journalctl -u pulse -f | grep -i temp
Temperature Shows as Unavailable
- lm-sensors may not be installed
- Node may not have temperature sensors
- SSH key authentication may not be working
ARM Devices (Raspberry Pi, etc.)
ARM devices typically don't have the same sensor interfaces. Temperature monitoring may not work or may show different sensors (like thermal_zone0 instead of coretemp).
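On such devices you can check what the kernel exposes directly; the value is in millidegrees Celsius:
# On an ARM device such as a Raspberry Pi
cat /sys/class/thermal/thermal_zone0/temp
# e.g. 48312 means 48.3°C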
Security & Architecture
How Temperature Collection Works
Temperature monitoring uses SSH key authentication - the same trusted method used by automation tools like Ansible, Terraform, and SaltStack for managing infrastructure at scale.
What Happens:
- Pulse connects to your node via SSH using a key (no passwords)
- Runs sensors -j to get temperature readings in JSON format
- Parses the data and displays it in the dashboard
- Disconnects (entire operation takes <1 second)
Security Design:
- ✅ Key-based authentication - More secure than passwords, industry standard
- ✅ Read-only operation - The sensors command only reads hardware data
- ✅ Private key stays on Pulse server - Never transmitted or exposed
- ✅ Public key on nodes - Safe to store, can't be used to gain access
- ✅ Instantly revocable - Remove key from authorized_keys to disable
- ✅ Logged and auditable - All connections logged in /var/log/auth.log
What Pulse Uses SSH For
Pulse reuses the SSH access only for the actions already described in Setup (Automatic) and How It Works: adding the public key during setup (if you opt in) and polling sensors -j each cycle. It does nothing else—no extra commands, file changes, or config edits—and revoking the key stops temperature collection immediately.
This is the same security model used by thousands of organizations for infrastructure automation.
Best Practices
- Dedicated key: Generate a separate SSH key just for Pulse (recommended)
- Firewall rules: Optionally restrict SSH to your Pulse server's IP
- Regular monitoring: Review auth logs if you want extra visibility
- Secure your Pulse server: Keep it updated and behind proper access controls
Command Restrictions (Default)
Pulse now writes the temperature key with a forced command so the connection can only execute sensors -j. Port/X11/agent forwarding and PTY allocation are all disabled automatically when you opt in through the setup script. Re-running the script upgrades older installs to the restricted entry without touching any of your other SSH keys.
# Example entry in /root/.ssh/authorized_keys installed by Pulse
command="sensors -j",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3NzaC1yc2E...
You can still manage the entry manually if you prefer, but no extra steps are required for new installations.
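You can verify the restriction is working: with a forced command in place, whatever command the client requests is ignored and sensors -j runs instead.
# From the Pulse server: request a different command...
ssh root@your-proxmox-node "echo should-not-run"
# ...the output is sensors JSON, not "should-not-run"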
Performance Impact
- Minimal: SSH connection is made once per polling cycle
- Timeout: 5 seconds (non-blocking)
- Falls back gracefully if SSH fails
- No impact if SSH is not configured
Container Security Considerations
✅ Resolved in v4.24.0
Secure Proxy Architecture (Current)
As of v4.24.0, containerized deployments use pulse-temp-proxy which eliminates the security concerns:
- SSH keys stored on host - Not accessible from container
- Unix socket communication - Pulse never touches SSH keys
- Automatic during installation - No manual configuration needed
- Container compromise = No credential exposure - Attacker gains nothing
For new installations: The proxy is installed automatically during LXC setup. No action required.
For existing installations (pre-v4.24.0): Upgrade your deployment to use the proxy:
# On your Proxmox host
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install-temp-proxy.sh | \
bash -s -- --ctid <your-pulse-container-id>
Legacy Security Concerns (Pre-v4.24.0)
Older versions stored SSH keys inside the container, creating security risks:
- Compromised container = exposed SSH keys
- Even with forced commands, keys could be extracted
- Required manual hardening (key rotation, IP restrictions, etc.)
Hardening Recommendations (Legacy/Native Installs Only)
1. Key Rotation
Rotate SSH keys periodically (e.g., every 90 days):
# On Pulse server
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_new -N ""
# Update all nodes' authorized_keys
# Test connectivity
ssh -i ~/.ssh/id_ed25519_new node "sensors -j"
# Replace old key (move both halves of the pair)
mv ~/.ssh/id_ed25519_new ~/.ssh/id_ed25519
mv ~/.ssh/id_ed25519_new.pub ~/.ssh/id_ed25519.pub
2. Secret Mounts (Docker)
Mount SSH keys from secure volumes:
version: '3'
services:
  pulse:
    image: rcourtman/pulse:latest
    volumes:
      - pulse-ssh-keys:/home/pulse/.ssh:ro  # Read-only
      - pulse-data:/data
volumes:
  pulse-data:
  pulse-ssh-keys:
    driver: local
    driver_opts:
      type: tmpfs  # Memory-only, not persisted
      device: tmpfs
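Note the trade-off: a tmpfs-backed volume is memory-only, so the keys disappear whenever the Docker host restarts and must be re-seeded. Use a regular local volume if you prefer persistence over memory-only storage.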
3. Monitoring & Alerts
Enable SSH audit logging on Proxmox nodes:
# Install auditd
apt-get install auditd
# Watch SSH access
auditctl -w /root/.ssh -p wa -k ssh_access
# Monitor for unexpected commands
tail -f /var/log/audit/audit.log | grep ssh
4. IP Restrictions
Limit SSH access to your Pulse server IP in /etc/ssh/sshd_config:
Match User root Address 192.168.1.100
    ForceCommand sensors -j
    PermitOpen none
    AllowAgentForwarding no
    AllowTcpForwarding no
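After editing, validate the config and reload the daemon (the unit is named ssh on Debian-based systems such as Proxmox):
# Check syntax first, then apply
sshd -t && systemctl reload ssh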
Verifying Proxy Installation
To check if your deployment is using the secure proxy:
# On Proxmox host - check proxy service
systemctl status pulse-temp-proxy
# Check if socket exists
ls -l /run/pulse-temp-proxy/pulse-temp-proxy.sock
# View proxy logs
journalctl -u pulse-temp-proxy -f
In the Pulse container, check the logs at startup:
# Should see: "Temperature proxy detected - using secure host-side bridge"
journalctl -u pulse | grep -i proxy
Disabling Temperature Monitoring
To remove SSH access:
# On each Proxmox node
sed -i '/pulse@/d' /root/.ssh/authorized_keys
# Or remove just the forced command entry
sed -i '/command="sensors -j"/d' /root/.ssh/authorized_keys
Temperature data will stop appearing in the dashboard after the next polling cycle.
Operations & Troubleshooting
Managing the Proxy Service
The pulse-temp-proxy service runs on the Proxmox host (outside the container).
Service Management:
# Check service status
systemctl status pulse-temp-proxy
# Restart the proxy
systemctl restart pulse-temp-proxy
# Stop the proxy (disables temperature monitoring)
systemctl stop pulse-temp-proxy
# Start the proxy
systemctl start pulse-temp-proxy
# Enable proxy to start on boot
systemctl enable pulse-temp-proxy
# Disable proxy autostart
systemctl disable pulse-temp-proxy
Log Locations
Proxy Logs (on Proxmox host):
# Follow proxy logs in real-time
journalctl -u pulse-temp-proxy -f
# View last 50 lines
journalctl -u pulse-temp-proxy -n 50
# View logs since last boot
journalctl -u pulse-temp-proxy -b
# View logs from the last hour
journalctl -u pulse-temp-proxy --since "1 hour ago"
Pulse Logs (in container):
# Check if proxy is being used
journalctl -u pulse | grep -i "proxy\|temperature"
# Should see: "Temperature proxy detected - using secure host-side bridge"
SSH Key Rotation
Rotate SSH keys periodically for security (recommended every 90 days):
# 1. On Proxmox host, backup old keys
cd /var/lib/pulse-temp-proxy/ssh/
cp id_ed25519 id_ed25519.backup
cp id_ed25519.pub id_ed25519.pub.backup
# 2. Generate new keypair (answer 'y' when asked to overwrite the existing file)
ssh-keygen -t ed25519 -f id_ed25519 -N "" -C "pulse-temp-proxy-rotated"
# 3. Get the new public key
cat id_ed25519.pub
# 4. Add new key to all cluster nodes
# For each node in your cluster:
ssh root@node1 "echo 'NEW_PUBLIC_KEY_HERE' >> /root/.ssh/authorized_keys"
ssh root@node2 "echo 'NEW_PUBLIC_KEY_HERE' >> /root/.ssh/authorized_keys"
# ... repeat for all nodes
# 5. Restart proxy to use new keys
systemctl restart pulse-temp-proxy
# 6. Verify temperature data still works in Pulse UI
# 7. Remove old keys from nodes (after confirming new keys work)
# Adjust the sed pattern to match the comment on your OLD key
ssh root@node1 "sed -i '/pulse-temp-proxy-old/d' /root/.ssh/authorized_keys"
Revoking Access When Nodes Leave
When removing a node from your cluster:
# On the node being removed, remove the proxy's public key
ssh root@old-node "sed -i '/pulse-temp-proxy/d' /root/.ssh/authorized_keys"
# No restart needed - proxy will fail gracefully for that node
# Temperature monitoring will continue for remaining nodes
Failure Modes
Proxy Not Running:
- Symptom: No temperature data in Pulse UI
- Check: systemctl status pulse-temp-proxy on the Proxmox host
- Fix: systemctl start pulse-temp-proxy
Socket Not Accessible in Container:
- Symptom: Pulse logs show "Temperature proxy not available - using direct SSH"
- Check: ls -l /run/pulse-temp-proxy/pulse-temp-proxy.sock in the container
- Fix: Verify the bind mount in the LXC config (/etc/pve/lxc/<CTID>.conf). It should contain:
lxc.mount.entry: /run/pulse-temp-proxy run/pulse-temp-proxy none bind,create=dir 0 0
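If the entry is missing, you can add it and restart the container (CTID 100 is a placeholder; use your container's ID):
# On the Proxmox host
echo 'lxc.mount.entry: /run/pulse-temp-proxy run/pulse-temp-proxy none bind,create=dir 0 0' >> /etc/pve/lxc/100.conf
pct restart 100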
pvecm Not Available:
- Symptom: Proxy fails to discover cluster nodes
- Cause: Pulse runs on non-Proxmox host
- Fallback: Use legacy direct SSH method (native installation)
Pulse Running Off-Cluster:
- Symptom: Proxy discovers local host but not remote cluster nodes
- Limitation: Proxy requires passwordless SSH between cluster nodes
- Solution: Ensure Proxmox host running Pulse has SSH access to all cluster nodes
Unauthorized Connection Attempts:
- Symptom: Proxy logs show "Unauthorized connection attempt"
- Cause: Process with non-root UID trying to access socket
- Normal: Only root (UID 0) or proxy's own user can access socket
- Check: Look for suspicious processes trying to access the socket
Monitoring the Proxy
Manual Monitoring (v1):
The proxy service includes systemd restart-on-failure, which handles most issues automatically. For additional monitoring:
# Check proxy health
systemctl is-active pulse-temp-proxy && echo "Proxy is running" || echo "Proxy is down"
# Monitor logs for errors
journalctl -u pulse-temp-proxy --since "1 hour ago" | grep -i error
# Verify socket exists and is accessible
test -S /run/pulse-temp-proxy/pulse-temp-proxy.sock && echo "Socket OK" || echo "Socket missing"
Alerting:
- Rely on systemd's automatic restart (Restart=on-failure)
- Monitor via journalctl for persistent failures
- Check Pulse UI for missing temperature data
Future: Integration with pulse-watchdog is planned for automated health checks and alerting (see #528).
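Until that integration lands, a simple cron check can fill the gap. This is a sketch only: it assumes a working MTA on the host, and admin@example.com is a placeholder address:
# Every 5 minutes: email if the proxy is down (requires a configured MTA)
*/5 * * * * systemctl is-active --quiet pulse-temp-proxy || echo "pulse-temp-proxy down on $(hostname)" | mail -s "Pulse proxy alert" admin@example.com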
Known Limitations
One Proxy Per Host:
- Each Proxmox host runs one pulse-temp-proxy instance
- If multiple Pulse containers run on the same host, they share the same proxy
- All containers see the same temperature data from the same cluster
Requires Proxmox Cluster Membership:
- Proxy uses pvecm nodes to discover cluster members
- Standalone Proxmox nodes work but only monitor that single node
- For standalone nodes, proxy is less useful (direct SSH works fine)
Passwordless Root SSH Required:
- Proxy assumes passwordless root SSH between cluster nodes
- Standard for Proxmox clusters, but hardened environments may differ
- Alternative: Create a dedicated service account with sudo access to sensors
No Cross-Cluster Support:
- Proxy only manages the cluster its host belongs to
- Cannot bridge temperature monitoring across multiple disconnected clusters
- Each cluster needs its own Pulse instance with its own proxy
Common Issues
Temperature Data Stops Appearing:
- Check proxy service: systemctl status pulse-temp-proxy
- Check proxy logs: journalctl -u pulse-temp-proxy -n 50
- Test SSH manually: ssh root@node "sensors -j"
- Verify socket exists: ls -l /run/pulse-temp-proxy/pulse-temp-proxy.sock
New Cluster Node Not Showing Temperatures:
- Ensure lm-sensors is installed: ssh root@new-node "sensors -j"
- Proxy auto-discovers on next poll (may take up to 1 minute)
- Force refresh by restarting Pulse: pct restart <CTID>
Permission Denied Errors:
- Verify socket permissions: ls -l /run/pulse-temp-proxy/pulse-temp-proxy.sock
- Should be: srw-rw---- 1 root root
- Check Pulse runs as root in the container: pct exec <CTID> -- whoami
Proxy Service Won't Start:
- Check logs: journalctl -u pulse-temp-proxy -n 50
- Verify binary exists: ls -l /usr/local/bin/pulse-temp-proxy
- Test manually: /usr/local/bin/pulse-temp-proxy --version
- Check socket directory: ls -ld /run/pulse-temp-proxy
Getting Help
If temperature monitoring isn't working:
- Collect diagnostic info:
# On Proxmox host
systemctl status pulse-temp-proxy
journalctl -u pulse-temp-proxy -n 100 > /tmp/proxy-logs.txt
ls -la /run/pulse-temp-proxy/pulse-temp-proxy.sock
# In Pulse container
journalctl -u pulse -n 100 | grep -i temp > /tmp/pulse-temp-logs.txt
- Test manually:
# On Proxmox host - test SSH to a cluster node
ssh root@cluster-node "sensors -j"
- Check GitHub Issues: https://github.com/rcourtman/Pulse/issues
- Include in bug report:
  - Pulse version
  - Deployment type (LXC/Docker/native)
  - Proxy logs
  - Pulse logs
  - Output of manual SSH test