Major improvements to the install script based on comprehensive review:
## 1. Temperature Monitoring - No Restart Required ✨
- Ask about temperature monitoring BEFORE container creation (not after)
- Add bind mount during `pct create` instead of requiring a restart later (see the sketch below)
- Quick mode defaults to "yes"; Advanced mode asks the user
- Host path: /run/pulse-sensor-proxy → /mnt/pulse-proxy in container
- Support --skip-restart flag in install-sensor-proxy.sh
- Eliminates disruptive container restart on fresh installs
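A minimal sketch of the create-time wiring (CTID, TEMPLATE, and the option layout are illustrative, not the installer's exact code):

```bash
# Sketch: bind-mount the proxy directory at creation time so no restart
# is needed later. CTID and TEMPLATE are placeholders.
CTID=200
TEMPLATE="local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst"

mkdir -p /run/pulse-sensor-proxy
pct create "$CTID" "$TEMPLATE" \
    --hostname pulse \
    --mp0 "/run/pulse-sensor-proxy,mp=/mnt/pulse-proxy"
```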
## 2. Shell Injection Prevention 🔒
- Replace `eval pct create` with array-based command building
- Prevents quoting bugs with special characters in hostnames/nameservers
- Safer handling of user input in container creation
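A minimal sketch of the array-based pattern (variable names are illustrative):

```bash
# Arrays preserve each argument verbatim, so hostnames or nameservers
# containing spaces or quotes cannot be re-interpreted the way they
# could under `eval pct create ...`.
create_args=(create "$CTID" "$TEMPLATE"
    --hostname "$CT_HOSTNAME"
    --nameserver "$CT_NAMESERVER")
pct "${create_args[@]}"
```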
## 3. Non-Interactive Install Support 🤖
- Replace bare `read` with `safe_read_with_default` in prompts
- Prevents hangs when running `curl | bash` non-interactively
- Proper fallback to sensible defaults
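A sketch of what such a helper can look like (the real `safe_read_with_default` may differ):

```bash
# Prompt only when stdin is a TTY; otherwise (e.g. `curl ... | bash`)
# return the default immediately so the script never hangs.
safe_read_with_default() {
    local prompt="$1" default="$2" reply
    if [[ -t 0 ]] && read -r -t 60 -p "$prompt [$default]: " reply; then
        echo "${reply:-$default}"
    else
        echo "$default"
    fi
}
```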
## 4. Cleanup on Interrupt 🧹
- Track container ID globally during creation
- Properly clean up orphaned containers on Ctrl+C/SIGTERM
- New handle_install_interrupt() function
- Prevents leftover containers after cancelled installs
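A minimal sketch of the trap wiring (the names match the description above; the body is illustrative):

```bash
CREATED_CTID=""   # set right after `pct create` succeeds, cleared on success

handle_install_interrupt() {
    if [[ -n "$CREATED_CTID" ]]; then
        echo "Install interrupted; removing container $CREATED_CTID" >&2
        pct stop "$CREATED_CTID" 2>/dev/null || true
        pct destroy "$CREATED_CTID" 2>/dev/null || true
    fi
    exit 130
}
trap handle_install_interrupt INT TERM
```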
## 5. Air-Gapped Network Support 🌐
- Replace 8.8.8.8 ping check with `hostname -I` IP detection
- Supports restricted/firewalled networks where external ping fails
- More reliable for DHCP-only environments
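A sketch of the check, assuming a hypothetical `wait_for_network` helper:

```bash
# Networking is considered up once the container reports any IP address;
# no external host needs to be reachable.
wait_for_network() {
    local ctid="$1" tries
    for ((tries = 0; tries < 30; tries++)); do
        if [[ -n "$(pct exec "$ctid" -- hostname -I 2>/dev/null | tr -d '[:space:]')" ]]; then
            return 0
        fi
        sleep 2
    done
    return 1
}
```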
Changes:
- install.sh: Refactor temperature prompt timing and mount setup
- install.sh: Convert pct create to array-based args (lines 1018-1055)
- install.sh: Add handle_install_interrupt trap (lines 38-48)
- install.sh: Replace ping check with IP detection (line 1082)
- scripts/install-sensor-proxy.sh: Add --skip-restart flag support
- scripts/install-sensor-proxy.sh: Improve mount detection and updates
Impact:
- Fresh installs now complete without any container restarts
- Temperature monitoring works immediately after first boot
- Safer and more robust for automation/CI scenarios
- Better experience on restricted networks
Co-authored-by: Codex AI
- Remove confusing --main flag, use --source for clarity
- Fix timeout issues when building from source in LXC containers
- Increase timeout from 5min to 20min for source builds
- Add PULSE_CONTAINER_TIMEOUT env var for custom timeouts
- Support PULSE_CONTAINER_TIMEOUT=0 to disable timeout
- Fix misleading "Latest version: vX.X.X" message during source builds
- Update documentation to use --source instead of --main
- Simplify auto-update script logic for source builds
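A minimal sketch of the timeout selection (`run_install` is a hypothetical stand-in for the real install step):

```bash
if [[ "${BUILD_FROM_SOURCE:-false}" == "true" ]]; then
    default_timeout=1200    # source builds: 20 minutes
else
    default_timeout=300     # binary installs: 5 minutes
fi
timeout_secs="${PULSE_CONTAINER_TIMEOUT:-$default_timeout}"

if [[ "$timeout_secs" -eq 0 ]]; then
    run_install                          # 0 disables the timeout entirely
else
    timeout "$timeout_secs" run_install
fi
```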
Changes:
- install.sh: Check BUILD_FROM_SOURCE early to skip version detection
- install.sh: Adaptive timeout (300s binary, 1200s source builds)
- install.sh: Better timeout error messages with recovery instructions
- README.md: Replace --main with --source in examples
- docs/INSTALL.md: Replace --main with --source in examples
- scripts/pulse-auto-update.sh: Remove --main special case
The installer was configuring pulse-backend.service.d but the actual
service is pulse.service, so the PULSE_SENSOR_PROXY_SOCKET environment
variable wasn't being set.
Changed: pulse-backend.service → pulse.service
This ensures Pulse actually uses the proxy socket for temperature
monitoring instead of attempting SSH connections.
Changed curl flags from -fsSL to -fSL to enable error output.
The -s flag was silencing all curl errors including SSL/TLS issues,
making it impossible to diagnose download failures.
With -S (show errors), stderr now captures meaningful error messages
like certificate problems, connection failures, etc.
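A sketch of the resulting download pattern (variable names illustrative; note that without -s the progress meter also goes to stderr in non-interactive runs):

```bash
# -f fail on HTTP errors, -S print errors, -L follow redirects; with -s
# removed, curl writes diagnostics to stderr, which is captured and shown.
err_file="$(mktemp)"
if ! curl -fSL "$url" -o "$dest" 2>"$err_file"; then
    echo "Download failed: $(cat "$err_file")" >&2
    rm -f "$err_file"
    exit 1
fi
rm -f "$err_file"
```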
- Back up container config before making mount modifications
- Restore original config if socket verification fails
- Clean up backup file on success or when verification is skipped
- Leave host-level resources (user, binary, service) in place for idempotency
This ensures failed installations don't leave containers in an
inconsistent state while keeping successfully installed host services
for faster re-runs.
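A minimal sketch of the backup/restore flow (CTID and `verify_socket` are illustrative placeholders):

```bash
conf="/etc/pve/lxc/${CTID}.conf"
cp "$conf" "${conf}.pulse-backup"          # back up before touching mounts

pct set "$CTID" --mp0 "/run/pulse-sensor-proxy,mp=/mnt/pulse-proxy"

if verify_socket "$CTID"; then
    rm -f "${conf}.pulse-backup"           # success: drop the backup
else
    mv "${conf}.pulse-backup" "$conf"      # failure: restore original config
    exit 1
fi
```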
Setup script improvements (config_handlers.go):
- Remove redundant mount configuration and container restart logic
- Let installer handle all mount/restart operations (single source of truth)
- Eliminate hard-coded mp0 assumption
Installer improvements (install-sensor-proxy.sh):
- Add mount configuration persistence validation via pct config check
- Surface pct set errors instead of silencing with 2>/dev/null
- Capture and display curl download errors with temp files
- Check systemd daemon-reload/enable/restart exit codes
- Show journalctl output when service fails to start
- Make socket verification fatal (previously a warning)
- Provide clear manual steps when hot-plug fails on running container
This makes the installation fail fast with actionable error messages
instead of silently proceeding with broken configuration.
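A sketch of the fail-fast pattern (unit and mount names as used elsewhere in this change; exact messages illustrative):

```bash
if ! pct set "$CTID" --mp0 "/run/pulse-sensor-proxy,mp=/mnt/pulse-proxy"; then
    echo "ERROR: pct set failed; mount not configured" >&2
    exit 1
fi
if ! pct config "$CTID" | grep -q '^mp[0-9]\+:.*pulse-sensor-proxy'; then
    echo "ERROR: mount did not persist in container config" >&2
    exit 1
fi
if ! systemctl restart pulse-sensor-proxy; then
    journalctl -u pulse-sensor-proxy -n 20 --no-pager >&2
    exit 1
fi
```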
Replaces immediate polling with queue-based scheduling:
- TaskQueue with min-heap (container/heap) for NextRun-ordered execution
- Worker goroutines that block on WaitNext() until tasks are due
- Tasks only execute when NextRun <= now, respecting adaptive intervals
- Automatic rescheduling after execution via scheduler.BuildPlan
- Queue depth tracking for backpressure-aware interval adjustments
- Upsert semantics for updating scheduled tasks without duplicates
Task 6 of 10 complete (60%). Ready for error/backoff policies.
Confirms adaptive scheduling logic is fully operational:
- EMA smoothing (alpha=0.6) to prevent interval oscillations
- Staleness-based interpolation between min/max intervals
- Error penalty (0.6x per error) for faster recovery detection
- Queue depth stretch (0.1x per task) for backpressure handling
- ±5% jitter to prevent thundering herd effects
- Per-instance state tracking for smooth transitions
Task 5 of 10 complete. Scheduler foundation ready for queue-based execution.
Added clear messaging to explain why the socket bind mount is configured,
focusing on the security benefits rather than technical implementation.
Changes:
- Add explanatory header "Secure Container Communication Setup"
- Explain the three key benefits:
• Container communicates via Unix socket (not SSH)
• No SSH keys exposed inside container (enhanced security)
• Proxy on host manages all temperature collection
- Update technical messages to be more user-friendly:
• "Configuring socket bind mount" instead of "Ensuring..."
• "Restarting container to activate secure communication"
• "Verifying secure communication channel"
• "✓ Secure socket communication ready"
• "Configuring Pulse to use proxy"
This helps users understand WHY the bind mount exists (security) rather
than just seeing technical implementation details.
Instead of warning the user to restart the container manually, the script
now automatically restarts it when the socket mount configuration is
updated. This ensures the mount is immediately active and temperature
monitoring works right away without user intervention.
Uses 'pct restart' if running, 'pct start' if stopped.
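In sketch form (CTID illustrative):

```bash
if pct status "$CTID" | grep -q running; then
    pct restart "$CTID"    # mount changes take effect on restart
else
    pct start "$CTID"
fi
```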
The install-sensor-proxy.sh script now tries GitHub releases first, then falls
back to downloading from the Pulse server if GitHub fails or doesn't have the
binary (common when building from main).
The LXC installer sets PULSE_SENSOR_PROXY_FALLBACK_URL to point to the Pulse
server running inside the newly created LXC, ensuring the proxy binary can be
downloaded from /api/install/pulse-sensor-proxy.
This fixes the issue where installing with --main would fail to install
pulse-sensor-proxy on the host because GitHub releases don't include it yet.
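A sketch of the two-stage download; the GitHub URL is illustrative, and whether the fallback variable carries the API path is an assumption here:

```bash
dest=/usr/local/bin/pulse-sensor-proxy
github_url="https://github.com/rcourtman/Pulse/releases/latest/download/pulse-sensor-proxy"

if ! curl -fSL "$github_url" -o "$dest"; then
    echo "GitHub download failed; falling back to Pulse server" >&2
    curl -fSL "${PULSE_SENSOR_PROXY_FALLBACK_URL}/api/install/pulse-sensor-proxy" -o "$dest"
fi
chmod +x "$dest"
```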
Adds a one-command Docker deployment flow that:
- Detects if running in LXC and installs Docker if needed
- Automatically installs pulse-sensor-proxy on the Proxmox host
- Configures bind mount for proxy socket into LXC
- Generates optimized docker-compose.yml with proxy socket
- Enables temperature monitoring via host-side proxy
The install-docker.sh script handles the complete setup including:
- Docker installation (if needed)
- ACL configuration for container UIDs
- Bind mount setup
- Automatic apparmor=unconfined for socket access
Accessible via: curl -sSL http://pulse:7655/api/install/install-docker.sh | bash
- Cleanup script now detects forced command restriction on standalone nodes
- Logs helpful message explaining limitation (security by design)
- Does not fail when standalone nodes cannot be cleaned up
- Documents that standalone node cleanup is limited by forced command security
- Automatic cleanup works fully for cluster nodes
- Manual cleanup command provided for standalone nodes if needed
- Cleanup script now tries proxy's SSH key first for standalone nodes
- Falls back to default SSH if proxy key not available
- Fixes cleanup failure when Proxmox host doesn't have direct SSH to standalone nodes
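A sketch of the key-selection fallback (the key path is assumed from the proxy's documented SSH directory; `node` and `cleanup_cmd` are placeholders):

```bash
proxy_key=/var/lib/pulse-sensor-proxy/ssh/id_ed25519
if [[ -f "$proxy_key" ]]; then
    ssh -i "$proxy_key" "root@$node" "$cleanup_cmd"
else
    ssh "root@$node" "$cleanup_cmd"     # fall back to default SSH config
fi
```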
- Create cleanup script that removes Pulse SSH keys from nodes
- Add systemd path unit to watch for cleanup requests
- Add systemd service to execute cleanup script
- Update install-sensor-proxy.sh to install cleanup system
- Handles both cluster nodes (pulse-managed-key) and standalone nodes (pulse-proxy-key)
- Cleanup is triggered automatically when nodes are deleted from Pulse
- All cleanup actions are logged via syslog for auditability
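A sketch of the watch-and-run pair (unit names and the trigger path are illustrative, not necessarily the shipped ones):

```bash
cat > /etc/systemd/system/pulse-cleanup.path <<'EOF'
[Unit]
Description=Watch for Pulse node cleanup requests

[Path]
PathExistsGlob=/var/lib/pulse-sensor-proxy/cleanup/*.request

[Install]
WantedBy=multi-user.target
EOF

cat > /etc/systemd/system/pulse-cleanup.service <<'EOF'
[Unit]
Description=Remove Pulse SSH keys for deleted nodes

[Service]
Type=oneshot
# The script must consume (delete) the request files it processes,
# or the path unit will keep re-triggering.
ExecStart=/usr/local/bin/pulse-cleanup.sh
EOF

systemctl daemon-reload
systemctl enable --now pulse-cleanup.path
```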
Implements automatic temperature monitoring setup for standalone
Proxmox/Pimox nodes without manual SSH key configuration.
Changes:
- Add /api/system/proxy-public-key endpoint to expose proxy's SSH public key
- Setup script now detects standalone nodes (non-cluster)
- Auto-fetches and installs proxy SSH key with forced commands
- Add Raspberry Pi temperature support via cpu_thermal and /sys/class/thermal
- Enhance setup script with better error handling for lm-sensors installation
- Add RPi detection to skip lm-sensors and use native thermal interface
Security:
- Public key endpoint is safe (public keys are meant to be public)
- All installed keys use forced command="sensors -j" with full restrictions
- No shell access, port forwarding, or other SSH features enabled
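For illustration, an installed key line can look like this (key material elided; the exact restriction set is an assumption consistent with the description above):

```bash
# Appended to the node's authorized_keys: the key can only ever run
# `sensors -j`, with no pty, forwarding, or shell access.
echo 'command="sensors -j",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... pulse-proxy-key' \
    >> /root/.ssh/authorized_keys
```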
Fixes two issues with the sensor proxy installation:
1. Local node IP detection now uses exact matching instead of substring matching to avoid false negatives
2. Removes duplicate output filtering in the setup script wrapper
These changes ensure that the proxy SSH key is correctly configured on the local node during cluster installations.
Implements automated cleanup workflow when nodes are deleted from Pulse, removing all monitoring footprint from the host. Changes include a new RPC handler in the sensor proxy for cleanup requests, enhanced node deletion modal with detailed cleanup explanations, and improved SSH key management with proper tagging for atomic updates.
Improvements to pulse-sensor-proxy:
- Fix cluster discovery to use pvecm status for IP addresses instead of node names
- Add standalone node support for non-clustered Proxmox hosts
- Enhanced SSH key push with detailed logging, success/failure tracking, and error reporting
- Add --pulse-server flag to installer for custom Pulse URLs
- Configure www-data group membership for Proxmox IPC access
UI and API cleanup:
- Remove unused "Ensure cluster keys" button from Settings
- Remove /api/diagnostics/temperature-proxy/ensure-cluster-keys endpoint
- Remove EnsureClusterKeys method from tempproxy client
The setup script already handles SSH key distribution during initial configuration,
making the manual refresh button redundant.
Made the setup and installation output more concise and reassuring for users: less verbosity, clearer messaging.
**Setup script improvements:**
- Changed "Container Detection" → "Enhanced Security"
- Simplified prompts: "Enable secure proxy? [Y/n]"
- Cleaned up success messages: "✓ Secure proxy architecture enabled"
- Removed verbose status messages (node-by-node cleanup output)
- Only show essential information users need to see
**install-sensor-proxy.sh improvements:**
- Added --quiet flag to suppress verbose output
- In quiet mode, only shows: "✓ pulse-sensor-proxy installed and running"
- Full output still available when run manually
- Removed redundant "Installation complete!" banners
- Cleaner legacy key cleanup messaging
**Result:**
Users see a clean, professional installation flow that builds confidence. Technical details are hidden unless needed. Messages are clear and reassuring rather than verbose.
When pulse-sensor-proxy is installed, automatically remove old SSH keys that were stored in the container for security.
Changes:
**install-sensor-proxy.sh:**
- Checks container for SSH private keys (id_rsa, id_ed25519, etc.)
- Removes any found keys from container
- Warns user that legacy keys were cleaned up
- Explains proxy now handles SSH
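A sketch of the container-side sweep (paths and key names illustrative):

```bash
for key in id_rsa id_ecdsa id_ed25519; do
    if pct exec "$CTID" -- test -f "/home/pulse/.ssh/$key"; then
        pct exec "$CTID" -- rm -f "/home/pulse/.ssh/$key" "/home/pulse/.ssh/$key.pub"
        echo "WARNING: removed legacy SSH key $key from container (proxy handles SSH now)" >&2
    fi
done
```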
**Setup script (config_handlers.go):**
- After successful proxy install, removes old SSH keys from all cluster nodes
- Cleans up authorized_keys entries that match the old container-based key
- Keeps only proxy-managed keys (pulse-sensor-proxy comment)
This provides a clean migration path from the old direct-SSH method to the secure proxy architecture. Users upgrading from pre-v4.24 versions get automatic cleanup of insecure container-stored keys.
Complete the pulse-sensor-proxy rename by updating the installer script name and all references to it.
Updated:
- Renamed scripts/install-temp-proxy.sh → scripts/install-sensor-proxy.sh
- Updated all documentation references
- Updated install.sh references
- Updated build-release.sh comments
The name "temp-proxy" implied a temporary or incomplete implementation. The new name better reflects its purpose as a secure sensor data bridge for containerized Pulse deployments.
Changes:
- Renamed cmd/pulse-temp-proxy/ to cmd/pulse-sensor-proxy/
- Updated all path constants and binary references
- Renamed environment variables: PULSE_TEMP_PROXY_* to PULSE_SENSOR_PROXY_*
- Updated systemd service and service account name
- Updated installation, rotation, and build scripts
- Renamed hardening documentation
- Maintained backward compatibility for key removal during upgrades
The pulse user in the container (UID 1001) needs to access the
/run/pulse-temp-proxy directory, which is owned by root:root. Changed the
directory mode from 0770 (owner+group only) to 0775 (adds world
read+execute) so the pulse user can reach the socket.
Related to #528
Fixes LXC bind mount issue where socket-level mounts break when the
socket is recreated by systemd. Following Codex's recommendation to
bind mount the directory instead of the file.
Changes:
- Socket path: /run/pulse-temp-proxy/pulse-temp-proxy.sock
- Systemd: RuntimeDirectory=pulse-temp-proxy (auto-creates /run/pulse-temp-proxy)
- Systemd: RuntimeDirectoryMode=0770 for group access
- LXC mount: Bind entire /run/pulse-temp-proxy directory
- Install script: Upgrades old socket-level mounts to directory-level
- Install script: Detects and handles bind mount changes
This survives socket recreations and container restarts. The directory
mount persists even when systemd unlinks/recreates the socket file.
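The upgrade path, sketched (CTID, the mp0 slot, and the in-container mount point are illustrative):

```bash
# Replace a stale socket-level bind mount with the directory-level one.
if pct config "$CTID" | grep -q 'pulse-temp-proxy\.sock'; then
    pct set "$CTID" --delete mp0
fi
pct set "$CTID" --mp0 "/run/pulse-temp-proxy,mp=/mnt/pulse-temp-proxy"
```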
Related to #528
Allows testing the proxy installation with a locally-built binary instead
of requiring a GitHub release. This enables proper E2E testing before
shipping a release.
Usage:
./install-temp-proxy.sh --ctid 112 --local-binary /path/to/pulse-temp-proxy
The script will:
- Use the provided binary instead of downloading from GitHub
- Still handle all setup (systemd service, SSH keys, bind mounts)
- Allow full integration testing without a release
This solves the chicken-and-egg problem of needing a release to test
the installation process.
Related to #528
Updates build script and release checklist to include pulse-temp-proxy binaries:
- Build pulse-temp-proxy for all architectures (amd64, arm64, armv7)
- Include in tarballs alongside pulse and pulse-docker-agent
- Copy standalone binaries to release/ for install-temp-proxy.sh
- Update release checklist to upload standalone binaries as assets
This ensures install-temp-proxy.sh can download binaries from GitHub releases.
Addresses #528
Introduces pulse-temp-proxy architecture to eliminate SSH key exposure in containers:
**Architecture:**
- pulse-temp-proxy runs on Proxmox host (outside LXC/Docker)
- SSH keys stored on host filesystem (/var/lib/pulse-temp-proxy/ssh/)
- Pulse communicates via unix socket (bind-mounted into container)
- Proxy handles cluster discovery, key rollout, and temperature fetching
**Components:**
- cmd/pulse-temp-proxy: Standalone Go binary with unix socket RPC server
- internal/tempproxy: Client library for Pulse backend
- scripts/install-temp-proxy.sh: Idempotent installer for existing deployments
- scripts/pulse-temp-proxy.service: Systemd service for proxy
**Integration:**
- Pulse automatically detects and uses proxy when socket exists
- Falls back to direct SSH for native installations
- Installer automatically configures proxy for new LXC deployments
- Existing LXC users can upgrade by running install-temp-proxy.sh
**Security improvements:**
- Container compromise no longer exposes SSH keys
- SSH keys never enter container filesystem
- Maintains forced command restrictions
- Transparent to users - no workflow changes
**Documentation:**
- Updated TEMPERATURE_MONITORING.md with new architecture
- Added verification steps and upgrade instructions
- Preserved legacy documentation for native installs
Adds automatic data directory separation to prevent mock data from contaminating production alerts and configuration:
- hot-dev.sh: Explicitly sets PULSE_DATA_DIR based on PULSE_MOCK_MODE
- Production: /etc/pulse (real, persistent data)
- Mock: /opt/pulse/tmp/mock-data (isolated, throwaway data)
- clean-mock-alerts.sh: New utility to remove mock contamination from production alerts
- Auto-creates mock data directory when switching to mock mode
This prevents issues where mock alerts appear in production alert history after switching between modes.
Add green checkmark icon to "No active alerts" empty state for better visual feedback. Separate mock and production data directories to prevent contamination. Mock mode now uses /opt/pulse/tmp/mock-data while production uses /etc/pulse. Update toggle script to dynamically set data directory based on mode.
- Strip trailing slash from PULSE_URL in install script to prevent double-slash URLs
- Add path normalization in router for defense-in-depth on public endpoint matching
- Fixes issue #528 where users copying URLs with trailing slashes got 401 errors
The install script now normalizes PULSE_URL with ${PULSE_URL%/} before concatenating
with /download/pulse-docker-agent, preventing https://example.com//download URLs.
The router normalization provides additional resilience for path matching, though the
existing path traversal check already blocks double slashes at ServeHTTP level.
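In context, the normalization is just (URL value illustrative):

```bash
PULSE_URL="${PULSE_URL%/}"   # strip one trailing slash if present
curl -fSL "${PULSE_URL}/download/pulse-docker-agent" -o /usr/local/bin/pulse-docker-agent
```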
Add comprehensive PMG monitoring with mail statistics, queue depth tracking,
spam distribution analysis, and quarantine monitoring. Includes full discovery
support and UI consistency improvements across all Proxmox products.
Backend:
- Add pkg/pmg package with complete API client for PMG operations
- Implement mail statistics collection (inbound/outbound, spam, virus, bounces)
- Add queue depth monitoring (active, deferred, hold, incoming queues)
- Support spam score distribution and quarantine totals
- Add PMG-specific discovery logic to differentiate from PVE on port 8006
- Extend mock data generator with realistic PMG instances and metrics
- Add PMG node configuration support in config system
Frontend:
- Create MailGateway.tsx component with detailed PMG dashboard
- Display mail flow statistics with time-series charts
- Show queue depth with color-coded warnings (>50 messages or >30min age)
- Add spam distribution histogram and quarantine status
- Support cluster node status with individual queue monitoring
- Add PMG to network discovery with purple branding and mail icon
- Implement conditional navigation (hide PMG tab when no instances configured)
- Standardize discovery UI controls across PVE/PBS/PMG settings pages
API:
- Add /api/config/pmg endpoints for node configuration
- Support PMG-specific monitoring toggles (mail stats, queues, quarantine)
- Extend system settings with PMG configuration options
Discovery:
- Detect PMG vs PVE on shared port 8006 using /api2/json/statistics/mail endpoint
- Return 'pmg' type for mail gateway servers in discovery results
- Update DiscoveryModal to display PMG servers with appropriate styling
This completes ecosystem monitoring support for all three Proxmox products:
Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.