Adds IncludeAllDeployments option to show all deployments, not just
problem ones (where replicas don't match desired). This provides parity
with the existing --kube-include-all-pods flag.
- Add IncludeAllDeployments to kubernetesagent.Config
- Add --kube-include-all-deployments flag and PULSE_KUBE_INCLUDE_ALL_DEPLOYMENTS env var
- Update collectDeployments to respect the new flag
- Add test for IncludeAllDeployments functionality
- Update UNIFIED_AGENT.md documentation
Addresses feedback from PR #855
The acquire() function blocked indefinitely without respecting context
cancellation. When clients disconnect while waiting for the per-node
lock, goroutines would remain blocked forever, connections accumulate
in CLOSE_WAIT state, and rate limiter semaphores are never released.
Added acquireContext() that respects context cancellation and updated
both HTTP and RPC handlers to use it. This prevents:
- Goroutine leaks from cancelled requests
- CLOSE_WAIT connection accumulation
- Cascading failures from filled semaphores
Related to #832
On dual-stack systems with net.ipv6.bindv6only=1 (like some Proxmox 8
configurations), Go's net.Listen("tcp", "0.0.0.0:8443") may still bind
to IPv6-only. This caused IPv4 localhost connections to hang while
IPv6 worked.
Fix by detecting IPv4 addresses and explicitly using "tcp4" network
type when creating the listener. Related to #805
On systems with net.ipv6.bindv6only=1 (including some Proxmox 8
configurations), using ":8443" results in IPv6-only binding. Users
reported curl to 127.0.0.1:8443 hanging while [::1]:8443 worked.
Changed default from ":8443" to "0.0.0.0:8443" to explicitly bind IPv4.
Related to #805
Add 42 test cases for security-critical validation utility functions:
- TestStripNodeDelimiters (10 cases): IPv6 bracket handling, edge cases
- TestParseNodeIP (10 cases): IPv4/IPv6 parsing with bracket support
- TestNormalizeAllowlistEntry (11 cases): case normalization, whitespace
handling, IPv6 full form compression
- TestIPAllowed (11 cases): CIDR matching, hosts map lookup, nil handling
These functions are used for node allowlist validation to prevent SSRF
attacks in the sensor proxy.
- hashIPToUID: 11 test cases covering IP hashing for rate limiting
(determinism, range bounds, collision detection, boundary values)
- extractNodesFromYAML: 17 test cases covering YAML node list parsing
(map format, list format, mixed types, edge cases)
First test files for config_cmd.go and http_server.go utilities.
- Add pagination (100 items per page) to prevent UI lockup with 2500+ backups
- Show year in date labels for non-current year backups
- Reset to page 1 when filters change
- Add First/Previous/Next/Last navigation controls
Fixes#541
Test Capability.Has, parseCapabilityList, capabilityNames, and
constant values. 54 test cases covering bitmask operations, parsing,
case insensitivity, whitespace handling, unknown values, and round-trip
consistency.
Remove ~900 lines of unused code identified by static analysis:
Go:
- internal/logging: Remove 10 unused functions (InitFromConfig, New,
FromContext, WithLogger, etc.) that were built but never integrated
- cmd/pulse-sensor-proxy: Remove 7 dead validation functions for a
removed command execution feature
- internal/metrics: Remove 8 unused notification metric functions and
10 Prometheus metrics that were never wired up
Frontend:
- Delete ActivationBanner.tsx stub component
- Remove unused exports: stopMetricsSampler, getSamplerStatus,
formatSpeedCompact, parseMetricKey, getResourceAlerts
- Fix SA4006 unused value issues in ssh.go, validation.go, generator.go
- Replace deprecated ioutil with io/os in config.go
- Replace deprecated tar.TypeRegA with tar.TypeReg
- Remove deprecated rand.Seed calls (auto-seeded in Go 1.20+)
- Fix always-true nil check in main.go
- Fix impossible nil comparison in tempproxy/client.go
- Add nil check for config in monitor.New()
Scripts like install.sh and install-sensor-proxy.sh are now attached
as release assets and downloaded from releases/latest/download/ URLs.
This ensures users always get scripts compatible with their installed
version, even while development continues on main.
Changes:
- build-release.sh: copy install scripts to release directory
- create-release.yml: upload scripts as release assets
- Updated all documentation and code references to use release URLs
- Scripts reference each other via release URLs for consistency
Fixes an issue where pvecm status output using decimal node IDs (e.g. '1' instead of '0x1') caused node discovery to fail. Added test case for this format.
Remove all code paths that manipulate config files without Phase 2 locking:
1. Installer: Remove ensure_allowed_nodes_file_reference() call (line 1674)
- Migration now handled exclusively by config migrate-to-file
2. Installer: Make migration failures fatal in update_allowed_nodes()
- Prevents fallback to unsafe Python manipulation
3. Daemon sanitizer: Remove os.WriteFile() call
- Now only sanitizes in-memory copy, doesn't write back to disk
- Logs warning instructing admin to run `config migrate-to-file`
4. Self-heal script: Replace 132 lines of Python with CLI call
- sanitize_allowed_nodes() now calls `config migrate-to-file`
- Eliminates uncoordinated Python-based config rewriting
All config mutations now flow exclusively through Phase 2 CLI with
atomic operations and file locking. No code paths remain that can
create duplicate allowed_nodes blocks.
Addresses Codex review feedback on Phase 2 gaps.
The migrate-to-file command now calls sanitizeDuplicateAllowedNodesBlocks
before parsing the config, allowing it to handle corrupted configs with
duplicate allowed_nodes blocks.
This ensures migration works even on hosts that were affected by the
original corruption issue.
Final security hardening based on second Codex review:
**Lock File Permission Fix (Security)**
- Lock file now created with 0600 instead of 0644
- Prevents unprivileged users from opening lock and holding LOCK_EX
- Without this, any local user could DoS the installer/self-heal
- Added f.Chmod(0600) to fix permissions on existing lock files
**Deadlock Prevention (Future-Proofing)**
- Added documentation for future multi-file locking scenarios
- Specifies consistent lock ordering requirement (config.yaml.lock before allowed_nodes.yaml.lock)
- Prevents potential deadlocks if future commands modify multiple files
- Current implementation only locks one file, so no immediate issue
**Testing:**
✅ Lock file created as `-rw-------` (0600)
✅ Existing lock files with wrong perms get fixed
✅ Unprivileged users can no longer DoS the lock
**Codex Validation:**
- Locking is now correct (persistent .lock file, held during entire operation)
- Atomic writes complete while lock is held
- Validation honors actual config paths
- Empty lists supported for operational flexibility
- Error propagation prevents silent failures
- No remaining race conditions or security issues
Phase 2 is now complete and Codex-verified as secure.
Related to Phase 2 fixes commit 804a638ea
Fixes critical issues found by Codex code review:
**1. Fixed file locking race condition (CRITICAL)**
- Lock file was being replaced by atomic rename, invalidating the lock
- New approach: lock a separate `.lock` file that persists across renames
- Ensures concurrent writers (installer + self-heal timer) are properly serialized
- Without this fix, corruption was still possible despite Phase 2
**2. Fixed validation to honor configured allowed_nodes_file path**
- validate command now uses loadConfig() to read actual config
- Respects allowed_nodes_file setting instead of assuming default path
- Prevents false positives/negatives when path is customized
**3. Allow empty allowed_nodes lists**
- Empty lists are valid (admin may clear for security, or rely on IPC validation)
- validate no longer fails on empty lists
- set-allowed-nodes --replace with zero nodes now supported
- Critical for operational flexibility
**4. Installer error propagation**
- update_allowed_nodes failures now exit installer with error
- Prevents silent failures that leave stale allowlists
- Self-heal will abort instead of masking CLI errors
**Technical Details:**
- withLockedFile() now locks `<path>.lock` instead of target file
- Lock held for entire duration of read-modify-write-rename
- atomicWriteFile() completes while lock is still held
- Empty lists represented as `allowed_nodes: []` in YAML
**Testing:**
✅ Lock file created and persists across operations
✅ Empty list can be written with --replace
✅ Validation passes with empty lists
✅ Config path from allowed_nodes_file honored
✅ Concurrent operations properly serialized
These fixes ensure Phase 2 actually eliminates corruption by design.
Identified by Codex code review
Related to Phase 2 commit 3dc073a28
Implements bullet-proof configuration management to completely eliminate
allowed_nodes corruption by design. This builds on Phase 1 (file-only mode)
by replacing all shell/Python config manipulation with proper Go tooling.
**New Features:**
- `pulse-sensor-proxy config validate` - parse and validate config files
- `pulse-sensor-proxy config set-allowed-nodes` - atomic node list updates
- File locking via flock prevents concurrent write races
- Atomic writes (temp file + rename) ensure consistency
- systemd ExecStartPre validation prevents startup with bad config
**Architectural Changes:**
1. Installer now calls config CLI instead of embedded Python/shell scripts
2. All config mutations go through single authoritative writer
3. Deduplication and normalization handled in Go (reuses existing logic)
4. Sanitizer kept as noisy failsafe (warns if corruption still occurs)
**Implementation Details:**
- New cmd/pulse-sensor-proxy/config_cmd.go with cobra commands
- withLockedFile() wrapper ensures exclusive access
- atomicWriteFile() uses temp + rename pattern
- Installer update_allowed_nodes() simplified to CLI calls
- Both systemd service modes include ExecStartPre validation
**Why This Works:**
- Single code path for all writes (no shell/Python divergence)
- File locking serializes self-heal timer + manual installer runs
- Validation gate prevents proxy from starting with corrupt config
- CLI uses same YAML parser as the daemon (guaranteed compatibility)
**Phase 2 Benefits:**
- Corruption impossible by design (not just detected and fixed)
- No more Python dependency for config management
- Atomic operations prevent partial writes
- Clear error messages on validation failures
The defensive sanitizer remains active but now logs loudly if triggered,
allowing us to confirm Phase 2 eliminates corruption in production before
removing the safety net entirely.
This completes the fix for the recurring temperature monitoring outages.
Related to Phase 1 commit 53dec6010
The getTemperatureLocal() function was running sensors without a timeout,
which could cause HTTP requests to hang if the sensors command stalled.
This adds context.Context parameter and uses exec.CommandContext to ensure
local temperature collection respects the same 15-second timeout as SSH-based
collection.
Fixes issue where HTTP mode worked for remote nodes but timed out for
self-monitoring on the same host.
Critical fix for intermittent HTTP endpoint hangs identified by Codex analysis.
## Root Cause
SSH collection via getTemperatureViaSSH() had no timeout, causing HTTP
handlers to block indefinitely when sensors command hung. This held node-level
mutexes and rate limit slots, creating cascading failures where subsequent
requests queued indefinitely.
## Solution
- Thread request context through to SSH execution
- Add exec.CommandContext with 15s timeout (vs 30s HTTP client timeout)
- Create execCommandWithLimitsContext() to wrap SSH commands
- Ensures handlers always release locks and respond within deadline
## Impact
- HTTP temps endpoint now responds in ~70ms consistently
- Temperature data successfully collected and displayed in Pulse
- Eliminates 'context deadline exceeded' errors
- Prevents node gate deadlocks from slow/stuck SSH sessions
Related to Codex session 019a7e99-00fc-7903-afa3-01100baf47c6
## HTTP Server Fixes
- Add source IP middleware to enforce allowed_source_subnets
- Fix missing source subnet validation for external HTTP requests
- HTTP health endpoint now respects subnet restrictions
## Installer Improvements
- Auto-configure allowed_source_subnets with Pulse server IP
- Add cluster node hostnames to allowed_nodes (not just IPs)
- Fix node validation to accept both hostnames and IPs
- Add Pulse server reachability check before installation
- Add port availability check for HTTP mode
- Add automatic rollback on service startup failure
- Add HTTP endpoint health check after installation
- Fix config backup and deduplication (prevent duplicate keys)
- Fix IPv4 validation with loopback rejection
- Improve registration retry logic with detailed errors
- Add automatic LXC bind mount cleanup on uninstall
## Temperature Collection Fixes
- Add local temperature collection for self-monitoring nodes
- Fix node identifier matching (use hostname not SSH host)
- Fix JSON double-encoding in HTTP client response
Related to #XXX (temperature monitoring fixes)
This implements HTTP/HTTPS support for pulse-sensor-proxy to enable
temperature monitoring across multiple separate Proxmox instances.
Architecture changes:
- Dual-mode operation: Unix socket (local) + HTTPS (remote)
- Unix socket remains default for security/performance (no breaking change)
- HTTP mode enables temps from external PVE hosts
Backend implementation:
- Add HTTPS server with TLS + Bearer token authentication to sensor-proxy
- Add TemperatureProxyURL and TemperatureProxyToken fields to PVEInstance
- Add HTTP client (internal/tempproxy/http_client.go) for remote proxy calls
- Update temperature collector to prefer HTTP proxy when configured
- Fallback logic: HTTP proxy → Unix socket → direct SSH (if not containerized)
Configuration:
- pulse-sensor-proxy config: http_enabled, http_listen_addr, http_tls_cert/key, http_auth_token
- PVEInstance config: temperature_proxy_url, temperature_proxy_token
- Environment variables: PULSE_SENSOR_PROXY_HTTP_* for all HTTP settings
Security:
- TLS 1.2+ with modern cipher suites
- Constant-time token comparison (timing attack prevention)
- Rate limiting applied to HTTP requests (shared with socket mode)
- Audit logging for all HTTP requests
Next steps:
- Update installer script to support HTTP mode + auto-registration
- Add Pulse API endpoint for proxy registration
- Generate TLS certificates during installation
- Test multi-instance temperature collection
Related to #571 (multi-instance architecture)
Increased default rate limits to handle Pulse startup polling:
- Per-peer burst: 5 → 10 requests (handles multi-node clusters with retries)
- Per-peer interval: 1s → 500ms (1 QPS → 2 QPS, 60/min → 120/min)
This prevents the proxy from being disabled during Pulse startup when it
polls all nodes simultaneously. The previous limits were too restrictive
for clusters with 3+ nodes.
Codex independent review identified a critical security issue: when cluster
validation fails, the previous fix fell back to permissive mode (allowing
ALL nodes), making the proxy a potential SSRF/network scanner for any
container that could reach the socket.
NEW BEHAVIOR:
When cluster validation is unavailable (IPC blocked), fall back to
localhost-only validation instead of permissive mode. This maintains
security while still allowing self-monitoring.
Implementation:
- Added validateAsLocalhost() method to nodeValidator
- Calls discoverLocalHostAddresses() to get local IPs/hostnames
- Only allows requests matching the local host
- Blocks requests to other cluster members or arbitrary hosts
Test results on delly (clustered node with IPC blocked):
- Request to 192.168.0.5 (self): ALLOWED, temps fetched
- Request to 192.168.0.134 (cluster peer): BLOCKED with node_not_localhost
- No more "allowing all nodes" security regression
Related to #571 - addresses Codex security audit feedback
This prevents the proxy from being abused as a network scanner while
still solving the original temperature monitoring issue.
Changes based on independent Codex review:
1. Elevated log level from Debug to Warn for permissive mode fallback
- Operators now see "SECURITY: Cluster validation unavailable" in
journalctl at default log level
- Added similar warning on startup when running in permissive mode
- Makes it obvious when node validation is bypassed
2. Added runtime fallback for AF_NETLINK restrictions
- New discoverLocalHostAddressesFallback() shells out to 'ip addr'
- Triggered when net.Interfaces() fails with netlinkrib error
- Ensures existing installations work even without systemd unit update
- Logs recommendation to update systemd unit for better performance
3. Improved security awareness
- Changed message to explicitly state "allowing all nodes"
- Recommends configuring allowed_nodes for security
- Makes permissive fallback behavior transparent to operators
Related to #571 - temperature monitoring on standalone nodes
These changes ensure the fix works for existing installations that
haven't updated their systemd units, while clearly communicating when
the proxy is running in an insecure permissive mode.