The in-memory metrics buffer was changed from 1000 to 86400 points per
metric to support 30-day sparklines, but this pre-allocated ~18 MiB per
guest (7 slices × 86400 × 32 bytes). With 50 guests that's roughly 920 MiB,
which explains why users needed to double their LXC memory after upgrading
to 5.1.0.
- Revert in-memory buffer to 1000 points / 24h retention
- Remove eager slice pre-allocation (use append growth instead)
- Add LTTB (Largest Triangle Three Buckets) downsampling algorithm (see
  the sketch after this list)
- Chart endpoints now use a two-tier strategy: in-memory for ranges
≤ 2h, SQLite persistent store + LTTB for longer ranges
- Reduce frontend ring buffer from 86400 to 2000 points
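For reference, a minimal LTTB sketch in Go; the Point type and function
name are illustrative, not the actual Pulse implementation:

```go
package downsample

import "math"

// Point is one time-series sample: X is the timestamp, Y the value.
type Point struct {
	X, Y float64
}

// LTTB reduces data to at most threshold points while preserving the
// visual shape of the series (Largest Triangle Three Buckets).
func LTTB(data []Point, threshold int) []Point {
	if threshold >= len(data) || threshold < 3 {
		return data
	}
	sampled := make([]Point, 0, threshold)
	sampled = append(sampled, data[0]) // always keep the first point

	// Bucket size excludes the fixed first and last points.
	bucketSize := float64(len(data)-2) / float64(threshold-2)
	a := 0 // index of the previously selected point

	for i := 0; i < threshold-2; i++ {
		// Current bucket.
		start := int(float64(i)*bucketSize) + 1
		end := int(float64(i+1)*bucketSize) + 1
		if end > len(data)-1 {
			end = len(data) - 1
		}

		// The next bucket's average acts as the triangle's third vertex.
		nextStart := end
		nextEnd := int(float64(i+2)*bucketSize) + 1
		if nextEnd > len(data) {
			nextEnd = len(data)
		}
		var avgX, avgY float64
		for _, p := range data[nextStart:nextEnd] {
			avgX += p.X
			avgY += p.Y
		}
		n := float64(nextEnd - nextStart)
		avgX, avgY = avgX/n, avgY/n

		// Keep the point in the current bucket that forms the largest
		// triangle with the previous pick and the next bucket's average.
		maxArea, maxIdx := -1.0, start
		for j := start; j < end; j++ {
			area := math.Abs((data[a].X-avgX)*(data[j].Y-data[a].Y) -
				(data[a].X-data[j].X)*(avgY-data[a].Y))
			if area > maxArea {
				maxArea, maxIdx = area, j
			}
		}
		sampled = append(sampled, data[maxIdx])
		a = maxIdx
	}

	return append(sampled, data[len(data)-1]) // always keep the last point
}
```

The first and last points are always retained, so downsampled charts keep
the exact start and end of each series.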
Related to #1190
Proxmox reports cumulative byte counters that update unevenly across
polling intervals, causing a steady 100 Mbps download to appear as
spikes up to 450 Mbps in sparkline charts. Replace per-interval rate
calculation with a 4-sample sliding window (30s at 10s polling) that
averages over the full span — the same approach Prometheus rate() uses.
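A rough illustration of the sliding-window idea (not the actual RateTracker
code; the type and method names are made up for the sketch):

```go
package rates

import "time"

// sample is one observation of a cumulative byte counter.
type sample struct {
	ts    time.Time
	bytes uint64
}

// slidingWindow keeps the last windowSize samples and reports the average
// rate across the full span, smoothing out uneven counter updates between
// polls (similar in spirit to how Prometheus rate() averages over a range).
type slidingWindow struct {
	samples    []sample
	windowSize int // e.g. 4 samples ≈ 30s of span at 10s polling
}

// Add records a new sample and returns the smoothed bytes/second rate,
// or (0, false) until at least two samples are available.
func (w *slidingWindow) Add(ts time.Time, bytes uint64) (float64, bool) {
	w.samples = append(w.samples, sample{ts: ts, bytes: bytes})
	if len(w.samples) > w.windowSize {
		w.samples = w.samples[len(w.samples)-w.windowSize:]
	}
	if len(w.samples) < 2 {
		return 0, false
	}
	oldest, newest := w.samples[0], w.samples[len(w.samples)-1]
	if newest.bytes < oldest.bytes {
		// Counter reset (e.g. guest reboot): start the window over.
		w.samples = w.samples[len(w.samples)-1:]
		return 0, false
	}
	span := newest.ts.Sub(oldest.ts).Seconds()
	if span <= 0 {
		return 0, false
	}
	return float64(newest.bytes-oldest.bytes) / span, true
}
```

With 10-second polling and a window of 4 samples, a counter catch-up that
lands in a single interval is spread across roughly 30 seconds instead of
showing up as one spike.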
Initialize rateTracker in ApplyHostReport test monitors to prevent nil
pointer panic when CalculateRates is called during host report processing.
Add pre-push hook guard that blocks pushing version tags directly —
releases must go through the create-release.yml workflow.
Host network sparklines were displaying wildly incorrect values (e.g., 147 GB/s
for an idle Raspberry Pi) because cumulative byte counters (total bytes since
boot) were being stored directly instead of being converted to rates.
Changes:
- monitor.go: Use RateTracker to calculate network rates for hosts, matching
the existing pattern used for VMs and containers. Only record network
metrics when we have enough samples to calculate valid rates.
- router.go: Remove network metrics from live fallback for hosts since we
can't calculate rates from a single snapshot. Better to show nothing than
misleading cumulative totals.
The fix follows the established codebase pattern where:
1. Agent reports cumulative RXBytes/TXBytes
2. RateTracker compares consecutive samples to calculate bytes/second
3. Rates are stored in metrics history for sparkline display
Guest polling (CheckGuest) runs before CheckNode in each poll cycle,
so the display name cache was empty when the first guest alert was
created. This caused the initial notification to use the raw Proxmox
node name. Fix by seeding the cache from modelNodes (which are already
available) before guest polling starts.
Related to #1188
Alerts previously showed the raw Proxmox node name (e.g., "on pve") even
when users configured a display name (e.g., "SPACEX") via Settings or the
host agent --hostname flag. This affected the alert UI, email notifications,
and webhook payloads.
Add NodeDisplayName field to the alert chain: cache display names in the
alert Manager (populated by CheckNode/CheckHost on every poll), resolve
them at alert creation via preserveAlertState, refresh on metric updates,
and enrich at read time in GetActiveAlerts. Update models.Alert, the
syncAlertsToState conversion, email templates, Apprise body text, webhook
payloads, and all frontend rendering paths.
Related to #1188
Expand the smartctl collector to capture detailed SMART attributes (SATA
and NVMe), propagate them through the full data pipeline, persist them
as time-series metrics, and display them in an interactive disk detail
drawer with historical sparkline charts.
Backend: add SMARTAttributes struct, writeSMARTMetrics for persistent
storage, "disk" resource type in metrics API with live fallback.
Frontend: enhanced DiskList with Power-On column and SMART warnings,
new DiskDetail drawer matching NodeDrawer styling patterns, generic
HistoryChart metric support with proper tooltip formatting.
Proxmox's FreeMem field reports free memory relative to the balloon's
guest-visible total (total_mem), not relative to MaxMem. When ballooning
is active and the VM's memory has been reduced, subtracting FreeMem from
MaxMem produces wildly inflated usage (e.g. 97% when actual usage is 20%).
Proxmox's Mem field is already calculated as (total_mem - free_mem),
giving the correct used bytes regardless of balloon state. Swap the
priority so Mem is checked before FreeMem.
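A hedged sketch of the corrected priority; the field names mirror the
Proxmox status API, while the helper itself is illustrative:

```go
// usedMemoryBytes prefers Proxmox's Mem field, which is already
// total_mem - free_mem inside the guest and therefore correct regardless
// of balloon state. Deriving usage from MaxMem - FreeMem is only a
// fallback, since FreeMem is relative to the ballooned total, not MaxMem.
func usedMemoryBytes(mem, freeMem, maxMem uint64) uint64 {
	if mem > 0 {
		return mem
	}
	if freeMem > 0 && maxMem > freeMem {
		return maxMem - freeMem // may overstate usage when ballooning is active
	}
	return 0
}
```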
Related to #1185
When PVE backup polling detects permission errors (403/401/permission
denied), track them per instance and surface them via the scheduler
health endpoint.
The Backups page now fetches instance warnings and displays a banner
when backup permission issues are detected, telling users exactly how
to fix the problem.
Related to #1139
Security fixes:
- Auto-register now requires settings:write scope for API tokens
- X-Forwarded-For in auto-register only trusted from verified proxies
- Public URL capture requires authentication (no loopback bypass)
- Lockout reset now uses RequireAdmin for session users
Reliability fixes:
- Docker stop command expiration clears PendingUninstall flag
- Cancelled notifications get completed_at set and are cleaned up
MetricsHistory.Cleanup() was defined but never called, and even if called,
it only removed old data points without deleting map entries for deleted
containers/VMs. Each stale entry leaked ~224KB (7 pre-allocated slices).
Changes:
- Call metricsHistory.Cleanup() and rateTracker.Cleanup() in maintenance loop
- Delete map entries entirely when all data points have expired
- Return nil instead of empty slice in cleanupMetrics() to release backing
  arrays (see the sketch after this list)
- Add Cleanup() method to RateTracker with 24-hour stale threshold
- Add debug logging to track cleanup activity
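A sketch of the intended cleanup behavior; the types and names here are
illustrative rather than the actual Pulse structures:

```go
package metrics

import (
	"sync"
	"time"
)

type point struct {
	ts    time.Time
	value float64
}

// history maps a guest ID to its retained data points.
type history struct {
	mu     sync.Mutex
	series map[string][]point
}

// Cleanup expires old points and deletes map entries for guests whose
// data has fully expired (e.g. deleted containers/VMs).
func (h *history) Cleanup(retention time.Duration) {
	cutoff := time.Now().Add(-retention)
	h.mu.Lock()
	defer h.mu.Unlock()
	for id, points := range h.series {
		kept := cleanupMetrics(points, cutoff)
		if kept == nil {
			delete(h.series, id) // drop the entry itself, not just its points
			continue
		}
		h.series[id] = kept
	}
}

// cleanupMetrics returns nil (not an empty slice) when every point has
// expired, so the large backing array becomes garbage-collectable.
func cleanupMetrics(points []point, cutoff time.Time) []point {
	i := 0
	for i < len(points) && points[i].ts.Before(cutoff) {
		i++
	}
	if i == len(points) {
		return nil
	}
	return points[i:]
}
```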
Related to #1153
- Fix SSRF and rate limit bypass in SendEnhancedWebhook by validating the rendered URL.
- Fix rate limit spoofing in updates API by using secure IP extraction (trusted proxies).
- Fix memory leak in metrics history by correctly clearing fully stale data series.
- Fix public URL poisoning by preventing overwrites when explicitly configured.
- Fix data race in webhook notifications by removing shared state
- Fix duplicate monitors on config reload by stopping old instances
- Prevent metrics ID deletion on transient startup errors
- Support Bearer auth header for config export/import endpoints
- Initialize Alert and Notification managers with tenant-specific data directories
- Add panic recovery to WebSocket safeSend for stability
- Record host metrics to history for sparkline support
- Fixed early return in handleAlertResolved that skipped incident recording
when quiet hours suppressed recovery notifications
- Added Host Agent alert delay configuration (backend + UI)
- Host Agents now have dedicated time threshold settings like other resource types
Related to #1179
- Reduce minimum seed duration from 7 days to 1 hour for faster startup
  on resource-constrained systems (like the demo server's 1 GB droplet)
- Reduce sleep times from 200ms to 50ms between resource processing
- Add diagnostic logging throughout mock metrics seeding to help debug
issues where sparklines show no data
- Add progress logging for nodes, VMs, containers, storage, docker hosts
The PBS backup snapshot cache only compared BackupCount and LastBackup
timestamp to decide whether to re-fetch. When PBS verify jobs complete,
neither field changes — only the Verification field on individual
snapshots changes — so the cache served stale data indefinitely.
Add a 10-minute TTL per backup group so verification status changes are
picked up periodically. Also add panic recovery to PBS and PVE backup
goroutines, and use runtimeCtx for PBS backup polling to respect
monitor shutdown.
Closes #1174
- Move all SQLite pragmas from db.Exec() to DSN parameters so every
  connection the pool creates gets busy_timeout and other settings.
  Previously only the first connection had these applied (see the sketch
  after this list).
- Set MaxOpenConns(1) on audit, RBAC, and notification databases
  (metrics already had this). This closes the gap where additional pooled
  connections lacked busy_timeout.
- Increase busy_timeout from 5s to 30s across all databases to
tolerate disk I/O pressure during backup windows.
- Fix nested query deadlocks in GetRoles(), GetUserAssignments(), and
CancelByAlertIDs() that would deadlock with MaxOpenConns(1).
- Fix circuit breaker retryInterval not resetting on recovery, which
caused the next trip to start at 5-minute backoff instead of 5s.
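A sketch of the DSN approach, assuming the mattn/go-sqlite3 driver (the
parameter spellings are driver-specific, so treat the exact DSN as an
assumption):

```go
package store

import (
	"database/sql"

	_ "github.com/mattn/go-sqlite3"
)

// openDB opens a SQLite database so that every connection the pool creates
// gets the same pragmas. Putting them in the DSN, rather than running
// db.Exec("PRAGMA ...") after open, means later pooled connections are
// configured identically to the first one.
func openDB(path string) (*sql.DB, error) {
	dsn := "file:" + path + "?_busy_timeout=30000&_journal_mode=WAL"
	db, err := sql.Open("sqlite3", dsn)
	if err != nil {
		return nil, err
	}
	// A single connection serializes writers and avoids pooled connections
	// competing for the database lock under I/O pressure.
	db.SetMaxOpenConns(1)
	return db, nil
}
```

MaxOpenConns(1) is also why the nested queries mentioned above had to be
flattened: a query issued while an outer rows iteration is still open would
block waiting for the single connection that iteration already holds.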
Related to #1156
- #1163: Add node badges to storage resources in threshold tables
(ResourceTable.tsx, ResourceCard.tsx)
- #1162: Fix PBS backup alerts showing datastore as node name
(alerts.go - use "Unknown" for orphaned backups)
- #1153: Fix memory leaks in tracking maps
- Add max 48 sample limit for pmgQuarantineHistory
- Add max 10 entry limit for flappingHistory
- Add cleanup for dockerUpdateFirstSeen
- Add cleanupTrackingMaps() for auth, polling, and circuit breaker maps
Note: the #1149 fix (chat sessions null check) is in AISettings.tsx,
which has other pending changes; it will be committed separately.
Implements multi-tenant infrastructure for organization-based data isolation.
The feature is gated behind the PULSE_MULTI_TENANT_ENABLED env var and
requires an Enterprise license, so there is no impact on existing users.
Core components:
- TenantMiddleware: extracts org ID, validates access, returns 501/402 responses
- AuthorizationChecker: token/user access validation for organizations
- MultiTenantChecker: WebSocket upgrade gating with license check
- Per-tenant audit logging via LogAuditEventForTenant
- Organization model with membership support
Gating behavior (sketched below):
- Feature flag disabled: 501 Not Implemented for non-default orgs
- Flag enabled, no license: 402 Payment Required
- Default org always works regardless of flag/license
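A minimal sketch of that decision tree; multiTenantEnabled and
hasEnterpriseLicense stand in for the real feature-flag and license checks:

```go
package api

import "net/http"

// gateOrg enforces the multi-tenant gating rules: the default org always
// passes, non-default orgs get 501 when the feature flag is off and 402
// when no Enterprise license is present. It returns true if the request
// may proceed.
func gateOrg(w http.ResponseWriter, orgID string, multiTenantEnabled, hasEnterpriseLicense bool) bool {
	if orgID == "default" {
		return true
	}
	if !multiTenantEnabled {
		http.Error(w, "multi-tenancy is not enabled", http.StatusNotImplemented) // 501
		return false
	}
	if !hasEnterpriseLicense {
		http.Error(w, "multi-tenancy requires an Enterprise license", http.StatusPaymentRequired) // 402
		return false
	}
	return true
}
```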
Documentation added: docs/MULTI_TENANT.md
- Updated monitoring reload and metrics history to be tenant-aware
- Refactored update manager and checksum validation for multi-tenancy
- Enhanced test coverage for agent updates and metrics storage
Implements Phase 1-2 of multi-tenancy support using a directory-per-tenant
strategy that preserves existing file-based persistence.
Key changes:
- Add MultiTenantPersistence manager for org-scoped config routing
- Add TenantMiddleware for X-Pulse-Org-ID header extraction and context
  propagation (sketched below)
- Add MultiTenantMonitor for per-tenant monitor lifecycle management
- Refactor handlers (ConfigHandlers, AlertHandlers, AIHandlers, etc.) to be
context-aware with getConfig(ctx)/getMonitor(ctx) helpers
- Add Organization model for future tenant metadata
- Update server and router to wire multi-tenant components
All handlers maintain backward compatibility via legacy field fallbacks
for single-tenant deployments using the "default" org.
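A sketch of the header-to-context flow that the getConfig(ctx)/getMonitor(ctx)
helpers build on; the context key and helper names are illustrative:

```go
package api

import (
	"context"
	"net/http"
)

type contextKey string

const orgIDKey contextKey = "pulse-org-id"

// TenantMiddleware extracts X-Pulse-Org-ID and propagates it via the
// request context, defaulting to "default" so single-tenant deployments
// keep working unchanged.
func TenantMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		orgID := r.Header.Get("X-Pulse-Org-ID")
		if orgID == "" {
			orgID = "default"
		}
		ctx := context.WithValue(r.Context(), orgIDKey, orgID)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// orgIDFromContext is the read side used by context-aware handlers to pick
// the tenant-scoped config, monitor, and persistence directory.
func orgIDFromContext(ctx context.Context) string {
	if v, ok := ctx.Value(orgIDKey).(string); ok && v != "" {
		return v
	}
	return "default"
}
```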
- Add FeatureLongTermMetrics license feature for Pro tier
- Implement tiered storage in metrics store (raw, minute, hourly, daily)
- Add covering index for unified history query performance
- Seed mock data for 90 days with appropriate aggregation tiers
- Update PULSE_PRO.md to document the feature
- 7-day history remains free, 30d/90d requires Pro license
Remove proxy-related temperature code paths:
- temperature.go: remove proxy client integration and fallback logic
- config.go: remove SensorProxyEnabled and related config fields
- monitor.go: remove proxy client initialization and state
Temperature monitoring now relies solely on the unified agent approach.
The sensor proxy approach for temperature monitoring has been superseded
by the unified agent architecture where host agents report temperature
data directly. This removes:
- cmd/pulse-sensor-proxy/ - standalone proxy daemon
- internal/tempproxy/ - client library
- internal/api/*temperature_proxy* - API handlers and tests
- internal/api/sensor_proxy_gate* - feature gate
- internal/monitoring/*proxy_test* - proxy-specific tests
- scripts/*sensor-proxy* - installation and management scripts
- security/apparmor/, security/seccomp/ - proxy security profiles
Temperature monitoring remains available via the unified agent approach.
- Add PendingUpdates and PendingUpdatesCheckedAt fields to Node model
- Add GetNodePendingUpdates method to Proxmox client (calls /nodes/{node}/apt/update)
- Add 30-minute polling cache to avoid excessive API calls
- Add pendingUpdates to frontend Node type
- Add color-coded badge in NodeSummaryTable (yellow: 1-9, orange: 10+)
- Update test stubs for interface compliance
Requires Sys.Audit permission on Proxmox API token to read apt updates.
GetHostAgentConfig was loading profiles and assignments from disk on
every agent report (every 10-30 seconds per host). With multiple hosts,
this caused disk I/O contention that eventually led to request timeouts.
Added in-memory caching with a 60-second TTL (sketched below):
- Fast path reads from cache without locks when valid
- Double-checked locking pattern for cache refresh
- Cache auto-invalidates after TTL, no manual invalidation needed
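A sketch of the caching pattern; HostAgentConfig and the loader are
placeholders, and the fast path shown here uses a read lock rather than a
fully lock-free read, which is a close approximation of the behavior
described above:

```go
package hostconfig

import (
	"sync"
	"time"
)

// HostAgentConfig is a placeholder for the profiles/assignments payload.
type HostAgentConfig struct{}

type configCache struct {
	mu       sync.RWMutex
	ttl      time.Duration // e.g. 60 * time.Second
	loadedAt time.Time
	cached   *HostAgentConfig
	load     func() (*HostAgentConfig, error) // reads profiles/assignments from disk
}

// Get returns the cached config while it is fresh; otherwise it reloads
// using double-checked locking so concurrent callers trigger at most one
// disk read per TTL expiry.
func (c *configCache) Get() (*HostAgentConfig, error) {
	// Fast path: read lock only.
	c.mu.RLock()
	if c.cached != nil && time.Since(c.loadedAt) < c.ttl {
		cfg := c.cached
		c.mu.RUnlock()
		return cfg, nil
	}
	c.mu.RUnlock()

	// Slow path: re-check under the write lock, since another goroutine
	// may have refreshed the cache while we were waiting for it.
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.cached != nil && time.Since(c.loadedAt) < c.ttl {
		return c.cached, nil
	}
	cfg, err := c.load()
	if err != nil {
		return nil, err
	}
	c.cached, c.loadedAt = cfg, time.Now()
	return cfg, nil
}
```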
Adds automatic Docker detection for Proxmox LXC containers:
- New HasDocker and DockerCheckedAt fields on Container model
- Docker socket check via connected agents on first run, restart, or start
- Parallel checking with timeouts for efficiency (sketched below)
- Caches results and only re-checks after state transitions
This enables the AI to know which LXC containers are Docker hosts
for better infrastructure guidance.
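A sketch of the parallel check with per-container timeouts; the 5-second
timeout and function names are assumptions for illustration:

```go
package docker

import (
	"context"
	"sync"
	"time"
)

// checkDockerParallel probes each container for a Docker socket via its
// connected agent, with a per-check timeout so one slow container cannot
// stall the whole batch. checkOne is a stand-in for the real socket probe.
func checkDockerParallel(ctx context.Context, ids []string,
	checkOne func(ctx context.Context, id string) bool) map[string]bool {

	results := make(map[string]bool, len(ids))
	var mu sync.Mutex
	var wg sync.WaitGroup

	for _, id := range ids {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			cctx, cancel := context.WithTimeout(ctx, 5*time.Second)
			defer cancel()
			has := checkOne(cctx, id)
			mu.Lock()
			results[id] = has
			mu.Unlock()
		}(id)
	}
	wg.Wait()
	return results
}
```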
The agent was crashing with 'fatal error: concurrent map writes' when
handleCheckUpdatesCommand spawned a goroutine that called collectOnce
concurrently with the main collection loop. Both code paths access
a.prevContainerCPU without synchronization.
Added a.cpuMu mutex (sketched below) to protect all accesses to
prevContainerCPU in:
- pruneStaleCPUSamples()
- collectContainer() delete operation
- calculateContainerCPUPercent()
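A minimal sketch of the added synchronization; the cpuMu and
prevContainerCPU names follow the commit, everything else (struct shape,
counter units) is illustrative:

```go
package agent

import (
	"sync"
	"time"
)

type collector struct {
	cpuMu            sync.Mutex
	prevContainerCPU map[string]uint64 // container ID -> previous cumulative CPU counter
}

// calculateContainerCPUPercent reads and updates prevContainerCPU under
// cpuMu, so the main collection loop and the check-updates goroutine can
// both call into collection code safely.
func (a *collector) calculateContainerCPUPercent(id string, current uint64, interval time.Duration) float64 {
	a.cpuMu.Lock()
	defer a.cpuMu.Unlock()
	prev, seen := a.prevContainerCPU[id]
	a.prevContainerCPU[id] = current
	if !seen || current < prev || interval <= 0 {
		return 0 // first sample, counter reset, or bad interval
	}
	// Assumes the counter is CPU time in nanoseconds; purely illustrative.
	return float64(current-prev) / float64(interval.Nanoseconds()) * 100
}
```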
Related to #1063
Users with removable/unmounted datastores (e.g., external HDDs for
offline backup) experienced excessive PBS log entries because Pulse
was querying all datastores including unavailable ones.
Added `excludeDatastores` field to PBS node configuration that accepts
patterns to exclude specific datastores from monitoring:
- Exact names: "exthdd1500gb"
- Prefix patterns: "ext*"
- Suffix patterns: "*hdd"
- Contains patterns: "*removable*"
Pattern matching is case-insensitive (see the sketch below).
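A sketch of the matching logic; the function name is illustrative:

```go
package pbs

import "strings"

// matchesPattern reports whether a datastore name matches one exclusion
// pattern: exact name, "prefix*", "*suffix", or "*contains*". Matching is
// case-insensitive.
func matchesPattern(name, pattern string) bool {
	name, pattern = strings.ToLower(name), strings.ToLower(pattern)
	switch {
	case len(pattern) > 1 && strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
		return strings.Contains(name, strings.Trim(pattern, "*"))
	case strings.HasSuffix(pattern, "*"):
		return strings.HasPrefix(name, strings.TrimSuffix(pattern, "*"))
	case strings.HasPrefix(pattern, "*"):
		return strings.HasSuffix(name, strings.TrimPrefix(pattern, "*"))
	default:
		return name == pattern
	}
}
```

For example, a datastore named "ExtHDD1500GB" would be excluded by
"exthdd1500gb", "ext*", or "*hdd*".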
Fixes #1105
The previous fix (4090d981) addressed memory reporting for Docker agents
running in LXC containers, but the same issue also affects host agents.
When gopsutil runs inside an LXC, it can read Total and Free memory
from cgroup limits, but reports 0 for Used memory.
Added the same Total - Free fallback calculation to the host agent
processing path, which populates the Hosts tab.
Fixes #1075