The dashboard and storage pages incorrectly filtered nodes by type === 'pve',
but the backend provides nodes with type === 'node'. This caused the 'No Proxmox
VE nodes configured' message to appear even when nodes were present.
- Dashboard.tsx: Simplified check to props.nodes.length === 0
- Storage.tsx: Changed filter to type === 'node'
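A minimal sketch of the corrected checks, using the props.nodes field named above (the surrounding component structure is assumed):

```tsx
// Dashboard.tsx: the empty state no longer depends on a type filter.
const showEmptyState = props.nodes.length === 0;

// Storage.tsx: match the type string the backend actually reports.
const pveNodes = props.nodes.filter((n) => n.type === 'node');
```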
Fixes #1192
- Remove redundant StackedDiskBar from drawer (was showing disks twice)
- Keep vertical disk list with mountpoint, percentage, and used/total sizes
- Add usage-based color coding matching table rows (green/yellow/red)
- Use same rgba colors as StackedDiskBar for visual consistency
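A sketch of the usage-based coloring, assuming conventional warning/critical thresholds; the real thresholds and rgba values are whatever StackedDiskBar already uses:

```tsx
// Hypothetical helper: map disk usage to the same green/yellow/red family as
// the table rows. Thresholds and rgba values below are illustrative only.
function diskUsageColor(usedPercent: number): string {
  if (usedPercent >= 90) return 'rgba(239, 68, 68, 0.8)'; // red: critical
  if (usedPercent >= 75) return 'rgba(234, 179, 8, 0.8)'; // yellow: warning
  return 'rgba(34, 197, 94, 0.8)';                        // green: healthy
}
```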
Backend fix:
- Added presence check in UpdateEmailConfig to detect when rateLimit is
omitted from JSON (vs explicitly set to 0)
- Preserves existing rateLimit value when field is not present in request
- Added comprehensive integration tests covering all scenarios
Frontend fix:
- Added rateLimit to EmailConfig interface
- Fixed getEmailConfig to read rateLimit from server response
- Fixed updateEmailConfig to include rateLimit when set
- Fixed two places in Alerts.tsx that hardcoded rateLimit: 60
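A sketch of the frontend side described above; the HTTP verb, endpoint path, and helper shape are assumptions, not the actual client code:

```ts
interface EmailConfig {
  // ...existing fields elided...
  rateLimit?: number; // emails per window; undefined means "leave unchanged"
}

// Hypothetical client helper: only send rateLimit when it is actually set, so
// the backend's presence check can tell "omitted" apart from an explicit 0.
async function updateEmailConfig(cfg: EmailConfig): Promise<void> {
  const body: Partial<EmailConfig> = { ...cfg };
  if (body.rateLimit === undefined) {
    delete body.rateLimit;
  }
  await fetch('/api/notifications/email', { // illustrative path only
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
}
```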
Additional fixes:
- Added Array.isArray guards in DiagnosticsPanel sanitization
- Initialized Nodes/PBS arrays in the diagnostics response so they are never null
Closes the rate limit persistence bug where updating email settings would
reset the rate limit to its default value.
The security hardening in beae4c86 added a settings:write scope
requirement to /api/auto-register, but agent install tokens only have
host-agent:report scope. This broke Proxmox auto-registration for all
agent-generated tokens. Accept either settings:write or host-agent:report
scope for auto-registration.
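The relaxed check, illustrated in TypeScript to match the other sketches; the real check lives in the backend auto-register handler and the names here are assumptions:

```ts
// Auto-registration is allowed if the caller's token carries either scope.
const AUTO_REGISTER_SCOPES = ['settings:write', 'host-agent:report'];

function canAutoRegister(tokenScopes: string[]): boolean {
  return tokenScopes.some((scope) => AUTO_REGISTER_SCOPES.includes(scope));
}
```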
Fixes #1191
Agent tokens created from the Settings UI and the backend install
command handler were missing the agent:exec scope, which was added
as a security requirement in 60f9e6f0. This caused all newly
installed agents to fail registration with "Agent exec token missing
required scope: agent:exec".
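For illustration, the scope set a newly created agent token needs after 60f9e6f0; the constant name is an assumption, but both scopes are named in these commits:

```ts
// Both the Settings UI and the install command handler must request agent:exec
// alongside the reporting scope, or registration is rejected.
const AGENT_INSTALL_TOKEN_SCOPES = ['host-agent:report', 'agent:exec'];
```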
Fixes #1191
The in-memory metrics buffer was changed from 1000 to 86400 points per
metric to support 30-day sparklines, but this pre-allocated ~18 MB per
guest (7 slices × 86400 × 32 bytes). With 50 guests that's 920 MB —
explaining why users needed to double their LXC memory after upgrading
to 5.1.0.
- Revert in-memory buffer to 1000 points / 24h retention
- Remove eager slice pre-allocation (use append growth instead)
- Add LTTB (Largest Triangle Three Buckets) downsampling algorithm (sketched below)
- Chart endpoints now use a two-tier strategy: in-memory for ranges
≤ 2h, SQLite persistent store + LTTB for longer ranges
- Reduce frontend ring buffer from 86400 to 2000 points
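
A self-contained sketch of the LTTB downsampling referenced above, written in TypeScript to match the other sketches; the Point shape and the lttb signature are assumptions, and the backend implementation may differ:

```ts
interface Point { x: number; y: number }

// Largest Triangle Three Buckets: keep `threshold` points, always retaining
// the first and last samples. For each bucket, pick the point that forms the
// largest triangle with the previously kept point and the average of the next
// bucket, which preserves visual peaks and troughs in the downsampled chart.
function lttb(data: Point[], threshold: number): Point[] {
  if (threshold >= data.length || threshold < 3) return data.slice();

  const sampled: Point[] = [data[0]];
  const bucketSize = (data.length - 2) / (threshold - 2);
  let a = 0; // index of the most recently kept point

  for (let i = 0; i < threshold - 2; i++) {
    // Average of the next bucket (third vertex of the triangle).
    const nextStart = Math.floor((i + 1) * bucketSize) + 1;
    const nextEnd = Math.min(Math.floor((i + 2) * bucketSize) + 1, data.length);
    let avgX = 0;
    let avgY = 0;
    for (let j = nextStart; j < nextEnd; j++) {
      avgX += data[j].x;
      avgY += data[j].y;
    }
    const nextLen = Math.max(nextEnd - nextStart, 1);
    avgX /= nextLen;
    avgY /= nextLen;

    // Scan the current bucket for the largest triangle area.
    const start = Math.floor(i * bucketSize) + 1;
    const end = Math.floor((i + 1) * bucketSize) + 1;
    let maxArea = -1;
    let chosen = start;
    for (let j = start; j < end; j++) {
      const area = Math.abs(
        (data[a].x - avgX) * (data[j].y - data[a].y) -
        (data[a].x - data[j].x) * (avgY - data[a].y)
      ) / 2;
      if (area > maxArea) {
        maxArea = area;
        chosen = j;
      }
    }
    sampled.push(data[chosen]);
    a = chosen;
  }

  sampled.push(data[data.length - 1]);
  return sampled;
}
```

With this in place, a chart endpoint can answer ranges of 2 hours or less from the in-memory buffer and run anything longer from the SQLite store through lttb before returning it.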
Related to #1190