feat: add --kube-include-all-deployments flag for Kubernetes agent

Adds IncludeAllDeployments option to show all deployments, not just
problem ones (where replicas don't match desired). This provides parity
with the existing --kube-include-all-pods flag.

- Add IncludeAllDeployments to kubernetesagent.Config
- Add --kube-include-all-deployments flag and PULSE_KUBE_INCLUDE_ALL_DEPLOYMENTS env var
- Update collectDeployments to respect the new flag
- Add test for IncludeAllDeployments functionality
- Update UNIFIED_AGENT.md documentation

Addresses feedback from PR #855
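A quick usage sketch (the flag and env var names are the ones introduced by this commit; the exact invocation otherwise follows the unified agent docs):

```bash
# Show all deployments, not just problem ones
pulse-agent --kube-include-all-deployments

# Equivalent via environment variable
PULSE_KUBE_INCLUDE_ALL_DEPLOYMENTS=true pulse-agent
```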
Author: rcourtman
Date: 2025-12-18 20:58:30 +00:00
parent 9bc63441a1
commit 2b48b0a459
48 changed files with 1903 additions and 686 deletions


@@ -32,9 +32,9 @@ Designed for homelabs, sysadmins, and MSPs who need a "single pane of glass" wit
### Pulse AI *(New in 5.0)*
- **Chat Assistant**: Ask questions about your infrastructure in natural language
- **Patrol Mode**: Automated health checks with proactive issue detection
- **Auto-Fix**: Automatically resolve common issues with AI-guided remediation
- **Predictive Intelligence**: Forecast problems before they happen
- **Patrol**: Background health checks that generate findings on a schedule
- **Alert Analysis**: Optional AI analysis when alerts fire
- **Cost Tracking**: Track usage and costs per provider/model
### Multi-Platform
- **Proxmox VE/PBS/PMG**: Full monitoring and management


@@ -41,14 +41,20 @@ environment where `PULSE_DOCKER=true`/`/.dockerenv` is detected.
#### Migration Path (Production)
1. **Install the sensor proxy on the Proxmox host that runs Pulse**
```bash
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install-sensor-proxy.sh | \
bash -s -- --ctid <pulse-lxc-id> --pulse-server https://pulse.example.com:7655
```
- For Docker or standalone hosts where Pulse cannot mount the socket (or when collecting temps from *other* nodes), run the installer with `--standalone --http-mode` so each host exposes its own HTTPS endpoint on port 8443.
2. **Let the installer manage the hardened systemd unit**: it creates the dedicated `pulse-sensor-proxy` user, installs TLS material, and restarts the service safely.
3. **Restart the Pulse container** so it binds to `/run/pulse-sensor-proxy` (for local temps) and registers the HTTPS proxy (for remote temps). The backend automatically prefers the HTTPS proxy, then the socket, and never stores SSH keys in the container.
Preferred option (no SSH keys, no proxy wiring):
1. Install the unified agent (`pulse-agent`) on each Proxmox host with Proxmox integration enabled.
- Use the UI to generate an install command in **Settings → Agents**, or run:
```bash
curl -fsSL http://pulse.example.com:7655/install.sh | \
sudo bash -s -- --url http://pulse.example.com:7655 --token <api-token> --enable-proxmox
```
Deprecated option (existing installs only):
- `pulse-sensor-proxy` is deprecated in Pulse v5 and is not recommended for new deployments.
- Existing installs continue to work during the migration window, but plan to move to `pulse-agent --enable-proxmox`.
- Canonical temperature docs: `docs/TEMPERATURE_MONITORING.md`
#### Removing Old SSH Keys
@@ -68,15 +74,15 @@ docker exec pulse rm -rf /home/pulse/.ssh/id_ed25519*
┌─────────────────────────────────────┐
│ Proxmox Host │
│ ┌───────────────────────────────┐ │
│ │ pulse-sensor-proxy (svc user)│ │
│ │ · Runs sensors -j │ │
│ │ · Exposes Unix socket + TLS │ │
│ │ pulse-agent │ │
│ │ · Reads sensors locally │ │
│ │ · Sends metrics via HTTPS │ │
│ └───────────────────────────────┘ │
│ │ │
│ │ /run/pulse-sensor-proxy.sock
│ │ HTTPS + API token │
│ │ │
│ ┌─────────▼─────────────────────┐ │
│ │ Pulse container (bind mount) │ │
│ │ Pulse (Docker/LXC container) │ │
│ │ · No SSH keys │ │
│ │ · No host root privileges │ │
│ └───────────────────────────────┘ │
@@ -95,14 +101,17 @@ rotate keys regularly.
```bash
# Detect vulnerable containers
ls /home/pulse/.ssh/id_ed25519* 2>/dev/null && echo "⚠️ SSH keys present"
# Check container logs for proxy detection
docker logs pulse | grep -i "temperature proxy detected"
# Verify the host service
systemctl status pulse-sensor-proxy
```
Verify temperature collection is agent-based:
- UI: **Settings → Agents** shows each Proxmox host connected and reporting.
- On each Proxmox host:
```bash
systemctl status pulse-agent
journalctl -u pulse-agent -n 200 --no-pager
```
**Documentation:** https://github.com/rcourtman/Pulse/blob/main/SECURITY.md#critical-security-notice-for-container-deployments
**Issues:** https://github.com/rcourtman/pulse/issues
**Private disclosures:** security@pulseapp.io
@@ -397,7 +406,7 @@ curl -H "Authorization: Bearer your-original-token" http://localhost:7655/api/ex
### Runtime Logging Configuration
**New in v4.24.0:** Adjust logging settings dynamically without restarting Pulse.
Pulse supports configurable logging (level, format, optional file output, rotation) via environment variables.
#### Security Benefits
- Enable debug logging temporarily for incident investigation
@@ -407,13 +416,7 @@ curl -H "Authorization: Bearer your-original-token" http://localhost:7655/api/ex
#### Configuration Options
**Via UI:**
Navigate to **Settings → System → Logging**:
- **Log Level**: `debug`, `info`, `warn`, `error`
- **Log Format**: `json` (for log aggregation), `text` (human-readable)
- **File Rotation**: size limits, retention policies
**Via environment variables:**
```bash
# Systemd
sudo systemctl edit pulse
@@ -421,16 +424,16 @@ sudo systemctl edit pulse
Environment="LOG_LEVEL=info"
Environment="LOG_FORMAT=json"
Environment="LOG_MAX_SIZE=100" # MB per log file
Environment="LOG_MAX_BACKUPS=10" # Number of rotated logs to keep
Environment="LOG_MAX_AGE=30" # Days to retain logs
Environment="LOG_COMPRESS=true" # Compress rotated logs
# Docker
docker run \
-e LOG_LEVEL=info \
-e LOG_FORMAT=json \
-e LOG_MAX_SIZE=100 \
-e LOG_MAX_BACKUPS=10 \
-e LOG_MAX_AGE=30 \
-e LOG_COMPRESS=true \
rcourtman/pulse:latest
```
@@ -438,7 +441,7 @@ docker run \
- Debug logs may contain sensitive data—enable only when needed
- JSON format recommended for security monitoring and SIEM
- Adjust retention based on compliance requirements
- Changes are logged to audit trail
- Changes take effect on restart
## CORS (Cross-Origin Resource Sharing)
@@ -499,8 +502,7 @@ curl -s http://localhost:7655/api/monitoring/scheduler/health | jq
- **Backoff Delays**: Increased backoff may indicate rate limiting or errors
- **Error Rates**: Track failed API calls and authentication attempts
**Dashboard Access:**
Navigate to **Settings → System → Monitoring** for visual representation of scheduler health.
There is currently no dedicated scheduler-health UI in v5. Use the API endpoint above (or export diagnostics from **Settings → Diagnostics**) when troubleshooting.
## Security Best Practices


@@ -205,10 +205,11 @@ func main() {
Logger: &logger,
KubeconfigPath: cfg.KubeconfigPath,
KubeContext: cfg.KubeContext,
IncludeNamespaces: cfg.KubeIncludeNamespaces,
ExcludeNamespaces: cfg.KubeExcludeNamespaces,
IncludeAllPods: cfg.KubeIncludeAllPods,
MaxPods: cfg.KubeMaxPods,
IncludeNamespaces: cfg.KubeIncludeNamespaces,
ExcludeNamespaces: cfg.KubeExcludeNamespaces,
IncludeAllPods: cfg.KubeIncludeAllPods,
IncludeAllDeployments: cfg.KubeIncludeAllDeployments,
MaxPods: cfg.KubeMaxPods,
}
agent, err := kubernetesagent.New(kubeCfg)
@@ -336,10 +337,11 @@ type Config struct {
// Kubernetes
KubeconfigPath string
KubeContext string
KubeIncludeNamespaces []string
KubeExcludeNamespaces []string
KubeIncludeAllPods bool
KubeMaxPods int
KubeIncludeNamespaces []string
KubeExcludeNamespaces []string
KubeIncludeAllPods bool
KubeIncludeAllDeployments bool
KubeMaxPods int
}
func loadConfig() Config {
@@ -364,6 +366,7 @@ func loadConfig() Config {
envKubeIncludeNamespaces := utils.GetenvTrim("PULSE_KUBE_INCLUDE_NAMESPACES")
envKubeExcludeNamespaces := utils.GetenvTrim("PULSE_KUBE_EXCLUDE_NAMESPACES")
envKubeIncludeAllPods := utils.GetenvTrim("PULSE_KUBE_INCLUDE_ALL_PODS")
envKubeIncludeAllDeployments := utils.GetenvTrim("PULSE_KUBE_INCLUDE_ALL_DEPLOYMENTS")
envKubeMaxPods := utils.GetenvTrim("PULSE_KUBE_MAX_PODS")
// Defaults
@@ -418,6 +421,7 @@ func loadConfig() Config {
kubeconfigFlag := flag.String("kubeconfig", envKubeconfig, "Path to kubeconfig (optional; uses in-cluster config if available)")
kubeContextFlag := flag.String("kube-context", envKubeContext, "Kubeconfig context (optional)")
kubeIncludeAllPodsFlag := flag.Bool("kube-include-all-pods", utils.ParseBool(envKubeIncludeAllPods), "Include all non-succeeded pods (may be large)")
kubeIncludeAllDeploymentsFlag := flag.Bool("kube-include-all-deployments", utils.ParseBool(envKubeIncludeAllDeployments), "Include all deployments, not just problem ones")
kubeMaxPodsFlag := flag.Int("kube-max-pods", defaultInt(envKubeMaxPods, 200), "Max pods included in report")
showVersion := flag.Bool("version", false, "Print the agent version and exit")
@@ -474,10 +478,11 @@ func loadConfig() Config {
HealthAddr: strings.TrimSpace(*healthAddrFlag),
KubeconfigPath: strings.TrimSpace(*kubeconfigFlag),
KubeContext: strings.TrimSpace(*kubeContextFlag),
KubeIncludeNamespaces: kubeIncludeNamespaces,
KubeExcludeNamespaces: kubeExcludeNamespaces,
KubeIncludeAllPods: *kubeIncludeAllPodsFlag,
KubeMaxPods: *kubeMaxPodsFlag,
KubeIncludeNamespaces: kubeIncludeNamespaces,
KubeExcludeNamespaces: kubeExcludeNamespaces,
KubeIncludeAllPods: *kubeIncludeAllPodsFlag,
KubeIncludeAllDeployments: *kubeIncludeAllDeploymentsFlag,
KubeMaxPods: *kubeMaxPodsFlag,
}
}


@@ -1,9 +1,9 @@
# pulse-sensor-proxy
> **⚠️ Deprecated:** The sensor-proxy is deprecated in favor of the unified Pulse agent.
> For new installations, use `install.sh --enable-proxmox` instead. The agent provides
> temperature monitoring plus additional features (AI, automatic Proxmox API setup).
> See [TEMPERATURE_MONITORING.md](/docs/security/TEMPERATURE_MONITORING.md) for details.
> **Deprecated in v5:** `pulse-sensor-proxy` is deprecated and not recommended for new deployments.
> Temperature monitoring should be done via the unified agent (`pulse-agent --enable-proxmox`).
> This README is retained for existing installations during the migration window.
> See `docs/TEMPERATURE_MONITORING.md`.
The sensor proxy keeps SSH identities and temperature polling logic on the
Proxmox host while presenting a small RPC surface (Unix socket or HTTPS) to the
@@ -16,7 +16,7 @@ capabilities, and produces append-only audit logs.
| --- | --- |
| **Recommended (automated)** | `curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install-sensor-proxy.sh \| bash` |
| **Manual build** | `go build ./cmd/pulse-sensor-proxy` and `sudo install -m 0755 pulse-sensor-proxy /usr/local/bin/` |
| **Prebuilt artifact** | Copy the binary from `/opt/pulse/bin/pulse-sensor-proxy-*` inside the Pulse Docker image or download via `/download/pulse-sensor-proxy?platform=linux&arch=amd64`. |
| **Prebuilt artifact** | Download `pulse-sensor-proxy-<platform>-<arch>` from GitHub Releases, or copy it from `/opt/pulse/bin/` inside the Pulse Docker image. |
The installer script provisions:
@@ -176,7 +176,7 @@ Set alerts on:
| Symptom | Guidance |
| --- | --- |
| Service fails to start with "Config validation failed" | Run `pulse-sensor-proxy config validate` to see specific errors. Check for duplicate keys or malformed YAML. |
| Config corruption detected during startup | Older versions had dual code paths. Update to v4.31.1+ and reinstall proxy. The migration runs automatically. |
| Config corruption detected during startup | Older versions had dual code paths. Update to the latest release and reinstall the proxy. The migration runs automatically. |
| Temperature monitoring stops working after config change | Validate config first with `pulse-sensor-proxy config validate`, then restart service: `systemctl restart pulse-sensor-proxy`. |
| `Cannot open audit log file` | Check permissions on `/var/log/pulse/sensor-proxy`. Remove `chattr +a` only during rotation. |
| `connection denied` in audit log | UID/GID not listed in `allowed_peers`. Verify Pulse container UID mapping. |


@@ -2,7 +2,9 @@
![Version: 5.0.0-rc.4](https://img.shields.io/badge/Version-5.0.0--rc.4-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 5.0.0-rc.4](https://img.shields.io/badge/AppVersion-5.0.0--rc.4-informational?style=flat-square)
Helm chart for deploying the Pulse hub and optional Docker monitoring agent.
Helm chart for deploying the Pulse server and an optional legacy Docker monitoring agent.
Note: the `agent.*` values currently deploy `pulse-docker-agent`, which is deprecated in favor of the unified agent (`pulse-agent`) where possible.
**Homepage:** <https://github.com/rcourtman/Pulse>


@@ -98,6 +98,8 @@ server:
failureThreshold: 3
agent:
# Legacy: this deploys the deprecated `pulse-docker-agent`.
# For new deployments, prefer installing the unified agent (`pulse-agent`) on the hosts you want to monitor.
enabled: false
kind: DaemonSet # Supported: DaemonSet | Deployment
replicaCount: 1


@@ -1,123 +1,64 @@
# Pulse AI
Pulse AI is an intelligent monitoring assistant that helps you understand and manage your infrastructure through natural language conversations, proactive monitoring, and automated issue resolution.
Pulse AI adds an optional assistant for troubleshooting, summarization, and proactive monitoring. It is **off by default** and can be enabled per instance.
## Features
## What Pulse AI Can Do
### Chat Assistant
Ask questions about your infrastructure in plain English:
- "What's using the most CPU right now?"
- "Show me VMs with high memory usage"
- "Why is my backup failing?"
- "Summarize the health of my cluster"
### Patrol Mode
Automated monitoring that runs on a schedule to:
- Analyze resource utilization patterns
- Detect potential issues before they cause problems
- Generate actionable recommendations
- Track trends over time
### Auto-Fix
Automatically resolve common issues:
- Restart stuck services
- Clear disk space
- Restart unresponsive containers
- Apply recommended optimizations
### Predictive Intelligence
- Identify resources trending toward problems
- Forecast disk space exhaustion
- Detect unusual patterns
- Alert on anomalies
- **Interactive chat**: Ask questions about current cluster state and recent health signals.
- **Patrol**: Background checks that generate findings on a schedule.
- **Alert analysis**: Optional analysis when alerts fire (token-efficient).
- **Command proposals and execution**: When enabled, Pulse can propose commands and (optionally) execute them via connected agents.
- **Cost tracking**: Tracks usage and supports a monthly budget target.
## Configuration
### Enable Pulse AI
Configure in the UI:
1. Navigate to **Settings → AI**
2. Toggle **Enable Pulse AI**
3. Configure your AI provider
- **Settings → AI**
AI settings are stored encrypted at rest in `ai.enc` under the Pulse config directory (`/etc/pulse` for systemd installs, `/data` for Docker/Kubernetes).
### Supported Providers
| Provider | Models | Notes |
|----------|--------|-------|
| **OpenAI** | GPT-4, GPT-4 Turbo, GPT-3.5 | Recommended for best results |
| **Anthropic** | Claude 3 Opus, Sonnet, Haiku | Excellent for complex analysis |
| **Ollama** | Llama 3, Mistral, etc. | Self-hosted, privacy-focused |
| **OpenRouter** | Multiple models | Pay-per-use routing |
| **Custom** | Any OpenAI-compatible API | For enterprise deployments |
Pulse supports multiple providers configured independently:
- **Anthropic** (API key or OAuth)
- **OpenAI**
- **DeepSeek**
- **Google Gemini**
- **Ollama** (self-hosted)
- **OpenAI-compatible base URL** (for providers that implement the OpenAI API shape)
### API Key Setup
```bash
# Environment variable (recommended for production)
export PULSE_AI_PROVIDER=openai
export PULSE_AI_API_KEY=sk-...

# Or configure via Settings UI
```
### Models
Pulse uses model identifiers in the form:
- `provider:model-name`
You can set separate models for:
- Chat (`chat_model`)
- Patrol (`patrol_model`)
- Auto-fix remediation (`auto_fix_model`)
### Patrol Configuration
| Setting | Default | Description |
|---------|---------|-------------|
| Patrol Enabled | Off | Run automated checks |
| Patrol Interval | 30 minutes | How often to run patrol |
| Auto-Fix Enabled | Off | Allow automatic remediation |
| Auto-Fix Model | Same as chat | Model for auto-fix decisions |
## Usage
### Chat Interface
Access the AI chat from the bottom-right corner of any page. Type your question and press Enter.
**Example queries:**
```
"What VMs are using more than 80% memory?"
"Show me the status of all backups"
"Why is node pve1 showing high load?"
"Compare resource usage between this week and last week"
```
### Context Selection
Click on any resource (VM, container, host) to add it to the AI context. This helps the AI provide more specific answers.
### Testing and Model Discovery
- Test provider connectivity: `POST /api/ai/test` and `POST /api/ai/test/{provider}`
- List available models (queried live from the provider): `GET /api/ai/models`
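For example, a minimal connectivity check (a sketch; assumes an admin-scoped API token, with the header name used elsewhere in these docs):

```bash
# Test the configured providers, then list the models they expose
curl -X POST -H "X-API-Token: $TOKEN" http://localhost:7655/api/ai/test
curl -H "X-API-Token: $TOKEN" http://localhost:7655/api/ai/models
```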
### Patrol Reports
When Patrol is enabled, Pulse AI will:
1. Run periodic health checks
2. Generate findings (issues, warnings, info)
3. Offer to auto-fix issues (if enabled)
4. Track trends over time
## Cost Tracking
Track AI API usage in **Settings → AI → Cost Dashboard**:
- Daily/monthly usage statistics
- Cost breakdown by feature
- Usage trends over time
## Privacy & Security
- **Data stays local**: Only resource metadata is sent to AI providers
- **No training**: Your data is never used for model training
- **Audit logging**: All AI interactions are logged
- **Self-hosted option**: Use Ollama for complete data privacy
## Safety Controls
Pulse includes settings that control how “active” AI features are:
- **Autonomous mode** (`autonomous_mode`): when enabled, AI may execute actions without a separate approval step in the UI.
- **Patrol auto-fix** (`patrol_auto_fix`): allows patrol findings to trigger remediation attempts.
- **Alert-triggered analysis** (`alert_triggered_analysis`): limits AI to analyzing specific events when alerts occur.
If you enable execution features, ensure agent tokens and scopes are appropriately restricted and that audit logging is enabled.
## Troubleshooting
### AI not responding
1. Check API key is configured correctly
2. Verify network connectivity to AI provider
3. Check Settings → AI for error messages
- **AI not responding**: verify provider credentials in **Settings → AI** and confirm `GET /api/ai/models` works.
- **OAuth issues (Anthropic)**: verify the OAuth flow is completing and that Pulse can reach the callback endpoint.
- **No execution capability**: confirm at least one compatible agent is connected and that the instance has execution enabled.
### Patrol not running
1. Ensure Patrol is enabled in Settings
2. Check system resource availability
3. Review logs: `journalctl -u pulse -f`
### Auto-fix not working
1. Enable Auto-Fix in Settings → AI
2. Verify the connected agents have execute permissions
3. Check the Auto-Fix model is configured


@@ -115,8 +115,12 @@ Triggers a test alert to all configured channels.
## ⚙️ System Settings
### Get Settings
`GET /api/config/system`
Retrieve current system configuration.
`GET /api/system/settings`
Retrieve current system settings.
### Update Settings
`POST /api/system/settings/update`
Update system settings. Requires admin + `settings:write`.
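A minimal sketch of both calls (the `logLevel` payload key is illustrative, borrowed from the `system.json` example in CONFIGURATION.md):

```bash
# Read current settings
curl -H "X-API-Token: $TOKEN" http://localhost:7655/api/system/settings

# Update a setting
curl -X POST -H "X-API-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"logLevel":"debug"}' \
  http://localhost:7655/api/system/settings/update
```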
### Toggle Mock Mode
`POST /api/system/mock-mode`
@@ -140,34 +144,47 @@ Initiate OIDC login flow.
---
## 🤖 Pulse AI *(New in 5.0)*
## 🤖 Pulse AI *(v5)*
### Get AI Settings
`GET /api/settings/ai`
Returns current AI configuration (providers, models, patrol status).
Returns current AI configuration (providers, models, patrol status). Requires admin + `settings:read`.
### Update AI Settings
`PUT /api/settings/ai`
Configure AI providers, API keys, and preferences.
`PUT /api/settings/ai/update` (or `POST /api/settings/ai/update`)
Configure AI providers, API keys, and preferences. Requires admin + `settings:write`.
### Chat
`POST /api/ai/chat`
Send a message to the AI assistant.
```json
{ "message": "What VMs are using the most CPU?", "context": ["vm-100", "vm-101"] }
```
### List Models
`GET /api/ai/models`
Lists models available to the configured providers (queried live from provider APIs).
### Patrol Status
`GET /api/ai/patrol/status`
Get current patrol status and recent findings.
### Execute (Chat + Tools)
`POST /api/ai/execute`
Runs an AI request which may return tool calls, findings, or suggested actions.
### Patrol Findings
`GET /api/ai/patrol/findings`
List all patrol findings with severity and recommendations.
### Execute (Streaming)
`POST /api/ai/execute/stream`
Streaming variant of execute (used by the UI for incremental responses).
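A sketch of a non-streaming execute call (the request body shape is an assumption, mirroring the chat example above):

```bash
curl -X POST -H "X-API-Token: $TOKEN" -H "Content-Type: application/json" \
  -d '{"message":"Summarize the health of my cluster"}' \
  http://localhost:7655/api/ai/execute
```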
### Patrol
- `GET /api/ai/patrol/status`
- `GET /api/ai/patrol/findings`
- `GET /api/ai/patrol/history`
- `POST /api/ai/patrol/run` (admin)
### Cost Tracking
`GET /api/ai/cost?period=30d`
Get AI usage statistics and costs.
`GET /api/ai/cost/summary`
Get AI usage statistics (includes retention window details).
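For example (a sketch; `/api/ai/patrol/run` requires admin per the list above):

```bash
# Trigger an on-demand patrol run, then check status and costs
curl -X POST -H "X-API-Token: $TOKEN" http://localhost:7655/api/ai/patrol/run
curl -H "X-API-Token: $TOKEN" http://localhost:7655/api/ai/patrol/status
curl -H "X-API-Token: $TOKEN" http://localhost:7655/api/ai/cost/summary
```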
## 📈 Metrics Store (v5)
### Store Stats
`GET /api/metrics-store/stats`
Returns stats for the persistent metrics store (SQLite-backed).
### History
`GET /api/metrics-store/history`
Returns historical metric series for a resource and time range.
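A sketch of both endpoints (query parameters as documented in METRICS_HISTORY.md):

```bash
# Store-level stats
curl -H "X-API-Token: $TOKEN" http://localhost:7655/api/metrics-store/stats

# History for one guest over the last 24 hours
curl -H "X-API-Token: $TOKEN" \
  "http://localhost:7655/api/metrics-store/history?resourceType=guest&resourceId=vm-100&range=24h"
```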
---
@@ -181,6 +198,10 @@ The unified agent combines host, Docker, and Kubernetes monitoring. Use `--enabl
See [UNIFIED_AGENT.md](UNIFIED_AGENT.md) for installation instructions.
### Unified Agent Installer Script
`GET /install.sh`
Serves the universal `install.sh` used to install `pulse-agent` on target machines.
### Legacy Agents (Deprecated)
`GET /download/pulse-host-agent` - *Deprecated, use pulse-agent*
`GET /download/pulse-docker-agent` - *Deprecated, use pulse-agent --enable-docker*


@@ -15,7 +15,7 @@ Pulse 5.0 introduces one-click updates for supported deployment types, making it
### When an Update is Available
1. Navigate to **Settings → System Updates**
2. If an update is available, you'll see an **"Install Update"** button
3. Click the button to open the confirmation dialog
4. Review the update details:
@@ -44,7 +44,7 @@ A real-time progress modal shows:
### Update Preferences
In **Settings → System Updates**:
| Setting | Description |
|---------|-------------|
@@ -67,7 +67,7 @@ PULSE_UPDATE_CHANNEL=rc
```bash
# Pull latest image
docker pull ghcr.io/rcourtman/pulse:latest
docker pull rcourtman/pulse:latest
# Restart container
docker-compose down && docker-compose up -d
@@ -76,21 +76,19 @@ docker-compose down && docker-compose up -d
### ProxmoxVE LXC (Manual)
```bash
# Inside the container
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install.sh | bash
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | bash
```
### Systemd Service (Manual)
```bash
# Download new release
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/install.sh | bash
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | bash
```
### Source Build
```bash
cd /opt/pulse
cd /path/to/pulse
git pull
make build
sudo systemctl restart pulse
@@ -107,17 +105,13 @@ Pulse creates a backup before updating. If the update fails:
3. Error details are logged
### Manual Rollback
```bash
# Backups are stored in /etc/pulse/backups/
ls /etc/pulse/backups/

# Restore a specific backup
sudo /opt/pulse/scripts/restore-backup.sh /etc/pulse/backups/pulse-backup-20250101.tar.gz
```
If rollback is supported for your deployment, use the **Rollback** action from the update history in **Settings → System → Updates**.
Backups are stored as `backup-<timestamp>/` folders inside the Pulse data directory (`/etc/pulse` or `/data`).
## Update History
View past updates in **Settings → System Updates → Update History**:
- Previous versions installed
- Update timestamps
- Success/failure status


@@ -8,6 +8,13 @@ Pulse uses a split-configuration model to ensure security and flexibility.
| `system.json` | General Settings | 📝 Standard |
| `nodes.enc` | Node Credentials | 🔒 **Encrypted** (AES-256-GCM) |
| `alerts.json` | Alert Rules | 📝 Standard |
| `email.enc` | SMTP settings | 🔒 **Encrypted** |
| `webhooks.enc` | Webhook URLs + headers | 🔒 **Encrypted** |
| `apprise.enc` | Apprise notification config | 🔒 **Encrypted** |
| `oidc.enc` | OIDC provider config | 🔒 **Encrypted** |
| `api_tokens.json` | API token records (hashed) | 🔒 **Sensitive** |
| `ai.enc` | AI settings and credentials | 🔒 **Encrypted** |
| `metrics.db` | Persistent metrics history (SQLite) | 📝 Standard |
All files are located in `/etc/pulse/` (Systemd) or `/data/` (Docker/Kubernetes).
@@ -67,7 +74,7 @@ Controls runtime behavior like ports, logging, and polling intervals. Most of th
"frontendPort": 7655, // Public port
"logLevel": "info", // debug, info, warn, error
"autoUpdateEnabled": false, // Enable auto-update checks
"adaptivePollingEnabled": true // Smart polling for large clusters
"adaptivePollingEnabled": false // Smart polling for large clusters
}
```
@@ -97,11 +104,22 @@ Environment variables take precedence over `system.json`.
| `PMG_POLLING_INTERVAL` | PMG metrics polling frequency | `60s` |
| `ENABLE_BACKUP_POLLING` | Enable backup job monitoring | `true` |
| `BACKUP_POLLING_INTERVAL` | Backup polling frequency | `0` (Auto) |
| `ENABLE_TEMPERATURE_MONITORING` | Enable SSH temperature checks | `true` |
| `SSH_PORT` | SSH port for temperature checks | `22` |
| `ENABLE_TEMPERATURE_MONITORING` | Enable temperature monitoring (where supported) | `true` |
| `SSH_PORT` | SSH port for legacy SSH-based temperature collection | `22` |
| `ADAPTIVE_POLLING_ENABLED` | Enable smart polling for large clusters | `false` |
| `WEBHOOK_BATCH_DELAY` | Delay before sending batched webhooks | `10s` |
### Metrics Retention (Tiered)
Persistent metrics history uses tiered retention windows. These values are stored in `system.json` and can be adjusted for storage vs history depth:
- `metricsRetentionRawHours`
- `metricsRetentionMinuteHours`
- `metricsRetentionHourlyDays`
- `metricsRetentionDailyDays`
See [METRICS_HISTORY.md](METRICS_HISTORY.md) for details.
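For a quick look at the current values (a sketch; systemd path shown, use `/data/system.json` for Docker/Kubernetes):

```bash
jq '{metricsRetentionRawHours, metricsRetentionMinuteHours,
     metricsRetentionHourlyDays, metricsRetentionDailyDays}' /etc/pulse/system.json
```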
---
## 🔔 Alerts (`alerts.json`)

docs/DEPLOYMENT_MODELS.md (new file)

@@ -0,0 +1,62 @@
# Deployment Models
Pulse supports multiple deployment models. This page clarifies what differs between them and where “truth” lives (paths, updates, and operational constraints).
## Summary
| Model | Recommended for | Data/config path | Updates |
| --- | --- | --- | --- |
| Proxmox VE LXC (installer) | Proxmox-first deployments | `/etc/pulse` | In-app updates supported |
| systemd (bare metal / VM) | Traditional Linux hosts | `/etc/pulse` | In-app updates supported |
| Docker | Quick evaluation and container stacks | `/data` (bind mount / volume) | Image pull + restart |
| Kubernetes (Helm) | Cluster operators | `/data` (PVC) | Helm upgrade |
## Common Ports
- UI/API: `7655/tcp`
- Prometheus metrics: `9091/tcp` (`/metrics` on a separate listener)
Docker and Kubernetes do not publish `9091` unless you explicitly expose it.
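A quick reachability sketch from the host running Pulse (assumes the default ports listed above):

```bash
curl -sS -o /dev/null -w '%{http_code}\n' http://localhost:7655/   # UI/API
curl -sS http://localhost:9091/metrics | head -n 5                 # Prometheus metrics
```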
## Where Configuration Lives
Pulse uses a split config model:
- **Local auth and secrets**: `.env` (not editable in the UI)
- **System settings**: `system.json` (editable in the UI unless locked by env)
- **Nodes and credentials**: `nodes.enc` (encrypted)
- **AI config**: `ai.enc` (encrypted)
- **Metrics history**: `metrics.db` (SQLite)
Path mapping:
- systemd/LXC: `/etc/pulse/*`
- Docker/Helm: `/data/*`
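To confirm where your install keeps its state (a sketch; systemd/LXC first, then Docker):

```bash
ls -l /etc/pulse/               # systemd/LXC
docker exec pulse ls -l /data   # Docker
```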
## Updates by Model
### systemd and Proxmox LXC
Use the UI:
- **Settings → System → Updates**
These deployments can apply updates by downloading a release and swapping binaries/config safely with backups and history.
### Docker
Pull a new image and restart:
```bash
docker pull rcourtman/pulse:latest
docker compose up -d
```
### Kubernetes (Helm)
Upgrade the chart (OCI):
```bash
helm upgrade pulse oci://ghcr.io/rcourtman/pulse-chart -n pulse
```


@@ -105,7 +105,9 @@ docker compose up -d
```bash
docker exec pulse rm /data/.env
docker restart pulse
# Access UI to run setup wizard again
# Access UI again. Pulse will require a bootstrap token for setup.
# Get it with:
docker exec pulse /app/pulse bootstrap-token
```
- **Logs**

docs/DOCS_AUDIT_V5.md (new file)

@@ -0,0 +1,317 @@
# Pulse v5 Documentation Audit (pre-stable)
This is a working audit of Pulse documentation as of `VERSION=5.0.0-rc.4`, focused on release readiness for a v5 stable cut.
## Status (updated 2025-12-18)
Most of the issues identified in this audit have been addressed in-repo:
- Updated install recommendation and bootstrap-token guidance across entrypoints (`README.md`, `docs/INSTALL.md`, `docs/FAQ.md`, `docs/TROUBLESHOOTING.md`, `docs/DOCKER.md`)
- Rewritten AI and API docs to match the current v5 implementation (`docs/AI.md`, `docs/API.md`)
- Rewritten metrics history docs to match SQLite store + tiered retention (`docs/METRICS_HISTORY.md`)
- Fixed adaptive polling defaults and rollout paths (`docs/monitoring/ADAPTIVE_POLLING.md`, `docs/operations/ADAPTIVE_POLLING_ROLLOUT.md`)
- Reduced temperature monitoring contradictions by making the agent the recommended path and scoping sensor-proxy as a legacy/alternative (`docs/TEMPERATURE_MONITORING.md`, `docs/security/TEMPERATURE_MONITORING.md`, `SECURITY.md`, sensor-proxy docs)
- Updated Helm/Kubernetes docs to prefer OCI distribution and flag the legacy agent block (`docs/KUBERNETES.md`, `deploy/helm/pulse/README.md`, `deploy/helm/pulse/values.yaml`)
- Added missing “operator clarity” docs (`docs/DEPLOYMENT_MODELS.md`, `docs/UPGRADE_v5.md`)
- Link validation run: no broken relative `.md` links found at time of update
## Goals
- Identify docs that are **stale**, **contradictory**, or **redundant**
- Identify **missing docs** needed for a v5 stable release
- Produce an actionable “what to change, where” checklist
## Highest-Priority Fixes (release-blockers)
### 1) Temperature monitoring guidance is contradictory
There are multiple competing “truths” about how temperature monitoring works in v5:
- `SECURITY.md` describes container deployments as requiring `pulse-sensor-proxy` and explicitly blocks SSH-based temps in containers.
- Multiple docs under `docs/security/` and `cmd/pulse-sensor-proxy/README.md` claim `pulse-sensor-proxy` is deprecated in favor of the unified agent.
- `docs/TEMPERATURE_MONITORING.md` is an extensive sensor-proxy-first guide and reads as “current”, but conflicts with the “deprecated” banner elsewhere.
- The backend still has extensive support and UX flows for sensor proxy install/register (`/api/install/install-sensor-proxy.sh`, temperature proxy diagnostics, container SSH blocking guidance).
Action:
- Decide the **canonical** v5 story:
- **Option A (agent-first)**: “Install `pulse-agent --enable-proxmox` on each Proxmox host for temperatures and management. `pulse-sensor-proxy` is legacy or edge-case only.”
- **Option B (proxy-first for containers)**: “If Pulse runs in Docker/LXC, temperatures require `pulse-sensor-proxy` (socket/HTTPS). The agent is optional for other features.”
- Update all docs to align with the chosen story, and ensure `SECURITY.md` reflects it unambiguously.
Status:
- Docs updated to be agent-first, with `pulse-sensor-proxy` treated as a legacy/alternative option.
- Remaining work is primarily product positioning and long-term deprecation decisions, not broken documentation.
Files involved:
- `SECURITY.md`
- `docs/TEMPERATURE_MONITORING.md`
- `docs/security/TEMPERATURE_MONITORING.md`
- `docs/security/SENSOR_PROXY_HARDENING.md`
- `docs/security/SENSOR_PROXY_NETWORK.md`
- `docs/security/SENSOR_PROXY_APPARMOR.md`
- `docs/operations/SENSOR_PROXY_CONFIG.md`
- `docs/operations/SENSOR_PROXY_LOGS.md`
- `cmd/pulse-sensor-proxy/README.md`
### 2) AI docs do not match the actual v5 API and configuration model
`docs/AI.md` and the AI section in `docs/API.md` appear written for an older/alternate API surface:
- `docs/AI.md` documents `PULSE_AI_PROVIDER` and `PULSE_AI_API_KEY` env vars, but the current implementation persists encrypted AI config in `ai.enc` and supports multi-provider credentials (Anthropic/OpenAI/DeepSeek/Gemini/Ollama) plus Anthropic OAuth.
- `docs/API.md` references endpoints like `POST /api/ai/chat` and `PUT /api/settings/ai` that do not match the router (current endpoints include `/api/ai/execute`, `/api/ai/models`, `/api/settings/ai/update`, OAuth endpoints, patrol stream, cost summary).
Action:
- Rewrite AI docs to match current behavior:
- Providers actually supported
- How keys/tokens are stored (encrypted) and what the UI exposes
- Anthropic OAuth flow and security implications
- Patrol and command execution (“autonomous mode”) safety controls
- Correct API endpoints and auth requirements
Files involved:
- `docs/AI.md`
- `docs/API.md`
- `internal/config/ai.go` (source of truth for config fields)
- `internal/api/router.go` (source of truth for endpoints)
Status:
- `docs/AI.md` rewritten to match multi-provider + encrypted config.
- `docs/API.md` AI endpoints updated to match router.
### 3) Installation “recommended path” is inconsistent across docs
- `README.md` recommends “Proxmox LXC (Recommended)” via GitHub `install.sh`.
- `docs/INSTALL.md` and `docs/FAQ.md` currently present Docker as the easiest/recommended path.
Action:
- Pick one recommendation hierarchy and make it consistent:
- If Proxmox LXC is the primary path, it should be the top section in `docs/INSTALL.md` and the FAQ answer should reflect it.
Files involved:
- `README.md`
- `docs/INSTALL.md`
- `docs/FAQ.md`
Status:
- Install docs now consistently present Proxmox VE LXC installer as the recommended path and include bootstrap-token retrieval.
### 4) Kubernetes/Helm docs and chart docs are out of date for v5
- `docs/KUBERNETES.md` references a chart repo URL and “Docker Agent sidecar”.
- `deploy/helm/pulse/README.md` describes “optional Docker monitoring agent” and defaults to `ghcr.io/rcourtman/pulse-docker-agent`.
Action:
- Update Helm docs to match the v5 agent direction:
- If `pulse-docker-agent` is deprecated, the chart should not reference it as primary.
- Align chart distribution instructions (Helm repo vs OCI).
Files involved:
- `docs/KUBERNETES.md`
- `deploy/helm/pulse/README.md`
- `deploy/helm/pulse/values.yaml`
- `deploy/helm/pulse/templates/*`
Status:
- `docs/KUBERNETES.md` updated to prefer OCI chart installs and flag the legacy agent block.
- `deploy/helm/pulse/README.md` and `deploy/helm/pulse/values.yaml` now label the agent workload as legacy.
## Redundant / Duplicated Docs (needs consolidation)
### Auto-update docs: two competing sources
- `docs/AUTO_UPDATE.md` describes “Settings → System Updates” and includes docker image instructions that differ from other docs.
- `docs/operations/AUTO_UPDATE.md` documents systemd timers and edits `/var/lib/pulse/system.json` which appears stale for current config defaults (`/etc/pulse/system.json`).
Action:
- Choose one canonical page (likely `docs/AUTO_UPDATE.md`) and:
- Move operational/timer details into it (or link to a clearly “advanced ops” page)
- Fix stale paths and service names
- Remove or clearly label the non-canonical duplicate
Files involved:
- `docs/AUTO_UPDATE.md`
- `docs/operations/AUTO_UPDATE.md`
Status:
- Both documents updated to current UI naming and paths; optional future work is to consolidate into a single canonical page.
### Temperature monitoring docs: two sources with different “truth”
- `docs/TEMPERATURE_MONITORING.md` (sensor proxy focused, extensive)
- `docs/security/TEMPERATURE_MONITORING.md` (agent recommended, proxy “legacy”)
Action:
- Collapse into one canonical document with a clear decision tree, then:
- Keep the other as a short redirect page, or delete it.
Files involved:
- `docs/TEMPERATURE_MONITORING.md`
- `docs/security/TEMPERATURE_MONITORING.md`
Status:
- `docs/TEMPERATURE_MONITORING.md` is now the canonical deep-dive, and `docs/security/TEMPERATURE_MONITORING.md` is a security/overview page.
### Adaptive polling docs disagree with defaults and file paths
- `docs/monitoring/ADAPTIVE_POLLING.md` claims adaptive polling is enabled by default and says env default is `true`.
- Code defaults `AdaptivePollingEnabled=false` and `docs/operations/ADAPTIVE_POLLING_ROLLOUT.md` references `/var/lib/pulse/system.json`.
Action:
- Make one canonical doc, fix defaults and paths, and ensure UI path matches current navigation.
Files involved:
- `docs/monitoring/ADAPTIVE_POLLING.md`
- `docs/operations/ADAPTIVE_POLLING_ROLLOUT.md`
Status:
- Defaults and paths updated to match current behavior.
## Stale / Incorrect Content (targeted findings)
### `docs/API.md`
Issues:
- AI endpoints mismatch current router paths (examples: `POST /api/ai/chat` vs current `/api/ai/execute`; settings update path differs).
- “complete REST API documentation” claim is optimistic. It's a curated subset plus a “check router.go” note.
Action:
- Update AI section to match `internal/api/router.go`.
- Consider splitting into:
- “Stable/public API” (guaranteed)
- “Internal/subject to change” (documented but not stable)
### `docs/METRICS_HISTORY.md`
Issues:
- Documents `PULSE_METRICS_*_RETENTION_DAYS` env vars that do not appear to exist in the server config.
- Claims metrics are stored under `/etc/pulse/data/metrics/`, but the metrics store is SQLite (`metrics.db`) under the configured data directory.
Action:
- Rewrite this doc to match the tiered retention model and actual storage format/location.
### `docs/FAQ.md`
Issues:
- Recommends Docker as easiest install, conflicts with repo README.
- Password reset guidance does not mention the bootstrap token requirement that can appear after removing `.env`.
- Mentions `METRICS_RETENTION_DAYS` which does not appear to be a current server config knob (v5 uses tiered retention settings).
Action:
- Align install recommendation with v5 positioning.
- Update auth reset steps to include bootstrap token retrieval where applicable.
- Replace metrics retention knob guidance with current retention model and UI location.
### `docs/TROUBLESHOOTING.md` and `docs/DOCKER.md`
Issues:
- “Forgot password” flow implies you can just rerun the setup wizard after deleting `.env`, but first-time setup can require the bootstrap token.
Action:
- Update password reset steps and link to the bootstrap token section in `docs/INSTALL.md`.
### `docs/RELEASE_NOTES.md`
Issues:
- Entire document is v4.x release notes.
Action:
- Replace with v5 release notes (or move to `docs/releases/` and add v5.0.0 as the top section).
- For the v5 stable cut, include: breaking changes, migration notes, and versioned “what changed since v4”.
### `cmd/pulse-sensor-proxy/README.md`
Issues:
- Mentions downloading via `/download/pulse-sensor-proxy` but the server router does not expose this endpoint.
- “Deprecated” banner conflicts with current server behavior and security guidance.
Action:
- Either bring it in line with the chosen v5 temperature story, or clearly scope it as legacy.
### Broken local link
- `docs/TEMPERATURE_MONITORING.md` contains an absolute link to `/opt/pulse/cmd/pulse-sensor-proxy/README.md` which does not work in GitHub.
Action:
- Replace with a repo-relative link (or link to the canonical temperature doc).
### Widespread “runtime path” drift (`/opt/pulse/...`)
Several user-facing docs mix:
- repository paths (`/opt/pulse/...`) used in this dev workspace, and
- runtime paths used in real installs (`/etc/pulse`, `/data`, `/var/log/pulse`, systemd units).
This creates confusion and broken copy-paste commands.
Examples to review:
- `docs/ZFS_MONITORING.md` references `/opt/pulse/.env` and `/opt/pulse/pulse.log`.
- `docs/operations/*` references `/var/lib/pulse/system.json` rather than `/etc/pulse/system.json`.
Action:
- Adopt a consistent convention across docs:
- **Runtime**: `/etc/pulse` (systemd/LXC), `/data` (Docker/Helm)
- **Repo/dev**: `/opt/pulse` only in development docs
- **Logs**: `journalctl -u pulse` (systemd) and `docker logs` (Docker), plus `/var/log/pulse/*` only if actually used in production images.
## Missing Docs for a v5 Stable Release (recommended additions)
### v5 upgrade guide (v4 → v5)
Add a single canonical page covering:
- “What changes in v5” in operator terms
- Any breaking changes and required actions
- Post-upgrade verification checklist (health endpoint, scheduler health, agents connected, temps, notifications)
- Rollback guidance for each deployment model (Docker, systemd/LXC, Helm)
Suggested path:
- `docs/UPGRADE_v5.md` (or `docs/MIGRATION_v5.md`)
### “Deployment model matrix”
Many docs implicitly assume a deployment type. Add a short matrix page that answers:
- What works on Docker vs Proxmox LXC vs systemd vs Helm
- How updates work per model
- Where config lives per model
- What “recommended” means (and why)
Suggested path:
- `docs/DEPLOYMENT_MODELS.md`
### AI safety and permissions
If v5 ships AI “execute/run-command” features:
- Document default safety posture
- What autonomous mode does
- Required scopes/roles
- Audit logging expectations
- Clear warning section for production
Suggested path:
- Expand `docs/AI.md` with a “Safety” section, or add `docs/AI_SAFETY.md`.
## Quick “Status” Inventory (what to touch for v5)
This is a fast triage list to help plan the doc refresh. Treat anything marked “Review” as “verify against current behavior”.
- Rewrite: `docs/AI.md`
- Rewrite: `docs/METRICS_HISTORY.md`
- Rewrite: `docs/RELEASE_NOTES.md` (or replace with v5 release notes)
- Update + align: `docs/INSTALL.md`, `docs/FAQ.md`, `docs/TROUBLESHOOTING.md`
- Update: `docs/API.md` (especially AI endpoints)
- Decide canonical + consolidate:
- `docs/TEMPERATURE_MONITORING.md` vs `docs/security/TEMPERATURE_MONITORING.md`
- `docs/AUTO_UPDATE.md` vs `docs/operations/AUTO_UPDATE.md`
- `docs/monitoring/ADAPTIVE_POLLING.md` vs `docs/operations/ADAPTIVE_POLLING_ROLLOUT.md`
- Review (Helm): `docs/KUBERNETES.md`, `deploy/helm/pulse/README.md`
- Review (paths): `docs/ZFS_MONITORING.md` (and any other doc that uses `/opt/pulse/...` in user instructions)
## Suggested “Doc Refresh” Execution Order
1. Decide v5 canonical stories (agent vs proxy for temps, AI capabilities, Helm strategy).
2. Update the primary entrypoints:
- `README.md`
- `docs/README.md`
- `docs/INSTALL.md`
3. Fix contradictions and remove duplicates (temperature, auto-update, adaptive polling).
4. Update `docs/API.md` to reflect current endpoints (especially AI).
5. Add v5 upgrade guide and deployment matrix.
6. Sweep FAQ + troubleshooting for the new canonical flows.


@@ -3,7 +3,14 @@
## 🛠️ Installation & Setup
### What's the easiest way to install?
Use Docker:
If you run Proxmox VE, use the official LXC installer (recommended):
```bash
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | bash
```
If you prefer Docker:
```bash
docker run -d --name pulse -p 7655:7655 -v pulse_data:/data rcourtman/pulse:latest
```
@@ -11,8 +18,12 @@ docker run -d --name pulse -p 7655:7655 -v pulse_data:/data rcourtman/pulse:late
See [INSTALL.md](INSTALL.md) for all options (Docker Compose, Kubernetes, systemd).
### How do I add a node?
**Auto-discovery (Recommended)**: Go to **Settings → Nodes**, find your node in the "Discovered" list, click "Setup Script", and run the provided command on your Proxmox host.
**Manual**: Go to **Settings → Nodes → Add Node** and enter the credentials manually.
Go to **Settings → Proxmox**.
- **Recommended (Agent setup)**: choose **Setup mode: Agent** and run the generated install command on the Proxmox host.
- **Manual**: choose **Setup mode: Manual** and enter the credentials (password or API token) for the Proxmox API.
If you want Pulse to find servers automatically, enable discovery in **Settings → System → Network** and then return to **Settings → Proxmox** to review discovered servers.
### How do I change the port?
- **Systemd**: `sudo systemctl edit pulse`, add `Environment="FRONTEND_PORT=8080"`, restart.
@@ -36,9 +47,12 @@ Yes! If Pulse detects Ceph storage, it automatically queries cluster health, OSD
Yes. Go to **Alerts → Thresholds** and set any value to `-1` to disable it. You can do this globally or per-resource (VM/Node).
### How do I monitor temperature?
Pulse uses a secure sensor proxy.
1. Install `lm-sensors` on your host (`apt install lm-sensors && sensors-detect`).
2. Run the Pulse setup script on the node again to install the sensor proxy.
Install the unified agent on your Proxmox hosts with Proxmox integration enabled:
1. Install `lm-sensors` on the host (`apt install lm-sensors && sensors-detect`)
2. Install `pulse-agent` with `--enable-proxmox`
`pulse-sensor-proxy` is deprecated in v5 and is not recommended for new deployments.
See [Temperature Monitoring](TEMPERATURE_MONITORING.md).
---
@@ -50,16 +64,22 @@ See [Temperature Monitoring](TEMPERATURE_MONITORING.md).
```bash
docker exec pulse rm /data/.env
docker restart pulse
# Access UI to run setup wizard again
# Access UI again. Pulse will require a bootstrap token for setup.
# Get it with:
docker exec pulse /app/pulse bootstrap-token
```
**Systemd**:
Delete `/etc/pulse/.env` and restart the service.
Delete `/etc/pulse/.env` and restart the service. Pulse will require a bootstrap token for setup:
```bash
sudo pulse bootstrap-token
```
### How do I enable HTTPS?
Set `HTTPS_ENABLED=true` and provide `TLS_CERT_FILE` and `TLS_KEY_FILE` environment variables. See [Configuration](CONFIGURATION.md#https--tls).
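For a systemd install, a minimal sketch (the certificate paths are placeholders):

```bash
sudo systemctl edit pulse
# Add:
# [Service]
# Environment="HTTPS_ENABLED=true"
# Environment="TLS_CERT_FILE=/etc/pulse/certs/server.crt"
# Environment="TLS_KEY_FILE=/etc/pulse/certs/server.key"
sudo systemctl restart pulse
```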
### Can I use Single Sign-On (SSO)?
Yes. Pulse supports OIDC (Settings → Security → OIDC) and Proxy Auth (Authentik, Authelia). See [Proxy Auth Guide](PROXY_AUTH.md).
Yes. Pulse supports OIDC in **Settings → Security → Single Sign-On** and Proxy Auth (Authentik, Authelia). See [Proxy Auth Guide](PROXY_AUTH.md) and [OIDC](OIDC.md).
---
@@ -67,7 +87,7 @@ Yes. Pulse supports OIDC (Settings → Security → OIDC) and Proxy Auth (Authen
### No data showing?
- Check Proxmox API is reachable (port 8006).
- Verify credentials in **Settings → Nodes**.
- Verify credentials in **Settings → Proxmox**.
- Check logs: `journalctl -u pulse -f` or `docker logs -f pulse`.
### Connection refused?
@@ -78,4 +98,4 @@ Yes. Pulse supports OIDC (Settings → Security → OIDC) and Proxy Auth (Authen
Set `ALLOWED_ORIGINS=https://your-domain.com` environment variable if accessing Pulse from a different domain.
### High memory usage?
Reduce `METRICS_RETENTION_DAYS` (default 7) via environment variable if running on very constrained hardware.
If you are storing long history windows, reduce metrics retention (see [METRICS_HISTORY.md](METRICS_HISTORY.md)). Also confirm your polling intervals match your environment size.


@@ -4,6 +4,15 @@ Pulse offers flexible installation options from Docker to enterprise-ready Kuber
## 🚀 Quick Start (Recommended)
### Proxmox VE (LXC installer)
If you run Proxmox VE, the easiest and most “Pulse-native” deployment is the official installer, which creates and configures a lightweight LXC container.
Run this on your Proxmox host:
```bash
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | bash
```
### Docker
Ideal for containerized environments or testing.
@@ -46,22 +55,27 @@ volumes:
Deploy to your cluster using our Helm chart.
```bash
helm repo add pulse https://rcourtman.github.io/Pulse/
helm repo update
helm install pulse pulse/pulse \
helm upgrade --install pulse oci://ghcr.io/rcourtman/pulse-chart \
--namespace pulse \
--create-namespace
```
See [KUBERNETES.md](KUBERNETES.md) for ingress and persistence configuration.
### 2. Bare Metal / Systemd
For bare-metal Linux servers, download the release binary directly.
```bash
# Download and extract
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/pulse-$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m).tar.gz | tar xz
sudo mv pulse /usr/local/bin/
sudo chmod +x /usr/local/bin/pulse
```
For Linux servers (VM or bare metal), use the official installer:
```bash
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | sudo bash
```
<details>
<summary><strong>Manual systemd install (advanced)</strong></summary>
```bash
# Download the correct tarball from GitHub Releases and extract it
# https://github.com/rcourtman/Pulse/releases
sudo install -m 0755 pulse /usr/local/bin/pulse
# Create systemd service
sudo tee /etc/systemd/system/pulse.service > /dev/null << 'EOF'
@@ -85,6 +99,7 @@ sudo mkdir -p /etc/pulse
sudo systemctl daemon-reload
sudo systemctl enable --now pulse
```
</details>
---
@@ -96,16 +111,16 @@ Pulse is secure by default. On first launch, you must retrieve a **Bootstrap Tok
| Platform | Command |
|----------|---------|
| **Docker** | `docker exec pulse cat /data/.bootstrap_token` |
| **Kubernetes** | `kubectl exec -it <pod> -- cat /data/.bootstrap_token` |
| **Systemd** | `sudo cat /etc/pulse/.bootstrap_token` |
| **Docker** | `docker exec pulse cat /data/.bootstrap_token` or `docker exec pulse /app/pulse bootstrap-token` |
| **Kubernetes** | `kubectl exec -it <pod> -- cat /data/.bootstrap_token` or `kubectl exec -it <pod> -- /app/pulse bootstrap-token` |
| **Systemd** | `sudo cat /etc/pulse/.bootstrap_token` or `sudo pulse bootstrap-token` |
### Step 2: Create Admin Account
1. Open `http://<your-ip>:7655`
2. Paste the **Bootstrap Token**.
3. Create your **Admin Username** and **Password**.
> **Note**: If you configure `PULSE_AUTH_USER` and `PULSE_AUTH_PASS` via environment variables, this step is skipped.
> **Note**: If you configure authentication via environment variables (`PULSE_AUTH_USER`/`PULSE_AUTH_PASS` and/or `API_TOKENS`), the bootstrap token is automatically removed and this step is skipped.
---
@@ -114,7 +129,7 @@ Pulse is secure by default. On first launch, you must retrieve a **Bootstrap Tok
### Automatic Updates (Systemd only)
Pulse can self-update to the latest stable version.
**Enable via UI**: Settings → System → Automatic Updates
**Enable via UI**: Settings → System → Updates
### Manual Update
| Platform | Command |


@@ -4,25 +4,21 @@ Deploy Pulse to Kubernetes using the official Helm chart.
## 🚀 Installation
1. **Add Repo**
1. **Install (OCI chart, recommended)**
```bash
helm repo add pulse https://rcourtman.github.io/Pulse/
helm repo update
```
2. **Install**
```bash
helm install pulse pulse/pulse \
helm upgrade --install pulse oci://ghcr.io/rcourtman/pulse-chart \
--namespace pulse \
--create-namespace
```
3. **Access**
2. **Access**
```bash
kubectl -n pulse port-forward svc/pulse 7655:7655
```
Open `http://localhost:7655` to complete setup.
> If you installed using a Helm repository URL previously, you can keep using it. OCI is the preferred distribution format going forward.
---
## ⚙️ Configuration
@@ -35,7 +31,9 @@ Configure via `values.yaml` or `--set` flags.
| `ingress.enabled` | Enable Ingress | `false` |
| `persistence.enabled` | Enable PVC for /data | `true` |
| `persistence.size` | PVC Size | `8Gi` |
| `agent.enabled` | Enable Docker Agent sidecar | `false` |
| `agent.enabled` | Enable legacy docker agent workload | `false` |
> Note: the `agent.*` block is legacy and currently references `pulse-docker-agent`. For new deployments, prefer the unified agent (`pulse-agent`) where possible.
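For example, overriding a couple of the values above at install time (a sketch):

```bash
helm upgrade --install pulse oci://ghcr.io/rcourtman/pulse-chart \
  --namespace pulse --create-namespace \
  --set ingress.enabled=true \
  --set persistence.size=16Gi
```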
### Example `values.yaml`
@@ -59,7 +57,7 @@ server:
API_TOKENS: "my-token"
agent:
enabled: true
enabled: false
secretEnv:
create: true
data:
@@ -76,8 +74,7 @@ helm upgrade --install pulse pulse/pulse -n pulse -f values.yaml
## 🔄 Upgrades
```bash
helm repo update
helm upgrade pulse pulse/pulse -n pulse
helm upgrade pulse oci://ghcr.io/rcourtman/pulse-chart -n pulse
```
**Rollback**:


@@ -19,32 +19,26 @@ Pulse 5.0 adds support for monitoring Proxmox Mail Gateway instances alongside y
4. Enter connection details:
- Host: Your PMG IP or hostname
- Port: 8006 (default)
- API Token ID: e.g., `root@pam!pulse`
- API Token Secret: Your token secret
- API Token ID: e.g., `root@pmg!pulse` (format: `<user>@<realm>!<token-id>`)
- API Token Secret: Your token secret (shown once when you create the token)
### Via Discovery
Pulse can automatically discover PMG instances on your network:
1. Go to **Settings → Discovery**
2. Enable network discovery
3. PMG instances on port 8006 will be detected
4. Click to add discovered instances
1. Enable discovery in **Settings → System → Network**
2. Go to **Settings → Proxmox**
3. PMG instances on port 8006 are detected and shown in the Proxmox discovery panels
4. Click a discovered PMG server to add it
## API Token Setup on PMG
Create an API token on your PMG server:
```bash
# SSH to your PMG server
pveum user token add root@pam pulse --privsep 0

# Note the token secret - it's only shown once!
```
Create an API token on your PMG server (recommended). The easiest method is via the PMG web UI:
- Create a token for a user (for example `root@pmg`)
- Copy the token secret when it is displayed (it is typically shown once)
Required permissions:
- `Sys.Audit` - Read system status
- `Datastore.Audit` - Read mail statistics
If you see 403/permission errors, start by testing with a token for an admin user to confirm connectivity, then tighten permissions once you know which PMG endpoints your instance requires.
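A connectivity sketch once the token exists (the `PMGAPIToken` header scheme is an assumption based on PMG's API conventions; adjust the user and token ID to yours):

```bash
curl -k -H "Authorization: PMGAPIToken=root@pmg!pulse:<token-secret>" \
  "https://<pmg-host>:8006/api2/json/nodes"
```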
## Dashboard


@@ -1,87 +1,74 @@
# Metrics History
# Metrics History (Persistent)
Pulse 5.0 introduces persistent metrics history, allowing you to view historical resource usage data and trends over time.
Pulse persists metrics history to disk so trend views and sparklines survive restarts.
## Features
## Storage Location
- **Persistent Storage**: Metrics are saved to disk and survive restarts
- **Configurable Retention**: Set how long to keep different metric types
- **Trend Analysis**: View resource usage patterns over time
- **Spark Lines**: See at-a-glance trends in the dashboard
Metrics history is stored in a SQLite database named `metrics.db` under the Pulse data directory:
## Configuration
- **systemd/LXC installs**: typically `/etc/pulse/metrics.db`
- **Docker/Kubernetes installs**: typically `/data/metrics.db`
### Retention Settings
## Retention Model (Tiered)
Configure retention periods in **Settings → General → Metrics History**:
Pulse keeps multiple resolutions of the same data, which allows longer history without storing raw samples forever:
| Metric Type | Default | Description |
|-------------|---------|-------------|
| **Host Metrics** | 7 days | CPU, memory, disk for hypervisors |
| **Guest Metrics** | 7 days | VM and container metrics |
| **Container Metrics** | 3 days | Docker/Podman container stats |
| **Aggregate Metrics** | 30 days | Cluster-wide summaries |
- **Raw** (high-resolution, short window)
- **Minute aggregates**
- **Hourly aggregates**
- **Daily aggregates**
### Environment Variables
Default retention values (subject to change) are:
```bash
# Override via environment
PULSE_METRICS_HOST_RETENTION_DAYS=14
PULSE_METRICS_GUEST_RETENTION_DAYS=14
PULSE_METRICS_CONTAINER_RETENTION_DAYS=7
PULSE_METRICS_AGGREGATE_RETENTION_DAYS=60
- Raw: 2 hours
- Minute: 24 hours
- Hourly: 7 days
- Daily: 90 days
## Advanced: Retention Tuning
Tiered retention is stored in `system.json` in the Pulse data directory:
- **systemd/LXC installs**: typically `/etc/pulse/system.json`
- **Docker/Kubernetes installs**: typically `/data/system.json`
Keys:
```json
{
"metricsRetentionRawHours": 2,
"metricsRetentionMinuteHours": 24,
"metricsRetentionHourlyDays": 7,
"metricsRetentionDailyDays": 90
}
```
## Storage
Metrics are stored in `/etc/pulse/data/metrics/` (or your configured data directory).
### Disk Usage
Approximate storage requirements:
- ~1 KB per resource per hour
- 10 hosts × 50 guests × 7 days ≈ 8 MB
### Database Maintenance
Pulse automatically:
- Compacts old data
- Prunes metrics beyond retention period
- Optimizes storage during low-usage periods
After changing these values, restart Pulse.
## API Access
Query historical metrics via the API:
Pulse exposes the persistent metrics store via:
- `GET /api/metrics-store/stats`
- `GET /api/metrics-store/history`
### History Query Parameters
`GET /api/metrics-store/history` supports:
- `resourceType` (required): `node`, `guest`, `storage`, `docker`, `dockerHost`
- `resourceId` (required): resource identifier
- `metric` (optional): `cpu`, `memory`, `disk`, etc. Omit to return all metrics for the resource.
- `range` (optional): `1h`, `6h`, `12h`, `24h`, `7d`, `30d`, `90d` (default `24h`)
Example:
```bash
# Get metrics for a specific resource
curl -H "X-API-Token: $TOKEN" \
"http://localhost:7655/api/metrics/history?resource=vm-100&hours=24"
# Get aggregated cluster metrics
curl -H "X-API-Token: $TOKEN" \
"http://localhost:7655/api/metrics/history?type=aggregate&days=7"
"http://localhost:7655/api/metrics-store/history?resourceType=guest&resourceId=vm-100&range=7d&metric=cpu"
```
## Visualization
### Dashboard Sparklines
The dashboard shows 24-hour trend sparklines for each resource, updating in real-time.
### Detailed Charts
Click on any resource to see detailed historical charts with:
- Selectable time ranges (1h, 6h, 24h, 7d, 30d)
- Multiple metric overlays (CPU, memory, disk, network)
- Zoom and pan controls
## Troubleshooting
### Metrics not persisting
1. Check data directory permissions
2. Verify disk space availability
3. Check logs: `journalctl -u pulse | grep metrics`
### High disk usage
1. Reduce retention periods in Settings
2. Exclude low-value resources from history
3. Run manual cleanup: Settings → General → Clear Old Metrics
- **No sparklines / empty history**: confirm the instance can write to the data directory and that `metrics.db` exists (quick check sketched below).
- **Large disk usage**: reduce polling frequency first. If you need tighter retention, adjust the tiered retention settings in `system.json` (advanced) and restart Pulse.
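A quick check for both cases on a systemd/LXC install (adjust paths for Docker/Kubernetes):
```bash
# Confirm the metrics database exists and is growing
ls -lh /etc/pulse/metrics.db
# Look for metrics-store warnings or errors in the service log
journalctl -u pulse -n 200 | grep -i metrics
```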

View File

@@ -1,34 +1,36 @@
# 🚚 Migrating Pulse
**Updated for Pulse v4.24.0+**
This guide covers migrating Pulse to a new host using the built-in encrypted export/import workflow.
## 🚀 Quick Migration Guide
### ❌ DON'T: Copy Files
Never copy `/etc/pulse` or `/var/lib/pulse` manually. Encryption keys and credentials will break.
Never copy `/etc/pulse` (or `/data` in Docker/Kubernetes) manually. Encryption keys and credentials can break.
### ✅ DO: Use Export/Import
#### 1. Export (Old Server)
1. Go to **Settings → Configuration Management**.
2. Click **Export Configuration**.
3. Enter a strong passphrase and save the `.enc` file.
1. Go to **Settings → System → Backups**.
2. Click **Create Backup**.
3. Enter a strong passphrase and download the encrypted backup.
#### 2. Import (New Server)
1. Install a fresh Pulse instance.
2. Go to **Settings → Configuration Management**.
3. Click **Import Configuration** and upload your file.
2. Go to **Settings → System → Backups**.
3. Click **Restore Configuration** and upload your file.
4. Enter the passphrase.
## 📦 What Gets Migrated
| Included ✅ | Not Included ❌ |
| :--- | :--- |
| Nodes & Credentials | Historical Metrics |
| Alert Settings | Alert History |
| Email & Webhooks | Auth Settings (Passwords/Tokens) |
| System Settings | Update Rollback History |
| Guest Metadata | |
| Nodes & credentials | Historical metrics history (`metrics.db`) |
| Alerts & overrides | Browser sessions and local cookies |
| Notifications (email, webhooks, Apprise) | Local login username/password (`.env`) |
| System settings (`system.json`) | Update history/backup folders |
| API token records | |
| OIDC config | |
| Guest metadata/notes | |
## 🔄 Common Scenarios
@@ -45,12 +47,12 @@ The export file works across all installation methods. You can migrate from Dock
## 📋 Post-Migration Checklist
Because authentication secrets are excluded from exports, you must:
Because local login credentials are stored in `.env` (not part of exports), you must:
1. **Re-create Admin User**: If not using `.env` overrides, create your admin account on the new instance.
2. **Re-issue API Tokens**:
* Go to **Settings → Security**.
* Generate new tokens for your agents and scripts.
2. **Confirm API access** (see the sketch after this list):
* If you created API tokens in the UI, those token records are included in the export and should continue working.
* If you used `.env`-based `API_TOKENS`/`API_TOKEN`, reconfigure them on the new host.
3. **Update Agents**:
* **Unified Agent**: Update the `--token` flag in your service definition.
* **Docker**: Update `PULSE_TOKEN` in your container config.
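A minimal way to verify a migrated UI-issued token still authenticates (host and token are placeholders):
```bash
# Expect HTTP 200 and the running version if the token record migrated correctly
curl -fsS -H "X-API-Token: $TOKEN" "http://new-pulse-host:7655/api/version"
```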
@@ -58,7 +60,7 @@ Because authentication secrets are excluded from exports, you must:
## 🔒 Security
* **Encryption**: Exports are encrypted with PBKDF2 (100k iterations).
* **Encryption**: Exports are encrypted with passphrase-based encryption (PBKDF2 + AES-GCM).
* **Storage**: Safe to store in cloud backups or password managers.
* **Passphrase**: Use a strong, unique passphrase (min 12 chars).
@@ -67,4 +69,4 @@ Because authentication secrets are excluded from exports, you must:
* **"Invalid passphrase"**: Ensure exact match (case-sensitive).
* **Missing Nodes**: Verify export date.
* **Connection Errors**: Update node IPs in Settings if they changed.
* **Logging**: Re-configure log levels in **Settings → System → Logging** if needed.
* **Logging**: Adjust `LOG_LEVEL`/`LOG_FORMAT` via environment variables if needed.

View File

@@ -2,6 +2,8 @@
The Control Plane synchronizes `pulse-sensor-proxy` instances with the Pulse server, ensuring they trust the correct nodes without manual configuration.
> **Deprecated in v5:** `pulse-sensor-proxy` (and its control-plane sync) is deprecated and not recommended for new deployments. New installs should use `pulse-agent --enable-proxmox` for temperature monitoring.
## 🏗️ Architecture
```mermaid
@@ -37,4 +39,3 @@ pulse_control_plane:
* **Tokens**: The `ctrl_token` is unique per proxy instance.
* **Least Privilege**: The proxy only knows about nodes explicitly added to Pulse.
* **Fallback**: If the control plane is unreachable, the proxy uses its last known good configuration.

View File

@@ -0,0 +1,401 @@
# Pulse Pro Implementation Plan
**Goal**: Gate AI features behind a Pro license to create a sustainable income stream.
**Timeline**: ~1-2 weeks of focused work
---
## Phase 1: License System Architecture
### 1.1 License Format (Simple JWT)
```go
// internal/license/license.go
type LicenseData struct {
LicenseID string `json:"lid"` // Unique license ID
Email string `json:"email"` // Customer email
Tier string `json:"tier"` // "pro", "msp", "enterprise"
IssuedAt time.Time `json:"iat"`
ExpiresAt time.Time `json:"exp"` // Empty = lifetime
MaxNodes int `json:"max_nodes"` // 0 = unlimited
Features []string `json:"features"` // ["ai_chat", "ai_patrol", "ai_alerts"]
}
```
**Why JWT?**
- Self-contained (no license server needed for validation)
- Signed with your private key, verified with public key embedded in binary
- Can be verified offline (important for air-gapped homelabs)
- Standard format, easy to generate from any payment processor webhook
### 1.2 License Validation Flow
```
┌────────────────────────────────────────────────────────────────┐
│ User purchases license on LemonSqueezy/Gumroad │
│ ↓ │
│ Webhook hits your simple license API (can be CloudFlare Worker)│
│ ↓ │
│ Generate JWT signed with private key │
│ ↓ │
│ Email license key to customer │
│ ↓ │
│ User pastes key in Pulse Settings → Pro tab │
│ ↓ │
│ Pulse validates signature with embedded public key │
│ ↓ │
│ Store encrypted license in config dir (license.enc) │
└────────────────────────────────────────────────────────────────┘
```
---
## Phase 2: Backend Implementation
### 2.1 New Files to Create
```
internal/license/
├── license.go # License struct, validation, JWT parsing
├── license_test.go # Tests
└── features.go # Feature flags (what Pro includes)
internal/config/
└── license.go # License persistence (store/load from disk)
```
### 2.2 License Service
```go
// internal/license/license.go
package license
import (
"crypto/ed25519"
"encoding/base64"
"errors"
"time"
"github.com/golang-jwt/jwt/v5"
)
// Embedded public key (compiled into binary)
// Generate keypair: go run ./cmd/license-keygen
var publicKeyBase64 = "YOUR_PUBLIC_KEY_HERE"
type Service struct {
license *LicenseData
loaded bool
}
func NewService() *Service {
return &Service{}
}
func (s *Service) LoadFromKey(licenseKey string) error {
	// Decode the embedded Ed25519 public key and verify the JWT signature.
	rawKey, err := base64.StdEncoding.DecodeString(publicKeyBase64)
	if err != nil {
		return errors.New("license: embedded public key is not valid base64")
	}
	claims := jwt.MapClaims{}
	token, err := jwt.ParseWithClaims(licenseKey, claims, func(t *jwt.Token) (interface{}, error) {
		return ed25519.PublicKey(rawKey), nil
	}, jwt.WithValidMethods([]string{"EdDSA"}))
	if err != nil || !token.Valid {
		return errors.New("license: invalid or tampered license key")
	}
	// TODO: map claims (lid, email, tier, iat/exp, max_nodes, features) into LicenseData.
	s.license = &LicenseData{}
	s.loaded = true
	return nil
}
func (s *Service) IsValid() bool {
if s.license == nil {
return false
}
if !s.license.ExpiresAt.IsZero() && time.Now().After(s.license.ExpiresAt) {
return false
}
return true
}
func (s *Service) HasFeature(feature string) bool {
if !s.IsValid() {
return false
}
for _, f := range s.license.Features {
if f == feature || f == "all" {
return true
}
}
return false
}
// Feature constants
const (
FeatureAIChat = "ai_chat"
FeatureAIPatrol = "ai_patrol"
FeatureAIAlerts = "ai_alerts"
FeatureOIDC = "oidc" // SSO/OIDC authentication
FeatureKubernetes = "kubernetes" // K8s cluster monitoring
FeatureMultiUser = "multi_user" // Multiple user accounts
FeatureAPIAccess = "api_access" // Full API access for integrations
FeatureWhiteLabel = "white_label" // Custom branding (MSP tier)
FeatureAll = "all"
)
```
### 2.3 Integration Points
Modify these files to check license:
| File | What to Gate |
|------|--------------|
| `internal/api/ai_handlers.go` | Chat endpoints, patrol endpoints |
| `internal/ai/patrol.go` | Patrol service start |
| `internal/ai/service.go` | AI chat service |
| `internal/ai/alert_triggered.go` | Alert analysis |
| `internal/api/oidc_handlers.go` | OIDC/SSO configuration |
| `internal/api/kubernetes_handlers.go` | K8s cluster endpoints |
| `internal/monitoring/kubernetes/` | K8s monitoring service |
**Example gating in ai_handlers.go:**
```go
func (h *AISettingsHandler) HandleChat(w http.ResponseWriter, r *http.Request) {
// Check Pro license
if !h.licenseService.HasFeature(license.FeatureAIChat) {
utils.WriteJSONError(w, http.StatusPaymentRequired,
"AI Chat requires Pulse Pro. Visit https://pulserelay.pro to upgrade.")
return
}
// ... existing logic
}
```
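From the outside, a gated call would then fail fast. A sketch (the endpoint path is illustrative, not the actual route):
```bash
# Without a Pro license, gated AI endpoints return HTTP 402 Payment Required
curl -i -X POST -H "X-API-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"message":"hello"}' \
  "http://localhost:7655/api/ai/chat"
```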
---
## Phase 3: Frontend Implementation
### 3.1 New Settings Tab: "Pro License"
Location: `frontend-modern/src/routes/settings/+page.svelte` (or equivalent)
```
┌──────────────────────────────────────────────────────────────┐
│ ⚡ Pulse Pro │
├──────────────────────────────────────────────────────────────┤
│ │
│ ┌────────────────────────────────────────────────────────┐ │
│ │ License Key │ │
│ │ [________________________________________________] │ │
│ │ [Activate License] │ │
│ └────────────────────────────────────────────────────────┘ │
│ │
│ ✅ License Status: Active (Pro) │
│ 📧 Licensed to: user@example.com │
│ 📅 Expires: Never (Lifetime) │
│ │
│ ───────────────────────────────────────────────────────── │
│ │
│ Included Features: │
│ ✅ AI Chat Assistant │
│ ✅ AI Patrol (Background Health Checks) │
│ ✅ AI Alert Analysis │
│ ✅ Priority Support │
│ │
│ ───────────────────────────────────────────────────────── │
│ │
│ Don't have a license? │
│ [Get Pulse Pro →] https://pulserelay.pro │
│ │
└──────────────────────────────────────────────────────────────┘
```
### 3.2 Graceful Degradation for Unlicensed Users
When AI features are accessed without a license:
- **AI Settings Tab**: Show features but disabled with "Upgrade to Pro" message
- **Chat Button**: Show but with "Pro" badge, clicking prompts upgrade
- **Patrol Findings**: Hide or show "Enable with Pro" placeholder
**Don't be hostile.** The free version should still feel complete. Pro is an enhancement, not a hostage situation.
---
## Phase 4: Payment & License Generation
### 4.1 Payment Processor: LemonSqueezy
**Why LemonSqueezy over alternatives?**
- Handles global VAT/sales tax automatically
- Generates invoices (enterprises need this)
- Good webhook support for automation
- Reasonable fees (~5% + 50¢)
- Supports both subscription and one-time payments
### 4.2 Pricing Structure (Suggested)
| Tier | Price | Features | Target |
|------|-------|----------|--------|
| **Pro Monthly** | $12/month | AI features, OIDC/SSO, K8s monitoring | Individuals |
| **Pro Annual** | $99/year | Same as monthly, 2 months free | Power users |
| **Pro Lifetime** | $249 one-time | All Pro features, forever | Homelabbers who hate subscriptions |
| **MSP** | $49/month | All Pro + unlimited instances, white-label, multi-tenant | MSPs |
| **Enterprise** | Custom | All features + support SLA, on-prem license server | Large orgs |
### 4.3 Feature Matrix
| Feature | Free | Pro | MSP | Enterprise |
|---------|------|-----|-----|------------|
| Proxmox VE/PBS/PMG monitoring | ✅ | ✅ | ✅ | ✅ |
| Docker/Podman monitoring | ✅ | ✅ | ✅ | ✅ |
| Alerts (Discord, Slack, etc.) | ✅ | ✅ | ✅ | ✅ |
| Metrics history | ✅ | ✅ | ✅ | ✅ |
| Backup explorer | ✅ | ✅ | ✅ | ✅ |
| **AI Chat** | ❌ | ✅ | ✅ | ✅ |
| **AI Patrol** | ❌ | ✅ | ✅ | ✅ |
| **AI Alert Analysis** | ❌ | ✅ | ✅ | ✅ |
| **OIDC/SSO** | ❌ | ✅ | ✅ | ✅ |
| **Kubernetes monitoring** | ❌ | ✅ | ✅ | ✅ |
| Unlimited instances | ❌ | ❌ | ✅ | ✅ |
| White-label branding | ❌ | ❌ | ✅ | ✅ |
| Multi-tenant mode | ❌ | ❌ | ✅ | ✅ |
| Priority support | ❌ | Email | Email | Dedicated |
| SLA | ❌ | ❌ | ❌ | ✅ |
### 4.4 License Generation Service
A simple Cloudflare Worker or Vercel Edge Function:
```javascript
// Simplified license generator (LemonSqueezy webhook handler)
addEventListener('fetch', event => {
event.respondWith(handleRequest(event.request))
})
async function handleRequest(request) {
const webhook = await request.json()
if (webhook.meta.event_name === 'order_created') {
const license = generateLicense({
email: webhook.data.attributes.user_email,
tier: 'pro',
expiresAt: null, // lifetime for now
})
// Send license via email
await sendLicenseEmail(webhook.data.attributes.user_email, license)
}
return new Response('OK')
}
function generateLicense(data) {
// Sign JWT with private key
// Return base64-encoded license key
}
```
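Before wiring up LemonSqueezy, the worker can be exercised with a hand-rolled payload shaped like the fields the handler reads (worker URL is a placeholder):
```bash
# Simulate an order_created webhook against the deployed worker
curl -X POST "https://license.example.workers.dev" \
  -H "Content-Type: application/json" \
  -d '{"meta":{"event_name":"order_created"},"data":{"attributes":{"user_email":"user@example.com"}}}'
```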
---
## Phase 5: Launch Communication
### 5.1 Changelog Entry
```markdown
## v5.0.0 - The AI Update
### 🚀 Major Changes
**Introducing Pulse Pro**
Pulse 5.0 includes powerful AI features that require a Pro license:
- **AI Chat**: Natural language interface to your infrastructure
- **AI Patrol**: Background health monitoring and insights
- **AI Alert Analysis**: Smart analysis when alerts fire
Core monitoring features remain **completely free and open source**.
Pro licenses support ongoing development and enable me to work on Pulse full-time.
[Get Pulse Pro →](https://pulserelay.pro)
---
*Pulse has grown from a weekend project to something used by thousands.
To keep improving it, I need to make it sustainable. Thank you for your support!*
— Richard
```
### 5.2 Preemptive FAQ
Add to README or docs:
**Q: Why are AI features paid?**
A: AI features require significant development effort and ongoing maintenance. Pro licenses let me work on Pulse sustainably while keeping core monitoring free.
**Q: Will monitoring features become paid?**
A: No. Proxmox/Docker/K8s monitoring, alerts, history, and all current free features will remain free forever.
**Q: What if I'm already using AI in the RC?**
A: Thank you for testing! RC users were beta testers helping shape these features. The final release requires a Pro license.
**Q: I can't afford Pro.**
A: Email me (richard@pulserelay.pro). I offer discounts for students, hobbyists in financial hardship, and open source contributors.
**Q: Can I self-host without Pro?**
A: Absolutely. Pulse works great without AI features. Pro is optional.
---
## Implementation Order
1. **Week 1: Backend**
- [ ] Create `internal/license/` package
- [ ] Implement JWT validation with embedded public key
- [ ] Add license persistence (encrypted storage)
- [ ] Gate AI endpoints with license checks
- [ ] Add `/api/license` endpoints (check, activate)
2. **Week 2: Frontend + Payment**
- [ ] Add Pro License settings tab
- [ ] Update AI settings to show Pro-gated state
- [ ] Set up LemonSqueezy product
- [ ] Create license generation webhook
- [ ] Set up pulserelay.pro landing page (can be simple)
- [ ] Write announcement blog post
3. **Launch**
- [ ] Release v5.0.0 stable
- [ ] Post to Reddit (/r/homelab, /r/Proxmox, /r/selfhosted)
- [ ] Post to GitHub Discussions
- [ ] Email mailing list (if you have one)
---
## Security Considerations
1. **Private key**: Never commit to repo. Store in password manager + secure backup.
2. **License validation**: Always verify signature, never trust claims without verification.
3. **Obfuscation**: Consider light obfuscation of license check code (not for security, but to discourage trivial patching).
4. **Grace period**: If validation fails, consider granting a 7-day grace period before disabling features (better UX).
---
## What NOT to Do
- ❌ Phone-home license validation (breaks air-gapped installs)
- ❌ Aggressive license enforcement (pisses off users)
- ❌ Remove free features to "encourage" upgrades
- ❌ Make the free version feel crippled
- ❌ Hide that it's paid (be upfront in README)
---
## Success Metrics (First 90 Days)
| Metric | Target |
|--------|--------|
| Pro licenses sold | 50-100 |
| Monthly revenue | $500-$1000 |
| Churn rate | <5% |
| Negative community reactions | <10 vocal complaints |
| GitHub stars lost | <50 |
If you hit these numbers, you've validated the model. Then you can expand to MSP tier, add features, etc.
---
*This plan can be adjusted based on your preferences. Want me to help implement any specific part?*

View File

@@ -10,8 +10,12 @@ Welcome to the Pulse documentation portal. Here you'll find everything you need
Step-by-step guides for Docker, Kubernetes, and bare metal.
- **[Configuration](CONFIGURATION.md)**
Learn how to configure authentication, notifications (Email, Discord, etc.), and system settings.
- **[Deployment Models](DEPLOYMENT_MODELS.md)**
Where config lives, how updates work, and what differs per deployment.
- **[Migration Guide](MIGRATION.md)**
Moving to a new server? Here's how to export and import your data safely.
- **[Upgrade to v5](UPGRADE_v5.md)**
Practical upgrade guidance and post-upgrade checks.
- **[FAQ](FAQ.md)**
Common questions and quick answers.
@@ -28,11 +32,11 @@ Welcome to the Pulse documentation portal. Here you'll find everything you need
## ✨ New in 5.0
- **[Pulse AI](AI.md)** Intelligent monitoring assistant with chat, patrol mode, and auto-fix.
- **[Pulse AI](AI.md)** Optional assistant for chat, patrol findings, alert analysis, and execution workflows.
- **[Metrics History](METRICS_HISTORY.md)** Persistent metrics storage with configurable retention.
- **[Mail Gateway](MAIL_GATEWAY.md)** Proxmox Mail Gateway (PMG) monitoring.
- **[Auto Updates](AUTO_UPDATE.md)** One-click updates for supported deployments.
- **[Kubernetes](KUBERNETES.md)** Complete K8s cluster monitoring via agents.
- **[Kubernetes](KUBERNETES.md)** Helm deployment (ingress, persistence, HA patterns).
## 📡 Monitoring & Agents

View File

@@ -1,128 +1,8 @@
# Pulse v4.31.0
# Release Notes
## What's Changed
Pulse release notes live on GitHub:
https://github.com/rcourtman/Pulse/releases
### Temperature monitoring over HTTPS
- `pulse-sensor-proxy` now exposes an authenticated HTTPS endpoint per Proxmox host. Pulse stores each proxy's URL + bearer token and always polls `https://node:8443/temps` before falling back to local sockets or SSH, eliminating the fragile “single proxy for every node” chain.
- Installations auto-register via the new `/api/temperature-proxy/register` endpoint, generate 4096-bit certificates, enforce CIDR allowlists, and log every HTTP request through the proxy's audit pipeline.
- The backend temperature collector understands proxy URLs/tokens, respects strict timeouts, and publishes richer diagnostics so operators can see which node failed and why.
For historical v4 notes that previously lived in this repo, see:
`docs/releases/RELEASE_NOTES_v4.md`
### Installer, diagnostics, and UI updates
- `scripts/install-sensor-proxy.sh` gained `--http-mode` / `--http-addr`, automatic TLS generation, rollback-on-failure, allowed subnet auto-population, and a comprehensive uninstall path that purges sockets, TLS secrets, and LXC bind mounts.
- A new `Settings → Diagnostics → Temperature Proxy` table surfaces proxy health, registration status, and the errors returned by the HTTPS endpoint.
- `scripts/tests/test-sensor-proxy-http.sh` exercises the HTTP installer path end-to-end inside Docker to prevent regressions.
### Host agent refinements
- Windows PowerShell installers/uninstallers now log verbosely, harden permissions, and clean up services more reliably.
- Linux host-agent scripts aligned with the new diagnostics UX and scoped token workflow so onboarding is less error-prone.
## Upgrade Notes
Temperature monitoring will not work for remote nodes until every Proxmox host is reinstalled with the new HTTPS workflow. Follow these steps per host:
```bash
# 1. Remove any pre-v4.31.0 proxy install
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install-sensor-proxy.sh | \
sudo bash -s -- --uninstall --purge
# 2. Install the HTTP-enabled proxy and register it with Pulse
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install-sensor-proxy.sh | \
sudo bash -s -- --standalone --http-mode --pulse-server https://your-pulse-host:7655
```
Only the Pulse server (or container host) needs network access to TCP/8443 on each node. After reinstalling, open **Settings → Diagnostics → Temperature Proxy** to confirm each node reports “HTTPS proxy healthy”. If not, grab the diagnostics entry or run:
```bash
curl -vk https://node.example:8443/health \
-H "Authorization: Bearer $(sudo cat /etc/pulse-sensor-proxy/.http-auth-token)"
```
## Installation
- **Install or upgrade with the helper script**
```bash
curl -sL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | bash
```
- **Binary upgrade on systemd hosts**
```bash
sudo systemctl stop pulse
curl -fsSL https://github.com/rcourtman/Pulse/releases/download/v4.31.0/pulse-v4.31.0-linux-amd64.tar.gz \
| sudo tar -xz -C /opt/pulse --strip-components=1
sudo systemctl start pulse
```
- **Docker update**
```bash
docker pull rcourtman/pulse:v4.31.0
docker stop pulse || true
docker rm pulse || true
docker run -d --name pulse --restart unless-stopped -p 7655:7655 rcourtman/pulse:v4.31.0
```
- **Helm upgrade**
```bash
helm upgrade --install pulse oci://ghcr.io/rcourtman/pulse-chart \
--version 4.31.0 \
--namespace pulse --create-namespace
```
## Downloads
- Multi-arch Linux tarballs (amd64/arm64/armv7)
- Standalone sensor proxy binaries (now include HTTP mode)
- Helm chart archive (pulse-4.31.0-helm.tgz)
- SHA256 checksums (checksums.txt)
- Docker tags: rcourtman/pulse:v4.31.0, :4.31, :4, :latest
---
# Pulse v4.26.1
## What's Changed
### New
- Standalone host agents now ship with guided Linux, macOS, and Windows installers that stream registration status back to Pulse, generate scoped commands from **Settings → Agents**, and feed host metrics into alerts alongside Proxmox and Docker.
- Alert thresholds gained host-level overrides, connectivity toggles, and snapshot size guardrails so you can tune offline behaviour per host while keeping a global policy for other resources.
- API tokens now support fine-grained scopes with a redesigned manager that previews command templates, highlights unused credentials, and makes revocation a single click.
- Proxmox replication jobs surface in a dedicated **Settings → Hosts → Replication** view with API plumbing to track task health and bubble failures into the monitoring pipeline.
- Docker Swarm environments now receive service/task-aware reporting with configurable scope, plus a Docker settings view that highlights manager/worker roles, stack health, rollout status, and service alert thresholds.
### Improvements
- Dashboard loads and drawer links respond faster thanks to cached guest metadata, reduced polling allocations, and inline URL editing that no longer flashes on WebSocket updates.
- Settings navigation is reorganized with dedicated Docker and Hosts sections, richer filters, and platform icons that make agent onboarding and discovery workflows clearer.
- LXC guests now report dynamic interface IPs, configuration metadata, and queue metrics so alerting, discovery, and drawers stay accurate even during rapid container churn.
- Notifications consolidate into a consistent toast system, with clearer feedback during agent setup, token generation, and background job state changes.
### Bug Fixes
- Enforced explicit node naming and respected custom Proxmox ports so cluster discovery, overrides, and disk monitoring defaults remain intact after edits.
- Hardened setup-token flows and checksum handling in the installers to prevent stale credentials and guarantee the correct binaries are fetched.
- Treated 501 responses from the Proxmox API as non-fatal during failover, restored FreeBSD disk counter parsing, and stopped guest link icons from re-triggering animations on updates.
- Preserved inline editor state across WebSocket refreshes and ensured Docker host identifiers stay collision-safe in mixed environments.
## Installation
- **Install or upgrade with the helper script**
```bash
curl -sL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | bash
```
- **Binary upgrade on systemd hosts**
```bash
sudo systemctl stop pulse
curl -fsSL https://github.com/rcourtman/Pulse/releases/download/v4.26.1/pulse-v4.26.1-linux-amd64.tar.gz \
| sudo tar -xz -C /opt/pulse --strip-components=1
sudo systemctl start pulse
```
- **Docker update**
```bash
docker pull rcourtman/pulse:v4.26.1
docker stop pulse || true
docker rm pulse || true
docker run -d --name pulse --restart unless-stopped -p 7655:7655 rcourtman/pulse:v4.26.1
```
- **Helm upgrade**
```bash
helm upgrade --install pulse oci://ghcr.io/rcourtman/pulse-chart \
--version 4.26.1 \
--namespace pulse --create-namespace
```
## Downloads
- Multi-arch Linux tarballs (amd64/arm64/armv7)
- Standalone sensor proxy binaries
- Helm chart archive (pulse-4.26.1-helm.tgz)
- SHA256 checksums (checksums.txt)
- Docker tags: rcourtman/pulse:v4.26.1, :4.26, :4, :latest

View File

@@ -0,0 +1,375 @@
# Security Audit Report - Pulse Application
## Date: 2025-12-18
## Auditor: Claude (Gemini)
---
## Executive Summary
This document presents the findings from a comprehensive security audit of the Pulse monitoring application. The audit examined authentication, authorization, cryptography, input validation, SSRF prevention, command execution, and general security practices.
### Overall Assessment
**Security Posture: A- (Excellent with minor recommendations)**
The codebase demonstrates a mature security posture with:
- ✅ Strong cryptographic practices (bcrypt, SHA3-256, AES-256-GCM)
- ✅ Comprehensive SSRF protection for webhooks
- ✅ CSRF protection for session-based authentication
- ✅ Rate limiting and account lockout
- ✅ Command execution policy with blocklist/allowlist
- ✅ Proper input sanitization and validation
- ✅ Security headers implementation
- ✅ Audit logging
A prior security audit on 2025-11-07 addressed 9 critical to low severity issues in the sensor-proxy component, all of which were successfully remediated.
---
## Audit Scope
### Components Reviewed
1. **Authentication System** (`internal/api/auth.go`, `internal/auth/`)
2. **Session Management** (`internal/api/security.go`, session stores)
3. **Cryptography** (`internal/crypto/crypto.go`)
4. **API Token Management** (`internal/api/security_tokens.go`)
5. **OIDC Integration** (`internal/api/security_oidc.go`)
6. **Webhook/Notification Security** (`internal/notifications/`)
7. **Command Execution** (`internal/agentexec/policy.go`)
8. **Database Operations** (`internal/metrics/store.go`)
9. **Configuration & Secrets** (`internal/config/`)
---
## Strengths Identified
### 1. Authentication & Password Security ✅
- **bcrypt hashing** with cost factor 12 for passwords
- **SHA3-256** for API token hashing
- **Constant-time comparison** for token validation (prevents timing attacks)
- **12-character minimum** password length requirement
- **Automatic hashing** of plain-text passwords on startup
### 2. Session Security ✅
- **HttpOnly cookies** for session tokens
- **Secure flag** set based on HTTPS detection
- **SameSite policy** properly configured (Lax/None based on proxy detection)
- **24-hour session expiry** with sliding window extension
- **Session invalidation** on password change
### 3. Rate Limiting & Account Lockout ✅
- **10 attempts/minute** for auth endpoints
- **5 failed attempts** triggers 15-minute lockout
- **Per-username AND per-IP** tracking
- **Lockout bypass prevention** (both must be clear)
### 4. CSRF Protection ✅
- **CSRF tokens** generated per session
- **Separate CSRF cookie** (not HttpOnly, readable by JS)
- **Header/form validation** for state-changing requests
- **Safe methods** (GET, HEAD, OPTIONS) exempted
- **API token auth** correctly bypasses CSRF (not vulnerable)
### 5. SSRF Prevention ✅
- **Webhook URL validation** with DNS resolution check
- **Private IP blocking** (RFC1918, link-local, loopback)
- **Cloud metadata endpoint blocking** (169.254.169.254, etc.)
- **Configurable allowlist** for internal webhooks
- **DNS rebinding protection** via IP resolution verification
### 6. Encryption at Rest ✅
- **AES-256-GCM** for credential encryption
- **Unique nonce** generation per encryption operation
- **Key file protections** with existence validation before encryption
- **Orphaned data prevention** (refuses to encrypt if key deleted)
### 7. Security Headers ✅
- Content-Security-Policy
- X-Frame-Options (DENY by default)
- X-Content-Type-Options: nosniff
- X-XSS-Protection
- Referrer-Policy
- Permissions-Policy
### 8. Command Execution Policy ✅
- **Blocklist** for dangerous commands (rm -rf, mkfs, dd, etc.)
- **Auto-approve list** for read-only inspection commands
- **Require approval** for service control, package management
- **Sudo normalization** for consistent policy application
### 9. SQL Injection Prevention ✅
- **Parameterized queries** used throughout metrics store
- **Prepared statements** for batch operations
- No string concatenation in SQL queries
### 10. XSS Prevention ✅
- **DOMPurify** for markdown rendering
- **HTML entity encoding** in tooltips
- **Allowed tag/attribute lists** for sanitized content
- **LLM output sanitization** (AI chat)
---
## Findings & Recommendations
### HIGH SEVERITY: None Identified
### MEDIUM SEVERITY
#### M1. Admin Bypass Debug Mode 🟡
**Location:** `internal/api/auth.go:675-691`
**Finding:**
The `adminBypassEnabled()` function allows bypassing authentication when both `ALLOW_ADMIN_BYPASS=1` and `PULSE_DEV=true` are set. While properly gated for development only:
```go
if os.Getenv("ALLOW_ADMIN_BYPASS") != "1" {
return
}
if os.Getenv("PULSE_DEV") == "true" || strings.EqualFold(os.Getenv("NODE_ENV"), "development") {
log.Warn().Msg("Admin authentication bypass ENABLED (development mode)")
adminBypassState.enabled = true
}
```
**Risk:** Accidental production deployment with these env vars could expose full admin access.
**Recommendation:**
1. Add prominent warning log at startup if either var is set
2. Consider disallowing it in container deployments (`PULSE_DOCKER=true`)
3. Document this explicitly as a development-only feature
---
#### M2. Recovery Token Exposure Window 🟡
**Location:** Session and recovery token stores
**Finding:**
Recovery tokens for password reset appear to be stored in JSON files. While tokens are hashed:
- File permissions should be verified as 0600
- Token expiration should be enforced server-side (appears to be implemented)
**Recommendation:**
1. Verify file permissions are set correctly (0600) during token store initialization
2. Add cleanup routine for expired tokens
---
### LOW SEVERITY
#### L1. Cookie Security in HTTP Proxies 🔵
**Location:** `internal/api/auth.go:75-107`
**Finding:**
When behind an HTTP (non-HTTPS) proxy, cookies fall back to `SameSite=Lax` with `Secure=false`. This is functionally necessary but reduces security.
**Recommendation:**
1. Log a warning when cookies are set without Secure flag
2. Add documentation recommending HTTPS termination at proxy
---
#### L2. Session Token Entropy 🔵
**Location:** `internal/api/auth.go:109-118`
**Finding:**
Session tokens are 32 bytes (256 bits) of entropy via `crypto/rand`, which is excellent. However, the error handling falls back to empty string:
```go
if _, err := cryptorand.Read(b); err != nil {
log.Error().Err(err).Msg("Failed to generate secure session token")
return "" // Fallback - should never happen
}
```
**Recommendation:**
Consider returning an error or panicking rather than returning empty string, as an empty session token could have undefined behavior.
---
#### L3. OIDC State Parameter Validation 🔵
**Location:** `internal/api/security_oidc.go`
**Finding:**
OIDC configuration is properly validated and state parameters should be verified during the OAuth flow. This should be confirmed in the callback handler.
**Recommendation:**
1. Verify state parameter is generated with sufficient entropy
2. Ensure state parameter has short expiration (5-10 minutes)
---
#### L4. Apprise CLI Command Execution 🔵
**Location:** `internal/notifications/notifications.go`
**Finding:**
The Apprise CLI path and targets are passed to `exec.CommandContext`. While the CLI path is configurable:
```go
args := []string{"-t", title, "-b", body}
args = append(args, cfg.Targets...)
execFn := n.appriseExec
```
**Risk:** If an attacker can control `cfg.Targets`, they might inject malicious arguments.
**Recommendation:**
1. Validate that targets match expected Apprise URL format
2. Consider sanitizing or escaping special characters in targets
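For reference, well-formed Apprise targets are scheme-prefixed URLs, so a conservative validator could reject anything that does not parse as `scheme://…`. The command shape Pulse assembles corresponds to this CLI invocation (example targets only):
```bash
# Equivalent CLI call; each target should be a valid Apprise URL
apprise -t "Pulse alert" -b "CPU above threshold" \
  "discord://webhook_id/webhook_token" "mailto://user:pass@example.com"
```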
---
### INFORMATIONAL
#### I1. Dependencies
The `go.mod` shows modern, well-maintained dependencies:
- Go 1.24.0 (latest stable)
- `golang.org/x/crypto v0.45.0` (current)
- `github.com/coreos/go-oidc/v3 v3.17.0` (current)
**Recommendation:**
Run `govulncheck` periodically to scan for known vulnerabilities.
---
#### I2. GitGuardian Integration
The `.gitguardian.yaml` is properly configured to:
- Ignore documentation and example files
- Block placeholder patterns
- Scan actual code and configuration
---
#### I3. Existing Security Audit
The previous audit (2025-11-07) addressed critical vulnerabilities in the sensor-proxy:
- Socket directory tampering (CRITICAL) ✅ Fixed
- SSRF via get_temperature (CRITICAL) ✅ Fixed
- Connection exhaustion DoS (CRITICAL) ✅ Fixed
- Multi-UID rate limit bypass (CRITICAL) ✅ Fixed
- Incomplete GID authorization (MEDIUM) ✅ Fixed
- Unbounded SSH output (MEDIUM) ✅ Fixed
- Weak host key validation (MEDIUM) ✅ Fixed
- Insufficient capability separation (MEDIUM) ✅ Fixed
- Missing systemd hardening (LOW) ✅ Fixed
---
## Security Architecture Summary
### Data Flow Security
```
┌─────────────────────────────────────────────────────────────────┐
│ Client (Browser/API) │
│ ┌─────────────────────────────────────────────────────────────┐│
│ │ HTTPS + TLS │ Session Cookie (HttpOnly, Secure, SameSite) ││
│ │ │ CSRF Token (Cookie + Header validation) ││
│ │ │ API Token (Header: X-API-Token) ││
│ └─────────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Pulse Server │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Rate Limiter │ │ Auth Middleware │ │ CSRF Handler │ │
│ │ 10 auth/min │ │ Session/Token │ │ State-changing │ │
│ │ 500 api/min │ │ Validation │ │ operations │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
│ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Account Lockout │ │ Command Policy │ │ SSRF Prevention │ │
│ │ 5 attempts/15m │ │ Block/Allow/ │ │ Private IP │ │
│ │ │ │ Require Approval│ │ blocklist │ │
│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │
│ │
│ ┌─────────────────────────────────────────────────────────────┐│
│ │ Encryption at Rest (AES-256-GCM) ││
│ │ - Node credentials: /etc/pulse/nodes.enc ││
│ │ - Email settings: /etc/pulse/email.enc ││
│ │ - Webhooks: /etc/pulse/webhooks.enc ││
│ │ - OIDC config: /etc/pulse/oidc.enc ││
│ └─────────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────────┘
```
### Password/Token Hashing
| Credential Type | Algorithm | Parameters |
|-----------------|-----------|------------|
| User Passwords | bcrypt | Cost factor 12 |
| API Tokens | SHA3-256 | - |
| Encryption Key | AES-256-GCM | 32-byte random key |
| Session Tokens | Random | 32 bytes (256-bit) |
---
## Compliance Checklist
| Requirement | Status | Notes |
|-------------|--------|-------|
| Password hashing | ✅ | bcrypt, cost 12 |
| Session management | ✅ | Secure cookies, 24h expiry |
| CSRF protection | ✅ | Token-based |
| Rate limiting | ✅ | Auth + API endpoints |
| Encryption at rest | ✅ | AES-256-GCM |
| HTTPS support | ✅ | TLS configurable |
| Security headers | ✅ | CSP, X-Frame-Options, etc. |
| Audit logging | ✅ | Auth events logged |
| Input validation | ✅ | SQL params, webhook URLs |
| Command execution control | ✅ | Policy-based |
---
## Recommendations Summary
### Priority 1 (Consider Addressing)
- [ ] M1: Add additional safeguards for dev mode bypass
- [ ] M2: Verify recovery token file permissions
### Priority 2 (Optional Improvements)
- [ ] L1: Add warning logs for non-secure cookies
- [ ] L2: Improve session token generation error handling
- [ ] L3: Document OIDC state parameter security
- [ ] L4: Add Apprise target validation
### Priority 3 (Ongoing)
- [ ] I1: Run `govulncheck` regularly
- [ ] Keep dependencies updated
- [ ] Review GitGuardian alerts
---
## Conclusion
The Pulse application demonstrates a **strong security posture** with comprehensive protections against common web application vulnerabilities. The codebase shows evidence of security-conscious development practices:
1. **Defense in depth** with multiple layers of authentication and authorization
2. **Secure defaults** requiring explicit configuration to reduce security
3. **Modern cryptography** using industry-standard algorithms
4. **Comprehensive validation** of user inputs and external URLs
5. **Audit trails** for security-relevant events
The identified findings are primarily of low to medium severity and represent opportunities for hardening rather than critical vulnerabilities.
**Final Security Grade: A-**
---
## References
- **Previous Audit:** `docs/SECURITY_AUDIT_2025-11-07.md`
- **Security Policy:** `SECURITY.md`
- **Security Changelog:** `docs/SECURITY_CHANGELOG.md`
---
## Audit Team
**Auditor:** Claude (Gemini 2.5)
**Methodology:** Static code analysis and architecture review
**Audit Duration:** 2025-12-18 (single session)
**Files Reviewed:** ~50 source files across 10 packages
---
**For security concerns or questions:**
https://github.com/rcourtman/Pulse/issues

View File

@@ -374,9 +374,10 @@ go build ./cmd/pulse-sensor-proxy
### References
- **Audit Report:** `/opt/pulse/docs/SECURITY_AUDIT_2025-11-07.md` (to be created)
- **Security Architecture:** `/opt/pulse/docs/TEMPERATURE_MONITORING_SECURITY.md`
- **Hardening Guide:** `/opt/pulse/docs/PULSE_SENSOR_PROXY_HARDENING.md`
- **Audit Report:** `docs/SECURITY_AUDIT_2025-11-07.md`
- **Audit Report:** `docs/SECURITY_AUDIT_2025-12-18.md`
- **Temperature Monitoring Overview:** `docs/security/TEMPERATURE_MONITORING.md`
- **Sensor Proxy Hardening:** `docs/security/SENSOR_PROXY_HARDENING.md`
---

View File

@@ -2,19 +2,37 @@
Monitor real-time CPU and NVMe temperatures for your Proxmox nodes.
> **Deprecation notice (v5):** `pulse-sensor-proxy` is deprecated and not recommended for new deployments. Temperature monitoring should be done via the unified agent (`pulse-agent --enable-proxmox`). Existing proxy installs can continue during the migration window, but plan to migrate to the agent.
## Recommended: Pulse Agent
For new installations, prefer the unified agent on Proxmox hosts. It reads sensors locally and reports temperatures to Pulse without SSH keys or proxy wiring.
```bash
curl -fsSL http://<pulse-ip>:7655/install.sh | \
bash -s -- --url http://<pulse-ip>:7655 --token <api-token> --enable-proxmox
```
If you use the agent method, the rest of this document (sensor proxy) is optional. See `docs/security/TEMPERATURE_MONITORING.md` for the security model overview.
## 🚀 Quick Start
### 1. Enable in Pulse
Go to **Settings → Nodes → [Node] → Advanced Monitoring** and enable "Temperature Monitoring".
### 1. Install the agent on Proxmox hosts
Install the unified agent on each Proxmox host with Proxmox integration enabled (example in the section above).
### 2. Install Sensor Proxy
The setup depends on your deployment:
### 2. Enable temperature monitoring (optional)
Go to **Settings → Proxmox → [Node] → Advanced Monitoring** and enable "Temperature monitoring" if you want to collect temperatures for that node.
| Deployment | Recommended Method |
| :--- | :--- |
| **LXC (Pulse)** | Run the **Setup Script** in Pulse UI. It auto-installs the proxy on the host. |
| **Docker (Pulse)** | Install proxy on host + bind mount socket. (See below) |
| **Remote Node** | Install proxy in **HTTP Mode** on the remote node. |
## Deprecated: pulse-sensor-proxy (existing installs only)
This section is retained for existing installations during the migration window.
If you are starting fresh on Pulse v5, do not deploy `pulse-sensor-proxy`. Use the agent method above.
If you already have the proxy deployed:
- Keep it running while you migrate to `pulse-agent --enable-proxmox`.
- Expect future removal in a major release. Do not treat the proxy as a long-term solution.
## 📦 Docker Setup (Manual)
@@ -61,7 +79,7 @@ If you have Pulse running on **Server A** and want to monitor temperatures on **
| Issue | Solution |
| :--- | :--- |
| **No Data** | Check **Settings → Diagnostics → Temperature Proxy**. |
| **No Data** | Check **Settings → Diagnostics** (Temperature Proxy section). |
| **Proxy Unreachable** | Ensure port `8443` is open on the remote node. |
| **"Permission Denied"** | Re-run the installer to fix permissions or SSH keys. |
| **LXC Issues** | Ensure the container has the bind mount: `lxc.mount.entry: /run/pulse-sensor-proxy ...` |
@@ -98,7 +116,7 @@ If you can't run the installer script, create the configuration manually:
```bash
curl -L https://github.com/rcourtman/Pulse/releases/latest/download/pulse-sensor-proxy-linux-amd64 \
-o /tmp/pulse-sensor-proxy
install -D -m 0755 /tmp/pulse-sensor-proxy /opt/pulse/sensor-proxy/bin/pulse-sensor-proxy
install -D -m 0755 /tmp/pulse-sensor-proxy /usr/local/bin/pulse-sensor-proxy
```
**2. Create service user:**
@@ -128,7 +146,7 @@ Allowed nodes live in `/etc/pulse-sensor-proxy/allowed_nodes.yaml`; change them
**5. Install systemd service:**
```bash
# Download from: https://github.com/rcourtman/Pulse/releases/latest/download/install-sensor-proxy.sh
# Extract the systemd unit from the installer (ExecStartPre/ExecStart use /opt/pulse/sensor-proxy/bin)
# Extract the systemd unit from the installer (ExecStartPre/ExecStart typically uses /usr/local/bin/pulse-sensor-proxy)
systemctl daemon-reload
systemctl enable --now pulse-sensor-proxy
```
@@ -230,16 +248,15 @@ The installer is idempotent and safe to re-run:
# After adding a new Proxmox node to cluster
bash install-sensor-proxy.sh --standalone --pulse-server http://pulse:7655 --quiet
# After upgrading Pulse version
bash install-sensor-proxy.sh --standalone --pulse-server http://pulse:7655 --version v4.27.0 --quiet
# Verify installation
systemctl status pulse-sensor-proxy
```
### Legacy Security Concerns (Pre-v4.24.0)
### Legacy SSH Security Concerns
Older versions stored SSH keys inside the container, creating security risks:
SSH-based temperature collection from inside containers is unsafe. Pulse blocks this by default for container deployments.
In legacy/non-container setups where you intentionally use SSH, the main risks are:
- Compromised container = exposed SSH keys
- Even with forced commands, keys could be extracted
@@ -402,14 +419,15 @@ Rotate SSH keys periodically for security (recommended every 90 days).
**Automated Rotation (Recommended):**
The `/opt/pulse/scripts/pulse-proxy-rotate-keys.sh` script handles rotation safely with staging, verification, and rollback support:
The `pulse-proxy-rotate-keys.sh` helper script handles rotation safely with staging, verification, and rollback support:
```bash
# 1. Dry-run first (recommended)
sudo /opt/pulse/scripts/pulse-proxy-rotate-keys.sh --dry-run
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/pulse-proxy-rotate-keys.sh | \
sudo bash -s -- --dry-run
# 2. Perform rotation
sudo /opt/pulse/scripts/pulse-proxy-rotate-keys.sh
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/pulse-proxy-rotate-keys.sh | sudo bash
```
**What the script does:**
@@ -421,7 +439,8 @@ sudo /opt/pulse/scripts/pulse-proxy-rotate-keys.sh
**If rotation fails, rollback:**
```bash
sudo /opt/pulse/scripts/pulse-proxy-rotate-keys.sh --rollback
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/pulse-proxy-rotate-keys.sh | \
sudo bash -s -- --rollback
```
**Manual Rotation (Fallback):**
@@ -444,11 +463,11 @@ curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install-s
# 4. Verify temperature data still works in Pulse UI
```
### Automatic Cleanup When Nodes Are Removed (v4.26.0+)
### Automatic Cleanup When Nodes Are Removed
Starting in v4.26.0, SSH keys are **automatically removed** when you delete a node from Pulse:
SSH keys are automatically removed when you delete a node from Pulse:
1. **When you remove a node** in Pulse Settings → Nodes, Pulse signals the temperature proxy
1. **When you remove a node** in Pulse (**Settings → Proxmox**), Pulse signals the temperature proxy
2. **The proxy creates a cleanup request** file at `/var/lib/pulse-sensor-proxy/cleanup-request.json`
3. **A systemd path unit detects the request** and triggers the cleanup service
4. **The cleanup script automatically:**
@@ -572,7 +591,7 @@ test -S /run/pulse-sensor-proxy/pulse-sensor-proxy.sock && echo "Socket OK" || e
- Proxy uses `pvecm status` to discover cluster nodes (requires Proxmox IPC access)
- If Proxmox hardens IPC access or cluster topology changes unexpectedly, discovery may fail
- Standalone Proxmox nodes work but only monitor that single node
- Fallback: Re-run setup script manually to reconfigure cluster access
- Fallback: re-run the proxy installer script to reconfigure cluster access
**Rate Limiting & Scaling** (updated in commit 46b8b8d):
@@ -637,7 +656,7 @@ test -S /run/pulse-sensor-proxy/pulse-sensor-proxy.sock && echo "Socket OK" || e
**New Cluster Node Not Showing Temperatures:**
1. Ensure lm-sensors installed: `ssh root@new-node "sensors -j"`
2. Proxy auto-discovers on next poll (may take up to 1 minute)
3. Re-run the setup script to configure SSH keys on the new node: `curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install-sensor-proxy.sh | bash -s -- --ctid <CTID>`
3. Re-run the proxy installer script to configure SSH keys on the new node: `curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install-sensor-proxy.sh | bash -s -- --ctid <CTID>`
**Permission Denied Errors:**
1. Verify socket permissions: `ls -l /run/pulse-sensor-proxy/pulse-sensor-proxy.sock`
@@ -646,55 +665,13 @@ test -S /run/pulse-sensor-proxy/pulse-sensor-proxy.sock && echo "Socket OK" || e
**Proxy Service Won't Start:**
1. Check logs: `journalctl -u pulse-sensor-proxy -n 50`
2. Verify binary exists: `ls -l /opt/pulse/sensor-proxy/bin/pulse-sensor-proxy`
3. Test manually: `/opt/pulse/sensor-proxy/bin/pulse-sensor-proxy --version`
2. Verify binary exists: `ls -l /usr/local/bin/pulse-sensor-proxy`
3. Test manually: `/usr/local/bin/pulse-sensor-proxy --version`
4. Check socket directory: `ls -ld /var/run`
### Future Improvements
**Potential Enhancements (Roadmap):**
1. **Proxmox API Integration**
- If future Proxmox versions expose temperature telemetry via API, retire SSH approach
- Would eliminate SSH key management and improve security posture
- Monitor Proxmox development for metrics/RRD temperature endpoints
2. **Agent-Based Architecture**
- Deploy lightweight agents on each node for richer telemetry
- Reduces SSH fan-out overhead for large clusters
- Trade-off: Adds deployment/update complexity
- Consider only if demand for additional metrics grows
3. **SNMP/IPMI Support**
- Optional integration for baseboard management controllers
- Better for hardware-level sensors (baseboard temps, fan speeds)
- Requires hardware/firmware support, so keep as optional add-on
4. **Schema Validation**
- Add JSON schema validation for `sensors -j` output
- Detect format changes early with instrumentation
- Log warnings when unexpected sensor formats appear
5. **Caching & Throttling**
- Implement result caching for large clusters (10+ nodes)
- Reduce SSH overhead with configurable TTL
- Add request throttling to prevent SSH rate limiting
6. **Automated Key Rotation**
- Systemd timer for automatic 90-day rotation
- Already supported via `/opt/pulse/scripts/pulse-proxy-rotate-keys.sh`
- Just needs timer unit configuration (documented in hardening guide)
7. **Health Check Endpoint**
- Add `/health` endpoint separate from Prometheus metrics
- Enable external monitoring systems (Nagios, Zabbix, etc.)
- Return proxy status, socket accessibility, and last successful poll
**Contributions Welcome:** If any of these improvements interest you, open a GitHub issue to discuss implementation!
## Configuration Management
Starting with v4.31.1, the sensor proxy includes a built-in CLI for safe configuration management. This prevents config corruption that caused 99% of temperature monitoring failures.
The sensor proxy includes a built-in CLI for safe configuration management. It uses locking and atomic writes to prevent config corruption.
### Quick Reference
@@ -717,19 +694,19 @@ pulse-sensor-proxy config set-allowed-nodes --replace --merge 192.168.0.1
**See also:**
- [Sensor Proxy Config Management Guide](operations/SENSOR_PROXY_CONFIG.md) - Complete runbook
- [Sensor Proxy CLI Reference](/opt/pulse/cmd/pulse-sensor-proxy/README.md) - Full command documentation
- [Sensor Proxy CLI Reference](../cmd/pulse-sensor-proxy/README.md) - Full command documentation
## Control-Plane Sync & Migration
As of v4.32 the sensor proxy registers with Pulse and syncs its authorized node list via `/api/temperature-proxy/authorized-nodes`. No more manual `allowed_nodes` maintenance or `/etc/pve` access is required.
The sensor proxy can register with Pulse and sync its authorized node list via `/api/temperature-proxy/authorized-nodes`. This avoids manual `allowed_nodes` maintenance and reduces reliance on `/etc/pve` access.
### New installs
Always pass the Pulse URL when installing:
```bash
curl -sSL https://pulse.example.com/api/install/install-sensor-proxy.sh \
| sudo bash -s -- --ctid 108 --pulse-server http://192.168.0.149:7655
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install-sensor-proxy.sh | \
sudo bash -s -- --ctid <pulse-lxc-id> --pulse-server http://<pulse-ip>:7655
```
The installer now:
@@ -740,11 +717,11 @@ The installer now:
### Migrating existing hosts
If you installed before v4.32, run the migration helper on each host:
If your proxy was installed without control-plane sync enabled, run the migration helper on each host:
```bash
curl -sSL https://pulse.example.com/api/install/migrate-sensor-proxy-control-plane.sh \
| sudo bash -s -- --pulse-server http://192.168.0.149:7655
curl -fsSL http://<pulse-ip>:7655/api/install/migrate-sensor-proxy-control-plane.sh | \
sudo bash -s -- --pulse-server http://<pulse-ip>:7655
```
The script registers the existing proxy, writes the control token, updates the config, and restarts the service (use `--skip-restart` if you prefer to bounce it yourself). Once migrated, temperatures for every node defined in Pulse will continue working even if the proxy can't reach `/etc/pve` or Corosync IPC.

View File

@@ -7,10 +7,16 @@
```bash
docker exec pulse rm /data/.env
docker restart pulse
# Access UI to run setup wizard again
# Access UI again. Pulse will require a bootstrap token for setup.
# Get it with:
docker exec pulse /app/pulse bootstrap-token
```
**Systemd**:
Delete `/etc/pulse/.env` and restart the service.
Delete `/etc/pulse/.env` and restart the service. Pulse will require a bootstrap token for setup:
```bash
sudo pulse bootstrap-token
```
### Port change didn't take effect
1. Check which service is running: `systemctl status pulse` (or `pulse-backend`).
@@ -47,7 +53,7 @@ Delete `/etc/pulse/.env` and restart the service.
**Temperature data missing**
- Install `lm-sensors` on the host.
- Run `sensors-detect`.
- Re-run the Pulse setup script to install the sensor proxy.
- Install the unified agent on the Proxmox host with `--enable-proxmox`.
- See [Temperature Monitoring](TEMPERATURE_MONITORING.md).
**Docker hosts appearing/disappearing**
@@ -78,10 +84,15 @@ grep "request_id=abc123" /var/log/pulse/pulse.log
### Check Permissions (Proxmox)
If Pulse can't see VMs or storage, check the user permissions on Proxmox:
```bash
pveum user permissions pulse-monitor@pam
pveum user permissions <user>@pam
```
Required: `PVEAuditor` role.
Recommended: `VM.Audit`, `Sys.Audit` (added by setup script).
At minimum, ensure the user/token has read access for inventory and metrics:
- `Sys.Audit`
- `VM.Monitor`
- `Datastore.Audit`
For VM disk usage via QEMU guest agent, also ensure `VM.GuestAgent.Audit` (PVE 9+).
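A minimal sketch of granting these with a dedicated role (role and user names are examples; on PVE 8, drop `VM.GuestAgent.Audit` and rely on `VM.Monitor`):
```bash
# Create a read-only monitoring role with the privileges listed above (PVE 9)
pveum role add PulseMonitor -privs "Sys.Audit,VM.Monitor,VM.GuestAgent.Audit,Datastore.Audit"
pveum acl modify / -user pulse-monitor@pam -role PulseMonitor
```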
### Recovery Mode
If you are completely locked out, you can trigger a recovery token from the localhost CLI:

View File

@@ -52,6 +52,7 @@ curl -fsSL http://<pulse-ip>:7655/install.sh | \
| `--kube-include-namespace` | `PULSE_KUBE_INCLUDE_NAMESPACES` | Limit namespaces (repeatable or CSV, wildcards supported) | *(all)* |
| `--kube-exclude-namespace` | `PULSE_KUBE_EXCLUDE_NAMESPACES` | Exclude namespaces (repeatable or CSV, wildcards supported) | *(none)* |
| `--kube-include-all-pods` | `PULSE_KUBE_INCLUDE_ALL_PODS` | Include all non-succeeded pods | `false` |
| `--kube-include-all-deployments` | `PULSE_KUBE_INCLUDE_ALL_DEPLOYMENTS` | Include all deployments, not just problems | `false` |
| `--kube-max-pods` | `PULSE_KUBE_MAX_PODS` | Max pods per report | `200` |
| `--disable-auto-update` | `PULSE_DISABLE_AUTO_UPDATE` | Disable auto-updates | `false` |
| `--insecure` | `PULSE_INSECURE_SKIP_VERIFY` | Skip TLS verification | `false` |
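For example, a sketch of enabling the new flag at install time (assuming the installer forwards agent flags, as with the other options in this table):
```bash
curl -fsSL http://<pulse-ip>:7655/install.sh | \
  bash -s -- --url http://<pulse-ip>:7655 --token <api-token> \
  --kube-include-all-deployments
```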

docs/UPGRADE_v5.md (new file)
View File

@@ -0,0 +1,66 @@
# Upgrade to Pulse v5
This is a practical guide for upgrading an existing Pulse install to v5.
## Before You Upgrade
- Create an encrypted config backup: **Settings → System → Backups → Create Backup**
- Confirm you can access the host/container console (for rollback and bootstrap token retrieval)
- Review the v5 release notes on GitHub before upgrading
## Upgrade Paths
### systemd and Proxmox LXC installs
Preferred path:
- **Settings → System → Updates**
If you prefer CLI, use the official installer for the target version:
```bash
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | \
sudo bash -s -- --version v5.0.0
```
### Docker
```bash
docker pull rcourtman/pulse:latest
docker compose up -d
```
### Kubernetes (Helm)
```bash
helm upgrade pulse oci://ghcr.io/rcourtman/pulse-chart -n pulse
```
## Post-Upgrade Checklist
- Confirm version: `GET /api/version` (scripted sketch below)
- Confirm scheduler health: `GET /api/monitoring/scheduler/health`
- Confirm nodes are polling and no breakers are stuck open
- Confirm notifications still send (send a test)
- Confirm agents are connected (if used)
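A minimal sketch of the first two checks (host and token are placeholders):
```bash
# Verify the running version and scheduler health after the upgrade
curl -fsS -H "X-API-Token: $TOKEN" "http://pulse.example.com:7655/api/version"
curl -fsS -H "X-API-Token: $TOKEN" "http://pulse.example.com:7655/api/monitoring/scheduler/health" | jq .
```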
## Notes and Common Gotchas
### Bootstrap token on fresh auth setup
If you reset auth (for example by deleting `.env`), Pulse may require a bootstrap token before you can complete setup.
- Docker: `docker exec pulse /app/pulse bootstrap-token`
- systemd/LXC: `sudo pulse bootstrap-token`
### Temperature monitoring in containers
If Pulse runs in a container and you are relying on SSH-based temperature collection, v5 blocks that in hardened configurations.
Preferred option:
- Install the unified agent (`pulse-agent`) on Proxmox hosts with `--enable-proxmox`
Deprecated option (existing installs only):
- `pulse-sensor-proxy` continues to work for now, but it is deprecated in v5 and not recommended for new installs. Plan to migrate to the unified agent.

View File

@@ -20,14 +20,13 @@ Monitor actual disk usage inside your VMs using the QEMU Guest Agent.
* **Proxmox Permissions**:
* **Proxmox 8**: `VM.Monitor`
* **Proxmox 9+**: `VM.GuestAgent.Audit`
* *Note: The Pulse setup script handles these permissions automatically.*
## 🔧 Troubleshooting
| Issue | Solution |
| :--- | :--- |
| **Disk shows "-"** | Hover over the dash for details. Common causes: Agent not running, disabled in config, or permission denied. |
| **Permission Denied** | Re-run the Pulse setup script to update permissions, or manually add `VM.GuestAgent.Audit`. |
| **Permission Denied** | Ensure your Proxmox token/user has `VM.GuestAgent.Audit` (PVE 9+) or `VM.Monitor` (PVE 8). |
| **Agent Timeout** | Increase timeouts via env vars if network is slow: `GUEST_AGENT_FSINFO_TIMEOUT=10s`. |
| **Windows VMs** | Ensure the **QEMU Guest Agent** service is running in Windows Services. |

View File

@@ -39,5 +39,5 @@ For generic webhooks, use Go templates to format the JSON payload.
## 🛡️ Security
- **Private IPs**: By default, webhooks to private IPs are blocked. Allow them in **Settings → System → Webhook Security**.
- **Private IPs**: By default, webhooks to private IPs are blocked. Allow them in **Settings → System → Network → Webhook Security**.
- **Headers**: Add custom headers (e.g., `Authorization: Bearer ...`) in the webhook config.

View File

@@ -23,7 +23,7 @@ pveum acl modify /nodes -user pulse-monitor@pam -role PVEAuditor
ZFS monitoring is **enabled by default**. To disable it:
```bash
# Add to /opt/pulse/.env
# Add to /etc/pulse/.env (systemd/LXC) or /data/.env (Docker/Kubernetes)
PULSE_DISABLE_ZFS_MONITORING=true
```
@@ -39,4 +39,4 @@ PULSE_DISABLE_ZFS_MONITORING=true
**No ZFS Data?**
1. Check permissions: `pveum user permissions pulse-monitor@pam`.
2. Verify pools exist: `zpool list`.
3. Check logs: `grep ZFS /opt/pulse/pulse.log`.
3. Check logs: `journalctl -u pulse -n 200 | grep -i zfs`.

View File

@@ -9,19 +9,22 @@ Pulse uses an adaptive scheduler to optimize polling based on instance health an
* **Backoff**: Exponential retry delays (5s to 5m).
## ⚙️ Configuration
Adaptive polling is **enabled by default**.
Adaptive polling is **disabled by default**.
### UI
**Settings → System → Monitoring**.
There is currently no dedicated UI for adaptive polling in v5.
### Environment Variables
| Variable | Default | Description |
| :--- | :--- | :--- |
| `ADAPTIVE_POLLING_ENABLED` | `true` | Enable/disable. |
| `ADAPTIVE_POLLING_ENABLED` | `false` | Enable/disable. |
| `ADAPTIVE_POLLING_BASE_INTERVAL` | `10s` | Healthy poll rate. |
| `ADAPTIVE_POLLING_MIN_INTERVAL` | `5s` | Active/busy rate. |
| `ADAPTIVE_POLLING_MAX_INTERVAL` | `5m` | Idle/backoff rate. |
### system.json
You can also set `adaptivePollingEnabled` (and related interval fields) in `system.json` and restart Pulse.
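
For example, on a systemd/LXC install (Docker/Kubernetes installs keep `system.json` under `/data` instead):

```bash
sudo jq '.adaptivePollingEnabled = true' /etc/pulse/system.json > /tmp/system.json && \
  sudo mv /tmp/system.json /etc/pulse/system.json
sudo systemctl restart pulse
```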
## 📊 Metrics
Exposed at `:9091/metrics`.

View File

@@ -2,6 +2,12 @@
Pulse exposes metrics at `/metrics` (default port `9091`).
Example scrape target:
- `http://<pulse-host>:9091/metrics`
This listener is separate from the main UI/API port (`7655`). In Docker and Kubernetes you must expose `9091` explicitly if you want to scrape it from outside the container/pod.
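
For example, with Docker (image tag and host ports are illustrative):

```bash
# Publish the metrics listener alongside the UI/API port:
docker run -d --name pulse --restart unless-stopped \
  -p 7655:7655 -p 9091:9091 rcourtman/pulse:latest

# Verify the endpoint responds:
curl -s http://localhost:9091/metrics | head
```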
## 🌐 HTTP Ingress
| Metric | Type | Description |
| :--- | :--- | :--- |

View File

@@ -1,18 +1,20 @@
# 🚀 Adaptive Polling Rollout
Safely enable dynamic scheduling (v4.24.0+).
Safely enable dynamic scheduling (v5+).
## 📋 Pre-Flight
1. **Snapshot Health**:
```bash
curl -s http://localhost:7655/api/monitoring/scheduler/health | jq .
curl -s -H "X-API-Token: $TOKEN" http://localhost:7655/api/monitoring/scheduler/health | jq .
```
2. **Check Metrics**: Ensure `pulse_monitor_poll_queue_depth` is stable.
## 🟢 Enable
Choose one method:
* **UI**: Settings → System → Monitoring → Adaptive Polling.
* **CLI**: `jq '.AdaptivePollingEnabled=true' /var/lib/pulse/system.json > tmp && mv tmp system.json`
* **UI**: Not currently exposed in the v5 UI (use CLI or env vars).
* **CLI**:
- systemd/LXC: `jq '.adaptivePollingEnabled=true' /etc/pulse/system.json > /tmp/system.json && sudo mv /tmp/system.json /etc/pulse/system.json`
- Docker/Kubernetes: edit `/data/system.json` in the volume and restart the container/pod
* **Env**: `ADAPTIVE_POLLING_ENABLED=true` (Docker/K8s).
## 🔍 Monitor (First 15m)
@@ -25,6 +27,6 @@ watch -n 5 'curl -s http://localhost:9091/metrics | grep pulse_monitor_poll_queu
## ↩️ Rollback
If instability persists for more than 10 minutes:
1. **Disable**: Toggle off via UI or Env.
1. **Disable**: Remove the env var override or set `adaptivePollingEnabled=false` in `system.json`.
2. **Restart**: Required if using Env/CLI overrides.
3. **Verify**: Confirm queue drains.

View File

@@ -1,5 +1,8 @@
# 🔄 Sensor Proxy Audit Log Rotation
> **Deprecated in v5:** `pulse-sensor-proxy` is deprecated and not recommended for new deployments.
> This document is retained for existing installations during the migration window.
The proxy writes append-only, hash-chained logs to `/var/log/pulse/sensor-proxy/audit.log`.
## ⚠️ Important

View File

@@ -13,16 +13,16 @@ Manage Pulse auto-updates on host-mode installations.
## 🚀 Enable/Disable
### Via UI (Recommended)
**Settings → System → Updates → Automatic Updates**.
**Settings → System → Updates**.
### Via CLI
```bash
# Enable
sudo jq '.autoUpdateEnabled=true' /var/lib/pulse/system.json > tmp && sudo mv tmp /var/lib/pulse/system.json
sudo jq '.autoUpdateEnabled=true' /etc/pulse/system.json > /tmp/system.json && sudo mv /tmp/system.json /etc/pulse/system.json
sudo systemctl enable --now pulse-update.timer
# Disable
sudo jq '.autoUpdateEnabled=false' /var/lib/pulse/system.json > tmp && sudo mv tmp /var/lib/pulse/system.json
sudo jq '.autoUpdateEnabled=false' /etc/pulse/system.json > /tmp/system.json && sudo mv /tmp/system.json /etc/pulse/system.json
sudo systemctl disable --now pulse-update.timer
```
@@ -40,8 +40,9 @@ journalctl -u pulse-update -f
## ↩️ Rollback
If an update fails:
1. Check logs: `/var/log/pulse/update-YYYYMMDDHHMMSS.log`.
2. Revert manually:
2. Use the **Rollback** action in **Settings → System → Updates** if available for your deployment type.
3. If you need to pin a specific version, re-run the installer with a version:
```bash
sudo /opt/pulse/install.sh --version v4.30.0
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | \
sudo bash -s -- --version vX.Y.Z
```
Or use the **Rollback** button in the UI if available.

View File

@@ -1,10 +1,10 @@
# ⚙️ Sensor Proxy Configuration
> **⚠️ Deprecated:** The sensor-proxy is deprecated in favor of the unified Pulse agent.
> For new installations, use `install.sh --enable-proxmox` instead.
> See [TEMPERATURE_MONITORING.md](/docs/security/TEMPERATURE_MONITORING.md).
> **Deprecated in v5:** `pulse-sensor-proxy` is deprecated and not recommended for new deployments.
> Use `pulse-agent --enable-proxmox` for temperature monitoring.
> This document is retained for existing installations during the migration window.
Safe configuration management using the CLI (v4.31.1+).
Safe configuration management using the built-in CLI.
## 📂 Files
* **`config.yaml`**: General settings (logging, metrics).

View File

@@ -1,8 +1,8 @@
# 📝 Sensor Proxy Log Forwarding
> **⚠️ Deprecated:** The sensor-proxy is deprecated in favor of the unified Pulse agent.
> For new installations, use `install.sh --enable-proxmox` instead.
> See [TEMPERATURE_MONITORING.md](/docs/security/TEMPERATURE_MONITORING.md).
> **Deprecated in v5:** `pulse-sensor-proxy` is deprecated and not recommended for new deployments.
> Use `pulse-agent --enable-proxmox` for temperature monitoring.
> This document is retained for existing installations during the migration window.
Forward `audit.log` and `proxy.log` to a central SIEM via RELP + TLS.
@@ -16,7 +16,7 @@ sudo REMOTE_HOST=logs.example.com \
CA_CERT=/path/to/ca.crt \
CLIENT_CERT=/path/to/client.crt \
CLIENT_KEY=/path/to/client.key \
/opt/pulse/scripts/setup-log-forwarding.sh
bash -c "$(curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/setup-log-forwarding.sh)"
```
## 📋 What It Does

View File

@@ -0,0 +1,137 @@
# Release Notes (v4 archive)
This file archives the v4-era release notes that previously lived at `docs/RELEASE_NOTES.md`.
For current releases, refer to GitHub Releases:
https://github.com/rcourtman/Pulse/releases
---
# Pulse v4.31.0
## What's Changed
### Temperature monitoring over HTTPS
- `pulse-sensor-proxy` now exposes an authenticated HTTPS endpoint per Proxmox host. Pulse stores each proxy's URL + bearer token and always polls `https://node:8443/temps` before falling back to local sockets or SSH, eliminating the fragile “single proxy for every node” chain.
- Installations auto-register via the new `/api/temperature-proxy/register` endpoint, generate 4096-bit certificates, enforce CIDR allowlists, and log every HTTP request through the proxy's audit pipeline.
- The backend temperature collector understands proxy URLs/tokens, respects strict timeouts, and publishes richer diagnostics so operators can see which node failed and why.
### Installer, diagnostics, and UI updates
- `scripts/install-sensor-proxy.sh` gained `--http-mode` / `--http-addr`, automatic TLS generation, rollback-on-failure, allowed subnet auto-population, and a comprehensive uninstall path that purges sockets, TLS secrets, and LXC bind mounts.
- A new `Settings → Diagnostics → Temperature Proxy` table surfaces proxy health, registration status, and the errors returned by the HTTPS endpoint.
- `scripts/tests/test-sensor-proxy-http.sh` exercises the HTTP installer path end-to-end inside Docker to prevent regressions.
### Host agent refinements
- Windows PowerShell installers/uninstallers now log verbosely, harden permissions, and clean up services more reliably.
- Linux host-agent scripts aligned with the new diagnostics UX and scoped token workflow so onboarding is less error-prone.
## Upgrade Notes
Temperature monitoring will not work for remote nodes until every Proxmox host is reinstalled with the new HTTPS workflow. Follow these steps per host:
```bash
# 1. Remove any pre-v4.31.0 proxy install
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install-sensor-proxy.sh | \
sudo bash -s -- --uninstall --purge
# 2. Install the HTTP-enabled proxy and register it with Pulse
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install-sensor-proxy.sh | \
sudo bash -s -- --standalone --http-mode --pulse-server https://your-pulse-host:7655
```
Only the Pulse server (or container host) needs network access to TCP/8443 on each node. After reinstalling, open **Settings → Diagnostics → Temperature Proxy** to confirm each node reports “HTTPS proxy healthy”. If not, grab the diagnostics entry or run:
```bash
curl -vk https://node.example:8443/health \
-H "Authorization: Bearer $(sudo cat /etc/pulse-sensor-proxy/.http-auth-token)"
```
## Installation
- **Install or upgrade with the helper script**
```bash
curl -sL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | bash
```
- **Binary upgrade on systemd hosts**
```bash
sudo systemctl stop pulse
curl -fsSL https://github.com/rcourtman/Pulse/releases/download/v4.31.0/pulse-v4.31.0-linux-amd64.tar.gz \
| sudo tar -xz -C /opt/pulse --strip-components=1
sudo systemctl start pulse
```
- **Docker update**
```bash
docker pull rcourtman/pulse:v4.31.0
docker stop pulse || true
docker rm pulse || true
docker run -d --name pulse --restart unless-stopped -p 7655:7655 rcourtman/pulse:v4.31.0
```
- **Helm upgrade**
```bash
helm upgrade --install pulse oci://ghcr.io/rcourtman/pulse-chart \
--version 4.31.0 \
--namespace pulse --create-namespace
```
## Downloads
- Multi-arch Linux tarballs (amd64/arm64/armv7)
- Standalone sensor proxy binaries (now include HTTP mode)
- Helm chart archive (pulse-4.31.0-helm.tgz)
- SHA256 checksums (checksums.txt)
- Docker tags: rcourtman/pulse:v4.31.0, :4.31, :4, :latest
---
# Pulse v4.26.1
## What's Changed
### New
- Standalone host agents now ship with guided Linux, macOS, and Windows installers that stream registration status back to Pulse, generate scoped commands from **Settings → Agents**, and feed host metrics into alerts alongside Proxmox and Docker.
- Alert thresholds gained host-level overrides, connectivity toggles, and snapshot size guardrails so you can tune offline behaviour per host while keeping a global policy for other resources.
- API tokens now support fine-grained scopes with a redesigned manager that previews command templates, highlights unused credentials, and makes revocation a single click.
- Proxmox replication jobs surface in a dedicated **Settings → Hosts → Replication** view with API plumbing to track task health and bubble failures into the monitoring pipeline.
- Docker Swarm environments now receive service/task-aware reporting with configurable scope, plus a Docker settings view that highlights manager/worker roles, stack health, rollout status, and service alert thresholds.
### Improvements
- Dashboard loads and drawer links respond faster thanks to cached guest metadata, reduced polling allocations, and inline URL editing that no longer flashes on WebSocket updates.
- Settings navigation is reorganized with dedicated Docker and Hosts sections, richer filters, and platform icons that make agent onboarding and discovery workflows clearer.
- LXC guests now report dynamic interface IPs, configuration metadata, and queue metrics so alerting, discovery, and drawers stay accurate even during rapid container churn.
- Notifications consolidate into a consistent toast system, with clearer feedback during agent setup, token generation, and background job state changes.
### Bug Fixes
- Enforced explicit node naming and respected custom Proxmox ports so cluster discovery, overrides, and disk monitoring defaults remain intact after edits.
- Hardened setup-token flows and checksum handling in the installers to prevent stale credentials and guarantee the correct binaries are fetched.
- Treated 501 responses from the Proxmox API as non-fatal during failover, restored FreeBSD disk counter parsing, and stopped guest link icons from re-triggering animations on updates.
- Preserved inline editor state across WebSocket refreshes and ensured Docker host identifiers stay collision-safe in mixed environments.
## Installation
- **Install or upgrade with the helper script**
```bash
curl -sL https://github.com/rcourtman/Pulse/releases/latest/download/install.sh | bash
```
- **Binary upgrade on systemd hosts**
```bash
sudo systemctl stop pulse
curl -fsSL https://github.com/rcourtman/Pulse/releases/download/v4.26.1/pulse-v4.26.1-linux-amd64.tar.gz \
| sudo tar -xz -C /opt/pulse --strip-components=1
sudo systemctl start pulse
```
- **Docker update**
```bash
docker pull rcourtman/pulse:v4.26.1
docker stop pulse || true
docker rm pulse || true
docker run -d --name pulse --restart unless-stopped -p 7655:7655 rcourtman/pulse:v4.26.1
```
- **Helm upgrade**
```bash
helm upgrade --install pulse oci://ghcr.io/rcourtman/pulse-chart \
--version 4.26.1 \
--namespace pulse --create-namespace
```
## Downloads
- Multi-arch Linux tarballs (amd64/arm64/armv7)
- Standalone sensor proxy binaries
- Helm chart archive (pulse-4.26.1-helm.tgz)
- SHA256 checksums (checksums.txt)
- Docker tags: rcourtman/pulse:v4.26.1, :4.26, :4, :latest

View File

@@ -1,8 +1,8 @@
# 🛡️ Sensor Proxy Hardening
# 🛡️ Sensor Proxy AppArmor (Optional)
> **⚠️ Deprecated:** The sensor-proxy is deprecated in favor of the unified Pulse agent.
> For new installations, use `install.sh --enable-proxmox` instead.
> See [TEMPERATURE_MONITORING.md](/docs/security/TEMPERATURE_MONITORING.md).
> **Deprecated in v5:** `pulse-sensor-proxy` is deprecated and not recommended for new deployments.
> Use `pulse-agent --enable-proxmox` for temperature monitoring.
> This document is retained for existing installations during the migration window.
Secure `pulse-sensor-proxy` with AppArmor and Seccomp.
@@ -14,7 +14,8 @@ Profile: `security/apparmor/pulse-sensor-proxy.apparmor`
### Install & Enforce
```bash
sudo install -m 0644 security/apparmor/pulse-sensor-proxy.apparmor /etc/apparmor.d/pulse-sensor-proxy
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/security/apparmor/pulse-sensor-proxy.apparmor | \
sudo tee /etc/apparmor.d/pulse-sensor-proxy >/dev/null
sudo apparmor_parser -r /etc/apparmor.d/pulse-sensor-proxy
sudo aa-enforce pulse-sensor-proxy
```
@@ -36,7 +37,10 @@ SystemCallAllow=accept;connect;recvfrom;sendto;recvmsg;sendmsg;sendmmsg;getsockn
### Containers (Docker/Podman)
```bash
podman run --seccomp-profile /opt/pulse/security/seccomp/pulse-sensor-proxy.json ...
curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/security/seccomp/pulse-sensor-proxy.json | \
sudo tee /etc/pulse-sensor-proxy.seccomp.json >/dev/null
podman run --seccomp-profile /etc/pulse-sensor-proxy.seccomp.json ...
```
## 🔍 Verification

View File

@@ -1,8 +1,8 @@
# 🛡️ Sensor Proxy Hardening
> **⚠️ Deprecated:** The sensor-proxy is deprecated in favor of the unified Pulse agent.
> For new installations, use `install.sh --enable-proxmox` instead.
> See [TEMPERATURE_MONITORING.md](/docs/security/TEMPERATURE_MONITORING.md).
> **Deprecated in v5:** `pulse-sensor-proxy` is deprecated and not recommended for new deployments.
> Use `pulse-agent --enable-proxmox` for temperature monitoring.
> This document is retained for existing installations during the migration window.
The `pulse-sensor-proxy` runs on the host to securely collect temperatures, keeping SSH keys out of containers.
@@ -48,7 +48,7 @@ SSH keys are restricted to `sensors -j` only.
**Rotation**:
```bash
/opt/pulse/scripts/pulse-sensor-proxy-rotate-keys.sh
bash -c "$(curl -fsSL https://raw.githubusercontent.com/rcourtman/Pulse/main/scripts/pulse-proxy-rotate-keys.sh)"
```
* **Dry Run**: Add `--dry-run`.
* **Rollback**: Add `--rollback`.
@@ -56,6 +56,9 @@ SSH keys are restricted to `sensors -j` only.
## 🚨 Incident Response
If compromised:
1. **Stop Proxy**: `systemctl stop pulse-sensor-proxy`.
2. **Rotate Keys**: Remove old keys from nodes manually or use `pulse-sensor-proxy-rotate-keys.sh`.
2. **Rotate Keys**: Remove old keys from nodes manually or use the rotation helper above.
3. **Audit Logs**: Check `journalctl -u pulse-sensor-proxy`.
4. **Reinstall**: Run `/opt/pulse/scripts/install-sensor-proxy.sh`.
4. **Reinstall**:
```bash
curl -fsSL https://github.com/rcourtman/Pulse/releases/latest/download/install-sensor-proxy.sh | sudo bash
```

View File

@@ -1,8 +1,8 @@
# 🌐 Sensor Proxy Network Segmentation
> **⚠️ Deprecated:** The sensor-proxy is deprecated in favor of the unified Pulse agent.
> For new installations, use `install.sh --enable-proxmox` instead.
> See [TEMPERATURE_MONITORING.md](/docs/security/TEMPERATURE_MONITORING.md).
> **Deprecated in v5:** `pulse-sensor-proxy` is deprecated and not recommended for new deployments.
> Use `pulse-agent --enable-proxmox` for temperature monitoring.
> This document is retained for existing installations during the migration window.
Isolate the proxy to prevent lateral movement.

View File

@@ -1,13 +1,16 @@
# 🌡️ Temperature Monitoring
Pulse supports two methods for collecting hardware temperatures from Proxmox nodes.
This page describes the recommended v5 approach for temperature monitoring and the security tradeoffs between approaches.
For the full sensor-proxy setup guide (socket mounts, HTTP mode, troubleshooting), see:
`docs/TEMPERATURE_MONITORING.md`.
## Recommended: Pulse Agent
The simplest and most feature-rich method is installing the Pulse agent on your Proxmox nodes:
```bash
curl -fsSL http://your-pulse-server:7655/api/download/install.sh | bash -s -- \
curl -fsSL http://your-pulse-server:7655/install.sh | bash -s -- \
--url http://your-pulse-server:7655 \
--token YOUR_TOKEN \
--enable-proxmox
@@ -15,20 +18,16 @@ curl -fsSL http://your-pulse-server:7655/api/download/install.sh | bash -s -- \
**Benefits:**
- ✅ One-command setup
- ✅ Automatic API token creation
- ✅ Temperature monitoring built-in
- ✅ Enables AI features for VM/container management
- ✅ No SSH keys or proxy configuration required
The agent runs `sensors -j` locally and reports temperatures directly to Pulse.
---
## Legacy: Sensor Proxy (SSH-based)
## Deprecated: Sensor Proxy (Host Service)
For users who prefer not to install an agent on their hypervisor, the sensor-proxy method is still available.
> **Note:** This method is deprecated and will be removed in a future release. Consider migrating to the agent-based approach.
`pulse-sensor-proxy` is deprecated in v5 and is not recommended for new deployments. This section is retained for existing installations during the migration window.
### 🛡️ Security Model
* **Isolation**: SSH keys live on the host, not in the container.
@@ -57,3 +56,9 @@ All requests logged to system journal:
journalctl -u pulse-sensor-proxy
```
Logs include: `uid`, `pid`, `method`, `node`, `correlation_id`.
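
To pull recent entries for a specific request, something like (time window is illustrative):

```bash
journalctl -u pulse-sensor-proxy --since "1 hour ago" | grep correlation_id
```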
### Related Docs
- Sensor proxy hardening: `docs/security/SENSOR_PROXY_HARDENING.md`
- Network segmentation: `docs/security/SENSOR_PROXY_NETWORK.md`
- AppArmor/Seccomp: `docs/security/SENSOR_PROXY_APPARMOR.md`

View File

@@ -6,7 +6,6 @@ import { copyToClipboard } from '@/utils/clipboard';
import { showSuccess, showError } from '@/utils/toast';
import { getPulseBaseUrl } from '@/utils/url';
import { NodesAPI } from '@/api/nodes';
import { apiFetchJSON } from '@/utils/apiClient';
import { SectionHeader } from '@/components/shared/SectionHeader';
import {
formField,
@@ -40,12 +39,6 @@ type TemperatureTransportDetail = {
disable?: boolean;
};
interface ProxyInstallResponse {
command: string;
pulseURL: string;
node?: string;
}
const deriveNameFromHost = (host: string): string => {
let value = host.trim();
if (!value) {
@@ -98,9 +91,6 @@ export const NodeModal: Component<NodeModalProps> = (props) => {
const [quickSetupCommand, setQuickSetupCommand] = createSignal('');
const [quickSetupToken, setQuickSetupToken] = createSignal('');
const [quickSetupExpiry, setQuickSetupExpiry] = createSignal<number | null>(null);
const [proxyInstallCommand, setProxyInstallCommand] = createSignal('');
const [loadingProxyCommand, setLoadingProxyCommand] = createSignal(false);
const [proxyCommandError, setProxyCommandError] = createSignal<string | null>(null);
const [agentInstallCommand, setAgentInstallCommand] = createSignal('');
const [loadingAgentCommand, setLoadingAgentCommand] = createSignal(false);
const [agentCommandError, setAgentCommandError] = createSignal<string | null>(null);
@@ -164,36 +154,6 @@ export const NodeModal: Component<NodeModalProps> = (props) => {
}
return undefined;
};
const shouldOfferProxyCommand = () =>
props.nodeType === 'pve' && Boolean(props.editingNode?.id) && Boolean(temperatureTransportDetail()?.disable);
const fetchProxyInstallCommand = async () => {
if (loadingProxyCommand()) {
return;
}
setLoadingProxyCommand(true);
setProxyCommandError(null);
setProxyInstallCommand('');
try {
const nodeName = props.editingNode?.name ? encodeURIComponent(props.editingNode!.name) : '';
const query = nodeName ? `?node=${nodeName}` : '';
const response = (await apiFetchJSON(
`/api/temperature-proxy/install-command${query}`,
)) as ProxyInstallResponse;
if (!response || typeof response.command !== 'string') {
throw new Error('Proxy installer command unavailable');
}
setProxyInstallCommand(response.command);
showSuccess('HTTPS proxy command ready', undefined, 2000);
} catch (error) {
const message =
error instanceof Error ? error.message : 'Failed to generate HTTPS proxy command';
setProxyCommandError(message);
showError(message);
logger.error('Failed to load proxy install command', error);
} finally {
setLoadingProxyCommand(false);
}
};
const quickSetupExpiryLabel = () => {
const expiry = quickSetupExpiry();
if (!expiry) {
@@ -2105,56 +2065,6 @@ export const NodeModal: Component<NodeModalProps> = (props) => {
{temperatureTransportDetail()?.message}
</p>
</Show>
<Show when={shouldOfferProxyCommand()}>
<div class="mt-3 rounded border border-blue-200 bg-blue-50 p-2 text-xs text-blue-800 dark:border-blue-700 dark:bg-blue-900/20 dark:text-blue-100 space-y-2">
<div class="font-semibold">Temperature proxy (legacy)</div>
<div class="text-amber-700 dark:text-amber-300 text-[11px]">
Recommended: Install the Pulse agent instead (Settings → Agents) for temperatures + AI features.
</div>
<div>Or generate a one-line installer command for the standalone temperature proxy:</div>
<div class="flex flex-wrap gap-2">
<button
type="button"
class="rounded bg-blue-600 px-2 py-1 text-white hover:bg-blue-700 disabled:opacity-70"
onClick={() => void fetchProxyInstallCommand()}
disabled={loadingProxyCommand()}
>
{loadingProxyCommand() ? 'Generating…' : 'Generate command'}
</button>
<a
href="/api/install/install-sensor-proxy.sh"
target="_blank"
rel="noreferrer"
class="rounded border border-blue-400 px-2 py-1 text-blue-700 hover:bg-blue-100 dark:text-blue-200 dark:border-blue-300 dark:hover:bg-blue-900/40"
>
Download installer script
</a>
</div>
<Show when={proxyCommandError()}>
<p class="text-xs text-red-600 dark:text-red-300">
{proxyCommandError()}
</p>
</Show>
<Show when={proxyInstallCommand()}>
<pre class="overflow-x-auto rounded bg-white/70 px-2 py-1 font-mono text-[0.65rem] text-gray-800 dark:bg-gray-900/40 dark:text-gray-200">
{proxyInstallCommand()}
</pre>
<button
type="button"
class="rounded bg-blue-600 px-2 py-1 text-white hover:bg-blue-700"
onClick={() => {
const command = proxyInstallCommand();
if (command) {
void copyToClipboard(command);
showSuccess('Proxy installer command copied');
}
}}
>
Copy command
</button>
</Show>
</div>
</Show>
<Show when={!temperatureMonitoringEnabledValue()}>
<p class="mt-3 rounded border border-blue-200 bg-blue-50 p-2 text-xs text-blue-700 dark:border-blue-700 dark:bg-blue-900/20 dark:text-blue-200">
Pulse will skip SSH temperature polling for this node. Existing dashboard readings will stop refreshing.

View File

@@ -4738,6 +4738,12 @@ func (r *Router) handleDownloadInstallerScript(w http.ResponseWriter, req *http.
return
}
log.Warn().
Str("path", req.URL.Path).
Str("remote", req.RemoteAddr).
Msg("Deprecated pulse-sensor-proxy installer requested - use pulse-agent --enable-proxmox instead")
w.Header().Set("Warning", `299 - "pulse-sensor-proxy is deprecated in v5; use pulse-agent --enable-proxmox"`)
// Try pre-built location first (in container)
scriptPath := "/opt/pulse/scripts/install-sensor-proxy.sh"
content, err := os.ReadFile(scriptPath)
@@ -4842,6 +4848,12 @@ func (r *Router) handleTemperatureProxyInstallCommand(w http.ResponseWriter, req
return
}
log.Warn().
Str("path", req.URL.Path).
Str("remote", req.RemoteAddr).
Msg("Deprecated sensor-proxy install command requested - use pulse-agent --enable-proxmox instead")
w.Header().Set("Warning", `299 - "pulse-sensor-proxy is deprecated in v5; use pulse-agent --enable-proxmox"`)
baseURL := strings.TrimSpace(r.resolvePublicURL(req))
if baseURL == "" {
http.Error(w, "Pulse public URL is not configured", http.StatusBadRequest)

View File

@@ -44,10 +44,11 @@ type Config struct {
KubeContext string
// Report shaping
IncludeNamespaces []string
ExcludeNamespaces []string
IncludeAllPods bool // Include all non-succeeded pods (still capped)
MaxPods int // Max pods included in the report
IncludeNamespaces []string
ExcludeNamespaces []string
IncludeAllPods bool // Include all non-succeeded pods (still capped)
IncludeAllDeployments bool // Include all deployments, not just problem ones
MaxPods int // Max pods included in the report
}
type Agent struct {
@@ -600,7 +601,7 @@ func (a *Agent) collectDeployments(ctx context.Context) ([]agentsk8s.Deployment,
if !a.namespaceAllowed(dep.Namespace) {
continue
}
if !isProblemDeployment(dep) {
if !a.cfg.IncludeAllDeployments && !isProblemDeployment(dep) {
continue
}

View File

@@ -183,6 +183,39 @@ func TestCollectDeployments_FiltersProblems(t *testing.T) {
}
}
func TestCollectDeployments_IncludeAllDeployments(t *testing.T) {
replicas := int32(3)
okReplicas := int32(2)
clientset := fake.NewSimpleClientset(
&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{Namespace: "a", Name: "bad"},
Spec: appsv1.DeploymentSpec{Replicas: &replicas},
Status: appsv1.DeploymentStatus{AvailableReplicas: 2, ReadyReplicas: 2, UpdatedReplicas: 2},
},
&appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{Namespace: "a", Name: "ok"},
Spec: appsv1.DeploymentSpec{Replicas: &okReplicas},
Status: appsv1.DeploymentStatus{AvailableReplicas: 2, ReadyReplicas: 2, UpdatedReplicas: 2},
},
)
a := &Agent{
cfg: Config{IncludeAllDeployments: true},
kubeClient: clientset,
includeNamespaces: nil,
excludeNamespaces: nil,
}
deps, err := a.collectDeployments(context.Background())
if err != nil {
t.Fatalf("collectDeployments: %v", err)
}
if len(deps) != 2 {
t.Fatalf("expected 2 deployments with IncludeAllDeployments=true, got %d (%+v)", len(deps), deps)
}
}
func TestCollectNodes_MapsReadyRolesAndResources(t *testing.T) {
clientset := fake.NewSimpleClientset(
&corev1.Node{