Getting Started with Nebula Pulse
Nebula Pulse is a Docker Swarm management platform that gives you a visual control plane for your container infrastructure. Deploy stacks, monitor services, auto-scale workloads, and integrate with your Git workflows — all from a single interface.
What is Nebula Pulse?
At its core, Nebula Pulse connects to the Docker Swarm manager on your host and exposes a rich UI and API for managing your cluster. A lightweight Nebula Agent runs as a global service on every node, collecting real-time CPU and memory metrics that the platform uses for autoscaling and observability.
🆓 Free / Community
- Deploy & destroy stacks
- View logs & service state
- Manual scaling
- Up to 5 users
- Audit logging
⚡ Pro
- Real-time CPU & memory metrics
- CPU-based autoscaling
- Multi-node management via SSH
- Node label management
- Backup & restore
- Full RBAC, unlimited users
🏢 Enterprise
- AI Assistant (YAML generation)
- GitOps auto-deploy
- SSO (Google, Azure AD, LDAP)
- Full REST API access
- Audit & compliance exports
Architecture Overview
Nebula Pulse is designed to run on a single manager node or across a multi-node swarm. The following services make up the platform:
| Service | Purpose | Required |
|---|---|---|
| nebula_app | Node.js API + backend logic | Yes |
| nebula_web | Nginx serving the frontend SPA | Yes |
| nebula_db | PostgreSQL — users, audit logs, GitOps config | Yes |
| nebula_agent | Rust telemetry agent (global service on all nodes) | Pro+ |
| ai_sidecar | Python RAG assistant + Chroma vector DB | Enterprise |
| ai_ollama | Local LLM inference (Ollama) | Enterprise |
Installation
Run the automated installer on your Docker Swarm manager node. For existing installations, see Upgrading Nebula Pulse below.
curl -fsSL https://get.nebula-pulse.io/install.sh | sudo bash
The installer will:
- Initialize Docker Swarm (if not already active)
- Generate a random admin password and JWT secret
- Deploy the full Nebula Pulse stack
- Print the access URL and admin credentials
First Login
- Open the URL printed by the installer (default port 80)
- Sign in with username admin and the generated password
- You will be prompted to change your password on first login
- After changing your password, the default credential is cleared from the environment for security
Security note: After you change your password, Nebula removes the ADMIN_PASSWORD environment variable from the Docker Swarm service spec so it cannot be reinjected on restart.
User Roles
| Role | Capabilities |
|---|---|
| Admin | Full access — users, nodes, stacks, audit logs, license management |
| DevOps | Deploy/destroy stacks, scale services, manage nodes |
| Developer | View stacks and logs, trigger GitOps syncs |
| User | Read-only — view service status and metrics |
Deployment Modes
Single Node
All services run on one machine. Ideal for development, staging, or small production workloads. Docker Swarm runs in single-node mode — you can still use the full Nebula Pulse feature set.
Multi-Node Cluster
Add worker nodes from the Swarm Topology tab using the SSH join feature (Pro+). The Nebula Agent deploys automatically to every node, enabling per-node metrics and cross-node visibility.
Air-Gapped / Offline
An offline installer package is available. License activation works via file upload — no external network access required.
Upgrading Nebula Pulse
Every Nebula Pulse release package includes an upgrade.sh script that updates the application in-place without touching your credentials, stacks, database, or license. Running install.sh on an existing installation automatically detects the installation and delegates to upgrade.sh, so either script works.
How It Works
The upgrade process is non-destructive by design. New application code is laid over the existing installation while critical data is stashed, restored, and verified. The Docker Swarm stack is then rolled out with docker stack deploy, which performs a rolling update on all services.
Database schema migrations run automatically on first login after an upgrade — no manual SQL required. New tables and columns are added with CREATE TABLE IF NOT EXISTS / ALTER TABLE ADD COLUMN IF NOT EXISTS, so existing data is always preserved.
What Is Preserved
| Item | Location | Why It Matters |
|---|---|---|
| .env | /opt/nebula/.env | JWT secret, DB credentials, SMTP config — regenerating would invalidate all active sessions and break integrations |
| license.key | /opt/nebula/license.key | Commercial license — must survive upgrades |
| stacks/ | /opt/nebula/stacks/ | All user-created stack YAML files |
| ai/documents/ | /opt/nebula/ai/documents/ | User-uploaded documents for the RAG assistant |
| certs/ | /opt/nebula/certs/ | TLS certificates |
| PostgreSQL data | Docker named volume nebula_db_fresh | All users, audit logs, GitOps config, secrets metadata — stored in a Docker volume, outside the /opt/nebula tree |
New environment variables introduced in a release (e.g. a new integration setting) are automatically appended to your .env with their default values. Existing values are never modified.
Upgrade Steps
Extract the new release package and run the upgrade script as root:
tar -xzf nebulaPulse.tar.gz
cd nebula_dist
sudo ./upgrade.sh
Alternatively, running install.sh on an existing host detects the existing installation and automatically delegates to upgrade.sh:
sudo ./install.sh # same as upgrade.sh when /opt/nebula/.env exists
The script runs through 7 stages with clear progress output.
Pre-Upgrade Backup
A backup is automatically created before any files are modified. Backups are stored at:
/opt/nebula_backups/pre_upgrade_<version>_<timestamp>.tar.gz
Each backup contains:
- .env — all environment variables
- license.key — your license file
- nginx.conf — web server config
- stacks/ — all stack YAML files
- database_pre_upgrade.sql — full PostgreSQL dump
Backups are not automatically pruned. Retain them until you have verified the upgrade, and consider moving them off-host before a major release upgrade.
Rollback
If an upgrade does not go as expected, restore from the pre-upgrade backup:
1. Restore application files
# Extract the backup
cd /opt/nebula_backups
tar -xzf pre_upgrade_4.0.0_20260412_120000.tar.gz
# Restore .env and stacks
cp pre_upgrade_4.0.0_20260412_120000/.env /opt/nebula/.env
cp -r pre_upgrade_4.0.0_20260412_120000/stacks /opt/nebula/
2. Restore the previous release package
Re-run upgrade.sh from the previous release package to roll the application files back, then redeploy the stack.
3. Restore the database (if needed)
DB=$(docker ps -q -f name=nebula_db | head -n 1)
cat pre_upgrade_4.0.0_20260412_120000/database_pre_upgrade.sql \
| docker exec -i $DB psql -U nebula nebula_db
DB rollback is rarely needed. Schema migrations only add columns or tables — they never drop existing data. A DB restore is only necessary if a migration introduced data that must be reverted.
Automation / CI
Pass --yes to skip the interactive confirmation prompt for use in automated pipelines:
sudo ./upgrade.sh --yes
The script exits with code 0 on success and non-zero on any failure. All output is written to stdout/stderr, making it compatible with standard CI log capture.
Upgrade summary output
On completion, upgrade.sh prints a summary including the dashboard URL, the backup archive path, and the version transition:
════════════════════════════════════════════════════
✅ UPGRADE COMPLETE
════════════════════════════════════════════════════
Dashboard : http://192.168.1.132:80
Backup : /opt/nebula_backups/pre_upgrade_4.0.0_20260412_120000.tar.gz
Upgraded : 4.0.0 → 4.1.0
Features & Tiers
Nebula Pulse ships with three licensing tiers. All tiers include core stack management and audit logging. Advanced metrics, autoscaling, GitOps, and AI capabilities are unlocked progressively.
Free / Community
No license key required. The Free tier is fully functional for individuals and small teams getting started with Docker Swarm management.
Included features
- Deploy and destroy stacks from YAML
- Service state monitoring (Running / Degraded / Down)
- Replica count (actual vs desired)
- Node placement visibility
- Manual scaling with +/− buttons
- Container log viewer
- Stack configuration viewer
- Audit logging
- Up to 5 user accounts
Limitations
- No real-time CPU / memory metrics (Docker Swarm API only)
- No autoscaling
- No multi-node SSH join/remove
- No backup & restore
- No SSO or identity provider integration
Pro
Unlocks deep metrics from the Nebula Agent running on each node, autoscaling, and multi-node cluster management.
Everything in Free, plus:
Deep Metrics (Nebula Agent)
| Metric | Source | Example |
|---|---|---|
| vCPU % | Kernel cgroups via agent | 45.2% |
| RAM | Memory cgroups via agent | 256 MiB |
Autoscaling Labels
deploy:
labels:
nebula.scale.cpu: "70" # Scale up when CPU exceeds 70%
nebula.scale.min: "2" # Always keep at least 2 replicas
nebula.scale.max: "10" # Never exceed 10 replicas
- Node management — add nodes via SSH, drain and remove nodes
- Node label management for placement constraints
- File browser for container filesystem access
- One-click backup download (ZIP: stacks, configs, license)
- Full Role-Based Access Control (RBAC) with unlimited users
Enterprise
Full platform capabilities, including AI-powered assistance, GitOps integration, and enterprise identity providers.
Everything in Pro, plus:
- AI Assistant — Natural language Docker Compose generation, RAG-powered documentation queries, troubleshooting help
- GitOps — Auto-deploy on git push from GitHub, GitLab, or Bitbucket
- SSO — Google Workspace, Azure AD / Entra ID, LDAP / Active Directory via Dex OIDC
- REST API — Full programmatic access for external integrations
- Audit exports — CSV and JSON export for compliance reporting
Scaling Protection
Nebula prevents accidental scaling of stateful services. If a service name matches a known stateful pattern, a warning is shown before scaling proceeds:
| Protected Patterns |
|---|
| postgres, mysql, mariadb, mongodb |
| redis, elasticsearch, cassandra |
| rabbitmq, kafka, zookeeper |
| Any service with _db or -db in its name |
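A quick shell check can reproduce the documented patterns. This is a sketch only: the matching Nebula actually performs is internal, and is_valid here, is_stateful, is an illustrative name rather than a Nebula command.

```shell
# Sketch of the stateful-name check described above (illustrative only).
is_stateful() {
  case "$1" in
    *postgres*|*mysql*|*mariadb*|*mongodb*|*redis*|*elasticsearch*|*cassandra*|*rabbitmq*|*kafka*|*zookeeper*|*_db*|*-db*)
      return 0 ;;
    *)
      return 1 ;;
  esac
}

is_stateful "shop_postgres" && echo "warn before scaling"
is_stateful "web_frontend"  || echo "no warning"
```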
License Enforcement
| Days Remaining | UI Indicator | Feature Access |
|---|---|---|
| > 30 days | None | Full |
| 10–30 days | Yellow warning | Full |
| 1–10 days | Orange warning | Full |
| Expired 0–7 days (grace) | Red banner | Full (grace period) |
| Expired > 7 days | Lockout banner | Reverts to Free |
Warning thresholds differ by billing period: 30 days for yearly subscriptions, 10 days for monthly.
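The billing-dependent threshold can be expressed as a small lookup. The sketch below restates the documented rule; warn_threshold is not a Nebula function.

```shell
# First warning point (days remaining) by billing period, per the rule above.
warn_threshold() {
  case "$1" in
    yearly)  echo 30 ;;
    monthly) echo 10 ;;
  esac
}

days_remaining=25
[ "$days_remaining" -le "$(warn_threshold yearly)" ]  && echo "yearly license: show warning"
[ "$days_remaining" -gt "$(warn_threshold monthly)" ] && echo "monthly license: no warning yet"
```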
Stack Management
Nebula Pulse manages Docker Swarm stacks with automatic directory organisation, config injection, network creation, and bind-mount volume setup. Each stack gets its own directory under /opt/nebula/stacks/.
Directory Structure
stacks/
rabbitmq/
docker-compose.yml # Stack YAML
rabbitmq-config # Config file saved to disk
rabbitmq-definitions # Config file saved to disk
nginx/
docker-compose.yml
html/ # Auto-created bind mount directory
kafka/
docker-compose.yml
Legacy flat stack files (stacks/nginx.yml) are automatically migrated to the directory structure on platform startup.
Deploying a Stack
Via the UI
- Click STACKS → + Deploy Stack
- Enter a stack name — alphanumeric, dashes, and underscores only. Names cannot start with nebula.
- Paste or type your Docker Compose YAML. Enterprise users can use AI Autocraft to generate YAML from a plain-English description.
- If the YAML references Docker configs, input fields appear automatically for each config file.
- Click EXECUTE to deploy.
What happens during deployment
- Config creation — Docker configs are saved to disk and registered in Swarm
- Network creation — External overlay networks are pre-created
- Volume directories — Local bind-mount paths under /opt/nebula/stacks/ are auto-created
- YAML processing — The top-level configs: block is injected with external: true
- Stack deploy — docker stack deploy -c <yaml> <name> is executed
- Audit log — Success or failure is recorded with full deployment details
Via the API
POST /api/stack/deploy
Content-Type: application/json
{
"name": "mystack",
"yaml": "version: '3.8'\nservices:\n web:\n image: nginx:alpine",
"configs": {
"config-name": "file content here..."
}
}
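The same deployment can be scripted. Below is a minimal sketch with curl; the host and token are placeholders, and the Bearer Authorization header is an assumption based on the token scheme shown for the node APIs.

```shell
# Build the request body shown above and sanity-check it locally.
cat > payload.json <<'EOF'
{
  "name": "mystack",
  "yaml": "version: '3.8'\nservices:\n  web:\n    image: nginx:alpine"
}
EOF
python3 -m json.tool payload.json > /dev/null && echo "payload is valid JSON"

# Placeholders: set NEBULA_HOST and TOKEN for your installation, then:
# curl -s -X POST "http://$NEBULA_HOST/api/stack/deploy" \
#   -H "Authorization: Bearer $TOKEN" \
#   -H "Content-Type: application/json" \
#   --data @payload.json
```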
Destroying a Stack
- Click STACKS → Destroy Stack
- Select the stack from the dropdown
- Confirm the deletion
On destroy, Nebula:
- Removes the stack via docker stack rm
- Waits 3 seconds for services to drain, then removes Docker configs
- Deletes the stack directory from disk
- Writes an audit log entry
Placement Constraints
Pin services to specific nodes using label constraints:
deploy:
placement:
constraints:
- node.labels.rabbitmq == 1
Use Node Management to assign labels to nodes before deploying stacks with placement constraints.
Stack APIs
| Endpoint | Method | Description |
|---|---|---|
| /api/stack/deploy | POST | Deploy a stack with optional configs |
| /api/stack/destroy | POST | Destroy a stack and clean up configs |
| /api/stacks/list | GET | List all deployed stacks |
| /api/stack/:name/yaml | GET | Get the YAML for a stack |
| /api/stack/:name/configs | GET | Get config references for a stack |
| /api/stack/configs/parse | POST | Parse YAML to detect config references |
Troubleshooting
"No suitable node" error
The stack has placement constraints that no nodes satisfy. Use Node Label Management to add the required labels to your nodes.
Config files not appearing in deploy modal
Make sure the YAML has a configs: section at the service level. Config detection is automatic when you type or paste YAML.
Stack failing after config changes
Docker Swarm configs are immutable. When redeploying, Nebula removes the old config and creates a new one. The 3-second drain delay handles in-flight requests.
Bind mount directory not created
Only directories under /opt/nebula/stacks/ are auto-created. For other paths, create the directory manually before deploying.
Autoscaling
Nebula Pulse automatically scales Docker Swarm services horizontally based on CPU utilisation. The autoscaler evaluates each enabled service every 15 seconds and adjusts replicas within the bounds you define.
Autoscaling Labels
Enable autoscaling by adding three labels to the deploy.labels section of your stack YAML:
| Label | Type | Description |
|---|---|---|
| nebula.scale.cpu | String (0–100) | CPU % threshold to trigger scale-up |
| nebula.scale.min | String (≥ 1) | Minimum replica count (floor) |
| nebula.scale.max | String | Maximum replica count (ceiling) |
Labels must be strings — use quotes: "70" not 70. Labels must be in the deploy.labels section, not the top-level service labels.
version: '3.8'
services:
web:
image: nginx:alpine
deploy:
replicas: 1
labels:
nebula.scale.cpu: "70" # Scale up when CPU > 70%
nebula.scale.min: "1" # Keep at least 1 replica
nebula.scale.max: "10" # Never exceed 10 replicas
resources:
limits:
cpus: '0.5'
memory: 256M
Scale Behavior
Scale Up
- Triggers when average CPU exceeds nebula.scale.cpu
- Adds replicas gradually (+1 or +2 at a time)
- Never exceeds nebula.scale.max
- Check interval: 15 seconds
Scale Down
- Triggers when average CPU drops below 50% of the target threshold
- Example: target = 70% → scale down kicks in below 35%
- Removes replicas gradually to avoid disruption
- Never goes below nebula.scale.min
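Putting the two rule sets together, one evaluation tick can be sketched as follows. This is illustrative logic only (decide_scale is not part of Nebula), using integer CPU percentages for simplicity.

```shell
# One autoscaler tick, per the rules above: scale up above the target,
# scale down below half the target, otherwise hold, clamped to min/max.
decide_scale() {
  cpu=$1; target=$2; replicas=$3; min=$4; max=$5
  if [ "$cpu" -gt "$target" ] && [ "$replicas" -lt "$max" ]; then
    echo up
  elif [ "$cpu" -lt $(( target / 2 )) ] && [ "$replicas" -gt "$min" ]; then
    echo down
  else
    echo hold
  fi
}

decide_scale 85 70 2 1 10   # above 70%         -> up
decide_scale 30 70 4 1 10   # below 35% (70/2)  -> down
decide_scale 60 70 3 1 10   # between the bands -> hold
```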
CPU Limits and Scaling Speed
CPU percentage is calculated relative to the service's resources.limits.cpus value. Lower CPU limits make percentages rise faster under load, which causes the autoscaler to react more quickly.
resources:
limits:
cpus: '0.25' # Lower limit = faster CPU % rise = faster scaling
memory: 128M
reservations:
cpus: '0.1'
memory: 64M
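To make the effect concrete, the same absolute usage reads as a very different percentage depending on the limit (a worked example, not Nebula output):

```shell
# 0.2 cores of actual usage measured against two different limits:
awk 'BEGIN { printf "limit 0.50 -> %.0f%%\n", 0.2 / 0.5  * 100 }'   # 40%
awk 'BEGIN { printf "limit 0.25 -> %.0f%%\n", 0.2 / 0.25 * 100 }'   # 80%
```

With a 70% threshold, the 0.25-core service would already be scaling up while the 0.5-core one is not.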
Examples
Stateless Web Application
version: '3.8'
services:
app:
image: nginx:alpine
ports:
- "8080:80"
deploy:
replicas: 2
labels:
nebula.scale.cpu: "60"
nebula.scale.min: "2"
nebula.scale.max: "8"
resources:
limits:
cpus: '0.5'
memory: 256M
restart_policy:
condition: on-failure
networks:
- app_net
networks:
app_net:
driver: overlay
High-Availability API Service
version: '3.8'
services:
api:
image: myapp/api:latest
ports:
- "3000:3000"
deploy:
replicas: 3
labels:
nebula.scale.cpu: "75"
nebula.scale.min: "3"
nebula.scale.max: "15"
resources:
limits:
cpus: '1.0'
memory: 512M
reservations:
cpus: '0.25'
memory: 256M
networks:
- backend
networks:
backend:
driver: overlay
Recommended Thresholds by Service Type
| Service Type | CPU Threshold | Notes |
|---|---|---|
| Web / frontend servers | 60–70% | Low latency sensitive |
| API / microservices | 70–75% | General purpose |
| Databases | 80–85% | Scale conservatively; stateful |
| Background workers | 75–80% | Latency tolerant |
Testing Autoscaling
# Using Apache Bench (100k requests, 50 concurrent, 60 seconds)
ab -n 100000 -c 50 -t 60 http://your-service:port/
# Using hey (simpler syntax)
hey -z 60s -c 50 http://your-service:port/
# Monitor replicas in real time
watch docker service ls
Troubleshooting
Service not scaling
- Confirm labels are in the deploy.labels section (not top-level labels)
- Check that resources.limits.cpus is set on the service
- Verify the service is actually under load — check with docker stats
- Ensure nebula.scale.min < nebula.scale.max
Scaling too aggressively
- Increase the CPU threshold
- Reduce nebula.scale.max
- Increase CPU resource limits so the percentage rises more slowly
Scaling too slowly
- Lower the CPU threshold
- Decrease CPU limits so percentage rises faster
- Increase the initial replica count
GitOps
Nebula Enterprise's GitOps integration automatically deploys updated stack definitions when you push to a configured Git branch. You maintain your infrastructure as code and Nebula handles the deployment.
How GitOps Works
When you push to the configured branch (for example main), your Git provider sends a webhook to Nebula, which verifies the signature, pulls the updated stack files, and redeploys the affected stacks.
Requirements
- Nebula Enterprise license
- Git repository (GitHub, GitLab, or Bitbucket)
- Repository accessible over HTTPS from your Nebula host
- Nebula accessible from the internet (or from your Git provider's IP range)
Setup
Step 1 — Add Repository in Nebula
- Click GITOPS in the admin panel
- Fill in the repository details: Name, Provider, Repository URL, Branch, and Stack Path
- Click ADD REPOSITORY
- Copy the generated Webhook URL and Secret for the next step
Configuring Webhook Providers
GitHub
- Go to your repo → Settings → Webhooks → Add webhook
- Set Payload URL to the Nebula webhook URL
- Set Content type to application/json
- Paste the Nebula-generated Secret
- Select Just the push event and click Add webhook
GitLab
- Go to your repo → Settings → Webhooks
- Paste the webhook URL and Secret token from Nebula
- Check Push events and click Add webhook
Repository Structure
Organise your repository with stack files under the configured Stack Path (default: /stacks):
/stacks
├── frontend.yml
├── backend.yml
├── database.yml
└── monitoring.yml
# Or by environment:
/stacks
├── production/
│ ├── app.yml
│ └── db.yml
└── staging/
├── app.yml
└── db.yml
Best Practices
Use environment branches
main → Production deployments
staging → Staging environment
develop → Development testing
Configure a separate Nebula GitOps repository entry for each branch.
Pin image versions
services:
api:
image: myorg/api:v1.2.3 # ✅ Specific version
# NOT: image: myorg/api:latest # ❌ Unpredictable
Use Docker secrets for sensitive values
services:
app:
secrets:
- db_password
secrets:
db_password:
external: true # Created separately, never committed to Git
Validate before pushing
# Validate locally before pushing
docker-compose -f stacks/app.yml config
# GitHub Actions validation workflow
# .github/workflows/validate.yml
name: Validate Stacks
on:
push:
paths: ['stacks/**']
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Validate all stack files
run: |
for f in stacks/*.yml; do
docker-compose -f "$f" config > /dev/null && echo "✅ $f"
done
Troubleshooting
Webhook not triggering
- Verify the webhook URL is correct in your Git provider
- Check the secret matches exactly — no trailing spaces
- Ensure Nebula is accessible from the internet
- Review webhook delivery logs in your Git provider
Deployment fails after push
- Check Nebula app logs: click LOGS on the nebula_app service
- Verify YAML syntax is valid (docker-compose config)
- Ensure container images are accessible from all swarm nodes
- Confirm the stack path in Nebula matches your repository structure
Signature verification failed
- GitHub — Regenerate the webhook secret in Nebula and update it in GitHub
- GitLab — Ensure the X-Gitlab-Token header matches the Nebula secret exactly
Security Considerations
- All webhooks use cryptographic signature verification (HMAC-SHA256 for GitHub)
- Webhook secrets are generated using cryptographically secure random bytes
- Use HTTPS URLs for repository access; SSH is not yet supported
- For private repositories, embed an access token in the HTTPS URL, e.g. https://<token>@github.com/org/repo.git
- Consider IP allowlisting your webhook endpoint
- Rotate webhook secrets periodically
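When debugging a signature failure, it can help to compute the expected GitHub-style signature yourself. The sketch below uses openssl; the secret and payload values are placeholders, and the output format follows GitHub's documented sha256=<hex> convention for the X-Hub-Signature-256 header.

```shell
secret='my-webhook-secret'            # placeholder: your Nebula-generated secret
payload='{"ref":"refs/heads/main"}'   # placeholder: the exact raw request body

# HMAC-SHA256 over the raw body, hex-encoded
sig=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "X-Hub-Signature-256: sha256=$sig"
```

Compare the result against the signature shown in your Git provider's delivery log; a mismatch usually means the secret or the raw body differs (trailing whitespace is a common culprit).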
Node Management
Pro and Enterprise users can manage their Docker Swarm topology directly from Nebula Pulse — adding worker nodes via SSH, assigning placement labels, and draining or removing nodes from the cluster.
Swarm Topology View
The Swarm Topology table shows all nodes in your cluster with their current status, role, IP address, and assigned labels. Each node displays real-time CPU and memory usage collected by the Nebula Agent.
Labels & Placement Constraints
Docker Swarm node labels are key-value pairs that control which nodes run specific services. Nebula provides a UI for managing labels without touching the CLI.
Labels are set on nodes (via Nebula's Node Management UI), while autoscaling labels are set on services (in your stack YAML's deploy.labels section).
Example placement constraint
deploy:
placement:
constraints:
- node.labels.rabbitmq == 1 # Only run on nodes with this label
Common label patterns
| Label | Purpose |
|---|---|
| rabbitmq=1, rabbitmq=2 | Pin specific RabbitMQ nodes to specific hosts |
| postgres=primary | Pin the primary database to a dedicated host |
| zone=us-east-1a | Availability zone awareness |
| tier=frontend | Group nodes by role |
| env=production | Environment isolation |
Label format rules
- Keys and values: alphanumeric with dots (.), dashes (-), or underscores (_)
- Maximum 128 characters per key or value
- Examples: rabbitmq=1, app.role=database, zone=us-east
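The format rules can be checked with a quick grep before pasting a label into the UI. This is a sketch; is_valid_label is illustrative, not a Nebula command.

```shell
# key=value, each side 1-128 chars of [A-Za-z0-9._-], per the rules above.
is_valid_label() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9._-]{1,128}=[A-Za-z0-9._-]{1,128}$'
}

is_valid_label 'app.role=database' && echo "ok"
is_valid_label 'bad label=1'       || echo "rejected: spaces not allowed"
```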
Managing Labels via UI
Adding a label (Admin only)
- Go to the Swarm Topology table
- Find the target node and click + TAG in the Tags column
- Type the label in key=value format (e.g. rabbitmq=1)
- Press Enter or click ADD — the label is applied immediately
Removing a label (Admin only)
- Find the label badge on the node
- Click the red × on the badge and confirm
Automatic Labels
When a node joins the swarm via Nebula's SSH join feature, it automatically receives an ip_address label set to its IP address.
API Reference
| Endpoint | Method | Description |
|---|---|---|
| /api/nodes/:id/labels | POST | Add or update labels on a node |
| /api/nodes/:id/labels/:key | DELETE | Remove a label from a node |
| /api/nodes/list | GET | List all nodes with current labels |
POST /api/nodes/abc123/labels
Authorization: Bearer <jwt>
Content-Type: application/json
{ "labels": { "rabbitmq": "1", "zone": "us-east" } }
→ { "success": true, "message": "Labels updated" }
Audit Logging
All label operations are recorded in the audit trail:
- NODE_LABEL_ADD — Label added or updated (includes node ID and key/value)
- NODE_LABEL_REMOVE — Label removed (includes node ID and key)
Docker Configs
Nebula Pulse supports Docker Swarm configs — a mechanism for injecting configuration files into containers at deploy time. Applications like RabbitMQ, HAProxy, Nginx, and Kafka often require configuration files that must exist before services start.
How It Works
Automatic config detection
When you paste YAML into the Deploy Stack modal, Nebula automatically parses it for Docker config references. If any service contains a configs: section, the UI shows input fields for each config file — one textarea per config.
Deploy flow with configs
- Open the Deploy Stack modal
- Enter a stack name and paste your YAML
- A purple CONFIG FILES section appears listing each detected config with its target path
- Paste the content for each config file into its textarea
- Click DEPLOY
Behind the scenes
- Config content is saved to disk at /opt/nebula/stacks/<stackname>/
- Each config is registered in Docker Swarm via docker config create
- The YAML is modified to add a top-level configs: block with external: true
- The stack is deployed with docker stack deploy
On stack destroy
- Nebula reads the stack YAML to identify config references
- Waits 3 seconds for services to drain
- Removes each config with docker config rm
- Deletes the stack directory and config files from disk
RabbitMQ Cluster Example
Stack YAML
version: "3.8"
services:
rabbitmq1:
image: rabbitmq:3.13-management
hostname: rabbitmq1
configs:
- source: rabbitmq-config
target: /etc/rabbitmq/rabbitmq.conf
- source: rabbitmq-definitions
target: /etc/rabbitmq/definitions.json
networks:
- rmq_net
deploy:
placement:
constraints:
- node.labels.rabbitmq == 1
networks:
rmq_net:
driver: overlay
rabbitmq-config content
cluster_formation.peer_discovery_backend = classic_config
cluster_formation.classic_config.nodes.1 = rabbit@rabbitmq1
cluster_formation.classic_config.nodes.2 = rabbit@rabbitmq2
cluster_formation.classic_config.nodes.3 = rabbit@rabbitmq3
loopback_users.guest = false
Redeployment
Docker Swarm configs are immutable. When redeploying a stack that already has configs, Nebula removes the old configs first, then creates new ones with the updated content before redeploying the stack.
API Reference
| Endpoint | Method | Description |
|---|---|---|
| /api/stack/configs/parse | POST | Parse YAML to detect config references |
| /api/stack/deploy | POST | Deploy stack with optional configs object |
| /api/stack/:name/configs | GET | Get configs associated with a deployed stack |
POST /api/stack/configs/parse
Content-Type: application/json
{ "yaml": "version: '3.8'\nservices:\n rabbitmq:\n configs:\n - source: my-config\n target: /etc/app/config.conf" }
→ {
"success": true,
"configs": [{ "name": "my-config", "target": "/etc/app/config.conf" }]
}
Audit Logging
Nebula Pulse records a comprehensive audit trail of all user actions and system events. Audit logs are available to admin users across all license tiers and include automatic 90-day retention with configurable cleanup.
Event Types
Authentication Events
| Action | Description |
|---|---|
| LOGIN_SUCCESS | User successfully authenticated |
| LOGIN_FAILURE | Failed login attempt (wrong credentials) |
User Management Events
| Action | Description |
|---|---|
| USER_CREATE | New user account created |
| USER_DELETE | User account deleted |
| PASSWORD_CHANGE | User changed their own password |
| PASSWORD_RESET | Admin reset a user's password |
Stack & Service Events
| Action | Description |
|---|---|
| STACK_DEPLOY | Stack deployed — success or failure both logged |
| STACK_DESTROY | Stack destroyed |
| SERVICE_SCALE | Service scaled up or down |
| SERVICE_RESTART | Service force-restarted |
Node Events
| Action | Description |
|---|---|
| NODE_JOIN | New node joined the swarm via SSH |
| NODE_REMOVE | Node drained and removed from swarm |
| NODE_LABEL_ADD | Label added or updated on a node |
| NODE_LABEL_REMOVE | Label removed from a node |
GitOps Events (Enterprise)
| Action | Description |
|---|---|
| GITOPS_REPO_ADD | Git repository added for GitOps |
| GITOPS_REPO_DELETE | Git repository removed |
| GITOPS_SYNC | Manual or webhook-triggered sync |
System Events
| Action | Description |
|---|---|
| LICENSE_ACTIVATE | License key activated |
| BACKUP_DOWNLOAD | Backup ZIP downloaded |
| FILE_UPLOAD | File uploaded to a container |
Log Entry Fields
Every audit log entry contains:
| Field | Description |
|---|---|
| timestamp | When the event occurred (ISO 8601) |
| username | Who performed the action |
| action | Event type code (e.g. STACK_DEPLOY) |
| category | Event category (AUTH, STACK, NODE, etc.) |
| target_type | What was affected (stack, node, user, etc.) |
| target_name | Name of the affected resource |
| ip_address | Client IP (forwarded through Nginx proxy) |
| user_agent | Browser / client user agent string |
| status | success or failure |
| details | Additional context as JSON (error messages, etc.) |
| session_id | JWT session identifier |
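A single entry combining these fields might look like this (all values are illustrative):

```json
{
  "timestamp": "2026-04-12T12:00:00.000Z",
  "username": "admin",
  "action": "STACK_DEPLOY",
  "category": "STACK",
  "target_type": "stack",
  "target_name": "rabbitmq",
  "ip_address": "192.168.1.50",
  "user_agent": "Mozilla/5.0",
  "status": "success",
  "details": { "services": 3 },
  "session_id": "0f2c1a9e"
}
```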
API & Export
Query audit logs
GET /api/audit/logs?page=1&limit=50&category=STACK&status=failure
Authorization: Bearer <admin-jwt>
Available query parameters: page, limit, category, action, username, status, startDate, endDate
Get audit statistics
GET /api/audit/stats
→ Event counts by category, status breakdown, failed logins, active users
Export logs
GET /api/audit/export?format=csv&startDate=2025-01-01&endDate=2025-12-31
→ Returns CSV or JSON file download
Viewing Logs in the UI
- Log in as an admin user
- Click the admin menu dropdown (top right)
- Select Audit Logs
- Use the filters to search by category, action, user, date range, or status
- Click Export to download as CSV or JSON
Retention Policy
- Default retention: 90 days
- Configurable via the AUDIT_RETENTION_DAYS environment variable
- Automatic daily cleanup of expired entries
- Cleanup runs on a 24-hour interval
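Conceptually, each cleanup pass deletes entries older than the retention window; the cutoff it applies is equivalent to something like this sketch (not Nebula's actual code):

```shell
AUDIT_RETENTION_DAYS="${AUDIT_RETENTION_DAYS:-90}"   # same default as above
cutoff=$(date -u -d "-${AUDIT_RETENTION_DAYS} days" '+%Y-%m-%d')
echo "audit entries older than $cutoff are removed"
```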
Licensing
Nebula Pulse uses JWT-based license keys to gate features across three tiers. The license file is stored at /opt/nebula/license.key and validated on startup and every hour thereafter.
License Tiers
| Tier | License Required | Key Capabilities |
|---|---|---|
| Free | None | Stack management, manual scaling, audit logging, up to 5 users |
| Pro | JWT license key | + Real-time metrics, autoscaling, multi-node management, RBAC, backup |
| Enterprise | JWT license key | + AI assistant, GitOps, SSO, full REST API |
License Key Format
License keys are signed JWT tokens containing:
| Claim | Description |
|---|---|
| tier | pro or enterprise |
| client | Company or client name |
| billing | monthly or yearly |
| features | Array of enabled features |
| exp | Expiration timestamp (Unix) |
| iss | nebula-pulse |
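Because the license is a standard JWT, its claims can be inspected (without verifying the signature) by decoding the middle segment. The sketch below uses a fabricated token so it is self-contained; real JWTs use URL-safe base64 without padding, so decoding a production key may additionally require translating -_ back to +/ and restoring = padding.

```shell
# Build a fake token so the decode step below is self-contained.
claims='{"tier":"pro","client":"Acme Corp","billing":"yearly","iss":"nebula-pulse"}'
token="header.$(printf '%s' "$claims" | base64 -w0).signature"

# Decode the claims (middle) segment of the JWT.
printf '%s' "$token" | cut -d. -f2 | base64 -d
echo
```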
Activating a License
Via the UI
- In the Nebula Pulse UI, click ACTIVATE PRO (visible on the Free tier)
- Paste your license key into the input field
- Click ACTIVATE
- The page reloads with the new tier active
Via file system
# Write directly to the license file
echo "your.jwt.license.key" > /opt/nebula/license.key
# Or use the generator tool (if you have access)
node /opt/nebula/tools/generate-license.js \
--tier enterprise \
--client "Acme Corp" \
--expires 2027-01-01 \
--output /opt/nebula/license.key
License Status API
GET /api/license/status
→ {
"tier": "enterprise",
"billing": "yearly",
"client": "Acme Corp",
"expired": false,
"gracePeriod": false,
"expiresAt": "2027-01-01T00:00:00.000Z",
"daysRemaining": 300,
"features": ["backup", "nodes", "scaling", "rbac", "ai", "sso", "audit", "api"]
}
Expiry & Grace Period
- Licenses are validated on startup and every hour
- On expiry, a 7-day grace period begins — all features remain active
- After the grace period, the system reverts to Free tier
| Days Until Expiry | Warning | Billing Type |
|---|---|---|
| 30 days | Yellow banner | Yearly |
| 10 days | Yellow banner | Monthly |
| 1–10 days | Orange banner | Both |
| 0–7 days (expired) | Red banner — grace period active | Both |
Troubleshooting
"License key has expired"
Generate a new key with a future expiration date and activate it via the UI or by writing to /opt/nebula/license.key.
Features not appearing after activation
The license is re-read on startup and every hour. If features still don't appear, restart the nebula_app service to force an immediate re-validation.
Key not working
Ensure the license was generated with the correct signing secret. Both the generator and the server must use the same secret key.