Getting Started with Nebula Pulse

Nebula Pulse is a Docker Swarm management platform that gives you a visual control plane for your container infrastructure. Deploy stacks, monitor services, auto-scale workloads, and integrate with your Git workflows — all from a single interface.

What is Nebula Pulse?

At its core, Nebula Pulse connects to the Docker Swarm manager on your host and exposes a rich UI and API for managing your cluster. A lightweight Nebula Agent runs as a global service on every node, collecting real-time CPU and memory metrics that the platform uses for autoscaling and observability.

🆓 Free / Community

  • Deploy & destroy stacks
  • View logs & service state
  • Manual scaling
  • Up to 5 users
  • Audit logging

⚡ Pro

  • Real-time CPU & memory metrics
  • CPU-based autoscaling
  • Multi-node management via SSH
  • Node label management
  • Backup & restore
  • Full RBAC, unlimited users

🏢 Enterprise

  • AI Assistant (YAML generation)
  • GitOps auto-deploy
  • SSO (Google, Azure AD, LDAP)
  • Full REST API access
  • Audit & compliance exports

Architecture Overview

Nebula Pulse is designed to run on a single manager node or across a multi-node swarm. The following services make up the platform:

| Service | Purpose | Required |
|---------|---------|----------|
| nebula_app | Node.js API + backend logic | Yes |
| nebula_web | Nginx serving the frontend SPA | Yes |
| nebula_db | PostgreSQL — users, audit logs, GitOps config | Yes |
| nebula_agent | Rust telemetry agent (global service on all nodes) | Pro+ |
| ai_sidecar | Python RAG assistant + Chroma vector DB | Enterprise |
| ai_ollama | Local LLM inference (Ollama) | Enterprise |

Installation

Run the automated installer on your Docker Swarm manager node. For existing installations, see Upgrading Nebula Pulse below.

bash
curl -fsSL https://get.nebula-pulse.io/install.sh | sudo bash

The installer will:

  1. Initialize Docker Swarm (if not already active)
  2. Generate a random admin password and JWT secret
  3. Deploy the full Nebula Pulse stack
  4. Print the access URL and admin credentials

First Login

  1. Open the URL printed by the installer (default port 80)
  2. Sign in with username admin and the generated password
  3. You will be prompted to change your password on first login
  4. After changing your password, the default credential is cleared from the environment for security

Security note: After you change your password, Nebula removes the ADMIN_PASSWORD environment variable from the Docker Swarm service spec so it cannot be reinjected on restart.

User Roles

| Role | Capabilities |
|------|--------------|
| Admin | Full access — users, nodes, stacks, audit logs, license management |
| DevOps | Deploy/destroy stacks, scale services, manage nodes |
| Developer | View stacks and logs, trigger GitOps syncs |
| User | Read-only — view service status and metrics |

Deployment Modes

Single Node

All services run on one machine. Ideal for development, staging, or small production workloads. Docker Swarm runs in single-node mode — you can still use the full Nebula Pulse feature set.

Multi-Node Cluster

Add worker nodes from the Swarm Topology tab using the SSH join feature (Pro+). The Nebula Agent deploys automatically to every node, enabling per-node metrics and cross-node visibility.

Air-Gapped / Offline

An offline installer package is available. License activation works via file upload — no external network access required.

Upgrading Nebula Pulse

Every Nebula Pulse release package includes an upgrade.sh script that updates the application in-place without touching your credentials, stacks, database, or license. Running install.sh on an existing installation automatically detects this and delegates to upgrade.sh — so either script works.

How It Works

The upgrade process is non-destructive by design. New application code is laid over the existing installation while critical data is stashed, restored, and verified. The Docker Swarm stack is then rolled out with docker stack deploy, which performs a rolling update on all services.

Database schema migrations run automatically on first login after an upgrade — no manual SQL required. New tables and columns are added with CREATE TABLE IF NOT EXISTS / ALTER TABLE ADD COLUMN IF NOT EXISTS, so existing data is always preserved.

What Is Preserved

ItemLocationWhy It Matters
.env/opt/nebula/.envJWT secret, DB credentials, SMTP config — regenerating would invalidate all active sessions and break integrations
license.key/opt/nebula/license.keyCommercial license — must survive upgrades
stacks//opt/nebula/stacks/All user-created stack YAML files
ai/documents//opt/nebula/ai/documents/User-uploaded documents for the RAG assistant
certs//opt/nebula/certs/TLS certificates
PostgreSQL dataDocker named volume nebula_db_freshAll users, audit logs, GitOps config, secrets metadata — stored in a Docker volume outside the filesystem

New environment variables introduced in a release (e.g. a new integration setting) are automatically appended to your .env with their default values. Existing values are never modified.
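A minimal sketch of that merge step, assuming a simple line-per-variable .env format (the paths, demo values, and the SLACK_WEBHOOK_URL variable are hypothetical; the real upgrade.sh logic may differ):

```shell
#!/usr/bin/env bash
# Append release defaults that are missing from .env, never overwriting
# existing keys. /tmp path stands in for /opt/nebula/.env.
ENV_FILE="/tmp/demo_nebula.env"
printf 'JWT_SECRET=abc123\nDB_PASSWORD=oldpass\n' > "$ENV_FILE"

# Defaults shipped with the new release (illustrative)
NEW_DEFAULTS='DB_PASSWORD=changeme
SLACK_WEBHOOK_URL=
AUDIT_RETENTION_DAYS=90'

while IFS= read -r line; do
  key="${line%%=*}"
  # Append only if the key is not already present in .env
  grep -q "^${key}=" "$ENV_FILE" || echo "$line" >> "$ENV_FILE"
done <<< "$NEW_DEFAULTS"

cat "$ENV_FILE"
```

Note that DB_PASSWORD keeps its existing value; only the two missing keys are appended.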

Upgrade Steps

Extract the new release package and run the upgrade script as root:

bash
tar -xzf nebulaPulse.tar.gz
cd nebula_dist
sudo ./upgrade.sh

Alternatively, running install.sh on an existing host detects the existing installation and automatically delegates to upgrade.sh:

bash
sudo ./install.sh   # same as upgrade.sh when /opt/nebula/.env exists

The script runs through 7 stages with clear progress output:

[1/7] Pre-flight checks — root, Docker/Swarm active, version comparison
[2/7] Pre-upgrade backup — DB dump + critical files archived to /opt/nebula_backups/
[3/7] Stash preserved data to temp dir
[4/7] Deploy new application files (backend, frontend, ai code, agent)
[5/7] Restore preserved data + merge new env vars
[6/7] Update Docker configs & images, rolling redeploy
[7/7] Health check — per-service replica status

Pre-Upgrade Backup

A backup is automatically created before any files are modified. Backups are stored at:

bash
/opt/nebula_backups/pre_upgrade_<version>_<timestamp>.tar.gz

Each backup contains:

  • .env — all environment variables
  • license.key — your license file
  • nginx.conf — web server config
  • stacks/ — all stack YAML files
  • database_pre_upgrade.sql — full PostgreSQL dump

Backups are not automatically pruned. Retain backups taken before major version upgrades, and consider moving them off-host first.
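Since Nebula does no pruning itself, a cron-able cleanup job can remove old archives. A sketch, with /tmp paths standing in for /opt/nebula_backups:

```shell
#!/usr/bin/env bash
# Prune pre-upgrade archives older than 30 days (illustrative paths).
BACKUP_DIR="/tmp/demo_nebula_backups"
mkdir -p "$BACKUP_DIR"
touch -d '40 days ago' "$BACKUP_DIR/pre_upgrade_3.9.0_old.tar.gz"
touch "$BACKUP_DIR/pre_upgrade_4.1.0_new.tar.gz"

# Delete archives whose modification time is older than 30 days
find "$BACKUP_DIR" -name 'pre_upgrade_*.tar.gz' -mtime +30 -delete

ls "$BACKUP_DIR"
```

Only the recent archive survives; schedule something like this from cron if disk space on the manager node is a concern.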

Rollback

If an upgrade does not go as expected, restore from the pre-upgrade backup:

1. Restore application files

bash
# Extract the backup
cd /opt/nebula_backups
tar -xzf pre_upgrade_4.0.0_20260412_120000.tar.gz

# Restore .env and stacks
cp pre_upgrade_4.0.0_20260412_120000/.env /opt/nebula/.env
cp -r pre_upgrade_4.0.0_20260412_120000/stacks /opt/nebula/

2. Restore the previous release package

Re-run upgrade.sh from the previous release package to roll the application files back, then redeploy the stack.

3. Restore the database (if needed)

bash
DB=$(docker ps -q -f name=nebula_db | head -n 1)
cat pre_upgrade_4.0.0_20260412_120000/database_pre_upgrade.sql \
  | docker exec -i $DB psql -U nebula nebula_db

DB rollback is rarely needed. Schema migrations only add columns or tables — they never drop existing data. A DB restore is only necessary if a migration introduced data that must be reverted.

Automation / CI

Pass --yes to skip the interactive confirmation prompt for use in automated pipelines:

bash
sudo ./upgrade.sh --yes

The script exits with code 0 on success and non-zero on any failure. All output is written to stdout/stderr, making it compatible with standard CI log capture.

Upgrade summary output

On completion, upgrade.sh prints a summary including the dashboard URL, the backup archive path, and the version transition:

text
════════════════════════════════════════════════════
✅ UPGRADE COMPLETE
════════════════════════════════════════════════════

   Dashboard  : http://192.168.1.132:80
   Backup     : /opt/nebula_backups/pre_upgrade_4.0.0_20260412_120000.tar.gz
   Upgraded   : 4.0.0 → 4.1.0

Features & Tiers

Nebula Pulse ships with three licensing tiers. All tiers include core stack management and audit logging. Advanced metrics, autoscaling, GitOps, and AI capabilities are unlocked progressively.

Free / Community

No license key required. The Free tier is fully functional for individuals and small teams getting started with Docker Swarm management.

Included features

  • Deploy and destroy stacks from YAML
  • Service state monitoring (Running / Degraded / Down)
  • Replica count (actual vs desired)
  • Node placement visibility
  • Manual scaling with +/− buttons
  • Container log viewer
  • Stack configuration viewer
  • Audit logging
  • Up to 5 user accounts

Limitations

  • No real-time CPU / memory metrics (Docker Swarm API only)
  • No autoscaling
  • No multi-node SSH join/remove
  • No backup & restore
  • No SSO or identity provider integration

Pro

Unlocks deep metrics from the Nebula Agent running on each node, autoscaling, and multi-node cluster management.

Everything in Free, plus:

Deep Metrics (Nebula Agent)

| Metric | Source | Example |
|--------|--------|---------|
| vCPU % | Kernel cgroups via agent | 45.2% |
| RAM | Memory cgroups via agent | 256 MiB |

Autoscaling Labels

yaml
deploy:
  labels:
    nebula.scale.cpu: "70"   # Scale up when CPU exceeds 70%
    nebula.scale.min: "2"    # Always keep at least 2 replicas
    nebula.scale.max: "10"   # Never exceed 10 replicas

  • Node management — add nodes via SSH, drain and remove nodes
  • Node label management for placement constraints
  • File browser for container filesystem access
  • One-click backup download (ZIP: stacks, configs, license)
  • Full Role-Based Access Control (RBAC) with unlimited users

Enterprise

Full platform capabilities, including AI-powered assistance, GitOps integration, and enterprise identity providers.

Everything in Pro, plus:

  • AI Assistant — Natural language Docker Compose generation, RAG-powered documentation queries, troubleshooting help
  • GitOps — Auto-deploy on git push from GitHub, GitLab, or Bitbucket
  • SSO — Google Workspace, Azure AD / Entra ID, LDAP / Active Directory via Dex OIDC
  • REST API — Full programmatic access for external integrations
  • Audit exports — CSV and JSON export for compliance reporting

Scaling Protection

Nebula prevents accidental scaling of stateful services. If a service name matches a known stateful pattern, a warning is shown before scaling proceeds:

Protected Patterns
postgres, mysql, mariadb, mongodb
redis, elasticsearch, cassandra
rabbitmq, kafka, zookeeper
Any service with _db or -db in its name
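The check can be approximated with a simple pattern match (the real matching logic inside Nebula may differ):

```shell
#!/usr/bin/env bash
# Approximation of the stateful-service check described above.
is_protected() {
  case "$1" in
    *postgres*|*mysql*|*mariadb*|*mongodb*) return 0 ;;
    *redis*|*elasticsearch*|*cassandra*)    return 0 ;;
    *rabbitmq*|*kafka*|*zookeeper*)         return 0 ;;
    *_db*|*-db*)                            return 0 ;;
    *)                                      return 1 ;;
  esac
}

is_protected "shop_postgres" && echo "warn before scaling"
is_protected "frontend"      || echo "scale freely"
```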

License Enforcement

| Days Remaining | UI Indicator | Feature Access |
|----------------|--------------|----------------|
| > 30 days | None | Full |
| 10–30 days | Yellow warning | Full |
| 1–10 days | Orange warning | Full |
| 0–7 days (Grace) | Red banner | Full (grace period) |
| Expired | Lockout banner | Reverts to Free |

Warning thresholds differ by billing period: 30 days for yearly subscriptions, 10 days for monthly.

Stack Management

Nebula Pulse manages Docker Swarm stacks with automatic directory organisation, config injection, network creation, and bind-mount volume setup. Each stack gets its own directory under /opt/nebula/stacks/.

Directory Structure

text
stacks/
  rabbitmq/
    docker-compose.yml       # Stack YAML
    rabbitmq-config          # Config file saved to disk
    rabbitmq-definitions     # Config file saved to disk
  nginx/
    docker-compose.yml
    html/                    # Auto-created bind mount directory
  kafka/
    docker-compose.yml

Legacy flat stack files (stacks/nginx.yml) are automatically migrated to the directory structure on platform startup.
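The migration can be pictured with the following sketch, assuming each stacks/<name>.yml becomes stacks/<name>/docker-compose.yml (/tmp paths stand in for /opt/nebula/stacks/; the actual platform code may differ):

```shell
#!/usr/bin/env bash
# Migrate flat stack files into per-stack directories.
STACKS="/tmp/demo_stacks"
rm -rf "$STACKS" && mkdir -p "$STACKS"
echo "version: '3.8'" > "$STACKS/nginx.yml"   # a legacy flat stack file

for f in "$STACKS"/*.yml; do
  [ -e "$f" ] || continue                 # no flat files left: nothing to do
  name="$(basename "$f" .yml)"
  mkdir -p "$STACKS/$name"
  mv "$f" "$STACKS/$name/docker-compose.yml"
done

ls "$STACKS/nginx"                        # docker-compose.yml
```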

Deploying a Stack

Via the UI

  1. Click STACKS → + Deploy Stack
  2. Enter a stack name — alphanumeric, dashes, and underscores only. Names cannot start with nebula.
  3. Paste or type your Docker Compose YAML. Enterprise users can use AI Autocraft to generate YAML from a plain-English description.
  4. If the YAML references Docker configs, input fields appear automatically for each config file.
  5. Click EXECUTE to deploy.

What happens during deployment

  1. Config creation — Docker configs are saved to disk and registered in Swarm
  2. Network creation — External overlay networks are pre-created
  3. Volume directories — Local bind-mount paths under /opt/nebula/stacks/ are auto-created
  4. YAML processing — The configs: top-level block is injected with external: true
  5. Stack deploy — docker stack deploy -c <yaml> <name> is executed
  6. Audit log — Success or failure is recorded with full deployment details
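For example, after step 4 a stack referencing two configs would gain a top-level block like this (config names are illustrative):

```yaml
configs:
  rabbitmq-config:
    external: true
  rabbitmq-definitions:
    external: true
```

The external: true flag tells docker stack deploy to use the configs Nebula already registered in Swarm rather than creating new ones.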

Via the API

http
POST /api/stack/deploy
Content-Type: application/json

{
  "name": "mystack",
  "yaml": "version: '3.8'\nservices:\n  web:\n    image: nginx:alpine",
  "configs": {
    "config-name": "file content here..."
  }
}
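A minimal client sketch for this endpoint. NEBULA_URL and TOKEN are hypothetical placeholders, and the request itself is commented out so the snippet runs without a live server:

```shell
#!/usr/bin/env bash
# Build and sanity-check a deploy payload for POST /api/stack/deploy.
PAYLOAD=$(cat <<'EOF'
{
  "name": "mystack",
  "yaml": "version: '3.8'\nservices:\n  web:\n    image: nginx:alpine",
  "configs": {}
}
EOF
)

# curl -X POST "$NEBULA_URL/api/stack/deploy" \
#   -H "Authorization: Bearer $TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"

# Validate the payload is well-formed JSON before sending
echo "$PAYLOAD" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid")'
```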

Destroying a Stack

  1. Click STACKS → Destroy Stack
  2. Select the stack from the dropdown
  3. Confirm the deletion

On destroy, Nebula:

  • Removes the stack via docker stack rm
  • Waits 3 seconds for services to drain, then removes Docker configs
  • Deletes the stack directory from disk
  • Writes an audit log entry

Placement Constraints

Pin services to specific nodes using label constraints:

yaml
deploy:
  placement:
    constraints:
      - node.labels.rabbitmq == 1

Use Node Management to assign labels to nodes before deploying stacks with placement constraints.

Stack APIs

| Endpoint | Method | Description |
|----------|--------|-------------|
| /api/stack/deploy | POST | Deploy a stack with optional configs |
| /api/stack/destroy | POST | Destroy a stack and clean up configs |
| /api/stacks/list | GET | List all deployed stacks |
| /api/stack/:name/yaml | GET | Get the YAML for a stack |
| /api/stack/:name/configs | GET | Get config references for a stack |
| /api/stack/configs/parse | POST | Parse YAML to detect config references |

Troubleshooting

"No suitable node" error

The stack has placement constraints that no nodes satisfy. Use Node Label Management to add the required labels to your nodes.

Config files not appearing in deploy modal

Make sure the YAML has a configs: section at the service level. Config detection is automatic when you type or paste YAML.

Stack failing after config changes

Docker Swarm configs are immutable. When redeploying, Nebula removes the old config and creates a new one. The 3-second drain delay handles in-flight requests.

Bind mount directory not created

Only directories under /opt/nebula/stacks/ are auto-created. For other paths, create the directory manually before deploying.

Autoscaling

Nebula Pulse automatically scales Docker Swarm services horizontally based on CPU utilisation. The autoscaler evaluates each enabled service every 15 seconds and adjusts replicas within the bounds you define.

Autoscaling Labels

Enable autoscaling by adding three labels to the deploy.labels section of your stack YAML:

| Label | Type | Description |
|-------|------|-------------|
| nebula.scale.cpu | String (0–100) | CPU % threshold to trigger scale-up |
| nebula.scale.min | String (≥ 1) | Minimum replica count (floor) |
| nebula.scale.max | String | Maximum replica count (ceiling) |

Labels must be strings — use quotes: "70" not 70. Labels must be in the deploy.labels section, not the top-level service labels.

yaml
version: '3.8'
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 1
      labels:
        nebula.scale.cpu: "70"   # Scale up when CPU > 70%
        nebula.scale.min: "1"    # Keep at least 1 replica
        nebula.scale.max: "10"   # Never exceed 10 replicas
      resources:
        limits:
          cpus: '0.5'
          memory: 256M

Scale Behavior

Scale Up

  • Triggers when average CPU exceeds nebula.scale.cpu
  • Adds replicas gradually (+1 or +2 at a time)
  • Never exceeds nebula.scale.max
  • Check interval: 15 seconds

Scale Down

  • Triggers when average CPU drops below 50% of the target threshold
  • Example: target = 70% → scale down kicks in below 35%
  • Removes replicas gradually to avoid disruption
  • Never goes below nebula.scale.min

CPU Limits and Scaling Speed

CPU percentage is calculated relative to the service's resources.limits.cpus value. Lower CPU limits make percentages rise faster under load, which causes the autoscaler to react more quickly.

yaml
resources:
  limits:
    cpus: '0.25'    # Lower limit = faster CPU % rise = faster scaling
    memory: 128M
  reservations:
    cpus: '0.1'
    memory: 64M
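To see why, compare the percentage the autoscaler would compute for the same absolute usage of 0.25 cores under different limits:

```shell
#!/usr/bin/env bash
# The same absolute usage (0.25 cores) reads as a very different percentage
# depending on resources.limits.cpus, which is the value the autoscaler sees.
awk 'BEGIN {
  usage = 0.25                                          # cores actually consumed
  printf "limit 1.00 -> %.0f%%\n", usage / 1.00 * 100   # 25%, far below a 70% threshold
  printf "limit 0.50 -> %.0f%%\n", usage / 0.50 * 100   # 50%, still below threshold
  printf "limit 0.25 -> %.0f%%\n", usage / 0.25 * 100   # 100%, triggers scale-up
}'
```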

Examples

Stateless Web Application

yaml
version: '3.8'
services:
  app:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 2
      labels:
        nebula.scale.cpu: "60"
        nebula.scale.min: "2"
        nebula.scale.max: "8"
      resources:
        limits:
          cpus: '0.5'
          memory: 256M
      restart_policy:
        condition: on-failure
    networks:
      - app_net

networks:
  app_net:
    driver: overlay

High-Availability API Service

yaml
version: '3.8'
services:
  api:
    image: myapp/api:latest
    ports:
      - "3000:3000"
    deploy:
      replicas: 3
      labels:
        nebula.scale.cpu: "75"
        nebula.scale.min: "3"
        nebula.scale.max: "15"
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M
    networks:
      - backend

networks:
  backend:
    driver: overlay

Recommended Thresholds by Service Type

| Service Type | CPU Threshold | Notes |
|--------------|---------------|-------|
| Web / frontend servers | 60–70% | Latency sensitive |
| API / microservices | 70–75% | General purpose |
| Databases | 80–85% | Scale conservatively; stateful |
| Background workers | 75–80% | Latency tolerant |

Testing Autoscaling

bash
# Using Apache Bench (100k requests, 50 concurrent, 60 seconds)
ab -n 100000 -c 50 -t 60 http://your-service:port/

# Using hey (simpler syntax)
hey -z 60s -c 50 http://your-service:port/

# Monitor replicas in real time
watch docker service ls

Troubleshooting

Service not scaling

  1. Confirm labels are in the deploy.labels section (not top-level labels)
  2. Check that resources.limits.cpus is set on the service
  3. Verify the service is actually under load — check with docker stats
  4. Ensure nebula.scale.min < nebula.scale.max

Scaling too aggressively

  • Increase the CPU threshold
  • Reduce nebula.scale.max
  • Increase CPU resource limits so the percentage rises more slowly

Scaling too slowly

  • Lower the CPU threshold
  • Decrease CPU limits so percentage rises faster
  • Increase the initial replica count

GitOps

Nebula Enterprise's GitOps integration automatically deploys updated stack definitions when you push to a configured Git branch. You maintain your infrastructure as code and Nebula handles the deployment.

How GitOps Works

1 — Developer pushes to main branch
2 — GitHub / GitLab sends webhook to Nebula
3 — Nebula verifies signature & clones repository
4 — Stack YAML files are deployed to Swarm
5 — Services updated automatically
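Step 3 can be sketched with openssl: recompute the HMAC-SHA256 of the raw request body using the shared secret and compare it to the signature header. GitHub sends the signature in the X-Hub-Signature-256 header as "sha256=<hex>"; the secret and body below are illustrative:

```shell
#!/usr/bin/env bash
# Recompute a webhook body's HMAC-SHA256 and compare to the header value.
SECRET="s3cr3t"
BODY='{"ref":"refs/heads/main"}'

expected="sha256=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')"

# In a real handler the header comes from the HTTP request; here we reuse
# the computed value so the demo verifies successfully.
header="$expected"

if [ "$header" = "$expected" ]; then
  echo "signature ok: proceed with clone and deploy"
else
  echo "signature mismatch: reject" >&2
fi
```

A constant-time comparison should be used in production code; plain string equality is shown only for clarity.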

Requirements

  • Nebula Enterprise license
  • Git repository (GitHub, GitLab, or Bitbucket)
  • Repository accessible over HTTPS from your Nebula host
  • Nebula accessible from the internet (or from your Git provider's IP range)

Setup

Step 1 — Add Repository in Nebula

  1. Click GITOPS in the admin panel
  2. Fill in the repository details: Name, Provider, Repository URL, Branch, and Stack Path
  3. Click ADD REPOSITORY
  4. Copy the generated Webhook URL and Secret for the next step

Configuring Webhook Providers

GitHub

  1. Go to your repo → Settings → Webhooks → Add webhook
  2. Set Payload URL to the Nebula webhook URL
  3. Set Content type to application/json
  4. Paste the Nebula-generated Secret
  5. Select Just the push event and click Add webhook

GitLab

  1. Go to your repo → Settings → Webhooks
  2. Paste the webhook URL and Secret token from Nebula
  3. Check Push events and click Add webhook

Repository Structure

Organise your repository with stack files under the configured Stack Path (default: /stacks):

text
/stacks
  ├── frontend.yml
  ├── backend.yml
  ├── database.yml
  └── monitoring.yml

# Or by environment:
/stacks
  ├── production/
  │   ├── app.yml
  │   └── db.yml
  └── staging/
      ├── app.yml
      └── db.yml

Best Practices

Use environment branches

text
main      → Production deployments
staging   → Staging environment
develop   → Development testing

Configure a separate Nebula GitOps repository entry for each branch.

Pin image versions

yaml
services:
  api:
    image: myorg/api:v1.2.3    # ✅ Specific version
    # NOT: image: myorg/api:latest  # ❌ Unpredictable

Use Docker secrets for sensitive values

yaml
services:
  app:
    secrets:
      - db_password

secrets:
  db_password:
    external: true    # Created separately, never committed to Git

Validate before pushing

bash
# Validate locally before pushing
docker-compose -f stacks/app.yml config

# GitHub Actions validation workflow
# .github/workflows/validate.yml
name: Validate Stacks
on:
  push:
    paths: ['stacks/**']
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Validate all stack files
        run: |
          for f in stacks/*.yml; do
            docker-compose -f "$f" config > /dev/null && echo "✅ $f"
          done

Troubleshooting

Webhook not triggering

  1. Verify the webhook URL is correct in your Git provider
  2. Check the secret matches exactly — no trailing spaces
  3. Ensure Nebula is accessible from the internet
  4. Review webhook delivery logs in your Git provider

Deployment fails after push

  1. Check Nebula app logs: click LOGS on the nebula_app service
  2. Verify YAML syntax is valid (docker-compose config)
  3. Ensure container images are accessible from all swarm nodes
  4. Confirm the stack path in Nebula matches your repository structure

Signature verification failed

  • GitHub — Regenerate the webhook secret in Nebula and update it in GitHub
  • GitLab — Ensure the X-Gitlab-Token header matches the Nebula secret exactly

Security Considerations

  • All webhooks use cryptographic signature verification (HMAC-SHA256 for GitHub)
  • Webhook secrets are generated using cryptographically secure random bytes
  • Use HTTPS URLs for repository access; SSH is not yet supported
  • For private repositories, embed an access token in the URL: https://<token>@github.com/org/repo.git
  • Consider IP allowlisting your webhook endpoint
  • Rotate webhook secrets periodically

Node Management

Pro and Enterprise users can manage their Docker Swarm topology directly from Nebula Pulse — adding worker nodes via SSH, assigning placement labels, and draining or removing nodes from the cluster.

Swarm Topology View

The Swarm Topology table shows all nodes in your cluster with their current status, role, IP address, and assigned labels. Each node displays real-time CPU and memory usage collected by the Nebula Agent.

Labels & Placement Constraints

Docker Swarm node labels are key-value pairs that control which nodes run specific services. Nebula provides a UI for managing labels without touching the CLI.

Labels are set on nodes (via Nebula's Node Management UI), while autoscaling labels are set on services (in your stack YAML's deploy.labels section).

Example placement constraint

yaml
deploy:
  placement:
    constraints:
      - node.labels.rabbitmq == 1   # Only run on nodes with this label

Common label patterns

| Label | Purpose |
|-------|---------|
| rabbitmq=1, rabbitmq=2 | Pin specific RabbitMQ nodes to specific hosts |
| postgres=primary | Pin the primary database to a dedicated host |
| zone=us-east-1a | Availability zone awareness |
| tier=frontend | Group nodes by role |
| env=production | Environment isolation |

Label format rules

  • Keys and values: alphanumeric with dots (.), dashes (-), or underscores (_)
  • Maximum 128 characters per key or value
  • Examples: rabbitmq=1, app.role=database, zone=us-east
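A validator approximating these rules; the server-side check in Nebula may differ. The equivalent Docker CLI operation is docker node update --label-add key=value <node>:

```shell
#!/usr/bin/env bash
# Approximate check: key=value, alphanumeric plus . - _, max 128 chars each.
valid_label() {
  case "$1" in *=*) ;; *) return 1 ;; esac       # must contain '='
  local key="${1%%=*}" val="${1#*=}"
  [ ${#key} -le 128 ] && [ ${#val} -le 128 ] || return 1
  printf '%s' "$key" | grep -Eq '^[A-Za-z0-9._-]+$' || return 1
  printf '%s' "$val" | grep -Eq '^[A-Za-z0-9._-]*$'
}

valid_label "rabbitmq=1"   && echo "rabbitmq=1: ok"
valid_label "zone=us east" || echo "zone=us east: rejected (space in value)"
```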

Managing Labels via UI

Adding a label (Admin only)

  1. Go to the Swarm Topology table
  2. Find the target node and click + TAG in the Tags column
  3. Type the label in key=value format (e.g. rabbitmq=1)
  4. Press Enter or click ADD — the label is applied immediately

Removing a label (Admin only)

  1. Find the label badge on the node
  2. Click the red × on the badge and confirm

Automatic Labels

When a node joins the swarm via Nebula's SSH join feature, it automatically receives an ip_address label set to its IP address.

API Reference

| Endpoint | Method | Description |
|----------|--------|-------------|
| /api/nodes/:id/labels | POST | Add or update labels on a node |
| /api/nodes/:id/labels/:key | DELETE | Remove a label from a node |
| /api/nodes/list | GET | List all nodes with current labels |

http
POST /api/nodes/abc123/labels
Authorization: Bearer <jwt>
Content-Type: application/json

{ "labels": { "rabbitmq": "1", "zone": "us-east" } }

→ { "success": true, "message": "Labels updated" }

Audit Logging

All label operations are recorded in the audit trail:

  • NODE_LABEL_ADD — Label added or updated (includes node ID and key/value)
  • NODE_LABEL_REMOVE — Label removed (includes node ID and key)

Docker Configs

Nebula Pulse supports Docker Swarm configs — a mechanism for injecting configuration files into containers at deploy time. Applications like RabbitMQ, HAProxy, Nginx, and Kafka often require configuration files that must exist before services start.

How It Works

Automatic config detection

When you paste YAML into the Deploy Stack modal, Nebula automatically parses it for Docker config references. If any service contains a configs: section, the UI shows input fields for each config file — one textarea per config.

Deploy flow with configs

  1. Open the Deploy Stack modal
  2. Enter a stack name and paste your YAML
  3. A purple CONFIG FILES section appears listing each detected config with its target path
  4. Paste the content for each config file into its textarea
  5. Click DEPLOY

Behind the scenes

  1. Config content is saved to disk at /opt/nebula/stacks/<stackname>/
  2. Each config is registered in Docker Swarm via docker config create
  3. The YAML is modified to add a top-level configs: block with external: true
  4. The stack is deployed with docker stack deploy

On stack destroy

  1. Nebula reads the stack YAML to identify config references
  2. Waits 3 seconds for services to drain
  3. Removes each config with docker config rm
  4. Deletes the stack directory and config files from disk

RabbitMQ Cluster Example

Stack YAML

yaml
version: "3.8"
services:
  rabbitmq1:
    image: rabbitmq:3.13-management
    hostname: rabbitmq1
    configs:
      - source: rabbitmq-config
        target: /etc/rabbitmq/rabbitmq.conf
      - source: rabbitmq-definitions
        target: /etc/rabbitmq/definitions.json
    networks:
      - rmq_net
    deploy:
      placement:
        constraints:
          - node.labels.rabbitmq == 1

networks:
  rmq_net:
    driver: overlay

rabbitmq-config content

ini
cluster_formation.peer_discovery_backend = classic_config
cluster_formation.classic_config.nodes.1 = rabbit@rabbitmq1
cluster_formation.classic_config.nodes.2 = rabbit@rabbitmq2
cluster_formation.classic_config.nodes.3 = rabbit@rabbitmq3
loopback_users.guest = false

Redeployment

Docker Swarm configs are immutable. When redeploying a stack that already has configs, Nebula removes the old configs first, then creates new ones with the updated content before redeploying the stack.

API Reference

| Endpoint | Method | Description |
|----------|--------|-------------|
| /api/stack/configs/parse | POST | Parse YAML to detect config references |
| /api/stack/deploy | POST | Deploy stack with optional configs object |
| /api/stack/:name/configs | GET | Get configs associated with a deployed stack |

http
POST /api/stack/configs/parse
Content-Type: application/json

{ "yaml": "version: '3.8'\nservices:\n  rabbitmq:\n    configs:\n      - source: my-config\n        target: /etc/app/config.conf" }

→ {
  "success": true,
  "configs": [{ "name": "my-config", "target": "/etc/app/config.conf" }]
}

Audit Logging

Nebula Pulse records a comprehensive audit trail of all user actions and system events. Audit logs are available to admin users across all license tiers and include automatic 90-day retention with configurable cleanup.

Event Types

Authentication Events

| Action | Description |
|--------|-------------|
| LOGIN_SUCCESS | User successfully authenticated |
| LOGIN_FAILURE | Failed login attempt (wrong credentials) |

User Management Events

| Action | Description |
|--------|-------------|
| USER_CREATE | New user account created |
| USER_DELETE | User account deleted |
| PASSWORD_CHANGE | User changed their own password |
| PASSWORD_RESET | Admin reset a user's password |

Stack & Service Events

| Action | Description |
|--------|-------------|
| STACK_DEPLOY | Stack deployed — success or failure both logged |
| STACK_DESTROY | Stack destroyed |
| SERVICE_SCALE | Service scaled up or down |
| SERVICE_RESTART | Service force-restarted |

Node Events

| Action | Description |
|--------|-------------|
| NODE_JOIN | New node joined the swarm via SSH |
| NODE_REMOVE | Node drained and removed from swarm |
| NODE_LABEL_ADD | Label added or updated on a node |
| NODE_LABEL_REMOVE | Label removed from a node |

GitOps Events (Enterprise)

| Action | Description |
|--------|-------------|
| GITOPS_REPO_ADD | Git repository added for GitOps |
| GITOPS_REPO_DELETE | Git repository removed |
| GITOPS_SYNC | Manual or webhook-triggered sync |

System Events

| Action | Description |
|--------|-------------|
| LICENSE_ACTIVATE | License key activated |
| BACKUP_DOWNLOAD | Backup ZIP downloaded |
| FILE_UPLOAD | File uploaded to a container |

Log Entry Fields

Every audit log entry contains:

| Field | Description |
|-------|-------------|
| timestamp | When the event occurred (ISO 8601) |
| username | Who performed the action |
| action | Event type code (e.g. STACK_DEPLOY) |
| category | Event category (AUTH, STACK, NODE, etc.) |
| target_type | What was affected (stack, node, user, etc.) |
| target_name | Name of the affected resource |
| ip_address | Client IP (forwarded through the Nginx proxy) |
| user_agent | Browser / client user agent string |
| status | success or failure |
| details | Additional context as JSON (error messages, etc.) |
| session_id | JWT session identifier |
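An illustrative entry, with all values invented for the example:

```json
{
  "timestamp": "2026-04-12T12:00:00.000Z",
  "username": "admin",
  "action": "STACK_DEPLOY",
  "category": "STACK",
  "target_type": "stack",
  "target_name": "mystack",
  "ip_address": "192.168.1.50",
  "user_agent": "Mozilla/5.0",
  "status": "success",
  "details": { "services": 2 },
  "session_id": "a1b2c3"
}
```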

API & Export

Query audit logs

http
GET /api/audit/logs?page=1&limit=50&category=STACK&status=failure
Authorization: Bearer <admin-jwt>

Available query parameters: page, limit, category, action, username, status, startDate, endDate

Get audit statistics

http
GET /api/audit/stats
→ Event counts by category, status breakdown, failed logins, active users

Export logs

http
GET /api/audit/export?format=csv&startDate=2025-01-01&endDate=2025-12-31
→ Returns CSV or JSON file download

Viewing Logs in the UI

  1. Log in as an admin user
  2. Click the admin menu dropdown (top right)
  3. Select Audit Logs
  4. Use the filters to search by category, action, user, date range, or status
  5. Click Export to download as CSV or JSON

Retention Policy

  • Default retention: 90 days
  • Configurable via AUDIT_RETENTION_DAYS environment variable
  • Automatic daily cleanup of expired entries
  • Cleanup runs on a 24-hour interval

Licensing

Nebula Pulse uses JWT-based license keys to gate features across three tiers. The license file is stored at /opt/nebula/license.key and validated on startup and every hour thereafter.

License Tiers

| Tier | License Required | Key Capabilities |
|------|------------------|------------------|
| Free | None | Stack management, manual scaling, audit logging, up to 5 users |
| Pro | JWT license key | + Real-time metrics, autoscaling, multi-node management, RBAC, backup |
| Enterprise | JWT license key | + AI assistant, GitOps, SSO, full REST API |

License Key Format

License keys are signed JWT tokens containing:

| Claim | Description |
|-------|-------------|
| tier | pro or enterprise |
| client | Company or client name |
| billing | monthly or yearly |
| features | Array of enabled features |
| exp | Expiration timestamp (Unix) |
| iss | nebula-pulse |

Activating a License

Via the UI

  1. In the Nebula Pulse UI, click ACTIVATE PRO (visible on the Free tier)
  2. Paste your license key into the input field
  3. Click ACTIVATE
  4. The page reloads with the new tier active

Via file system

bash
# Write directly to the license file
echo "your.jwt.license.key" > /opt/nebula/license.key

# Or use the generator tool (if you have access)
node /opt/nebula/tools/generate-license.js \
  --tier enterprise \
  --client "Acme Corp" \
  --expires 2027-01-01 \
  --output /opt/nebula/license.key

License Status API

http
GET /api/license/status

→ {
  "tier": "enterprise",
  "billing": "yearly",
  "client": "Acme Corp",
  "expired": false,
  "gracePeriod": false,
  "expiresAt": "2027-01-01T00:00:00.000Z",
  "daysRemaining": 300,
  "features": ["backup", "nodes", "scaling", "rbac", "ai", "sso", "audit", "api"]
}

Expiry & Grace Period

  • Licenses are validated on startup and every hour
  • On expiry, a 7-day grace period begins — all features remain active
  • After the grace period, the system reverts to Free tier

| Days Until Expiry | Warning | Billing Type |
|-------------------|---------|--------------|
| 30 days | Yellow banner | Yearly |
| 10 days | Yellow banner | Monthly |
| 1–10 days | Orange banner | Both |
| 0–7 days (expired) | Red banner — grace period active | Both |

Troubleshooting

"License key has expired"

Generate a new key with a future expiration date and activate it via the UI or by writing to /opt/nebula/license.key.

Features not appearing after activation

The license is re-read on startup and every hour. If features still don't appear, restart the nebula_app service to force an immediate re-validation.

Key not working

Ensure the license was generated with the correct signing secret. Both the generator and the server must use the same secret key.