CaddyUI v2.7.8 — Enforce zone ↔ hostname match on the DNS picker
🚀 Release Roundup

Twenty-three releases since v2.5.0. v2.7.8 closes another real gotcha: the DNS picker would let you save a hostname into the wrong zone. The amber "isn't a subdomain of…" warning was advisory only; now it's enforced both client- and server-side.

v2.7.8 ☕ ~10 min read

🧑‍🤝‍🧑 Groups — team-level resource visibility v2.7.4

This is the headline of the release cycle. If you host sites for teams or customers, Groups is the feature you've been quietly wishing for.

Until now, every resource in CaddyUI — proxy host, redirection, advanced route, certificate — belonged to exactly one user. If Alice and Bob both worked for Customer X, one of them had to "own" everything and the other had zero visibility. Or you gave them both admin, which blew up your access model.

Groups fix that cleanly. Admin creates a group, drops user accounts into it, and every member sees every other member's rows in their own dashboard. Example:

  • Admin creates a group called Customer X with members alice@customer.com and bob@customer.com.
  • Alice logs in to /proxy-hosts — she sees her own sites and every site Bob created, plus every redirection, raw route, and certificate Bob owns.
  • Each of Bob's rows shows a small amber chip Team: bob@customer.com so Alice knows at a glance which rows are hers vs. her teammate's.
  • Alice can read but not edit or delete Bob's rows. Ownership stays with the creator; the group just shares visibility.
  • Carol (not in any group) sees only her own stuff, as always.
  • Admin sees everything, no chips, exactly like before.

Where to find it: sidebar → Access Control → Groups (admin only). Create a group, check the members, save. Done — no restart, no config file, no migration.

How it works under the hood. Two new tables (groups and user_groups, with ON DELETE CASCADE on both join legs). A de-duped self-join on user_groups returns the list of teammate IDs for the viewer, which is then spliced into owner_id IN (...) on every list query. A viewer with no teammates collapses cleanly to owner_id IN (NULL) (always false in SQL), so users with no group membership see exactly what they saw before — zero risk of the rollout breaking existing installs.

📊 Visitor analytics v2.7.0 → v2.7.1

Opt-in traffic analytics, fully self-hosted, no external service. Caddy streams its access log to a TCP listener in CaddyUI — no Google Analytics, no third-party pixel, nothing leaves your box.

  • Top hosts by view count, with unique-visitor estimates.
  • 24-hour sparkline so you can spot traffic spikes at a glance.
  • Status-code pie chart — if your 5xx rate goes up, you see it.
  • Per-server filter (added in v2.7.1) — narrow to one Caddy node in a multi-server fleet.

Completely off by default. Toggle at Settings → Analytics. Until you flip that switch, zero access-log traffic is collected or even listened for. The per-server filter makes this usable for fleet admins who previously couldn't tell which node a traffic spike was hitting.

v2.7.1 bug worth calling out: if you enabled analytics in v2.7.0 and saw no data, the Settings save path was always storing 0 — the hidden-input-plus-checkbox pattern submits two values for the same key, and FormValue only returns the first (the hidden 0). Fixed in v2.7.1 (the same save path also broke smtp_skip_verify and cf_proxied silently). If you toggled analytics on and saw an empty page, try again after upgrading.

👤 Per-user ownership of every resource v2.7.2 → v2.7.4

Proxy hosts got per-user ownership early (0.0.15). This cycle extended it to the remaining three resource types and made the model much richer:

  • v2.7.2 — Certificates joined the ownership model. Users can upload their own TLS material; other users don't see it in the list or the dropdown. Admin-owned certs remain globally visible (so your wildcard still works for everyone).
  • v2.7.3 — Admin can reassign ownership on any resource's edit form. Provision a proxy host under your admin account, then hand it off to a customer in one click. Works on proxy hosts, redirections, advanced routes, and certificates.
  • v2.7.4 — Groups extend ownership into read-share visibility (covered above).

A delete on a user-owned cert that's still referenced by another user's site is blast-radius-protected: you'll get "this certificate is in use by another user's site — ask an admin to delete it" instead of a silently broken site elsewhere.


🚀 The Deploying page v2.5.2 → v2.5.6

Save a proxy host or advanced route and you now land on an interstitial page that polls your domain in real time:

  • DNS propagates — an external resolver sees the A record you just provisioned.
  • TLS cert issued — Caddy's automatic HTTPS completed the ACME dance.
  • HTTPS responds — the site is actually serving.

No more guessing whether Caddy picked up your change. The page handles the tricky real-world cases:

  • v2.5.3 — multi-server setups use the target server's public IP, not the CaddyUI host's, as the expected A record value.
  • v2.5.4 — routers without hairpin NAT (common on consumer ISPs) are checked from outside your LAN, so they stop reporting false red.
  • v2.5.5 — Cloudflare-proxied domains (the orange cloud) aren't expected to resolve to your origin IP, so the check doesn't flag them as broken.
  • v2.5.6 — advanced routes get the same treatment as proxy hosts.

🔄 Import & migration — paste a Caddyfile, keep going

One of the most common rollout questions was "I already have a Caddyfile with 40 sites in it — do I need to retype everything?" The answer is no. CaddyUI can adopt an existing Caddy config in a single paste.

  • Paste-to-import (/caddyfile-import) — pipe your existing Caddyfile into the textarea. CaddyUI runs it through Caddy's own /adapt endpoint, then extracts every site block into a proxy host or raw route (whichever fits), and pushes the result back. Snippets and globals stay in the Caddyfile — we don't touch those — so you can keep hand-authored bits next to UI-managed ones.
  • TLS automation policies come along for the ride (v2.5.8) — if any of your pasted site blocks used tls { dns cloudflare ... } (DNS-01 challenge, custom issuer, off-the-shelf ZeroSSL, etc.), those per-site automation policies are extracted alongside the routes and merged into apps.tls.automation.policies[]. Previously the routes were imported but the policies were silently dropped, which meant Caddy fell back to HTTP-01 and failed for any host that wasn't reachable on :80.
  • "Import from live Caddy" — if CaddyUI is pointed at a Caddy that already has a running config (e.g. you're adding CaddyUI to an existing deployment), the import page offers a "fetch current config" button that pulls the adapted JSON out of /config/ and lets you review before writing anything to the DB.
  • Incremental adoption — you don't have to import all at once. Pasted blocks become DB rows; unimported blocks stay in the Caddyfile. The sync path merges both so Caddy always has the union.

There's also a CADDYUI_SYNC_ON_START=1 env var that tells CaddyUI to push its DB state back to Caddy every time it boots — useful once every live site block has an equivalent in the DB, so the Caddyfile only needs to hold globals + snippets. Safe by default: it refuses to sync when the DB is empty (which would otherwise wipe Caddy's live routes on a first-boot race).
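In a compose file that could look like the following (service and volume names are illustrative; the image name is from the Upgrade section below):

```yaml
services:
  caddyui:
    image: applegater/caddyui:latest
    environment:
      CADDY_ADMIN_URL: "http://caddy:2019"
      CADDYUI_SYNC_ON_START: "1"   # push DB state to Caddy on every boot
    volumes:
      - ./data:/data
```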


🌐 Managed DNS — multi-provider expansion v2.3.0 → v2.5.11

What started as "Cloudflare only" is now a full multi-provider system. Providers currently supported: Cloudflare, Porkbun, Namecheap, GoDaddy, DigitalOcean, Hetzner.

  • Per-provider zone allow-lists (v2.4.7) — whitelist which zones CaddyUI is allowed to touch. Keeps it out of domains you don't want it managing.
  • Per-server public IPs (v2.4.0) — multi-Caddy fleets resolve the correct A record for the host that'll actually serve the site.
  • Safe surgical edits (v2.4.9) — DNS "override" only touches A / AAAA / CNAME records. Your MX, TXT, SRV, and CAA records are never touched, ever.
  • "Clear credentials" button per provider (v2.4.6) — rotate or remove secrets without hunting through Settings.
  • A record per hostname (v2.5.9) — a proxy host with three domains creates three A records, not one.
  • Alias-only changes react on edit (v2.5.10) — add an alias to an existing host and the new DNS record appears automatically. This one was a nasty silent-fail bug; see the v2.5.10 notes.
  • Smarter zone picker (v2.5.1) — ranks matching zones by specificity so api.example.com finds example.com before example.net.
  • Collision warnings (v2.4.8 + v2.5.6) — "DNS record already exists" warning with one-click "yes, overwrite" confirmation instead of a silent stomp.
  • Advanced routes (v2.5.6) — Managed DNS now works for advanced routes too, not just proxy hosts.

🏗️ Multi-server fleet management

CaddyUI isn't locked to one Caddy. One control plane can administer a whole fleet — every page has a server picker in the top bar, every resource carries a server_id, and every Managed DNS call resolves the right public IP for whichever Caddy is actually going to answer the request.

  • /servers admin page — add a Caddy by admin URL (http://10.0.0.5:2019, https://caddy.internal:2019, or a Unix socket unix:///var/run/caddy-admin.sock), optionally with HTTP Basic Auth credentials if you've fronted its admin endpoint with a reverse proxy. First-boot seeds your primary Caddy from the CADDY_ADMIN_URL env var so the single-box case is zero-config.
  • Per-server public IP (v2.4.0) — each server row has its own server_public_ip. Managed DNS provisions A records pointing at the specific Caddy that'll serve the site, not a globally-shared IP. Drag a proxy host from server A to server B and the DNS record retargets automatically.
  • Retarget-all-records — when you change a server's public IP (migration to a new box, ISP change), one click re-writes every managed DNS record for every resource bound to that server. Also iterates raw routes since v2.5.6, so advanced routes don't get left behind.
  • Health polling every 10s — the server picker shows a coloured dot per server. Three consecutive failed pings are required before a server flips to offline (v2.4.2), so one 100ms tunnel blip in a WireGuard mesh doesn't mark your whole fleet down.
  • App-response poller (v2.4.4) — in addition to TCP port probes, every proxy host is HTTPS-GET'd once a minute. Catches the "port open, app wedged" case — upstream :3000 accepts connections but your Node process is stuck in a DB retry loop. The TCP dot goes green; the App dot goes red.
  • Split-horizon DNS handled gracefully (v2.4.5) — LAN DNS often resolves your public hostnames to the LAN IP, and Caddy's internal probe can't reach itself that way. CaddyUI now treats this as "unknown" (amber) rather than "down" (red), so you don't get false-red for a healthy server.
  • Edge-only Caddy is a supported topology — you can run CaddyUI on a separate admin box and a lean Caddy-only container at the edge (no Go, no SQLite, just the official caddy:2-alpine image). The CaddyUI container talks to the edge Caddy over its admin port. Useful for keeping the admin UI off the public internet while Caddy handles L7 traffic.

Admin-endpoint safety. Caddy's admin API is unauthenticated by default — anyone who can reach :2019 can reconfigure it. CaddyUI supports three hardening paths: bind it to a private Docker network (the default compose setup), front it with a reverse proxy enforcing HTTP Basic Auth (CaddyUI will send the credentials on every call — see CADDY_ADMIN_USER / CADDY_ADMIN_PASS), or run Caddy with a Unix-socket admin listener. The /servers form accepts all three URL shapes.

🔐 Security & hardening

  • Login CAPTCHA (v2.5.0) — switchable between Cloudflare Turnstile and reCAPTCHA v3. Off by default.
  • reCAPTCHA v3 threshold fix (v2.7.0) — the score check was accepting very low scores; tightened to the documented default (0.5).
  • Three-role RBAC (v2.7.0) — admin / user / view. Admin-only pages (Users, Groups, Settings, Caddy Servers, Snapshots) return 404 for non-admins, not a rendered page with disabled buttons.
  • XSS hardening (v2.5.11) — zone-picker error paths now escape provider-supplied strings. Nobody was actively exploited; found during CodeQL review.

🔑 Login security — 2FA, CAPTCHA, and kill switches

Login is where every admin panel gets tested. CaddyUI ships three independent layers you can stack or turn off:

  • TOTP two-factor auth — any user account can enroll a TOTP authenticator (Aegis, 1Password, Authy, Google Authenticator, etc.). Enroll flow shows a QR + manual secret, verifies a current code before turning it on, and issues 10 single-use recovery codes you print and stash. Admin can force-reset 2FA on any user; admin cannot see or recover another user's TOTP secret (the secret is only surfaced during enrollment, never displayed again).
  • CAPTCHA — switchable between Cloudflare Turnstile and Google reCAPTCHA v3, off by default (v2.5.0). Applies to /login, /login/totp (the 2FA code page), and /users/new. reCAPTCHA v3 is invisible (runs on submit); Turnstile shows its managed widget inline. Switching providers preserves both sets of credentials in the DB so you can A/B without retyping.
  • reCAPTCHA v3 score threshold input (0.0 = bot, 1.0 = human). Default 0.5 per Google's recommendation. v2.7.0 tightened a bug where the server-side score check was accepting scores well below threshold; if you're on v2.6.x with reCAPTCHA enabled, upgrade.
  • Env kill switch CADDYUI_CAPTCHA_DISABLE=1 — if Cloudflare goes out and your Turnstile challenge stops responding, you'd normally be locked out of your own admin panel. Setting this env and restarting the container bypasses the challenge without touching the DB, so you can log in, fix things, and unset it. Settings page shows an amber "overridden by env" badge while it's active.
  • Three-role RBAC (v2.7.0) — admin / user / view. Admin-only pages (Users, Groups, Settings, Caddy Servers, Snapshots) return 404 for non-admins, not a rendered page with disabled buttons. Route-level enforcement in server.go's requireAdmin / requireWrite chi groups means the router is the single enforcement point — no individual handler can forget the check.
  • TOTP failure ≠ burned slot — getting CAPTCHA wrong on the 2FA form re-renders with the same pending-TOTP token, so fat-finger doesn't kick you back to the email/password screen. The 5-minute auto-expire on the pending token still caps abuse.

Role summary:

  • admin — sees everything, every server, every user. Can do everything: create/edit/delete/reassign on any resource; manage users, groups, settings, and Caddy servers.
  • user — sees own rows, global (admin-owned) rows, and group teammates' rows (read-only). Can create, edit, and delete own rows; read teammates' rows; upload own certs (v2.7.2).
  • view — same visibility as user, but read-only: no create, edit, delete, or upload anywhere.

📬 Notifications — SMTP, webhooks, cert expiry

CaddyUI can tell you when things go wrong (or right) via two independent notification channels you configure in Settings. Both channels are tested with inline buttons that land right next to the credential fields — no separate "test" tab to scroll to (v2.4.12).

  • SMTP email notifications — generic SMTP credentials (host + port + user + password + from + to list). TLS toggle, optional smtp_skip_verify for self-signed internal relays (bug fixed in v2.7.1 that was silently ignoring this toggle). Inline "Send test email" button fires a real message using the live form values so you can verify without saving.
  • Webhook notifications — plain HTTP POST to an arbitrary URL with a JSON payload. Works with Discord, Slack, Mattermost, ntfy, Gotify, Home Assistant, n8n, anything that takes a webhook. Inline "Send test webhook" button.
  • Cert-expiry alerts — a background notifier wakes up nightly and fires both channels (if configured) for any certificate — managed Let's Encrypt or user-uploaded — within 14 days of expiring. Each cert/domain only alerts once per day, so a forgotten cert doesn't spam you for a week.
  • Update-available badge — the top bar shows an amber "update available" pill when a newer CaddyUI release is on Docker Hub. Click it for release notes. Admin-only (other roles don't need to think about pulling images).

💾 Snapshots, backups & activity log

Three overlapping tools for "what did I just do, and can I undo it?"

  • Named snapshots — /snapshots (admin-only) lets you capture the full Caddy config + CaddyUI DB state under a named label. Restore with one click. Typical usage: snapshot before trying a big Caddyfile paste-import or a multi-route refactor, roll back if it goes sideways. Snapshots are stored in the DB, so they ride along with /backup.
  • One-click DB backup — the Settings page has a /backup link that downloads a consistent SQLite snapshot (via VACUUM INTO, so the WAL is flushed into a single file). Drop the file back into the /data volume to restore. Fixed in v2.7.5 to work on the scratch image — see below.
  • Activity log — /activity shows a chronological feed of every mutation across the fleet: proxy host created, redirection deleted, user password reset, group membership changed, certificate rotated, Caddy config pushed. Each entry carries the actor (user email), resource (host / route / user / group / cert / server), and the change summary. Filterable by actor, resource type, and date range. Survives container restarts — stored in the main DB, not a separate log file.
  • WAL-mode-aware exports — SQLite's VACUUM INTO is the right tool here specifically because it handles the write-ahead log coherently. A naive cp caddyui.db while the server is running would miss uncommitted pages in -wal and -shm. The backup endpoint never has that problem.

🎨 UX polish

  • Tap-to-edit rows (v2.4.10) — click anywhere on a proxy-host row to open the edit form.
  • Explicit Edit + Delete buttons (v2.5.7) — on mobile, the tap-to-edit zone and the delete button are now clearly separated so you don't delete by accident.
  • Visible pencil icons on every editable identifier (v2.4.11).
  • Sticky Actions column on wide tables (v2.4.8).
  • Branded error pages (v2.4.12) — CaddyUI-styled 404 / 502 / 503 / 504 pages, injected into Caddy automatically, consistent with the admin UI look.
  • Timezone picker in Settings (v2.4.12) — all timestamps render in your picked zone. Defaults to TZ env var, falls back to UTC.
  • Caddyfile paste-import captures TLS policies (v2.5.8) — per-site tls automation blocks (DNS challenge, issuer overrides) are preserved, not dropped.

📱 Dark mode, mobile & keyboard-friendly

Admin tools get used in the worst possible conditions — on the phone, at 2am, in bed, while something is broken. CaddyUI has been shaped around that:

  • Dark mode follows the system preference via prefers-color-scheme. No theme toggle, no extra setting — match the OS. Every page, dropdown, table, chart, and error banner is re-tinted for dark mode (including the analytics sparklines, which render their SVG fills from CSS variables so the line colour adjusts automatically).
  • Mobile cards on every list page — narrow viewports get card layouts instead of tables, with the whole card tap-to-edit. Delete stays a separate button (explicit in v2.5.7 so you don't fat-finger a delete while trying to edit). Domain pills still open in new tabs. The Actions column sticks to the right so you can always reach it on a narrow phone (v2.4.8).
  • PWA-ready — service worker, manifest, touch icons, Apple meta tags. "Add to Home Screen" gets you a standalone CaddyUI app on iOS and Android with proper app icon. No offline support (you need the network to talk to Caddy anyway) but the chrome feels native.
  • Keyboard shortcuts — / focuses the first search field on the page; Esc closes open dropdowns. Forms submit on Cmd/Ctrl+Enter from any textarea, so long raw-route JSON edits don't need a mouse reach. Cmd/Ctrl-click and middle-click on any tap-to-edit row open edit in a new tab (v2.4.10) — same convention as any other navigational UI.
  • Sticky Actions column (v2.4.8) on every admin table — pin the Edit / Delete column to the right edge, whatever the viewport width. Purple "advanced" rows get a matching purple-tinted sticky cell so the overlay doesn't look transparent.
  • In-app /docs page — built-in reference for every page, field, and DB env var, rendered from the same Go templates as the rest of the UI. No external wiki to fall out of date.
  • Timezone picker (v2.4.12) — every timestamp in the UI (cert expiry, activity feed, snapshot names, analytics windows) renders in the active zone. Picks from a common-zones list or type any IANA zone name. DB setting wins over the TZ env var wins over UTC.

💊 Reliability & bug fixes

  • Backup download works on scratch image (v2.7.5 — see below).
  • Analytics toggle actually enables analytics (v2.7.1) — described above.
  • End-to-end App health dot (v2.4.4) — catches "port open but app wedged" where Caddy thinks upstream is fine but the app itself is hung.
  • Split-horizon DNS handled gracefully (v2.4.5) — no more false-red App dot when the server can't reach its own public hostname because LAN DNS resolves it to the private IP.
  • Amber "unknown" for Docker-named backends (v2.4.3) — hostnames like myapp:3000 that only resolve inside Docker don't show red (we can't probe them from outside).
  • Health-poller WG flap fix (v2.4.2) — three consecutive failed pings required before a server flips to "offline". One tunnel blip no longer marks your whole fleet down.

🐛 Today's fix — v2.7.8

v2.7.8 — DNS picker let you save a hostname into the wrong zone

Reported live (with screenshot): a route for richardapplegate.io ended up paired with the applegatecloud.com zone in the dropdown. The form rendered the v2.5.1 amber "isn't a subdomain of …" warning, but it was advisory only — clicking Save committed the row, and the subsequent dnsCreateRecord call either failed at the provider API or quietly put the A record in the wrong zone. Symmetric on both the proxy-host form and the raw-route form, since both share the picker logic.

Two parts to the fix — front-end and back-end:

Front-end: hostname-derived zone always wins now. renderZones used to call bestZoneMatch(firstDomain, zones) only when nothing was pre-selected and the user hadn't manually picked. That left two failure paths — editing a row whose saved dns_zone_id was wrong, and a manual mis-pick that stuck around even after the user changed the hostname. v2.7.8 strips both gates: if any zone in the list is a parent of the first hostname, the dropdown snaps to it on every render. Falls back to the saved selection only when no zone matches (in which case you should set the provider to "(none)" anyway).

Back-end: new validateZoneMatchesHostname(provider, zoneID, zoneName, domains) validator wired into all four DNS-aware save handlers — createProxyHost, updateProxyHost, createRawRoute, updateRawRoute. Returns "" when no managed DNS is configured (provider or zoneID empty) or when the first hostname is a suffix of the zone name; otherwise returns a user-facing error and the row is refused. Matching is case-insensitive and trim-tolerant via a new domainInZone(fqdn, zoneName) helper that mirrors how Caddy and every DNS provider we integrate with normalises FQDNs.

Validates the first hostname only — that matches what dnsCreateRecord actually provisions records for. Extra entries in the comma-separated Domains field are SAN aliases on the same TLS cert, not separate DNS records.

Error message is actionable:

Hostname "richardapplegate.io" doesn't live in DNS zone
"applegatecloud.com". Pick a zone whose apex matches the
hostname (e.g. zone "richardapplegate.io" for hostname
"richardapplegate.io"), or change the DNS provider to
(none) if you don't want CaddyUI to manage the A record.

The amber warning text is also strengthened — from "saving will create the DNS record in the wrong place otherwise" (advisory) to "Save will be rejected — pick a zone whose apex matches the hostname, or set the provider to (none) to skip managed DNS." (actionable). Matches the new server-side reality.

Existing rows with mismatched zones aren't auto-cleaned. The validator only fires on save. Re-saving an affected row through the edit form will surface the new error and force you to pick the correct zone. Rows you don't re-save keep their current behaviour — the next dnsCreateRecord call still fails the same way it always did, but the new warning text makes it obvious why.

🐛 Previous fix — v2.7.7

v2.7.7 — Duplicate domains were silently overriding each other

Reported live: a proxy host's traffic suddenly started going to the wrong upstream. Cause: somebody (a teammate, or the same admin in a second tab) had created a second proxy claiming the same hostname. The form accepted it. Caddy's route table only keeps one match per host, so on the next sync the newer entry's reverse_proxy handler took the slot and the older row stopped working — even though both still showed up in /proxy-hosts as enabled.

Root cause: ProxyHostDomainsConflict existed in internal/models/models.go but was dead code — none of the four save handlers (createProxyHost, updateProxyHost, createRedirectionHost, updateRedirectionHost) called it. There was also a latent bug in the helper itself: the redirection-host loop didn't honour excludeID, so it would have falsely flagged an in-place redirect edit as a self-conflict the moment we tried to wire it in.

Fix: rewrote the helper as models.DomainsConflict(db, serverID, domains, excludeProxyID, excludeRedirectID) and called it from all four handlers. The two-exclude split is required because proxies and redirects live in separate tables with overlapping IDs — a single excludeID would be ambiguous, so create-path callers pass (0, 0) and update-path callers pass (p.ID, 0) for proxy edits or (0, rh.ID) for redirect edits. A no-op save (e.g. toggling SSL on the same row) doesn't trip the new guard.

Comparison is case-insensitive and trim-tolerant — Example.com , example.com, and EXAMPLE.COM all collapse to the same key, matching Caddy's host-matcher semantics. The error renders inline on the form (same path as the SSL-flag and Advanced-config validators) and the user's input is preserved:

Domain "example.com" is already in use by another proxy
or redirect on this server. Each domain can only be claimed
once — edit the existing entry or remove it before reusing
the name.

Admin view is forced for the conflict check so user A can't claim example.com after user B already claimed it via a different account. Caddy resolves routes by hostname, not by who owns the row in the UI — owner-scoped checking would let two users each create a working-looking proxy and only one would actually receive traffic.

Raw routes are intentionally not part of this round. Their host matchers live inside the route's JSON body rather than a flat column, and the postImport flow already covers raw-route deduplication. Most reports were proxy↔proxy or proxy↔redirect, so v2.7.7 scopes there; raw-route conflict checking can land separately when there's evidence it's needed.


🐛 Previous fixes — v2.7.5 & v2.7.6

v2.7.6 — /analytics server filter showed "no data" on multi-server installs

Reported live: picking a secondary Caddy in the /analytics server filter rendered every card as zero even on a server that was absolutely serving real traffic.

Root cause: the Settings → Analytics toggle used to call EnableAccessLogs on exactly one Caddy — the primary (CADDY_ADMIN_URL). Every secondary server registered via /servers kept its default "no access logs" state and never shipped a single event to the ingest listener. So when the analytics page filtered to a secondary, the access_events table had zero matching rows and every aggregate came back empty.

Fix: applyAnalyticsToggle now iterates every managed server in caddy_servers, builds a per-server client via newCaddyClient(adminURL, user, pass), and applies enable/disable to each. External-type servers (read-only observations of Caddy instances CaddyUI isn't authoritative over) are skipped. Per-server errors are collected into a single aggregated error so one unreachable Caddy doesn't short-circuit the others.

To recover: after upgrading, go to Settings → Analytics → Save once (no changes needed — just click Save). That single save runs the new loop and installs the caddyui_access logger on every secondary you've added. Events should start flowing within a minute of real traffic hitting them.

v2.7.5 — /backup returned SQLITE_CANTOPEN

backup failed: unable to open database file:
unable to open database: /tmp/caddyui-backup-YYYYMMDD-HHMMSS.db (14)

Error 14 is SQLITE_CANTOPEN. Root cause: the v2.7.x scratch-based final image has no /tmp directory, and our non-root UID can't mkdir one at the filesystem root. Silently broken since v2.7.0 — nobody hit it until someone actually tried the backup button.

Two fixes, stacked:

  • Handler-level: /backup now writes the temp file next to the live DB (filepath.Dir(s.DBPath)) instead of /tmp. Same filesystem as the source DB, same UID ownership as the existing caddyui.db / .db-wal / .db-shm. Zero chance of CANTOPEN on a directory that was already open two lines earlier.
  • Image-level: Dockerfile now pre-creates /tmp with 1777 (sticky world-writable) in the scratch final image. The backup handler doesn't need this anymore, but mime/multipart file uploads or any third-party library that calls os.TempDir() internally would otherwise silently break the same way. Catches the next instance of this bug before anyone has to file it.

Smoke-tested end-to-end: downloaded a valid 104 KB backup with 15 tables and the admin row intact, where v2.7.4 returned HTTP 500. Pull :latest and the backup button works.


📦 Upgrade

docker compose pull && docker compose up -d

Or in Portainer: Recreate → enable Re-pull image. Migrations run automatically on startup. No downtime beyond the container restart.

Multi-arch on Docker Hub (linux/amd64 + linux/arm64, SBOM + provenance attestations, scratch base, non-root UID 10001):

docker pull applegater/caddyui:v2.7.8
# or
docker pull applegater/caddyui:latest

🗺️ On the whiteboard for upcoming releases

No promises, but things being considered:

  • Per-group resource templates — spin up a new user in a group and auto-seed them with a starter proxy host.
  • Export / import for migrations — move a set of proxy hosts between servers in one click.
  • More DNS providers — Route 53, Gandi, Njalla if there's demand.
  • Audit-log retention policies — configurable rollover for the activity table.
  • Per-host rate limiting — Caddy supports it natively; the UI doesn't expose it yet.

Want something specific? Open an issue on GitHub. Every feature above started as someone going "wait, I wish it did X." Keep that loop going.


⏸️ Why the next two weeks are quiet

Twenty-three releases in three weeks is great for iteration and rough on anyone running this in production. So starting today, CaddyUI moves to a twice-a-month cadence — roughly the 1st and the 15th — with emergency fixes only in between. (v2.7.6, v2.7.7, and v2.7.8 are exactly that — three live-reported bugs, three quiet-window patches, three fixes shipped same-day. Same rule going forward.)

The next two weeks: no more updates. Go set up your groups. Migrate a customer to their own account. Turn on analytics and watch the sparkline build. Hand off some sites. Enjoy the stability.

Bug reports are still welcome — they'll land in the next window. Feature requests too.

Thanks to everyone running CaddyUI in production and reporting the rough edges. The 2.5 → 2.7 arc has been shaped almost entirely by that feedback loop. 🙏

Please enjoy. 🚀

— Richard · @X4Applegate