🐛 Release + Recipe
CaddyUI v2.5.10 — Alias domains now get DNS records, plus a self-hosted encrypted-DNS recipe
Started as a bug report: "adding an alias domain breaks with DNS_PROBE_FINISHED_BAD_SECURE_CONFIG." Ended as a complete AdGuard Home + Caddy DoH/DoT/DoQ stack with per-device client names.
April 23, 2026
v2.5.10
☕ ~8 min read
The bug
I was setting up AdGuard Home behind Caddy so my phone could use DNS-over-HTTPS at laptop.dns.richardapplegate.io. The proxy host worked on the first save. Then I added a second hostname on the same row as an alias — and Chrome started showing this on every request to the alias:
DNS_PROBE_FINISHED_BAD_SECURE_CONFIG
This site can't be reached
Checking the proxy, firewall, and Secure DNS configuration
The primary domain still worked. Only the alias broke. That's a weirdly specific failure mode, and it turned out to be a weirdly specific bug.
What v2.5.9 fixed — and what it missed
v2.5.9 (shipped an hour before v2.5.10) changed dnsCreateRecord to iterate every hostname in a proxy host's Domains list and request one A record per FQDN, instead of only the first. The create path started working correctly, and the bulk IP-retarget path (the one that fires when you change a Caddy server's public IP) got the same treatment.
The user-initiated edit path did not. In updateProxyHost the change-detection logic was still:
oldDomain := dns.FirstDomain(old.Domains)
newDomain := dns.FirstDomain(p.Domains)
domainChanged := oldDomain != newDomain
Which means: if you take an existing proxy host with a single domain, and you add an alias to it, the first domain hasn't changed — so domainChanged is false, needCreate is false, and no record ever gets provisioned for the new alias. The row in the DB looks right. Caddy is happy to serve the alias. But there's no A record at the provider, so the recursive resolver returns NXDOMAIN.
With Secure DNS enabled, Chrome surfaces that NXDOMAIN as the DNS_PROBE_FINISHED_BAD_SECURE_CONFIG screen above, blaming the secure-DNS configuration when the real problem is a missing record.
The v2.5.10 fix
The patch is six lines of logic plus a "slices" import. Compare the full DomainList() instead of just the first element:
var oldDomains []string
if old != nil {
	oldDomains = old.DomainList()
}
newDomains := p.DomainList()
domainChanged := !slices.Equal(oldDomains, newDomains)
Same change for the raw-route edit path (updateRawRoute): compare rawRouteHosts(*old) to rawRouteHosts(*rr) instead of only the first host. Any addition, removal, rename, or reorder now triggers the same delete-all-then-create-all cycle that dnsCreateRecord / dnsCreateRecordForRaw have done since v2.5.9. All three mutation paths (create, edit, IP retarget) now agree on "every hostname in the list gets an A record."
No schema change, no migration. Existing rows with a stale alias list self-heal on the next save.
🧅 The recipe this bug unblocked
Once the fix was live, I put together the whole stack the bug was originally blocking: wildcard cert via Caddy's DNS-01, AdGuard Home running on the same box, DoH proxied through Caddy, DoT/DoQ direct to AdGuard using the shared cert, per-device client names showing up correctly on both protocols.
Here's the complete thing, copy-pasteable.
1. Wildcard cert via Caddy + Cloudflare DNS-01
Dockerfile.caddy already bakes in the Cloudflare DNS provider module, so all that's needed is a Caddyfile snippet. Paste this into CaddyUI → /caddyfile-import:
*.dns.richardapplegate.io, dns.richardapplegate.io {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy http://adguardhome:8080
}
The wildcard covers every <device>.dns.richardapplegate.io I'll ever need (phone, laptop, router, TV, …) on a single cert. The apex is included as an extra SAN so the same file covers bare dns.richardapplegate.io too. Everything — DoH at /dns-query, the admin UI, device management — goes to AdGuard on port 8080; AdGuard routes internally.
Gotcha: a TLS wildcard matches exactly one label. *.dns.richardapplegate.io covers phone.dns.richardapplegate.io but not the apex dns.richardapplegate.io itself, which is why the apex has to be listed as its own SAN on the first line.
2. AdGuard Home as a Portainer stack
services:
  adguardhome:
    image: adguard/adguardhome:v0.107.56
    container_name: adguardhome
    restart: unless-stopped
    ports:
      # DNS - must be on host
      - "53:53/tcp"
      - "53:53/udp"
      # Optional: DoT / DoQ direct (not via Caddy)
      - "853:853/tcp"
      - "784:784/udp"
    environment:
      TZ: America/Los_Angeles
    volumes:
      - /mnt/1TB/adguard/work:/opt/adguardhome/work
      - /mnt/1TB/adguard/conf:/opt/adguardhome/conf
      - /mnt/1TB/caddy/caddy_data:/caddy-certs:ro
    networks:
      - caddy-and-ui_caddy_net

networks:
  caddy-and-ui_caddy_net:
    external: true
The critical piece is the read-only mount of Caddy's data volume (/caddy-certs). Caddy writes the wildcard cert there after the DNS-01 challenge completes; AdGuard reads the same files for its DoT/DoQ listener. Pin the AdGuard image tag (not :latest) so a surprise major upgrade doesn't land overnight.
The caddy-and-ui_caddy_net network name comes from Portainer — it's the <stack-name>_<network-name> form, external because the network is owned by the CaddyUI stack and we're joining it from a separate stack.
3. AdGuardHome.yaml — the TLS block that actually works
tls:
  enabled: true
  server_name: dns.richardapplegate.io
  force_https: false
  port_https: 443
  port_dns_over_tls: 853
  port_dns_over_quic: 853
  port_dnscrypt: 0
  dnscrypt_config_file: ""
  allow_unencrypted_doh: true
  certificate_chain: ""
  private_key: ""
  certificate_path: /caddy-certs/caddy/certificates/acme-v02.api.letsencrypt.org-directory/wildcard_.dns.richardapplegate.io/wildcard_.dns.richardapplegate.io.crt
  private_key_path: /caddy-certs/caddy/certificates/acme-v02.api.letsencrypt.org-directory/wildcard_.dns.richardapplegate.io/wildcard_.dns.richardapplegate.io.key
  strict_sni_check: false
Three settings matter here and are easy to miss:
- allow_unencrypted_doh: true — lets AdGuard serve DoH over plain HTTP on its admin port; Caddy does the TLS. Without this, Caddy proxies /dns-query and gets a 502 back from AdGuard.
- certificate_path / private_key_path — Caddy stores multi-subject certs under the first subject's name, with wildcards normalised to wildcard_. Both files sit in the same directory, differing only by extension.
- strict_sni_check: false — clients connect to phone.dns.richardapplegate.io:853, not to the exact server_name, and the strict check rejects those.
AdGuard reads the cert once at startup, so after each Caddy renewal (~60-day cycle) you need to restart the container. A monthly cron on the host is plenty:
0 4 1 * * docker restart adguardhome
4. Per-device ClientIDs
The whole point of a private DNS resolver is per-device filtering: "block YouTube on the kids' tablet, not on mine." AdGuard Home extracts the ClientID differently for each protocol:
| Protocol | ClientID source | URL on the device |
|---|---|---|
| DoT / DoQ | TLS SNI leftmost label | phone.dns.richardapplegate.io (port 853 for DoT, 784 for DoQ) |
| DoH | URL path suffix | https://dns.richardapplegate.io/dns-query/phone |
| DoH (alternate) | Host header leftmost label | https://phone.dns.richardapplegate.io/dns-query |
The SNI-based path is cleanest because the ClientID lives in the hostname, not in a URL suffix that Android's private-DNS field won't accept. The DoH path-suffix form is useful when a DoH client (some browsers, some apps) locks the URL format to a specific domain.
Debugging tip: if DoT works against bare dns.richardapplegate.io but every per-device name like phone.dns.richardapplegate.io is refused, re-check strict_sni_check: false. With the strict check on, AdGuard rejects any SNI that isn't exactly the configured server_name, so the ClientID hostnames never connect.
5. Routing diagram
| Request | Port | Who terminates TLS | Who answers |
|---|---|---|---|
| DoH — https://phone.dns.richardapplegate.io/dns-query | 443/tcp | Caddy | AdGuard (via reverse_proxy) |
| DoT — phone.dns.richardapplegate.io:853 | 853/tcp | AdGuard | AdGuard |
| DoQ — phone.dns.richardapplegate.io:784 | 784/udp | AdGuard | AdGuard |
| Plain DNS (LAN only) | 53/udp,tcp | n/a | AdGuard |
| Admin UI — https://dns.richardapplegate.io/ | 443/tcp | Caddy | AdGuard |
📦 Upgrade
docker pull applegater/caddyui:v2.5.10
# or
docker pull applegater/caddyui:latest
Multi-arch linux/amd64 + linux/arm64, SBOM + provenance attestations, scratch base, non-root UID 10001. No schema migration. Rows with stale alias lists self-heal on the next edit.
📦 View v2.5.10 release
🐳 Docker Hub
⭐ GitHub repo
💬 Feedback
The bug that kicked this off was mine — I hit it live while adding a second alias to the AdGuard proxy host and got the "can't be reached" screen for twenty minutes before I figured out which side of the stack was lying. If you've got a similar "X seems set up right but Y doesn't work" story, open an issue. A lot of the smaller fixes in the 2.5.x series started that way.
Thanks to everyone running CaddyUI in anger and reporting the rough edges. The 2.5.x line has been almost entirely shaped by that feedback loop. 🙏
— Richard · @X4Applegate
Richard Applegate