Tag: Final

  • ERPNext v15 Slack Notifications for Stock Movements

    If you are running ERPNext v15 and want real-time Slack notifications whenever stock movements occur, this step-by-step guide shows you exactly how to set up webhook-based alerts for Material Issues, Material Transfers, and Purchase Receipts. You will also learn how to avoid the common Jinja templating error that catches most people off guard during setup.

    This method uses ERPNext’s built-in Webhook DocType paired with Slack Incoming Webhooks. No custom app, third-party plugin, or server script is required — just a few minutes of configuration inside your ERPNext instance.


    How It Works

    ERPNext includes a built-in Webhook DocType that fires HTTP POST requests whenever a document is created, submitted, updated, or deleted. In this setup, you configure webhooks that trigger on the on_submit event for stock-related documents. When a user submits a Stock Entry or Purchase Receipt, ERPNext renders a Jinja template into a JSON payload and sends it as a POST request to a Slack Incoming Webhook URL. Slack then displays a neatly formatted message in your chosen channel, including the stock entry type, item details, quantities, and warehouse locations.

    There is one critical gotcha you need to understand before writing your Jinja templates. In ERPNext’s webhook context, the doc object behaves like a Python dictionary rather than a Frappe document object. This means that if you write doc.items in your Jinja template to access line items, Python interprets it as a call to the dictionary’s built-in .items() method — not as the child table field named “items.” The result is the error TypeError: 'builtin_function_or_method' object is not iterable, and your webhook silently fails. The fix is straightforward: use doc.get('items', []) instead, which safely retrieves the child table without any conflict.
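    The conflict is easy to reproduce outside ERPNext with a plain Python dictionary. The following is a minimal sketch (field values are made up) showing why the template fails and why the `doc.get()` pattern works:

```python
# In the webhook Jinja context, `doc` behaves like a plain dict:
doc = {
    "name": "MAT-STE-2024-00001",
    "stock_entry_type": "Material Transfer",
    "items": [{"item_code": "WIDGET-001", "qty": 5}],
}

# Attribute lookup finds the dict's built-in .items() method, not the field:
print(callable(doc.items))          # True — a method, not a list

# Iterating it without calling it is exactly what the failing template does:
try:
    for row in doc.items:           # same as {% for i in doc.items %}
        pass
except TypeError as e:
    print(e)                        # 'builtin_function_or_method' object is not iterable

# The safe access pattern retrieves the child table correctly:
for row in doc.get("items", []):
    print(row["item_code"], row["qty"])   # WIDGET-001 5
```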

    You will also need two separate webhooks because Material Issue and Material Transfer are types within the Stock Entry DocType, while Purchase Receipt is an entirely separate DocType with its own data structure.


    Step 1 — Create a Slack Incoming Webhook

    1. Go to https://api.slack.com/apps.
    2. Create a new Slack app or open an existing one.
    3. Under Features, enable Incoming Webhooks.
    4. Click Add New Webhook to Workspace and select the channel where you want stock alerts to appear.
    5. Copy the generated Webhook URL — you will need it in the next steps.

    Security note: Treat this URL like a password. Anyone who possesses it can post messages to your Slack channel. Never commit it to version control or share it in a public forum.
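    Before wiring the URL into ERPNext, you can confirm it works with a one-off curl from any machine. The URL below is a placeholder — substitute the one you copied in step 5:

```shell
# Placeholder URL — replace with your real Slack webhook URL.
SLACK_URL="https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"
PAYLOAD='{"text": "Test alert from ERPNext webhook setup"}'

# Sanity-check that the payload is valid JSON before sending it anywhere:
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload OK"

# Post it to Slack (uncomment once SLACK_URL is set to your real URL);
# a working webhook responds with the plain text "ok":
# curl -sS -X POST -H 'Content-Type: application/json' --data "$PAYLOAD" "$SLACK_URL"
```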


    Step 2 — Create the Stock Entry Webhook (Material Issue + Material Transfer)

    In ERPNext, navigate to the search bar, type Webhook, and click New to open a blank webhook form.

    Webhook Settings

    Field            Value
    Document Type    Stock Entry
    Doc Event        on_submit
    Request Method   POST
    Request URL      Your Slack webhook URL
    Enabled          Checked

    Condition

    Add a condition to ensure this webhook only fires for Material Issues and Material Transfers, and not for every Stock Entry type in the system:

    doc.stock_entry_type in ("Material Issue", "Material Transfer")

    Without this condition, you will receive Slack alerts for every stock entry type — manufacture entries, repack entries, material receipts, and more — which quickly creates notification fatigue and makes alerts meaningless.

    Request Header

    Add one header row so Slack correctly interprets the incoming data:

    Key            Value
    Content-Type   application/json

    JSON Payload

    Paste the following template into the Request Body field. Note that this template uses doc.get() throughout to avoid the doc.items dictionary conflict described earlier:

    {
      "text": "📦 *Stock Entry Submitted*\n*Type:* {{ doc.get('stock_entry_type', '—') }}\n*ID:* {{ doc.get('name', '—') }}\n*Company:* {{ doc.get('company', '—') }}\n*Posting:* {{ doc.get('posting_date', '—') }} {{ doc.get('posting_time', '') }}\n*From WH:* {{ doc.get('from_warehouse') or '—' }}\n*To WH:* {{ doc.get('to_warehouse') or '—' }}\n\n*Items:*\n{% for i in doc.get('items', []) %}• {{ i.get('item_code', '—') }} — Qty: {{ i.get('qty', 0) }} {{ i.get('uom', '') }} ({{ i.get('s_warehouse') or i.get('t_warehouse') or '—' }})\n{% endfor %}"
    }
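    If you want to sanity-check a template before saving the webhook, you can render a simplified version of it locally with the jinja2 package (which Frappe itself depends on) against a plain dict — the same object shape the webhook context provides. This is a sketch with made-up sample values, not the full payload above:

```python
import json
from jinja2 import Template

# Simplified version of the payload template (field names match Stock Entry;
# the sample values are illustrative):
template = Template(
    '{"text": "*Stock Entry Submitted*\\n'
    "*Type:* {{ doc.get('stock_entry_type', '-') }}\\n"
    "*Items:*\\n"
    "{% for i in doc.get('items', []) %}"
    "- {{ i.get('item_code', '-') }} x {{ i.get('qty', 0) }}\\n"
    '{% endfor %}"}'
)

doc = {
    "stock_entry_type": "Material Transfer",
    "items": [{"item_code": "WIDGET-001", "qty": 5}],
}

# json.loads raises if the rendered body is not valid JSON — the same
# malformed-payload failure mode Slack would otherwise reject:
payload = json.loads(template.render(doc=doc))
print(payload["text"])
```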

    Step 3 — Create a Separate Webhook for Purchase Receipt

    Purchase Receipt is a distinct DocType in ERPNext — it is not a subtype of Stock Entry. You cannot handle it by adding another condition to the Stock Entry webhook. A second, dedicated webhook is required.

    Webhook Settings

    Field            Value
    Document Type    Purchase Receipt
    Doc Event        on_submit
    Request Method   POST
    Request URL      Your Slack webhook URL
    Enabled          Checked

    Add the same Content-Type: application/json header as before. No condition is necessary unless you want to filter alerts by a specific supplier, company, or warehouse.

    JSON Payload

    {
      "text": "🧾 *Purchase Receipt Submitted*\n*ID:* {{ doc.get('name', '—') }}\n*Supplier:* {{ doc.get('supplier', '—') }}\n*Company:* {{ doc.get('company', '—') }}\n*Posting:* {{ doc.get('posting_date', '—') }} {{ doc.get('posting_time', '') }}\n\n*Items Received:*\n{% for i in doc.get('items', []) %}• {{ i.get('item_code', '—') }} — Qty: {{ i.get('qty', 0) }} {{ i.get('uom', '') }} ({{ i.get('warehouse') or '—' }})\n{% endfor %}"
    }

    Step 4 — Test the Webhooks

    Run a quick test for each document type to confirm that Slack messages are delivered correctly and contain the expected information:

    1. Create and submit a Material Transfer Stock Entry. Check your Slack channel for the alert and verify the item details are correct.
    2. Create and submit a Material Issue Stock Entry. Confirm the alert appears and the source warehouse is populated.
    3. Create and submit a Purchase Receipt. Confirm it triggers the second webhook and displays the supplier and received items.

    Troubleshooting

    If no messages appear in Slack, check the ERPNext backend logs for errors. Look specifically for Jinja rendering errors or HTTP request failures. If you are running ERPNext in Docker, use one of the following commands:

    docker compose logs -f backend

    Or, from a shell inside the backend container, tail the bench log files directly:

    tail -f /home/frappe/frappe-bench/logs/*.log

    The most common causes of failure are an incorrect or expired Slack webhook URL, a malformed JSON payload (such as unescaped newlines or missing brackets), and Jinja template errors caused by using doc.items instead of doc.get('items', []).


    Optional — Add a Clickable Link to the ERP Record

    You can make your Slack alerts even more actionable by including a direct link to the submitted document. Add the following line inside the text field of your JSON payload, replacing erp.yourdomain.com with your actual ERPNext domain:

    For Stock Entries:

    https://erp.yourdomain.com/app/stock-entry/{{ doc.get('name') }}

    For Purchase Receipts:

    https://erp.yourdomain.com/app/purchase-receipt/{{ doc.get('name') }}

    Slack automatically converts plain URLs into clickable hyperlinks, so no additional Slack Block Kit formatting is needed. Team members can jump directly from the Slack alert to the relevant ERPNext record with a single click.


    Summary

    You now have automatic, real-time Slack alerts for Material Issues, Material Transfers, and Purchase Receipts — all triggered on submission and formatted with item-level detail. Here are the key points to remember as you manage and expand these webhooks:

    • Always use doc.get('field') instead of doc.field in ERPNext webhook Jinja templates. This prevents the doc.items conflict with Python’s built-in dictionary method and eliminates the most common webhook error.
    • Material Issue and Material Transfer are subtypes within the Stock Entry DocType — use a condition filter to target only those types and avoid notification noise.
    • Purchase Receipt is a separate DocType entirely and requires its own dedicated webhook configuration.
    • Treat your Slack webhook URL as a sensitive credential — store it securely and never expose it in public repositories or documentation.
    • Add clickable ERPNext record links to your Slack messages to help your team act on alerts quickly without searching for the document manually.
  • Disable IPv6 on Linux Permanently with sysctl

    Disable IPv6 on Linux

    This guide explains how to permanently disable IPv6 on a Linux server using a persistent sysctl configuration file. The change takes effect immediately — no reboot required — and will survive future system restarts automatically.


    How It Works

    The Linux kernel exposes IPv6 behavior through a set of runtime parameters found under /proc/sys/net/ipv6/. By default, IPv6 is enabled on all network interfaces. To disable it, you need to set three specific kernel parameters to 1: one for all currently active interfaces, one for any interfaces created in the future, and one for the loopback interface.

    While you can change these parameters on the fly using the sysctl command, those changes are lost on the next reboot. To make them permanent, you write them to a configuration file inside /etc/sysctl.d/. Files in this directory are loaded automatically at boot time. Using a 99- prefix ensures your file is applied last — overriding any distribution defaults that might otherwise re-enable IPv6.


    Step 1 — Create a Dedicated sysctl Configuration File

    Start by creating a new, dedicated configuration file for disabling IPv6. Keeping this setting in its own file makes it easy to identify, modify, or remove later:

    sudo nano /etc/sysctl.d/99-disable-ipv6.conf

    Paste the following three lines into the file:

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1

    Each line targets a different scope:

    • all — disables IPv6 on every network interface currently active on the system.
    • default — ensures that any new network interfaces created after boot also have IPv6 disabled.
    • lo — disables IPv6 on the loopback interface (localhost).

    Save and close the file. In nano, press Ctrl+O to save and Ctrl+X to exit.
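    Alternatively, the file can be created non-interactively — handy for provisioning scripts — using tee with a here-document:

```shell
# Non-interactive equivalent of the nano edit above:
sudo tee /etc/sysctl.d/99-disable-ipv6.conf > /dev/null <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
EOF
```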


    Step 2 — Apply the Changes Immediately (No Reboot Needed)

    Reload all sysctl configuration files, including the one you just created, with the following command:

    sudo sysctl --system

    This reads every .conf file in /etc/sysctl.d/ (and related directories) and applies all settings to the running kernel in real time. IPv6 is disabled immediately — no reboot is required.


    Step 3 — Verify That IPv6 Is Disabled

    Check whether any network interfaces still have IPv6 addresses assigned:

    ip a | grep inet6

    If IPv6 has been fully disabled, this command will return no output at all. If you still see a ::1 address listed for the loopback interface, double-check that the net.ipv6.conf.lo.disable_ipv6 = 1 line is present and correctly formatted in your configuration file.

    You can also verify the kernel parameter value directly:

    cat /proc/sys/net/ipv6/conf/all/disable_ipv6

    A return value of 1 confirms IPv6 is disabled. If you see 0, IPv6 is still active and the sysctl configuration was not loaded correctly. In that case, re-run sudo sysctl --system and check your configuration file carefully for typos or formatting errors.
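    All three scopes can be checked in one pass; if the configuration loaded correctly, every line of output ends in = 1 (reading sysctl values does not require root):

```shell
# Print the current value of each disable_ipv6 parameter:
for scope in all default lo; do
  sysctl net.ipv6.conf.${scope}.disable_ipv6
done
```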


    Summary

    Disabling IPv6 via sysctl is the most reliable and maintainable method on Linux. By placing your settings in /etc/sysctl.d/99-disable-ipv6.conf, the configuration is modular, easy to manage, and automatically applied on every boot. This approach works across most major Linux distributions, including Ubuntu, Debian, CentOS, and Rocky Linux.

  • UFW Firewall: Allow TCP Port 8123 from Specific IP

    Firewalls are a foundational component of any secure Linux server environment. On Ubuntu and other Debian-based distributions, UFW (Uncomplicated Firewall) provides a straightforward yet powerful interface for managing iptables rules without requiring deep knowledge of low-level networking syntax. It strikes an ideal balance between simplicity and control, making it a trusted tool for both beginners and experienced system administrators.

    In this post, we will dissect the following UFW rule line by line, explain precisely what each component does, and examine the real-world scenarios in which this pattern is most appropriate:

    sudo ufw allow from 10.8.0.2 to any port 8123 proto tcp

    What Is UFW?

    UFW is a frontend for iptables — the built-in Linux packet-filtering framework — designed to make firewall configuration more accessible and less error-prone. Rather than writing complex iptables rule chains directly, UFW allows administrators to define intent-based rules using human-readable syntax.

    Under the hood, UFW translates these rules into the appropriate iptables commands and applies them to the kernel’s netfilter subsystem. This abstraction reduces the risk of misconfiguration while preserving the full power of the underlying firewall engine. UFW ships with Ubuntu by default and is also available on Debian, Linux Mint, and other Debian-derived systems.


    Breaking Down the Rule

    Let us examine each token of the command in detail and understand exactly what role it plays.

    sudo

    Firewall rules operate at the kernel level and require root-level privileges to modify. The sudo prefix temporarily elevates the current user’s privileges, allowing the command to interact with the system’s networking stack. Without sudo, UFW will refuse to execute the rule change and return a permission error.

    ufw allow

    This instructs UFW to permit any traffic that matches the conditions specified in the rest of the rule. UFW supports three primary policy actions:

    • allow — Permits the matching traffic to pass through.
    • deny — Silently drops the matching traffic without notifying the sender.
    • reject — Blocks the traffic but sends an ICMP “port unreachable” response back to the sender, informing them the connection was actively refused.

    Choosing allow here grants explicit access, overriding any default-deny policy that UFW may have in place.

    from 10.8.0.2

    This clause restricts the rule to traffic originating exclusively from the IP address 10.8.0.2. Only packets whose source IP matches this address will be evaluated against this rule; all other source addresses will fall through to the next applicable rule or the default policy.

    The address 10.8.0.2 falls within the 10.0.0.0/8 CIDR block, which is one of three private IPv4 address ranges defined by RFC 1918. These addresses are not routable on the public internet, meaning they are only reachable within internal networks or through tunneling mechanisms. Common use cases for this address range include:

    • VPN tunnels — OpenVPN, WireGuard, and similar tools frequently assign clients addresses in the 10.0.0.0/8 space. The address 10.8.0.2 is, in fact, a classic default for the first OpenVPN client.
    • Private LANs — Internal corporate or home networks using RFC 1918 addressing.
    • Container and VM networking — Docker, Vagrant, and other virtualization platforms often use this space for internal bridge networks.

    By anchoring the rule to a single private IP, you dramatically reduce the attack surface compared to opening the port to all source addresses.
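    The from clause accepts CIDR notation as well, so the identical pattern can be widened to a whole subnet — for example, to trust every client on a VPN tunnel network rather than a single peer:

```shell
# Allow the entire VPN subnet instead of one client
# (only appropriate when every tunnel client is trusted):
sudo ufw allow from 10.8.0.0/24 to any port 8123 proto tcp
```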

    to any port 8123

    This clause defines the destination of the allowed traffic:

    • to any — The destination can be any address the machine holds, on any interface. This includes loopback, Ethernet, and VPN interfaces, so the rule is not restricted to a specific NIC or IP binding.
    • port 8123 — Only traffic directed at port 8123 matches this rule. All other ports remain unaffected.

    Port 8123 is not an IANA-registered well-known port, but it is widely associated with several popular applications:

    • Home Assistant — An open-source home automation platform that listens on port 8123 by default.
    • Custom web dashboards and internal APIs — Developers frequently run lightweight HTTP services on non-standard ports such as this to avoid conflicts with system services on ports 80 and 443.
    • Development and monitoring tools — Staging environments, metrics endpoints, and administrative panels are commonly bound to ports in the 8000–9000 range.

    proto tcp

    This argument restricts the rule to TCP (Transmission Control Protocol) traffic only. Because port 8123 could theoretically receive both TCP and UDP traffic, this qualifier ensures only the TCP variant is permitted.

    Understanding why this distinction matters:

    • TCP is connection-oriented. It establishes a session via a three-way handshake (SYN, SYN-ACK, ACK), guarantees ordered delivery, and performs error correction. It is the appropriate protocol for HTTP, HTTPS, and most web-based application traffic.
    • UDP is connectionless and stateless. It offers lower latency but no delivery guarantees, making it suitable for DNS, VoIP, streaming, and gaming — but generally not for web UIs or REST APIs.

    By specifying proto tcp, you ensure that UDP traffic directed at port 8123 remains blocked even though TCP is explicitly allowed. This follows the principle of least privilege: permit only what is strictly necessary, nothing more.


    What This Rule Accomplishes

    Taken together, the rule can be read in plain language as:

    “Allow inbound TCP connections to port 8123 on this server, but only when the connection originates from the IP address 10.8.0.2. All other sources, ports, and protocols remain unaffected by this rule.”

    In practical terms, this means a client sitting behind a VPN (assigned 10.8.0.2) can reach the service on port 8123, while any external actor scanning your public IP — or even another internal host with a different IP — will find that port entirely unresponsive.
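    You can observe this difference directly with netcat. The address below is a placeholder — substitute your server's IP — and the same probe behaves differently depending on where it is run from:

```shell
# Placeholder address — substitute your server's public or VPN IP.
SERVER=203.0.113.10

# From the trusted client (10.8.0.2) this reports success; from any
# other host it times out, because unmatched packets are silently dropped.
nc -vz -w 3 "$SERVER" 8123 || echo "no response (filtered or host down)"
```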


    Why This Pattern Is a Security Best Practice

    This rule demonstrates several well-established security principles that should guide firewall design at every level of infrastructure.

    ✅ Principle of Least Privilege

    Access is granted to precisely one IP address rather than to a broad subnet or the entire internet. The rule does not over-provision. If that single trusted host is ever compromised or its access revoked, removing this one rule is all that is required to close the exposure.

    ✅ Reduced Attack Surface

    External port scanners — a first step in most automated attack pipelines — will find port 8123 completely unresponsive from their vantage point. Because UFW’s default-deny policy silently drops unmatched packets, the port does not even send a RST (reset) response to unauthorized sources, revealing nothing about the service running behind it.

    ✅ Protocol Specificity

    Explicitly declaring proto tcp prevents inadvertent exposure of UDP-based services that might be bound to the same port number. Many misconfigured firewalls leave UDP open simply because the administrator forgot to restrict protocol scope.

    ✅ Readability and Maintainability

    The rule reads almost like plain English, which means it is self-documenting. A future administrator reviewing your firewall policy can immediately understand the intent without needing to decode opaque iptables chains. This clarity reduces the likelihood of accidental rule removal or incorrect modification during maintenance windows.


    Verifying the Rule

    After applying the rule, you should always verify it was registered correctly before relying on it. Run the following command:

    sudo ufw status verbose

    The verbose flag provides additional detail, including the rule direction (IN/OUT/FWD) and the full source-destination tuple. You should see an entry similar to the following in the output:

    8123/tcp                   ALLOW IN    10.8.0.2

    If the rule does not appear, confirm that UFW itself is active by running sudo ufw status. If the status reads inactive, enable it with sudo ufw enable — but only after ensuring you have an existing SSH allow rule in place to avoid locking yourself out of the server.
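    When the rule is no longer needed, it can be revoked just as precisely — either by repeating the exact rule with a delete prefix, or by index:

```shell
# Delete by repeating the original rule verbatim:
sudo ufw delete allow from 10.8.0.2 to any port 8123 proto tcp

# Or list rules with index numbers and delete by position:
sudo ufw status numbered
sudo ufw delete 3   # example index — check the numbered output first
```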


    Common Use Cases

    IP-restricted, port-specific firewall rules like this one appear frequently in well-architected infrastructure. Representative scenarios include:

    • VPN-gated access to internal services — A Home Assistant instance, Grafana dashboard, or Prometheus metrics endpoint is made accessible only to authenticated VPN clients, keeping the service completely invisible to the public internet.
    • Jump-host patterns — Administrative UIs and control panels are locked to a single bastion host IP, so operators must first authenticate to the bastion before reaching the target service.
    • IoT and home automation security — Smart home hubs and automation controllers often lack robust built-in authentication. Restricting network access at the firewall level compensates for application-layer weaknesses.
    • Microservice and API isolation — Internal REST or gRPC APIs that should never be publicly routable are restricted to the specific IP of the calling service, enforcing network-layer service boundaries.

    Final Thoughts

    The command:

    sudo ufw allow from 10.8.0.2 to any port 8123 proto tcp

    is a textbook example of disciplined, intent-driven firewall policy. It combines source IP restriction, port targeting, and protocol scoping into a single, auditable rule that is easy to read, easy to reason about, and easy to revoke.

    If you are running services that do not require public internet exposure — and most internal services do not — this is the pattern you should default to. Open only what is necessary, to only who needs it, using only the protocol required. Everything else stays closed.

  • Secure WireGuard Homelab: Complete Setup Guide

    This page documents the complete, final configuration of a secure homelab environment built around WireGuard, Raspberry Pis, UFW, AdGuard Home, Fail2Ban, n8n workflow automation, and Uptime Kuma. It is structured as a top-to-bottom technical reference, covering design intent, implementation details, validation procedures, and the final security posture of the system.

    The goal of this build is not maximum complexity — it is clear, intentional security that is straightforward to operate, reason about, and maintain over time. Every decision documented here was made deliberately, and the rationale for each choice is explained alongside the configuration itself.


    Design Goals

    Before examining the implementation, it is worth stating the principles that shaped every configuration decision in this build:

    • Secure all management access using WireGuard — No administrative interface is reachable from the public internet without first authenticating through the VPN tunnel.
    • Preserve full internet performance — A split-tunnel design ensures that only VPN-destined traffic traverses WireGuard. Normal internet traffic continues to use each device’s local gateway without any performance penalty.
    • Expose only explicitly intended public services — Every open port is a deliberate choice. Nothing is open by accident or by default.
    • Eliminate accidental routing, NAT, and DNS side effects — Misconfigured AllowedIPs ranges and unintended DNS overrides are among the most common homelab failure modes. This design prevents both.
    • Provide monitoring, alerting, and automated response — Passive security is insufficient. The system actively detects anomalies, generates alerts, and can respond automatically.
    • Keep the system auditable and maintainable — Every component and rule is documented with intent. A future administrator — or your future self — should be able to understand what is running and why.

    Final Architecture Overview

    • Main Server: Acts as the WireGuard server and central management node for the homelab.
    • Raspberry Pi 1 & 2: WireGuard clients that also serve as hosts for internal services.
    • VPN Subnet: 10.8.0.0/24 — a private RFC 1918 address space dedicated to WireGuard tunnel traffic.
    • Tunnel Mode: Split tunnel — only traffic destined for the VPN subnet is routed through WireGuard.
    • DNS: Handled locally by AdGuard Home running on a Raspberry Pi. WireGuard does not override system DNS.
    • Firewall: UFW managing all inbound traffic rules, operating in IPv4-only mode.
    • IPv6: Intentionally and permanently disabled at both the kernel and firewall level.

    Only traffic destined for the 10.8.0.0/24 VPN subnet is routed through the WireGuard interface. All other outbound traffic — web browsing, software updates, external API calls — continues to flow through each device’s local LAN gateway. This split-tunnel design eliminates the performance overhead and NAT complexity of a full-tunnel VPN while still protecting all administrative access paths.


    WireGuard Configuration (Final)

    Server — /etc/wireguard/wg0.conf

    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = PLEASE_PUT_YOUR_SERVER_PRIVATE_KEY
    
    [Peer]
    PublicKey = PLEASE_PUT_YOUR_PI1_PUBLIC_KEY
    AllowedIPs = 10.8.0.2/32
    
    [Peer]
    PublicKey = PLEASE_PUT_YOUR_PI2_PUBLIC_KEY
    AllowedIPs = 10.8.0.3/32

    Each peer entry uses a /32 host mask in AllowedIPs, which tells the WireGuard server to accept packets from that peer only if they originate from that specific IP address. This prevents one peer from spoofing another’s tunnel address and keeps the routing table clean and unambiguous.

    Raspberry Pi Clients — /etc/wireguard/wg0.conf

    [Interface]
    Address = PLEASE_PUT_YOUR_PI_WG_IP
    PrivateKey = PLEASE_PUT_YOUR_PI_PRIVATE_KEY
    # No DNS line — AdGuard Home runs locally on this device
    
    [Peer]
    PublicKey = PLEASE_PUT_YOUR_SERVER_PUBLIC_KEY
    Endpoint = PLEASE_PUT_YOUR_SERVER_PUBLIC_IP_OR_DNS:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25

    Critical rule: AllowedIPs = 10.8.0.0/24

    This single setting is the most consequential value in the client configuration, and it is worth understanding precisely why. The AllowedIPs directive serves a dual purpose in WireGuard: it defines both the routing table entries added to the host when the tunnel is active and the cryptographic filter used to validate incoming packets from this peer.

    Using AllowedIPs = 10.8.0.0/24 tells the client to route only the VPN subnet through the wg0 interface. Using 0.0.0.0/0 instead would create a full-tunnel VPN, routing all traffic — including normal internet requests — through the WireGuard server. This would introduce NAT dependencies on the server, degrade performance on the Pis, and break the split-tunnel design entirely. The /24 subnet mask is non-negotiable for this architecture.

    PersistentKeepalive = 25 sends a keepalive packet to the server every 25 seconds. This is necessary because the Raspberry Pis sit behind NAT (most home routers perform NAT), and NAT state tables expire idle UDP sessions after a period of inactivity. Without keepalives, the tunnel would silently fail whenever no traffic was exchanged for more than a minute or two.
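    For completeness, the placeholder keys in these files are generated with the standard WireGuard tools. Run this once on each node and copy only the public half into the other side's [Peer] section (file names here are just a convention):

```shell
# Keep key files readable only by the owner:
umask 077

# Generate a private key and derive the matching public key:
wg genkey | tee wg_private.key | wg pubkey > wg_public.key

# Paste the contents of wg_public.key into the peer's config;
# wg_private.key never leaves this machine.
cat wg_public.key
```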


    Routing Validation (Required)

    After bringing the tunnel up on each Raspberry Pi, you must validate the routing table before considering the configuration correct. Run the following on each Pi:

    ip route

    The output must show exactly two relevant entries:

    • Default route → LAN gateway — All non-VPN traffic exits through the local router. This confirms the split-tunnel is functioning as intended.
    • 10.8.0.0/24 → wg0 — All VPN subnet traffic is correctly directed into the WireGuard interface.

    If the default route points to wg0 rather than the LAN gateway, the configuration has created a full-tunnel VPN. Stop immediately, bring the interface down with sudo wg-quick down wg0, correct the AllowedIPs value in the configuration file, and restart the tunnel before continuing. Proceeding with an incorrect default route will cause all internet traffic to be routed through the VPN server, breaking NAT and potentially causing a connectivity loss that requires physical access to recover.
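    For reference, a healthy split-tunnel routing table on a Pi looks roughly like this (interface names and LAN addresses are examples and will differ on your network):

```shell
ip route
# Expected shape of the output on a correctly configured Pi:
#   default via 192.168.1.1 dev eth0 proto dhcp src 192.168.1.50 metric 100
#   10.8.0.0/24 dev wg0 proto kernel scope link src 10.8.0.2
```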


    DNS Design (AdGuard Home)

    DNS resolution in this homelab is handled entirely by AdGuard Home, which runs locally on one of the Raspberry Pis. This approach provides ad blocking, query logging, and the ability to detect DNS-based abuse patterns — all without relying on any external resolver for internal queries.

    • AdGuard Home is deployed locally on a Raspberry Pi and listens on the standard DNS ports.
    • WireGuard does not override DNS — there is no DNS = directive in any WireGuard configuration file. This is intentional.
    • Adding a DNS = line to the WireGuard client config would redirect all system DNS queries through the tunnel, creating an unintended dependency on the VPN being active for basic name resolution. Because AdGuard Home is already local, this redirection is unnecessary and counterproductive.

    The following DNS ports are intentionally exposed as public services to allow external clients to use AdGuard Home as a resolver:

    • 53/udp and 53/tcp — Standard DNS (plain-text queries)
    • 853/tcp — DNS-over-TLS (DoT), providing encrypted resolution for clients that support it

    DNS query logs generated by AdGuard Home feed directly into the automation pipeline. Unusual query volumes, known malicious domains, or patterns indicative of DNS abuse trigger automated detection and response workflows in n8n.


    IPv6 Policy

    IPv6 is intentionally and permanently disabled across all nodes in this homelab. While IPv6 is the future of internet addressing, it introduces meaningful complexity in small environments: dual-stack routing requires firewalls to manage rules for two separate protocol families, and many homelab tools and guides implicitly assume IPv4-only operation. A single misconfigured IPv6 rule can expose services that the UFW IPv4 rules correctly block.

    Rather than manage this complexity, IPv6 is disabled at two independent layers to prevent any possibility of accidental re-enablement.

    Kernel-Level Disable — /etc/sysctl.d/99-disable-ipv6.conf

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1

    These sysctl parameters disable IPv6 at the network stack level, preventing the kernel from assigning IPv6 addresses to any interface. Apply them immediately with sudo sysctl --system; no reboot is required.

    Firewall-Level Disable — /etc/default/ufw

    IPV6=no

    Setting IPV6=no in UFW’s configuration file prevents UFW from generating or applying any IPv6 ip6tables rules. This provides a second layer of defense: even if IPv6 were somehow re-enabled at the kernel level, UFW would not be managing rules for it, making the discrepancy immediately visible during audits.

    The entire environment operates exclusively over IPv4.


    Firewall Policy (UFW — Final)

    Default Policy

    ufw default deny incoming
    ufw default allow outgoing

    The default-deny incoming policy is the foundation of the entire firewall strategy. Every inbound packet is dropped unless a specific rule explicitly permits it. This means every open port in this system is a conscious decision — not an oversight. The default-allow outgoing policy permits services on the machine to initiate connections externally without restriction, which is appropriate for a homelab host that needs to reach package repositories, APIs, and update servers.

    Publicly Exposed Services (Intentional)

    The following ports are intentionally reachable from the public internet. Each has a documented reason for being open:

    • 22/tcp — SSH. Restricted to WireGuard IPs only (see SSH Access Model below). On the public interface, this port is blocked.
    • 80/tcp — HTTP. Used for Let’s Encrypt ACME challenge responses during TLS certificate issuance and renewal.
    • 443/tcp — HTTPS. The primary public-facing service endpoint for any web services hosted on the server.
    • 51820/udp — WireGuard. Must be reachable on the public interface so that Raspberry Pi clients can establish and maintain the VPN tunnel from behind their home routers.
    • 53/udp and 53/tcp — DNS. AdGuard Home accepts external queries on these ports.
    • 853/tcp — DNS-over-TLS. Encrypted DNS resolution for clients that support the DoT standard.

    WireGuard Internal Access

    ufw allow in on wg0
    ufw allow in on wg0 to any port 53
    ufw allow in on wg0 to any port 853

    The first rule permits all traffic arriving on the wg0 interface. Because WireGuard cryptographically authenticates every packet using public-key cryptography, only peers with a valid keypair can inject traffic into wg0. Traffic arriving on this interface is therefore already trusted at the transport layer, and blanket allowance is appropriate.

    The additional DNS rules on wg0 are belt-and-suspenders entries that explicitly permit VPN peers to query AdGuard Home on ports 53 and 853. They are technically redundant given the first rule but serve as explicit documentation that DNS over the VPN is an intended access path.

    SSH Access Model

    SSH is restricted to authenticated WireGuard peers only. The firewall rule is:

    ufw allow from 10.8.0.0/24 to any port 22 proto tcp

    This rule permits SSH connections only when the source IP falls within the WireGuard VPN subnet. Because 10.8.0.0/24 is a private address range that is not routable on the public internet, the only way to originate a connection from this subnet is to already be authenticated as a WireGuard peer. This effectively makes WireGuard a mandatory pre-authentication step before SSH is even reachable.
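
    The subnet match UFW performs for this rule can be sketched in a few lines of portable shell (a hypothetical helper for illustration; UFW itself does this matching in the kernel):

```shell
    # Succeed only when the address lies inside 10.8.0.0/24,
    # i.e. its first three octets are exactly 10.8.0.
    in_vpn_subnet() {
      oldIFS=$IFS; IFS=.
      set -- $1
      IFS=$oldIFS
      [ "$1" = 10 ] && [ "$2" = 8 ] && [ "$3" = 0 ]
    }

    in_vpn_subnet 10.8.0.2     && echo "10.8.0.2 -> SSH allowed"
    in_vpn_subnet 192.168.1.50 || echo "192.168.1.50 -> SSH blocked"
```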

    External port scans targeting the public IP address will correctly show port 22 as blocked. LAN-originating SSH attempts (from the local home network, for example) are also blocked because LAN addresses do not fall within the 10.8.0.0/24 range. Access requires VPN authentication first — no exceptions.


    SSH Hardening

    Network-layer access control is only part of the SSH security posture. The SSH daemon itself is configured with the following hardening measures in /etc/ssh/sshd_config:

    • Key-only authentication — Public key authentication is the sole permitted login method. Clients without a corresponding private key cannot authenticate regardless of any other factor.
    • Password authentication disabledPasswordAuthentication no is set explicitly, eliminating the entire class of brute-force and credential-stuffing attacks against SSH.
    • Root login disabledPermitRootLogin no prevents direct root access over SSH. Administrative tasks requiring root are performed via sudo from a non-privileged account.
    • Reachable only via WireGuard IPs — As enforced by the UFW rule above, SSH is not reachable from any address outside the VPN subnet.

    To verify reachability before testing SSH, use ping 10.8.0.X to confirm that the VPN tunnel is up and the target host is responding at the network layer. A successful ping confirms connectivity; a failed SSH connection after a successful ping indicates a key placement or permission issue on the target host rather than a network or firewall problem.


    Docker Monitoring (Uptime Kuma)

    Uptime Kuma monitors the health of Docker containers running on the Raspberry Pis. To enable this, it requires read access to the Docker API. The following design decisions keep this monitoring capability secure:

    • The Docker API is never exposed publicly. Port 2375 (the default unencrypted Docker API port) is not open on any public interface.
    • Access is restricted to WireGuard peers only. The UFW rule limits access to the VPN subnet, ensuring the API is reachable only by authenticated tunnel participants.
    • A read-only Docker socket proxy mediates all access. Rather than exposing the raw Docker socket or the full API — which would grant complete control over all containers — a socket proxy is placed in front of it. The proxy exposes only the read-only endpoints required by Uptime Kuma (container status, health, metadata), and nothing else.

    The UFW rule permitting this access is:

    ufw allow from 10.8.0.0/24 to any port 2375

    This combination — network restriction plus a read-only proxy — means Uptime Kuma can monitor container health without ever having the ability to start, stop, delete, or modify containers. Monitoring capability and control capability are intentionally separated.


    Detection, Alerting, and Automation

    Static firewall rules block known bad patterns at the network layer, but they cannot adapt to novel threats or respond in real time. This homelab adds an active detection and response layer built on three tools:

    • Fail2Ban monitors authentication logs (SSH, web server, and others) and automatically blocks source IP addresses that exceed configured failure thresholds. It translates log events into firewall rules dynamically, without manual intervention.
    • AdGuard Home logs every DNS query passing through the resolver. These logs serve as a signal source for detecting DNS-based abuse, including domain generation algorithm (DGA) traffic, known command-and-control domains, and unusual query volumes that may indicate a compromised device on the network.
    • n8n acts as the automation and orchestration layer. Running on a scheduled trigger every minute, it ingests events from Fail2Ban, AdGuard Home, and other log sources, applies logic to determine whether a response is warranted, executes automated IP blocking when thresholds are exceeded, and delivers real-time alert notifications to a configured Slack channel.
    • Uptime Kuma provides continuous availability monitoring for hosts, services, and Docker containers. It alerts immediately when a monitored target becomes unreachable, providing early warning of hardware failures, service crashes, or network disruptions.

    Together, these tools form a complete detect → alert → respond pipeline. A threat event does not require a human to be watching a dashboard to trigger a response — the system acts autonomously and notifies operators of what it has done.


    Validation Checklist

    Run the following commands after any configuration change to verify that the system is in the expected state. Do not skip this step — confirming correct behavior is as important as configuring it.

    wg                          # Confirm tunnel is up, peers are listed, and handshakes are recent
    ip route                    # Confirm default route → LAN gateway, not wg0
    ping 10.8.0.1               # Confirm VPN tunnel reachability to the server
    ping 8.8.8.8                # Confirm internet reachability bypasses the tunnel
    ping google.com             # Confirm DNS resolution is functioning correctly
    ufw status verbose          # Confirm all expected rules are in place and nothing unexpected is open
    ss -tulpen | head -n 30     # Confirm which services are actually listening and on which interfaces

    In addition to these local checks, perform an external port scan using a tool such as nmap from a machine outside the VPN to confirm that port 22 appears as filtered (not open, not closed) from the public internet. SSH access must work correctly only from WireGuard peer addresses — never from any external or LAN source.
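
    A minimal external scan sketch, run from a host outside the VPN. The address placeholder is yours to substitute, and nmap must be installed on the scanning machine:

```shell
    # TCP services: 22 should report "filtered", 80/443/853 "open".
    nmap -Pn -p 22,80,443,853 <SERVER_PUBLIC_IP>

    # UDP needs a separate scan mode (root required). 51820 typically shows
    # "open|filtered" because WireGuard never answers unauthenticated probes.
    sudo nmap -sU -p 53,51820 <SERVER_PUBLIC_IP>
```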


    Common Pitfalls (Avoided in This Design)

    The following mistakes are common in homelab WireGuard deployments. Each was explicitly considered and avoided in this build:

    • AllowedIPs = 0.0.0.0/0 on clients — Creates an unintended full-tunnel VPN. All internet traffic is routed through the server, introducing NAT dependencies, degrading performance, and requiring IP forwarding to be enabled on the server. Always use the specific VPN subnet (10.8.0.0/24) for split-tunnel operation.
    • Adding a DNS = directive when AdGuard Home runs locally — Overriding DNS in WireGuard when the resolver is already local creates circular dependencies and breaks name resolution when the tunnel is down. Remove the directive entirely; local DNS requires no tunnel involvement.
    • Exposing the Docker API publicly — The Docker API grants full control over all containers and images on the host. Exposing port 2375 (or even the TLS-protected 2376) to the public internet is equivalent to granting root access to anyone who can connect. Always gate Docker API access behind the VPN.
    • Using an HTTPS reverse proxy as the sole Docker security layer — A proxy in front of the Docker API may restrict which endpoints are accessible over HTTP, but it does not protect against misconfiguration, proxy bypass vulnerabilities, or incorrect TLS certificate validation. Network-layer access control via UFW is a necessary additional layer.
    • Relying on NAT as a security mechanism — NAT hides internal hosts from external initiators as a side effect, but it is not a firewall. NAT state can be bypassed, and an explicit firewall policy should never be substituted with “it’s behind NAT.” This build treats NAT as a connectivity mechanism, not a security control.

    Final Summary

    This homelab build delivers a secure, high-performance, split-tunnel WireGuard network connecting a central server and multiple Raspberry Pis, without sacrificing internet speed or introducing unnecessary operational complexity.

    The main server acts as the WireGuard hub. Each Raspberry Pi connects as a spoke client, receiving a static address within the private 10.8.0.0/24 VPN subnet. Only traffic destined for that subnet is routed through WireGuard — all other internet traffic continues to use each device’s local gateway directly. This eliminates NAT dependencies on the server side and ensures no performance degradation for normal internet activity on the Pis.

    DNS is handled entirely by AdGuard Home running locally on a Raspberry Pi. WireGuard does not override system DNS settings on any device, preventing common resolution failures and keeping DNS behavior predictable and independently functional — even if the VPN tunnel is momentarily down.

    IPv6 is disabled at both the kernel (via sysctl) and firewall (via UFW configuration) levels, removing an entire class of dual-stack edge cases from the threat model without any practical impact on functionality.

    The UFW firewall operates under a default-deny incoming policy. Every open port is explicitly documented and intentional. SSH access is restricted to WireGuard-authenticated peers and enforced with key-only authentication, making brute-force attacks structurally impossible — an attacker cannot even reach the SSH service without first authenticating through WireGuard.

    Monitoring, automated alerting, and real-time response are provided by Uptime Kuma, Fail2Ban, AdGuard Home log analysis, and n8n automation — forming an active defense layer that operates continuously without manual intervention.

    The result is a fast, secure, low-maintenance homelab with a clearly defined and deliberately minimized attack surface, documented access paths, and operational procedures that can be understood, audited, and reproduced by anyone reading this reference — including you, six months from now.

  • Homelab Security: WireGuard VPN & Firewall Setup Guide

    Goals Achieved

    This runbook documents the complete setup procedure for a secure, split-tunnel WireGuard homelab. By the end of this guide, the following will be in place:

    • The main server runs a WireGuard server on the wg0 interface, acting as the VPN hub for the network.
    • Pi 1 and Pi 2 each run a WireGuard client, connecting back to the server as authenticated tunnel peers.
    • A split tunnel is configured so that only VPN subnet traffic traverses WireGuard. All other internet traffic continues to use each device’s local gateway at full speed.
    • AdGuard Home runs locally on a Pi and retains full DNS control. WireGuard does not override system DNS, preventing common resolution failures.
    • Optional: Docker hosts on each Pi are monitored by Uptime Kuma over the WireGuard tunnel, using a read-only socket proxy.
    • Optional: IPv6 is permanently disabled at the kernel and firewall level to eliminate dual-stack complexity.
    • UFW enforces a default-deny firewall policy, with only explicitly required ports open to the public internet.

    1. WireGuard on the Main Server (Server Side)

    1.1 Install WireGuard

    sudo apt update
    sudo apt install -y wireguard

    1.2 Generate the Server Keypair

    WireGuard uses public-key cryptography. Each peer has a private key (kept secret) and a derived public key (shared with other peers). The following command generates both in a single pipeline and saves them to the standard WireGuard directory:

    wg genkey | sudo tee /etc/wireguard/server.key | wg pubkey | sudo tee /etc/wireguard/server.pub >/dev/null
    sudo chmod 600 /etc/wireguard/server.key

    The chmod 600 command restricts the private key file to owner-only (root) access. wg-quick warns at startup when key or configuration files are world-accessible, and a readable private key would let any local user impersonate this peer.

    1.3 Create the Server Configuration

    sudo nano /etc/wireguard/wg0.conf

    Paste the following and replace <SERVER_PRIVATE_KEY> with the contents of /etc/wireguard/server.key:

    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <SERVER_PRIVATE_KEY>

    To retrieve the private key value for pasting:

    sudo cat /etc/wireguard/server.key

    The server is assigned 10.8.0.1 — the first address in the VPN subnet — and will act as the gateway for all tunnel peers. Peer entries ([Peer] blocks for each Raspberry Pi) will be added in Step 3 after the Pi keys are generated.

    1.4 Enable and Start the WireGuard Interface

    sudo systemctl enable wg-quick@wg0
    sudo systemctl start wg-quick@wg0
    sudo wg

    The systemctl enable command ensures the WireGuard interface starts automatically on every boot. The sudo wg command displays the current tunnel status. At this point, the server will show a listening interface but no connected peers — that is expected until the Pi clients are configured and their public keys are added.


    2. WireGuard on Raspberry Pi 1 and Pi 2 (Client Side)

    Perform the following steps on each Raspberry Pi, substituting the correct IP address for each device. Pi 1 uses 10.8.0.2 and Pi 2 uses 10.8.0.3.

    2.1 Install WireGuard

    sudo apt update
    sudo apt install -y wireguard

    2.2 Generate the Client Keypair

    wg genkey | tee ~/client.key | wg pubkey > ~/client.pub
    chmod 600 ~/client.key

    The public key (~/client.pub) must be copied to the server and added to the server’s wg0.conf in Step 3. The private key (~/client.key) stays on this Pi and is referenced in the client configuration below.

    2.3 Create the Client Configuration

    Pi 1 Configuration (10.8.0.2)

    sudo nano /etc/wireguard/wg0.conf
    [Interface]
    Address = 10.8.0.2/24
    PrivateKey = <PI1_PRIVATE_KEY>
    # No DNS directive — AdGuard Home manages DNS locally on this device
    
    [Peer]
    PublicKey = <SERVER_PUBLIC_KEY>
    Endpoint = <SERVER_PUBLIC_IP_OR_DNS>:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25

    Pi 2 Configuration (10.8.0.3)

    Use the same configuration structure, with the following values changed:

    Address = 10.8.0.3/24
    PrivateKey = <PI2_PRIVATE_KEY>

    All other fields — PublicKey, Endpoint, AllowedIPs, and PersistentKeepalive — remain identical.

    Critical: the split-tunnel rule.

    AllowedIPs = 10.8.0.0/24   ✅ Correct — routes only VPN subnet traffic through the tunnel
    AllowedIPs = 0.0.0.0/0     ❌ Wrong — creates a full-tunnel VPN, routing all internet traffic through the server

    The AllowedIPs directive controls both which traffic is sent into the tunnel and which source addresses are accepted from the peer. Using the /24 VPN subnet ensures that only tunnel traffic is affected. Using 0.0.0.0/0 would redirect all internet traffic through the WireGuard server, breaking NAT, degrading performance, and requiring IP forwarding to be enabled on the server — none of which is intended in this design.
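
    The split-tunnel effect of this AllowedIPs value can be sketched as a simple destination check (illustrative shell mirroring the route wg-quick installs; the eth0 interface name is an assumption):

```shell
    # Destinations inside 10.8.0.0/24 go to wg0; everything else keeps
    # following the LAN default route (eth0 here, as an example).
    route_for() {
      oldIFS=$IFS; IFS=.
      set -- $1
      IFS=$oldIFS
      if [ "$1" = 10 ] && [ "$2" = 8 ] && [ "$3" = 0 ]; then
        echo wg0
      else
        echo eth0
      fi
    }

    route_for 10.8.0.1   # prints wg0  (server: via the tunnel)
    route_for 8.8.8.8    # prints eth0 (internet: via the LAN gateway)
```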

    PersistentKeepalive = 25 sends a keepalive packet every 25 seconds. This is required because the Raspberry Pis sit behind NAT routers, and NAT state tables will expire idle UDP sessions within one to two minutes of inactivity. Without keepalives, the tunnel silently drops after a period of no traffic.

    2.4 Enable and Start on Each Pi

    sudo systemctl enable wg-quick@wg0
    sudo systemctl start wg-quick@wg0
    sudo wg

    The interface will start, but the tunnel handshake will not complete until the server’s wg0.conf is updated with this Pi’s public key in the next step.


    3. Add Pi Peers to the Main Server

    On the main server, open the WireGuard configuration file and append a [Peer] block for each Raspberry Pi:

    sudo nano /etc/wireguard/wg0.conf

    Add the following blocks at the end of the file, replacing the placeholder values with the actual public keys generated on each Pi in Step 2.2:

    [Peer]
    PublicKey = <PI1_PUBLIC_KEY>
    AllowedIPs = 10.8.0.2/32
    
    [Peer]
    PublicKey = <PI2_PUBLIC_KEY>
    AllowedIPs = 10.8.0.3/32

    Each peer uses a /32 host mask in AllowedIPs. This tells the server to accept packets from each peer only when they originate from that specific IP address, preventing one peer from injecting traffic that claims to originate from another peer’s tunnel address.

    Restart the server interface to apply the changes:

    sudo systemctl restart wg-quick@wg0
    sudo wg

    After restarting, sudo wg should show both peers listed. Once the Pi clients send their first keepalive or data packet, the “latest handshake” timestamp for each peer will update, confirming the tunnel is active.
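
    For reference, a healthy post-handshake status looks roughly like the following. All values are illustrative, not captured from this build:

```shell
    sudo wg
    # interface: wg0
    #   public key: (server public key)
    #   private key: (hidden)
    #   listening port: 51820
    #
    # peer: (Pi 1 public key)
    #   endpoint: 198.51.100.20:41852
    #   allowed ips: 10.8.0.2/32
    #   latest handshake: 14 seconds ago
    #   transfer: 1.21 MiB received, 2.96 MiB sent
```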


    4. Verify the Split Tunnel Is Correct

    4.1 Tunnel Ping Tests

    From Pi 1 or Pi 2, verify that the server is reachable over the tunnel:

    ping 10.8.0.1

    From the server, verify that both Pis are reachable:

    ping 10.8.0.2
    ping 10.8.0.3

    A successful ping in both directions confirms that the tunnel handshake completed and traffic is flowing correctly through the wg0 interface.

    4.2 Confirm Internet Traffic Bypasses the Tunnel

    On each Pi, inspect the routing table:

    ip route

    The output must show exactly two relevant entries:

    • Default route via LAN gateway — for example, default via 192.168.1.1 dev eth0. This confirms that all internet-bound traffic exits through the local router, not the VPN tunnel.
    • 10.8.0.0/24 dev wg0 — confirming that VPN subnet traffic is correctly directed into the WireGuard interface.

    If the default route points to wg0 instead of the LAN gateway, the tunnel has been accidentally configured as a full tunnel. Bring the interface down immediately with sudo wg-quick down wg0, correct AllowedIPs in the configuration file, then restart the interface. Do not proceed until this is resolved.
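
    The recovery procedure just described, as a command sequence:

```shell
    sudo wg-quick down wg0                # stop routing through the tunnel immediately
    sudo nano /etc/wireguard/wg0.conf     # fix: AllowedIPs = 10.8.0.0/24
    sudo wg-quick up wg0
    ip route | grep '^default'            # must show the LAN gateway again, not wg0
```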


    5. DNS Configuration (AdGuard Home on Pi)

    Because AdGuard Home runs locally on the Raspberry Pi, no DNS = directive should be present in the WireGuard client configuration — and none has been added.

    Adding a DNS = line to the WireGuard config would instruct the wg-quick tool to modify /etc/resolv.conf when the tunnel comes up, redirecting all DNS queries through WireGuard. When AdGuard Home is already running locally, this creates a circular dependency: DNS resolution requires the tunnel, and the tunnel endpoint may require DNS resolution to establish. It also means that if the tunnel is temporarily down, all name resolution on the Pi fails — even for entirely local queries.

    Because AdGuard Home is co-located on the same device, DNS resolution is already local and requires no tunnel involvement. The correct configuration is to omit the DNS = directive entirely and allow AdGuard Home to manage resolution directly.
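
    A read-only check that wg-quick left the resolver configuration alone. This assumes AdGuard Home listens on 127.0.0.1 and that dig (dnsutils) is installed:

```shell
    # resolv.conf should look identical whether the tunnel is up or down.
    cat /etc/resolv.conf

    # Resolution should succeed via the local AdGuard Home instance,
    # with no tunnel involvement:
    dig +short google.com @127.0.0.1
```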


    6. Optional: Disable IPv6 Permanently (Ubuntu / Raspberry Pi OS)

    Disabling IPv6 removes an entire class of dual-stack routing and firewall edge cases. In small homelab environments, the operational complexity of managing rules across two protocol families rarely provides any benefit. The following procedure disables IPv6 at both the kernel level and the firewall level.

    6.1 Disable at the Kernel Level (Recommended Method)

    sudo nano /etc/sysctl.d/99-disable-ipv6.conf

    Paste the following:

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1

    Apply the settings immediately without rebooting:

    sudo sysctl --system

    Verify that no IPv6 addresses are assigned to any interface:

    ip a | grep inet6

    If this command produces no output, IPv6 has been successfully disabled at the kernel level. The settings will persist across reboots because the configuration file resides in /etc/sysctl.d/, which is read during every boot sequence.
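
    6.2 Disable at the Firewall Level (UFW)

    To complete the two-layer disable, stop UFW from generating or applying any ip6tables rules. The sed command below assumes /etc/default/ufw already contains an IPV6= line, which is the default on Ubuntu and Raspberry Pi OS:

```shell
    sudo sed -i 's/^IPV6=.*/IPV6=no/' /etc/default/ufw
    grep '^IPV6=' /etc/default/ufw    # expected: IPV6=no

    # Reload UFW so the change takes effect (skip if UFW is not yet enabled):
    sudo ufw reload
```

    This provides a second, independent layer: even if IPv6 were somehow re-enabled at the kernel level, UFW would not be managing rules for it.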


    7. Firewall Configuration (UFW)

    7.1 Reset to a Clean State

    Before applying the intended ruleset, reset UFW to eliminate any previously configured rules that might conflict or leave unexpected ports open:

    sudo ufw disable
    sudo ufw reset
    sudo ufw default deny incoming
    sudo ufw default allow outgoing

    The default-deny incoming policy means every inbound packet is dropped unless a specific rule explicitly permits it. This is the foundation of the entire firewall strategy — every open port in this system is a deliberate decision, not an oversight or default.

    7.2 Allow Required Ports

    SSH (Choose One)

    Recommended — WireGuard-only access (most secure):

    sudo ufw allow from 10.8.0.0/24 to any port 22 proto tcp comment "SSH via WireGuard only"

    This rule permits SSH connections only from addresses within the WireGuard VPN subnet. Because 10.8.0.0/24 is a private address range not routable on the public internet, an attacker cannot reach port 22 without first authenticating through WireGuard. External port scans will show port 22 as filtered — not open, not closed.

    Alternative — public SSH with rate limiting (if WireGuard-only is not viable):

    sudo ufw limit 22/tcp comment "SSH public with rate limiting"

    UFW’s limit action blocks source IPs that attempt more than six connections within 30 seconds. This reduces brute-force exposure but leaves the SSH service reachable from any source IP. Use this only if VPN-gated access is not possible.

    HTTP and HTTPS

    sudo ufw allow 80/tcp comment "HTTP"
    sudo ufw allow 443/tcp comment "HTTPS"

    Port 80 is required for Let’s Encrypt ACME HTTP-01 challenge responses during TLS certificate issuance and renewal. Port 443 is the primary public-facing endpoint for any HTTPS services hosted on the server.

    WireGuard

    sudo ufw allow 51820/udp comment "WireGuard handshake and keepalive"
    sudo ufw allow in on wg0 comment "WireGuard tunnel — all internal traffic"

    Port 51820 must be open on the public interface so that Raspberry Pi clients can initiate and maintain the WireGuard handshake from behind their home routers. The allow in on wg0 rule permits all traffic arriving on the WireGuard interface. Because WireGuard cryptographically authenticates every packet using public-key cryptography, only peers with a valid keypair can inject traffic into wg0, making blanket allowance on this interface appropriate.

    7.3 Enable UFW and Verify

    sudo ufw enable
    sudo ufw status numbered
    sudo ufw status verbose

    Review the output of ufw status verbose carefully. Every rule listed should correspond to a service you have intentionally chosen to expose. If any unexpected rules appear, investigate before proceeding.


    8. Optional: Docker Monitoring over WireGuard (Uptime Kuma)

    8.1 Deploy a Read-Only Docker Socket Proxy on Each Pi

    Rather than exposing the raw Docker socket or the full unfiltered Docker API — both of which would grant complete control over every container on the host — a socket proxy is deployed in front of it. The proxy exposes only the read-only API endpoints required by Uptime Kuma (container status, version, ping, and metadata) and blocks everything else.

    Create a docker-compose.yml file in a directory of your choice on each Pi:

    version: "3.8"
    services:
      docker-socket-proxy:
        image: tecnativa/docker-socket-proxy
        container_name: docker-socket-proxy
        ports:
          - "2375:2375"
        environment:
          CONTAINERS: 1    # Allow container list and inspect endpoints
          INFO: 1          # Allow /info endpoint
          PING: 1          # Allow /_ping health check endpoint
          VERSION: 1       # Allow /version endpoint
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro   # Mount socket read-only
        restart: unless-stopped

    Start the proxy:

    docker compose up -d

    With this configuration, Uptime Kuma can query container health and status, but cannot start, stop, delete, or modify any container. Monitoring capability and control capability are intentionally separated at the API layer.
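
    A quick check from any WireGuard peer that the proxy really is read-only. The addresses are this build's tunnel IPs; the 403 is the proxy refusing an endpoint not enabled by its environment flags:

```shell
    # Read endpoints permitted by the proxy:
    curl -s http://10.8.0.2:2375/_ping ; echo          # expected: OK
    curl -s http://10.8.0.2:2375/version | head -c 200 ; echo

    # Write endpoints are denied. Expect HTTP 403:
    curl -s -o /dev/null -w '%{http_code}\n' \
      -X POST http://10.8.0.2:2375/containers/create
```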

    8.2 Restrict Docker Proxy Access to WireGuard Only

    On each Pi, add a UFW rule that permits connections to port 2375 only from addresses within the WireGuard VPN subnet:

    sudo ufw allow from 10.8.0.0/24 to any port 2375 comment "Docker socket proxy — WireGuard only"

    Port 2375 must never be reachable from the public internet. The combination of the VPN-only UFW rule and the read-only socket proxy provides two independent layers of protection: network-level access control and API-level capability restriction.

    8.3 Configure Uptime Kuma

    In the Uptime Kuma interface, add a new monitor for each Raspberry Pi with the following settings:

    • Monitor Type: Docker Host
    • Docker Host URL for Pi 1: http://10.8.0.2:2375
    • Docker Host URL for Pi 2: http://10.8.0.3:2375

    Because Uptime Kuma itself runs on a device within the WireGuard subnet, it can reach these addresses directly over the tunnel. No public internet exposure is involved.


    9. Common Pitfalls to Avoid

    The following mistakes are frequently made in homelab WireGuard deployments. Each was explicitly considered and avoided in this build:

    • Never expose Docker port 2375 to the public internet. The Docker API grants full control over all containers and images on the host. Exposure to any untrusted network is equivalent to granting root access. Always gate it behind the VPN.
    • Never use AllowedIPs = 0.0.0.0/0 on clients. This creates a full-tunnel VPN. All internet traffic is routed through the server, introducing NAT requirements, degrading Pi performance, and breaking the split-tunnel design. Always use the specific VPN subnet: 10.8.0.0/24.
    • Never set DNS = 10.8.0.1 (or any address) unless the WireGuard server is also running a DNS resolver. If AdGuard Home runs on the Pi itself, adding a DNS directive causes WireGuard to override local DNS with a tunnel-dependent resolver, breaking resolution whenever the tunnel is down.
    • Never rely on a Certbot or Caddy HTTPS reverse proxy to secure the Docker API. An HTTPS proxy in front of the Docker socket may restrict which endpoints are reachable over HTTP, but it does not eliminate API access vulnerabilities, does not protect against misconfiguration, and does not provide network-layer isolation. WireGuard plus UFW is the correct security model.
    • Always use WireGuard plus UFW together. WireGuard provides authenticated tunnel access. UFW enforces which tunnel-internal services are accessible and from which addresses. Neither alone is sufficient — both are necessary.

    System Health Checklist

    Run the following commands on any node after a configuration change, reboot, or any time you want to confirm the system is in the expected state:

    sudo wg                      # Confirm tunnel is active, peers are listed, handshakes are recent
    ip route                     # Confirm default route → LAN gateway, not wg0
    sudo ufw status verbose      # Confirm expected rules are in place, no unexpected ports are open
    ss -tulpen | head -n 30      # Confirm which processes are listening and on which interfaces

    For a complete validation, also verify the following behavioral properties:

    • Pinging 10.8.0.1 from a Pi succeeds — the VPN tunnel is up.
    • Pinging 8.8.8.8 from a Pi succeeds — internet traffic is bypassing the tunnel correctly.
    • Pinging google.com from a Pi succeeds — DNS resolution is functioning independently of the tunnel.
    • An external port scan of the server’s public IP shows port 22 as filtered — SSH is not reachable from outside the VPN.
    • SSH from a WireGuard peer using the correct key succeeds — VPN-gated access is working as intended.

  • WireGuard Split-Tunnel VPN: Secure Setup Guide

    This document is the final, stable Standard Operating Procedure (SOP) for a small production homelab built on WireGuard, AdGuard Home, Docker monitoring, and UFW. It reflects all configuration decisions, corrections, and lessons learned through implementation and testing. It is intended to serve as both a day-to-day operational reference and an audit record of every intentional security choice made in this environment.


    1. Design Goals

    The following principles govern every configuration decision in this homelab. When a future change is proposed, it should be evaluated against these goals before implementation:

    • Secure all private management traffic. Administrative access to servers, containers, and APIs must never be reachable from the public internet without VPN authentication.
    • No performance impact on internet traffic. A split-tunnel design ensures that only VPN-destined traffic traverses WireGuard. All other traffic uses each device’s local gateway at full speed.
    • Public DNS via AdGuard Home is intentional. AdGuard Home is deliberately exposed as a public DNS resolver. This is a feature, not a misconfiguration, and is reflected explicitly in the firewall rules.
    • No public exposure of administrative or Docker APIs. Management interfaces are accessible only to authenticated WireGuard peers. They are invisible to the public internet.
    • Simple, predictable routing and firewall rules. Every rule has a documented purpose. Nothing is open by default or by accident. The configuration must be auditable by anyone reading this document.

    2. Final Network Model

    The following table describes the final, stable topology of this homelab:

    • Main Server — Acts as the WireGuard server and central management hub. Holds the authoritative wg0.conf with peer entries for all connected Raspberry Pis.
    • Raspberry Pi 1 and Pi 2 — WireGuard clients. Each connects to the server as an authenticated peer and hosts internal services accessible only over the VPN tunnel.
    • VPN Subnet10.8.0.0/24. All WireGuard tunnel addresses are drawn from this private RFC 1918 range.
    • Tunnel Mode — Split tunnel. Only traffic destined for 10.8.0.0/24 is routed through WireGuard. All other traffic continues to use the local LAN gateway.
    • DNS — Managed locally by AdGuard Home running on a Raspberry Pi. WireGuard does not override system DNS on any device.
    • IPv6 — Permanently disabled at both the kernel level (via sysctl) and the firewall level (via UFW configuration).
    • Firewall — UFW operating in IPv4-only mode with a default-deny incoming policy.

    3. WireGuard Configuration (Final)

    Server — /etc/wireguard/wg0.conf

    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = PLEASE_PUT_YOUR_SERVER_PRIVATE_KEY
    
    [Peer]
    PublicKey = PLEASE_PUT_YOUR_PI1_PUBLIC_KEY
    AllowedIPs = 10.8.0.2/32
    
    [Peer]
    PublicKey = PLEASE_PUT_YOUR_PI2_PUBLIC_KEY
    AllowedIPs = 10.8.0.3/32

    Each peer entry uses a /32 host mask. This instructs the server to accept packets from each peer only when they originate from that specific IP address, preventing one authenticated peer from spoofing another peer’s tunnel address.
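    Before filling in the PLEASE_PUT placeholders, each node (server, Pi 1, Pi 2) needs its own keypair. A sketch using wireguard-tools — the output filenames are arbitrary choices for this sketch, not anything WireGuard requires:

    ```shell
    # Generate one keypair per node (run separately on the server, Pi 1, Pi 2).
    # umask 077 keeps the private key file readable only by its owner.
    umask 077
    if command -v wg >/dev/null 2>&1; then
        wg genkey | tee wg-private.key | wg pubkey > wg-public.key
        echo "keypair written to wg-private.key / wg-public.key"
    else
        echo "wireguard-tools is not installed (e.g. apt install wireguard-tools)" >&2
    fi
    ```

    The private key goes into that node's own [Interface] section; the corresponding public key goes into the matching [Peer] entry on the other side of the tunnel.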

    Clients — Pi 1 and Pi 2 (/etc/wireguard/wg0.conf)

    [Interface]
    Address = PLEASE_PUT_YOUR_PI_WG_IP
    PrivateKey = PLEASE_PUT_YOUR_PI_PRIVATE_KEY
    # No DNS directive — AdGuard Home manages DNS locally on this device
    
    [Peer]
    PublicKey = PLEASE_PUT_YOUR_SERVER_PUBLIC_KEY
    Endpoint = PLEASE_PUT_YOUR_SERVER_PUBLIC_IP_OR_DNS:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25

    Critical rule: AllowedIPs = 10.8.0.0/24

    This is the most consequential setting in the client configuration. It limits WireGuard’s routing influence to the VPN subnet only, preserving the split-tunnel design. Using 0.0.0.0/0 instead would redirect all internet traffic through the server, introducing NAT dependencies and breaking the performance goal of this build.

    Never use /0 unless the explicit, documented intent is to build a full-tunnel VPN that routes all client traffic through the server.
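    With the configuration files in place, the tunnel can be started and persisted across reboots using the systemd unit shipped with wireguard-tools (a sketch; the interface name wg0 matches the configs above):

    ```shell
    # Start the tunnel now and on every boot via the wg-quick systemd template unit
    sudo systemctl enable --now wg-quick@wg0

    # Confirm the handshake: each peer should show "latest handshake"
    # and non-zero transfer counters once traffic has flowed
    sudo wg show wg0
    ```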


    4. Routing Verification (Mandatory)

    After bringing up the WireGuard interface on each client, verify the routing table before treating the configuration as correct. This step is not optional — an incorrect default route will silently degrade performance and create NAT failures that are difficult to diagnose later.

    Run on each client:

    ip route

    The routing table must contain both of the following entries:

    • Default route → LAN gateway (e.g., default via 192.168.1.1 dev eth0) — confirms that internet-bound traffic exits through the local router, not the tunnel.
    • 10.8.0.0/24 dev wg0 — confirms that VPN subnet traffic is correctly directed into the WireGuard interface.

    If the default route points to wg0, stop immediately. Bring the interface down with sudo wg-quick down wg0, correct AllowedIPs in the configuration file, and restart before proceeding. Do not continue with an incorrect default route in place.
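    This verification can be scripted so that a broken split tunnel fails loudly instead of relying on eyeballing the table. A minimal POSIX sh sketch — the function name check_split_tunnel is invented here, not part of any tool:

    ```shell
    # Reads an `ip route` dump on stdin and checks the two split-tunnel
    # invariants: default route NOT via wg0, VPN subnet route via wg0.
    check_split_tunnel() {
        routes=$(cat)
        if printf '%s\n' "$routes" | grep -q '^default.*dev wg0'; then
            echo "FAIL: default route uses wg0 - split tunnel is broken"
            return 1
        fi
        if ! printf '%s\n' "$routes" | grep -q '^10\.8\.0\.0/24 dev wg0'; then
            echo "FAIL: missing 10.8.0.0/24 route via wg0"
            return 1
        fi
        echo "OK: split tunnel intact"
    }

    # Usage on a client:  ip route | check_split_tunnel
    ```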


    5. DNS Model (Final)

    • AdGuard Home runs locally on a Raspberry Pi and serves as the DNS resolver for the homelab.
    • WireGuard must not override system DNS. Adding a DNS = directive to any WireGuard configuration file would redirect all DNS queries through the tunnel and create a dependency on the VPN being active for basic name resolution — which would break DNS whenever the tunnel is momentarily down. Because AdGuard Home is co-located on the device, no tunnel involvement in DNS is required or desirable.
    • No DNS = entry exists in any WireGuard configuration file in this environment. This is intentional and must not be changed.

    The following DNS ports are intentionally exposed as public services, allowing external clients to use AdGuard Home as a resolver:

    • 53/udp — Standard DNS (UDP, used by the majority of DNS clients)
    • 53/tcp — Standard DNS (TCP, used for responses too large to fit in a UDP datagram)
    • 853/tcp — DNS-over-TLS (DoT), providing encrypted resolution for clients that support it

    DNS access over the WireGuard tunnel is also permitted for internal VPN clients, as documented in the firewall rules in Section 7.
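    The "no DNS = directive" invariant is easy to audit mechanically. A sketch that scans WireGuard configs for an uncommented DNS line — the function name check_no_dns_override is ours, and the /etc/wireguard path is the default install location assumed by this document:

    ```shell
    # Fails (and prints the offending file:line) if any config passed as an
    # argument contains an uncommented DNS = directive.
    check_no_dns_override() {
        if grep -His '^[[:space:]]*DNS[[:space:]]*=' "$@" 2>/dev/null; then
            echo "FAIL: DNS directive found above"
            return 1
        fi
        echo "OK: no DNS override in WireGuard configs"
    }

    # Usage on each node:  check_no_dns_override /etc/wireguard/*.conf
    ```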


    6. IPv6 Policy (Final)

    IPv6 is permanently disabled across all nodes in this environment. Managing dual-stack routing requires maintaining firewall rules for two separate protocol families. A single misconfigured IPv6 rule can expose services that the UFW IPv4 rules correctly block, creating a false sense of security. In a small homelab where IPv6 provides no functional benefit, disabling it entirely is the correct trade-off.

    IPv6 is disabled at the kernel level via /etc/sysctl.d/99-disable-ipv6.conf:

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1

    IPv6 rule generation is also disabled in UFW via /etc/default/ufw:

    IPV6=no

    These two layers are independent. If IPv6 were somehow re-enabled at the kernel level, UFW would still not be generating rules for it, making the discrepancy immediately visible during a firewall audit.
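    The kernel-level state can be verified directly from /proc after applying the sysctl file (sudo sysctl --system). A sketch — the function name report_ipv6 is ours; every knob should report disable_ipv6=1 on the nodes in this environment:

    ```shell
    # Print the kernel's IPv6 disable flag for each interface scope.
    report_ipv6() {
        for knob in all default lo; do
            f="/proc/sys/net/ipv6/conf/$knob/disable_ipv6"
            if [ -r "$f" ]; then
                echo "$knob disable_ipv6=$(cat "$f")"
            else
                echo "$knob: IPv6 not available in this kernel"
            fi
        done
    }
    report_ipv6
    ```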


    7. Firewall Policy (UFW — Final)

    Default Policy

    ufw default deny incoming
    ufw default allow outgoing

    Every inbound packet is dropped unless a specific rule explicitly permits it. This means every open port in this environment is a documented, deliberate decision — not a default or an oversight.

    Publicly Exposed Ports (Intentional)

    ufw allow 22/tcp        # SSH — only for the rate-limited fallback; omit under the WireGuard-only model (see SSH section below)
    ufw allow 80/tcp        # HTTP — required for Let's Encrypt ACME certificate renewal
    ufw allow 443/tcp       # HTTPS — primary public-facing service endpoint
    ufw allow 51820/udp     # WireGuard — must be reachable for clients to establish the tunnel
    ufw allow 53/udp        # DNS — AdGuard Home public resolver (UDP)
    ufw allow 53/tcp        # DNS — AdGuard Home public resolver (TCP)
    ufw allow 853/tcp       # DNS-over-TLS — encrypted DNS for supporting clients

    WireGuard Internal Traffic

    ufw allow in on wg0
    ufw allow in on wg0 to any port 53
    ufw allow in on wg0 to any port 853

    The first rule permits all traffic arriving on the wg0 interface. Because WireGuard authenticates every packet against session keys negotiated in its public-key handshake, only peers holding a valid private key can inject traffic into this interface. Blanket allowance on wg0 is therefore appropriate and safe. The additional DNS rules are explicit documentation that VPN peers are permitted to query AdGuard Home on the standard DNS ports.

    SSH Access Model (Choose One)

    Recommended — WireGuard-only access:

    ufw allow from 10.8.0.0/24 to any port 22 proto tcp

    SSH is reachable only from addresses within the WireGuard VPN subnet. Because this subnet is not routable on the public internet, WireGuard authentication is a mandatory prerequisite for SSH access. Port 22 appears as filtered to any external scanner.

    Alternative — public SSH with rate limiting (use only if VPN-gated access is not viable):

    ufw limit 22/tcp

    UFW’s limit action blocks source IPs that exceed six connection attempts within 30 seconds. This reduces brute-force exposure but leaves port 22 reachable from any source IP. This option should be considered a fallback, not a preference.


    8. Docker Monitoring Policy (Final)

    • The Docker API is never exposed publicly. Port 2375 is not open on any public interface. Exposing the Docker API to the internet grants full control over all containers on the host and is equivalent to remote root access.
    • Access is permitted only over WireGuard. The UFW rule restricts port 2375 to the VPN subnet, ensuring that only authenticated tunnel peers can reach the Docker API.
    • A read-only docker-socket-proxy is strongly preferred. Rather than exposing the full Docker API, the socket proxy limits Uptime Kuma to read-only endpoints (container status, health, metadata). This separates monitoring capability from control capability at the API layer.
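    As a sketch of the preferred model — assuming the widely used tecnativa/docker-socket-proxy image and binding to 10.8.0.1, the server's tunnel address from Section 3 (both are illustrative assumptions, not requirements of this document):

    ```shell
    # Sketch: expose a read-only slice of the Docker API on the VPN interface only.
    # CONTAINERS=1 enables the read-only /containers endpoints; the image denies
    # mutating endpoints (start/stop/exec) by default.
    docker run -d --name docker-socket-proxy \
      --restart unless-stopped \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -e CONTAINERS=1 \
      -p 10.8.0.1:2375:2375 \
      tecnativa/docker-socket-proxy
    ```

    Binding the published port to the WireGuard address means the API is unreachable from public interfaces even before the UFW rule is consulted — defense in depth at both the bind and firewall layers.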

    Firewall rule: