Category: Work

  • 0) Goals we achieved

    • Main server runs WireGuard server (wg0)
    • Pi1 + Pi2 run WireGuard clients
    • Split tunnel (VPN traffic only; internet stays normal/fast)
    • AdGuard Home runs on a Pi and keeps DNS control (no WG DNS override)
    • Optional: Monitor Docker hosts from Uptime Kuma over WireGuard
    • IPv6 permanently disabled (optional)
    • UFW firewall locked down with only the ports you want public

    1) WireGuard on the Main Server (Server side)

    1.1 Install

    sudo apt update
    sudo apt install -y wireguard
    

    1.2 Generate keys

    wg genkey | sudo tee /etc/wireguard/server.key | wg pubkey | sudo tee /etc/wireguard/server.pub >/dev/null
    sudo chmod 600 /etc/wireguard/server.key
    

    1.3 Create /etc/wireguard/wg0.conf

    sudo nano /etc/wireguard/wg0.conf
    

    Paste:

    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <SERVER_PRIVATE_KEY>
    

    Insert the private key:

    sudo cat /etc/wireguard/server.key
    
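    If you'd rather not paste by hand, a one-liner like this can substitute the key (a sketch; it assumes the placeholder in wg0.conf is literally <SERVER_PRIVATE_KEY>):

    sudo sed -i "s|<SERVER_PRIVATE_KEY>|$(sudo cat /etc/wireguard/server.key)|" /etc/wireguard/wg0.conf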

    1.4 Enable + start

    sudo systemctl enable wg-quick@wg0
    sudo systemctl start wg-quick@wg0
    sudo wg
    

    2) WireGuard on Raspberry Pi 1 and Pi 2 (Client side)

    Do this on each Pi, changing the IP.

    2.1 Install

    sudo apt update
    sudo apt install -y wireguard
    

    2.2 Generate keys

    wg genkey | tee ~/client.key | wg pubkey > ~/client.pub
    chmod 600 ~/client.key
    

    2.3 Create /etc/wireguard/wg0.conf

    Pi1 config (10.8.0.2)

    sudo nano /etc/wireguard/wg0.conf
    
    [Interface]
    Address = 10.8.0.2/24
    PrivateKey = <PI1_PRIVATE_KEY>
    # IMPORTANT: No DNS line (because you run AdGuard)
    
    [Peer]
    PublicKey = <SERVER_PUBLIC_KEY>
    Endpoint = <SERVER_PUBLIC_IP>:51820
    AllowedIPs = 10.8.0.0/24
    PersistentKeepalive = 25
    

    Pi2 config (10.8.0.3)

    Same, but:

    Address = 10.8.0.3/24
    PrivateKey = <PI2_PRIVATE_KEY>
    

    Critical split tunnel rule:

    AllowedIPs = 10.8.0.0/24
    

    ❌ Do NOT use 0.0.0.0/0 (or ::/0); that routes all traffic through the tunnel and turns the client into a full tunnel.

    2.4 Start on each Pi

    sudo systemctl enable wg-quick@wg0
    sudo systemctl start wg-quick@wg0
    sudo wg
    
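    To confirm each Pi actually completed a handshake with the server (not just brought the interface up), check the last handshake time:

    sudo wg show wg0 latest-handshakes

    A timestamp of 0 means that peer has never completed a handshake; re-check the keys and the Endpoint.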

    3) Add Pi peers to the Main Server

    On the main server, open:

    sudo nano /etc/wireguard/wg0.conf
    

    Add:

    [Peer]
    PublicKey = <PI1_PUBLIC_KEY>
    AllowedIPs = 10.8.0.2/32
    
    [Peer]
    PublicKey = <PI2_PUBLIC_KEY>
    AllowedIPs = 10.8.0.3/32
    

    Restart:

    sudo systemctl restart wg-quick@wg0
    sudo wg
    
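    If you want to add peers without briefly dropping the tunnel, a common alternative (a sketch; uses bash process substitution) is to sync the running interface against the stripped config instead of restarting:

    sudo bash -c 'wg syncconf wg0 <(wg-quick strip wg0)'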

    4) Verify split tunnel is correct

    4.1 Tunnel ping tests

    From Pi1/Pi2:

    ping 10.8.0.1
    

    From server:

    ping 10.8.0.2
    ping 10.8.0.3
    

    4.2 Confirm internet stays normal (split tunnel)

    On Pi2:

    ip route
    

    You should see:

    • default route via your LAN gateway (ex: 192.168.x.1)
    • 10.8.0.0/24 dev wg0

    If you ever see a default route via wg0, the client has accidentally been full-tunneled; re-check AllowedIPs.
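
    Two quick checks make this unambiguous (a sketch): ask the kernel which interface it would use for a public address versus a tunnel address.

    ip route get 1.1.1.1     # should go via your LAN gateway (eth0/wlan0)
    ip route get 10.8.0.1    # should go via dev wg0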


    5) DNS note (AdGuard Home on Pi)

    This was already handled correctly in this setup:

    ✅ If AdGuard runs on the Pi, do NOT set DNS = ... in the WG config.
    A DNS line would let wg-quick override the system resolver and break resolution through AdGuard.
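
    To double-check that DNS still flows the normal way after bringing wg0 up (a sketch; which command applies depends on whether the Pi uses systemd-resolved):

    cat /etc/resolv.conf     # should still show your usual resolver, not something pushed by WireGuard
    resolvectl status        # only if the Pi uses systemd-resolved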


    6) Optional: Disable IPv6 permanently (Ubuntu / Pi)

    6.1 Sysctl method (recommended)

    sudo nano /etc/sysctl.d/99-disable-ipv6.conf
    

    Paste:

    net.ipv6.conf.all.disable_ipv6 = 1
    net.ipv6.conf.default.disable_ipv6 = 1
    net.ipv6.conf.lo.disable_ipv6 = 1
    

    Apply:

    sudo sysctl --system
    

    Verify:

    ip a | grep inet6
    
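    You can also confirm the sysctl values took effect (expected value is 1 for each key, and the inet6 grep above should return nothing):

    sysctl net.ipv6.conf.all.disable_ipv6 net.ipv6.conf.default.disable_ipv6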

    7) Firewall (UFW) with your current public ports (22, 80, 443)

    7.1 Reset + defaults

    sudo ufw disable
    sudo ufw reset
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    

    7.2 Allow required ports

    SSH (choose one)

    Safer (WireGuard only):

    sudo ufw allow from 10.8.0.0/24 to any port 22 proto tcp comment "SSH via WireGuard"
    

    Or if public SSH is needed:

    sudo ufw limit 22/tcp comment "SSH rate limit"
    

    HTTP/HTTPS:

    sudo ufw allow 80/tcp comment "HTTP"
    sudo ufw allow 443/tcp comment "HTTPS"
    

    WireGuard port:

    sudo ufw allow 51820/udp comment "WireGuard"
    sudo ufw allow in on wg0 comment "WireGuard tunnel"
    

    7.3 Enable

    sudo ufw enable
    sudo ufw status numbered
    sudo ufw status verbose
    
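    Before closing your current shell (especially if SSH is now WireGuard-only), it's worth a quick sanity check from one of the Pis that the tunnel and the public services still answer (a sketch; substitute your server's public IP and user):

    ping -c 3 10.8.0.1                 # tunnel still up
    nc -vz <SERVER_PUBLIC_IP> 443      # public HTTPS still reachable (install netcat if missing)
    ssh <user>@10.8.0.1                # SSH over the tunnel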

    8) Optional: Docker monitoring over WireGuard (Uptime Kuma)

    8.1 Install docker socket proxy on each Pi (recommended)

    Create docker-compose.yml:

    version: "3.8"
    services:
      docker-socket-proxy:
        image: tecnativa/docker-socket-proxy
        container_name: docker-socket-proxy
        ports:
          - "2375:2375"
        environment:
          CONTAINERS: 1
          INFO: 1
          PING: 1
          VERSION: 1
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
        restart: unless-stopped
    

    Run:

    docker compose up -d
    
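    A quick way to confirm the proxy answers (a sketch; the second command assumes the tunnel is up and the UFW rule from 8.2 is in place). The Docker API's ping endpoint should return OK, and version info should come back as JSON:

    curl -s http://127.0.0.1:2375/_ping && echo    # on the Pi itself
    curl -s http://10.8.0.2:2375/version           # from the main server, over WireGuard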

    8.2 Firewall: allow Docker proxy ONLY over WireGuard

    On each Pi:

    sudo ufw allow from 10.8.0.0/24 to any port 2375 comment "Docker API via WireGuard only"
    

    8.3 Add in Uptime Kuma

    • Monitor Type: Docker Host
    • URL:
      • http://10.8.0.2:2375
      • http://10.8.0.3:2375

    9) What NOT to do (the pitfalls we hit)

    • ❌ Don’t expose Docker 2375 publicly
    • ❌ Don’t use AllowedIPs = .../0 (that becomes full tunnel)
    • ❌ Don’t set DNS = 10.8.0.1 unless server truly runs DNS
    • ❌ Don’t rely on Certbot/Caddy to “secure Docker API” (wrong model)
    • ✅ Use WireGuard + firewall instead

    Quick “health checklist” commands

    On any node:

    sudo wg
    ip route
    sudo ufw status verbose
    ss -tulpen | head -n 30
    

  • n8n Project for My Production Server

    Production Monitoring & Security Automation Runbook

    Purpose

    This runbook describes how to operate, monitor, and respond to events generated by the company’s production automation stack:

    • n8n (orchestration)
    • AdGuard DNS (Primary & Secondary Raspberry Pi)
    • Fail2Ban
    • Uptime Kuma
    • Slack (alerting)
    • Omada Controller (network devices)

    It is written so that any on-call engineer can safely respond to alerts without deep system knowledge.


    System Overview

    What this system does

    • Monitors DNS behavior on two AdGuard servers (Pi1 = Primary, Pi2 = Secondary)
    • Detects possible DNS abuse / attacks using query heuristics
    • Automatically blocks malicious IPs in AdGuard (when enabled)
    • Monitors uptime of both DNS servers
    • Pushes health heartbeats to Uptime Kuma
    • Receives Fail2Ban ban/unban events from multiple hosts
    • Receives Omada controller events (AP, gateway, switch up/down)
    • Sends actionable alerts to Slack

    What it does NOT do

    • It does not permanently blacklist IPs without review
    • It does not modify firewall rules (DNS-layer only)
    • It does not auto-restart servers

    Normal Operation (Healthy State)

    Expected behavior

    • Cron runs every minute
    • Slack is quiet most of the time
    • Uptime Kuma shows:
      • Pi1 Uptime: UP
      • Pi2 Uptime: UP
      • DNS Status: NORMAL

    Normal Slack messages

    • ✅ DNS NORMAL (baseline)
    • ✅ DNS OK / RECOVERED
    • ✅ Fail2Ban UNBANNED
    • ℹ️ Omada informational events

    No action is required in these cases.


    Alert Types & Response Actions

    🚨 POSSIBLE DNS ATTACK

    Meaning

    • One client is responsible for an abnormally high percentage of DNS queries
    • Triggered when:
      • ≥ 80% of recent queries OR
      • ≥ 400 queries in sample window

    Automatic actions

    • AdGuard auto-block may already be applied
    • IP reputation (IPinfo) is attached to the alert

    Required response (step-by-step)

    1. Open the Slack alert
    2. Review:
      • Attacker IP
      • Client name (if known)
      • Organization / ASN
    3. Log into the affected AdGuard server
    4. Open Query Log
    5. Confirm the traffic pattern matches the alert (a CLI spot-check sketch follows this list)
    6. If legitimate client:
      • Remove IP from disallowed_clients
      • Add client to DNS whitelist in n8n
    7. If malicious:
      • No action needed (auto-block handled it)
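
    If you prefer the CLI to the web UI for step 5, a hedged spot-check of the same signal against AdGuard's stats API (host, port, and credentials below are placeholders; adjust to how your instance is exposed and authenticated, and note that field names can vary by AdGuard Home version; requires jq):

    curl -s -u admin:<PASSWORD> http://10.8.0.2:3000/control/stats | jq '.top_clients'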

    Escalation

    • Repeated attacks from different IPs → notify network/security team

    ✅ DNS OK / RECOVERED

    Meaning

    • DNS traffic has returned to normal

    Action

    • None required

    🔴 / 🚨 UPTIME DOWN

    Meaning

    • DNS server is unreachable or returning bad HTTP status

    Response steps

    1. Check Uptime Kuma for confirmation
    2. Attempt to reach host:
      • Ping
      • HTTPS access
    3. If unreachable:
      • Check power
      • Check network connectivity
    4. Review system logs if accessible
    5. Restart service/server if required

    Escalation

    • If downtime > SLA threshold, notify management

    🚫 Fail2Ban BANNED

    Meaning

    • Fail2Ban blocked an IP due to repeated authentication failures

    Automatic actions

    • IP already blocked at service level
    • Geo/IP data added automatically

    Response steps

    1. Review IP reputation in Slack
    2. Confirm jail name (sshd, nginx, etc.)
    3. If internal or known IP:
      • Manually unban (example command after this list)
      • Adjust Fail2Ban rules if needed
    4. If external/malicious:
      • No action required
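
    For the manual unban in step 3, the standard Fail2Ban client command on the affected host looks like this (jail name and IP are examples):

    sudo fail2ban-client set sshd unbanip 203.0.113.10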

    🚨 Omada Device DOWN

    Meaning

    • AP, gateway, or switch disconnected

    Response steps

    1. Identify device and site in Slack alert
    2. Check Omada Controller status
    3. Verify power and uplink
    4. If multiple devices affected:
      • Suspect upstream outage

    Environment & Configuration

    Required environment variables (n8n)

    • F2B_TOKEN
    • IPINFO_TOKEN
    • KUMA_PI1_UPTIME_URL
    • KUMA_PI1_DNS_URL
    • KUMA_PI2_UPTIME_URL
    • KUMA_PI2_DNS_URL
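
    To verify the Kuma push URLs by hand (a sketch; it assumes each variable holds a bare push URL with no query string already appended), a manual heartbeat should flip the corresponding monitor to UP:

    curl -fsS "$KUMA_PI1_UPTIME_URL?status=up&msg=manual-test"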

    Webhook endpoints

    • /fail2ban-pi1
    • /fail2ban-pi2
    • /OmadaController
    • /JoeOmadaTPlink
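
    To exercise a webhook end to end without waiting for a real event, a hedged example (the n8n host, path prefix, token field, and payload keys below are placeholders; match them to what the workflow actually expects):

    curl -X POST "https://<N8N_HOST>/webhook/fail2ban-pi1" \
      -H "Content-Type: application/json" \
      -d '{"token": "<F2B_TOKEN>", "jail": "sshd", "action": "ban", "ip": "203.0.113.10"}'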

    Maintenance & Safe Changes

    Before making changes

    • Disable auto-block if testing
    • Clone workflow for testing
    • Verify Slack output formatting

    After changes

    • Manually trigger workflow
    • Confirm:
      • No duplicate Slack alerts
      • Kuma heartbeats still flow

    Break-Glass (Emergency)

    If automation behaves incorrectly:

    1. Disable the n8n workflow
    2. Remove IPs from AdGuard block list
    3. Notify security/network team
    4. Document incident

    Ownership

    • System owner: IT / Network Team
    • Primary contact: IT Manager
    • Slack channel: Monitoring / Security Alerts

  • 50 D3 Pro POS Deployment: Setup with Esper and Joe Coffee OS

    Today was one of those days where physical hustle meets tech setup—moving and deploying 50 brand new D3 Pro Point of Sale (POS) devices to our office. Here’s a behind-the-scenes look at what it takes before a POS terminal ever sees its first sale.


    Step 1: The Big Move (Elevator to the Rescue!)

    First up: hauling 50 D3 Pro POS machines from downstairs storage to the upstairs office. Let’s just say, if I didn’t have an elevator, it would’ve been a serious cardio day. Luckily, I could load 12 D3 Pro boxes at a time into the elevator. Here’s how it all went down, literally and figuratively:

    • Load 12 D3 Pro POS terminals onto a cart
    • Roll them into the elevator
    • Go up, take them out, and arrange them in the office
    • Repeat trips until all 50 were upstairs

    It added up to plenty of steps and quite a few sweaty trips, but having that elevator was a lifesaver!


    Step 2: Ditch the Wood, Clear the Space

    After moving the D3 Pros, I had to tackle all the wood packaging they came in. It took a solid chunk of time to break down the boxes, haul the packing wood out, and dump it. Trust me: tech jobs are not always as glamorous as they sound!


    Step 3: Setting Up the D3 Pros for Action

    The real fun starts here. With 50 POS terminals lined up like soldiers, it was time to get them into service:

    • Connect and Power Up: Plugged each D3 Pro in and made sure they powered up correctly.
    • Install Software: Installed Esper (for device management) and Joe Coffee OS (for our awesome coffee shop POS services) on each device.
    • Label Each Device: Printed and attached unique labels to every D3 Pro for easy tracking.
    • Add to Inventory: Entered every device’s details into our inventory system, so they’re ready to be assigned and managed remotely.

    Conclusion

    Moving and setting up 50 D3 Pro POS terminals in one day? Not as simple as it sounds! It’s half workout, half tech setup, and 100% worth it to see the office ready for action with top-notch POS systems, all tracked and labeled. Whether you’re in IT setup or retail management, you know these behind-the-scenes marathons are what keep everything running smoothly.

    If you’ve ever wondered what “setting up POS” really involves, now you know: a little heavy lifting, a little problem-solving, and a lot of satisfaction at the finish line!
