Build log · MikroTik RB5009 · Converge fiber, PH

Building a usable home network behind residential CGNAT

VLANs, a UniFi controller running on the router itself, routed IPv6 over WireGuard, and encrypted DNS. Each section stands alone; pick the parts that match your situation.

Abstract

This is a journal of building a small, opinionated home network on a single MikroTik RB5009 behind a residential fiber connection that uses carrier-grade NAT. The ISP gives a dynamic IPv4 address with no inbound reachability and no usable IPv6. The goals are pedestrian: keep IoT devices and guests away from the trusted LAN, manage the Wi-Fi APs without buying another always-on box, and have real IPv6 so the network stays useful as the rest of the internet finishes the v6 migration.

The build proceeds in three layers, in the order they were actually installed: first the LAN is split into VLANs, then the UniFi controller is moved into RouterOS containers, then IPv6 reachability is recovered through a $3/month VPS (1 TB monthly transfer included) that routes a /48 down to the house. DoH and RA RDNSS sit on top of the third layer. None of these steps depend on a specific brand of fiber service; they only assume that outbound UDP works and that the router can carry a small WireGuard tunnel.

Each numbered section below is paste-ready against a defconf RouterOS v7 setup. The italicized notes after each step are the rationale — what trade-off was being made and why. The appendices document the optional pieces (sub-second IPv6 failover, an IoT printer exception, cost) and the references point at the original gists.

Design decisions

Three protocol choices in this build are load-bearing enough that a reader substituting parts will want a sentence of justification for each.

WireGuard carries the IPv6 transport because residential CGNAT only guarantees outbound UDP, and WireGuard needs nothing more than that. Both endpoints (RouterOS v7 and Linux) speak it in-kernel with a single config primitive on each side; there is no daemon, no negotiation, and no MTU two-stepping with an underlying IPsec SA. IPsec/IKEv2, GRE+IPsec, or OpenVPN would all work; each adds operational surface this build does not need to pay for.

BGP appears only in Appendix A, and only because it pairs cleanly with BFD: BGP withdraws a single ::/0 route the instant BFD declares the forwarding path dead, which gives sub-second IPv6 failover with no scripts and no controller. With one peer, BGP is not chosen for scaling — it is chosen for clean dynamic route withdrawal that static routes do not offer.

BFD exists because a WireGuard interface stays administratively UP even when the tunnel is silently dead: NAT mapping expired, peer rebooted, ISP null-routed the VPS. Without BFD, BGP and interface state would happily keep announcing a black hole. BFD adds an independent forwarding-path liveness signal so the routing layer can tell "interface up" apart from "packets actually reach the other side".

1. Topology and address plan

                       Internet (IPv4 + IPv6)
                              │
                ┌─────────────┴─────────────┐
                │  VPS — Ubuntu, routed /48 │
                │  <VPS_IP>                 │
                │  enp3s0: provider GUA     │
                │  wg0:    <LAN_PREFIX>:0::1│
                └─────────────┬─────────────┘
                              │ WireGuard / UDP 51820
                              │ (only IPv6 transits the tunnel)
                ┌─────────────┴─────────────┐
                │   Converge fiber + CGNAT  │
                │   (IPv4 outbound only)    │
                └─────────────┬─────────────┘
                              │
                ┌─────────────┴─────────────┐
                │  MikroTik RB5009 — edge   │
                │  • DHCPv4 + RA + RDNSS    │
                │  • DoH resolver           │
                │  • WireGuard client       │
                │  • RouterOS containers    │
                └──┬──────────┬──────────┬──┘
                   │          │          │
        ether2/3 (trunks)  ether4/5 (LAN-only access)
                   │
        ┌──────────┴──────────┐
        │  UniFi 6 APs ×2     │
        │  untagged = mgmt    │
        │  tag 10 = IoT SSID  │
        │  tag 20 = Guest SSID│
        └──────────┬──────────┘
                   │
       LAN 1   IoT 10   Guest 20
   192.168.88/24  .89/24  .90/24
   <LAN_PREFIX>:1::/64  :10::/64  :20::/64

One box does everything the LAN side needs. The RB5009 terminates the WAN, runs DHCPv4 and IPv6 RA, runs DoH outbound, hosts the WireGuard client toward the VPS, and runs the UniFi Network Application and MongoDB as containers. Two UniFi 6 APs hang off bridge ports configured as hybrid trunks: untagged frames carry AP management on the main LAN, tagged frames carry IoT and Guest SSID traffic.

One always-on device is enough. Running the controller on the router avoids a second box and a second attack surface; the cost is paid in memory pressure and is bounded by the limits in section 4.

Address plan

VLAN           Tagging    Role                 IPv4              IPv6
VLAN 1         untagged   main LAN, AP mgmt    192.168.88.0/24   <LAN_PREFIX>:1::/64
VLAN 10        tagged     IoT SSID             192.168.89.0/24   <LAN_PREFIX>:10::/64
VLAN 20        tagged     Guest SSID           192.168.90.0/24   <LAN_PREFIX>:20::/64
WG transport   n/a        VPS ↔ MikroTik       n/a               <LAN_PREFIX>:0::/64

Each VLAN is a separate L3 boundary at the router. The IoT and Guest VLANs deliberately reuse their VLAN numbers as the hex digits of the IPv6 subnet ID (:10::/64, :20::/64) so the mapping stays obvious in tcpdump.

2. Conventions and placeholders

Every snippet below uses angle-bracketed placeholders. Substitute them before pasting; the rest of the text is literal RouterOS or bash.

Placeholder                  Meaning
<LAN_PREFIX>                 Routed IPv6 /48 (or /56) the VPS hands you. Drop the trailing zeros, e.g. 2001:db8.
<ULA_PREFIX>                 Locally-generated ULA. python3 -c 'import secrets; print(f"fd{secrets.token_hex(5)}")'.
<VPS_IP>                     VPS public IPv4 from the provider panel.
<VPS_NIC>                    VPS public interface. ip -o link shows it (typical: enp3s0, ens3).
<VPS_PUBKEY> / <MT_PUBKEY>   WireGuard public keys, one per side. Each is printed during section 5.
<TZ>                         IANA tz database name for the controller container. Examples: Asia/Manila, Europe/Berlin.

Documentation prefixes from RFC 3849 (2001:db8::/32) and RFC 4193 (fd00::/8) work as stand-ins while testing. Snippets assume the defconf bridge name bridge and ports ether2-ether5; rename the interfaces where they appear.
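The one-liner in the placeholder table prints the ULA without colons. A slightly longer sketch that emits a paste-ready, colon-formatted local prefix (RFC 4193 derives the 40-bit global ID from a hash of time and EUI-64; cryptographically random bits satisfy the same uniqueness goal):

```python
import secrets

def make_ula_prefix() -> str:
    """fd00::/8 plus a random 40-bit global ID, formatted with colons
    so it pastes straight into <ULA_PREFIX>."""
    gid = secrets.token_hex(5)          # 40 random bits, 10 hex chars
    raw = "fd" + gid                    # 48 bits total: fdXXXXXXXXXX
    return f"{raw[0:4]}:{raw[4:8]}:{raw[8:12]}"

ula = make_ula_prefix()
assert ula.startswith("fd") and len(ula) == 14
print(ula)
```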

3. LAN segmentation comes first

The network is split before any services are added on top of it. Main LAN keeps VLAN 1 untagged so AP adoption and existing wired devices stay boring. IoT and Guest land on tagged VLANs 10 and 20 that share the same uplinks to the APs.

Hybrid trunks, rather than a tagged management VLAN, keep AP onboarding trivial: APs boot on the main LAN, and only client SSID traffic is tagged. The trade-off is that the trunk ports carry both untagged and tagged frames, so they have to be documented as such.

3.1 Bridge VLAN table and L3 gateways

Turn on bridge VLAN filtering, declare the two VLAN interfaces, mark the AP uplinks as trunks, and give each VLAN a gateway address.

VLAN bridge + gateways

bash

/interface/bridge set [find name=bridge] vlan-filtering=yes

/interface/vlan add interface=bridge name=vlan-iot vlan-id=10
/interface/vlan add interface=bridge name=vlan-guest vlan-id=20

# Bridge VLAN table — hybrid trunks on ether2/ether3 (to APs),
# access-only LAN on ether4/ether5.
/interface/bridge/vlan
add bridge=bridge vlan-ids=1 untagged=bridge,ether2,ether3,ether4,ether5 comment="main LAN untagged"
add bridge=bridge vlan-ids=10 tagged=bridge,ether2,ether3 comment="IoT to UniFi APs"
add bridge=bridge vlan-ids=20 tagged=bridge,ether2,ether3 comment="Guest to UniFi APs"

/ip/address add address=192.168.89.1/24 interface=vlan-iot
/ip/address add address=192.168.90.1/24 interface=vlan-guest

3.2 DHCP scopes

Each VLAN gets its own pool, server, and network record, with the router itself acting as DNS for now. DoH replaces the upstream side of that resolver in section 6.

IoT + Guest DHCPv4

bash

/ip/pool add name=iot-pool ranges=192.168.89.100-192.168.89.200
/ip/pool add name=guest-pool ranges=192.168.90.100-192.168.90.200

/ip/dhcp-server add name=iot-dhcp interface=vlan-iot address-pool=iot-pool lease-time=1d
/ip/dhcp-server add name=guest-dhcp interface=vlan-guest address-pool=guest-pool lease-time=1d

/ip/dhcp-server/network add address=192.168.89.0/24 gateway=192.168.89.1 dns-server=192.168.89.1
/ip/dhcp-server/network add address=192.168.90.0/24 gateway=192.168.90.1 dns-server=192.168.90.1

3.3 Firewall — input services and east-west isolation

The input chain accepts only the router services the VLANs actually need; the forward chain drops new flows back into trusted networks. Established replies are not affected, which is what makes narrow exceptions (Appendix B) practical.

Input + forward firewall

bash

# Input — place BEFORE defconf's "drop all not coming from LAN".
/ip/firewall/filter
add chain=input action=accept in-interface=vlan-iot protocol=udp dst-port=67-68 comment="IOT: DHCPv4"
add chain=input action=accept in-interface=vlan-iot protocol=udp dst-port=53 comment="IOT: DNS UDP"
add chain=input action=accept in-interface=vlan-iot protocol=tcp dst-port=53 comment="IOT: DNS TCP"
add chain=input action=accept in-interface=vlan-guest protocol=udp dst-port=67-68 comment="GUEST: DHCPv4"
add chain=input action=accept in-interface=vlan-guest protocol=udp dst-port=53 comment="GUEST: DNS UDP"
add chain=input action=accept in-interface=vlan-guest protocol=tcp dst-port=53 comment="GUEST: DNS TCP"

# Forward — place BEFORE fasttrack / established accepts.
/ip/firewall/filter
add chain=forward action=drop in-interface=vlan-iot out-interface=bridge connection-state=new comment="IOT !-> LAN"
add chain=forward action=drop in-interface=vlan-guest out-interface=bridge connection-state=new comment="GUEST !-> LAN"
add chain=forward action=drop in-interface=vlan-guest out-interface=vlan-iot connection-state=new comment="GUEST !-> IOT"

Isolation is enforced in forward, not by starving clients of DHCP/DNS in input. Mixing the two layers makes the rules unreviewable later.

4. UniFi controller on the router

The controller runs as two RouterOS containers — MongoDB at 192.168.88.2 and the UniFi Network Application at 192.168.88.3 — both bridged onto VLAN 1 through veth interfaces. From the LAN they look like normal hosts; from the router they are services with hard memory limits.

The RB5009 has 1 GB of RAM. Without a swap file and explicit caps on Mongo's WiredTiger cache and UniFi's Java heap, a backup-restore burst can knock routing off the air.
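A back-of-the-envelope check of those caps. The WiredTiger and Java figures come from the container config in section 4.4; the mongod overhead and RouterOS base-usage numbers are assumptions for illustration only:

```python
# Rough RB5009 memory budget under the caps this section sets (MB).
TOTAL_MB = 1024                                   # RB5009 RAM

budget = {
    "mongod":   256 + 128,   # WiredTiger cache capped at 0.25 GB + assumed process overhead
    "unifi":    384,         # Java heap cap, MEM_LIMIT=384 in section 4.4
    "routeros": 200,         # assumed base usage of RouterOS itself
}
used = sum(budget.values())
headroom = TOTAL_MB - used
print(f"committed: {used} MB, headroom: {headroom} MB")
# Thin headroom is why the 8 GiB swap partition exists: it absorbs
# backup-restore bursts instead of letting them knock routing off the air.
assert headroom > 0
```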

4.1 USB layout: swap + ext4 data

/dev/usb1 (64 GB USB stick)
├── part1   8 GiB  raw       → swap (smb-sharing=no, media-sharing=no, swap=yes)
└── part2  ~56 GiB ext4      → /usb1-part2
                              ├── images/         container image layers
                              ├── tmp/            container tmpdir
                              ├── unifi-config/   UniFi /config
                              ├── mongo-data/     Mongo /data/db
                              ├── mongo-config/   Mongo /data/configdb
                              └── mongo-initdb/   Mongo init scripts (read-only)

The sequence below is destructive to the USB stick. Confirm it is expendable and run from a path that survives bridge and container changes.

USB swap + container storage

bash

/container/config/set layer-dir="" tmpdir=""

:foreach p in=[/disk/find slot~"usb1-part"] do={/disk/remove $p}
/disk/add parent=usb1 type=partition partition-size=8589934592
/disk/add parent=usb1 type=partition
/disk/format numbers=usb1-part2 file-system=ext4

/disk/set [find slot="usb1-part1"] smb-sharing=no media-sharing=no
/disk/set [find slot="usb1-part1"] swap=yes

/file/add type=directory name="usb1-part2/images"
/file/add type=directory name="usb1-part2/tmp"
/file/add type=directory name="usb1-part2/unifi-config"
/file/add type=directory name="usb1-part2/mongo-data"
/file/add type=directory name="usb1-part2/mongo-config"
/file/add type=directory name="usb1-part2/mongo-initdb"

/container/config/set layer-dir=/usb1-part2/images tmpdir=/usb1-part2/tmp

4.2 Container veths on VLAN 1

Static veths make the containers ordinary LAN hosts. With bridge VLAN filtering on, the VLAN-1 untagged list must be updated to include the new veths or they will not pass traffic.

veths on the main LAN

bash

/interface/veth/add name=veth1-mongo address=192.168.88.2/24 gateway=192.168.88.1 gateway6=""
/interface/veth/add name=veth2-unifi address=192.168.88.3/24 gateway=192.168.88.1 gateway6=""

/interface/bridge/port/add bridge=bridge interface=veth1-mongo pvid=1
/interface/bridge/port/add bridge=bridge interface=veth2-unifi pvid=1

# Required when bridge vlan-filtering is enabled.
/interface/bridge/vlan/set [find vlan-ids=1] \
    untagged=bridge,ether2,ether3,ether4,ether5,veth1-mongo,veth2-unifi

4.3 Mongo bootstrap user

UniFi's LinuxServer image expects an external Mongo. The bootstrap user needs ownership of unifi, unifi_stat, and unifi_audit, plus broader admin roles for backup restore — UniFi creates a transient restore DB during import.

Mongo bootstrap user

javascript

db.getSiblingDB("unifi").createUser({
  user: "unifi",
  pwd: "<MONGO_UNIFI_PASS>",
  roles: [
    { role: "dbOwner", db: "unifi" },
    { role: "dbOwner", db: "unifi_stat" },
    { role: "dbOwner", db: "unifi_audit" },
    { role: "readWriteAnyDatabase", db: "admin" },
    { role: "dbAdminAnyDatabase", db: "admin" }
  ]
});

4.4 Mongo and UniFi containers

MongoDB is pinned to arm64v8/mongo:4.4.18. Newer ARM64 builds require ARMv8.2-A atomics that the RB5009 Cortex-A72 (ARMv8.0-A) does not have; they crash with SIGILL. The pinned version is EOL, so it stays LAN-only and never gets a WAN forward.

Mongo + UniFi containers

bash

/container/envs/add list=mongo-envs key=MONGO_INITDB_ROOT_USERNAME value=root
/container/envs/add list=mongo-envs key=MONGO_INITDB_ROOT_PASSWORD value=<MONGO_ROOT_PASS>

/container/mounts/add list=mongo-mounts src=/usb1-part2/mongo-data dst=/data/db
/container/mounts/add list=mongo-mounts src=/usb1-part2/mongo-config dst=/data/configdb
/container/mounts/add list=mongo-mounts src=/usb1-part2/mongo-initdb dst=/docker-entrypoint-initdb.d read-only=yes

# Mongo 4.4.18 — newer ARM64 builds need ARMv8.2-A atomics that the
# RB5009 Cortex-A72 (ARMv8.0-A) doesn't have; they exit with SIGILL.
/container/add remote-image=arm64v8/mongo:4.4.18 interface=veth1-mongo envlist=mongo-envs
/container/set [find name="mongo:4.4.18"] \
    mountlists=mongo-mounts hostname=mongo name=mongo start-on-boot=yes logging=yes dns=192.168.88.1 \
    cmd="mongod --wiredTigerCacheSizeGB 0.25 --bind_ip_all --ipv6" tmpfs="/tmp:64M:fixed"

/container/envs/add list=unifi-envs key=PUID value=1000
/container/envs/add list=unifi-envs key=PGID value=1000
/container/envs/add list=unifi-envs key=TZ value=<TZ>   # e.g. Asia/Manila — see tzdata(5)
/container/envs/add list=unifi-envs key=MONGO_HOST value=192.168.88.2
/container/envs/add list=unifi-envs key=MONGO_PORT value=27017
/container/envs/add list=unifi-envs key=MONGO_USER value=unifi
/container/envs/add list=unifi-envs key=MONGO_PASS value=<MONGO_UNIFI_PASS>
/container/envs/add list=unifi-envs key=MONGO_DBNAME value=unifi
/container/envs/add list=unifi-envs key=MONGO_AUTHSOURCE value=unifi
/container/envs/add list=unifi-envs key=MEM_LIMIT value=384
/container/envs/add list=unifi-envs key=MEM_STARTUP value=256

/container/mounts/add list=unifi-mounts src=/usb1-part2/unifi-config dst=/config
/container/add remote-image=lscr.io/linuxserver/unifi-network-application:latest interface=veth2-unifi envlist=unifi-envs
/container/set [find name="unifi-network-application:latest"] \
    mountlists=unifi-mounts hostname=unifi name=unifi start-on-boot=yes logging=yes \
    dns=192.168.88.1 tmpfs="/tmp:128M:fixed"

/container/start [find name="mongo"]
/container/start [find name="unifi"]

Container mounts use list= on the mount object and mountlists= on the container. RouterOS rejects mountlists= during the initial image pull, so set them with /container/set after the add.

4.5 Verification

Controller smoke tests

bash

/disk/print where slot~"usb1-part"
/log/print where topics~"container"

nc -z -w 2 192.168.88.2 27017 && echo OK
curl -sk https://192.168.88.3:8443/manage/account/login \
    -o /dev/null -w "HTTP %{http_code}\n"

5. IPv6 over WireGuard via a routed /48

CGNAT eats inbound IPv4. IPv6 sidesteps that problem entirely — but only if the network has real, routable IPv6. The recipe is a $3/month VPS whose provider routes a /48 to the instance, a WireGuard tunnel from the RB5009 to that VPS, and per-VLAN /64s carved out of the /48.

A routed /48 is what unlocks the address plan in section 1. With an on-link /64, the only options are NDP proxying and a single subnet; with a routed /48 there are no helper daemons to run and 65,536 /64 subnets to spare.

Hurricane Electric's tunnel broker is the obvious first answer for residential IPv6: a free /64 (or /48 on request), worldwide PoPs, and a dyndns-style endpoint update so a changing client IPv4 does not break the tunnel. It still does not help behind CGNAT. HE's primary service is 6in4, which encapsulates IPv6 directly inside IP protocol 41 — no TCP or UDP ports for the carrier's NAT to track, so most CGNAT deployments drop the packets outright. The dyndns hook keeps the tunnel configured against the current public IPv4; the encapsulated frames just never make it through the carrier's NAT44.

HE also offers AYIYA, which is UDP-encapsulated and would traverse CGNAT, but its only working client (aiccu) is unmaintained: SixXS shut down in June 2017 and Debian removed the package as non-functional, and there is no RouterOS port. WireGuard fills the same role with a current in-kernel implementation, and the encryption is a bonus rather than a cost.
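The port-less framing of 6in4 can be made concrete with a few bytes. This illustrative sketch builds minimal IPv4 headers for both encapsulations; the addresses are hypothetical (100.64.0.7 is from the RFC 6598 CGNAT shared space, 203.0.113.1 from the documentation range):

```python
import struct

def ipv4_header(proto: int, src: str, dst: str) -> bytes:
    """Minimal 20-byte IPv4 header, no options, checksum left zero."""
    ip = lambda a: bytes(int(o) for o in a.split("."))
    return struct.pack("!BBHHHBBH4s4s",
                       0x45,    # version 4, IHL 5
                       0,       # DSCP/ECN
                       20,      # total length (header only, for illustration)
                       0, 0,    # identification, flags/fragment offset
                       64,      # TTL
                       proto,   # the field the carrier's NAT dispatches on
                       0,       # header checksum (not computed here)
                       ip(src), ip(dst))

six_in_four = ipv4_header(41, "100.64.0.7", "203.0.113.1")   # 6in4: IP protocol 41
wireguard   = ipv4_header(17, "100.64.0.7", "203.0.113.1")   # WireGuard: UDP, protocol 17

# NAT44 tracks a (proto, src, sport, dst, dport) 5-tuple. With UDP the
# ports sit right after these 20 bytes; with protocol 41 the next bytes
# are a whole IPv6 packet — no ports, nothing for CGNAT to map.
assert six_in_four[9] == 41 and wireguard[9] == 17
```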

Route64 is a free IPv6 tunnel broker that already runs on WireGuard, which initially makes it look like a drop-in replacement for the VPS in this build. The catch is what Route64 binds the tunnel to. The peer config it issues pins your registered public IPv4 as the allowed remote endpoint. Behind CGNAT that IP is shared with other subscribers and rotates on the carrier's schedule, so the tunnel either authenticates the wrong subscriber's address or stops accepting the re-NAT'd session entirely; there is no per-restart re-registration in the broker UI. The VPS topology sidesteps the binding problem — the RB5009 reaches out to a known public IP, PersistentKeepalive = 25 keeps the carrier's NAT44 mapping warm, and the VPS does not care what source IP the mapping happens to land on. Owning the reachable side of the tunnel is what the $3/month VPS actually buys.

5.1 Return routing: why each /64 has to appear in AllowedIPs

The provider routes the /48 to the VPS, then configures one address from it on the public NIC at /48 mask. That installs a connected route for the entire /48 on the public interface, which competes with the WireGuard tunnel for return traffic. Listing the parent /48 in AllowedIPs ties on prefix length and loses to the public NIC by metric. Listing each LAN /64 separately wins by longest-prefix match.

# Routing table on the VPS — what makes return traffic work
<LAN_PREFIX>::/48        dev <VPS_NIC>  metric 256   ← provider's connected route (harmless)
<LAN_PREFIX>:1::/64      dev wg0                    ← from AllowedIPs, wins by /64 > /48
<LAN_PREFIX>:10::/64     dev wg0                    ← from AllowedIPs
<LAN_PREFIX>:20::/64     dev wg0                    ← from AllowedIPs

# Listing the parent /48 in AllowedIPs would lose to the connected route
# on the public NIC and return traffic would disappear. List each /64.

This is the one detail that silently breaks the whole setup. Egress works, ping works, then the first packet to a SLAAC client falls off the public NIC and the network looks broken in a way that no log line explains. The fix is paste-only — but only if it is paste-correct.

5.2 VPS — WireGuard relay

VPS wg0 + AllowedIPs per /64

bash

set -e
apt-get update -qq
apt-get install -y -qq wireguard

cat >/etc/sysctl.d/99-wg-relay.conf <<'EOF'
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.<VPS_NIC>.accept_ra = 2
EOF
sysctl --system >/dev/null

umask 077
mkdir -p /etc/wireguard
wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
VPS_PRIVKEY=$(cat /etc/wireguard/server.key)

cat >/etc/wireguard/wg0.conf <<EOF
[Interface]
PrivateKey = ${VPS_PRIVKEY}
Address = <LAN_PREFIX>:0::1/64
ListenPort = 51820
MTU = 1420

[Peer]
PublicKey = <MT_PUBKEY>
# One entry per LAN /64. Longest-prefix match beats the connected
# /48 on the public NIC and keeps return traffic on wg0.
AllowedIPs = <LAN_PREFIX>:0::2/128, <LAN_PREFIX>:1::/64, <LAN_PREFIX>:10::/64, <LAN_PREFIX>:20::/64
PersistentKeepalive = 25
EOF

ufw allow 51820/udp comment "WireGuard"
ufw route allow in on wg0
ufw route allow out on wg0
ufw reload || true
systemctl enable --now wg-quick@wg0
echo "VPS public key: $(cat /etc/wireguard/server.pub)"

5.3 MikroTik — WireGuard client and main-LAN IPv6

The static ::/0 with check-gateway=ping detects a dead tunnel in roughly 30 seconds. Appendix A replaces it with BGP+BFD for ~600 ms detection if that matters.

MikroTik WireGuard + main LAN IPv6

bash

/interface/wireguard add name=wg-host listen-port=51820 mtu=1420
/interface/wireguard/peers add interface=wg-host name=vps \
    public-key="<VPS_PUBKEY>" \
    endpoint-address=<VPS_IP> endpoint-port=51820 \
    allowed-address=::/0 \
    persistent-keepalive=25s

/ipv6/address add address=<LAN_PREFIX>:0::2/64 interface=wg-host advertise=no
/ipv6/address add address=<LAN_PREFIX>:1::1/64 interface=bridge advertise=yes
/ipv6/address add address=<ULA_PREFIX>:1::1/64 interface=bridge advertise=yes comment="LAN ULA"

# Static default with ping liveness — detection ~30 s.
# Appendix A replaces this with BGP+BFD for ~600 ms. The comment is what
# Appendix A's `/ipv6/route/remove` matches against, so set it now.
/ipv6/route add dst-address=::/0 gateway=<LAN_PREFIX>:0::1%wg-host \
    check-gateway=ping comment="vps primary"

/ipv6/firewall/filter add chain=input action=accept in-interface=wg-host \
    comment="accept input from VPS WG peer" \
    place-before=[find where chain=input and comment="defconf: drop everything else not coming from LAN"]

5.4 Per-VLAN addresses and RA RDNSS

Each VLAN gets a GUA from the /48 and a stable ULA from the locally-generated fd… prefix. The ULA is what gets advertised as the DNS server — it stays the same when the global prefix is renumbered, which it eventually will be.

GUA + ULA + RDNSS per VLAN

bash

/ipv6/address
add interface=vlan-iot address=<LAN_PREFIX>:10::1/64 advertise=yes comment="IOT GUA"
add interface=vlan-iot address=<ULA_PREFIX>:10::1/64 advertise=yes comment="IOT ULA"
add interface=vlan-guest address=<LAN_PREFIX>:20::1/64 advertise=yes comment="GUEST GUA"
add interface=vlan-guest address=<ULA_PREFIX>:20::1/64 advertise=yes comment="GUEST ULA"

/ipv6/nd
add interface=bridge advertise-dns=yes dns=<ULA_PREFIX>:1::1 managed-address-configuration=no other-configuration=no
add interface=vlan-iot advertise-dns=yes dns=<ULA_PREFIX>:10::1 managed-address-configuration=no other-configuration=no
add interface=vlan-guest advertise-dns=yes dns=<ULA_PREFIX>:20::1 managed-address-configuration=no other-configuration=no

5.5 IPv6 firewall and anti-spoofing

Mirror the IPv4 isolation in IPv6, then drop traffic whose source prefix does not belong on the ingress VLAN. SLAAC makes prefix forgery trivially cheap; the address-list filters make it not work.

IPv6 isolation + anti-spoof

bash

/ipv6/firewall/filter
add chain=input action=accept in-interface=vlan-iot protocol=udp dst-port=547 comment="IOT: DHCPv6"
add chain=input action=accept in-interface=vlan-iot protocol=udp dst-port=53 comment="IOT: DNSv6"
add chain=input action=accept in-interface=vlan-guest protocol=udp dst-port=547 comment="GUEST: DHCPv6"
add chain=input action=accept in-interface=vlan-guest protocol=udp dst-port=53 comment="GUEST: DNSv6"

add chain=forward action=drop in-interface=vlan-iot out-interface=bridge connection-state=new comment="IOT !-> LAN (v6)"
add chain=forward action=drop in-interface=vlan-guest out-interface=bridge connection-state=new comment="GUEST !-> LAN (v6)"
add chain=forward action=drop in-interface=vlan-guest out-interface=vlan-iot connection-state=new comment="GUEST !-> IOT (v6)"

/ipv6/firewall/address-list
add list=lan-legit address=<LAN_PREFIX>:1::/64
add list=lan-legit address=<ULA_PREFIX>:1::/64
add list=lan-legit address=fe80::/10
add list=iot-legit address=<LAN_PREFIX>:10::/64
add list=iot-legit address=<ULA_PREFIX>:10::/64
add list=iot-legit address=fe80::/10
add list=guest-legit address=<LAN_PREFIX>:20::/64
add list=guest-legit address=<ULA_PREFIX>:20::/64
add list=guest-legit address=fe80::/10

/ipv6/firewall/filter
add chain=forward action=drop in-interface=bridge src-address-list=!lan-legit comment="LAN: anti-spoof"
add chain=forward action=drop in-interface=vlan-iot src-address-list=!iot-legit comment="IOT: anti-spoof"
add chain=forward action=drop in-interface=vlan-guest src-address-list=!guest-legit comment="GUEST: anti-spoof"

6. Encrypted DNS with stable resolver addresses

Upstream DNS leaves the house through Cloudflare DoH; downstream, the router is the resolver and advertises itself via RA RDNSS at its ULA. The bootstrap records are A/AAAA pins for cloudflare-dns.com, needed once at boot before DoH itself can resolve anything.

Endpoints stay simple — they keep using a local resolver. Resolver identity is on ULA, so SLAAC prefix churn never changes the DNS server the OS has memorized.

DoH resolver + ULA RDNSS

bash

/tool/fetch url=https://curl.se/ca/cacert.pem dst-path=cacert.pem
/certificate/import file-name=cacert.pem passphrase=""

/ip/dns set allow-remote-requests=yes max-concurrent-queries=200 \
    use-doh-server=https://cloudflare-dns.com/dns-query verify-doh-cert=yes

/ip/dns/static
# A-only on purpose — see rationale below.
add address=104.16.248.249 name=cloudflare-dns.com comment="DoH bootstrap"
add address=104.16.249.249 name=cloudflare-dns.com comment="DoH bootstrap"
# Reachable as the FQDN `router.lan` from any client whose resolver is the
# router (the default — clients use RDNSS, and the router answers its own
# static zone). No search-domain magic; type the dot-lan suffix.
add address=<ULA_PREFIX>:1::1 name=router.lan type=AAAA comment="LAN ULA"

# Stop DHCPv4 from handing out the router as DNS; clients use RDNSS instead.
/ip/dhcp-server/network set [find address=192.168.88.0/24] dns-none=yes

The DoH bootstrap is A-only on purpose. IPv4 is up the moment the WAN is up; IPv6 only becomes reachable after the WireGuard tunnel handshakes (and, with Appendix A, after BFD converges). Pinning AAAA records here would push the first DoH query onto a half-warm — or broken — tunnel, while the IPv4 path is direct to a nearby Cloudflare PoP. Once DoH is up, regular client queries still resolve and use AAAA records normally.

7. End-to-end verification

The build has enough moving parts that it is worth testing each layer independently before declaring victory.

Smoke tests

bash

# VLAN clients (wired, on each SSID)
ip -4 addr show
ping 1.1.1.1
ping 192.168.88.1                      # MUST fail from IoT and Guest
nslookup cloudflare.com 192.168.89.1

# IPv6 path
wg show                                # on VPS: handshake < 3 min
ping6 -c 2 <LAN_PREFIX>:0::2           # VPS -> MikroTik WG endpoint
ip -6 route get <client-IPv6-addr>     # on VPS: expect "dev wg0"
curl -6 -s https://test-ipv6.com/json/ # client: score 10/10

# DNS
dig @<ULA_PREFIX>:1::1 cloudflare.com
scutil --dns | grep -i 'nameserver\['  # macOS: expect the ULA

8. Update — keeping streaming off the metered tunnel

Added May 16, 2026. Running the build surfaced a failure mode the original design did not anticipate.

Two problems compounded on the IPv6 leg, both rooted in the same fact: every global IPv6 packet from the house egresses the VPS, so to the outside world it originates from a datacenter ASN, not the residential ISP.

  • VPN / geo detection. Streaming apps increasingly treat datacenter address space as a VPN and either refuse to play or pin the wrong region. Because Happy Eyeballs (RFC 8305) prefers IPv6, the app reaches for the flagged path first; the perfectly good native IPv4 route never gets a turn.
  • Metered bandwidth. The VPS plan is 1 TB/month with overage billed per Appendix C. High-bitrate video has no reason to transit a rented tunnel — it just burns the cap.
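The first-candidate bias is visible in the RFC 8305 ordering rule itself. A minimal sketch of the candidate-sorting pass, with hypothetical resolver answers (real stacks also apply RFC 6724 source-address selection and connection history, omitted here):

```python
from itertools import zip_longest

def happy_eyeballs_order(v6: list, v4: list) -> list:
    """Candidate ordering per RFC 8305: prefer IPv6 first, then
    alternate address families so neither starves."""
    out = []
    for a, b in zip_longest(v6, v4):
        if a is not None:
            out.append(a)
        if b is not None:
            out.append(b)
    return out

# Hypothetical answers for a streaming CDN hostname. The tunnel-path GUA
# (datacenter ASN) is dialed first; the native IPv4 route only gets a
# turn after the connection-attempt delay expires.
order = happy_eyeballs_order(["2001:db8:1::cdn"], ["198.51.100.10"])
assert order[0] == "2001:db8:1::cdn"
print(order)
```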

The realization: not every trusted device needs global IPv6. The cheapest and most robust lever is a VLAN that has a ULA but no GUA. With no global address there is no global v6 route at all — the client uses native ISP IPv4 for anything internet-facing, while the ULA still carries router DNS (RA RDNSS, §5.4) and on-link v6. This is failsafe by construction: it is the absence of a route, not a firewall rule something could slip past, and the per-VLAN anti-spoof list (§5.5) enforces that nothing forges its way back onto a global prefix.

Why not just drop the GUA's forward path in the firewall instead? Because the client would still autoconfigure a GUA, still prefer it per Happy Eyeballs, and stall for the fallback timeout on every new connection. Removing the GUA means the client never attempts the tunnel — there is nothing to fall back from.

8.1 A ULA-only trusted VLAN

VLAN 30 is trusted exactly like VLAN 1 — the only thing that makes a VLAN "trusted" here is membership in the LAN interface list, which the defconf input/forward rules gate on. It differs from VLAN 1 in one way: no GUA.

VLAN 30 — trusted, ULA-only

bash

/interface/vlan add interface=bridge name=vlan30 vlan-id=30
/interface/bridge/vlan add bridge=bridge vlan-ids=30 \
    tagged=bridge,ether2,ether3 comment="VLAN30 trusted, tagged to APs"

/ip/address add address=192.168.91.1/24 interface=vlan30
/ip/pool add name=vlan30-pool ranges=192.168.91.100-192.168.91.200
/ip/dhcp-server add name=vlan30-dhcp interface=vlan30 address-pool=vlan30-pool lease-time=1d
/ip/dhcp-server/network add address=192.168.91.0/24 gateway=192.168.91.1 dns-none=yes

/interface/list/member add list=LAN interface=vlan30

# ULA only — deliberately no <LAN_PREFIX>:30::1 GUA.
/ipv6/address add interface=vlan30 address=<ULA_PREFIX>:30::1/64 advertise=yes comment="VLAN30 ULA"
/ipv6/nd add interface=vlan30 advertise-dns=self \
    managed-address-configuration=no other-configuration=no

/ipv6/firewall/address-list
add list=vlan30-legit address=<ULA_PREFIX>:30::/64
add list=vlan30-legit address=fe80::/10
/ipv6/firewall/filter add chain=forward action=drop in-interface=vlan30 \
    src-address-list=!vlan30-legit comment="VLAN30: anti-spoof"

dns-none=yes on the DHCP scope is intentional: VLAN 30 hands out no DHCPv4 resolver, so clients take DNS from the ND RDNSS, the same as the main LAN's defconf scope. The IoT and Guest scopes of §3.2 were since moved to dns-none=yes as well — no VLAN hands out a DHCPv4 resolver anymore, so DNS is uniformly the ND RDNSS and the router resolves upstream over DoH (§6). The trade-off: a client with no RFC 8106 RDNSS support gets no DNS at all — acceptable here because every VLAN carries a ULA and the device population supports it, but it makes IPv6 a hard dependency for name resolution.

advertise-dns=self supersedes the explicit dns=<ULA_PREFIX>:X::1 of §5.4 — the router advertises whatever address it holds on the interface, so the RDNSS can never point at a stale or wrong address and it survives a renumber with nothing to keep in sync; it is now set on every RA interface, not just VLAN 30.

The RA also omits the mtu=1420 clamp the tunnel-bound VLANs carry (§5.4) — VLAN 30's v6 never reaches the 1420-byte WireGuard path, so clamping its on-link MTU would only shrink local traffic for nothing.

Move the main client SSID onto VLAN 30 in the controller and leave a second AP (or VLAN 1) for the few devices that genuinely want global v6. Guest (VLAN 20) gets the same treatment — drop its GUA and the matching guest-legit entry from §5.5; guests never needed metered global reachability:

Strip the Guest GUA too

```bash
/ipv6/address remove [find comment="GUEST GUA"]
/ipv6/firewall/address-list remove [find list=guest-legit address=<LAN_PREFIX>:20::/64]
```

IoT (VLAN 10) keeps its GUA — its handful of outbound v6 flows are negligible against the cap, and some devices behave better with working v6 than with a half-deprecated one.

8.2 The cutover gotcha

A device that is associated at the moment its SSID's VLAN changes keeps the old VLAN's SLAAC addresses until their lifetimes expire — RouterOS advertises prefixes with a one-week preferred / four-week valid lifetime by default. So for up to a week those devices still prefer the old GUA, the anti-spoof rule on the new VLAN drops it, and they fall back to IPv4 with a brief Happy-Eyeballs delay on the first connection to each host.

This is harmless — nothing leaks; the drop counter on the new anti-spoof rule is exactly that retried traffic being contained — but it is not instant. There is no router-side fix, because a prefix can only be retracted on the link the client has already left. The one effective cleanup is a single Wi-Fi reconnect (or reboot) of each moved device: on re-association it rebuilds its address set for the new link and the stale prefix is simply never recreated.
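The window can be bounded in advance even if it cannot be closed after the fact: advertising shorter lifetimes on the old VLAN before a planned move means any stale address deprecates within minutes instead of a week. A sketch, not part of the original cutover, assuming the shorter timers are acceptable on every RA interface still using the defaults:

```bash
# Hypothetical pre-cutover step: shrink the default RA prefix lifetimes
# so stale SLAAC addresses age out quickly after the SSID changes VLANs.
# (Global defaults; revert once the cutover has settled.)
/ipv6/nd/prefix/default set preferred-lifetime=30m valid-lifetime=1h
```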

Net effect: streaming and other high-bitrate traffic from the main SSID now rides native ISP IPv4 — no datacenter ASN, no VPN flag, no metered GB — while DNS, on-link IPv6, and the trusted-LAN posture are unchanged. The VPS tunnel goes back to doing what it is actually for: giving the devices that want it real, routable IPv6.

A. Appendix A — Sub-second IPv6 failover (BGP + BFD)

A WireGuard interface stays administratively UP even when the path is dead — NAT mapping expired, peer rebooted, VPS null-routed — so neither interface state nor BGP keepalives alone are a reliable failure signal. Section 5's static ::/0 with check-gateway=ping detects a dead tunnel in roughly 30 seconds; during that window dual-stack apps stall on AAAA before Happy Eyeballs falls back to IPv4. Replacing the static route with a BGP-advertised one on a BFD-monitored session cuts detection to about 600 ms — pings fail once and clients are already on IPv4 by the next attempt.

| Event | Measured |
| --- | --- |
| WG silent → BFD down → route withdrawn | ~600 ms |
| WG restored → BFD up → route reinstalled | ~3 s |
| Full VPS reboot → bird up, route installed | ~28 s |
| BFD bandwidth (200 ms × 3, bidirectional) | ~3.4 GB / mo |
| BFD cost at $2.50/TB | ~$0.0085 / mo |
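The BFD bandwidth line can be sanity-checked from the timers alone. A back-of-envelope sketch; the 132-byte on-wire size is an estimate (24-byte BFD control plus UDP and IPv6 headers, plus WireGuard encapsulation), not a measured value:

```python
# Back-of-envelope check of the ~3.4 GB/mo BFD figure in the table.
INTERVAL_S = 0.200                 # min tx/rx interval, both directions
WIRE_BYTES = 132                   # estimated encapsulated packet size
SECONDS_PER_MONTH = 30 * 24 * 3600

pkts = 2 * (1 / INTERVAL_S) * SECONDS_PER_MONTH   # bidirectional stream
gb_per_month = pkts * WIRE_BYTES / 1e9
cost = gb_per_month / 1000 * 2.50                 # overage at $2.50/TB

print(f"{gb_per_month:.2f} GB/mo, ${cost:.4f}/mo")  # → 3.42 GB/mo, $0.0086/mo
```

The result lands on the table's ~3.4 GB/mo, and confirms the cost is noise: under a cent per month even billed entirely at overage rates.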

A.1 VPS — bird2 with BFD

bird2: BGP + BFD on the VPS

```bash
# 1. Add a link-local on wg0 (bird's "next hop self" needs one).
#    Append to /etc/wireguard/wg0.conf and reload:
#    Address = fe80::1/64

apt-get install -y bird2
mkdir -p /etc/bird
cat >/etc/bird/bird.conf <<EOF
log syslog all;
router id <VPS_ROUTER_ID>;

protocol device { }
protocol kernel kernel6 { ipv6 { export none; import all; }; learn yes; }

protocol bfd {
    interface "wg0" {
        min rx interval 200 ms;
        min tx interval 200 ms;
        idle tx interval 1 s;
        multiplier 3;
    };
    # Explicit neighbor so bird actively probes; passive-only stalls
    # after a tunnel flap because both sides wait for the other.
    neighbor <LAN_PREFIX>:0::2 dev "wg0";
}

protocol bgp mikrotik {
    local <LAN_PREFIX>:0::1 as <VPS_AS>;
    neighbor <LAN_PREFIX>:0::2 as <MT_AS>;
    ipv6 { import none; export where net = ::/0; next hop self; };
}
EOF
chown -R bird:bird /etc/bird

# Restart on any exit (packaged unit uses on-abnormal).
mkdir -p /etc/systemd/system/bird.service.d
printf '[Service]\nRestart=on-failure\nRestartSec=2s\n' \
    > /etc/systemd/system/bird.service.d/restart.conf
systemctl daemon-reload && systemctl enable --now bird
```

The explicit neighbor in protocol bfd matters. Without it, bird is passive and only responds to probes; after a flap the MikroTik waits for BFD before re-establishing BGP, bird waits for BGP before initiating BFD, and recovery needs a manual birdc restart.
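When diagnosing that stall, bird's CLI shows both halves of the hand-off. A quick check, assuming the packaged birdc socket and the protocol names from the config above:

```bash
birdc show bfd sessions            # per-neighbor BFD state (Up/Down)
birdc show protocols all mikrotik  # BGP session detail, exported routes
```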

A.2 MikroTik — BGP, BFD, and remove the static route

RouterOS BGP + BFD

```bash
/routing/bgp/instance/add name=default-bgp as=<MT_AS> router-id=<MT_ROUTER_ID>
/routing/bgp/template/add name=tpl-host as=<MT_AS> use-bfd=yes
/routing/bgp/connection/add name=host-vps instance=default-bgp \
    remote.address=<LAN_PREFIX>:0::1 remote.as=<VPS_AS> \
    local.address=<LAN_PREFIX>:0::2 local.role=ebgp \
    templates=tpl-host afi=ipv6

/routing/bfd/configuration/add interfaces=wg-host \
    min-rx=200ms min-tx=200ms multiplier=3

/ipv6/firewall/filter add chain=input action=accept protocol=udp dst-port=3784 \
    in-interface=wg-host comment="BFD from VPS" \
    place-before=[find where chain=input and comment="defconf: drop everything else not coming from LAN"]

# Remove the static ::/0; BGP-learned route at distance 20 takes over.
/ipv6/route/remove [find comment="vps primary"]
```

RouterOS 7 splits BGP into instance, template, and connection. The address-family field is afi=ipv6 (singular). The as= on both instance and template is the local AS, not the remote.
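Once both ends are configured, session health is visible from the RouterOS CLI. A quick check, assuming stock v7 tooling; exact output fields vary by minor release:

```bash
/routing/bfd/session print     # state should be "up" toward the VPS
/routing/bgp/session print     # established session, uptime climbing
/ipv6/route print where bgp    # ::/0 learned from the VPS, distance 20
```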

B. Appendix B — IoT printer exception

The default policy is isolation. A printer is the canonical exception: pin its IP in the IoT scope, allow trusted LAN clients to initiate to that one address, and reflect mDNS between LAN and IoT so AirPrint discovery works. Guest stays excluded on purpose.

LAN → IoT printer + mDNS reflector

```bash
/ip/dhcp-server/lease add server=iot-dhcp mac-address=AA:BB:CC:DD:EE:FF \
    address=192.168.89.200 comment="Brother printer"

/ip/firewall/filter add chain=forward action=accept connection-state=new \
    in-interface=bridge out-interface=vlan-iot dst-address=192.168.89.200 \
    place-before=[find where comment="IOT !-> LAN"] \
    comment="LAN -> printer"

/ip/dns set mdns-repeat-ifaces=bridge,vlan-iot
/ip/firewall/filter add chain=input action=accept in-interface=vlan-iot \
    protocol=udp dst-address=224.0.0.251 dst-port=5353 \
    comment="IOT: mDNS to router"
```

C. Appendix C — Cost and provider notes

The IPv6 leg is the only recurring cost in the build, and it depends entirely on whether the VPS provider routes a prefix to the instance or only hands out an on-link /64. Providers known to route prefixes (verify on your specific plan): WebHorizon SG (/48), Hetzner Cloud (/64 effectively routed via NDP-proxy), Linode/Akamai (/64 default, /56 on request), Oracle Cloud (/56).

| Line item | On-link /64 (e.g. Vultr) | Routed /48 (this build) |
| --- | --- | --- |
| VPS plan | $5 / mo | $3 / mo (routed /48, 1 TB) |
| Reserved IPv6 fee | $3 / mo | $0 |
| Bandwidth overage | $0.01 / GB | $0.0025 / GB |
| LAN address space | one /64 | /48 — 65 k /64s |
| Extra daemon on VPS | ndppd | none |
| Fixed total | $8 / mo | $3 / mo |
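The fixed totals above are easy to recompute, and the overage row starts to matter once a month exceeds the included transfer. A small sketch of the arithmetic; the 200 GB overage month is illustrative, not a figure from the build:

```python
# Recompute the "Fixed total" row and an illustrative overage month.
onlink = {"plan": 5.00, "v6_fee": 3.00, "overage_per_gb": 0.01}
routed = {"plan": 3.00, "v6_fee": 0.00, "overage_per_gb": 0.0025}

def monthly(p, overage_gb=0):
    return p["plan"] + p["v6_fee"] + overage_gb * p["overage_per_gb"]

print(monthly(onlink), monthly(routed))            # fixed totals: 8.0 3.0
print(monthly(onlink, 200), monthly(routed, 200))  # +200 GB over: 10.0 3.5
```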

Glossary

| Acronym | Expansion | Reference |
| --- | --- | --- |
| AP | Wireless access point | Wikipedia |
| AS | Autonomous system (number) | RFC 1930 |
| BFD | Bidirectional Forwarding Detection | RFC 5880 |
| BGP | Border Gateway Protocol | RFC 4271 |
| CA | Certificate authority | Wikipedia |
| CGNAT | Carrier-grade NAT | Wikipedia |
| DHCP | Dynamic Host Configuration Protocol | RFC 2131 |
| DoH | DNS over HTTPS | RFC 8484 |
| GRE | Generic Routing Encapsulation | RFC 2784 |
| GUA | Global unicast address | RFC 4291 |
| IKEv2 | Internet Key Exchange v2 | RFC 7296 |
| IoT | Internet of Things | Wikipedia |
| IPsec | Internet Protocol Security | RFC 4301 |
| ISP | Internet service provider | Wikipedia |
| LAN | Local area network | Wikipedia |
| mDNS | Multicast DNS | RFC 6762 |
| MTU | Maximum transmission unit | Wikipedia |
| NAT | Network address translation | Wikipedia |
| NDP | Neighbor Discovery Protocol | RFC 4861 |
| OpenVPN | OpenVPN tunnel daemon | Wikipedia |
| RA | Router advertisement | RFC 4861 |
| RDNSS | Recursive DNS Server option in RA | RFC 8106 |
| SA | Security association (IPsec) | RFC 4301 |
| SIGILL | Illegal-instruction signal | Wikipedia |
| SLAAC | Stateless address autoconfiguration | RFC 4862 |
| SSID | Service set identifier (Wi-Fi) | Wikipedia |
| TLS | Transport Layer Security | RFC 8446 |
| ULA | Unique local address | RFC 4193 |
| VLAN | Virtual LAN | Wikipedia |
| VPS | Virtual private server | Wikipedia |
| WAN | Wide area network | Wikipedia |
| WireGuard | WireGuard VPN | wireguard.com |

D. References
