Building a usable home network behind residential CGNAT
VLANs, a UniFi controller running on the router itself, routed IPv6 over WireGuard, and encrypted DNS. Each section stands alone; pick the parts that match your situation.
Abstract
This is a journal of building a small, opinionated home network on a single
MikroTik RB5009 behind a residential fiber connection that uses carrier-grade
NAT. The ISP gives a dynamic IPv4 address with no inbound reachability and no
usable IPv6. The goals are pedestrian: keep IoT devices and guests away from
the trusted LAN, manage the Wi-Fi APs without buying another always-on box,
and have real IPv6 so the network stays useful as the rest of the internet
finishes the v6 migration.
The build proceeds in three layers, in the order they were actually installed:
first the LAN is split into VLANs, then the UniFi controller is moved into
RouterOS containers, then IPv6 reachability is recovered through a $3/month
VPS (1 TB monthly transfer included) that routes a /48 down to the
house. DoH and RA RDNSS sit on top of the third layer. None of these steps
depend on a specific brand of fiber service; they only assume that outbound
UDP works and that the router can carry a small WireGuard tunnel.
Each numbered section below is paste-ready against a defconf RouterOS v7
setup. The italicized notes after each step are the rationale — what trade-off
was being made and why. The appendices document the optional pieces
(sub-second IPv6 failover, an IoT printer exception, cost) and the references
point at the original gists.
Design decisions
Three protocol choices in this build are load-bearing enough that a reader
substituting parts will want a sentence of justification for each.
WireGuard carries the IPv6 transport because residential CGNAT only
guarantees outbound UDP, and WireGuard needs nothing more than that. Both
endpoints (RouterOS v7 and Linux) speak it in-kernel with a single
config primitive on each side; there is no daemon, no negotiation, and no
MTU two-stepping with an underlying IPsec SA. IPsec/IKEv2, GRE+IPsec, or
OpenVPN would all work; each adds operational surface this build does not
need to pay for.
BGP appears only in Appendix A, and only because it pairs cleanly
with BFD: BGP withdraws a single ::/0 route the instant BFD declares the
forwarding path dead, which gives sub-second IPv6 failover with no scripts
and no controller. With one peer, BGP is not chosen for scaling — it is
chosen for clean dynamic route withdrawal that static routes do not offer.
BFD exists because a WireGuard interface stays administratively UP even
when the tunnel is silently dead: NAT mapping expired, peer rebooted, ISP
null-routed the VPS. Without BFD, BGP and interface state would happily keep
announcing a black hole. BFD adds an independent forwarding-path liveness
signal so the routing layer can tell "interface up" apart from "packets
actually reach the other side".
One box does everything the LAN side needs. The RB5009 terminates the WAN,
runs DHCPv4 and IPv6 RA, runs DoH outbound, hosts the WireGuard client toward
the VPS, and runs the UniFi Network Application and MongoDB as containers.
Two UniFi 6 APs hang off bridge ports configured as hybrid trunks: untagged
frames carry AP management on the main LAN, tagged frames carry IoT and Guest
SSID traffic.
One always-on device is enough. Running the controller on the router avoids a
second box and a second attack surface; the cost is paid in memory pressure
and is bounded by the limits in section 4.
1. Address plan
VLAN      Tagging        Role                 IPv4              IPv6
VLAN 1    untagged       main LAN, AP mgmt    192.168.88.0/24   <LAN_PREFIX>:1::/64
VLAN 10   tagged         IoT SSID             192.168.89.0/24   <LAN_PREFIX>:10::/64
VLAN 20   tagged         Guest SSID           192.168.90.0/24   <LAN_PREFIX>:20::/64
—         WG transport   VPS ↔ MikroTik       n/a               <LAN_PREFIX>:0::/64
Each VLAN is a separate L3 boundary at the router. The IoT and Guest VLANs
deliberately reuse the same slice IDs as their VLAN numbers in the IPv6 plan
(:10::/64, :20::/64) so the mapping stays obvious in tcpdump.
2. Conventions and placeholders
Every snippet below uses angle-bracketed placeholders. Substitute them before
pasting; the rest of the text is literal RouterOS or bash.
Placeholder                  Meaning
<LAN_PREFIX>                 Routed IPv6 /48 (or /56) the VPS hands you, with the trailing zeros dropped, e.g. 2001:db8.
<ULA_PREFIX>                 Locally generated RFC 4193 ULA prefix, same shortened form, e.g. fd12:3456:789a.
<VPS_NIC>                    VPS public interface. ip -o link shows it (typical: enp3s0, ens3).
<VPS_PUBKEY> / <MT_PUBKEY>   WireGuard public keys, one per side. Each is printed during section 5.
<TZ>                         IANA tz database name for the controller container. Examples: Asia/Manila, Europe/Berlin.
Documentation prefixes from RFC 3849 (2001:db8::/32) and RFC 4193
(fd00::/8) work as stand-ins while testing. Snippets assume the defconf
bridge name bridge and ports ether2–ether5; rename the interfaces where
they appear.
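If you would rather not hand-pick a ULA, the 40 random bits RFC 4193 asks for are one pipeline away. A minimal bash sketch (any source of 5 random bytes works):

```bash
# Generate a random RFC 4193 ULA /48: fd + 40 random bits.
hex=$(head -c5 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "fd${hex:0:2}:${hex:2:4}:${hex:6:4}::/48"
```

Run it once, record the result as <ULA_PREFIX>, and reuse it everywhere; the whole point of a ULA is that it never changes.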
3. LAN segmentation comes first
The network is split before any services are added on top of it. Main LAN
keeps VLAN 1 untagged so AP adoption and existing wired devices stay
boring. IoT and Guest land on tagged VLANs 10 and 20 that share the same
uplinks to the APs.
Hybrid trunks instead of a tagged management VLAN keeps AP onboarding
trivial: APs boot on the main LAN, only client SSID traffic is tagged. The
trade-off is that the trunk ports carry both untagged and tagged frames, so
they have to be documented as such.
3.1 Bridge VLAN table and L3 gateways
Turn on bridge VLAN filtering, declare the two VLAN interfaces, mark the AP
uplinks as trunks, and give each VLAN a gateway address.
VLAN bridge + gateways
bash
/interface/bridge set [find name=bridge] vlan-filtering=yes

/interface/vlan add interface=bridge name=vlan-iot vlan-id=10
/interface/vlan add interface=bridge name=vlan-guest vlan-id=20

# Bridge VLAN table — hybrid trunks on ether2/ether3 (to APs),
# access-only LAN on ether4/ether5.
/interface/bridge/vlan
add bridge=bridge vlan-ids=1 untagged=bridge,ether2,ether3,ether4,ether5 comment="main LAN untagged"
add bridge=bridge vlan-ids=10 tagged=bridge,ether2,ether3 comment="IoT to UniFi APs"
add bridge=bridge vlan-ids=20 tagged=bridge,ether2,ether3 comment="Guest to UniFi APs"

/ip/address add address=192.168.89.1/24 interface=vlan-iot
/ip/address add address=192.168.90.1/24 interface=vlan-guest
3.2 DHCP scopes
Each VLAN gets its own pool, server, and network record, with the router
itself acting as DNS for now. DoH replaces the upstream side of that resolver
in section 6.
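The scopes themselves are three short objects per VLAN. A sketch, with the .10–.254 pool ranges as assumptions (any free range works):

```bash
# Sketch — one pool/server/network triple per VLAN; ranges are assumptions.
/ip/pool add name=pool-iot ranges=192.168.89.10-192.168.89.254
/ip/pool add name=pool-guest ranges=192.168.90.10-192.168.90.254

/ip/dhcp-server add name=dhcp-iot interface=vlan-iot address-pool=pool-iot
/ip/dhcp-server add name=dhcp-guest interface=vlan-guest address-pool=pool-guest

# Router as gateway and, for now, DNS; section 6 swaps the upstream to DoH.
/ip/dhcp-server/network add address=192.168.89.0/24 gateway=192.168.89.1 dns-server=192.168.89.1
/ip/dhcp-server/network add address=192.168.90.0/24 gateway=192.168.90.1 dns-server=192.168.90.1
```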
3.3 Firewall — input services and east-west isolation
The input chain accepts only the router services the VLANs actually need; the
forward chain drops new flows back into trusted networks. Established replies
are not affected, which is what makes narrow exceptions (Appendix B)
practical.
Input + forward firewall
bash
# Input — place BEFORE defconf's "drop all not coming from LAN".
/ip/firewall/filter
add chain=input action=accept in-interface=vlan-iot protocol=udp dst-port=67-68 comment="IOT: DHCPv4"
add chain=input action=accept in-interface=vlan-iot protocol=udp dst-port=53 comment="IOT: DNS UDP"
add chain=input action=accept in-interface=vlan-iot protocol=tcp dst-port=53 comment="IOT: DNS TCP"
add chain=input action=accept in-interface=vlan-guest protocol=udp dst-port=67-68 comment="GUEST: DHCPv4"
add chain=input action=accept in-interface=vlan-guest protocol=udp dst-port=53 comment="GUEST: DNS UDP"
add chain=input action=accept in-interface=vlan-guest protocol=tcp dst-port=53 comment="GUEST: DNS TCP"

# Forward — place BEFORE fasttrack / established accepts.
/ip/firewall/filter
add chain=forward action=drop in-interface=vlan-iot out-interface=bridge connection-state=new comment="IOT !-> LAN"
add chain=forward action=drop in-interface=vlan-guest out-interface=bridge connection-state=new comment="GUEST !-> LAN"
add chain=forward action=drop in-interface=vlan-guest out-interface=vlan-iot connection-state=new comment="GUEST !-> IOT"
Isolation is enforced in forward, not by starving clients of DHCP/DNS in
input. Mixing the two layers makes the rules unreviewable later.
4. UniFi controller on the router
The controller runs as two RouterOS containers — MongoDB at
192.168.88.2 and the UniFi Network Application at 192.168.88.3 — both
bridged onto VLAN 1 through veth interfaces. From the LAN they look like
normal hosts; from the router they are services with hard memory limits.
The RB5009 has 1 GB of RAM. Without a swap file and explicit caps on
Mongo's WiredTiger cache and UniFi's Java heap, a backup-restore burst can
knock routing off the air.
Static veths make the containers ordinary LAN hosts. With bridge VLAN
filtering on, the VLAN-1 untagged list must be updated to include the new
veths or they will not pass traffic.
UniFi's LinuxServer image expects an external Mongo. The bootstrap user needs
ownership of unifi, unifi_stat, and unifi_audit, plus broader admin
roles for backup restore — UniFi creates a transient restore DB during import.
MongoDB is pinned to arm64v8/mongo:4.4.18. Newer ARM64 builds require
ARMv8.2-A atomics that the RB5009 Cortex-A72 (ARMv8.0-A) does not have; they
crash with SIGILL. The pinned version is EOL, so it stays LAN-only and
never gets a WAN forward.
Container mounts use list= on the mount object and mountlists= on the
container. RouterOS rejects mountlists= during the initial image pull, so
set them with /container/set after the add.
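Condensed, the plumbing takes this shape. Treat it as a sketch: the veth and env-list names, the usb1-part1 paths, and the use of the linuxserver image's MEM_LIMIT heap knob (in MB) are this write-up's assumptions, not the only way to do it.

```bash
# Sketch — veths as ordinary VLAN-1 hosts; names and paths are assumptions.
/interface/veth add name=veth-mongo address=192.168.88.2/24 gateway=192.168.88.1
/interface/veth add name=veth-unifi address=192.168.88.3/24 gateway=192.168.88.1
/interface/bridge/port add bridge=bridge interface=veth-mongo
/interface/bridge/port add bridge=bridge interface=veth-unifi
# Bridge VLAN filtering is on: the veths must join VLAN 1's untagged list.
/interface/bridge/vlan set [find vlan-ids=1] \
    untagged=bridge,ether2,ether3,ether4,ether5,veth-mongo,veth-unifi

# App-level memory caps: UniFi's Java heap via MEM_LIMIT, Mongo's
# WiredTiger cache via the start command.
/container/envs add name=unifi-env key=MEM_LIMIT value=512
/container/envs add name=unifi-env key=TZ value=<TZ>
/container add remote-image=arm64v8/mongo:4.4.18 interface=veth-mongo \
    root-dir=usb1-part1/mongo cmd="--wiredTigerCacheSizeGB 0.25" start-on-boot=yes
/container add remote-image=linuxserver/unifi-network-application:latest \
    interface=veth-unifi root-dir=usb1-part1/unifi envlist=unifi-env start-on-boot=yes

# Mounts: set mountlists= AFTER the add (RouterOS rejects it during pull).
/container/mounts add name=unifi-data list=unifi-data src=/usb1-part1/unifi-data dst=/config
/container set [find root-dir~"unifi"] mountlists=unifi-data
```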
4.5 Verification
Controller smoke tests
bash
/disk/print where slot~"usb1-part"
/log/print where topics~"container"

nc -z -w2 192.168.88.2 27017 && echo OK
curl -sk https://192.168.88.3:8443/manage/account/login \
    -o /dev/null -w "HTTP %{http_code}\n"
5. IPv6 over WireGuard via a routed /48
CGNAT eats inbound IPv4. IPv6 sidesteps that problem entirely — but only if
the network has real, routable IPv6. The recipe is a $3/month VPS that routes
a /48 to its instance, a WireGuard tunnel from the RB5009 to that VPS, and
per-VLAN /64s carved out of the /48.
A routed /48 is what unlocks the address plan in section 1. With an
on-link /64 the only options are NDP-proxying and a single subnet; with a
routed /48 there are no helper daemons and 65k subnets to spare.
Hurricane Electric's tunnel broker is the
obvious first answer for residential IPv6: a free /64 (or /48 on request),
worldwide PoPs, and a dyndns-style endpoint update so a changing client IPv4
does not break the tunnel. It still does not help behind CGNAT. HE's primary
service is 6in4, which encapsulates IPv6 directly inside IP protocol 41 — no
TCP or UDP ports for the carrier's NAT to track, so most CGNAT deployments
drop the packets outright. The dyndns hook keeps the tunnel configured
against the current public IPv4; the encapsulated frames just never make it
through the carrier's NAT44.
HE also offers AYIYA, which is UDP-encapsulated and would traverse CGNAT,
but its only working client (aiccu) is unmaintained: SixXS shut down in
June 2017 and Debian removed the package as non-functional, and there is
no RouterOS port. WireGuard fills the same role with a current in-kernel
implementation, and the encryption is a bonus rather than a cost.
Route64 is a free IPv6 tunnel broker that already
runs on WireGuard, which initially makes it look like a drop-in replacement
for the VPS in this build. The catch is what Route64 binds the tunnel to.
The peer config it issues pins your registered public IPv4 as the
allowed remote endpoint. Behind CGNAT that IP is shared with other
subscribers and rotates on the carrier's schedule, so the tunnel either
authenticates the wrong subscriber's address or stops accepting the
re-NAT'd session entirely; there is no per-restart re-registration in the
broker UI. The VPS topology sidesteps the binding problem — the RB5009
reaches out to a known public IP, PersistentKeepalive = 25 keeps the
carrier's NAT44 mapping warm, and the VPS does not care what source IP the
mapping happens to land on. Owning the reachable side of the tunnel is
what the $3/month VPS actually buys.
5.1 Return routing: why each /64 has to appear in AllowedIPs
The provider routes the /48 to the VPS, then configures one address from it on
the public NIC at /48 mask. That installs a connected route for the entire
/48 on the public interface, which competes with the WireGuard tunnel for
return traffic. Listing the parent /48 in AllowedIPs ties on prefix length
and loses to the public NIC by metric. Listing each LAN /64 separately wins
by longest-prefix match.
# Routing table on the VPS — what makes return traffic work
<LAN_PREFIX>::/48 dev <VPS_NIC> metric 256 ← provider's connected route (harmless)
<LAN_PREFIX>:1::/64 dev wg0 ← from AllowedIPs, wins by /64 > /48
<LAN_PREFIX>:10::/64 dev wg0 ← from AllowedIPs
<LAN_PREFIX>:20::/64 dev wg0 ← from AllowedIPs
# Listing the parent /48 in AllowedIPs would lose to the connected route
# on the public NIC and return traffic would disappear. List each /64.
This is the one detail that silently breaks the whole setup. Egress works,
ping works, then the first packet to a SLAAC client falls off the public NIC
and the network looks broken in a way that no log line explains. The fix is
paste-only — but only if it is paste-correct.
5.2 VPS — WireGuard relay
VPS wg0 + AllowedIPs per /64
bash
set -e
apt-get update -qq
apt-get install -y -qq wireguard

cat > /etc/sysctl.d/99-wg-relay.conf <<'EOF'
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.<VPS_NIC>.accept_ra = 2
EOF
sysctl --system > /dev/null

umask 077
mkdir -p /etc/wireguard
wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
VPS_PRIVKEY=$(cat /etc/wireguard/server.key)

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
PrivateKey = ${VPS_PRIVKEY}
Address = <LAN_PREFIX>:0::1/64
ListenPort = 51820
MTU = 1420

[Peer]
PublicKey = <MT_PUBKEY>
# One entry per LAN /64. Longest-prefix match beats the connected
# /48 on the public NIC and keeps return traffic on wg0.
AllowedIPs = <LAN_PREFIX>:0::2/128, <LAN_PREFIX>:1::/64, <LAN_PREFIX>:10::/64, <LAN_PREFIX>:20::/64
PersistentKeepalive = 25
EOF

ufw allow 51820/udp comment "WireGuard"
ufw route allow in on wg0
ufw route allow out on wg0
ufw reload || true
systemctl enable --now wg-quick@wg0
echo "VPS public key: $(cat /etc/wireguard/server.pub)"
5.3 MikroTik — WireGuard client and main-LAN IPv6
The static ::/0 with check-gateway=ping detects a dead tunnel in roughly
30 seconds. Appendix A replaces it with BGP+BFD for ~600 ms
detection if that matters.
MikroTik WireGuard + main LAN IPv6
bash
/interface/wireguard add name=wg-host listen-port=51820 mtu=1420
/interface/wireguard/peers add interface=wg-host name=vps \
    public-key="<VPS_PUBKEY>" \
    endpoint-address=<VPS_IP> endpoint-port=51820 \
    allowed-address=::/0 \
    persistent-keepalive=25s

/ipv6/address add address=<LAN_PREFIX>:0::2/64 interface=wg-host advertise=no
/ipv6/address add address=<LAN_PREFIX>:1::1/64 interface=bridge advertise=yes
/ipv6/address add address=<ULA_PREFIX>:1::1/64 interface=bridge advertise=yes comment="LAN ULA"

# Static default with ping liveness — detection ~30 s.
# Appendix A replaces this with BGP+BFD for ~600 ms. The comment is what
# Appendix A's `/ipv6/route/remove` matches against, so set it now.
/ipv6/route add dst-address=::/0 gateway=<LAN_PREFIX>:0::1%wg-host \
    check-gateway=ping comment="vps primary"

/ipv6/firewall/filter add chain=input action=accept in-interface=wg-host \
    comment="accept input from VPS WG peer" \
    place-before=[find where chain=input and comment="defconf: drop everything else not coming from LAN"]
5.4 Per-VLAN addresses and RA RDNSS
Each VLAN gets a GUA from the /48 and a stable ULA from the locally-generated
fd… prefix. The ULA is what gets advertised as the DNS server — it stays
the same when the global prefix is renumbered, which it eventually will be.
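The address and RA layer mirrors the main-LAN lines of §5.3. A sketch — the explicit dns= here is what §8.1 later supersedes with advertise-dns=self, and the mtu=1420 clamp matches the WireGuard path:

```bash
# Sketch — per-VLAN GUA + ULA, RDNSS at the ULA, MTU clamped to the tunnel.
/ipv6/address add address=<LAN_PREFIX>:10::1/64 interface=vlan-iot advertise=yes
/ipv6/address add address=<ULA_PREFIX>:10::1/64 interface=vlan-iot advertise=yes
/ipv6/address add address=<LAN_PREFIX>:20::1/64 interface=vlan-guest advertise=yes
/ipv6/address add address=<ULA_PREFIX>:20::1/64 interface=vlan-guest advertise=yes

/ipv6/nd add interface=vlan-iot dns=<ULA_PREFIX>:10::1 mtu=1420
/ipv6/nd add interface=vlan-guest dns=<ULA_PREFIX>:20::1 mtu=1420
```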
5.5 IPv6 isolation and anti-spoof
Mirror the IPv4 isolation in IPv6, then drop traffic whose source prefix does
not belong on the ingress VLAN. SLAAC makes prefix forgery trivially cheap;
the address-list filters make it not work.
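A sketch of both layers. The guest-legit list name is the one §8.1 refers back to; iot-legit is assumed by symmetry:

```bash
# Sketch — mirror the IPv4 isolation, then per-VLAN source-prefix anti-spoof.
/ipv6/firewall/address-list add list=iot-legit address=<LAN_PREFIX>:10::/64
/ipv6/firewall/address-list add list=iot-legit address=<ULA_PREFIX>:10::/64
/ipv6/firewall/address-list add list=guest-legit address=<LAN_PREFIX>:20::/64
/ipv6/firewall/address-list add list=guest-legit address=<ULA_PREFIX>:20::/64

/ipv6/firewall/filter
add chain=forward action=drop in-interface=vlan-iot out-interface=bridge connection-state=new comment="IOT !-> LAN v6"
add chain=forward action=drop in-interface=vlan-guest out-interface=bridge connection-state=new comment="GUEST !-> LAN v6"
add chain=forward action=drop in-interface=vlan-iot src-address-list=!iot-legit comment="IOT v6 anti-spoof"
add chain=forward action=drop in-interface=vlan-guest src-address-list=!guest-legit comment="GUEST v6 anti-spoof"
```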
6. DNS — DoH upstream, RA RDNSS downstream
Upstream, DNS leaves the house through Cloudflare DoH; downstream, the router
is the resolver and advertises itself via RA RDNSS at its ULA. The bootstrap
records are A-record pins for cloudflare-dns.com, needed once at boot before
DoH itself can resolve anything.
Endpoints stay simple — they keep using a local resolver. Resolver identity
is on ULA, so SLAAC prefix churn never changes the DNS server the OS has
memorized.
DoH resolver + ULA RDNSS
bash
/tool/fetch url=https://curl.se/ca/cacert.pem dst-path=cacert.pem
/certificate/import file-name=cacert.pem passphrase=""

/ip/dns set allow-remote-requests=yes max-concurrent-queries=200 \
    use-doh-server=https://cloudflare-dns.com/dns-query verify-doh-cert=yes

/ip/dns/static
# A-only on purpose — see rationale below.
add address=104.16.248.249 name=cloudflare-dns.com comment="DoH bootstrap"
add address=104.16.249.249 name=cloudflare-dns.com comment="DoH bootstrap"
# Reachable as the FQDN `router.lan` from any client whose resolver is the
# router (the default — clients use RDNSS, and the router answers its own
# static zone). No search-domain magic; type the dot-lan suffix.
add address=<ULA_PREFIX>:1::1 name=router.lan type=AAAA comment="LAN ULA"

# Stop DHCPv4 from handing out the router as DNS; clients use RDNSS instead.
/ip/dhcp-server/network set [find address=192.168.88.0/24] dns-none=yes
The DoH bootstrap is A-only on purpose. IPv4 is up the moment the WAN is up;
IPv6 only becomes reachable after the WireGuard tunnel handshakes (and, with
Appendix A, after BFD converges). Pinning AAAA records here would push
the first DoH query onto a half-warm — or broken — tunnel, while the IPv4
path is direct to a nearby Cloudflare PoP. Once DoH is up, regular client
queries still resolve and use AAAA records normally.
7. End-to-end verification
The build has enough moving parts that it is worth testing each layer
independently before declaring victory.
Smoke tests
bash
# VLAN clients (wired, on each SSID)
ip -4 addr show
ping 1.1.1.1
ping 192.168.88.1                        # MUST fail from IoT and Guest
nslookup cloudflare.com 192.168.89.1

# IPv6 path
wg show                                  # on VPS: handshake < 3 min
ping6 -c2 <LAN_PREFIX>:0::2              # VPS -> MikroTik WG endpoint
ip -6 route get <client-IPv6-addr>       # on VPS: expect "dev wg0"
curl -6 -s https://test-ipv6.com/json/   # client: score 10/10

# DNS
dig @<ULA_PREFIX>:1::1 cloudflare.com
scutil --dns | grep -i 'nameserver\['    # macOS: expect the ULA
8. Update — keeping streaming off the metered tunnel
Added May 16, 2026. Running the build surfaced a failure mode the original
design did not anticipate.
Two problems compounded on the IPv6 leg, both rooted in the same fact: every
global IPv6 packet from the house egresses the VPS, so to the outside world it
originates from a datacenter ASN, not the residential ISP.
VPN / geo detection. Streaming apps increasingly treat datacenter
address space as a VPN and either refuse to play or pin the wrong region.
Because Happy Eyeballs (RFC 8305) prefers IPv6, the app reaches for
the flagged path first; the perfectly good native IPv4 route never gets a
turn.
Metered bandwidth. The VPS plan is 1 TB/month with overage billed
per Appendix C. High-bitrate video has no reason to transit a rented
tunnel — it just burns the cap.
The realization: not every trusted device needs global IPv6. The cheapest and
most robust lever is a VLAN that has a ULA but no GUA. With no global
address there is no global v6 route at all — the client uses native ISP IPv4
for anything internet-facing, while the ULA still carries router DNS (RA
RDNSS, §5.4) and on-link v6. This is failsafe by construction: it is the
absence of a route, not a firewall rule something could slip past, and the
per-VLAN anti-spoof list (§5.5) enforces that nothing forges its way back onto
a global prefix.
Why not just drop the GUA's forward path in the firewall instead? Because the
client would still autoconfigure a GUA, still prefer it per Happy Eyeballs,
and stall for the fallback timeout on every new connection. Removing the GUA
means the client never attempts the tunnel — there is nothing to fall back
from.
8.1 A ULA-only trusted VLAN
VLAN 30 is trusted exactly like VLAN 1 — the only thing that makes a
VLAN "trusted" here is membership in the LAN interface list, which the
defconf input/forward rules gate on. It differs from VLAN 1 in one way:
no GUA.
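A sketch of the whole VLAN, with the vlan-trusted name, the :30:: ULA slice, and 192.168.91.0/24 as this write-up's assumptions:

```bash
# Sketch — VLAN 30 joins the LAN interface list (trusted) but gets only a ULA.
/interface/vlan add interface=bridge name=vlan-trusted vlan-id=30
/interface/list/member add list=LAN interface=vlan-trusted
/interface/bridge/vlan add bridge=bridge vlan-ids=30 tagged=bridge,ether2,ether3

/ip/address add address=192.168.91.1/24 interface=vlan-trusted
/ip/pool add name=pool-trusted ranges=192.168.91.10-192.168.91.254
/ip/dhcp-server add name=dhcp-trusted interface=vlan-trusted address-pool=pool-trusted
/ip/dhcp-server/network add address=192.168.91.0/24 gateway=192.168.91.1 dns-none=yes

# ULA only — no <LAN_PREFIX> line, so no global v6 route exists at all.
/ipv6/address add address=<ULA_PREFIX>:30::1/64 interface=vlan-trusted advertise=yes
/ipv6/nd add interface=vlan-trusted advertise-dns=self
```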
dns-none=yes on the DHCP scope is intentional: VLAN 30 hands out no
DHCPv4 resolver, so clients take DNS from the ND RDNSS, the same as the main
LAN's defconf scope. The IoT and Guest scopes of §3.2 have since been moved to
dns-none=yes as well — no VLAN hands out a DHCPv4 resolver anymore, so DNS
is uniformly the ND RDNSS and the router resolves upstream over DoH (§6). The
trade-off: a client with no RFC 8106 RDNSS support gets no DNS at all —
acceptable here because every VLAN carries a ULA and the device population
supports it, but it makes IPv6 a hard dependency for name resolution.
advertise-dns=self supersedes the explicit
dns=<ULA_PREFIX>:X::1 of §5.4 — the router advertises whatever address it
holds on the interface, so the RDNSS can never point at a stale or wrong
address and it survives a renumber with nothing to keep in sync; it is now set
on every RA interface, not just VLAN 30. The RA also omits the mtu=1420
clamp the tunnel-bound VLANs carry (§5.4) — VLAN 30's v6 never reaches the
1420-byte WireGuard path, so clamping its on-link MTU would only shrink local
traffic for nothing.
Move the main client SSID onto VLAN 30 in the controller and leave a
second AP (or VLAN 1) for the few devices that genuinely want global v6.
Guest (VLAN 20) gets the same treatment — drop its GUA and the matching
guest-legit entry from §5.5; guests never needed metered global
reachability.
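A sketch of that retraction, matching on the global prefix so the ULA lines survive (the find expressions are illustrative, not the only spelling):

```bash
# Sketch — retract Guest's global prefix and its now-moot anti-spoof entry.
/ipv6/address remove [find interface=vlan-guest address~"<LAN_PREFIX>"]
/ipv6/firewall/address-list remove [find list=guest-legit address~"<LAN_PREFIX>"]
```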
IoT (VLAN 10) keeps its GUA — its handful of outbound v6 flows are
negligible against the cap, and some devices behave better with working v6
than with a half-deprecated one.
8.2 The cutover gotcha
A device that is associated at the moment its SSID's VLAN changes keeps the
old VLAN's SLAAC addresses until their lifetimes expire — RouterOS
advertises prefixes with a one-week preferred / four-week valid lifetime by
default. So for up to a week those devices still prefer the old GUA, the
anti-spoof rule on the new VLAN drops it, and they fall back to IPv4 with a
brief Happy-Eyeballs delay on the first connection to each host.
This is harmless — nothing leaks; the drop counter on the new anti-spoof rule
is exactly that retried traffic being contained — but it is not instant. There
is no router-side fix, because a prefix can only be retracted on the link the
client has already left. The one effective cleanup is a single Wi-Fi
reconnect (or reboot) of each moved device: on re-association it rebuilds its
address set for the new link and the stale prefix is simply never recreated.
Net effect: streaming and other high-bitrate traffic from the main SSID now
rides native ISP IPv4 — no datacenter ASN, no VPN flag, no metered GB — while
DNS, on-link IPv6, and the trusted-LAN posture are unchanged. The VPS tunnel
goes back to doing what it is actually for: giving the devices that want it
real, routable IPv6.
A. Appendix A — Sub-second IPv6 failover (BGP + BFD)
A WireGuard interface stays administratively UP even when the path is dead —
NAT mapping expired, peer rebooted, VPS null-routed — so neither interface
state nor BGP keepalives alone are a reliable failure signal.
Section 5's static ::/0 with check-gateway=ping detects a dead
tunnel in roughly 30 seconds; during that window dual-stack apps stall
on AAAA before Happy Eyeballs falls back to IPv4. Replacing the static route
with a BGP-advertised one on a BFD-monitored session cuts detection to about
600 ms — pings fail once and clients are already on IPv4 by the next
attempt.
Event                                        Measured
WG silent → BFD down → route withdrawn       ~600 ms
WG restored → BFD up → route reinstalled     ~3 s
Full VPS reboot → bird up, route installed   ~28 s
BFD bandwidth (200 ms × 3, bidirectional)    ~3.4 GB / mo
BFD cost at $2.50/TB                         ~$0.0085 / mo
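The bandwidth row is back-of-envelope reproducible. The ~132-byte on-wire size (BFD control + UDP + IPv6, wrapped in WireGuard over IPv4) is an assumption; the rest is arithmetic:

```bash
# 5 pkt/s each way at 200 ms intervals, ~132 B/packet on the wire.
awk 'BEGIN {
  bytes = 132; pps = 2 * (1 / 0.2)     # both directions
  gb = bytes * pps * 86400 * 30 / 1e9  # decimal GB per 30-day month
  printf "%.1f GB/mo, ~$%.4f/mo at $2.50/TB\n", gb, gb / 1000 * 2.50
}'
# → 3.4 GB/mo, ~$0.0086/mo at $2.50/TB
```

Close enough to the measured row that the liveness layer is effectively free.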
A.1 VPS — bird2 with BFD
bird2: BGP + BFD on the VPS
bash
# 1. Add a link-local on wg0 (bird's "next hop self" needs one).
#    Append to /etc/wireguard/wg0.conf and reload:
#    Address = fe80::1/64

apt-get install -y bird2
mkdir -p /etc/bird
cat > /etc/bird/bird.conf <<EOF
log syslog all;
router id <VPS_ROUTER_ID>;

protocol device { }
protocol kernel kernel6 { ipv6 { export none; import all; }; learn yes; }

protocol bfd {
  interface "wg0" {
    min rx interval 200 ms;
    min tx interval 200 ms;
    idle tx interval 1 s;
    multiplier 3;
  };
  # Explicit neighbor so bird actively probes; passive-only stalls
  # after a tunnel flap because both sides wait for the other.
  neighbor <LAN_PREFIX>:0::2 dev "wg0";
}

protocol bgp mikrotik {
  local <LAN_PREFIX>:0::1 as <VPS_AS>;
  neighbor <LAN_PREFIX>:0::2 as <MT_AS>;
  ipv6 { import none; export where net = ::/0; next hop self; };
}
EOF
chown -R bird:bird /etc/bird

# Restart on any exit (packaged unit uses on-abnormal).
mkdir -p /etc/systemd/system/bird.service.d
printf '[Service]\nRestart=on-failure\nRestartSec=2s\n' \
  > /etc/systemd/system/bird.service.d/restart.conf
systemctl daemon-reload && systemctl enable --now bird
The explicit neighbor in protocol bfd matters. Without it, bird is
passive and only responds to probes; after a flap the MikroTik waits for BFD
before re-establishing BGP, bird waits for BGP before initiating BFD, and
recovery needs a manual birdc restart.
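Once both sides are up, bird's own CLI shows whether the liveness layer actually converged — a quick check sketch using birdc:

```bash
birdc show bfd sessions                  # expect state Up, ~200 ms intervals
birdc show protocols all mikrotik        # expect BGP state: Established
birdc show route export mikrotik         # expect exactly ::/0
```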
A.2 MikroTik — BGP, BFD, and static route removal
RouterOS BGP + BFD
bash
/routing/bgp/instance/add name=default-bgp as=<MT_AS> router-id=<MT_ROUTER_ID>
/routing/bgp/template/add name=tpl-host as=<MT_AS> use-bfd=yes
/routing/bgp/connection/add name=host-vps instance=default-bgp \
    remote.address=<LAN_PREFIX>:0::1 remote.as=<VPS_AS> \
    local.address=<LAN_PREFIX>:0::2 local.role=ebgp \
    templates=tpl-host afi=ipv6

/routing/bfd/configuration/add interfaces=wg-host \
    min-rx=200ms min-tx=200ms multiplier=3

/ipv6/firewall/filter add chain=input action=accept protocol=udp dst-port=3784 \
    in-interface=wg-host comment="BFD from VPS" \
    place-before=[find where chain=input and comment="defconf: drop everything else not coming from LAN"]

# Remove the static ::/0; BGP-learned route at distance 20 takes over.
/ipv6/route/remove [find comment="vps primary"]
RouterOS 7 splits BGP into instance, template, and connection. The
address-family field is afi=ipv6 (singular). The as= on both instance
and template is the local AS, not the remote.
B. Appendix B — IoT printer exception
The default policy is isolation. A printer is the canonical exception: pin
its IP in the IoT scope, allow trusted LAN clients to initiate to that one
address, and reflect mDNS between LAN and IoT so AirPrint discovery works.
Guest stays excluded on purpose.
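A sketch of the three pieces. The printer address, MAC, and server name are placeholders, and the mDNS reflection relies on the repeater RouterOS added around v7.16 — verify the knob exists on your version:

```bash
# Sketch — static lease, one narrow forward exception, mDNS reflection.
/ip/dhcp-server/lease add address=192.168.89.10 mac-address=AA:BB:CC:DD:EE:FF \
    server=dhcp-iot comment="printer pin"

# Place BEFORE the "IOT !-> LAN" drop of section 3.3; replies ride the
# established/related accept, so nothing else opens up.
/ip/firewall/filter add chain=forward action=accept in-interface=bridge \
    out-interface=vlan-iot dst-address=192.168.89.10 \
    protocol=tcp dst-port=631,9100 comment="LAN -> printer (IPP/JetDirect)"

# AirPrint discovery: repeat mDNS between main LAN and IoT only.
/ip/dns set mdns-repeat-ifaces=bridge,vlan-iot
```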
C. Appendix C — Cost
The IPv6 leg is the only recurring cost in the build, and it depends entirely
on whether the VPS provider routes a prefix to the instance or only hands out
an on-link /64. Providers known to route prefixes (verify on your specific
plan): WebHorizon SG (/48), Hetzner Cloud (/64 effectively routed via
NDP-proxy), Linode/Akamai (/64 default, /56 on request), Oracle Cloud (/56).