docs-onboarding/network.md
2025-04-30 08:24:07 +02:00


# Network

## vnet List

List of vnets (for the latest version, see the UniFi console):

| Name | VLAN ID | Router | Subnet | Azure vnet |
|---|---|---|---|---|
| Default | 1 | prd-unifi-1 | 192.168.1.0/24 | N/A |
| Management | 2 | prd-unifi-1 | 192.168.2.0/24 | N/A |
| Clients | 3 | prd-unifi-1 | 192.168.3.0/24 | N/A |
| Server | 4 | prd-unifi-1 | 192.168.4.0/24 | N/A |
| IoT | 5 | prd-unifi-1 | 192.168.5.0/24 | 10.5.0.0/16 |
| Guests | 6 | prd-unifi-1 | 192.168.6.0/24 | N/A |
| Volt - Development | 7 | prd-unifi-1 | 192.168.7.0/24 | N/A |
| Var - Testing | 8 | prd-unifi-1 | 192.168.8.0/24 | N/A |
| Watt - Production | 9 | prd-unifi-1 | 192.168.9.0/24 | N/A |
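
A quick sanity check on the ranges above can be scripted with Python's standard `ipaddress` module. This is a sketch of my own, not part of the UniFi setup; it confirms that the nine on-prem /24s and the Azure IoT vnet are pairwise disjoint:

```python
from ipaddress import ip_network

# VLAN subnets from the table above, plus the Azure IoT vnet.
vlan_subnets = {i: f"192.168.{i}.0/24" for i in range(1, 10)}
nets = [ip_network(c) for c in vlan_subnets.values()]
nets.append(ip_network("10.5.0.0/16"))  # Azure vnet peered with IoT (VLAN 5)

# Pairwise overlap check: every range must be disjoint.
overlaps = [(a, b) for i, a in enumerate(nets)
            for b in nets[i + 1:] if a.overlaps(b)]
assert not overlaps, f"overlapping ranges: {overlaps}"
```

Running this when a new vnet is added catches accidental collisions before they reach the router.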

Tasks:

- Define Networks
  - OK Define ranges
  - OK Decide what goes where
  - OK Static VLAN IDs are preferable to dynamic ones
  - OK Define DNS (static vs. dynamic)
- Gateway Settings
  - Auto update
  - Block outgoing DNS
  - Plugins such as OPNsense CrowdSec
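
The "Block outgoing DNS" rule can be stated precisely: clients may only talk DNS to internal resolvers, while the gateway alone may resolve externally. A minimal sketch of that policy in Python (the gateway address 192.168.1.1 is an assumption, not taken from the config above):

```python
from ipaddress import ip_address, ip_network

GATEWAY = ip_address("192.168.1.1")   # assumed address of prd-unifi-1
LAN = ip_network("192.168.0.0/16")    # covers all on-prem VLANs

def allow_egress_dns(src: str, dst: str, dport: int) -> bool:
    """Decide whether an outbound packet passes the DNS egress rule."""
    if dport != 53:
        return True                    # rule only concerns DNS traffic
    if ip_address(dst) in LAN:
        return True                    # internal DNS (e.g. to the gateway) is fine
    return ip_address(src) == GATEWAY  # only the gateway may resolve externally
```

The same logic maps onto a firewall rule: drop port 53 egress to WAN for every source except the gateway itself.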

## Traefik load balancing

Apparently, due to these issues:

we may need to update our approach. As far as I understand, we will need a primary Traefik that performs no ACME challenging at all, and therefore an additional instance to handle the separate connections to Proxmox and everything else that is overarching.
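
One way to realize this, sketched as a hypothetical file-provider config for the primary Traefik: it terminates nothing for the environment domains and passes TLS straight through, so each downstream Traefik runs its own ACME resolver. The IP address is invented, and the `HostSNIRegexp` matcher is a Traefik v3 feature (the primary currently runs 2.11.0, so an upgrade or exact `HostSNI` rules would be needed):

```yaml
# Assumed dynamic config on the primary Traefik (no ACME here).
tcp:
  routers:
    volt-passthrough:
      entryPoints:
        - websecure
      rule: "HostSNIRegexp(`^.+\\.volt\\.jambor\\.pro$`)"
      service: volt-traefik
      tls:
        passthrough: true   # leave TLS intact for the downstream instance
  services:
    volt-traefik:
      loadBalancer:
        servers:
          - address: "192.168.7.10:443"   # assumed address of the volt Traefik
```

Analogous routers for `*.var.*` and `*.watt.*` would point at the testing and production instances.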

::: mermaid
graph LR
    A[Internet] -->|ISP Connection| TRA[Traefik<br>*.amp.jambor.pro<br>Old version 2.11.0]

TRA --> TRB[Traefik Dashboard]
TRA --> PRX[Proxmox Servers]
TRA --> LX1[LXC CouchDB]
TRA --> LX2[LXC Flightradar]

subgraph "direct connections"
    TRB
    PRX
    LX1
    LX2
end

TRA --> TRVO[Traefik]

subgraph "*.volt.jambor.pro Development"
    TRVO --> DCD[Docker host]
    TRVO --> LXD[LXC Container]
end

TRA --> TRVA[Traefik]

subgraph "*.var.jambor.pro Testing"
    TRVA --> DCT[Docker host]
    TRVA --> LXT[LXC Container]
end

TRA --> TRW[Traefik]

subgraph "*.watt.jambor.pro Production"
    TRW --> DCP[Docker host]
    TRW --> LXW[LXC Container]
end

:::

## Network diagram

::: mermaid
graph LR
    A[Internet] -->|ISP Connection| ND1[Gateway<br>gw-jj-nar-prd-opr-1]

subgraph "On-Prem Hub (VLAN ID 1)"
    ND1 -->|VPN Tunnel to Azure| C[VPN Gateway]
    ND1 --> D[Firewall & Security Policies]
    ND2[Switch<br>sw-jj-nar-prd-opr-1]
    ND3[Access Point<br>ap-jj-nar-prd-opr-0]
    ND4[Access Point<br>ap-jj-nar-prd-opr-1]
    ND5[Access Point<br>ap-jj-nar-prd-opr-2]
    ND6[Access Point<br>ap-jj-nar-prd-opr-3]
end

subgraph "On-Premises Spoke Networks"
    D --> V2[Management VLAN 2]
    V2 --> V201[Supermicro]
    V2 --> V202[prd-proxmox-1]
    V2 --> V203[prd-proxmox-2]
    D --> V3[Clients VLAN 3]
    V3 --> V301[Mobiles]
    V3 --> V302[Laptops]
    V3 --> V303[Apple TV]
    V3 --> V304[HomePods]
    D --> V4[Servers VLAN 4]
    V4 --> V401[Legacy, not needed in the future<br>will move to VLAN 7/8/9]
    D --> V5[IoT VLAN 5 - Isolated 🔒]
    V5 --> V501[Home infrastructure]
    V5 --> V502[Loxone]
    V5 --> V503[Home Assistant]
    D --> V6[Guests VLAN 6]
    V6 --> V601[Friends visiting]
    D --> V10[Guests VLAN 10]
    V10 --> V1001[Customers of the<br>rented-out flat]

end

subgraph "On-Premises Workload Spoke Networks"
    D --> O[*.volt.* VLAN 7]
    D --> P[*.var.* VLAN 8]
    D --> Q[*.watt.* VLAN 9]
end

C -->|VPN Tunnel| J[Azure VPN Gateway]

subgraph "Azure Hub"
    J --> K[Azure Firewall]
end

subgraph "Azure Workload Spoke Networks"
    K --> L[Spoke 1: *.volt.*]
    K --> M[Spoke 2: *.var.*]
    K --> N[Spoke 3: *.watt.*]
    K --> R[Spoke 4: IoT]
end

:::
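
The hub-and-spoke routing in the diagram boils down to a small next-hop table: Azure spokes are reached via the VPN gateway, on-prem VLANs via the firewall, and everything else goes out the ISP uplink. A sketch under assumed hop names (the route entries are illustrative, not exported from the gateway):

```python
from ipaddress import ip_address, ip_network

# Ordered most-specific-first, like a longest-prefix route table.
ROUTES = [
    (ip_network("10.0.0.0/8"),     "azure-vpn-gateway"),  # Azure spokes
    (ip_network("192.168.0.0/16"), "on-prem-firewall"),   # local VLANs
    (ip_network("0.0.0.0/0"),      "isp-uplink"),         # default route
]

def next_hop(dst: str) -> str:
    """Return the first matching hop for a destination address."""
    addr = ip_address(dst)
    for net, hop in ROUTES:
        if addr in net:
            return hop
    raise ValueError(f"no route for {dst}")
```

For example, traffic to the Azure IoT spoke (10.5.0.0/16) matches the first entry, while a HomePod in VLAN 5 stays behind the on-prem firewall.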