Proxmox Open vSwitch vs Linux bridge

Migrating network from linux-bridge to OVS — and it’s incredibly confusing


ivdok

New Member
My current Linux bridge setup looks like this:
  • Several predefined interfaces to choose from, each with a static VLAN ID, where neither the guest VM nor PVE manages or is aware of the VLAN assignment. They exist to avoid the manual labor of granting access to common VLANs: the vast majority of VMs use them for external IP assignment, and filling those out manually would be a major PITA.
  • One interface, which is virtual and VLAN-aware, with VLAN ID assignment done via the Proxmox GUI. It is managed by a MikroTik CHR, which uses the same interface untagged and handles VLAN separation and the traffic inside it.
  • VMs on said virtual interface with the same VLAN ID can connect to each other, even when they are on different Proxmox nodes.
My old config:

auto lo
iface lo inet loopback

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet manual
    bridge_ports eno1.40
    bridge_stp off
    bridge_fd 0
#Predefined

auto vmbr1
iface vmbr1 inet static
    address 172.11.11.11
    netmask 255.255.255.0
    gateway 172.11.11.1
    bridge_ports eno1.30
    bridge_stp off
    bridge_fd 0
#Management

auto vmbr2
iface vmbr2 inet static
    address 198.51.100.11 #actually unused
    netmask 255.255.255.0 #actually unused
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    bridge_vlan_aware yes
#CHR

Note that bullet 3 was unachievable, hence the need to move to Open vSwitch.
My new OVS config is like this:

auto lo
iface lo inet loopback

auto eno1
allow-vmbr0 eno1
iface eno1 inet manual
    ovs_bridge vmbr0
    ovs_type OVSPort
    ovs_options tag=40 vlan_mode=native-untagged
    ovs_mtu 9000

auto eno2
allow-vmbr0 eno2
iface eno2 inet manual

allow-ovs vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    pre-up ( ifconfig eno1 mtu 9000 && ifconfig eno2 mtu 9000 )
    ovs_ports eno1 vlan30 vlan700 vlan40
    ovs_mtu 9000
#Central Bridge

allow-vmbr0 vlan30
iface vlan30 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=30
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    address 172.11.11.11
    netmask 255.255.255.0
    gateway 172.11.11.1
    ovs_mtu 9000
#Management

allow-vmbr0 vlan700
iface vlan700 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    ovs_mtu 9000
    address 198.51.100.11
    netmask 255.255.255.0
#CHR

allow-vmbr0 vlan40
iface vlan40 inet manual
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=40
    ovs_extra set interface ${IFACE} external-ids:iface-id=$(hostname -s)-${IFACE}-vif
    ovs_mtu 9000
#Predefined
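For the record, this is roughly how I reload and sanity-check after editing the file (a minimal sketch; note that ifreload comes from ifupdown2, while plain ifupdown needs a networking restart or a reboot instead):

# apply the new configuration (ifupdown2 only; with plain ifupdown,
# restart networking or reboot instead)
ifreload -a

# verify that the bridge, the physical port and the internal ports all exist
ovs-vsctl show
ovs-vsctl list-ports vmbr0

# check which tags and VLAN modes actually got applied
ovs-vsctl --columns=name,tag,vlan_mode list Port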

And not only does it not work as I expected, I've already run out of documentation applicable to my use case (OVS+ifupdown seems to be an uncommon combination outside of PVE; everything else covers either the ovs-* CLI tools or libvirt/OpenStack), and I'm confused AF, since I was replicating the official docs and it doesn't work as described.


What doesn’t work:
1. When adding a network device to a VM, I can only select vmbr0; all OVSIntPorts are unselectable. And there's conflicting information: one source says that IntPorts are for host management and therefore inaccessible to guest VMs by design, while another (to be precise, Ex. 1) suggests that the admin should define an OVSBridge and OVSIntPorts, and that the bridge, together with the VLAN pseudo-interfaces, will then be selectable from the network list in the VM properties (see the config sketch after this list). I just want it to behave the same way the old Linux bridges did.
2. Even if I somehow resolve this problem, I have a feeling that the CHR interface won't allow intra-cluster connections between VMs in a single VLAN, as described in bullet 3. I've seen guides about creating GRE tunnels to achieve this, but can't I just configure OVS to do it instead? The nodes sit behind a single physical switch, within a single port group, so why would I want another L2 abstraction? Besides, the tunnel solution won't scale: with 3 nodes it's manageable, with 5 too, but anything bigger will quickly become slow, error-prone and simply inefficient, and I don't know anything about OpenFlow, which further complicates the task.
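For reference, my current understanding of how issue 1 is supposed to map onto OVS (treat this as my assumption; the VM ID 100 and the MAC addresses below are placeholders): only the OVSBridge itself is selectable for a VM NIC, and per-VLAN access is expressed as a tag on that NIC instead of a separate bridge, e.g. in /etc/pve/qemu-server/100.conf:

# access port in VLAN 40 — replaces the old per-VLAN vmbr* bridge
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,tag=40
# untagged trunk for the CHR, which does its own VLAN handling
net1: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0

If that is correct, issue 2 may be moot: since eno1 is a trunk to the physical switch, two VMs tagged with the same VLAN on different nodes should reach each other through the switch itself, with no GRE tunnel involved, provided the switch carries that VLAN on both node ports.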

My brain is completely scrambled by this point. Is there any sane way, sane configs, sane docs to accomplish this?

P.S. Yes, I do have the openvswitch-switch package and a recent enough Proxmox VE, but since you're gonna ask anyway, here:

proxmox-ve: 6.1-2 (running kernel: 5.3.10-1-pve)
pve-manager: 6.1-3 (running version: 6.1-3/37248ce6)
pve-kernel-5.3: 6.0-12
pve-kernel-helper: 6.0-12
pve-kernel-5.0: 6.0-11
pve-kernel-4.15: 5.4-9
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-3-pve: 5.0.21-7
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-4.15.18-21-pve: 4.15.18-48
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph-fuse: 12.2.12-pve1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-2
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12+deb10u1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-14
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191002-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-2
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2



OVS vs Linux Bridge: Who’s the Winner?

The battle between OVS (Open vSwitch) and the Linux bridge has been running for a while in the field of virtual switch technologies. Some hold that Open vSwitch offers more functionality and better performance, and that it now plays the leading role in virtual switching, while others point out that the Linux bridge has been in use for years and is more mature than OVS. So, OVS vs Linux Bridge: who's the winner?

OVS vs Linux Bridge: What Are They?

Open vSwitch (OVS) is an open-source multilayer virtual switch. It usually operates as a software-based network switch or as the control stack for dedicated switching hardware. Designed to enable effective network automation via programmatic extensions, OVS also supports standard management interfaces and protocols, including NetFlow, sFlow, IPFIX, RSPAN, CLI, LACP, and 802.1ag. In addition, Open vSwitch supports transparent distribution across multiple physical servers, a function similar to proprietary virtual switch solutions such as the VMware vSphere Distributed Switch (vDS). In short, OVS is used with hypervisors to interconnect virtual machines within a host and between different hosts across networks.
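To make this concrete, here is a minimal sketch of creating an OVS bridge with one physical uplink (the names br0 and eth0 are placeholders):

# create an OVS bridge and attach a physical NIC as its uplink
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0

# print the resulting topology
ovs-vsctl show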


As mentioned above, Open vSwitch is a multilayer virtual switch that can work as a Layer 2 or Layer 3 switch, whereas the Linux bridge only behaves like a Layer 2 switch. Usually, a Linux bridge is placed between two separate groups of computers that communicate with each other, typically more intensively with one of the groups. It consists of four major components: a set of network ports, a control plane, a forwarding plane, and a MAC learning database. With these components, a Linux bridge can be used to forward packets on routers, on gateways, or between VMs and network namespaces on a host. It also supports STP, VLAN filtering, and multicast snooping.
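For comparison, a minimal sketch of the same idea with a Linux bridge, using iproute2 (br0 and eth0 are again placeholders):

# create a Linux bridge, enslave a NIC, and bring both up
ip link add name br0 type bridge
ip link set eth0 master br0
ip link set br0 up
ip link set eth0 up

# inspect the MAC learning database mentioned above
bridge fdb show br br0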


OVS vs Linux Bridge: Advantages And Disadvantages of OVS

Compared to the Linux Bridge, Open vSwitch has several advantages:

  • Easier network management – with Open vSwitch, it is convenient for the administrator to manage and monitor network status and data flows in a cloud environment.
  • Support for more tunnel protocols – OVS supports GRE, VXLAN, IPsec, and more, whereas the Linux Bridge only supports GRE tunnels (see the sketch after this list).
  • Incorporated into SDN – Open vSwitch is part of the software-defined networking (SDN) ecosystem: it can be driven by an OpenStack plug-in or directly from an SDN controller such as OpenDaylight.
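For example, building a GRE or VXLAN tunnel port between two hypervisors takes one ovs-vsctl call per side (a sketch; the bridge name br0 and the peer address 192.0.2.2 are placeholders):

# GRE tunnel to the peer hypervisor
ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=192.0.2.2

# or a VXLAN tunnel with an explicit VNI
ovs-vsctl add-port br0 vx0 -- set interface vx0 type=vxlan options:remote_ip=192.0.2.2 options:key=100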

Despite these advantages, Open vSwitch has some challenges:

  • Lack of stability – Open vSwitch has had stability problems such as kernel panics, ovs-vswitchd segfaults, and data corruption.
  • Complex operation – Open vSwitch is itself a complex solution with a great many functions; it is harder to learn, install, and operate.

OVS vs Linux Bridge: Strengths And Limitations of Linux Bridge

Linux Bridge is still popular mainly for the following reasons:

  • Stable and reliable – the Linux Bridge has been in use for years; its stability and reliability are proven.
  • Easy installation – the Linux Bridge is part of a standard Linux installation, and there are no additional packages to install or learn.
  • Convenient troubleshooting – the Linux Bridge is a much simpler solution than Open vSwitch, which makes it more convenient to troubleshoot (a few typical commands are sketched after this list).
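A sketch of typical troubleshooting commands, assuming a bridge named br0 already exists:

# list bridge members with details such as STP state
bridge -d link show

# show the VLAN filtering configuration per port
bridge vlan show

# watch link/fdb events live while reproducing a problem
bridge monitor all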

However, there are some limitations:

  • Fewer functions – the Linux Bridge doesn't support the OpenStack Neutron DVR, the newer and more scalable VXLAN model, and some other functions.
  • Fewer supporters – many enterprises wanted an open model for integrating their services into OpenStack; the Linux Bridge couldn't meet that demand, so it has fewer users than Open vSwitch.

OVS vs Linux Bridge: Who’s the Winner?

OVS vs Linux Bridge: who's the winner? Actually, both are good network solutions, and each has its appropriate usage scenarios. OVS offers more functionality for centralized management and control, while the Linux Bridge offers the stability suitable for large-scale network deployments. All in all, the winner is whichever one meets your demands.

