The switch is controlled through the kernel switchdev driver and the devlink and tc tools. Basic rules like VLAN tagging are supported of course, but you can also do more complex things like L3 routing and routing based on TCP port numbers. So you could for example take one IPv4 address and divide it among several VMs based on port numbers.
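To give a rough idea of what the barebones variant looks like (the port names sw0p1/sw0p2 and the address 192.0.2.10 are made up for illustration), here is a minimal tc flower sketch that matches on a TCP destination port and redirects to another switch port, with skip_sw requesting that the rule only lives in the ASIC:

```
# attach an ingress qdisc so filters can be added to the uplink port
tc qdisc add dev sw0p1 ingress

# steer TCP port 8080 of the shared address to one VM's switch port;
# skip_sw tells the driver the rule must be offloaded to hardware
tc filter add dev sw0p1 ingress protocol ip flower \
    dst_ip 192.0.2.10 ip_proto tcp dst_port 8080 \
    skip_sw action mirred egress redirect dev sw0p2
```

A real "divide one address among several VMs" setup would of course need a rule like this per port range plus the return traffic, this just shows the matching primitive.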
tc and devlink are the more barebones interfaces to this. In their manual they suggest using Open vSwitch to manage this. What Open vSwitch does is quite clever: it implements a quite capable software switch with the OpenFlow rule language, a management process and its own small database backend. Packets are sent to this software switch first and (slowly) switched in software according to the rules you set.
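If I read the OVS docs right, the hardware offload path is enabled with a single datapath option. A minimal sketch, with placeholder bridge/port names and assuming the Debian-style service name:

```
# enable the tc-based hardware offload in the OVS datapath
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch-switch   # ovs-vswitchd must be restarted

# build a bridge from the switch ports, then express rules in OpenFlow,
# e.g. send TCP port 8080 out of OpenFlow port 2
ovs-vsctl add-br br0
ovs-vsctl add-port br0 sw0p1
ovs-vsctl add-port br0 sw0p2
ovs-ofctl add-flow br0 "tcp,tp_dst=8080,actions=output:2"
```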
When this first packet is forwarded, the management process also calculates the minimal rules needed to forward it and subsequent similar packets. It then creates a tc rule to offload these to hardware, so the following packets are switched purely in hardware. This ensures that only the rules actually in use right now are configured on the switch ASIC, reducing bloat on the ASIC and improving switching speed.
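You can watch this mechanism at work: the datapath flows that ended up in hardware are tagged as offloaded, and the generated rules show up as tc filters on the ports themselves (port name again made up):

```
# list the datapath flows that OVS pushed into hardware
ovs-appctl dpctl/dump-flows type=offloaded

# the generated rules are visible as tc flower filters on the port,
# marked in_hw once the ASIC accepted them
tc filter show dev sw0p1 ingress
```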
The downside is that Open vSwitch and OpenFlow introduce an extra layer and complexity that has to be managed and understood. There seems to be an Ansible collection to manage Open vSwitch, but I didn't see an easy way to use it to manage complex OpenFlow rules. But maybe I missed it because I only had a short look at it.
[#]openvswitch