First, we need to pull in xt_connmark, which is a dependency for some features of qos-scripts:
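The exact install command isn't shown above; on OpenWrt it would typically look something like the following. The package name here is my assumption, not from the article — verify it on your image first:

```shell
opkg update
# Find the package that provides the xt_connmark module on your release:
opkg list '*connmark*' '*conntrack*'
# On recent OpenWrt releases the module ships in kmod-ipt-conntrack-extra
# (an assumption -- substitute whatever the listing above shows):
opkg install kmod-ipt-conntrack-extra
```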
Then, in /etc/config/qos, replace the wan interface with our various defined interfaces, giving them modest, measurable differences so we can verify that QoS is working.
# INTERFACES:

# Fast client.
config interface lan1
    option classgroup  "Default"
    option enabled     1
    option upload      256
    option download    2048

# Slow client. (1/2 speed)
config interface lan2
    option classgroup  "Default"
    option enabled     1
    option upload      128
    option download    1024

# The Host. (Big pipes)
config interface lan3
    option classgroup  "Default"
    option enabled     1
    option upload      4096
    option download    4096
To enable QoS and start it:
/etc/init.d/qos enable
/etc/init.d/qos start
When I attempted to use the pktsize option in a classifier, I received errors that I have not yet resolved:
iptables: No chain/target/match by that name.
Inspecting the generated config:
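The generator script prints the iptables and tc commands it would run, so you can inspect them without executing anything by leaving off the pipe to a shell:

```shell
# Print the generated commands to stdout without running them:
/usr/lib/qos/generate.sh all
```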
See the script executed along with output (for diagnosing errors):
/usr/lib/qos/generate.sh all | sh -x
# .
+ iptables -t mangle -A qos_Default -m mark --mark 0/0xf0 -p udp -m length --length 500 -j MARK --set-mark 34/0xff
iptables: No chain/target/match by that name.
# .
I was only able to resolve this by removing the use of pktsize in the classifiers and then restarting the service.
Open up terminals with vagrant ssh test1 and vagrant ssh test2, then run the following command in both at approximately the same time (remember to sub in $TEST3IP):
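The download command itself isn't reproduced above; a plausible form, assuming test3 serves a large file (here a hypothetical test.bin) from its web root, would be:

```shell
# -O /dev/null discards the data; we only care about the reported rate.
wget -O /dev/null "http://$TEST3IP/test.bin"
```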
You should see something like the following:
Feel free to cancel them at any time. We only care about the rate right now.
As you can see, test1 received approximately twice the bandwidth of test2. Perfect, that’s exactly what we wanted.
Things will not always be exact as there are protocol overheads and other factors in our simulation.
Before going on to more complicated experiments you may want to ensure that the rates are more fair.
With our QoS system in place, what kind of experiments can we do to the network?
Looking over the default configuration you can see the following config block:
config classify
    option target   "Priority"
    option ports    "22,53"
    option comment  "ssh, dns"
Which match to the following class:
config class "Priority"
    option packetsize  400
    option avgrate     10
    option priority    20
This suggests that all traffic over port 22 (the standard ssh port) will be prioritized, as it has the highest priority in the default configuration.
But what if you don’t use port 22 for ssh? Then this is a silly rule. You can simply remap the ports option to "2222,53" or something else.
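For example, if sshd listens on 2222, the classify block above becomes:

```
config classify
    option target   "Priority"
    option ports    "2222,53"
    option comment  "ssh, dns"
```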
Say we’d like to guarantee that one class of packets can only take up so much of the total limit of the connection.
First, let’s get our test3 VM serving off multiple ports so we can classify them differently.
In /etc/nginx/nginx.conf modify your listen block:
server {
    listen 80 default_server;
    listen 8080; # Add this.
    server_name localhost;
    root /vagrant;
    # .
}
Then run sudo systemctl restart nginx .
Back on the router, edit /etc/config/qos to add two new blocks, a classify block and a class block. Also make sure that your lan1 and lan2 interfaces have the same upload and download so we can use them to test.
config classify
    option target   "httpdev"
    option ports    "8080"
    option comment  "httpdev"

config class "httpdev"
    option packetsize   1500
    option packetdelay  100
    option limitrate    10
    option priority     5
Then, in the classgroup "Default", add httpdev to the classes.
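With the stock qos-scripts class group, that edit would look roughly like this (the surrounding class names are from the default configuration — verify against your own /etc/config/qos):

```
config classgroup "Default"
    option classes  "Priority Express Normal httpdev Bulk"
    option default  "Normal"
```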
Then get test1 to download from port 80, and test2 to download from port 8080.
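Concretely, again assuming the hypothetical test.bin served by test3:

```shell
# On test1 (port 80, the Normal class):
wget -O /dev/null "http://$TEST3IP/test.bin"
# On test2 (port 8080, the httpdev class):
wget -O /dev/null "http://$TEST3IP:8080/test.bin"
```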
Downloading on port 8080 resulted in receiving only 10% of the available bandwidth, while on port 80 the client was able to use the entire link.
Instead of placing a hard limit on the rate a class can achieve, let’s instead change the priority it receives.
On the router, edit /etc/config/qos , change the Normal class to have a 50 priority, while leaving httpdev on 5 .
config class "Normal"
    option packetsize   1500
    option packetdelay  100
    option avgrate      10
    option priority     50
Then issue /etc/init.d/qos reload. To test this, on either test1 or test2, open two connections and attempt to download from both ports.
Here I ran two tests: one with the downloads on different ports, and one with both on the same port. Notice how the 8080 download is much slower while a download on port 80 is happening, but when both are on port 80 they’re nearly the same. (I couldn’t stop both at exactly the same time, so the second one I stopped was reading higher than before; both were in the teens.)
A lone download on port 8080 (and thus of the httpdev class) would still be fast if there is no other traffic in the normal class, though.
While qos-scripts provides many simpler facilities for configuring Quality-of-Service on OpenWrt, it is certainly not the be-all and end-all.
You can also take much more low-level, fine-grained control of how things behave using standard Linux tools. After all, everything in qos-scripts is built on top of iptables, tc, and other standard utilities.
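As a small taste of the hand-rolled approach, a minimal shaping setup with tc alone might look like this. This is a sketch, not from the article; the eth1 interface name and the 1 Mbit rate are assumptions, and the commands need root on the router:

```shell
# Create an HTB root qdisc on eth1; unclassified traffic falls into class 1:10.
tc qdisc add dev eth1 root handle 1: htb default 10
# Cap that class at 1 Mbit/s, with no borrowing above it.
tc class add dev eth1 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
# Inspect the result with statistics.
tc -s qdisc show dev eth1
```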
Here are some links to explore further:
A special thanks to the Bergens Banen train for our test material. The cover image of this article is from the video.