Shouldn’t my Linksys E1000 router be able to give each outbound TCP/IP connection its own share of the bandwidth? I double-checked the router settings and QoS functionality is turned off.
No. The router has no control over which packets your ISP puts on the wire to you, and QoS won’t give it that control.
It feels like it’s the router that should be responsible for giving each app its fair share, but maybe that’s just not how routers work?
No, that’s not how routers work. Your router just routes the packets it receives from the rest of the world. It has no control over what it receives or how data gets put on the wire from your ISP to your router.
No significant effort has been put into making residential Internet access support any notion of “fairness” within an individual customer’s available bandwidth. There do exist some routers that fake it with technologies like StreamBoost, but they are the exception.
Most DSL modems don’t offer this functionality, but if you can dedicate a low-spec PC and some time, you can install pfSense and configure it like this:
Go to Firewall > Traffic Shaper > Limiters.
Create a New Limiter(8) for the download of WAN1, named something like LimWan1Down. Set its Bandwidth to a bit(5) below the download BW of your WAN(3) (preferably(1) in multiples of 64 kbps). Don’t go below 256 kbps(6).
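To make the “a bit below, in multiples of 64 kbps” rule concrete, here is a small sketch. The helper name and the 10% headroom are my own choices for illustration, not pfSense settings:

```python
# Hypothetical helper: pick a limiter bandwidth a bit below the measured
# WAN rate, rounded down to a multiple of 64 kbps. The 10% headroom and
# the function name are assumptions for this example, not pfSense options.
def limiter_bandwidth_kbps(measured_wan_kbps, headroom=0.10):
    target = measured_wan_kbps * (1 - headroom)  # leave some headroom below the line rate
    rounded = int(target // 64) * 64             # snap down to a 64 kbps multiple
    return max(rounded, 256)                     # never go below 256 kbps

print(limiter_bandwidth_kbps(10000))  # 10 Mbps line -> 8960 kbps
```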
Save the limiter and click the option to Add New Queue(7) to create a queue named something like QueWan1Down, and set its mask to “destination addresses”.
Create another Limiter for the upload of WAN1, named something like LimWan1Up. Set its Bandwidth to a bit below the upload BW of your WAN(3) (preferably in multiples of 64 kbps). Don’t go below 256 kbps(6).
Save the limiter and click the option to Add New Queue to create a queue named something like QueWan1Up, and set its mask to “source addresses”.
Finally, go to Firewall > Rules > LAN and find the rule(2) that passes traffic to WAN1. Click Edit, go to Advanced Settings > In / Out pipe, and specify your upload queue in the first (In) field and your download queue in the second (Out) field(9).
Don’t forget to click Apply Changes.
Now the limiters keep the total upload and download BW from all of your clients capped, so that we avoid unwanted buffering at the ISP side, while the queues distribute that limited BW evenly between all the LAN clients (LAN IPs, to be precise).
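A toy model of what the queue mask buys you: one FIFO per LAN IP, served round-robin, so each IP with traffic waiting gets an even share of the pipe. The IPs and packet sizes below are made up for illustration:

```python
from collections import defaultdict, deque

# Toy model of a pipe with a per-IP queue mask: one FIFO per LAN IP,
# served round-robin, so each active IP gets an even share of the
# pipe's capacity. "slots" stands in for transmission opportunities.
def fair_share(packets, slots):
    queues = defaultdict(deque)
    for ip, size in packets:
        queues[ip].append(size)
    sent = defaultdict(int)
    order = deque(sorted(queues))
    while slots and order:
        ip = order.popleft()
        if queues[ip]:
            sent[ip] += queues[ip].popleft()
            slots -= 1
            order.append(ip)  # rotate back only while it still has packets
    return dict(sent)

# One greedy host vs. one light host: the light host still gets served,
# and the greedy host takes over the leftover capacity.
pkts = [("10.0.0.2", 1500)] * 10 + [("10.0.0.3", 1500)] * 2
print(fair_share(pkts, 6))  # {'10.0.0.2': 6000, '10.0.0.3': 3000}
```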
To test immediately, you must first go to Diagnostics > States > Reset States and click Reset, but beware that everybody’s TCP connections will be reset (that includes your connection to the WebUI).
Now go start some downloads/uploads and check Diagnostics > Limiter Info.
This method keeps your RTT times low by limiting your total download and upload bandwidth (thus preventing buffering at the ISP) and shares that BW evenly between your LAN clients. You get a decent browsing/Skype/RDP/SSH experience when a lot of users have maxed out your WAN(s), yet it still allows one client to get the full BW when nobody else needs it.
If you get used to the above procedure and just want a quick reference so you don’t mix up the settings, here it is:
Pipe | Direction | Queue mask
-----+-----------+---------------------
 In  | Upload    | Source Address
 Out | Download  | Destination Address
(1): not sure if this is important.
(2): not sure if it works the same when there are many rules because I’ve only tested with one rule.
(3): if you have multiple WANs, you do the above for every WAN, but I believe things get tricky because each LAN rule on your firewall that passes traffic should be dealing with a specific WAN(4). In my case, when I had 2 WANs, I split the LAN IPs into even and odd ones, and each group used a specific WAN (with failover, of course). A bit hacky, I know. Keep in mind that drakontas doesn’t share my worries. He states that “The limiters described here work as expected when used in combination with Gateway Groups (i.e. Multi-WAN) without any additional modification as long as the firewall rules include the Gateway Group as the Gateway (under Advanced).”
(4): I’m 99% sure this is the case but have never tested.
(5): here’s how to find how far below to go: pick a bad day/time for your line, when it downloads/uploads at its worst rate. Before enabling anything, begin a massive download/upload and watch your ping times go through the roof. Then try a setting for the download/upload limit. If your ping times still go through the roof, lower the limit.
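The tuning procedure in (5) can be sketched as a loop: start at the worst-case measured rate and step the limit down in 64 kbps increments until latency under load stays acceptable. The function names, the 100 ms threshold, and the probe are all placeholders for whatever ping test you actually run while a big transfer saturates the line:

```python
# Hypothetical sketch of footnote (5): lower the limiter in 64 kbps
# steps until the loaded ping time drops below an acceptable level.
# measure_loaded_ping_ms stands in for a real ping test under load.
def tune_limit_kbps(start_kbps, measure_loaded_ping_ms, ok_ms=100):
    limit = start_kbps
    while limit > 256 and measure_loaded_ping_ms(limit) > ok_ms:
        limit -= 64  # lower the limiter and measure again
    return limit

# Fake probe: latency "goes through the roof" while the limit is above
# the line's real worst-case capacity of ~4800 kbps.
print(tune_limit_kbps(5000, lambda kbps: 300 if kbps > 4800 else 40))
```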
(6): I haven’t tested it, but I’ve read this logic: the maximum size of an IPv4 packet is 64 KBytes, i.e. 512 kbits, so at rates this low a single large packet can occupy the pipe for seconds, which results in higher latency, lower throughput, and a poor user experience.
(7): pfSense’s UI is a little glitchy at this point. Finish your limiter, save it, click on its name again, and then try to click Add New Queue.
(8): the terminology is a little confusing. In this text I’ve used the terms with caution, but you may see the terms limiter, pipe, and queue used interchangeably.
(9): think of the packet flows from the perspective of the LAN interface. Packets flowing from the local network into the LAN interface will become WAN upload, and vice versa.
These are some advanced notes copied directly from the drakontas reddit post (see credits):
If using Floating firewall rules instead of per-interface rules, you must have two rules — one applied to “In” traffic and one applied to “Out” traffic (direction is specified in the rule).
If you have many users (for example, thousands of devices sharing a relatively small uplink), you will want to increase the advanced “Queue Size” option on each Pipe and Queue (default is 50 if no value is indicated). This allows cleaner handling of user traffic during high congestion periods.
The advanced “Queue Size” option can be set to a maximum of 100. This is not documented or indicated on the GUI, but if you try to set this value >100, the limiters will throw an error and will not be correctly applied. This can slow down a reboot and also can be a totally hidden error when applied on a running system (the only indication that something is wrong is that the Diagnostics > Limiter Info page will not reflect the new settings, even after resetting states).
Remember that for TCP flows, where ACK responses are required for data integrity, upload happens even while downloading (and vice versa). In my tests, restricting upload to 512 kbps results in a maximum real-world download speed of 50-60 Mbps; restricting upload to 256 kbps results in a maximum download speed of 20-30 Mbps (and of course, YMMV). For UDP flows this is irrelevant.
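The ACK ceiling can be estimated with simple arithmetic, under the textbook assumption of one ~54-byte TCP ACK per two 1460-byte data segments (delayed ACK). Real stacks often ACK less frequently than that, so measured speeds like the 50-60 Mbps above can come out higher than this rough model predicts:

```python
# Rough estimate of the download ceiling imposed by an upload limit.
# Assumptions (not from the post): ~54-byte ACK frames, 1460-byte data
# segments, one delayed ACK per two segments.
def max_download_mbps(upload_kbps, ack_bytes=54, segment_bytes=1460):
    acks_per_sec = upload_kbps * 1000 / (ack_bytes * 8)   # ACKs the upload limit allows
    return acks_per_sec * 2 * segment_bytes * 8 / 1e6     # data those ACKs can acknowledge

print(round(max_download_mbps(512)))  # ~28 Mbps under these assumptions
print(round(max_download_mbps(256)))  # ~14 Mbps under these assumptions
```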