Effect of rate-limiting on channel efficiency

Question • Updated 4 years ago • Answered
Rate limiting each client increases the amount of time needed to complete data transfer (assuming the air-interface is not the bottleneck). Unless a scheduling method such as air-time fairness is used (where clients with good link quality are given a slight advantage), the result will be increased contention for the channel and more rapidly diminishing performance as the number of concurrent users increases. A "hog" will want to consume a lot of data whether rate-limited or not. How then does Ruckus recommend reducing the amount of time that hog is contending for channel access?

Note: This topic was created from a reply on the Rate Limiting topic.
RTSCTS


Posted 4 years ago

Keith - Pack Leader

This is a good point, but I think it depends on how we actually implement the rate limiting. Given our focus on performance, I'd be surprised if this were the result. I'll see if we can get a developer to chime in.
Bill Kish, Official Rep

It is true that rate limiting can reduce overall capacity. The primary reasons are increased contention (as you mention) and reduced MAC-layer efficiency due to smaller aggregation sizes. Modern Wi-Fi relies heavily on the 802.11n aggregation mechanism (A-MPDU) for high MAC-layer efficiency, and anything that shrinks aggregate sizes, such as rate limiting, can reduce overall capacity.

Ruckus APs mitigate this effect by using larger rate-limiting buffers (essentially enforcing the specified rate over a longer averaging interval) to allow the bursting that is essential to maximizing aggregation and MAC-layer efficiency. Our rate limiting and airtime fairness scheduler effectively give a bandwidth hog less frequent but longer access to the medium, which reduces contention and improves efficiency. The downside is somewhat longer latency when rate limiting is active, but most data applications don't notice it.
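The "larger buffer, longer averaging interval" idea can be sketched as a token bucket with a deliberately large burst allowance, so queued frames are released in batches (keeping aggregates full) rather than metered out one at a time. This is an illustrative model only, not Ruckus's actual implementation; the class and parameter names are hypothetical.

```python
import time

class BurstyRateLimiter:
    """Token bucket that enforces a rate over a long averaging
    interval. A large bucket (burst_bytes) lets many queued frames
    go out at once, which keeps A-MPDU aggregates large.
    Illustrative sketch only, not the vendor implementation."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # refill rate in bytes per second
        self.burst = burst_bytes     # bucket depth: large -> bursty release
        self.tokens = burst_bytes    # start with a full bucket
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_send(self, frame_bytes):
        """Return True if the frame may go out now; False to queue it."""
        self._refill()
        if self.tokens >= frame_bytes:
            self.tokens -= frame_bytes
            return True
        return False
```

With `burst_bytes` sized for, say, eight 1500-byte frames, all eight can be released back-to-back into a single aggregate; a tiny bucket would force the same frames out one at a time, each paying its own channel-access overhead.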
RTSCTS

Thank you, Bill, for that very detailed response. With airtime fairness, each client's share of channel time is apportioned more equitably, and the hog's impact on other users is reduced. It is interesting that Ruckus goes a step further to ensure that MAC-layer efficiency is improved (or restored) for the rate-limited client. Does this approach result in frame sizes or transmission intervals even larger than what A-MPDU alone would yield? Also, is there any interoperability risk associated with a "non-standard" method of aggregation? Finally, do you have any figures describing the efficiency gain of this methodology, e.g. the percentage of airtime spent sending payload packets when the larger rate-limiting buffers are used versus standard frames?
Keith - Pack Leader

Bill is our CTO btw...
Bill Kish, Official Rep

The buffering I mentioned is within the rate limiting and ATF scheduler implementation. Over the air we use the standard 802.11n A-MPDU aggregation mechanism, so there is no interoperability issue.

I don't have any results showing the efficiency gain from bursty rate limiting in isolation, but here is an analytical example.

Our ATF scheduler targets about 4 ms of transmission time per station per transmission. Channel access times of 500-1000 microseconds (or more) are not uncommon in busy 2.4 GHz environments, so let's assume 750 microseconds. This gives a MAC-layer 'efficiency' of 4 ms / (4 ms + 0.75 ms) = 84%.

Now assume that 4 ms transmission contained 8 subframes. If, instead of buffering them up to send at once, we metered them out individually, each of those 8 frames would incur its own 0.75 ms access time. Since the original 8-frame aggregate took 4 ms, each individual frame would take about 0.5 ms to send (a simplification that ignores ACK time and other fixed overheads, which are relatively small). In this case the MAC-layer efficiency is 0.5 ms / (0.5 ms + 0.75 ms) = 40%.

So under these conditions we would expect a capacity speedup of 84/40 ≈ 2.1x from a bursty rate limiter compared to a naive rate limiter that tries to smooth the traffic!
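Bill's arithmetic can be reproduced in a few lines (the 4 ms burst, 0.75 ms access time, and 8 subframes are the assumptions from his example):

```python
access_ms = 0.75   # per-transmission channel access overhead
burst_ms = 4.0     # ATF scheduler's target burst duration
subframes = 8      # A-MPDU subframes in the 4 ms burst

# Bursty limiter: one access overhead per 4 ms aggregate.
eff_bursty = burst_ms / (burst_ms + access_ms)

# Naive limiter: one access overhead per individual subframe.
frame_ms = burst_ms / subframes
eff_naive = frame_ms / (frame_ms + access_ms)

speedup = eff_bursty / eff_naive
print(f"bursty: {eff_bursty:.0%}, naive: {eff_naive:.0%}, "
      f"speedup: {speedup:.1f}x")
# -> bursty: 84%, naive: 40%, speedup: 2.1x
```

Changing `access_ms` shows how the advantage grows with contention: the busier the channel, the more each avoided channel access is worth.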
RTSCTS

Hi Bill. Your analytical explanation is very clear. Again, I thank you for taking time to respond personally to this inquiry.