Time: July 20, 2015
Category: mesos, tcp/ip internals

1. Simple, classless Queueing Disciplines

1.1 pfifo_fast

This queue is, as the name says, First In, First Out, which means that no packet receives special treatment. At least, not quite: this queue has three so-called ’bands’. Within each band, FIFO rules apply. However, as long as there are packets waiting in band 0, band 1 won’t be processed; the same goes for band 1 and band 2.

The kernel honors the so called Type of Service flag of packets, and takes care to insert ’minimum delay’ packets in band 0.

Do not confuse this classless simple qdisc with the classful PRIO one! Although they behave similarly, pfifo_fast is classless and you cannot add other qdiscs to it with the tc command.
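As an illustrative check (the interface name eth0 is an assumption), you can inspect the default qdisc and its band mapping with tc:

```shell
# Show the root qdisc and its statistics on eth0 (interface name is
# an assumption). A typical default installation reports something like:
#   qdisc pfifo_fast 0: root bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
# The priomap tells the kernel which of the three bands each Type of
# Service value maps into; band 0 holds the 'minimum delay' traffic.
tc -s qdisc show dev eth0
```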

1.2 Token Bucket Filter

The Token Bucket Filter (TBF) is a simple qdisc that only passes packets arriving at a rate not exceeding an administratively set rate, with the possibility of allowing short bursts in excess of this rate.

The TBF implementation consists of a buffer (bucket), constantly filled by some virtual pieces of information called tokens, at a specific rate (token rate). The most important parameter of the bucket is its size, that is the number of tokens it can store.

Each arriving token collects one incoming data packet from the data queue and is then deleted from the bucket. Associating this algorithm with the two flows -- token and data -- gives us three possible scenarios:

  • The data arrives in TBF at a rate that’s equal to the rate of incoming tokens. In this case each incoming packet has its matching token and passes the queue without delay.
  • The data arrives in TBF at a rate that’s smaller than the token rate. Only some of the tokens are deleted at the output of each data packet that’s sent out of the queue, so tokens accumulate, up to the bucket size. The unused tokens can then be used to send data at a speed exceeding the standard token rate, in case short data bursts occur.
  • The data arrives in TBF at a rate bigger than the token rate. This means that the bucket will soon be devoid of tokens, which causes the TBF to throttle itself for a while. This is called an ’overlimit situation’. If packets keep coming in, packets will start to get dropped.

The last scenario is very important, because it allows the administrator to shape the bandwidth available to data that’s passing the filter.
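A minimal TBF configuration might look like the following (the interface name and all numbers are illustrative, not prescriptive):

```shell
# Shape eth0 egress to roughly 220 kbit/s.
# rate    - the configured token rate
# burst   - the bucket size in bytes, i.e. how much data can go out
#           at once when tokens have accumulated
# latency - the longest time a packet may wait inside the TBF before
#           it is dropped
tc qdisc add dev eth0 root tbf rate 220kbit burst 1540 latency 50ms
```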

1.3 Stochastic Fairness Queueing (SFQ)

The key word in SFQ is conversation (or flow), which mostly corresponds to a TCP session or a UDP stream. Traffic is divided into a pretty large number of FIFO queues, one for each conversation. Traffic is then sent in a round robin fashion, giving each session the chance to send data in turn.

SFQ is called ’Stochastic’ because it doesn’t really allocate a queue for each session; instead, it divides traffic over a limited number of queues using a hashing algorithm.

Because of the hash, multiple sessions might end up in the same bucket, which would halve each session’s chance of sending a packet, thus halving the effective speed available. To prevent this situation from becoming noticeable, SFQ changes its hashing algorithm quite often so that any two colliding sessions will only do so for a small number of seconds.

It is important to note that SFQ is only useful if your actual outgoing interface is really saturated! If it isn’t, there will be no queue on your Linux machine and hence no effect.
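Attaching SFQ is a one-liner; the interface name ppp0 is an assumption (SFQ is typically placed on a slow, saturated link):

```shell
# Attach SFQ as the root qdisc on a saturated link.
# perturb 10 makes SFQ reconfigure its hashing algorithm every 10
# seconds, so any two sessions that collide into the same bucket
# only do so briefly.
tc qdisc add dev ppp0 root sfq perturb 10
```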

2. Classful Queueing Disciplines

2.1 The PRIO qdisc

The PRIO qdisc doesn’t actually shape; it only subdivides traffic based on how you configured your filters.

When a packet is enqueued to the PRIO qdisc, a class is chosen based on the filter commands you gave. By default, three classes are created. These classes by default contain pure FIFO qdiscs with no internal structure, but you can replace these by any qdisc you have available.

Whenever a packet needs to be dequeued, class :1 is tried first. Higher classes are only used if the lower bands did not give up a packet.
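Replacing the per-band FIFOs is done by attaching child qdiscs to the PRIO classes. A sketch (interface name and the SFQ/TBF choices are illustrative):

```shell
# Create a PRIO root with the default three bands.
tc qdisc add dev eth0 root handle 1: prio
# Replace the plain FIFO in each band with another qdisc:
# band 0 (class 1:1) and band 2 (class 1:3) get SFQ,
# band 1 (class 1:2) gets a TBF rate limit.
tc qdisc add dev eth0 parent 1:1 handle 10: sfq
tc qdisc add dev eth0 parent 1:2 handle 20: tbf rate 20kbit burst 1600 limit 3000
tc qdisc add dev eth0 parent 1:3 handle 30: sfq
```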

2.2 The famous CBQ qdisc

CBQ works by making sure that the link is idle just long enough to bring down the real bandwidth to the configured rate. 

During operation, the effective idle time is measured using an exponentially weighted moving average (EWMA), which considers recent packets to be exponentially more important than past ones. The UNIX load average is calculated in the same way.

The calculated idle time is subtracted from the EWMA-measured one; the resulting number is called ’avgidle’.

  1. A perfectly loaded link has an avgidle of zero: packets arrive exactly once every calculated interval.
  2. An overloaded link has a negative avgidle and if it gets too negative, CBQ shuts down for a while and is then ’overlimit’.
  3. Conversely, an idle link might amass a huge avgidle, which would then allow infinite bandwidths after a few hours of silence. To prevent this, avgidle is capped at maxidle.
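A minimal CBQ configuration could be sketched as follows (interface name and all numbers are illustrative):

```shell
# Root CBQ on a 100 mbit interface, with one bounded class shaped
# to 5 mbit.
# bandwidth - the physical interface speed, which CBQ needs for its
#             idle time calculations
# avpkt     - the assumed average packet size in bytes
# bounded   - the class may not borrow bandwidth beyond its rate
tc qdisc add dev eth0 root handle 1: cbq bandwidth 100Mbit avpkt 1000 cell 8
tc class add dev eth0 parent 1: classid 1:1 cbq bandwidth 100Mbit \
  rate 5Mbit allot 1514 prio 5 avpkt 1000 bounded
```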

2.3 Hierarchical Token Bucket (HTB)

This hierarchical approach is well suited for setups where you have a fixed amount of bandwidth which you want to divide among different purposes, giving each purpose a guaranteed bandwidth, with the possibility of specifying how much bandwidth can be borrowed.

HTB works just like CBQ but does not resort to idle time calculations to shape. Instead, it is a classful Token Bucket Filter.
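The borrowing idea can be sketched with a small HTB hierarchy (interface name and all rates are illustrative):

```shell
# Divide a 6 mbit link between two child classes.
# rate - the guaranteed bandwidth of the class
# ceil - the maximum it may reach by borrowing spare bandwidth
#        from the parent
tc qdisc add dev eth0 root handle 1: htb default 30
tc class add dev eth0 parent 1:  classid 1:1  htb rate 6mbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 4mbit ceil 6mbit
tc class add dev eth0 parent 1:1 classid 1:30 htb rate 2mbit ceil 6mbit
```

When class 1:10 is idle, class 1:30 can borrow the unused 4 mbit and run at the full 6 mbit ceiling, while still being guaranteed its own 2 mbit whenever both are busy.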
