Among the tools for ensuring quality of service in data networks, policing and shaping are probably the most frequently used. Your ISP most likely limits your speed with exactly these mechanisms.

Quality of service is not the easiest topic to understand, and if you have ever looked into policers and shapers, you have most likely seen the same kind of graphs of speed versus time, heard the terms “bucket”, “tokens” and “burst”, and maybe even seen formulas for calculating some of the parameters. A good and typical example is in SDSM, in the chapter on QoS and rate limiting.

In this article, we will try to go a slightly different way, relying on the Cisco tutorial, RFC 2697 and RFC 2698, and on the most basic concepts.

The first thing to understand, and what the entire rate-control mechanism is built on, is the concept of speed itself. Speed is a derived, calculated value; we never observe it directly anywhere. Devices operate only on data and its quantity. We talk about speed in the context of observation and monitoring: knowing the amount of data transmitted over 5 minutes or over 5 seconds, we get different values of the average speed.

Second, the amount of data an interface can pass per unit of time is constant, absolute. It can be neither reduced nor increased. Over a 100 Mbit/s interface, 90 Mbit will always pass in 0.9 seconds, and the interface will stay idle for the remaining 0.1 second. But since speed is a calculated value, we say the data was transmitted at an average speed of 90 Mbit/s. This is a bit like road traffic: at any instant we have either 100% utilization or complete idleness. In the context of network traffic, the load on an interface is the share of the measured interval that was not spent idle. Below we keep using Mbit and seconds for clarity, although the particular units do not matter.
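
A minimal sketch (with hypothetical numbers, not taken from the article) of this point: the interface is either sending at line rate or idle, and the "speed" we report is just the same amount of data divided by whatever window we choose to average over.

```python
# Speed is only an average: the same 90 Mbit gives different "speeds"
# depending on the measurement window.

LINE_RATE = 100e6          # interface line rate, bit/s
data_sent = 90e6           # bits actually transmitted

busy_time = data_sent / LINE_RATE      # 0.9 s of transmission at line rate
print(f"busy for {busy_time} s, idle for the rest of the second")

for window in (0.9, 1.0, 5.0):         # different measurement intervals, s
    avg = data_sent / window
    print(f"window = {window} s -> average rate {avg / 1e6:.1f} Mbit/s")
# 0.9 s -> 100.0 Mbit/s, 1.0 s -> 90.0 Mbit/s, 5.0 s -> 18.0 Mbit/s
```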

This leads to the main task and the way to limit traffic: transfer no more than a given amount of data per unit of time. If we have a 100 Mbit/s interface and want to limit the speed to 50 Mbit/s, then within a given second we must transfer no more than 50 Mbit and postpone the rest to the next second. However, we have only one way to do this: turn the interface, which always works at a constant speed, on or off. The only choice is how often we turn it on and off.

Burst


Let's look at a speed graph straight out of a school physics textbook: the amount of transmitted data on the Y axis versus time on the X axis. The steeper the slope of the line, the higher the speed. There are several ways to transfer 50 Mbit in one second:

[Figure: amount of transmitted data vs. time for different ways of sending 50 Mbit per second]

  1. Transfer everything at once in 0.5 seconds and then wait until the next second ( solid green )
  2. Transmit in smaller portions more often, making the pauses shorter but more numerous

This leads us to the concept of burst, the maximum amount of data transmitted continuously. In practice it is measured in bytes rather than bits, which we use here for simplicity. Burst is inextricably linked with time: the larger the burst, the longer the pause between transmissions: time = burst / CIR, where CIR, Committed Information Rate, is the limit we impose ( red dots ), usually measured in bits per second.

If burst = 50 Mbit and CIR = 50 Mbit/s, the interval is 1 second: 50 / 50 = 1 s. So every second we may transfer no more than 50 Mbit. Since the interface speed is 100 Mbit/s, those 50 Mbit are transferred in 0.5 seconds and the interface is idle for the rest of the time. From the start of the next second, we can again transfer 50 Mbit.

With burst = 25 Mbit we get 25/50 = 0.5 seconds between transmissions of 25 Mbit each. Given the interface speed, 25/100 = 0.25 seconds are spent transmitting 25 Mbit and the interface is idle for the next 0.25 seconds. In both cases we spend 1/2 of the period on transmission and 1/2 on idle time. If we raise the CIR to 75 Mbit/s, then 75/100 = 3/4 of the period goes to transmission and 1/4 to the pause.
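
The small sketch below just restates these relations in code: the interval between transmission starts is burst / CIR, and the share of that interval spent actually sending is determined by the line rate (the line rate of 100 Mbit/s is the same assumption used throughout the article).

```python
# Period between transmissions and busy/idle split for a given burst and CIR.

def schedule(burst_mbit: float, cir_mbps: float, line_rate_mbps: float = 100.0):
    period = burst_mbit / cir_mbps            # seconds between transmission starts
    tx_time = burst_mbit / line_rate_mbps     # seconds spent sending one burst
    return period, tx_time, period - tx_time  # period, busy, idle

for burst, cir in ((50, 50), (25, 50), (5, 50)):
    period, busy, idle = schedule(burst, cir)
    print(f"burst={burst} Mbit, CIR={cir} Mbit/s: "
          f"period={period} s, busy={busy} s, idle={idle} s")
# burst=50 -> period 1.0 s, busy 0.5 s, idle 0.5 s
# burst=25 -> period 0.5 s, busy 0.25 s, idle 0.25 s
# burst=5  -> period 0.1 s, busy 0.05 s, idle 0.05 s
```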

Note that the slope of the lines showing the amount of transmitted data is always the same, because the interface speed is constant ( blue dots ) and we physically cannot transmit at any other speed.

In most cases it is burst that is specified in equipment configuration, although time intervals can be used as well. The differences are clearly visible on the graph: a smaller burst follows the configured limit more strictly, the graph does not stray far from the CIR even over a shorter measured interval, and the pauses between transmissions stay short. A larger burst does not limit traffic over short stretches: if you measured the speed only over the first 0.5 seconds, you would get 50/0.5 = 100 Mbit/s. And a long pause after such a transfer can adversely affect traffic-control mechanisms beyond our device, or even break a logical connection.

To get closer to reality: network traffic is, as a rule, not transmitted continuously, but has different intensity at different points in time ( green dotted line ):

[Figure: bursty source traffic (green dotted line) against limits with different burst sizes]

This graph shows that within 1 second we want to transfer 55 Mbit with a limit of 50 Mbit/s. In other words, the real traffic barely exceeds the boundary we set, measured over the whole interval. Yet the limiting mechanisms cause less data to be transferred than we would expect. Here the larger burst looks better, because it covers the intervals where traffic is actually being transmitted, while the smaller burst, with its attempt to limit traffic strictly over every short stretch, results in large losses.

Shaper


Let's get even closer to reality, where there is always a buffer for data awaiting transmission. Since the interface is either 100% busy or idle, and data can arrive from several sources simultaneously faster than the interface can send it, even a simple buffer forms a queue and lets data wait for its moment of transmission. It also compensates for the losses we would otherwise incur because of the limits we introduced:

[Figure: Policer vs. Shaper with a buffer]

Policer is the Burst 5 graph from the previous image. Shaper is the same Burst 5 graph, but with a buffer in which data that did not make it in time is held and transmitted a little later.

As a result, we fully met our requirement to limit traffic, “smoothing” the source peaks without losing data. The traffic still has a pulsating shape of alternating transmission periods and pauses, because we cannot affect the interface speed and only control the amount of data transferred at a time. This is the same shaper-versus-policer comparison as in the SDSM QoS chapter, just seen from the other side:

[Figure: shaper vs. policer comparison and buffer occupancy]

At what cost did we achieve this? At the cost of a buffer, which cannot be infinite and which adds delay to data transmission. The peak on the buffer graph is 15 Mbit: this is the data that the policer would lose but the shaper merely delays. With a limit of 50 Mbit/s, that is 15/50 = 0.3 s, or 300 milliseconds of delay, which for many network applications is already unacceptable.

Now let's see when this price starts to matter; a slightly higher traffic intensity is enough: 60 Mbit/s with a limit of 50 Mbit/s:

[Figure: shaper vs. policer at 60 Mbit/s offered traffic and a 50 Mbit/s limit]

The amount of data transferred by the shaper and the policer is the same. The policer, of course, drops data, while the shaper stores it in the buffer, whose occupancy keeps growing, which means the delay keeps growing. When the buffer space runs out, the shaper will also start losing data, just later, offset by the size of the buffer.
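
A rough slot-by-slot sketch (the per-slot numbers are hypothetical, not taken from the article) of this situation: with 60 Mbit/s offered against a 50 Mbit/s limit, the policer drops the excess in every slot, while the shaper forwards the same amount but its backlog, and therefore its delay, keeps growing.

```python
# Policer drops excess per slot; shaper queues it and drains at the limit.

def policer(arrivals, quota):
    return [min(a, quota) for a in arrivals]          # excess is simply lost

def shaper(arrivals, quota):
    sent, backlog = [], 0.0
    for a in arrivals:
        backlog += a                                  # queue what arrived
        out = min(backlog, quota)                     # drain at the limited rate
        backlog -= out
        sent.append(out)
    return sent, backlog

arrivals = [6] * 10      # 6 Mbit per 0.1 s slot = 60 Mbit/s offered
quota = 5                # 5 Mbit per 0.1 s slot = 50 Mbit/s limit

print(sum(policer(arrivals, quota)), "Mbit forwarded by the policer, the rest dropped")
sent, backlog = shaper(arrivals, quota)
print(sum(sent), "Mbit forwarded by the shaper,", backlog, "Mbit still queued (growing delay)")
# Both forward 50 Mbit; the shaper ends the second with 10 Mbit in the buffer.
```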

So when choosing between a shaper and a policer, start from how critical the additional delay is, which grows with the speed and the intensity of the traffic. The alternative is to sacrifice some data and lose it, keeping in mind that at the next logical level there will almost certainly be mechanisms that restore the integrity of the transmission and react to congestion and loss.

Bucket


To account for the volume of traffic transmitted through the interface, the concept and term bucket is used. In essence, it is a counter that runs from the maximum burst value down to 0, decreasing with the transmission of each data quantum, a token. Accordingly, there are two processes: one fills the bucket, the other empties it.

The bucket is refilled to the burst value at a fixed interval determined by the CIR: for burst 5 and CIR 50, every 0.1 second, as calculated earlier. But the traffic during that interval may be smaller than the burst we configured, since the condition is “no more than”. The counter then does not reach 0 and tokens remain in the bucket. At the next refill, this unused amount of data (tokens) is simply lost.
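
A minimal sketch of this single bucket, under the same assumptions as above (periodic refill every burst / CIR seconds, leftover tokens discarded at refill); the class and its method names are purely illustrative.

```python
# A counter from burst down to 0: decremented per transmitted quantum,
# reset to burst at every refill interval.

class SimpleBucket:
    def __init__(self, burst_mbit: float, cir_mbps: float):
        self.burst = burst_mbit
        self.interval = burst_mbit / cir_mbps   # refill period, seconds
        self.tokens = burst_mbit

    def refill(self):
        # unused tokens from the previous interval are simply lost
        self.tokens = self.burst

    def try_send(self, size_mbit: float) -> bool:
        if size_mbit <= self.tokens:
            self.tokens -= size_mbit
            return True
        return False

bucket = SimpleBucket(burst_mbit=5, cir_mbps=50)   # refilled every 0.1 s
print(bucket.try_send(3), bucket.try_send(3))      # True, False: only 5 Mbit per interval
bucket.refill()
print(bucket.try_send(3))                          # True again after the refill
```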

This situation is visible on the Policer graph above: every 0.05 seconds we could transfer 5 Mbit at the interface speed, but we only have 3 Mbit of data, since the arrival rate is just 60 Mbit/s. That is why the graph almost merges with the CIR line, which is not entirely accurate. The transfer is still carried out at the interface speed: 3 Mbit are transmitted in 0.03 seconds and the remaining 0.02 seconds are a pause. This would give the characteristic staircase that we see on the Shaper graph. What is shown here is really the average speed with the measurement granularity smoothed out, which is what monitoring systems usually display, operating not even in seconds but in minutes.

Let's improve the approach, knowing that traffic is bursty and a larger burst can help avoid losing data. We introduce a second bucket, into which we put the tokens left unused in the previous interval. Thus, when the source sends no traffic, the idle time is partially compensated, as if we had a larger burst. Each bucket has its own burst size: for the main one, Committed Burst (CBS, Bc), and for the second one, Excess Burst (EBS, Be). The maximum amount of data that can be transmitted continuously is therefore CBS + EBS.

[Figure: Shaper vs. Shaper Exs with two buckets (CBS + EBS)]

Shaper Exs ( yellow ) is the graph with two buckets, each with a burst of 5 Mbit; Shaper is the graph from the previous image. Now the maximum burst is CBS + EBS = 10 Mbit, and we use it during the first 0.1 second. The main bucket is refilled every 5/50 = 0.1 second, so at time 0.1 we can transmit again and the period of continuous transmission lasts 0.15 seconds. After a long idle period, when there was no traffic from the source and we had sent everything from the buffer, at time 0.6 seconds the unused amount of data goes into the second bucket. This again allows 0.15 seconds of continuous transmission, which removes the need for a buffer entirely. As a result, we get a good compromise: accurate rate limiting under heavy traffic and leniency towards bursts thanks to the larger effective burst.

Let's make one more improvement, this time concerning time itself. We get rid of the periodic refill process and replace it with a refill only at the moments when data actually arrives. In most cases, nothing smaller than a whole packet is handled at a time, so we can calculate the interval between consecutive packet arrivals and add to the bucket the amount of data corresponding to that interval. This, first, removes the need to keep a separate timer tied to the burst period and, second, shortens the gaps between possible transmission opportunities.

Burst, as the maximum amount of continuously transmitted data, is thus decoupled from the amount of data added at each refill, although the formula stays the same: time = burst / CIR. But we can never fill the bucket beyond its maximum burst volume; this cap is exactly what enforces the limit. The packet size also defines the smallest possible burst, the minimum packet size of the given network technology: if the burst is smaller, a whole packet can never be sent within the allotted interval at the given interface speed.
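
A sketch of this timer-less variant, under the assumptions stated above (tokens are added on each packet arrival in proportion to the elapsed time, capped at the burst size); the class name and the sample packet timings are hypothetical.

```python
# Refill on arrival: tokens += elapsed * CIR, never above burst.

class ArrivalRefillBucket:
    def __init__(self, burst_mbit: float, cir_mbps: float):
        self.burst = burst_mbit
        self.cir = cir_mbps
        self.tokens = burst_mbit
        self.last_arrival = 0.0

    def packet(self, now: float, size_mbit: float) -> bool:
        elapsed = now - self.last_arrival
        self.last_arrival = now
        # top up in proportion to elapsed time, but never beyond the burst --
        # this cap is what actually enforces the rate limit
        self.tokens = min(self.burst, self.tokens + elapsed * self.cir)
        if size_mbit <= self.tokens:
            self.tokens -= size_mbit
            return True
        return False

b = ArrivalRefillBucket(burst_mbit=5, cir_mbps=50)
for t, size in [(0.05, 5), (0.10, 5), (0.15, 5)]:   # hypothetical 5 Mbit packets
    print(t, b.packet(t, size))
# 0.05 True, 0.10 False (only 2.5 Mbit refilled), 0.15 True
```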

Policer: 1 rate, 2 buckets, 3 colors


So far we have mostly talked about shapers, although the concepts and terms are similar to those used for policers. A policer, however, as defined in RFC 2697, is not a traffic-limiting mechanism but a classification mechanism. Each passing packet, according to the configured CIR, CBS and EBS, is assigned one of the categories (colors): conform (green), exceed (yellow), violate (red). On real devices you can configure right away in which cases traffic should be dropped, but in the general case the purpose is exactly this marking, or coloring.

For each packet, the following algorithm is applied:

  1. If the packet size does not exceed the number of tokens in the first bucket, the packet is marked green and a number of tokens equal to the packet size is subtracted from the first bucket; otherwise
  2. If the packet size does not exceed the number of tokens in the second, Excess bucket, the packet is marked yellow and a number of tokens equal to the packet size is subtracted from the second bucket; otherwise
  3. The packet is marked red and nothing is subtracted from anywhere.

When only one bucket is used, the second step is skipped. A small sketch of this marking logic is given below.
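
The sketch shows only the per-packet marking decision from the list above (in the spirit of the single-rate, three-color marker of RFC 2697); the refill of both buckets at the CIR is assumed to happen elsewhere, and the packet sizes are the same illustrative 5 Mbit used below.

```python
# Single rate, two buckets, three colors: green from bucket C,
# yellow from bucket E, red deducts nothing.

def mark_packet(size, bucket_c, bucket_e):
    """Return (color, new_bucket_c, new_bucket_e) for one packet."""
    if size <= bucket_c:                       # fits the committed bucket
        return "green", bucket_c - size, bucket_e
    if size <= bucket_e:                       # fits the excess bucket
        return "yellow", bucket_c, bucket_e - size
    return "red", bucket_c, bucket_e           # nothing is deducted

# three 5 Mbit packets against full buckets, CBS = 5, EBS = 5, no refill
c, e = 5.0, 5.0
for size in (5, 5, 5):
    color, c, e = mark_packet(size, c, e)
    print(size, color, c, e)
# 5 green 0.0 5.0 / 5 yellow 0.0 0.0 / 5 red 0.0 0.0
```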

[Figure: policer marking with Bucket C and Bucket E token levels over time]

We use the same parameters as before: CIR = 50, CBS = 5, EBS = 5. The number of tokens in the buckets is now shown separately: the main Bucket C ( cyan ) and the additional Bucket E ( violet ). The input is no longer a continuous bit stream but packets of 5 Mbit each. This is not entirely realistic; in general, traffic consists of packets of different sizes arriving at irregular intervals, and that can change the picture considerably, but it makes the basic principles easy to demonstrate and calculate. The refill of the bucket on each packet arrival is also shown.

In the first 0.05 seconds we transmit a 5 Mbit packet, emptying the main bucket. When the second packet arrives we refill it, but only by 2.5 Mbit, which corresponds to the CIR: 0.05 * 50. These tokens are not enough to send the next 5 Mbit packet, so we empty the second bucket instead and the packet gets a different color. Another 0.05 seconds later the next packet arrives, we again add 2.5 Mbit to the main bucket, and now there are enough tokens for the green category. The following packet, even though the bucket is refilled again, does not find enough tokens and falls into the red category. The solid red graph shows the result if only packets marked red are dropped.

During the idle period the buckets are not refilled, unlike in the previous graph; but at time 0.6, when the next data arrives, the interval between packets is computed: 0.6 - 0.3 = 0.3 seconds, so we have 0.3 * 50 = 15 Mbit with which to refill the main bucket. Its maximum volume is CBS = 5 Mbit; the second bucket is filled from the remainder, also up to EBS = 5 Mbit. The remaining 5 Mbit are simply discarded, so traffic with very long pauses is still limited, preventing the situation where an hour of inactivity becomes an hour without restrictions.

As a result, the graph has 6 green sections, that is 30 Mbit transmitted in a second and an average speed of 30 Mbit/s, which is what a single bucket and two colors would give. There are 3 yellow sections, which together with the green ones gives 45 Mbit/s, and with the red sections included, 55 Mbit/s: two buckets, three colors.

2 rates, 3 colors


There is another approach, RFC 2698, which introduces a peak rate parameter, PIR (Peak Information Rate). Two buckets are used here as well, but each is filled independently of the other, one according to the CIR, the other according to the PIR:

  1. Bucket CIR: time = CBS / CIR
  2. Bucket PIR: time = PBS / PIR

Thus, if the PIR is greater than the CIR, one bucket fills faster than the other.

Traffic, as in the previous case, is classified into 3 categories as follows:

  1. If the packet size does not exceed the number of tokens in the CIR bucket, the packet is marked green and a number of tokens equal to the packet size is subtracted from both buckets ; otherwise
  2. If the packet size does not exceed the number of tokens in the PIR bucket, the packet is marked yellow and tokens are subtracted only from the PIR bucket; otherwise
  3. The packet is marked red and nothing is subtracted from anywhere.

The main idea of this approach is to use two independent limits: if we fit within both, the packet is green; if only within one, yellow; if within neither, red. A sketch of this two-rate marking follows below.
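
The sketch below covers only the marking decision just described (in the spirit of the two-rate, three-color marker of RFC 2698): green when the packet fits both buckets and is deducted from both, yellow when it fits only the PIR bucket. The independent refill of the two buckets at CIR and PIR is assumed to happen elsewhere, and the packet sizes are illustrative.

```python
# Two rates, three colors: two independent buckets, green consumes from both,
# yellow only from the PIR bucket, red consumes nothing.

def mark_two_rate(size, bucket_cir, bucket_pir):
    """Return (color, new_bucket_cir, new_bucket_pir) for one packet."""
    if size <= bucket_cir and size <= bucket_pir:      # fits both limits
        return "green", bucket_cir - size, bucket_pir - size
    if size <= bucket_pir:                             # fits only the peak limit
        return "yellow", bucket_cir, bucket_pir - size
    return "red", bucket_cir, bucket_pir

# three 5 Mbit packets against full buckets, CBS = 5, PBS = 10, no refill
c, p = 5.0, 10.0
for size in (5, 5, 5):
    color, c, p = mark_two_rate(size, c, p)
    print(size, color, c, p)
# 5 green 0.0 5.0 / 5 yellow 0.0 0.0 / 5 red 0.0 0.0
```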

Recall why we needed the second bucket and a larger burst: to compensate for idle periods by being less strict over a longer interval. The two-condition approach gives us the same possibility. Let's set PIR and CIR both to 50 Mbit/s, the first bucket size (CBS) to 5 and the second (PBS) to 10. Why 10? Because this is an independent constraint, which brings us back to the very first graph showing the effect of burst size. In other words, we want a result in between burst 5 and burst 10, and we state these conditions directly.

[Figure: two-rate policer with CIR = PIR = 50 Mbit/s, CBS = 5, PBS = 10]

We get the same policer graph as with the previous method. With burst = 10 we gain freedom, and with the second condition, burst = 5, we add accuracy. Note how the buckets behave, each on its own.

These are two separate conditions evaluated against the same input. The stricter one classifies traffic that is guaranteed to fit within it, while the less strict one widens those boundaries. With equal CIR and PIR, this method is interchangeable with the previous one.

By setting the PIR rate as well, we add another degree of freedom. You can separately test different bursts for different traffic profiles with different CIRs, and then combine everything using this method:

[Figure: two-rate policer with CIR = 50, PIR = 75, CBS = 5, PBS = 7.5]

CIR = 50, PIR = 75, CBS = 5, PBS = 7.5. CBS and PBS are chosen to correspond to the same time interval. You could, of course, start from other considerations, for example make the burst for PIR smaller to strengthen the guarantee of not exceeding the set boundary, and the burst for CIR, on the contrary, more lenient.

In principle, with heavy traffic there is no reason to set a small burst in any case, for either of the buckets. A fraction of a second and a few extra megabits will not change the picture over long intervals, but frequent drops caused by a small burst can break flow-regulation mechanisms that are invisible to us. With 80 Mbit/s of offered traffic, only 40 Mbit/s ends up in the green zone; with yellow included it is 60 Mbit/s, which lands between the CIR and the PIR. Once again, the limiting mechanisms try to guarantee that the upper boundaries are not exceeded; they know nothing about lower ones. And as the examples show, the resulting traffic is always below the configured limits, sometimes well below, even when the traffic would have fit within them on its own.

The approaches described above were defined in RFCs more than 20 years ago, yet they are in no hurry to become obsolete and are often used precisely as a limiting tool that degrades quality rather than as a classifier or a compensating mechanism. Even in the most modern implementations you will surely encounter, if not these exact algorithms, then certainly these terms, and of course the subtleties of applying the concept of speed in data networks. Perhaps with one more article on the subject, sorting it all out will be a little easier.

Source