Buffer Management Algorithm Design and Implementation Based on Network Processors
📝 Original Info
- Title: Buffer Management Algorithm Design and Implementation Based on Network Processors
- ArXiv ID: 1005.0905
- Date: 2010-05-06
- Authors: Yechang Fang, Kang Yen, Deng Pan, Zhuo Sun
📝 Abstract
To solve the parameter sensitivity issue of the traditional RED (random early detection) algorithm, an adaptive buffer management algorithm called PAFD (packet adaptive fair dropping) is proposed. The algorithm supports the DiffServ (differentiated services) model of QoS (quality of service) and considers both fairness and throughput. A smooth buffer occupancy rate function is adopted to adjust the parameters. By implementing buffer management and packet scheduling on the Intel IXP2400, the viability of QoS mechanisms on NPs (network processors) is verified. Simulation shows that PAFD smooths the flow curve and achieves a better balance between fairness and network throughput. It also demonstrates that the algorithm meets the requirements of fast packet processing and achieves higher hardware resource utilization on NPs.
📄 Full Content
A queue is a storage area inside routers or switches that holds IP packets with priority levels. A queue management algorithm is the calculation method that determines the order in which the packets stored in the queue are sent; its fundamental requirement is to provide better and more timely service for high-priority packets [1]. An NP is a dedicated processing chip designed to run on high-speed networks and to process packets rapidly.
Queue management plays a significant role in controlling network transmission. It is the core mechanism for controlling network QoS and the key method for solving network congestion. Queue management consists of buffer management and packet scheduling. Generally, buffer management is applied at the front of a queue and cooperates with packet scheduling to complete the queue operation [2,3]. When a packet arrives at the front of a queue, buffer management decides whether to admit the packet into the buffer queue. Viewed another way, buffer management determines whether or not to drop the packet, so it is also known as dropping control.
The control schemes of buffer management can be analyzed at two levels: data flow and data packet. At the data flow level, viewed from the perspective of system resource management, buffer management needs to adopt a resource management scheme that allocates queue buffer resources fairly and effectively among the flows passing through the network node. At the data packet level, viewed from the perspective of packet dropping control, buffer management needs to adopt a drop control scheme that decides under what circumstances a packet should be dropped, and which packet to drop.
Considering congestion control response in an end-to-end system, the transient effects of dropping different packets may vary greatly. However, statistics of long-term operation indicate that the gap between transient effects is minimal and can be neglected in the majority of cases.
In some specific circumstances, a completely shared resource management scheme can cooperate with drop schemes such as tail-drop and head-drop to achieve effective control. In most cases, however, the interaction between the two schemes is strong, so the design of buffer management algorithms should consider both in order to obtain better control effects [4,5].
Reference [6] proposed the RED algorithm as an active queue management (AQM) mechanism [7], which was later standardized as an IETF recommendation [8]. The algorithm presented here adaptively balances congestion relief and fairness according to the cache congestion situation. Under minor congestion, it tends to drop packets fairly, ensuring that all users access system resources in proportion to their scale. Under moderate congestion, it inclines toward dropping packets of low-quality service flows, reducing their sending rate through the scheduling algorithm to alleviate congestion. Under severe congestion, it again tends to drop packets fairly, so that the upper-layer flow control mechanism meets QoS requirements and reduces the sending rate of most service flows, speeding up congestion relief.
In buffer management and packet scheduling algorithms, reserving resources in advance for service flows with better transmission conditions improves system performance. However, this makes system resources such as buffer space and bandwidth unfairly distributed, so that the QoS of service flows with poor transmission conditions cannot be guaranteed. Packet scheduling algorithms usually use generalized processor sharing (GPS) as a comparative model of fairness. In packet scheduling algorithms based on GPS, each service flow is assigned a static weight that reflects its QoS.
The weight $\phi_i$ expresses the percentage of the entire bandwidth $B$ allocated to service flow $i$. $\phi_i$ does not change with the packet scheduling algorithm and satisfies

$$\sum_{i=1}^{N} \phi_i = 1,$$

where $N$ is the number of service flows in the link. The service volume is then described by

$$\frac{S_i(t_1, t_2)}{S_j(t_1, t_2)} \geq \frac{\phi_i}{\phi_j},$$

where $i, j$ denote two different service flows and $S_i(t_1, t_2)$ is the service volume received by flow $i$ during the interval $(t_1, t_2]$. In GPS-based algorithms, the bandwidth allocation of different service flows satisfies $B_i/\phi_i = B_j/\phi_j$, where $B_i$ is the bandwidth allocated to service flow $i$. By assigning a smaller weight to an unimportant background service flow, the weight $\phi_{high}$ of a high-priority service flow becomes much larger than $\phi_{low}$, so that the majority of the bandwidth is accessed by high-priority service flows.
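As a quick illustration of the GPS sharing relation above, the sketch below (illustrative, not from the paper; `gps_allocation` and the numeric weights are assumed) computes each flow's bandwidth share $B_i = \phi_i B$:

```python
def gps_allocation(total_bw, phi):
    """GPS reference model: flow i receives B_i = phi_i * B, so B_i / phi_i
    is identical across all flows (the weights phi must sum to 1)."""
    assert abs(sum(phi) - 1.0) < 1e-9, "weights must sum to 1"
    return [p * total_bw for p in phi]

# A high-priority flow (phi = 0.7) dominates a background flow (phi = 0.1).
print(gps_allocation(100.0, [0.7, 0.2, 0.1]))  # [70.0, 20.0, 10.0]
```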
In buffer management algorithms, how to control buffer space occupation is a key issue [11]. Here we define the weighted buffer space occupation

$$\frac{C_i}{W_i},$$

where $C_i$ is the buffer space occupied by service flow $i$, and $W_i$ is the synthetic weight of flow $i$. When the cache is full, the service flow with the largest value of $C_i/W_i$ is selected for dropping in order to guarantee fairness. Here fairness is reflected across packets of flows with different queue lengths [12,13].
Assume that $u_i$ is the weight and $v_i$ is the current queue length of service flow $i$. The synthetic weight $W_i$ is calculated as

$$W_i = \alpha u_i + (1 - \alpha) v_i, \qquad (4)$$

where $\alpha$ is a parameter adjusting the proportion of the two weighting coefficients $u_i$ and $v_i$. $\alpha$ can be pre-assigned, or determined according to cache usage. $u_i$ is related to the service flow itself, and different service flows are assigned different weight values; as long as the flow is active, this factor remains unchanged. $v_i$ is time-varying and reflects the dropping situation of the current service flow.
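As an illustration with assumed values, taking $\alpha = 0.5$, $u_i = 2$, and $v_i = 6$, (4) gives $W_i = 0.5 \cdot 2 + 0.5 \cdot 6 = 4$; a larger $\alpha$ shifts $W_i$ toward the static weight $u_i$, while a smaller $\alpha$ shifts it toward the queue length $v_i$.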
Suppose a new packet T arrives; the PAFD algorithm then proceeds as follows (a code sketch follows the list):
• Step 1: Check whether the remaining cache space can accommodate packet T. If the remaining space is greater than or equal to the length of T, add T to the cache queue. Otherwise, drop packets from the cache to free enough storage space; which packets are dropped is decided in the following steps.
• Step 2: Calculate the weighting coefficients u and v for each service flow, and the parameter α. Then compute the new synthetic weight W for each flow according to (4).
• Step 3: Select the service flow with the largest weighted buffer space occupation C_i/W_i. If the service flow associated with packet T is the one selected, drop T with probability P and return. Otherwise, drop the head packet of the selected flow with probability 1−P, and add T to the cache queue. Here P is a random number generated by the system to ensure the smoothness and stability of the process.
• Step 4: Check whether the remaining space can accommodate another new packet. If it can, the packet is transmitted into the cache; otherwise, return to Step 3 and continue choosing and dropping packets until there is sufficient space.
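The following is a minimal Python sketch of Steps 1-4 under simplifying assumptions: the cache is byte-counted, the probability-P branch is applied only when the arriving flow is itself the drop candidate, and all names (`Flow`, `pafd_enqueue`, `p_drop`) are ours rather than the paper's:

```python
import random
from collections import deque

class Flow:
    """Per-flow state for the PAFD sketch."""
    def __init__(self, u):
        self.u = u            # static weight u_i, fixed while the flow is active
        self.queue = deque()  # queued packet lengths (bytes)
        self.occupancy = 0    # C_i: bytes of cache held by this flow

def synthetic_weight(flow, alpha):
    # W_i = alpha * u_i + (1 - alpha) * v_i, with v_i the queue length (Eq. 4)
    return alpha * flow.u + (1 - alpha) * len(flow.queue)

def pafd_enqueue(flows, arriving, pkt_len, cache_free, alpha, p_drop):
    """Sketch of PAFD Steps 1-4; returns the updated free cache space."""
    while cache_free < pkt_len:                   # Step 1: no room for T yet
        # Steps 2-3: flow with the largest weighted occupation C_i / W_i
        victim = max((f for f in flows if f.queue),
                     key=lambda f: f.occupancy / max(synthetic_weight(f, alpha), 1e-9))
        if victim is arriving and random.random() < p_drop:
            return cache_free                     # drop T itself and return
        head = victim.queue.popleft()             # drop the victim's head packet
        victim.occupancy -= head
        cache_free += head
    arriving.queue.append(pkt_len)                # Step 4: admit T into the cache
    arriving.occupancy += pkt_len
    return cache_free - pkt_len
```

In a real deployment, the per-flow weights u and the probability P would come from each flow's QoS class and the system's random source, as described above.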
If all packet lengths are the same, the algorithm needs only one pass to compare and select the service flow with the largest weighted buffer space occupation, so its time complexity is O(N). In this case we also need 4N additional storage units for the weights.
Taking into account the limited capacity of wireless networks, in Step 2 the parameter α, a function of the shared buffer, adjusts the proportion of the two weighting coefficients u and v. For a large value of α, the PAFD algorithm tends to select and drop packets fairly according to the synthetic weight W; otherwise, it tends to select and drop from the service flow with the largest queue length. A reasonable value of α balances fairness and performance. Here we introduce an adaptive method that determines the value of α based on the congestion situation of the cache, without manual intervention.
When there is minor congestion, it can be relieved by reducing the sending rate of a small number of service flows. The number of service flows in wireless network nodes is not as large as in wired networks, so minor congestion can be relieved by reducing the sending rate of any one service flow. We want this choice to be fair, ensuring that all users access system resources according to their weights.
When there is moderate congestion, it cannot be relieved by reducing the sending rate of just any one service flow, and reducing the rate of different service flows produces different results. We therefore want to reduce the rate of the service flows that are most effective in relieving congestion, namely the flow whose current queue length is the longest (this flow has also occupied the cache the longest). This not only improves system throughput but also speeds up congestion relief.
When there is severe congestion, reducing the sending rate of a small portion of the service flows obviously cannot relieve it; the rates of many service flows may need to be reduced. Since TCP follows additive increase, multiplicative decrease (AIMD), continuously dropping packets from one service flow to reduce its sending rate would adversely affect that TCP flow's performance while contributing less and less to congestion relief. We therefore gradually increase the value of the parameter, and the algorithm again chooses flows to drop from fairly. On one hand, this fairness brings the same benefits as in a mildly congested system; on the other, it avoids continuously dropping from the longest-queue service flow.
Congestion is measured by the system buffer space occupation rate. α is a parameter relevant to the system congestion status, with a value between 0 and 1. Let Buffer_cur denote the current buffer space occupation rate, and let Buffer_min, Buffer_medium, and Buffer_max represent the threshold occupation rates for minor, moderate, and severe congestion, respectively. When Buffer_cur approaches Buffer_min, the system enters a state of minor congestion; when Buffer_cur reaches Buffer_max, the system is in severe congestion; Buffer_medium corresponds to moderate congestion. If α were set by a linear approach, the system would oscillate dramatically.
Instead, we use a high-order nonlinear or exponential decay function to obtain a smooth curve of α, as shown in Figure 1.
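The exact curve of Figure 1 is not recoverable from the text; the sketch below assumes a smooth, U-shaped mapping (fair dropping at minor and severe congestion, queue-length-based dropping around moderate congestion) and uses hypothetical threshold values:

```python
import math

# Hypothetical occupancy thresholds (fractions of the buffer); the paper's
# actual Buffer_min / Buffer_medium / Buffer_max values are not given.
BUF_MIN, BUF_MEDIUM, BUF_MAX = 0.3, 0.6, 0.9

def adaptive_alpha(buffer_cur):
    """Smooth alpha(Buffer_cur): close to 1 (fair dropping) near Buffer_min
    and Buffer_max, dipping toward 0 (queue-length-based dropping) around
    Buffer_medium. A Gaussian-style dip is one smooth choice consistent
    with the 'high-order nonlinear' curve described for Figure 1."""
    if buffer_cur <= BUF_MIN:
        return 1.0                       # no or minor congestion: fully fair
    width = (BUF_MAX - BUF_MIN) / 4.0    # controls how sharply alpha dips
    return 1.0 - math.exp(-((buffer_cur - BUF_MEDIUM) / width) ** 2)
```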
The value of α can also be calculated directly from Buffer_cur. In the DiffServ model, we retain the implementation process of PAFD and only modify (4) by introducing a new parameter β, which adjusts the fairness among service flows of different service levels. As mentioned above, the value of parameter α can be set differently from the curve shown in Figure 1 to satisfy different requirements. α is the parameter that balances fairness and transmission conditions. For high-priority services, the curve in Figure 1 is reasonable: fairness is needed both to guarantee the QoS of different service flows and to relieve congestion quickly. For low-priority services, which have no delay constraints and weaker fairness requirements, a higher throughput is more practical. Therefore the value of α for low-priority services is set slightly less than that for high-priority services, as shown in Figure 2.
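To reflect the per-class α curves of Figure 2, one simple (assumed) realization offsets the low-priority curve slightly below the high-priority one; the 0.1 offset is illustrative only:

```python
def alpha_for_class(buffer_cur, high_priority=True):
    """Low-priority flows use a slightly smaller alpha, i.e. a more
    throughput-oriented (queue-length-based) dropping policy."""
    a = adaptive_alpha(buffer_cur)  # the Figure 1 style curve sketched above
    return a if high_priority else max(0.0, a - 0.1)
```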
We compare the PAFD algorithm with two commonly used buffer management algorithms, RED and tail drop (TD).
Figure 4 shows that all the algorithms have similar throughput under low network load. As the load increases, the throughput of BCF is higher than that of the other scheduling algorithms, and PAFD-BCF provides significantly higher throughput than the rest. PAFD neither drops packets randomly nor simply tail-drops; it fully considers fairness and transmission conditions. Service flows under poor transmission conditions thus receive a higher packet-dropping probability and keep relatively short virtual queues. When BCF works with PAFD, service flows under better channel conditions are given higher priority, yielding higher effective throughput. Both TD and RED use a shared cache instead of per-flow queueing, so they fail to consider fairness. Here the fairness index F is given by

$$F = \frac{\left( \sum_{i=1}^{N} G_i / W_i \right)^2}{N \sum_{i=1}^{N} \left( G_i / W_i \right)^2}, \qquad (7)$$

where $G_i$ is the effective throughput of service flow $i$, and $N$ is the total number of service flows. It is not difficult to prove that F ∈ (0, 1]. A larger value of F means better system fairness; when F equals 1, the allocation of system resources is completely fair. We can use (7) to calculate the fairness index and compare the fairness of different algorithms. In the ON-OFF model, assume there are 16 service flows and the ON average rate of flows 1-8 is twice that of flows 9-16; that is, W_i : W_j = 2 : 1, where i ∈ [1,8] and j ∈ [9,16]. Using round robin algorithms without considering W, the reference value of the fairness index is F = 0.9. The table indicates that the fairness index of BCF is lower when combined with TD or RED. Since PAFD takes fairness into consideration, its fairness index is higher than that of TD under congestion. The combination of PAFD and LQF gives higher throughput and a fairer distribution of cache and bandwidth resources. By changing the value of parameter α, we can conveniently balance system performance and fairness as required.
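Equation (7) is the weighted Jain fairness index; the short check below reproduces the reference value F = 0.9 for the ON-OFF example (16 flows, weights 2:1, equal per-flow throughput under weight-blind round robin):

```python
def fairness_index(G, W):
    """F = (sum_i G_i/W_i)^2 / (N * sum_i (G_i/W_i)^2), Eq. (7)."""
    g = [gi / wi for gi, wi in zip(G, W)]
    n = len(g)
    return sum(g) ** 2 / (n * sum(x * x for x in g))

W = [2.0] * 8 + [1.0] * 8    # flows 1-8 carry twice the weight of flows 9-16
G = [1.0] * 16               # round robin ignoring W: equal throughputs
print(fairness_index(G, W))  # 0.9
```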
In this section we adopt the same environment as above. The dequeuing operation is similar to the enqueuing operation. To maintain system performance, the microengine threads of the NP must operate in strict accordance with a predetermined sequence, controlled by internal thread semaphores. When a queue changes from empty to non-empty in an enqueuing operation, or from non-empty to empty in a dequeuing operation, the PAFD buffer manager sends a message to the packet scheduling module through the adjacent ring.
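As a Python stand-in (not IXP2400 microcode; the message tuples and the ring object are assumed), the transition-triggered signaling between buffer manager and scheduler looks like this:

```python
from collections import deque
from queue import Queue

def on_enqueue(q: deque, queue_id: int, ring: Queue, pkt) -> None:
    """Enqueue and notify the scheduler on an empty -> non-empty transition."""
    was_empty = not q
    q.append(pkt)
    if was_empty:
        ring.put(("non_empty", queue_id))  # scheduler adds queue to its active set

def on_dequeue(q: deque, queue_id: int, ring: Queue):
    """Dequeue and notify the scheduler on a non-empty -> empty transition."""
    pkt = q.popleft()
    if not q:
        ring.put(("empty", queue_id))      # scheduler removes queue from active set
    return pkt
```

Signaling only on transitions, rather than per packet, keeps the message rate on the ring low while still letting the scheduler maintain a consistent view of which queues are active.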
The future network has two development requirements: high-speed bandwidth and service diversification. Research on buffer management algorithms can address both. In the future, buffer management will become more complex, and the requirements for NPs and other hardware will be more stringent. It is therefore important to consider the comprehensive performance of an algorithm while pursuing simplicity and ease of implementation.