To solve the parameter sensitivity issue of the traditional RED (random early detection) algorithm, an adaptive buffer management algorithm called PAFD (packet adaptive fair dropping) is proposed. The algorithm supports the DiffServ (differentiated services) model of QoS (quality of service) and considers both fairness and throughput, adopting a smooth buffer occupancy rate function to adjust its parameters. By implementing buffer management and packet scheduling on the Intel IXP2400, the viability of QoS mechanisms on NPs (network processors) is verified. Simulations show that PAFD smooths the flow curve and achieves a better balance between fairness and network throughput. They also demonstrate that the algorithm meets the requirements of fast packet processing and achieves higher hardware resource utilization on NPs.
Deep Dive into Buffer Management Algorithm Design and Implementation Based on Network Processors
Network information is transmitted in the form of data flows, which consist of data packets. Therefore, different QoS levels mean different treatment of data flows, and this treatment involves assigning different priorities to data packets.
A queue is a storage area inside routers or switches that holds IP packets according to their priority level. A queue management algorithm is the method that determines the order in which packets stored in the queue are sent; its fundamental requirement is to provide better and more timely service for high-priority packets [1]. The NP is a dedicated processing chip designed for high-speed networks that achieves rapid packet processing.
Queue management plays a significant role in the control of network transmission. It is the core mechanism for controlling network QoS, and also the key method for solving the network congestion problem. Queue management consists of buffer management and packet scheduling. Generally, buffer management is applied at the front of a queue and cooperates with packet scheduling to complete the queue operation [2,3]. When a packet arrives at the front of a queue, the buffer management decides whether to admit the packet into the buffer queue. Viewed another way, buffer management determines whether to drop the packet, so it is also known as dropping control.
The control schemes of buffer management can be analyzed at two levels: data flow and data packet. At the data flow level, viewed from the aspect of system resource management, buffer management needs to adopt resource management schemes that allocate queue buffer resources fairly and effectively among the flows passing through a network node. At the data packet level, viewed from the aspect of packet dropping control, buffer management needs to adopt drop control schemes that decide under what circumstances a packet should be dropped, and which packet to drop.
Considering the congestion control response in an end-to-end system, the transient effects of dropping different packets may vary greatly. However, statistics from long-term operation indicate that the gap between these transient effects is minimal and can be neglected in the majority of cases.
In some specific circumstances, a completely shared resource management scheme can cooperate with drop schemes such as tail-drop and head-drop to achieve effective control. In most cases, however, the interaction between the two schemes is significant, so the design of buffer management algorithms should consider both in order to obtain better control effects [4,5].
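The two drop schemes mentioned above differ only in which packet is sacrificed when the buffer is full. A minimal Python sketch (function names and the list-based buffer are our illustrative assumptions, not part of the cited works):

```python
from collections import deque

def tail_drop(queue, packet, capacity):
    """Tail-drop: when the buffer is full, the arriving packet is dropped."""
    if len(queue) >= capacity:
        return False              # arriving packet is rejected
    queue.append(packet)
    return True

def head_drop(queue, packet, capacity):
    """Head-drop: when the buffer is full, the oldest queued packet is
    dropped to make room for the arriving one."""
    if len(queue) >= capacity:
        queue.popleft()           # oldest packet is discarded instead
    queue.append(packet)
    return True
```

Head-drop favors fresh data (useful for real-time flows), while tail-drop preserves packets already queued; this difference is one reason the drop scheme interacts with the resource management scheme.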
Reference [6] proposed the RED algorithm as an active queue management (AQM) mechanism [7], which was later standardized in an IETF recommendation [8]. The algorithm adaptively balances congestion relief and fairness according to the congestion level of the cache. Under minor congestion, it tends to drop packets fairly, ensuring that all users access system resources in proportion to their scale. Under moderate congestion, it inclines toward dropping packets from low-quality service flows, reducing their sending rate through the scheduling algorithm to alleviate congestion. Under severe congestion, it again tends to drop packets fairly, relying on the upper-layer flow control mechanism to meet QoS requirements, and reduces the sending rate of most service flows in order to speed up the easing of congestion.
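The classic RED drop decision that these adaptive schemes build on can be sketched as follows. This is a minimal sketch of RED's threshold logic; the class name, default thresholds, and EWMA weight are illustrative assumptions, not values from the paper:

```python
import random

class REDQueue:
    """Minimal sketch of the classic RED enqueue/drop decision."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.min_th = min_th   # lower threshold on the average queue size
        self.max_th = max_th   # upper threshold on the average queue size
        self.max_p = max_p     # maximum drop probability, reached at max_th
        self.weight = weight   # EWMA weight for the average queue size
        self.avg = 0.0         # exponentially weighted average queue length
        self.queue = []

    def enqueue(self, packet):
        # Update the moving average of the queue length on each arrival.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg < self.min_th:
            self.queue.append(packet)   # no congestion: always accept
            return True
        if self.avg >= self.max_th:
            return False                # severe congestion: always drop
        # Between thresholds: drop with probability rising linearly to max_p.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < p:
            return False
        self.queue.append(packet)
        return True
```

The fixed parameters min_th, max_th, and max_p are precisely the source of the parameter sensitivity that PAFD is designed to overcome.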
In buffer management and packet scheduling algorithms, reserving resources in advance for service flows with better transmission conditions improves system performance. However, this causes system resources such as buffer space and bandwidth to be unfairly distributed, so the QoS of service flows with poor transmission conditions cannot be guaranteed. Packet scheduling algorithms usually use generalized processor sharing (GPS) as a reference model of fairness. In GPS-based packet scheduling algorithms, each service flow is assigned a static weight that reflects its QoS.
The weight φ_i expresses the share of the total link bandwidth B allocated to service flow i. φ_i does not change with the packet scheduling algorithm, and the weights satisfy

∑_{i=1}^{N} φ_i = 1,

where N is the number of service flows on the link.
Under GPS, the service volume received by backlogged flows satisfies

W_i(τ, t) / W_j(τ, t) ≥ φ_i / φ_j,

where i, j denote two different service flows and W_i(τ, t) is the service volume received by flow i during the interval (τ, t]. In GPS-based algorithms, the bandwidth allocation of different service flows meets the requirement B_i/φ_i = B_j/φ_j, where B_i is the bandwidth allocated to service flow i. By assigning a small weight φ_low to an unimportant background service flow, the weight φ_high of a high-priority service flow becomes much larger, so that the majority of the bandwidth is taken by high-priority service flows.
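The proportional-allocation property B_i/φ_i = B_j/φ_j can be illustrated directly. A minimal sketch (the function name and the assumption that weights sum to 1 are ours):

```python
def gps_allocation(weights, bandwidth):
    """Allocate link bandwidth in proportion to GPS weights: B_i = φ_i · B.
    Assumes the weights φ_i sum to 1, so that B_i/φ_i = B for every flow,
    and hence B_i/φ_i = B_j/φ_j for every pair of flows i, j."""
    return [w * bandwidth for w in weights]

# Example: one high-priority flow (φ = 0.7) and two background flows.
shares = gps_allocation([0.7, 0.2, 0.1], 100.0)  # Mbit/s, say
# The high-priority flow receives the majority of the bandwidth.
```

This makes explicit why a background flow with a small φ_low is confined to a small slice of the link while φ_high flows dominate.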
In buffer management algorithms, how to control the buffer space occupation is a key issue [11]. Here we define
where C_i
…(Full text truncated)…