An Optimized Round Robin CPU Scheduling Algorithm with Dynamic Time Quantum
📝 Abstract
CPU scheduling is one of the most crucial operations performed by the operating system. Several algorithms are available for CPU scheduling; among them, RR (Round Robin) is considered optimal in time-shared environments. The effectiveness of Round Robin depends entirely on the choice of time quantum. In this paper a new CPU scheduling algorithm, named DABRR (Dynamic Average Burst Round Robin), is proposed. It uses a dynamic time quantum instead of the static time quantum used in RR. The performance of the proposed algorithm is experimentally compared with traditional RR and some existing variants of RR. The results presented in this paper demonstrate improved performance in terms of average waiting time, average turnaround time, and context switching.
📄 Content
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol. 5,No.1, February 2015
DOI: 10.5121/ijcseit.2015.5102
AN OPTIMIZED ROUND ROBIN CPU SCHEDULING ALGORITHM WITH DYNAMIC TIME QUANTUM
Amar Ranjan Dash1, Sandipta Kumar Sahu2 and Sanjay Kumar Samantra3
1 Department of Computer Science, Berhampur University, Berhampur, India.
2 Department of Computer Science, NIST, Berhampur, India.
3 Department of Computer Science, NIST, Berhampur, India.
ABSTRACT
CPU scheduling is one of the most crucial operations performed by the operating system. Several algorithms are available for CPU scheduling; among them, RR (Round Robin) is considered optimal in time-shared environments. The effectiveness of Round Robin depends entirely on the choice of time quantum. In this paper a new CPU scheduling algorithm, named DABRR (Dynamic Average Burst Round Robin), is proposed. It uses a dynamic time quantum instead of the static time quantum used in RR. The performance of the proposed algorithm is experimentally compared with traditional RR and some existing variants of RR. The results presented in this paper demonstrate improved performance in terms of average waiting time, average turnaround time, and context switching.
KEYWORDS
CPU Scheduling, Round Robin, Response Time, Waiting Time, Turnaround Time
1. INTRODUCTION
Operating systems are resource managers. The resources managed by operating systems include hardware, storage units, input devices, output devices, and data. Operating systems perform many functions, such as implementing the user interface, sharing hardware among users, facilitating input/output, accounting for resource usage, and organizing data. Process scheduling is one of these functions. CPU scheduling is the task of selecting a process from the ready queue and allocating the CPU to it. Whenever the CPU becomes idle, a waiting process is selected from the ready queue and the CPU is allocated to it. The performance of a scheduling algorithm mainly depends on CPU utilization, throughput, turnaround time, waiting time, response time, and context switches.
Different CPU scheduling algorithms are described by Abraham Silberschatz et al. [1], viz. FCFS (First Come First Served), SJF (Shortest Job First), Priority, and RR (Round Robin). Neetu Goel et al. [2] present a comparative analysis of CPU scheduling algorithms with the concept of schedulers. Jayashree S. Somani et al. [3] present a similar analysis, focusing on their characteristics and applications. In FCFS, the process that requests the CPU first is allocated the CPU first. In SJF, when the CPU becomes available, it is assigned to the process with the smallest next CPU burst; if the next CPU bursts of two processes are equal, FCFS scheduling is used to break the tie. In the priority scheduling algorithm, a priority is associated with each process, and the CPU is allocated to the process with the highest priority; equal-priority processes are scheduled in FCFS order. A major problem with priority scheduling is starvation: some low-priority processes may wait indefinitely for the CPU.
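To make the FCFS/SJF contrast concrete, the following sketch compares average waiting time under the two policies for a single batch of bursts. It assumes all processes arrive at time 0; the burst values are illustrative, not taken from the paper.

```python
# Compare average waiting time under FCFS and SJF (all arrivals at time 0).

def avg_waiting_time(bursts):
    """Waiting time of process i = sum of all bursts executed before it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return sum(waits) / len(waits)

bursts = [24, 3, 3]                      # illustrative burst times
fcfs = avg_waiting_time(bursts)          # run in arrival order
sjf = avg_waiting_time(sorted(bursts))   # run shortest burst first

print(fcfs)  # 17.0
print(sjf)   # 3.0
```

Running the two short jobs first drops the average wait from 17 to 3 time units, which is why SJF is provably optimal for this metric on a fixed batch.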
In RR, a small unit of time called the time quantum (or time slice) is used. The CPU scheduler goes around the ready queue, allocating the CPU to each process for an interval of up to one time quantum. If a process's CPU burst exceeds one time quantum, that process is pre-empted and put back in the ready queue. If a new process arrives, it is added to the tail of the circular queue. Of the algorithms discussed above, RR provides the best performance in a time-sharing operating system. The performance of a scheduling algorithm depends on the scheduling criteria, viz. turnaround time, waiting time, response time, CPU utilization, and throughput.
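The RR mechanics just described can be sketched as a small simulation. This is a minimal sketch under the assumption that all processes arrive at time 0; the function name and structure are our own, not from the paper.

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR with a static time quantum; all arrivals at time 0.
    Returns (avg waiting time, avg turnaround time, context switches)."""
    n = len(bursts)
    remaining = list(bursts)
    queue = deque(range(n))
    time, switches = 0, 0
    finish = [0] * n
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run for at most one quantum
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # pre-empted: back to the tail
        else:
            finish[i] = time               # process completed
        if queue and queue[0] != i:
            switches += 1                  # a different process runs next
    turnaround = finish                    # arrival time is 0 for all
    waiting = [turnaround[i] - bursts[i] for i in range(n)]
    return sum(waiting) / n, sum(turnaround) / n, switches

w, t, cs = round_robin([24, 3, 3], quantum=4)
print(round(w, 2), round(t, 2), cs)  # 5.67 15.67 3
```

The choice of quantum drives all three outputs: a very small quantum inflates the context-switch count, while a very large one degenerates toward FCFS, which is exactly the trade-off the dynamic-quantum variants target.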
Turnaround time is the interval from the submission of a process to its completion. Waiting time is the sum of the periods a process spends waiting in the ready queue. The time from the submission of a process until its first response is called response time. CPU utilization is the percentage of time the CPU remains busy. The number of processes completed per unit time is called throughput. A context switch swaps the currently executing process out of the CPU and swaps a new process in; the context-switch count is the number of such switches during execution. A scheduling algorithm can be optimized by minimizing response time, waiting time, and turnaround time, and by maximizing CPU utilization and throughput.
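The DABRR name suggests recomputing the quantum from the average burst time of the processes in the ready queue. The sketch below is one illustrative reading of that idea, recomputing the quantum as the mean remaining burst at the start of each pass over the queue; the authors' exact recalculation rule is not reproduced in this excerpt, so this specific rule is an assumption.

```python
from collections import deque

def dynamic_avg_burst_rr(bursts):
    """RR variant: at the start of each pass over the ready queue, the
    quantum is set to the mean remaining burst of the queued processes.
    Illustrative reading of the DABRR idea; the paper's exact rule may
    differ. All processes are assumed to arrive at time 0."""
    n = len(bursts)
    remaining = list(bursts)
    queue = deque(range(n))
    time = 0.0
    finish = [0.0] * n
    while queue:
        # dynamic quantum: average remaining burst of the ready queue
        quantum = sum(remaining[i] for i in queue) / len(queue)
        for _ in range(len(queue)):        # one pass over the current queue
            i = queue.popleft()
            run = min(quantum, remaining[i])
            time += run
            remaining[i] -= run
            if remaining[i] > 1e-9:
                queue.append(i)            # served again next pass
            else:
                finish[i] = time
    waiting = [finish[i] - bursts[i] for i in range(n)]
    return sum(waiting) / n

print(dynamic_avg_burst_rr([5, 15, 20, 25]))  # → 20.3125
```

Because the quantum tracks the average remaining burst, short processes tend to finish within a single pass while long ones are pre-empted only a few times, which is the mechanism by which such variants reduce context switches relative to a small static quantum.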