Until today we could only dream of imperceptible delay in a network. Computer science research now grows faster than ever, offering more and more services (computational, representational, graphical, intelligent inference, etc.) to its users. But the problem lies in the fact that the greater the volume of services, the greater the problem of delay. So tracing delay, i.e., performance analysis focused on the time required for computation in an existing or newly configured network, is necessary to establish any improvement. In this paper we carry out a delay analysis of a multi-server system. For the proposed work we use continuous-parameter Markov chains (a non-birth-death process) to develop the required models; for the simulator we use queuing networks, different scheduling algorithms at the server queues, and process scheduling. The work can be further extended to test performance in the wireless domain.
To measure the delay at node v_{n-2} when nodes v_k, k = 1, 2, …, (n-3), send the jobs and the jobs are executed at nodes v_{n-1} and v_n.
We can view the problem of delay analysis in a network as the problem stated above. Imagine that v_1, v_2, …, v_{n-3} are the clients, sending jobs at unpredictable intervals and of unpredictable volume. The jobs form a queue at the intermediate node while waiting for service at v_{n-1} and v_n. The intermediate node takes on the responsibility of maintaining the queue and dispatching jobs to the servers depending on their capability. Since we are required to find the queuing delay, transmission delay, processing delay, etc. at this intermediate machine (node v_{n-2}), the problem becomes more realistic and logical.
The clients C1, C2, …, Cn send their jobs, which are serviced with different probabilities depending on the capability of the servers to service them.
Our calculations for performance measurement are carried out at the intermediate machine (I).
In an M/M/2 queuing system with heterogeneous servers there are two servers of different processing capability, i.e., the service rates of the two servers are not identical. The discipline used to schedule the jobs is FCFS; that is, jobs are served in the order of their arrival.
The state of the system is defined to be the tuple (n1, n2), where n1 denotes the number of jobs in the queue including any at the faster server, and n2 denotes the number of jobs at the slower server. Jobs wait in the order of their arrival. When both servers are idle, the faster server is scheduled for service before the slower one.
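The discipline described above can be sketched as a small event-driven simulation. This is only an illustrative sketch under our own assumptions (the function name, parameters, and dispatch bookkeeping are ours, not the paper's simulator): each job is dispatched FCFS to whichever server frees first, with the faster server preferred when both are idle.

```python
import random

def simulate_mm2_hetero(lam, mu_fast, mu_slow, n_jobs, seed=0):
    """Mean queuing delay in an M/M/2 queue with heterogeneous
    servers under FCFS (illustrative sketch, not the paper's code)."""
    rng = random.Random(seed)
    # Poisson arrivals: exponential inter-arrival times with rate lam.
    t, arrivals = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(lam)
        arrivals.append(t)
    free = [0.0, 0.0]            # next-free times of the [fast, slow] server
    rates = [mu_fast, mu_slow]
    total_wait = 0.0
    for a in arrivals:
        if free[0] <= a and free[1] <= a:
            s = 0                # both idle: the faster server is scheduled first
        else:
            s = 0 if free[0] <= free[1] else 1   # FCFS: earliest-free server
        start = max(a, free[s])
        total_wait += start - a  # queuing delay of this job
        free[s] = start + rng.expovariate(rates[s])
    return total_wait / n_jobs
```

For a stable system one would choose lam < mu_fast + mu_slow, i.e., traffic intensity below one; raising lam toward the combined service rate makes the mean queuing delay grow sharply.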
[Figure: client side — clients C1, C2, …, Cn-1 sending jobs to the intermediate machine]
In the steady state we have the balance equations:

λ p(0,0) = μ1 p(1,0) + μ2 p(0,1) ————- (1)
(λ + μ1) p(1,0) = λ p(0,0) + μ2 p(1,1) ————- (2)
(λ + μ2) p(0,1) = μ1 p(1,1) ————- (3)
(λ + μ1 + μ2) p(1,1) = λ p(1,0) + λ p(0,1) + (μ1 + μ2) p(2,1) ————- (4)
(λ + μ1 + μ2) p(n,1) = λ p(n-1,1) + (μ1 + μ2) p(n+1,1), n ≥ 2 ————- (5)

The traffic intensity of the system is ρ = λ/(μ1 + μ2). Equation (5) is similar to the balance equation of a birth-death process. Therefore,

p(n,1) = ρ p(n-1,1), n ≥ 2 ————- (6)
By repeated use of equation (6) we get:

p(n,1) = ρ p(n-1,1) = ρ² p(n-2,1) = … = ρ^(n-1) p(1,1), n > 1 ————- (7)

From equations (1) and (3) we can obtain by elimination:
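Relation (7) can also be checked numerically. The sketch below is our own construction (not part of the paper): it truncates the chain with states (0,0), (1,0), (0,1), (1,1), …, (N,1) at some N and finds the steady-state vector by power iteration on the uniformized chain. At stationarity the ratio p(n,1)/p(n-1,1) should equal ρ = λ/(μ1 + μ2).

```python
def steady_state_mm2(lam, mu1, mu2, N=30, iters=10000):
    """Steady-state probabilities of the truncated heterogeneous M/M/2
    chain via power iteration on the uniformized chain.
    State indices: 0 -> (0,0), 1 -> (1,0), 2 -> (0,1),
    3 + k -> (k + 1, 1) for k = 0 .. N - 1."""
    n = 3 + N
    Lam = lam + mu1 + mu2              # uniformization constant
    # Transition list: (source, destination, rate).
    T = [(0, 1, lam),                  # (0,0) -> (1,0): faster server first
         (1, 0, mu1),                  # (1,0) -> (0,0): fast server completes
         (1, 3, lam),                  # (1,0) -> (1,1)
         (2, 0, mu2),                  # (0,1) -> (0,0): slow server completes
         (2, 3, lam),                  # (0,1) -> (1,1)
         (3, 2, mu1),                  # (1,1) -> (0,1)
         (3, 1, mu2)]                  # (1,1) -> (1,0)
    for k in range(N - 1):             # birth-death tail (n,1) <-> (n+1,1), n >= 1
        T.append((3 + k, 4 + k, lam))
        T.append((4 + k, 3 + k, mu1 + mu2))
    p = [1.0 / n] * n
    for _ in range(iters):
        q = list(p)
        for s, d, r in T:
            flow = p[s] * r / Lam      # probability mass moved this step
            q[s] -= flow
            q[d] += flow
        p = q
    return p
```

For example, with λ = 1.5, μ1 = 2, μ2 = 1 (so ρ = 0.5) the computed ratio p(2,1)/p(1,1) should come out at ρ, consistent with equation (6).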
Some simulated results and discussion

Figure 1 shows the propagation delay per job. Propagation time is the time taken by the data to travel from source to destination; it is the delay caused by the physical medium. The graph does not show propagation time directly but measures the round-trip time (RTT) as an estimate. The nature of the graph, as seen from the figure, is somewhat spiky. The RTT is almost the same for different jobs (using jobs of the same size); however, it is not fixed, and the figure shows that it can vary depending upon certain conditions. In the graph the average RTT is around one second. The best case is a near-zero RTT and the worst case is slightly greater than two seconds.
Figure 2 shows the amount of queuing delay for each individual job. Initially the queue is empty, so the incoming jobs do not have to wait long in the queue. But as time passes, more jobs arrive at the intermediate machine.
The job arrival rate is much higher than the rate at which the servers can process jobs, so the population in the queue grows. Since jobs are sent to the servers in FCFS order, more jobs in the queue implies a higher queuing delay for the jobs that follow. So if the clients submit jobs continuously (i.e., within a single session), the queuing delay is higher for the jobs that come later. When more than one job-sending session is used, at the end of a session no new jobs arrive for some time but the servers continue to pull jobs from the queue, so the number of waiting jobs decreases. At the beginning of the next session, new jobs therefore find fewer jobs waiting in the queue and have to wait for less time.
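The session behaviour just described can be illustrated with a simplified FCFS sketch. For simplicity we collapse the two servers into one aggregate server of rate mu_total (this aggregation is our own assumption, not the paper's model): jobs arriving faster than mu_total within a session wait longer and longer, while the first job of a later session finds the queue drained.

```python
import random

def per_job_waits(arrival_times, mu_total, seed=0):
    """Per-job FCFS queuing delay for a single aggregate server of
    rate mu_total (simplified sketch of the two-server system)."""
    rng = random.Random(seed)
    free = 0.0                   # time at which the server next becomes free
    waits = []
    for a in arrival_times:
        start = max(a, free)     # job starts when it arrives or when server frees
        waits.append(start - a)  # queuing delay of this job
        free = start + rng.expovariate(mu_total)
    return waits
```

For example, fifty closely spaced arrivals followed by a second session after a long idle gap show waits growing within the first session and resetting to near zero at the start of the second, matching the falling-then-rising pattern discussed above.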
Figure 3 shows how much load, i.e., how many unprocessed jobs, is waiting in the job queue at the intermediate machine at a given instant of time. The time axis starts when the execution of the dispatcher program begins. The graph shows the relationship between load and time, i.e., the number of jobs waiting in the queue at any instant. The rate of arrival of jobs from the clients is much higher than the rate at which they can be dispatched to the two servers; hence, as time increases, the load, i.e., the number of queued jobs, also increases. For example, if 20 jobs are present at instant t_1 and 10 jobs arrive within (t_1, t_1 + 1), it may happen that only 5 jobs are dispatched in this interval, so the load at t_1 was 20 and that at t_1 + 1 is 20 + 10 - 5 = 25. When no jobs arrive, the load gradually decreases because the jobs are dispatched one by one without any new jobs joining the queue. This explains the falling edge of the curve.
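The load bookkeeping in this example follows a simple recurrence, load(t+1) = load(t) + arrivals(t) - dispatches(t). A minimal helper (hypothetical, just mirroring the 20 + 10 - 5 = 25 arithmetic above):

```python
def load_trace(initial_load, arrivals, dispatches):
    """Queue length per interval:
    load[t+1] = load[t] + arrivals[t] - dispatches[t],
    floored at zero (the queue cannot go negative)."""
    loads = [initial_load]
    for a, d in zip(arrivals, dispatches):
        loads.append(max(0, loads[-1] + a - d))
    return loads

# The example from the text: 20 queued jobs, 10 arrivals, 5 dispatches -> 25.
```

The falling edge of the curve corresponds to intervals with zero arrivals and positive dispatches, where the recurrence decreases step by step.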
Figure 4 shows the queuing delay against the load: the queuing delay is the time for which a job waits in the queue before being served by a server, and by load we mean the total number of jobs waiting in the queue.