CPU Scheduling - Manav Rachna Online



In the early stages of computer systems there was usually one process in memory at a time; the CPU would execute that process, then sit idle and wait for the next process to arrive in main memory. The efficiency of the computer was therefore very limited. In multi-user and multiprogramming systems, several processes in memory compete for CPU time. Scheduling these processes thus becomes an acute, challenging problem, usually referred to as CPU scheduling.

In this chapter we discuss the objectives of CPU scheduling, types of scheduling algorithms, performance parameters, priority scheduling, and multilevel and multiprocessor scheduling for uniprocessor systems.

4.1 CONCEPT OF CPU SCHEDULING

Every process that needs to be executed must reside in RAM and must have access to the CPU. In a uniprocessor system only one process can run at a time, so what happens when many processes are ready to be executed? The answer is that the operating system must schedule the processes according to some strategy, so that the processor can execute them one by one; this forms the basis for multiprogramming. Scheduling is a set of rules that decides the order in which the processes submitted to the system are executed.

In normal processing, a process may have to wait until the CPU can take it up for execution, typically while some I/O request completes. In a simple computer system the CPU would then sit idle, and all this waiting time would be wasted. With multiprogramming we try to use this time productively: when several processes are in memory at one time and one of them has to wait, the operating system takes the CPU away from that process and gives it to another. This pattern continues indefinitely.

Scheduling is a fundamental function of an operating system. To get optimum work from a computer system, the operating system schedules almost all computer resources before their use, and the CPU is, of course, one of the primary resources. Thus CPU scheduling, the scheduling of processes, is a central consideration in the design of any operating system. In fact, scheduling refers to the way processes are assigned to the CPU for execution: since many processes arrive for execution and cannot all be sent to the CPU immediately, they have to be scheduled. It is an activity


that is done by an operating system component called the scheduler. The purpose of the scheduler is to choose processes from the list of ready processes and send them to the CPU for execution.

4.2 SCHEDULING OBJECTIVE AND SCHEDULING CRITERIA

Scheduling criteria help in choosing a suitable scheduling algorithm for a particular situation, as different CPU scheduling algorithms have different objectives and sequence the processes accordingly. To choose an algorithm for a particular situation, the properties of the various algorithms are analyzed, since these characteristics can make a substantial difference in determining the best algorithm. The criteria include the following:

• CPU Utilization: The CPU should be kept as busy as possible and should not remain idle most of the time.

• Throughput: Throughput is the number of processes that complete their execution per unit time. When the CPU is busy executing processes, work is being done; throughput may be one process per hour or ten processes per second.

• Turnaround Time: The total time between submission of a process and its completion is known as turnaround time. It is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.

• Response Time: In an interactive system, a process can produce some output for the user early and then continue executing other instructions. The time measure of this activity is important for the performance of the system. This measure, called response time, is the amount of time from when a request was submitted until the first response is produced.

• Waiting Time: The total time a process spends waiting in the ready queue, before and between its bursts of CPU execution.

• Fairness: Each process gets an equal share of CPU time (or, more generally, time appropriate to its priority).

In the design of any operating system we want to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and response time. In most cases we optimize the average measure, but in some circumstances we want to optimize the minimum or maximum values rather than the average. For example, to guarantee that all users get good service, we may want to minimize the maximum response time.

A system with reasonable and predictable response times may be considered more desirable than a system that is faster on average but highly variable from process to process.
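As an illustrative sketch (the helper name and sample values are my own, not from the text), average turnaround and waiting times for a completed workload can be computed directly from each process's arrival, burst, and completion times:

```python
def metrics(procs):
    """procs: list of (arrival, burst, completion) tuples for finished processes."""
    turnaround = [c - a for a, b, c in procs]  # completion minus submission
    waiting = [t - b for t, (a, b, c) in zip(turnaround, procs)]  # turnaround minus CPU time
    n = len(procs)
    return sum(turnaround) / n, sum(waiting) / n

# Three processes, all submitted at time 0, served back to back:
avg_tat, avg_wait = metrics([(0, 24, 24), (0, 3, 27), (0, 3, 30)])
print(avg_tat, avg_wait)  # 27.0 17.0
```

Here waiting time is derived as turnaround minus burst, which coincides with time spent in the ready queue when I/O waits are ignored.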

4.3 CPU AND I/O BURST CYCLES

The behavior of a process plays a key role in CPU scheduling. It is depicted by the CPU-I/O burst cycle shown in Figure 4.1: the execution of a process alternates between CPU cycles and I/O waits. A process normally runs for a while (a CPU burst), performs some I/O (an I/O burst), then runs again (the next CPU burst), and so on.

CPU-Bound Process: A process is said to be CPU-bound if it performs a lot of computation and has very short I/O operations, as shown in Figure 4.2 (a).

I/O-Bound Process: A process is said to be I/O-bound if it performs a lot of I/O operations compared to CPU computation, as shown in Figure 4.2 (b). Each I/O operation is followed by a short CPU burst to process it.

50 Principles of Operating Systems

In Figure 4.2 (a) we see that the CPU bursts (shown as bars) are much larger than the I/O bursts (shown as lines); in Figure 4.2 (b) the I/O bursts are much larger than the CPU bursts.

Fig. 4.1. CPU and I/O burst cycle

Fig. 4.2(a). CPU bound process and (b) I/O bound processes

4.4 SCHEDULER

A scheduler is a module of the operating system that selects the next process to be admitted for execution by the CPU. The scheduler switches the CPU to another process whenever the current process is busy with an I/O operation; the goal is that the CPU should never be allowed to sit idle. This raises several questions:


• Why is scheduling important?

• When should a scheduling decision take place?

• When does CPU choose which process to run?

In fact, scheduling has a big effect on resource utilization and the overall performance of the system. We must therefore consider all possible situations while designing the scheduling algorithm, especially those involved in switching from one process to another. The general structure of CPU scheduling comprises two components: (a) handling processes and (b) presenting processes to the CPU for execution. Handling processes involves the different process states (from New to Terminated), as shown in Figure 4.3 (a). The second component collects processes (jobs) into queues, brings them into RAM, swaps them whenever required, and hands them to the CPU for execution, as shown in Figure 4.3 (b).

Fig. 4.3. Different schedulers in operating system

Operating systems may feature up to three distinct types of scheduler:

• Long-term scheduler (LTS): also known as an admission scheduler or high-level scheduler.

• A mid-term or medium-term scheduler (MTS).

• A short-term scheduler (STS).

The names suggest the relative frequency with which the scheduling functions are performed.


4.4.1 Long-term scheduler

The long-term (or admission) scheduler decides which jobs or processes are admitted to the ready queue; that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. The long-term scheduler selects jobs from secondary storage devices and loads them into memory for execution. Thus the long-term scheduler dictates which processes are to run on a system and the degree of multiprogramming to be supported at any one time.

4.4.2 Mid-term scheduler

The mid-term scheduler temporarily removes processes from main memory and places them on secondary memory (such as a disk drive), or vice versa; this is commonly referred to as swapping out or swapping in. The mid-term scheduler may decide to swap out a process that has not been active for some time, has a low priority, or is taking up a large amount of memory, in order to free up main memory for other processes. It may swap the process back into memory later, when more memory is available or when the process has been unblocked and is no longer waiting for a resource.

4.4.3 Short-term scheduler

The short-term scheduler (also known as the CPU scheduler) decides which of the in-memory ready-queue processes are to be executed (allocated the CPU) next. The short-term scheduler thus makes scheduling decisions much more frequently than the long-term or mid-term schedulers; at a minimum, a scheduling decision has to be made after every time slice, and time slices are very short. This scheduler can be preemptive, meaning it is capable of forcibly removing a process from the CPU when it decides to allocate the CPU to another process, or non-preemptive (also known as "voluntary" or "cooperative"), in which case once the CPU has been allocated to a process, the process keeps the CPU until it terminates or transitions to the blocked state.

4.4.4 Dispatcher

Another component involved in the CPU-scheduling function is the dispatcher. The dispatcher is the module that gives control of the CPU to the process selected by the short-term scheduler. Its function involves the following:

• Switching context.

• Switching to user mode.

• Jumping to the proper location in the user program to restart that program.

The dispatcher should be as fast as possible (for this it should have hardware support), since it is invoked during every process switch. The time taken by the dispatcher to stop one process and start another is known as the dispatch latency.


Fig. 4.4. Reasons for taking scheduling decisions

There are a variety of possibilities, situations, and reasons for switching from one state to another, as shown in Figure 4.4. Some of them are:

• When a process switches from running to waiting. This could be because of an I/O request, a wait for a child to terminate, or a wait for a synchronization operation (such as lock acquisition) to complete.

• When a process switches from running to ready, for example on completion of an interrupt handler. A common example is the timer interrupt in interactive systems; if the scheduler switches processes in this case, it has preempted the running process. Another common case is the I/O-completion handler.

• When a process switches from waiting to ready (on completion of I/O or acquisition of a lock, for example).

• When a process terminates.

4.5 CPU SCHEDULING ALGORITHMS

The problem of CPU scheduling arose almost from the beginning of the development of operating systems. Several algorithms have been developed and analyzed, each with different merits and demerits. We shall describe some of the well-known CPU scheduling algorithms.

4.5.1 First-Come, First-Served (FCFS) Scheduling

First-Come-First-Served is one of the simplest scheduling algorithms. Processes are allocated the CPU according to their arrival time in the ready queue. It is a non-preemptive technique: once a process has the CPU, it runs to completion. FCFS scheduling is fair in the formal sense, but unfair in the sense that long jobs make short jobs wait and unimportant jobs make important jobs wait. Other names for this algorithm are:


• First-In-First-Out (FIFO)

• Run-to-Completion

• Run-Until-Done

We take up the advantages and disadvantages of FIFO scheduling algorithm.

Advantages:

• FCFS is more predictable than most other schemes since it offers time estimates.

• The code for FCFS scheduling is simple to write and understand.

Disadvantages:

• One of the major drawbacks of this scheme is that the average waiting time is often quite long.

• It is not suitable for time-sharing and interactive systems, because it cannot guarantee good response times (user-driven processes take unpredictable time; for example, if a user is working in MS-Word, how long he or she will work is not known).

• A proper mix of CPU-bound and I/O-bound jobs is required to get good results from the algorithm.

Example: Consider the following set of processes with their CPU burst times (in milliseconds). The CPU burst time indicates for how much (estimated) time the process needs the CPU.

Suppose three processes arrive in the order: P1, P2, P3, with their burst time:

Process Burst Time

P1 24

P2 3

P3 3

The Gantt chart for the schedule is:

    P1 (0–24) | P2 (24–27) | P3 (27–30)

Waiting time for P1 = 0; P2 = 24; P3 = 27.

Average waiting time: (0 + 24 + 27)/3 = 17.

Next, suppose that the processes arrive in the order P2, P3, P1.

Then the Gantt chart for the schedule is:

    P2 (0–3) | P3 (3–6) | P1 (6–30)

Waiting time for P1 = 6; P2 = 0; P3 = 3.

Average waiting time: (6 + 0 + 3)/3 = 3.

Thus, the average waiting time depends upon the order in which the processes arrive; it varies greatly depending on whether processes with small or large CPU bursts arrive first in the ready queue. When performance is a major issue, this algorithm is not recommended.
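The order dependence can be demonstrated with a small simulation (a sketch, assuming all processes arrive at time 0; the function name is my own):

```python
def fcfs_waiting(bursts):
    """Waiting time of each process when served in list order (all arrive at 0)."""
    waits, clock = [], 0
    for b in bursts:
        waits.append(clock)  # waits until every earlier process has finished
        clock += b           # non-preemptive: runs to completion
    return waits

print(fcfs_waiting([24, 3, 3]))  # [0, 24, 27], average 17
print(fcfs_waiting([3, 3, 24]))  # [0, 3, 6], average 3
```

Putting the long job last cuts the average waiting time from 17 ms to 3 ms, exactly as in the two worked schedules above.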


4.5.2 Shortest-Job-First (SJF) Scheduling

In shortest-job-first scheduling each process is associated with the estimated length of its next CPU burst (its burst time), and these lengths are used to schedule the processes, shortest CPU burst first; that is, from the set of ready processes (processes in the ready queue), the CPU is allocated to the process with the minimum CPU requirement. If two processes have the same CPU burst, the process that arrived first is given the CPU.

Two Schemes of SJF Scheduling Algorithm:

1. Non-preemptive scheduling: Once the CPU has been allocated to a process, the process keeps the CPU until it terminates or transitions to the blocked state. This means that once the CPU is allocated to a process, the process can use it until it completes its work or willingly surrenders the CPU.

2. Preemptive scheduling: When a process with a higher priority becomes ready for execution, the process currently using the CPU is forced to release it so that the higher-priority process can run first. In other words, if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, the currently executing process is preempted. This scheme is known as Shortest-Remaining-Time-First (SRTF).

Next we take up the advantages and disadvantages of SJF scheduling.

Advantages:

SJF is optimal - gives minimum average waiting time for a given set of processes.

Disadvantages:

It is not practical to find the length of the next CPU burst; the only way to find the amount of CPU time required is prediction. The algorithm is therefore mainly used as a benchmark against which other algorithms are evaluated.

Example 1: Let us have three processes P1, P2, and P3 with burst times 13, 5, and 2 units of time (ms), respectively, in the ready queue.

Process Burst time (ms)

P1 13

P2 5

P3 2

The scheduler schedules these processes according to their burst times.

Scheduling:

    P3 (0–2) | P2 (2–7) | P1 (7–20)

Avg. waiting time = (0 + 2 + 7)/3 = 9/3 = 3 ms.

Next, let the arrival times of these processes be 0, 1, and 2, respectively. Then

Process CPU Burst Arrival Time

P1 13 0

P2 5 1

P3 2 2


The scheduler first takes up process P1 for execution. At time 1, when P2 arrives, P1 needs 12 more units of time whereas P2 needs only 5 units; the scheduler therefore preempts P1, puts it into the queue, and takes up P2. At time 2 another process, P3, arrives needing 2 units. At this moment P2 needs 4 units and P1 needs 12 units, so the scheduler puts P2 into the queue and takes up P3. In this way the processing proceeds in the following order.

Scheduling:

    P1 (0–1) | P2 (1–2) | P3 (2–4) | P2 (4–8) | P1 (8–20)

Waiting time for P1 = 0 + (8 – 1) = 7

P2 = 0 + (4 – 2) = 2

P3 = 0

Average waiting time = (7 + 2 + 0)/3 = 9/3 = 3 ms.

Example 2: Let us have the following four processes with their arrival times and burst times:

Process Arrival Time Burst Time (ms)

P1 0.0 7

P2 2.0 4

P3 4.0 1

P4 5.0 4

Solution: Let us assume that the scheduler adopts the SJF (non-preemptive) algorithm. Initially the scheduler takes up process P1, with burst time 7, for execution. At time 2 process P2 with burst time 4 arrives; though P2 needs only 4 units of time and P1 needs 5 more, the scheduler does not preempt P1 and continues with it. At time 4 process P3 with burst time 1 also arrives. When P1 finishes, processes P2 and P3 are waiting, needing 4 and 1 units of time respectively, so the scheduler takes up P3 and not P2. Thus processing proceeds in the following sequence.

Scheduling:

    P1 (0–7) | P3 (7–8) | P2 (8–12) | P4 (12–16)

Average waiting time = [0 + (8 – 2) + (7 – 4) + (12 – 5)]/4 = 4.

Next, let us discuss this with the preemptive SJF (SRTF) algorithm. In this case we have the following sequence of processing.

Scheduling:

    P1 (0–2) | P2 (2–4) | P3 (4–5) | P2 (5–7) | P4 (7–11) | P1 (11–16)

Average waiting time = [(11 – 2) + (5 – 4) + 0 + (7 – 5)]/4 = (9 + 1 + 0 + 2)/4 = 3 ms.

Thus we have a reduction in the average waiting time.
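A unit-step simulation of preemptive SJF (SRTF) reproduces Example 2's result. This is a sketch under the assumption of integer arrival and burst times, with names of my own choosing:

```python
def srtf(procs):
    """procs: {name: (arrival, burst)} with integer times.
    At every time unit, run the ready process with the least remaining
    time (ties broken by earlier arrival). Returns waiting time per process."""
    remaining = {n: b for n, (a, b) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:            # CPU idle until the next arrival
            t += 1
            continue
        cur = min(ready, key=lambda n: (remaining[n], procs[n][0]))
        remaining[cur] -= 1      # run the chosen process for one time unit
        t += 1
        if remaining[cur] == 0:
            finish[cur] = t
            del remaining[cur]
    # waiting = completion - arrival - burst
    return {n: finish[n] - procs[n][0] - procs[n][1] for n in procs}

w = srtf({"P1": (0, 7), "P2": (2, 4), "P3": (4, 1), "P4": (5, 4)})
print(w, sum(w.values()) / 4)  # {'P1': 9, 'P2': 1, 'P3': 0, 'P4': 2} 3.0
```

Making `min` select on arrival time instead of remaining time would turn this same loop into a (preemption-free) FCFS picker, which is one way to see how close these algorithms are structurally.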

4.5.3 Round Robin (RR) Scheduling

Round Robin (RR) scheduling is one of the oldest, simplest, fairest, and most widely used algorithms. In round robin scheduling, processes are dispatched to the CPU in FIFO order but are given a limited amount of CPU time called a time slice or quantum. If


a process does not complete before its CPU time expires, the CPU is preempted and given to the next process waiting in the queue, while the preempted process is placed at the end of the ready queue.

Example: Let us consider three processes P1, P2, and P3 with respective burst times 24, 3, and 3, and use RR with time quantum = 4. We have

Process Burst Time (ms)

P1 24

P2 3

P3 3

Solution: Initially process P1 is taken up and runs for 4 units of time. It needs 20 more units, so it is put at the end of the queue. At time 4 the scheduler takes up P2 and runs it for 3 units of time to completion. In this way the processing proceeds in the sequence given below.

The Gantt chart is:

    P1 (0–4) | P2 (4–7) | P3 (7–10) | P1 (10–14) | P1 (14–18) | P1 (18–22) | P1 (22–26) | P1 (26–30)

Average waiting time = [(10 – 4) + 4 + 7]/3 = 17/3 = 5.66 ms

Round robin scheduling is preemptive (at the end of the time slice) and is therefore effective in time-sharing environments in which the system needs to guarantee reasonable response times for interactive users.

The performance of the round robin algorithm depends mainly on the length of the quantum. Setting the quantum too short causes too many context switches and lowers CPU efficiency; setting it too long causes poor response time and approximates FCFS.

In any event, the average waiting time under round robin scheduling is often quite long.

In actual practice the number of context switches should not be too large, as it slows down the overall execution of all processes. We have to consider the relative weight of the context-switch time and the time quantum: the quantum should be large with respect to the context-switch time. This ensures that the CPU is busy doing useful work most of the time, rather than spending it on context switching.
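A minimal round robin sketch (assuming all processes arrive at time 0 and zero context-switch cost; names are illustrative) reproduces the example above:

```python
from collections import deque

def round_robin(bursts, quantum):
    """All processes arrive at time 0. Returns waiting time per process,
    computed as completion time minus burst time."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    clock, waits = 0, {}
    while queue:
        i, rem = queue.popleft()
        run = min(quantum, rem)
        clock += run
        if rem > run:
            queue.append((i, rem - run))   # unfinished: back of the queue
        else:
            waits[i] = clock - bursts[i]   # finished: completion - burst
    return waits

w = round_robin([24, 3, 3], quantum=4)
print(w, sum(w.values()) / 3)  # waits {1: 4, 2: 7, 0: 6}, average about 5.67 ms
```

Re-running with a quantum of 1 or 100 shows the two failure modes discussed above: many more passes through the queue, or a schedule identical to FCFS.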

4.5.4 Priority Scheduling

The basic idea is straightforward: each process is assigned a priority, and scheduling is done according to priority. A high-priority job gets the CPU before a low-priority job; equal-priority processes are scheduled in FCFS order. The Shortest-Job-First (SJF) algorithm is a special case of the general priority scheduling algorithm.

An SJF algorithm is simply a priority algorithm where the priority is the inverse of the (predicted) next CPU burst: the longer the CPU burst, the lower the priority, and vice versa. Priority can be defined either internally or externally. Internally defined priorities use some measurable quantities or qualities to compute the priority of a process.

Examples of measurable quantities for determining the internal priorities are:

• Time limits.

• Memory requirements.


• File requirements, for example, number of open files.

• CPU versus I/O requirements.

Externally defined priorities are set by criteria that are external to the operating system, such as:

• The importance of process.

• Type or amount of funds being paid for computer use.

• The department sponsoring the work.

Further, priority scheduling can be either preemptive or non-preemptive:

• A preemptive priority algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.

• A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.

Example: Let us consider four processes P1, P2, P3, and P4 with priorities 4, 2, 3, and 1, respectively, assuming that 1 is the highest priority.

Process CPU time Priority Arrival time

P1 12 4 0

P2 10 2 1

P3 5 3 2

P4 2 1 3

Solution: Initially the scheduler takes up P1. At time 1, process P2 arrives with priority 2 (higher than P1's priority 4); the scheduler preempts P1 and takes up P2. At time 2, process P3 arrives with priority 3 (lower than P2's priority 2), so the scheduler continues with P2. At time 3, process P4 arrives with priority 1 (the highest), so P4 is taken up immediately. To sum up, the low-priority job P1 is preempted by the higher-priority job P2, and P2 in turn is preempted by P4. In this way scheduling proceeds in the following sequence.

Scheduling:

    P1 (0–1) | P2 (1–3) | P4 (3–5) | P2 (5–13) | P3 (13–18) | P1 (18–29)

Avg. waiting time = [(18 – 1) + (5 – 3) + (13 – 2) + 0]/4 = (17 + 2 + 11 + 0)/4 = 30/4 = 7.5 ms.

A major problem with priority scheduling is indefinite blocking, or starvation: a process in the ready state can be preempted again and again by the arrival of higher-priority jobs, so it may wait indefinitely. A solution to the indefinite blockage of low-priority processes is aging: gradually increasing the priority of a process after fixed intervals of time, so that it eventually becomes a high-priority job and finally gets the CPU for execution.
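The preemptive priority example can be checked with a unit-step sketch (assumptions: integer times, a lower number means a higher priority, and the names are my own). Aging would amount to periodically decreasing the stored priority values of processes that are still waiting:

```python
def preemptive_priority(procs):
    """procs: {name: (arrival, burst, priority)}; lower number = higher priority.
    Unit-step simulation: always run the ready process with the best priority
    (ties broken by earlier arrival). Returns waiting time per process."""
    remaining = {n: b for n, (a, b, p) in procs.items()}
    finish, t = {}, 0
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= t]
        if not ready:
            t += 1
            continue
        cur = min(ready, key=lambda n: (procs[n][2], procs[n][0]))
        remaining[cur] -= 1      # run for one time unit; may be preempted next tick
        t += 1
        if remaining[cur] == 0:
            finish[cur] = t
            del remaining[cur]
    return {n: finish[n] - procs[n][0] - procs[n][1] for n in procs}

w = preemptive_priority({"P1": (0, 12, 4), "P2": (1, 10, 2),
                         "P3": (2, 5, 3), "P4": (3, 2, 1)})
print(w, sum(w.values()) / 4)  # {'P1': 17, 'P2': 2, 'P3': 11, 'P4': 0} 7.5
```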

4.5.5 Multilevel Queue Scheduling

A multilevel queue-scheduling algorithm partitions the ready queue into several separate queues. This can be done in situations where processes are easily classified into different


groups, as shown in Figure 4.5. Multilevel queue scheduling has the following characteristics:

• Processes are permanently assigned to one queue based on some property of the process, such as memory size, process priority, or process type.

• Each queue has its own scheduling algorithm. For example, separate queues might be used for foreground and background processes; the foreground queue might be scheduled by the round robin algorithm, whereas the background queue is scheduled by the FCFS algorithm.

• Moreover, there must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling. For example, the foreground queue may have absolute priority over the background queue.

Let us look at an example of a multilevel queue-scheduling algorithm with three queues:

1. System processes

2. Interactive processes

3. Batch processes

Each queue has absolute priority over lower-priority queues. In this system no process in the batch queue can run unless the queues for system processes and interactive processes are both empty; moreover, if a system process enters the ready queue while a batch process is running, the batch process is preempted. Another possibility is to time-slice between the queues: each queue gets a certain portion of the CPU time, within which it can schedule among the processes in its queue. For instance, in the foreground-background (interactive-batch) example, the foreground queue can be given 80 percent of the CPU time for round robin scheduling among its processes, whereas the background queue receives 20 percent of the CPU to give to its processes in FCFS manner.

Multilevel queue scheduling is shown in the following figure:

Fig. 4.5. Multilevel priority queue scheduling

4.5.6 Multilevel Feedback Queue Scheduling

Multilevel feedback queue scheduling allows a process to move between queues. The idea is to separate processes with different CPU-burst characteristics: if a process uses too much CPU time, it is moved to a lower-priority queue, which leaves I/O-bound and interactive processes in the higher-priority queues. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue; this form of aging prevents starvation. For example, consider a multilevel feedback queue scheduler with three queues, numbered 0 to 2 (0 high priority, 1 medium priority, 2 low priority). The scheduler first executes all processes in queue 0, and will execute processes in queue 1 only when queue 0 is


empty. Similarly, processes in queue 2 will be executed only if queues 0 and 1 are empty. A process that arrives for queue 1 will preempt a process in queue 2, and a process that arrives for queue 0 will, in turn, preempt a process in queue 1 (see Figure 4.6).

Fig. 4.6. Multilevel feedback scheduling

A process entering the ready queue is put in queue 0, where it is given a time quantum of (say) 8 milliseconds. If it does not finish within this time, it is moved to the tail of queue 1. If queue 0 is empty, the process at the head of queue 1 is given a quantum of (say) 16 milliseconds; if it does not complete, it is preempted and put into queue 2. Processes in queue 2 run on an FCFS basis, only when queues 0 and 1 are empty.

This scheduling algorithm gives highest priority to any process with a CPU burst of 8 milliseconds or less: such a process will quickly get the CPU, finish its CPU burst, and go off to its next I/O burst. Processes that need more than 8 but less than 24 milliseconds are also served quickly, although with lower priority than shorter processes. Long processes automatically sink to queue 2 and are served in FCFS order with any CPU cycles left over from queues 0 and 1. In general, a multilevel feedback queue scheduler is defined by the following parameters:

• The number of queues.

• The scheduling algorithm for each queue.

• The method used to determine when to upgrade a process to a higher-priority queue.

• The method used to determine when to demote a process to a lower-priority queue.

• The method used to determine which queue a process will enter when that process needs service.

Example: Let us consider five processes with their CPU bursts and arrival times as shown below:

Processes CPU Burst Arrival time

P1 16 0

P2 24 10

P3 8 20

P4 30 35

P5 18 45


Consider multilevel feedback queue scheduling with three queues Q1, Q2, and Q3. The scheduler executes the processes in Q1 with a time quantum of 8 ms; if a process does not finish within this time it is moved to the tail of Q2. The scheduler executes processes in Q2 only when Q1 is empty, following the same procedure with a quantum of 16 ms; a process not finished within 16 ms is preempted and moved to the tail of Q3. When Q1 and Q2 are empty, processes in Q3 run on an FCFS basis.

A process that arrives in Q1 will preempt a process running from Q2, and a process that arrives in Q2 will preempt a process running from Q3. Following this procedure, the processes are scheduled in the following order.

Scheduling:

    P1 (0–10) | P2 (10–18) | P1 (18–20) | P3 (20–28) | P1 (28–32) | P2 (32–35) | P4 (35–43) | P2 (43–45) | P5 (45–53) | P4 (53–69) | P2 (69–80) | P5 (80–90) | P4 (90–96)

Waiting time of Processes:

P1 = 0 + (18 – 10) + (28 – 20) = 16

P2 = 0 + (32 – 18) + (43 – 35) + (69 – 45) = 46

P3 = 0

P4 = 0 + (53 – 43) + (90 – 69) = 31

P5 = 0 + (80 – 53) = 27

Advantages:

1. It allows processes to move between queues. It is fair, especially for I/O-bound jobs, which do not have to wait too long.

2. A process that waits too long in a low-priority queue is automatically moved to a higher-priority queue, preventing starvation.

3. It is general by nature and can be configured to match the system under design.

Disadvantages:

1. The times on the basis of which processes are moved between queues must be chosen very carefully, otherwise the scheme becomes inefficient.

2. Moving processes among queues incurs some overhead.

4.5.7 Multiple-Processor Scheduling

We have so far focused on scheduling the CPU in a system with a single processor. If multiple CPUs are available, load sharing becomes possible, but the scheduling problem becomes correspondingly more complex. Several possibilities exist, and as with single-processor CPU scheduling, there is no one best solution. Here we discuss issues that arise in multiprocessor scheduling; in the simplest form, we limit ourselves to processors that are identical (homogeneous) in terms of their functionality.

Approaches to Multiple-Processor Scheduling

One approach to CPU scheduling in a multiprocessor system is to have an environment in which all scheduling decisions, I/O processing, and other system activities are handled by a single processor, called the master server. The remaining processors execute only user code. We

62 Principles of Operating Systems

observe that this asymmetric multiprocessing is simple because only one processor accesses the system data structures, reducing the need for data sharing. A second approach uses symmetric multiprocessing (SMP), where each processor is self-scheduling and handles all aspects itself. In such an environment, all processes may be in a common ready queue, or each processor may have its own private queue of ready processes. In this set-up, scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute. A word of caution: if we have multiple processors trying to access and update a common data structure, the schedulers have to be programmed carefully to avoid conflict. We must ensure that no two processors choose the same process and that processes are not lost from the queue. Virtually all modern operating systems support symmetric multiprocessing.
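The common-ready-queue variant of SMP can be sketched as follows. This is a simplified illustration, with Python threads standing in for processors and a single lock guarding the shared queue; the process names and the "run" step are hypothetical, and a real kernel would use finer-grained synchronization.

```python
import threading
from collections import deque

ready_queue = deque(f"P{i}" for i in range(6))   # shared ready queue
queue_lock = threading.Lock()                    # guards the shared queue
executed = []                                    # (pid, cpu_id) records

def cpu_worker(cpu_id):
    """Each 'processor' self-schedules: it repeatedly takes one process
    from the common ready queue. Holding the lock while dequeuing ensures
    no two processors pick the same process and none is lost."""
    while True:
        with queue_lock:
            if not ready_queue:
                return                           # nothing left to schedule
            pid = ready_queue.popleft()
        # ... run pid for its time slice (omitted in this sketch) ...
        with queue_lock:
            executed.append((pid, cpu_id))

# Two threads play the role of two identical (homogeneous) CPUs.
cpus = [threading.Thread(target=cpu_worker, args=(i,)) for i in range(2)]
for c in cpus:
    c.start()
for c in cpus:
    c.join()
```

After both workers finish, every process appears in `executed` exactly once, regardless of which CPU happened to pick it up, which is precisely the conflict-free property the text demands.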

REVIEW QUESTIONS

A: Conceptual and conventional questions :

1. Define the term CPU scheduling. Give its function.

2. Explain the various scheduling criteria. Give the points of their divergence.

3. What do you understand by CPU-Bound and I/O-Bound processes?

4. Define CPU burst time. Explain how it is calculated. Give example.

5. Differentiate between preemptive and non-preemptive scheduling. Illustrate through an example.

6. What is a dispatcher? Explain its functions.

7. Describe FCFS scheduling algorithm and give its steps. Illustrate with an example.

8. Describe SJF scheduling algorithm. Show this with a diagram.

9. Describe Round Robin scheduling algorithm. Give its advantages and disadvantages.

10. Describe Priority scheduling algorithm. Illustrate with an example.

11. Compare and contrast FCFS and SJF scheduling algorithms.

12. What is the role of time quantum in RR algorithm?

13. State the purpose of CPU scheduling and give its importance. Illustrate with an example.

B: Fill in the blanks with appropriate word/words :

1. The number of processes completed per unit time is called ____________.

2. The mean time from submission to completion of a process is called ________.

3. Amount of time spent in ready-to-run queue but actually not running is called ____________.

4. The time between submission of a request for CPU time for a process and the first response to the request is called ____________.

5. The _________________decides which jobs or processes are to be admitted to the ready queue.

6. The long-term scheduler is also called ___________________.

Answers

1. throughput 2. turnaround time 3. waiting time

4. response time 5. scheduler 6. job scheduler


C: State true or false. If false, write the correct statement :

1. The long-term scheduler considers in which order processes arriving for execution should be put in a queue and decides which one should be sent to the CPU for execution.

2. A running process in the execution state may be suspended because of I/O or because of preemption.

3. Short term scheduler decides which of the available jobs on the hard disk are ready for execution.

4. In non-preemptive scheduling a process does not give up CPU until it either terminates or performsI/O.

5. SJF is an example of a priority-based scheduling algorithm.

Answers

1. False (Long term scheduler admits newly created processes into ready queue).

2. True

3. False (The short-term scheduler decides which process from the ready queue is allocated the CPU for execution).

4. True 5. True

D: Multiple choice questions :

1. FIFO scheduling is ________.

(a) preemptive scheduling (b) non-preemptive scheduling

(c) deadline scheduling (d) fair-share scheduling

2. Round Robin scheduling is essentially the preemptive version of ________.

(a) FIFO

(b) Shortest Job First

(c) Shortest Remaining Time First

(d) Longest Time First

3. Non-preemptive scheduling is a mechanism in which a process/thread, once given the CPU, runs until it completes or blocks. The following strategies are commonly non-preemptive. In which case is this wrong?

(a) FCFS (b) SJN

(c) Priority (d) Deadline

4. Threads can be implemented in each of the following ways. In which case this is wrong?

(a) Run-time libraries (b) Operating system

(c) Java Virtual Machine (d) Parent/child processes

5. Which one of the following state-transitions does not affect any "ready" queue?

(a) Release (b) Timeout

(c) Event occurs (d) Admit

6. Which one of the following statements correctly describes the relationship between the processes and programs in a computer system at any given moment?

(a) Every program stored in secondary memory must be associated with a process.

(b) A different program must be associated with every process.

(c) Several programs may be associated with the same process.

(d) Several processes may be associated with the same program.


7. When a process is in blocked state, it is actually

(a) Waiting for I/O operation (b) Waiting for CPU for its execution

(c) Waiting in ready queue (d) Waiting for entering into ready queue

8. Which type of scheduling scheme did the Windows 3.1 operating system use?

(a) preemptive multitasking (b) preemptive scheduling

(c) uniprogramming (d) co-operative multitasking

9. Which of the following state transitions is not possible?

(a) blocked to running (b) ready to running

(c) blocked to ready (d) running to blocked

10. A major problem with priority scheduling is _________.

(a) definite blocking (b) starvation

(c) low priority (d) infinite blocking

Answers

1. (b) 2. (a) 3. (d) 4. (c) 5. (a)

6. (d) 7. (a) 8. (d) 9. (a) 10. (b)

❑❑❑