Multilevel queue

Multi-level queueing, used at least since the late 1950s or early 1960s, is a queue with a predefined number of levels. Items are assigned to a particular level at insertion time (using some predefined algorithm) and cannot subsequently be moved to another level (unlike in the multilevel feedback queue). Items are removed from the queue by removing all items from one level and then moving on to the next; if an item is added to a higher level, fetching restarts from there. Each level of the queue may use its own scheduling algorithm, which adds greater flexibility beyond merely having multiple levels in a queue.
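
As a rough illustration of the structure described above, the following Python sketch (hypothetical, not taken from any particular implementation) keeps a fixed number of levels, assigns each item to a level once at insertion, and always fetches from the highest non-empty level:

    from collections import deque

    class MultilevelQueue:
        """Fixed number of levels; items never move between levels."""

        def __init__(self, levels):
            self.levels = [deque() for _ in range(levels)]

        def insert(self, item, level):
            # The level is chosen once, at insertion time.
            self.levels[level].append(item)

        def fetch(self):
            # Always take from the highest-priority (lowest-numbered)
            # non-empty level; an insertion into a higher level makes
            # the next fetch restart from that level.
            for queue in self.levels:
                if queue:
                    return queue.popleft()
            return None

    # Example: level 0 is served before level 1.
    mlq = MultilevelQueue(levels=2)
    mlq.insert("a", level=1)
    mlq.insert("b", level=0)
    print(mlq.fetch())  # "b" (level 0 first)
    print(mlq.fetch())  # "a"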

Process Scheduling

The multi-level queue scheduling algorithm[1]:196 is used in scenarios where processes can be classified into groups based on properties such as process type, CPU time, I/O access, or memory size. One common classification separates foreground processes from background processes. In a multi-level queue scheduling algorithm there are 'n' queues, where 'n' is the number of groups into which the processes are classified. Each queue is assigned a priority and has its own scheduling algorithm, such as round-robin scheduling[1]:194 or FCFS. For a process in a given queue to execute, all queues of higher priority must be empty, meaning that every process in those higher-priority queues must have completed its execution. As with the general data structure above, once a process has been assigned to a queue it does not move to any other queue.
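
The following is a minimal sketch of the idea, assuming just two queues (a higher-priority foreground queue scheduled round-robin and a background queue scheduled FCFS), hypothetical process names, and that all processes are ready at time zero with no later arrivals:

    from collections import deque

    def schedule(foreground, background, quantum=2):
        # foreground / background: lists of (name, remaining_time).
        # Assignment to a queue is permanent; the background queue only
        # runs when the foreground queue is empty.
        fg, bg = deque(foreground), deque(background)
        order = []
        while fg or bg:
            if fg:  # all higher-priority work must be drained first
                name, remaining = fg.popleft()
                run = min(quantum, remaining)        # round-robin slice
                order.append((name, run))
                if remaining - run > 0:
                    fg.append((name, remaining - run))
            else:
                name, remaining = bg.popleft()       # FCFS: run to completion
                order.append((name, remaining))
        return order

    print(schedule([("P0", 3), ("P1", 4)], [("P2", 5)]))
    # [('P0', 2), ('P1', 2), ('P0', 1), ('P1', 2), ('P2', 5)]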

Consider the following table of arrival times, execution times and process types (foreground or background, where foreground processes are given higher priority), which illustrates non-pre-emptive and pre-emptive multi-level queue scheduling with FCFS as the algorithm for both queues:

Process Name   Arrival Time   Execute Time   Type
P0             0              5              Foreground
P1             1              8              Background
P2             3              7              Background
P3             4              3              Foreground
P4             5              3              Foreground
P5             8              11             Background
P6             15             3              Foreground
P7             25             4              Foreground
Figure: Non-pre-emptive and pre-emptive multi-level queue scheduling of the processes above.
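
The schedules can also be worked out programmatically. The sketch below (an assumption of how such a computation could look, not code from the cited reference) steps through time one unit at a time, always preferring the foreground queue, serving each queue FCFS, and, in the pre-emptive variant, letting a newly arrived foreground process interrupt a running background process:

    procs = [  # (name, arrival, burst, is_foreground) -- the table above
        ("P0", 0, 5, True),  ("P1", 1, 8, False), ("P2", 3, 7, False),
        ("P3", 4, 3, True),  ("P4", 5, 3, True),  ("P5", 8, 11, False),
        ("P6", 15, 3, True), ("P7", 25, 4, True),
    ]

    def simulate(preemptive):
        remaining = {name: burst for name, _, burst, _ in procs}
        finish, current, t = {}, None, 0
        while len(finish) < len(procs):
            ready = [p for p in procs if p[1] <= t and remaining[p[0]] > 0]
            # Foreground queue first, FCFS (earliest arrival) within each queue.
            ready.sort(key=lambda p: (not p[3], p[1]))
            if (current is None or remaining[current[0]] == 0
                    or (preemptive and ready and ready[0][3] and not current[3])):
                current = ready[0] if ready else None
            if current is not None:
                remaining[current[0]] -= 1          # run for one time unit
                if remaining[current[0]] == 0:
                    finish[current[0]] = t + 1      # record completion time
            t += 1
        return finish

    print("non-pre-emptive:", simulate(False))
    print("pre-emptive:    ", simulate(True))

With these inputs the sketch reports, for example, that P6 (arriving at time 15) completes at 22 in the non-pre-emptive case, because it must wait for the background process P1 to finish, but at 18 in the pre-emptive case, where it preempts P1 immediately.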

Related Research Articles

In computer science, a priority queue is an abstract data type similar to a regular queue or stack data structure in which each element additionally has a "priority" associated with it. In a priority queue, an element with high priority is served before an element with low priority. In some implementations, if two elements have the same priority, they are served according to the order in which they were enqueued, while in other implementations, ordering of elements with the same priority is undefined.
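A small Python illustration of the FIFO tie-breaking behaviour mentioned above, using the standard heapq module and an insertion counter (the item names are hypothetical):

    import heapq
    import itertools

    counter = itertools.count()   # ties on priority break by insertion order
    pq = []

    def enqueue(priority, item):
        heapq.heappush(pq, (priority, next(counter), item))

    def dequeue():
        priority, _, item = heapq.heappop(pq)
        return item

    enqueue(2, "low")
    enqueue(1, "high-a")
    enqueue(1, "high-b")
    print(dequeue(), dequeue(), dequeue())  # high-a high-b low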

Real-time computing (RTC), or reactive computing, is the computer science term for hardware and software systems subject to a "real-time constraint", for example a bound on the time from an event to the system's response. Real-time programs must guarantee response within specified time constraints, often referred to as "deadlines".

A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time applications that process data as it comes in, typically without buffer delays. Processing time requirements are measured in tenths of seconds or shorter increments of time. A real-time system is a time-bound system with well-defined, fixed time constraints; processing must be done within those constraints or the system will fail. Such systems are either event-driven or time-sharing: event-driven systems switch between tasks based on their priorities, while time-sharing systems switch tasks based on clock interrupts. Most RTOSs use a pre-emptive scheduling algorithm.

In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows.

Round-robin scheduling

Round-robin (RR) is one of the algorithms employed by process and network schedulers in computing. As the term is generally used, time slices are assigned to each process in equal portions and in circular order, handling all processes without priority. Round-robin scheduling is simple, easy to implement, and starvation-free. It is an operating-system concept, but it can also be applied to other scheduling problems, such as data packet scheduling in computer networks.

In modern computers many processes run at once. Active processes are placed in an array called a run queue, or runqueue. The run queue may contain priority values for each process, which will be used by the scheduler to determine which process to run next. To ensure each program has a fair share of resources, each one is run for some time period (quantum) before it is paused and placed back into the run queue. When a program is stopped to let another run, the program with the highest priority in the run queue is then allowed to execute.
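As a minimal sketch of the idea (hypothetical process names, an equal quantum for every process, and no priorities), a round-robin pass over a run queue might look like this:

    from collections import deque

    def round_robin(processes, quantum):
        # Each process gets an equal time slice, then goes to the back of
        # the run queue until it has no remaining work.
        run_queue = deque(processes)          # (name, remaining_time)
        timeline, t = [], 0
        while run_queue:
            name, remaining = run_queue.popleft()
            run = min(quantum, remaining)
            timeline.append((t, name, run))
            t += run
            if remaining > run:
                run_queue.append((name, remaining - run))
        return timeline

    print(round_robin([("A", 5), ("B", 2), ("C", 4)], quantum=2))
    # [(0, 'A', 2), (2, 'B', 2), (4, 'C', 2), (6, 'A', 2), (8, 'C', 2), (10, 'A', 1)]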

In computer science, a multilevel feedback queue is a scheduling algorithm. The Solaris 2.6 Time-Sharing (TS) scheduler implements this algorithm, and the MacOS and Microsoft Windows schedulers can both be regarded as examples of the broader class of multilevel feedback queue schedulers. This scheduling algorithm is intended to meet the following design requirements for multimode systems:

  1. Give preference to short jobs.
  2. Give preference to I/O bound processes.
  3. Separate processes into categories based on their need for the processor.
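
A highly simplified sketch of one common multilevel feedback queue rule (an assumption for illustration, not the exact policy of any of the schedulers named above): every process starts in the top queue and is demoted one level whenever it uses up its whole time slice, so short and I/O-bound jobs tend to remain in the high-priority queues.

    from collections import deque

    def mlfq(processes, quanta=(2, 4, 8)):
        # processes: list of (name, remaining_time); lower queues get
        # longer quanta, and a process that exhausts its slice is demoted.
        queues = [deque() for _ in quanta]
        for p in processes:
            queues[0].append(p)               # everyone starts at the top
        order = []
        while any(queues):
            level = next(i for i, q in enumerate(queues) if q)
            name, remaining = queues[level].popleft()
            run = min(quanta[level], remaining)
            order.append((name, level, run))
            if remaining > run:               # used the full slice: demote
                next_level = min(level + 1, len(queues) - 1)
                queues[next_level].append((name, remaining - run))
        return order

    print(mlfq([("short", 2), ("long", 10)]))
    # [('short', 0, 2), ('long', 0, 2), ('long', 1, 4), ('long', 2, 4)]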

In computer science, gang scheduling is a scheduling algorithm for parallel systems that schedules related threads or processes to run simultaneously on different processors. Usually these will be threads all belonging to the same process, but they may also be from different processes, where the processes could have a producer-consumer relationship or come from the same MPI program.

In computer science, an input queue is a collection of processes in storage that are waiting to be brought into memory to run a program. Input queues are mainly used in operating system scheduling, a technique for distributing resources among processes. Input queues apply not only to operating systems (OS), but also to scheduling inside networking devices. The purpose of scheduling is to ensure that resources are distributed fairly and effectively, thereby improving the performance of the system.

A discrete-event simulation (DES) models the operation of a system as a (discrete) sequence of events in time. Each event occurs at a particular instant in time and marks a change of state in the system. Between consecutive events, no change in the system is assumed to occur; thus the simulation time can directly jump to the occurrence time of the next event, which is called next-event time progression.

Foreground-background is a scheduling algorithm that is used to control the execution of multiple processes on a single processor. It is based on two waiting lists: the first is called foreground because it is the one all processes initially enter, and the second is called background because processes that have used all of their execution time in the foreground are moved there.

In computing, job control refers to the control of multiple tasks or jobs on a computer system: ensuring that each has access to adequate resources to perform correctly, that competition for limited resources does not cause a deadlock in which two or more jobs are unable to complete, resolving such situations where they do occur, and terminating jobs that, for any reason, are not performing as expected.

Nano-RK is a wireless sensor networking real-time operating system (RTOS) from Carnegie Mellon University designed to run on microcontrollers for use in sensor networks. Nano-RK supports a fixed-priority fully preemptive scheduler with fine-grained timing primitives to support real-time task sets. "Nano" implies that the RTOS is small, consuming 2 KB of RAM and 18 KB of flash, while "RK" is short for resource kernel. A resource kernel provides reservations on how often system resources can be consumed. For example, a task might only be allowed to execute 10 ms every 150 ms, or a node might only be allowed to transmit 10 network packets per minute. These reservations form a virtual energy budget to ensure a node meets its designed battery lifetime as well as protecting a failed node from generating excessive network traffic. Nano-RK is open source, is written in C and runs on the Atmel-based FireFly sensor networking platform, the MicaZ motes and the MSP430 processor.

In operating systems, aging (or ageing) is a scheduling technique used to avoid starvation. Fixed-priority scheduling is a scheduling discipline in which tasks queued for a system resource are each assigned a priority; a task with a higher priority is allowed to access a given system resource before a task with a lower priority. A disadvantage of this approach is that tasks with a lower priority may be starved when a large number of high-priority tasks are queued. Aging gradually increases the priority of a task based on its waiting time in the ready queue.
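A minimal sketch of the idea, assuming hypothetical task records and a simple linear priority boost per tick of waiting time:

    def pick_next(ready, tick, boost=1):
        # ready: list of dicts with 'name', 'base_priority', 'waiting_since'.
        # Effective priority grows with time spent waiting, so a low-priority
        # task eventually overtakes permanently busy high-priority work.
        def effective(task):
            return task["base_priority"] + boost * (tick - task["waiting_since"])
        return max(ready, key=effective)

    ready = [
        {"name": "high", "base_priority": 10, "waiting_since": 18},
        {"name": "low",  "base_priority": 1,  "waiting_since": 0},
    ]
    print(pick_next(ready, tick=20)["name"])  # "low": long wait outweighs priority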

Brain Fuck Scheduler

The Brain Fuck Scheduler (BFS) is a process scheduler designed for the Linux kernel in August 2009 as an alternative to the Completely Fair Scheduler (CFS) and the O(1) scheduler. BFS was created by the experienced kernel programmer Con Kolivas.

Earliest deadline first (EDF) or least time to go is a dynamic priority scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs the queue will be searched for the process closest to its deadline. This process is the next to be scheduled for execution.
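At its core the selection rule is a minimum over absolute deadlines; a trivial sketch (hypothetical job names and deadlines) follows:

    def edf_pick(ready_jobs):
        # ready_jobs: list of (name, absolute_deadline); the job whose
        # deadline is closest is scheduled next.
        return min(ready_jobs, key=lambda job: job[1])

    jobs = [("logger", 250), ("sensor", 40), ("display", 100)]
    print(edf_pick(jobs))  # ('sensor', 40)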

The term scheduling analysis in real-time computing refers to the analysis, testing and verification of the scheduler system and of the algorithms used in real-time applications. For critical operations, a real-time system must be tested and verified for performance.

Processor sharing or egalitarian processor sharing is a service policy where the customers, clients or jobs are all served simultaneously, each receiving an equal fraction of the service capacity available. In such a system all jobs start service immediately.

In computer science, a monotone priority queue is a variant of the priority queue abstract data type in which the priorities of extracted items are required to form a monotonic sequence. That is, for a priority queue in which each successively extracted item is the one with the minimum priority, the minimum priority should be monotonically increasing. Conversely for a max-heap the maximum priority should be monotonically decreasing. The assumption of monotonicity arises naturally in several applications of priority queues, and can be used as a simplifying assumption to speed up certain types of priority queues.

Bucket queue

In the design and analysis of data structures, a bucket queue is a priority queue for prioritizing elements whose priorities are small integers. It has the form of an array of buckets: an array data structure, indexed by the priorities, whose cells contain buckets of items with the same priority as each other. With this data structure, insertion or removal of elements takes constant time. Searches for the minimum-priority element take time proportional to the number of buckets or, by maintaining a pointer to the most recently found bucket, in time proportional to the difference in priorities between the results of successive searches.
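A minimal sketch of such a structure (hypothetical, with a cursor that remembers the most recently found bucket, as described above):

    class BucketQueue:
        def __init__(self, max_priority):
            # One bucket per integer priority; priorities index the array.
            self.buckets = [[] for _ in range(max_priority + 1)]
            self.cursor = 0                   # last known minimum priority

        def insert(self, priority, item):
            self.buckets[priority].append(item)
            self.cursor = min(self.cursor, priority)

        def extract_min(self):
            # Scan forward from the cursor to the next non-empty bucket;
            # the cost amortizes over the range of priorities.
            while not self.buckets[self.cursor]:
                self.cursor += 1
            return self.buckets[self.cursor].pop()

    bq = BucketQueue(max_priority=10)
    bq.insert(5, "medium")
    bq.insert(2, "urgent")
    print(bq.extract_min())  # "urgent"
    print(bq.extract_min())  # "medium"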

References

  1. Silberschatz, Abraham; Galvin, Peter Baer; Gagne, Greg (2008). Operating system concepts (8th ed.). Hoboken, N.J.: Wiley. ISBN 0470128720.