Earliest deadline first scheduling


Earliest deadline first (EDF) or least time to go is a dynamic priority scheduling algorithm used in real-time operating systems to place processes in a priority queue. Whenever a scheduling event occurs (task finishes, new task released, etc.) the queue will be searched for the process closest to its deadline. This process is the next to be scheduled for execution.
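As an illustrative sketch (not drawn from any particular kernel; the ready_queue structure and field names are assumptions), the dispatching rule amounts to selecting, at every scheduling event, the ready process with the smallest absolute deadline:

```python
# Minimal sketch of the EDF dispatching rule; task names and fields are invented.
ready_queue = [
    {"name": "T0", "abs_deadline": 13},
    {"name": "T1", "abs_deadline": 7},
    {"name": "T2", "abs_deadline": 6},
]

def pick_next(queue):
    """On each scheduling event, run the ready process closest to its deadline."""
    return min(queue, key=lambda p: p["abs_deadline"]) if queue else None

print(pick_next(ready_queue)["name"])   # -> T2, the job with the earliest deadline
```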


EDF is an optimal scheduling algorithm on preemptive uniprocessors, in the following sense: if a collection of independent jobs, each characterized by an arrival time, an execution requirement and a deadline, can be scheduled (by any algorithm) in a way that ensures all the jobs complete by their deadline, then EDF will schedule this collection of jobs so that they all complete by their deadline.

With scheduling periodic processes that have deadlines equal to their periods, EDF has a utilization bound of 100%. Thus, the schedulability test [1] for EDF is:

$$U = \sum_{i=1}^{n} \frac{C_i}{T_i} \le 1$$

where the $C_i$ are the worst-case computation times of the $n$ processes and the $T_i$ are their respective inter-arrival periods (assumed to be equal to the relative deadlines). [2]

That is, EDF can guarantee that all deadlines are met provided that the total CPU utilization is not more than 100%. Compared to fixed-priority scheduling techniques like rate-monotonic scheduling, EDF can guarantee all the deadlines in the system at higher loading.
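A minimal sketch of this test (the edf_schedulable helper and the (C, T) task encoding are assumptions for illustration, not taken from the cited sources):

```python
from fractions import Fraction

def edf_schedulable(tasks):
    """Utilization-based EDF test for periodic tasks whose deadlines equal their periods.

    `tasks` is an iterable of (C, T) pairs: worst-case computation time and
    period.  Exact rational arithmetic avoids misjudging a borderline task set.
    """
    return sum(Fraction(c, t) for c, t in tasks) <= 1

# The three-process example from the "Example" section below: U = 1/8 + 2/5 + 4/10
print(edf_schedulable([(1, 8), (2, 5), (4, 10)]))   # -> True (U = 0.925)
```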

Note that the schedulability test above assumes that deadlines are equal to periods. When a deadline is shorter than its period, the situation is different. As an example, consider four periodic tasks, each written as TaskNo(computation time, relative deadline, period): T0(5,13,20), T1(3,7,11), T2(4,6,10) and T3(1,1,20). This task set has a utilization of 5/20 + 3/11 + 4/10 + 1/20 ≈ 0.97, which is no greater than 1.0, yet it is still unschedulable; see the EDF Scheduling Failure figure and the simulation sketch below for details.
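A unit-time EDF simulation makes the failure visible. This is only a sketch under the usual assumptions (all tasks released at time 0, integer time slices); the simulate_edf helper is not from the article:

```python
import heapq
from math import lcm

# (name, C, D, T): worst-case execution time, relative deadline, period
TASKS = [("T0", 5, 13, 20), ("T1", 3, 7, 11), ("T2", 4, 6, 10), ("T3", 1, 1, 20)]

def simulate_edf(tasks, horizon):
    """Unit-time preemptive EDF simulation; reports the first deadline miss."""
    ready = []                                     # min-heap of [abs_deadline, name, remaining]
    for t in range(horizon):
        for name, c, d, period in tasks:           # releases at this instant
            if t % period == 0:
                heapq.heappush(ready, [t + d, name, c])
        for job in ready:                          # any job past its deadline with work left?
            if job[0] <= t and job[2] > 0:
                return f"{job[1]} missed its deadline at time {job[0]}"
        if ready:
            ready[0][2] -= 1                       # run the earliest-deadline job for one slice
            if ready[0][2] == 0:
                heapq.heappop(ready)
    return "no deadline missed"

print(simulate_edf(TASKS, lcm(20, 11, 10, 20)))    # -> T1 missed its deadline at time 7
```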

Figure: EDF scheduling failure for the task set above.


EDF is also an optimal scheduling algorithm on non-preemptive uniprocessors, but only among the class of scheduling algorithms that do not allow inserted idle time. When scheduling periodic processes that have deadlines equal to their periods, a sufficient (but not necessary) schedulability test for EDF becomes: [3]

$$\sum_{i=1}^{n} \frac{C_i}{T_i} \le 1 - p$$

where $p$ represents the penalty for non-preemption, given by $\max_i\{C_i\} / \min_i\{T_i\}$. If this factor can be kept small, non-preemptive EDF can be beneficial, as it has a low implementation overhead.
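A sketch of this sufficient test, assuming $p$ is computed as the largest worst-case computation time divided by the shortest period, as reconstructed above (the helper name is an assumption):

```python
def np_edf_sufficient(tasks):
    """Sufficient (not necessary) test for non-preemptive EDF with implicit deadlines.

    `tasks` is a list of (C, T) pairs.  p penalizes non-preemption: the largest
    computation time divided by the shortest period.
    """
    utilization = sum(c / t for c, t in tasks)
    p = max(c for c, _ in tasks) / min(t for _, t in tasks)
    return utilization <= 1 - p

# The article's three-process example: p = 4/5, so the bound 1 - p = 0.2 is not met.
print(np_edf_sufficient([(1, 8), (2, 5), (4, 10)]))   # -> False
```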

However, when the system is overloaded, the set of processes that will miss deadlines is largely unpredictable (it is a function of the exact deadlines and of the time at which the overload occurs). This is a considerable disadvantage to a real-time systems designer. The algorithm is also difficult to implement in hardware, and there is a tricky issue of representing deadlines in different ranges (deadlines cannot be more precise than the granularity of the clock used for the scheduling). If modular arithmetic is used to calculate future deadlines relative to now, the field storing a future relative deadline must accommodate at least twice the duration of the longest expected time to completion, plus "now". Therefore, EDF is not commonly found in industrial real-time computer systems.

Instead, most real-time computer systems use fixed-priority scheduling (usually rate-monotonic scheduling). With fixed priorities, it is easy to predict that overload conditions will cause the low-priority processes to miss deadlines, while the highest-priority process will still meet its deadline.

There is a significant body of research dealing with EDF scheduling in real-time computing; it is possible to calculate worst case response times of processes in EDF, to deal with other types of processes than periodic processes and to use servers to regulate overloads.

Example

Consider 3 periodic processes scheduled on a preemptive uniprocessor. The execution times and periods are as shown in the following table:

Process Timing Data

Process   Execution time   Period
P1        1                8
P2        2                5
P3        4                10

In this example, the units of time may be considered to be schedulable time slices. The deadlines are that each periodic process must complete within its period.

Timing diagram

Figure: Timing diagram showing part of one possible schedule for the example.

In the timing diagram, the columns represent time slices with time increasing to the right, and the processes all start their periods at time slice 0. The timing diagram's alternating blue and white shading indicates each process's periods, with deadlines at the color changes.

The first process scheduled by EDF is P2, because its period is shortest, and therefore it has the earliest deadline. Likewise, when P2 completes, P1 is scheduled, followed by P3.

At time slice 5, both P2 and P3 have the same deadline, needing to complete before time slice 10, so EDF may schedule either one.

Utilization

The utilization will be:

$$U = \frac{1}{8} + \frac{2}{5} + \frac{4}{10} = 0.925$$

Since the least common multiple of the periods is 40, the scheduling pattern can repeat every 40 time slices. But, only 37 of those 40 time slices are used by P1, P2, or P3. Since the utilization, 92.5%, is not greater than 100%, the system is schedulable with EDF.
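A quick check of these numbers (purely illustrative arithmetic):

```python
from math import lcm

exec_times = [1, 2, 4]                 # C for P1, P2, P3
periods    = [8, 5, 10]                # T for P1, P2, P3
hyper = lcm(*periods)                  # 40: the schedule can repeat every 40 slices
busy = sum((hyper // t) * c for c, t in zip(exec_times, periods))
print(hyper, busy, busy / hyper)       # -> 40 37 0.925
```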

Deadline interchange

Undesirable deadline interchanges may occur with EDF scheduling. A process may use a shared resource inside a critical section, to prevent it from being pre-emptively descheduled in favour of another process with an earlier deadline. If so, it becomes important for the scheduler to assign the running process the earliest deadline from among the other processes waiting for the resource. Otherwise the processes with earlier deadlines might miss them.

This is especially important if the process inside the critical section takes much longer to complete and exit it, since that delays releasing the shared resource. Even so, the process may still be pre-empted in favour of other processes that have earlier deadlines but do not share the critical resource. This hazard of deadline interchange is analogous to priority inversion under fixed-priority pre-emptive scheduling.

To speed up the deadline search within the ready queue, the queue entries can be kept sorted by deadline. When a new process or a periodic process is given a new deadline, it is inserted before the first process with a later deadline. This way, the processes with the earliest deadlines are always at the beginning of the queue.
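A minimal sketch of such a deadline-ordered ready queue using sorted insertion (the entry format and names are assumptions):

```python
import bisect

# Ready queue kept sorted by absolute deadline; each entry is (deadline, name).
ready_queue = [(7, "P2"), (13, "P1")]

def enqueue(deadline, name):
    """Insert a newly released (or re-released periodic) job in deadline order."""
    bisect.insort(ready_queue, (deadline, name))

enqueue(6, "P3")
print(ready_queue[0])    # -> (6, 'P3'): the earliest deadline is always at the front
```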

Heavy traffic analysis for EDF queues with reneging

In a heavy-traffic analysis of the behavior of a single-server queue under an earliest-deadline-first scheduling policy with reneging, [4] the processes have deadlines and are served only until their deadlines elapse. The fraction of "reneged work", defined as the residual work not serviced due to elapsed deadlines, is an important performance measure.

Comparison with fixed-priority schedulers

It is commonly accepted that an implementation of fixed-priority pre-emptive scheduling (FPS) is simpler than a dynamic-priority scheduler such as EDF. However, when comparing maximum achievable utilization, EDF can reach 100%, while the theoretical maximum under an optimal fixed-priority assignment (with the priority of each thread given by rate-monotonic scheduling) is around 69%. In addition, the worst-case overhead of an EDF implementation (fully preemptive or limited/non-preemptive) for periodic and/or sporadic tasks can be made proportional to the logarithm of the largest time representation required by a given system (to encode deadlines and periods) using Digital Search Trees. [5] In practical cases, such as embedded systems using a fixed 32-bit representation of time, scheduling decisions can be made with this implementation in a small constant time that is independent of the number of system tasks. In such situations, experiments have found little discernible difference in overhead between EDF and FPS, even for task sets of (comparatively) large cardinality. [5]

Note that EDF does not make any specific assumption on the periodicity of the tasks; hence, it can be used for scheduling periodic as well as aperiodic tasks. [2]

Kernels implementing EDF scheduling

Although EDF implementations are not common in commercial real-time kernels, a number of open-source and research real-time kernels implement EDF; one example is the Linux kernel's SCHED_DEADLINE scheduling class, described below.


Related Research Articles

<span class="mw-page-title-main">Computer multitasking</span> Concurrent execution of multiple processes

In computing, multitasking is the concurrent execution of multiple tasks over a certain period of time. New tasks can interrupt already started ones before they finish, instead of waiting for them to end. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking automatically interrupts the running program, saving its state and loading the saved state of another program and transferring control to it. This "context switch" may be initiated at fixed time intervals, or the running program may be coded to signal to the supervisory software when it can be interrupted.

A real-time operating system (RTOS) is an operating system (OS) for real-time computing applications that processes data and events that have critically defined time constraints. An RTOS is distinct from a time-sharing operating system, such as Unix, which manages the sharing of system resources with a scheduler, data buffers, or fixed task prioritization in a multitasking or multiprogramming environment. Processing time requirements need to be fully understood and bounded rather than just kept to a minimum. All processing must occur within the defined constraints. Real-time operating systems are event-driven and preemptive, meaning the OS can monitor the relevant priority of competing tasks, and make changes to the task priority. Event-driven systems switch between tasks based on their priorities, while time-sharing systems switch the task based on clock interrupts.

<span class="mw-page-title-main">Thread (computing)</span> Smallest sequence of programmed instructions that can be managed independently by a scheduler

In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. In many cases, a thread is a component of a process.

<span class="mw-page-title-main">Queueing theory</span> Mathematical study of waiting lines, or queues

Queueing theory is the mathematical study of waiting lines, or queues. A queueing model is constructed so that queue lengths and waiting time can be predicted. Queueing theory is generally considered a branch of operations research because the results are often used when making business decisions about the resources needed to provide a service.

In computing, scheduling is the action of assigning resources to perform tasks. The resources may be processors, network links or expansion cards. The tasks may be threads, processes or data flows.

In computer science, rate-monotonic scheduling (RMS) is a priority assignment algorithm used in real-time operating systems (RTOS) with a static-priority scheduling class. The static priorities are assigned according to the cycle duration of the job, so a shorter cycle duration results in a higher job priority.

Completely Fair Queuing (CFQ) is an I/O scheduler for the Linux kernel which was written in 2003 by Jens Axboe.

Deficit Round Robin (DRR), also Deficit Weighted Round Robin (DWRR), is a scheduling algorithm for the network scheduler. DRR is, like weighted fair queuing (WFQ), a packet-based implementation of the ideal Generalized Processor Sharing (GPS) policy. It was proposed by M. Shreedhar and G. Varghese in 1995 as an efficient and fair algorithm.

<span class="mw-page-title-main">O(1) scheduler</span> Historical Linux 2.6 kernel process scheduler

An O(1) scheduler is a kernel scheduling design that can schedule processes within a constant amount of time, regardless of how many processes are running on the operating system. This is an improvement over previously used O(n) schedulers, which schedule processes in an amount of time that scales linearly with the number of processes.

Micro-Controller Operating Systems is a real-time operating system (RTOS) designed by Jean J. Labrosse in 1991. It is a priority-based preemptive real-time kernel for microprocessors, written mostly in the programming language C. It is intended for use in embedded systems.

Least slack time (LST) scheduling is an algorithm for dynamic priority scheduling. It assigns priorities to processes based on their slack time. Slack time is the amount of time left after a job if the job was started now. This algorithm is also known as least laxity first. Its most common use is in embedded systems, especially those with multiple processors. It imposes the simple constraint that each process on each available processor possesses the same run time, and that individual processes do not have an affinity to a certain processor. This is what lends it a suitability to embedded systems.

In computing, preemption is the act of temporarily interrupting an executing task, with the intention of resuming it at a later time. This interrupt is done by an external scheduler with no assistance or cooperation from the task. This preemptive scheduler usually runs in the most privileged protection ring, meaning that interruption and then resumption are considered highly secure actions. Such changes to the currently executing task of a processor are known as context switching.

Deadline-monotonic priority assignment is a priority assignment policy used with fixed-priority pre-emptive scheduling.

Dynamic priority scheduling is a type of scheduling algorithm in which the priorities are calculated during the execution of the system. The goal of dynamic priority scheduling is to adapt to dynamically changing progress and to form an optimal configuration in a self-sustained manner. It can be very hard to produce well-defined policies to achieve the goal depending on the difficulty of a given problem.

<span class="mw-page-title-main">Completely Fair Scheduler</span> Linux process scheduler

The Completely Fair Scheduler (CFS) is a process scheduler that was merged into the 2.6.23 release of the Linux kernel and is the default scheduler of the tasks of the SCHED_NORMAL class. It handles CPU resource allocation for executing processes, and aims to maximize overall CPU utilization while also maximizing interactive performance.

Nano-RK is a wireless sensor networking real-time operating system (RTOS) from Carnegie Mellon University, designed to run on microcontrollers for use in sensor networks. Nano-RK supports a fixed-priority fully preemptive scheduler with fine-grained timing primitives to support real-time task sets. "Nano" implies that the RTOS is small, using 2 KB of random-access memory (RAM) and using 18 KB of flash memory, while RK is short for resource kernel. A resource kernel provides reservations on how often system resources can be used. For example, a task might only be allowed to execute 10 ms every 150 ms, or a node might only be allowed to transmit 10 network packets per minute. These reservations form a virtual energy budget to ensure a node meets its designed battery lifetime and to prevent a failed node from generating excessive network traffic. Nano-RK is open-source software, is written in C and runs on the Atmel-based FireFly sensor networking platform, the MicaZ motes, and the MSP430 processor.

<span class="mw-page-title-main">Brain Fuck Scheduler</span> Process scheduler in Linux

The Brain Fuck Scheduler (BFS) is a process scheduler designed for the Linux kernel in August 2009 based on earliest eligible virtual deadline first scheduling (EEVDF), as an alternative to the Completely Fair Scheduler (CFS) and the O(1) scheduler. BFS was created by an experienced kernel programmer Con Kolivas.

SCHED_DEADLINE: EDF-based task scheduler in the Linux kernel

SCHED_DEADLINE is a CPU scheduler available in the Linux kernel since version 3.14, based on the earliest deadline first (EDF) and constant bandwidth server (CBS) algorithms, supporting resource reservations: each task scheduled under such policy is associated with a budget Q, and a period P, corresponding to a declaration to the kernel that Q time units are required by that task every P time units, on any processor. This makes SCHED_DEADLINE particularly suitable for real-time applications, like multimedia or industrial control, where P corresponds to the minimum time elapsing between subsequent activations of the task, and Q corresponds to the worst-case execution time needed by each activation of the task.

The term scheduling analysis in real-time computing refers to the evaluation, testing and verification of the scheduling system and the algorithms used in real-time applications. For critical operations, a real-time system must be tested and verified for performance.

<span class="mw-page-title-main">Bucket queue</span> Data structure for integer priorities

A bucket queue is a data structure that implements the priority queue abstract data type: it maintains a dynamic collection of elements with numerical priorities and allows quick access to the element with minimum priority. In the bucket queue, the priorities must be integers, and it is particularly suited to applications in which the priorities have a small range. A bucket queue has the form of an array of buckets: an array data structure, indexed by the priorities, whose cells contain collections of items with the same priority as each other. With this data structure, insertion of elements and changes of their priority take constant time. Searching for and removing the minimum-priority element takes time proportional to the number of buckets or, by maintaining a pointer to the most recently found bucket, in time proportional to the difference in priorities between successive operations.

References

  1. Xu, J.; Parnas, D.L. (1990). "Scheduling processes with release times, deadlines, precedence and exclusion relations". IEEE Transactions on Software Engineering. 16 (3): 360–369. doi:10.1109/32.48943.
  2. Buttazzo, Giorgio (2011). Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications (Third ed.). New York, NY: Springer. p. 100. ISBN 9781461406761.
  3. Short, Michael (2011). "Improved schedulability analysis of implicit deadline tasks under limited preemption EDF scheduling". 2011 IEEE International Conference on Emerging Technology and Factory Automation. pp. 1–8. doi:10.1109/ETFA.2011.6059008. ISBN 978-1-4577-0017-0. S2CID 7656331.
  4. Kruk, Łukasz; Lehoczky, John; Ramanan, Kavita; Shreve, Steven (2011). "Heavy traffic analysis for EDF queues with reneging" (PDF). The Annals of Applied Probability. 21 (2). doi:10.1214/10-AAP681. S2CID 12268649.
  5. Short, Michael (April 2010). "Improved Task Management Techniques for Enforcing EDF Scheduling on Recurring Tasks". 2010 16th IEEE Real-Time and Embedded Technology and Applications Symposium. pp. 56–65. doi:10.1109/RTAS.2010.22. ISBN 978-1-4244-6690-0. S2CID 13940378.
  6. Cucinotta, Tommaso (2008). "Access Control for Adaptive Reservations on Multi-User Systems". 2008 IEEE Real-Time and Embedded Technology and Applications Symposium. pp. 387–396. doi:10.1109/RTAS.2008.16. ISBN 978-0-7695-3146-5. S2CID 1008365.
  7. Jansen, Pierre G.; Mullender, Sape J.; Havinga, Paul J.M.; Scholten, Hans. Lightweight EDF Scheduling with Deadline Inheritance.