Shortest job next

Figure: Shortest job next being executed

Shortest job next (SJN), also known as shortest job first (SJF) or shortest process next (SPN), is a scheduling policy that selects for execution the waiting process with the smallest execution time. [1] SJN is a non-preemptive algorithm. Shortest remaining time is a preemptive variant of SJN.
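
In the simplest case, where all jobs are already waiting and their execution times are known, the policy amounts to repeatedly running the waiting job with the smallest execution time. A minimal sketch in Python (job names and times are illustrative, not from any particular system):

    # Minimal sketch of non-preemptive shortest job next: all jobs are
    # assumed to be waiting already and their execution times known.
    def sjn_order(jobs):
        """Repeatedly pick the waiting job with the smallest execution time."""
        waiting = list(jobs)            # (name, execution_time) pairs
        order = []
        while waiting:
            shortest = min(waiting, key=lambda job: job[1])
            waiting.remove(shortest)    # run it to completion (non-preemptive)
            order.append(shortest)
        return order

    print(sjn_order([("A", 6), ("B", 8), ("C", 3)]))
    # [('C', 3), ('A', 6), ('B', 8)]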

Shortest job next is advantageous because of its simplicity and because it minimizes the average amount of time each process has to wait until its execution is complete. However, it can lead to starvation of processes that require a long time to complete if short processes are continually added. Highest response ratio next is similar but addresses this problem using a technique called aging. [2]
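
As an illustration of the effect on average waiting time, the sketch below compares the same three job lengths (chosen purely for illustration) run in arrival order and in shortest-job-next order:

    # Average waiting time for the same jobs under two orderings.
    def average_wait(execution_times):
        waits, elapsed = [], 0
        for t in execution_times:
            waits.append(elapsed)       # a job waits for everything run before it
            elapsed += t
        return sum(waits) / len(waits)

    arrival_order = [6, 8, 3]
    print(average_wait(arrival_order))          # (0 + 6 + 14) / 3 ≈ 6.67
    print(average_wait(sorted(arrival_order)))  # (0 + 3 + 9) / 3 = 4.0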

Another disadvantage of using shortest job next is that the total execution time of a job must be known before execution. While it is impossible to predict execution time perfectly, several methods can be used to estimate it, such as a weighted average of previous execution times. [3] A multilevel feedback queue can also be used to approximate SJN without requiring the total execution time to be known in advance. [1]
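
One commonly described estimator is an exponentially weighted average of the measured execution times. A sketch of the recurrence, with the smoothing factor, the initial guess, and the measurements chosen purely for illustration:

    # Exponentially weighted average of past execution times, a common
    # way to predict the next one. ALPHA and the initial estimate are
    # illustrative assumptions.
    ALPHA = 0.5   # weight given to the most recent measurement

    def next_estimate(previous_estimate, measured_time, alpha=ALPHA):
        # estimate_{n+1} = alpha * t_n + (1 - alpha) * estimate_n
        return alpha * measured_time + (1 - alpha) * previous_estimate

    estimate = 10.0                          # initial guess
    for measured in [6, 4, 6, 4, 13, 13, 13]:
        estimate = next_estimate(estimate, measured)
        print(estimate)                      # 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0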

Shortest job next can be used effectively with interactive processes, which generally follow a pattern of alternating between waiting for a command and executing it. If each execution burst of a process is regarded as a separate "job", past behaviour can indicate which process to run next, based on an estimate of its running time.

Shortest job next is used in specialized environments where accurate estimates of running time are available.

Weighted shortest job first

Weighted shortest job first (WSJF) is a modification of the concept used in agile development, in which jobs are weighted by their cost of delay so that the highest-value jobs are completed sooner. [4]

Value-flow rate (VFR) is an alternative, more intuitive name for WSJF that expresses cost of delay and duration in unitless relative "points" rather than actual units of time or money. [5]
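
A minimal sketch, assuming the common WSJF formulation in which each job's weight is its cost of delay divided by its duration, both expressed in relative points (the job names and point values below are illustrative):

    # Weighted shortest job first: rank jobs by cost of delay divided by
    # duration, both given in relative "points". Values are illustrative.
    jobs = [
        {"name": "feature A", "cost_of_delay": 8, "duration": 5},
        {"name": "feature B", "cost_of_delay": 3, "duration": 1},
        {"name": "feature C", "cost_of_delay": 5, "duration": 8},
    ]

    for job in jobs:
        job["wsjf"] = job["cost_of_delay"] / job["duration"]

    for job in sorted(jobs, key=lambda j: j["wsjf"], reverse=True):
        print(job["name"], round(job["wsjf"], 2))
    # feature B 3.0
    # feature A 1.6
    # feature C 0.62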

See also

  * Shortest remaining time
  * Highest response ratio next
  * Multilevel feedback queue

References

  1. Arpaci-Dusseau, Remzi H.; Arpaci-Dusseau, Andrea C. (2014). Operating Systems: Three Easy Pieces, "Scheduling: Introduction" (PDF). Arpaci-Dusseau Books.
  2. Tanenbaum, A. S. (2008). Modern Operating Systems (3rd ed.). Pearson Education. p. 156. ISBN 978-0-13-600663-3.
  3. Silberschatz, A.; Galvin, P. B.; Gagne, G. (2005). Operating System Concepts (7th ed.). Wiley. p. 161. ISBN 0-471-69466-5.
  4. Reinertsen, Donald (2008). Principles of Product Development Flow: Second Generation Lean Product Development. Celeritas Publishing. p. 193. ISBN 978-1-935401-00-1.
  5. Knesek, Doug. "'Value-Flow Rate': A Better Name for a Great Measure". Retrieved April 8, 2016.