In computer science, an input queue is a collection of processes in storage that are waiting to be brought into memory to run a program. Input queues are mainly used in operating system scheduling, a technique for distributing resources among processes. Input queues apply not only to operating systems (OS), but also to scheduling inside networking devices. The purpose of scheduling is to ensure that resources are distributed fairly and effectively, thereby improving the performance of the system.
Essentially, a queue is a collection in which data is added at the rear and removed from the front. There are many different types of queues, and they can operate in very different ways. Operating systems use first-come, first-served queues, shortest-remaining-time scheduling, fixed-priority pre-emptive scheduling, round-robin scheduling and multilevel queue scheduling. Network devices use first-in, first-out (FIFO) queues, weighted fair queues, priority queues and custom queues.
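The basic operations can be sketched in a few lines of Python using the standard deque type; the item names are illustrative and not tied to any particular system:

```python
from collections import deque

# A minimal FIFO queue: items enter at the rear and leave from the front.
queue = deque()
queue.append("process A")   # enqueue at the rear
queue.append("process B")
first = queue.popleft()     # dequeue from the front
print(first)                # -> process A
```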
In operating systems, processes are loaded into memory and wait for their turn to be executed by the central processing unit (CPU). CPU scheduling manages process states and uses the input queue to decide which process will be executed next.
With first-in, first-out scheduling, processes are taken out of the queue in the order in which they were put in. Every process is treated equally. If two processes with different priorities enter the queue and the lower-priority process arrives first, it is executed first. This approach may not be ideal when processes have different priorities, especially if some processes are long running.
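A minimal sketch of first-come, first-served dispatch in Python; the process names and burst times are invented for illustration:

```python
from collections import deque

def fcfs(processes):
    """Serve processes in arrival order; returns (name, start, finish) tuples.
    `processes` is a list of (name, burst_time) pairs, already in arrival order."""
    queue = deque(processes)
    clock = 0
    schedule = []
    while queue:
        name, burst = queue.popleft()   # the earliest arrival is served first
        schedule.append((name, clock, clock + burst))
        clock += burst
    return schedule

# A long job that arrived first delays every later job, regardless of priority.
print(fcfs([("long_low_priority", 100), ("short_high_priority", 2)]))
```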
The shortest-remaining-time method tries to predict the processing time of each process and orders the queue from the smallest to the largest predicted time. The prediction is based on each process's prior execution history, so its performance is not stable, but it generally yields shorter waiting times than first-come, first-served.
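One common way to estimate the next CPU burst is exponential averaging of past bursts; the sketch below uses that estimator to order the ready processes. The alpha value, initial estimate and burst histories are illustrative assumptions, not prescribed by any particular OS:

```python
def predict_burst(history, alpha=0.5, initial=10.0):
    # Exponential average: newer bursts count more than older ones.
    estimate = initial
    for actual in history:
        estimate = alpha * actual + (1 - alpha) * estimate
    return estimate

def order_by_remaining_time(processes):
    # processes: {name: list of past CPU burst lengths}
    return sorted(processes, key=lambda name: predict_burst(processes[name]))

ready = {"editor": [4, 5, 3], "compiler": [40, 38, 45], "shell": [1, 2, 1]}
print(order_by_remaining_time(ready))   # -> ['shell', 'editor', 'compiler']
```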
The fixed-priority pre-emptive scheduling method assigns each process a priority, for example based on its processing time, and arranges the queue in order of priority. The CPU serves processes from the highest to the lowest priority, and processes with the same priority are served first-come, first-served. The CPU temporarily stops serving a low-priority process when a higher-priority process enters the queue.
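A hedged sketch of priority dispatch using a heap ordered by (priority, arrival order); the numeric priorities and process names are assumptions made for illustration, and pre-emption would simply push the interrupted process back onto the heap:

```python
import heapq

ready = []        # heap of (priority, arrival_seq, name); lower number = higher priority
arrival_seq = 0

def admit(name, priority):
    global arrival_seq
    heapq.heappush(ready, (priority, arrival_seq, name))
    arrival_seq += 1            # equal priorities fall back to arrival order (FCFS)

def dispatch():
    priority, _, name = heapq.heappop(ready)
    return name

admit("logger", 5)
admit("interrupt_handler", 1)   # arrives later but has higher priority
print(dispatch())               # -> "interrupt_handler"
```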
The round-robin scheduling method gives every process the same amount of time (a time slice) and cycles through them. The method depends heavily on the length of the time slice: too short a slice fragments the processes with frequent switching, while too long a slice increases the waiting time of every process. Choosing the right time slice is the foundation of this method.
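A minimal round-robin simulation in Python; the quantum and the remaining-time values are illustrative:

```python
from collections import deque

def round_robin(processes, quantum):
    """processes: list of (name, remaining_time). Returns completion order."""
    queue = deque(processes)
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= quantum:
            finished.append(name)                       # completes within this slice
        else:
            queue.append((name, remaining - quantum))   # back to the rear of the queue
    return finished

# With quantum=2 the short job finishes early; with quantum=100 the schedule
# would effectively degenerate to first-come, first-served.
print(round_robin([("long", 10), ("short", 3)], quantum=2))   # -> ['short', 'long']
```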
The multilevel queue scheduling method employs several queues, and each queue may have its own scheduling algorithm. Multilevel queue scheduling is more complex than the other methods, but it gives the OS the flexibility to meet different response-time requirements in complicated situations.
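A sketch with two fixed levels; the split into interactive and batch queues, and the strict service order between them, are illustrative assumptions rather than a required design:

```python
from collections import deque

interactive = deque()   # could be served round-robin internally
batch = deque()         # could be served first-come, first-served internally

def pick_next():
    if interactive:
        return interactive.popleft()   # interactive work is always preferred
    if batch:
        return batch.popleft()
    return None

interactive.extend(["shell", "editor"])
batch.extend(["backup", "report"])
print(pick_next())   # -> "shell"
```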
In networking, packets rather than processes are what is scheduled. Many different types of packet travel across the network core every day, and they are treated very differently; for example, voice and video packets have higher priority than ordinary data packets. To manage and forward packets effectively, network devices also use an input queue to determine which packet will be transmitted first.
In first-in, first-out queueing, packets are taken out of the queue in the order in which they arrived, and every packet is treated with the same priority. If a large packet A arrives before a small packet B, B still has to wait until A has been completely transmitted. Because the system treats every packet the same, users can experience delay in delay-sensitive traffic such as voice packets.
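A toy calculation of this head-of-line delay; the link speed and packet sizes are made-up numbers:

```python
LINK_BPS = 64_000                               # a slow serial link (illustrative)
fifo = [("A_large", 12_000), ("B_voice", 1_600)]   # (name, size in bits)

clock = 0.0
for name, bits in fifo:
    clock += bits / LINK_BPS                    # serialization time of this packet
    print(f"{name} finishes at t={clock:.4f}s")
# The small voice packet B cannot start until the large packet A has been sent.
```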
Weighted fair queueing uses a max-min fair-share algorithm to distribute packets. The minimum fair share means the network OS gives every type of packet an equal minimum share of the resource. The maximum fair share means the network OS gives extra resources to packet types that need to transfer a large amount of data at that moment, but takes those resources back after the transfer. "Weighted" means the scheduler assigns a weight to each type of packet and, based on that weight, decides how to place packets into the queue and serve them. Usually each packet is weighted according to the IP precedence field in its IP header.
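A sketch of how weights derived from IP precedence translate into guaranteed bandwidth shares; deriving the weight as precedence + 1 follows a common convention but is an assumption here, as are the flow names and link speed:

```python
LINK_BPS = 1_000_000                              # illustrative link capacity

flows = {"voice": 5, "video": 4, "data": 0}       # IP precedence per traffic class
weights = {name: prec + 1 for name, prec in flows.items()}
total = sum(weights.values())

for name, w in weights.items():
    share = LINK_BPS * w / total                  # proportional guaranteed share
    print(f"{name}: weight {w}, guaranteed ~{share:.0f} bit/s")
# Capacity left unused by idle classes is redistributed to busy ones at run time.
```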
A priority queue is divided into four sub-queues with different priorities. Packets in each queue are served only when all higher-priority queues are empty. If packets arrive in an empty higher-priority queue while the network OS is transmitting packets from a lower-priority queue, the OS holds the lower-priority queue and processes the higher-priority packets first. The network OS does not bound how long lower-priority queues have to wait, because it always serves the queues from the highest to the lowest priority before moving on. Within each queue, packets are forwarded on a first-in, first-out basis.
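A sketch of strict priority selection across four sub-queues; the queue names follow the common high/medium/normal/low convention, and the classification of packets into queues is assumed rather than shown:

```python
from collections import deque

queues = {name: deque() for name in ("high", "medium", "normal", "low")}

def next_packet():
    for name in ("high", "medium", "normal", "low"):
        if queues[name]:
            return queues[name].popleft()   # lower queues wait, possibly indefinitely
    return None

queues["low"].append("bulk transfer")
queues["high"].append("voice frame")
print(next_packet())   # -> "voice frame"
```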
A custom queue is divided into 17 sub-queues. The first queue, queue 0, is reserved for the network OS to transmit system packets; the other 16 queues hold user-defined traffic. The user can classify important packet types and assign them to particular queues. Each queue has a limited size and drops all incoming packets once that limit is reached. Each queue is serviced for a configured amount of traffic per round; when that amount has been sent, the network OS holds the current queue and services the next one until it is empty or reaches its own limit. If a queue is empty, the network OS skips it and services the next queue.
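A sketch of one service round over the 17 sub-queues; the per-queue service limits and packet sizes are illustrative assumptions:

```python
from collections import deque

queues = [deque() for _ in range(17)]   # queue 0 reserved for system traffic
byte_count = [1500] * 17                # per-visit service limit for each queue

def service_round():
    sent = []
    for i, q in enumerate(queues):
        budget = byte_count[i]
        while q and budget > 0:          # empty queues are simply skipped
            name, size = q.popleft()
            budget -= size               # stop once this queue's budget is spent
            sent.append((i, name))
    return sent

queues[1].append(("voice", 200))
queues[2].append(("bulk", 1400))
print(service_round())   # -> [(1, 'voice'), (2, 'bulk')]
```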