YDS algorithm

YDS is a scheduling algorithm for processors with dynamic speed scaling which minimizes the total energy consumption. It was developed by Yao, Demers and Shenker, after whose initials it is named. [1] Both an online and an offline version of the algorithm exist.

Offline Algorithm

Definitions:

  * $J := \{J_1, \ldots, J_n\}$ is the set of jobs to be scheduled. Each job $J_i$ has an arrival time $a_i$, a deadline $d_i$ and a processing volume (workload) $w_i$.
  * $I = [z, z']$ denotes a time interval, and $S_I$ is the set of jobs that must be processed completely within $I$, i.e. the jobs $J_i$ with $[a_i, d_i] \subseteq I$.
  * The density (intensity) of $I$ is $\Delta_I = \frac{1}{|I|} \sum_{J_i \in S_I} w_i$, where $|I| = z' - z$ is the length of the interval.

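To make the density measure concrete, here is a minimal Python sketch (the function name and the job representation are illustrative, not part of the original formulation) that evaluates $\Delta_I$ for an interval on a fresh, as-yet-unscheduled timeline:

    def density(jobs, z, z_prime):
        """Delta_I for I = [z, z']: the total work of all jobs whose
        whole feasible window [a_i, d_i] lies inside I, divided by the
        interval length |I|. `jobs` is a list of (arrival, deadline,
        work) triples."""
        inside = [w for a, d, w in jobs if z <= a and d <= z_prime]
        return sum(inside) / (z_prime - z)

    # Example: the first two jobs must fit entirely inside [0, 5],
    # the third need not, so Delta = (3 + 2) / 5 = 1.0
    print(density([(0, 5, 3), (1, 4, 2), (0, 10, 4)], 0, 5))
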
The algorithm then works as follows:

    While $J \neq \emptyset$:
        Determine the time interval $I$ of maximum density $\Delta_I$.
        In $I$, process the jobs of $S_I$ at speed $\Delta_I$ according to EDF.
        Set $J := J \setminus S_I$.
        Remove $I$ from the time horizon and update the release times and deadlines of the unscheduled jobs accordingly.
    end While

In other words, the algorithm repeats the following steps until all jobs are scheduled:

  1. Calculate the intensities of all possible intervals, i.e. for every combination of a start time and a later end time. For each interval, the workloads of all jobs whose arrival time and deadline both lie inside the interval are added up and divided by the interval length. To speed up the process, only combinations of an arrival time with a later deadline need to be considered: an interval that does not begin at an arrival or end at a deadline can be shrunk to a smaller interval containing the same jobs, which only increases the intensity, and intervals of negative length are invalid. The interval of maximum intensity is then selected. If several intervals are equally intense, any one of them can be chosen, since the intensities of non-overlapping intervals do not influence each other, and removing part of an interval does not change the intensity of the rest, as jobs are removed proportionally.
  2. The jobs inside this interval are scheduled using Earliest Deadline First (EDF): the job in the interval whose deadline arrives soonest runs first, and so on. The jobs are executed at the intensity calculated above, which is exactly the speed needed to fit all of them inside the interval.
  3. The interval is removed from the timeline, as no further work can be scheduled there. To simplify the subsequent rounds, the arrival times and deadlines of all remaining jobs are recalculated so that the occupied time is omitted. For example, assume a job $J_1$ with arrival time $a_1$, deadline $d_1$ and workload $w_1$, and a job $J_2$ with $a_2$, $d_2$ and $w_2$. Assume the previously scheduled interval was $I = [z, z']$, with $a_1 < z$, $z \le a_2 \le z'$ and $z' < d_1, d_2$. To omit this interval, the times of $J_1$ and $J_2$ need to be adjusted; the workloads are unaffected, since no work has been done for either $J_1$ or $J_2$. $a_1$ stays the same, as it lies before the removed interval and is unaffected by later omissions. $d_1$, however, needs to be changed to $d_1 - |I|$, as $|I| = z' - z$ units of time have been cut out before it; this is the time job $J_1$ has left before its deadline. The arrival time $a_2$ becomes $z$, as it would otherwise lie inside the removed interval. $d_2$ also becomes $d_2 - |I|$, as the time left after the removed interval is $d_2 - z'$. It is important, however, to remember the actual arrival and deadline times for the later assembly of the schedule.
  4. Repeat steps 1-3 until all jobs have been scheduled (a compact implementation of the whole loop is sketched after this list).
  5. Assemble the jobs into the final schedule according to their allotted time intervals. Keep in mind that an interval may be split in two by another interval calculated earlier.
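
Putting steps 1-4 together, the following Python sketch (all names are illustrative, and it is an unoptimized transcription of the steps above rather than a reference implementation) keeps all times in original coordinates: instead of literally rewriting arrivals and deadlines as in step 3, it records which parts of the timeline are still available and measures density against the available time inside an interval, which is equivalent to compressing the timeline and also sidesteps the re-mapping of step 5:

    from dataclasses import dataclass

    @dataclass
    class Job:
        name: str        # label, used only for printing
        arrival: float   # a_i
        deadline: float  # d_i
        work: float      # w_i

    def pieces_inside(segments, z, zp):
        # Portions of the still-available segments lying inside [z, z'].
        return [(max(a, z), min(b, zp)) for a, b in segments
                if max(a, z) < min(b, zp)]

    def remove_interval(segments, z, zp):
        # Step 3: delete [z, z'] from the available segments.
        out = []
        for a, b in segments:
            if a < z:
                out.append((a, min(b, z)))
            if b > zp:
                out.append((max(a, zp), b))
        return out

    def yds_offline(jobs):
        # Returns one (jobs_in_EDF_order, speed, pieces) triple per
        # critical interval, in the order the intervals are peeled off.
        remaining = list(jobs)
        segments = [(min(j.arrival for j in jobs),
                     max(j.deadline for j in jobs))]
        plan = []
        while remaining:                                    # step 4
            best = None
            # Step 1: only arrival/deadline pairs need to be considered.
            for z in {j.arrival for j in remaining}:
                for zp in {j.deadline for j in remaining}:
                    if zp <= z:
                        continue                            # empty or negative interval
                    inside = [j for j in remaining
                              if z <= j.arrival and j.deadline <= zp]
                    avail = sum(b - a for a, b in pieces_inside(segments, z, zp))
                    if inside and avail > 0:
                        dens = sum(j.work for j in inside) / avail
                        if best is None or dens > best[0]:  # ties: first one wins
                            best = (dens, z, zp, inside)
            dens, z, zp, inside = best
            # Step 2: these jobs run at speed `dens` in EDF order within
            # the pieces (a full dispatcher would also preempt so that no
            # job starts before its arrival; feasibility is guaranteed).
            plan.append((sorted(inside, key=lambda j: j.deadline), dens,
                         pieces_inside(segments, z, zp)))
            segments = remove_interval(segments, z, zp)     # step 3
            remaining = [j for j in remaining if j not in inside]
        return plan
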

For any job instance, the algorithm computes an optimal schedule, i.e. one that minimizes the total energy consumption. [2]
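
For instance, running the sketch above on three hypothetical jobs shows how the critical intervals are peeled off in order of decreasing density, and how an earlier interval can split a later job's execution in two (the caveat in step 5):

    jobs = [Job("J1", 0, 10, 4), Job("J2", 3, 7, 4), Job("J3", 6, 14, 2)]
    for edf_jobs, speed, pieces in yds_offline(jobs):
        print([j.name for j in edf_jobs], round(speed, 2), pieces)
    # ['J2'] 1.0  [(3, 7)]           <- densest interval first
    # ['J1'] 0.67 [(0, 3), (7, 10)]  <- J1's execution is split around [3, 7]
    # ['J3'] 0.5  [(10, 14)]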

References

  1. F.F. Yao, A.J. Demers and S. Shenker. A scheduling model for reduced CPU energy. Proc. 36th IEEE Symposium on Foundations of Computer Science, 374–382, 1995.
  2. Susanne Albers, "Algorithms for Dynamic Speed Scaling".