A Time/Utility Function (TUF), née Time/Value Function, specifies the application-specific utility that an action (e.g., computational task, mechanical movement) yields depending on its completion time. [1] [2] TUFs and their utility interpretations (semantics), scales, and values are derived from application domain-specific subject matter knowledge. One (but not the only) interpretation of utility is an action's relative importance, which otherwise is independent of its timeliness. The traditional deadline represented as a TUF is a special case: a downward step of utility from 1 to 0 at the deadline time, i.e., timeliness without importance. A TUF is more general: it has a critical time, after which its utility does not increase, and application-specific shapes and utility values on each side of that time. The various researcher and practitioner definitions of firm and soft real-time can also be represented as special cases of the TUF model.
The optimality criterion for scheduling multiple TUF-constrained actions has historically, in the literature, been only maximal utility accrual (UA): for example, a (perhaps expected) weighted sum of the individual actions' completion utilities. The criterion thus takes into account timeliness with respect to critical times. Additional criteria (e.g., energy, predictability), constraints (e.g., dependencies), system models, scheduling algorithms, and assurances have been added as the TUF/UA paradigm and its use cases have evolved. More expressively, TUF/UA allows accrued utility, timeliness, predictability, and other scheduling criteria and constraints to be traded off against one another so that the schedule yields situational application QoS [lower-alpha 1], as opposed to only timeliness per se. Instances of the TUF/UA paradigm have been employed in a wide variety of application domains, most frequently in military systems.
The TUF/UA paradigm was originally created to address the action timeliness, predictability of timeliness, and application QoS-based scheduling needs of various military applications for which traditional real-time concepts and practices are neither sufficiently expressive (e.g., for dynamically timeliness-critical systems that have no deadlines) nor sufficiently resilient to load (e.g., for systems subject to routine action overloads). An important common example class of such applications is missile defense (notionally [3] [4] [5]).
Subsequently, numerous variations on the original TUF model, the TUF/UA paradigm's system model, and thus scheduling techniques and algorithms, have been studied in the academic literature—e.g., [6] [7] [8] [9] [10] —and applied in civilian contexts.
Some examples of the latter include: cyber-physical systems, [11] AI, [12] multi-robot systems, [13] drone scheduling, [14] autonomous robots, [15] intelligent vehicle-to-cloud data transfers, [16] industrial process control, [17] transaction systems, [18] high performance computing, [19] cloud systems, [20] heterogeneous clusters, [21] service-oriented computing, [22] networking, [23] and memory management for real [24] and virtual [25] machines. A steel mill example is briefly described in the Introduction of Clark's Ph.D. thesis. [26]
TUFs and their utility interpretations (semantics), scales, and values are derived from domain-specific subject matter knowledge. [27] [5] A historically frequent interpretation of utility is actions' relative importance. [lower-alpha 2] A framework for assigning static utility values a priori, subject to strong constraints on system models, has been devised, [8] but subsequent (like prior) TUF/UA research and development has preferred to exploit application-specificity rather than attempt to create more general frameworks. However, such frameworks and tools remain an important research topic.
By traditional convention, a TUF is a concave function, including linear ones. See the depiction of some example TUFs.
With few exceptions (e.g., [28] [6] [29] [30] [8] [10]), TUF/UA papers in the research literature consider only linear or piecewise linear [31] TUFs (including conventional deadline-based ones), because they are easier to specify and schedule. In many cases, the TUFs are only monotonically decreasing.
A constant function represents an action's utility that is not related to the action's completion time—for example, the action's constant relative importance. This allows both time-dependent and time-independent actions to be scheduled coherently.
A TUF has a global critical time, after which its utility does not increase. If a TUF never decreases, its global critical time is the first time when its maximum utility is reached. A constant TUF has an arbitrary critical time for the purpose of scheduling—such as the action's release time, or the TUF's termination time. The global critical time may be followed by local critical times [2] —for example, consider a TUF having a sequence of downward steps, perhaps to approximate a smooth downward curve. [lower-alpha 3]
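These structural notions can be made concrete with a small sketch. The following Python fragment is illustrative only; the class and constructor names are hypothetical and are not drawn from the cited literature. It represents a TUF as a mapping from completion time to utility, paired with its global critical time, and builds the downward-step and constant special cases discussed above.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TUF:
    """Illustrative TUF: a mapping from an action's completion time to utility,
    plus the global critical time after which utility does not increase."""
    utility: Callable[[float], float]
    global_critical_time: float

def step_tuf(critical_time: float, before: float = 1.0, after: float = 0.0) -> TUF:
    """Downward-step TUF; with before=1 and after=0 it is a conventional deadline."""
    return TUF(lambda t: before if t <= critical_time else after, critical_time)

def constant_tuf(value: float, release_time: float = 0.0) -> TUF:
    """Constant TUF; its critical time is arbitrary (here, the action's release time)."""
    return TUF(lambda t: value, release_time)
```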
TUF utility values are usually either integers or rational numbers.
TUF utility may include negative values. (An action whose TUF has negative values in its range is not necessarily dropped from scheduling consideration or aborted during its operation; that decision depends on the scheduling algorithm.)
A conventional deadline time (d) represented as a TUF is a special case—a downward step TUF [lower-alpha 4] having a unit penalty (i.e., having utility values 1 before and 0 after its critical time).
More generally, a TUF allows downward (and upward) step functions to have any pre- and post-critical time utilities.
Tardiness [32] represented as a TUF is a special case whose non-zero utility is the linear function C - d, where C is the action's completion time—either current, expected, or believed. [lower-alpha 5] More generally, a TUF allows non-zero earliness and tardiness to be non-linear—e.g., increasing tardiness may result in non-linearly decreasing utility, such as when detecting a threat.
Thus, TUFs provide a rich generalization of traditional action completion time constraints in real-time computing.
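As an illustration of these special cases, the sketch below shows one way (an assumption made here, not a formulation taken from the cited references) to express tardiness-style constraints as TUFs, using plain Python callables; minimizing tardiness then corresponds to maximizing the accrued (non-positive) utility.

```python
def tardiness_tuf(d: float):
    """Tardiness C - d expressed as a TUF (an assumed encoding): utility is zero up to
    the deadline d and falls linearly afterwards, so minimizing tardiness is equivalent
    to maximizing accrued utility."""
    return lambda t: 0.0 if t <= d else -(t - d)

def threat_detection_tuf(d: float, k: float = 1.0):
    """Non-linear case (hypothetical shape): utility degrades quadratically with lateness,
    e.g., the value of detecting a threat drops off sharply after its critical time."""
    return lambda t: 1.0 if t <= d else max(0.0, 1.0 - k * (t - d) ** 2)
```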
Alternatively, the TUF/UA paradigm can be employed to use timeliness with respect to the global critical time as a means to a utility accrual end, i.e., application-level Quality of Service (QoS), instead of timeliness per se being an end in itself.
A TUF (its shape and values) may be dynamically adapted by an application or its operational environment, [2] independently for any actions currently either waiting or operating. [lower-alpha 6]
These adaptations ordinarily occur at discrete events—e.g., at an application mode change such as for ballistic missile flight phases. [5]
Alternatively, these adaptations may occur continuously, such as for actions whose operational durations and TUFs are application-specific functions of when those actions are either released or begin operation. The operation durations may increase or decrease or both, and may be non-monotonic. This continuous case is called time-dependent scheduling. [33] [34] Time-dependent scheduling was introduced for (but is not limited to) certain real-time military applications, such as radar tracking systems. [35] [36] [lower-alpha 7]
Multiple actions in a system may contend for access to sequentially exclusively [lower-alpha 8] shared resources: physical ones such as processors, networks, and exogenous application devices (sensors, actuators, etc.), and logical ones such as synchronizers and data.
The TUF/UA paradigm resolves each instance of this contention using an application-specific algorithmic technique that creates (or updates) a schedule at scheduling events—e.g., times (such as action arrival or completion) or states. The instance's contending actions are dispatched for resource access sequentially in order from the front of the schedule. Thus, action UA sequencing is not greedy. [lower-alpha 9]
The algorithmic technique creates a schedule based on one or more application-specific objectives (i.e., optimality criteria).
The primary objective for scheduling actions having TUFs is maximal utility accrual (UA). The accrued utility is an application-specific polynomial sum of the schedule's completed actions' utilities. When actions have one or more stochastic parameters (e.g., operation duration), the accrued utility is also stochastic (i.e., an expected polynomial sum).
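The following sketch makes the UA objective concrete under deliberately simple assumptions: a single sequentially shared processor, fixed and known operation durations, and an unweighted sum of completion utilities. It exhaustively evaluates candidate sequences at a scheduling event, which is O(n!) and is intended only to illustrate the objective; published TUF/UA algorithms use polynomial-time heuristics instead. All names here are hypothetical.

```python
from dataclasses import dataclass
from itertools import permutations
from typing import Callable, List, Tuple

@dataclass
class Action:
    name: str
    duration: float                 # operation duration, assumed fixed and known
    tuf: Callable[[float], float]   # completion time -> utility

def accrued_utility(order: Tuple[Action, ...], now: float) -> float:
    """Unweighted sum of each action's utility at its completion time in this sequence."""
    t, total = now, 0.0
    for a in order:
        t += a.duration
        total += a.tuf(t)
    return total

def best_schedule(actions: List[Action], now: float) -> List[Action]:
    """Exhaustively pick the sequence maximizing accrued utility (illustration only)."""
    return list(max(permutations(actions), key=lambda seq: accrued_utility(seq, now)))

# Example: two actions contending for one processor.
deadline = lambda t: 1.0 if t <= 5.0 else 0.0   # downward-step TUF
constant = lambda t: 3.0                        # time-independent importance
print([a.name for a in best_schedule(
    [Action("A", 4.0, deadline), Action("B", 2.0, constant)], now=0.0)])  # ['A', 'B']
```

In the example, action A is sequenced first only because doing so accrues more total utility (1 + 3) than sequencing B first (3 + 0); the decision is made over the whole sequence rather than greedily, action by action.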
Utility and accrued utility are generic; their interpretations (semantics) and scales are application-specific. [27]
An action's operation duration may be fixed and known at system configuration time. More generally, it may be either fixed or stochastic but not known (either with certainty or in expectation) until the action either arrives or is released.
An operation duration may be an application-specific function of the action's operation starting time—it may increase or decrease or both, and may be non-monotonic. This case is called time-dependent scheduling. [33] [34] [35] [36]
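A minimal sketch of this time-dependent case, under a purely hypothetical duration model:

```python
# Hypothetical time-dependent action: its operation duration is a function of when it
# starts, so its completion time (and thus its TUF utility) depends on the scheduler's
# choice of start time.
def duration(start_time: float) -> float:
    # Illustrative shape only: the work grows the later the action starts,
    # e.g., a radar track update on a maneuvering target. It could equally
    # shrink, or vary non-monotonically.
    return 2.0 + 0.5 * start_time

def completion_time(start_time: float) -> float:
    return start_time + duration(start_time)

print(completion_time(0.0), completion_time(4.0))   # 2.0 vs. 8.0
```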