Loss of load

Loss of load in an electrical grid is a term describing the situation in which the available generation capacity is less than the system load.[1] Multiple probabilistic reliability indices for generation systems use loss of load in their definitions, the most popular[2] being the loss of load probability (LOLP), which characterizes the probability of a loss of load occurring within a year.[1] Loss-of-load events are counted before mitigating actions (purchasing electricity from other systems, load shedding) are taken, so a loss of load does not necessarily cause a blackout.

Loss-of-load-based reliability indices

Multiple reliability indices for electrical generation are based on the loss of load being observed or calculated over a long interval (one or multiple years) in relatively small increments (an hour or a day). The total number of increments inside the long interval is designated as N (e.g., for a year-long interval, N = 365 if the increment is a day, N = 8760 if the increment is an hour).[3]
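The index definitions themselves did not survive in this excerpt; as a sketch, the standard loss-of-load indices over the N increments can be written as follows (the notation L_i for the load and C_i for the available capacity in increment i is assumed here, not taken from the source):

```latex
% Probability of at least one loss-of-load event in the interval:
\[
\mathrm{LOLP} = \Pr\!\left[\, C_i < L_i \text{ for some } i \in \{1,\dots,N\} \,\right]
\]
% Loss of load expectation: expected number of increments with a shortfall
% (days/year if the increment is a day, hours/year if it is an hour):
\[
\mathrm{LOLE} = \sum_{i=1}^{N} \Pr\!\left[\, C_i < L_i \,\right]
\]
% Expected unserved energy: expected total shortfall over the interval (MWh/year):
\[
\mathrm{EUE} = \sum_{i=1}^{N} \mathbb{E}\!\left[\, \max(0,\; L_i - C_i) \,\right]
\]
```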

One-day-in-ten-years criterion

A typically accepted design goal for LOLE is 0.1 day per year[10] (the "one-day-in-ten-years criterion"[10] a.k.a. "1 in 10"[11]), corresponding to a daily shortfall probability of 0.1/365 ≈ 0.000274. In the US, the threshold is set by the regional entities, such as the Northeast Power Coordinating Council:[11]

resources will be planned in such a manner that ... the probability of disconnecting non-interruptible customers will be no more than once in ten years

NPCC criteria on generation adequacy
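Compliance with such a criterion is typically checked analytically. A minimal sketch of the standard capacity-outage probability table method (the function names and generator data below are hypothetical illustrations, not from the source):

```python
def outage_table(units):
    """Build the capacity-outage probability table by convolution.

    units: list of (capacity_mw, forced_outage_rate) tuples.
    Returns a dict mapping total capacity on outage (MW) -> probability,
    assuming independent two-state (up/down) generating units.
    """
    dist = {0.0: 1.0}
    for cap, rate in units:
        new = {}
        for out, p in dist.items():
            # Unit available with probability (1 - rate)...
            new[out] = new.get(out, 0.0) + p * (1 - rate)
            # ...or on forced outage with probability rate.
            new[out + cap] = new.get(out + cap, 0.0) + p * rate
        dist = new
    return dist


def lole(units, daily_peaks):
    """Loss of load expectation in days/year: the sum over days of the
    probability that available capacity falls below the daily peak load."""
    total = sum(cap for cap, _ in units)
    dist = outage_table(units)
    return sum(
        p
        for load in daily_peaks
        for out, p in dist.items()
        if total - out < load
    )


# Hypothetical system: three 50 MW units, each with a 2% forced outage rate,
# serving a flat 100 MW daily peak. Loss of load requires at least two units out.
units = [(50.0, 0.02), (50.0, 0.02), (50.0, 0.02)]
print(lole(units, [100.0] * 365))
```

The computed LOLE would then be compared against the 0.1 day/year target; real studies use chronological load models and multi-state units, but the convolution above is the core of the analytic approach.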

See also

- Capacity credit
- Load-loss factor
- Power system reliability
- Reliability index
- Resource adequacy

References

Sources