Availability

In reliability engineering, availability is the degree to which a system or component is operational and able to perform its required function when called upon.

High-availability systems are typically specified as 99.98%, 99.999%, or 99.9996% available. The converse, unavailability, is 1 minus the availability.

Representation

The simplest representation of availability (A) is the ratio of the expected value of the uptime of a system to the aggregate of the expected values of up and down time (which equals the total length C of the observation window):

A = E[uptime] / (E[uptime] + E[downtime]) = E[uptime] / C

Another equation for availability (A) is the ratio of the mean time to failure (MTTF) to the mean time between failures (MTBF):

A = MTTF / MTBF = MTTF / (MTTF + MTTR)

where MTTR is the mean time to repair.

If we define the status function as

X(t) = 1 if the system functions at time t, and X(t) = 0 otherwise,

then the availability A(t) at time t > 0 is represented by

A(t) = Pr[X(t) = 1] = E[X(t)].

Average availability must be defined on an interval of the real line. If we consider an arbitrary constant c > 0, then the average availability over [0, c] is represented as

A_c = (1/c) ∫₀^c A(t) dt.

Limiting (or steady-state) availability is represented by [1]

A = lim_{t→∞} A(t).

Limiting average availability is likewise defined on an interval:

A_∞ = lim_{c→∞} (1/c) ∫₀^c A(t) dt.

Availability is the probability that an item will be in an operable and committable state at the start of a mission when the mission is called for at a random time, and is generally defined as uptime divided by total time (uptime plus downtime).
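The uptime-over-total-time ratio above can be sketched as a short calculation; the hour figures below are purely illustrative:

```python
# Availability as the ratio of observed uptime to total observed time.
uptime_hours = 8750.0    # uptime observed over one year (illustrative)
downtime_hours = 10.0    # downtime observed in the same window (illustrative)

availability = uptime_hours / (uptime_hours + downtime_hours)
unavailability = 1.0 - availability  # the converse of availability

print(f"A = {availability:.5f}, unavailability = {unavailability:.5f}")
```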

Series vs. parallel components

[Figure: series vs. parallel components]


Suppose a series system is composed of components A, B, and C. Then the following formula applies:

Availability of series system = (availability of component A) × (availability of component B) × (availability of component C) [2] [3]

Therefore, the combined availability of multiple components in series is always lower than the availability of any individual component.

On the other hand, the following formula applies to parallel components:

Availability of parallel components = 1 − (1 − availability of component A) × (1 − availability of component B) × (1 − availability of component C) [2] [3]

[Figure: system availability chart. 10 hosts, each having 50% availability, can provide high availability if used in parallel and they fail independently.]

As a corollary, if you have N parallel components each having availability X, then:

Availability of parallel components = 1 − (1 − X)^N [3]

Using parallel components can increase the availability of the overall system dramatically, since the unavailability shrinks exponentially with the number of components. [2] For example, if each of your hosts has only 50% availability, using 10 hosts in parallel achieves 99.9023% availability. [3]
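The series and parallel formulas can be checked numerically; the sketch below reproduces the 10-host example and an illustrative three-component series system:

```python
from functools import reduce

def series_availability(components):
    """Series system: every component must be up, so availabilities multiply."""
    return reduce(lambda acc, a: acc * a, components, 1.0)

def parallel_availability(components):
    """Parallel system: the system is down only if every component is down,
    assuming components fail independently."""
    return 1.0 - reduce(lambda acc, a: acc * (1.0 - a), components, 1.0)

# Ten hosts, each 50% available, in parallel: 1 - 0.5**10
print(f"parallel: {parallel_availability([0.5] * 10):.6f}")
# Three 99%-available components in series:
print(f"series:   {series_availability([0.99] * 3):.6f}")
```

The parallel figure matches the 99.9023% quoted in the text; the series figure shows how chaining even highly available components erodes availability.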

Note that redundancy doesn't always lead to higher availability: redundancy increases complexity, which in turn can reduce availability. According to Marc Brooker, to take advantage of redundancy, ensure that: [4]

  1. You achieve a net-positive improvement in the overall availability of your system
  2. Your redundant components fail independently
  3. Your system can reliably detect healthy redundant components
  4. Your system can reliably scale out and scale in redundant components.

Methods and techniques to model availability

Reliability block diagrams or fault tree analysis can be developed to calculate the availability of a system, or of a functional failure condition within a system, taking many factors into account.

Furthermore, these methods can identify the most critical items, failure modes, or events that impact availability.

Definitions within systems engineering

Availability, inherent (Ai) [5] The probability that an item will operate satisfactorily at a given point in time when used under stated conditions in an ideal support environment. It excludes logistics time, waiting or administrative downtime, and preventive maintenance downtime. It includes corrective maintenance downtime. Inherent availability is generally derived from analysis of an engineering design:

  1. The impact of a repairable-element (refurbishing/remanufacture isn't repair, but rather replacement) on the availability of the system, in which it operates, equals mean time between failures MTBF/(MTBF+ mean time to repair MTTR).
  2. The impact of a one-off/non-repairable element (could be refurbished/remanufactured) on the availability of the system, in which it operates, equals the mean time to failure (MTTF)/(MTTF + the mean time to repair MTTR).

It is based on quantities under control of the designer.

Availability, achieved (Aa) [6] The probability that an item will operate satisfactorily at a given point in time when used under stated conditions in an ideal support environment (i.e., that personnel, tools, spares, etc. are instantaneously available). It excludes logistics time and waiting or administrative downtime. It includes active preventive and corrective maintenance downtime.

Availability, operational (Ao) [7] The probability that an item will operate satisfactorily at a given point in time when used in an actual or realistic operating and support environment. It includes logistics time, ready time, and waiting or administrative downtime, and both preventive and corrective maintenance downtime. This value is equal to the mean time between failure (MTBF) divided by the mean time between failure plus the mean downtime (MDT). This measure extends the definition of availability to elements controlled by the logisticians and mission planners such as quantity and proximity of spares, tools and manpower to the hardware item.
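The inherent and operational definitions differ only in which downtime is counted: Ai uses the mean time to repair (MTTR), while Ao uses the full mean downtime (MDT), including logistics and administrative delays. A minimal sketch with illustrative numbers:

```python
def inherent_availability(mtbf_hours, mttr_hours):
    """Ai: counts only corrective-maintenance (repair) downtime."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def operational_availability(mtbf_hours, mdt_hours):
    """Ao: counts mean downtime (MDT), including waiting for spares,
    tools, and personnel, so MDT >= MTTR and Ao <= Ai."""
    return mtbf_hours / (mtbf_hours + mdt_hours)

mtbf = 1000.0  # illustrative mean time between failures, in hours
print(f"Ai = {inherent_availability(mtbf, 2.0):.4f}")      # repair itself takes 2 h
print(f"Ao = {operational_availability(mtbf, 10.0):.4f}")  # plus 8 h of logistics delay
```

The gap between the two values is exactly the availability lost to the support environment rather than to the equipment itself.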

Refer to Systems engineering for more details.

Basic example

If we are using equipment which has a mean time to failure (MTTF) of 81.5 years and mean time to repair (MTTR) of 1 hour:

MTTF in hours = 81.5 × 365 × 24 = 713940 (This is a reliability parameter and often has a high level of uncertainty!)
Inherent availability (Ai) = 713940 / (713940+1) = 713940 / 713941 = 99.999860%


Inherent unavailability = 1 / 713941 = 0.000140%

Outage due to equipment failures, in hours per year = MTTR / (MTTF in years) = 1 / 81.5 ≈ 0.0123 hours per year.
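The arithmetic of this example can be reproduced directly:

```python
mttf_years = 81.5   # mean time to failure (a reliability parameter,
                    # often with a high level of uncertainty)
mttr_hours = 1.0    # mean time to repair

mttf_hours = mttf_years * 365 * 24             # 713,940 hours
ai = mttf_hours / (mttf_hours + mttr_hours)    # inherent availability
unavailability = 1.0 - ai
outage_hours_per_year = mttr_hours / mttf_years  # on average, one 1-hour outage per MTTF

print(f"MTTF = {mttf_hours:.0f} h")
print(f"Ai = {ai:.6%}, unavailability = {unavailability:.6%}")
print(f"outage ≈ {outage_hours_per_year:.4f} h/yr")
```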

Literature

Availability is well established in the literature of stochastic modeling and optimal maintenance. Barlow and Proschan [1975] define availability of a repairable system as "the probability that the system is operating at a specified time t." Blanchard [1998] gives a qualitative definition of availability as "a measure of the degree of a system which is in the operable and committable state at the start of mission when the mission is called for at an unknown random point in time." This definition comes from the MIL-STD-721. Lie, Hwang, and Tillman [1977] developed a complete survey along with a systematic classification of availability.

Availability measures are classified by either the time interval of interest or the mechanisms for the system downtime. If the time interval of interest is the primary concern, we consider instantaneous, limiting, average, and limiting average availability. The aforementioned definitions are developed in Barlow and Proschan [1975], Lie, Hwang, and Tillman [1977], and Nachlas [1998]. The second primary classification for availability is contingent on the various mechanisms for downtime such as the inherent availability, achieved availability, and operational availability. (Blanchard [1998], Lie, Hwang, and Tillman [1977]). Mi [1998] gives some comparison results of availability considering inherent availability.

Availability considered in maintenance modeling can be found in Barlow and Proschan [1975] for replacement models, Fawzi and Hawkes [1991] for an R-out-of-N system with spares and repairs, Fawzi and Hawkes [1990] for a series system with replacement and repair, Iyer [1992] for imperfect repair models, Murdock [1995] for age replacement preventive maintenance models, Nachlas [1998, 1989] for preventive maintenance models, and Wang and Pham [1996] for imperfect maintenance models. A very comprehensive recent book is by Trivedi and Bobbio [2017].

Applications

Availability factor is used extensively in power plant engineering. For example, the North American Electric Reliability Corporation implemented the Generating Availability Data System in 1982. [8]

Related Research Articles

Mean time between failures (MTBF) is the predicted elapsed time between inherent failures of a mechanical or electronic system during normal system operation. MTBF can be calculated as the arithmetic mean (average) time between failures of a system. The term is used for repairable systems while mean time to failure (MTTF) denotes the expected time to failure for a non-repairable system.

Weibull distribution

In probability theory and statistics, the Weibull distribution is a continuous probability distribution. It models a broad range of random variables, largely in the nature of a time to failure or time between events. Examples are maximum one-day rainfalls and the time a user spends on a web page.

Survival analysis is a branch of statistics for analyzing the expected duration of time until one event occurs, such as death in biological organisms and failure in mechanical systems. This topic is called reliability theory, reliability analysis or reliability engineering in engineering, duration analysis or duration modelling in economics, and event history analysis in sociology. Survival analysis attempts to answer certain questions, such as what is the proportion of a population which will survive past a certain time? Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival?

Mean time to recovery (MTTR) is the average time that a device will take to recover from any failure. Examples of such devices range from self-resetting fuses, to whole systems which have to be repaired or replaced.

Failure rate is the frequency with which any system or component fails, expressed in failures per unit of time. It thus depends on the system conditions, time interval, and total number of systems under study. It can describe electronic, mechanical, or biological systems, in fields such as systems and reliability engineering, medicine and biology, or insurance and finance. It is usually denoted by the Greek letter λ (lambda).

Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, OR will operate in a defined environment without failure. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.

The Service Availability Forum is a consortium that develops, publishes, educates on and promotes open specifications for carrier-grade and mission-critical systems. Formed in 2001, it promotes development and deployment of commercial off-the-shelf (COTS) technology.

Reliability, availability and serviceability (RAS), also known as reliability, availability, and maintainability (RAM), is a computer hardware engineering term involving reliability engineering, high availability, and serviceability design. The phrase was originally used by IBM as a term to describe the robustness of their mainframe computers.

Mean time to repair (MTTR) is a basic measure of the maintainability of repairable items. It represents the average time required to repair a failed component or device. Expressed mathematically, it is the total corrective maintenance time for failures divided by the total number of corrective maintenance actions for failures during a given period of time. It generally does not include lead time for parts not readily available or other Administrative or Logistic Downtime (ALDT).

High availability (HA) is a characteristic of a system that aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period.

Probabilistic design

Probabilistic design is a discipline within engineering design. It deals primarily with the consideration and minimization of the effects of random variability upon the performance of an engineering system during the design phase. Typically, these effects studied and optimized are related to quality and reliability. It differs from the classical approach to design by assuming a small probability of failure instead of using the safety factor. Probabilistic design is used in a variety of different applications to assess the likelihood of failure. Disciplines which extensively use probabilistic design principles include product design, quality control, systems engineering, machine design, civil engineering and manufacturing.

Corrective maintenance

Corrective maintenance is a maintenance task performed to identify, isolate, and rectify a fault so that the failed equipment, machine, or system can be restored to an operational condition within the tolerances or limits established for in-service operations.

A prediction of reliability is an important element in the process of selecting equipment for use by telecommunications service providers and other buyers of electronic equipment, and it is essential during the design stage of engineering systems life cycle. Reliability is a measure of the frequency of equipment failures as a function of time. Reliability has a major impact on maintenance and repair costs and on the continuity of service.

Availability is the probability that a system will work as required when required during the period of a mission. The mission could be the 18-hour span of an aircraft flight. The mission period could also be the 3 to 15-month span of a military deployment. Availability includes non-operational periods associated with reliability, maintenance, and logistics.

Maintenance Philosophy is the mix of strategies that ensure an item works as expected when needed.

Software reliability testing is a field of software-testing that relates to testing a software's ability to function, given environmental conditions, for a particular amount of time. Software reliability testing helps discover many problems in the software design and functionality.

RAMP Simulation Software for Modelling Reliability, Availability and Maintainability (RAM) is a computer software application developed by WS Atkins specifically for the assessment of the reliability, availability, maintainability and productivity characteristics of complex systems that would otherwise prove too difficult, cost too much or take too long to study analytically. The name RAMP is an acronym standing for Reliability, Availability and Maintainability of Process systems.

High-temperature operating life

High-temperature operating life (HTOL) is a reliability test applied to integrated circuits (ICs) to determine their intrinsic reliability. This test stresses the IC at an elevated temperature, high voltage and dynamic operation for a predefined period of time. The IC is usually monitored under stress and tested at intermediate intervals. This reliability stress test is sometimes referred to as a lifetime test, device life test or extended burn in test and is used to trigger potential failure modes and assess IC lifetime.

Mean time to dangerous failure (MTTFD): in a safety system, MTTFD covers the portion of failure modes that can lead to failures which may result in hazards to personnel, the environment, or equipment.

In the mathematical theory of probability, a generalized renewal process (GRP) or G-renewal process is a stochastic point process used to model failure/repair behavior of repairable systems in reliability engineering. Poisson point process is a particular case of GRP.

References

  1. Elsayed, E. (1996). Reliability Engineering. Reading, MA: Addison Wesley.
  2. Sandborn, Peter; Lucyshyn, William (2022). System Sustainment: Acquisition and Engineering Processes for the Sustainment of Critical and Legacy Systems. World Scientific. ISBN 9789811256868.
  3. Trivedi, Kishor S.; Bobbio, Andrea (2017). Reliability and Availability Engineering: Modeling, Analysis, and Applications. Cambridge University Press. ISBN 978-1107099500.
  4. Vitillo, Roberto (23 February 2022). Understanding Distributed Systems, Second Edition: What Every Developer Should Know About Large Distributed Applications. Roberto Vitillo. ISBN 978-1838430214.
  5. "Inherent Availability (AI)". Glossary of Defense Acquisition Acronyms and Terms. Department of Defense. Archived from the original on 13 April 2014. Retrieved 10 April 2014.
  6. "Achieved Availability (AI)". Glossary of Defense Acquisition Acronyms and Terms. Department of Defense. Archived from the original on 13 April 2014. Retrieved 10 April 2014.
  7. "Operational Availability (AI)". Glossary of Defense Acquisition Acronyms and Terms. Department of Defense. Archived from the original on 12 March 2013. Retrieved 10 April 2014.
  8. "Mandatory Reporting of Conventional Generation Performance Data" (PDF). Generating Availability Data System. North American Electric Reliability Corporation. July 2011. pp. 7, 17. Archived (PDF) from the original on 2022-10-09. Retrieved 13 March 2014.
