Brittle systems theory draws an analogy between communication theory and mechanical systems. A brittle system is one characterized by a sudden and steep decline in performance as the system state changes, whether because input parameters exceed specified limits or because environmental conditions exceed specified operating boundaries. It is the opposite of a gracefully degrading system. Brittle system analysis develops an analogy with materials science in order to analyze system brittleness.[1] A system that is brittle (but initially robust enough to gain at least some foothold in the marketplace) will tend to operate with acceptable performance until it reaches a limit, then degrade suddenly and catastrophically. The table below illustrates the concept behind the analysis using a communication system as an example.
| Materials Science | Target System | Brittle Systems Analysis | Materials Science Quantification |
|---|---|---|---|
| Stress | Interbyte jitter, EMI, number of slaves, etc. | Amount a parameter exceeds tolerance | Force per unit area within a body |
| Toughness | Ability to withstand the above | System robustness | Ability to absorb energy up to failure |
| Hardness* | Constant latency, throughput with stress in tolerance | Level of performance within tolerance | Resistance to deformation |
| Ductility* | Gradual reduction in latency, throughput as stress exceeds tolerance | Level of performance out of tolerance | Fracture strain or reduction of area at fracture |
| Plastic strain | Latency, throughput are permanently degraded | System cannot recover from degradation | Deformation between particles in a body relative to length |
| Reversible strain | Latency, throughput are temporarily degraded | System can recover from degradation | Same as above, but returns to normal after force removed |
| Brittle fracture | Sudden steep decline in latency, throughput | Sudden steep decline in performance | No reduction of area when the material breaks |
| Ductile fracture | Graceful degradation in latency, throughput | Graceful degradation in performance | The area at the point of fracture gradually reduces to zero |
| Brittleness* | Hardness over ductility | Ratio of hardness to ductility | Ratio of hardness to ductility |
| Deformation | Degradation in latency, throughput | Degradation in performance | Change in shape of a material |
| Young's modulus | Stress (jitter/EMI) over reduction in latency and throughput | Amount tolerance exceeded over degradation | Measure of the stiffness of an elastic material |
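The brittle versus ductile distinction in the table above can be illustrated with a minimal sketch. The tolerance value and the rate of ductile decline below are illustrative assumptions, not figures from any source:

```python
def performance(stress, tolerance=1.0, brittle=True):
    """Toy performance model: full performance while stress stays within
    tolerance, then either a sudden (brittle) or gradual (ductile) decline."""
    if stress <= tolerance:
        return 1.0                      # "hardness": flat performance in tolerance
    if brittle:
        return 0.0                      # brittle fracture: sudden, total collapse
    # ductile: performance degrades linearly, reaching zero at twice tolerance
    return max(0.0, 1.0 - (stress - tolerance) / tolerance)

# Within tolerance both behave identically; past it they diverge sharply.
print(performance(0.9, brittle=True))   # 1.0
print(performance(1.1, brittle=True))   # 0.0  (sudden steep decline)
print(performance(1.1, brittle=False))  # ~0.9 (graceful degradation)
```

The brittle curve is flat right up to the limit, which is why such a system can look perfectly robust in the marketplace until the moment it fails.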
Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in units of hertz.
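As a worked example of this definition (the channel edges chosen here are illustrative):

```python
def bandwidth_hz(f_low, f_high):
    """Bandwidth is simply the width of the occupied frequency band."""
    return f_high - f_low

# A band spanning 88.1 MHz to 88.3 MHz is 200 kHz wide:
print(bandwidth_hz(88.1e6, 88.3e6))  # 200000.0
```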
Control theory is a field of control engineering and applied mathematics that deals with the control of dynamical systems in engineered processes and machines. The objective is to develop a model or algorithm governing the application of system inputs to drive the system to a desired state, while minimizing any delay, overshoot, or steady-state error and ensuring a level of control stability; often with the aim to achieve a degree of optimality.
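A minimal sketch of the idea, assuming a trivial discrete-time integrator plant and a proportional controller (the plant model and gain are illustrative assumptions):

```python
def simulate_p_control(setpoint, kp=0.5, steps=50):
    """Discrete-time proportional control of a simple integrator plant:
    each step the controller applies an input proportional to the error."""
    state = 0.0
    for _ in range(steps):
        error = setpoint - state     # deviation from the desired state
        state += kp * error          # control input drives state toward setpoint
    return state

# The error shrinks by a factor of (1 - kp) each step, so for 0 < kp < 1
# the state converges to the setpoint without overshoot.
print(round(simulate_p_control(10.0), 4))  # 10.0
```

Real control design trades off exactly the quantities named above: a larger gain reduces steady-state error and delay but risks overshoot and instability.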
An amplifier, electronic amplifier or (informally) amp is an electronic device that can increase the magnitude of a signal. It is a two-port electronic circuit that uses electric power from a power supply to increase the amplitude of a signal applied to its input terminals, producing a proportionally greater amplitude signal at its output. The amount of amplification provided by an amplifier is measured by its gain: the ratio of output voltage, current, or power to input. An amplifier is defined as a circuit that has a power gain greater than one.
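The gain ratio is often quoted in decibels; for voltage, the standard conversion is 20·log10 of the ratio. A small sketch with illustrative input/output values:

```python
import math

def voltage_gain_db(v_in, v_out):
    """Voltage gain expressed in decibels: 20 * log10(v_out / v_in)."""
    return 20 * math.log10(v_out / v_in)

# An amplifier turning a 10 mV input into a 1 V output (ratio 100):
print(round(voltage_gain_db(0.010, 1.0), 1))  # 40.0
```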
Real-time computing (RTC) is the computer science term for hardware and software systems subject to a "real-time constraint", for example a bound on the time from an event to the system's response. Real-time programs must guarantee a response within specified time constraints, often referred to as "deadlines".
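The deadline idea can be sketched as follows. Note that this measures whether a deadline *was* met after the fact; a hard real-time system must guarantee it in advance, which ordinary application code cannot do:

```python
import time

def run_with_deadline(task, deadline_s):
    """Run a task and report whether it completed within its deadline."""
    start = time.monotonic()            # monotonic clock: immune to wall-clock jumps
    result = task()
    elapsed = time.monotonic() - start
    return result, elapsed <= deadline_s

# A trivial task against a generous 0.5 s deadline:
result, met = run_with_deadline(lambda: sum(range(1000)), deadline_s=0.5)
print(result, met)
```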
System analysis in the field of electrical engineering characterizes electrical systems and their properties. System analysis can be used to represent almost anything from population growth to audio speakers; electrical engineers often use it because of its direct relevance to many areas of their discipline, most notably signal processing, communication systems and control systems.
A communication channel refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel in telecommunications and computer networking. A channel is used for information transfer of, for example, a digital bit stream, from one or several senders to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second.
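The capacity of a noisy channel is bounded by the Shannon–Hartley theorem, C = B·log2(1 + S/N). A short worked example (the channel figures are illustrative of a voice-grade telephone line):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: C = B * log2(1 + S/N) bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz channel with 30 dB SNR (linear ratio 1000):
print(round(shannon_capacity(3000, 1000)))  # 29902
```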
Theoretical computer science is a subfield of computer science and mathematics that focuses on the abstract and mathematical foundations of computation.
A voltage regulator is a system designed to automatically maintain a constant voltage. It may use a simple feed-forward design or may include negative feedback. It may use an electromechanical mechanism, or electronic components. Depending on the design, it may be used to regulate one or more AC or DC voltages.
In software engineering, a pipeline consists of a chain of processing elements, arranged so that the output of each element is the input of the next. The concept is analogous to a physical pipeline. Usually some amount of buffering is provided between consecutive elements. The information that flows in these pipelines is often a stream of records, bytes, or bits, and the elements of a pipeline may be called filters. This is also called the pipes-and-filters design pattern, which is monolithic: its advantages are simplicity and low cost, while its disadvantages are a lack of elasticity, fault tolerance, and scalability. Connecting elements into a pipeline is analogous to function composition.
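The analogy to function composition can be sketched directly; the stage functions below are illustrative filters:

```python
from functools import reduce

def pipeline(*stages):
    """Compose processing elements so each stage's output feeds the next,
    in the style of pipes and filters."""
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

# Three simple "filters" chained into one pipeline:
process = pipeline(str.strip, str.lower, lambda s: s.replace(" ", "_"))
print(process("  Hello World  "))  # hello_world
```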
Reliability engineering is a sub-discipline of systems engineering that emphasizes the ability of equipment to function without failure. Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, OR will operate in a defined environment without failure. Reliability is closely related to availability, which is typically described as the ability of a component or system to function at a specified moment or interval of time.
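Steady-state availability is commonly computed from mean time between failures (MTBF) and mean time to repair (MTTR). A brief sketch with illustrative figures:

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability: uptime fraction = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A component failing every 1000 h on average, taking 10 h to repair:
print(round(availability(1000, 10), 4))  # 0.9901
```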
In engineering and systems theory, redundancy is the intentional duplication of critical components or functions of a system with the goal of increasing reliability of the system, usually in the form of a backup or fail-safe, or to improve actual system performance, such as in the case of GNSS receivers, or multi-threaded computer processing.
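The reliability gain from duplication can be quantified for parallel redundancy, under the (strong) assumption that component failures are independent:

```python
def parallel_reliability(p_component, n):
    """Probability that at least one of n redundant components works,
    assuming independent failures: 1 - (1 - p)^n."""
    return 1 - (1 - p_component) ** n

# Duplicating a 90%-reliable component raises system reliability:
print(round(parallel_reliability(0.9, 1), 6))  # 0.9
print(round(parallel_reliability(0.9, 2), 6))  # 0.99
```

In practice common-mode failures violate the independence assumption, which is why redundant designs also strive for diversity.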
Fault tolerance is the ability of a system to maintain proper operation despite failures or faults in one or more of its components. This capability is essential for high-availability, mission-critical, or even life-critical systems.
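One common fault-tolerance mechanism is failover across redundant replicas; a minimal sketch (the replica functions here are hypothetical stand-ins):

```python
def first_working(replicas, request):
    """Failover: try each replica in turn, masking individual faults."""
    for replica in replicas:
        try:
            return replica(request)
        except Exception:
            continue  # this replica failed; fall through to the next
    raise RuntimeError("all replicas failed")

def broken(_):
    raise IOError("replica down")

def healthy(req):
    return f"handled {req}"

# The fault in the first replica is masked by the second:
print(first_working([broken, healthy], "ping"))  # handled ping
```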
Many services running on modern digital telecommunications networks require accurate synchronization for correct operation. For example, if telephone exchanges are not synchronized, then bit slips will occur and degrade performance. Telecommunication networks rely on the use of highly accurate primary reference clocks which are distributed network-wide using synchronization links and synchronization supply units.
In electrical engineering and mechanical engineering, a transient response is the response of a system to a change from an equilibrium or a steady state. The transient response is not necessarily tied to abrupt events but to any event that affects the equilibrium of the system. The impulse response and step response are transient responses to a specific input.
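The classic example is the step response of a first-order system, whose transient decays exponentially with time constant tau:

```python
import math

def step_response(t, tau=1.0):
    """Step response of a first-order system: y(t) = 1 - exp(-t / tau).
    The transient decays as the system settles to its new steady state."""
    return 1 - math.exp(-t / tau)

# After one time constant the output has covered about 63% of the step:
print(round(step_response(1.0), 3))  # 0.632
```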
In computing, computer performance is the amount of useful work accomplished by a computer system. Outside of specific contexts, computer performance is estimated in terms of accuracy, efficiency and speed of executing computer program instructions. High computer performance may involve one or more factors such as short response time, high throughput, and high availability of the computing system.
Radio-frequency (RF) engineering is a subset of electrical engineering involving the application of transmission line, waveguide, antenna, radar, and electromagnetic field principles to the design and application of devices that produce or use signals within the radio band, the frequency range of about 20 kHz up to 300 GHz.
Worst-case circuit analysis is a cost-effective means of screening a design to ensure with a high degree of confidence that potential defects and deficiencies are identified and eliminated prior to and during test, production, and delivery.
A resilient control system is one that maintains state awareness and an accepted level of operational normalcy in response to disturbances, including threats of an unexpected and malicious nature.
A mechanical filter is a signal processing filter usually used in place of an electronic filter at radio frequencies. Its purpose is the same as that of a normal electronic filter: to pass a range of signal frequencies, but to block others. The filter acts on mechanical vibrations which are the analogue of the electrical signal. At the input and output of the filter, transducers convert the electrical signal into, and then back from, these mechanical vibrations.
A shock detector, shock indicator, or impact monitor is a device which indicates whether a physical shock or impact has occurred. These usually have a binary output (go/no-go) and are sometimes called shock overload devices. Shock detectors can be used on shipments of fragile valuable items to indicate whether a potentially damaging drop or impact may have occurred. They are also used in sports helmets to help estimate if a dangerous impact may have occurred.
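The go/no-go behavior of such a device can be sketched as a simple threshold check (the threshold value is illustrative):

```python
def shock_indicator(accel_g, threshold_g=50.0):
    """Binary go/no-go indicator: trips once acceleration meets the threshold."""
    return "no-go" if accel_g >= threshold_g else "go"

print(shock_indicator(12.0))  # go
print(shock_indicator(75.0))  # no-go
```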