Block Error Rate

Block Error Rate (BLER) is the ratio of the number of erroneous blocks to the total number of blocks transmitted on a digital circuit.
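
Written as a formula, this definition is

    \mathrm{BLER} = \frac{\text{number of erroneous blocks}}{\text{total number of blocks transmitted}}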

It is used in measuring the error rate when extracting data frames from a Compact Disc (CD). The BLER measurement is often used as a quality control measure of how well audio is retained on a compact disc over time.

BLER is also used for W-CDMA performance requirements tests (demodulation tests in multipath conditions, etc.). BLER is measured after channel de-interleaving and decoding by evaluating the Cyclic Redundancy Check (CRC) on each transport block.
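
A minimal sketch of this check in Python, assuming the decoded transport blocks and their attached CRC values are already available; zlib.crc32 stands in here for the CRC actually defined by the 3GPP specifications, and the data structure is made up for this example:

    import zlib

    def block_error_rate(received_blocks):
        """received_blocks: list of (payload_bytes, attached_crc) tuples."""
        # A block is counted as erroneous when the CRC recomputed over the
        # decoded payload does not match the CRC attached by the transmitter.
        errors = sum(1 for payload, crc in received_blocks
                     if zlib.crc32(payload) != crc)
        return errors / len(received_blocks)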

Block Error Rate (BLER) is used in LTE/4G technology to determine the in-sync or out-of-sync indication during radio link monitoring (RLM). A BLER of 2% serves as the threshold for the in-sync condition and 10% for the out-of-sync condition. [1]
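
As an illustrative sketch, the two thresholds from the text could be applied as follows; the names and the "no indication" behaviour between the thresholds are assumptions of this example, not part of the cited specification:

    IN_SYNC_BLER = 0.02      # in-sync threshold from the text
    OUT_OF_SYNC_BLER = 0.10  # out-of-sync threshold from the text

    def radio_link_indication(estimated_bler):
        if estimated_bler > OUT_OF_SYNC_BLER:
            return "out-of-sync"
        if estimated_bler < IN_SYNC_BLER:
            return "in-sync"
        return "no indication"  # between the two thresholds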

Related Research Articles

Compact disc: Digital optical disc data storage format

The compact disc (CD) is a digital optical disc data storage format that was co-developed by Philips and Sony to store and play digital audio recordings. It uses the Compact Disc Digital Audio format which typically provides 74 minutes of audio on a disc. In later years, the compact disc was adapted for non-audio computer data storage purposes as CD-ROM and its derivatives. First released in Japan in October 1982, the CD was the second optical disc technology to be invented, after the much larger LaserDisc (LD). By 2007, 200 billion CDs had been sold worldwide.

Analog-to-digital converter: System that converts an analog signal into a digital signal

In electronics, an analog-to-digital converter is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. An ADC may also provide an isolated measurement such as an electronic device that converts an analog input voltage or current to a digital number representing the magnitude of the voltage or current. Typically the digital output is a two's complement binary number that is proportional to the input, but there are other possibilities.
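
A minimal sketch of such a conversion in Python, assuming an idealised bipolar converter with a configurable full-scale range; real ADCs add sampling, noise, and nonlinearity that are ignored here:

    def adc_sample(voltage, full_scale=1.0, bits=12):
        # Map the input voltage proportionally onto an N-bit two's-complement
        # code and clip it to the representable range.
        levels = 2 ** (bits - 1)                    # 2048 codes per polarity for 12 bits
        code = round(voltage / full_scale * levels)
        return max(-levels, min(levels - 1, code))

    print(adc_sample(0.5))   # half of positive full scale -> 1024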

In digital transmission, the number of bit errors is the number of received bits of a data stream over a communication channel that have been altered due to noise, interference, distortion or bit synchronization errors.
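
As a small illustration, the bit errors between two equal-length byte strings can be counted by XOR-ing them and counting the set bits; dividing by the number of transmitted bits gives the bit error ratio:

    def bit_errors(sent: bytes, received: bytes) -> int:
        return sum(bin(a ^ b).count("1") for a, b in zip(sent, received))

    sent, received = b"\xF0\x0F", b"\xF1\x0F"
    print(bit_errors(sent, received))                     # 1 flipped bit
    print(bit_errors(sent, received) / (8 * len(sent)))   # bit error ratio = 0.0625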

Micrometer (device): Tool for the precise measurement of a component's length, width, and/or depth

A micrometer, sometimes known as a micrometer screw gauge (MSG), is a device incorporating a calibrated screw widely used for accurate measurement of components in mechanical engineering and machining as well as most mechanical trades, along with other metrological instruments such as dial, vernier, and digital calipers. Micrometers are usually, but not always, in the form of calipers (opposing ends joined by a frame). The spindle is a very accurately machined screw and the object to be measured is placed between the spindle and the anvil. The spindle is moved by turning the ratchet knob or thimble until the object to be measured is lightly touched by both the spindle and the anvil.

In epidemiology, prevalence is the proportion of a particular population found to be affected by a medical condition at a specific time. It is derived by comparing the number of people found to have the condition with the total number of people studied and is usually expressed as a fraction, a percentage, or the number of cases per 10,000 or 100,000 people. Prevalence is most often used in questionnaire studies.
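
Expressed as a fraction, this is

    \text{prevalence} = \frac{\text{number of people found to have the condition}}{\text{total number of people studied}}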

Sound can be recorded, stored, and played back using either digital or analog techniques. Both techniques introduce errors and distortions in the sound, and these methods can be systematically compared. Musicians and listeners have argued over the superiority of digital versus analog sound recordings. Arguments for analog systems include the absence of fundamental error mechanisms which are present in digital audio systems, including aliasing and associated anti-aliasing filter implementation, jitter, and quantization noise. Advocates of digital point to the high levels of performance possible with digital audio, including excellent linearity in the audible band and low levels of noise and distortion.

Binary classification: Dividing things between two categories

Binary classification is the task of classifying the elements of a set into one of two groups. Typical binary classification problems include medical testing (deciding whether a patient has a particular disease) and quality control (deciding whether a manufactured item meets a specification).

In statistics and psychometrics, reliability is the overall consistency of a measure. A measure is said to have a high reliability if it produces similar results under consistent conditions:

"It is the characteristic of a set of test scores that relates to the amount of random error from the measurement process that might be embedded in the scores. Scores that are highly reliable are precise, reproducible, and consistent from one testing occasion to another. That is, if the testing process were repeated with a group of test takers, essentially the same results would be obtained. Various kinds of reliability coefficients, with values ranging between 0.00 and 1.00, are usually used to indicate the amount of error in the scores."

Audio system measurements: Means of quantifying system performance

Audio system measurements are used to quantify audio system performance. These measurements are made for several purposes. Designers take measurements to specify the performance of a piece of equipment. Maintenance engineers make them to ensure equipment is still working to specification, or to ensure that the cumulative defects of an audio path are within limits considered acceptable. Audio system measurements often accommodate psychoacoustic principles to measure the system in a way that relates to human hearing.

Current transformer: Transformer used to scale alternating current, used as sensor for AC power

A current transformer (CT) is a type of transformer that is used to reduce or multiply an alternating current (AC). It produces a current in its secondary which is proportional to the current in its primary.
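
For an ideal current transformer with N_p primary turns, N_s secondary turns, and primary current I_p, this proportionality is

    I_s = I_p \cdot \frac{N_p}{N_s}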

Indicator (distance amplifying instrument): Distance amplifying instrument

In various contexts of science, technology, and manufacturing, an indicator is any of various instruments used to accurately measure small distances and angles, and amplify them to make them more obvious. The name comes from the concept of indicating to the user that which their naked eye cannot discern; such as the presence, or exact quantity, of some small distance.

A video signal generator is a type of signal generator which outputs predetermined video and/or television oscillation waveforms, and other signals used in the synchronization of television devices and to stimulate faults in, or aid in parametric measurements of, television and video systems. There are several different types of video signal generators in widespread use. Regardless of the specific type, the output of a video generator will generally contain synchronization signals appropriate for television, including horizontal and vertical sync pulses or sync words. Generators of composite video signals will also include a colorburst signal as part of the output.

Sensitivity and specificity: Statistical measure of a binary classification

In medicine and statistics, sensitivity and specificity mathematically describe the accuracy of a test that reports the presence or absence of a medical condition. If individuals who have the condition are considered "positive" and those who do not are considered "negative", then sensitivity is a measure of how well a test can identify true positives and specificity is a measure of how well a test can identify true negatives:
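
With TP, FN, TN, and FP denoting true positives, false negatives, true negatives, and false positives, the usual definitions are

    \text{sensitivity} = \frac{TP}{TP + FN} \qquad \text{specificity} = \frac{TN}{TN + FP}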

In statistical hypothesis testing, a type I error, or a false positive, is the rejection of the null hypothesis when it is actually true. A type II error, or a false negative, is the failure to reject a null hypothesis that is actually false.

Continuous emission monitoring systems (CEMS) are used as a tool to monitor the effluent gas streams resulting from combustion in industrial processes. CEMS can measure flue gas for oxygen, carbon monoxide and carbon dioxide to provide information for combustion control in industrial settings. They are also used as a means to comply with air emission standards such as the United States Environmental Protection Agency's (EPA) Acid Rain Program, other US federal emission programs, or state permitted emission standards. CEMS typically consist of analyzers to measure gas concentrations within the stream, equipment to direct a sample of that gas stream to the analyzers if they are remote, equipment to condition the sample gas by removing water and other components that could interfere with the reading, pneumatic plumbing with valves that can be controlled by a PLC to route the sample gas to and away from the analyzers, a calibration and maintenance system that allows for the injection of calibration gases into the sample line, and a Data Acquisition and Handling System (DAHS) that collects and stores each data point and can perform necessary calculations required to get total mass emissions. A CEMS operates at all times even if the process it measures is not on. They can continuously collect, record and report emissions data for process monitoring and/or for compliance purposes.

In statistics, a mixed-design analysis of variance model, also known as a split-plot ANOVA, is used to test for differences between two or more independent groups whilst subjecting participants to repeated measures. In a mixed-design ANOVA model, one factor is a between-subjects variable and the other is a within-subjects variable; overall, the model is a type of mixed-effects model.

In statistics, when performing multiple comparisons, a false positive ratio is the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate is calculated as the ratio between the number of negative events wrongly categorized as positive and the total number of actual negative events.
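
With FP denoting the number of false positives and TN the number of true negatives, this ratio is

    \text{false positive rate} = \frac{FP}{FP + TN}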

Statistical dispersion: Statistical property quantifying how much a collection of data is spread out

In statistics, dispersion is the extent to which a distribution is stretched or squeezed. Common examples of measures of statistical dispersion are the variance, standard deviation, and interquartile range. For instance, when the variance of data in a set is large, the data is widely scattered. On the other hand, when the variance is small, the data in the set is clustered.
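
A short example using Python's standard library to compute these measures for a small, arbitrary sample:

    import statistics

    data = [2, 4, 4, 4, 5, 5, 7, 9]
    q1, _, q3 = statistics.quantiles(data, n=4)   # lower and upper quartiles
    print(statistics.variance(data))              # sample variance
    print(statistics.stdev(data))                 # sample standard deviation
    print(q3 - q1)                                # interquartile range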

Flow conditioning ensures that the "real world" environment closely resembles the "laboratory" environment for proper performance of inferential flowmeters such as orifice, turbine, Coriolis, and ultrasonic meters.

Evaluation of binary classifiers: Quantitative measurement of accuracy

Evaluation of a binary classifier typically assigns a numerical value, or values, to a classifier that represent its accuracy. An example is error rate, which measures how frequently the classifier makes a mistake.
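
With TP, TN, FP, and FN denoting true positives, true negatives, false positives, and false negatives, the error rate mentioned above is usually computed as

    \text{error rate} = \frac{FP + FN}{TP + TN + FP + FN}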

References

  1. "Block Error Ratio (BLER) Measurement Description". February 28, 2014. Retrieved 23 December 2015.