MUSHRA

MUSHRA stands for Multiple Stimuli with Hidden Reference and Anchor and is a methodology for conducting codec listening tests to evaluate the perceived quality of the output of lossy audio compression algorithms. It is defined by ITU-R Recommendation BS.1534-3.[1] The MUSHRA methodology is recommended for assessing "intermediate audio quality". For very small audio impairments, Recommendation ITU-R BS.1116-3 (ABC/HR) is recommended instead.

MUSHRA can be used to test audio codecs across a broad spectrum of use cases: music and film consumption, speech for podcasts and radio, online streaming (in which trade-offs between quality and the efficiency of size and computation are paramount), modern digital telephony, and VoIP applications (which require quasi-real-time, low-bitrate encoding that remains intelligible). Professional, "audiophile", and "prosumer" uses, which assume high-quality, high-resolution audio with minimal detectable differences between the reference material and the codec output, are typically better served by alternative tests such as the aforementioned ABC/HR.

The main advantage over the mean opinion score (MOS) methodology (which serves a similar purpose) is that MUSHRA requires fewer participants to obtain statistically significant results.[citation needed] This is because all codecs are presented at the same time, to the same participants, so that a paired t-test or a repeated-measures analysis of variance can be used for statistical analysis. Furthermore, the 0–100 scale used by MUSHRA makes it possible to express perceptible differences with a high degree of granularity, especially compared to the 0–5 modified Likert scale often used in MOS experiments.
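
As a minimal illustration of the paired analysis this design enables, the sketch below runs a paired t-test on hypothetical ratings from eight listeners who each scored the same two codecs; the scores and variable names are invented for the example.

```python
# Minimal sketch of a paired analysis of MUSHRA scores. The ratings are
# hypothetical; rows correspond to the same eight listeners in both arrays.
import numpy as np
from scipy import stats

ratings_codec_a = np.array([78, 85, 80, 90, 74, 88, 81, 79])  # 0-100 scale
ratings_codec_b = np.array([70, 80, 76, 84, 69, 83, 75, 73])

# Paired t-test: each listener rated both codecs on the same items,
# so the per-listener differences are the unit of analysis.
t_stat, p_value = stats.ttest_rel(ratings_codec_a, ratings_codec_b)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```

With more than two conditions, the same within-subject structure supports a repeated-measures ANOVA instead of a series of pairwise tests.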

In MUSHRA, the listener is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference, and one or more anchors: severely impaired encodings that both experimenters and participants should immediately recognise as such. Like the reference, the anchors provide a baseline, "anchoring" for participants the low end of the quality scale. The recommendation specifies that a low-range and a mid-range anchor be included in the test signals; these are typically a 3.5 kHz and a 7 kHz low-pass version of the reference, respectively. The purpose of the anchors is to calibrate the scale so that minor artifacts are not unduly penalized; this is particularly important when comparing or pooling results from different labs.
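
The recommendation prescribes the anchors' low-pass cutoffs but not a specific filter; the sketch below generates both anchors with a Butterworth low-pass filter, an illustrative choice, applied to a placeholder reference signal.

```python
# Hedged sketch of anchor generation: 3.5 kHz (low-range) and 7 kHz
# (mid-range) low-pass versions of the reference. The filter order and
# type are illustrative choices, not prescribed by BS.1534.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_anchor(reference, sample_rate, cutoff_hz):
    """Return a low-pass-filtered copy of the reference signal."""
    sos = butter(8, cutoff_hz, btype="low", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, reference)

fs = 48_000
reference = np.random.randn(fs * 5)          # placeholder for a real test item
low_anchor = lowpass_anchor(reference, fs, 3_500.0)
mid_anchor = lowpass_anchor(reference, fs, 7_000.0)
```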

Listener behavior

Both MUSHRA and ITU-R BS.1116 tests[2] call for trained expert listeners who know what typical artifacts sound like and where they are likely to occur. Expert listeners also have a better internalization of the rating scale, which leads to more repeatable results than with untrained listeners. Thus, with trained listeners, fewer participants are needed to achieve statistically significant results.

It is assumed that preferences are similar for expert and naive listeners, and thus that results from expert listeners are also predictive for consumers. Consistent with this assumption, Schinkel-Bielefeld et al.[3] found no differences in rank order between expert and untrained listeners when using test signals containing only timbre artifacts and no spatial artifacts. However, Rumsey et al.[4] showed that for signals containing spatial artifacts, expert listeners weight spatial artifacts slightly more heavily than untrained listeners, who primarily focus on timbre artifacts.

In addition, expert listeners have been shown to make more use of the option to repeatedly listen to smaller sections of the signals under test, and to perform more comparisons between the signals under test and the reference.[3] Whereas a naive listener tends to produce a preference rating, an expert listener therefore produces an audio quality rating, judging the difference between the signal under test and the uncompressed original, which is the actual goal of a MUSHRA test.

Pre- or post-screening

The MUSHRA guidelines describe two main methods for assessing the reliability of a listener, both described below.

The easiest and most common is to disqualify, post hoc, all listeners who rate the hidden reference below 90 MUSHRA points for more than 15% of all test items. In the ideal case, the hidden reference is rated at 100 points, indicating perceptual equivalence with the original reference audio. While the hidden reference can occasionally be confused with a high-quality signal, the specification provides that a rating below 90 should only be given when the listener is certain that the rated signal differs from the original reference; a rating below 90 for the hidden reference is therefore considered a clear listener error.
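
A direct implementation of this rule is straightforward; the sketch below uses a hypothetical data structure (a mapping from listener to their hidden-reference scores) to apply the 90-point, 15% criterion.

```python
# Post-screening rule described above: drop listeners who rate the hidden
# reference below 90 on more than 15% of the test items.
def passes_screening(hidden_ref_scores, threshold=90, max_fail_rate=0.15):
    fails = sum(1 for score in hidden_ref_scores if score < threshold)
    return fails / len(hidden_ref_scores) <= max_fail_rate

# Hypothetical hidden-reference scores, one per test item.
listeners = {
    "listener_1": [100, 95, 100, 88, 100, 100, 97, 100],  # 1/8 fail: kept
    "listener_2": [100, 80, 72, 100, 85, 100, 79, 100],   # 4/8 fail: dropped
}
kept = [name for name, scores in listeners.items() if passes_screening(scores)]
print(kept)  # ['listener_1']
```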

The other possibility for assessing a listener's performance is eGauge,[5] a framework based on the analysis of variance (ANOVA). It computes agreement, repeatability, and discriminability, though only the latter two are recommended for pre- or post-screening. Agreement measures, via ANOVA, how well a listener's ratings concur with those of the other listeners. Repeatability examines the listener's internal reliability when rating the same test signal again, compared to the variance across the other test signals. Discriminability analyses a kind of inter-test reliability by checking that listeners can distinguish between test signals of different conditions.

Because eGauge requires listening to every test signal twice, it is less time-efficient in the short term than post-screening based on the hidden reference. It has advantages over the longer term, however. It avoids the risk of a complete redo in the rare case where results lack sufficient statistical power because an excessive number of listeners fail post-hoc screening. Additionally, the initial overhead can be amortised over a series of experiments by removing the need for repeated recruitment phases: a listener who has proven reliable using eGauge can also be considered reliable for future listening tests, provided the nature of the test is not substantially altered (e.g. a reliable listener for stereo tests is not necessarily equally good at perceiving artifacts in 5.1 or 22.2 configurations, or potentially even in mono).
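
The published eGauge metrics are defined through specific ANOVA variance ratios; the sketch below is only a loose stand-in illustrating the underlying idea for discriminability, using one listener's duplicated ratings and invented scores.

```python
# Rough illustration of the ANOVA idea behind eGauge discriminability,
# not the exact formulas from Lorho et al. [5]. Each test signal is rated
# twice; a large F means the listener separates conditions well relative
# to their own test-retest noise.
from scipy.stats import f_oneway

# Hypothetical data: ratings[condition] = [pass 1 score, pass 2 score]
ratings = {
    "codec_a": [62, 58],
    "codec_b": [45, 49],
    "low_anchor": [18, 22],
    "hidden_ref": [100, 97],
}
f_stat, p_value = f_oneway(*ratings.values())
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")
```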

Test items

It is important to choose critical test items, i.e. items that are difficult to encode and are likely to produce artifacts. At the same time, the test items should be ecologically valid: representative of broadcast material rather than mere synthetic signals designed to be difficult to encode at the expense of realism. A method for choosing critical material is presented by Ekeroot et al., who propose a ranking-by-elimination procedure.[6][further explanation needed] While this is effective at selecting the most critical test items, it does not ensure the inclusion of a variety of test items prone to different artifacts.

Ideally, a MUSHRA test item should maintain similar characteristics for its entire duration (e.g. consistent instrumentation in music, or the same person's voice with similar cadence and tone in spoken audio). It can be difficult for the listener to settle on a single, unidimensional MUSHRA rating if some parts of an item exhibit different or stronger artifacts than others, which becomes more likely with large variations in the characteristics of the audio.[7] Shorter items often lead to less variability because they are more stationary (perceptually consistent and constant).[8] However, even when trying to choose stationary items, ecologically valid stimuli (i.e. audio likely to appear, or similar to audio likely to appear, in real-world situations such as radio broadcasts) will very often have sections that are slightly more critical than the rest of the signal; examples include keywords in speech or major phrases in music, depending on the stimulus type. Stationarity is important because listeners who focus on different sections of the signal tend to evaluate it differently. More analytical listeners appear to be better at identifying the most critical regions of a stimulus than less analytical ones.[9]
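
The recommendation does not define a stationarity metric; as one hypothetical screening aid, the sketch below scores candidate items by the variance of their short-time spectral centroid, where lower variance loosely suggests a more stationary item.

```python
# Hypothetical helper (not from BS.1534): a crude stationarity proxy for
# screening candidate test items. Lower variance of the short-time spectral
# centroid loosely indicates a more stationary signal.
import numpy as np

def spectral_centroid_variance(signal, fs, frame=2048):
    freqs = np.fft.rfftfreq(frame, d=1.0 / fs)
    centroids = []
    for start in range(0, len(signal) - frame + 1, frame):
        mag = np.abs(np.fft.rfft(signal[start:start + frame]))
        if mag.sum() > 0:
            centroids.append(np.sum(freqs * mag) / np.sum(mag))
    return float(np.var(centroids))
```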

Language of test items

ITU-T P.800 tests,[10] based on the mean opinion score methodology, are commonly used to evaluate telephone codecs for use in e.g. VoIP. This standard specifies that the tested speech items should always be in the native language of the listeners. When MUSHRA is used for these purposes instead, language matching becomes unnecessary: MUSHRA experiments do not aim to test the intelligibility of spoken words, but solely the quality of the audio containing those words and the presence or absence of audible artifacts (e.g. distortion). A MUSHRA study with Mandarin Chinese and German listeners found no significant difference between ratings of foreign-language and native-language test items. Listeners did, however, need more time and more comparison opportunities (repetitions) to accurately evaluate the foreign-language items.[11] This compensation is impossible in ITU-T P.800 ACR tests, in which items are heard only once and no comparison to the reference audio is possible. In such tests, unlike in MUSHRA tests, foreign-language items are perceived, and then rated, as being of lower quality irrespective of actual codec quality when listeners' proficiency in the target language is low.[12]

References

  1. ITU-R Recommendation BS.1534-3. "Method for the subjective assessment of intermediate quality level of audio systems".
  2. ITU-R Recommendation BS.1116-3 (February 2015). "Methods for the subjective assessment of small impairments in audio systems".
  3. Schinkel-Bielefeld, N.; Lotze, N.; Nagel, F. (May 2013). "Audio quality evaluation by experienced and inexperienced listeners". The Journal of the Acoustical Society of America. 133 (5): 3246. Bibcode:2013ASAJ..133.3246S. doi:10.1121/1.4805210.
  4. Rumsey, Francis; Zielinski, Slawomir; Kassier, Rafael; Bech, Søren (2005). "Relationships between experienced listener ratings of multichannel audio quality and naïve listener preferences". The Journal of the Acoustical Society of America. 117 (6): 3832–3840. Bibcode:2005ASAJ..117.3832R. doi:10.1121/1.1904305. ISSN 0001-4966. PMID 16018485.
  5. Lorho, Gaëtan; Le Ray, Guillaume; Zacharov, Nick (June 2010). "eGauge—A Measure of Assessor Expertise in Audio Quality Evaluations". Proceedings of the AES 38th International Conference on Sound Quality Evaluation.
  6. Ekeroot, Jonas; Berg, Jan; Nykänen, Arne (April 2014). "Criticality of Audio Stimuli for Listening Tests – Listening Durations During a Ranking Task". 136th Convention of the Audio Engineering Society.
  7. Neuendorf, Max; Nagel, Frederik (October 2011). "Exploratory Studies on Perceptual Stationarity in Listening Test – Part I: Real World Signals from Custom Listening Tests".
  8. Nagel, Frederik; Neuendorf, Max (October 2011). "Exploratory Studies on Perceptual Stationarity in Listening Test – Part II: Synthetic Signals with Time Varying Artifacts".
  9. Schinkel-Bielefeld, Nadja (May 2017). "Audio Quality Evaluation in MUSHRA Tests – Influences between Loop Setting and a Listener's Ratings". 142nd Convention of the Audio Engineering Society.
  10. ITU-T Recommendation P.800 (August 1996). "Methods for subjective determination of transmission quality".
  11. Schinkel-Bielefeld, Nadja; Zhang, Jiandong; Qin, Yili; Leschanowsky, Anna Katharina; Fu, Shanshan (May 2017). "Is it Harder to Perceive Coding Artifact in Foreign Language Items? – A Study with Mandarin Chinese and German Speaking Listeners".
  12. Blašková, Lubica; Holub, Jan (2008). "How do Non-native Listeners Perceive Quality of Transmitted Voice?" (PDF). Communications. 10 (4): 11–15. doi:10.26552/com.C.2008.4.11-14. S2CID 196699038.