Bruceton analysis

A Bruceton analysis is one way of analyzing the sensitivity of explosives, as originally described by Dixon and Mood in 1948. Also known as the "Up and Down Test" or "the staircase method", a Bruceton analysis relies upon two parameters: the first stimulus and the step size. A stimulus is applied to a sample and the result is noted. If the result is positive, the stimulus is decremented by the step size; if the result is negative, the stimulus is increased. The test continues in this way, with each sample tested one step below the previous stimulus if the previous result was positive, or one step above it if the previous result was negative.
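
The stepping rule can be written out as a short routine. The sketch below is illustrative only: the trial function, the simulated threshold, and the parameter values are hypothetical stand-ins rather than part of any published test.

    import random

    def up_and_down(trial, first_stimulus, step_size, n_trials):
        # Run the up-and-down (staircase) procedure.
        # trial(stimulus) must return True for a positive result (e.g. detonation)
        # and False for a negative one. After a positive result the stimulus is
        # decremented by step_size; after a negative result it is incremented.
        stimulus = first_stimulus
        results = []
        for _ in range(n_trials):
            positive = trial(stimulus)
            results.append((stimulus, positive))
            stimulus = stimulus - step_size if positive else stimulus + step_size
        return results

    # Illustrative use: samples whose thresholds are normally distributed
    # around 3.5 with standard deviation 0.2 (made-up values).
    simulated_sample = lambda stimulus: random.gauss(3.5, 0.2) < stimulus
    history = up_and_down(simulated_sample, first_stimulus=4.0, step_size=0.2, n_trials=40)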

The results are tabulated and analyzed via Bruceton analysis, a simple computation of sums that can be performed with pencil and paper, to provide estimates of the mean and standard deviation. Confidence estimates are also produced.

Other analysis methods are the Neyer d-optimal test and the Dror and Steinberg [2008] sequential procedure. Bruceton analysis has one advantage over the modern techniques: it is very simple to implement and analyze, as it was designed to be performed without a computer. The modern techniques offer a great improvement in efficiency, needing a much smaller sample size to obtain any desired significance level. Furthermore, they can handle many other related experimental designs, such as cases where the influence of more than one variable needs to be learned (say, testing the sensitivity of an explosive to both shock level and environmental temperature), models whose responses are not simply binary (not only "detonate or not"), experiments where more than one sample per "run" is decided on (or "grouped") in advance, and more. In fact, with the modern techniques the experimenter is not even constrained to specify a single model and can reflect uncertainty as to the form of the true model.

For mechanical threshold testing, the up-down method originally proposed by Dixon was used by S. R. Chaplan et al. in 1994. Their paper tabulated the coefficients required to process the data after testing. The mechanical thresholds generated have a discrete range of values (i.e. they do not lie on an analog scale) and thus should be regarded as non-parametric for statistical purposes.

Worked examples

Example 1

Testing was performed at an interval of d = 0.2. Counting commences one step before the first change in response, so the initial trials before that point are not included in the tabulation.

Test Data
Stimulus (xi)   Results in test order (X = response, 0 = non-response)
4.0             X X
3.8             X 0 X X
3.6             X X X X 0 0 X X X X X X
3.4             X 0 X X X 0 X 0 X X 0 0 0 0 0 0 0
3.2             0 0 0 0 0 0 X 0
3.0             0

Each test level is assigned an index (i), with the lowest level tested given index 0.

Tabulated Data
Stimulus (xi)   Index (i)   Number of responses (Ni)   Number of non-responses (No)
4.0             5           1                          0
3.8             4           2                          1
3.6             3           9                          2
3.4             2           7                          10
3.2             1           1                          7
3.0             0           0                          1
Total                       20                         21

As the number of responses is less than the number of non-responses, the responses are used to determine the 50% value.

Data Analysis
i       Ni      i*Ni
5       1       5
4       2       8
3       9       27
2       7       14
1       1       1
0       0       0
Total   20      55

N = Sum Ni = (1+2+9+7+1+0) = 20

A = Sum i*Ni = (5+8+27+14+1+0) = 55

50% level = X0 + d*(A/N - 0.5) = 3.0 + 0.2*(55/20 - 0.5) = 3.0 + 0.45 = 3.45, where X0 = 3.0 is the stimulus level at index i = 0 and d = 0.2.
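
The same arithmetic can be expressed as a short routine. This is a minimal sketch built from the tabulated counts above; the function name fifty_percent_level and its signature are illustrative, not notation from Dixon and Mood.

    def fifty_percent_level(x0, d, counts, use_responses):
        # Estimate of the 50% level from the counts of the less frequent event.
        # counts[i] is the number of such events at index i, where index 0
        # corresponds to stimulus x0 and each index adds one step of size d.
        # The correction is -0.5 when responses are counted and +0.5 when
        # non-responses are counted.
        N = sum(counts)
        A = sum(i * n for i, n in enumerate(counts))
        correction = -0.5 if use_responses else 0.5
        return x0 + d * (A / N + correction)

    # Example 1: responses are the less frequent event (20 versus 21).
    responses = [0, 1, 7, 9, 2, 1]   # indices 0..5 = stimuli 3.0, 3.2, ..., 4.0
    print(fifty_percent_level(3.0, 0.2, responses, use_responses=True))   # ~3.45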

Example 2

Testing was performed at an interval of d = 0.2. Counting commences one step before the first change in response.

Test Data
Stimulus (xi)   Results in test order (X = response, 0 = non-response)
3.8             X X
3.6             X X X 0 X X X X X X X X X X 0 X
3.4             0 0 0 0 0 X 0 0 0 0 0 0 0 0
3.2             0

Tabulated Data
Stimulus (xi)   Index (i)   Number of responses (Ni)   Number of non-responses (No)
3.8             3           2                          0
3.6             2           14                         2
3.4             1           1                          13
3.2             0           0                          1
Total                       17                         16

As the number of non-responses is less than the number of responses, the non-responses are used to determine the 50% value.

Data Analysis
i       Ni (non-responses)   i*Ni
3       0                    0
2       2                    4
1       13                   13
0       1                    0
Total   16                   17

N = Sum Ni = (0+2+13+1) = 16

A = Sum i*Ni = (0+4+13+0) = 17

50% level = X0 + d*(A/N + 0.5) = 3.2 + 0.2*(17/16 + 0.5) = 3.2 + 0.3125 ≈ 3.51, where X0 = 3.2 is the stimulus level at index i = 0.
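
Example 2 can be checked with the same illustrative helper defined under Example 1; because the non-responses are the less frequent event here, the +0.5 form of the correction applies.

    # Example 2: non-responses are the less frequent event (16 versus 17).
    non_responses = [1, 13, 2, 0]    # indices 0..3 = stimuli 3.2, 3.4, 3.6, 3.8
    print(fifty_percent_level(3.2, 0.2, non_responses, use_responses=False))  # ~3.51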

Related Research Articles

Analysis of variance (ANOVA) is a collection of statistical models and their associated estimation procedures used to analyze the differences among means. ANOVA was developed by the statistician Ronald Fisher. ANOVA is based on the law of total variance, where the observed variance in a particular variable is partitioned into components attributable to different sources of variation. In its simplest form, ANOVA provides a statistical test of whether two or more population means are equal, and therefore generalizes the t-test beyond two means. In other words, ANOVA is used to test for differences among two or more means.

Nonparametric statistics is the branch of statistics that is not based solely on parametrized families of probability distributions. Nonparametric statistics is based on either being distribution-free or having a specified distribution but with the distribution's parameters unspecified. Nonparametric statistics includes both descriptive statistics and statistical inference. Nonparametric tests are often used when the assumptions of parametric tests are violated.

Psychophysics quantitatively investigates the relationship between physical stimuli and the sensations and perceptions they produce. Psychophysics has been described as "the scientific study of the relation between stimulus and sensation" or, more completely, as "the analysis of perceptual processes by studying the effect on a subject's experience or behaviour of systematically varying the properties of a stimulus along one or more physical dimensions".

In analytical chemistry, a calibration curve, also known as a standard curve, is a general method for determining the concentration of a substance in an unknown sample by comparing the unknown to a set of standard samples of known concentration. A calibration curve is one approach to the problem of instrument calibration; other standard approaches may mix the standard into the unknown, giving an internal standard. The calibration curve is a plot of how the instrumental response, the so-called analytical signal, changes with the concentration of the analyte.

Mathematical statistics is the application of probability theory, a branch of mathematics, to statistics, as opposed to techniques for collecting statistical data. Specific mathematical techniques which are used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure theory.

The safety testing of explosives involves the determination of various properties of the different energetic materials that are used in commercial, mining, and military applications. It is highly desirable to measure the conditions under which explosives can be set off for several reasons, including: safety in handling, safety in storage, and safety in use.

The limit of detection is the lowest signal, or the lowest corresponding quantity to be determined from the signal, that can be observed with a sufficient degree of confidence or statistical significance. However, the exact threshold used to decide when a signal significantly emerges above the continuously fluctuating background noise remains arbitrary and is a matter of policy and often of debate among scientists, statisticians and regulators depending on the stakes in different fields.

The Neyer d-optimal test is a sensitivity test. It can be used to answer questions such as "How far can a carton of eggs fall, on average, before one breaks?" If these egg cartons are very expensive, the person running the test would like to minimize the number of cartons dropped, to keep the experiment cheaper and to perform it faster. The Neyer test allows the experimenter to choose the experiment that gives the most information. In this case, given the history of egg cartons which have already been dropped, and whether those cartons broke or not, the Neyer test says "you will learn the most if you drop the next egg carton from a height of 32.123 meters."

In statistics, sequential analysis or sequential hypothesis testing is statistical analysis where the sample size is not fixed in advance. Instead data are evaluated as they are collected, and further sampling is stopped in accordance with a pre-defined stopping rule as soon as significant results are observed. Thus a conclusion may sometimes be reached at a much earlier stage than would be possible with more classical hypothesis testing or estimation, at consequently lower financial and/or human cost.

Microarray analysis techniques are used in interpreting the data generated from experiments on DNA, RNA, and protein microarrays, which allow researchers to investigate the expression state of a large number of genes - in many cases, an organism's entire genome - in a single experiment. Such experiments can generate very large amounts of data, allowing researchers to assess the overall state of a cell or organism. Data in such large quantities is difficult - if not impossible - to analyze without the help of computer programs.

The sequential probability ratio test (SPRT) is a specific sequential hypothesis test, developed by Abraham Wald and later proven to be optimal by Wald and Jacob Wolfowitz. Neyman and Pearson's 1933 result inspired Wald to reformulate it as a sequential analysis problem. The Neyman-Pearson lemma, by contrast, offers a rule of thumb for when all the data is collected.

Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum, based on a least squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems. Unlike with Fourier analysis, data need not be equally spaced to use LSSA.

In statistics, one-way analysis of variance is a technique that can be used to compare whether two samples' means are significantly different or not. This technique can be used only for numerical response data, the "Y", usually one variable, and numerical or (usually) categorical input data, the "X", always one variable, hence "one-way".

In statistics, Scheffé's method, named after the American statistician Henry Scheffé, is a method for adjusting significance levels in a linear regression analysis to account for multiple comparisons. It is particularly useful in analysis of variance, and in constructing simultaneous confidence bands for regressions involving basis functions.

In statistical quality control, the CUSUM is a sequential analysis technique developed by E. S. Page of the University of Cambridge. It is typically used for monitoring change detection. CUSUM was announced in Biometrika, in 1954, a few years after the publication of Wald's sequential probability ratio test (SPRT).

In statistics, regression validation is the process of deciding whether the numerical results quantifying hypothesized relationships between variables, obtained from regression analysis, are acceptable as descriptions of the data. The validation process can involve analyzing the goodness of fit of the regression, analyzing whether the regression residuals are random, and checking whether the model's predictive performance deteriorates substantially when applied to data that were not used in model estimation.

The marketing research process is a six-step process involving the definition of the problem being studied, determining what approach to take, formulating the research design, carrying out the field work, preparing and analyzing the data, and generating and presenting reports.

A Thurstonian model is a stochastic transitivity model with latent variables for describing the mapping of some continuous scale onto discrete, possibly ordered categories of response. In the model, each of these categories of response corresponds to a latent variable whose value is drawn from a normal distribution, independently of the other response variables and with constant variance. Developments over the last two decades, however, have led to Thurstonian models that allow unequal variance and non-zero covariance terms. Thurstonian models have been used as an alternative to generalized linear models in the analysis of sensory discrimination tasks. They have also been used to model long-term memory in ranking tasks of ordered alternatives, such as the order of the amendments to the US Constitution. Their main advantage over other models of ranking tasks is that they account for non-independence of alternatives. Ennis provides a comprehensive account of the derivation of Thurstonian models for a wide variety of behavioral tasks including preferential choice, ratings, triads, tetrads, dual pair, same-different and degree-of-difference, ranks, first-last choice, and applicability scoring. In Chapter 7 of this book, a closed-form expression, derived in 1988, is given for a Euclidean-Gaussian similarity model that provides a solution to the well-known problem that many Thurstonian models are computationally complex, often involving multiple integration. In Chapter 10, a simple form for ranking tasks is presented that involves only the product of univariate normal distribution functions and includes rank-induced dependency parameters. A theorem is proven showing that this particular form of the dependency parameters is the only way this simplification is possible. Chapter 6 links discrimination, identification and preferential choice through a common multivariate model in the form of weighted sums of central F distribution functions and allows a general variance-covariance matrix for the items.

Up-and-down designs (UDDs) are a family of statistical experiment designs used in dose-finding experiments in science, engineering, and medical research. Dose-finding experiments have binary responses: each individual outcome can be described as one of two possible values, such as success vs. failure or toxic vs. non-toxic. Mathematically the binary responses are coded as 1 and 0. The goal of dose-finding experiments is to estimate the strength of treatment (i.e., the "dose") that would trigger the "1" response a pre-specified proportion of the time. This dose can be envisioned as a percentile of the distribution of response thresholds. An example where dose-finding is used is in an experiment to estimate the LD50 of some toxic chemical with respect to mice.
