Variable-range hopping is a model used to describe carrier transport in a disordered semiconductor or in an amorphous solid by hopping in an extended temperature range. [1] It has a characteristic temperature dependence of

$$\sigma = \sigma_0 \exp\left[-\left(\frac{T_0}{T}\right)^{\beta}\right]$$

where $\sigma$ is the conductivity, $\sigma_0$ and $T_0$ are material-dependent constants, and $\beta$ is a parameter that depends on the model under consideration.
The Mott variable-range hopping describes low-temperature conduction in strongly disordered systems with localized charge-carrier states [2] and has a characteristic temperature dependence of

$$\sigma = \sigma_0 \exp\left[-\left(\frac{T_0}{T}\right)^{1/4}\right]$$

for three-dimensional conductance (with $\beta = 1/4$), and is generalized to $d$ dimensions:

$$\sigma = \sigma_0 \exp\left[-\left(\frac{T_0}{T}\right)^{1/(d+1)}\right].$$
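As a minimal numerical sketch of the $d$-dimensional law above (not part of the original treatment), the following Python snippet evaluates $\sigma(T)$ for $d = 1, 2, 3$; the prefactor `sigma0` and characteristic temperature `T0` are arbitrary illustrative values rather than fitted material parameters.

```python
import numpy as np

def mott_vrh_conductivity(T, sigma0=1.0, T0=1.0e6, d=3):
    """Mott variable-range-hopping law  sigma = sigma0 * exp[-(T0/T)^(1/(d+1))].

    T      : temperature in kelvin (scalar or array)
    sigma0 : prefactor (illustrative units)
    T0     : characteristic Mott temperature in kelvin (illustrative value)
    d      : spatial dimensionality
    """
    T = np.asarray(T, dtype=float)
    return sigma0 * np.exp(-(T0 / T) ** (1.0 / (d + 1)))

if __name__ == "__main__":
    temperatures = np.array([2.0, 5.0, 10.0, 50.0, 300.0])
    for d in (1, 2, 3):
        sigma = mott_vrh_conductivity(temperatures, d=d)
        print(f"d={d}:", np.array2string(sigma, precision=3))
```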
Hopping conduction at low temperatures is of great interest because of the savings the semiconductor industry could achieve if it were able to replace single-crystal devices with glass layers. [3]
The original Mott paper introduced a simplifying assumption that the hopping energy depends inversely on the cube of the hopping distance (in the three-dimensional case). Later it was shown that this assumption was unnecessary, and that proof is followed here. [4] In the original paper, the hopping probability at a given temperature was seen to depend on two parameters: $R$, the spatial separation of the sites, and $W$, their energy separation. Apsley and Hughes noted that in a truly amorphous system these variables are random and independent, and so can be combined into a single parameter, the range between two sites, which determines the probability of hopping between them.
Mott showed that the probability of hopping between two states of spatial separation $R$ and energy separation $W$ has the form:

$$P \propto \exp\left[-2\alpha R - \frac{W}{kT}\right]$$
where $\alpha^{-1}$ is the attenuation length for a hydrogen-like localised wave-function. This assumes that hopping to a state with a higher energy is the rate-limiting process.
We now define $\mathcal{R} = 2\alpha R + W/kT$, the range between two states, so $P \propto \exp(-\mathcal{R})$. The states may be regarded as points in a four-dimensional random array (three spatial coordinates and one energy coordinate), with the "distance" between them given by the range $\mathcal{R}$.
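To make the definition of the range concrete, the sketch below computes $\mathcal{R} = 2\alpha R + W/kT$ and the relative hopping probability $\exp(-\mathcal{R})$ for two example site pairs; the localisation length `alpha_inv`, separations, and temperature are illustrative values only.

```python
import numpy as np

k_B = 8.617333262e-5  # Boltzmann constant in eV/K

def hopping_range(R, W, T, alpha_inv=1.0e-9):
    """Dimensionless range  2*alpha*R + W/(k*T)  between two localized states.

    R         : spatial separation in metres
    W         : energy separation in eV (W >= 0, hop to a higher-energy state)
    T         : temperature in kelvin
    alpha_inv : attenuation (localization) length 1/alpha in metres (illustrative)
    """
    alpha = 1.0 / alpha_inv
    return 2.0 * alpha * np.asarray(R) + np.asarray(W) / (k_B * T)

if __name__ == "__main__":
    # Two candidate hops at T = 10 K: a short hop with a large energy mismatch
    # versus a longer hop with a small one; the hop with the smaller range is
    # exponentially more probable.
    for R, W in [(2e-9, 5e-3), (6e-9, 1e-4)]:
        r = hopping_range(R, W, T=10.0)
        print(f"R = {R:.1e} m, W = {W:.1e} eV -> range = {r:.1f}, P ~ exp(-range) = {np.exp(-r):.2e}")
```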
Conduction is the result of many series of hops through this four-dimensional array and, as short-range hops are favoured, it is the average nearest-neighbour "distance" between states which determines the overall conductivity. Thus the conductivity has the form

$$\sigma \propto \exp\left(-\overline{\mathcal{R}}_{nn}\right)$$

where $\overline{\mathcal{R}}_{nn}$ is the average nearest-neighbour range. The problem is therefore to calculate this quantity.
The first step is to obtain $\mathcal{N}(\mathcal{R})$, the total number of states within a range $\mathcal{R}$ of some initial state at the Fermi level. For $d$ dimensions, and under particular assumptions, this turns out to be

$$\mathcal{N}(\mathcal{R}) = K \mathcal{R}^{d+1}$$

where $K$ is proportional to the density of states $N$ at the Fermi level and to $kT$; for the three-dimensional case, $K = \frac{N \pi kT}{24\alpha^{3}}$. The particular assumptions are simply that $\overline{\mathcal{R}}_{nn}$ is well less than the band-width and comfortably bigger than the interatomic spacing.
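This counting can be checked numerically. The sketch below is an illustrative Monte Carlo estimate of $\mathcal{N}(\mathcal{R})$ for $d = 3$, assuming a constant density of states and upward hops only ($W \geq 0$); the density `N_dos`, inverse localisation length `alpha`, and temperature are arbitrary example values, and the estimate is compared against $K\mathcal{R}^{4}$ with the three-dimensional $K$ quoted above.

```python
import numpy as np

k_B = 8.617333262e-5   # Boltzmann constant, eV/K

def count_states_within_range(calR, T, N_dos, alpha, n_samples=200_000, seed=None):
    """Monte Carlo estimate of N(calR): the expected number of states whose
    range 2*alpha*R + W/(k*T) from a site at the Fermi level is below calR.

    Assumes a constant density of states N_dos (states per m^3 per eV) and
    upward hops only (W >= 0); all parameter values are illustrative.
    """
    rng = np.random.default_rng(seed)
    R_max = calR / (2.0 * alpha)          # largest spatial separation allowed
    W_max = calR * k_B * T                # largest energy separation allowed
    # Sample candidate states uniformly in a cube of side 2*R_max and in [0, W_max].
    xyz = rng.uniform(-R_max, R_max, size=(n_samples, 3))
    W = rng.uniform(0.0, W_max, size=n_samples)
    R = np.linalg.norm(xyz, axis=1)
    inside = (2.0 * alpha * R + W / (k_B * T)) <= calR
    sampled_volume = (2.0 * R_max) ** 3 * W_max          # m^3 * eV
    return N_dos * sampled_volume * inside.mean()

if __name__ == "__main__":
    T, N_dos, alpha = 10.0, 1.0e45, 1.0e9     # K, states/(m^3 eV), 1/m (illustrative)
    for calR in (5.0, 10.0, 20.0):
        mc = count_states_within_range(calR, T, N_dos, alpha, seed=0)
        analytic = N_dos * np.pi * k_B * T * calR**4 / (24.0 * alpha**3)   # K * calR^(d+1), d = 3
        print(f"calR = {calR:5.1f}:  Monte Carlo = {mc:.3e},  K*calR^4 = {analytic:.3e}")
```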
Then the probability that a state with range $\mathcal{R}$ is the nearest neighbour in the four-dimensional space (or in general the $(d+1)$-dimensional space) is

$$P_{nn}(\mathcal{R}) = \frac{\partial \mathcal{N}(\mathcal{R})}{\partial \mathcal{R}} \exp\left[-\mathcal{N}(\mathcal{R})\right],$$

the nearest-neighbour distribution: the probability that a state lies at range $\mathcal{R}$ and that no state lies at any smaller range.
For the $d$-dimensional case, then,

$$\overline{\mathcal{R}}_{nn} = \int_{0}^{\infty} (d+1)\, K \mathcal{R}^{d+1} \exp\left(-K\mathcal{R}^{d+1}\right) d\mathcal{R}.$$

This can be evaluated by making the substitution $t = K\mathcal{R}^{d+1}$ and using the gamma function, $\Gamma(z) = \int_{0}^{\infty} t^{z-1} e^{-t}\, dt$. After some algebra this gives

$$\overline{\mathcal{R}}_{nn} = \frac{\Gamma\!\left(\tfrac{d+2}{d+1}\right)}{K^{1/(d+1)}},$$

and hence, since $K \propto T$, that

$$\sigma \propto \exp\left[-\left(\frac{T_0}{T}\right)^{1/(d+1)}\right].$$
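The gamma-function result can be verified by direct quadrature. The sketch below (assuming SciPy is available) compares the integral for $\overline{\mathcal{R}}_{nn}$ with the closed form $\Gamma\!\left(\tfrac{d+2}{d+1}\right)/K^{1/(d+1)}$ for an arbitrary illustrative value of $K$.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

def mean_nn_range_numeric(K, d):
    """Numerically evaluate  integral_0^inf (d+1) K R^(d+1) exp(-K R^(d+1)) dR,
    the mean nearest-neighbour range for the distribution P_nn above."""
    integrand = lambda R: (d + 1) * K * R ** (d + 1) * np.exp(-K * R ** (d + 1))
    value, _ = quad(integrand, 0.0, np.inf)
    return value

def mean_nn_range_closed_form(K, d):
    """Closed form  Gamma((d+2)/(d+1)) / K^(1/(d+1))  from the gamma-function substitution."""
    return gamma((d + 2) / (d + 1)) / K ** (1.0 / (d + 1))

if __name__ == "__main__":
    K = 0.01   # illustrative value; K is proportional to temperature
    for d in (1, 2, 3):
        num = mean_nn_range_numeric(K, d)
        closed = mean_nn_range_closed_form(K, d)
        print(f"d={d}:  numeric = {num:.6f},  closed form = {closed:.6f}")
```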
When the density of states is not constant (an odd power law $N(E)$), the Mott conductivity is also recovered, as has been shown in the literature.
The Efros–Shklovskii (ES) variable-range hopping is a conduction model which accounts for the Coulomb gap, a soft gap in the density of states near the Fermi level caused by interactions between localized electrons. [5] It was named after Alexei L. Efros and Boris Shklovskii, who proposed it in 1975. [5]
The consideration of the Coulomb gap changes the temperature dependence to

$$\sigma = \sigma_0 \exp\left[-\left(\frac{T_{\mathrm{ES}}}{T}\right)^{1/2}\right]$$

for all dimensions.
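In practice, the Mott ($\beta = 1/4$ in three dimensions) and ES ($\beta = 1/2$) regimes are often distinguished by checking which power of $T^{-1}$ makes $\ln\sigma$ a straight line. The sketch below generates synthetic conductivity data with the ES exponent (illustrative parameters) and measures how well each exponent linearises it.

```python
import numpy as np

def vrh_conductivity(T, sigma0, T_char, p):
    """Generic VRH law  sigma = sigma0 * exp[-(T_char/T)^p];
    p = 1/4 gives the 3-D Mott law, p = 1/2 the Efros-Shklovskii law."""
    return sigma0 * np.exp(-(T_char / np.asarray(T, dtype=float)) ** p)

def linearity_of_log_sigma(T, sigma, p):
    """RMS deviation of ln(sigma) from a straight line in T^(-p);
    the exponent giving the smaller deviation describes the data better."""
    x = np.asarray(T, dtype=float) ** (-p)
    y = np.log(sigma)
    coeffs = np.polyfit(x, y, 1)
    return np.sqrt(np.mean((y - np.polyval(coeffs, x)) ** 2))

if __name__ == "__main__":
    T = np.linspace(2.0, 20.0, 50)
    # Synthetic "data" generated with the ES exponent (illustrative parameters).
    sigma_es = vrh_conductivity(T, sigma0=1.0, T_char=500.0, p=0.5)
    for p in (0.25, 0.5):
        rms = linearity_of_log_sigma(T, sigma_es, p)
        print(f"p = {p}: rms deviation of ln(sigma) vs T^(-p) from a line = {rms:.4f}")
```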