X-ray reflectivity (sometimes known as X-ray specular reflectivity, X-ray reflectometry, or XRR) is a surface-sensitive analytical technique used in chemistry, physics, and materials science to characterize surfaces, thin films and multilayers. [1] [2] [3] [4] [5] It is a form of reflectometry based on the use of X-rays and is related to the techniques of neutron reflectometry and ellipsometry.
The basic principle of X-ray reflectivity is to reflect a beam of X-rays from a flat surface and to then measure the intensity of X-rays reflected in the specular direction (reflected angle equal to incident angle). If the interface is not perfectly sharp and smooth then the reflected intensity will deviate from that predicted by the law of Fresnel reflectivity. The deviations can then be analyzed to obtain the density profile of the interface normal to the surface.
The earliest measurements of X-ray reflectometry were published by Heinz Kiessig in 1931, focusing mainly on the total reflection region of thin nickel films on glass. [6] The first calculations of XRR curves were performed by Lyman G. Parratt in 1954. [7] Parratt's work explored the surface of copper-coated glass, but since that time the technique has been extended to a wide range of both solid and liquid interfaces.
When an interface is not perfectly sharp, but has an average electron density profile given by $\rho_e(z)$, then the X-ray reflectivity can be approximated by the so-called master formula: [1] : 83

$$R(Q) = R_F(Q) \left| \frac{1}{\rho_\infty} \int_{-\infty}^{\infty} \frac{\mathrm{d}\rho_e}{\mathrm{d}z}\, e^{iQz}\, \mathrm{d}z \right|^2$$

Here $R(Q)$ is the reflectivity, $Q = \frac{4\pi \sin\theta}{\lambda}$ is the wavevector transfer, $\lambda$ is the X-ray wavelength (e.g. copper's K-alpha peak at 0.154056 nm), $\rho_\infty$ is the density deep within the material, and $\theta$ is the angle of incidence.
The Fresnel reflectivity, $R_F$, in the limit of small angles where polarization can be neglected, is given by:

$$R_F(Q) = \left| \frac{Q - Q^T}{Q + Q^T} \right|^2$$

Here $Q^T = \sqrt{Q^2 - Q_c^2}$ is the wavevector transfer inside the material, and $Q_c \approx \frac{4\pi}{\lambda}\theta_c$ corresponds to the critical angle $\theta_c = \sqrt{\rho_\infty r_0 \lambda^2/\pi}$, with $r_0$ the Thomson scattering length.

Below the critical angle $\theta < \theta_c$ (derived from Snell's law), 100% of the incident radiation is reflected through total external reflection: $R_F = 1$. For $Q \gg Q_c$, $R_F(Q) \approx \left(\frac{Q_c}{2Q}\right)^4$. Typically one can then use this formula to compare parameterized models of the average density profile in the z-direction with the measured X-ray reflectivity, and then vary the parameters until the theoretical profile matches the measurement.
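As a numerical illustration of the Fresnel reflectivity just described, the sketch below (with an illustrative, not material-specific, critical wavevector `Qc`) reproduces the total-reflection plateau below the critical edge and the characteristic fourth-power decay above it:

```python
import numpy as np

def fresnel_reflectivity(Q, Qc):
    """Small-angle Fresnel reflectivity R_F = |(Q - Q^T)/(Q + Q^T)|^2,
    with Q^T = sqrt(Q^2 - Qc^2) the wavevector transfer inside the material."""
    Q = np.asarray(Q, dtype=complex)
    Qt = np.sqrt(Q**2 - Qc**2)   # purely imaginary below the critical edge
    r = (Q - Qt) / (Q + Qt)
    return np.abs(r) ** 2

Qc = 0.05                         # illustrative critical wavevector (1/Angstrom)
Q = np.linspace(0.001, 0.5, 1000)
R = fresnel_reflectivity(Q, Qc)   # plateau at 1 below Qc, ~(Qc/2Q)^4 decay above
```

Below $Q_c$ the square root is imaginary, so the reflection coefficient has unit modulus and $R_F = 1$; well above $Q_c$ the curve approaches $(Q_c/2Q)^4$.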
For films with multiple layers, X-ray reflectivity may show oscillations with Q, analogous to the Fabry-Pérot effect, here called Kiessig fringes. [8] The period of these oscillations can be used to infer layer thicknesses, interlayer roughnesses, electron densities and their contrasts, and complex refractive indices (which depend on atomic number and atomic form factor), for example using the Abeles matrix formalism or the recursive Parratt formalism as follows:

$$X_j = \frac{R_j}{T_j} = e^{-2ik_{z,j}z_j}\, \frac{r_{j,j+1} + X_{j+1}\, e^{2ik_{z,j+1}z_j}}{1 + r_{j,j+1}\, X_{j+1}\, e^{2ik_{z,j+1}z_j}}$$

where $X_j$ is the ratio of the reflected and transmitted amplitudes between layers $j$ and $j+1$, $z_j$ is the depth of interface $j$ (so that $d_j = z_{j+1} - z_j$ is the thickness of layer $j$), and $r_{j,j+1}$ is the Fresnel coefficient for layers $j$ and $j+1$:

$$r_{j,j+1} = \frac{k_{z,j} - k_{z,j+1}}{k_{z,j} + k_{z,j+1}}$$

where $k_{z,j}$ is the z component of the wavenumber in layer $j$. For specular reflection, where the incident and reflected angles are equal, the $Q$ used previously is twice $k_z$, because $Q = 2k_z$. With the conditions $R_{N+1} = 0$ and $T_1 = 1$ for an $N$-interface system (i.e. nothing coming back from inside the semi-infinite substrate and a unit-amplitude incident wave), all $X_j$ can be calculated successively. Roughness can also be accounted for by multiplying the Fresnel coefficient by the factor

$$e^{-2 k_{z,j} k_{z,j+1} \sigma_j^2}$$

where $\sigma_j$ is the standard deviation (r.m.s. roughness) of interface $j$.
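A minimal sketch of the recursive Parratt formalism with roughness damping of the Fresnel coefficients, written in terms of X-ray scattering length densities; the layer values below are illustrative, not measured:

```python
import numpy as np

def parratt_reflectivity(Q, sld, thickness, roughness):
    """Recursive Parratt reflectivity for a layer stack.

    Q         : wavevector transfer values (1/Angstrom)
    sld       : scattering length densities [ambient, layer 1, ..., substrate] (1/Angstrom^2)
    thickness : thicknesses of the inner layers (Angstrom)
    roughness : rms roughness sigma of each interface (Angstrom)
    """
    Q = np.asarray(Q, dtype=complex)
    kz0 = Q / 2.0                      # specular geometry: Q = 2 k_z
    # z component of the wavevector in each layer
    kz = [np.sqrt(kz0**2 - 4.0 * np.pi * (s - sld[0])) for s in sld]
    X = 0.0                            # nothing reflected back inside the substrate
    for j in range(len(sld) - 2, -1, -1):
        # Fresnel coefficient of interface j, damped by interface roughness
        r = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])
        r = r * np.exp(-2.0 * kz[j] * kz[j + 1] * roughness[j] ** 2)
        # phase factor accumulated across layer j+1 (inner layers only)
        phase = np.exp(2j * kz[j + 1] * thickness[j]) if j < len(sld) - 2 else 1.0
        X = (r + X * phase) / (1.0 + r * X * phase)
    return np.abs(X) ** 2

# Illustrative 100 Angstrom film on a substrate (made-up SLD values)
Q = np.linspace(0.01, 0.3, 500)
R = parratt_reflectivity(Q, [0.0, 3.0e-5, 2.0e-5], [100.0], [0.0, 0.0])
```

The recursion starts from the substrate interface (where nothing returns from below) and works upward; the resulting curve shows Kiessig fringes whose spacing in Q is roughly 2π divided by the film thickness.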
Thin film thickness and critical angle can also be approximated with a linear fit of the squared incident angles of the fringe maxima, $\theta_m^2$, in rad², versus the unitless squared peak number $m^2$, as follows:

$$\theta_m^2 = \theta_c^2 + m^2 \left(\frac{\lambda}{2d}\right)^2$$

The slope of the fit yields the film thickness $d$ and the intercept yields the critical angle $\theta_c$.
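A short sketch of this linear fit, using synthetic peak positions generated from an assumed 40 nm film measured with Cu K-alpha radiation (all values illustrative):

```python
import numpy as np

# Assumed example film: 40 nm thick, Cu K-alpha radiation
lam = 0.154056e-9        # X-ray wavelength (m)
d = 40e-9                # film thickness (m)
theta_c = 4.0e-3         # critical angle (rad)

# Synthetic Kiessig maxima: theta_m^2 = theta_c^2 + m^2 (lambda / 2d)^2
m = np.arange(1, 8)
theta_m = np.sqrt(theta_c**2 + m**2 * (lam / (2.0 * d))**2)

# Linear fit of theta_m^2 (rad^2) versus the unitless m^2
slope, intercept = np.polyfit(m**2, theta_m**2, 1)
d_fit = lam / (2.0 * np.sqrt(slope))       # thickness from the slope
theta_c_fit = np.sqrt(intercept)           # critical angle from the intercept
```

On real data the peak angles would come from the measured curve, and the fit quality indicates how well the single-film model applies.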
X-ray reflectivity measurements are analyzed by fitting a simulated curve, calculated using the recursive Parratt formalism combined with the rough-interface formula, to the measured data. The fitting parameters are typically layer thicknesses, densities (from which the index of refraction, and eventually the z component of the wavevector, is calculated) and interfacial roughnesses. Measurements are typically normalized so that the maximum reflectivity is 1, but a normalization factor can also be included in the fit. Additional fitting parameters may be the background radiation level and the finite sample size, because of which the beam footprint at low angles may exceed the sample, reducing the measured reflectivity.
Several fitting algorithms have been attempted for X-ray reflectivity, some of which find only a local optimum instead of the global optimum. The Levenberg-Marquardt method finds a local optimum; because the curve has many interference fringes, it finds incorrect layer thicknesses unless the initial guess is extraordinarily good. The derivative-free simplex method also finds only a local optimum. To find the global optimum, global optimization algorithms such as simulated annealing are required. Unfortunately, simulated annealing may be hard to parallelize on modern multicore computers. Given enough time, simulated annealing can be shown to find the global optimum with a probability approaching 1, [9] but such a convergence proof does not mean the required time is reasonably low. In 1998, [10] it was found that genetic algorithms are robust and fast fitting methods for X-ray reflectivity. Thus, genetic algorithms have been adopted by the software of practically all X-ray diffractometer manufacturers and also by open source fitting software.
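The difficulty described above can be illustrated with a toy example: the error landscape of a fringed curve has many local minima in the thickness parameter, and a minimal genetic algorithm (tournament selection, blend crossover, Gaussian mutation, elitism) searches it globally. The forward model here is a deliberately simplified stand-in for a reflectivity simulation, not Parratt's formalism, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model standing in for a reflectivity simulation: a fringed
# curve whose squared-error landscape in d has many local minima.
def model(Q, d):
    return np.cos(Q * d) ** 2

Q = np.linspace(0.05, 0.5, 200)
d_true = 123.0                   # "true" film thickness to recover (arbitrary units)
data = model(Q, d_true)

def fom(d):
    """Figure of merit: mean squared difference to the 'measured' curve."""
    return np.mean((model(Q, d) - data) ** 2)

# Minimal genetic algorithm over the search interval [50, 200]
pop = rng.uniform(50.0, 200.0, size=40)   # initial candidate thicknesses
pop0 = pop.copy()
for generation in range(60):
    fitness = np.array([fom(d) for d in pop])
    # tournament selection: each child slot gets the better of two random parents
    i, j = rng.integers(0, len(pop), size=(2, len(pop)))
    parents = np.where(fitness[i] < fitness[j], pop[i], pop[j])
    # blend crossover with a shuffled partner, then Gaussian mutation
    partners = rng.permutation(parents)
    alpha = rng.uniform(0.0, 1.0, len(pop))
    children = alpha * parents + (1.0 - alpha) * partners
    children += rng.normal(0.0, 2.0, len(pop))
    # elitism: the best individual of the current generation survives unchanged
    children[0] = pop[np.argmin(fitness)]
    pop = np.clip(children, 50.0, 200.0)

d_best = pop[np.argmin([fom(d) for d in pop])]
```

A gradient-based local optimizer started from a random initial guess would typically settle into one of the many side minima; the population-based search, by contrast, keeps exploring the whole interval while elitism guarantees the best solution found is never lost.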
Fitting a curve requires a function usually called the fitness function, cost function, fitting error function or figure of merit (FOM). It measures the difference between the measured and simulated curves, so lower values are better. When fitting, the measurement and the best simulation are typically plotted in logarithmic space.
From a mathematical standpoint, the fitting error function that takes into account the effects of Poisson-distributed photon counting noise in a mathematically correct way is:

$$\mathrm{FOM}_{\sqrt{}} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( \sqrt{M_i} - \sqrt{S_i} \right)^2}$$

where $M_i$ is the measured photon count of the $i$-th data point and $S_i$ is the corresponding simulated count.
However, this function may give too much weight to the high-intensity regions. If high-intensity regions are important (such as when finding mass density from critical angle), this may not be a problem, but the fit may not visually agree with the measurement at low-intensity high-angle ranges.
Another popular fitting error function is the 2-norm in logarithmic space. It is defined in the following way:

$$\mathrm{FOM}_{\log} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( \log_{10} M_i - \log_{10} S_i \right)^2}$$

Needless to say, data points with zero measured photon counts must be removed before evaluating this function. The 2-norm in logarithmic space can be generalized to a p-norm in logarithmic space. Its drawback is that it may give too much weight to regions where the relative photon counting noise is high.
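The two error functions discussed above can be sketched as follows; the function names and the way counts are passed in are illustrative, not from any particular fitting package:

```python
import numpy as np

def fom_sqrt(measured, simulated):
    """Error function matched to Poisson counting noise: the square root
    is the variance-stabilizing transform for counted photons."""
    return np.sqrt(np.mean((np.sqrt(measured) - np.sqrt(simulated)) ** 2))

def fom_log(measured, simulated):
    """2-norm in logarithmic space; zero-count points are dropped first."""
    keep = measured > 0
    diff = np.log10(measured[keep]) - np.log10(simulated[keep])
    return np.sqrt(np.mean(diff ** 2))
```

Note how the two weight the curve differently: `fom_sqrt` emphasizes the high-count (low-angle) region, while `fom_log` treats each decade of intensity equally, which is why it can over-weight the noisy high-angle tail.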
The application of neural networks (NNs) in X-ray reflectivity (XRR) has gained attention for its high analysis speed, noise tolerance, and ability to find global optima. Neural networks offer a fast and robust alternative to fitting programs: they learn from large synthetic datasets, which are easy to calculate in the forward direction, and provide quick predictions of material properties such as layer thickness, roughness, and density. The first application of neural networks in XRR was demonstrated in the analysis of thin film growth, [11] and a wide range of publications has since explored the possibilities offered by neural networks, including free-form fitting, fast feedback loops for autonomous labs, and online experiment control.
One of the main challenges in XRR is the non-uniqueness of the inverse problem: multiple scattering length density (SLD) profiles can produce the same reflectivity curve. Recent advances in neural networks have focused on addressing this by designing architectures that explore all possible solutions, providing a broader view of potential material profiles. This development is critical in ensuring that solutions are not confined to a single, potentially incorrect branch of the solution space. [12]
An up-to-date overview of current analysis software can be found in the following link. [13] Diffractometer manufacturers typically provide commercial software to be used for X-ray reflectivity measurements. However, several open source packages are also available: Refnx and Refl1D for X-ray and neutron reflectometry, [14] [15] and GenX [16] [17] are commonly used open source X-ray reflectivity curve fitting software. They are implemented in the Python programming language and therefore run on both Windows and Linux. Reflex [18] [19] is a standalone software package dedicated to the simulation and analysis of X-ray and neutron reflectivity from multilayers. Micronova XRR [20] runs under Java and is therefore available on any operating system on which Java is available.
Documented neural network analysis packages such as MLreflect have also recently become available as an alternative approach to XRR data analysis. [21]