In analytical chemistry, a standard solution (titrant or titrator) is a solution whose concentration is accurately known. Standard solutions are generally prepared by dissolving a solute of known mass in a solvent to a precise volume, or by diluting a solution of known concentration with more solvent. [1] A standard solution ideally has a high degree of purity and is stable enough that its concentration remains accurately known even after long storage. [2]
Preparing a standard solution requires great attention to detail to avoid introducing contamination that could diminish the accuracy of the concentration. For this reason, high-precision glassware and instruments such as volumetric flasks, volumetric pipettes, micropipettes, and automatic pipettes are used in the preparation steps. The solvent used must also be pure and able to readily dissolve the solute into a homogeneous solution. [2]
Standard solutions are used for various volumetric procedures, such as determining the concentration of solutions of unknown concentration in titrations. The concentrations of standard solutions are normally expressed in units of moles per litre (mol/L, often abbreviated to M for molarity), moles per cubic decimetre (mol/dm³), kilomoles per cubic metre (kmol/m³), grams per millilitre (g/mL), or in terms related to those used in particular titrations (such as titres).
Standard solutions can be categorized by the type of analytical standard used to prepare them: either a primary standard or a secondary standard.
Primary standards are compounds with known stoichiometry, high purity, and high stability under standard conditions. The compound must not be hygroscopic, so that its weighed mass accurately represents the number of moles present. These characteristics make primary standards reliable for preparing standard solutions of accurate concentration simply by knowing the amount of compound and solvent used. Primary standard solutions are commonly used to determine the concentration of secondary standard solutions through titration. An example of a primary standard is potassium dichromate. [3]
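As an illustration of this calculation, the short Python sketch below converts a weighed mass of potassium dichromate into a solution concentration. The mass and flask volume are hypothetical values chosen for the example, not figures from the sources cited.

```python
# Concentration of a primary standard solution from a weighed mass.
# Hypothetical figures: 1.226 g of K2Cr2O7 diluted to the mark in a
# 250.0 mL volumetric flask.

MOLAR_MASS_K2CR2O7 = 294.185  # g/mol

mass_g = 1.226     # mass of primary standard weighed out (g)
volume_L = 0.2500  # final volume in the volumetric flask (L)

moles = mass_g / MOLAR_MASS_K2CR2O7
concentration_M = moles / volume_L

print(f"Concentration: {concentration_M:.5f} mol/L")
# -> Concentration: 0.01667 mol/L
```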
Secondary standards do not satisfy the requirements for a primary standard. [4] A standard solution prepared from a secondary standard cannot have its concentration accurately known without standardization against a primary standard. An example of a secondary standard is sodium hydroxide, a hygroscopic compound that reacts readily with its surroundings. The concentration of a standard solution made with sodium hydroxide may drift over time due to the instability of the compound, requiring standardization against a primary standard before use. [5] [6]
Standard solutions are commonly used for standardization processes in quantitative analysis to minimize error and maintain accuracy in the results.
External standardization is the most common method of standardization and requires one or more standards, each containing a known concentration of the same analyte. External standards are analyzed separately from the sample, unlike other methods of standardization, hence the name "external". When the concentrations of a set of external standards are plotted against a measured value, such as the absorbance of each solution, a normal calibration curve is obtained. Multiple samples of unknown concentration can then be analyzed against this calibration curve, which makes it a useful tool. The external standardization method can introduce determinate error if the matrix of the unknown solution differs drastically from that of the external standards. This can be accounted for by replicating the matrix of the unknown solution in the external standards, a process called "matrix matching". [7] [8]
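The sketch below shows one minimal way to build and use a normal calibration curve from external standards, assuming a linear instrument response. All concentrations and signal values are invented example data.

```python
import numpy as np

# External standardization: fit a normal calibration curve from a set of
# external standards, then read an unknown off the fitted line.

conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])              # standard concentrations (mg/L)
signal = np.array([0.002, 0.103, 0.199, 0.305, 0.398])  # measured instrument response

slope, intercept = np.polyfit(conc, signal, 1)  # linear least-squares fit

unknown_signal = 0.251
unknown_conc = (unknown_signal - intercept) / slope

print(f"signal = {slope:.4f}*C + {intercept:.4f}")
print(f"Unknown concentration: {unknown_conc:.2f} mg/L")  # -> about 5.0 mg/L
```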
A series of internal standard solutions contains the same concentration of a chemical called the internal standard and different concentrations of the analyte. The internal standard should be chemically similar to the analyte, so that the two receive the same treatment during measurement. Internal standards are used to correct for loss of analyte during sample preparation, for example when the analyte is in a volatile solvent. If both the internal standard and the analyte lose solvent proportionally, the ratio of their signals remains constant and can still be measured.
Plotting the ratio of the analyte signal to the internal standard signal against the analyte concentration yields a calibration curve. As with the external calibration curve, the internal calibration curve allows the concentration of analyte in an unknown sample to be calculated. [9]
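A minimal sketch of this signal-ratio calibration follows, assuming the same amount of internal standard in every solution. All numbers are hypothetical.

```python
import numpy as np

# Internal standard calibration: every solution carries the same amount of
# internal standard (IS); the analyte/IS signal ratio is plotted against
# analyte concentration.

analyte_conc = np.array([1.0, 2.0, 4.0, 8.0])    # mg/L
analyte_signal = np.array([0.48, 0.99, 1.95, 4.02])
is_signal = np.array([1.02, 0.98, 1.01, 1.00])   # internal standard response

ratio = analyte_signal / is_signal
slope, intercept = np.polyfit(analyte_conc, ratio, 1)

# Unknown sample, measured with the same amount of internal standard added:
unknown_ratio = 2.40 / 0.97
unknown_conc = (unknown_ratio - intercept) / slope
print(f"Unknown analyte concentration: {unknown_conc:.2f} mg/L")
```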
In the standard addition method, a standard (usually in the form of a solution) of known concentration is added in increasing increments to a set of solutions containing the same unknown analyte. The matrix of these solutions is identical, which prevents the matrix effect from altering the analyte signal during measurement. For this reason, the resulting calibration curve is linear. The concentration of the unknown analyte can then be determined from the x-intercept of this line. [10]
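The x-intercept calculation can be sketched as follows, using hypothetical spike concentrations and signals; the added concentrations refer to the standard's concentration in the final spiked solutions.

```python
import numpy as np

# Standard addition: increasing amounts of standard are spiked into aliquots
# of the same unknown; the unknown concentration is read from the x-intercept
# of the linear fit.

added_conc = np.array([0.0, 1.0, 2.0, 3.0])       # mg/L of standard added
signal = np.array([0.200, 0.301, 0.403, 0.499])   # measured response

slope, intercept = np.polyfit(added_conc, signal, 1)

# The fitted line crosses signal = 0 at C = -intercept/slope; the unknown
# concentration is the magnitude of that x-intercept.
unknown_conc = abs(-intercept / slope)
print(f"Unknown concentration: {unknown_conc:.2f} mg/L")  # -> about 2.0 mg/L
```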
In titrations, the concentration of an analyte in solution can be determined by titrating the analyte solution with a standard solution until the equivalence point is reached. [11] For example, to determine the concentration of a hydrochloric acid solution, a standard solution of known concentration, such as 0.5 M sodium hydroxide, is titrated against it.
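For this HCl/NaOH example, the arithmetic at the equivalence point reduces to a single ratio, assuming the 1:1 stoichiometry of HCl + NaOH → NaCl + H₂O. The volumes below are hypothetical.

```python
# Acid-base titration calculation for the HCl/NaOH example above,
# assuming 1:1 stoichiometry.

c_naoh = 0.5   # concentration of the NaOH standard solution (mol/L)
v_naoh = 24.6  # titrant volume delivered at the equivalence point (mL)
v_hcl = 25.0   # volume of the HCl sample titrated (mL)

# moles NaOH = moles HCl at equivalence, so c_HCl = c_NaOH * V_NaOH / V_HCl
c_hcl = c_naoh * v_naoh / v_hcl
print(f"HCl concentration: {c_hcl:.3f} mol/L")  # -> 0.492 mol/L
```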
Standard solutions are commonly used to determine the concentration of an analyte species via a calibration curve. A calibration curve is obtained by measuring a series of standard solutions of known concentration; linear regression on these measurements then allows the concentration of an unknown sample to be determined. [12] For example, by comparing the absorbance of a solution of unknown concentration to a series of standard solutions of varying concentration, the concentration of the unknown can be determined using Beer's law.
Any form of spectroscopy can be used in this way, so long as the analyte species absorbs appreciably in the spectral region measured. The standard solutions serve as reference points for determining the molarity of the unknown species.
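A minimal Beer's law sketch follows: with A = εbC, the slope of absorbance versus concentration for the standards gives εb, which converts an unknown absorbance into a concentration. The absorbance values are invented example data.

```python
import numpy as np

# Beer's law calibration: A = epsilon * b * C, so the slope of the
# absorbance-vs-concentration line is epsilon*b for a fixed path length b.

conc_M = np.array([1e-5, 2e-5, 4e-5, 8e-5])     # standard concentrations (mol/L)
absorbance = np.array([0.12, 0.25, 0.49, 0.99])

slope, intercept = np.polyfit(conc_M, absorbance, 1)  # slope ~ epsilon*b

unknown_abs = 0.62
unknown_conc = (unknown_abs - intercept) / slope
print(f"epsilon*b = {slope:.3e} L/mol, unknown C = {unknown_conc:.2e} M")
```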
The matrix effect can degrade the accuracy of a calibration curve through interactions between the matrix and the analyte response. It can be reduced by adding an internal standard to the standard solutions, or by using the standard addition method. [13]
Internal standards are used in GC/MS and LC/MS to control for variability introduced by injection, sample preparation and other matrix effects. The ratio of peak areas between the internal standard and analyte is calculated to determine analyte concentration. [14] A common type of internal standard is an isotopically labeled analogue of the analyte, which incorporates one or more atoms of ²H, ¹³C, ¹⁵N and ¹⁸O into its structure. [15]
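A single-point version of this peak-area-ratio calculation, as commonly done with an isotopically labeled internal standard, might look like the sketch below. The peak areas, spike level, and response factor are hypothetical, and treating the response factor as a single constant is a simplification.

```python
# Single-point internal standard (IS) quantification, e.g. in LC-MS with an
# isotopically labeled analogue of the analyte.

is_conc = 50.0           # concentration of labeled IS spiked into the sample (ng/mL)
area_analyte = 184000.0  # analyte peak area
area_is = 152000.0       # internal standard peak area
response_factor = 1.0    # analyte/IS response ratio, determined from standards

analyte_conc = (area_analyte / area_is) * is_conc / response_factor
print(f"Analyte concentration: {analyte_conc:.1f} ng/mL")  # -> 60.5 ng/mL
```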
Suppose the concentration of glutamine in an unknown sample needs to be measured. To do so, a series of standard solutions containing glutamine is prepared to create a calibration curve. A table summarizing a method for creating these solutions is shown below:
Concentration of glutamine stock solution: 7.50 × 10⁻³ g/mL

| Solution | Glutamine stock added (mL) | Dilute to mark with | Resulting concentration (g/mL) |
|---|---|---|---|
| 1 (blank) | 0 | Deionized water in a 25 mL volumetric flask | 0 |
| 2 | 1 | Deionized water in a 25 mL volumetric flask | 3.00 × 10⁻⁴ |
| 3 | 2 | Deionized water in a 25 mL volumetric flask | 6.00 × 10⁻⁴ |
| 4 | 3 | Deionized water in a 25 mL volumetric flask | 9.00 × 10⁻⁴ |
| 5 | 4 | Deionized water in a 25 mL volumetric flask | 1.20 × 10⁻³ |
Here, the glutamine stock solution is added in increasing increments with a high-accuracy instrument and diluted to the same final volume in volumetric flasks. The result is four standard solutions of known, varying concentration (along with a blank for instrument calibration). The final concentration of each is calculated from the dilution formula C₁V₁ = C₂V₂.
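The table's concentrations can be reproduced directly from the dilution formula:

```python
# Reproducing the table above with the dilution formula C1*V1 = C2*V2.
# Stock: 7.50e-3 g/mL glutamine; each solution is diluted to 25.00 mL.

stock_conc = 7.50e-3  # g/mL
final_volume = 25.00  # mL

for solution, v_added in enumerate([0.0, 1.0, 2.0, 3.0, 4.0], start=1):
    final_conc = stock_conc * v_added / final_volume
    print(f"Solution {solution}: {final_conc:.2e} g/mL")
# Solution 1: 0.00e+00, Solution 2: 3.00e-04, ..., Solution 5: 1.20e-03
```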
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration.
Atomic absorption spectroscopy (AAS) is a spectroanalytical procedure for the quantitative measurement of chemical elements. AAS is based on the absorption of light by free metallic ions that have been atomized from a sample. An alternative technique is atomic emission spectroscopy (AES).
Titration is a common laboratory method of quantitative chemical analysis to determine the concentration of an identified analyte. A reagent, termed the titrant or titrator, is prepared as a standard solution of known concentration and volume. The titrant reacts with a solution of analyte to determine the analyte's concentration. The volume of titrant that reacted with the analyte is termed the titration volume.
Ultraviolet–visible spectrophotometry refers to absorption spectroscopy or reflectance spectroscopy in part of the ultraviolet and the full, adjacent visible regions of the electromagnetic spectrum. Being relatively inexpensive and easily implemented, this methodology is widely used in diverse applied and fundamental applications. The only requirement is that the sample absorb in the UV-Vis region, i.e. be a chromophore. Absorption spectroscopy is complementary to fluorescence spectroscopy. Parameters of interest, besides the wavelength of measurement, are absorbance (A) or transmittance (%T) or reflectance (%R), and its change with time.
Chemometrics is the science of extracting information from chemical systems by data-driven means. Chemometrics is inherently interdisciplinary, using methods frequently employed in core data-analytic disciplines such as multivariate statistics, applied mathematics, and computer science, in order to address problems in chemistry, biochemistry, medicine, biology and chemical engineering. In this way, it mirrors other interdisciplinary fields, such as psychometrics and econometrics.
Gel permeation chromatography (GPC) is a type of size-exclusion chromatography (SEC) that separates high molecular weight or colloidal analytes on the basis of size or diameter, typically in organic solvents. The technique is often used for the analysis of polymers. As a technique, SEC was first developed in 1955 by Lathe and Ruthven. The term gel permeation chromatography can be traced back to J.C. Moore of the Dow Chemical Company, who investigated the technique in 1964. The proprietary column technology was licensed to Waters Corporation, who subsequently commercialized this technology in 1964. GPC systems and consumables are now also available from a number of manufacturers. It is often necessary to separate polymers, both to analyze them and to purify the desired product.
In analytical chemistry, Karl Fischer titration is a classic titration method that uses coulometric or volumetric titration to determine trace amounts of water in a sample. It was invented in 1935 by the German chemist Karl Fischer. Today, the titration is done with an automated Karl Fischer titrator.
In analytical chemistry, a calibration curve, also known as a standard curve, is a general method for determining the concentration of a substance in an unknown sample by comparing the unknown to a set of standard samples of known concentration. A calibration curve is one approach to the problem of instrument calibration; other standard approaches may mix the standard into the unknown, giving an internal standard. The calibration curve is a plot of how the instrumental response, the so-called analytical signal, changes with the concentration of the analyte.
An acid–base titration is a method of quantitative analysis for determining the concentration of Brønsted-Lowry acid or base (titrate) by neutralizing it using a solution of known concentration (titrant). A pH indicator is used to monitor the progress of the acid–base reaction and a titration curve can be constructed.
In analytical electrochemistry, coulometry is the measure of charge (coulombs) transfer during an electrochemical redox reaction. It can be used for precision measurements of charge, but coulometry is mainly used for analytical applications to determine the amount of matter transformed.
Amperometric titration refers to a class of titrations in which the equivalence point is determined through measurement of the electric current produced by the titration reaction. It is a form of quantitative analysis.
An analyte, component, titrand, or chemical species is a substance or chemical constituent that is of interest in an analytical procedure. The remainder of the sample is called the matrix. The procedure of analysis measures the analyte's chemical or physical properties, thus establishing its identity or concentration in the sample.
Voltammetry is a category of electroanalytical methods used in analytical chemistry and various industrial processes. In voltammetry, information about an analyte is obtained by measuring the current as the potential is varied. The analytical data for a voltammetric experiment comes in the form of a voltammogram, which plots the current produced by the analyte versus the potential of the working electrode.
The limit of detection is the lowest signal, or the lowest corresponding quantity to be determined from the signal, that can be observed with a sufficient degree of confidence or statistical significance. However, the exact threshold used to decide when a signal significantly emerges above the continuously fluctuating background noise remains arbitrary and is a matter of policy and often of debate among scientists, statisticians and regulators depending on the stakes in different fields.
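One widely used criterion, conventional rather than universal as the passage notes, takes the detection limit as roughly three standard deviations of the blank signal divided by the calibration slope. The sketch below applies that rule to hypothetical blank measurements.

```python
import statistics

# A common (conventional, not universal) detection-limit estimate:
# LOD = 3 * standard deviation of the blank / calibration slope.
# Blank readings and slope are hypothetical.

blank_signals = [0.011, 0.009, 0.012, 0.010, 0.008, 0.011, 0.010]
slope = 0.050  # signal per mg/L, from a calibration curve

sigma_blank = statistics.stdev(blank_signals)
lod = 3 * sigma_blank / slope
print(f"Estimated LOD: {lod:.3f} mg/L")
```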
The standard addition method, often used in analytical chemistry, quantifies the analyte present in an unknown. This method is useful for analyzing complex samples where a matrix effect interferes with the analyte signal. In comparison to the calibration curve method, the standard addition method has the advantage that the matrices of the unknown and standards are nearly identical. This minimizes the potential bias arising from the matrix effect when determining the concentration.
In a chemical analysis, the internal standard method involves adding the same amount of a chemical substance to each sample and calibration solution. The internal standard responds proportionally to changes in the analyte and provides a similar, but not identical, measurement signal. It must also be absent from the sample matrix to ensure there is no other source of the internal standard present. Taking the ratio of analyte signal to internal standard signal and plotting it against the analyte concentrations in the calibration solutions will result in a calibration curve. The calibration curve can then be used to calculate the analyte concentration in an unknown sample.
In analytical chemistry, quantitative analysis is the determination of the absolute or relative abundance of one, several or all particular substance(s) present in a sample. It relates to the determination of percentage of constituents in any given sample.
In chemical analysis, matrix refers to the components of a sample other than the analyte of interest. The matrix can have a considerable effect on the way the analysis is conducted and the quality of the results obtained; such effects are called matrix effects. For example, the ionic strength of the solution can have an effect on the activity coefficients of the analytes. The most common approach for accounting for matrix effects is to build a calibration curve using standard samples of known analyte concentration that approximate the matrix of the sample as closely as possible. This is especially important for solid samples where there is a strong matrix influence. In cases with complex or unknown matrices, the standard addition method can be used. In this technique, the response of the sample is measured and recorded, for example, using an electrode selective for the analyte. Then, a small volume of standard solution is added and the response is measured again. Ideally, the standard addition should increase the analyte concentration by a factor of 1.5 to 3, and several additions should be averaged. The volume of standard solution should be small enough to disturb the matrix as little as possible.
Permanganometry is one of the techniques used in chemical quantitative analysis. It is a redox titration that involves the use of permanganates to measure the amount of analyte present in unknown chemical samples. It involves two steps, namely the titration of the analyte with potassium permanganate solution and then the standardization of potassium permanganate solution with standard sodium oxalate solution. The titration involves volumetric manipulations to prepare the analyte solutions.
Ion suppression in LC-MS and LC-MS/MS refers to reduced detector response, or signal-to-noise ratio, resulting from competition for ionisation efficiency in the ionisation source between the analyte(s) of interest and other endogenous or exogenous species that have not been removed from the sample matrix during sample preparation. Ion suppression is not strictly a problem unless interfering compounds elute at the same time as the analyte of interest. In cases where ion-suppressing species do co-elute with an analyte, the effects on important analytical parameters, including precision, accuracy and limit of detection, can be extensive, severely limiting the validity of an assay's results.