Developer(s) | Nanomath LLC |
---|---|
Initial release | v1.0 (1995) |
Stable release | v2.3.3 (2022) |
Operating system | Windows |
Type | Scientific |
License | Proprietary commercial software |
Website | www |
SAAM II, short for "Simulation, Analysis and Modeling" version 2, is a computer program designed for scientific research in the biosciences. It serves as a descriptive and exploratory tool in drug development, tracer studies, metabolic disorders, and pharmacokinetics/pharmacodynamics research. It is grounded in multi-compartment model theory, a widely used approach for modeling complex biological systems. SAAM II facilitates the construction and simulation of models, providing researchers with a user interface that allows quick simulation and simultaneous fitting of simple and complex (linear and nonlinear) model structures to data. SAAM II is used by many pharmaceutical companies and pharmacy schools as a drug development, research, and educational tool.
SAAM II offers a user-friendly interface that eliminates the need for coding. Within the compartmental module, users construct models by dragging and dropping model components such as circles, arrows, and boxes. Setting up the conditions for a simulation is equally straightforward: drag-and-drop experiment-building icons let users specify inputs and sampling sites directly.
The numerical module, which is less frequently used, lets users write the model equations directly or describe the data with predefined functions; the latter supports non-compartmental analysis of the data.
Funded by the NIH, popKinetics is a companion application developed specifically for population analysis of compartmental models built within SAAM II. It implements two approaches to population parameter estimation: the Standard Two-Stage and the Iterative Two-Stage methods. Two-Stage methods may be favored when simplicity, computational efficiency, and minimal assumptions are desired in analyzing a population.
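The Standard Two-Stage idea can be sketched as follows. This is a hypothetical illustration with synthetic subjects and an assumed mono-exponential model, not popKinetics code: stage 1 fits each subject individually, and stage 2 summarizes the individual estimates as population statistics.

```python
# Sketch of the Standard Two-Stage approach (synthetic subjects, assumed
# model y = 10*exp(-k*t); illustrative only, not popKinetics internals).
import numpy as np
from scipy.optimize import least_squares

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])     # assumed sampling times (h)
rng = np.random.default_rng(0)

k_hats = []
for _ in range(5):                           # five synthetic subjects
    k_true = rng.normal(0.5, 0.05)           # assumed between-subject variability
    y = 10.0 * np.exp(-k_true * t)           # noiseless synthetic data
    fit = least_squares(lambda p: 10.0 * np.exp(-p[0] * t) - y, x0=[1.0])
    k_hats.append(fit.x[0])                  # stage 1: individual estimate

# Stage 2: population mean and standard deviation of the estimates
print(np.mean(k_hats), np.std(k_hats))
```

The Iterative Two-Stage method refines this by feeding the population summary back as a prior for re-fitting each subject, repeating until the estimates stabilize.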
The results obtained from SAAM II have been validated indirectly through extensive use over many years, replication of models in other programs, and publication in peer-reviewed journals. The software's numerical performance was also validated against WinNonlin: parameter estimates and model predictions generally agreed to within 1%. [1]
1. Pharmacokinetics and Pharmacodynamics (PK/PD) Research:
2. Population Pharmacokinetics:
3. Systems Biology:
4. Biotechnology:
5. Metabolic Diseases Research:
6. Tracer Studies:
7. Experimental Design:
8. Biological Modeling in Education:
9. Peer-Reviewed Publications:
Notably, SAAM II implements the glucose-insulin Minimal Models used in clinical trials to quantify improvements in insulin action produced by antidiabetic treatments. [2]
In the early 1950s, Mones Berman and others at the NIH worked on problems in radiation dosimetry. Berman decided that compartmental models (systems of differential equations) were the best way to analyze the transient (kinetic) data being collected, and he began developing a software tool that eventually became known as SAAM. The power of SAAM was its dictionary, which made it possible for a user to sketch a model and then, using the dictionary and a set of rules, create an input file directly from the sketch. SAAM took this information and generated the system of differential equations describing the model. The user could therefore think about the biology and pharmacology while the program handled the mathematics and statistics behind the scenes. SAAM was very popular, but one had to visit the NIH and work with Berman to learn how to use it.
Between 1986 and 1994, the University of Washington, working through its Resource Center for Kinetic Analysis in the Center for Bioengineering, led by Prof. David Foster with the help of Loren Zech from the NIH, rewrote the code around a new user interface, which led to SAAM II. The first version was released for Sun workstations in 1993; the PC version followed in 1994. Through several grants in the 2000-2012 period, Foster and Vicini produced the modern version 2.1, including a population-analysis add-on called popKinetics. In 2012, The Epsilon Group, a medical automation company in Virginia, licensed the commercial rights to improve and distribute the software.
In 2022, the commercial rights to develop and distribute SAAM II (up to the current version 2.3.3) were licensed to Nanomath LLC, a consulting and software company headquartered in Washington. Leadership and management of SAAM II were assumed by Simone Perazzolo, a scientist with experience in computational modeling of biological and pharmacological systems.
SAAM II utilizes three types of integrators for Ordinary Differential Equation (ODE) solving:
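As an illustration of what such numerical integration involves, the sketch below (an assumed one-compartment model and rate constant, not SAAM II internals) integrates dC/dt = -k·C with a general-purpose solver and checks it against the analytic solution.

```python
# Hypothetical illustration: integrating the one-compartment ODE
# dC/dt = -k*C numerically and comparing with C(t) = C0*exp(-k*t).
# The model and all constants are assumed values, not SAAM II code.
import numpy as np
from scipy.integrate import solve_ivp

k = 0.5          # assumed elimination rate constant (1/h)
C0 = 10.0        # assumed initial concentration (mg/L)

sol = solve_ivp(lambda t, C: -k * C, (0.0, 8.0), [C0],
                t_eval=np.linspace(0.0, 8.0, 9), rtol=1e-8, atol=1e-10)

analytic = C0 * np.exp(-k * sol.t)          # closed-form solution
print(np.max(np.abs(sol.y[0] - analytic)))  # small integration error
```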
SAAM II performs parameter optimization for fitting multiple data sets, using a weighted nonlinear least-squares method derived from the Gauss-Newton algorithm. In regression tasks, users can define a weighting scheme based on either the error in the data or the error in the model.
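The weighted fitting idea can be sketched as follows, with synthetic observations and an assumed data-based error model (illustrative only, not SAAM II's implementation): each residual is divided by the standard deviation assigned to that observation before the sum of squares is minimized.

```python
# Sketch of weighted nonlinear least squares for y = A*exp(-k*t).
# Data, error model, and starting values are all assumed for illustration.
import numpy as np
from scipy.optimize import least_squares

t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
y = np.array([7.8, 6.1, 3.7, 1.4, 0.2])      # synthetic observations
sigma = 0.1 * np.maximum(y, 0.5)             # assumed data-based error model

def residuals(p):
    A, k = p
    return (A * np.exp(-k * t) - y) / sigma  # weighted residuals

fit = least_squares(residuals, x0=[5.0, 1.0])  # Gauss-Newton-type solver
A_hat, k_hat = fit.x
print(A_hat, k_hat)
```

Down-weighting noisy points this way prevents large, uncertain observations from dominating the fit.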
Additionally, SAAM II offers a Bayesian Maximum A Posteriori (MAP) option, allowing users to explore Bayesian parameter estimation; this incorporates prior knowledge and its uncertainty into the parameter estimation process.
To assess the reliability of parameter estimates, SAAM II provides posterior and practical identifiability features. These use the Fisher information matrix and the covariance matrix of the estimates to evaluate the quality of parameter identification, even for complex model structures with many unknown parameters.
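The covariance-based check can be illustrated as follows (an assumed mono-exponential model and made-up parameter values, not SAAM II output): the covariance of the estimates is approximated by the inverse of the Fisher information, here computed from the Jacobian of the weighted residuals.

```python
# Illustrative identifiability check: cov ≈ (J^T J)^(-1), where J is the
# Jacobian of weighted residuals for y = A*exp(-k*t). All values assumed.
import numpy as np

A, k = 10.0, 0.5                 # assumed fitted parameters
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
sigma = 0.5                      # assumed constant measurement SD

# Partial derivatives of the model w.r.t. (A, k), scaled by 1/sigma
J = np.column_stack([np.exp(-k * t),
                     -A * t * np.exp(-k * t)]) / sigma

cov = np.linalg.inv(J.T @ J)     # inverse Fisher information (approximate)
se = np.sqrt(np.diag(cov))       # standard errors of (A, k)
cv_percent = 100 * se / np.array([A, k])
print(cv_percent)                # small CV% suggests practical identifiability
```

Large or infinite coefficients of variation flag parameters the data cannot pin down, which is exactly the situation identifiability diagnostics are meant to expose.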
Furthermore, SAAM II includes local parameter sensitivity, batch analysis, and in silico population features, all of which are convenient tools for gaining insight into the model's behavior and assessing the impact of parameter changes on model outcomes. [3]
SAAM II appears in the curricula of many institutions in the United States and worldwide, including engineering, physics, and pharmacy schools.
Psychological statistics is the application of formulas, theorems, numbers, and laws to psychology. Statistical methods for psychology include the development and application of statistical theory and methods for modeling psychological data. These methods include psychometrics, factor analysis, experimental designs, and Bayesian statistics. The article also discusses journals in the same field.
Physiologically based pharmacokinetic (PBPK) modeling is a mathematical modeling technique for predicting the absorption, distribution, metabolism and excretion (ADME) of synthetic or natural chemical substances in humans and other animal species. PBPK modeling is used in pharmaceutical research and drug development, and in health risk assessment for cosmetics or general chemicals.
In pharmacology, clearance is a pharmacokinetic parameter representing the efficiency of drug elimination: the rate of elimination of a substance divided by its concentration. The parameter also indicates the theoretical volume of plasma from which a substance would be completely removed per unit time. Usually, clearance is measured in L/h or mL/min. Excretion, by contrast, is a measurement of the amount of a substance removed from the body per unit time. While clearance and excretion of a substance are related, they are not the same thing. The concept of clearance was described by Thomas Addis, a graduate of the University of Edinburgh Medical School.
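The defining ratio can be shown with a small worked example using assumed numbers (purely illustrative):

```python
# Worked example with assumed values: clearance CL = elimination rate / C.
rate_mg_per_h = 12.0   # assumed elimination rate (mg/h)
conc_mg_per_L = 3.0    # assumed plasma concentration (mg/L)

cl_L_per_h = rate_mg_per_h / conc_mg_per_L
print(cl_L_per_h)      # 4.0 L of plasma notionally cleared per hour
```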
AutoChem is NASA release software that constitutes an automatic computer code generator and documenter for chemically reactive systems written by David Lary between 1993 and the present. It was designed primarily for modeling atmospheric chemistry, and in particular, for chemical data assimilation.
A multi-compartment model is a type of mathematical model used for describing the way materials or energies are transmitted among the compartments of a system. Sometimes, the physical system being modeled is too complex to capture in full, so it is much easier to discretize the problem and reduce the number of parameters. Each compartment is assumed to be a homogeneous entity within which the entities being modeled are equivalent. A multi-compartment model is classified as a lumped-parameter model. Similar to more general mathematical models, multi-compartment models can treat variables as continuous, as in a differential equation, or as discrete, as in a Markov chain. Depending on the system being modeled, they can be treated as stochastic or deterministic.
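A minimal continuous example can be sketched as a pair of coupled ODEs (rate constants are assumed values chosen for illustration, not taken from any specific system): material exchanges between two compartments and is eliminated from the first.

```python
# Sketch of a linear two-compartment model (assumed rate constants):
#   dq1/dt = -(k21 + k01)*q1 + k12*q2
#   dq2/dt =  k21*q1 - k12*q2
# where k21/k12 are inter-compartment transfers and k01 is elimination.
import numpy as np
from scipy.integrate import solve_ivp

k21, k12, k01 = 0.3, 0.1, 0.2   # assumed transfer/elimination constants (1/h)

def rhs(t, q):
    q1, q2 = q
    return [-(k21 + k01) * q1 + k12 * q2,
            k21 * q1 - k12 * q2]

sol = solve_ivp(rhs, (0.0, 24.0), [100.0, 0.0],   # bolus into compartment 1
                t_eval=np.linspace(0.0, 24.0, 25))

# Mass only leaves through k01, so the total amount decreases over time.
total = sol.y[0] + sol.y[1]
print(total[0], total[-1])
```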
NONMEM is a non-linear mixed-effects modeling software package developed by Stuart L. Beal and Lewis B. Sheiner in the late 1970s at the University of California, San Francisco, and expanded by Robert Bauer at Icon PLC. Its name is an acronym for NON-linear Mixed Effects Modeling, and it is especially powerful in the context of population pharmacokinetics, pharmacometrics, and PK/PD models. NONMEM models are written in NMTRAN, a dedicated model specification language that is translated into FORTRAN, compiled on the fly, and executed by a command-line script. Results are presented as text output files including tables. There are multiple interfaces to assist modelers with housekeeping of files, tracking of model development, goodness-of-fit evaluations, and graphical output, such as PsN, Xpose, and Wings for NONMEM. The current version of NONMEM is 7.5.
Pharmacokinetics, sometimes abbreviated as PK, is a branch of pharmacology dedicated to describing how the body affects a specific substance after administration. The substances of interest include any chemical xenobiotic such as pharmaceutical drugs, pesticides, food additives, cosmetics, etc. It attempts to analyze chemical metabolism and to discover the fate of a chemical from the moment that it is administered up to the point at which it is completely eliminated from the body. Pharmacokinetics is based on mathematical modeling that places great emphasis on the relationship between drug plasma concentration and the time elapsed since the drug's administration. Pharmacokinetics is the study of how an organism affects the drug, whereas pharmacodynamics (PD) is the study of how the drug affects the organism. Both together influence dosing, benefit, and adverse effects, as seen in PK/PD models.
In statistics, confirmatory factor analysis (CFA) is a special form of factor analysis, most commonly used in social science research. It is used to test whether measures of a construct are consistent with a researcher's understanding of the nature of that construct. As such, the objective of confirmatory factor analysis is to test whether the data fit a hypothesized measurement model. This hypothesized model is based on theory and/or previous analytic research. CFA was first developed by Jöreskog (1969) and has built upon and replaced older methods of analyzing construct validity such as the MTMM Matrix as described in Campbell & Fiske (1959).
Psychometric software is software that is used for psychometric analysis of data from tests, questionnaires, or inventories reflecting latent psychoeducational variables. While some psychometric analyses can be performed with standard statistical software like SPSS, most analyses require specialized tools.
The plateau principle is a mathematical model or scientific law originally developed to explain the time course of drug action (pharmacokinetics). The principle has wide applicability in pharmacology, physiology, nutrition, biochemistry, and system dynamics. It applies whenever a drug or nutrient is infused or ingested at a relatively constant rate and when a constant fraction is eliminated during each time interval. Under these conditions, any change in the rate of infusion leads to an exponential increase or decrease until a new level is achieved. This behavior is also called an approach to steady state because rather than causing an indefinite increase or decrease, a natural balance is achieved when the rate of infusion or production is balanced by the rate of loss.
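The exponential approach to steady state described above can be written as C(t) = Css·(1 - e^(-kt)) for a constant infusion. The sketch below uses assumed constants to show the standard rule of thumb that the plateau is approached within a few percent after about five half-lives.

```python
# Sketch of the plateau principle (assumed steady-state level and rate
# constant): constant infusion with first-order loss approaches Css.
import math

Css = 5.0   # assumed steady-state concentration
k = 0.3     # assumed first-order elimination constant (1/h)

def conc(t):
    """Concentration at time t under constant infusion from zero."""
    return Css * (1.0 - math.exp(-k * t))

# After n half-lives the fraction of plateau reached is 1 - 2**(-n).
t_half = math.log(2) / k
print(conc(5 * t_half) / Css)   # 1 - 1/32 ≈ 0.97 after five half-lives
```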
PK/PD modeling is a technique that combines the two classical pharmacologic disciplines of pharmacokinetics and pharmacodynamics. It integrates a pharmacokinetic and a pharmacodynamic model component into one set of mathematical expressions that allows the description of the time course of effect intensity in response to administration of a drug dose. PK/PD modeling is related to the field of pharmacometrics.
Quantemol Ltd, based at University College London, was founded by Professor Jonathan Tennyson FRS and Dr. Daniel Brown in 2004. The company initially developed a unique software tool, Quantemol-N, which provides full accessibility to the highly sophisticated UK molecular R-matrix codes used to model electron interactions with polyatomic molecules. Since then Quantemol has expanded to further types of simulation, releasing the plasma and industrial plasma-tool package Quantemol-VT in 2013 and, in 2016, the database Quantemol-DB, representing the chemical and radiative transport properties of a wide range of plasmas.
A Logan plot is a graphical analysis technique based on the compartment model that uses linear regression to analyze the pharmacokinetics of tracers with reversible uptake. It is mainly used for the evaluation of nuclear medicine imaging data after the injection of a labeled ligand that binds reversibly to a specific receptor or enzyme.
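The mechanics can be sketched with synthetic one-tissue-model data (assumed rate constants and plasma input, not clinical data): plotting the integrated tissue curve over the tissue curve against the integrated plasma curve over the tissue curve yields, at late times, a line whose slope estimates the distribution volume K1/k2.

```python
# Illustrative Logan plot on synthetic data (all constants assumed).
# One-tissue model: dCt/dt = K1*Cp(t) - k2*Ct, true VT = K1/k2 = 4.
import numpy as np
from scipy.integrate import cumulative_trapezoid, solve_ivp

K1, k2 = 0.4, 0.1
t = np.linspace(0.0, 120.0, 241)

def Cp(tt):                             # assumed plasma input function
    return np.exp(-0.05 * tt) + 0.02

sol = solve_ivp(lambda tt, Ct: [K1 * Cp(tt) - k2 * Ct[0]],
                (0.0, 120.0), [0.0], t_eval=t, rtol=1e-8, atol=1e-10)
Ct = sol.y[0]

int_Cp = cumulative_trapezoid(Cp(t), t, initial=0)
int_Ct = cumulative_trapezoid(Ct, t, initial=0)

late = t > 60.0                         # keep only the linear late portion
slope, intercept = np.polyfit(int_Cp[late] / Ct[late],
                              int_Ct[late] / Ct[late], 1)
print(slope)                            # ≈ K1/k2, the distribution volume
```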
Virtual Cell (VCell) is an open-source software platform for modeling and simulation of living organisms, primarily cells. It has been designed to be a tool for a wide range of scientists, from experimental cell biologists to theoretical biophysicists.
LIMDEP is an econometric and statistical software package with a variety of estimation tools. In addition to the core econometric tools for analysis of cross sections and time series, LIMDEP supports methods for panel data analysis, frontier and efficiency estimation and discrete choice modeling. The package also provides a programming language to allow the user to specify, estimate and analyze models that are not contained in the built in menus of model forms.
MLAB is a multi-paradigm numerical computing environment and fourth-generation programming language that was originally developed at the National Institutes of Health.
Quantitative systems pharmacology (QSP) is a discipline within biomedical research that uses mathematical computer models to characterize biological systems, disease processes and drug pharmacology. QSP can be viewed as a sub-discipline of pharmacometrics that focuses on modeling the mechanisms of drug pharmacokinetics (PK), pharmacodynamics (PD), and disease processes using a systems pharmacology point of view. QSP models are typically defined by systems of ordinary differential equations (ODE) that depict the dynamical properties of the interaction between the drug and the biological system.
Leon Aarons is an Australian chemist who researches and teaches in the areas of pharmacodynamics and pharmacokinetics. He lives in the United Kingdom and has been a professor of pharmacometrics at the University of Manchester since 1976. In the interest of promoting the effective development of drugs, the main focus of his work is optimizing pharmacological models, the design of clinical studies, and data analysis and interpretation in the field of population pharmacokinetics. From 1985 to 2010 Aarons was an editor of the Journal of Pharmacokinetics and Pharmacodynamics, and he is a former executive editor of the British Journal of Clinical Pharmacology.
Nonlinear mixed-effects models are a special case of regression analysis for which a range of different software solutions are available. The statistical properties of nonlinear mixed-effects models make direct estimation by a BLUE estimator impossible, so nonlinear mixed-effects models are estimated according to maximum-likelihood principles. Specific estimation methods are applied, such as linearization methods like first-order (FO), first-order conditional (FOCE), or Laplacian (LAPL) approximations; approximation methods such as iterative two-stage (ITS), importance sampling (IMP), and stochastic approximation expectation maximization (SAEM); or direct sampling. A special case is the use of non-parametric approaches. Furthermore, estimation in limited or full Bayesian frameworks is performed using the Metropolis-Hastings or NUTS algorithms. Some software solutions focus on a single estimation method, while others cover a range of estimation methods and/or provide interfaces for specific use cases.