James Gary Propp is a professor of mathematics at the University of Massachusetts Lowell.
In high school, Propp was one of the national winners of the United States of America Mathematical Olympiad (USAMO), and an alumnus of the Hampshire College Summer Studies in Mathematics. [1] Propp obtained his AB in mathematics in 1982 at Harvard. After advanced study at Cambridge, he obtained his PhD from the University of California at Berkeley. He has held professorships at seven universities, including Harvard, MIT, the University of Wisconsin, and the University of Massachusetts Lowell.
Propp is the co-editor of the book Microsurveys in Discrete Probability (1998) and has written more than fifty journal articles on game theory, combinatorics and probability, and recreational mathematics. He lectures extensively and has served on the Mathematical Olympiad Committee of the Mathematical Association of America, which sponsors the USAMO. In the early 1990s Propp lived in Boston and later in Arlington, Massachusetts. [2] [3]
In 1996, Propp and David Wilson invented coupling from the past, a Markov chain Monte Carlo (MCMC) method for sampling from the stationary distribution of a Markov chain. Unlike many MCMC algorithms, coupling from the past in principle yields a perfect sample from the stationary distribution. [4] [5] His papers have discussed the use of surcomplex numbers in game theory; [6] the enumeration of alternating sign matrices; [7] and occurrences of Grandi's series as an Euler characteristic of infinite-dimensional real projective space. [8] [9]
Propp was a member of the National Puzzlers' League under the nom Aesop. [3] He was recruited for the organization by colleague Henri Picciotto, [2] cruciverbalist and co-author of the league's first cryptic crossword collection. [10] Propp is the creator of the "Self-Referential Aptitude Test", a humorous multiple-choice test in which all questions except the last make self-references to their own answers. It was created in the early 1990s for a puzzlers' party. [11]
Propp is the author of Tuscanini, a 1992 children's book about a musical elephant, illustrated by Ellen Weiss. [12]
In 2015 he was elected as a fellow of the American Mathematical Society "for contributions to combinatorics and probability, and for mentoring and exposition." [13]
He is married to research psychologist Alexandra (Sandi) Gubin. They have a son Adam and a daughter Eliana. [14]
Combinatorics is an area of mathematics primarily concerned with counting, both as a means and an end in obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science.
Monte Carlo methods, or Monte Carlo experiments, are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. Monte Carlo methods are mainly used in three problem classes: optimization, numerical integration, and generating draws from a probability distribution.
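A minimal sketch of the underlying idea, estimating pi by repeated random sampling of points in the unit square (the function name and sample count here are illustrative, not tied to any particular method from the text):

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling uniform points in the unit square and
    counting the fraction that land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Area of quarter circle / area of square = pi/4.
    return 4.0 * inside / n_samples

pi_hat = estimate_pi(100_000)  # should be close to 3.14159
```

The estimate converges at the usual Monte Carlo rate of O(1/sqrt(n)), which is why the method shines when deterministic alternatives are unavailable rather than when high precision is cheap.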
In statistics and statistical physics, the Metropolis–Hastings algorithm is a Markov chain Monte Carlo (MCMC) method for obtaining a sequence of random samples from a probability distribution from which direct sampling is difficult. This sequence can be used to approximate the distribution or to compute an integral. Metropolis–Hastings and other MCMC algorithms are generally used for sampling from multi-dimensional distributions, especially when the number of dimensions is high. For single-dimensional distributions, there are usually other methods that can directly return independent samples from the distribution, and these are free from the problem of autocorrelated samples that is inherent in MCMC methods.
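A minimal random-walk Metropolis sketch for a one-dimensional standard normal target; the function names, proposal step size, and sample count are illustrative assumptions, not a reference implementation:

```python
import math
import random

def metropolis_hastings(log_target, n_steps, x0=0.0, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + Normal(0, step^2) and
    accept with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        log_alpha = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = proposal   # accept the move
        samples.append(x)  # on rejection, the current state repeats
    return samples

# Target: standard normal, known only up to a normalizing constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 50_000)
mean = sum(samples) / len(samples)
```

Because consecutive states are correlated (the autocorrelation problem mentioned above), the effective sample size is smaller than the raw chain length.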
A Markov chain or Markov process is a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). It is named after the Russian mathematician Andrey Markov.
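The "what happens next depends only on the state of affairs now" property can be sketched by simulating a small discrete-time chain; the two-state weather chain below is a hypothetical example:

```python
import random

def simulate_chain(transition, start, n_steps, seed=0):
    """Simulate a DTMC given a transition table mapping each state to a
    list of (next_state, probability) pairs."""
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(n_steps):
        r, cum = rng.random(), 0.0
        for nxt, p in transition[state]:
            cum += p
            if r < cum:
                state = nxt
                break
        path.append(state)
    return path

# Hypothetical weather chain: tomorrow depends only on today.
weather = {
    "sunny": [("sunny", 0.9), ("rainy", 0.1)],
    "rainy": [("sunny", 0.5), ("rainy", 0.5)],
}
path = simulate_chain(weather, "sunny", 10_000)
frac_sunny = path.count("sunny") / len(path)
```

For this chain the stationary distribution puts probability 5/6 on "sunny" (solve pi = pi P), so the long-run fraction of sunny days in the simulated path should approach that value.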
A hidden Markov model (HMM) is a statistical Markov model in which the system being modeled is assumed to be a Markov process X with unobservable ("hidden") states. As part of the definition, HMM requires that there be an observable process Y whose outcomes are "influenced" by the outcomes of X in a known way. Since X cannot be observed directly, the goal is to learn about X by observing Y. HMM has an additional requirement that the outcome of Y at time t = t0 must be "influenced" exclusively by the outcome of X at t = t0, and that the outcomes of X and Y at t < t0 must be conditionally independent of Y at t = t0 given X at time t = t0.
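These dependencies can be illustrated by sampling forward from a small HMM: the hidden state evolves as a Markov chain, and each observation depends only on the current hidden state. All state names and probabilities below are hypothetical:

```python
import random

def sample_hmm(trans, emit, init, n_steps, seed=0):
    """Draw a hidden path X and observations Y from an HMM:
    X is a Markov chain; each Y[t] depends only on X[t]."""
    rng = random.Random(seed)

    def draw(dist):
        r, cum = rng.random(), 0.0
        for outcome, p in dist.items():
            cum += p
            if r < cum:
                return outcome
        return outcome  # guard against floating-point shortfall

    x = draw(init)
    xs, ys = [x], [draw(emit[x])]
    for _ in range(n_steps - 1):
        x = draw(trans[x])       # hidden state transitions
        xs.append(x)
        ys.append(draw(emit[x])) # observation depends only on x
    return xs, ys

# Hypothetical model: hidden weather, observed daily activity.
trans = {"rain": {"rain": 0.7, "sun": 0.3}, "sun": {"rain": 0.4, "sun": 0.6}}
emit = {"rain": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "sun": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
xs, ys = sample_hmm(trans, emit, {"rain": 0.5, "sun": 0.5}, 1000)
```

In practice one observes only ys and infers properties of xs, for example with the forward-backward or Viterbi algorithms.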
In statistics, Markov chain Monte Carlo (MCMC) methods comprise a class of algorithms for sampling from a probability distribution. By constructing a Markov chain that has the desired distribution as its equilibrium distribution, one can obtain a sample of the desired distribution by recording states from the chain. The more steps that are included, the more closely the distribution of the sample matches the actual desired distribution. Various algorithms exist for constructing chains, including the Metropolis–Hastings algorithm.
Combinatorics is a branch of mathematics concerning the study of finite or countable discrete structures.
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics – such as integers, graphs, and statements in logic – do not vary smoothly in this way, but have distinct, separated values. Discrete mathematics, therefore, excludes topics in "continuous mathematics" such as calculus and analysis.
In statistics, Gibbs sampling or a Gibbs sampler is a Markov chain Monte Carlo (MCMC) algorithm for obtaining a sequence of observations which are approximated from a specified multivariate probability distribution, when direct sampling is difficult. This sequence can be used to approximate the joint distribution; to approximate the marginal distribution of one of the variables, or some subset of the variables; or to compute an integral. Typically, some of the variables correspond to observations whose values are known, and hence do not need to be sampled.
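A minimal Gibbs sampler sketch for a standard bivariate normal with correlation rho, where each full conditional distribution is itself a one-dimensional normal (the parameter values below are illustrative):

```python
import random

def gibbs_bivariate_normal(rho, n_steps, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.
    Alternately draw x | y ~ N(rho*y, 1 - rho^2) and
                    y | x ~ N(rho*x, 1 - rho^2)."""
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x = y = 0.0
    samples = []
    for _ in range(n_steps):
        x = rng.gauss(rho * y, sd)  # update x from its full conditional
        y = rng.gauss(rho * x, sd)  # update y from its full conditional
        samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(0.8, 50_000)
```

Each update requires only a univariate draw, which is why Gibbs sampling is attractive when the full conditionals are tractable even though the joint distribution is not.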
Doron Zeilberger is an Israeli mathematician, known for his work in combinatorics.
Dana Randall is an American computer scientist. She works as the ADVANCE Professor of Computing, and adjunct professor of mathematics at the Georgia Institute of Technology. She is also an External Professor of the Santa Fe Institute. Previously she was executive director of the Georgia Tech Institute of Data Engineering and Science (IDEaS) that she co-founded, and director of the Algorithms and Randomness Center. Her research interests include combinatorics, computational aspects of statistical mechanics, Monte Carlo simulation of Markov chains, and randomized algorithms.
Bayesian inference of phylogeny combines the information in the prior and in the data likelihood to create the so-called posterior probability of trees, which is the probability that the tree is correct given the data, the prior and the likelihood model. Bayesian inference was introduced into molecular phylogenetics in the 1990s by three independent groups: Bruce Rannala and Ziheng Yang in Berkeley, Bob Mau in Madison, and Shuying Li at the University of Iowa, the last two being PhD students at the time. The approach has become very popular since the release of the MrBayes software in 2001, and is now one of the most popular methods in molecular phylogenetics.
Among Markov chain Monte Carlo (MCMC) algorithms, coupling from the past is a method for sampling from the stationary distribution of a Markov chain. Unlike many MCMC algorithms, coupling from the past in principle yields a perfect sample from the stationary distribution. It was invented by James Propp and David Wilson in 1996.
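The algorithm can be sketched for a small finite chain: coupled copies are run from every state starting at times -1, -2, -4, ..., reusing the same randomness at each time step; once all copies have coalesced by time 0, the common value is an exact draw from the stationary distribution. The three-state biased walk below is a hypothetical chain chosen only to illustrate the mechanics:

```python
import random

def coupling_from_the_past(states, update, seed=0):
    """Propp-Wilson coupling from the past for a finite Markov chain.
    `update(state, u)` applies one transition driven by the uniform
    random number u. The same u must be reused for the same time step
    across restarts; that reuse is what makes the sample exact."""
    rng = random.Random(seed)
    randomness = []  # randomness[t-1] drives the transition at time -t
    T = 1
    while True:
        while len(randomness) < T:
            randomness.append(rng.random())
        # Run coupled copies from every state, from time -T up to 0.
        current = {s: s for s in states}
        for t in range(T, 0, -1):
            u = randomness[t - 1]
            current = {s: update(x, u) for s, x in current.items()}
        if len(set(current.values())) == 1:
            return next(iter(current.values()))  # coalesced: exact draw
        T *= 2  # not coalesced: restart from further in the past

# Hypothetical chain: a biased walk on {0, 1, 2} with reflecting ends.
def walk_update(state, u):
    if u < 0.6:
        return min(state + 1, 2)  # step up with probability 0.6
    return max(state - 1, 0)      # step down with probability 0.4

sample = coupling_from_the_past([0, 1, 2], walk_update)
```

Because this update rule is monotone (every copy moves the same direction on the same u), coalescence happens quickly; for general chains one tracks all states or exploits a monotone coupling as Propp and Wilson did.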
This page lists articles related to probability theory. In particular, it lists many articles corresponding to specific probability distributions. Such articles are marked here by a code of the form (X:Y), which refers to the number of random variables involved and the type of the distribution. For example (2:DC) indicates a distribution with two random variables, discrete or continuous. Other codes are just abbreviations for topics. The list of codes can be found in the table of contents.
Martin Edward Dyer is a professor in the School of Computing at the University of Leeds, Leeds, England. He graduated from the University of Leeds in 1967, obtained his MSc from Imperial College London in 1968 and his PhD from the University of Leeds in 1979. His research interests lie in theoretical computer science, discrete optimization and combinatorics. Currently, he focuses on the complexity of counting and the efficiency of Markov chain algorithms for approximate counting.
Non-uniform random variate generation or pseudo-random number sampling is the numerical practice of generating pseudo-random numbers (PRN) that follow a given probability distribution. Methods are typically based on the availability of a uniformly distributed PRN generator. Computational algorithms are then used to manipulate a single random variate, X, or often several such variates, into a new random variate Y such that these values have the required distribution. The first methods were developed for Monte Carlo simulations in the Manhattan project, published by John von Neumann in the early 1950s.
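One standard such method is inverse-transform sampling, which maps a uniform variate through the inverse CDF of the target distribution. A minimal sketch for a hypothetical exponential target (rate and sample count are illustrative):

```python
import math
import random

def sample_exponential(rate, n, seed=0):
    """Inverse-transform sampling: if U ~ Uniform(0, 1), then
    X = -ln(1 - U) / rate follows an Exponential(rate) distribution,
    since the exponential CDF F(x) = 1 - exp(-rate * x) inverts to
    F^{-1}(u) = -ln(1 - u) / rate."""
    rng = random.Random(seed)
    return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

xs = sample_exponential(2.0, 100_000)
mean = sum(xs) / len(xs)  # the exponential mean is 1/rate
```

Using 1 - U rather than U avoids evaluating log(0), since Python's random() returns values in [0, 1).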
Jeffrey Seth Rosenthal is a Canadian statistician and nonfiction author. He is a professor in the University of Toronto's department of statistics, cross-appointed with its department of mathematics.
Michael Jeffrey Larsen is an American mathematician, a distinguished professor of mathematics at Indiana University Bloomington.
Markov Chains and Mixing Times is a book on Markov chain mixing times. The second edition was written by David A. Levin and Yuval Peres. Elizabeth Wilmer was a co-author on the first edition and is credited as a contributor to the second edition. The first edition was published in 2009 by the American Mathematical Society, with an expanded second edition in 2017.