HBJ model

In computer science, the Helman-Bader-JaJa model [1] is a concise message-passing model of parallel computing defined with the following parameters:

  - p, the number of processors;
  - n, the problem size;
  - τ, the latency, i.e. the time a processor needs to initiate a communication on the network;
  - σ, the time per machine word to traverse the network (the inverse bandwidth);
  - T_comp(n, p), the maximum local computation time over all processors;
  - T_comm(n, p), the time spent in communication on the network;
  - T(n, p) = T_comp(n, p) + T_comm(n, p), the total running time of an algorithm.

This model assumes that, for any subset of q processors, a block permutation among those q processors takes (τ + σm) time, where m is the size of the largest block.
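The sketch below is only an illustration of this cost accounting, not part of the model's original presentation; the helper functions, machine constants, and the redistribution example are hypothetical, and only the costs τ + σm and T(n, p) = T_comp(n, p) + T_comm(n, p) come from the model itself.

```python
# Illustrative cost accounting in the HBJ model (hypothetical helpers and
# machine constants; only tau + sigma*m and T = T_comp + T_comm come from
# the model itself).

def block_permutation_time(tau: float, sigma: float, m: int) -> float:
    """Time for one block permutation whose largest block holds m words."""
    return tau + sigma * m

def total_time(t_comp: float, t_comm: float) -> float:
    """Total running time T(n, p) = T_comp(n, p) + T_comm(n, p)."""
    return t_comp + t_comm

# Example: redistribute n words evenly over p processors with one block
# permutation, then do O(n/p) local work on each processor.
n, p = 1_000_000, 64
tau, sigma = 1e-5, 1e-8      # assumed latency and per-word transfer time (s)
per_word_work = 1e-9         # assumed local compute cost per word (s)

t_comm = block_permutation_time(tau, sigma, m=n // p)
t_comp = (n / p) * per_word_work
print(f"T(n, p) = {total_time(t_comp, t_comm):.6f} s")
```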

Analysis of common parallel algorithms

Bader and JaJa [2] use this model to analyze the complexities of common parallel algorithms provided in MPI libraries, such as dynamic data redistribution, median finding, and selection.
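As one illustration of how such bounds follow from the block-permutation assumption (the binomial-tree broadcast below is a standard schedule used here as an assumed example, not an operation taken from the cited analysis), broadcasting an m-word block among p processors takes about ⌈log2 p⌉ rounds of point-to-point block transfers, each costing τ + σm, for roughly (τ + σm) log p in total:

```python
import math

# Assumed example: binomial-tree broadcast of an m-word block among p
# processors in the HBJ model. Each of the ceil(log2 p) rounds is a
# point-to-point block transfer costing tau + sigma * m.

def broadcast_time(tau: float, sigma: float, m: int, p: int) -> float:
    rounds = math.ceil(math.log2(p)) if p > 1 else 0
    return rounds * (tau + sigma * m)

print(broadcast_time(tau=1e-5, sigma=1e-8, m=4096, p=64))  # 6 rounds
```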


References

  1. Helman, David R.; Bader, David A.; JaJa, Joseph (1998). "A Randomized Parallel Sorting Algorithm with an Experimental Study" (PDF). Journal of Parallel and Distributed Computing. 52: 1–23. doi:10.1006/jpdc.1998.1462. hdl:1903/835. Archived from the original (PDF) on 19 November 2012. Retrieved 26 October 2012.
  2. Bader, David A.; JaJa, Joseph (1996). "Practical parallel algorithms for dynamic data redistribution, median finding, and selection". Proceedings of the 10th IEEE International Parallel Processing Symposium: 292–301.