Resel

In image analysis, a resel (from resolution element) represents the actual spatial resolution in an image or a volumetric dataset. The number of resels in an image is less than or equal to the number of pixels or voxels. In a real image the resolution can vary across the image, so the local resolution can be expressed as "resels per pixel" (or "resels per voxel").
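
As a rough illustration, if the effective smoothness of an image is expressed as a full width at half maximum (FWHM) in pixel or voxel units, the resel count is the number of voxels divided by the product of the FWHMs. The Python sketch below computes resels per voxel and the resel count for a toy volume; the specific dimensions and FWHM values are made up for illustration.

```python
import numpy as np

# Hypothetical example: a 64 x 64 x 40 voxel volume smoothed to an
# effective FWHM of 3 x 3 x 2.5 voxels in x, y, z.
shape = (64, 64, 40)
fwhm_vox = np.array([3.0, 3.0, 2.5])        # smoothness in voxel units

n_voxels = np.prod(shape)
resels_per_voxel = 1.0 / np.prod(fwhm_vox)  # local resolution
n_resels = n_voxels * resels_per_voxel      # resel count for the whole volume

print(f"{n_voxels} voxels -> {n_resels:.1f} resels "
      f"({resels_per_voxel:.4f} resels per voxel)")
```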

In functional neuroimaging analysis, an estimate of the number of resels, together with random field theory, is used in statistical inference. Keith Worsley proposed an estimator for the number of resels (equivalently, the roughness of the field).
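
For example, Worsley's results express the expected Euler characteristic of the excursion set of a smooth 3D Gaussian random field in terms of its resel count; at high thresholds this expectation approximates the family-wise error rate. A minimal sketch, assuming a stationary 3D Gaussian field and a made-up resel count:

```python
import numpy as np

def expected_ec_3d(resels, t):
    """Expected Euler characteristic of the excursion set of a 3-D
    stationary Gaussian random field above threshold t (3-D term of
    Worsley's formula); for high t this approximates P(max Z > t)."""
    return (resels * (4 * np.log(2)) ** 1.5 / (2 * np.pi) ** 2
            * (t ** 2 - 1) * np.exp(-t ** 2 / 2))

# With, say, 500 resels, find the Z threshold giving an expected EC of ~0.05.
resels = 500.0
ts = np.linspace(2, 6, 4001)
t_crit = ts[np.argmax(expected_ec_3d(resels, ts) <= 0.05)]
print(f"corrected Z threshold ≈ {t_crit:.2f}")
```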

The word "resel" is related to the words "pixel", "texel", and "voxel". Waldo R. Tobler is probably among the first to use the word. [1]

Related Research Articles

<span class="mw-page-title-main">Pixel</span> Physical point in a raster image

In digital imaging, a pixel, pel, or picture element is the smallest addressable element in a raster image, or the smallest addressable element in a dot matrix display device. In most digital display devices, pixels are the smallest elements that can be manipulated through software.

<span class="mw-page-title-main">Voxel</span> Element representing a value on a grid in three dimensional space

A voxel is a three-dimensional counterpart to a pixel. It represents a value on a regular grid in a three-dimensional space. Voxels are frequently used in the visualization and analysis of medical and scientific data.

<span class="mw-page-title-main">Functional magnetic resonance imaging</span> MRI procedure that measures brain activity by detecting associated changes in blood flow

Functional magnetic resonance imaging or functional MRI (fMRI) measures brain activity by detecting changes associated with blood flow. This technique relies on the fact that cerebral blood flow and neuronal activation are coupled. When an area of the brain is in use, blood flow to that region also increases.

<span class="mw-page-title-main">Image segmentation</span> Partitioning a digital image into segments

In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.
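
As a minimal illustration of this labelling view, the sketch below thresholds a synthetic image and labels its connected foreground regions with scipy; it is not any particular segmentation algorithm from the literature, just the simplest possible example of assigning labels to pixels.

```python
import numpy as np
from scipy import ndimage

# A toy grayscale image: two bright blobs on a dark background.
rng = np.random.default_rng(0)
image = rng.normal(0.1, 0.05, (64, 64))
image[10:20, 10:20] += 1.0
image[40:55, 30:50] += 1.0

# Segment by thresholding, then label connected foreground regions
# so that pixels sharing a label belong to the same segment.
foreground = image > 0.5
labels, n_segments = ndimage.label(foreground)
print(n_segments, "segments found")   # expected: 2
```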

In physics and mathematics, a random field is a random function over an arbitrary domain. That is, it is a function that takes on a random value at each point of its domain. It is also sometimes thought of as a synonym for a stochastic process with some restriction on its index set: by modern definitions, a random field is a generalization of a stochastic process where the underlying parameter need no longer be a real- or integer-valued "time" but can instead take values that are multidimensional vectors or points on some manifold.
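
A discretized Gaussian random field can be simulated by smoothing white noise, which also illustrates the link between smoothness (FWHM) and resels discussed above. A sketch with arbitrary parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A discretized stationary Gaussian random field on a 2-D grid,
# built by smoothing white noise with a Gaussian kernel.
rng = np.random.default_rng(1)
white = rng.standard_normal((256, 256))
sigma = 4.0                      # kernel width in pixels
field = gaussian_filter(white, sigma)
field /= field.std()             # rescale to unit variance

# The smoothing kernel sets the field's FWHM: FWHM = sigma * sqrt(8 ln 2).
fwhm = sigma * np.sqrt(8 * np.log(2))
print(f"FWHM ≈ {fwhm:.1f} pixels")
```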

<span class="mw-page-title-main">Volume rendering</span> Representing a 3D-modeled object or dataset as a 2D projection

In scientific visualization and computer graphics, volume rendering is a set of techniques used to display a 2D projection of a 3D discretely sampled data set, typically a 3D scalar field.
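
One of the simplest such techniques is the maximum intensity projection, which keeps the brightest sample along each viewing direction. A toy sketch with a synthetic scalar field (the field and its size are made up for illustration):

```python
import numpy as np

# A toy 3-D scalar field: a smooth bright blob inside a 64^3 volume.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
volume = np.exp(-((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) / 200.0)

# Maximum intensity projection along the z axis: one of the simplest
# ways to display a 2-D projection of a 3-D sampled data set.
mip = volume.max(axis=0)         # a (64, 64) image
print(mip.shape, mip.max())
```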

Statistical parametric mapping (SPM) is a statistical technique for examining differences in brain activity recorded during functional neuroimaging experiments. It was created by Karl Friston. It may alternatively refer to software created by the Wellcome Department of Imaging Neuroscience at University College London to carry out such analyses.

<span class="mw-page-title-main">Iterative reconstruction</span>

Iterative reconstruction refers to iterative algorithms used to reconstruct 2D and 3D images in certain imaging techniques. For example, in computed tomography an image must be reconstructed from projections of an object. Here, iterative reconstruction techniques are usually a better, but computationally more expensive, alternative to the common filtered back projection (FBP) method, which directly calculates the image in a single reconstruction step. In recent research, scientists have shown that extremely fast computation and massive parallelism are possible for iterative reconstruction, which makes it practical for commercialization.
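
As an illustration of the iterative idea (not of any particular commercial algorithm), the sketch below reconstructs a toy signal from linear projection data with the Landweber iteration, a basic gradient-type scheme; the matrix here is random, whereas a real CT projection operator is much larger and structured.

```python
import numpy as np

# Toy reconstruction problem: recover x from projection data b = A @ x_true.
rng = np.random.default_rng(2)
n_pixels, n_measurements = 100, 80
A = rng.normal(size=(n_measurements, n_pixels))
x_true = rng.normal(size=n_pixels)
b = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2    # step size small enough to converge
x = np.zeros(n_pixels)
for _ in range(500):
    x = x + step * A.T @ (b - A @ x)      # gradient step on ||Ax - b||^2

print("residual:", np.linalg.norm(A @ x - b))
```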

<span class="mw-page-title-main">Waldo R. Tobler</span> American geographer

Waldo Rudolph Tobler was an American-Swiss geographer and cartographer. Tobler is regarded as one of the most influential geographers and cartographers of the late 20th century and early 21st century. He is most well known for coining what has come to be referred to as Tobler's first law of geography. He also coined what has come to be referred to as Tobler's second law of geography.

Functional integration is the study of how brain regions work together to process information and effect responses. Though functional integration frequently relies on anatomic knowledge of the connections between brain areas, the emphasis is on how large clusters of neurons – numbering in the thousands or millions – fire together under various stimuli. The large datasets required for such a whole-scale picture of brain function have motivated the development of several novel and general methods for the statistical analysis of interdependence, such as dynamic causal modelling and statistical parametric mapping. These datasets are typically gathered in human subjects by non-invasive methods such as EEG/MEG, fMRI, or PET. The results can be of clinical value by helping to identify the regions responsible for psychiatric disorders, as well as to assess how different activities or lifestyles affect the functioning of the brain.

Spatial ecology studies the ultimate distributional or spatial unit occupied by a species. In a particular habitat shared by several species, each of the species is usually confined to its own microhabitat or spatial niche because two species in the same general territory cannot usually occupy the same ecological niche for any significant length of time.

<span class="mw-page-title-main">Voxel-based morphometry</span> Computational neuroanatomy method

Voxel-based morphometry is a computational approach to neuroanatomy that measures differences in local concentrations of brain tissue, through a voxel-wise comparison of multiple brain images. In traditional morphometry, volume of the whole brain or its subparts is measured by drawing regions of interest (ROIs) on images from brain scanning and calculating the volume enclosed. However, this is time consuming and can only provide measures of rather large areas. Smaller differences in volume may be overlooked. The value of VBM is that it allows for comprehensive measurement of differences, not just in specific structures, but throughout the entire brain. VBM registers every brain to a template, which gets rid of most of the large differences in brain anatomy among people. Then the brain images are smoothed so that each voxel represents the average of itself and its neighbors. Finally, the image volume is compared across brains at every voxel.
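
A toy sketch of the final, voxel-wise comparison step, assuming the images are already registered to a common template; the data here are synthetic and the group sizes, smoothing width, and effect location are made up.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import ttest_ind

# Two groups of already-registered tissue maps (synthetic), smoothed and
# then compared voxel by voxel with a two-sample t-test.
rng = np.random.default_rng(3)
shape = (20, 20, 20)
group_a = rng.normal(size=(15, *shape))
group_b = rng.normal(size=(12, *shape))
group_b[:, 8:12, 8:12, 8:12] += 0.8      # simulated local group difference

smooth = lambda g: np.stack([gaussian_filter(v, sigma=2.0) for v in g])
result = ttest_ind(smooth(group_a), smooth(group_b), axis=0)
print("smallest voxel-wise p-value:", result.pvalue.min())
```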

Compressed sensing is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined linear systems. It is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than the Nyquist–Shannon sampling theorem requires. Recovery is possible under two conditions. The first is sparsity, which requires the signal to be sparse in some domain. The second is incoherence, typically enforced through the restricted isometry property, which is sufficient for sparse signals. Compressed sensing has applications in, for example, MRI, where the incoherence condition is typically satisfied.
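
A small sketch of sparse recovery from underdetermined random measurements, using iterative soft-thresholding (ISTA) for the l1-regularised least-squares problem; the dimensions, sparsity level, and regularisation weight are arbitrary.

```python
import numpy as np

# Recover a sparse signal from far fewer random measurements than its length.
rng = np.random.default_rng(4)
n, m, k = 200, 60, 5                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # incoherent sensing matrix
b = A @ x_true

lam = 0.01
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for _ in range(2000):                      # ISTA: gradient step + soft threshold
    x = soft(x + (A.T @ (b - A @ x)) / L, lam / L)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```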

Karl John Friston FRS FMedSci FRSB is a British neuroscientist and theoretician at University College London. He is an authority on brain imaging and theoretical neuroscience, especially the use of physics-inspired statistical methods to model neuroimaging data and other random dynamical systems. Friston is a key architect of the free energy principle and active inference. In imaging neuroscience he is best known for statistical parametric mapping and dynamic causal modelling. Friston also acts as a scientific advisor to numerous groups in industry.

Document mosaicing is a process that stitches multiple, overlapping snapshot images of a document together to produce one large, high-resolution composite. The document is slid by hand under a stationary, over-the-desk camera until all parts of it have passed through the camera's field of view. As the document slides under the camera, its motion is coarsely tracked by the vision system, and snapshots are taken periodically so that successive snapshots overlap by about 50%. The system then finds the overlapping pairs and stitches them together repeatedly until the whole document is assembled as a single composite.

In computer vision, rigid motion segmentation is the process of separating regions, features, or trajectories in a video sequence into coherent subsets of space and time, where each subset corresponds to an independently moving rigid object in the scene. The goal of this segmentation is to differentiate and extract the meaningful rigid motion from the background and analyze it. Whereas image segmentation labels pixels that share certain characteristics at a particular time, here pixels are segmented according to their relative movement over a period of time, i.e. over the duration of the video sequence.
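
A heavily simplified sketch of the idea: assuming feature points are already tracked between two frames and each object undergoes a pure translation, clustering the displacement vectors separates the independently moving groups. Real rigid-motion segmentation methods handle rotation and use longer trajectories; the point coordinates and motions below are synthetic.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

# Tracked feature points in frame 1, moved by one of two translations in frame 2.
rng = np.random.default_rng(5)
pts_frame1 = rng.uniform(0, 100, size=(60, 2))
motion = np.where(np.arange(60)[:, None] < 30, [5.0, 0.0], [0.0, -3.0])
pts_frame2 = pts_frame1 + motion + rng.normal(0, 0.1, size=(60, 2))

# Cluster the per-point displacement vectors into two motion groups.
displacements = pts_frame2 - pts_frame1
centroids, labels = kmeans2(displacements, 2, minit='++')
print("points per motion group:", np.bincount(labels))
```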

<span class="mw-page-title-main">CONN (functional connectivity toolbox)</span>

CONN is a MATLAB-based cross-platform imaging software package for the computation, display, and analysis of functional connectivity in fMRI, in the resting state and during tasks.

<span class="mw-page-title-main">Tobler's second law of geography</span> One of several proposed laws of geography

The second law of geography, according to Waldo Tobler, is "the phenomenon external to a geographic area of interest affects what goes on inside." This is an extension of his first law. He first published it in 1999 in reply to a paper titled "Linear pycnophylactic reallocation comment on a paper by D. Martin" and then again in response to criticism of his first law of geography, in a paper titled "On the First Law of Geography: A Reply." Much of this criticism centered on the question of whether laws are meaningful in geography or in any of the social sciences. In that reply, Tobler proposed his second law while recognizing that others have proposed other concepts to fill the role of a second law, and he asserted that the phenomenon is common enough to warrant the title. Unlike Tobler's first law of geography, which is relatively well accepted among geographers, there are a few contenders for the title of the second law of geography. Tobler's second law is less well known but still has profound implications for geography and spatial analysis.

<span class="mw-page-title-main">Arbia's law of geography</span> One of several proposed laws of geography

Arbia's law of geography states, "Everything is related to everything else, but things observed at a coarse spatial resolution are more related than things observed at a finer resolution." Originally proposed as the second law of geography, this is one of several laws competing for that title. Because of this, Arbia's law is sometimes referred to as the second law of geography, or Arbia's second law of geography.

<span class="mw-page-title-main">Waldo Tobler bibliography</span> Geographer Waldo Toblers publications

Waldo Tobler's publications span 1957 to 2017, with his most productive year being 1973. Despite retiring in 1994, he continued to be involved with research for the remainder of his life. Most of his publications are peer-reviewed journal articles, with no single-issue textbooks or monographs, and the quantity of his publications is considered unremarkable compared to modern geographers. Nevertheless, many of his works are foundational to modern geography and cartography and are still frequently cited, including the first paper on using computers in cartography, the establishment of analytical cartography, and the coining of Tobler's first and second laws of geography. His work covered a wide range of topics, and many of his papers are considered "cartographic classics" that serve as required reading for both graduate and undergraduate students.

References

  1. "Dr". geog.tamu.edu. Archived from the original on 2002-11-06.
