Time-activity curve

Time-activity curve showing the concentration of tracer within a tissue region of interest over time.

In medical imaging, a time-activity curve is a curve of radioactivity concentration plotted on the y-axis against time on the x-axis. It shows the concentration of a radiotracer within a region of interest in an image, measured over time from a dynamic scan. When the time-activity curve is obtained within a tissue, it is called a tissue time-activity curve, which represents the concentration of tracer within a region of interest inside that tissue over time.


Modern kinetic analysis is performed in various medical imaging techniques that require a tissue time-activity curve as one of the inputs to the mathematical model, for example, dynamic positron emission tomography (PET) imaging, perfusion CT, and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). A dynamic scan is a scan in which two-dimensional (2D) or three-dimensional (3D) images are acquired repeatedly over a period of time, forming a time-series of 2D/3D image datasets. For example, a DCE-MRI scan acquired over ten minutes may consist of image frames of 30 seconds' duration each, to capture the fast dynamics of the gadolinium tracer. Each data-point in the time-activity curve represents a measurement of tracer concentration from the region segmented on each of these image frames acquired over time.
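The structure of such a dynamic scan can be pictured as a stack of image volumes indexed by frame time. The following is a minimal Python/NumPy sketch; the array shape, frame count, and frame duration are illustrative assumptions, not taken from any particular scanner protocol.

```python
import numpy as np

# Hypothetical dynamic scan: twenty 30-second frames over ten minutes,
# each frame a 3D volume of 64 x 64 x 32 voxels (illustrative sizes).
n_frames = 20
frame_duration = 30.0                               # seconds
dynamic_scan = np.zeros((n_frames, 64, 64, 32))     # axes: (time, x, y, z)

# Mid-frame acquisition times: 15 s, 45 s, ..., 585 s.
frame_times = frame_duration * (np.arange(n_frames) + 0.5)
```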

Obtaining the time-activity curve

Time-activity curves are obtained with the help of region-of-interest analysis. Region-of-interest analysis restricts the image data to a specific region on which measurements can be made, for example, the lumbar vertebrae or the femoral neck. The marked region is then replicated on all of the image frames of the dynamic scan, and the average pixel value within the region in each frame is plotted against the time at which that frame was acquired.

The concept is explained with an example below. Consider a dynamic image where each grid represents an image frame acquired at a different time, say at t=1 sec, t=2 sec, t=3 sec, t=4 sec, t=5 sec, and t=6 sec. Assume each voxel value shows the concentration of tracer in Bq per ml, and that the target region within each image is only the central four voxels. First, the central four voxels in each frame are identified, which is our region of interest, and then an average is taken for each frame.

t=1 sec        t=2 sec        t=3 sec        t=4 sec        t=5 sec        t=6 sec

1 1 1 1        2 2 2 2        3 3 3 3        4 4 4 4        3 3 3 3        2 2 2 2
1 2 2 1        2 3 3 2        3 4 4 3        4 6 6 4        3 4 4 3        2 3 3 2
1 2 2 1        2 3 3 2        3 4 4 3        4 6 6 4        3 4 4 3        2 3 3 2
1 1 1 1        2 2 2 2        3 3 3 3        4 4 4 4        3 3 3 3        2 2 2 2

In this example, we would have an average value of 2 for the 1st frame at t=1, 3 for the 2nd frame at t=2, 4 for the 3rd frame at t=3, 6 for the 4th frame at t=4, 4 for the 5th frame at t=5, and 3 for the 6th frame at t=6. These values can be plotted on a graph, with time on the x-axis and the averaged concentration values on the y-axis. The graph will look as follows (assuming that the pixel values in the image are 0 at t=0):

Time-activity curve for the example explained in the text.
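The calculation above is simple to reproduce in code. The following Python/NumPy sketch builds the six example frames, replicates the central-four-voxel region of interest across them, and averages within the region for each frame; the helper function and variable names are ours, chosen for illustration.

```python
import numpy as np

def frame(border, centre):
    """Build a 4x4 frame with a given border value and central 2x2 value."""
    f = np.full((4, 4), border)
    f[1:3, 1:3] = centre
    return f

# The six frames from the example (voxel values in Bq/ml), at t = 1..6 sec.
frames = np.stack([frame(1, 2), frame(2, 3), frame(3, 4),
                   frame(4, 6), frame(3, 4), frame(2, 3)])
times = np.arange(1, 7)                   # seconds

# Region of interest: the central four voxels, as a boolean mask
# applied identically to every frame.
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True

# The time-activity curve: the mean ROI value of each frame.
tac = frames[:, roi].mean(axis=1)
print(tac)                                # [2. 3. 4. 6. 4. 3.]
```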

The region of interest (the central four pixels in the above example) can be identified using manual, [1] semi-automatic, [2] or automatic [3] methods. Manual region-of-interest definition requires the user to draw an arbitrary boundary around the target region, which is subjective. The boundary can be marked by points or by lines of varying thickness, or the selection can be made by entering coordinate values. When selecting a region of interest, the user can keep track of the properties of the boundary pixels, for example, the position and value of the currently selected pixel.

Semi-automatic methods define a region of interest with minimal user interaction, and can be broadly classified into geometric selection, [4] [2] thresholding, [5] and region-growing methods, [6] or a combination of these or other criteria. [7] In thresholding methods, pixels above a certain intensity level in an image are included in the region of interest. In region-growing methods, the user selects a seed pixel that identifies the first pixel within the region of interest; neighbouring pixels are then attached to it based on a stopping criterion, and the pixels accumulated around the seed when the algorithm stops form the region of interest.
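To make the two ideas concrete, here is a minimal sketch in Python/NumPy, assuming the image is a 2D array; the function names and the simple "within tolerance of the seed intensity" stopping criterion are illustrative choices, not the specific algorithms of the cited papers.

```python
import numpy as np
from collections import deque

def threshold_roi(image, level):
    """Thresholding: pixels above `level` are included in the region."""
    return image > level

def region_grow(image, seed, tol):
    """Region growing: starting from a user-chosen seed pixel, attach
    4-connected neighbours whose intensity stays within `tol` of the
    seed's intensity, stopping when no more pixels qualify."""
    roi = np.zeros(image.shape, dtype=bool)
    roi[seed] = True
    seed_value = float(image[seed])
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not roi[nr, nc]
                    and abs(float(image[nr, nc]) - seed_value) <= tol):
                roi[nr, nc] = True
                queue.append((nr, nc))
    return roi
```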

Automatic methods do not require user intervention, [8] and are also referred to as iterative or adaptive methods, as they work from prior knowledge of the region to be analysed. Most semi-automatic methods can also be automated, but they then need to be validated against the manual gold standard drawn by experts. [2] [9]

Relationship with the arterial input function

Obtaining the time-activity curve within an artery is the first step towards obtaining the image-derived arterial input function (IDAIF). The arterial time-activity curve is then corrected for various errors using arterial/venous blood samples before the resulting arterial input function (AIF) can be used as an input to the model for kinetic analysis.

See also

Arterial input function
Image segmentation
Thresholding (image processing)
Otsu's method
Region growing
Standardized uptake value
Perfusion MRI

References

  1. Mykkänen, Jouni M.; Juhola, Martti; Ruotsalainen, Ulla (2000). "Extracting VOIs from brain PET images". International Journal of Medical Informatics. 58–59: 51–57. doi:10.1016/s1386-5056(00)00075-7. ISSN   1386-5056. PMID   10978909.
  2. Puri, T.; Blake, G. M.; Curran, K. M.; Carr, H.; Moore, A. E. B.; Colgan, N.; O'Connell, M. J.; Marsden, P. K.; Fogelman, I.; Frost, M. L. (2012). "Semi-automatic Region-of-Interest Validation at the Femur in 18F-Fluoride PET/CT". Journal of Nuclear Medicine Technology. 40 (3): 168–174. doi:10.2967/jnmt.111.100107. ISSN 0091-4916. PMID 22892275.
  3. Feng, Yue; Fang, Hui; Jiang, Jianmin (2005), "Region Growing with Automatic Seeding for Semantic Video Object Segmentation", Pattern Recognition and Image Analysis : ICAPR 2005, Lecture Notes in Computer Science v. 3687, Springer, pp. 542–549, doi:10.1007/11552499_60, ISBN   978-3-540-28833-6 , retrieved 12 April 2021
  4. Krak, Nanda C.; Boellaard, R.; Hoekstra, Otto S.; Twisk, Jos W. R.; Hoekstra, Corneline J.; Lammertsma, Adriaan A. (2004). "Effects of ROI definition and reconstruction method on quantitative outcome and applicability in a response monitoring trial". European Journal of Nuclear Medicine and Molecular Imaging. 32 (3): 294–301. doi:10.1007/s00259-004-1566-1. ISSN   1619-7070. PMID   15791438. S2CID   22518269.
  5. Sankur, Bülent (2004). "Survey over image thresholding techniques and quantitative performance evaluation". Journal of Electronic Imaging. 13 (1): 146. Bibcode:2004JEI....13..146S. doi:10.1117/1.1631315. ISSN 1017-9909.
  6. Zheng, L.; Jesse, J.; Hugues, T. (2001). "Unseeded region growing for 3D image segmentation". Selected Papers from the Pan-Sydney Workshop on Visualisation, Volume 2. Sydney, Australia: Australian Computer Society, Inc.
  7. Pan, Zhigeng; Lu, Jianfeng (2007). "A Bayes-Based Region-Growing Algorithm for Medical Image Segmentation". Computing in Science & Engineering. 9 (4): 32–38. Bibcode:2007CSE.....9d..32P. doi:10.1109/mcse.2007.67. ISSN   1521-9615. S2CID   14423626.
  8. Suzuki, H.; Toriwaki, J. (1988). "Knowledge-guided automatic thresholding for 3-dimensional display of head MRI images". [1988 Proceedings] 9th International Conference on Pattern Recognition. IEEE Comput. Soc. Press. pp. 1210–1212. doi:10.1109/icpr.1988.28473. ISBN   0-8186-0878-1. S2CID   26177669.
  9. Weaver, Jean R.; Au, Jessie L-S. (1 October 1997). "Application of automatic thresholding in image analysis scoring of cells in human solid tumors labeled for proliferation markers". Cytometry. 29 (2): 128–135. doi: 10.1002/(sici)1097-0320(19971001)29:2<128::aid-cyto5>3.0.co;2-9 . ISSN   0196-4763. PMID   9332819.