Deep learning in photoacoustic imaging

[Figure: Depiction of photoacoustic tomography]

Deep learning in photoacoustic imaging combines the hybrid imaging modality of photoacoustic imaging (PA) with the rapidly evolving field of deep learning. Photoacoustic imaging is based on the photoacoustic effect, in which optical absorption causes a rise in temperature, which causes a subsequent rise in pressure via thermo-elastic expansion. [1] This pressure rise propagates through the tissue and is sensed via ultrasonic transducers. Due to the proportionality between the optical absorption, the rise in temperature, and the rise in pressure, the ultrasound pressure wave signal can be used to quantify the original optical energy deposition within the tissue. [2]
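This proportionality is commonly written as follows, where the initial pressure rise p₀ is the product of the Grüneisen parameter Γ, the optical absorption coefficient μₐ, and the local optical fluence F:

```latex
p_0 = \Gamma \, \mu_a F, \qquad \Gamma = \frac{\beta c^2}{C_p}
```

Here β is the thermal expansion coefficient, c the speed of sound, and C_p the specific heat capacity at constant pressure.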


Deep learning has been applied in both of the major forms of photoacoustic imaging: photoacoustic computed tomography (PACT) and photoacoustic microscopy (PAM). PACT utilizes wide-field optical excitation and an array of unfocused ultrasound transducers. [1] Similar to other computed tomography methods, the sample is imaged at multiple view angles, and an inverse reconstruction algorithm based on the detection geometry (typically universal backprojection, [3] modified delay-and-sum, [4] or time reversal [5] [6] ) is used to recover the initial pressure distribution within the tissue. PAM, on the other hand, uses focused ultrasound detection combined with weakly-focused optical excitation (acoustic resolution PAM, or AR-PAM) or tightly-focused optical excitation (optical resolution PAM, or OR-PAM). [7] PAM typically captures images point-by-point via a mechanical raster scanning pattern. At each scanned point, the acoustic time-of-flight provides axial resolution, while acoustic focusing yields lateral resolution. [1] A minimal sketch of the time-of-flight principle behind these reconstructions is shown below.
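The NumPy sketch below illustrates naive 2-D delay-and-sum beamforming; it assumes idealized point detectors at known positions, a uniform speed of sound, and nearest-sample interpolation. It is a didactic sketch, not the modified delay-and-sum of [4] or the universal backprojection of [3], which replace the simple delay-then-sum step with more accurate inversions of the wave equation.

```python
import numpy as np

def delay_and_sum(channel_data, det_pos, grid_x, grid_y, c, fs):
    """Naive 2-D delay-and-sum reconstruction (illustrative).

    channel_data : (n_detectors, n_samples) recorded PA signals
    det_pos      : (n_detectors, 2) detector coordinates [m]
    grid_x/y     : 1-D arrays of image-pixel coordinates [m]
    c            : speed of sound [m/s];  fs : sampling rate [Hz]
    """
    image = np.zeros((grid_y.size, grid_x.size))
    for d, (dx, dy) in enumerate(det_pos):
        # time of flight from every pixel to this detector
        tof = np.hypot(grid_x[None, :] - dx, grid_y[:, None] - dy) / c
        idx = np.clip((tof * fs).astype(int), 0, channel_data.shape[1] - 1)
        image += channel_data[d, idx]          # delay each channel, then sum
    return image
```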

Applications of deep learning in PACT

The first application of deep learning in PACT was by Reiter et al., [8] in which a deep neural network was trained to learn spatial impulse responses and locate photoacoustic point sources. The resulting mean axial and lateral point location errors on 2,412 randomly selected test images were 0.28 mm and 0.37 mm, respectively. After this initial implementation, applications of deep learning in PACT have branched out primarily into removing artifacts caused by acoustic reflections, [9] sparse sampling, [10] [11] [12] limited view, [13] [14] [15] and limited bandwidth. [16] [14] [17] [18] There has also been recent work in PACT toward using deep learning for wavefront localization. [19] Fusion-based networks that combine information from two different reconstructions have also been proposed to improve the reconstruction. [20]

Using deep learning to locate photoacoustic point sources

Traditional photoacoustic beamforming techniques model photoacoustic wave propagation using the detector array geometry and time-of-flight to account for differences in PA signal arrival time. However, these techniques fail to account for reverberant acoustic signals caused by acoustic reflection, resulting in reflection artifacts that corrupt the true photoacoustic point source location information. In Reiter et al., [8] a convolutional neural network (similar to a simple VGG-16 [21] style architecture) took pre-beamformed photoacoustic data as input and output a classification result specifying the 2-D point source location.
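A minimal PyTorch sketch of this kind of classifier is given below; the layer sizes, pooling choices, and the grid of candidate locations are illustrative assumptions, not the network of Reiter et al.

```python
import torch
import torch.nn as nn

class PointSourceNet(nn.Module):
    """VGG-style classifier: pre-beamformed channel data -> source-location class."""
    def __init__(self, n_locations: int):
        super().__init__()
        self.features = nn.Sequential(                      # two VGG-like conv blocks
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(64 * 4 * 4, n_locations)

    def forward(self, x):                                   # x: (B, 1, time, channels)
        return self.classifier(self.features(x).flatten(1))  # logits over grid cells

# e.g. logits = PointSourceNet(n_locations=40 * 40)(torch.randn(2, 1, 256, 128))
```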

Deep learning for PA wavefront localization

Johnstonbaugh et al. [19] localized the sources of photoacoustic wavefronts with a deep neural network. The network used was an encoder-decoder style convolutional neural network built from residual convolution, upsampling, and high field-of-view convolution modules; a Nyquist convolution layer and a differentiable spatial-to-numerical transform layer were also used within the architecture. Simulated PA wavefronts served as the input for training the model. To create the wavefronts, the forward simulation of light propagation was performed with the NIRFast toolbox under the light-diffusion approximation, while the forward simulation of sound propagation was performed with the k-Wave toolbox. The simulated wavefronts were subjected to different scattering media and Gaussian noise. The output of the network was an artifact-free heat map of the target's axial and lateral position. The network had a mean error of less than 30 microns when localizing targets at depths of less than 40 mm, and a mean error of 1.06 mm when localizing targets between 40 mm and 60 mm. [19] With a slight modification, the network was able to accommodate multi-target localization. [19] In a validation experiment, pencil lead was submerged in an intralipid solution at a depth of 32 mm; the network localized the lead's position for reduced scattering coefficients of 0, 5, 10, and 15 cm⁻¹. [19] The results show improvements over standard delay-and-sum and frequency-domain beamforming algorithms, and Johnstonbaugh proposes that this technology could be used for optical wavefront shaping, circulating melanoma cell detection, and real-time vascular surgeries. [19]
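The differentiable spatial-to-numerical transform is essentially a soft-argmax: it converts a heat map into continuous coordinates via a probability-weighted average, so localization error can be backpropagated end-to-end. A minimal sketch follows, with an assumed softmax normalization of the map:

```python
import torch

def dsnt(heatmap: torch.Tensor) -> torch.Tensor:
    """Differentiable spatial-to-numerical transform (soft-argmax sketch).

    heatmap : (B, H, W) unnormalized map; returns (B, 2) expected (x, y)
    coordinates in [-1, 1], computed as a probability-weighted average.
    """
    b, h, w = heatmap.shape
    p = heatmap.flatten(1).softmax(dim=1).view(b, h, w)   # normalize to a distribution
    ys = torch.linspace(-1, 1, h, device=heatmap.device)
    xs = torch.linspace(-1, 1, w, device=heatmap.device)
    x = (p.sum(dim=1) * xs).sum(dim=1)                    # E[x]: marginalize rows first
    y = (p.sum(dim=2) * ys).sum(dim=1)                    # E[y]: marginalize columns
    return torch.stack([x, y], dim=1)
```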

Removing acoustic reflection artifacts (in the presence of multiple sources and channel noise)

Building on the work of Reiter et al., [8] Allman et al. [9] utilized a full VGG-16 [21] architecture to locate point sources and remove reflection artifacts within raw photoacoustic channel data (in the presence of multiple sources and channel noise). The network was trained on simulated data produced with the MATLAB k-Wave library, and its results were later reaffirmed on experimental data.

Ill-posed PACT reconstruction

In PACT, tomographic reconstruction is performed, in which the projections from multiple solid angles are combined to form an image. When reconstruction methods like filtered backprojection or time reversal are applied to data sampled below the Nyquist-Shannon requirement, or acquired with limited bandwidth or limited view, the inverse problem is ill-posed [22] and the resulting reconstruction contains image artifacts. Traditionally these artifacts were removed with slow iterative methods like total variation minimization, but the advent of deep learning approaches has opened a new avenue that utilizes a priori knowledge from network training to remove them. In the deep learning methods that seek to remove these sparse-sampling, limited-bandwidth, and limited-view artifacts, the typical workflow first performs the ill-posed reconstruction technique to transform the pre-beamformed data into a 2-D representation of the initial pressure distribution that contains artifacts. A convolutional neural network (CNN) is then trained to remove the artifacts and produce an artifact-free representation of the ground-truth initial pressure distribution.
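Schematically, this workflow reduces to a two-stage pipeline; the function names below are illustrative placeholders rather than any particular published implementation:

```python
def reconstruct_and_clean(channel_data, recon, cnn):
    """Stage 1: a fast but ill-posed analytic reconstruction (e.g. time
    reversal on sparse data) yields a 2-D initial-pressure estimate that
    contains artifacts. Stage 2: a CNN trained on (artifact image, ground
    truth) pairs removes them using learned a priori knowledge."""
    artifact_image = recon(channel_data)   # ill-posed reconstruction
    return cnn(artifact_image)             # learned artifact removal
```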

Using deep learning to remove sparse sampling artifacts

When the density of uniform tomographic view angles is below what is prescribed by the Nyquist-Shannon sampling theorem, the imaging system is said to be performing sparse sampling. Sparse sampling typically occurs as a way of keeping production costs low and improving image acquisition speed. [10] The typical network architectures used to remove sparse sampling artifacts are U-net [10] [12] and Fully Dense (FD) U-net. [11] Both of these architectures contain a compression and a decompression phase. The compression phase learns to compress the image to a latent representation that lacks the imaging artifacts and other details. [23] The decompression phase then combines this representation with information passed through the residual connections in order to restore image details without reintroducing the details associated with the artifacts. [23] FD U-net modifies the original U-net architecture by incorporating dense blocks that allow layers to utilize information learned by previous layers within the block. [11] Another proposed technique uses a simple CNN-based architecture to remove artifacts and improve the k-Wave image reconstruction. [17]
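A minimal PyTorch sketch of such a dense block is shown below; the channel counts, growth rate, and layer count are illustrative assumptions, and the published FD U-net block may differ in detail:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block in the FD U-net style: each convolution sees the
    concatenation of the block input and all earlier layer outputs."""
    def __init__(self, in_ch: int, growth: int = 16, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1),
                nn.BatchNorm2d(growth),
                nn.ReLU(),
            )
            for i in range(n_layers)
        )

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))  # reuse all earlier features
        return torch.cat(feats[1:], dim=1)                # output: newly learned features
```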

Removing limited-view artifacts with deep learning

When a portion of the solid angle of view is not captured, generally due to geometric limitations, the image acquisition is said to have limited view. [24] As illustrated by the experiments of Davoudi et al., [12] limited-view corruptions can be directly observed as missing information in the frequency domain of the reconstructed image. Limited view, like sparse sampling, makes the initial reconstruction algorithm ill-posed. Prior to deep learning, the limited-view problem was addressed with complex hardware such as acoustic deflectors [25] and full ring-shaped transducer arrays, [12] [26] as well as with solutions like compressed sensing, [27] [28] [29] [30] [31] weight factors, [32] and iterative filtered backprojection. [33] [34] The result of this ill-posed reconstruction is imaging artifacts that can be removed by CNNs. The deep learning algorithms used to remove limited-view artifacts include U-net [12] [15] [35] and FD U-net, [36] as well as generative adversarial networks (GANs) [14] and volumetric versions of U-net. [13] One notable GAN implementation improved upon U-net by using U-net as the generator and VGG as the discriminator, with the Wasserstein metric and a gradient penalty to stabilize training (WGAN-GP). [14]
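The gradient penalty constrains the critic (discriminator) to be approximately 1-Lipschitz by penalizing deviations of its gradient norm from 1 at random interpolates between real and generated images. A minimal PyTorch sketch of this standard WGAN-GP term, assuming a `critic` network that maps an image batch to one scalar score per image:

```python
import torch

def gradient_penalty(critic, real, fake, lam: float = 10.0):
    """WGAN-GP term: penalize the critic's gradient norm away from 1
    at random interpolates between real and generated images."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    inter = (eps * real + (1 - eps) * fake).requires_grad_(True)
    score = critic(inter).sum()
    grad, = torch.autograd.grad(score, inter, create_graph=True)
    return lam * ((grad.flatten(1).norm(dim=1) - 1) ** 2).mean()
```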

Pixel-wise interpolation and deep learning for faster reconstruction of limited-view signals

Guan et al. [36] applied an FD U-net to remove artifacts from simulated limited-view reconstructed PA images. PA images reconstructed via time reversal from data collected with 16, 32, or 64 sensors served as the network input, and the ground-truth images served as the desired output. The network was able to remove the artifacts created by the time-reversal process from synthetic, mouse brain, fundus, and lung vasculature phantoms. [36] This process was similar to the artifact-clearing work on sparse and limited-view images by Davoudi et al. [12] To improve the speed of reconstruction and allow the FD U-net to use more of the sensor information, Guan et al. proposed using a pixel-wise interpolation as the network input instead of a reconstructed image. [36] This removes the need to produce an initial image, which may lose small details or render them unrecoverable by obscuring them with artifacts. To create the pixel-wise interpolation, the time-of-flight for each pixel was calculated using the wave propagation equation. Next, a reconstruction grid was created from pressure measurements sampled at the pixels' times of flight. Using this reconstruction grid as the input, the FD U-net was able to create artifact-free reconstructed images. The pixel-wise interpolation method was faster and achieved better peak signal-to-noise ratios (PSNR) and structural similarity index measures (SSIM) than when the time-reversal images served as the input to the FD U-net. [36] It was also significantly faster than, with PSNR and SSIM comparable to, the computationally intensive iterative approach. [36] The pixel-wise method was only demonstrated for in silico experiments with a homogeneous medium, but Guan posits that it could be used for real-time PAT rendering. [36]
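A rough NumPy sketch of the pixel-wise interpolation idea: each sensor's trace is sampled at every pixel's time of flight, and the per-sensor maps are stacked rather than summed into one image, so the network still sees all of the sensor information. The array names, the uniform speed of sound, and the nearest-sample interpolation are illustrative assumptions; the published method may differ in detail.

```python
import numpy as np

def pixel_interp_grid(channel_data, sensor_pos, grid_x, grid_y, c, fs):
    """Per-sensor pixel-wise interpolation (illustrative sketch): sample each
    sensor's trace at every pixel's time of flight, returning a
    (n_sensors, Ny, Nx) stack that can feed a CNN directly, skipping an
    initial image reconstruction."""
    stack = []
    for (sx, sy), trace in zip(sensor_pos, channel_data):
        tof = np.hypot(grid_x[None, :] - sx, grid_y[:, None] - sy) / c
        idx = np.clip((tof * fs).astype(int), 0, trace.size - 1)
        stack.append(trace[idx])     # pressure sampled at each pixel's arrival time
    return np.stack(stack)
```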

Limited-bandwidth artifact removal with deep neural networks

The limited-bandwidth problem occurs as a result of the ultrasound transducer array's limited detection frequency bandwidth. The transducer array acts like a band-pass filter in the frequency domain, attenuating both high and low frequencies within the photoacoustic signal. [15] [16] This limited bandwidth can cause artifacts and limit the axial resolution of the imaging system. [14] The primary deep neural network architectures used to remove limited-bandwidth artifacts have been WGAN-GP [14] and modified U-net. [15] [16] Before deep learning, the typical method to remove artifacts and denoise limited-bandwidth reconstructions was Wiener filtering, which helps to expand the PA signal's frequency spectrum. [14] The primary advantage of the deep learning method over Wiener filtering is that Wiener filtering requires a high initial signal-to-noise ratio (SNR), which is not always achievable, while the deep learning model has no such restriction. [14]
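For reference, the classical Wiener baseline can be sketched as a frequency-domain deconvolution. Note that both the transducer impulse response and an SNR estimate must be supplied by the user, which is precisely the restriction the deep learning methods avoid:

```python
import numpy as np

def wiener_deconvolve(signal, impulse_response, snr: float):
    """1-D frequency-domain Wiener deconvolution of a band-limited PA signal.

    signal           : recorded (band-limited) PA trace
    impulse_response : assumed transducer impulse response
    snr              : assumed signal-to-noise power ratio (higher -> less damping)
    """
    H = np.fft.rfft(impulse_response, n=signal.size)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener inverse filter
    return np.fft.irfft(np.fft.rfft(signal) * G, n=signal.size)
```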

Fusion of information for improving photoacoustic images with deep neural networks

Fusion-based architectures exploit complementary information to improve photoacoustic image reconstruction. [20] Different reconstruction techniques promote different characteristics in the output, so image quality and characteristics vary with the technique used. [20] A fusion-based architecture was proposed to combine the outputs of two different reconstructions and deliver better image quality than either reconstruction alone. It includes weight sharing and fusion of characteristics to achieve the desired improvement in output image quality. [20]
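A minimal PyTorch sketch of a weight-sharing fusion network is shown below: a single shared encoder applied to both reconstructions realizes the weight sharing, while concatenation followed by convolution realizes the fusion. The layer sizes are illustrative, and this is not the exact PA-Fuse architecture. [20]

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Fuse two reconstructions of the same scene into one improved image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared weights: applied to both inputs
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.fuse = nn.Sequential(                    # fuse the concatenated features
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, recon_a, recon_b):
        f = torch.cat([self.encoder(recon_a), self.encoder(recon_b)], dim=1)
        return self.fuse(f)
```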

Deep learning to improve penetration depth of PA images

High-energy lasers deliver light deeper into tissue, making deeper structures visible in PA images; for wavelengths between 690 and 900 nm, they provide roughly 8 mm more penetration depth than low-energy lasers. [35] The American National Standards Institute has set a maximum permissible exposure (MPE) for different biological tissues, and lasers operated above the MPE can cause mechanical or thermal damage to the tissue they are imaging. [35] Manwar et al. increased the penetration depth achievable with low-energy lasers that meet the MPE standard by applying a U-net architecture to the images they produce. [35] The network was trained with images of an ex vivo sheep brain created with a low-energy (20 mJ) laser as the input and images of the same brain created with a high-energy (100 mJ, 20 mJ above the MPE) laser as the desired output. A perceptually sensitive loss function was used to train the network to improve the low signal-to-noise ratio of PA images created by the low-energy laser. The trained network increased the peak-to-background ratio by 4.19 dB and the penetration depth by 5.88% for low-energy-laser images of an in vivo sheep brain. [35] Manwar suggests that this technology could be beneficial in neonatal brain imaging, where transfontanelle imaging is possible, to look for lesions or injury.
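A common way to build a perceptually sensitive loss is to compare images in the feature space of a frozen pretrained network, so the loss responds to structure rather than per-pixel error. The sketch below uses VGG-16 features as an assumed example; the loss actually used by Manwar et al. may differ.

```python
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """MSE between frozen pretrained VGG-16 features of prediction and target."""
    def __init__(self, n_layers: int = 9):
        super().__init__()
        self.features = vgg16(weights="IMAGENET1K_V1").features[:n_layers].eval()
        for p in self.features.parameters():
            p.requires_grad_(False)               # keep the feature extractor frozen

    def forward(self, pred, target):              # inputs: (B, 3, H, W) in [0, 1]
        return nn.functional.mse_loss(self.features(pred), self.features(target))
```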

Applications of deep learning in PAM

[Figure: Depiction of the mechanical raster scanning method]

Photoacoustic microscopy differs from other forms of photoacoustic tomography in that it uses focused ultrasound detection to acquire images pixel by pixel. PAM images are acquired as time-resolved volumetric data that is typically mapped to a 2-D projection via a Hilbert transform and maximum amplitude projection (MAP). [1] The first application of deep learning to PAM took the form of a motion-correction algorithm. [37] This procedure was proposed to correct the PAM artifacts that occur when an in vivo model moves during scanning, movement that creates the appearance of vessel discontinuities.
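The Hilbert-transform plus MAP step can be written in a few lines of NumPy/SciPy; this sketch assumes the acquired volume is ordered (x, y, t):

```python
import numpy as np
from scipy.signal import hilbert

def pam_map(volume):
    """Map time-resolved PAM data (x, y, t) to a 2-D image: take the envelope
    of each A-line via the Hilbert transform (analytic-signal magnitude),
    then the maximum amplitude projection (MAP) along time."""
    envelope = np.abs(hilbert(volume, axis=-1))
    return envelope.max(axis=-1)
```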

Deep learning to remove motion artifacts in PAM

The two primary motion artifact types addressed by deep learning in PAM are displacements in the vertical and tilted directions. Chen et al. [37] used a simple three-layer convolutional neural network, with each layer represented by a weight matrix and a bias vector, to remove the PAM motion artifacts. Two of the convolutional layers contain ReLU activation functions, while the last has none. [37] Using this architecture, kernel sizes of 3 × 3, 4 × 4, and 5 × 5 were tested, with the largest kernel size of 5 × 5 yielding the best results. [37] After training, the motion-correction model performed well on both simulated and in vivo data. [37]
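A minimal stand-in for the described three-layer architecture, with illustrative channel counts:

```python
import torch.nn as nn

# Two ReLU convolutions followed by a linear output convolution, all with the
# 5 x 5 kernel that performed best in the reported experiments.
motion_net = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=5, padding=2),   # no activation on the last layer
)
```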

Deep learning-assisted frequency-domain PAM

[Figure: Noisy input, denoised output through U-Net, and averaged ground-truth frequency-domain PA amplitude images of two label-free Parhyale hawaiensis embryos. Yellow arrows indicate the cell membranes. Scale bars: 100 μm.]

Frequency-domain PAM is a powerful, cost-efficient imaging method that uses intensity-modulated laser beams emitted by continuous-wave sources to excite single-frequency PA signals. [38] Nevertheless, this approach generally provides a lower signal-to-noise ratio (SNR), up to two orders of magnitude below that of conventional time-domain systems. [39] To overcome this inherent SNR limitation, a U-Net neural network has been utilized to augment the generated images without the need for excessive averaging or the application of high optical power on the sample. In this context, the accessibility of PAM is improved, as the system's cost is dramatically reduced while image quality remains sufficient for demanding biological observations. [40]

See also

Photoacoustic imaging
Photoacoustic microscopy
Photoacoustic effect

References

  1. Wang, Lihong V. (2009-08-29). "Multiscale photoacoustic microscopy and computed tomography". Nature Photonics. 3 (9): 503–509. Bibcode:2009NaPho...3..503W. doi:10.1038/nphoton.2009.157. ISSN 1749-4885. PMC 2802217. PMID 20161535.
  2. Beard, Paul (2011-08-06). "Biomedical photoacoustic imaging". Interface Focus. 1 (4): 602–631. doi:10.1098/rsfs.2011.0028. ISSN 2042-8898. PMC 3262268. PMID 22866233.
  3. Xu, Minghua; Wang, Lihong V. (2005-01-19). "Universal back-projection algorithm for photoacoustic computed tomography". Physical Review E. 71 (1): 016706. Bibcode:2005PhRvE..71a6706X. doi:10.1103/PhysRevE.71.016706. hdl:1969.1/180492. PMID 15697763.
  4. Kalva, Sandeep Kumar; Pramanik, Manojit (August 2016). "Experimental validation of tangential resolution improvement in photoacoustic tomography using modified delay-and-sum reconstruction algorithm". Journal of Biomedical Optics. 21 (8): 086011. Bibcode:2016JBO....21h6011K. doi:10.1117/1.JBO.21.8.086011. hdl:10356/82178. ISSN 1083-3668. PMID 27548773.
  5. Bossy, Emmanuel; Daoudi, Khalid; Boccara, Albert-Claude; Tanter, Mickael; Aubry, Jean-François; Montaldo, Gabriel; Fink, Mathias (2006-10-30). "Time reversal of photoacoustic waves". Applied Physics Letters. 89 (18): 184108. Bibcode:2006ApPhL..89r4108B. doi:10.1063/1.2382732. ISSN 0003-6951. S2CID 121195599.
  6. Treeby, Bradley E; Zhang, Edward Z; Cox, B T (2010-09-24). "Photoacoustic tomography in absorbing acoustic media using time reversal". Inverse Problems. 26 (11): 115003. Bibcode:2010InvPr..26k5003T. doi:10.1088/0266-5611/26/11/115003. ISSN 0266-5611. S2CID 14745088.
  7. Wang, Lihong V.; Yao, Junjie (2016-07-28). "A Practical Guide to Photoacoustic Tomography in the Life Sciences". Nature Methods. 13 (8): 627–638. doi:10.1038/nmeth.3925. ISSN 1548-7091. PMC 4980387. PMID 27467726.
  8. Reiter, Austin; Bell, Muyinatu A. Lediju (2017-03-03). "A machine learning approach to identifying point source locations in photoacoustic data". In Oraevsky, Alexander A; Wang, Lihong V (eds.). Photons Plus Ultrasound: Imaging and Sensing 2017. Vol. 10064. International Society for Optics and Photonics. p. 100643J. Bibcode:2017SPIE10064E..3JR. doi:10.1117/12.2255098. S2CID 35030143.
  9. Allman, Derek; Reiter, Austin; Bell, Muyinatu A. Lediju (June 2018). "Photoacoustic Source Detection and Reflection Artifact Removal Enabled by Deep Learning". IEEE Transactions on Medical Imaging. 37 (6): 1464–1477. doi:10.1109/TMI.2018.2829662. ISSN 1558-254X. PMC 6075868. PMID 29870374.
  10. Antholzer, Stephan; Haltmeier, Markus; Schwab, Johannes (2019-07-03). "Deep learning for photoacoustic tomography from sparse data". Inverse Problems in Science and Engineering. 27 (7): 987–1005. doi:10.1080/17415977.2018.1518444. ISSN 1741-5977. PMC 6474723. PMID 31057659.
  11. Guan, Steven; Khan, Amir A.; Sikdar, Siddhartha; Chitnis, Parag V. (February 2020). "Fully Dense UNet for 2-D Sparse Photoacoustic Tomography Artifact Removal". IEEE Journal of Biomedical and Health Informatics. 24 (2): 568–576. arXiv:1808.10848. doi:10.1109/jbhi.2019.2912935. ISSN 2168-2194. PMID 31021809. S2CID 52143594.
  12. Davoudi, Neda; Deán-Ben, Xosé Luís; Razansky, Daniel (2019-09-16). "Deep learning optoacoustic tomography with sparse data". Nature Machine Intelligence. 1 (10): 453–460. doi:10.1038/s42256-019-0095-3. ISSN 2522-5839. S2CID 202640890.
  13. Hauptmann, Andreas; Lucka, Felix; Betcke, Marta; Huynh, Nam; Adler, Jonas; Cox, Ben; Beard, Paul; Ourselin, Sebastien; Arridge, Simon (June 2018). "Model-Based Learning for Accelerated, Limited-View 3-D Photoacoustic Tomography". IEEE Transactions on Medical Imaging. 37 (6): 1382–1393. doi:10.1109/TMI.2018.2820382. ISSN 1558-254X. PMC 7613684. PMID 29870367. S2CID 4321879.
  14. Vu, Tri; Li, Mucong; Humayun, Hannah; Zhou, Yuan; Yao, Junjie (2020-03-25). "A generative adversarial network for artifact removal in photoacoustic computed tomography with a linear-array transducer". Experimental Biology and Medicine. 245 (7): 597–605. doi:10.1177/1535370220914285. ISSN 1535-3702. PMC 7153213. PMID 32208974.
  15. Waibel, Dominik; Gröhl, Janek; Isensee, Fabian; Kirchner, Thomas; Maier-Hein, Klaus; Maier-Hein, Lena (2018-02-19). "Reconstruction of initial pressure from limited view photoacoustic images using deep learning". In Wang, Lihong V; Oraevsky, Alexander A (eds.). Photons Plus Ultrasound: Imaging and Sensing 2018. Vol. 10494. International Society for Optics and Photonics. p. 104942S. Bibcode:2018SPIE10494E..2SW. doi:10.1117/12.2288353. ISBN 9781510614734. S2CID 57745829.
  16. Awasthi, Navchetan (28 February 2020). "Deep Neural Network Based Sinogram Super-resolution and Bandwidth Enhancement for Limited-data Photoacoustic Tomography". IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 67 (12): 2660–2673. doi:10.1109/TUFFC.2020.2977210. hdl:10356/146551. PMID 32142429. S2CID 212621872.
  17. Awasthi, Navchetan; Pardasani, Rohit; Kalva, Sandeep Kumar; Pramanik, Manojit; Yalavarthy, Phaneendra K. (2020). "Sinogram super-resolution and denoising convolutional neural network (SRCN) for limited data photoacoustic tomography". arXiv:2001.06434 [eess.IV].
  18. Gutta, Sreedevi; Kadimesetty, Venkata Suryanarayana; Kalva, Sandeep Kumar; Pramanik, Manojit; Ganapathy, Sriram; Yalavarthy, Phaneendra K. (2017-11-02). "Deep neural network-based bandwidth enhancement of photoacoustic data". Journal of Biomedical Optics. 22 (11): 116001. Bibcode:2017JBO....22k6001G. doi:10.1117/1.jbo.22.11.116001. hdl:10356/86305. ISSN 1083-3668. PMID 29098811.
  19. Johnstonbaugh, Kerrick; Agrawal, Sumit; Durairaj, Deepit Abhishek; Fadden, Christopher; Dangi, Ajay; Karri, Sri Phani Krishna; Kothapalli, Sri-Rajasekhar (December 2020). "A Deep Learning approach to Photoacoustic Wavefront Localization in Deep-Tissue Medium". IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 67 (12): 2649–2659. doi:10.1109/tuffc.2020.2964698. ISSN 0885-3010. PMC 7769001. PMID 31944951.
  20. Awasthi, Navchetan (3 April 2019). "PA-Fuse: deep supervised approach for the fusion of photoacoustic images with distinct reconstruction characteristics". Biomedical Optics Express. 10 (5): 2227–2243. doi:10.1364/BOE.10.002227. PMC 6524595. PMID 31149371.
  21. Simonyan, Karen; Zisserman, Andrew (2015-04-10). "Very Deep Convolutional Networks for Large-Scale Image Recognition". arXiv:1409.1556 [cs.CV].
  22. Agranovsky, Mark; Kuchment, Peter (2007-08-28). "Uniqueness of reconstruction and an inversion procedure for thermoacoustic and photoacoustic tomography with variable sound speed". Inverse Problems. 23 (5): 2089–2102. arXiv:0706.0598. Bibcode:2007InvPr..23.2089A. doi:10.1088/0266-5611/23/5/016. ISSN 0266-5611. S2CID 17810059.
  23. Ronneberger, Olaf; Fischer, Philipp; Brox, Thomas (2015). "U-Net: Convolutional Networks for Biomedical Image Segmentation". Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science. Vol. 9351. Springer International Publishing. pp. 234–241. arXiv:1505.04597. doi:10.1007/978-3-319-24574-4_28. ISBN 978-3-319-24573-7. S2CID 3719281.
  24. Xu, Yuan; Wang, Lihong V.; Ambartsoumian, Gaik; Kuchment, Peter (2004-03-11). "Reconstructions in limited-view thermoacoustic tomography". Medical Physics. 31 (4): 724–733. Bibcode:2004MedPh..31..724X. doi:10.1118/1.1644531. ISSN 0094-2405. PMID 15124989.
  25. Huang, Bin; Xia, Jun; Maslov, Konstantin; Wang, Lihong V. (2013-11-27). "Improving limited-view photoacoustic tomography with an acoustic reflector". Journal of Biomedical Optics. 18 (11): 110505. Bibcode:2013JBO....18k0505H. doi:10.1117/1.jbo.18.11.110505. ISSN 1083-3668. PMC 3818029. PMID 24285421.
  26. Xia, Jun; Chatni, Muhammad R.; Maslov, Konstantin; Guo, Zijian; Wang, Kun; Anastasio, Mark; Wang, Lihong V. (2012). "Whole-body ring-shaped confocal photoacoustic computed tomography of small animals in vivo". Journal of Biomedical Optics. 17 (5): 050506. Bibcode:2012JBO....17e0506X. doi:10.1117/1.jbo.17.5.050506. ISSN 1083-3668. PMC 3382342. PMID 22612121.
  27. Sandbichler, M.; Krahmer, F.; Berer, T.; Burgholzer, P.; Haltmeier, M. (January 2015). "A Novel Compressed Sensing Scheme for Photoacoustic Tomography". SIAM Journal on Applied Mathematics. 75 (6): 2475–2494. arXiv:1501.04305. doi:10.1137/141001408. ISSN 0036-1399. S2CID 15701831.
  28. Provost, J.; Lesage, F. (April 2009). "The Application of Compressed Sensing for Photo-Acoustic Tomography". IEEE Transactions on Medical Imaging. 28 (4): 585–594. doi:10.1109/tmi.2008.2007825. ISSN 0278-0062. PMID 19272991. S2CID 11398335.
  29. Haltmeier, Markus; Sandbichler, Michael; Berer, Thomas; Bauer-Marschallinger, Johannes; Burgholzer, Peter; Nguyen, Linh (June 2018). "A sparsification and reconstruction strategy for compressed sensing photoacoustic tomography". The Journal of the Acoustical Society of America. 143 (6): 3838–3848. arXiv:1801.00117. Bibcode:2018ASAJ..143.3838H. doi:10.1121/1.5042230. ISSN 0001-4966. PMID 29960458. S2CID 49643233.
  30. Liang, Jinyang; Zhou, Yong; Winkler, Amy W.; Wang, Lidai; Maslov, Konstantin I.; Li, Chiye; Wang, Lihong V. (2013-07-22). "Random-access optical-resolution photoacoustic microscopy using a digital micromirror device". Optics Letters. 38 (15): 2683–2686. Bibcode:2013OptL...38.2683L. doi:10.1364/ol.38.002683. ISSN 0146-9592. PMC 3784350. PMID 23903111.
  31. Duarte, Marco F.; Davenport, Mark A.; Takhar, Dharmpal; Laska, Jason N.; Sun, Ting; Kelly, Kevin F.; Baraniuk, Richard G. (March 2008). "Single-pixel imaging via compressive sampling". IEEE Signal Processing Magazine. 25 (2): 83–91. Bibcode:2008ISPM...25...83D. doi:10.1109/msp.2007.914730. hdl:1911/21682. ISSN 1053-5888. S2CID 11454318.
  32. Paltauf, G; Nuster, R; Burgholzer, P (2009-05-08). "Weight factors for limited angle photoacoustic tomography". Physics in Medicine and Biology. 54 (11): 3303–3314. Bibcode:2009PMB....54.3303P. doi:10.1088/0031-9155/54/11/002. ISSN 0031-9155. PMC 3166844. PMID 19430108.
  33. Liu, Xueyan; Peng, Dong; Ma, Xibo; Guo, Wei; Liu, Zhenyu; Han, Dong; Yang, Xin; Tian, Jie (2013-05-14). "Limited-view photoacoustic imaging based on an iterative adaptive weighted filtered backprojection approach". Applied Optics. 52 (15): 3477–3483. Bibcode:2013ApOpt..52.3477L. doi:10.1364/ao.52.003477. ISSN 1559-128X. PMID 23736232.
  34. Ma, Songbo; Yang, Sihua; Guo, Hua (2009-12-15). "Limited-view photoacoustic imaging based on linear-array detection and filtered mean-backprojection-iterative reconstruction". Journal of Applied Physics. 106 (12): 123104. Bibcode:2009JAP...106l3104M. doi:10.1063/1.3273322. ISSN 0021-8979.
  35. Manwar, Rayyan; Li, Xin; Mahmoodkalayeh, Sadreddin; Asano, Eishi; Zhu, Dongxiao; Avanaki, Kamran (2020). "Deep learning protocol for improved photoacoustic brain imaging". Journal of Biophotonics. 13 (10): e202000212. doi:10.1002/jbio.202000212. ISSN 1864-0648. PMC 10906453. PMID 33405275. S2CID 224845812.
  36. Guan, Steven; Khan, Amir A.; Sikdar, Siddhartha; Chitnis, Parag V. (2020). "Limited View and Sparse Photoacoustic Tomography for Neuroimaging with Deep Learning". Scientific Reports. 10 (1): 8510. arXiv:1911.04357. Bibcode:2020NatSR..10.8510G. doi:10.1038/s41598-020-65235-2. PMC 7244747. PMID 32444649.
  37. Chen, Xingxing; Qi, Weizhi; Xi, Lei (2019-10-29). "Deep-learning-based motion-correction algorithm in optical resolution photoacoustic microscopy". Visual Computing for Industry, Biomedicine, and Art. 2 (1): 12. doi:10.1186/s42492-019-0022-9. ISSN 2524-4442. PMC 7099543. PMID 32240397.
  38. Tserevelakis, George J.; Mavrakis, Kostas G.; Kakakios, Nikitas; Zacharakis, Giannis (2021-10-01). "Full image reconstruction in frequency-domain photoacoustic microscopy by means of a low-cost I/Q demodulator". Optics Letters. 46 (19): 4718–4721. Bibcode:2021OptL...46.4718T. doi:10.1364/OL.435146. ISSN 0146-9592. PMID 34598182.
  39. Langer, Gregor; Buchegger, Bianca; Jacak, Jaroslaw; Klar, Thomas A.; Berer, Thomas (2016-07-01). "Frequency domain photoacoustic and fluorescence microscopy". Biomedical Optics Express. 7 (7): 2692–2702. doi:10.1364/BOE.7.002692. ISSN 2156-7085. PMC 4948622. PMID 27446698.
  40. Tserevelakis, George J.; Barmparis, Georgios D.; Kokosalis, Nikolaos; Giosa, Eirini Smaro; Pavlopoulos, Anastasios; Tsironis, Giorgos P.; Zacharakis, Giannis (2023-05-15). "Deep learning-assisted frequency-domain photoacoustic microscopy". Optics Letters. 48 (10): 2720–2723. Bibcode:2023OptL...48.2720T. doi:10.1364/OL.486624. ISSN 1539-4794. PMID 37186749. S2CID 258229033.
