Deep Learning Enabled Real-Time Photoacoustic Tomography System via Single Data Acquisition Channel
Abstract—Photoacoustic computed tomography (PACT) combines the contrast of optical imaging with the penetration depth of sonography. In this work, we develop a novel PACT system that provides real-time imaging with a 120-element ultrasound array but, for the first time, only a single data acquisition (DAQ) channel. To reduce the number of DAQ channels, we superimpose the signals of every 30 neighboring channels in the analog domain, shrinking the data to 4 channels (120/30 = 4). Furthermore, a four-to-one delay-line module is designed to combine these 4 channels into one channel before entering the single-channel DAQ, and the signals are decoupled after digitization. To reconstruct the image from the four superimposed 30-channel PA signals, we train a dedicated deep learning model. In this paper, we present the preliminary results of a phantom study, which demonstrate robust real-time imaging performance. The significance of this novel PACT system is that it dramatically reduces the cost of the multi-channel DAQ module (from 120 channels to 1 channel), paving the way to portable, low-cost, real-time PACT systems.
Hengrong Lan, Daohuai Jiang, Changchun Yang, Feng Gao, and Fei Gao, Member, IEEE

This research was funded by the Natural Science Foundation of Shanghai (18ZR1425000) and the National Natural Science Foundation of China (61805139). (Corresponding author: Fei Gao.) Hengrong Lan and Daohuai Jiang contributed equally to this work. Hengrong Lan, Daohuai Jiang, and Changchun Yang are with the Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China, also with the Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai 200050, China, and also with the University of Chinese Academy of Sciences, Beijing 100049, China. Feng Gao and Fei Gao are with the Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China (e-mail: [email protected]).

Index Terms—Photoacoustic computed tomography, deep learning, delay-line

I. INTRODUCTION

As a new imaging modality, photoacoustic computed tomography (PACT) has emerged to show great potential in biomedical imaging. It is based on the photoacoustic (PA) effect, in which a nanosecond pulsed laser generates ultrasound [1-4]. PACT blends the spatial resolution of ultrasound imaging with the high contrast of spectroscopic optical absorption. Ultrasonic detectors are placed around the object to receive the PA signals simultaneously, and a reconstruction algorithm then recovers the initial pressure distribution [5-7]. In recent years, PACT has been applied in many preclinical and clinical applications, such as small-animal functional imaging and early breast tumor detection [8-19]. Several PACT systems have been developed to image specific tissues (e.g. breast) or the whole body of small animals in real time, which is required for preclinical and clinical application [20-25]. Specifically, the reported PACT systems have mainly been improved in the following ways: (1) Increasing the number of transducers in a planar or hemispherical array scheme. Gamelin et al. proposed a real-time PACT system with 512 elements for small-animal imaging [21]; Lin et al. developed a single-breath-hold PACT breast imaging system delivering high image quality with a 512-element ultrasound probe [22]. (2) Combining another imaging modality (e.g. ultrasound imaging) with PACT in a linear-array scheme. Park et al. presented a PA, ultrasound, and magnetic resonance triple-mode system [23]. (3) Reducing the total system cost. S. K. Kalva et al. presented a fast-imaging PACT system based on a pulsed laser diode (LD) [24]; Zafar et al. developed a low-cost scanning system using just a few detectors [25]. However, reducing the number of detectors and DAQ channels may degrade image quality or slow the imaging speed (mechanical scanning is required).

Recently, deep learning has advanced rapidly in signal, image, and video processing, and deep learning methods have excelled in PA image reconstruction from raw data or imperfect images [26-32]. To the best of our knowledge, there is no PACT system that can achieve real-time imaging using only a single DAQ channel. In this paper, we report a novel low-cost PACT system, which achieves real-time imaging via only a single DAQ channel after PA signal superimposition and a 4-to-1 delay-line module in the analog domain, followed by deep-learning-based image reconstruction from the superimposed and delayed PA signals.

II. METHOD

A. Overview

The single-channel PACT system is shown in Fig. 1: a pulsed laser is controlled by a computer. Placed in the water tank, a 120-channel ring transducer array receives the 120 channels' PA signals in real time, which are fed into an analog summator module. By superimposing every 30 channels' PA signals into 1 channel, the 120 channels' PA signals are reduced to four channels. After pre-amplification, the four superimposed PA signals pass through a 4-to-1 delay-line module, which properly delays the four PA signals and sums them into one channel (preliminary results on the 4-to-1 delay-line module can be found in [33, 34]).

We summarize the flow diagram of the system operation in Fig. 2. The 120 channels' PA signals detected by the transducers are fed into the summator module, which superimposes them into 4 channels of data. These 4 channels' PA signals are fed into the delay-line module, which combines them into a one-channel PA signal. The single-channel DAQ then converts this one-channel PA signal into digital data. In the digital domain, the delayed signals are recovered back to four channels via simple signal processing and used to reconstruct the final PA image by a dedicated deep learning framework. The delay-line module and the deep-learning-based reconstruction framework are introduced in the following sections.
Fig. 2. The flow diagram of the operation of the proposed single-channel PACT system.

B. Four-to-one delay-line module
The four-to-one delay-line module merges four PA signals into one channel properly. To achieve this, we propose a time-sharing multiplexed transmission method for the PACT system. The PA signal is intrinsically a very short ultrasound pulse, whose duration is usually less than 50 microseconds. Therefore, four PA signals with different and sufficient time delays (e.g. 0 μs, 50 μs, 100 μs, 150 μs) can be merged into one composite signal, which can be recovered in the digital domain. To achieve tens of microseconds of time delay for analog pulse signals, we propose and fabricate a four-to-one delay-line module based on the acoustic delay method.

Fig. 3. The schematic of the four-to-one delay-line module: (a) the structure of the delay unit, UT: ultrasonic transmitter, UR: ultrasonic receiver, TM: ultrasound transmission medium; (b) the structure of the four-to-one delay-line module.
The schematic of the delay-line module is shown in Fig. 3, including three delay units and a multichannel adder in Fig. 3(b). Fig. 3 (a) shows the delay unit structure, which includes two ultrasound transducers to transfer the signals between electrical and ultrasound modes, i.e. one for transmitting and another for receiving. The delay time depends on the length of the ultrasound transmission medium. The relationship between the length of the transmission medium and delay time can be calculated easily with the fixed ultrasound propagation speed in
Fig. 1. The overview of the proposed single-channel PACT system. PC: personal computer. Pre-amp: Pre-amplifier
this medium. Fig. 3(b) shows the structure of the four-to-one delay-line module; the delay time of each delay unit is different and constant. By applying this module, the four coinstantaneous PA signals, given different time delays, can be merged into one output by an analog signal summator. In this work, the delay times of the three units are 50, 100 and 150 microseconds, respectively. The composite signal can be reconstructed into four independent signals with a time-shifting operation in the digital domain. The signal reconstruction can basically be separated into two steps. As shown in Fig. 4, Fig. 4(a) is the four-to-one composite signal, Fig. 4(b) shows the separated signals, and Fig. 4(c) shows the reconstructed signals after the time-shifting operation.
Fig. 4. The separation process for the delay-line module. (a) the one channel four-to-one PA signal; (b) four channels separated delayed PA signals; (c) four channels recovered PA signals. C. Deep learning reconstruction architecture
The deep learning architecture to reconstruct the PA image is shown in Fig. 5, which takes the four superimposed PA signals coming from the abovementioned delay-line module as input and outputs the reconstructed PA image. As shown in Fig. 5, the encoder comprises a long short-term memory (LSTM) network and a fully connected layer, which encode the input signals into a 64-dimensional feature before the decoder. It is noteworthy that this 64-dimensional feature is reshaped to 8×8 before being fed into the decoder. The decoder consists of four up-sampling layers, each comprising an up-sampling operation and two convolutions with batch normalization and leaky Rectified Linear Unit (ReLU) activations. Afterward, we obtain the final image through a residual block (Res-block) [35]. Considering that the input is extremely asymmetric, with four spatial channels but 2048 temporal samples, we apply a recurrent neural network (RNN) to process the spatio-temporal data and extract semantic information, which is passed to fully connected layers that encode the semantic features. The semantic feature from the encoder is fed into the decoder after a reshaping operation. The decoder converts the semantic features into an image through the four up-sampling layers:

UP(x) = ReLU{w_2 * ReLU[w_1 * up(x)]},   (1)

where up(·) is the up-sampling operation, w_1 and w_2 are the weights of the two convolutions, and the leaky ReLU activation function is

f(x) = x for x > 0, and f(x) = l·x for x ≤ 0,   (2)

where l is the leakage coefficient, equal to 0.2 in this work. After that, the image features are fed into the Res-block:

Res(x) = ReLU{w_3 * x + w_2 * ReLU(w_1 * x)}.   (3)

Considering that deep learning is a data-driven method, we need plenty of training data. Since current PAI equipment is still not widely available in the clinic, we have to use synthetic data generated by the MATLAB toolbox k-Wave [36]. We train the network with the mean square error (MSE) loss

L_rec(y) = (1/2) ||y − gt||_F^2,   (4)

where gt and y denote the ground-truth and output images, respectively.
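Equations (2) and (4) are simple to express directly. Below is a minimal NumPy sketch (not the authors' PyTorch implementation) of the leaky ReLU with l = 0.2 and the MSE-style reconstruction loss:

```python
import numpy as np

def leaky_relu(x, l=0.2):
    """Eq. (2): identity for x > 0, scaled by the leakage l for x <= 0."""
    return np.where(x > 0, x, l * x)

def rec_loss(y, gt):
    """Eq. (4): half the squared Frobenius norm of the residual y - gt."""
    return 0.5 * np.sum((y - gt) ** 2)

x = np.array([-1.0, 0.0, 2.0])
print(leaky_relu(x))  # elementwise: [-0.2, 0.0, 2.0]
print(rec_loss(np.ones((2, 2)), np.zeros((2, 2))))  # 2.0
```

The small leakage keeps a nonzero gradient for negative activations, which helps training compared with a plain ReLU.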
In this paper, PyTorch is used to implement the deep learning method. The network is trained on a hardware platform consisting of two Intel Xeon E5-2690 (2.6 GHz) CPUs and four NVIDIA GTX 1080Ti graphics cards. The batch size is set to 64, and the initial learning rate is 0.005. The optimization algorithm selected in this paper is Adam [37].

III. EXPERIMENTS
The deep-learning-based method requires plenty of data for training, so we describe the generation of the data set via simulation in Section III.A. We train our deep learning model on this training set, and then demonstrate our system in a phantom experiment.

A. Deep learning model training
We use the MATLAB toolbox k-Wave to generate the data, using numerical phantoms consisting of four discs. The discs are randomly placed in a region of interest (ROI) of 38.4×38.4 mm, and the size of each disc is randomly set from 0.75 mm to 2.25 mm. 120 sensors are evenly placed on a circle; their center frequency is 7.5 MHz with 80% fractional bandwidth. The speed of sound is 1500 m/s, as in soft tissue. Each transducer channel receives 2048 data points, and we then superimpose every 30 channels' data, leading to four channels of superimposed PA signals. Each sample therefore has size 4×2048. Finally, we obtain 4500 training samples and 100 test samples.

B. Phantom experiment
We further demonstrate our proposed PACT system in a phantom experiment. We printed four black balls with a 3D printer and fabricated an agarose gel phantom with these four balls inside. As shown in Fig. 1, the 120-element transducer array (7.5 MHz, Doppler Inc.) is placed surrounding the phantom. A pulsed laser (532 nm wavelength, 450 mJ pulse energy, 10 Hz repetition rate) illuminates the phantom, and only one acquisition channel of an oscilloscope (DPO5204B, Tektronix) is used to collect the PA data after the superimposition operation,
delay-line module, and amplification (AMP16t, PhotoSound). Moreover, we also compare the time consumption of the conventional single-channel PACT system with that of our system.

IV. RESULTS

A. Simulation results
We show two test-set samples in Fig. 6, each containing four discs. Fig. 6(a) and Fig. 6(c) are the ground-truths of these samples; Fig. 6(b) and Fig. 6(d) are the corresponding results reconstructed by the proposed deep learning framework. The results show some blur and deformation, as indicated by the yellow arrows in Fig. 6(b) and 6(d), but the locations and sizes are preserved quite well, with satisfactory contrast and resolution.
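Each network input behind these reconstructions is the 30-to-1 channel superimposition described in Section III.A. A minimal NumPy stand-in for that step (sizes taken from the simulation setup; the random data is purely illustrative):

```python
import numpy as np

def superimpose(sensor_data, group=30):
    """Sum every `group` adjacent channels: (120, 2048) -> (4, 2048)."""
    n_ch, n_t = sensor_data.shape
    assert n_ch % group == 0
    return sensor_data.reshape(n_ch // group, group, n_t).sum(axis=1)

# Illustrative stand-in for k-Wave sensor data: 120 channels x 2048 samples
raw = np.random.default_rng(1).standard_normal((120, 2048))
x = superimpose(raw)
print(x.shape)  # (4, 2048)
```

The reshape-then-sum keeps the channel grouping contiguous, matching the analog summator that adds 30 neighboring array elements.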
Fig. 6. The reconstructed results of test samples. (a) ground-truth of sample 1; (b) reconstructed result of sample 1; (c) ground-truth of sample 2; (d) reconstructed result of sample 2. The yellow arrows indicate the blurs and deformations in the results.

B. Phantom experiment results
We further demonstrated our system using the black-ball phantom with a different distribution; the one-channel raw PA data and the recovered four channels' PA data are plotted in Fig. 7. The one channel of superimposed and delayed PA data received by the DAQ is shown in Fig. 7(a), with a sufficient 50 µs delay between channels. Figs. 7(b)-(e) are the four channels of PA data recovered from Fig. 7(a); each channel's PA data is the superimposition of 30 channels' raw PA signals. The phantom imaging result is shown in Fig. 8, which shows a good match between the phantom's photograph (Fig. 8(a)) and the reconstructed PA image (Fig. 8(b)).

We further compare the time consumption of our proposed PACT system with that of a conventional single-channel PACT system (e.g. [17]). We divide the operation procedure into data acquisition and image processing. Data acquisition includes PA signal detection and processing before entering the DAQ; image processing includes signal recovery and image reconstruction after the PA signal is digitized by the DAQ. A conventional single-channel PACT system needs to mechanically rotate the transducer to 120 positions for repeated PA signal detection, which is quite time-consuming (261.6 seconds for 120 positions' PA signal detection in Table I). Image reconstruction in the conventional PACT system uses the delay-and-sum (DAS) algorithm. In contrast, our proposed PACT system acquires all the PA data within 2.35 ms using only a single-channel DAQ, followed by a deep-learning-based image reconstruction algorithm that is much faster than DAS (28 ms vs. 159 ms). From the total time consumption shown in Table I, our proposed PACT system is nearly 8600 times faster than the conventional single-channel PACT system. This shows the great potential of our proposed PACT system for real-time PA imaging with significantly lower DAQ cost using a single channel.
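The stated speedup follows directly from the timing numbers; a quick sanity calculation:

```python
# Totals = data acquisition + image processing, from the timing comparison
ours = 2.35e-3 + 28e-3        # 30.35 ms for the proposed system
conventional = 261.6 + 0.159  # 261.759 s for the conventional system
speedup = conventional / ours
print(round(speedup))  # ~8625, i.e. "nearly 8600 times faster"
```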
Last but not least, the PA image reconstructed by the deep-learning-based algorithm shows much fewer artifacts compared with the conventional DAS algorithm (Fig. 8(c)).

TABLE I. The time consumption of the proposed system and the conventional single-channel PACT system.

Time consumption   | Proposed single-channel PACT system | Conventional single-channel PACT system
Data acquisition   | 2.35 ms                             | 261.6 s
Image processing   | 28 ms                               | 159 ms
Total              | 30.35 ms                            | 261.759 s

V. CONCLUSIONS
In this paper, we developed a novel low-cost real-time PACT system that collects 120 channels' PA data through only a single data acquisition channel. The 30-to-1 superimposition reduces the 120 channels' PA data to four channels, and the four-to-one delay-line module further combines the four channels into one channel of PA data. To prove the feasibility of the system, we reconstructed the disc-like phantom from the four superimposed PA signals, benefiting from our proposed deep learning approach. The phantom result, demonstrated on our proposed PACT system, shows robust performance with much fewer artifacts compared with the conventional PACT system. Furthermore, we achieve real-time imaging, which has never been achieved by a conventional single-channel PACT system. In future work, we will further improve the system by using an economical laser source for even lower cost, and apply our system to vessels and other in vivo imaging applications.

Fig. 5. The overview of the proposed deep learning architecture: the superimposed signals are converted to semantic features by the LSTM and reshaped as 256 features. The final three blue features contain a residual operation. LSTM: long short-term memory, FC: fully connected layer.

REFERENCES

[1] L. V. Wang, "Tutorial on Photoacoustic Microscopy and Computed Tomography,"
IEEE Journal of Selected Topics in Quantum Electronics, vol. 14, no. 1, pp. 171-179, 2008, doi: 10.1109/jstqe.2007.913398. [2] L. V. Wang and J. Yao, "A practical guide to photoacoustic tomography in the life sciences,"
Nat Methods, vol. 13, no. 8, pp. 627-38, Jul 28 2016, doi: 10.1038/nmeth.3925. [3] H. Zhong, T. Duan, H. Lan, M. Zhou, and F. Gao, "Review of Low-Cost Photoacoustic Sensing and Imaging Based on Laser Diode and Light-Emitting Diode,"
Sensors (Basel), vol. 18, no. 7, Jul 13 2018, doi: 10.3390/s18072264. [4] Y. Zhou, J. Yao, and L. V. Wang, "Tutorial on photoacoustic tomography,"
J Biomed Opt, vol. 21, no. 6, p. 61007, Jun 2016, doi: 10.1117/1.JBO.21.6.061007. [5] Y. Dong, T. Görner, and S. Kunis, "An algorithm for total variation regularized photoacoustic imaging,"
Advances in Computational Mathematics, vol. 41, no. 2, pp. 423-438, 2014, doi: 10.1007/s10444-014-9364-1. [6] S. R. Arridge, M. M. Betcke, B. T. Cox, F. Lucka, and B. E. Treeby, "On the adjoint operator in photoacoustic tomography,"
Inverse Problems, vol. 32, no. 11, 2016, doi: 10.1088/0266-5611/32/11/115012. [7] M. Xu and L. V. Wang, "Universal back-projection algorithm for photoacoustic computed tomography,"
Phys Rev E Stat Nonlin Soft Matter Phys, vol. 71, no. 1 Pt 2, p. 016706, Jan 2005, doi: 10.1103/PhysRevE.71.016706. [8] F. Gao, Q. Peng, X. Feng, B. Gao, and Y. Zheng, "Single-Wavelength Blood Oxygen Saturation Sensing With Combined Optical Absorption and Scattering,"
IEEE Sensors Journal, vol. 16, no. 7, pp. 1943-1948, 2016, doi: 10.1109/jsen.2015.2510744. [9] Y. Wang, S. Hu, K. Maslov, Y. Zhang, Y. Xia, and L. V. Wang, "In vivo integrated photoacoustic and confocal microscopy of hemoglobin oxygen saturation and oxygen partial pressure,"
Optics letters, vol. 36, no. 7, pp. 1029-1031, 2011. [10] X. Wang, X. Xie, G. Ku, L. V. Wang, and G. Stoica, "Noninvasive imaging of hemoglobin concentration and oxygenation in the rat brain using high-resolution photoacoustic tomography,"
Journal of biomedical optics, vol. 11, no. 2, p. 024015, 2006. [11] J. Shah, S. Park, S. Aglyamov, T. Larson, L. Ma, K. Sokolov et al., "Photoacoustic imaging and temperature measurement for photothermal cancer therapy,"
Journal of biomedical optics, vol. 13, no. 3, p. 034024, 2008. [12] V. P. Zharov, E. I. Galanzha, E. V. Shashkov, N. G. Khlebtsov, and V. V. Tuchin, "In vivo photoacoustic flow cytometry for monitoring of circulating single cancer cells and contrast agents,"
Optics letters, vol. 31, no. 24, pp. 3623-3625, 2006.
Fig. 7. (a) PA data received by one-channel DAQ; (b) recovered first superimposed PA data; (c) recovered second superimposed PA data; (d) recovered third superimposed PA data; (e) recovered fourth superimposed PA data.
Fig.8. (a) The photograph of the phantom, (b) The reconstructed image result of the black-ball phantom by our proposed deep-learning based algorithm, (c) The conventional single-channel PACT reconstructed result.
[13] W. Liu, B. Lan, L. Hu, R. Chen, Q. Zhou, and J. Yao, "Photoacoustic thermal flowmetry with a single light source,"
J Biomed Opt, vol. 22, no. 9, pp. 1-6, Sep 2017, doi: 10.1117/1.JBO.22.9.096001. [14] F. Gao, X. Feng, R. Zhang, S. Liu, R. Ding, R. Kishor et al. , "Single laser pulse generates dual photoacoustic signals for differential contrast photoacoustic imaging,"
Sci Rep, vol. 7, no. 1, p. 626, Apr 04 2017, doi: 10.1038/s41598-017-00725-4. [15] B. Yan, H. Qin, C. Huang, C. Li, Q. Chen, and D. Xing, "Single-wavelength excited photoacoustic-fluorescence microscopy for in vivo pH mapping,"
Opt Lett, vol. 42, no. 7, pp. 1253-1256, Apr 01 2017, doi: 10.1364/OL.42.001253. [16] A. Mandelis, "Imaging cancer with photoacoustic radar,"
Physics Today, vol. 70, no. 5, pp. 42-48, 2017, doi: 10.1063/pt.3.3554. [17] H. Lan, T. Duan, H. Zhong, M. Zhou, and F. Gao, "Photoacoustic Classification of Tumor Model Morphology Based on Support Vector Machine: A Simulation and Phantom Study,"
IEEE Journal of Selected Topics in Quantum Electronics, vol. 25, no. 1, pp. 1-9, 2019, doi: 10.1109/jstqe.2018.2856583. [18] M. Pramanik, G. Ku, C. Li, and L. V. Wang, "Design and evaluation of a novel breast cancer detection system combining both thermoacoustic (TA) and photoacoustic (PA) tomography,"
Medical physics, vol. 35, no. 6Part1, pp. 2218-2223, 2008. [19] H. Lan, T. Duan, D. Jiang, H. Zhong, M. Zhou, and F. Gao, "Dual-contrast nonlinear photoacoustic sensing and imaging based on single high-repetition-rate pulsed laser,"
IEEE Sensors Journal, pp. 1-1, 2019, doi: 10.1109/jsen.2019.2902849. [20] L. Li, L. Zhu, C. Ma, L. Lin, J. Yao, L. Wang et al. , "Single-impulse panoramic photoacoustic computed tomography of small-animal whole-body dynamics at high spatiotemporal resolution,"
Nature Biomedical Engineering, vol. 1, no. 5, p. 0071, 2017, doi: 10.1038/s41551-017-0071. [21] J. Gamelin, A. Maurudis, A. Aguirre, F. Huang, P. Guo, L. V. Wang et al. , "A real-time photoacoustic tomography system for small animals,"
Optics express, vol. 17, no. 13, pp. 10489-10498, 2009. [22] L. Lin, P. Hu, J. Shi, C. M. Appleton, K. Maslov, L. Li et al. , "Single-breath-hold photoacoustic computed tomography of the breast,"
Nat Commun, vol. 9, no. 1, p. 2352, Jun 15 2018, doi: 10.1038/s41467-018-04576-z. [23] S. Park, J. Jang, J. Kim, Y. S. Kim, and C. Kim, "Real-time Triple-modal Photoacoustic, Ultrasound, and Magnetic Resonance Fusion Imaging of Humans,"
IEEE Trans Med Imaging,
Apr 24 2017, doi: 10.1109/TMI.2017.2696038. [24] S. K. Kalva, P. K. Upputuri, and M. Pramanik, "High-speed, low-cost, pulsed-laser-diode-based second-generation desktop photoacoustic tomography system,"
Opt Lett, vol. 44, no. 1, pp. 81-84, Jan 1 2019, doi: 10.1364/OL.44.000081. [25] M. Zafar, K. Kratkiewicz, R. Manwar, and M. Avanaki, "Development of Low-Cost Fast Photoacoustic Computed Tomography: System Characterization and Phantom Study,"
Applied Sciences, vol. 9, no. 3, 2019, doi: 10.3390/app9030374. [26] H. Lan, K. Zhou, C. Yang, J. Cheng, J. Liu, S. Gao et al. , "Ki-GAN: Knowledge Infusion Generative Adversarial Network for Photoacoustic Image Reconstruction In Vivo," in
Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, Lecture Notes in Computer Science, 2019, pp. 273-281. [27] A. Hauptmann, F. Lucka, M. Betcke, N. Huynh, J. Adler, B. Cox et al., "Model-Based Learning for Accelerated, Limited-View 3-D Photoacoustic Tomography,"
IEEE Trans Med Imaging, vol. 37, no. 6, pp. 1382-1393, Jun 2018, doi: 10.1109/TMI.2018.2820382. [28] H. Lan, K. Zhou, C. Yang, J. Liu, S. Gao, and F. Gao, "Hybrid Neural Network for Photoacoustic Imaging Reconstruction," in , 2019: IEEE. [29] Y. E. Boink, S. Manohar, and C. Brune, "A partially learned algorithm for joint photoacoustic reconstruction and segmentation," arXiv preprint arXiv:1906.07499, 2019.
[30] Opt Lett, vol. 43, no. 12, pp. 2752-2755, Jun 15 2018, doi: 10.1364/OL.43.002752. [31] L. V. Wang, A. A. Oraevsky, L. Maier-Hein, K. Maier-Hein, T. Kirchner, F. Isensee et al., "Reconstruction of initial pressure from limited view photoacoustic images using deep learning," presented at Photons Plus Ultrasound: Imaging and Sensing 2018, 2018. [32] C. Yang and F. Gao, "EDA-Net: Dense Aggregation of Deep and Shallow Information Achieves Quantitative Photoacoustic Blood Oxygenation Imaging Deep in Human Breast," in
Medical Image Computing and Computer Assisted Intervention – MICCAI 2019, Lecture Notes in Computer Science, 2019, pp. 246-254. [33] D. Jiang, H. Lan, H. Zhong, Y. Zhao, H. Li, and F. Gao, "Low-Cost Photoacoustic Tomography System Based on Multi-Channel Delay-Line Module,"
IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 66, no. 5, pp. 778-782, 2019, doi: 10.1109/tcsii.2019.2908432. [34] D. Jiang, Y. Xu, Y. Zhao, H. Lan, and F. Gao, "Low-Cost Photoacoustic Tomography System Based on Water-Made Acoustic Delay-Line," presented at the 2019 IEEE International Ultrasonics Symposium (IUS), Glasgow, United Kingdom, 2019. [35] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in
Proceedings of the IEEE conference on computer vision and pattern recognition , 2016, pp. 770-778. [36] B. E. Treeby and B. T. Cox, "k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,"
Journal of biomedical optics, vol. 15, no. 2, p. 021314, 2010. [37] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.