Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yichen Wu is active.

Publication


Featured research published by Yichen Wu.


Scientific Reports | 2016

Color calibration and fusion of lens-free and mobile-phone microscopy images for high-resolution and accurate color reproduction

Yibo Zhang; Yichen Wu; Yun Zhang; Aydogan Ozcan

Lens-free holographic microscopy can achieve wide-field imaging in a cost-effective and field-portable setup, making it a promising technique for point-of-care and telepathology applications. However, due to relatively narrow-band sources used in holographic microscopy, conventional colorization methods that use images reconstructed at discrete wavelengths, corresponding to e.g., red (R), green (G) and blue (B) channels, are subject to color artifacts. Furthermore, these existing RGB colorization methods do not match the chromatic perception of human vision. Here we present a high-color-fidelity and high-resolution imaging method, termed “digital color fusion microscopy” (DCFM), which fuses a holographic image acquired at a single wavelength with a color-calibrated image taken by a low-magnification lens-based microscope using a wavelet transform-based colorization method. We demonstrate accurate color reproduction of DCFM by imaging stained tissue sections. In particular we show that a lens-free holographic microscope in combination with a cost-effective mobile-phone-based microscope can generate color images of specimens, performing very close to a high numerical-aperture (NA) benchtop microscope that is corrected for color distortions and chromatic aberrations, also matching the chromatic response of human vision. This method can be useful for wide-field imaging needs in telepathology applications and in resource-limited settings, where whole-slide scanning microscopy systems are not available.
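
As a rough illustration of the wavelet-based fusion idea (not the published DCFM pipeline), the sketch below takes the fine luminance detail from the high-resolution single-wavelength reconstruction and the coarse brightness and chroma from the color-calibrated mobile-phone image; the wavelet, decomposition level, and array names are assumptions.

```python
# Rough sketch of wavelet-based color fusion (assumed wavelet/level, not the
# published DCFM implementation): detail coefficients come from the
# high-resolution holographic reconstruction, while the coarse approximation
# and the chroma channels come from the color-calibrated lens-based image.
import numpy as np
import pywt
from skimage.color import rgb2ycbcr, ycbcr2rgb

def fuse_dcfm_sketch(holo_gray, color_rgb, wavelet="db4", level=3):
    """holo_gray: single-wavelength reconstruction; color_rgb: color-calibrated
    low-magnification image of the same field of view, both floats in [0, 1]."""
    ycbcr = rgb2ycbcr(color_rgb)                   # separate luminance and chroma
    y_color = ycbcr[..., 0]

    # Match the hologram's brightness statistics to the color image's Y channel.
    y_holo = (holo_gray - holo_gray.mean()) / (holo_gray.std() + 1e-8)
    y_holo = y_holo * y_color.std() + y_color.mean()

    # Coarse approximation from the color image, detail bands from the hologram.
    c_holo = pywt.wavedec2(y_holo, wavelet, level=level)
    c_color = pywt.wavedec2(y_color, wavelet, level=level)
    y_fused = pywt.waverec2([c_color[0]] + c_holo[1:], wavelet)

    ycbcr[..., 0] = y_fused[: y_color.shape[0], : y_color.shape[1]]
    return np.clip(ycbcr2rgb(ycbcr), 0.0, 1.0)
```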


Scientific Reports | 2016

Sparsity-based multi-height phase recovery in holographic microscopy

Yair Rivenson; Yichen Wu; Hongda Wang; Yibo Zhang; Alborz Feizi; Aydogan Ozcan

High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6–8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in wavelet domain to achieve at least 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm2 using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths to increase the throughput and speed of coherent imaging.
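
The sparsity constraint at the heart of this approach can be illustrated as soft-thresholding the object estimate's wavelet coefficients between multi-height propagation updates; the wavelet, level, and threshold below are illustrative assumptions, not the published parameters.

```python
# Rough sketch of the sparsity-promoting step (assumed wavelet, level and
# threshold): soft-threshold the wavelet coefficients of the current complex
# object estimate between multi-height propagation/measurement updates.
import numpy as np
import pywt

def wavelet_soft_threshold(field, wavelet="db4", level=3, thresh=0.02):
    """Apply wavelet-domain soft-thresholding to a complex field estimate."""
    def shrink(x):
        coeffs = pywt.wavedec2(x, wavelet, level=level)
        shrunk = [coeffs[0]] + [
            tuple(pywt.threshold(c, thresh, mode="soft") for c in band)
            for band in coeffs[1:]
        ]
        return pywt.waverec2(shrunk, wavelet)[: x.shape[0], : x.shape[1]]

    # Real and imaginary parts are shrunk separately.
    return shrink(field.real) + 1j * shrink(field.imag)
```

Image quality obtained with fewer holograms can then be compared against a multi-hologram reference using a standard SSIM implementation such as skimage.metrics.structural_similarity, matching the metric cited in the abstract.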


Scientific Reports | 2016

Demosaiced pixel super-resolution for multiplexed holographic color imaging.

Yichen Wu; Yibo Zhang; Wei Luo; Aydogan Ozcan

To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications, where high-resolution imaging and multi-wavelength illumination are desired.
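
The de-multiplexing step can be illustrated as per-pixel linear unmixing with a 3x3 crosstalk matrix built from the known Bayer filter transmissions at the three illumination wavelengths; the matrix values below are hypothetical placeholders, and the pixel super-resolution part of D-PSR is omitted from this simplified sketch.

```python
# Minimal sketch of spectral de-multiplexing with a known Bayer crosstalk matrix.
# The matrix entries are placeholders; in practice they come from the measured
# transmission spectra of the sensor's R, G, B filters at the three illumination
# wavelengths. Pixel super-resolution is not included here.
import numpy as np

# W[i, j] = response of Bayer channel i (R, G, B) to illumination wavelength j.
W = np.array([
    [0.85, 0.10, 0.03],   # red pixel response    (placeholder values)
    [0.12, 0.80, 0.08],   # green pixel response  (placeholder values)
    [0.02, 0.15, 0.90],   # blue pixel response   (placeholder values)
])

def demultiplex(bayer_rgb):
    """bayer_rgb: (H, W, 3) measured R, G, B intensities (e.g., after a simple
    demosaic). Returns (H, W, 3) estimated holograms, one per wavelength."""
    h, w, _ = bayer_rgb.shape
    mixed = bayer_rgb.reshape(-1, 3).T            # shape (3, H*W)
    unmixed = np.linalg.solve(W, mixed)           # invert the 3x3 mixing model
    return unmixed.T.reshape(h, w, 3)
```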


Light-Science & Applications | 2017

Air quality monitoring using mobile microscopy and machine learning

Yichen Wu; Ashutosh Shiledar; Yicheng Li; Jeffrey Wong; Steve Feng; X. D. Chen; C. H. Chen; Kevin Jin; Saba Janamian; Zhe Yang; Zachary S. Ballard; Zoltán Göröcs; Alborz Feizi; Aydogan Ozcan

Rapid, accurate and high-throughput sizing and quantification of particulate matter (PM) in air is crucial for monitoring and improving air quality. In fact, particles in air with a diameter of ≤2.5 μm have been classified as carcinogenic by the World Health Organization. Here we present a field-portable cost-effective platform for high-throughput quantification of particulate matter using computational lens-free microscopy and machine learning. This platform, termed c-Air, is also integrated with a smartphone application for device control and display of results. This mobile device rapidly screens 6.5 L of air in 30 s and generates microscopic images of the aerosols in air. It provides statistics of the particle size and density distribution with a sizing accuracy of ~93%. We tested this mobile platform by measuring the air quality at different indoor and outdoor environments and measurement times, and compared our results to those of an Environmental Protection Agency–approved device based on beta-attenuation monitoring, which showed strong correlation to c-Air measurements. Furthermore, we used c-Air to map the air quality around Los Angeles International Airport (LAX) over 24 h to confirm that the impact of LAX on increased PM concentration was present even at >7 km away from the airport, especially along the direction of landing flights. With its machine-learning-based computational microscopy interface, c-Air can be adaptively tailored to detect specific particles in air, for example, various types of pollen and mold, and provide a cost-effective mobile solution for highly accurate and distributed sensing of air quality.
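
As a small illustration of the reported statistics, the size and density distribution can be derived from the detected particle diameters and the 6.5 L sampled air volume; the bin edges and function names below are assumptions, not part of the published c-Air software.

```python
# Minimal sketch of turning detected particle diameters into size/density
# statistics for a c-Air-like device. The 6.5 L sample volume follows the text;
# the bin edges and variable names are illustrative.
import numpy as np

def particle_statistics(diameters_um, sampled_volume_L=6.5,
                        bin_edges_um=(0.0, 1.0, 2.5, 10.0, 100.0)):
    """diameters_um: array of measured particle diameters in micrometers."""
    diameters_um = np.asarray(diameters_um, dtype=float)
    counts, _ = np.histogram(diameters_um, bins=bin_edges_um)
    density_per_L = counts / sampled_volume_L     # particles per liter of air
    pm25_count = int(np.sum(diameters_um <= 2.5)) # particles of diameter <= 2.5 um
    return {"counts_per_bin": counts,
            "density_per_L": density_per_L,
            "pm2.5_count": pm25_count}
```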


arXiv: Computer Vision and Pattern Recognition | 2018

Extended depth-of-field in holographic image reconstruction using deep learning based auto-focusing and phase-recovery.

Yichen Wu; Yair Rivenson; Yibo Zhang; Zhensong Wei; Harun Gunaydin; Xing Lin; Aydogan Ozcan

Holography encodes the three-dimensional (3D) information of a sample in the form of an intensity-only recording. However, to decode the original sample image from its hologram(s), auto-focusing and phase-recovery are needed, which are in general cumbersome and time-consuming to digitally perform. Here we demonstrate a convolutional neural network (CNN) based approach that simultaneously performs auto-focusing and phase-recovery to significantly extend the depth-of-field (DOF) in holographic image reconstruction. For this, a CNN is trained by using pairs of randomly de-focused back-propagated holograms and their corresponding in-focus phase-recovered images. After this training phase, the CNN takes a single back-propagated hologram of a 3D sample as input to rapidly achieve phase-recovery and reconstruct an in-focus image of the sample over a significantly extended DOF. This deep learning based DOF extension method is non-iterative, and significantly improves the algorithm time-complexity of holographic image reconstruction from O(nm) to O(1), where n refers to the number of individual object points or particles within the sample volume, and m represents the focusing search space within which each object point or particle needs to be individually focused. These results highlight some of the unique opportunities created by data-enabled statistical image reconstruction methods powered by machine learning, and we believe that the presented approach can be broadly applied to computationally extend the DOF of other imaging modalities.
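
The back-propagated hologram that the network takes as input can be computed with standard angular-spectrum free-space propagation; a minimal sketch with assumed wavelength, pixel size, and function names is shown below. The CNN itself is not reproduced here.

```python
# Minimal sketch of angular-spectrum propagation of a hologram to a chosen
# plane (the "back-propagated hologram" that the CNN then refocuses and
# phase-recovers). Wavelength, pixel size and distance are illustrative values.
import numpy as np

def angular_spectrum_propagate(hologram, z_mm, wavelength_um=0.53, pixel_um=1.12):
    """Propagate a (possibly intensity-only) hologram by z_mm millimeters;
    use a negative z_mm to back-propagate toward the object plane."""
    ny, nx = hologram.shape
    wl = wavelength_um * 1e-6                      # meters
    dx = pixel_um * 1e-6
    z = z_mm * 1e-3

    fx = np.fft.fftfreq(nx, d=dx)                  # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)

    # Transfer function of free space; evanescent components are suppressed.
    arg = 1.0 - (wl * FX) ** 2 - (wl * FY) ** 2
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi * z / wl * np.sqrt(np.maximum(arg, 0))),
                 0)

    field = np.fft.ifft2(np.fft.fft2(hologram.astype(complex)) * H)
    return field                                   # complex propagated field
```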


Methods | 2017

Lensless digital holographic microscopy and its applications in biomedicine and environmental monitoring

Yichen Wu; Aydogan Ozcan

The optical compound microscope has been a major tool in biomedical imaging for centuries. Its performance relies on relatively complicated, bulky and expensive lenses and alignment mechanics. In contrast, the lensless microscope digitally reconstructs microscopic images of specimens without using any lenses, as a result of which it can be made much smaller, lighter and lower-cost. Furthermore, the limited space-bandwidth product of objective lenses in a conventional microscope can be significantly surpassed by a lensless microscope. Such lensless imaging designs have enabled high-resolution and high-throughput imaging of specimens using compact, portable and cost-effective devices to potentially address various point-of-care, global-health and telemedicine related challenges. In this review, we discuss the operation principles and the methods behind lensless digital holographic on-chip microscopy. We also go over various applications that are enabled by cost-effective and compact implementations of lensless microscopy, including some recent work on air quality monitoring, which utilized machine learning for high-throughput and accurate quantification of particulate matter in air. Finally, we conclude with a brief future outlook of this computational imaging technology.


arXiv: Instrumentation and Detectors | 2018

Spatial mapping and analysis of aerosols during a forest fire using computational mobile microscopy

Yichen Wu; Ashutosh Shiledar; Jeffrey Wong; Aydogan Ozcan; Yi Luo; Cheng Chen; Bijie Bai; Yibo Zhang; Miu Tamamitsu

Forest fires are a major source of particulate matter (PM) air pollution on a global scale. The composition and impact of PM are typically studied using only laboratory instruments and extrapolated to real fire events owing to a lack of analytical techniques suitable for field settings. To address this and similar field test challenges, we developed a mobile-microscopy- and machine-learning-based air quality monitoring platform called c-Air, which can perform air sampling and microscopic analysis of aerosols in an integrated portable device. We tested its performance for PM sizing and morphological analysis during a recent forest fire event in La Tuna Canyon Park by spatially mapping the PM. The results show that with decreasing distance to the fire site, the PM concentration increases dramatically, especially for particles smaller than 2 µm. Image analysis from the c-Air portable device also shows that the increased PM is comparatively strongly absorbing and asymmetric, with an aspect ratio of 0.5–0.7. These PM features indicate that a major portion of the PM may be open-flame-combustion-generated elemental carbon soot-type particles. This initial small-scale experiment shows that c-Air has some potential for forest fire monitoring.
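
The reported morphology metrics (e.g., the 0.5–0.7 aspect ratio) can be illustrated with a simple region-property measurement on segmented particles; the fixed threshold below is an assumption and stands in for c-Air's actual segmentation and machine-learning analysis.

```python
# Rough sketch of per-particle aspect-ratio measurement from a reconstructed
# amplitude image. The fixed threshold is an assumption; the actual c-Air
# segmentation and analysis pipeline is more involved.
import numpy as np
from skimage.measure import label, regionprops

def particle_aspect_ratios(amplitude_image, threshold=0.6):
    """Return minor/major axis ratios of segmented particles (1.0 = circular)."""
    mask = amplitude_image < threshold            # absorbing particles appear dark
    labeled = label(mask)
    ratios = []
    for region in regionprops(labeled):
        if region.major_axis_length > 0:
            ratios.append(region.minor_axis_length / region.major_axis_length)
    return np.array(ratios)
```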


Quantitative Phase Imaging IV | 2018

A robust holographic autofocusing criterion based on edge sparsity: Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

Yibo Zhang; Hongda Wang; Yichen Wu; Aydogan Ozcan; Miu Tamamitsu

The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, where the gradient modulus of the complex refocused hologram is calculated, on which a sparsity metric is applied. Here, we compare two different choices of sparsity metrics used in SoG, specifically, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistant to background noise. These predictions are also confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that TC and GI offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GI should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while TC offers more flexibility in choosing a larger ROI containing more background pixels.
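
Both sparsity metrics have simple closed forms; a minimal sketch of the SoG criterion with either TC or GI applied to the gradient modulus of a refocused field is shown below (the finite-difference gradient is an implementation choice, not necessarily the authors').

```python
# Minimal sketch of the Sparsity of the Gradient (SoG) autofocusing criterion
# with the two sparsity metrics compared in the paper: Tamura coefficient (TC)
# and Gini index (GI). The finite-difference gradient is an assumption.
import numpy as np

def gradient_modulus(field):
    gy, gx = np.gradient(np.abs(field))           # simple finite differences
    return np.sqrt(gx ** 2 + gy ** 2)

def tamura_coefficient(c):
    c = c.ravel()
    return np.sqrt(c.std() / (c.mean() + 1e-12))  # TC = sqrt(std / mean)

def gini_index(c):
    c = np.sort(np.abs(c).ravel())                # ascending order
    n = c.size
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum((c / (c.sum() + 1e-12)) * (n - k + 0.5) / n)

def sog_score(refocused_field, metric=tamura_coefficient):
    """Higher score indicates a sharper focus under the SoG criterion."""
    return metric(gradient_modulus(refocused_field))
```

Autofocusing then amounts to propagating the hologram to a set of candidate depths and picking the depth that maximizes the chosen score.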


Light-Science & Applications | 2018

A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples

Zoltán Göröcs; Miu Tamamitsu; Vittorio Bianco; Patrick Wolf; Shounak Roy; Koyoshi Shindo; Kyrollos Yanny; Yichen Wu; Hatice Ceylan Koydemir; Yair Rivenson; Aydogan Ozcan

We report a deep learning-enabled field-portable and cost-effective imaging flow cytometer that automatically captures phase-contrast color images of the contents of a continuously flowing water sample at a throughput of 100 mL/h. The device is based on partially coherent lens-free holographic microscopy and acquires the diffraction patterns of flowing micro-objects inside a microfluidic channel. These holographic diffraction patterns are reconstructed in real time using a deep learning-based phase-recovery and image-reconstruction method to produce a color image of each micro-object without the use of external labeling. Motion blur is eliminated by simultaneously illuminating the sample with red, green, and blue light-emitting diodes that are pulsed. Operated by a laptop computer, this portable device measures 15.5 cm × 15 cm × 12.5 cm, weighs 1 kg, and compared to standard imaging flow cytometers, it provides extreme reductions of cost, size and weight while also providing a high volumetric throughput over a large object size range. We demonstrated the capabilities of this device by measuring ocean samples at the Los Angeles coastline and obtaining images of its micro- and nanoplankton composition. Furthermore, we measured the concentration of a potentially toxic alga (Pseudo-nitzschia) in six public beaches in Los Angeles and achieved good agreement with measurements conducted by the California Department of Public Health. The cost-effectiveness, compactness, and simplicity of this computational platform might lead to the creation of a network of imaging flow cytometers for large-scale and continuous monitoring of the ocean microbiome, including its plankton composition.

Bio-analysis: Rapidly spotting toxicity in a drop of the ocean

A portable device that combines holographic imaging with artificial intelligence can rapidly detect potentially harmful algae in ocean water. Aydogan Ozcan, Zoltán Göröcs and colleagues from the University of California, Los Angeles in the United States developed an inexpensive flow cytometer that pumps water samples containing tiny marine organisms past an LED chip pulsing red, blue, and green light simultaneously. Deep learning algorithms trained to recognize background signals automatically analyze the holographic interference patterns created by the marine organisms and rapidly generate color images with microscale resolution. Sample throughput is boosted 10-fold over conventional imaging flow cytometry by avoiding the use of lenses. Using a lightweight and inexpensive prototype, the team monitored plankton levels at six public beaches and detected a likely toxic organism, the alga Pseudo-nitzschia, at levels matching those from public health laboratories.


Journal of Biophotonics | 2018

Accurate color imaging of pathology slides using holography and absorbance spectrum estimation of histochemical stains

Yibo Zhang; Tairan Liu; Yujia Huang; Da Teng; Yinxu Bian; Yichen Wu; Yair Rivenson; Alborz Feizi; Aydogan Ozcan

Holographic microscopy presents challenges for color reproduction due to the usage of narrow-band illumination sources, which especially impacts the imaging of stained pathology slides for clinical diagnoses. Here, an accurate color holographic microscopy framework using absorbance spectrum estimation is presented. This method uses multispectral holographic images acquired and reconstructed at a small number (e.g., three to six) of wavelengths, estimates the absorbance spectrum of the sample, and projects it onto a color tristimulus. Using this method, the wavelength selection is optimized to holographically image 25 pathology slide samples with different tissue and stain combinations to significantly reduce color errors in the final reconstructed images. The results can be used as a practical guide for various imaging applications and, in particular, to correct color distortions in holographic imaging of pathology samples spanning different dyes and tissue types.
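
The final projection onto a color tristimulus can be sketched as follows; the color-matching-function and illuminant arrays are placeholders to be filled from the CIE 1931 tables, the XYZ-to-sRGB matrix is the standard D65 transform, and the absorbance-estimation step of the paper is not shown.

```python
# Minimal sketch of projecting an estimated absorbance spectrum onto an sRGB
# color. cmf_xyz and illuminant must be supplied from the CIE 1931
# color-matching functions and a chosen illuminant sampled at the same
# wavelengths as the absorbance estimate (placeholders here).
import numpy as np

# Standard linear-sRGB (D65) transform from XYZ.
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def absorbance_to_srgb(absorbance, cmf_xyz, illuminant):
    """absorbance: (N,) estimated absorbance spectrum at N sampled wavelengths.
    cmf_xyz: (N, 3) CIE color-matching functions; illuminant: (N,) spectrum."""
    transmittance = 10.0 ** (-np.asarray(absorbance))  # Beer-Lambert relation
    weighted = transmittance * illuminant
    xyz = cmf_xyz.T @ weighted                     # project onto XYZ tristimulus
    xyz /= (cmf_xyz[:, 1] @ illuminant) + 1e-12    # normalize so clear slide -> Y = 1
    rgb_lin = np.clip(XYZ_TO_SRGB @ xyz, 0.0, 1.0)
    # Apply the sRGB gamma encoding.
    return np.where(rgb_lin <= 0.0031308,
                    12.92 * rgb_lin,
                    1.055 * rgb_lin ** (1 / 2.4) - 0.055)
```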

Collaboration


Dive into Yichen Wu's collaborations.

Top Co-Authors

Aydogan Ozcan
University of California

Yibo Zhang
University of California

Alborz Feizi
University of California

Yair Rivenson
University of California

Hongda Wang
University of California

Wei Luo
University of California

Xing Lin
University of California

Alex Guziak
University of California

Alon Greenbaum
University of California