Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Harun Gunaydin is active.

Publication


Featured research published by Harun Gunaydin.


Light-Science & Applications | 2018

Phase recovery and holographic image reconstruction using deep learning in neural networks

Yair Rivenson; Yibo Zhang; Harun Gunaydin; Da Teng; Aydogan Ozcan

Phase recovery from intensity-only measurements forms the heart of coherent imaging techniques and holography. In this study, we demonstrate that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training. This deep learning-based approach provides an entirely new framework to conduct holographic imaging by rapidly eliminating twin-image and self-interference-related spatial artifacts. This neural network-based method reconstructs the phase and amplitude images of the objects from only one hologram, requiring fewer measurements while also being computationally faster. We validated this method by reconstructing the phase and amplitude images of various samples, including blood smears, Pap smears, and tissue sections. These results highlight that challenging problems in imaging science can be overcome through machine learning, providing new avenues to design powerful computational imaging systems.
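As context for the classical step this network builds on, holographic reconstruction typically begins by back-propagating the recorded intensity to the object plane, which leaves exactly the twin-image and self-interference artifacts the trained network then removes. Below is a minimal NumPy sketch of angular-spectrum back-propagation; the wavelength, pixel pitch, and propagation distance are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, dx):
    """Propagate a complex field over a distance z (in metres) with the
    angular spectrum method; dx is the sensor pixel pitch in metres."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)              # spatial frequencies along x
    fy = np.fft.fftfreq(n, d=dx)              # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    # Free-space transfer function; evanescent components are zeroed out.
    H = np.where(arg > 0,
                 np.exp(1j * 2 * np.pi * (z / wavelength)
                        * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Back-propagation of an intensity-only hologram: treat the square root of
# the recording as the field amplitude and propagate by -z; the result
# carries the twin-image and self-interference artifacts that the trained
# network is then used to eliminate.
hologram = np.ones((256, 256))                # placeholder intensity recording
obj_plane = angular_spectrum_propagate(np.sqrt(hologram), z=-1e-3,
                                       wavelength=532e-9, dx=1.12e-6)
```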


arXiv: Learning | 2017

Deep learning microscopy

Yair Rivenson; Zoltán Göröcs; Harun Gunaydin; Yibo Zhang; Hongda Wang; Aydogan Ozcan

We demonstrate that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field-of-view and depth-of-field. After its training, the only input to this network is an image acquired using a regular optical microscope, without any changes to its design. We blindly tested this deep learning approach using various tissue samples imaged with low-resolution and wide-field systems, where the network rapidly outputs an image with remarkably better resolution, matching the performance of higher numerical aperture lenses while also significantly surpassing their limited field-of-view and depth-of-field. These results are transformative for the various fields that use microscopy tools, including the life sciences, where optical microscopy is considered one of the most widely used and deployed techniques. Beyond such applications, our presented approach is broadly applicable to other imaging modalities, spanning different parts of the electromagnetic spectrum, and can be used to design computational imagers that get better as they continue to image specimens and establish new transformations among different modes of imaging.


ACS Photonics | 2018

Deep Learning Enhanced Mobile-Phone Microscopy

Yair Rivenson; Hatice Ceylan Koydemir; Hongda Wang; Zhensong Wei; Zhengshuang Ren; Harun Gunaydin; Yibo Zhang; Zoltán Göröcs; Kyle Liang; Derek Tseng; Aydogan Ozcan

Mobile phones have facilitated the creation of field-portable, cost-effective imaging and sensing technologies that approach laboratory-grade instrument performance. However, the optical imaging interfaces of mobile phones are not designed for microscopy and produce spatial and spectral distortions when imaging microscopic specimens. Here, we report on the use of deep learning to correct such distortions introduced by mobile-phone-based microscopes, facilitating the production of high-resolution, denoised, and colour-corrected images that match the performance of benchtop microscopes with high-end objective lenses, while also extending their limited depth-of-field. After training a convolutional neural network, we successfully imaged various samples, including blood smears, histopathology tissue sections, and parasites, where the recorded images were highly compressed to ease storage and transmission for telemedicine applications. This method is applicable to other low-cost, aberrated imaging systems, and could offer alternatives for costly and bulky microscopes, while also providing a framework for the standardization of optical images for clinical and biomedical applications.


arXiv: Computer Vision and Pattern Recognition | 2018

Extended depth-of-field in holographic image reconstruction using deep learning based auto-focusing and phase-recovery

Yichen Wu; Yair Rivenson; Yibo Zhang; Zhensong Wei; Harun Gunaydin; Xing Lin; Aydogan Ozcan

Holography encodes the three dimensional (3D) information of a sample in the form of an intensity-only recording. However, to decode the original sample image from its hologram(s), auto-focusing and phase-recovery are needed, which are in general cumbersome and time-consuming to digitally perform. Here we demonstrate a convolutional neural network (CNN) based approach that simultaneously performs auto-focusing and phase-recovery to significantly extend the depth-of-field (DOF) in holographic image reconstruction. For this, a CNN is trained by using pairs of randomly de-focused back-propagated holograms and their corresponding in-focus phase-recovered images. After this training phase, the CNN takes a single back-propagated hologram of a 3D sample as input to rapidly achieve phase-recovery and reconstruct an in-focus image of the sample over a significantly extended DOF. This deep-learning-based DOF extension method is non-iterative, and significantly improves the algorithm time-complexity of holographic image reconstruction from O(nm) to O(1), where n refers to the number of individual object points or particles within the sample volume, and m represents the focusing search space within which each object point or particle needs to be individually focused. These results highlight some of the unique opportunities created by data-enabled statistical image reconstruction methods powered by machine learning, and we believe that the presented approach can be broadly applicable to computationally extend the DOF of other imaging modalities.
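The training-pair construction described above (randomly de-focused back-propagated holograms paired with their in-focus phase-recovered counterparts) can be sketched as follows, assuming angular-spectrum propagation for the random defocus; the function names, the DOF range, and the optical parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def defocus(field, z, wavelength=532e-9, dx=1.12e-6):
    """Angular-spectrum propagation, used here to defocus a field by z."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = np.maximum(1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0.0)
    H = np.exp(1j * 2 * np.pi * (z / wavelength) * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def make_training_pair(in_focus_field, dof_range=200e-6):
    """Input: the field defocused to a random plane inside the target DOF;
    target: the corresponding in-focus, phase-recovered field."""
    dz = rng.uniform(-dof_range, dof_range)   # random defocus distance
    return defocus(in_focus_field, dz), in_focus_field

pair_input, pair_target = make_training_pair(np.ones((128, 128), dtype=complex))
```

At inference time the network then needs only one back-propagated hologram, which is what replaces the per-particle focus search and yields the O(nm) to O(1) improvement quoted above.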


bioRxiv | 2018

Deep learning achieves super-resolution in fluorescence microscopy

Hongda Wang; Yair Rivenson; Yiyin Jin; Zhensong Wei; Ronald Gao; Harun Gunaydin; Laurent A. Bentolila; Aydogan Ozcan

We present a deep learning-based method for achieving super-resolution in fluorescence microscopy. This data-driven approach does not require any numerical models of the imaging process or the estimation of a point spread function, and is solely based on training a generative adversarial network, which statistically learns to transform low resolution input images into super-resolved ones. Using this method, we super-resolve wide-field images acquired with low numerical aperture objective lenses, matching the resolution that is acquired using high numerical aperture objectives. We also demonstrate that diffraction-limited confocal microscopy images can be transformed by the same framework into super-resolved fluorescence images, matching the image resolution acquired with a stimulated emission depletion (STED) microscope. The deep network rapidly outputs these super-resolution images, without any iterations or parameter search, and even works for types of samples that it was not trained for.
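For illustration only, here is a NumPy sketch of how a low-resolution network input could be paired with a high-resolution target; note that the published method trains its generative adversarial network on experimentally acquired, co-registered image pairs, not on a simulated Gaussian blur like the one assumed here.

```python
import numpy as np

def gaussian_psf(size=9, sigma=2.0):
    """Isotropic Gaussian kernel standing in for a low-NA point spread
    function (an assumption made purely for this illustration)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return kernel / kernel.sum()

def degrade(hr_image, factor=2):
    """Blur and downsample a high-resolution image to mimic a lower-NA,
    lower-resolution acquisition (circular FFT convolution)."""
    psf = gaussian_psf()
    blurred = np.real(np.fft.ifft2(np.fft.fft2(hr_image) *
                                   np.fft.fft2(psf, s=hr_image.shape)))
    return blurred[::factor, ::factor]

hr = np.random.default_rng(1).random((128, 128))  # stand-in high-res target
lr = degrade(hr)                                  # candidate network input
# (lr, hr) would form one generator training pair in the GAN framework.
```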


Optica | 2018

Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery

Yichen Wu; Yair Rivenson; Yibo Zhang; Zhensong Wei; Harun Gunaydin; Xing Lin; Aydogan Ozcan


arXiv: Computer Vision and Pattern Recognition | 2018

Deep learning-based virtual histology staining using auto-fluorescence of label-free tissue

Yair Rivenson; Hongda Wang; Zhensong Wei; Yibo Zhang; Harun Gunaydin; Aydogan Ozcan


Conference on Lasers and Electro-Optics | 2018

Non-Iterative Holographic Image Reconstruction and Phase Retrieval Using a Deep Convolutional Neural Network

Yair Rivenson; Yibo Zhang; Harun Gunaydin; Da Teng; Aydogan Ozcan


Conference on Lasers and Electro-Optics | 2018

Deep Learning Microscopy: Enhancing Resolution, Field-of-View and Depth-of-Field of Optical Microscopy Images Using Neural Networks

Yair Rivenson; Zoltán Göröcs; Harun Gunaydin; Yibo Zhang; Hongda Wang; Aydogan Ozcan


Imaging and Applied Optics 2018 (3D, AO, AIO, COSI, DH, IS, LACSEA, LS&C, MATH, pcAOP) | 2018

Deep Learning Enhances Mobile Microscopy

Hongda Wang; Yair Rivenson; Hatice Ceylan Koydemir; Zhensong Wei; Zhengshuang Ren; Harun Gunaydin; Yibo Zhang; Zoltán Göröcs; Kyle Liang; Derek Tseng; Aydogan Ozcan

Collaboration


Dive into Harun Gunaydin's collaboration.

Top Co-Authors

Yair Rivenson (University of California)
Aydogan Ozcan (University of California)
Yibo Zhang (University of California)
Hongda Wang (University of California)
Zhensong Wei (University of California)
Da Teng (University of California)
Xing Lin (University of California)
Yichen Wu (University of California)
Derek Tseng (University of California)