Donghak Shin
University of Connecticut
Publications
Featured research published by Donghak Shin.
Optics Letters | 2010
Donghak Shin; Mehdi Daneshpanah; Arun Anand; Bahram Javidi
Optofluidic devices offer flexibility for a variety of tasks involving biological specimens. We propose a system for three-dimensional (3D) sensing and identification of biological micro-organisms. This system consists of a microfluidic device along with a digital holographic microscope and relevant statistical recognition algorithms. The microfluidic channel is used to house the micro-organisms, while the holographic microscope and a CCD camera record their digital holograms. The holograms can be computationally reconstructed in 3D using a variety of algorithms, such as the Fresnel transform. Statistical recognition algorithms are used to analyze and identify the micro-organisms from the reconstructed wavefront. Experimental results are presented. Because of computational reconstruction of wavefronts in holographic imaging, this technique offers unique advantages that allow one to image micro-organisms within a deep channel while removing the inherent microfluidic-induced aberration through interferometry.
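
The Fresnel-transform reconstruction mentioned above can be illustrated with a short numerical sketch. The Python snippet below is a minimal single-FFT Fresnel back-propagation of a recorded hologram; the array name hologram, the wavelength wl, the pixel pitch dp, and the reconstruction distance z are assumed placeholder inputs for illustration, not values or code from the paper.

import numpy as np

def fresnel_reconstruct(hologram, wl, dp, z):
    """Back-propagate a digital hologram by distance z (single-FFT Fresnel method)."""
    ny, nx = hologram.shape
    k = 2 * np.pi / wl
    x = (np.arange(nx) - nx / 2) * dp
    y = (np.arange(ny) - ny / 2) * dp
    X, Y = np.meshgrid(x, y)
    chirp = np.exp(1j * k / (2 * z) * (X**2 + Y**2))   # quadratic phase in the hologram plane
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * chirp)))
    return np.abs(field)                               # reconstructed intensity at depth z

# Scanning z over a range of depths yields the 3D volume that the recognition
# algorithms operate on, e.g. volume = [fresnel_reconstruct(h, wl, dp, z) for z in depths].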
Optics Letters | 2012
Donghak Shin; Mehdi Daneshpanah; Bahram Javidi
The performance of multiview three-dimensional imaging systems depends on several factors, including the number of sensors, sensor pixel size, relative sensor configuration, imaging optics, and computational reconstruction algorithm. Therefore, it is important to compare the performance of such systems under equally constrained resources. In this Letter, we develop a unifying framework to evaluate the lateral and axial resolution of N-ocular imaging systems ranging from stereo (two cameras) to multiple sensors (integral imaging) under fixed resource constraints. The proposed framework enables one to evaluate the system performance as a function of sensing parameters such as the number of cameras, the number of pixels, parallax, pixel size, lens aperture, and focal length. We carry out Monte Carlo simulations based on this framework to evaluate system performance as a function of sensing parameters. To the best of our knowledge, this is the first report on quantitative analysis of N-ocular imaging systems under common resource constraints.
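
As an illustration of the kind of Monte Carlo evaluation described above, the sketch below estimates the axial (depth) error of an N-ocular pinhole system when the total parallax and total pixel count are held fixed and only pixel quantization is modeled. The geometry, parameter values, and least-squares estimator are assumptions made for illustration; they are not the paper's exact framework.

import numpy as np

def axial_rmse(n_cams, P=0.1, npix_total=10000, f=0.05, sensor_w=0.005,
               z_range=(1.5, 2.5), trials=2000, seed=0):
    """RMS depth error for n_cams pinhole cameras sharing a fixed pixel budget."""
    rng = np.random.default_rng(seed)
    cams = np.linspace(-P / 2, P / 2, n_cams)       # camera centers along the baseline (total parallax P)
    pitch = sensor_w / (npix_total / n_cams)        # pixel pitch when pixels are divided among cameras
    errs = []
    for _ in range(trials):
        z = rng.uniform(*z_range)                   # random point depth
        x = rng.uniform(-0.01, 0.01)                # random lateral position
        u = f * (x - cams) / z                      # ideal image coordinates
        u_q = np.round(u / pitch) * pitch           # pixel quantization
        # Linear least squares with a = x/z, b = 1/z:  u_i = f*a - f*c_i*b
        A = np.column_stack([f * np.ones(n_cams), -f * cams])
        a, b = np.linalg.lstsq(A, u_q, rcond=None)[0]
        errs.append(1.0 / b - z)
    return float(np.sqrt(np.mean(np.square(errs))))

# Example: compare a stereo pair against a ten-camera integral-imaging arrangement
# under the same resource budget, e.g. print(axial_rmse(2), axial_rmse(10)).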
Optics Letters | 2012
Donghak Shin; Bahram Javidi
In this Letter, we propose a multiperspective three-dimensional (3D) imaging system using axially distributed stereo image sensing. In the proposed method, the stereo camera is translated along its optical axis and multiple axial elemental image pairs for a 3D scene are collected. The captured elemental images are reconstructed in 3D using a computational reconstruction algorithm based on ray back-projection. The proposed method is applied to partially occluded object visualization. Optical experiments are performed to verify the approach.
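
The ray back-projection reconstruction used here, and in the axially distributed sensing papers below on occlusion, microscopy, camouflage, and scattering, can be sketched as follows: each elemental image captured at distance d_k from the desired reconstruction plane is rescaled by its relative magnification and the rescaled images are averaged over their common field of view. The function below is an illustrative stand-in assuming grayscale NumPy arrays and known per-image distances d; it is not the authors' code.

import numpy as np
from scipy.ndimage import zoom

def reconstruct_plane(elemental_images, d):
    """Average magnification-corrected elemental images at one depth plane."""
    d = np.asarray(d, dtype=float)
    mags = d / d.min()                    # relative magnification per elemental image
    h, w = elemental_images[0].shape
    acc = np.zeros((h, w), dtype=float)
    for img, m in zip(elemental_images, mags):
        scaled = zoom(img, m, order=1)    # back-project by rescaling onto a common plane
        top = (scaled.shape[0] - h) // 2
        left = (scaled.shape[1] - w) // 2
        acc += scaled[top:top + h, left:left + w]
    return acc / len(elemental_images)

# Points lying on the chosen depth plane add coherently across the stack,
# while occluders and objects at other depths are spread out and suppressed.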
IEEE/OSA Journal of Display Technology | 2011
Donghak Shin; Bahram Javidi
We present experiments on 3D visualization of partially occluded objects using axially distributed sensing. The axially distributed sensing method provides the collection of 3D information for a partially occluded object, and the 3D images are visualized using a modified digital reconstruction algorithm based on an inverse ray projection model. We apply this method to a camouflage setting where the object is partially occluded by a camouflage net. Optical experiments are performed to capture longitudinal elemental images of a partially camouflaged object and to visualize the 3D images with digital reconstruction. To the best of our knowledge, this is the first report applying the axially distributed sensing method to visualizing occluded objects.
Optics Letters | 2010
Donghak Shin; Myungjin Cho; Bahram Javidi
We propose three-dimensional (3D) optical microscopy using axially distributed image sensing. In the proposed method, the micro-objects are optically magnified and their axially distributed images are recorded by moving the image sensor along a common optical axis. The 3D volumetric images are generated from the recorded axial image set using a computational reconstruction algorithm based on ray backprojection. Preliminary experimental results are presented. To the best of our knowledge, this is the first report on 3D optical microscopy using axially distributed sensing.
IEEE/OSA Journal of Display Technology | 2012
Donghak Shin; Bahram Javidi
We present a method to visualize 3D objects in scattering media using the axially distributed sensing method. The scattered elemental images of the 3D objects are obtained by moving the camera along a common optical axis. To reduce the scattering effect in the recorded elemental images, a statistical image processing algorithm is applied, yielding estimated elemental images with reduced scattering effects. The estimated elemental images are then used to visualize the 3D scene with a computational reconstruction algorithm based on ray back-projection. We present preliminary experimental results to illustrate how the scattering effects may be reduced by the proposed method.
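
One simple statistical pre-processing step in the spirit of the description above is to renormalize each scattered elemental image to a reference mean and standard deviation, treating the scattering contribution as a roughly Gaussian perturbation of the pixel statistics. The sketch below is an illustrative stand-in and may differ in detail from the estimator used in the paper; the reference values are arbitrary placeholders.

import numpy as np

def reduce_scatter(elemental_images, ref_mean=0.5, ref_std=0.2):
    """Return scatter-reduced estimates of the elemental images (illustrative model)."""
    estimates = []
    for img in elemental_images:
        img = img.astype(float)
        z = (img - img.mean()) / (img.std() + 1e-12)   # remove per-image bias and gain
        estimates.append(np.clip(ref_mean + ref_std * z, 0.0, 1.0))
    return estimates

# The estimated elemental images can then be fed to the same back-projection
# reconstruction sketched earlier to visualize the 3D scene.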
IEEE/OSA Journal of Display Technology | 2012
Donghak Shin; Bahram Javidi
In this paper, we present an N-ocular imaging system with tilted image sensors to improve the depth resolution for objects under fixed system resource constraints. A nonplanar arrangement of image sensors improves system performance by increasing the common field of view (FOV). We analyze the depth resolution based on the two-point-source resolution criterion as a function of sensing parameters such as the number of cameras, the number of pixels, parallax, pixel size, and focal length, and we carry out Monte Carlo simulations in the analysis. The results indicate that the proposed method may enlarge the common FOV zone and thus provide improved depth resolution for N-ocular imaging systems when the objects are not very far from the sensors.
Optics Letters | 2012
Donghak Shin; Bahram Javidi
In this Letter, we propose an improved three-dimensional (3D) image reconstruction method for integral imaging. We use subpixel sensing of the optical rays of the 3D scene projected onto the image sensor. When reconstructing the 3D image, we use a calculated minimum subpixel distance for each sensor pixel instead of the average pixel value of integrated pixels from elemental images. The minimum subpixel distance is defined by measuring the distance between the center of the sensor pixel and the physical position of the imaging lens point spread function onto the sensor, which is projected from each reconstruction point for all elemental images. To show the usefulness of the proposed method, preliminary 3D imaging experiments are presented. Experimental results reveal that the proposed method may improve 3D imaging visualization because of the superior sensing and reconstruction of optical ray direction and intensity information for 3D objects.
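
The selection rule described above can be sketched as follows: for every reconstruction point, its continuous (subpixel) projection into each elemental image is computed, and the reconstructed value is taken from the elemental image whose projection lands closest to a pixel center, rather than averaging all contributions. The pinhole-array geometry, parameter names, and coordinate conventions below are assumptions made for illustration, not the authors' implementation.

import numpy as np

def reconstruct_point(elemental_images, lens_positions, point_xyz, f, pitch):
    """Return the pixel value with the minimum subpixel distance for one 3D point."""
    px, py, pz = point_xyz
    best_val, best_dist = 0.0, np.inf
    for img, (lx, ly) in zip(elemental_images, lens_positions):
        # Projection of the 3D point through lens (lx, ly), relative to its optical axis.
        u = f * (px - lx) / pz
        v = f * (py - ly) / pz
        # Continuous pixel coordinate and distance to the nearest pixel center.
        uc = u / pitch + img.shape[1] / 2
        vc = v / pitch + img.shape[0] / 2
        col, row = int(round(uc)), int(round(vc))
        dist = np.hypot(uc - col, vc - row)
        if 0 <= row < img.shape[0] and 0 <= col < img.shape[1] and dist < best_dist:
            best_val, best_dist = float(img[row, col]), dist
    return best_val

# Looping this selection over all reconstruction points of a depth plane replaces
# the conventional averaging step of computational integral imaging reconstruction.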
Proceedings of SPIE | 2012
Donghak Shin; Bahram Javidi
In this paper, we present an overview of three-dimensional imaging and visualization of camouflaged objects using axially distributed sensing. The axially distributed sensing method collects three-dimensional information for a camouflaged object. Using the collected elemental images, three-dimensional slice images are visualized with a digital reconstruction algorithm based on an inverse ray projection model. In addition, we introduce an analysis of the depth resolution of our axially distributed sensing structure. Optical experiments are performed to capture longitudinal elemental images of a camouflaged object and to visualize the three-dimensional slice images with digital reconstruction.
Proceedings of SPIE | 2011
Donghak Shin; Myungjin Cho; Bahram Javidi
In this paper, we propose 3D sensing and visualization of micro-objects using an axially distributed image capture system. In the proposed method, the micro-object is optically magnified and the axial images of the magnified micro-object are recorded using axially distributed image capture. The recorded images are used to visualize the 3D scene with a computational reconstruction algorithm based on ray back-projection. To show the usefulness of the proposed method, we carry out preliminary experiments and present the results.