Steve Feng
University of California, Los Angeles
Publications
Featured research published by Steve Feng.
ACS Nano | 2014
Qingshan Wei; Richie Nagi; Kayvon Sadeghi; Steve Feng; Eddie Yan; So Jung Ki; Romain Caire; Derek Tseng; Aydogan Ozcan
Detection of environmental contamination such as trace-level toxic heavy metal ions mostly relies on bulky and costly analytical instruments. However, a considerable global need exists for portable, rapid, specific, sensitive, and cost-effective detection techniques that can be used in resource-limited and field settings. Here we introduce a smart-phone-based hand-held platform that allows the quantification of mercury(II) ions in water samples with parts per billion (ppb) level of sensitivity. For this task, we created an integrated opto-mechanical attachment to the built-in camera module of a smart-phone to digitally quantify mercury concentration using a plasmonic gold nanoparticle (Au NP) and aptamer based colorimetric transmission assay that is implemented in disposable test tubes. With this smart-phone attachment that weighs <40 g, we quantified mercury(II) ion concentration in water samples by using a two-color ratiometric method employing light-emitting diodes (LEDs) at 523 and 625 nm, where a custom-developed smart application was utilized to process each acquired transmission image on the same phone to achieve a limit of detection of ∼3.5 ppb. Using this smart-phone-based detection platform, we generated a mercury contamination map by measuring water samples at over 50 locations in California (USA), taken from city tap water sources, rivers, lakes, and beaches. With its cost-effective design, field-portability, and wireless data connectivity, this sensitive and specific heavy metal detection platform running on cellphones could be rather useful for distributed sensing, tracking, and sharing of water contamination information as a function of both space and time.
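The two-color ratiometric readout described above lends itself to a compact sketch. The Python snippet below illustrates the idea under stated assumptions: a linear calibration between the 523/625 nm transmission ratio and Hg(II) concentration, with purely illustrative slope and intercept values (the paper's actual calibration curve and image-processing pipeline are not reproduced here).

```python
import numpy as np

# Hypothetical calibration: ratio -> Hg(II) concentration (ppb), fit offline
# against reference samples. Slope/intercept here are illustrative only.
CAL_SLOPE = -0.012    # ratio change per ppb (assumed)
CAL_INTERCEPT = 1.05  # ratio of a blank (0 ppb) sample (assumed)

def transmission_ratio(img_523, img_625, roi):
    """Mean transmitted intensity at 523 nm over 625 nm inside a region of interest.

    Au NP aggregation shifts the plasmonic absorption, so the green/red
    transmission ratio tracks mercury concentration.
    """
    r0, r1, c0, c1 = roi
    i_green = img_523[r0:r1, c0:c1].mean()
    i_red = img_625[r0:r1, c0:c1].mean()
    return i_green / i_red

def hg_concentration_ppb(ratio):
    """Invert the (assumed linear) calibration curve."""
    return max(0.0, (ratio - CAL_INTERCEPT) / CAL_SLOPE)

# Example with synthetic frames standing in for the two LED exposures.
rng = np.random.default_rng(0)
img_g = rng.normal(0.92, 0.01, (480, 640))
img_r = rng.normal(1.00, 0.01, (480, 640))
ratio = transmission_ratio(img_g, img_r, roi=(200, 280, 280, 360))
print(f"ratio = {ratio:.3f} -> ~{hg_concentration_ppb(ratio):.1f} ppb")
```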
Proceedings of the National Academy of Sciences of the United States of America | 2011
Serhan O. Isikman; Waheb Bishara; Sam Mavandadi; Frank Yu; Steve Feng; Randy Lau; Aydogan Ozcan
We present a lens-free optical tomographic microscope, which enables imaging a large volume of approximately 15 mm3 on a chip, with a spatial resolution of < 1 μm × < 1 μm × < 3 μm in x, y and z dimensions, respectively. In this lens-free tomography modality, the sample is placed directly on a digital sensor array with, e.g., ≤ 4 mm distance to its active area. A partially coherent light source placed approximately 70 mm away from the sensor is employed to record lens-free in-line holograms of the sample from different viewing angles. At each illumination angle, multiple subpixel shifted holograms are also recorded, which are digitally processed using a pixel superresolution technique to create a single high-resolution hologram of each angular projection of the object. These superresolved holograms are digitally reconstructed for an angular range of ± 50°, which are then back-projected to compute tomograms of the sample. In order to minimize the artifacts due to limited angular range of tilted illumination, a dual-axis tomography scheme is adopted, where the light source is rotated along two orthogonal axes. Tomographic imaging performance is quantified using microbeads of different dimensions, as well as by imaging wild-type Caenorhabditis elegans. Probing a large volume with a decent 3D spatial resolution, this lens-free optical tomography platform on a chip could provide a powerful tool for high-throughput imaging applications in, e.g., cell and developmental biology.
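As a rough illustration of the back-projection step, here is a minimal single-axis sketch in Python: each depth slice accumulates the angular projections shifted laterally by z·tan(θ). The dual-axis scanning, pixel super-resolution, and filtering the paper uses to suppress limited-angle artifacts are omitted; this is a sketch of the geometry, not the paper's reconstruction code.

```python
import numpy as np
from scipy.ndimage import shift

def backproject(projections, angles_deg, n_z, dz_px=1.0):
    """Unfiltered back-projection of reconstructed angular projections.

    projections: list of 2D reconstructed amplitude images, one per angle
    angles_deg:  illumination tilt for each projection (single axis)
    n_z:         number of depth slices in the output tomogram

    A slice at depth z sees each tilted projection shifted laterally by
    z * tan(theta); summing the aligned projections localizes the object.
    """
    h, w = projections[0].shape
    vol = np.zeros((n_z, h, w))
    for z in range(n_z):
        z_px = (z - n_z // 2) * dz_px
        for proj, theta in zip(projections, angles_deg):
            dx = z_px * np.tan(np.deg2rad(theta))
            vol[z] += shift(proj, (0, dx), order=1, mode="nearest")
    return vol / len(projections)

# Toy example: a point-like object back-projected over +/-50 degrees
# localizes at the central depth slice.
angles = np.linspace(-50, 50, 11)
proj = np.zeros((64, 64)); proj[32, 32] = 1.0
tomogram = backproject([proj] * len(angles), angles, n_z=32)
print(np.unravel_index(tomogram.argmax(), tomogram.shape))  # -> (16, 32, 32)
```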
ACS Nano | 2015
Brandon Berg; Bingen Cortazar; Derek Tseng; Haydar Ozkan; Steve Feng; Qingshan Wei; Raymond Yan Lok Chan; Jordi Burbano; Qamar Farooqui; Michael A. Lewinski; Dino Di Carlo; Omai B. Garner; Aydogan Ozcan
Standard microplate based enzyme-linked immunosorbent assays (ELISA) are widely utilized for various nanomedicine, molecular sensing, and disease screening applications, and this multiwell plate batched analysis dramatically reduces diagnosis costs per patient compared to nonbatched or nonstandard tests. However, their use in resource-limited and field-settings is inhibited by the necessity for relatively large and expensive readout instruments. To mitigate this problem, we created a hand-held and cost-effective cellphone-based colorimetric microplate reader, which uses a 3D-printed opto-mechanical attachment to hold and illuminate a 96-well plate using a light-emitting-diode (LED) array. This LED light is transmitted through each well, and is then collected via 96 individual optical fibers. Captured images of this fiber-bundle are transmitted to our servers through a custom-designed app for processing using a machine learning algorithm, yielding diagnostic results, which are delivered to the user within ∼1 min per 96-well plate, and are visualized using the same app. We successfully tested this mobile platform in a clinical microbiology laboratory using FDA-approved mumps IgG, measles IgG, and herpes simplex virus IgG (HSV-1 and HSV-2) ELISA tests using a total of 567 and 571 patient samples for training and blind testing, respectively, and achieved an accuracy of 99.6%, 98.6%, 99.4%, and 99.4% for mumps, measles, HSV-1, and HSV-2 tests, respectively. This cost-effective and hand-held platform could assist health-care professionals to perform high-throughput disease screening or tracking of vaccination campaigns at the point-of-care, even in resource-poor and field-settings. Also, its intrinsic wireless connectivity can serve epidemiological studies, generating spatiotemporal maps of disease prevalence and immunity.
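A minimal sketch of the plate-readout idea follows: per-well intensities are extracted from the imaged fiber bundle and classified by a trained model. The regular-grid registration, the stand-in logistic-regression classifier, and the synthetic training data are all assumptions; the abstract does not specify the machine learning algorithm used on the server side.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in classifier

def well_intensities(image, grid_rows=8, grid_cols=12, radius=5, centers=None):
    """Mean transmitted intensity per well from the imaged fiber bundle.

    `centers` maps each of the 96 wells to its fiber spot (row, col) in the
    image; here we assume a pre-registered, evenly spaced grid for simplicity.
    """
    if centers is None:
        h, w = image.shape
        ys = np.linspace(radius, h - radius, grid_rows).astype(int)
        xs = np.linspace(radius, w - radius, grid_cols).astype(int)
        centers = [(y, x) for y in ys for x in xs]
    vals = []
    for y, x in centers:
        patch = image[y - radius:y + radius + 1, x - radius:x + radius + 1]
        vals.append(patch.mean())
    return np.asarray(vals)

# Illustrative training on synthetic positive/negative well intensities;
# the real system trains on labeled patient plates on the server side.
rng = np.random.default_rng(1)
X = np.r_[rng.normal(0.3, 0.05, (200, 1)), rng.normal(0.7, 0.05, (200, 1))]
y = np.r_[np.ones(200), np.zeros(200)]  # 1 = positive (absorbing well)
clf = LogisticRegression().fit(X, y)

plate = rng.normal(0.7, 0.05, (160, 240))  # fake fiber-bundle image
calls = clf.predict(well_intensities(plate).reshape(-1, 1))
print(f"{int(calls.sum())}/96 wells flagged positive")
```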
ACS Nano | 2014
Steve Feng; Romain Caire; Bingen Cortazar; Mehmet Turan; Andrew L. Wong; Aydogan Ozcan
We demonstrate a Google Glass-based rapid diagnostic test (RDT) reader platform capable of qualitative and quantitative measurements of various lateral flow immunochromatographic assays and similar biomedical diagnostics tests. Using a custom-written Glass application and without any external hardware attachments, one or more RDTs labeled with Quick Response (QR) code identifiers are simultaneously imaged using the built-in camera of the Google Glass that is based on a hands-free and voice-controlled interface and digitally transmitted to a server for digital processing. The acquired JPEG images are automatically processed to locate all the RDTs and, for each RDT, to produce a quantitative diagnostic result, which is returned to the Google Glass (i.e., the user) and also stored on a central server along with the RDT image, QR code, and other related information (e.g., demographic data). The same server also provides a dynamic spatiotemporal map and real-time statistics for uploaded RDT results accessible through Internet browsers. We tested this Google Glass-based diagnostic platform using qualitative (i.e., yes/no) human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) tests. For the quantitative RDTs, we measured activated tests at various concentrations ranging from 0 to 200 ng/mL for free and total PSA. This wearable RDT reader platform running on Google Glass combines a hands-free sensing and image capture interface with powerful servers running our custom image processing codes, and it can be quite useful for real-time spatiotemporal tracking of various diseases and personal medical conditions, providing a valuable tool for epidemiology and mobile health.
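The server-side quantification can be sketched as a 1D intensity-profile analysis of a rectified strip image. In the sketch below, the control, test, and background band positions are hard-coded; in the deployed system they would come from the QR-referenced strip geometry. This is a hypothetical illustration, not the paper's actual processing code.

```python
import numpy as np

def rdt_signal(strip, ctrl_band, test_band, bg_band):
    """Quantify a lateral-flow test from a rectified grayscale strip image.

    Bands are (start_row, end_row) index pairs along the flow direction.
    Returns the background-corrected test-line signal normalized by the
    control-line signal, or None if the control line is absent.
    """
    profile = strip.mean(axis=1)                  # average across strip width
    bg = profile[slice(*bg_band)].mean()          # local background estimate
    ctrl = bg - profile[slice(*ctrl_band)].min()  # darker line => signal
    test = bg - profile[slice(*test_band)].min()
    if ctrl <= 0:
        return None                               # invalid test: no control line
    return test / ctrl

# Synthetic strip: control line at rows 40-44, faint test line at 80-84.
strip = np.full((120, 30), 0.9)
strip[40:45] -= 0.4
strip[80:85] -= 0.1
ratio = rdt_signal(strip, ctrl_band=(30, 55), test_band=(70, 95), bg_band=(0, 20))
print(f"test/control ratio = {ratio:.2f}")  # maps to ng/mL via a calibration curve
```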
ACS Nano | 2014
Qingshan Wei; Wei Luo; Samuel Chiang; Tara Kappel; Crystal Mejia; Derek Tseng; Raymond Yan Lok Chan; Eddie Yan; Hangfei Qi; Faizan Shabbir; Haydar Ozkan; Steve Feng; Aydogan Ozcan
DNA imaging techniques using optical microscopy have found numerous applications in biology, chemistry and physics and are based on relatively expensive, bulky and complicated set-ups that limit their use to advanced laboratory settings. Here we demonstrate imaging and length quantification of single molecule DNA strands using a compact, lightweight and cost-effective fluorescence microscope installed on a mobile phone. In addition to an optomechanical attachment that creates a high contrast dark-field imaging setup using an external lens, thin-film interference filters, a miniature dovetail stage and a laser-diode for oblique-angle excitation, we also created a computational framework and a mobile phone application connected to a server back-end for measurement of the lengths of individual DNA molecules that are labeled and stretched using disposable chips. Using this mobile phone platform, we imaged single DNA molecules of various lengths to demonstrate a sizing accuracy of <1 kilobase-pairs (kbp) for 10 kbp and longer DNA samples imaged over a field-of-view of ∼2 mm2.
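Length quantification from a labeled, stretched strand can be sketched as a skeleton-length measurement converted through an assumed effective pixel size and the ~0.34 nm/bp rise of stretched B-form DNA. Both constants below are illustrative stand-ins for the paper's calibration, and the skeleton pixel count is a crude contour-length estimate.

```python
import numpy as np
from skimage.morphology import skeletonize

PIXEL_NM = 250.0   # assumed effective pixel size on the sample plane
NM_PER_BP = 0.34   # rise per base pair of stretched B-form DNA

def dna_length_kbp(mask):
    """Estimate the length of one labeled DNA strand from its binary mask.

    Skeletonizing the fluorescent blob reduces it to a 1-px-wide curve;
    counting skeleton pixels approximates the contour length. A more
    careful version would trace the skeleton and weight diagonal steps
    by sqrt(2).
    """
    skel = skeletonize(mask.astype(bool))
    length_nm = skel.sum() * PIXEL_NM
    return length_nm / NM_PER_BP / 1000.0

# Toy example: a straight 14-px strand -> ~10 kbp at these assumed scales.
mask = np.zeros((32, 32), dtype=bool)
mask[16, 9:23] = True
print(f"~{dna_length_kbp(mask):.1f} kbp")
```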
PLOS ONE | 2012
Sam Mavandadi; Stoyan Dimitrov; Steve Feng; Frank Yu; Uzair Sikora; Oguzhan Yaglidere; Swati Padmanabhan; Karin Nielsen; Aydogan Ozcan
In this work we investigate whether the innate visual recognition and learning capabilities of untrained humans can be used in conducting reliable microscopic analysis of biomedical samples toward diagnosis. For this purpose, we designed entertaining digital games that are interfaced with artificial learning and processing back-ends to demonstrate that in the case of binary medical diagnostics decisions (e.g., infected vs. uninfected), with the use of crowd-sourced games it is possible to approach the accuracy of medical experts in making such diagnoses. Specifically, using non-expert gamers we report diagnosis of malaria infected red blood cells with an accuracy that is within 1.25% of the diagnostics decisions made by a trained medical professional.
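The fusion of gamer responses can be sketched as an (optionally accuracy-weighted) vote per cell. This is a simplification of the paper's artificial learning back-end, which also models "questionable" responses; the per-gamer accuracy weights below would come from control cells with known labels.

```python
import numpy as np

def fuse_crowd_votes(votes, gamer_accuracy=None):
    """Fuse binary gamer diagnoses for one red blood cell.

    votes: array of 0/1 labels (1 = infected) from independent gamers.
    gamer_accuracy: optional per-gamer accuracy estimated on control
    cells, used as log-odds weights. With no weights this reduces to a
    simple majority vote.
    """
    votes = np.asarray(votes, dtype=float)
    if gamer_accuracy is None:
        return int(votes.mean() > 0.5)
    acc = np.asarray(gamer_accuracy)
    w = np.log(acc / (1 - acc))             # log-odds weight per gamer
    score = np.sum(w * (2 * votes - 1))     # signed, accuracy-weighted tally
    return int(score > 0)

# Seven gamers, several unreliable: the weighted vote discounts them.
votes = [1, 1, 0, 1, 0, 0, 0]
acc = [0.9, 0.85, 0.55, 0.9, 0.5, 0.6, 0.55]
print(fuse_crowd_votes(votes), fuse_crowd_votes(votes, acc))  # -> 0 1
```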
PLOS ONE | 2012
Sam Mavandadi; Steve Feng; Frank Yu; Stoyan Dimitrov; Karin Nielsen-Saines; William R. Prescott; Aydogan Ozcan
We propose a methodology for digitally fusing diagnostic decisions made by multiple medical experts in order to improve accuracy of diagnosis. Toward this goal, we report an experimental study involving nine experts, where each one was given more than 8,000 digital microscopic images of individual human red blood cells and asked to identify malaria infected cells. The results of this experiment reveal that even highly trained medical experts are not always self-consistent in their diagnostic decisions and that there exists a fair level of disagreement among experts, even for binary decisions (i.e., infected vs. uninfected). To tackle this general medical diagnosis problem, we propose a probabilistic algorithm to fuse the decisions made by trained medical experts to robustly achieve higher levels of accuracy when compared to individual experts making such decisions. By modelling the decisions of experts as a three component mixture model and solving for the underlying parameters using the Expectation Maximisation algorithm, we demonstrate the efficacy of our approach which significantly improves the overall diagnostic accuracy of malaria infected cells. Additionally, we present a mathematical framework for performing ‘slide-level’ diagnosis by using individual ‘cell-level’ diagnosis data, shedding more light on the statistical rules that should govern the routine practice in examination of e.g., thin blood smear samples. This framework could be generalized for various other tele-pathology needs, and can be used by trained experts within an efficient tele-medicine platform.
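The abstract names the key machinery: a mixture model over expert decisions solved with Expectation Maximisation. Below is a simplified two-class, Dawid-Skene-style EM sketch (the paper's model has three components), alternating between estimating per-expert sensitivity/specificity and per-cell infection posteriors.

```python
import numpy as np

def fuse_experts_em(Y, n_iter=50):
    """Dawid-Skene-style EM fusion of binary expert labels.

    Y: (n_cells, n_experts) matrix of 0/1 diagnoses (1 = infected).
    Returns posterior infection probabilities per cell and per-expert
    sensitivity/specificity estimates. A simplified two-class sketch of
    the paper's three-component mixture-model approach.
    """
    p = Y.mean(axis=1)  # init posteriors from the vote fraction
    for _ in range(n_iter):
        # M-step: prevalence and per-expert error rates from soft labels
        pi = p.mean()
        sens = (p[:, None] * Y).sum(0) / (p.sum() + 1e-9)
        spec = ((1 - p)[:, None] * (1 - Y)).sum(0) / ((1 - p).sum() + 1e-9)
        sens = sens.clip(1e-3, 1 - 1e-3); spec = spec.clip(1e-3, 1 - 1e-3)
        # E-step: posterior that each cell is truly infected
        log1 = np.log(pi) + (Y * np.log(sens) + (1 - Y) * np.log(1 - sens)).sum(1)
        log0 = np.log(1 - pi) + ((1 - Y) * np.log(spec) + Y * np.log(1 - spec)).sum(1)
        p = 1 / (1 + np.exp(log0 - log1))
    return p, sens, spec

# Synthetic panel: 9 experts of varying skill labeling 500 cells.
rng = np.random.default_rng(2)
z = rng.random(500) < 0.1                   # true infections (10% prevalence)
skill = rng.uniform(0.8, 0.98, 9)
Y = np.where(z[:, None], rng.random((500, 9)) < skill,
                         rng.random((500, 9)) > skill).astype(int)
p, sens, spec = fuse_experts_em(Y)
print(f"fused accuracy: {((p > 0.5) == z).mean():.3f}")
```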
Scientific Reports | 2016
Steve Feng; Derek Tseng; Dino Di Carlo; Omai B. Garner; Aydogan Ozcan
Routine antimicrobial susceptibility testing (AST) can prevent deaths due to bacteria and reduce the spread of multi-drug-resistance, but cannot be regularly performed in resource-limited-settings due to technological challenges, high-costs, and lack of trained professionals. We demonstrate an automated and cost-effective cellphone-based 96-well microtiter-plate (MTP) reader, capable of performing AST without the need for trained diagnosticians. Our system includes a 3D-printed smartphone attachment that holds and illuminates the MTP using a light-emitting-diode array. An inexpensive optical fiber-array enables the capture of the transmitted light of each well through the smartphone camera. A custom-designed application sends the captured image to a server to automatically determine well-turbidity, with results returned to the smartphone in ~1 minute. We tested this mobile-reader using MTPs prepared with 17 antibiotics targeting Gram-negative bacteria on clinical isolates of Klebsiella pneumoniae, containing highly-resistant antimicrobial profiles. Using 78 patient isolate test-plates, we demonstrated that our mobile-reader meets the FDA-defined AST criteria, with a well-turbidity detection accuracy of 98.21%, minimum-inhibitory-concentration accuracy of 95.12%, and a drug-susceptibility interpretation accuracy of 99.23%, with no very major errors. This mobile-reader could eliminate the need for trained diagnosticians to perform AST, reduce the cost-barrier for routine testing, and assist in spatio-temporal tracking of bacterial resistance.
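Once per-well turbidity has been classified, reading out a minimum inhibitory concentration (MIC) and a susceptibility interpretation is a short piece of logic. The turbidity threshold and breakpoints below are illustrative stand-ins, not the paper's values; the deployed reader classifies turbidity server-side from the well images.

```python
import numpy as np

def mic_from_turbidity(turbidity, concentrations, threshold=0.2):
    """Read an MIC from one antibiotic's dilution series.

    turbidity: per-well turbidity scores ordered by increasing drug
    concentration; a score above `threshold` is called "growth". The MIC
    is the lowest concentration whose well (and all higher ones) shows
    no growth.
    """
    growth = np.asarray(turbidity) > threshold
    for i, grew in enumerate(growth):
        if not grew and not growth[i:].any():
            return concentrations[i]
    return None  # growth at every tested concentration: MIC above range

def interpret(mic, susceptible_breakpoint, resistant_breakpoint):
    """S/I/R call from an MIC and drug-specific breakpoints."""
    if mic is None or mic >= resistant_breakpoint:
        return "R"
    return "S" if mic <= susceptible_breakpoint else "I"

# Example: two-fold dilution series where growth stops at 4 ug/mL.
conc = [0.5, 1, 2, 4, 8, 16, 32]  # ug/mL
turb = [0.9, 0.8, 0.6, 0.1, 0.05, 0.04, 0.05]
mic = mic_from_turbidity(turb, conc)
print(f"MIC = {mic} ug/mL -> "
      f"{interpret(mic, susceptible_breakpoint=4, resistant_breakpoint=16)}")
```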
Light: Science & Applications | 2017
Yichen Wu; Ashutosh Shiledar; Yicheng Li; Jeffrey Wong; Steve Feng; X. D. Chen; C. H. Chen; Kevin Jin; Saba Janamian; Zhe Yang; Zachary S. Ballard; Zoltán Göröcs; Alborz Feizi; Aydogan Ozcan
Rapid, accurate and high-throughput sizing and quantification of particulate matter (PM) in air is crucial for monitoring and improving air quality. In fact, particles in air with a diameter of ≤2.5 μm have been classified as carcinogenic by the World Health Organization. Here we present a field-portable cost-effective platform for high-throughput quantification of particulate matter using computational lens-free microscopy and machine-learning. This platform, termed c-Air, is also integrated with a smartphone application for device control and display of results. This mobile device rapidly screens 6.5 L of air in 30 s and generates microscopic images of the aerosols in air. It provides statistics of the particle size and density distribution with a sizing accuracy of ~93%. We tested this mobile platform by measuring the air quality at different indoor and outdoor environments and measurement times, and compared our results to those of an Environmental Protection Agency–approved device based on beta-attenuation monitoring, which showed strong correlation to c-Air measurements. Furthermore, we used c-Air to map the air quality around Los Angeles International Airport (LAX) over 24 h to confirm that the impact of LAX on increased PM concentration was present even at >7 km away from the airport, especially along the direction of landing flights. With its machine-learning-based computational microscopy interface, c-Air can be adaptively tailored to detect specific particles in air, for example, various types of pollen and mold and provide a cost-effective mobile solution for highly accurate and distributed sensing of air quality.
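Given per-particle diameters from the computational microscopy pipeline, the reported statistics reduce to counting and binning over the 6.5 L sampled volume. A sketch with synthetic sizes (the sizing itself comes from the machine-learning-calibrated pipeline, which is not reproduced here):

```python
import numpy as np

def pm_counts(diameters_um, volume_l):
    """Summarize detected particle sizes into number densities and a histogram.

    diameters_um: sizes of particles detected in one screened air volume
    (micrometers). volume_l: air volume sampled for that measurement
    (c-Air screens 6.5 L in 30 s).
    """
    d = np.asarray(diameters_um)
    return {
        "PM2.5 (#/L)": (d <= 2.5).sum() / volume_l,
        "PM10 (#/L)": (d <= 10.0).sum() / volume_l,
        "size histogram": np.histogram(d, bins=[0, 1, 2.5, 5, 10, 20])[0],
    }

# Illustrative particle-size sample for one 6.5 L screening.
rng = np.random.default_rng(3)
sizes = rng.lognormal(mean=0.5, sigma=0.6, size=400)
for k, v in pm_counts(sizes, volume_l=6.5).items():
    print(k, v)
```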
Proceedings of SPIE | 2016
Steve Feng; Minjae Woo; Hannah Kim; Eunso Kim; Sojung Ki; Lei Shao; Aydogan Ozcan
We developed an easy-to-use and widely accessible crowd-sourcing tool for rapidly training humans to perform biomedical image diagnostic tasks, and demonstrated it by training middle and high school students in South Korea to diagnose malaria-infected red blood cells (RBCs) using Giemsa-stained thin blood smears imaged under light microscopes. We previously used the same platform (i.e., BioGames) to crowd-source diagnoses of individual RBC images, marking them as malaria positive (infected), negative (uninfected), or questionable (insufficient information for a reliable diagnosis). Using a custom-developed statistical framework, we combined the diagnoses from both expert diagnosticians and the minimally trained human crowd to generate a gold standard library of malaria-infection labels for RBCs. Using this library of labels, we developed a web-based training and educational toolset that provides a quantified score for diagnosticians/users to compare their performance against their peers and to view misdiagnosed cells. We have since demonstrated the ability of this platform to quickly train humans without prior experience to reach high diagnostic accuracy compared to expert diagnosticians. Our initial trial group of 55 middle and high school students collectively played for more than 170 hours, each demonstrating significant improvement after only 3 hours of training games, with diagnostic scores matching those of expert diagnosticians. Next, through a national-scale educational outreach program in South Korea, we recruited >1660 students who demonstrated a similar performance level after 5 hours of training. We plan to further demonstrate this tool's effectiveness for other diagnostic tasks involving image labeling, and aim to provide an easily accessible and quickly adaptable framework for online training of new diagnosticians.
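The "quantified score" can be sketched as accuracy against the gold-standard label library, with "questionable" calls excluded from scoring. The exact weighting used by BioGames is not given in this abstract, so the function below is an assumption.

```python
import numpy as np

def diagnostic_score(calls, gold):
    """Score a trainee's cell labels against the gold-standard library.

    calls/gold: arrays of labels per cell, 1 = infected, 0 = uninfected,
    -1 = questionable (excluded from scoring here). Returns a 0-100
    accuracy-style score so trainees can track progress against peers;
    the weighting is illustrative, not the platform's actual formula.
    """
    calls, gold = np.asarray(calls), np.asarray(gold)
    scored = calls >= 0                      # skip "questionable" responses
    correct = (calls == gold) & scored
    return 100.0 * correct.sum() / max(scored.sum(), 1)

print(diagnostic_score([1, 0, 0, -1, 1], [1, 0, 1, 1, 1]))  # -> 75.0
```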