Publications


Featured research published by Juan C. Caicedo.


International Conference on Computer Vision (ICCV) | 2015

Active Object Localization with Deep Reinforcement Learning

Juan C. Caicedo; Svetlana Lazebnik

We present an active detection model for localizing objects in scenes. The model is class-specific and allows an agent to focus attention on candidate regions for identifying the correct location of a target object. This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning. The proposed localization agent is trained using deep reinforcement learning, and evaluated on the Pascal VOC 2007 dataset. We show that agents guided by the proposed model are able to localize a single instance of an object after analyzing only between 11 and 25 regions in an image, and obtain the best detection results among systems that do not use object proposals for object localization.
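A minimal sketch of the kind of box-deformation loop the paper describes: an agent repeatedly picks a transformation action (translate, scale, change aspect ratio, or trigger) until it decides the box is tight. The action names, the step size, and the stubbed Q-function are illustrative assumptions, not the paper's exact hyperparameters or trained network.

```python
# Sketch of the bounding-box transformation actions; the Q-network is stubbed
# with random scores, so this only illustrates the control flow.
import numpy as np

ALPHA = 0.2  # relative step size per deformation (assumed value)

def transform_box(box, action):
    """Apply one deformation action to box = (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    dx, dy = ALPHA * w, ALPHA * h
    moves = {
        "right":   ( dx, 0,  dx, 0),
        "left":    (-dx, 0, -dx, 0),
        "up":      (0, -dy, 0, -dy),
        "down":    (0,  dy, 0,  dy),
        "bigger":  (-dx, -dy, dx, dy),
        "smaller": ( dx,  dy, -dx, -dy),
        "fatter":  (0,  dy, 0, -dy),
        "taller":  ( dx, 0, -dx, 0),
    }
    ox1, oy1, ox2, oy2 = moves[action]
    return (x1 + ox1, y1 + oy1, x2 + ox2, y2 + oy2)

def localize(image_size, q_values, max_steps=25):
    """Greedy roll-out: start from the whole image, stop on 'trigger'."""
    box = (0, 0, image_size[0], image_size[1])
    actions = ["right", "left", "up", "down",
               "bigger", "smaller", "fatter", "taller", "trigger"]
    for _ in range(max_steps):
        a = actions[int(np.argmax(q_values(box)))]
        if a == "trigger":  # the agent declares the current box final
            break
        box = transform_box(box, a)
    return box

# Stand-in for the trained deep Q-network.
print(localize((500, 375), lambda box: np.random.rand(9)))
```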


International Journal of Computer Vision | 2017

Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models

Bryan A. Plummer; Liwei Wang; Chris M. Cervantes; Juan C. Caicedo; Julia Hockenmaier; Svetlana Lazebnik

The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals in accuracy more complex state-of-the-art models, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research.
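A hedged sketch of a score-combination baseline of the kind described above: candidate regions are ranked for a phrase by a weighted sum of cues (embedding similarity, object-detector confidence, color-classifier score, and a bonus for larger regions). The cue weights and helper names are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Region:
    box: tuple             # (x1, y1, x2, y2)
    embed_sim: float       # similarity between region and phrase embeddings
    detector_score: float  # confidence from a common-object detector
    color_score: float     # probability from a color classifier

def area(box):
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def rank_regions(regions, image_area, weights=(1.0, 0.5, 0.3, 0.2)):
    """Rank candidate regions for one phrase by a weighted sum of cues;
    larger boxes receive a small bonus, mirroring the size bias above."""
    w_embed, w_det, w_color, w_size = weights
    def score(r):
        return (w_embed * r.embed_sim
                + w_det * r.detector_score
                + w_color * r.color_score
                + w_size * (area(r.box) / image_area))
    return sorted(regions, key=score, reverse=True)

candidates = [
    Region((10, 10, 60, 80), 0.71, 0.4, 0.2),
    Region((0, 0, 200, 150), 0.55, 0.1, 0.1),
]
print(rank_regions(candidates, image_area=200 * 150)[0].box)
```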


bioRxiv | 2016

Automating Morphological Profiling with Generic Deep Convolutional Networks

Nick Pawlowski; Juan C. Caicedo; Shantanu Singh; Anne E. Carpenter; Amos J. Storkey

Morphological profiling aims to create signatures of genes, chemicals and diseases from microscopy images. Current approaches use classical computer vision-based segmentation and feature extraction. Deep learning models achieve state-of-the-art performance in many computer vision tasks such as classification and segmentation. We propose to transfer activation features of generic deep convolutional networks to extract features for morphological profiling. Our approach surpasses currently used methods in terms of accuracy and processing speed. Furthermore, it enables fully automated processing of microscopy images without need for single cell identification.
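A minimal sketch of the transfer idea: take an ImageNet-pretrained CNN, drop its classification head, and use the pooled activations as per-image morphological features. A torchvision ResNet stands in for the "generic deep convolutional network" here; the actual architecture, preprocessing, and aggregation in the paper may differ, and the weights enum assumes a recent torchvision release.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# ImageNet-pretrained backbone with the classifier removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 2048-d pooled activations
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def profile(images):
    """images: list of PIL.Image microscopy crops or fields of view.
    Returns an (N, 2048) tensor of activation features, which can then be
    aggregated per well or per treatment to form morphological profiles."""
    batch = torch.stack([preprocess(img) for img in images])
    return backbone(batch)
```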


bioRxiv | 2017

CytoGAN: Generative Modeling of Cell Images

Peter Goldsborough; Nick Pawlowski; Juan C. Caicedo; Shantanu Singh; Anne E. Carpenter

We explore the application of Generative Adversarial Networks to the domain of morphological profiling of human cultured cells imaged by fluorescence microscopy. When evaluated for their ability to group cell images responding to treatment by chemicals of known classes, we find that adversarially learned representations are superior to autoencoder-based approaches. While currently inferior to classical computer vision and transfer learning, the adversarial framework enables useful visualization of the variation of cellular images due to their generative capabilities.
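One common way to realize "adversarially learned representations" is to reuse the discriminator's penultimate activations as the cell-image embedding; a hedged sketch of that pattern follows. The layer sizes and architecture are illustrative assumptions, not CytoGAN's actual design.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.features = nn.Sequential(   # conv trunk shared by both uses
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.real_or_fake = nn.Linear(64, 1)  # adversarial head used in GAN training

    def forward(self, x):
        return self.real_or_fake(self.features(x))

    def embed(self, x):
        # Penultimate activations reused as a morphological profile.
        return self.features(x)

disc = Discriminator()
cells = torch.randn(8, 3, 64, 64)   # dummy batch of cell crops
print(disc.embed(cells).shape)      # torch.Size([8, 64])
```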


bioRxiv | 2018

Evaluation of Deep Learning Strategies for Nucleus Segmentation in Fluorescence Images

Juan C. Caicedo; Jonathan Roth; Allen Goodman; Tim Becker; Kyle W. Karhohs; Claire McQuin; Shantanu Singh; Fabian J. Theis; Anne E. Carpenter

Identifying nuclei is often a critical first step in analyzing microscopy images of cells, and classical image processing algorithms are still commonly used for this task. Recent studies indicate that deep learning may yield superior accuracy, but its performance has not been evaluated for high-throughput nucleus segmentation in large collections of images. We compare two deep learning strategies for identifying nuclei in fluorescence microscopy images (U-Net and DeepCell) alongside a classical approach that does not use machine learning. We measure accuracy, types of errors, and computational complexity to benchmark these approaches on a large data set. We publicly release the set of 23,165 manually annotated nuclei and source code to reproduce the results. Our evaluation shows that U-Net outperforms both pixel-wise classification networks and classical algorithms. Although deep learning requires more computation and annotation time than classical algorithms, it improves accuracy and halves the number of errors.
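A sketch of the kind of object-level scoring used to compare segmentation strategies: match each ground-truth nucleus to a predicted object by intersection-over-union (IoU) and tally true positives, missed nuclei, and spurious detections. The matching rule and 0.5 threshold are common conventions, assumed here rather than taken from the paper's exact protocol.

```python
import numpy as np

def match_objects(gt_labels, pred_labels, iou_threshold=0.5):
    """gt_labels, pred_labels: 2-D integer label images (0 = background)."""
    true_pos, matched = 0, set()
    gt_ids = [g for g in np.unique(gt_labels) if g != 0]
    pred_ids = [p for p in np.unique(pred_labels) if p != 0]
    for g in gt_ids:
        gt_mask = gt_labels == g
        best_iou, best_p = 0.0, None
        # Only predicted objects overlapping this nucleus can match it.
        for p in np.unique(pred_labels[gt_mask]):
            if p == 0 or p in matched:
                continue
            pred_mask = pred_labels == p
            iou = (np.logical_and(gt_mask, pred_mask).sum()
                   / np.logical_or(gt_mask, pred_mask).sum())
            if iou > best_iou:
                best_iou, best_p = iou, p
        if best_iou >= iou_threshold:
            true_pos += 1
            matched.add(best_p)
    return {"true_pos": true_pos,
            "false_neg": len(gt_ids) - true_pos,    # missed nuclei
            "false_pos": len(pred_ids) - true_pos}  # spurious or split objects

# Tiny toy label images.
gt = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 2]])
pred = np.array([[3, 3, 0], [3, 3, 0], [0, 0, 0]])
print(match_objects(gt, pred))  # {'true_pos': 1, 'false_neg': 1, 'false_pos': 0}
```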


Computer Vision and Pattern Recognition (CVPR) | 2018

Weakly Supervised Learning of Single-Cell Feature Embeddings

Juan C. Caicedo; Claire McQuin; Allen Goodman; Shantanu Singh; Anne E. Carpenter

We study the problem of learning representations for single cells in microscopy images to discover biological relationships between their experimental conditions. Many new applications in drug discovery and functional genomics require capturing the morphology of individual cells as comprehensively as possible. Deep convolutional neural networks (CNNs) can learn powerful visual representations, but require ground truth for training; this is rarely available in biomedical profiling experiments. While we do not know which experimental treatments produce cells that look alike, we do know that cells exposed to the same experimental treatment should generally look similar. Thus, we explore training CNNs using a weakly supervised approach that uses this information for feature learning. In addition, the training stage is regularized to control for unwanted variations using mixup or RNNs. We conduct experiments on two different datasets; the proposed approach yields single-cell embeddings that are more accurate than the widely adopted classical features, and are competitive with previously proposed transfer learning approaches.
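A hedged sketch of the weak-supervision idea: train a CNN to predict the experimental treatment of each single-cell crop, then discard the classifier head and keep the penultimate features as the cell embedding. The backbone, channel count, and dimensions below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class WeaklySupervisedEmbedder(nn.Module):
    def __init__(self, n_treatments, channels=5, embed_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim), nn.ReLU(),
        )
        self.head = nn.Linear(embed_dim, n_treatments)  # treatment labels as weak labels

    def forward(self, x):
        return self.head(self.backbone(x))  # trained with cross-entropy

    def embed(self, x):
        return self.backbone(x)             # single-cell embedding used downstream

model = WeaklySupervisedEmbedder(n_treatments=100)
crops = torch.randn(4, 5, 96, 96)           # dummy 5-channel single-cell crops
loss = nn.CrossEntropyLoss()(model(crops), torch.randint(0, 100, (4,)))
print(model.embed(crops).shape)             # torch.Size([4, 256])
```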


PLOS Biology | 2018

CellProfiler 3.0: Next-generation image processing for biology

Claire McQuin; Allen Goodman; Vasiliy S. Chernyshev; Lee Kamentsky; Beth Cimini; Kyle W. Karhohs; Minh Doan; Liya Ding; Susanne M. Rafelski; Derek Thirstrup; Winfried Wiegraebe; Shantanu Singh; Tim Becker; Juan C. Caicedo; Anne E. Carpenter

CellProfiler has enabled the scientific research community to create flexible, modular image analysis pipelines since its release in 2005. Here, we describe CellProfiler 3.0, a new version of the software supporting both whole-volume and plane-wise analysis of three-dimensional (3D) image stacks, increasingly common in biomedical research. CellProfiler’s infrastructure is greatly improved, and we provide a protocol for cloud-based, large-scale image processing. New plugins enable running pretrained deep learning models on images. Designed by and for biologists, CellProfiler equips researchers with powerful computational tools via a well-documented user interface, empowering biologists in all fields to create quantitative, reproducible image analysis workflows.


Nature Methods | 2017

Data-analysis strategies for image-based cell profiling

Juan C. Caicedo; Samuel J. Cooper; Florian Heigwer; Scott Warchal; Peng Qiu; Csaba Molnar; Aliaksei Vasilevich; Joseph D. Barry; Harmanjit Singh Bansal; Oren Z. Kraus; Mathias J. Wawer; Lassi Paavolainen; Markus D. Herrmann; Mohammad Hossein Rohban; Jane Hung; Holger Hennig; John Concannon; Ian Smith; Paul A. Clemons; Shantanu Singh; Paul Rees; Peter Horvath; Roger G. Linington; Anne E. Carpenter


International Symposium on Biomedical Imaging (ISBI) | 2018

Combining morphological and migration profiles of in vitro time-lapse data

Tim Becker; Juan C. Caicedo; Shantanu Singh; Markus Weckmann; Anne E. Carpenter


Blood | 2017

Label-Free Analyses of Minimal Residual Disease in ALL Using Deep Learning and Imaging Flow Cytometry

Minh Doan; Marian Case; Dino Masic; Holger Hennig; Claire McQuin; Allen Goodman; Juan C. Caicedo; Olaf Wolkenhauer; Huw D. Summers; Anne E. Carpenter; Andy Filby; Paul Rees; Julie Irving
