Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Andre Esteva is active.

Publication


Featured research published by Andre Esteva.


Nature | 2017

Dermatologist-level classification of skin cancer with deep neural networks

Andre Esteva; Brett Kuprel; Roberto A. Novoa; Justin M. Ko; Susan M. Swetter; Helen M. Blau; Sebastian Thrun

Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images—two orders of magnitude larger than previous datasets—consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 13) and can therefore potentially provide low-cost universal access to vital diagnostic care.
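
The classification setup described above amounts to fine-tuning an ImageNet-pretrained CNN on lesion photographs, with only pixels and disease labels as inputs. The following is a minimal transfer-learning sketch rather than the authors' code: it assumes PyTorch and torchvision, a hypothetical lesions/train folder with one subdirectory per disease label, and the Inception v3 backbone the study started from.

    # Minimal transfer-learning sketch (illustrative; assumes PyTorch/torchvision
    # and a hypothetical "lesions/train" folder with one subfolder per disease).
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Only pixels and folder-derived disease labels go into the network.
    preprocess = transforms.Compose([
        transforms.Resize((299, 299)),  # Inception v3 input size
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    train_set = datasets.ImageFolder("lesions/train", transform=preprocess)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    # Start from ImageNet weights, replace the classifier heads with one output per
    # disease class, then fine-tune the whole network end to end.
    model = models.inception_v3(weights="IMAGENET1K_V1", aux_logits=True)
    model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
    model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, len(train_set.classes))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        logits, aux_logits = model(images)  # train-mode Inception returns main and auxiliary logits
        loss = criterion(logits, labels) + 0.4 * criterion(aux_logits, labels)
        loss.backward()
        optimizer.step()

In the paper itself the fine-tuned network is then evaluated against dermatologists on the two binary tasks; the loop here only illustrates the end-to-end training signal.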


Journal of Experimental Psychology: General | 2016

Visual Scenes are Categorized by Function

Michelle R. Greene; Christopher Baldassano; Andre Esteva; Diane M. Beck; Li Fei-Fei

How do we know that a kitchen is a kitchen by looking? Traditional models posit that scene categorization is achieved through recognizing necessary and sufficient features and objects, yet there is little consensus about what these may be. However, scene categories should reflect how we use visual information. Therefore, we test the hypothesis that scene categories reflect functions, or the possibilities for actions within a scene. Our approach is to compare human categorization patterns with predictions made by both functions and alternative models. We collected a large-scale scene category distance matrix (5 million trials) by asking observers to simply decide whether 2 images were from the same or different categories. Using the actions from the American Time Use Survey, we mapped actions onto each scene (1.4 million trials). We found a strong relationship between ranked category distance and functional distance (r = .50, or 66% of the maximum possible correlation). The function model outperformed alternative models of object-based distance (r = .33), visual features from a convolutional neural network (r = .39), lexical distance (r = .27), and models of visual features. Using hierarchical linear regression, we found that functions captured 85.5% of overall explained variance, with nearly half of the explained variance captured only by functions, implying that the predictive power of alternative models was because of their shared variance with the function-based model. These results challenge the dominant school of thought that visual features and objects are sufficient for scene categorization, suggesting instead that a scene's category may be determined by the scene's function.
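
The model comparison above boils down to rank-correlating the entries of category-distance matrices. Below is a minimal sketch, using NumPy/SciPy and small randomly generated matrices in place of the study's behavioral and function-based distances, of how such a correlation over category pairs can be computed.

    # Rank-correlate two category-distance matrices (illustrative random data,
    # not the study's behavioral or function-based distances).
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_categories = 8

    def random_distance_matrix(n):
        """Symmetric matrix with a zero diagonal, standing in for a distance matrix."""
        m = rng.random((n, n))
        m = (m + m.T) / 2
        np.fill_diagonal(m, 0.0)
        return m

    human_dist = random_distance_matrix(n_categories)     # e.g. same/different judgments
    function_dist = random_distance_matrix(n_categories)  # e.g. action-profile distances

    # Only the upper triangle holds unique category pairs.
    pairs = np.triu_indices(n_categories, k=1)
    rho, p = spearmanr(human_dist[pairs], function_dist[pairs])
    print(f"ranked-distance correlation: r = {rho:.2f} (p = {p:.3f})")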


bioRxiv | 2016

Two Distinct Scene-Processing Networks Connecting Vision and Memory.

Christopher Baldassano; Andre Esteva; Li Fei-Fei; Diane M. Beck

A number of regions in the human brain are known to be involved in processing natural scenes, but the field has lacked a unifying framework for understanding how these different regions are organized and interact. We provide evidence from functional connectivity and meta-analyses for a new organizational principle, in which scene processing relies upon two distinct networks that split the classically defined parahippocampal place area (PPA). The first network of strongly connected regions consists of the occipital place area/transverse occipital sulcus and posterior PPA, which contain retinotopic maps and are not strongly coupled to the hippocampus at rest. The second network consists of the caudal inferior parietal lobule, retrosplenial complex, and anterior PPA, which connect to the hippocampus (especially anterior hippocampus), and are implicated in both visual and nonvisual tasks, including episodic memory and navigation. We propose that these two distinct networks capture the primary functional division among scene-processing regions, between those that process visual features from the current view of a scene and those that connect information from a current scene view with a much broader temporal and spatial context. This new framework for understanding the neural substrates of scene-processing bridges results from many lines of research, and makes specific functional predictions.
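
The functional-connectivity evidence referred to above rests on correlating regional fMRI time courses and grouping regions whose activity covaries at rest. A toy sketch with synthetic time series (no real BOLD data or preprocessing) showing how such a connectivity matrix is formed:

    # Toy resting-state functional connectivity: correlate ROI time courses
    # (synthetic data only; real analyses involve extensive fMRI preprocessing).
    import numpy as np

    rng = np.random.default_rng(1)
    regions = ["OPA/TOS", "posterior PPA", "anterior PPA", "RSC", "cIPL", "hippocampus"]
    n_timepoints = 240

    # Stand-ins for preprocessed BOLD time series, one row per region.
    timeseries = rng.standard_normal((len(regions), n_timepoints))

    # Pearson correlation between every pair of regions gives the connectivity matrix;
    # regions whose rows covary strongly would be assigned to the same network.
    connectivity = np.corrcoef(timeseries)
    print(np.round(connectivity, 2))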


Nature | 2017

Corrigendum: Dermatologist-level classification of skin cancer with deep neural networks

Andre Esteva; Brett Kuprel; Roberto A. Novoa; Justin M. Ko; Susan M. Swetter; Helen M. Blau; Sebastian Thrun

This corrects the article DOI: 10.1038/nature21056


Medical Image Computing and Computer-Assisted Intervention | 2016

Vision-Based Classification of Developmental Disorders Using Eye-Movements

Guido Pusiol; Andre Esteva; Scott S. Hall; Michael C. Frank; Arnold Milstein; Li Fei-Fei

This paper proposes a system for fine-grained classification of developmental disorders via measurements of individuals’ eye-movements using multi-modal visual data. While the system is engineered to solve a psychiatric problem, we believe the underlying principles and general methodology will be of interest not only to psychiatrists but to researchers and engineers in medical machine vision. The idea is to build features from different visual sources that capture information not contained in either modality. Using an eye-tracker and a camera in a setup involving two individuals speaking, we build temporal attention features that describe the semantic location that one person is focused on relative to the other person’s face. In our clinical context, these temporal attention features describe a patient’s gaze on finely discretized regions of an interviewing clinician’s face, and are used to classify their particular developmental disorder.
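
The temporal attention features described above can be pictured as mapping each gaze sample onto a coarse region of the other person's face and then summarizing the resulting sequence. A minimal sketch with made-up region boxes and gaze points follows; the paper discretizes the clinician's face far more finely and combines eye-tracker and camera data.

    # Toy temporal attention features: map gaze samples to face regions over time.
    # Region boxes and gaze points are made up for illustration.
    regions = {
        "left_eye":  (100, 80, 160, 120),   # (x_min, y_min, x_max, y_max) in pixels
        "right_eye": (180, 80, 240, 120),
        "mouth":     (130, 180, 210, 230),
    }

    def region_of(gaze_xy):
        """Return the face region a gaze sample falls in, or 'elsewhere'."""
        x, y = gaze_xy
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return "elsewhere"

    # A short synthetic gaze track, one sample per eye-tracker frame.
    gaze_track = [(150, 100), (152, 101), (200, 95), (170, 210), (300, 300)]
    sequence = [region_of(p) for p in gaze_track]

    # One simple temporal feature: fraction of frames spent on each region.
    labels = list(regions) + ["elsewhere"]
    dwell = {name: sequence.count(name) / len(sequence) for name in labels}
    print(sequence)
    print(dwell)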


Cell | 2018

In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images

Eric Martin Christiansen; Samuel J. Yang; D. Michael Ando; Ashkan Javaherian; Gaia Skibinski; Scott Lipnick; Elliot Mount; Alison O’Neil; Kevan Shah; Alicia K. Lee; Piyush Goyal; William Fedus; Ryan Poplin; Andre Esteva; Marc Berndl; Lee L. Rubin; Philip C. Nelson; Steven Finkbeiner


arXiv: Neurons and Cognition | 2013

On the Technology Prospects and Investment Opportunities for Scalable Neuroscience

Thomas Dean; Biafra Ahanonu; Mainak Chowdhury; Anjali Datta; Andre Esteva; Daniel Eth; Nobie Redmon; Oleg Rumyantsev; Ysis Tarter


arXiv: Computer Vision and Pattern Recognition | 2016

Skin Cancer Detection and Tracking using Data Synthesis and Deep Learning.

Yunzhu Li; Andre Esteva; Brett Kuprel; Roberto A. Novoa; Justin M. Ko; Sebastian Thrun


Journal of Vision | 2015

Two distinct scene processing networks connecting vision and memory.

Christopher Baldassano; Andre Esteva; Diane M. Beck; Li Fei-Fei


Journal of Investigative Dermatology | 2018

Melanoma Early Detection: Big Data, Bigger Picture

Tracy Petrie; Ravikant Samatham; Alexander Witkowski; Andre Esteva; Sancy A. Leachman

Collaboration


Dive into Andre Esteva's collaborations.
