Publication


Featured research published by Jason Yosinski.


Computer Vision and Pattern Recognition | 2015

Deep neural networks are easily fooled: High confidence predictions for unrecognizable images

Anh Mai Nguyen; Jason Yosinski; Jeff Clune

Deep neural networks (DNNs) have recently been achieving state-of-the-art performance on a variety of pattern-recognition tasks, most notably visual classification problems. Given that DNNs are now able to classify objects in images with near-human-level performance, questions naturally arise as to what differences remain between computer and human vision. A recent study [30] revealed that changing an image (e.g. of a lion) in a way imperceptible to humans can cause a DNN to label the image as something else entirely (e.g. mislabeling a lion a library). Here we show a related result: it is easy to produce images that are completely unrecognizable to humans, but that state-of-the-art DNNs believe to be recognizable objects with 99.99% confidence (e.g. labeling with certainty that white noise static is a lion). Specifically, we take convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and then find images with evolutionary algorithms or gradient ascent that DNNs label with high confidence as belonging to each dataset class. It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call “fooling images” (more generally, fooling examples). Our results shed light on interesting differences between human vision and current DNNs, and raise questions about the generality of DNN computer vision.
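The gradient-ascent variant of this procedure is simple enough to sketch. The snippet below is a minimal illustration rather than the authors' exact setup: it starts from white noise and ascends a pretrained classifier's confidence for an arbitrary target class. The torchvision model and the class index are assumptions for illustration only.

```python
import torch
import torchvision.models as models

model = models.alexnet(weights="IMAGENET1K_V1").eval()
target_class = 291  # assumed ImageNet class index, chosen for illustration
x = torch.randn(1, 3, 224, 224, requires_grad=True)  # start from white noise

optimizer = torch.optim.SGD([x], lr=0.5)
for step in range(200):
    optimizer.zero_grad()
    # Maximize the target class's log-probability under the classifier.
    loss = -torch.log_softmax(model(x), dim=1)[0, target_class]
    loss.backward()
    optimizer.step()

confidence = torch.softmax(model(x), dim=1)[0, target_class].item()
print(f"confidence on an unrecognizable image: {confidence:.4f}")
```

After a few hundred steps the image typically still looks like noise to a human while the model's reported confidence climbs toward 1, which is exactly the gap the paper highlights.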


Computer Vision and Pattern Recognition | 2017

Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space

Anh Tuan Nguyen; Jeff Clune; Yoshua Bengio; Alexey Dosovitskiy; Jason Yosinski

Generating high-resolution, photo-realistic images has been a long-standing goal in machine learning. Recently, Nguyen et al. [37] showed one interesting way to synthesize novel images by performing gradient ascent in the latent space of a generator network to maximize the activations of one or multiple neurons in a separate classifier network. In this paper we extend this method by introducing an additional prior on the latent code, improving both sample quality and sample diversity, leading to a state-of-the-art generative model that produces high quality images at higher resolutions (227 × 227) than previous generative models, and does so for all 1000 ImageNet categories. In addition, we provide a unified probabilistic interpretation of related activation maximization methods and call the general class of models Plug and Play Generative Networks. PPGNs are composed of 1) a generator network G that is capable of drawing a wide range of image types and 2) a replaceable condition network C that tells the generator what to draw. We demonstrate the generation of images conditioned on a class (when C is an ImageNet or MIT Places classification network) and also conditioned on a caption (when C is an image captioning network). Our method also improves the state of the art of Multifaceted Feature Visualization [40], which generates the set of synthetic inputs that activate a neuron in order to better understand how deep neural networks operate. Finally, we show that our model performs reasonably well at the task of image inpainting. While image models are used in this paper, the approach is modality-agnostic and can be applied to many types of data.
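The core iterative loop is easy to sketch. Below is a minimal, hedged illustration of gradient ascent in a generator's latent space guided by a separate condition network; G and C are stand-in modules, and a simple Gaussian penalty stands in for the paper's learned prior on the latent code.

```python
import torch
import torch.nn as nn

latent_dim = 128
G = nn.Sequential(nn.Linear(latent_dim, 3 * 64 * 64), nn.Tanh())  # stand-in generator
C = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1000))     # stand-in condition network

z = torch.randn(1, latent_dim, requires_grad=True)
target = 37  # arbitrary class index for illustration
opt = torch.optim.SGD([z], lr=0.1)

for step in range(100):
    opt.zero_grad()
    img = G(z)
    log_probs = torch.log_softmax(C(img), dim=1)
    # Maximize the class log-probability while keeping z near the prior
    # (a Gaussian penalty standing in for the paper's learned prior).
    loss = -log_probs[0, target] + 0.01 * z.pow(2).sum()
    loss.backward()
    opt.step()
```

Because C is replaceable, the same loop conditions on whatever the plugged-in network scores, which is the "plug and play" property the abstract describes.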


Genetic and Evolutionary Computation Conference | 2015

Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning

Anh Mai Nguyen; Jason Yosinski; Jeff Clune

The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search avoids this problem by encouraging search in all interesting directions: it replaces a performance objective with a reward for novel behaviors, as defined by a human-crafted, and often simple, behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and herons instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a novelty pressure in image space does not explore the low-level pixel space, but instead creates a pressure to create new types of images (e.g. churches, mosques, obelisks, etc.). Here we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: e.g. producing intelligent software, robot controllers, optimized physical components, and art.
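A minimal sketch of the simplified loop follows, under assumptions of ours rather than the paper's actual implementation (which evolved CPPN-encoded images): candidates are mutated, scored by a DNN, and archived per class whenever they beat the reigning champion's confidence.

```python
import zlib
import numpy as np

def classifier(image):
    # Stand-in for a pretrained 1000-class DNN: deterministic fake confidences.
    rng = np.random.default_rng(zlib.crc32(image.tobytes()))
    return rng.dirichlet(np.ones(1000))

archive = {}  # class index -> (confidence, image)
population = [np.random.rand(64, 64) for _ in range(20)]

for generation in range(100):
    children = [np.clip(p + np.random.normal(0, 0.05, p.shape), 0, 1)
                for p in population]
    for child in children:
        probs = classifier(child)
        best_class = int(np.argmax(probs))
        # "Interesting novelty": a child survives by beating the current
        # champion of whichever class the DNN sees most strongly in it.
        if probs[best_class] > archive.get(best_class, (0.0, None))[0]:
            archive[best_class] = (float(probs[best_class]), child)
    population = [img for _, img in archive.values()][:20] or population

print(f"archive holds champions for {len(archive)} classes")
```

The per-class archive is what turns a single objective into a pressure toward many different "types" of solutions at once.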


European Conference on Applications of Evolutionary Computation | 2013

Evolving gaits for physical robots with the HyperNEAT generative encoding: the benefits of simulation

Suchan Lee; Jason Yosinski; Kyrre Glette; Hod Lipson; Jeff Clune

Creating gaits for physical robots is a longstanding and open challenge. Recently, the HyperNEAT generative encoding was shown to automatically discover a variety of gait regularities, producing fast, coordinated gaits, but only for simulated robots. A follow-up study found that HyperNEAT did not produce impressive gaits when they were evolved directly on a physical robot. A simpler encoding hand-tuned to produce regular gaits was tried on the same robot and outperformed HyperNEAT, but those gaits were first evolved in simulation before being transferred to the robot. In this paper, we tested the hypothesis that HyperNEAT's beneficial properties would let it outperform the simpler encoding if HyperNEAT gaits were likewise first evolved in simulation before being transferred to reality. That hypothesis was confirmed, resulting in the fastest gaits yet observed for this robot, including those produced by nine different algorithms from three previous papers describing gait-generating techniques for this robot. This result is important because it confirms that the early promise shown by generative encodings, specifically HyperNEAT, is not limited to simulation, but extends to challenging real-world engineering problems such as evolving gaits for physical robots.
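The protocol itself, evolve in simulation and evaluate only the champion on hardware, can be sketched in a few lines. Everything below is a hypothetical stand-in: a toy fitness replaces the physics simulator, a mutation-only loop replaces HyperNEAT, and run_on_robot is an assumed helper.

```python
import random

def simulate_gait(params):
    # Toy stand-in for the simulator's fitness: distance traveled.
    return -sum((p - 0.5) ** 2 for p in params)

population = [[random.random() for _ in range(8)] for _ in range(30)]
for generation in range(50):
    parents = sorted(population, key=simulate_gait, reverse=True)[:10]
    population = [[p + random.gauss(0, 0.05) for p in random.choice(parents)]
                  for _ in range(30)]

best = max(population, key=simulate_gait)
# Only the simulation champion is ever run on hardware (assumed helper):
# run_on_robot(best)
```

The point of the protocol is that the thousands of evaluations evolution needs happen in simulation, while the physical robot sees only the final candidates.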


Cybernetics and Information Technologies | 2012

Learning Fast Quadruped Robot Gaits with the RL PoWER Spline Parameterization

Haocheng Shen; Jason Yosinski; Petar Kormushev; Darwin G. Caldwell; Hod Lipson

Legged robots are uniquely privileged over their wheeled counterparts in their potential to access rugged terrain. However, designing walking gaits by hand for legged robots is a difficult and time-consuming process, so we seek algorithms for learning such gaits automatically through real-world experimentation. Numerous previous studies have examined a variety of algorithms for learning gaits, using an assortment of different robots, and it is often difficult to compare the results from one study to the next because the conditions and robots used vary. With this in mind, we used an open-source, 3D-printed quadruped robot called QuadraTot, so the results may be verified, and hopefully improved upon, by any group so desiring. Because many robots do not have accurate simulators, we test gait-learning algorithms entirely on the physical robot. Previous studies using the QuadraTot have compared parameterized splines, the HyperNEAT generative encoding, and a genetic algorithm; among these, the genetic-algorithm research was conducted by Glette et al. (2012) in a simulator and tested on a real robot. Here we compare these results to an algorithm called Policy learning by Weighting Exploration with the Returns, or RL PoWER. We report that this algorithm learned the fastest gait yet reported in the literature through physical experiments alone, 16.3% faster than reported for HyperNEAT. In addition, the learned gaits are less taxing on the robot and more repeatable than previous record-breaking gaits.
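The PoWER update itself is compact: exploration noise from the best-rewarded rollouts is averaged, weighted by their returns. The sketch below is a simplified illustration; evaluate_gait is a hypothetical stand-in for a physical trial, and the spline gait is reduced to a flat parameter vector.

```python
import numpy as np

def evaluate_gait(theta):
    # Stand-in return; the real experiments measured walking speed.
    return float(np.exp(-np.sum((theta - 0.7) ** 2)))

theta = np.random.rand(12)  # e.g. spline control points describing the gait
history = []                # (return, noise) pairs from past rollouts

for rollout in range(100):
    noise = np.random.normal(0, 0.1, theta.shape)
    history.append((evaluate_gait(theta + noise), noise))
    # PoWER-style update: average the exploration noise of the best
    # rollouts so far, weighted by their (non-negative) returns.
    best = sorted(history, key=lambda rn: rn[0], reverse=True)[:10]
    total = sum(r for r, _ in best)
    theta = theta + sum(r * n for r, n in best) / total
```

Because the update reuses past rollouts rather than estimating gradients, it is sample-efficient enough to run entirely on a physical robot, which is why it suits the no-simulator setting described above.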


Evolutionary Computation | 2016

Understanding innovation engines: Automated creativity and improved stochastic optimization via deep learning

Anh Tuan Nguyen; Jason Yosinski; Jeff Clune

The Achilles Heel of stochastic optimization algorithms is getting trapped on local optima. Novelty Search mitigates this problem by encouraging exploration in all interesting directions by replacing the performance objective with a reward for novel behaviors. This reward for novel behaviors has traditionally required a human-crafted behavioral distance function. While Novelty Search is a major conceptual breakthrough and outperforms traditional stochastic optimization on certain problems, it is not clear how to apply it to challenging, high-dimensional problems where specifying a useful behavioral distance function is difficult. For example, in the space of images, how do you encourage novelty to produce hawks and herons instead of endless pixel static? Here we propose a new algorithm, the Innovation Engine, that builds on Novelty Search by replacing the human-crafted behavioral distance with a Deep Neural Network (DNN) that can recognize interesting differences between phenotypes. The key insight is that DNNs can recognize similarities and differences between phenotypes at an abstract level, wherein novelty means interesting novelty. For example, a DNN-based novelty search in the image space does not explore the low-level pixel space, but instead creates a pressure to create new types of images (e.g., churches, mosques, obelisks, etc.). Here, we describe the long-term vision for the Innovation Engine algorithm, which involves many technical challenges that remain to be solved. We then implement a simplified version of the algorithm that enables us to explore some of the algorithm's key motivations. Our initial results, in the domain of images, suggest that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain: for example, producing intelligent software, robot controllers, optimized physical components, and art.


Artificial Life | 2012

Aracna: An Open-Source Quadruped Platform for Evolutionary Robotics

Sara Lohmann; Jason Yosinski; Eric Gold; Jeff Clune; Jeremy E. Blum; Hod Lipson

We describe a new quadruped robot platform, Aracna, which requires non-intuitive motor commands in order to locomote and thus provides an interesting challenge for gait-learning algorithms, such as those frequently developed in the Evolutionary Computation and Artificial Life communities. Aracna is an open-source hardware project composed of off-the-shelf and 3D-printed parts, enabling other research teams to modify its design according to their scientific needs. Aracna was designed to overcome the shortcomings of a previous quadruped robot platform, whose legs were so heavy that the motors could not reliably execute the commands sent to them. We avoid this problem by locating all motors in the body core instead of on the legs and through a design that gives the servos greater mechanical advantage. Specifically, each of the four legs has two joints controlled by separate four-bar linkage mechanisms that drive the pitch of the hip joint and knee joint. This novel design causes unconventional kinematics, creating an opportunity for gait-learning algorithms, which excel in counter-intuitive design spaces where human engineers tend to underperform. Because it is low-cost, flexible, kinematically interesting, and an improvement over a previous design, Aracna provides a useful new hardware platform for testing algorithms that automatically generate robotic behaviors.
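To see why the kinematics are counter-intuitive, consider a single four-bar linkage: the joint angle is a markedly nonlinear function of the servo angle. The sketch below uses the standard Freudenstein position analysis with illustrative link lengths, not Aracna's actual dimensions.

```python
import math

def fourbar_output_angle(theta2, a=1.0, b=2.5, c=2.0, d=3.0):
    """Output (rocker) angle theta4 for input (crank) angle theta2, in radians."""
    k1, k2 = d / a, d / c
    k3 = (a**2 - b**2 + c**2 + d**2) / (2 * a * c)
    A = math.cos(theta2) - k1 - k2 * math.cos(theta2) + k3
    B = -2 * math.sin(theta2)
    C = k1 - (k2 + 1) * math.cos(theta2) + k3
    disc = B**2 - 4 * A * C
    if disc < 0:
        raise ValueError("linkage cannot be assembled at this input angle")
    # One of the two roots; the other is the crossed configuration.
    return 2 * math.atan2(-B - math.sqrt(disc), 2 * A)

# A uniform servo sweep produces a distinctly non-uniform joint sweep:
for deg in range(0, 91, 15):
    t4 = fourbar_output_angle(math.radians(deg))
    print(f"servo {deg:3d} deg -> joint {math.degrees(t4):7.2f} deg")
```

This nonlinearity is exactly what makes hand-designing gaits hard here and what makes the platform a good benchmark for learning algorithms.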


Phytopathology | 2017

Automated Identification of Northern Leaf Blight-Infected Maize Plants from Field Imagery Using Deep Learning

Chad DeChant; Tyr Wiesner-Hanks; Siyuan Chen; Ethan L. Stewart; Jason Yosinski; Michael A. Gore; Rebecca J. Nelson; Hod Lipson

Northern leaf blight (NLB) can cause severe yield loss in maize; however, scouting large areas to accurately diagnose the disease is time consuming and difficult. We demonstrate a system capable of automatically identifying NLB lesions in field-acquired images of maize plants with high reliability. This approach uses a computational pipeline of convolutional neural networks (CNNs) that addresses the challenges of limited data and the myriad irregularities that appear in images of field-grown plants. Several CNNs were trained to classify small regions of images as containing NLB lesions or not; their predictions were combined into separate heat maps, then fed into a final CNN trained to classify the entire image as containing diseased plants or not. The system achieved 96.7% accuracy on test set images not used in training. We suggest that such systems mounted on aerial- or ground-based vehicles can help in automated high-throughput plant phenotyping, precision breeding for disease resistance, and reduced pesticide use through targeted application across a variety of plant and disease categories.
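The pipeline's structure, patch classifiers feeding heat maps into a whole-image classifier, can be sketched as follows. All modules and sizes here are stand-ins; the paper's architectures and training procedure differ.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

    def forward(self, patch):                   # patch: (N, 3, 32, 32)
        return torch.sigmoid(self.net(patch))   # lesion probability per patch

def heat_map(cnn, image, patch=32, stride=32):
    # Slide over the image; each cell of the map is one patch's lesion score.
    _, H, W = image.shape
    rows = []
    for y in range(0, H - patch + 1, stride):
        rows.append([cnn(image[:, y:y+patch, x:x+patch].unsqueeze(0)).item()
                     for x in range(0, W - patch + 1, stride)])
    return torch.tensor(rows)

stage_one = [PatchCNN(), PatchCNN()]   # several independently trained patch CNNs
final_cnn = nn.Sequential(             # classifies the stacked heat maps
    nn.Conv2d(len(stage_one), 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid())

image = torch.rand(3, 256, 256)        # stand-in for a field-acquired image
maps = torch.stack([heat_map(c, image) for c in stage_one]).unsqueeze(0)
print("P(image contains NLB lesions) =", final_cnn(maps).item())
```

Splitting the problem this way lets the patch-level stage learn from limited labeled data while the final stage reasons over the whole plant.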


Computer Vision and Pattern Recognition | 2016

Recombinator Networks: Learning Coarse-to-Fine Feature Aggregation

Sina Honari; Jason Yosinski; Pascal Vincent; Chris Pal

Deep neural networks with alternating convolutional, max-pooling and decimation layers are widely used in state-of-the-art architectures for computer vision. Max-pooling purposefully discards precise spatial information in order to create features that are more robust, typically organized as lower-resolution spatial feature maps. On some tasks, such as whole-image classification, max-pooling-derived features are well suited; however, for tasks requiring precise localization, such as pixel-level prediction and segmentation, max-pooling destroys exactly the information required to perform well. Precise localization may be preserved by shallow convnets without pooling, but at the expense of robustness. Can we have our max-pooled multilayered cake and eat it too? Several papers have proposed summation- and concatenation-based methods for combining upsampled coarse, abstract features with finer features to produce robust pixel-level predictions. Here we introduce another model, dubbed Recombinator Networks, where coarse features inform finer features early in their formation, such that finer features can make use of several layers of computation in deciding how to use coarse features. The model is trained once, end to end, and performs better than summation-based architectures, reducing the error from the previous state of the art on two facial keypoint datasets, AFW and AFLW, by 30%, and beating the current state of the art on 300W without using extra data. We improve performance even further by adding a denoising prediction model based on a novel convnet formulation.
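The recombination idea reduces to a small building block: upsample the coarse branch and concatenate it into the fine branch before the fine branch's remaining convolutions, so later layers learn how to use the coarse context. A minimal sketch with illustrative sizes, not the paper's exact architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecombinatorBlock(nn.Module):
    def __init__(self, fine_ch, coarse_ch, out_ch):
        super().__init__()
        # The fine branch convolves over [fine features ; upsampled coarse].
        self.conv = nn.Conv2d(fine_ch + coarse_ch, out_ch, 3, padding=1)

    def forward(self, fine, coarse):
        up = F.interpolate(coarse, size=fine.shape[-2:], mode="bilinear",
                           align_corners=False)
        return F.relu(self.conv(torch.cat([fine, up], dim=1)))

# Fine features at 64x64 receive coarse context from a 16x16 map early,
# leaving the remaining fine layers free to decide how to combine them.
fine = torch.randn(1, 32, 64, 64)
coarse = torch.randn(1, 64, 16, 16)
block = RecombinatorBlock(fine_ch=32, coarse_ch=64, out_ch=32)
print(block(fine, coarse).shape)  # torch.Size([1, 32, 64, 64])
```

Concatenating before further convolution, rather than summing at the output, is what distinguishes this design from the summation-based alternatives the abstract mentions.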


Artificial Life | 2016

WebAL comes of age: A review of the first 21 years of artificial life on the web

Tim Taylor; Joshua Evan Auerbach; Josh C. Bongard; Jeff Clune; Simon J. Hickinbotham; Charles Ofria; Mizuki Oka; Sebastian Risi; Kenneth O. Stanley; Jason Yosinski

We present a survey of the first 21 years of web-based artificial life (WebAL) research and applications, broadly construed to include the many different ways in which artificial life and web technologies might intersect. Our survey covers the period from 1994, when the first WebAL work appeared, up to the present day, together with a brief discussion of relevant precursors. We examine recent projects, from 2010 to 2015, in greater detail in order to highlight the current state of the art. We follow the survey with a discussion of common themes and methodologies that can be observed in recent work and identify a number of likely directions for future work in this exciting area.
