Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Timothy Arthur Mann is active.

Publication


Featured research published by Timothy Arthur Mann.


IEEE Transactions on Autonomous Mental Development | 2013

Autonomous and Interactive Improvement of Binocular Visual Depth Estimation through Sensorimotor Interaction

Timothy Arthur Mann; Yunjung Park; Sungmoon Jeong; Minho Lee; Yoonsuck Choe

We investigate how a humanoid robot with a randomly initialized binocular vision system can learn to improve judgments about egocentric distances using limited action and interaction that might be available to human infants. First, we show how distance estimation can be improved autonomously. We consider our approach to be autonomous because the robot learns to accurately estimate distance without a human teacher providing the distances to training targets. We find that actions that, in principle, do not alter the robot's distance to the target are a powerful tool for exposing estimation errors. These errors can be used to train a distance estimator. Furthermore, the simple action used (i.e., neck rotation) does not require high-level cognitive processing or fine motor skill. Next, we investigate how interaction with humans can further improve visual distance estimates. We find that human interaction can improve distance estimates for far targets outside of the robot's peripersonal space. This is accomplished by extending our autonomous approach above to integrate additional information provided by a human. Together, these experiments suggest that both action and interaction are important tools for improving perceptual estimates.
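
The core autonomous training idea lends itself to a compact illustration. The Python sketch below is not the authors' code; the class, the function names, and the consistency-based update rule are assumptions. It shows how an action that leaves the true distance unchanged (such as a neck rotation) can expose disagreement between successive distance estimates and turn that disagreement into a self-supervised training signal.

```python
# Minimal sketch (not the authors' code): using a distance-invariant action
# (e.g., a neck rotation) to generate a self-supervised training signal for
# a distance estimator. All names and the update rule are hypothetical.
import numpy as np

class DistanceEstimator:
    """Linear estimator mapping binocular features to an egocentric distance."""
    def __init__(self, n_features, lr=0.01):
        self.w = np.zeros(n_features)
        self.lr = lr

    def predict(self, features):
        return float(self.w @ features)

    def update(self, features, target):
        # Simple gradient step on squared error toward the target distance.
        error = target - self.predict(features)
        self.w += self.lr * error * features
        return error

def autonomous_training_step(estimator, features_before, features_after):
    """One self-supervised update: the neck rotation does not change the true
    distance, so the two estimates should agree. Their average serves as a
    pseudo-target that pulls both predictions toward consistency."""
    d_before = estimator.predict(features_before)
    d_after = estimator.predict(features_after)
    pseudo_target = 0.5 * (d_before + d_after)
    estimator.update(features_before, pseudo_target)
    estimator.update(features_after, pseudo_target)
    return abs(d_before - d_after)  # estimation error exposed by the action
```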


International Conference on Development and Learning | 2010

Prenatal to postnatal transfer of motor skills through motor-compatible sensory representations

Timothy Arthur Mann; Yoonsuck Choe

How can sensory-motor skills developed as a fetus transfer to postnatal life? We investigate a simulated reaching task by training controllers under prenatal conditions (i.e., confined space) and evaluating them under postnatal conditions (i.e., targets outside of the confined training space). One possible solution is to identify a sensory representation that is easy to extrapolate over. We compared two kinds of sensory representations: a world-centered sensory representation based on Cartesian coordinates and an agent-centered sensory representation based on polar coordinates. Despite similar performance under prenatal conditions, controllers using the agent-centered sensory representation performed significantly better than controllers using the world-centered sensory representation under postnatal conditions. It turns out that the success of the agent-centered sensory representation is (in part) due to being complementary to the action encodings. Further analysis shows that the action encodings (i.e., changes in joint angles) were highly predictive of the change in state when the agent-centered sensory representation was used (but not the world-centered one). This suggests that a powerful strategy for transferring sensory-motor skills to postnatal life involves selecting a sensory representation that complements the action encodings used by an agent.
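
As a rough illustration of the two encodings compared above, the Python sketch below (hypothetical, not the authors' simulation) computes a world-centered Cartesian encoding and an agent-centered polar encoding of the same target. A pure rotation of the agent shifts only the bearing in the polar encoding, which is why changes in joint angles map more directly onto agent-centered state changes.

```python
# Minimal sketch (not the authors' code) contrasting the two sensory encodings
# of a planar target position relative to the agent. Names are hypothetical.
import numpy as np

def world_centered(target_xy):
    """World-centered encoding: raw Cartesian coordinates of the target."""
    return np.asarray(target_xy, dtype=float)

def agent_centered(target_xy, agent_xy, agent_heading):
    """Agent-centered encoding: distance and bearing (polar coordinates)
    of the target relative to the agent's position and heading."""
    dx, dy = np.asarray(target_xy, dtype=float) - np.asarray(agent_xy, dtype=float)
    distance = np.hypot(dx, dy)
    bearing = np.arctan2(dy, dx) - agent_heading
    return np.array([distance, bearing])

# A rotation of the agent (akin to a change in a joint angle) shifts the
# bearing by a constant while leaving the distance unchanged, so the change
# in the agent-centered state is directly predictable from the action.
target = (1.0, 2.0)
print(agent_centered(target, agent_xy=(0.0, 0.0), agent_heading=0.0))
print(agent_centered(target, agent_xy=(0.0, 0.0), agent_heading=0.5))
```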


Archive | 2012

Evolution of Time in Neural Networks: From the Present to the Past, and Forward to the Future

Ji Ryang Chung; Jaerock Kwon; Timothy Arthur Mann; Yoonsuck Choe

What is time? Since the function of the brain is closely tied in with that of time, investigating the origin of time in the brain can help shed light on this question. In this paper, we propose to use simulated evolution of artificial neural networks to investigate the relationship between time and brain function, and the evolution of time in the brain. A large number of neural network models are based on a feedforward topology (perceptrons, backpropagation networks, radial basis functions, support vector machines, etc.), thus lacking dynamics. In such networks, the order of input presentation is meaningless (i.e., it does not affect the behavior) since the behavior is largely reactive. That is, such neural networks can only operate in the present, having no access to the past or the future. However, biological neural networks are mostly constructed with a recurrent topology, and recurrent (artificial) neural network models are able to exhibit rich temporal dynamics, thus time becomes an essential factor in their operation. In this paper, we will investigate the emergence of recollection and prediction in evolving neural networks. First, we will show how reactive, feedforward networks can evolve a memory-like function (recollection) through utilizing external markers dropped and detected in the environment. Second, we will investigate how recurrent networks with more predictable internal state trajectory can emerge as an eventual winner in evolutionary struggle when competing networks with less predictable trajectory show the same level of behavioral performance. We expect our results to help us better understand the evolutionary origin of recollection and prediction in neuronal networks, and better appreciate the role of time in brain function.
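
The distinction drawn above between reactive and stateful networks can be made concrete with a minimal sketch (hypothetical, not the paper's models): a feedforward unit whose output depends only on the current input, versus a recurrent unit whose output also depends on a hidden state carried across time steps, so the order of inputs matters.

```python
# Minimal sketch (not the paper's models): a reactive feedforward unit versus
# a recurrent unit that carries internal state across time steps.
import numpy as np

def feedforward_step(x, W):
    # Output depends only on the present input: no access to the past.
    return np.tanh(W @ x)

def recurrent_step(x, h, W_in, W_rec):
    # Output depends on the present input and the previous hidden state,
    # so the order of the input sequence now matters.
    return np.tanh(W_in @ x + W_rec @ h)
```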


BMC Neuroscience | 2010

Neural conduction delay forces the emergence of predictive function in simulated evolution

Timothy Arthur Mann; Yoonsuck Choe

Evidence from biological studies suggests that humans are able to predict the sensory consequences of their own actions [1]. Computational studies also demonstrate the advantage of systems that predict the sensory consequences of actions over those that predict the value of actions alone [2]. But how could the ability to predict sensory consequences of actions have evolved? One solution, suggested by [3], is that prediction mechanisms first evolved to deal with natural sources of delay. Delay is commonly considered a purely negative feature of real-world systems; however, we argue that delay can actually encourage the evolution of prediction of sensory consequences. We hypothesize that adding sensory delay to an evolving population of sensory-motor agents will increase reliance on internal prediction of sensory consequences. To test our hypothesis, we evolved populations of artificial neural networks on a complex control task (i.e., pole balancing; see Figure 1) with varied neural conduction delay (Δt) between the sensory neurons and the input to the control network (see Figure 2), which estimates the long-term cost of applying a specific action. For the top-fitness networks, hidden unit activations were recorded along with the true consequent sensory state during several evaluation trials. Each sensory variable was associated with the hidden unit it was maximally correlated with, and the average of these correlation values provides a measure of how well an agent can predict the sensory consequences of actions. We expected to find that increasing sensory delay also increases this average correlation measure. The results of the experiment (summarized in Figure 3) show that with no delay successful agents use a range of strategies; however, as delay increases, successful strategies are forced to rely more and more on prediction of the next state to compensate for the sensory delay. This is surprising considering that it is considerably easier to predict the next state under conditions of no delay than under increased delay. Although the common conception of delay is negative, sensory delay can direct natural selection to favor individuals that are better able to predict the sensory consequences of their actions.

Figure 1: Cart-pole balancing. Figure 2: Control network structure. Figure 3: Absolute correlation between hidden unit activations and variables of the state at time t+Δt, as delay increases.
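
The correlation measure described in the abstract can be sketched in a few lines of Python. This is one interpretation of that description (function and variable names are assumptions): each sensory variable at time t+Δt is matched to the hidden unit whose activation correlates with it most strongly, and those maximal absolute correlations are averaged.

```python
# Minimal sketch (an interpretation of the measure described above): associate
# each sensory variable at time t + Δt with the hidden unit whose activation is
# maximally correlated with it, then average those maximal correlations.
import numpy as np

def prediction_score(hidden_activations, future_sensors):
    """hidden_activations: (T, n_hidden), future_sensors: (T, n_sensors).
    Returns the mean over sensory variables of the maximal absolute
    correlation with any hidden unit."""
    n_hidden = hidden_activations.shape[1]
    n_sensors = future_sensors.shape[1]
    best = np.zeros(n_sensors)
    for s in range(n_sensors):
        corrs = [abs(np.corrcoef(hidden_activations[:, h], future_sensors[:, s])[0, 1])
                 for h in range(n_hidden)]
        best[s] = max(corrs)
    return best.mean()
```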


Human-Robot Interaction | 2009

Human-robot interaction observations from a proto-study using SUAVs for structural inspection

Maarten van Zomeren; Joshua M. Peschel; Timothy Arthur Mann; Gabe Knezek; James Doebbler; Jeremy J. Davis; Tracy Hammond; Augustinus H. J. Oomes; Robin R. Murphy

Small unmanned aerial vehicles (SUAVs) have been used for post-disaster structural inspection in the aftermath of disasters such as Hurricane Katrina and the Berkman Plaza II parking garage collapse [Pratt et al. 2008; Murphy 2006; Murphy et al. 2008]. Video and photos captured from SUAVs provided responders with unique vantage points; unfortunately, interpretation and use of the imagery proved difficult for experts both on- and off-site [Pratt et al. 2008]. This was mostly attributed to spatial data confusion and excess, as inconsistent labeling conventions appeared in the post-Katrina missions.


Granular Computing | 2008

Effects of varying the delay distribution in random, scale-free, and small-world networks

Bum Soon Jang; Timothy Arthur Mann; Yoonsuck Choe

Graph-theory-based approaches have been used with great success when analyzing abstract properties of natural and artificial networks. However, these approaches have not factored in delay, which plays an important role in real-world networks. In this paper, we (1) developed a simple yet powerful method to include delay in graph-based analysis of networks, and (2) evaluated how different classes of networks (random, scale-free, and small-world) behave under different forms of delay (peaked, unimodal, or uniform delay distribution). We compared results from synthetically generated networks using two different sets of algorithms for network construction. In the first approach (naive), we generated directed graphs following the literal definition of the three types of networks. In the second approach (modified conventional), we adapted the methods of Erdos-Renyi (random), Barabasi (scale-free), and Watts-Strogatz (small-world). With these networks, we investigated the effect of adding and varying the delay distribution. As a measure of robustness to added delay, we calculated the ratio of the sums of shortest path lengths between every pair of nodes with and without the added delay. Our main findings show that different types of networks exhibit different levels of robustness, but the shape of the delay distribution has more influence on the overall result, with uniformly distributed delay giving the most robust results. Other network parameters, such as neighborhood size in small-world networks, were also found to play a key role in delay tolerance. These results are expected to extend our understanding of the relationship between network structure and delay.
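
A rough reading of the robustness measure described above can be sketched with networkx. This is an interpretation, not the authors' code: it builds the three network classes with standard generators (rather than the paper's naive and modified-conventional constructions), draws per-edge delays from a chosen distribution, and compares the sum of all-pairs shortest path lengths with and without the delay.

```python
# Minimal sketch (an interpretation, not the authors' code): robustness of a
# network class to added edge delay, measured as the ratio of the sums of
# all-pairs shortest path lengths without and with delay.
import networkx as nx
import numpy as np

def delay_robustness(graph, delay_sampler):
    # Sum of hop-count shortest path lengths over all reachable pairs.
    base = sum(l for _, targets in nx.shortest_path_length(graph)
               for l in targets.values())
    delayed = graph.copy()
    for u, v in delayed.edges():
        # Each edge costs 1 hop plus a sampled delay.
        delayed[u][v]["weight"] = 1.0 + delay_sampler()
    with_delay = sum(l for _, targets in nx.shortest_path_length(delayed, weight="weight")
                     for l in targets.values())
    return base / with_delay

rng = np.random.default_rng(0)
nets = {
    "random": nx.erdos_renyi_graph(100, 0.05, seed=0),
    "scale-free": nx.barabasi_albert_graph(100, 3, seed=0),
    "small-world": nx.watts_strogatz_graph(100, 6, 0.1, seed=0),
}
for name, g in nets.items():
    print(name, delay_robustness(g, lambda: rng.uniform(0.0, 2.0)))
```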


International Joint Conference on Artificial Intelligence | 2017

Approximate Value Iteration with Temporally Extended Actions (Extended Abstract)

Timothy Arthur Mann; Shie Mannor; Doina Precup

The options framework provides a concrete way to implement and reason about temporally extended actions. Existing literature has demonstrated the value of planning with options empirically, but there is a lack of theoretical analysis formalizing when planning with options is more efficient than planning with primitive actions. We provide a general analysis of the convergence rate of a popular Approximate Value Iteration (AVI) algorithm called Fitted Value Iteration (FVI) with options. Our analysis reveals that longer-duration options and a pessimistic estimate of the value function both lead to faster convergence. Furthermore, options can improve convergence even when they are suboptimal and sparsely distributed throughout the state space. Next, we consider generating useful options for planning based on a subset of landmark states. This suggests a new algorithm, Landmark-based AVI (LAVI), that represents the value function only at landmark states. We analyze OFVI (FVI with options) and LAVI using the proposed landmark-based options and compare the two algorithms. Our theoretical and experimental results demonstrate that options can play an important role in AVI by decreasing approximation error and inducing fast convergence.
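
The intuition that longer-duration options speed up value iteration can be illustrated with a toy backup. The sketch below is not the paper's OFVI or LAVI; it is a hedged illustration in which each option is summarized by an expected return, an expected duration k, and a termination state, and the backup discounts the bootstrapped value by gamma^k, so a longer option propagates value further per sweep.

```python
# Minimal sketch (not the paper's OFVI/LAVI): a value-iteration backup over
# temporally extended actions. Each option taken at state s is summarized by
# its expected discounted return, its expected duration k, and the state
# where it terminates. All names and the toy problem are hypothetical.
import numpy as np

def option_backup(V, options, gamma=0.9):
    """One sweep of value iteration where the backup maximizes over options.
    V: array of state values; options: dict mapping state ->
    list of (reward, duration, next_state) tuples."""
    V_new = np.copy(V)
    for s, outcomes in options.items():
        V_new[s] = max(r + gamma ** k * V[s_next] for r, k, s_next in outcomes)
    return V_new

# Toy usage: a 3-state chain where a 4-step option jumps from state 0 to the
# goal (state 2), versus single-step primitive moves.
V = np.zeros(3)
options = {
    0: [(0.0, 1, 1), (0.5, 4, 2)],   # primitive step vs. a 4-step option
    1: [(0.0, 1, 2)],
    2: [(1.0, 1, 2)],                 # goal self-loop with reward
}
for _ in range(10):
    V = option_backup(V, options, gamma=0.9)
print(V)
```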


European Workshop on Reinforcement Learning | 2012

Directed Exploration in Reinforcement Learning with Transferred Knowledge

Timothy Arthur Mann; Yoonsuck Choe


Uncertainty in Artificial Intelligence | 2018

A Dual Approach to Scalable Verification of Deep Networks

Krishnamurthy Dvijotham; Robert Stanforth; Sven Gowal; Timothy Arthur Mann; Pushmeet Kohli


arXiv: Learning | 2016

Iterative Hierarchical Optimization for Misspecified Problems (IHOMP)

Daniel J. Mankowitz; Timothy Arthur Mann; Shie Mannor

Collaboration


Dive into Timothy Arthur Mann's collaborations.

Top Co-Authors

Shie Mannor, Technion – Israel Institute of Technology
Daniel J. Mankowitz, Technion – Israel Institute of Technology
Assaf Hallak, Technion – Israel Institute of Technology
François Schnitzler, Technion – Israel Institute of Technology
André da Motta Salles Barreto, University of Massachusetts Amherst