Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Judy A. Franklin is active.

Publication


Featured research published by Judy A. Franklin.


IEEE Control Systems Magazine | 1994

Acquiring robot skills via reinforcement learning

V. Gullapalli; Judy A. Franklin; H. Benbrahim

Skill acquisition is a difficult yet important problem in robot performance. The authors focus on two skills, robotic assembly and balancing, and on two classic tasks for developing these skills via learning: the peg-in-hole insertion task and the ball-balancing task. A stochastic real-valued (SRV) reinforcement learning algorithm is described and used for learning control, and the authors show how it can be used with nonlinear multilayer ANNs. In the peg-in-hole insertion task, the SRV network successfully learns to insert a peg into a hole with extremely low clearance, in spite of high sensor noise. In the ball-balancing task, the SRV network successfully learns to balance the ball with minimal feedback.
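
The SRV algorithm lends itself to a compact sketch. Below is a minimal, illustrative SRV unit on a toy regulation task; the plant, reward function, learning rates, and exploration schedule are assumptions for illustration, not the authors' implementation.

```python
# Illustrative SRV unit: the action is drawn from a Gaussian whose mean is
# learned and whose spread shrinks as predicted reinforcement rises.
# (Assumption: this toy task and these constants are ours, not the paper's.)
import numpy as np

rng = np.random.default_rng(0)

class SRVUnit:
    def __init__(self, n_inputs, lr_mu=0.1, lr_v=0.1):
        self.w = np.zeros(n_inputs)   # weights for the action mean
        self.v = np.zeros(n_inputs)   # weights for predicted reinforcement
        self.lr_mu, self.lr_v = lr_mu, lr_v

    def act(self, x):
        mu = self.w @ x                    # deterministic action mean
        r_hat = self.v @ x                 # expected reinforcement in [0, 1]
        sigma = max(1.0 - r_hat, 0.01)     # explore less as performance improves
        return rng.normal(mu, sigma), mu, sigma, r_hat

    def learn(self, x, y, mu, sigma, r_hat, r):
        # Reinforcement comparison: move the mean toward actions that did
        # better than expected, away from actions that did worse.
        self.w += self.lr_mu * (r - r_hat) * ((y - mu) / sigma) * x
        self.v += self.lr_v * (r - r_hat) * x

# Toy 1-D regulation task: drive the state toward zero.
unit = SRVUnit(n_inputs=1)
state = 1.0
for _ in range(2000):
    x = np.array([state])
    y, mu, sigma, r_hat = unit.act(x)
    state = 0.9 * state + 0.1 * y          # hypothetical plant
    r = 1.0 / (1.0 + state * state)        # reward approaches 1 near zero
    unit.learn(x, y, mu, sigma, r_hat, r)
```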


INFORMS Journal on Computing | 2006

Recurrent Neural Networks for Music Computation

Judy A. Franklin

Some researchers in the computational sciences have considered music computation, including music reproduction and generation, as a dynamic system, i.e., a feedback process. The key element is that the state of the musical system depends on a history of past states. Recurrent (neural) networks have been deployed as models for learning musical processes. We first present a tutorial discussion of recurrent networks, covering those that have been used for music learning. Following this, we examine a thread of development of these recurrent networks for music computation, showing how more intricate music has been learned as the state of the art in recurrent networks has improved. We present findings showing that a long short-term memory (LSTM) recurrent network, with new representations that include music knowledge, can learn musical tasks and can learn to reproduce long songs. Then, given a reharmonization of the chordal structure, it can generate an improvisation.
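
As an illustration of the kind of recurrent music model discussed, here is a minimal sketch of a next-pitch predictor, assuming PyTorch; the 12-class one-hot pitch encoding, the toy melody, and all hyperparameters are assumptions, not the paper's representation or network.

```python
# Minimal next-pitch LSTM sketch (illustrative assumptions throughout).
import torch
import torch.nn as nn

PITCHES = 12                                      # one-hot pitch classes (assumption)
melody = torch.tensor([0, 4, 7, 4, 0, 4, 7, 4])   # toy C-major arpeggio

class NextPitchLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(PITCHES, hidden, batch_first=True)
        self.out = nn.Linear(hidden, PITCHES)

    def forward(self, x):
        h, _ = self.lstm(x)        # recurrent state carries the musical history
        return self.out(h)         # logits for the next pitch class

model = NextPitchLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

x = nn.functional.one_hot(melody[:-1], PITCHES).float().unsqueeze(0)
y = melody[1:].unsqueeze(0)        # target: the pitch that follows each input

for _ in range(200):
    opt.zero_grad()
    logits = model(x)              # shape (1, T, PITCHES)
    loss = loss_fn(logits.view(-1, PITCHES), y.view(-1))
    loss.backward()
    opt.step()
```

The dependence of the hidden state on the whole past sequence is the "feedback process" the abstract describes; richer representations and longer memories are what let later models of this kind handle longer songs.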


Conference on Decision and Control | 1988

Refinement of robot motor skills through reinforcement learning

Judy A. Franklin

An extension of earlier work on the refinement of robotic motor control using reinforcement learning is described. It is no longer assumed that the magnitude of the state-dependent nonlinear torque is known. The learning controller learns not only the presence of the torque but also its magnitude. The ability of the learning system to learn this real-valued mapping from output feedback and reference input to control signal is facilitated by a stochastic algorithm that uses reinforcement feedback. A learning controller that can learn nonlinear mappings holds many possibilities for extending existing adaptive control research.
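
One way to picture the setup is a fixed feedback law plus a learned additive term that compensates the unknown torque, trained from reinforcement alone. The sketch below is a loose illustration under that reading; the plant, feature basis, gains, and REINFORCE-style correlation update are assumptions, not the paper's algorithm.

```python
# Fixed PD feedback plus a stochastically learned compensator for an
# unknown state-dependent torque (all constants are illustrative).
import numpy as np

rng = np.random.default_rng(1)
w = np.zeros(3)            # learned compensator weights
r_bar = 0.0                # running reinforcement baseline
sigma, dt = 0.05, 0.01
q, dq, ref = 0.5, 0.0, 0.0

def features(q, dq, ref):
    # Hypothetical basis built from output feedback and reference input.
    return np.array([q - ref, dq, np.sin(q)])

for _ in range(5000):
    x = features(q, dq, ref)
    noise = rng.normal(0.0, sigma)              # stochastic exploration
    u = -8.0 * (q - ref) - 2.0 * dq + w @ x + noise
    ddq = u - 3.0 * np.sin(q)                   # torque of unknown magnitude
    q, dq = q + dt * dq, dq + dt * ddq
    r = -((q - ref) ** 2 + 0.1 * dq ** 2)       # reinforcement: tracking cost
    w += 0.05 * (r - r_bar) * noise * x         # correlate noise with success
    r_bar = 0.99 * r_bar + 0.01 * r             # adapt the baseline
```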


International Symposium on Neural Networks | 1992

Real-time learning: a ball on a beam

H. Benbrahim; J.S. Doleac; Judy A. Franklin; Oliver G. Selfridge

In the Real-Time Learning Laboratory at GTE Laboratories, machine learning algorithms are being implemented on hardware testbeds. A modified connectionist actor-critic system has been applied to a ball-balancing task. The system learns to balance a ball on a beam in less than 5 min and maintains the balance. The ball rolls along a few inches of track on a flat metal beam that an electric motor can rotate. A learning system running on a PC senses the position of the ball and the angular position of the beam, and learns to prevent the ball from reaching either end of the beam. The system has been shown to be robust to sensor noise and mechanical changes; it has also generated many interesting questions for future research.
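
A minimal actor-critic sketch in the spirit of such a controller is given below, on simplified ball-and-beam dynamics; the state discretization, dynamics, gains, and bang-bang beam action are assumptions, not the GTE testbed.

```python
# Tabular actor-critic on a toy ball-and-beam: the critic learns state
# values by TD, the actor shifts action preferences along the TD error.
import numpy as np

rng = np.random.default_rng(2)
N_BINS, dt, g = 50, 0.02, 9.8
V = np.zeros(N_BINS)           # critic: value per discretized state
P = np.zeros(N_BINS)           # actor: preference for "tilt right"

def bin_of(pos, vel):
    i = int((np.clip(pos, -1, 1) + 1) / 2 * 4.999)    # 5 position bins
    j = int((np.clip(vel, -2, 2) + 2) / 4 * 9.999)    # 10 velocity bins
    return i * 10 + j

pos, vel = 0.3, 0.0
for step in range(20000):
    s = bin_of(pos, vel)
    p_right = 1.0 / (1.0 + np.exp(-P[s]))
    a = 1.0 if rng.random() < p_right else -1.0       # bang-bang beam angle
    vel += dt * (5.0 / 7.0) * g * np.sin(0.1 * a)     # rolling-ball acceleration
    pos += dt * vel
    failed = abs(pos) >= 1.0                          # ball reached an end
    r = -1.0 if failed else 0.0
    s2 = bin_of(pos, vel)
    td = r + (0.0 if failed else 0.95 * V[s2]) - V[s] # TD error
    V[s] += 0.2 * td                                  # critic update
    P[s] += 0.5 * td * (a - (2 * p_right - 1))        # actor update
    if failed:
        pos, vel = rng.uniform(-0.2, 0.2), 0.0        # restart the episode
```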


Conference on Decision and Control | 1989

Historical perspective and state of the art in connectionist learning control

Judy A. Franklin

Connectionist learning control is surveyed, starting with work by learning control engineers in the sixties and early seventies. The controllers are reviewed in roughly chronological order, stressing the concepts and the interaction of components in each learning control architecture. Comparisons with adaptive control techniques are made, some necessarily so because adaptive control techniques are integrated into several of the systems.


Archive | 1996

Recent Advances in Robot Learning

Sebastian Thrun; Judy A. Franklin; Tom M. Mitchell

Contents:
Machine Learning
Real-World Robotics: Learning to Plan for Robust Execution
Robot Programming by Demonstration (RPD): Supporting the Induction by Human Interaction
Performance Improvement of Robot Continuous-Path Operation through Iterative Learning Using Neural Networks
Learning Controllers for Industrial Robots
Active Learning for Vision-Based Robot Grasping
Purposive Behavior Acquisition for a Real Robot by Vision-Based Reinforcement Learning
Learning Concepts from Sensor Data of a Mobile Robot


American Control Conference | 1987

Compliant Control Using Robust Multivariable Feedback Methods

Theodore E. Djaferis; B. Murah; Judy A. Franklin

In this paper we deal with the problem of dynamic control of robotic manipulators in the presence of uncertainty. Specifically, we focus on compliant control in the context of surface tracing. We use the Combined Force-Position control architecture and develop control laws based on a system model obtained by operating-point linearization techniques. Robust analysis and design methods are presented using frequency-domain multivariable feedback methods. Compensator design is carried out for a two-link planar manipulator, and experimental results are shown for execution of a guarded move.
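
The combined force-position idea can be sketched as a selection matrix that routes each task direction to either a position loop or a force loop; the gains, setpoints, and the 2-DOF task frame below are assumptions for illustration.

```python
# Hybrid force-position control law for surface tracing: position control
# along the surface, force control normal to it (illustrative constants).
import numpy as np

S = np.diag([1.0, 0.0])                 # 1 = position-controlled direction
I = np.eye(2)

def control(x, x_des, f, f_des, dx):
    # PD position loop in the selected directions, proportional force
    # loop in the complementary (surface-normal) directions.
    u_pos = 40.0 * (x_des - x) - 6.0 * dx
    u_frc = 0.8 * (f_des - f)
    return S @ u_pos + (I - S) @ u_frc  # combine via the selection matrix

# One evaluation at a hypothetical operating point:
u = control(x=np.array([0.10, 0.05]), x_des=np.array([0.20, 0.05]),
            f=np.array([0.0, 4.0]), f_des=np.array([0.0, 5.0]),
            dx=np.zeros(2))
```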


International Journal on Artificial Intelligence Tools | 2005

Recurrent Neural Networks for Musical Pitch Memory and Classification

Judy A. Franklin; Krystal K. Locke

We present results from experiments in using several pitch representations for jazz-oriented musical tasks performed by a recurrent neural network. We have run experiments with several kinds of rec...
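
For flavor, here are two generic pitch encodings of the kind such experiments compare, a localist one-hot code and a distributed circle-of-fifths code; these specific codes are illustrative assumptions and may differ from the representations the paper actually tests.

```python
# Two generic pitch-class encodings (illustrative, not the paper's designs).
import numpy as np

def one_hot(pitch_class):
    v = np.zeros(12)
    v[pitch_class % 12] = 1.0
    return v                        # localist: one unit per pitch class

def circle_of_fifths(pitch_class):
    # Distributed code: position on the circle of fifths as (cos, sin),
    # so harmonically close pitches get nearby vectors.
    step = (pitch_class * 7) % 12   # C, G, D, ... ordering
    theta = 2 * np.pi * step / 12
    return np.array([np.cos(theta), np.sin(theta)])
```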


Vehicular Technology Conference | 1992

Learning channel allocation strategies in real time

Judy A. Franklin; M.D. Smith; J.C. Yun

Preliminary investigations into using connectionist machine learning for dynamic channel allocation in real time are described. The algorithms were implemented on a simple radio testbed consisting of a channel allocator and two channel requesters. The channel allocator is a computer that communicates via a transceiver; it learns to model the time-dependent behavior of the two channel requesters and thereby learns to allocate channels dynamically. Channels are requested by two different transceivers run by small processors. The learning criterion is to minimize a cost function of channel use. The results show that models of channel activity can be learned and that controllers can learn to use these models to allocate channels. A comparison indicates that such learning controllers perform better than a fixed controller that does not learn.
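
A toy sketch of the idea: predict time-dependent demand, then allocate to minimize a cost of blocked requests and idle reservations. The requesters' schedules, the cost weights, and the running-average learner below are assumptions, not the testbed's algorithms.

```python
# Learn P(request) per hour per requester, allocate to predicted-active
# users, and accumulate a usage cost (illustrative throughout).
import numpy as np

rng = np.random.default_rng(3)
HOURS = 24
model = np.full((HOURS, 2), 0.5)   # learned P(request) per hour/requester

def requested(hour, who):          # hidden time-dependent behavior
    busy = (hour // 12) == who     # requester 0 busy at night, 1 by day
    return rng.random() < (0.9 if busy else 0.1)

total_cost = 0.0
for day in range(200):
    for hour in range(HOURS):
        req = np.array([requested(hour, w) for w in (0, 1)])
        alloc = model[hour] > 0.5  # allocate to predicted-active users
        # Cost: blocked requests, plus a lighter charge for idle reservations.
        total_cost += (req & ~alloc).sum() + 0.2 * (alloc & ~req).sum()
        model[hour] += 0.05 * (req - model[hour])  # running-average update
```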


Conference on Decision and Control | 1992

Qualitative reinforcement learning control

Judy A. Franklin

An attempt is made to develop a reinforcement learning controller for a system described in more abstract, behavioral terms than those addressed by most controllers. The learning experiments center on the behavior of a ball rolling on a track, and the work evolves from prediction of that behavior to control of it. The experiments are also evaluated as a way of thinking about learning and experimentation at higher levels. The abstract description of the ball system is provided by a qualitative behavior of the system, given certain state information and certain knowledge. The knowledge used is described, and its necessity for solving the problem is explained. The knowledge description is cast as part of a hierarchical controller, and generalizations to higher forms of learning are proposed.
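
One hedged reading of the prediction-to-control progression: learn, per discretized state and action, which qualitative outcome follows (ball exits left, exits right, or stays), then choose actions predicted to keep it on the track. The dynamics, discretization, and tallying scheme below are assumptions, not the paper's system.

```python
# Qualitative model learning then control: tally outcomes per (state, action)
# and act to maximize the predicted probability of "stay" (illustrative).
import numpy as np

rng = np.random.default_rng(4)
counts = np.ones((10, 2, 3))          # state bin x action x {left, stay, right}

def outcome(pos):
    return 0 if pos < -1 else (2 if pos > 1 else 1)

def bin_of(pos):
    return int((np.clip(pos, -1, 1) + 1) / 2 * 9.999)

pos, vel = 0.0, 0.0
for step in range(20000):
    s = bin_of(pos)
    pred = counts[s] / counts[s].sum(axis=1, keepdims=True)
    a = int(np.argmax(pred[:, 1]))    # action most likely to yield "stay"
    if rng.random() < 0.1:
        a = int(rng.integers(2))      # occasional experimentation
    vel += 0.02 * (0.5 if a else -0.5)  # tilt the track one way or the other
    pos += 0.02 * vel
    counts[s, a, outcome(pos)] += 1   # update the qualitative model
    if abs(pos) > 1:
        pos, vel = rng.uniform(-0.5, 0.5), 0.0
```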

Collaboration


Dive into Judy A. Franklin's collaboration.

Top Co-Authors

B. Murah, University of Massachusetts Amherst
Theodore E. Djaferis, University of Massachusetts Amherst
Tom M. Mitchell, Carnegie Mellon University