
Publication


Featured research published by Daniel L. Silver.


Canadian Conference on Artificial Intelligence | 2002

The Task Rehearsal Method of Life-Long Learning: Overcoming Impoverished Data

Daniel L. Silver; Robert E. Mercer

The task rehearsal method (TRM) is introduced as an approach to life-long learning that uses the representation of previously learned tasks as a source of inductive bias. This inductive bias enables TRM to generate more accurate hypotheses for new tasks that have small sets of training examples. TRM has a knowledge retention phase during which the neural network representation of a successfully learned task is stored in a domain knowledge database, and a knowledge recall and learning phase during which virtual examples of stored tasks are generated from the domain knowledge. The virtual examples are rehearsed as secondary tasks in parallel with the learning of a new (primary) task using the ηMTL neural network algorithm, a variant of multiple task learning (MTL). The results of experiments on three domains show that TRM is effective in retaining task knowledge in a representational form and transferring that knowledge in the form of virtual examples. TRM with ηMTL is shown to develop more accurate hypotheses for tasks that suffer from impoverished training sets.
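The core of the rehearsal idea can be sketched in a few lines: a stored model is probed with random inputs, and its outputs become virtual training pairs. The stand-in "stored task model" below is a hypothetical fixed linear threshold function, not the paper's neural network; it only illustrates how virtual examples are produced without access to the original training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a retained task representation: in TRM this
# would be the stored neural network; here a fixed linear threshold
# model plays that role.
def stored_task_model(x):
    w = np.array([1.0, -2.0, 0.5])
    return (x @ w > 0).astype(float)

def generate_virtual_examples(model, n_examples, n_inputs):
    """Probe the stored representation with random inputs and keep its
    outputs as (x, y) pairs -- virtual examples for rehearsal, created
    without the original data."""
    x = rng.uniform(-1.0, 1.0, size=(n_examples, n_inputs))
    return x, model(x)

x_virtual, y_virtual = generate_virtual_examples(stored_task_model, 200, 3)
# These pairs would then be trained as secondary tasks in parallel with
# the new primary task's small real training set.
```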


Canadian Conference on Artificial Intelligence | 2004

Sequential Consolidation of Learned Task Knowledge

Daniel L. Silver; Ryan Poirier

A fundamental problem of life-long machine learning is how to consolidate the knowledge of a learned task within a long-term memory structure (domain knowledge) without the loss of prior knowledge. Consolidated domain knowledge makes more efficient use of memory and can be used for more efficient and effective transfer of knowledge when learning future tasks. Relevant background material on knowledge based inductive learning and the transfer of task knowledge using multiple task learning (MTL) neural networks is reviewed. A theory of task knowledge consolidation is presented that uses a large MTL network as the long-term memory structure and task rehearsal to overcome the stability-plasticity problem and the loss of prior knowledge. The theory is tested on a synthetic domain of diverse tasks and it is shown that, under the proper conditions, task knowledge can be sequentially consolidated within an MTL network without loss of prior knowledge. In fact, a steady increase in the accuracy of consolidated domain knowledge is observed.


Machine Learning | 2008

Inductive transfer with context-sensitive neural networks

Daniel L. Silver; Ryan Poirier; Duane Currie

Context-sensitive Multiple Task Learning, or csMTL, is presented as a method of inductive transfer which uses a single output neural network and additional contextual inputs for learning multiple tasks. Motivated by problems with the application of MTL networks to machine lifelong learning systems, csMTL encoding of multiple task examples was developed and found to improve predictive performance. As evidence, the csMTL method is tested on seven task domains and shown to produce hypotheses for primary tasks that are often better than standard MTL hypotheses when learning in the presence of related and unrelated tasks. We argue that the reason for this performance improvement is a reduction in the number of effective free parameters in the csMTL network brought about by the shared output node and weight update constraints due to the context inputs. An examination of IDT and SVM models developed from csMTL encoded data provides initial evidence that this improvement is not shared across all machine learning models.
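The csMTL re-encoding described above can be illustrated with a small sketch: each example for k tasks becomes k single-output examples, with a one-hot task context appended to the inputs. The function name and array layout are my own for illustration, not from the paper.

```python
import numpy as np

def csmtl_encode(x, y_tasks):
    """csMTL-style re-encoding (sketch): an example (x, y_1..y_k) for k
    tasks becomes k single-output examples, each carrying a one-hot
    task-context vector appended to its inputs."""
    n, k = y_tasks.shape
    inputs, targets = [], []
    for t in range(k):
        context = np.zeros(k)
        context[t] = 1.0                      # which task this row is for
        for i in range(n):
            inputs.append(np.concatenate([x[i], context]))
            targets.append(y_tasks[i, t])
    return np.array(inputs), np.array(targets)

x = np.array([[0.2, 0.8], [0.5, 0.1]])        # 2 examples, 2 features
y = np.array([[1.0, 0.0], [0.0, 1.0]])        # targets for 2 tasks
xe, ye = csmtl_encode(x, y)                   # 4 rows, 4 inputs each
```

A single-output network trained on `xe`/`ye` then learns all tasks through shared weights, with the context inputs selecting which task function is expressed.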


Machine Learning | 2008

Guest editor's introduction: special issue on inductive transfer learning

Daniel L. Silver; Kristin P. Bennett

Inductive transfer or transfer learning refers to the problem of retaining and applying the knowledge learned in one or more tasks to efficiently develop an effective hypothesis for a new task. While all learning involves generalization across problem instances, transfer learning emphasizes the transfer of knowledge across domains, tasks, and distributions that are similar but not the same. For example, learning to recognize chairs might help to recognize tables, or learning to play checkers might improve the learning of chess. While people are adept at inductive transfer, even across widely disparate domains, we have only begun to develop the associated computational learning theory, and there are few machine learning systems that exhibit knowledge transfer. Inductive transfer invokes some of the most important questions in artificial intelligence. Amongst its challenges are questions such as:

• What is the best representation and method for retaining learned background knowledge? How does one index into such knowledge?
• What is the best representation and method for transferring prior knowledge to a new task?
• How does the use of prior knowledge affect hypothesis search heuristics?
• What is the nature of similarity or relatedness between tasks for the purposes of learning? Can it be measured?
• What role does curriculum play in the sequential learning of tasks?


Data Mining, Intrusion Detection, Information Assurance, and Data Networks Security 2008 | 2008

Using received signal strength variation for surveillance in residential areas

Sajid Hussain; Richard Peters; Daniel L. Silver

Wireless sensor technology has a wide range of uses, from medical to environmental to military applications. One such use is home security. A wireless sensor network could be used to detect the presence of an intruder. We have investigated the use of Received Signal Strength Indicator (RSSI) values to determine the mobility of an intruder and have found that accurate intruder detection is possible for at least short distances (up to 20 feet). The results of interference monitoring show that a wireless sensor network could be a feasible alternative for security and surveillance of homes.
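The detection principle is that a person moving through a radio link disturbs the received signal strength, so variation in RSSI over a short window signals motion. The sketch below is a minimal illustration of that idea; the threshold value and sample data are hypothetical, not taken from the paper, and in practice the threshold would be calibrated against a quiet-environment baseline.

```python
import statistics

def motion_detected(rssi_window, threshold=4.0):
    """Flag possible intruder motion when the variance of recent RSSI
    samples (in dBm) exceeds a threshold. Threshold is illustrative."""
    return statistics.pvariance(rssi_window) > threshold

quiet = [-60, -61, -60, -60, -61, -60]   # stable link: no one moving
walk  = [-60, -52, -67, -55, -71, -58]   # fluctuating link: motion
```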


International Work-Conference on the Interplay Between Natural and Artificial Computation | 2007

Requirements for Machine Lifelong Learning

Daniel L. Silver; Ryan Poirier

A system that is capable of retaining learned knowledge and selectively transferring portions of that knowledge as a source of inductive bias during new learning would be a significant advance in artificial intelligence and inductive modeling. We define such a system to be a machine lifelong learning, or ML3 system. This paper makes an initial effort at specifying the scope of ML3 systems and their functional requirements.


Learning to Learn | 1998

The parallel transfer of task knowledge using dynamic learning rates based on a measure of relatedness

Daniel L. Silver; Robert E. Mercer

With a distinction made between two forms of task knowledge transfer, representational and functional, ηMTL, a modified version of the MTL method of functional (parallel) transfer, is introduced. The ηMTL method employs a separate learning rate, η_k, for each task output node k; η_k varies as a function of a measure of relatedness, R_k, between the kth task and the primary task of interest. Results of experiments demonstrate the ability of ηMTL to dynamically select the most related source task(s) for the functional transfer of prior domain knowledge. The ηMTL method of learning is nearly equivalent to standard MTL when all parallel tasks are sufficiently related to the primary task, and is similar to single task learning when none of the parallel tasks are related to the primary task.
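The per-task learning-rate idea can be sketched as a single weight-update step in which task k's output weights move at η_k = η · R_k. This is a minimal illustration under my own simplified setup (one weight vector per task output node); the update rule's interaction with shared hidden weights in the actual network is not shown.

```python
import numpy as np

def eta_mtl_update(weights, grads, base_eta, relatedness):
    """One gradient step with per-task learning rates (sketch):
    eta_k = base_eta * R_k, so the primary task (R = 1) trains at the
    full rate while unrelated tasks (R near 0) are barely updated."""
    etas = base_eta * np.clip(np.asarray(relatedness, dtype=float), 0.0, 1.0)
    return [w - eta * g for w, g, eta in zip(weights, grads, etas)]

# Output-node weights for a primary task and two secondary tasks,
# all starting at 1.0 with unit gradients (illustrative values).
weights = [np.array([1.0]), np.array([1.0]), np.array([1.0])]
grads = [np.array([1.0]), np.array([1.0]), np.array([1.0])]
updated = eta_mtl_update(weights, grads, base_eta=0.1,
                         relatedness=[1.0, 0.5, 0.0])
```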


Canadian Conference on Artificial Intelligence | 2003

Selective transfer of task knowledge using stochastic noise

Daniel L. Silver; Peter McCracken

The selective transfer of task knowledge within the context of artificial neural networks is studied using a modified version of the previously reported ηMTL (multiple task learning) method. sMTL is a knowledge-based inductive learning system that uses prior task knowledge and stochastic noise to adjust its inductive bias when learning a new task. The MTL representation of previously learned and consolidated tasks is used as the starting point for learning a new primary task. Task rehearsal ensures the stability of related secondary task knowledge within the sMTL network, and stochastic noise is used to create plasticity in the network so as to allow the new task to be learned. sMTL controls the level of noise to each secondary task based on a measure of secondary to primary task relatedness. Experiments demonstrate that from impoverished training sets, sMTL uses the prior representations to quickly develop predictive models that have (1) superior generalization ability compared with models produced by single task learning or standard MTL and (2) equivalent generalization ability compared with models produced by ηMTL.
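The noise-control idea can be sketched as a per-task noise scale that grows as relatedness shrinks: closely related secondary tasks stay stable, unrelated ones receive more noise and therefore more plasticity. The function name and the `sigma_max` parameter are my own illustrative choices, not values from the paper.

```python
import numpy as np

def smtl_noise_scale(relatedness, sigma_max=0.5):
    """sMTL-style noise control (sketch): noise for a secondary task is
    inversely proportional to its relatedness R to the primary task,
    trading stability for plasticity where prior knowledge matters least."""
    r = np.clip(np.asarray(relatedness, dtype=float), 0.0, 1.0)
    return sigma_max * (1.0 - r)

# Fully related, half related, and unrelated secondary tasks.
scales = smtl_noise_scale([1.0, 0.5, 0.0])
```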


Artificial General Intelligence | 2011

Machine lifelong learning: challenges and benefits for artificial general intelligence

Daniel L. Silver

We propose that it is appropriate to more seriously consider the nature of systems that are capable of learning over a lifetime. There are three reasons for taking this position. First, there exists a body of related work for this research under names such as constructive induction, continual learning, sequential task learning and, most recently, learning with deep architectures. Second, the computational and data storage power of modern computers is sufficient to implement and test machine lifelong learning systems. Third, pursuing programs of research in this area presents significant challenges and benefits for AGI and the brain sciences. This paper discusses each of the above in the context of a general framework for machine lifelong learning.


International Journal of Web and Grid Services | 2010

User profile management: reference model and web services implementation

Zhongxu Ma; Daniel L. Silver; Elhadi M. Shakshuki

A user profile is a structured representation of an individual user's characteristics and personal preferences with respect to a software application or computing device. As the variety and complexity of applications and mobile devices increase, there is a growing need and interest in personalisation. This necessitates methods of managing user profile content such that it can be accessed, updated and potentially shared over communication networks. This research investigates User Profile Management (UPM) as a network-based service for managing user profile content. The major requirements for a good UPM service are defined. A reference model is proposed that includes an architecture, a profile data schema, a protocol, basic command functions and security mechanisms. A prototype UPM service and four client applications based on the reference model are developed using Java and web services technologies. Scenarios are constructed to demonstrate the value of the UPM reference model and the web services implementation. We conclude that the proposed reference model provides a solid foundation for developing UPM services and that web services technologies are suitable for implementing the reference model in a distributed network environment.
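The kind of basic command functions such a reference model describes can be sketched as a minimal in-memory profile store with application-scoped get and update operations. The schema and function names below are hypothetical illustrations, not the command set defined in the paper (the prototype itself was built in Java with web services).

```python
# Hypothetical in-memory UPM-style store: user -> application -> preferences.
profiles = {}

def update_profile(user_id, application, preferences):
    """Create or merge an application-scoped preference record."""
    record = profiles.setdefault(user_id, {})
    record.setdefault(application, {}).update(preferences)

def get_profile(user_id, application):
    """Fetch the preferences one application holds for a user."""
    return profiles.get(user_id, {}).get(application, {})

update_profile("alice", "mail_client", {"language": "en", "theme": "dark"})
update_profile("alice", "mail_client", {"theme": "light"})  # partial update
```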

Collaboration



Top Co-Authors


Robert E. Mercer

University of Western Ontario
