Bradford Heap
University of New South Wales
Publications
Featured research published by Bradford Heap.
Australasian Joint Conference on Artificial Intelligence | 2011
Bradford Heap; Maurice Pagnucco
Multi-robot task allocation research has focused on sequential single-item auctions and various extensions as quick methods for allocating tasks to robots with small overall team costs. In this paper we outline the benefits of grouping tasks with positive synergies and auctioning clusters of tasks rather than individual tasks. We show that with task clustering the winner determination costs remain the same as for sequential single-item auctions, and that auctioning task clusters can result in smaller overall team costs.
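The auction scheme described above can be illustrated with a minimal sketch (not the paper's implementation; the Manhattan metric, brute-force routing, and greedy winner determination are assumptions for illustration): each round, every robot bids its marginal path cost for each remaining item, where an item is either a single task or a pre-formed cluster of tasks with positive synergy, and the cheapest (robot, item) pair wins.

```python
# Hypothetical sketch of a sequential auction over tasks or task clusters.
from itertools import permutations

def path_cost(start, points):
    """Cheapest cost to visit all points from start (brute force, Manhattan)."""
    if not points:
        return 0.0
    best = float("inf")
    for order in permutations(points):
        cost, pos = 0.0, start
        for p in order:
            cost += abs(p[0] - pos[0]) + abs(p[1] - pos[1])
            pos = p
        best = min(best, cost)
    return best

def sequential_auction(robots, items):
    """Allocate items (each a list of task locations) one round at a time.
    Each round, the (robot, item) pair with the smallest marginal cost wins."""
    allocation = {r: [] for r in robots}
    remaining = list(items)
    while remaining:
        best = None
        for r in robots:
            base = path_cost(r, allocation[r])
            for item in remaining:
                marginal = path_cost(r, allocation[r] + list(item)) - base
                if best is None or marginal < best[0]:
                    best = (marginal, r, item)
        _, r, item = best
        allocation[r].extend(item)
        remaining.remove(item)
    return allocation
```

Auctioning clusters rather than individual tasks reduces the number of bidding rounds, since each item passed to `sequential_auction` can carry several task locations at once.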
Multiagent System Technologies | 2013
Bradford Heap; Maurice Pagnucco
In this paper we study an extension of the multi-robot task allocation problem for online tasks requiring pickup and delivery. We extend our previous work on sequential single-cluster auctions to handle this more complex task allocation problem. Our empirical experiments analyse this technique in an environment with dynamic task insertion. We consider the trade-off between solution quality and overall planning time in globally reallocating all uncompleted tasks versus local replanning upon the insertion of a new task. Our key result shows that global reallocation of all uncompleted tasks outperforms local replanning in minimising robot path distances.
Australasian Joint Conference on Artificial Intelligence | 2012
Bradford Heap; Maurice Pagnucco
Recent research has shown the benefits of using K-means clustering in task allocation to robots. However, there is little evaluation of other clustering techniques. In this paper we compare K-means clustering to single-linkage clustering and consider the effects of straight-line and true-path distance metrics in cluster formation. Our empirical results show single-linkage clustering with a true-path distance metric provides the best solutions to the multi-robot task allocation problem when used in sequential single-cluster auctions.
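Single-linkage clustering as compared above can be sketched briefly (an illustrative sketch, not the paper's code; the Euclidean default and agglomerative formulation are assumptions): clusters are merged by closest-pair distance until k remain, with the distance metric passed in so that a straight-line or true-path function can be swapped interchangeably.

```python
# Hypothetical single-linkage agglomerative clustering of task locations,
# parameterised by a distance metric (straight-line shown; a true-path
# metric over the environment's map could be substituted).
def euclidean(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def single_linkage(tasks, k, dist=euclidean):
    """Merge the two clusters with the closest pair of points until k remain."""
    clusters = [[t] for t in tasks]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return clusters
```

Because only the `dist` callable changes between metrics, the same clustering code supports the straight-line versus true-path comparison the abstract describes.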
Pacific Rim International Conference on Artificial Intelligence | 2014
Bradford Heap; Alfred Krzywicki; Wayne Wobcke; Michael Bain; Paul Compton
In this paper we consider the problem of job recommendation, suggesting suitable jobs to users based on their profiles. We compare a baseline method treating users and jobs as documents, where suitability is measured using cosine similarity, with a model that incorporates job transitions trained on the career progressions of a set of users. We show that the job transition model outperforms cosine similarity. Furthermore, a cascaded system combining career transitions with cosine similarity generates more recommendations of a similar quality. The analysis is conducted by examining data from 2,400 LinkedIn users, and evaluated by determining how well the methods predict users’ current positions from their profiles and previous position history.
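The cosine-similarity baseline described above can be sketched in a few lines (a toy sketch with invented data; the bag-of-words representation and raw term counts are assumptions, not details from the paper): user profiles and job descriptions are treated as documents, and suitability is the cosine of their term-count vectors.

```python
# Hypothetical cosine-similarity baseline for job recommendation.
from collections import Counter
from math import sqrt

def cosine(doc_a, doc_b):
    """Cosine similarity of two texts under a bag-of-words representation."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(profile, jobs, top_n=1):
    """Rank job descriptions by cosine similarity to a user profile."""
    return sorted(jobs, key=lambda j: cosine(profile, j), reverse=True)[:top_n]
```

The job-transition model the paper favours would instead score candidate jobs by learned transition likelihoods from a user's previous positions; the cascade combines that scoring with a cosine fallback.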
Advanced Data Mining and Applications | 2017
Bradford Heap; Alfred Krzywicki; Susanne Schmeidl; Wayne Wobcke; Michael Bain
Constructing datasets to analyse the progression of conflicts has been a longstanding objective of peace and conflict studies research. In essence, the problem is to reliably extract relevant text snippets and code (annotate) them using an ontology that is meaningful to social scientists. Such an ontology usually characterises either types of violent events (killing, bombing, etc.), and/or the underlying drivers of conflict, themselves hierarchically structured, for example security, governance and economics, subdivided into conflict-specific indicators. Numerous coding approaches have been proposed in the social science literature, ranging from fully automated “machine” coding to human coding. Machine coding is highly error-prone, especially for labelling complex drivers, and suffers from extraction of duplicated events, but human coding is expensive and suffers from inconsistency between annotators; thus hybrid approaches are required. In this paper, we analyse experimentally how human input can most effectively be used in a hybrid system to complement machine coding. Using two newly created real-world datasets, we show that machine learning methods improve on rule-based automated coding for filtering large volumes of input, while human verification of relevant/irrelevant text leads to improved performance of machine learning for predicting multiple labels in the ontology.
Pacific Rim International Conference on Multi-Agents | 2013
Bradford Heap; Maurice Pagnucco
The task allocation problem with pickup and delivery is an extension of the widely studied multi-robot task allocation (MRTA) problem, which, in general, considers each task as a single location to visit. Within the robotics domain, distributed auctions are a popular method for task allocation [4]. In this work, we consider a team of autonomous mobile robots making deliveries in an office-like environment. Each robot has a set of tasks to complete, and each task is composed of a pickup location and a delivery location. The robots seek to complete their assigned tasks by minimising either the distance travelled or the time taken, according to a global team objective. During execution, individual robots may fail due to malfunctioning equipment or running low on battery power.
Pacific Rim Knowledge Acquisition Workshop | 2018
Alfred Krzywicki; Wayne Wobcke; Michael Bain; Susanne Schmeidl; Bradford Heap
A major problem in the field of peace and conflict studies is to extract events from a variety of news sources. The events need to be coded with an event type and annotated with entities from a domain-specific ontology for future retrieval and analysis. The problem is dynamic in nature, characterised by new or changing groups and targets, and the emergence of new types of events. A number of automated event extraction systems exist that detect thousands of events on a daily basis. The resulting datasets, however, lack sufficient coverage of specific domains and suffer from too many duplicated and irrelevant events. Therefore expert event coding and validation is required to ensure sufficient quality and coverage of a conflict. We propose a new framework for semi-automatic rule-based event extraction and coding based on the use of deep syntactic-semantic patterns created from normal user input to an event annotation system. The method is implemented in a prototype Event Coding Assistant that processes news articles to suggest relevant events to a user, who can correct or accept the suggestions. Over time, as a knowledge base of patterns is built, event extraction accuracy improves and, as shown by analysis of system logs, the user's workload decreases.
Pacific Rim International Conference on Artificial Intelligence | 2018
Gavin Katz; Bradford Heap; Wayne Wobcke; Michael Bain; Sandeepa Kannangara
To attract and retain a new demographic of viewers, television producers have aimed to engage audiences through the “second screen” via social media. This paper concerns the use of Twitter during live television broadcasts of a panel show, the Australian Broadcasting Corporation’s political and current affairs show Q&A, where the TV audience can post tweets, some of which appear in a ticker tape on the TV screen and are broadcast to all viewers. We present a method for aggregating audience opinions expressed via Twitter that could be used for live feedback after each segment of the show. We investigate segment classification models in the incremental setting, and use a combination of domain-specific and general training data for sentiment analysis. The aggregated analysis can be used to determine polarising and volatile panellists, controversial topics and bias in the selection of tweets for on-screen display.
Multiagent System Technologies | 2013
Bradford Heap
My thesis is concerned with task allocation in multi-robot teams operating in dynamic environments. The key contribution of this work is the development of a distributed multi-robot task allocation auction that allocates clusters of tasks to robots over multiple bidding rounds. Empirical evaluation has shown this auction routine performs well in handling online task insertion and task reallocation upon robot failure.
National Conference on Artificial Intelligence | 2012
Bradford Heap; Maurice Pagnucco