
Publication


Featured research published by Julie Porteous.


Journal of Artificial Intelligence Research | 2004

Ordered landmarks in planning

Jörg Hoffmann; Julie Porteous; Laura Sebastia

Many known planning tasks have inherent constraints concerning the best order in which to achieve the goals. A number of research efforts have been made to detect such constraints and to use them for guiding search, in the hope of speeding up the planning process. We go beyond the previous approaches by considering ordering constraints not only over the (top-level) goals, but also over the sub-goals that will necessarily arise during planning. Landmarks are facts that must be true at some point in every valid solution plan. We extend Koehler and Hoffmann's definition of reasonable orders between top-level goals to the more general case of landmarks. We show how landmarks can be found, how their reasonable orders can be approximated, and how this information can be used to decompose a given planning task into several smaller sub-tasks. Our methodology is completely domain- and planner-independent. The implementation demonstrates that the approach can yield significant runtime performance improvements when used as a control loop around state-of-the-art sub-optimal planning systems, as exemplified by FF and LPG.
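
The core idea, decomposing a planning task along an approximated landmark ordering and handing each sub-task to a base planner, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: landmark extraction and order approximation are assumed to have happened already, and solve and apply_plan are hypothetical stand-ins for a base planner and plan execution.

# Minimal sketch of landmark-based task decomposition (illustrative only).
def decompose_and_plan(initial_state, goal, ordered_landmark_sets, solve, apply_plan):
    """Plan towards each landmark set in turn, then towards the final goal.

    ordered_landmark_sets: sets of facts, ordered consistently with the
        approximated reasonable orders (earliest first).
    solve: any base planner mapping (state, goal_facts) -> plan or None.
    apply_plan: hypothetical state progression along a plan.
    """
    state, plan = initial_state, []
    for landmark_set in ordered_landmark_sets:
        sub_plan = solve(state, landmark_set)   # smaller, easier sub-task
        if sub_plan is None:
            return None                         # caller may fall back to the full task
        plan += sub_plan
        state = apply_plan(state, sub_plan)
    final = solve(state, goal)
    return plan + final if final is not None else None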


ACM Transactions on Intelligent Systems and Technology | 2010

Applying planning to interactive storytelling: Narrative control using state constraints

Julie Porteous; Marc Cavazza; Fred Charles

We have seen ten years of the application of AI planning to the problem of narrative generation in Interactive Storytelling (IS). In that time planning has emerged as the dominant technology and has featured in a number of prototype systems. Nevertheless, key issues remain, such as how best to control the shape of the narrative that is generated (e.g., by using narrative control knowledge, i.e., knowledge about narrative features that enhance user experience) and how best to support real-time interactive performance in order to scale up to more realistically sized systems. Recent progress in planning technology has opened up new avenues for IS, and we have developed a novel approach to narrative generation that builds on this. Our approach is to specify narrative control knowledge for a given story world using state trajectory constraints, and then to treat these state constraints as landmarks and use them to decompose narrative generation, addressing scalability and the goal of real-time performance in larger story domains. This approach to narrative generation is fully implemented in an interactive narrative based on the “Merchant of Venice.” The contribution of the work lies both in our novel use of state constraints to specify narrative control knowledge for interactive storytelling and in our development of an approach to narrative generation that exploits such constraints. In the article we show how the use of state constraints can provide a unified perspective on important problems faced in IS.
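
The constraint types involved resemble PDDL3-style state trajectory constraints. The sketch below shows how such constraints can be checked against a narrative trajectory (a sequence of world states); the facts are hypothetical, and the paper goes further by treating the constraints as landmarks that drive decomposition, rather than only validating finished narratives.

# Hypothetical sketch: checking a narrative trajectory against
# "sometime" and "sometime-before" state trajectory constraints.
def holds_sometime(trajectory, fact):
    return any(fact in state for state in trajectory)

def holds_sometime_before(trajectory, fact_a, fact_b):
    # fact_b must hold strictly before the first state in which fact_a holds
    for i, state in enumerate(trajectory):
        if fact_a in state:
            return any(fact_b in earlier for earlier in trajectory[:i])
    return True  # vacuously satisfied if fact_a never holds

trajectory = [{"at(antonio, venice)"},
              {"at(antonio, venice)", "bond_signed"},
              {"bond_signed", "trial_held"}]
assert holds_sometime(trajectory, "trial_held")
assert holds_sometime_before(trajectory, "trial_held", "bond_signed")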


Software: Practice and Experience | 1995

A requirements capture method and its use in an air traffic control application

Thomas Leo McCluskey; Julie Porteous; Y. Naik; C. N. Taylor; Sara Jones

This paper describes our experience in capturing, using a formal specification language, a model of the knowledge‐intensive domain of oceanic air traffic control. This model is intended to form part of the requirements specification for a decision support system for air traffic controllers. We give an overview of the methods we used in analysing the scope of the domain, choosing an appropriate formalism, developing a domain model, and validating the model in various ways. Central to the method was the development of a formal requirements engineering environment which provided automated tools for model validation and maintenance.


ACM Multimedia | 2011

Generating story variants with constrained video recombination

Alberto Piacenza; Fabrizio Guerrini; Nicola Adami; Riccardo Leonardi; Julie Porteous; Jonathan Teutenberg; Marc Cavazza

We present a novel approach to the automatic generation of filmic variants within an implemented Video-Based Storytelling (VBS) system that integrates video segmentation with stochastically controlled re-ordering techniques and narrative generation via AI planning. We introduce flexibility into the video recombination process by sequencing video shots in a way that maintains local video consistency, combined with the exploitation of shot polysemy to enable shot reuse in a range of valid semantic contexts. Evaluations of output narratives built from a shared set of video data show consistency in terms of local video sequences and global causality, with no loss of generative power.
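
A rough sketch of the recombination idea follows, under an assumed data model: each shot carries a set of semantic labels (its polysemy), a candidate successor is accepted only if a local-consistency test passes, and within those constraints the choice is stochastic. All names here are illustrative, not the system's actual interfaces.

import random

def recombine(shots, storyline, consistent, rng=random.Random(0)):
    """shots: list of (shot_id, labels); storyline: required labels in order;
    consistent(prev_id, next_id) -> bool stands in for local video consistency."""
    sequence = []
    for label in storyline:
        candidates = [shot_id for shot_id, labels in shots
                      if label in labels
                      and (not sequence or consistent(sequence[-1], shot_id))]
        if not candidates:
            return None                          # no consistent realisation of this step
        sequence.append(rng.choice(candidates))  # stochastically controlled re-ordering
    return sequence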


ACM Multimedia | 2010

Interactive storytelling via video content recombination

Julie Porteous; Sergio Benini; Luca Canini; Fred Charles; Marc Cavazza; Riccardo Leonardi

In this paper we present a prototype of video-based storytelling that can generate multiple story variants from a baseline video. The video content for the system is produced by an adaptation of state-of-the-art video summarisation techniques that decompose the video into a number of Logical Story Units (LSUs), sequences of contiguous and interconnected shots sharing a common semantic thread. Alternative storylines are generated using AI planning techniques and are used to direct the combination of elementary LSUs for output. We report early results from experiments with the prototype in which the reordering of video shots on the basis of their high-level semantics produces trailers giving the illusion of different storylines.
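
The pipeline can be pictured as indexing LSUs by their semantic thread and letting a generated storyline select among them. The structures below are assumptions for illustration, not the system's actual representation.

from collections import defaultdict

def index_lsus(lsus):
    # hypothetical LSU record: {"thread": "rivalry", "shots": [12, 13, 14]}
    by_thread = defaultdict(list)
    for lsu in lsus:
        by_thread[lsu["thread"]].append(lsu)
    return by_thread

def realise_storyline(storyline, by_thread):
    video = []
    for thread in storyline:                 # storyline: semantic threads in plan order
        options = by_thread.get(thread)
        if not options:
            return None                      # this story variant cannot be visualised
        video.extend(options[0]["shots"])    # simplest policy: first matching LSU
    return video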


Augmented Human International Conference | 2014

Towards emotional regulation through neurofeedback

Marc Cavazza; Fred Charles; Gabor Aranyi; Julie Porteous; Stephen W. Gilroy; Gal Raz; Nimrod Jakob Keynan; Avihay Cohen; Gilan Jackont; Yael Jacob; Eyal Soreq; Ilana Klovatch; Talma Hendler

This paper discusses the potential of Brain-Computer Interfaces based on neurofeedback methods to support emotional control and pursue the goal of emotional control as a mechanism for human augmentation in specific contexts. We illustrate this discussion through two proof-of-concept, fully-implemented experiments: one controlling disposition towards virtual characters using pre-frontal alpha asymmetry, and the other aimed at controlling arousal through activity of the amygdala. In the first instance, these systems are intended to explore augmentation technologies that would be incorporated into various media-based systems rather than permanently affect user behaviour.
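
For the first experiment, one common formulation of prefrontal alpha asymmetry (an assumption here; the paper's exact signal pipeline is not given in the abstract) is the difference of log alpha-band power between right and left frontal electrodes, e.g. F4 and F3:

import numpy as np
from scipy.signal import welch

def alpha_asymmetry(f3_signal, f4_signal, fs, band=(8.0, 12.0)):
    """ln(alpha power at F4) - ln(alpha power at F3); positive values are
    commonly read as relatively greater left-frontal activity."""
    def band_power(signal):
        freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), int(fs) * 2))
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return np.trapz(psd[in_band], freqs[in_band])
    return np.log(band_power(f4_signal)) - np.log(band_power(f3_signal))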


ACM Multimedia | 2011

Changing video arrangement for constructing alternative stories

Alberto Piacenza; Fabrizio Guerrini; Nicola Adami; Riccardo Leonardi; Jonathan Teutenberg; Julie Porteous; Marc Cavazza

Automatic generation of filmic variants currently faces a number of key technical issues, so productions usually resort to shooting multiple versions of alternative scenes. Recent advances in video analysis, however, have made automatic generation feasible, provided semantic consistency is preserved. This demo presents a Video-Based Storytelling (VBS) system that integrates video processing with narrative generation by means of a shared semantic description. Novel filmic variants are constructed through a flexible video recombination process that takes advantage of the polysemy of baseline video segments. The short output clips shown in this demo demonstrate that the generated narratives are semantically consistent while retaining generative power.


Intelligent User Interfaces | 2012

PINTER: interactive storytelling with physiological input

Stephen W. Gilroy; Julie Porteous; Fred Charles; Marc Cavazza

The dominant interaction paradigm in Interactive Storytelling (IS) systems so far has been active interventions by the user by means of a variety of modalities. PINTER is an IS system that uses physiological inputs - surface electromyography (EMG) and galvanic skin response (GSR) [1] - as a form of passive interaction, opening up the possibility of the use of traditional filmic techniques [2, 3] to implement IS without requiring immersion-breaking interactive responses. The goal of this demonstration is to illustrate the ways in which passive interaction combined with filmic visualisation, dialogue and music, and a plan-based narrative generation approach can form a new basis for an adaptive interactive narrative.
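
As a purely illustrative sketch of passive physiological input (the names and thresholds below are invented, not PINTER's), a GSR stream might be smoothed, normalised against a resting baseline, and exposed as a coarse arousal level for the narrative generator to consult:

def arousal_level(gsr_samples, resting_baseline, alpha=0.1):
    smoothed = None
    for sample in gsr_samples:    # exponential moving average over raw GSR
        smoothed = sample if smoothed is None else alpha * sample + (1 - alpha) * smoothed
    if smoothed is None or not resting_baseline:
        return "neutral"
    ratio = smoothed / resting_baseline
    if ratio > 1.2:
        return "high"             # the narrative could, e.g., release dramatic tension
    if ratio < 0.9:
        return "low"              # the narrative could, e.g., introduce a dramatic event
    return "neutral"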


Artificial Intelligence in Medicine in Europe | 2013

Instantiating Interactive Narratives from Patient Education Documents

Fred Charles; Marc Cavazza; Cameron G. Smith; Gersende Georg; Julie Porteous

In this paper, we present a proof-of-concept demonstrator of an Interactive Narrative for patient education. Traditionally, patient education documents are produced by health agencies, yet these documents can be challenging to understand for a large fraction of the population. In contrast, an Interactive Narrative supports a game-like exploration of the situations described in patient education documents, which should facilitate understanding, whilst also familiarising patients with real-world situations. A specific feature of our prototype is that its plan-based narrative representations can be instantiated in part from the original patient education document, using NLP techniques. In the paper we introduce our interactive narrative techniques and follow this with a discussion of specific issues in text interpretation related to the occurrence of clinical actions. We then suggest mechanisms to generate direct or indirect representations of such actions in the virtual world as part of Interactive Narrative generation.
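
The abstract does not detail the NLP techniques used, so the sketch below is only a hypothetical illustration of the instantiation step: matching clinical action phrasings in a document sentence and mapping them to planning operator names.

import re

ACTION_PATTERNS = {                      # hypothetical phrase -> operator table
    r"\btakes?\b.*\bmedication\b": "take-medication",
    r"\bmeasures?\b.*\bblood pressure\b": "measure-blood-pressure",
    r"\bconsults?\b.*\bdoctor\b": "consult-doctor",
}

def extract_actions(sentence):
    """Return the planning operators whose patterns occur in the sentence."""
    return [operator for pattern, operator in ACTION_PATTERNS.items()
            if re.search(pattern, sentence, flags=re.IGNORECASE)]

print(extract_actions("Patients should take their medication every morning."))
# -> ['take-medication']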


Virtual Reality International Conference | 2014

Integrating virtual agents in BCI neurofeedback systems

Marc Cavazza; Fred Charles; Stephen W. Gilroy; Julie Porteous; Gabor Aranyi; Gal Raz; Nimrod Jakob Keynan; Avihay Cohen; Gilan Jackont; Yael Jacob; Eyal Soreq; Ilana Klovatch; Talma Hendler

The recent extension of Brain-Computer Interfaces (BCI) to Virtual Worlds has resulted in a growing interest in realistic visual feedback. In this paper, we investigate the potential role of Virtual Agents in neurofeedback systems, which constitute an important paradigm for BCI. We discuss the potential impact of virtual agents on some important determinants of neurofeedback in the context of affective BCI. Throughout the paper, we illustrate our presentation with two fully implemented neurofeedback prototypes featuring virtual agents: the first is an interactive narrative in which the user empathises with the character through neurofeedback; the second recreates a natural environment in which crowd behaviour becomes a metaphor for arousal and the user engages in emotional regulation.

Collaboration


Dive into Julie Porteous's collaborations.

Top Co-Authors

Eyal Soreq
Tel Aviv Sourasky Medical Center

Gal Raz
Tel Aviv Sourasky Medical Center

Ilana Klovatch
Tel Aviv Sourasky Medical Center

Talma Hendler
Tel Aviv Sourasky Medical Center