Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Alan P. Parkes is active.

Publication


Featured research published by Alan P. Parkes.


Multimedia Tools and Applications | 1997

The Application of Video Semantics and Theme Representation in Automated Video Editing

Frank Nack; Alan P. Parkes

This paper considers the automated generation of humorous video sequences from arbitrary video material. We present a simplified model of the editing process. We then outline our approach to narrativity and visual humour, discuss the problems of context and shot-order in video and consider influences on the editing process. We describe the role of themes and semantic fields in the generation of content oriented video scenes. We then present the architecture of AUTEUR, an experimental system that embodies mechanisms to interpret, manipulate and generate video. An example of a humorous video sequence generated by AUTEUR is described.


Archive | 2008

Finite State Transducers

Alan P. Parkes

In this chapter, we consider the types of computation that can be carried out by finite state transducers (FSTs), which are the finite state recognisers (FSRs) of Chapter 4, with output.
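As a rough illustration of the idea (not an excerpt from the chapter), the sketch below implements a tiny FST in Python: a transition table maps (state, input symbol) pairs to (next state, output symbol) pairs, and running the machine yields both an accept/reject decision and an output string. The states, alphabet, and translation are invented for the example.

# A minimal sketch of a finite state transducer: a finite state recogniser
# whose transitions also emit an output symbol on each move.
class FST:
    def __init__(self, start, accepting, transitions):
        # transitions maps (state, input_symbol) -> (next_state, output_symbol)
        self.start = start
        self.accepting = accepting
        self.transitions = transitions

    def transduce(self, inputs):
        """Run the FST over an input string, returning (accepted, output)."""
        state, output = self.start, []
        for symbol in inputs:
            if (state, symbol) not in self.transitions:
                return False, "".join(output)  # no applicable move: reject
            state, out = self.transitions[(state, symbol)]
            output.append(out)
        return state in self.accepting, "".join(output)

# Example machine: translate each 'a' to '0' and each 'b' to '1'.
fst = FST(start="q0", accepting={"q0"},
          transitions={("q0", "a"): ("q0", "0"),
                       ("q0", "b"): ("q0", "1")})
print(fst.transduce("abba"))  # (True, '0110')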


ITCom 2002: The Convergence of Information Technologies and Communications | 2002

Real-time multimedia tagging and content-based retrieval for CCTV surveillance systems

Alan J. Perrott; Adam T. Lindsay; Alan P. Parkes

As the number of installed surveillance cameras increases, and the cost of storing the compressed digital multimedia decreases, the CCTV industry is facing the prospect of large multimedia archives where it may be very difficult to locate specific content. To be able to get the full benefit of this wealth of multimedia data, we need to be able to automatically highlight events of interest to the operator in real-time. We also need to make it possible to quickly identify and retrieve content which meets particular criteria. We show how advances in the Internet and multimedia systems can be used to effectively analyze, tag, store, search and retrieve multimedia content in surveillance systems. IP cameras are utilized for multimedia compression and delivery over the Internet or intranet. The recorded multimedia is analyzed in real-time, and metadata descriptors are automatically generated to describe the multimedia content. The emerging ISO MPEG-7 standard is used to define application-specific multimedia descriptors and description schemes, and to enforce a standard Description Definition Language (DDL) for multimedia management. Finally, a graphical multimedia retrieval application is used to provide content-based searching, browsing, retrieval and playback over the Internet or intranet.
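As a rough illustration of the tagging step described above (not the authors' actual implementation), the sketch below builds a small XML metadata descriptor for a detected event that a retrieval application could later search. The element names loosely follow MPEG-7 naming but are simplified stand-ins, and the camera identifier, times, and event label are invented.

# A minimal sketch of real-time event tagging: turn a detected event into
# an XML descriptor for later content-based retrieval.
import xml.etree.ElementTree as ET

def describe_event(camera_id, start, duration, label):
    event = ET.Element("VideoSegment")
    ET.SubElement(event, "Camera").text = camera_id
    time = ET.SubElement(event, "MediaTime")
    ET.SubElement(time, "MediaTimePoint").text = start      # e.g. ISO 8601 timestamp
    ET.SubElement(time, "MediaDuration").text = duration    # e.g. ISO 8601 duration
    ET.SubElement(event, "EventLabel").text = label
    return ET.tostring(event, encoding="unicode")

# Tag a hypothetical loitering event picked up by one camera.
print(describe_event("cam-07", "2002-05-14T13:02:11", "PT35S", "loitering"))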


Information Processing and Management | 1989

The prototype Cloris system: describing, retrieving and discussing videodisc stills and sequences

Alan P. Parkes

Given the problem of the incorporation of videodisc technology into an Intelligent Computer-Assisted Instruction (ICAI) framework, this article presents an outline of the methodological basis for, and the architecture and operation of, a prototype Video-based ICAI (VICAI) system. The aim of the system is to allow the learner to control the learning environment by watching video material (stills and moving film), interrupting it, and entering into a discussion about the visual and conceptual aspects of the events in progress and the objects on view. The prototype is to be used for experimentation with learners to investigate the strategic and learner-modeling requirements of VICAI systems.


Applied Artificial Intelligence | 1997

Toward the automated editing of theme oriented video sequences

Frank Nack; Alan P. Parkes

This article considers the automated generation of humorous video sequences from arbitrary video material. We present a simplified model of the editing process. We then outline our approach to narrativity and visual humor, discuss the problems of context and shot order in video, and consider influences on the editing process. We describe the role of themes and semantic fields in the generation of content-oriented video scenes. We then present the architecture of AUTEUR, an experimental system that embodies mechanisms to interpret, manipulate, and generate video. An example of a humorous video sequence generated by AUTEUR is described.


Applied Artificial Intelligence | 1997

Film sequence generation strategies for automatic intelligent video editing

Sean Butler; Alan P. Parkes

In this article, we describe an approach to generic automatic intelligent video editing, which was used in the implementation of the LIVE system. We discuss cinema theory and related systems that recombine film fragments into sequences. We introduce LIVE and describe the architecture and implementation of the system. We discuss the generic film fragment generation rules implemented within the system, with examples of input and results. Finally, we discuss the future direction of our research.


Archive | 1992

Computer-Controlled Video for Intelligent Interactive Use: a Description Methodology

Alan P. Parkes

Regardless of any definition one might adopt for terms such as ‘multimedia’, a qualitative difference between the interfaces described in this book and traditional ones is their richness. However, richness implies complexity, which has to be controlled. Video is an example of such richness, and this chapter describes one approach to managing its complexity. Before one can control any phenomenon, one has to be able to refer to its components and to describe them systematically and consistently. Video-based intelligent tutoring systems that can cope flexibly with unanticipated teaching situations may need to use artificial intelligence techniques to enable them to draw inferences for themselves about the scope and relevance of pre-recorded video sequences. Parkes, working within the discipline of artificial intelligence and education, addresses this and related problems. His chapter introduces a description methodology applicable to both still and moving images and illustrates it with examples. Implications for the development of multimedia education systems in general are also noted.


Innovations in Education and Training International | 1987

Towards a Script‐Based Representation Language for Educational Films

Alan P. Parkes

This paper represents the foundation for work utilizing artificial intelligence (AI) techniques in the development of a representation language for educational films on videodisc to be used by an intelligent computer‐assisted instruction (ICAI) system. The language will facilitate a situation in which the ICAI system can discuss the content and access selected components of films in a way which responds to a learner's individual requirements. The paper discusses aspects of the syntax and semantics of film, and presents a scenario for the use of film by an ICAI system. An outline of the language is presented and, using examples from the domain of car driving, its application to a piece of film is discussed. An appendix contains a description of the notation used in the examples.


Signal Processing-image Communication | 1996

Filmic space-time diagrams for video structure representation

Sean Butler; Alan P. Parkes

In this paper, we propose a novel approach to computer representation for interactive film editing. We describe a prototype system developed at Lancaster which uses this approach, and show how such a system supports film editing. Our work involves the isolation of several principles of filmicity and the combination of these with an event structure and film grammar to produce a system which can intelligently edit a film. Filmic principles are rules which can be used when editing a film to produce a sequence which guarantees a smooth perception of the film in its entirety. The visual structure representation method discussed herein is a tool used in our research.


BCS HCI | 2005

User Interface Overloading: A Novel Approach for Handheld Device Text Input

James Allan Hudson; Alan Dix; Alan P. Parkes

Text input with a PDA is not as easy as it should be, especially when compared to a desktop set up with a standard keyboard. The abundance of attempted solutions to the text input problem for mobile devices provides evidence of the difficulties, and suggests the need for more imaginative approaches. We propose a novel gesture driven layer interaction model using animated transparent overlays, which integrates agreeably with common windowing models.

Collaboration


Dive into Alan P. Parkes's collaborations.
