
Publication


Featured research published by Gregor Miller.


International Conference on Distributed Smart Cameras | 2008

Hive: A distributed system for vision processing

Amir Afrah; Gregor Miller; Donovan H. Parks; Matthias Finke; Sidney S. Fels

We have built a novel vision processing system architecture called Hive. Hive fills a gap in vision middleware by providing mechanisms for simple setup and configuration of distributed vision computation. Hive facilitates communication between independent cross-platform modules via an extensible protocol, allowing these distributed modules to form a vision processing pipeline. A plug-in interface allows general software to be represented as Hive modules: e.g. drivers for hardware devices such as cameras, or implementations of particular vision algorithms. The modules are set up as a peer-to-peer network which allows for automated data transfer, callbacks and synchronization. We describe the architecture, communication protocol, plug-in interface and control system for the modules. A distributed face tracking system demonstrates the simplicity and flexibility of creating complex distributed vision applications using Hive.
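
As a rough illustration of the pipeline idea described above, the following Python sketch wires source, processing and sink modules into a face-tracking chain. The class names and message format are invented for illustration; they are not Hive's actual API or protocol.

```python
# A minimal sketch of a Hive-style module pipeline (hypothetical API).
from abc import ABC, abstractmethod

class Module(ABC):
    """A processing node; modules connect into a vision pipeline."""
    def __init__(self):
        self.subscribers = []

    def connect(self, downstream):
        # Peer-to-peer link: output of this module feeds the next one.
        self.subscribers.append(downstream)

    def emit(self, frame):
        for sub in self.subscribers:
            sub.receive(frame)

    @abstractmethod
    def receive(self, frame): ...

class Camera(Module):
    """Stands in for a hardware-driver module (e.g. a camera)."""
    def receive(self, frame):  # source module: ignores input
        pass
    def capture(self):
        self.emit({"pixels": [0] * 16})  # dummy frame

class FaceTracker(Module):
    """Stands in for a vision-algorithm module."""
    def receive(self, frame):
        frame["faces"] = [(4, 4, 8, 8)]  # pretend detection
        self.emit(frame)

class Display(Module):
    def receive(self, frame):
        print("faces:", frame["faces"])

# Assemble the distributed face-tracking pipeline from the abstract.
cam, tracker, display = Camera(), FaceTracker(), Display()
cam.connect(tracker)
tracker.connect(display)
cam.capture()  # -> faces: [(4, 4, 8, 8)]
```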


Canadian Conference on Computer and Robot Vision | 2011

A Conceptual Structure for Computer Vision

Gregor Miller; Sidney S. Fels; Steve Oldridge

This position paper makes several novel conceptual contributions to the computer vision literature: our goal is to define the scope of computer vision analysis and discuss a new categorisation of the computer vision problem. We first provide a novel decomposition of computer vision into base components which we term the axioms of vision. These are used to define researcher-level and developer-level access to vision algorithms in a way which does not require expert knowledge of computer vision. We discuss a new line of thought for computer vision by basing analyses on descriptions of the problem instead of in terms of algorithms; from this an abstraction can be developed to provide a layer above algorithmic details. This is extended to the idea of a formal description language which may be automatically interpreted, thus allowing those unfamiliar with computer vision techniques to utilise sophisticated methods.
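
To make the description-over-algorithm idea concrete, here is a toy Python sketch in which a vision problem is stated declaratively and an interpreter selects a matching method. The dictionary schema and algorithm entries are hypothetical, not the paper's formal language.

```python
# A toy illustration of describing a vision problem declaratively and
# letting an interpreter pick a method (all values are invented).
PROBLEM = {
    "task": "correspondence",   # one of the hypothesised vision "axioms"
    "scene": "static",
    "baseline": "wide",
}

ALGORITHMS = [
    {"name": "block_matching", "scene": "static", "baseline": "narrow"},
    {"name": "feature_matching", "scene": "static", "baseline": "wide"},
]

def interpret(problem):
    """Map a problem description to an algorithm, hiding vision expertise."""
    for algo in ALGORITHMS:
        if all(problem.get(k) == v for k, v in algo.items() if k != "name"):
            return algo["name"]
    raise LookupError("no algorithm covers this description")

print(interpret(PROBLEM))  # -> feature_matching
```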


International Conference on Computer Graphics and Interactive Techniques | 2014

Spheree: a 3D perspective-corrected interactive spherical scalable display

Fátima Ferreira; Marcio Cabral; Olavo Belloc; Gregor Miller; Celso Setsuo Kurashima; R. de Deus Lopes; Ian Stavness; Junia Coutinho Anacleto; Marcelo Knörich Zuffo; Sidney S. Fels

We constructed a personal, spherical, multi-projector perspective-corrected rear-projected display called Spheree. Spheree uses multiple calibrated pico-projectors inside a spherical display with content rendered from a user-centric viewpoint. Spheree uses optical tracking for head-coupled rendering, providing parallax-based 3D depth cues. Spheree is compact, supporting direct interaction techniques. For example, 3D models can be modified via 3D interactions on the sphere, providing a 3D sculpture experience.
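
Head-coupled rendering of the kind Spheree relies on amounts to placing the virtual camera at the tracked head position each frame, so parallax follows the viewer. A minimal sketch, assuming a simple look-at view matrix and ignoring projector calibration and blending:

```python
# Head-coupled (user-centric) rendering sketch: the virtual camera sits at
# the tracked head position. Vector math only; display specifics omitted.
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed view matrix from the viewer's eye position."""
    eye, target, up = map(np.asarray, (eye, target, up))
    f = target - eye; f = f / np.linalg.norm(f)      # forward
    s = np.cross(f, up); s = s / np.linalg.norm(s)   # right
    u = np.cross(s, f)                               # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye
    return view

sphere_center = (0.0, 0.0, 0.0)
head_position = (0.2, 0.1, 1.5)   # as reported by an optical tracker
print(look_at(head_position, sphere_center))
```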


International Conference on Entertainment Computing | 2009

MiniDiver: A Novel Mobile Media Playback Interface for Rich Video Content on an iPhone™

Gregor Miller; Sidney S. Fels; Matthias Finke; Will Motz; Walker Eagleston; Chris Eagleston

We describe our new mobile media content browser called MiniDiver. MiniDiving treats media browsing as a personal experience that is viewed, personalized, saved, shared and annotated. When placed on a mobile platform such as the iPhone™, consideration of the particular interface elements leads to new ways to experience media content. The MiniDiver interface currently supports multi-camera selection, video hyperlinks, history mechanisms, and semantic and episodic video search. We compare the performance of the MiniDiver on different media streams to illustrate its feasibility.


IEEE Workshop on Person-Oriented Vision | 2011

User oriented language model for face detection

Daesik Jang; Gregor Miller; Sid Fels; Steve Oldridge

This paper provides a novel approach for a user-oriented language model for face detection. Even though there are many open source and commercial libraries for face detection, they remain hard to use because they require specific knowledge of algorithmic details. This paper proposes a high-level language model for face detection with which users can develop systems easily, even without specific knowledge of face detection theories and algorithms. We first identify the conditions that categorize the large problem space of face detection. These conditions are then represented as expressions in a language model so that developers can use them to describe various problems. Once the conditions are expressed, the associated interpreter finds and organizes the best algorithms to solve the described problem under those conditions. We present a proof-of-concept implementation, and test and analyze example problems to demonstrate its ease of use and usability.
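
As a toy illustration of condition-based problem description, the sketch below lets a user state conditions fluently while an interpreter picks a detector. The condition set and detector names are invented, not the paper's actual language.

```python
# A toy sketch of condition-based problem description for face detection
# (hypothetical syntax and detector names).
class FaceDetectionProblem:
    def __init__(self):
        self.conditions = {}
    def pose(self, value):          # e.g. "frontal", "profile"
        self.conditions["pose"] = value; return self
    def lighting(self, value):      # e.g. "indoor", "outdoor"
        self.conditions["lighting"] = value; return self

def interpret(problem):
    """Choose a detector from user conditions, not algorithm names."""
    c = problem.conditions
    if c.get("pose") == "frontal":
        return "haar_cascade"       # fast, frontal-only (invented mapping)
    return "rotation_invariant_detector"

problem = FaceDetectionProblem().pose("frontal").lighting("indoor")
print(interpret(problem))  # -> haar_cascade
```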


Human Factors in Computing Systems | 2011

MediaDiver: viewing and annotating multi-view video

Gregor Miller; Sidney S. Fels; Abir Al Hajri; Michael Ilich; Zoltan Foley-Fisher; Manuel Fernandez; Daesik Jang

We present our novel rich media interface, MediaDiver, demonstrating new interaction techniques for viewing and annotating multi-view video. The demonstration allows attendees to experience novel moving-target selection methods (called Hold and Chase), new multi-view selection techniques, automated quality-of-view analysis to switch viewpoints to follow targets, integrated annotation methods for viewing or authoring meta-content, and advanced context-sensitive transport and timeline functions. As users have become increasingly sophisticated when managing navigation and viewing of hyper-documents, they transfer their expectations to new media. Our demonstration shows the technology required to meet these expectations for video: users can directly click on objects in the video to link to more information or other video, easily change camera views, and mark up the video with their own content. The applications of this technology stretch from home video management to broadcast-quality media production, consumed on both desktop and mobile platforms.
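
The automated quality-of-view switching could, in spirit, work like the following sketch, which scores each camera's view of the tracked target and switches to the best one. The scoring terms and values are invented for illustration and are not the paper's actual analysis.

```python
# A minimal sketch of quality-of-view analysis: pick the camera whose
# view of the tracked target scores highest (scores are invented).
views = {
    # camera -> (target visible?, target size in frame, centredness 0..1)
    "cam1": (True, 0.30, 0.9),
    "cam2": (True, 0.55, 0.4),
    "cam3": (False, 0.0, 0.0),
}

def quality(visible, size, centred):
    return (size + centred) if visible else -1.0

best = max(views, key=lambda c: quality(*views[c]))
print("switch to", best)  # -> cam1
```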


International Conference on Human-Computer Interaction | 2013

Video Navigation with a Personal Viewing History

Abir Al-Hajri; Gregor Miller; Sidney S. Fels; Matthew Fong

We describe a new video interface based on a recorded personal navigation history, which provides simple mechanisms to quickly find and watch previously viewed intervals, highlight segments of video the user found interesting, and support other video tasks such as crowd-sourced video popularity measures and consumer-level video editing. Our novel history interface lets users find previously viewed intervals more quickly and provides a more enjoyable video navigation experience, as demonstrated by our user study. The study tasked participants with viewing a pre-defined history of a subset of the video and answering questions about the video content: on average, 83.9% of questions were answered correctly using the personal navigation history, versus 65.5% using the state-of-the-art method, and participants took significantly less time to answer using our method. The full video navigation interface received an 82% average QUIS rating. The results show that our history interface can be an effective part of video players and browsers.
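
A personal navigation history of this kind can be modelled as a list of watched intervals; the sketch below derives per-second view counts to surface frequently revisited segments. This is one plausible representation, not the paper's implementation.

```python
# A sketch of a personal viewing history as watched intervals, with a
# per-second view count used to surface frequently revisited parts.
from collections import Counter

history = [(10, 25), (40, 60), (12, 20)]  # (start_s, end_s) intervals

def view_counts(intervals):
    counts = Counter()
    for start, end in intervals:
        counts.update(range(start, end))  # one count per watched second
    return counts

counts = view_counts(history)
second, times = counts.most_common(1)[0]
print("second", second, "watched", times, "times")  # -> second 12 watched 2 times
```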


Canadian Conference on Computer and Robot Vision | 2011

Mapping the Problem Space of Image Registration

Steve Oldridge; Gregor Miller; Sidney S. Fels

In this paper we explore a conceptual mapping of the image registration problem into an N-Dimensional problem space based on the properties of the images being registered, in contrast to traditional surveys of image registration which divide the field algorithmically. The five main dimensions of our proposed mapping are variations in: spatial alignment, intensity, focus, sensor type, and structure. Individual algorithms can be thought of as supporting a volume of solutions within the problem domain map, although they are typically designed to solve problems along a single dimension. Existing image registration papers and techniques are taxonomized within this mapping according to these major dimensions. The focus of this paper is threefold. First, an up-to-date survey of image registration techniques is provided, building from previous seminal surveys. Second, a novel taxonomy is presented that organizes the registration problem space based on the variation between the images being registered. Finally, a number of new research areas made possible under this novel taxonomy are examined, and a path is laid out for future research in the field.
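
The mapping can be pictured as follows: a registration problem is a point in the five-dimensional space, and an algorithm covers a volume of it. The Python sketch below uses invented tolerance values purely to illustrate the matching idea; it is not the paper's taxonomy data.

```python
# A toy encoding: a registration problem is a point in a 5-D space and
# each algorithm covers a volume of that space (values are invented).
DIMENSIONS = ("spatial", "intensity", "focus", "sensor", "structure")

problem = {"spatial": 0.8, "intensity": 0.1, "focus": 0.0,
           "sensor": 0.0, "structure": 0.2}

algorithms = {
    # algorithm -> max variation it tolerates along each dimension
    "feature_based": {"spatial": 1.0, "intensity": 0.5, "focus": 0.2,
                      "sensor": 0.0, "structure": 0.3},
    "mutual_information": {"spatial": 0.4, "intensity": 1.0, "focus": 0.3,
                           "sensor": 1.0, "structure": 0.1},
}

def covers(volume, prob):
    return all(volume[d] >= prob[d] for d in DIMENSIONS)

print([name for name, vol in algorithms.items() if covers(vol, problem)])
# -> ['feature_based']
```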


User Interface Software and Technology | 2010

Using temporal video annotation as a navigational aid for video browsing

Stefanie Müller; Gregor Miller; Sidney S. Fels

Video is a complex information space that requires advanced navigational aids for effective browsing. The increasing number of temporal video annotations offers new opportunities to provide video navigation according to a user's needs. We present a novel video browsing interface called TAV (Temporal Annotation Viewing) that provides the user with a visual overview of temporal video annotations. TAV enables the user to quickly determine the general content of a video, the location of scenes of interest, and the type of annotations that are displayed while watching the video. An ongoing user study will evaluate our novel approach.
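
An annotation overview of the kind TAV provides can be approximated by bucketing annotations into timeline bins so their density is visible at a glance. In the sketch below, the sample annotations and bin layout are invented for illustration.

```python
# A minimal sketch of a timeline overview: bucket temporal annotations
# into fixed bins along the video's duration.
annotations = [  # (start_s, end_s, label) — made-up sample data
    (5, 9, "goal"), (7, 15, "crowd"), (40, 44, "interview"),
]
video_length, n_bins = 60, 6
bin_size = video_length / n_bins

bins = [0] * n_bins
for start, end, _ in annotations:
    for b in range(n_bins):
        # count the annotation in every bin its interval overlaps
        if start < (b + 1) * bin_size and end > b * bin_size:
            bins[b] += 1

print(bins)  # -> [2, 1, 0, 0, 1, 0]
```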


International Symposium on Multimedia | 2009

Vision System Development through Separation of Management and Processing

Amir Afrah; Gregor Miller; Sidney S. Fels

We are addressing two aspects of vision-based system development that are not fully exploited in current frameworks: abstraction over low-level details and high-level module reusability. Through an evaluation of existing frameworks, we relate these shortcomings to the lack of systematic classification of sub-tasks in vision-based system development. In this paper we present our work-in-progress which addresses these two issues by classifying vision into decoupled sub-tasks, hence defining a clear scope for a vision-based system development framework and its sub-components. Firstly, we decompose the task of vision system development into data management and processing. We then proceed to further decompose data management into three components: data access, conversion and transportation. We present the Vision Utility (VU) framework which provides abstraction over the vision system data management and verify this approach through an example vision system.
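
The decomposition into access, conversion and transportation might look like the following sketch, where processing code composes the three components without touching low-level details. The interface names are illustrative, not VU's actual API.

```python
# A sketch of the management/processing split: data access, conversion
# and transport are separate components (hypothetical interfaces).
class DataAccess:
    def frames(self):
        yield {"format": "yuv", "pixels": [0] * 16}   # e.g. camera/file source

class DataConversion:
    def convert(self, frame, fmt):
        return {**frame, "format": fmt}               # e.g. YUV -> RGB

class DataTransport:
    def send(self, frame, destination):
        print("->", destination, frame["format"])     # e.g. IPC/network hop

# Processing code composes the three without low-level details.
access, conv, transport = DataAccess(), DataConversion(), DataTransport()
for frame in access.frames():
    transport.send(conv.convert(frame, "rgb"), "edge_detector")
```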

Collaboration


Dive into Gregor Miller's collaborations.

Top Co-Authors

Sidney S. Fels, University of British Columbia
Steve Oldridge, University of British Columbia
Matthew Fong, University of British Columbia
Ian Stavness, University of Saskatchewan
Abir Al-Hajri, University of British Columbia
Qian Zhou, University of British Columbia
Daesik Jang, Kunsan National University
Amir Afrah, University of British Columbia
Kai Wu, University of British Columbia