
Publication


Featured research published by Sidney S. Fels.


Computer Vision and Pattern Recognition | 2007

A Linear Programming Approach for Multiple Object Tracking

Hao Jiang; Sidney S. Fels; James J. Little

We propose a linear programming relaxation scheme for the class of multiple object tracking problems where the inter-object interaction metric is convex and the intra-object term quantifying object state continuity may use any metric. The proposed scheme models object tracking as a multi-path searching problem. It explicitly models track interaction, such as object spatial layout consistency or mutual occlusion, and optimizes multiple object tracks simultaneously. The proposed scheme does not rely on track initialization and complex heuristics. It has much less average complexity than previous efficient exhaustive search methods such as extended dynamic programming and is found to be able to find the global optimum with high probability. We have successfully applied the proposed method to multiple object tracking in video streams.
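The core idea of casting tracking as path search over a detection graph can be illustrated with a toy single-track example (the paper's formulation handles multiple interacting tracks; this sketch, with made-up detection positions and a simple distance cost, only shows the shortest-path LP relaxation on a layered graph):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical detections (x-positions) per frame for one object moving right.
frames = [[0.0, 5.0], [1.1, 4.0], [2.0, 8.0]]

# Layered graph: source -> frame 0 -> frame 1 -> frame 2 -> sink.
nodes = ['s'] + [(t, i) for t, dets in enumerate(frames)
                 for i in range(len(dets))] + ['t']
idx = {n: k for k, n in enumerate(nodes)}

edges, costs = [], []
for i in range(len(frames[0])):
    edges.append(('s', (0, i))); costs.append(0.0)
for t in range(len(frames) - 1):
    for i, xi in enumerate(frames[t]):
        for j, xj in enumerate(frames[t + 1]):
            edges.append(((t, i), (t + 1, j)))
            costs.append(abs(xj - xi))   # state-continuity cost between frames
for i in range(len(frames[-1])):
    edges.append(((len(frames) - 1, i), 't')); costs.append(0.0)

# Flow conservation: one unit leaves the source and enters the sink.
A_eq = np.zeros((len(nodes), len(edges)))
for e, (u, v) in enumerate(edges):
    A_eq[idx[u], e] += 1.0   # edge leaves u
    A_eq[idx[v], e] -= 1.0   # edge enters v
b_eq = np.zeros(len(nodes))
b_eq[idx['s']] = 1.0
b_eq[idx['t']] = -1.0

# LP relaxation: 0 <= x <= 1 instead of x in {0, 1}. For a network-flow
# constraint matrix the LP optimum is integral, so the relaxation is exact.
res = linprog(costs, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * len(edges))
track = [edges[e][1] for e in np.flatnonzero(res.x > 0.5) if edges[e][1] != 't']
```

Here the recovered track selects detection 0 in every frame, the smooth trajectory, at total cost 2.0; the relaxation yields an integral solution because the flow-conservation matrix is totally unimodular.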


Advanced Video and Signal Based Surveillance | 2008

Evaluation of Background Subtraction Algorithms with Post-Processing

Donovan H. Parks; Sidney S. Fels

Processing a video stream to segment foreground objects from the background is a critical first step in many computer vision applications. Background subtraction (BGS) is a commonly used technique for achieving this segmentation. The popularity of BGS largely comes from its computational efficiency, which allows applications such as human-computer interaction, video surveillance, and traffic monitoring to meet their real-time goals. Numerous BGS algorithms and a number of post-processing techniques that aim to improve the results of these algorithms have been proposed. In this paper, we evaluate several popular, state-of-the-art BGS algorithms and examine how post-processing techniques affect their performance. Our experimental results demonstrate that post-processing techniques can significantly improve the foreground segmentation masks produced by a BGS algorithm. We provide recommendations for achieving robust foreground segmentation based on the lessons learned performing this comparative study.
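A minimal sketch of the pipeline the study evaluates, using a simple frame-difference BGS followed by morphological post-processing (the frames and noise positions here are synthetic stand-ins; the paper compares far more sophisticated BGS algorithms):

```python
import numpy as np
from scipy import ndimage

# Hypothetical grayscale frames: a static background and a frame containing
# a 6x6 foreground object plus three isolated sensor-noise pixels.
background = np.zeros((40, 40))
frame = background.copy()
frame[10:16, 10:16] = 1.0                 # true foreground object
frame[3, 3] = frame[30, 7] = frame[25, 25] = 1.0   # isolated noise

# Basic BGS: threshold the absolute frame/background difference.
raw_mask = np.abs(frame - background) > 0.5

# Post-processing: morphological opening removes isolated false positives,
# then closing fills small holes in the surviving foreground blobs.
clean = ndimage.binary_opening(raw_mask, structure=np.ones((3, 3)))
clean = ndimage.binary_closing(clean, structure=np.ones((3, 3)))
```

The raw mask contains 39 foreground pixels (36 object + 3 noise); after opening and closing, only the 36-pixel object survives, which is the kind of improvement in segmentation masks the paper quantifies.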


IEEE Transactions on Neural Networks | 1997

Glove-Talk II: a neural-network interface which maps gestures to parallel formant speech synthesizer controls

Sidney S. Fels; Geoffrey E. Hinton

Glove-TalkII is a system which translates hand gestures to speech through an adaptive interface. Hand gestures are mapped continuously to ten control parameters of a parallel formant speech synthesizer. The mapping allows the hand to act as an artificial vocal tract that produces speech in real time. This gives an unlimited vocabulary in addition to direct control of fundamental frequency and volume. Currently, the best version of Glove-TalkII uses several input devices (including a Cyberglove, a ContactGlove, a three-space tracker, and a foot pedal), a parallel formant speech synthesizer, and three neural networks. The gesture-to-speech task is divided into vowel and consonant production by using a gating network to weight the outputs of a vowel and a consonant neural network. The gating network and the consonant network are trained with examples from the user. The vowel network implements a fixed user-defined relationship between hand position and vowel sound and does not require any training examples from the user. Volume, fundamental frequency, and stop consonants are produced with a fixed mapping from the input devices. One subject has trained to speak intelligibly with Glove-TalkII. He speaks slowly but with far more natural sounding pitch variations than a text-to-speech synthesizer.
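The gating scheme described above is a mixture-of-experts arrangement: a gating network produces weights that blend the vowel and consonant networks' outputs. A toy sketch with random stand-in weights (the real system uses networks trained on user examples, and the true input is the multi-device gesture state):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-in weights; in Glove-TalkII these would be trained networks.
rng = np.random.default_rng(1)
W_vowel = rng.normal(size=(10, 4))       # vowel expert -> 10 synth controls
W_consonant = rng.normal(size=(10, 4))   # consonant expert -> 10 controls
W_gate = rng.normal(size=(2, 4))         # gating network -> 2 mixing weights

def gesture_to_controls(x):
    """Weight the two experts' outputs by the gate's softmax output."""
    g = softmax(W_gate @ x)              # [g_vowel, g_consonant], sums to 1
    y = g[0] * (W_vowel @ x) + g[1] * (W_consonant @ x)
    return g, y

g, controls = gesture_to_controls(np.array([0.2, -0.1, 0.5, 1.0]))
```

The gate output always sums to one, so the synthesizer controls vary smoothly as the hand moves between vowel-like and consonant-like gestures.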


Lecture Notes in Computer Science | 1998

C-MAP: Building a Context-Aware Mobile Assistant for Exhibition Tours

Yasuyuki Sumi; Tameyuki Etani; Sidney S. Fels; Nicolas Simonet; Kaoru Kobayashi; Kenji Mase

This paper presents the objectives and progress of the Context-aware Mobile Assistant Project (C-MAP). The C-MAP is an attempt to build a personal mobile assistant that provides visitors touring exhibitions with information based on their locations and individual interests. We have prototyped the first version of the mobile assistant and used an open house exhibition held by our research laboratory for a testbed. A personal guide agent with a life-like animated character on a mobile computer guides users using exhibition maps which are personalized depending on their physical and mental contexts. This paper also describes services for facilitating new encounters and information sharing among visitors and exhibitors who have shared interests during/after the exhibition tours.


New Interfaces for Musical Expression | 2003

Contexts of collaborative musical experiences

Tina Blaine; Sidney S. Fels

We explore a variety of design criteria applicable to the creation of collaborative interfaces for musical experience. The main factor common to the design of most collaborative interfaces for novices is that musical control is highly restricted, which makes it easy to learn and participate in the collective experience. Balancing this trade-off is a key concern for designers, as the restriction comes at the expense of providing an upward path to virtuosity with the interface. We attempt to identify design considerations exemplified by a sampling of recent collaborative devices primarily oriented toward novice interplay. It is our intention to provide a non-technical overview of design issues inherent in configuring multiplayer experiences, particularly for entry-level players.


Communications of the ACM | 1997

Reactive environments

Jeremy R. Cooperstock; Sidney S. Fels; William Buxton; Kenneth C. Smith

As technology becomes increasingly widespread, we are confronted with the burden of controlling a myriad of complex devices in our day-to-day activities. While many people today could hardly imagine living in electronics-free homes or working in offices without computers, few of us have truly mastered full control of our VCRs, microwave ovens, or office photocopiers. Rather than making our lives easier, as technology was intended to do, it has complicated our activities with lengthy instruction manuals and confusing user interfaces.

Designers have been trying to make the computer more “user-friendly” ever since its inception. The last two decades have brought us the notable advances of keyboard terminals, graphics displays, and pointing devices, as well as the graphical user interface, introduced in 1981 by the Xerox Star and popularized by the Apple Macintosh. Most recently, we have seen the emergence of pen-based and portable computers. However, despite this progress of interface improvements, very little has changed in terms of how we work with these machines. The basic rules of interaction are the same as they were in the days of the ENIAC: users must engage in an explicit, machine-oriented dialogue with the computer rather than interact with the computer as they do with other people.

In the last few years, computer scientists have begun talking about a new approach to human-computer interaction in which computing would not necessitate sitting in front of a screen and isolating ourselves from the world around us. Instead, in a computer-augmented environment, electronic systems could be merged into the physical world to provide computer functionality to everyday objects. This idea is exemplified by Ubiquitous Computing (UbiComp).


Symposium on Usable Privacy and Security | 2007

Towards understanding IT security professionals and their tools

David Botta; Rodrigo Werlinger; André Gagné; Konstantin Beznosov; Lee Iverson; Sidney S. Fels; Brian D. Fisher

We report preliminary results of our ongoing field study of IT professionals who are involved in security management. We interviewed a dozen practitioners from five organizations to understand their workplace and tools. We analyzed the interviews using a variation of Grounded Theory and predesigned themes. Our results suggest that the job of IT security management is distributed across multiple employees, often affiliated with different organizational units or groups within a unit and responsible for different aspects of it. The workplace of our participants can be characterized by their responsibilities, goals, tasks, and skills. Three skills stand out as significant in the IT security management workplace: inferential analysis, pattern recognition, and bricolage.


Organised Sound | 2002

Mapping transparency through metaphor: towards more expressive musical instruments

Sidney S. Fels; Ashley Gadd; Axel G. E. Mulder

We define a two-axis transparency framework that can be used as a predictor of the expressivity of a musical device. One axis is the player's transparency scale, while the other is the audience's transparency scale. Through consideration of both traditional instruments and new technology-driven interfaces, we explore the role that metaphor plays in developing expressive devices. Metaphor depends on a literature, which forms the basis for making transparent device mappings. We examine four examples of systems that use metaphor: Iamascope, Sound Sculpting, MetaMuse and Glove-TalkII; and discuss implications on transparency and expressivity. We believe this theory provides a framework for design and evaluation of new human–machine and human–human interactions, including musical instruments.


Journal of Biomechanics | 2008

A dynamic model of jaw and hyoid biomechanics during chewing

A.G. Hannam; Ian Stavness; John E. Lloyd; Sidney S. Fels

Our understanding of human jaw biomechanics has been enhanced by computational modelling, but comparatively few studies have addressed the dynamics of chewing. Consequently, ambiguities remain regarding predicted jaw gapes and forces on the mandibular condyles. Here, we used a new platform to simulate unilateral chewing. The model, based on a previous study, included curvilinear articular guidance, a mobile hyoid apparatus, and a compressible food bolus. Muscles were represented by Hill-type actuators with drive profiles tuned to produce target jaw and hyoid movements. The cycle duration was 732 ms. At maximum gape, the lower incisor-point was 20.1 mm down, 5.8 mm posterior, and 2.3 mm lateral to its initial, tooth-contact position. Its maximum laterodeviation to the working side during closing was 6.1 mm, at which time the bolus was struck. The hyoid's movement, completed by the end of jaw-opening, was 3.4 mm upward and 1.6 mm forward. The mandibular condyles moved asymmetrically. Their compressive loads were low during opening, slightly higher on the working side at bolus collapse, and highest bilaterally when the teeth contacted. The model's movements and the directions of its condylar forces were consistent with experimental observations, resolving apparent discrepancies in previous simulations. Its inclusion of hyoid dynamics is a step towards modelling mastication.
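A Hill-type actuator of the kind mentioned above combines an activation-scaled active force-length and force-velocity term with a passive elastic term. A minimal textbook-style sketch (parameter values and curve shapes are illustrative, not taken from this study):

```python
import numpy as np

F_MAX = 100.0   # maximum isometric force (N) -- illustrative value
L_OPT = 1.0     # optimal fiber length (normalized)
V_MAX = 10.0    # maximum shortening velocity (lengths/s)

def force_length(l):
    """Gaussian-shaped active force-length curve, peaking at L_OPT."""
    return np.exp(-((l - L_OPT) / 0.45) ** 2)

def force_velocity(v):
    """Hyperbolic Hill force-velocity relation (v > 0 = shortening)."""
    return (V_MAX - v) / (V_MAX + 3.0 * v) if v >= 0 else 1.5

def passive_force(l):
    """Passive elastic force, engaging beyond optimal length."""
    return 0.02 * (np.exp(5.0 * max(l - L_OPT, 0.0)) - 1.0)

def muscle_force(activation, l, v):
    """Total force: active (scaled by activation) plus passive."""
    return F_MAX * (activation * force_length(l) * force_velocity(v)
                    + passive_force(l))

# Fully activated, at optimal length, isometric: force equals F_MAX.
f = muscle_force(1.0, L_OPT, 0.0)
```

In a chewing simulation like the one described, time-varying activation profiles ("drive profiles") for each such actuator are tuned so the resulting forces reproduce the target jaw and hyoid trajectories.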


Archive | 2012

ArtiSynth: A Fast Interactive Biomechanical Modeling Toolkit Combining Multibody and Finite Element Simulation

John E. Lloyd; Ian Stavness; Sidney S. Fels

ArtiSynth (http://www.artisynth.org) is an open source, Java-based biomechanical simulation environment for modeling complex anatomical systems composed of both rigid and deformable structures. Models can be built from a rich set of components, including particles, rigid bodies, finite elements with both linear and nonlinear materials, point-to-point muscles, and various bilateral and unilateral constraints including contact. A state-of-the-art physics simulator provides forward simulation capabilities that combine multibody and finite element models. Inverse simulation capabilities allow the computation of the muscle activations needed to achieve prescribed target motions. ArtiSynth is highly interactive, with component parameters and state variables exposed as properties that can be interactively read and adjusted as the simulation proceeds. Streams of input and output data, used for controlling or observing the simulation, can be viewed, arranged, and edited on an interactive timeline display, and support is provided for the graphical editing of model structures.

Collaboration

Top co-authors of Sidney S. Fels:

Ian Stavness (University of Saskatchewan)
Gregor Miller (University of British Columbia)
Junia Coutinho Anacleto (Federal University of São Carlos)
John E. Lloyd (University of British Columbia)
Negar M. Harandi (University of British Columbia)
Roberto Calderon (University of British Columbia)
Rodger Lea (University of British Columbia)