Publications


Featured research published by Jason Schlessman.


Computer Vision and Pattern Recognition | 2006

Hardware/Software Co-Design of an FPGA-based Embedded Tracking System

Jason Schlessman; Cheng-Yao Chen; Wayne H. Wolf; I. Burak Özer; Kenji Fujino; Kazurou Itoh

This paper discusses a practical design experience pertaining to a tracking system employing optical flow. The system was previously extracted from an existing software implementation and modified for FPGA deployment. Details are provided regarding transference of the resulting high-level design to a usable form for FPGA fabrics. Furthermore, discussion is given for obstacles made manifest in embedded vision design and the methods employed for overcoming them. This is attempted with the intent of maintaining a consistent level of vision algorithm performance as well as meeting real-time requirements. The system discussed differs from previous embedded systems employing optical flow in that it consists strictly of fully disclosed nonproprietary transferable components while providing performance measures for power consumption, latency, and area. The system was synthesized onto a Xilinx Virtex-II Pro XC2VP30 FPGA utilizing less than 25% of system resources, performing with a maximum operating frequency of 67 MHz without pipelining, and consuming 497 mW of power.


International Conference on Multimedia and Expo | 2006

SCCS: A Scalable Clustered Camera System for Multiple Object Tracking Communicating Via Message Passing Interface

Senem Velipasalar; Jason Schlessman; Cheng-Yao Chen; Wayne H. Wolf; Jaswinder Pal Singh

We introduce the scalable clustered camera system, a peer-to-peer multi-camera system for multi-object tracking, where different CPUs are used to process inputs from distinct cameras. Instead of transferring control of tracking jobs from one camera to another, each camera in our system performs its own tracking and keeps its own tracks for each target object, thus providing fault tolerance. A fast and robust tracking method is proposed to perform tracking on each camera view, while maintaining consistent labeling. In addition, we introduce a new communication protocol, where the decisions about when and with whom to communicate are made such that the frequency and size of transmitted messages are minimized. This protocol incorporates variable synchronization capabilities, so as to allow flexibility with accuracy tradeoffs. We discuss our implementation, consisting of a parallel computing cluster, with communication between the cameras performed by MPI. We present experimental results which demonstrate the success of the proposed peer-to-peer multi-camera tracking system, with an accuracy of 95% for a high frequency of synchronization, as well as a worst case of 15 frames of latency in recovering correct labels at low synchronization frequencies.


EURASIP Journal on Image and Video Processing | 2008

A Scalable Clustered Camera System for Multiple Object Tracking

Senem Velipasalar; Jason Schlessman; Cheng-Yao Chen; Wayne H. Wolf; Jaswinder Pal Singh

Reliable and efficient tracking of objects by multiple cameras is an important and challenging problem, which finds wide-ranging application areas. Most existing systems assume that data from multiple cameras is processed on a single processing unit or by a centralized server. However, these approaches are neither scalable nor fault tolerant. We propose multicamera algorithms that operate on peer-to-peer computing systems. Peer-to-peer vision systems require codesign of image processing and distributed computing algorithms as well as sophisticated communication protocols, which should be carefully designed and verified to avoid deadlocks and other problems. This paper introduces the scalable clustered camera system, which is a peer-to-peer multicamera system for multiple object tracking. Instead of transferring control of tracking jobs from one camera to another, each camera in the presented system performs its own tracking, keeping its own trajectories for each target object, which provides fault tolerance. A fast and robust tracking algorithm is proposed to perform tracking on each camera view, while maintaining consistent labeling. In addition, a novel communication protocol is introduced, which can handle the problems caused by communication delays and different processor loads and speeds, and incorporates variable synchronization capabilities, so as to allow flexibility with accuracy tradeoffs. This protocol was exhaustively verified by using the SPIN verification tool. The success of the proposed system is demonstrated on different scenarios captured by multiple cameras placed in different setups. Also, simulation and verification results for the protocol are presented.
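The idea that each camera keeps its own tracks and reconciles labels only at synchronization points can be sketched in a few lines of Python. The relabeling rule here (adopt the lexicographically smallest label seen for a nearby target) and all names are hypothetical stand-ins, not the paper's protocol:

```python
def reconcile(local, remote, tol=5.0):
    """Relabel local tracks to agree with a peer camera: when the peer
    tracks a target at a nearby position under a smaller label, adopt
    that label.  Applying the same deterministic rule on every peer
    drives the labels toward consistency.  Positions are 1-D for brevity."""
    out = {}
    for lab, pos in local.items():
        new = lab
        for rlab, rpos in remote.items():
            if abs(pos - rpos) <= tol and rlab < new:
                new = rlab
        out[new] = pos
    return out

cam_a = {"A1": 10.0, "A2": 40.0}          # camera A's own tracks
cam_b = {"B1": 11.0, "A0": 39.0}          # peer's view of the same targets
print(reconcile(cam_a, cam_b))            # A2 is relabeled to A0
```

In the actual system each camera runs on its own CPU and the `remote` table arrives as an MPI message; how often such messages are exchanged is exactly the synchronization-frequency/accuracy tradeoff the abstract describes.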


Computer Vision and Pattern Recognition | 2005

Computer Vision on FPGAs: Design Methodology and its Application to Gesture Recognition

Mainak Sen; Ivan Corretjer; Fiorella Haim; Sankalita Saha; Shuvra S. Bhattacharyya; Jason Schlessman; Wayne H. Wolf

In this paper we develop a design methodology for generating efficient, target-specific Hardware Description Language (HDL) code from an algorithm through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We demonstrate this methodology through an algorithm for gesture recognition that has been developed previously in software [9]. Using the recently introduced modeling technique of homogeneous parameterized dataflow (HPDF) [3], which effectively captures the structure of an important class of computer vision applications, we systematically transform the gesture recognition application into a streamlined HDL implementation, which is based on Verilog and VHDL. To demonstrate the utility and efficiency of our approach we synthesize the HDL implementation on the Xilinx Virtex II FPGA. This paper describes our design methodology based on the HPDF representation, which offers useful properties in terms of verifying correctness and exposing performance-enhancing transformations; discusses various challenges that we addressed in efficiently linking the HPDF-based application representation to target-specific HDL code; and provides experimental results pertaining to the mapping of the gesture recognition application onto the Virtex II using our methodology.


IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2011

Reconfigurable SRAM Architecture With Spatial Voltage Scaling for Low Power Mobile Multimedia Applications

Minki Cho; Jason Schlessman; Wayne H. Wolf; Saibal Mukhopadhyay

This paper presents a dynamically reconfigurable SRAM array for low-power mobile multimedia applications. The proposed structure uses a lower voltage for cells storing low-order bits and a nominal voltage for cells storing high-order bits. The architecture allows the number of bits in the low-voltage mode to be reconfigured at run time, changing the error characteristics of the array. Simulations at the predictive 70 nm node show that the proposed array can obtain 45% savings in memory power with a marginal (~10%) reduction in image quality.


Asia and South Pacific Design Automation Conference | 2009

Accuracy-aware SRAM: a reconfigurable low power SRAM architecture for mobile multimedia applications

Minki Cho; Jason Schlessman; Wayne H. Wolf; Saibal Mukhopadhyay

We propose a dynamically reconfigurable SRAM architecture for low-power mobile multimedia applications. Parametric failures due to manufacturing variations limit the opportunities for power saving in SRAM. We show that, using a lower voltage for cells storing low-order bits and a nominal voltage for cells storing higher order bits, ~45% savings in memory power can be achieved with a marginal (~10%) reduction in image quality. A reconfigurable array structure is developed to dynamically reconfigure the number of bits in different voltage domains.
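Why errors confined to the low-order bits cost so little image quality can be illustrated with a toy simulation (plain Python with a hypothetical bit-flip rate, not the authors' circuit model): flipping only the least significant bits of 8-bit pixels leaves PSNR high because each flip perturbs a pixel by at most a few gray levels.

```python
import math
import random

def psnr(orig, noisy):
    """Peak signal-to-noise ratio in dB for 8-bit samples."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, noisy)) / len(orig)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)

def store(pixels, low_bits, perr, rng):
    """Model a spatially voltage-scaled SRAM: the `low_bits` least
    significant bits of each pixel sit in the low-voltage domain and
    flip with probability `perr`; high-order bits are error-free."""
    out = []
    for p in pixels:
        for b in range(low_bits):
            if rng.random() < perr:
                p ^= 1 << b
        out.append(p)
    return out

rng = random.Random(0)
img = [rng.randrange(256) for _ in range(10000)]
noisy = store(img, low_bits=3, perr=0.01, rng=rng)
print(round(psnr(img, noisy), 1))  # high PSNR: low-order flips are cheap
```

Putting the same error rate on bit 7 instead would perturb pixels by 128 gray levels per flip, which is why the partition between voltage domains, and hence the power/quality point, is worth reconfiguring at run time.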


EURASIP Journal on Embedded Systems | 2007

Dataflow-based mapping of computer vision algorithms onto FPGAs

Mainak Sen; Ivan Corretjer; Fiorella Haim; Sankalita Saha; Jason Schlessman; Tiehan Lv; Shuvra S. Bhattacharyya; Wayne H. Wolf

We develop a design methodology for mapping computer vision algorithms onto an FPGA through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We first describe a new dataflow modeling technique called homogeneous parameterized dataflow (HPDF), which effectively captures the structure of an important class of computer vision applications. This form of dynamic dataflow takes advantage of the property that in a large number of image processing applications, data production and consumption rates can vary, but are equal across dataflow graph edges for any particular application iteration. After motivating and defining the HPDF model of computation, we develop an HPDF-based design methodology that offers useful properties in terms of verifying correctness and exposing performance-enhancing transformations; we discuss and address various challenges in efficiently mapping an HPDF-based application representation into target-specific HDL code; and we present experimental results pertaining to the mapping of a gesture recognition application onto the Xilinx Virtex II FPGA.
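The defining HPDF property, that token rates may change between graph iterations but are equal across all edges within one iteration, can be sketched with a toy executor (hypothetical actors, not the gesture-recognition graph):

```python
def run_hpdf(iterations, actors):
    """Toy HPDF-style executor: in each graph iteration every edge
    carries the same number of tokens n, but n may vary from one
    iteration to the next (e.g., the number of regions detected in a
    frame).  Each actor consumes and produces n tokens per iteration."""
    log = []
    for n in iterations:            # n = this iteration's common rate
        tokens = list(range(n))     # source actor fires, producing n tokens
        for actor in actors:        # every edge carries exactly n tokens
            tokens = [actor(t) for t in tokens]
        log.append(tokens)
    return log

# Hypothetical two-stage pipeline (say, a filter stage then an offset stage).
def double(t):
    return 2 * t

def inc(t):
    return t + 1

print(run_hpdf([3, 1, 4], [double, inc]))  # → [[1, 3, 5], [1], [1, 3, 5, 7]]
```

Because the rate is uniform across edges within an iteration, buffer bounds and a static firing order can be derived per iteration, which is what makes the model convenient to map onto fixed FPGA resources.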


International Conference on Multimedia and Expo | 2007

Heterogeneous MPSoC Architectures for Embedded Computer Vision

Jason Schlessman; Mark Lodato; I. Burak Özer; Wayne H. Wolf

In this paper, architectures for two distinct embedded computer vision operations are presented. Motivation is given for the utilization of heterogeneous processing cores on a single chip. In addition, a brief discussion of applicability of multi-processor system on a chip (MPSoC) design challenges and techniques to nascent multi-core development considerations is given. Furthermore, a composite architecture consisting of the two distinct operations is discussed, with relative merits of this approach provided. Finally, experimental analysis is given for the applicability and feasibility of these heterogeneous multiprocessor architectures. Area, power, and cycle times are provided for each of the aforementioned designs. The architectural mappings were implemented on a Xilinx Virtex-II Pro V2P30 FPGA, and are shown to operate without pipelining at 50 MHz, utilizing roughly 46% of FPGA resources, and consuming 565 mW of power.


International Conference on Multimedia and Expo | 2006

Design and Verification of Communication Protocols for Peer-to-Peer Multimedia Systems

Senem Velipasalar; Chang Hong Lin; Jason Schlessman; Wayne H. Wolf

This paper addresses issues pertaining to the necessity of utilizing formal verification methods in the design of protocols for peer-to-peer multimedia systems. These systems require sophisticated communication protocols, and these protocols require verification. We discuss two sample protocols designed for two distinct peer-to-peer computer vision applications, namely multi-object multi-camera tracking and distributed gesture recognition. We present simulation and verification results for these protocols, obtained by using the SPIN verification tool, and discuss the importance of verifying the protocols used in peer-to-peer multimedia systems.
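The kind of check SPIN performs can be illustrated with a tiny explicit-state search in Python: enumerate every reachable state of a two-peer message-passing system and flag states where neither process can move. The protocol modeled below (each peer runs a short list of `send`/`recv` steps over two one-way channels) is a deliberately simplified toy, not either protocol from the paper:

```python
from collections import deque

def enabled(state, progs):
    """Successor states for two processes exchanging messages over two
    one-way channels.  A `send` always fires; a `recv` is enabled only
    when the incoming channel is non-empty."""
    pcs, chans = state
    succs = []
    for i in (0, 1):
        if pcs[i] >= len(progs[i]):
            continue                       # this process has terminated
        step = progs[i][pcs[i]]
        new_pcs = list(pcs)
        new_pcs[i] = pcs[i] + 1
        new_chans = list(chans)
        if step == "send":
            new_chans[i] += 1              # channel i carries i -> other
            succs.append((tuple(new_pcs), tuple(new_chans)))
        elif step == "recv" and chans[1 - i] > 0:
            new_chans[1 - i] -= 1
            succs.append((tuple(new_pcs), tuple(new_chans)))
    return succs

def deadlocks(progs):
    """Breadth-first exploration of the reachable state space: a state
    with no successors where some process has not terminated is a deadlock."""
    init = ((0, 0), (0, 0))
    seen, frontier, found = {init}, deque([init]), []
    while frontier:
        state = frontier.popleft()
        succs = enabled(state, progs)
        done = all(pc >= len(p) for pc, p in zip(state[0], progs))
        if not succs and not done:
            found.append(state)
        for nxt in succs:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return found

print(deadlocks((["recv", "send"], ["recv", "send"])))  # both block at once
print(deadlocks((["send", "recv"], ["send", "recv"])))  # → []
```

SPIN does the same reachability analysis over Promela models, with partial-order reduction and far more efficient state storage; the value, as the abstract argues, is that such bugs are found exhaustively rather than by simulation luck.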


International Conference on Distributed Smart Cameras | 2007

Real-Time Human Motion Detection with Distributed Smart Cameras

Mark Daniels; Kate Muldawer; Jason Schlessman; I. Burak Özer; Wayne H. Wolf

Many smart camera security systems employ a single-camera model; this makes depth perception impossible, and the occlusion of objects (either by fixtures or by other body parts of the subject) prevents meaningful task automation. Multi-camera systems have significant overhead in communication and three-dimensional modeling. We have developed a multi-camera system capable of overcoming these issues. Two cameras observing the same space from different vantage points provide depth perception of a subject so that the positions of the hands and face can be mapped in three dimensions. Unlike other three-dimensional modeling programs, we use an ultra-compression method and build on existing message passing interface (MPI) middleware for communication, allowing for real-time performance. Our application provides a framework for robust motion detection and gesture recognition.
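The depth-perception step rests on standard stereo triangulation: with focal length f (in pixels), camera baseline B, and disparity d between the two views, depth is Z = fB/d. A one-function sketch with illustrative numbers (not the system's actual calibration):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: Z = f * B / d.
    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal shift of the feature between views"""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or a mismatch")
    return focal_px * baseline_m / disparity_px

# A hand seen at 40 px disparity with f = 800 px and a 10 cm baseline:
print(depth_from_disparity(800, 0.10, 40))  # → 2.0 (meters)
```

Larger baselines and focal lengths improve depth resolution but shrink the shared field of view, which is part of why the two vantage points must be chosen to keep the hands and face visible to both cameras.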

Collaboration


Jason Schlessman's top co-authors:

Wayne H. Wolf (Georgia Institute of Technology)
Marilyn Wolf (Georgia Institute of Technology)
Minki Cho (Georgia Institute of Technology)
Saibal Mukhopadhyay (Georgia Institute of Technology)