Tiehan Lv
Princeton University
Publication
Featured research published by Tiehan Lv.
IEEE Computer | 2002
Wayne H. Wolf; I. Burak Özer; Tiehan Lv
Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras capture images, smart cameras capture high-level descriptions of the scene and analyze what they see. These devices could support a wide variety of applications including human and animal detection, surveillance, motion analysis, and facial identification. Video processing has an insatiable demand for real-time performance. Smart cameras leverage very large-scale integration to meet this need in a low-cost, low-power system with substantial memory. Moving well beyond pixel processing and compression, these VLSI systems run a wide range of algorithms to extract meaning from streaming video. Recently, Princeton University researchers developed a first-generation smart camera system that can detect people and analyze their movement in real time. Because they push the design space in so many dimensions, these smart cameras are a leading-edge application for embedded system research.
design, automation, and test in europe | 2004
Jiang Xu; Wayne H. Wolf; Jörg Henkel; Srimat T. Chakradhar; Tiehan Lv
In this paper we study bus-based and switch-based on-chip networks for an embedded video application, the smart camera SoC (system on chip). We analyze network performance and overall system performance in detail. We explore system performance using crossbars of different sizes, crossbars of fixed size but different numbers of ports, and different numbers of shared memories. We find that the network is a performance bottleneck in our design, and that a system using an optimized NoC can outperform one using a bus by 132%. Our simulations are based upon recorded real communication traces, which yield more accurate system performance estimates. For the smart camera system, a 16-bit/port 3×3 crossbar with two shared memories shows an 85.7% performance improvement over the bus-based model while requiring lower peak network throughput. This design example illustrates a methodology to quickly and accurately estimate the performance of NoCs at the architecture level.
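The bus-versus-crossbar comparison above can be illustrated with a toy contention model (a sketch, not the paper's trace-driven simulator; the trace below is made up): a shared bus moves one transfer per cycle, while an n×n crossbar can move any set of transfers with pairwise-distinct sources and destinations in the same cycle.

```python
# Toy sketch of why a crossbar outperforms a shared bus on a recorded
# trace of (source, destination) transfers. Illustrative only.
def cycles_bus(trace):
    # A shared bus serializes everything: one transfer per cycle.
    return len(trace)

def cycles_crossbar(trace):
    # A crossbar moves any conflict-free set (distinct sources and
    # distinct destinations) in one cycle; greedy matching per cycle.
    pending, cycles = list(trace), 0
    while pending:
        used_src, used_dst, leftover = set(), set(), []
        for src, dst in pending:
            if src not in used_src and dst not in used_dst:
                used_src.add(src)
                used_dst.add(dst)
            else:
                leftover.append((src, dst))
        pending, cycles = leftover, cycles + 1
    return cycles

trace = [(0, 1), (1, 2), (2, 0), (0, 2), (1, 0)]   # hypothetical trace
print(cycles_bus(trace), cycles_crossbar(trace))   # → 5 2
```

The greedy per-cycle matching understates what an optimal scheduler could do, but it already shows the parallelism a switch-based network exposes.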
international conference on acoustics, speech, and signal processing | 2005
Mainak Sen; Shuvra S. Bhattacharyya; Tiehan Lv; Wayne H. Wolf
We describe a new dataflow model called homogeneous parameterized dataflow (HPDF). This form of dynamic dataflow graph takes advantage of the fact that in a large number of image processing applications, data production and consumption rates, though dynamic, are equal across graph edges for any particular iteration, which leads to a homogeneous rate of actor execution, even though data production and consumption values are dynamic and vary across graph edges. We discuss existing dataflow models and formulate in detail the HPDF model. We develop examples of applications that are described naturally in terms of HPDF semantics and present experimental results that demonstrate the efficacy of the HPDF approach.
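The homogeneous-rate property of HPDF can be sketched as follows (hypothetical actor names; only the property stated in the abstract is taken from it): every edge carries the same, dynamically determined number of tokens in a given iteration, so each actor still fires exactly once per iteration even though the rate changes between iterations.

```python
# Toy sketch of homogeneous parameterized dataflow (HPDF) execution.
# The rate n_tokens is dynamic (parameterized) but identical on every
# edge within one iteration (homogeneous), so the schedule stays simple:
# each actor fires once per iteration, in topological order.

class Actor:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

def run_iteration(actors, n_tokens, data):
    # All edges carry exactly n_tokens this iteration.
    for actor in actors:
        data = actor.fn(data, n_tokens)
    return data

# Hypothetical pipeline: produce -> scale -> reduce
produce = Actor("produce", lambda _, n: list(range(n)))
scale   = Actor("scale",   lambda xs, n: [2 * x for x in xs])
reduce_ = Actor("reduce",  lambda xs, n: sum(xs))

for n in (3, 5, 2):   # rate varies across iterations at run time
    print(n, run_iteration([produce, scale, reduce_], n, None))
```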
international conference on multimedia and expo | 2004
Chang Hong Lin; Tiehan Lv; Wayne H. Wolf; I.B. Ozer
We describe a peer-to-peer multiple-camera architecture for a distributed real-time gesture recognition system. Previous work attaches multiple cameras to a server; this simplifies many design problems but is impractical for real-world installations. Our architecture uses a network of relatively inexpensive cameras to gather images in order to provide high resolution at low cost. Computations are done on the embedded processors in each camera, without a centralized server. We also propose a methodology for transforming well-defined single-camera algorithms to multiple cameras, and we migrate our single-camera gesture recognition system to multiple cameras with slightly overlapping views. To minimize communication bandwidth and power consumption, only selected contour or ellipse information is transmitted between the cameras.
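The bandwidth argument can be made concrete with a made-up message format (the abstract does not describe the actual wire format): sending fitted ellipse parameters between camera nodes instead of raw frames shrinks inter-camera traffic by orders of magnitude.

```python
import struct

# Hypothetical wire format for inter-camera messages: camera id, ellipse
# count, then (cx, cy, major, minor, angle) as 32-bit floats per ellipse.
# Illustrative only; not the paper's protocol.
def ellipse_message(cam_id, ellipses):
    msg = struct.pack("<BB", cam_id, len(ellipses))
    for e in ellipses:
        msg += struct.pack("<5f", *e)
    return msg

frame_bytes = 320 * 240   # one 8-bit grayscale QVGA frame, for comparison
msg = ellipse_message(1, [(160.0, 120.0, 40.0, 15.0, 0.3),
                          (150.0, 200.0, 25.0, 10.0, 1.2)])
print(len(msg), frame_bytes)   # → 42 76800
```

Two ellipses fit in 42 bytes versus 76,800 bytes for even a small raw frame, which is the kind of saving that makes peer-to-peer exchange between embedded nodes feasible.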
Eurasip Journal on Embedded Systems | 2007
Mainak Sen; Ivan Corretjer; Fiorella Haim; Sankalita Saha; Jason Schlessman; Tiehan Lv; Shuvra S. Bhattacharyya; Wayne H. Wolf
We develop a design methodology for mapping computer vision algorithms onto an FPGA through the use of coarse-grain reconfigurable dataflow graphs as a representation to guide the designer. We first describe a new dataflow modeling technique called homogeneous parameterized dataflow (HPDF), which effectively captures the structure of an important class of computer vision applications. This form of dynamic dataflow takes advantage of the property that in a large number of image processing applications, data production and consumption rates can vary, but are equal across dataflow graph edges for any particular application iteration. After motivating and defining the HPDF model of computation, we develop an HPDF-based design methodology that offers useful properties in terms of verifying correctness and exposing performance-enhancing transformations; we discuss and address various challenges in efficiently mapping an HPDF-based application representation into target-specific HDL code; and we present experimental results pertaining to the mapping of a gesture recognition application onto the Xilinx Virtex II FPGA.
international conference on multimedia and expo | 2004
Tiehan Lv; I. Burak Özer; Wayne H. Wolf
Background subtraction algorithms are critical to many video recognition and analysis systems and have been studied for decades. Most of these algorithms assume that the camera is fixed. We propose a background subtraction algorithm that works in the presence of camera shake: the input frames are motion-compensated and compared with a given reference frame to separate foreground objects from the background. Experimental results show that the proposed method outperforms the widely used Gaussian-mixture-model-based method in both fixed-camera and shaking-camera scenarios with respect to accuracy, robustness, and efficiency.
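A minimal sketch of the compensate-then-compare idea, assuming purely translational jitter and integer-pixel shifts (the paper's actual motion compensation may differ):

```python
import numpy as np

# Sketch: align the incoming frame to the reference by a small global
# shift, then threshold the residual to get the foreground mask.
def best_shift(frame, reference, max_shift=2):
    """Exhaustively search small integer (dy, dx) shifts for the best fit."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            err = np.mean((shifted.astype(int) - reference.astype(int)) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def subtract_background(frame, reference, thresh=30):
    """Compensate camera shake, then compare against the reference frame."""
    dy, dx = best_shift(frame, reference)
    aligned = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
    return np.abs(aligned.astype(int) - reference.astype(int)) > thresh
```

With a fixed camera the estimated shift is simply (0, 0), so the same code degrades gracefully to plain frame differencing.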
design, automation, and test in europe | 2003
Tiehan Lv; Jörg Henkel; Haris Lekatsas; Wayne H. Wolf
Signal integrity is and will continue to be a major concern in deep sub-micron VLSI designs, where the proximity of signal-carrying lines leads to crosstalk, unpredictable signal delays, and other parasitic side effects. Our scheme uses bus encoding to guarantee that at any time any two signal-carrying lines are separated by at least one grounded line, thus providing a high degree of signal integrity. This comes at a small overhead of only one additional bus line (the closest related work needs 14 additional lines for a 32-bit bus) and a small average performance decrease of 0.36%. Using a large set of real-world applications, we compare our scheme to other state-of-the-art approaches in terms of degree of integrity, overhead (e.g., additional lines required), and possible performance decrease.
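Why shielding helps can be illustrated with a toy crosstalk counter (not the paper's encoder): two physically adjacent lines toggling in opposite directions is the classic worst case for capacitive coupling, and placing a grounded line between any two signal lines removes such events by construction.

```python
# Toy illustration: count worst-case crosstalk events between two
# consecutive bus words, i.e. adjacent bit positions whose lines toggle
# in opposite directions at the same time. Illustrative only.
def opposite_transitions(prev, curr, width=32):
    events = 0
    for i in range(width - 1):
        a = ((curr >> i) & 1) - ((prev >> i) & 1)            # -1, 0, or +1
        b = ((curr >> (i + 1)) & 1) - ((prev >> (i + 1)) & 1)
        if a * b == -1:   # adjacent lines switching in opposite directions
            events += 1
    return events

# Alternating pattern flipping to its complement: every adjacent pair
# toggles in opposite directions, the worst case for an unshielded bus.
print(opposite_transitions(0b0101, 0b1010, width=4))   # → 3
```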
international conference on multimedia and expo | 2003
Wayne H. Wolf; I. Burak Özer; Tiehan Lv
This paper describes our new multiple-camera architecture for real-time video analysis. This architecture uses an array of relatively inexpensive cameras to gather images in order to provide high resolution at low cost. The system also uses a hierarchy of cameras, including both wide-angle and telephoto views. Wide-angle cameras are responsible for camera coordination while telephoto cameras are primarily responsible for detailed processing of parts of the scene.
international conference on multimedia and expo | 2002
I. Burak Özer; Tiehan Lv; Wayne H. Wolf
We propose a smart camera system in which the cameras detect the presence of a person and recognize that person's activities. A relational graph-based model of the human body and an HMM-based activity recognition of the body parts are proposed for real-time video analysis. The results show that more than 86 percent of the body parts and 88 percent of the activities are correctly classified. We also describe the relationship between the activity detection algorithms and the architectures required to perform these tasks in real time. We achieve a processing rate of more than 20 frames per second on each TriMedia video capture board.
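HMM-based activity recognition of the kind described above can be sketched with standard Viterbi decoding over a tiny two-state model (all states, observations, and probabilities below are illustrative, not the paper's trained parameters):

```python
import numpy as np

# Hedged sketch: Viterbi decoding over a tiny HMM, the general technique
# behind per-body-part activity recognition. Parameters are made up.
def viterbi(obs, pi, A, B):
    """obs: observation indices; pi: initial, A: transition, B: emission."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))           # best path probability so far
    psi = np.zeros((T, N), dtype=int)  # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    states = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):      # trace backpointers
        states.append(int(psi[t, states[-1]]))
    return states[::-1]

# States: 0 = "waving", 1 = "still"; observations: 0 = motion, 1 = static.
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])   # activities persist over time
B = np.array([[0.8, 0.2], [0.2, 0.8]])   # noisy observation model
print(viterbi([0, 0, 1, 1, 1], pi, A, B))   # → [0, 0, 1, 1, 1]
```

A real system would run one such decode per tracked body part, with states and emissions learned from training video.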
international conference on information technology research and education | 2003
Tiehan Lv; I. Burak Özer; Wayne H. Wolf
This paper discusses VLSI architectures for distributed smart camera systems. We first introduce the core algorithm of the smart camera system and then compare two different approaches to implementing a single-node smart camera system. We show that by using heterogeneous multiprocessors, we can achieve a processing speed of 150 frames/sec with a small die area of 22.7 mm², less than half the die size required by a multiple-VLIW-processor approach. In addition, we discuss issues related to distributed smart cameras such as task scheduling, inter-processor communication, and synchronization.