
Publication


Featured research published by Edgar J. Lobaton.


International Conference on Distributed Smart Cameras | 2008

CITRIC: A low-bandwidth wireless camera network platform

Phoebus Chen; Parvez Ahammad; Colby Boyer; Shih-I Huang; Leon Lin; Edgar J. Lobaton; Marci Meingast; Songhwai Oh; Simon Wang; Posu Yan; Allen Y. Yang; Chuohao Yeo; Lung-Chung Chang; J. D. Tygar; Shankar Sastry

In this paper, we propose and demonstrate a novel wireless camera network system, called CITRIC. The core component of this system is a new hardware platform that integrates a camera, a frequency-scalable (up to 624 MHz) CPU, 16 MB of flash memory, and 64 MB of RAM onto a single device. The device then connects with a standard sensor network mote to form a camera mote. The design enables in-network processing of images to reduce communication requirements, which have traditionally been high in existing camera networks with centralized processing. We also propose a back-end client/server architecture to provide a user interface to the system and support further centralized processing for higher-level applications. Our camera mote enables a wider variety of distributed pattern recognition applications than traditional platforms because it provides more computing power and tighter integration of physical components while still consuming relatively little power. Furthermore, the mote easily integrates with existing low-bandwidth sensor networks because it can communicate over the IEEE 802.15.4 protocol with other sensor network platforms. We demonstrate our system on three applications: image compression, target tracking, and camera localization.
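As a rough illustration of the in-network processing idea above, the sketch below decides on the mote whether a frame is worth transmitting over the low-bandwidth radio. It is a generic background-subtraction filter with made-up thresholds, not the CITRIC firmware:

```python
import numpy as np

def should_transmit(frame, background, pixel_thresh=25, frac_thresh=0.01):
    """Decide on the mote whether a frame is worth sending.

    Hypothetical sketch of in-network processing: compare the new frame
    against a background model and transmit only when enough pixels
    changed, so the 802.15.4 radio is used sparingly.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    changed_fraction = np.mean(diff > pixel_thresh)
    return changed_fraction > frac_thresh

def update_background(background, frame, alpha=0.05):
    """Exponential moving-average background update (a common simple model)."""
    return ((1 - alpha) * background + alpha * frame).astype(np.uint8)

# Example: a static scene produces no transmission; a bright blob does.
bg = np.zeros((60, 80), dtype=np.uint8)
frame = bg.copy()
frame[20:40, 30:50] = 200  # simulated target entering the scene
assert not should_transmit(bg, bg)
assert should_transmit(frame, bg)
```

Filtering frames on the mote trades a little local computation for a large reduction in radio traffic, which is the trade-off the platform is built around.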


IEEE Transactions on Multimedia | 2011

High-Quality Visualization for Geographically Distributed 3-D Teleimmersive Applications

Ramanarayan Vasudevan; Gregorij Kurillo; Edgar J. Lobaton; Tony Bernardin; Oliver Kreylos; Ruzena Bajcsy; Klara Nahrstedt

The growing popularity of 3-D movies has led to the rapid development of numerous affordable consumer 3-D displays. In contrast, the development of technology to generate 3-D content has lagged behind considerably. In spite of significant improvements to the quality of imaging devices, the accuracy of the algorithms that generate 3-D data, and the hardware available to render such data, the algorithms available to calibrate, reconstruct, and then visualize such data remain difficult to use, extremely noise sensitive, and unreasonably slow. In this paper, we present a multi-camera system that creates a highly accurate (on the order of a centimeter), 3-D reconstruction of an environment in real time (under 30 ms) that allows for remote interaction between users. This paper focuses on addressing the aforementioned deficiencies by describing algorithms to calibrate, reconstruct, and render objects in the system. We demonstrate the accuracy and speed of our results on a variety of benchmarks and data collected from our own system.


IEEE Transactions on Image Processing | 2010

A Distributed Topological Camera Network Representation for Tracking Applications

Edgar J. Lobaton; Ramanarayan Vasudevan; Ruzena Bajcsy; Shankar Sastry

Sensor networks have been widely used for surveillance, monitoring, and tracking. Camera networks, in particular, provide a large amount of information that has traditionally been processed in a centralized manner employing a priori knowledge of camera location and of the physical layout of the environment. Unfortunately, these conventional requirements are far too demanding for ad-hoc distributed networks. In this article, we present a simplicial representation of a camera network called the camera network complex (CN-complex), which accurately captures topological information about the visual coverage of the network. This representation provides a coordinate-free calibration of the sensor network and demands no localization of the cameras or objects in the environment. A distributed, robust algorithm, validated via two experimental setups, is presented for the construction of the representation using only binary detection information. We demonstrate the utility of this representation in capturing holes in the coverage, performing tracking of agents, and identifying homotopic paths.
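The co-detection idea behind a coverage complex can be illustrated with a toy construction: treat cameras as vertices and add a simplex whenever a set of cameras detects the same agent simultaneously. This is a simplified single-agent sketch, not the paper's full distributed CN-complex algorithm:

```python
from itertools import combinations

def build_coverage_complex(detections):
    """Toy construction of a coverage complex from binary detections.

    `detections` maps time step -> set of camera ids that detected a
    single agent at that time. Cameras that fire together are assumed
    to have overlapping coverage, so each co-detection contributes a
    simplex. A simplified sketch: the paper's construction also handles
    multiple agents and occlusion-induced ambiguity.
    """
    simplices = set()
    for cams in detections.values():
        cams = tuple(sorted(cams))
        # Add the simplex and all of its faces (vertices, edges, ...).
        for k in range(1, len(cams) + 1):
            simplices.update(combinations(cams, k))
    return simplices

# One agent walks through the overlaps of cameras A, B, and C.
obs = {0: {"A"}, 1: {"A", "B"}, 2: {"B"}, 3: {"B", "C"}, 4: {"C"}}
cx = build_coverage_complex(obs)
assert ("A", "B") in cx and ("B", "C") in cx
assert ("A", "C") not in cx  # A and C never co-detect: a possible coverage hole
```

Because only binary detections enter the construction, no camera localization or metric calibration is needed, which is the point of the coordinate-free representation.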


International Symposium on Multimedia | 2008

A Framework for Collaborative Real-Time 3D Teleimmersion in a Geographically Distributed Environment

Gregorij Kurillo; Ramanarayan Vasudevan; Edgar J. Lobaton; Ruzena Bajcsy

In this paper, we present a framework for immersive 3D video conferencing and geographically distributed collaboration. Our multi-camera system performs a full-body 3D reconstruction of users in real time and renders their image in a virtual space allowing remote interaction between users and the virtual environment. The paper features an overview of the technology and algorithms used for calibration, capturing, and reconstruction. We introduce stereo mapping using adaptive triangulation which allows for fast (under 25 ms) and robust real-time 3D reconstruction. The chosen representation of the data provides high compression ratios for transfer to a remote site. The algorithm produces partial 3D meshes, instead of dense point clouds, which are combined on the renderer to create a unified model of the user. We have successfully demonstrated the use of our system in various applications such as remote dancing and immersive Tai Chi learning.
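To see why partial meshes transfer better than dense point clouds, here is a hypothetical minimal triangulation of a depth-map grid that drops triangles straddling depth discontinuities; the paper's adaptive triangulation is more sophisticated, and the thresholds below are illustrative:

```python
import numpy as np

def depth_grid_to_mesh(depth, step=1, max_jump=0.1):
    """Triangulate a depth map over a regular grid into a partial mesh.

    Connectivity is implicit in the grid, so only vertex depths need to
    be sent, and quads that straddle a depth discontinuity (jump larger
    than max_jump) or contain invalid depths are simply dropped, which
    yields a *partial* mesh with holes at object boundaries.
    """
    h, w = depth.shape
    triangles = []
    for y in range(0, h - step, step):
        for x in range(0, w - step, step):
            quad = [(y, x), (y, x + step), (y + step, x), (y + step, x + step)]
            zs = [depth[p] for p in quad]
            if max(zs) - min(zs) > max_jump or any(z <= 0 for z in zs):
                continue  # discontinuity or invalid depth: leave a hole
            triangles.append((quad[0], quad[1], quad[2]))
            triangles.append((quad[1], quad[3], quad[2]))
    return triangles

d = np.full((4, 4), 1.0)
d[:, 2:] = 2.0  # a depth discontinuity down the middle
mesh = depth_grid_to_mesh(d)
# Triangles never bridge the jump between the two depth planes.
assert all(max(d[p] for p in t) - min(d[p] for p in t) <= 0.1 for t in mesh)
```

Sending triangles over a known grid instead of free-floating points is one way such systems achieve the high compression ratios mentioned above.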


International Conference on Multimedia and Expo | 2010

Real-time stereo-vision system for 3D teleimmersive collaboration

Ramanarayan Vasudevan; Zhong Zhou; Gregorij Kurillo; Edgar J. Lobaton; Ruzena Bajcsy; Klara Nahrstedt

Though the variety of desktop real-time stereo-vision systems has grown considerably in the past several years, few make any verifiable claims about the accuracy of the algorithms used to construct 3D data or describe how the data generated by such systems, which is large in size, can be effectively distributed. In this paper, we describe a system that creates an accurate (on the order of a centimeter), 3D reconstruction of an environment in real time (under 30 ms) that also allows for remote interaction between users. This paper addresses how to reconstruct, compress, and visualize the 3D environment. In contrast to most commercial desktop real-time stereo-vision systems, our algorithm produces 3D meshes instead of dense point clouds, which we show allows for better quality visualizations. The chosen representation of the data also allows for high compression ratios for transfer to remote sites. We demonstrate the accuracy and speed of our results on a variety of benchmarks.


IEEE Transactions on Control Systems Technology | 2009

Modeling and Optimization Analysis of a Single-Flagellum Micro-Structure Through the Method of Regularized Stokeslets

Edgar J. Lobaton; Alexandre M. Bayen

Bacteria such as Rhodobacter sphaeroides use a single flagellum for propulsion and change of orientation. These types of simple organisms have inspired microrobotic designs with potential applications in medicine, which motivates this work. In this paper, an elastic model for a single-flagellum micro-structure is presented and followed by an analysis of the system based on optimization. The model is based on the method of Regularized Stokeslets which allows for a discretization of the system into particles connected by spring forces. The optimization analysis leads to the design of an optimal elasticity distribution that maximizes the mean forward speed of the structure. These elasticity coefficients are obtained through the use of adjoint-based optimization. The results are illustrated through simulations showing improvement on the swimming pattern of the micro-structure.
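The method of Regularized Stokeslets replaces point forces with smoothed "blob" forces so that the induced Stokes flow stays finite everywhere, including at the particle locations. A minimal evaluation of the classical Cortez (2001) kernel is sketched below; the parameter values (eps, mu) are illustrative, not the paper's:

```python
import numpy as np

def regularized_stokeslet_velocity(x, x0, f, eps=0.1, mu=1.0):
    """Velocity at x induced by a regularized point force f at x0.

    Implements the classical 3D regularized Stokeslet of Cortez (2001)
    for a specific blob function:
      u(x) = [ f (r^2 + 2 eps^2) + (f . r) r ] / (8 pi mu (r^2 + eps^2)^{3/2})
    with r = x - x0. Finite even at r = 0, unlike the singular Stokeslet.
    """
    r = x - x0
    r2 = np.dot(r, r)
    denom = 8.0 * np.pi * mu * (r2 + eps**2) ** 1.5
    return (f * (r2 + 2.0 * eps**2) + np.dot(f, r) * r) / denom

def flow_at(x, particles, forces, eps=0.1):
    """Superpose contributions from force-carrying particles, as in a
    spring-connected discretization of the flagellum."""
    return sum(regularized_stokeslet_velocity(x, p, f, eps)
               for p, f in zip(particles, forces))

pts = [np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.0, 0.0])]
fs = [np.array([0.0, 1.0, 0.0])] * 2  # both particles push the fluid in +y
u = flow_at(np.array([0.25, 0.3, 0.0]), pts, fs)
assert u[1] > 0  # nearby fluid is dragged along +y
```

Discretizing the flagellum into such particles, with spring forces supplying f, gives the linear force-to-velocity map on which the adjoint-based elasticity optimization operates.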


Proceedings of the First Annual ACM SIGMM Conference on Multimedia Systems | 2010

A methodology for remote virtual interaction in teleimmersive environments

Ramanarayan Vasudevan; Edgar J. Lobaton; Gregorij Kurillo; Ruzena Bajcsy; Tony Bernardin; Bernd Hamann; Klara Nahrstedt

Though the quality of imaging devices, the accuracy of algorithms that construct 3D data, and the hardware available to render such data have all improved, the algorithms available to calibrate, reconstruct, and then visualize such data are difficult to use, extremely noise sensitive, and unreasonably slow. In this paper, we describe a multi-camera system that creates a highly accurate (on the order of a centimeter), 3D reconstruction of an environment in real time (under 30 ms) that allows for remote interaction between users. The paper addresses the aforementioned deficiencies by featuring an overview of the technology and algorithms used to calibrate, reconstruct, and render objects in the system. The algorithm produces partial 3D meshes, instead of dense point clouds, which are combined on the renderer to create a unified model of the environment. The chosen representation of the data allows for high compression ratios for transfer to remote sites. We demonstrate the accuracy and speed of our results on a variety of benchmarks and data collected from our own system.


International Conference on Robotics and Automation | 2011

Planning curvature-constrained paths to multiple goals using circle sampling

Edgar J. Lobaton; Jinghe Zhang; Sachin Patil; Ron Alterovitz

We present a new sampling-based method for planning optimal, collision-free, curvature-constrained paths for nonholonomic robots to visit multiple goals in any order. Rather than sampling configurations as in standard sampling-based planners, we construct a roadmap by sampling circles of constant curvature and then generating feasible transitions between the sampled circles. We provide a closed-form formula for connecting the sampled circles in 2D and generalize the approach to 3D workspaces. We then formulate the multi-goal planning problem as finding a minimum directed Steiner tree over the roadmap. Since optimally solving the multi-goal planning problem requires exponential time, we propose greedy heuristics to efficiently compute a path that visits multiple goals. We apply the planner in the context of medical needle steering, where the needle tip must reach multiple goals in soft tissue, a common requirement for clinical procedures such as biopsies, drug delivery, and brachytherapy cancer treatment. We demonstrate that our multi-goal planner significantly decreases tissue that must be cut when compared to sequential execution of single-goal plans.
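The circle-sampling construction can be sketched in 2D: sample circles of radius 1/kappa_max and connect pairs by common-tangent segments. The snippet below, with made-up workspace bounds and no collision checking, shows only the tangent-existence geometry for equal radii, not the paper's full roadmap or Steiner-tree search:

```python
import math
import random

def sample_circles(n, xlim=(0.0, 10.0), ylim=(0.0, 10.0), seed=0):
    """Sample centers of constant-curvature circles (radius = 1 / kappa_max).

    A toy version of circle sampling: the real roadmap also stores the
    traversal direction of each circle and rejects colliding samples.
    """
    rng = random.Random(seed)
    return [(rng.uniform(*xlim), rng.uniform(*ylim)) for _ in range(n)]

def tangent_length(c1, c2, r, same_direction=True):
    """Length of the straight transition segment between two radius-r circles.

    For equal radii, circles traversed in the same rotational direction
    are joined by an external tangent of length d (the center distance),
    while opposite directions need an internal tangent, which exists only
    when d >= 2r. Returns None when no tangent transition exists.
    """
    d = math.dist(c1, c2)
    if same_direction:
        return d if d > 0 else None
    return math.sqrt(d * d - 4.0 * r * r) if d >= 2.0 * r else None

# Build a tiny roadmap: connect every sampled circle pair by a tangent edge.
circles = sample_circles(5)
edges = [(i, j, tangent_length(circles[i], circles[j], r=1.0))
         for i in range(len(circles)) for j in range(i + 1, len(circles))]
assert all(length is not None for _, _, length in edges)
```

A path then alternates circular arcs and tangent segments, so its curvature never exceeds kappa_max by construction.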


International Conference on Computer Vision | 2011

Robust topological features for deformation invariant image matching

Edgar J. Lobaton; Ramanarayan Vasudevan; Ron Alterovitz; Ruzena Bajcsy

Local photometric descriptors are a crucial low level component of numerous computer vision algorithms. In practice, these descriptors are constructed to be invariant to a class of transformations. However, the development of a descriptor that is simultaneously robust to noise and invariant under general deformation has proven difficult. In this paper, we introduce the Topological-Attributed Relational Graph (T-ARG), a new local photometric descriptor constructed from homology that is provably invariant to locally bounded deformation. This new robust topological descriptor is backed by a formal mathematical framework. We apply T-ARG to a set of benchmark images to evaluate its performance. Results indicate that T-ARG significantly outperforms traditional descriptors for noisy, deforming images.


European Conference on Computer Vision | 2010

Local occlusion detection under deformations using topological invariants

Edgar J. Lobaton; Ramanarayan Vasudevan; Ruzena Bajcsy; Ron Alterovitz

Occlusions provide critical cues about the 3D structure of man-made and natural scenes. We present a mathematical framework and algorithm to detect and localize occlusions in image sequences of scenes that include deforming objects. Our occlusion detector works under far weaker assumptions than other detectors. We prove that occlusions in deforming scenes occur when certain well-defined local topological invariants are not preserved. Our framework employs these invariants to detect occlusions with a zero false positive rate under assumptions of bounded deformations and color variation. The novelty and strength of this methodology is that it does not rely on spatio-temporal derivatives or matching, which can be problematic in scenes including deforming objects, but is instead based on a mathematical representation of the underlying cause of occlusions in a deforming 3D scene. We demonstrate the effectiveness of the occlusion detector using image sequences of natural scenes, including deforming cloth and hand motions.

Collaboration


Top Co-Authors

Ruzena Bajcsy (University of California)
Shankar Sastry (University of California)
Ron Alterovitz (University of North Carolina at Chapel Hill)
Parvez Ahammad (University of California)
Tony Bernardin (University of California)
Allen Y. Yang (University of California)
Bernd Hamann (University of California)