Huang Lee
Stanford University
Publications
Featured research published by Huang Lee.
International Conference on Distributed Smart Cameras | 2008
Linda Tessens; Marleen Morbée; Huang Lee; Wilfried Philips; Hamid K. Aghajan
Within a camera network, the contribution of a camera to the observation of a scene depends on its viewpoint and on the scene configuration. This is a dynamic property, as the scene content is subject to change over time and the camera configuration might not be fixed, e.g. in a mobile network. In this work, we address the problem of effectively determining the principal viewpoint within a network, i.e. the view that contributes most to the desired observation of a scene. This selection is based on the information from each camera's observations of persons in a scene, and only low data rate information needs to be sent over wireless channels since the image frames are first locally processed by each sensor node before transmission. The principal view, complemented with one or more helper views, constitutes a significantly more efficient scene representation than the totality of the available views. This is of great value for the reduction of the amount of image data that needs to be stored or transmitted over the network.
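The selection step described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: each node is assumed to report a single scalar contribution score computed locally, and the network keeps the top-scoring view as the principal view plus a few helper views.

```python
# Hypothetical sketch: each camera reports one locally computed score
# (only a few bytes over the wireless channel); the highest-scoring view
# becomes the principal view and the next ones serve as helper views.

def select_views(scores, num_helpers=1):
    """Return (principal_id, helper_ids) given {camera_id: score}."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[0], ranked[1:1 + num_helpers]

# Example: camera 2 observes the person most frontally, so it scores highest.
principal, helpers = select_views({0: 0.31, 1: 0.55, 2: 0.87}, num_helpers=1)
```

Only the scores cross the network, which is what keeps the data rate low relative to transmitting the frames themselves.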
Proceedings of the 4th ACM international workshop on Video surveillance and sensor networks | 2006
Huang Lee; Hamid Aghajan
A collaborative vision-based technique is proposed for localizing the nodes of a surveillance network based on observations of a non-cooperative moving target. The proposed method employs lightweight in-node image processing and limited data exchange between the nodes to determine the positions and orientations of the nodes participating in synchronized observations of the target. A node with an opportunistic observation of a passing target broadcasts a synchronizing packet and triggers image capture by its neighbors. In the cluster of participating nodes, the triggering node and a helper node define a relative coordinate system. Once a small number of joint observations of the target are made by the nodes, the model allows for a decentralized or a cluster-based solution for the localization problem. No images are transferred between the network nodes for the localization task, making the proposed method efficient and scalable. Simulation and experimental results are provided to verify the performance of the proposed technique.
Asilomar Conference on Signals, Systems and Computers | 2005
Huang Lee; Hamid Aghajan
We introduce a novel localization technique that can jointly estimate the locations of a moving target and the sensor nodes in a wireless image sensor network. The proposed method is based on in-node image processing and can be implemented in a decentralized or clustered fashion. In our approach, two image sensors are used to define a relative coordinate system. In order to synchronize the observations, the node defined as the origin broadcasts packets that trigger image capture at other nodes. In the decentralized version of the technique, each of the two reference nodes broadcasts its image plane position of the moving target at a few time instances. Each of the other nodes in the network that can detect the target in its image plane, upon receiving a number of triggering broadcasts, calculates its own relative coordinates and orientation as well as the coordinates of the observed target. In the clustered version of the proposed technique, observations gathered by the nodes within a neighborhood cluster are sent to a cluster-head, which can be the reference node at the origin. The cluster-head combines the data and calculates the coordinates of the target and all the nodes that contributed observations. Experimental results are provided to verify the performance of both versions of the proposed algorithm.
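The core geometric step of such two-reference-node schemes can be illustrated with bearing-ray triangulation. This is a minimal sketch, not the paper's exact formulation: each reference node's image-plane measurement is assumed to reduce to one bearing angle in the shared relative frame, and intersecting the two rays recovers the target's relative coordinates.

```python
import math

# Minimal sketch (hypothetical names): intersect the bearing rays from
# two reference nodes to recover the target's relative coordinates.

def triangulate(p1, theta1, p2, theta2):
    """Intersect rays leaving p1 at angle theta1 and p2 at angle theta2."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 + t*d1 = p2 + s*d2 for t (2x2 linear system, Cramer's rule).
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    t = (bx * (-d2[1]) - (-d2[0]) * by) / det
    return p1[0] + t * d1[0], p1[1] + t * d1[1]

# The origin node at (0, 0) sees the target at 45 degrees; the helper
# node at (2, 0) sees it at 135 degrees, placing the target at (1, 1).
x, y = triangulate((0.0, 0.0), math.pi / 4, (2.0, 0.0), 3 * math.pi / 4)
```

Once target positions at a few time instances are known in this frame, a third node can solve for its own pose from its own bearings to the same target, which is the decentralized step the abstract describes.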
Sensor, Mesh and Ad Hoc Communications and Networks | 2006
Huang Lee; Hattie Zhi Chen Dong; Hamid K. Aghajan
We present a vision-based solution to the problem of topology discovery and localization of wireless sensor networks. In the proposed model, a robot controlled by the network is introduced to assist with localization of a network of image sensors, which are assumed to have image planes parallel to the agent's motion plane. The localization algorithm for the scenario where the moving agent has knowledge of its global coordinates is first studied. This baseline scenario is then used to build more complex localization algorithms in which the robot has no knowledge of its global positions. Two cases where the sensors have overlapping and non-overlapping fields of view (FOVs) are investigated. In order to implement the discovery algorithms for these two different cases, a forest structure is introduced to represent the topology of the network. We consider the collection of sensors with overlapping FOVs as a tree in the forest. The robot searches for nodes in each tree through boundary patrolling, while it searches for other trees by a radial pattern motion. Numerical analyses are provided to verify the proposed algorithms. Finally, experimental results show that the sensor coordinates estimated by the proposed algorithms accurately reflect the results found by manual methods.
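The forest structure above can be recovered from pairwise overlap reports with a union-find pass. This is an illustrative sketch under the assumption that FOV overlap is reported as pairs of node IDs; the grouping itself, not the robot's motion strategy, is what it shows.

```python
# Sketch (hypothetical representation): sensors whose fields of view
# overlap belong to the same tree; the trees form the forest the robot
# traverses. Union-find over overlap pairs yields the trees.

def forest_components(num_nodes, overlaps):
    parent = list(range(num_nodes))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for a, b in overlaps:
        parent[find(a)] = find(b)
    trees = {}
    for i in range(num_nodes):
        trees.setdefault(find(i), []).append(i)
    return sorted(trees.values())

# Nodes 0-1-2 have chained overlapping views; node 3 is isolated,
# so the forest contains two trees.
trees = forest_components(4, [(0, 1), (1, 2)])
```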
Advanced Concepts for Intelligent Vision Systems | 2008
Huang Lee; Linda Tessens; Marleen Morbée; Hamid K. Aghajan; Wilfried Philips
Within a camera network, the contribution of a camera to the observations of a scene depends on its viewpoint and on the scene configuration. This is a dynamic property, as the scene content is subject to change over time. An automatic selection of a subset of cameras that significantly contributes to the desired observation of a scene can be of great value for the reduction of the amount of transmitted and stored image data. We propose a greedy algorithm for camera selection in practical vision networks where the selection decision has to be taken in real time. The selection criterion is based on the information from each camera sensor's observations of persons in a scene, and only low data rate information needs to be sent over wireless channels since the image frames are first locally processed by each sensor node before transmission. Experimental results show that the performance of the proposed greedy algorithm is close to the performance of the optimal selection algorithm. In addition, we propose communication protocols for such camera networks, and through experiments, we show that the proposed protocols improve latency and observation frequency without deteriorating the performance.
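The shape of such a greedy selection loop can be sketched as below. The utility function here is a hypothetical stand-in for the paper's person-observation criterion; the sketch only illustrates the greedy structure of repeatedly adding the camera with the largest marginal gain until a budget is reached.

```python
# Hypothetical sketch of greedy camera selection: at each step, add the
# camera whose marginal gain in observation utility is largest, until
# the subset reaches its budget.

def greedy_select(cameras, utility, budget):
    chosen = []
    while len(chosen) < budget:
        best = max((c for c in cameras if c not in chosen),
                   key=lambda c: utility(chosen + [c]) - utility(chosen))
        chosen.append(best)
    return chosen

# Toy utility: number of distinct persons covered, so a camera whose
# view overlaps an already chosen one adds little.
coverage = {0: {"A"}, 1: {"A", "B"}, 2: {"C"}}
util = lambda subset: len(set().union(*(coverage[c] for c in subset)))
picked = greedy_select([0, 1, 2], util, budget=2)
```

With a monotone diminishing-returns utility of this kind, greedy subset selection is a standard way to approximate the combinatorial optimum at real-time cost, which matches the abstract's observation that the greedy result is close to the optimal selection.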
Broadband Communications, Networks and Systems | 2005
Laura Savidge; Huang Lee; Hamid K. Aghajan; Andrea J. Goldsmith
We investigate the use of distributed image sensing for network localization, dynamic routing, and load balancing in wireless sensor networks. In particular, the image sensors are first used to obtain angular bearing information between each network node and a set of other nodes, mobile agents, or targets. This data is used to construct the relative geographic topology of the network. The image sensors are then employed to make periodic measurements, which are reported to the destination via multihop routing. Nodes may also infrequently detect an event for which a set of image frames needs to be reported. These high-bandwidth event reports may cause packet queues to develop at the routing nodes along paths to the destination. We propose a distributed routing scheme that employs a cost function based on location data, in-node queue sizes, and energy levels at neighboring nodes. Our scheme also implements a set of relative priority levels for the event-based and periodic data packets. Simulation results are presented and indicate improved network lifetime, lower end-to-end average and maximum delays, and significantly reduced buffer size requirements for the network nodes.
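A next-hop decision driven by such a cost function might look like the following sketch. The weights and the linear form of the cost are assumptions for illustration, not the paper's calibrated function; the point is that distance to the destination, neighbor queue backlog, and remaining energy all enter one local score.

```python
# Hypothetical sketch: score each neighbor by a cost mixing progress
# toward the destination, queue backlog, and remaining energy, then
# forward to the cheapest neighbor. Weights are illustrative.

def next_hop(neighbors, dist_to_dest, queue_len, energy,
             w_dist=1.0, w_queue=0.5, w_energy=2.0):
    def cost(n):
        return (w_dist * dist_to_dest[n]
                + w_queue * queue_len[n]
                + w_energy / max(energy[n], 1e-9))
    return min(neighbors, key=cost)

# Neighbor 'b' is closer to the destination but heavily congested,
# so 'a' wins on the combined cost.
hop = next_hop(["a", "b"],
               dist_to_dest={"a": 3.0, "b": 2.0},
               queue_len={"a": 1, "b": 8},
               energy={"a": 0.9, "b": 0.9})
```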
ACM Transactions on Sensor Networks | 2010
Huang Lee; Abtin Keshavarzian; Hamid K. Aghajan
In wireless sensor networks, periodic data collection appears in many applications. During data collection, messages from sensor nodes are periodically collected and sent back to a set of base stations for processing. In this article, we present and analyze a near-lifetime-optimal and scalable solution for data collection in stationary wireless sensor networks and an energy-efficient packet exchange mechanism. In our solution, instead of using a fixed network topology, we construct a set of communication topologies and apply each topology to different data collection cycles. We not only use the flexibility in distributing the traffic load across different routes in the network (spatial load balancing), but also balance the energy consumption in the time domain (temporal load balancing). We show that this method achieves an average energy consumption rate very close to the optimal value found by network flow optimization techniques. To increase the scalability, we further extend our solution such that it can be applied to networks with multiple base stations where each base station only stores part of the network configuration, cooperating with each other to find a global solution in a distributed manner. The proposed methods are analyzed and evaluated by simulations.
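The temporal load-balancing idea can be sketched with a simple rotation schedule. This is a minimal illustration assuming each precomputed topology is represented just by its set of relay nodes; the actual construction of the topologies is the substance of the paper and is not shown.

```python
# Hypothetical sketch of temporal load balancing: instead of one fixed
# routing tree, the network rotates through several precomputed trees
# across data-collection cycles, so no relay carries the heaviest load
# in every cycle.

def topology_for_cycle(topologies, cycle):
    """Pick the routing topology used in a given collection cycle."""
    return topologies[cycle % len(topologies)]

def load_per_node(topologies, num_cycles):
    """Count how many cycles each node spends as a relay."""
    load = {}
    for c in range(num_cycles):
        for relay in topology_for_cycle(topologies, c):
            load[relay] = load.get(relay, 0) + 1
    return load

# Two trees with disjoint relay sets; over 4 cycles each relay works
# exactly twice, evening out battery drain over time.
load = load_per_node([("n1", "n2"), ("n3", "n4")], num_cycles=4)
```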
Multimedia Signal Processing | 2008
Marleen Morbée; Linda Tessens; Huang Lee; Wilfried Philips; Hamid K. Aghajan
Within a camera network, the contribution of a camera to the observation of a scene depends on its viewpoint and on the scene configuration. This is a dynamic property, as the scene content is subject to change over time. An automatic selection of a subset of cameras that significantly contributes to the desired observation of a scene can be of great value for the reduction of the amount of transmitted or stored image data. In this work, we propose low data rate schemes to select from a vision network a subset of cameras that provides a good frontal observation of the persons in the scene and allows for the best approximation of their 3D shape. We also investigate to what degree low data rates trade off quality of reconstructed 3D shapes.
Global Communications Conference | 2008
Huang Lee; Abtin Keshavarzian; Hamid K. Aghajan
Immediate notification of urgent but rare events and delivery of time-sensitive actuation commands appear in many practical wireless sensor and actuator network applications. Multi-parent wake-up scheduling was presented as a technique that can provide bi-directional end-to-end latency guarantees while optimizing node battery lifetime. This method takes a cross-layer approach in which multiple routes for the transfer of messages and wake-up schedules for the nodes are crafted in synergy to reduce overall message latencies. In this paper, we generalize the multi-parent method to support a multi-cluster model for the network, in which the network has multiple central points called cluster-heads (CHs) that are in charge of scheduling the nodes in the network. A key step in the multi-parent method is to divide the nodes in the network into disjoint groups such that each node has at least one link to a node in each group. We formulate this step as a graph coloring problem, which is shown to be NP-complete. We propose an algorithm in which all the cluster-heads cooperate to find a heuristic solution for the graph coloring optimization problem in a distributed manner. We show that each cluster-head requires less memory and computational power than in the case where one cluster-head finds the global solution; the solution is therefore highly scalable.
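Since the grouping step is cast as graph coloring, a minimal greedy-coloring sketch conveys the flavor of a heuristic for it. This is a centralized stand-in, not the paper's distributed multi-cluster algorithm: it just assigns each node the smallest color not used by its neighbors.

```python
# Minimal greedy-coloring sketch (centralized stand-in for the paper's
# distributed heuristic): give each node the smallest color absent
# among its already-colored neighbors.

def greedy_color(adj):
    colors = {}
    for node in sorted(adj):
        used = {colors[n] for n in adj[node] if n in colors}
        c = 0
        while c in used:
            c += 1
        colors[node] = c
    return colors

# A 4-cycle is 2-colorable, and the greedy pass finds a 2-coloring here.
colors = greedy_color({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]})
```

NP-completeness of the underlying problem is why a heuristic of this kind, rather than an exact solver, is the practical choice, and distributing it across cluster-heads is what the paper contributes.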
International Conference on Acoustics, Speech, and Signal Processing | 2006
Huang Lee; Laura Savidge; Hamid Aghajan
We present novel techniques for localization of nodes in a wireless image sensor network. Based on visual observations of a moving object by the network nodes, the proposed techniques employ simple image processing functions to produce equations that contain the node positions and orientation angles as the unknown parameters. Observations made at the nodes relate the position of the observed object to the physical coordinates of the node via the mapped position of the object in the node's image plane. In one formulation of the problem, multiple observations by a network node from a moving beacon with known coordinates result in a system of equations with a rank-deficient matrix. Hence, the solution for the desired node coordinates lies in the null space of the data matrix. In a second formulation, a different configuration of image sensor deployment with more degrees of freedom results in a least-squares solution for the unknown parameters. In a third formulation, multiple observations are made at each node from a target which moves at a fixed velocity vector. The solution to this problem formulation is also shown to correspond to the null space of the data matrix. The proposed algorithms are based on in-node processing and hence are scalable to large networks. Simulation and experimental results are provided in the paper.
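The null-space idea can be illustrated in a toy 2D setting. This sketch is not the paper's data matrix: it assumes each observation contributes one homogeneous equation a*x + b*y = 0 in two unknown parameters, so consistent observations stack into a rank-1 matrix whose 1-D null space holds the solution direction.

```python
# Illustrative sketch: with noise-free observations of one constraint,
# every row (a, b) of the data matrix is a scalar multiple of the same
# equation a*x + b*y = 0, and the unit vector (-b, a) spans the null
# space, i.e. the direction of the unknown parameters.

def null_direction(rows):
    """Null-space direction of a rank-1 set of 2-column rows."""
    a, b = max(rows, key=lambda r: r[0] ** 2 + r[1] ** 2)  # stablest row
    norm = (a * a + b * b) ** 0.5
    return (-b / norm, a / norm)

# Three observations of the same constraint x - 2*y = 0; the recovered
# direction satisfies it.
direction = null_direction([(1.0, -2.0), (2.0, -4.0), (0.5, -1.0)])
```

In the general noisy case one would instead take the right singular vector of the stacked matrix associated with its smallest singular value, which reduces to the same answer here.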