Publication


Featured research published by Mauricio Casares.


IEEE Transactions on Image Processing | 2010

Cooperative Object Tracking and Composite Event Detection With Wireless Embedded Smart Cameras

Youlu Wang; Senem Velipasalar; Mauricio Casares

Embedded smart cameras have limited processing power, memory, energy, and bandwidth. Thus, many system- and algorithm-wise challenges remain to be addressed to have operational, battery-powered wireless smart-camera networks. We present a wireless embedded smart-camera system for cooperative object tracking and detection of composite events spanning multiple camera views. Each camera is a CITRIC mote consisting of a camera board and wireless mote. Lightweight and robust foreground detection and tracking algorithms are implemented on the camera boards. Cameras exchange small-sized data wirelessly in a peer-to-peer manner. Instead of transferring or saving every frame or trajectory, events of interest are detected. Simpler events are combined in a time sequence to define semantically higher-level events. Event complexity can be increased by increasing the number of primitives and/or number of camera views they span. Examples of consistently tracking objects across different cameras, updating location of occluded/lost objects from other cameras, and detecting composite events spanning two or three camera views, are presented. All the processing is performed on camera boards. Operating current plots of smart cameras, obtained when performing different tasks, are also presented. Power consumption is analyzed based upon these measurements.
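To illustrate the composite-event idea summarized above (simple primitives combined in a time sequence across camera views), here is a minimal sketch in Python. The primitive names, fields, and time-window rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: composite events as ordered sequences of primitive events.
# Event names, fields, and the matching rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Primitive:
    name: str        # e.g. "enter_region", "stop", "exit_region"
    camera_id: int   # camera view where the primitive was detected
    t: float         # timestamp in seconds

def matches_composite(stream, pattern, max_span=30.0):
    """Return True if `stream` contains `pattern` (a list of (name, camera_id)
    pairs) in order, with all matched primitives within `max_span` seconds."""
    idx, first_t = 0, None
    for ev in sorted(stream, key=lambda e: e.t):
        if (ev.name, ev.camera_id) == pattern[idx]:
            first_t = ev.t if first_t is None else first_t
            if ev.t - first_t > max_span:
                return False
            idx += 1
            if idx == len(pattern):
                return True
    return False

# Example: object enters camera 1, then enters camera 2, then stops in camera 2.
stream = [Primitive("enter_region", 1, 0.0),
          Primitive("enter_region", 2, 4.2),
          Primitive("stop", 2, 6.8)]
pattern = [("enter_region", 1), ("enter_region", 2), ("stop", 2)]
print(matches_composite(stream, pattern))  # True
```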


IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2013

Automatic Fall Detection and Activity Classification by a Wearable Embedded Smart Camera

Koray Ozcan; Anvith Katte Mahabalagiri; Mauricio Casares; Senem Velipasalar

Robust detection of events and activities, such as falling, sitting, and lying down, is a key to a reliable elderly activity monitoring system. While fast and precise detection of falls is critical in providing immediate medical attention, other activities like sitting and lying down can provide valuable information for early diagnosis of potential health problems. In this paper, we present a fall detection and activity classification system using wearable cameras. Since the camera is worn by the subject, monitoring is not limited to confined areas, and extends to wherever the subject may go including indoors and outdoors. Furthermore, since the captured images are not of the subject, privacy concerns are alleviated. We present a fall detection algorithm employing histograms of edge orientations and strengths, and propose an optical flow-based method for activity classification. The first set of experiments has been performed with prerecorded video sequences from eight different subjects wearing a camera on their waist. Each subject performed around 40 trials, which included falling, sitting, and lying down. Moreover, an embedded smart camera implementation of the algorithm was also tested on a CITRIC platform with subjects wearing the CITRIC camera, and each performing 50 falls and 30 non-fall activities. Experimental results show the success of the proposed method.
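As a rough illustration of the edge-orientation-histogram component mentioned above, the sketch below builds a strength-weighted orientation histogram per frame and flags a large frame-to-frame change as a fall candidate. The bin count, distance metric, and threshold are assumptions for illustration, not the published parameters.

```python
# Sketch: per-frame histogram of edge orientations weighted by edge strength;
# a large change between consecutive frames is flagged as a candidate fall.
# Bin count, threshold, and distance metric are illustrative assumptions.
import numpy as np

def edge_orientation_histogram(gray, bins=18):
    gy, gx = np.gradient(gray.astype(np.float32))
    mag = np.hypot(gx, gy)                      # edge strength
    ang = np.mod(np.arctan2(gy, gx), np.pi)     # orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)           # normalize

def fall_candidate(prev_gray, cur_gray, threshold=0.5):
    h1 = edge_orientation_histogram(prev_gray)
    h2 = edge_orientation_histogram(cur_gray)
    return np.abs(h1 - h2).sum() > threshold    # L1 distance between histograms
```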


international conference on distributed smart cameras | 2008

Light-weight salient foreground detection for embedded smart cameras

Mauricio Casares; Senem Velipasalar

Limited processing power and memory in embedded smart camera nodes necessitate the design of light-weight algorithms for computer vision tasks. Considering the memory requirements of an algorithm and its portability to an embedded processor should be an integral part of the algorithm design in addition to the accuracy requirements. This paper presents a light-weight and efficient background modeling and foreground detection algorithm that is highly robust against lighting variations and non-static backgrounds including scenes with swaying trees, water fountains, rippling water effects and rain. Contrary to many traditional methods, the memory requirement for the data saved for each pixel is very small, and the algorithm provides very reliable results with gray-level images as well. The proposed method selectively updates the background model with an automatically adaptive rate, and thus can adapt to rapid changes. As opposed to traditional methods, pixels are not always treated individually, and information about neighbors is incorporated into decision making. The algorithm differentiates between salient and non-salient motion based on the reliability or unreliability of a pixel's location, and by considering neighborhood information. The results obtained with various challenging outdoor and indoor sequences are presented, and compared with the results of different state-of-the-art background subtraction methods. The experimental results demonstrate the success of the proposed light-weight salient foreground detection method.
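The sketch below illustrates the general flavor of a lightweight, selectively updated background model with a neighborhood check; the specific learning rates, threshold, and 3x3 majority rule are assumptions for illustration, not the paper's algorithm.

```python
# Sketch: running-average background model with selective, adaptive update and
# a 3x3 neighborhood vote for salient foreground. Parameters are illustrative.
import numpy as np

def update_and_detect(bg, frame, thresh=25.0, alpha_bg=0.05, alpha_fg=0.005):
    frame = frame.astype(np.float32)
    diff = np.abs(frame - bg)
    fg = diff > thresh
    # Selective update: adapt quickly where the scene matches the model,
    # slowly where foreground is present, so true objects are not absorbed.
    alpha = np.where(fg, alpha_fg, alpha_bg)
    bg = (1.0 - alpha) * bg + alpha * frame
    # Neighborhood check: keep a pixel only if most of its 3x3 neighbors agree.
    pad = np.pad(fg.astype(np.uint8), 1)
    votes = sum(pad[i:i + fg.shape[0], j:j + fg.shape[1]]
                for i in range(3) for j in range(3))
    salient = votes >= 5
    return bg, salient
```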


computational intelligence and security | 2012

Autonomous tracking of vehicle rear lights and detection of brakes and turn signals

Akhan Almagambetov; Mauricio Casares; Senem Velipasalar

Automatic detection of vehicle alert signals is extremely critical in autonomous vehicle applications and collision avoidance systems, as these detection systems can help in the prevention of deadly and costly accidents. In this paper, we present a novel and lightweight algorithm that uses a Kalman filter and a codebook to achieve a high level of robustness. The algorithm is able to detect braking and turning signals of the vehicle in front both during the daytime and at night (daytime detection being a major advantage over current research), as well as correctly track a vehicle despite changing lanes or encountering periods of no or low visibility of the vehicle in front. We demonstrate that the proposed algorithm is able to detect the signals accurately and reliably under different lighting conditions.
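As a hedged illustration of the Kalman-filter component, the sketch below runs a constant-velocity filter on a taillight centroid; the state layout, noise covariances, and frame rate are assumptions, and the codebook-based appearance model is not shown.

```python
# Sketch: constant-velocity Kalman filter tracking a taillight centroid.
# State layout, noise covariances, and frame rate are illustrative assumptions.
import numpy as np

dt = 1.0 / 30.0                                   # assumed frame interval
F = np.array([[1, 0, dt, 0],                      # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)         # only (x, y) is measured
Q = np.eye(4) * 1e-2                              # process noise
R = np.eye(2) * 4.0                               # measurement noise (pixels^2)

def kalman_step(x, P, z):
    """One predict/update cycle; z is the measured (x, y) centroid, or None
    when the taillight is occluded, in which case we only predict."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    if z is not None:
        y = z - H @ x                             # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
        x, P = x + K @ y, (np.eye(4) - K @ H) @ P # update
    return x, P
```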


IEEE Transactions on Circuits and Systems for Video Technology | 2011

Adaptive Methodologies for Energy-Efficient Object Detection and Tracking With Battery-Powered Embedded Smart Cameras

Mauricio Casares; Senem Velipasalar

Battery-powered wireless embedded smart cameras have limited processing power, memory and energy. Since video processing tasks consume a considerable amount of energy, it is essential to have lightweight algorithms to increase the energy efficiency of camera nodes. Moreover, just grabbing and buffering a frame requires a significant amount of energy. Thus, it is not sufficient to only focus on the vision algorithms. Methodologies are needed to determine when and for how long a camera can be idle. In this paper, we first present a feedback method for detection and tracking, which provides significant savings in processing time. We take advantage of these savings by sending the microprocessor to idle state at the end of processing a frame. Then, we present an adaptive methodology that can send the camera to idle state not only when the scene is empty but also when there are target objects. Idle state duration is adaptively changed based on the speeds of tracked objects. We then introduce a combined method that employs the feedback method and the adaptive methodology together, and provides further savings in energy consumption. We provide a detailed comparison of these methods, and present experimental results showing the gains in processing time as well as the significant savings in energy consumption and increase in battery life.
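A minimal sketch of the adaptive-idle idea, assuming an inverse relationship between the fastest tracked object's speed and the allowable idle time; the bounds and scaling rule are illustrative, not the published policy.

```python
# Sketch: idle-time scheduling that sleeps longest on an empty scene and
# shortens the idle period as tracked objects move faster. Parameters and the
# scaling rule are illustrative assumptions.
def idle_duration(object_speeds_px_per_s, max_idle=2.0, min_idle=0.05,
                  fast_speed=200.0):
    """Return the time (seconds) the microprocessor may stay idle before the
    next frame must be grabbed, given the speeds of currently tracked objects."""
    if not object_speeds_px_per_s:          # empty scene: sleep the longest
        return max_idle
    fastest = max(object_speeds_px_per_s)
    # Scale idle time inversely with the fastest object, clamped to the bounds.
    idle = max_idle * (1.0 - min(fastest / fast_speed, 1.0))
    return max(min_idle, idle)

print(idle_duration([]))        # 2.0  (empty scene)
print(idle_duration([50.0]))    # 1.5  (slow object)
print(idle_duration([300.0]))   # 0.05 (fast object)
```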


advanced video and signal based surveillance | 2009

Cooperative Object Tracking and Event Detection with Wireless Smart Cameras

Youlu Wang; Mauricio Casares; Senem Velipasalar

Wireless embedded smart cameras not only capture images, but also can perform processing and communication. However, many system- and algorithm-wise challenges remain to be addressed to have operational, battery-powered wireless smart-camera networks, since they have limited processing power, memory, energy and bandwidth. In this paper, we present a wireless, embedded smart camera system for cooperative object tracking and event detection, wherein each camera platform consists of a camera board and a wireless mote. Light-weight background subtraction and tracking algorithms are implemented and run on the camera boards. Cameras communicate in a peer-to-peer manner over wireless links to exchange data, and thus to consistently track objects. In a wireless smart camera system, transferring large amounts of data between cameras should be avoided, since it requires more power, and incurs more communication delay. In the presented system, cameras exchange small-size packets for communication. Also, with wireless smart cameras, it is not viable to transfer all the captured frames to a base station due to limited resources. Instead, we define events of interest beforehand, and embedded smart cameras save only those portions of the live video capture where the defined event scenario occurs. We present results of tracking and detecting objects entering a region of interest, all of which are performed on the microprocessor of camera boards. We also show examples of consistently tracking objects, moving across different camera views, by wireless data exchange.
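To make the small-packet idea concrete, the sketch below packs an object hand-off into a fixed 18-byte payload instead of sending image data; the field layout and sizes are assumptions for illustration, not the system's actual message format.

```python
# Sketch: a fixed-size hand-off packet a camera might send to a peer so the
# object can be tracked consistently across views. Field layout is illustrative.
import struct, time

PACKET_FMT = "<HHHHHd"   # id, x, y, w, h (uint16) + timestamp (float64)

def pack_handoff(obj_id, x, y, w, h, t=None):
    return struct.pack(PACKET_FMT, obj_id, x, y, w, h,
                       t if t is not None else time.time())

def unpack_handoff(payload):
    obj_id, x, y, w, h, t = struct.unpack(PACKET_FMT, payload)
    return {"id": obj_id, "x": x, "y": y, "w": w, "h": h, "t": t}

pkt = pack_handoff(7, 120, 96, 32, 64)
print(len(pkt), unpack_handoff(pkt))   # 18-byte payload
```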


advanced video and signal based surveillance | 2012

A Robust Algorithm for the Detection of Vehicle Turn Signals and Brake Lights

Mauricio Casares; Akhan Almagambetov; Senem Velipasalar

Robust and lightweight detection of the alert signals of the vehicle in front, such as turn signals and brake lights, is extremely critical, especially in autonomous vehicle applications. Even with cars that are driven by human beings, automatic detection of these signals can aid in the prevention of otherwise deadly accidents. This paper presents a novel, robust and lightweight algorithm for detecting brake lights and turn signals both at night and during the day. The proposed method employs a Kalman filter to reduce the processing load. Much research is focused only on the detection of brake lights at night, but our algorithm is able to detect turn signals as well as brake lights under any lighting conditions with high accuracy rates.


IEEE Transactions on Industrial Electronics | 2015

Robust and Computationally Lightweight Autonomous Tracking of Vehicle Taillights and Signal Detection by Embedded Smart Cameras

Akhan Almagambetov; Senem Velipasalar; Mauricio Casares

An important aspect of collision avoidance and driver assistance systems, as well as autonomous vehicles, is the tracking of vehicle taillights and the detection of alert signals (turns and brakes). In this paper, we present the design and implementation of a robust and computationally lightweight algorithm for a real-time vision system, capable of detecting and tracking vehicle taillights, recognizing common alert signals using a vehicle-mounted embedded smart camera, and counting the cars passing on both sides of the vehicle. The system is low-power and processes scenes entirely on the microprocessor of an embedded smart camera. In contrast to most existing work that addresses either daytime or nighttime detection, the presented system provides the ability to track vehicle taillights and detect alert signals regardless of lighting conditions. The mobile vision system has been tested in actual traffic scenes and the results obtained demonstrate the performance and the lightweight nature of the algorithm.


advanced video and signal based surveillance | 2010

Resource-Efficient Salient Foreground Detection for Embedded Smart Cameras by Tracking Feedback

Mauricio Casares; Senem Velipasalar

Battery-powered wireless embedded smart cameras have limited processing power, memory and energy. Since video processing tasks consume a significant amount of power, the problem of limited resources becomes even more pronounced, and necessitates designing light-weight algorithms suitable for embedded platforms. In this paper, we present a resource-efficient salient foreground detection and tracking algorithm. Contrary to traditional methods that implement foreground object detection and tracking independently and in a sequential manner, the proposed method uses the feedback from the tracking stage in the foreground object detection. We compare the proposed method with a sequential method on the microprocessor of an embedded smart camera, and present the savings in the processing time and energy consumption and the gain in the lifetime of a battery-powered camera for different scenarios. The presented method provides significant savings in terms of the processing time of a frame. We take advantage of these savings by sending the microprocessor to idle state at the end of processing a frame, and when the scene is empty.
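The sketch below illustrates one plausible form of the tracking feedback described above: foreground detection is run only inside search windows predicted by the tracker rather than over the whole frame. The window expansion margin and the detector interface are assumptions for illustration.

```python
# Sketch: restrict foreground detection to tracker-predicted search windows
# instead of the full frame, trading coverage for processing time. The margin
# and detector interface are illustrative assumptions.
import numpy as np

def detect_with_feedback(frame, bg, predicted_boxes, detect_fn, margin=16):
    """predicted_boxes: list of (x, y, w, h) from the tracker for the next frame.
    detect_fn(frame_roi, bg_roi) -> boolean foreground mask for that region."""
    h, w = frame.shape[:2]
    mask = np.zeros((h, w), dtype=bool)
    for (x, y, bw, bh) in predicted_boxes:
        x0, y0 = max(0, x - margin), max(0, y - margin)
        x1, y1 = min(w, x + bw + margin), min(h, y + bh + margin)
        mask[y0:y1, x0:x1] = detect_fn(frame[y0:y1, x0:x1], bg[y0:y1, x0:x1])
    return mask
```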


Transportation Research Record | 2011

Improving Safety and Mobility at High-Speed Intersections with Innovations in Sensor Technology

Anuj Sharma; Darcy M Bullock; Senem Velipasalar; Mauricio Casares; Jacob Schmitz; Nathaniel Burnett

A series of innovations has been made in the vehicle sensors field. Technologies such as IntelliDrive and radar-based smart sensors make it possible to track each vehicle in proximity to an intersection. However, current technologies have limitations, such as lack of robustness, accuracy, or level of penetration. This paper assumes an accurate wide-area detector (WAD), which might soon be available, and highlights the potential safety and operational-efficiency benefits that its deployment could bring to high-speed intersections. Two critical areas in which wide-area detection can lead to significant improvements are discussed: (a) location of crash risk at the onset of yellow and (b) location of vehicles at the onset of yellow. A case study was conducted at an instrumented intersection in Noblesville, Indiana, to estimate the potential improvement from the use of an ideally operating WAD and green extension logic for signal control. Findings revealed that replacing the single loop detector sensor with a WAD sensor would lead to an additional 1.4 vehicles being served per lane on the cross street per unit vehicle provided with dilemma zone protection on the high-speed approach. Results also showed that speed traps should be used only after accounting for the trade-off between safety and efficiency and the traffic control logic. When speed traps were designed with generic dilemma zone boundaries at the Noblesville site, dilemma zone protection was provided only 57% of the time because vehicles accelerated or decelerated after passing the speed trap.
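As a rough illustration of the green-extension logic the case study evaluates, the sketch below holds the green while any vehicle reported by the detector is within a travel-time-based dilemma zone, up to a maximum extension; the zone boundaries and the cap are assumptions, not the study's controller settings.

```python
# Sketch: dilemma-zone green extension driven by a wide-area detector.
# Travel-time boundaries and the extension cap are illustrative assumptions.
def in_dilemma_zone(distance_to_stopbar_m, speed_mps, t_near=2.5, t_far=5.5):
    """A vehicle is commonly treated as being in the dilemma zone when it is
    roughly t_near to t_far seconds of travel time from the stop bar."""
    if speed_mps <= 0:
        return False
    travel_time = distance_to_stopbar_m / speed_mps
    return t_near <= travel_time <= t_far

def extend_green(vehicles, elapsed_extension_s, max_extension_s=10.0):
    """vehicles: list of (distance_to_stopbar_m, speed_mps) from the detector."""
    if elapsed_extension_s >= max_extension_s:
        return False                      # force gap-out at the cap
    return any(in_dilemma_zone(d, v) for d, v in vehicles)
```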

Collaboration


Dive into Mauricio Casares's collaborations.

Top Co-Authors

Youlu Wang
University of Nebraska–Lincoln

Anuj Sharma
University of Nebraska–Lincoln

Andrea Prati
Università Iuav di Venezia

Paolo Santinelli
University of Modena and Reggio Emilia

Rita Cucchiara
University of Modena and Reggio Emilia

Alvaro Pinto
University of Nebraska–Lincoln