Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Swarup Medasani is active.

Publication


Featured research published by Swarup Medasani.


International Symposium on Neural Networks | 2004

Active learning system for object fingerprinting

Swarup Medasani; Narayan Srinivasa; Yuri Owechko

Object fingerprinting and identification is a critical part of effective visual surveillance systems. In this paper, we present an approach to actively learn the object models in order to fingerprint the objects. Our approach uses a view-based classifier cascade that actively learns to recognize the generic class of the object. Salient features unique to the specific instance of the selected class of objects are modeled using fuzzy attribute relational graphs. These graphs are also adapted to represent object information gathered from multiple views. Preliminary results are quite promising and extensive studies are underway to ascertain the use of the system in more complicated scenarios.


IEEE Transactions on Fuzzy Systems | 2001

Graph matching by relaxation of fuzzy assignments

Swarup Medasani; Raghu Krishnapuram; YoungSik Choi

Graphs are very powerful and widely used representational tools in computer applications. We present a relaxation approach to (sub)graph matching based on a fuzzy assignment matrix. The algorithm has a computational complexity of O(n²m²), where n and m are the number of nodes in the two graphs being matched, and can perform both exact and inexact matching. To illustrate the performance of the algorithm, we summarize the results obtained for more than 12,000 pairs of graphs of varying types (weighted graphs, attributed graphs, and noisy graphs). We also compare our results with those obtained using the graduated assignment algorithm.
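
As a concrete picture of the idea, here is a minimal Python sketch of relaxing a fuzzy assignment matrix between two weighted graphs: a compatibility update is alternated with row/column balancing so the memberships drift toward a soft correspondence. The softmax-style update, the balancing loop, the parameter beta, and the toy graphs are illustrative assumptions, not the paper's exact relaxation scheme.

```python
import numpy as np

def fuzzy_graph_match(A, B, n_iters=50, beta=2.0, sinkhorn_steps=10):
    """Relax a fuzzy assignment matrix M (rows: nodes of A, columns: nodes of B)
    by alternating a compatibility update with row/column balancing.
    Assumed, simplified update rule -- not the paper's exact algorithm."""
    n, m = A.shape[0], B.shape[0]
    M = np.full((n, m), 1.0 / m)                 # start from a uniform fuzzy assignment
    for _ in range(n_iters):
        Q = A @ M @ B.T                          # compatibility of assigning node i to node a
        M = np.exp(beta * Q)
        for _ in range(sinkhorn_steps):          # keep M close to a soft permutation
            M /= M.sum(axis=1, keepdims=True)
            M /= M.sum(axis=0, keepdims=True)
    return M / M.sum(axis=1, keepdims=True)      # rows as fuzzy memberships

# toy example: match a weighted path graph to a noisy copy of itself
A = np.array([[0.0, 2.0, 0.0],
              [2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
B = A + 0.05 * np.random.default_rng(0).standard_normal(A.shape)
B = (B + B.T) / 2
print(np.argmax(fuzzy_graph_match(A, B), axis=1))   # should recover the identity correspondence [0 1 2]
```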


Computer Vision and Pattern Recognition | 2005

A Swarm-Based Volition/Attention Framework for Object Recognition

Yuri Owechko; Swarup Medasani

Visual attention helps identify the salient parts of a scene and enables efficient object recognition by allocating visual resources to more relevant regions of the scene. In this paper, we present an object recognition framework that combines top-down volitional recognition with attention processes using a swarm of cooperating intelligent agents. Each agent in the swarm is a self-contained, independent classifier that can, given any location in the image, predict the presence of a particular object of interest. Our framework combines bottom-up attention and top-down object classification using Particle Swarm Optimization (PSO) dynamics in a novel architecture that utilizes spatially-modulated evolutionary search to rapidly detect objects of interest in a scene. We use bottom-up maps that are automatically built from saliency, past swarm experience, and constraints on possible object positions to modify the swarm’s behavior and help guide the swarm in locating objects. We present fast object detection/recognition results for a variety of video sequences. Our results show that our framework allows objects to be quickly and accurately located and classified using very sparse sampling of the scene.
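
To make the spatially-modulated search concrete, the sketch below biases candidate search locations toward a bottom-up map: positions are sampled with probability proportional to a toy saliency map, so classification effort concentrates on salient regions. The map shape, the `sample_candidates` helper, and the sampling scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy bottom-up map (H x W); in the framework this would be built from image
# saliency, past swarm experience, and constraints on possible object positions.
H, W = 60, 80
yy, xx = np.mgrid[0:H, 0:W]
saliency = np.exp(-((xx - 55) ** 2 + (yy - 20) ** 2) / (2 * 8.0 ** 2))

def sample_candidates(saliency, n=200):
    """Bias candidate locations toward salient regions by sampling pixels with
    probability proportional to the map (assumed, simplified modulation)."""
    p = saliency.ravel() / saliency.sum()
    idx = rng.choice(saliency.size, size=n, p=p)
    return np.column_stack(np.unravel_index(idx, saliency.shape))   # (row, col) pairs

candidates = sample_candidates(saliency)
print(candidates.mean(axis=0))   # concentrated near the salient spot at row 20, col 55
```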


Intelligent Computing: Theory and Applications V | 2007

Behavior recognition using cognitive swarms and fuzzy graphs

Swarup Medasani; Yuri Owechko

Behavior analysis deals with understanding and parsing a video sequence to generate a high-level description of object actions and inter-object interactions. In this paper, we describe a behavior recognition system that can model and detect spatio-temporal interactions between detected entities in a visual scene by using ideas from swarm optimization, fuzzy graphs, and object recognition. Extensions of Particle Swarm Optimization-based approaches for object recognition are first used to detect entities in video scenes. Our hierarchical generic event detection scheme uses fuzzy graphical models for representing the spatial associations as well as the temporal dynamics of the discovered scene entities. The spatial and temporal attributes of associated objects and groups of objects are handled in separate layers in the hierarchy. We also describe a new behavior specification language that helps the analyst easily describe the event that needs to be detected using either simple linguistic queries or graphical queries. Our experimental results show that the approach is promising for detecting complex behaviors.
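
The fuzzy spatio-temporal modeling can be pictured with a small sketch: hypothetical membership functions score how well a candidate pair of detections satisfies relations such as "near" and "shortly after", and a fuzzy AND (minimum) combines them into an event score. The membership shapes, thresholds, and function names below are assumptions, not the paper's fuzzy graphical model or specification language.

```python
import numpy as np

def mu_near(dist, d0=2.0, spread=1.5):
    """Fuzzy 'near' membership for the distance between two tracked objects (assumed shape)."""
    return 1.0 / (1.0 + np.exp((dist - d0) / spread))

def mu_shortly_after(dt, t0=3.0, spread=2.0):
    """Fuzzy 'shortly after' membership for the gap between two event times (assumed shape)."""
    return np.exp(-np.maximum(dt - t0, 0.0) ** 2 / (2 * spread ** 2)) * (dt >= 0)

# Candidate "person enters vehicle" event: the person was near the vehicle,
# and the door-open event happened shortly after the approach.
score = min(mu_near(1.2), mu_shortly_after(4.0))   # fuzzy AND via minimum
print(round(float(score), 3))
```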


IEEE Swarm Intelligence Symposium | 2005

Cognitive swarms for rapid detection of objects and associations in visual imagery

Yuri Owechko; Swarup Medasani

We have developed a new optimization-based framework for computer vision that combines ideas from particle swarm optimization (PSO) and statistical pattern recognition to rapidly and accurately detect and classify objects in visual imagery. Swarm intelligence is used to locate objects by optimizing the classification confidence level. We have used our cognitive swarm framework to rapidly detect people, ground vehicles, and boats, and to recognize behaviors based on object associations, such as people exiting and entering vehicles, for applications in security, surveillance, target recognition, and automotive active safety.
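
A minimal sketch of the cognitive-swarm idea, assuming standard PSO dynamics and a synthetic stand-in for the classifier's confidence surface: particles move through image coordinates and converge on the location of maximum confidence, so the classifier is evaluated only at a sparse set of points rather than over a dense scan. The `confidence` function, `pso_detect` helper, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def confidence(pos, target=np.array([120.0, 80.0]), scale=25.0):
    """Synthetic stand-in for a trained object classifier's confidence in [0, 1];
    here a bump centred on a hypothetical object at (120, 80)."""
    return np.exp(-np.sum((pos - target) ** 2, axis=-1) / (2 * scale ** 2))

def pso_detect(n_particles=30, n_iters=50, bounds=(0.0, 200.0), w=0.7, c1=1.5, c2=1.5):
    """Standard PSO over image coordinates, maximizing classifier confidence."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, 2))       # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_f = x.copy(), confidence(x)              # personal bests
    gbest = pbest[np.argmax(pbest_f)].copy()              # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = confidence(x)
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)].copy()
    return gbest, pbest_f.max()

print(pso_detect())   # should converge near (120, 80) with confidence close to 1
```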


Electronic Imaging | 2005

Possibilistic particle swarms for optimization

Swarup Medasani; Yuri Owechko

We present a new approach for extending the particle swarm optimization algorithm to multi-optima problems by using ideas from possibility theory. An elastic constraint is used to let the particles dynamically explore the solution space in two phases. In the exploration phase, particles explore the space in an effort to track the global minimum while also traversing local minima. In the exploitation phase, particles disperse into local neighborhoods to locate the best local minima. The proposed possibilistic PSO (PPSO) has been applied to data clustering and object detection. Our preliminary results indicate that the proposed approach is efficient and robust.
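
The possibilistic ingredient can be illustrated with a small sketch: each particle gets an exponential membership in the neighborhood of each candidate optimum, and because the memberships need not sum to one, a particle can belong strongly to several optima or to none. The exponential form and the scale parameter eta are assumptions in the spirit of possibility theory, not the paper's elastic constraint.

```python
import numpy as np

def possibilistic_memberships(d2, eta):
    """Possibilistic membership of each particle in each optimum's neighborhood:
    u = exp(-d^2 / eta).  Unlike probabilistic weights, rows need not sum to 1,
    so a particle may belong strongly to several (or no) optima.  Assumed form."""
    return np.exp(-d2 / eta)

# squared distances from 3 particles to 2 candidate optima, and neighborhood scales
d2 = np.array([[0.2, 9.0],
               [4.0, 0.1],
               [25.0, 25.0]])
eta = np.array([1.0, 1.0])
print(possibilistic_memberships(d2, eta))
# particle 0 belongs to optimum 0, particle 1 to optimum 1,
# particle 2 belongs to neither (both memberships near 0)
```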


Information Visualization | 2002

Vision-based fusion system for smart airbag applications

Yuri Owechko; Narayan Srinivasa; Swarup Medasani; Riccardo Boscolo

We describe a vision system for intelligent automotive airbag systems which can adapt airbag deployment according to the nature and position of the occupant, thereby reducing the risk for occupants in infant seats in the front passenger seat and for all occupants who are too close to the airbag. Our system utilizes a stereo camera to extract multiple feature sets from the same video stream. A unique feature of our system is the fusion module, which is trained to optimally combine the results of multiple classifiers to decide when to enable airbag deployment. We have successfully demonstrated this system operating in a test vehicle at real-time rates (20 updates/sec) with high accuracies (>98%) for a large variety of situations and lighting conditions.
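
A hedged sketch of the fusion step: synthetic confidence scores stand in for the individual feature-based occupant classifiers, and a logistic-regression combiner is trained to weight them into a single deployment decision. The logistic model, the synthetic scores, and the noise levels are stand-in assumptions; the paper's fusion module is trained on real classifier outputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for three feature-specific classifier confidences
# (e.g. one per feature type); 1 = "enable deployment" ground truth.
n = 1000
truth = rng.integers(0, 2, n)
scores = np.column_stack([truth + rng.normal(0, s, n) for s in (0.4, 0.6, 0.8)])

fusion = LogisticRegression().fit(scores, truth)     # learn how to weight the classifiers
decision = fusion.predict(scores)
print("fused accuracy:", (decision == truth).mean())
```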


Computer Vision and Pattern Recognition | 2011

Strip Histogram Grid for efficient LIDAR segmentation from urban environments

Thommen Korah; Swarup Medasani; Yuri Owechko

As part of a large-scale 3D recognition system for LIDAR data from urban scenes, we describe an approach for segmenting millions of points into coherent regions that ideally belong to a single real-world object. Segmentation is crucial because it allows further tasks such as recognition, navigation, and data compression to exploit contextual information. A key contribution is our novel Strip Histogram Grid representation that encodes the scene as a grid of vertical 3D population histograms rising up from the locally detected ground. This scheme captures the nature of the real world, thereby making segmentation tasks intuitive and efficient. Our algorithms work across a large spectrum of urban objects ranging from buildings and forested areas to cars and other small street-side objects. The methods have been applied to areas spanning several kilometers in multiple cities with data collected from both aerial and ground sensors exhibiting different properties. We processed almost a billion points spanning an area of 3.3 km² in less than an hour on a regular desktop.
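
The representation itself is easy to sketch: the point cloud is binned into a ground-plane grid, and each cell stores a vertical histogram of point heights above a local ground estimate (approximated here by the cell's minimum z). Cell size, bin size, the ground estimate, and the function name below are illustrative assumptions, not the paper's parameters or ground-detection method.

```python
import numpy as np

def strip_histogram_grid(points, cell=0.5, zbin=0.25, zmax=10.0):
    """Illustrative Strip Histogram Grid: for each ground-plane cell, build a
    vertical population histogram of point heights above the local ground
    (simplified here to the per-cell minimum z)."""
    xy_idx = np.floor(points[:, :2] / cell).astype(int)
    xy_idx -= xy_idx.min(axis=0)                           # shift indices to start at 0
    nx, ny = xy_idx.max(axis=0) + 1
    nz = int(np.ceil(zmax / zbin))
    grid = np.zeros((nx, ny, nz), dtype=np.int32)          # population histograms
    cell_id = xy_idx[:, 0] * ny + xy_idx[:, 1]
    ground = np.full(nx * ny, np.inf)                      # per-cell ground estimate
    np.minimum.at(ground, cell_id, points[:, 2])
    h = points[:, 2] - ground[cell_id]                     # height above local ground
    z_idx = np.clip((h / zbin).astype(int), 0, nz - 1)
    np.add.at(grid, (xy_idx[:, 0], xy_idx[:, 1], z_idx), 1)
    return grid

# toy cloud: a flat ground patch plus a 2 m-tall "pole"
rng = np.random.default_rng(0)
ground_pts = np.column_stack([rng.uniform(0, 5, (500, 2)), rng.normal(0, 0.02, 500)])
pole_pts = np.column_stack([np.full((100, 2), 2.6), np.linspace(0, 2, 100)])
grid = strip_histogram_grid(np.vstack([ground_pts, pole_pts]))
print(grid.shape, grid.sum())
```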


International Conference on Intelligent Transportation Systems | 2003

High performance sensor fusion architecture for vision-based occupant detection

Yuri Owechko; Narayan Srinivasa; Swarup Medasani; Riccardo Boscolo

We describe a fast and reliable vision system for detecting and recognizing occupants in automobiles. The main advantage of our system is its high accuracy due to the use of a fusion module, which combines the results of multiple classifiers operating on different types of features (edges, scale, and range) from the same image. Another advantage is that since the same image sensor is used to generate the multiple feature types, cost is reduced. We utilize an active illumination strategy to provide adequate illumination of the car seat and shadow fill-in during both night and day. Occupant position detection and recognition is performed on the actively illuminated image using the same algorithm under both night and day conditions. Our system can use images from commercially available CMOS vision sensors and is thus very cost-effective and efficient for smart airbag applications in automobiles. We have successfully demonstrated this system operating in a test vehicle at real-time video rates (30 updates/sec) with high accuracies for a large variety of situations and lighting conditions.


Computer Vision and Image Understanding | 2001

Categorization of Image Databases for Efficient Retrieval Using Robust Mixture Decomposition

Swarup Medasani; Raghu Krishnapuram

In this paper, we present a robust mixture decomposition technique that automatically finds a compact representation of the data in terms of components. We apply it to the problem of organizing databases for efficient retrieval. The time taken for retrieval is an order of magnitude smaller than that of exhaustive search methods. We also compare our approach with other methods for decomposition that use traditional criteria such as Akaike, Schwarz, and minimum description length. We report results on the VisTex texture image database from the MIT Media Lab.
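
A rough sketch of how mixture-based categorization speeds up retrieval, with an ordinary Gaussian mixture standing in for the paper's robust decomposition: the database is decomposed offline into components, and a query is compared only against the component it most plausibly belongs to rather than the entire database. The synthetic features, the component count, and the `retrieve` helper are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture   # ordinary EM mixture as a stand-in

rng = np.random.default_rng(0)

# synthetic "image features": three texture-like clusters in a 5-D feature space
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 5)) for c in (0.0, 2.0, 4.0)])

# offline: decompose the database into components
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)

def retrieve(query, k=5):
    """Search only the component the query most plausibly belongs to,
    instead of scanning the whole database."""
    comp = gmm.predict(query[None, :])[0]
    candidates = np.flatnonzero(labels == comp)
    d = np.linalg.norm(X[candidates] - query, axis=1)
    return candidates[np.argsort(d)[:k]]

print(retrieve(X[10]))   # nearest neighbours drawn from a single component only
```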

Collaboration


Dive into Swarup Medasani's collaboration.
