Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Somnath Sengupta is active.

Publication


Featured research published by Somnath Sengupta.


Asian Conference on Computer Vision | 2006

Texture classification using a novel, soft-set theory based classification algorithm

Milind M. Mushrif; Somnath Sengupta; A. K. Ray

In this paper, we present a new algorithm for the classification of natural textures. The proposed classification algorithm is based on the notions of soft-set theory. Soft-set theory, proposed by D. Molodtsov, deals with uncertainties; the choice of convenient parameterization strategies, such as real numbers, functions, and mappings, makes it very convenient and practicable for decision-making applications, which motivated us to apply it to texture classification. The proposed algorithm has very low computational complexity compared with the Bayes classification technique and also yields very good classification accuracy. For feature extraction, the textures are decomposed using standard dyadic wavelets, and the feature vector is obtained by calculating the averaged L1-norm energy of each decomposed channel. The database consists of 25 texture classes selected from the Brodatz texture album. Experimental results show the superiority of the proposed approach compared with some existing methods.
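The feature-extraction step described above can be sketched as follows. This is a minimal illustration, assuming a plain one-level-per-scale Haar filter in place of the paper's standard dyadic wavelets; all function names are illustrative, not from the paper.

```python
# Sketch: averaged L1-norm energy of dyadic wavelet subbands.
# A Haar filter stands in for the paper's wavelet (an assumption).

def haar_step(img):
    """One dyadic Haar decomposition of a 2-D list (even dimensions)."""
    ll, lh, hl, hh = [], [], [], []
    for i in range(0, len(img), 2):
        rll, rlh, rhl, rhh = [], [], [], []
        for j in range(0, len(img[0]), 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            rll.append((a + b + c + d) / 4)   # approximation
            rlh.append((a + b - c - d) / 4)   # horizontal detail
            rhl.append((a - b + c - d) / 4)   # vertical detail
            rhh.append((a - b - c + d) / 4)   # diagonal detail
        ll.append(rll); lh.append(rlh); hl.append(rhl); hh.append(rhh)
    return ll, lh, hl, hh

def l1_energy(band):
    """Averaged L1-norm energy of one subband."""
    vals = [abs(x) for row in band for x in row]
    return sum(vals) / len(vals)

def texture_features(img, levels=2):
    """Feature vector: detail-band energies per level, then the final approximation."""
    feats, approx = [], img
    for _ in range(levels):
        approx, lh, hl, hh = haar_step(approx)
        feats += [l1_energy(lh), l1_energy(hl), l1_energy(hh)]
    feats.append(l1_energy(approx))
    return feats
```

A constant texture yields zero detail energy at every scale, while stripes concentrate energy in the oriented detail bands, which is what makes these energies usable as texture features.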


Pattern Recognition Letters | 2001

Robust camera parameter estimation using genetic algorithm

Subhas Hati; Somnath Sengupta

In this paper, we propose a genetic algorithm (GA)-based approach to determine the external parameters of a camera from the knowledge of a given set of points in object space. We study the effect of noise, the presence of outliers, and mismatches resulting from incorrect correspondences between object-space points and image-space points on the estimation of the three translation parameters and three rotation parameters of a camera. The average magnitude of the translation errors varies from 2.25 cm to 5 mm, and the average magnitude of the rotation errors varies from 0.4° to 0.25°, at 20 dB SNR. The error in parameter estimation is insignificant for up to three pairs of mismatched points out of 20 points in object space, and rises sharply when four or more pairs of points are mismatched. These results clearly establish the robustness of the GA in external camera parameter estimation.
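The GA idea can be illustrated with a deliberately reduced toy problem: recovering only the three translation parameters under a pinhole model with known (identity) rotation, using reprojection error as the fitness. The population size, mutation scale, and elitism fraction below are illustrative choices, not the paper's settings.

```python
import random

# Toy sketch: GA search for camera translation (tx, ty, tz), identity
# rotation assumed. Fitness = sum of squared reprojection errors.

def project(pt, t, f=1.0):
    """Pinhole projection of a 3-D point after translating by t."""
    x, y, z = pt[0] + t[0], pt[1] + t[1], pt[2] + t[2]
    return (f * x / z, f * y / z)

def reproj_error(t, points, observed):
    err = 0.0
    for p, (ou, ov) in zip(points, observed):
        u, v = project(p, t)
        err += (u - ou) ** 2 + (v - ov) ** 2
    return err

def ga_estimate(points, observed, pop=40, gens=60, seed=0):
    rng = random.Random(seed)
    popl = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(pop)]
    for _ in range(gens):
        popl.sort(key=lambda t: reproj_error(t, points, observed))
        elite = popl[: pop // 4]                      # elitism: keep the best quarter
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            children.append([(x + y) / 2 + rng.gauss(0, 0.2)  # crossover + mutation
                             for x, y in zip(a, b)])
        popl = elite + children
    return min(popl, key=lambda t: reproj_error(t, points, observed))

# Usage: synthesize observations from a known translation, then search for it.
true_t = [1.0, -0.5, 2.0]
points = [(0.5, 0.2, 8.0), (-1.0, 0.4, 9.0), (0.3, -0.7, 10.0),
          (1.2, 0.9, 11.0), (-0.8, -0.3, 12.0), (0.0, 0.5, 9.5)]
observed = [project(p, true_t) for p in points]
estimate = ga_estimate(points, observed)
```

Because elitism carries the best individuals forward unchanged, the best fitness in the population never degrades between generations, which is part of what makes the GA robust to noisy correspondences.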


International Conference on Multimedia and Expo | 2006

Event-Importance Based Customized and Automatic Cricket Highlight Generation

Maheshkumar H. Kolekar; Somnath Sengupta

In this paper, we present a novel approach towards customized and automated generation of sports highlights from extracted events and semantic concepts. A recorded sports video is first divided into slots based on the game's progress, and for each slot an importance-based concept and event selection is proposed to decide what to include in the highlights. Using our approach, we have successfully extracted highlights from recorded videos of cricket matches.
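The importance-based selection step can be sketched as a greedy fill of a highlight duration budget. The event labels and scores below are made up for illustration; the paper's actual importance model is event- and concept-driven.

```python
# Sketch: greedy event-importance selection under a duration budget.
# Event tuples and scores are illustrative, not from the paper.

def select_highlights(events, budget):
    """events: list of (start_time, duration, importance, label) tuples."""
    chosen, used = [], 0.0
    for ev in sorted(events, key=lambda e: e[2], reverse=True):
        if used + ev[1] <= budget:     # take the most important events that fit
            chosen.append(ev)
            used += ev[1]
    return sorted(chosen)              # replay in chronological order

events = [
    (10, 20, 0.9, "wicket"),
    (55, 30, 0.4, "boundary"),
    (90, 15, 0.7, "six"),
    (130, 25, 0.2, "dot balls"),
]
highlights = select_highlights(events, budget=40)
# With a 40-second budget, only the two most important events fit.
```

Varying the budget is what makes the highlights "customized": a longer budget simply admits events of progressively lower importance.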


IEEE Transactions on Systems, Man, and Cybernetics | 2014

Detection of Moving Objects Using Multi-channel Kernel Fuzzy Correlogram Based Background Subtraction

Pojala Chiranjeevi; Somnath Sengupta

In this paper, we examine the suitability of the correlogram for background subtraction, as a step towards moving object detection. The correlogram captures inter-pixel relationships in a region and is seen to be effective for modeling dynamic backgrounds. A multi-channel correlogram is proposed, using inter-channel and intra-channel correlograms to exploit full color information and the inter-pixel relations both on the same color plane and across planes. We thereafter derive a novel feature, termed the multi-channel kernel fuzzy correlogram, composed by applying a fuzzy membership transformation over the multi-channel correlogram. The multi-channel kernel fuzzy correlogram maps the multi-channel correlogram into a reduced-dimensionality space and is less sensitive to noise. The approach handles multimodal distributions without using multiple models per pixel, unlike traditional approaches. It does not require ideal background frames for background model initialization and can be initialized even in the presence of moving objects. The effectiveness of the proposed method is illustrated on different video sequences.
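The intra-channel building block can be sketched as follows: an empirical distribution over quantized-level pairs at a fixed pixel distance, computed here for a single colour plane. The number of quantization levels and the neighbour distance are illustrative choices.

```python
# Sketch: intra-channel correlogram of one 8-bit colour plane -- the
# probability that a pixel of level i has a neighbour of level j at
# distance d (horizontal and vertical neighbours only, for brevity).

def correlogram(plane, levels=4, d=1):
    h, w = len(plane), len(plane[0])
    # Quantize 0..255 values into `levels` bins.
    q = [[min(v * levels // 256, levels - 1) for v in row] for row in plane]
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, d), (d, 0)):
                if i + di < h and j + dj < w:
                    counts[q[i][j]][q[i + di][j + dj]] += 1
                    total += 1
    return [[c / total for c in row] for row in counts]
```

The multi-channel version in the paper additionally pairs levels across colour planes; the fuzzy membership transformation then compresses these counts into a lower-dimensional, noise-tolerant feature.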


Journal of Electronic Imaging | 2011

Moving object detection in the presence of dynamic backgrounds using intensity and textural features

Pojala Chiranjeevi; Somnath Sengupta

Moving object detection in the presence of dynamic backgrounds remains a challenging problem in video surveillance. Earlier work established that a background subtraction technique based on a covariance matrix descriptor is effective and robust for dynamic backgrounds. The work proposed herein extends this concept by deriving the covariance-matrix descriptor from local textural properties, instead of computing it directly from local image features. The proposed approach models each pixel with a covariance matrix and a mean feature vector, and the model is dynamically updated. We have made extensive studies with the proposed technique to demonstrate the effectiveness of statistics on local textural properties.
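The covariance-descriptor idea can be sketched as follows: every pixel in a region contributes a small feature vector, and the region is summarized by the covariance matrix of those vectors. The feature set here (intensity plus forward-difference gradients) is an illustrative stand-in for the paper's textural features.

```python
# Sketch: region covariance descriptor. Each pixel yields a feature
# vector; the region is described by the covariance of those vectors.
# The feature choice below is illustrative, not the paper's exact set.

def pixel_features(img, i, j):
    h, w = len(img), len(img[0])
    ix = img[i][min(j + 1, w - 1)] - img[i][j]   # horizontal difference
    iy = img[min(i + 1, h - 1)][j] - img[i][j]   # vertical difference
    return [img[i][j], abs(ix), abs(iy)]

def region_covariance(img):
    feats = [pixel_features(img, i, j)
             for i in range(len(img)) for j in range(len(img[0]))]
    n, dim = len(feats), len(feats[0])
    mean = [sum(f[k] for f in feats) / n for k in range(dim)]
    return [[sum((f[a] - mean[a]) * (f[b] - mean[b]) for f in feats) / n
             for b in range(dim)] for a in range(dim)]
```

A covariance matrix captures how the features co-vary within the region rather than their raw values, which is why the descriptor tolerates illumination shifts and the steady flicker of dynamic backgrounds.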


IEEE Transactions on Broadcasting | 2015

Bayesian Network-Based Customized Highlight Generation for Broadcast Soccer Videos

Maheshkumar H. Kolekar; Somnath Sengupta

Sports highlight generation techniques aim at condensing a full-length video into a significantly shortened version that still preserves the main interesting content of the original. In this paper, we present a system for automatically generating highlights from sports TV broadcasts. The proposed system detects exciting clips based on audio features and then classifies the individual scenes within each clip into events such as replay, player, referee, spectator, and players gathering. A probabilistic Bayesian belief network based on the observed events is used to assign semantic concept labels to the exciting clips, such as goals, saves, yellow cards, red cards, and kicks in soccer video sequences. The labeled clips are selected according to their degree of importance for inclusion in the highlights. We have successfully generated highlights from soccer video sequences.
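The concept-labelling step can be sketched with a naive Bayes scorer standing in for the paper's Bayesian belief network: each concept has a prior and per-event likelihoods, and a clip's observed events vote for a label. Every probability below is a made-up illustrative number, not taken from the paper.

```python
# Sketch: naive Bayes concept labelling as a stand-in for the paper's
# Bayesian belief network. All probabilities are illustrative.

PRIOR = {"goal": 0.2, "save": 0.3, "yellow-card": 0.5}
LIKELIHOOD = {
    "goal":        {"replay": 0.9, "spectator": 0.8, "referee": 0.2},
    "save":        {"replay": 0.7, "spectator": 0.4, "referee": 0.1},
    "yellow-card": {"replay": 0.3, "spectator": 0.2, "referee": 0.9},
}

def label_clip(observed_events):
    """Assign the concept with the highest posterior score to a clip."""
    def score(concept):
        s = PRIOR[concept]
        for ev in observed_events:
            s *= LIKELIHOOD[concept].get(ev, 0.05)  # small floor for unseen events
        return s
    return max(PRIOR, key=score)
```

A true belief network also models dependencies between events; the naive scorer above assumes events are conditionally independent, which is the simplification being made here.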


IEEE Signal Processing Letters | 2012

New Fuzzy Texture Features for Robust Detection of Moving Objects

Pojala Chiranjeevi; Somnath Sengupta

Robust detection of moving objects in the presence of dynamic backgrounds is still a challenging problem. In this letter, we propose a fuzzy membership transformation applied to the co-occurrence vector to derive a rich fuzzy transformed co-occurrence vector, with shared membership values, in a reduced-dimensionality vector space. Fuzzy statistical texture features derived from this fuzzy transformed co-occurrence vector improve robustness in detecting moving objects, compared to traditional statistical texture features and other contemporary moving object segmentation approaches.
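The transformation can be sketched with triangular membership functions: each bin of the co-occurrence vector shares its mass among a smaller set of fuzzy bins, producing a reduced, smoother vector. The number of fuzzy bins and the centre placement are illustrative choices.

```python
# Sketch: fuzzy membership transformation of a co-occurrence vector.
# Triangular memberships spread each bin's mass over overlapping fuzzy
# bins; the bin count and spacing below are illustrative.

def triangular(x, centre, width):
    return max(0.0, 1.0 - abs(x - centre) / width)

def fuzzy_transform(vec, n_fuzzy=3):
    n = len(vec)
    width = (n - 1) / (n_fuzzy - 1)
    centres = [k * width for k in range(n_fuzzy)]
    return [sum(triangular(i, c, width) * v for i, v in enumerate(vec))
            for c in centres]
```

With evenly spaced centres the memberships form a partition of unity, so the transformed vector preserves the total mass of the original while halving (or more) its dimensionality; nearby bins blend together, which is the source of the noise robustness.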


IEEE Transactions on Image Processing | 2014

Neighborhood Supported Model Level Fuzzy Aggregation for Moving Object Segmentation

Pojala Chiranjeevi; Somnath Sengupta

We propose a new algorithm for moving object detection in the presence of challenging dynamic background conditions. We use a set of fuzzy aggregated multifeature similarity measures applied to multiple models corresponding to multimodal backgrounds. The algorithm is enriched with a neighborhood-supported model initialization strategy for faster convergence, and a model-level fuzzy aggregation measure drives background model maintenance for greater robustness. Similarity functions are evaluated between the corresponding elements of the current feature vector and the model feature vectors. Concepts from Sugeno and Choquet integrals are incorporated to compute fuzzy similarities from the ordered similarity function values for each model. Model updating and the foreground/background classification decision are based on this set of fuzzy integrals. Our proposed algorithm is shown to outperform other multi-model background subtraction algorithms. The proposed approach completely avoids explicit offline training to initialize the background model and can be initialized even in the presence of moving objects. The feature space uses a combination of intensity and statistical texture features for better object localization and robustness. Our qualitative and quantitative studies illustrate how the approach mitigates a variety of challenging situations.
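The Choquet-integral aggregation of the ordered similarity values can be sketched as follows, with a simple cardinality-based fuzzy measure standing in for the paper's measure; the similarity values in the test are illustrative.

```python
# Sketch: Choquet integral over per-feature similarity values.
# `measure(k)` is a fuzzy measure on any set of k criteria, assumed
# monotone with measure(0) = 0 and measure(n) = 1. A cardinality-based
# measure stands in for the paper's choice.

def choquet(similarities, measure):
    s = sorted(similarities, reverse=True)
    # Standard form: sum the sorted values weighted by the growth of the
    # measure as each criterion joins the "at least this similar" set.
    return sum(val * (measure(k) - measure(k - 1))
               for k, val in enumerate(s, start=1))
```

The fuzzy measure controls the aggregation's character: the uniform measure g(k) = k/n reduces the Choquet integral to the arithmetic mean, while a measure that jumps to 1 as soon as one criterion is present reduces it to the maximum.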


Asian Conference on Computer Vision | 2006

A hierarchical framework for generic sports video classification

Maheshkumar H. Kolekar; Somnath Sengupta

A five-layered, event-driven hierarchical framework for generic sports video classification is proposed in this paper. The top-layer classifications are based on a few popular audio and video content analysis techniques: short-time energy and zero crossing rate (ZCR) for audio, and Hidden Markov Model (HMM)-based techniques for video, using color and motion as features. The lower-layer classifications are done by applying game-specific rules to recognize major events of the game. The proposed framework has been successfully tested with cricket and football video sequences. The event-related classifications bring us a step closer to the ultimate goal of semantic classification, which would ideally be required for sports highlight generation.
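The two top-layer audio features are straightforward to compute per frame of samples; the framing and normalization conventions below are illustrative.

```python
# Sketch: the two top-layer audio features, computed over one frame of
# samples. Normalization conventions are illustrative.

def short_time_energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(x * x for x in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs that change sign."""
    crossings = sum(1 for a, b in zip(frame, frame[1:])
                    if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)
```

High energy with a low ZCR is characteristic of crowd roar and commentator excitement, while silence and noise separate along both axes, which is what makes this pair useful for spotting exciting segments.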


Indian Conference on Computer Vision, Graphics and Image Processing | 2008

Semantic Event Detection and Classification in Cricket Video Sequence

Maheshkumar H. Kolekar; Kannappan Palaniappan; Somnath Sengupta

In this paper, we present a novel hierarchical framework and effective algorithms for cricket event detection and classification. The proposed scheme performs top-down video event detection and classification using a hierarchical tree, which avoids shot detection and clustering. In the hierarchy, at level 1, we use audio features to extract excitement clips from the cricket video. At level 2, we classify excitement clips into real-time and replay segments. At level 3, we classify these segments into field view and non-field view based on the dominant grass color ratio. At level 4a, we classify field view into pitch view, long view, and boundary view using a motion mask. At level 4b, we classify non-field view into close-up and crowd using an edge density feature. At level 5a, we classify close-ups into the three frequently occurring classes batsman, bowler/fielder, and umpire using a jersey color feature. At level 5b, we classify crowd segments into the two frequently occurring classes spectator and players' gathering using a color feature. We show promising results, with correctly classified cricket events enabling structural and temporal analysis such as highlight extraction and video skimming.
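The level-3 cue can be sketched as follows: count the fraction of pixels whose RGB values look green-dominant and threshold it. The green-dominance rule and the 0.5 threshold are illustrative stand-ins for the paper's test.

```python
# Sketch: dominant grass colour ratio for field / non-field view
# classification. The pixel test and threshold are illustrative.

def is_grass(r, g, b):
    """Crude green-dominance test on one RGB pixel (an assumption)."""
    return g > r and g > b and g > 80

def grass_ratio(frame):
    """Fraction of green-dominant pixels in a frame of RGB triples."""
    pixels = [px for row in frame for px in row]
    return sum(1 for px in pixels if is_grass(*px)) / len(pixels)

def classify_view(frame, threshold=0.5):
    return "field view" if grass_ratio(frame) >= threshold else "non-field view"
```

Long shots of the pitch are dominated by grass and score near 1.0, while close-ups and crowd shots score near 0.0, so a single threshold separates the two branches of the hierarchy.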

Collaboration


Somnath Sengupta's collaborations.

Top Co-Authors

Maheshkumar H. Kolekar (Indian Institute of Technology Patna)
Siddhartha Mukhopadhyay (Indian Institute of Technology Kharagpur)
Alok Kanti Deb (Indian Institute of Technology Kharagpur)
Pojala Chiranjeevi (Indian Institute of Technology Kharagpur)
Indrajit Chakrabarti (Indian Institute of Technology Kharagpur)
Rohan Mukherjee (Indian Institute of Technology Kharagpur)
Vijay Kumar (Indian Institute of Technology Kharagpur)
Abhishek Midya (Indian Institute of Technology Kharagpur)
Subhas Hati (Indian Institute of Technology Kharagpur)