Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mukesh Kumar Saini is active.

Publication


Featured research published by Mukesh Kumar Saini.


IEEE Access | 2015

Toward Social Internet of Vehicles: Concept, Architecture, and Applications

Kazi Masudul Alam; Mukesh Kumar Saini; Abdulmotaleb El Saddik

The main vision of the Internet of Things (IoT) is to equip real-life physical objects with computing and communication power so that they can interact with each other for the social good. As one of the key members of IoT, the Internet of Vehicles (IoV) has seen steep advancement in communication technologies. Now, vehicles can easily exchange safety, efficiency, infotainment, and comfort-related information with other vehicles and infrastructure using vehicular ad hoc networks (VANETs). We leverage the cloud-based VANET theme to propose a cyber-physical architecture for the Social IoV (SIoV). SIoV is a vehicular instance of the Social IoT (SIoT), where vehicles are the key social entities in machine-to-machine vehicular social networks. We identify the social structures of SIoV components, their relationships, and the interaction types. We map VANET components onto the IoT-A architecture reference model to offer better integration of SIoV with other IoT domains. We also present a communication message structure based on automotive ontologies, the SAE J2735 message set, and the advanced traveler information system events schema that corresponds to the social graph. Finally, we provide implementation details and experimental analysis to demonstrate the efficacy of the proposed system, and include different application scenarios for various user groups.
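To make the idea of a social-graph-aware vehicular message concrete, here is a minimal Python sketch. The relationship types and field names are illustrative assumptions, not the paper's actual SAE J2735-based schema or ontology.

```python
from dataclasses import dataclass, field

# Hypothetical relationship types inspired by the SIoT/SIoV literature;
# the paper's actual taxonomy may differ.
RELATIONSHIP_TYPES = {"parental", "co-location", "co-work", "ownership", "social"}

@dataclass
class SIoVMessage:
    """Illustrative social-graph-aware vehicular message (not the SAE J2735 wire format)."""
    sender_id: str                      # pseudonymous vehicle identifier
    receiver_id: str                    # target vehicle or roadside unit
    relationship: str                   # edge type in the machine-to-machine social graph
    payload_type: str                   # e.g. "safety", "efficiency", "infotainment"
    payload: dict = field(default_factory=dict)

    def __post_init__(self) -> None:
        if self.relationship not in RELATIONSHIP_TYPES:
            raise ValueError(f"unknown relationship type: {self.relationship}")

# Example: a co-location relationship carrying a hazard warning.
msg = SIoVMessage("veh-42", "veh-77", "co-location", "safety",
                  {"event": "hard_braking", "position": (30.96, 75.85)})
```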


ACM Multimedia | 2012

MoViMash: online mobile video mashup

Mukesh Kumar Saini; Raghudeep Gadde; Shuicheng Yan; Wei Tsang Ooi

With the proliferation of mobile video cameras, it is becoming easier for users to capture videos of live performances and socially share them with friends and the public. As an attendee of such live performances typically has limited mobility, each video camera is able to capture only from a restricted range of viewing angles and distances, producing a rather monotonous video clip. At such performances, however, multiple video clips can be captured by different users, likely from different angles and distances. These videos can be combined to produce a more interesting and representative mashup of the live performance for broadcasting and sharing. Earlier works select video shots merely based on the quality of the currently available videos. In a real video editing process, however, recent selection history plays an important role in choosing future shots. In this work, we present MoViMash, a framework for automatic online video mashup that makes smooth shot transitions to cover the performance from diverse perspectives. Shot transition and shot length distributions are learned from professionally edited videos. Further, we introduce view quality assessment into the framework to filter out shaky, occluded, and tilted videos. To the best of our knowledge, this is the first attempt to incorporate history-based diversity measurement, state-based video editing rules, and view quality in automated video mashup generation. Experimental results demonstrate the effectiveness of the MoViMash framework.
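As an illustration of the kind of shot-selection step described above, the following is a minimal Python sketch that scores candidate cameras by view quality, history-based diversity, and a learned transition likelihood. The scoring weights, the keying of transitions by view id, and the diversity measure are simplifying assumptions, not the paper's exact state-based formulation.

```python
def select_next_shot(candidates, current_view, history, transition_prob, weights=(0.5, 0.3, 0.2)):
    """Toy shot-selection step in the spirit of MoViMash.

    candidates: {view_id: quality in [0, 1]} after filtering shaky/occluded/tilted views.
    history: recently selected view_ids, used to reward diversity.
    transition_prob: learned likelihood of switching between views (assumed keyed by view_id here).
    """
    w_quality, w_diversity, w_transition = weights
    scores = {}
    for view, quality in candidates.items():
        diversity = 1.0 - history.count(view) / max(len(history), 1)  # penalize recently used views
        transition = transition_prob.get((current_view, view), 0.1)   # learned transition likelihood
        scores[view] = w_quality * quality + w_diversity * diversity + w_transition * transition
    return max(scores, key=scores.get)

# Example with three candidate cameras.
next_view = select_next_shot(
    candidates={"cam1": 0.9, "cam2": 0.7, "cam3": 0.4},
    current_view="cam1",
    history=["cam1", "cam1", "cam2"],
    transition_prob={("cam1", "cam2"): 0.6, ("cam1", "cam3"): 0.3},
)
```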


Multimedia Tools and Applications | 2014

W3-privacy: understanding what, when, and where inference channels in multi-camera surveillance video

Mukesh Kumar Saini; Pradeep K. Atrey; Sharad Mehrotra; Mohan S. Kankanhalli

Huge amounts of video are recorded every day by surveillance systems. Since video is capable of recording and preserving an enormous amount of information which can be used in many applications, it is worth examining the degree of privacy loss that might occur due to public access to the recorded video. A fundamental requirement of privacy solutions is an understanding and analysis of the inference channels that can lead to a breach of privacy. Though inference channels and privacy risks are well studied in traditional data sharing applications (e.g., hospitals sharing patient records for data analysis), privacy assessments of video data have been limited to direct identifiers such as people's faces in the video. Other important inference channels such as location (Where), time (When), and activities (What) are generally overlooked. In this paper we propose a privacy loss model that highlights and incorporates identity leakage through the multiple inference channels that exist in a video due to what, when, and where information. We model the identity leakage and the sensitive information separately and combine them to calculate the privacy loss. The proposed identity leakage model is able to consolidate identity leakage across multiple events and multiple cameras. Experimental results are provided to demonstrate the proposed privacy analysis framework.
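The following Python sketch illustrates the general shape of such a privacy loss model: identity leakage from the what/when/where channels is combined and then weighted by the sensitivity of the revealed information. The noisy-OR combination rule and the multiplicative weighting are assumptions made for illustration, not necessarily the paper's exact formulation.

```python
def identity_leakage(channel_leakages):
    """Combine per-channel identity leakage (what/when/where), each in [0, 1].

    Noisy-OR over independent channels is an illustrative assumption.
    """
    p_not_identified = 1.0
    for p in channel_leakages:
        p_not_identified *= (1.0 - p)
    return 1.0 - p_not_identified

def privacy_loss(channel_leakages, sensitivity):
    """Privacy loss as identity leakage weighted by sensitivity of the revealed information."""
    return identity_leakage(channel_leakages) * sensitivity

# Example: location and time leak a lot, activity a little; the activity itself is moderately sensitive.
loss = privacy_loss(channel_leakages=[0.7, 0.5, 0.2], sensitivity=0.6)
```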


ACM SIGMM Conference on Multimedia Systems | 2013

The jiku mobile video dataset

Mukesh Kumar Saini; Seshadri Padmanabha Venkatagiri; Wei Tsang Ooi; Mun Choon Chan

The proliferation of mobile devices with video recording capability has led to tremendous growth in the amount of user-generated mobile video. Researchers have embarked on developing new and interesting applications and enhancement algorithms for mobile video. There is, however, no standard dataset with videos that represent the characteristics of mobile videos captured in realistic scenarios. In this paper, we present our effort to create one such dataset, consisting of videos simultaneously recorded with mobile devices in an unconstrained manner by multiple users attending performance events. Each video is accompanied by concurrent readings from accelerometer and compass sensors. At the time of writing, the dataset contains 473 video clips, with a total length of 30 hours 41 minutes and a total size of 122.8 GB. We believe this dataset is useful as a common benchmark for a variety of research topics on mobile video, including video analytics, video quality enhancement, and automatic video mashups.


ACM Computing Surveys | 2015

How Close are We to Realizing a Pragmatic VANET Solution? A Meta-Survey

Mukesh Kumar Saini; Abdulhameed Alelaiwi; Abdulmotaleb El Saddik

Vehicular Ad hoc Networks (VANETs) are seen as the key enabling technology of Intelligent Transportation Systems (ITS). In addition to safety, VANETs also provide a cost-effective platform for numerous comfort and entertainment applications. A pragmatic VANET solution requires synergistic efforts in the multidisciplinary areas of communication standards, routing, security, and trust. Furthermore, a realistic VANET simulator is required for performance evaluation. There have been many research efforts in these areas, and consequently a number of surveys have been published on various aspects. In this article, we first explain the key characteristics of VANETs and then provide a meta-survey of research works. We take a tutorial approach, introducing VANETs gradually before discussing intricate details. Extensive listings of existing surveys and research projects are provided to assess development efforts. The article helps researchers see the big picture and channel their efforts effectively.


International Conference on Multimedia and Expo | 2010

Privacy modeling for video data publication

Mukesh Kumar Saini; Pradeep K. Atrey; Sharad Mehrotra; Sabu Emmanuel; Mohan S. Kankanhalli

Video cameras are being used extensively in many applications. Huge amounts of video are recorded and stored every day by surveillance systems. Any proposed application of this data raises severe privacy concerns, so an assessment of privacy loss is necessary before any potential use of the data. In traditional methods of privacy modeling, researchers have focused on explicit means of identity leakage, such as facial information. However, other implicit inference channels through which an individual's identity can be learned have not been considered. For example, an adversary can observe behavior, note the places visited, and combine that with temporal information to infer the identity of the person in the video. In this work, we thoroughly investigate the privacy issues involved with video data, considering both implicit and explicit channels. We first establish an analogy with statistical databases and then propose a model to calculate the privacy loss that might occur due to publication of the video data. Experimental results demonstrate the utility of the proposed model.


Advances in Multimedia | 2012

Adaptive transformation for robust privacy protection in video surveillance

Mukesh Kumar Saini; Pradeep K. Atrey; Sharad Mehrotra; Mohan S. Kankanhalli

Privacy is a major concern in current video surveillance systems. Due to privacy issues, many strategic places remain unmonitored, leading to security threats. The main problem with existing privacy protection methods is that they assume the availability of accurate region of interest (RoI) detectors that can detect and hide privacy-sensitive regions such as faces. However, current detectors are not fully reliable, leading to breaches in privacy protection. In this paper, we propose a privacy protection method that adopts adaptive data transformation, combining selective obfuscation with global operations to provide robust privacy even with unreliable detectors. Further, there are many implicit privacy leakage channels that have not been considered by researchers for privacy protection; we block both implicit and explicit channels of privacy leakage. Experimental results show that the proposed method incurs 38% less distortion of the information needed for surveillance compared to earlier methods of global transformation, while still providing near-zero privacy loss.
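A minimal Python/OpenCV sketch of the adaptive idea, assuming a per-frame RoI detector that reports a confidence score: when the detector looks reliable, only the detected regions are obfuscated; otherwise the frame falls back to a global transformation. The confidence threshold and the choice of Gaussian blur are illustrative assumptions, not the paper's exact operators.

```python
import cv2

def protect_frame(frame, detections, confidence_threshold=0.8):
    """Illustrative adaptive protection step (not the paper's exact algorithm).

    detections: list of (x, y, w, h, confidence) from an RoI detector (e.g. a face detector).
    """
    out = frame.copy()
    if detections and all(conf >= confidence_threshold for *_, conf in detections):
        for x, y, w, h, _ in detections:                       # selective obfuscation of RoIs
            out[y:y+h, x:x+w] = cv2.GaussianBlur(out[y:y+h, x:x+w], (51, 51), 0)
    else:
        out = cv2.GaussianBlur(out, (31, 31), 0)               # global fallback transformation
    return out
```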


IEEE Journal of Selected Topics in Signal Processing | 2014

Coherency Based Spatio-Temporal Saliency Detection for Video Object Segmentation

Dwarikanath Mahapatra; Syed Omer Gilani; Mukesh Kumar Saini

Extracting moving and salient objects from videos is important for many applications, such as surveillance and video retargeting. In this paper we use spatial and temporal coherency information to segment salient objects in videos. While many methods use motion information from videos, they do not exploit coherency information, which has the potential to give more accurate saliency maps. Spatial coherency maps identify regions belonging to regular objects, while temporal coherency maps identify regions with highly coherent motion. The two coherency maps are combined to obtain the final spatio-temporal map identifying salient regions. Experimental results on public datasets show that our method outperforms two competing methods in segmenting moving objects from videos.
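The fusion step described above can be sketched in a few lines of Python: two normalized coherency maps are combined and thresholded into a segmentation mask. The weighted-sum fusion and the fixed threshold are illustrative assumptions; the paper's fusion rule may differ.

```python
import numpy as np

def spatio_temporal_saliency(spatial_map, temporal_map, alpha=0.5):
    """Combine spatial and temporal coherency maps into one saliency map (both assumed in [0, 1])."""
    fused = alpha * spatial_map + (1.0 - alpha) * temporal_map
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)

def segment_salient_regions(saliency, threshold=0.6):
    """Binary segmentation of salient (e.g. moving object) regions."""
    return saliency > threshold

# Example with random maps standing in for real coherency computations.
spatial = np.random.rand(240, 320)
temporal = np.random.rand(240, 320)
mask = segment_salient_regions(spatio_temporal_saliency(spatial, temporal))
```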


IEEE Transactions on Multimedia | 2012

Adaptive Workload Equalization in Multi-Camera Surveillance Systems

Mukesh Kumar Saini; Xiangyu Wang; Pradeep K. Atrey; Mohan S. Kankanhalli

Surveillance and monitoring systems generally employ a large number of cameras to capture people's activities in the environment. These activities are analyzed by hosts (human operators and/or computers) for threat detection. Threat detection is a target-centric task in which the behavior of each target is analyzed separately; it requires a significant amount of human attention and is computationally intensive for automatic analysis. In order to meet the real-time requirements of surveillance, it is necessary to distribute the video processing load over multiple hosts. In general, cameras are statically assigned to hosts; we show that this is not a desirable solution, as the workload for a particular camera may vary over time depending on the number of targets in its view. This uneven distribution of workload will become more critical as sensing infrastructures are deployed on the cloud. In this paper, we model the camera workload as a function of the number of targets and use that model to dynamically assign video feeds to hosts. Experimental results show that the proposed model successfully captures the variability of the workload, and that dynamic workload assignment provides better results than static assignment.
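The following Python sketch shows one simple way such a dynamic assignment could work: treat each camera's current target count as its workload and greedily place cameras on the least-loaded host. The paper fits a more detailed workload model, so this is only a sketch under that simplifying assumption.

```python
import heapq

def assign_cameras(targets_per_camera, num_hosts):
    """Greedy dynamic assignment of camera feeds to processing hosts.

    Workload of a camera is approximated here by its current number of targets.
    Returns {host_id: [camera_ids]}.
    """
    assignment = {h: [] for h in range(num_hosts)}
    heap = [(0, h) for h in range(num_hosts)]               # (current load, host)
    heapq.heapify(heap)
    # Assign heaviest cameras first to the least-loaded host (LPT heuristic).
    for cam, load in sorted(targets_per_camera.items(), key=lambda kv: -kv[1]):
        host_load, host = heapq.heappop(heap)
        assignment[host].append(cam)
        heapq.heappush(heap, (host_load + load, host))
    return assignment

# Example: re-run whenever target counts change to rebalance the load.
print(assign_cameras({"cam1": 5, "cam2": 1, "cam3": 3, "cam4": 4}, num_hosts=2))
```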


International Conference on Multimedia and Expo | 2011

Anonymous surveillance

Mukesh Kumar Saini; Pradeep K. Atrey; Sharad Mehrotra; Mohan S. Kankanhalli

Video surveillance is an effective tool that enables a single security agent to monitor wide areas. However, it compromises the privacy of individuals. There have been attempts to obfuscate face and silhouette regions of the images to hide the identity of individuals. We recognize that in traditional surveillance systems the viewer generally has sufficient contextual knowledge about the camera location, time, and activity patterns, which can lead to identity leakage even when visual cues (face and appearance) are not present. In this way, the viewer can relate the identity of individuals to the sensitive information in the video, causing privacy loss. In order to provide robust privacy preservation, the context knowledge needs to be decoupled from the video; however, human monitoring of the videos is also necessary for the assessment of the situation. In this paper we propose an anonymous surveillance framework that decouples the contextual knowledge and the video to the minimal extent required for situation assessment. Experimental results confirm that the proposed framework is very effective in protecting privacy while largely preserving the surveillance utility of the data.

Collaboration


Dive into Mukesh Kumar Saini's collaboration network.

Top Co-Authors

Mohan S. Kankanhalli
National University of Singapore

Pradeep K. Atrey
State University of New York System

Wei Tsang Ooi
National University of Singapore