Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Soyeb Nagori is active.

Publication


Featured research published by Soyeb Nagori.


international symposium on circuits and systems | 2014

High throughput VLSI architecture for HEVC SAO encoding for ultra HDTV

Mihir Mody; Hrushikesh Garud; Soyeb Nagori; Dipan Kumar Mandal

This paper presents a high-performance, silicon-area-efficient, software-configurable hardware architecture for sample adaptive offset (SAO) encoding. The paper proposes a novel architecture consisting of a single largest coding unit (LCU) stage SAO operation, a unified data path for luma and chroma channels, add-on external interfaces on the frame-level statistics collection units that allow fine control over the parameter estimation process, and flexible rate-control and artifact-avoidance algorithms. The unified data path consists of 2D-block-based processing with 3 pipeline stages for statistics generation and multiple offset rate-distortion cost estimation blocks for high performance. After placement and routing, the proposed design is expected to take up approximately 0.15 mm² of silicon area in a 28 nm CMOS process. At 200 MHz, the design supports 4K Ultra HD video encoding at 60 fps. Simulation experiments have shown average bit-rate savings of up to 4.3% with in-loop SAO filtering across various encoder configurations.
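The statistics-collection step that the paper's hardware accelerates can be illustrated in software. The sketch below shows HEVC-style SAO band-offset parameter estimation: reconstructed pixels are binned into 32 bands, and each band's offset is the average reconstruction error. The function names and the simple rounding rule are illustrative assumptions, not the paper's exact design.

```python
# Minimal software sketch of SAO band-offset parameter estimation
# (the statistics-generation operation the hardware pipeline performs).
# Names and the rounding rule are illustrative, not the paper's design.

def sao_band_offsets(orig, recon, bit_depth=8):
    """Per-band sum/count statistics and offsets for SAO band offset.

    orig, recon: flat lists of pixel values of equal length.
    Returns a dict mapping band index -> integer offset
    (rounded average of orig minus recon in that band).
    """
    shift = bit_depth - 5            # 32 bands: value >> 3 for 8-bit
    sums, counts = {}, {}
    for o, r in zip(orig, recon):
        band = r >> shift            # band is chosen by the recon pixel
        sums[band] = sums.get(band, 0) + (o - r)
        counts[band] = counts.get(band, 0) + 1
    return {b: round(sums[b] / counts[b]) for b in sums}

orig  = [10, 12, 200, 202, 40]
recon = [ 8, 10, 204, 206, 40]
print(sao_band_offsets(orig, recon))  # {1: 2, 25: -4, 5: 0}
```

The offsets are then added to the reconstructed pixels of each band; the rate-distortion cost estimation blocks in the paper pick which bands and offsets to signal.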


ieee international conference on high performance computing data and analytics | 2015

High Performance Front Camera ADAS Applications on TI's TDA3X Platform

Mihir Mody; Pramod Swami; Kedar Chitnis; Shyam Jagannathan; Kumar Desappan; Anshu Jain; Deepak Kumar Poddar; Zoran Nikolic; Prashanth Viswanath; Manu Mathew; Soyeb Nagori; Hrushikesh Garud

Advanced driver assistance systems (ADAS) are designed to increase drivers' situational awareness and road safety by providing essential information, warnings and automatic intervention to reduce the possibility and severity of an accident. Of the various ADAS modalities available, camera-based ADAS are being widely adopted for their usefulness in varied applications, overall reliability and adaptability to new requirements. But camera-based ADAS also represent a complex, high-performance, low-power compute problem requiring specialized solutions. This paper introduces a high-performance front camera ADAS based on a small-area, low-power System-on-Chip (SoC) solution from Texas Instruments called the Texas Instruments Driver Assist 3x (TDA3x). The paper illustrates the compute capabilities of the device through the implementation of a typical front camera ADAS. It also introduces key programming concepts for the heterogeneous programmable compute cores in the SoC and the software framework used to develop front camera solutions on those cores. These aspects will be of interest not only to ADAS developers but also to those building computer vision and compute-intensive embedded systems.


computer vision and pattern recognition | 2017

Sparse, Quantized, Full Frame CNN for Low Power Embedded Devices

Manu Mathew; Kumar Desappan; Pramod Swami; Soyeb Nagori

This paper presents methods to reduce the complexity of convolutional neural networks (CNN). These include: (1) a method to quickly and easily sparsify a given network; (2) fine-tuning the sparse network to recover the lost accuracy; (3) quantizing the network so it can be implemented efficiently using 8-bit fixed-point multiplications; and (4) a design for an inference engine that takes advantage of the sparsity. These techniques were applied to full frame semantic segmentation, and the degradation due to sparsity and quantization was found to be negligible. We show by analysis that the complexity reduction achieved is significant. Results of implementation on the Texas Instruments TDA2x SoC [17] are presented. We have modified the Caffe CNN framework to do the sparse, quantized training described in this paper. The source code for the training is made available at https://github.com/tidsp/caffe-jacinto
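The first and third ideas in the abstract can be sketched compactly: magnitude-based weight sparsification and symmetric 8-bit fixed-point quantization. The threshold selection and the power-of-two scale below are illustrative assumptions, not the exact procedure from the paper or caffe-jacinto.

```python
# Minimal sketch of magnitude-based sparsification and 8-bit
# fixed-point quantization. Threshold and scale choices are
# illustrative assumptions, not the paper's exact training procedure.
import math

def sparsify(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of the weights."""
    ranked = sorted(weights, key=abs)
    k = int(len(weights) * sparsity)
    thresh = abs(ranked[k - 1]) if k > 0 else -1.0
    return [0.0 if abs(w) <= thresh else w for w in weights]

def quantize_8bit(weights):
    """Symmetric 8-bit quantization with a power-of-two scale.

    Returns (quantized int8 values, scale); dequantize as q / scale.
    """
    m = max(abs(w) for w in weights)
    frac_bits = 7 - max(0, math.ceil(math.log2(m))) if m > 0 else 7
    scale = 2 ** frac_bits
    q = [max(-128, min(127, round(w * scale))) for w in weights]
    return q, scale

w = [0.9, -0.05, 0.4, 0.01, -0.6]
sw = sparsify(w, 0.4)          # the two smallest magnitudes become zero
q, scale = quantize_8bit(sw)
print(sw)                      # [0.9, 0.0, 0.4, 0.0, -0.6]
print(q, scale)
```

In the paper's flow the sparsified network is then fine-tuned before quantization; the inference engine skips the zeroed weights to realize the complexity reduction.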


computer vision and pattern recognition | 2016

A Diverse Low Cost High Performance Platform for Advanced Driver Assistance System (ADAS) Applications

Prashanth Viswanath; Kedar Chitnis; Pramod Swami; Mihir Mody; Sujith Shivalingappa; Soyeb Nagori; Manu Mathew; Kumar Desappan; Shyam Jagannathan; Deepak Kumar Poddar; Anshu Jain; Hrushikesh Garud; Vikram V. Appia; Mayank Mangla; Shashank Dabral

Advanced driver assistance systems (ADAS) are becoming more and more popular. Many ADAS applications, such as Lane Departure Warning (LDW), Forward Collision Warning (FCW), Automatic Cruise Control (ACC), Auto Emergency Braking (AEB) and Surround View (SV), that were present only in high-end cars in the past have trickled down to low- and mid-end vehicles. Many of these applications are also mandated by safety authorities such as Euro NCAP and NHTSA. To make these applications affordable in low- and mid-end vehicles, it is important to have a cost-effective yet high-performance and low-power solution. Texas Instruments' (TI's) TDA3x is an ideal platform that addresses these needs. This paper illustrates the mapping of different algorithms such as SV, LDW, Object Detection (OD), Structure from Motion (SFM) and Camera-Monitor Systems (CMS) to the TDA3x device, thereby demonstrating its compute capabilities. We also share the performance of these embedded vision applications, showing that TDA3x is an excellent high-performance device for ADAS applications.


international conference on signal processing | 2014

A fast color constancy scheme for automobile video cameras

Hrushikesh Garud; Uday Kiran Pudipeddi; Kumar Desappan; Soyeb Nagori

Color constancy (CC) is an important requirement for all digital imaging and many computer vision systems. Video applications such as automobile video cameras require CC schemes that respond quickly and accurately to rapidly changing scene illuminants. This paper presents a fast and effective CC scheme specifically suited for automobile video cameras. The proposed scheme uses a combination of a static illuminant-estimation method called White Patch Retinex (WPR) and a computationally efficient linear transformation model for white balance (WB) correction of the images. The scheme also introduces novel temporal filtering of the WB parameters to avoid field flicker noise. Exhaustive testing in laboratory and real-life conditions has shown the scheme to be effective under various lighting conditions. The paper also presents details of its implementation on an embedded processing platform to achieve HD video processing at 60 frames per second.
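The two ingredients the abstract names can be sketched as follows: White Patch Retinex gain estimation (the brightest value in each channel is assumed to be white) and first-order temporal smoothing of the gains across frames. Normalizing gains to the green channel and the smoothing factor are illustrative assumptions, not the paper's exact parameters.

```python
# Minimal sketch of White Patch Retinex (WPR) gain estimation with
# exponential temporal smoothing of the white-balance gains.
# Green-normalized gains and the alpha value are illustrative.

def wpr_gains(pixels):
    """pixels: list of (r, g, b) tuples.

    Returns per-channel gains that map the brightest value in each
    channel to that of the green channel (von Kries style correction).
    """
    rmax = max(p[0] for p in pixels)
    gmax = max(p[1] for p in pixels)
    bmax = max(p[2] for p in pixels)
    return (gmax / rmax, 1.0, gmax / bmax)

def smooth_gains(prev, new, alpha=0.1):
    """First-order IIR filter over frames to suppress gain flicker."""
    return tuple((1 - alpha) * p + alpha * n for p, n in zip(prev, new))

frame = [(100, 200, 50), (80, 180, 40)]
g = wpr_gains(frame)                       # (2.0, 1.0, 4.0)
g = smooth_gains((1.0, 1.0, 1.0), g, alpha=0.5)
print(g)                                   # (1.5, 1.0, 2.5)
```

A small alpha makes the white balance converge slowly but smoothly; the paper's contribution is filtering the parameters rather than the pixels, which is far cheaper at HD rates.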


international conference on consumer electronics | 2017

Efficient object detection and classification on low power embedded systems

Shyam Jagannathan; Kumar Desappan; Pramod Swami; Manu Mathew; Soyeb Nagori; Kedar Chitnis; Yogesh Marathe; Deepak Kumar Poddar; Suriya Narayanan

Identifying real-world 3D objects such as pedestrians, vehicles and traffic signs using 2D images is a challenging task. There are multiple approaches to this problem with varying degrees of detection accuracy and implementation complexity. Some approaches use "hand-coded" object features such as Histogram of Oriented Gradients (HOG), Haar or Scale Invariant Feature Transform (SIFT) along with a linear classifier such as a Support Vector Machine (SVM) or Adaptive Boosting (AdaBoost) to detect objects. Recent developments have shown that a deep multi-layered Convolutional Neural Network (CNN) classifier can learn object features on its own and classify at an accuracy surpassing human vision. In this paper we combine both approaches: object detection is done using HOG features and an AdaBoost cascade classifier, and object classification is done using a CNN to classify the type of object detected. The proposed method is implemented on TI's low-power TDA3x SoC.
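The efficiency of the detection stage comes from the cascade structure: a window is accepted only if every boosted stage's score clears its threshold, so most windows are rejected cheaply after a stage or two. The sketch below shows this control flow with toy stage weights and thresholds, not trained values.

```python
# Minimal sketch of an AdaBoost cascade's early-rejection control flow.
# Stage weights and thresholds are toy values, not trained parameters.

def cascade_detect(window_features, stages):
    """Return True if the window passes every cascade stage.

    stages: list of (weak_classifiers, threshold), where each weak
    classifier is (feature_index, split, weight) and votes its weight
    when the feature exceeds the split.
    """
    for weaks, threshold in stages:
        score = sum(w for idx, split, w in weaks
                    if window_features[idx] > split)
        if score < threshold:
            return False          # early rejection: later stages skipped
    return True

stages = [
    ([(0, 0.5, 1.0), (1, 0.2, 0.5)], 1.0),   # cheap first stage
    ([(2, 0.7, 2.0)], 1.5),                   # stricter second stage
]
print(cascade_detect([0.9, 0.3, 0.8], stages))  # True: both stages pass
print(cascade_detect([0.9, 0.3, 0.1], stages))  # False: stage 2 rejects
```

Only the few windows surviving the cascade are handed to the CNN classifier, which keeps the expensive network off the vast majority of image locations.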


2010 IEEE 4th International Conference on Internet Multimedia Services Architecture and Application | 2010

Sub-picture based rate control algorithm for achieving real time encoding and improved video quality for H.264 HD encoder on embedded video SOCs

Naveen Srinivasamurthy; Soyeb Nagori; Girish Srinivasa Murthy; Satish Kumar

In this paper we present a rate control algorithm to keep the encoded picture size in a video sequence within a specified maximum limit. The need to control the encoded picture size arises in several video applications: (i) video conferencing, to ensure the glass-to-glass delay stays within acceptable conversational delay, and (ii) video encoders complying with H.241 MTU packetization constraints, to ensure real-time encoding can be achieved for all pictures. The proposed rate control algorithm adjusts the quantization scale at the end of every row in the video sequence. Bits consumed by the already encoded macroblocks are monitored, and the quantization scale is increased if the bits consumed exceed the average bits per macroblock determined by the maximum encoded picture constraint. It is shown that in a video encoder complying with H.241 MTU packetization constraints the picture encoding time does not always meet the real-time constraint. With the proposed rate control algorithm, however, the encoding time for all pictures always meets the real-time constraint. Additionally, the proposed rate control improves overall video quality: for sequence #3, PSNR increased by 3.8 dB and DMOS decreased by 44 points compared to the previous rate control algorithm, indicating significant video quality improvements. Thus the dual requirements of (i) real-time encoding and (ii) improved video quality are both simultaneously achieved.
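The row-level adjustment the abstract describes can be sketched as a small update rule: after each macroblock row, compare the bits consumed so far against the per-macroblock budget implied by the maximum picture size, and raise the quantizer when over budget. The step size and clamping below are illustrative assumptions.

```python
# Minimal sketch of row-level rate control: raise the quantizer when
# the running bits-per-macroblock exceeds the budget implied by the
# maximum picture size. Step size and clamping are illustrative.

def adjust_qp(qp, bits_used, mbs_coded, max_picture_bits, total_mbs,
              step=2, qp_max=51):
    """Return the quantizer to use for the next macroblock row."""
    budget_per_mb = max_picture_bits / total_mbs
    if mbs_coded and bits_used / mbs_coded > budget_per_mb:
        qp = min(qp + step, qp_max)   # over budget: coarser quantization
    return qp

qp = 30
# After row 1: 1200 bits over 40 MBs = 30 bits/MB, versus a budget of
# 100000 / 3600 ≈ 27.8 bits/MB for the whole picture.
qp = adjust_qp(qp, 1200, 40, 100000, 3600)
print(qp)  # 32: consumption exceeds the budget, so QP is raised
```

Because the check runs every row rather than every picture, the encoder can react before a picture overshoots the H.241 size limit.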


international conference on consumer electronics | 2017

Real time Structure from Motion for Driver Assistance System

Deepak Kumar Poddar; Pramod Swami; Soyeb Nagori; Prashanth Viswanath; Manu Mathew; Desappan Kumar; Anshu Jain; Shyam Jagannathan

Understanding the 3D surroundings is an important problem in Advanced Driver Assistance Systems (ADAS). Structure from Motion (SfM) is a well-known computer vision technique for estimating 3D structure from 2D image sequences. The inherent complexities of SfM pose algorithmic and implementation challenges for efficient real-time processing on an embedded processor. This paper highlights such challenges and presents innovative solutions to them. The paper proposes an efficient SfM solution implemented on the Texas Instruments TDA3x series of System on Chip (SoC). The TDA3x SoC contains one vector processor (known as EVE) and two C66x DSPs as co-processors, which are useful for computationally intensive vision processing. The proposed SfM solution, which performs sparse optical flow, fundamental matrix estimation, triangulation and 3D point pruning, consumes 42% of the EVE and 10% of one DSP for 25 fps processing at one megapixel image resolution.
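The triangulation step of the pipeline can be illustrated for the special case of a pure sideways camera translation between two frames (no rotation), where depth follows directly from the disparity of normalized image coordinates. The general case in the paper recovers camera motion from the fundamental matrix; this simplification is ours.

```python
# Minimal sketch of two-view triangulation for a pure sideways camera
# translation (no rotation). The general SfM case derives the motion
# from the fundamental matrix; this special case is illustrative.

def triangulate_depth(x1, x2, baseline):
    """Depth Z from normalized x-coordinates of one matched point seen
    in two views separated sideways by `baseline` (scene units)."""
    disparity = x1 - x2
    if disparity <= 0:
        return None                  # point at infinity or a bad match
    return baseline / disparity

def triangulate_point(x1, y1, x2, baseline):
    """Reconstruct the 3D point (X, Y, Z) in the first camera's frame."""
    z = triangulate_depth(x1, x2, baseline)
    if z is None:
        return None
    return (x1 * z, y1 * z, z)       # back-project through the pinhole

print(triangulate_point(0.5, 0.2, 0.25, baseline=0.5))  # (1.0, 0.4, 2.0)
```

The pruning step the paper mentions would discard points like the `None` case above, where near-zero disparity makes the depth estimate unreliable.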


2011 IEEE 5th International Conference on Internet Multimedia Systems Architecture and Application | 2011

Novel rate distortion optimized region of interest video coding for embedded video SOCs

Naveen Srinivasamurthy; Soyeb Nagori; Manoj Koul

In this paper we present a state-of-the-art, practical, real-time region of interest (ROI) video encoder implemented on the Texas Instruments TMS320DM3x SoC. The proposed algorithm is a novel rate-distortion optimized ROI coding algorithm with low complexity, making it ideal for embedded video SoCs with limited computational and memory resources while achieving excellent perceptual quality. The proposed solution incorporates ROI processing in the entire video chain, from front-end face detection to back-end video compression. It is probably one of the first video capture and compression systems implemented on an embedded SoC that relies on a specialized rate-distortion method for ROI coding using object detection from the front end or user inputs. Extensive subjective evaluation has been performed for resolutions ranging from CIF to 1080p at different bitrates, covering over 300 test cases. Significant subjective quality enhancements have been observed across all resolutions and bitrates. With the proposed algorithm, competitive subjective quality is achieved for video conferencing sequences at 300 kbps for 720p and at 96 kbps for CIF, compared to the case where no ROI-based rate-distortion coding methods are used. On the Texas Instruments TMS320DM3x SoC the ROI video encoder achieved real-time performance for 1080p video at 30 fps.
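The core ROI-coding idea can be sketched as a per-macroblock quantizer map: spend relatively more bits on region-of-interest macroblocks by lowering their quantizer and compensating on the background. The fixed QP deltas below are illustrative; the paper derives the allocation from a rate-distortion model.

```python
# Minimal sketch of ROI coding as a per-macroblock QP map: lower QP
# (finer quantization) inside the ROI, higher QP on the background.
# The fixed deltas are illustrative, not the paper's RD-derived values.

def roi_qp_map(base_qp, roi_mask, roi_delta=-4, bg_delta=2,
               qp_min=0, qp_max=51):
    """roi_mask: list of booleans, one per macroblock (True = in ROI).

    Returns the per-macroblock QP list, clamped to the legal range.
    """
    def clamp(q):
        return max(qp_min, min(qp_max, q))
    return [clamp(base_qp + (roi_delta if in_roi else bg_delta))
            for in_roi in roi_mask]

mask = [False, True, True, False]      # e.g. a detected face in the middle
print(roi_qp_map(30, mask))            # [32, 26, 26, 32]
```

In the full system the mask comes from the front-end face detector or user input, and the deltas are chosen so the total bit cost still meets the picture-level rate target.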


Archive | 2011

Region of Interest (ROI) Video Encoding

Mehmet Umut Demircin; Do-Kyoung Kwon; Naveen Srinivasamurthy; Manoj Koul; Soyeb Nagori

Collaboration


Dive into Soyeb Nagori's collaboration.
