Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Parikshit Sakurikar is active.

Publication


Featured research published by Parikshit Sakurikar.


IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum | 2013

Fast, Scalable Parallel Comparison Sort on Hybrid Multicore Architectures

Dip Sankar Banerjee; Parikshit Sakurikar; Kishore Kothapalli

Sorting has been a topic of immense research value since the inception of computer science. Hybrid computing on multicore architectures involves computing simultaneously on a tightly coupled heterogeneous collection of devices. In this work, we consider a multicore CPU along with a many-core GPU as our experimental hybrid platform and present a hybrid comparison-based sorting algorithm that utilizes both devices to perform sorting. The algorithm is broadly based on splitting the input list according to a large number of splitters and creating independent sublists; sorting the independent sublists results in sorting the entire original list. On a CPU+GPU platform consisting of an Intel i7 980 and an Nvidia GTX 580, our algorithm achieves a 20% gain over the current best known comparison sort result, published by Davidson et al. [InPar 2012]. On the same experimental platform, our results are better by 40% on average than a similar GPU-alone algorithm proposed by Leischner et al. [IPDPS 2010]. Our results also show that our algorithm and its implementation scale with the size of the input, and that such performance gains can be obtained on other hybrid CPU+GPU platforms.
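The splitter-based scheme described above can be sketched in a few lines. This is a minimal single-core illustration of the idea (sample splitters, bucket into independent sublists, sort each, concatenate); function and parameter names are illustrative, and the actual paper distributes the bucketing and sublist sorts across the GPU and CPU.

```python
import bisect
import random

def splitter_sort(data, num_splitters=8):
    """Sample splitters, bucket the input into independent sublists,
    sort each sublist, and concatenate. Sorting the independent
    sublists sorts the entire original list."""
    if len(data) <= 1:
        return list(data)
    # Sample and sort a small set of splitters from the input.
    splitters = sorted(random.sample(data, min(num_splitters, len(data))))
    buckets = [[] for _ in range(len(splitters) + 1)]
    # Each element lands in the bucket of the first splitter >= it,
    # so each bucket covers a contiguous value range.
    for x in data:
        buckets[bisect.bisect_left(splitters, x)].append(x)
    out = []
    for b in buckets:
        out.extend(sorted(b))  # sublists are independent of one another
    return out
```

Because the buckets cover disjoint, ordered value ranges, concatenating the sorted buckets yields the fully sorted list, and each bucket can be sorted on a different device or core.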


Computer Vision and Pattern Recognition | 2014

Dense View Interpolation on Mobile Devices Using Focal Stacks

Parikshit Sakurikar; P. J. Narayanan

Light field rendering is a widely used technique to generate novel views of a scene from novel viewpoints. Interpolative methods for light field rendering require a dense description of the scene in the form of closely spaced images. In this work, we present a simple method for dense view interpolation over general static scenes, using commonly available mobile devices. We capture an approximate focal stack of the scene from adjacent camera locations and interpolate intermediate images by shifting each focal region according to appropriate disparities. We do not rely on focus distance control to capture focal stacks and describe an automatic method of estimating the focal textures and the blur and disparity parameters required for view interpolation.
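The shift-and-composite idea in the abstract can be sketched as follows: each in-focus region of the stack is translated by a fraction of its estimated disparity and the shifted regions are composited front to back. All names and the horizontal-shift simplification are illustrative assumptions, not the paper's API.

```python
import numpy as np

def interpolate_view(focal_regions, disparities, alpha):
    """Sketch: `focal_regions` is a list of (rgb, mask) pairs, one per
    focal region; `disparities` gives each region's horizontal disparity;
    `alpha` in [0, 1] selects the intermediate viewpoint."""
    h, w, _ = focal_regions[0][0].shape
    out = np.zeros((h, w, 3), dtype=np.float64)
    filled = np.zeros((h, w), dtype=bool)
    # Composite nearer regions (larger disparity) first so they occlude.
    for i in np.argsort(disparities)[::-1]:
        rgb, mask = focal_regions[i]
        shift = int(round(alpha * disparities[i]))
        rgb_s = np.roll(rgb, shift, axis=1)
        mask_s = np.roll(mask, shift, axis=1)
        write = mask_s & ~filled          # only pixels not yet covered
        out[write] = rgb_s[write]
        filled |= write
    return out
```

Varying `alpha` from 0 to 1 sweeps the viewpoint between the two capture positions; in the paper the per-region blur and disparity parameters are estimated automatically rather than supplied.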


IEEE International Conference on High Performance Computing, Data, and Analytics | 2014

Comparison sorting on hybrid multicore architectures for fixed and variable length keys

Dip Sankar Banerjee; Parikshit Sakurikar; Kishore Kothapalli

Sorting has been a topic of immense research value since the inception of computer science. Hybrid computing on multicore architectures involves computing simultaneously on a tightly coupled heterogeneous collection of devices. In this work, we consider a multicore CPU along with a many-core GPU as our experimental hybrid platform and present a hybrid comparison-based sorting algorithm that utilizes both devices to perform sorting. The algorithm is broadly based on splitting the input list according to a large number of splitters and creating independent sublists; sorting the independent sublists results in sorting the entire original list. On a CPU + GPU platform consisting of an Intel i7-980X and an NVidia GTX 580, our algorithm achieves a 20% gain over the current best known comparison sort result, published by Davidson et al. (2012). On the same experimental platform, our results are better by 40% on average than a similar GPU-alone algorithm proposed by Leischner et al. (2010). We also extend our sorting algorithm from fixed-length keys to variable-length keys, using a look-ahead based approach to sort strings that obtains around a 24% benefit over the current best known solution. Our results also show that our algorithm and its implementation scale with the size of the input, and that such performance gains can be obtained on other hybrid CPU + GPU platforms.
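One common way to treat variable-length string keys with a fixed-length comparison sort, as this abstract describes, is to sort once on a packed fixed-length prefix and then look ahead to resolve ties. The sketch below illustrates that idea on a single core; the packing scheme and names are assumptions for illustration, not the paper's kernel.

```python
def prefix_key(s, k=4):
    """Pack the first k characters into a fixed-length integer key,
    padding short strings with 0 so shorter prefixes sort first."""
    key = 0
    for i in range(k):
        key = (key << 8) | (ord(s[i]) if i < len(s) else 0)
    return key

def string_sort(strings, k=4):
    """First pass: sort on the fixed-length prefix key. Second pass:
    refine runs whose prefix keys tie by looking ahead at the rest
    of the string (here via a plain full comparison)."""
    strings = sorted(strings, key=lambda s: prefix_key(s, k))
    out, i = [], 0
    while i < len(strings):
        j = i
        while j < len(strings) and prefix_key(strings[j], k) == prefix_key(strings[i], k):
            j += 1
        out.extend(sorted(strings[i:j]))  # resolve the tied run
        i = j
    return out
```

The first pass does the bulk of the work with cheap fixed-length comparisons; only the (usually small) runs of strings sharing a prefix need the more expensive look-ahead.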


Workshop on Applications of Computer Vision | 2012

Fast graph cuts using shrink-expand reparameterization

Parikshit Sakurikar; P. J. Narayanan

Global optimization of MRF energy using graph cuts is widely used in computer vision. As images get larger, faster graph cuts are needed without sacrificing optimality. Initializing or reparameterizing a graph using the results of a similar one has provided efficiency in the past. In this paper, we present a method to speed up graph cuts using shrink-expand reparameterization. Our scheme merges the nodes of a given graph to shrink it. The resulting graph and its mincut are expanded and used to reparameterize the original graph for faster convergence. Graph shrinking can be done in different ways. We use a block-wise shrinking similar to multiresolution processing of images in our Multiresolution Cuts algorithm. We also develop a hybrid approach that can mix nodes from different levels without affecting optimality. Our algorithm is particularly suited for processing large images. The processing time on the full-detail graph is reduced by nearly a factor of 4. The overall application time including all book-keeping is faster by a factor of 2 on various types of images.
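The block-wise shrink and expand steps can be sketched with array operations: merge each block of pixels into one node by pooling its terminal capacities, solve the mincut on the much smaller graph, then expand the coarse labels back to full resolution to seed the full-detail cut. This is a minimal sketch of the data movement only (the mincut solver itself is omitted), and the 2x2 blocking and names are illustrative.

```python
import numpy as np

def shrink_unaries(source_cap, sink_cap, block=2):
    """Merge each block x block group of pixels into one node by
    summing its terminal (source/sink) capacities, yielding a graph
    with block**2 times fewer nodes."""
    h, w = source_cap.shape
    hs, ws = h // block, w // block
    s = source_cap[:hs * block, :ws * block].reshape(hs, block, ws, block).sum(axis=(1, 3))
    t = sink_cap[:hs * block, :ws * block].reshape(hs, block, ws, block).sum(axis=(1, 3))
    return s, t

def expand_labels(coarse_labels, block=2):
    """Expand the mincut labels of the shrunk graph back to full
    resolution; the expanded labels reparameterize the original graph
    so the final (still optimal) cut converges faster."""
    return np.kron(coarse_labels, np.ones((block, block), dtype=coarse_labels.dtype))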


Archive | 2018

RefocusGAN: Scene Refocusing Using a Single Image

Parikshit Sakurikar; Ishit Mehta; Vineeth N. Balasubramanian; P. J. Narayanan

Post-capture control of the focus position of an image is a useful photographic tool. Changing the focus of a single image involves the complex task of simultaneously estimating the radiance and the defocus radius of all scene points. We introduce RefocusGAN, a deblur-then-reblur approach to single image refocusing. We train conditional adversarial networks for deblurring and refocusing using wide-aperture images created from light-fields. By appropriately conditioning our networks with a focus measure, an in-focus image and a refocus control parameter δ, we are able to achieve generic free-form refocusing over a single image.
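The conditioning described above amounts to assembling a multi-channel network input. The sketch below shows one plausible way to stack the in-focus image, a per-pixel focus measure, and a constant plane holding the refocus parameter δ; the exact conditioning layout used in the paper may differ, and all names here are illustrative.

```python
import numpy as np

def build_condition(in_focus_rgb, focus_measure, delta):
    """Stack an (h, w, 3) in-focus image, an (h, w) focus-measure map,
    and a constant delta plane into one (h, w, 5) conditioning input
    for a refocusing generator. Illustrative sketch only."""
    h, w, _ = in_focus_rgb.shape
    delta_plane = np.full((h, w, 1), delta, dtype=np.float32)
    return np.concatenate(
        [in_focus_rgb.astype(np.float32),
         focus_measure.reshape(h, w, 1).astype(np.float32),
         delta_plane], axis=2)
```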


International Conference on Multimedia and Expo | 2017

SynCam: Capturing sub-frame synchronous media using smartphones

Ishit Mehta; Parikshit Sakurikar; Rajvi Shah; P. J. Narayanan

Smartphones have become the de facto capture devices for everyday photography. Unlike traditional digital cameras, smartphones are versatile devices with auxiliary sensors, processing power, and networking capabilities. In this work, we harness the communication capabilities of smartphones and present a synchronous, coordinated multi-camera capture system. Synchronous capture is important for many image/video fusion and 3D reconstruction applications, and the proposed system provides an inexpensive and effective means to capture multi-camera media for them. Our coordinated capture system is based on a wireless protocol that uses NTP-based synchronization and device-specific lag compensation. It achieves sub-frame synchronization across all participating smartphones, even across heterogeneous makes and models. We propose a new method based on fiducial markers displayed on an LCD screen to temporally calibrate smartphone cameras. We demonstrate the utility and versatility of this system to enhance traditional videography and to create novel visual representations such as panoramic videos, HDR videos, multi-view 3D reconstruction, multi-flash imaging, and multi-camera social media.
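The NTP-based synchronization with device-specific lag compensation can be sketched as follows. The clock-offset formula is the standard NTP estimate from one request/response exchange; the lag-compensation function and its names are illustrative assumptions about how a capture would be scheduled.

```python
def ntp_offset(t0, t1, t2, t3):
    """Standard NTP clock-offset estimate from one exchange:
    t0 = phone send time, t1 = server receive time,
    t2 = server send time, t3 = phone receive time.
    Returns how far the server clock is ahead of the phone clock."""
    return ((t1 - t0) + (t2 - t3)) / 2.0

def local_trigger_time(target_server_time, offset, device_lag):
    """Sketch: convert the agreed server-side capture instant to this
    phone's clock and subtract a per-model shutter lag (measured once
    per device) so all phones expose at the same instant."""
    return target_server_time - offset - device_lag
```

Each phone runs the exchange against a common coordinator, computes its own offset and applies its own measured lag, which is how heterogeneous devices can still fire within a fraction of a frame of each other.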


Indian Conference on Computer Vision, Graphics and Image Processing | 2016

Intrinsic image decomposition using focal stacks

Saurabh Saini; Parikshit Sakurikar; P. J. Narayanan

In this paper, we present a novel method (RGBF-IID) for intrinsic image decomposition of a scene in the wild, without any restrictions on the complexity, illumination or scale of the image. We use focal stacks of the scene as input. A focal stack captures a scene at varying focal distances. Since focus depends on distance to the object, this representation carries information beyond an RGB image, approaching that of an RGBD image with depth; we call our representation an RGBF image to highlight this. We use a robust focus measure and a generalized random walk algorithm to compute dense probability maps across the stack. These maps are used to define sparse local and global pixel neighbourhoods, adhering to the structure of the underlying 3D scene. We use these neighbourhood correspondences with standard chromaticity assumptions as constraints in an optimization system. We present our results on both indoor and outdoor scenes using manually captured stacks of random objects under natural as well as artificial lighting conditions. We also test our system on a larger dataset of synthetically generated focal stacks from the NYUv2 and MPI Sintel datasets and show competitive performance against current state-of-the-art IID methods that use RGBD images. Our method provides strong evidence for the potential of the RGBF modality in place of RGBD in computer vision.
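The per-pixel focus probability maps at the heart of the RGBF representation can be sketched simply: compute a focus measure on every slice of the stack and normalize across slices at each pixel. The paper uses a more robust measure and a generalized random walk; the plain Laplacian magnitude and direct normalization below are an illustrative stand-in.

```python
import numpy as np

def focus_probability_maps(stack, eps=1e-8):
    """Turn a focal stack (list of 2-D grayscale arrays) into
    per-slice focus probability maps of shape (num_slices, h, w):
    at each pixel the values sum to ~1 across slices, peaking on
    the slice where that pixel is sharpest."""
    maps = []
    for img in stack:
        # 4-neighbour Laplacian magnitude as a simple focus measure.
        lap = np.abs(
            4 * img
            - np.roll(img, 1, 0) - np.roll(img, -1, 0)
            - np.roll(img, 1, 1) - np.roll(img, -1, 1))
        maps.append(lap)
    maps = np.stack(maps)
    return maps / (maps.sum(axis=0) + eps)  # normalize per pixel
```

In the paper these dense probability maps then drive the choice of local and global pixel neighbourhoods used as constraints in the decomposition.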


Indian Conference on Computer Vision, Graphics and Image Processing | 2012

Increasing intensity resolution on a single display using spatio-temporal mixing

Pawan Harish; Parikshit Sakurikar; P. J. Narayanan

Displays have seen many improvements over the years, with enhancements in spatial resolution, vertical refresh rate, and more, providing better and smoother visual experiences. Colour intensity resolution, however, has not changed much over the past few decades: most displays are still limited to 8 bits per channel. Simultaneously, much work has gone into capturing high dynamic range images, and mapping these directly to current displays loses information that may be critical to many applications. We present a way to enhance the intensity resolution of a given display by mixing intensities over spatial or temporal domains. Our system sacrifices high vertical refresh and spatial resolution in order to gain intensity resolution. We present three ways to mix intensities: spatially, temporally and spatio-temporally. The systems produce in-between intensities not present on the base display, which are clearly distinguishable by the naked eye. We evaluate our systems using both a camera and human subjects, evaluating whether they scale the intensity resolution and also ensuring that the newly generated intensities follow the display model.
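The mixing idea can be sketched directly: a target intensity between two adjacent 8-bit levels is rendered by showing the two nearest displayable levels in the right proportion, over successive frames (temporal) or over a small pixel block (spatial), so the eye averages them. This is a minimal sketch under a linear-averaging assumption; names are illustrative.

```python
import numpy as np

def temporal_mix(target, num_frames=4):
    """Temporal mixing sketch: `target` is a float intensity in
    [0, 255]. Returns a frame sequence using only the two nearest
    integer levels, whose average approximates the target."""
    lo, hi = int(np.floor(target)), int(np.ceil(target))
    frac = target - lo
    n_hi = int(round(frac * num_frames))  # frames at the higher level
    return [hi] * n_hi + [lo] * (num_frames - n_hi)

def spatial_mix(target, block=2):
    """Spatial variant: the same trade-off distributed over a
    block x block pixel group that the eye fuses at viewing distance."""
    frames = temporal_mix(target, num_frames=block * block)
    return np.array(frames).reshape(block, block)
```

With 4 frames (or a 2x2 block) this adds two extra bits of effective intensity resolution, which is exactly the refresh-for-intensity (or space-for-intensity) trade the abstract describes.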


arXiv: Distributed, Parallel, and Cluster Computing | 2013

CPU and/or GPU: Revisiting the GPU vs. CPU Myth

Kishore Kothapalli; Dip Sankar Banerjee; P. J. Narayanan; Surinder Sood; Aman Kumar Bahl; Shashank Sharma; Shrenik Lad; Krishna Kumar Singh; Kiran Kumar Matam; Sivaramakrishna Bharadwaj; Rohit Nigam; Parikshit Sakurikar; Aditya Deshpande; Ishan Misra; Siddharth Choudhary; Shubham Gupta


International Conference on Computer Vision | 2017

Composite Focus Measure for High Quality Depth Maps

Parikshit Sakurikar; P. J. Narayanan

Collaboration


Dive into Parikshit Sakurikar's collaboration network.

Top Co-Authors

P. J. Narayanan, International Institute of Information Technology
Ishit Mehta, International Institute of Information Technology
Dip Sankar Banerjee, International Institute of Information Technology
Kishore Kothapalli, International Institute of Information Technology
Aditya Deshpande, International Institute of Information Technology
Ishan Misra, International Institute of Information Technology
Kiran Kumar Matam, International Institute of Information Technology
Pawan Harish, International Institute of Information Technology
Pranjal Kumar Rai, International Institute of Information Technology
Rajvi Shah, International Institute of Information Technology