Publication


Featured research published by Suren Jayasuriya.


Computer Vision and Pattern Recognition | 2016

ASP Vision: Optically Computing the First Layer of Convolutional Neural Networks Using Angle Sensitive Pixels

Huaijin Chen; Suren Jayasuriya; Jiyue Yang; Judy Stephen; Sriram Sivaramakrishnan; Ashok Veeraraghavan; Alyosha Molnar

Deep learning using convolutional neural networks (CNNs) is quickly becoming the state of the art for challenging computer vision applications. However, deep learning's power consumption and bandwidth requirements currently limit its application in embedded and mobile systems with tight energy budgets. In this paper, we explore the energy savings of optically computing the first layer of CNNs. To do so, we utilize bio-inspired Angle Sensitive Pixels (ASPs), custom CMOS diffractive image sensors which act similarly to Gabor filter banks in the V1 layer of the human visual cortex. ASPs replace both image sensing and the first layer of a conventional CNN by directly performing optical edge filtering, saving sensing energy, data bandwidth, and the FLOPs the first CNN layer would otherwise require. Our experimental results (on both synthetic data and a hardware prototype) for a variety of vision tasks, such as digit recognition, object recognition, and face identification, demonstrate a 97% reduction in image sensor power consumption and a 90% reduction in data bandwidth from sensor to CPU, while achieving performance similar to traditional deep learning pipelines.
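
The substitution at the heart of ASP Vision can be sketched compactly: the learned first convolutional layer is replaced by a fixed bank of Gabor-like filters, which the ASP optics compute before any digital processing. The toy sketch below uses illustrative filter parameters (not the paper's measured ASP responses) and a random image as a stand-in for sensor data.

```python
# Minimal sketch: a fixed Gabor filter bank standing in for the optically
# computed first CNN layer. Filter parameters are illustrative.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, sigma=2.0, wavelength=4.0, size=9):
    """Real part of a Gabor filter at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def asp_first_layer(image, n_orientations=4):
    """'Optically' compute the first layer: fixed Gabor responses, no learned weights."""
    thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
    return np.stack([convolve2d(image, gabor_kernel(t), mode="same")
                     for t in thetas])

image = np.random.rand(64, 64)        # stand-in for a sensor image
features = asp_first_layer(image)     # shape (4, 64, 64), fed to the rest of the CNN
print(features.shape)
```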


International Conference on Computational Photography | 2014

A switchable light field camera architecture with Angle Sensitive Pixels and dictionary-based sparse coding

Matthew Hirsch; Sriram Sivaramakrishnan; Suren Jayasuriya; Albert Wang; Alyosha Molnar; Ramesh Raskar; Gordon Wetzstein

We propose a flexible light field camera architecture that is at the convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor comprising tailored Angle Sensitive Pixels and advanced reconstruction algorithms, we show that, in contrast to light field cameras today, our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
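
A minimal sketch of the sparsity-constrained recovery mode, assuming a generic sensing matrix A (standing in for the ASP angular projections) and a random stand-in for the learned dictionary D: ISTA solves min_x 0.5*||y - A D x||^2 + lam*||x||_1 from a single coded sensor image, and the light field estimate is D x.

```python
# Minimal sketch of dictionary-based sparse recovery from one coded image.
# A and D are random stand-ins for the sensor model and learned dictionary.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 512                         # light-field dim, measurements, atoms
A = rng.standard_normal((m, n)) / np.sqrt(m)   # sensor measurement matrix
D = rng.standard_normal((n, k)) / np.sqrt(n)   # overcomplete dictionary (stand-in)

x_true = np.zeros(k)
x_true[rng.choice(k, 8, replace=False)] = rng.standard_normal(8)
y = A @ D @ x_true                             # single coded sensor image

def ista(y, M, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5||y - Mx||^2 + lam||x||_1."""
    L = np.linalg.norm(M, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        x = x + (M.T @ (y - M @ x)) / L        # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(y, A @ D)
lf_hat = D @ x_hat                             # reconstructed (vectorized) light field
print(np.linalg.norm(lf_hat - D @ x_true) / np.linalg.norm(D @ x_true))
```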


Custom Integrated Circuits Conference | 2014

A Baseband Technique for Automated LO Leakage Suppression Achieving < −80 dBm in Wideband Passive Mixer-First Receivers

Suren Jayasuriya; Dong Yang; Alyosha Molnar

A baseband technique is presented to detect and suppress LO leakage in wideband passive mixer-first receivers. Using a variable shunting resistance on the RF port, the LO leakage signal is modulated, down-converted, and detected from the baseband outputs. Current DACs injecting into the baseband port are up-converted and can be adjusted to cancel the LO leakage. Suppression of the LO on the RF port below −80 dBm is shown with a fully automated algorithm, without the aid of RF spectrum monitoring.
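
The fully automated algorithm invites a toy sketch: two current-DAC codes (I and Q) are adjusted to minimize the leakage power reported by the baseband detector, with no RF spectrum monitor in the loop. The leakage model below is a hypothetical stand-in, not the paper's circuit.

```python
# Toy calibration loop: greedy coordinate descent over integer DAC codes,
# driven only by a baseband power detector. Leakage values are made up.
import numpy as np

LEAK_I, LEAK_Q = 17, -9                  # unknown leakage to cancel (toy values)

def detected_power(dac_i, dac_q):
    """Stand-in for the baseband leakage detector (monotone in residual)."""
    return (dac_i + LEAK_I) ** 2 + (dac_q + LEAK_Q) ** 2

def calibrate(dac_range=64, n_passes=40):
    code = [0, 0]
    for _ in range(n_passes):
        for axis in (0, 1):
            for step in (-1, 1):
                trial = list(code)
                trial[axis] = int(np.clip(code[axis] + step, -dac_range, dac_range))
                if detected_power(*trial) < detected_power(*code):
                    code = trial           # keep any step that lowers leakage
    return code

print(calibrate())                         # converges near [-17, 9]
```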


Medical Decision Making | 2016

Changing Cycle Lengths in State-Transition Models: Challenges and Solutions

Jagpreet Chhatwal; Suren Jayasuriya; Elamin H. Elbasha

The choice of a cycle length in state-transition models should be determined by the frequency of clinical events and interventions. Sometimes there is a need to decrease the cycle length of an existing state-transition model to reduce error in outcomes resulting from discretization of the underlying continuous-time phenomena, or to increase the cycle length to gain computational efficiency. Cycle length conversion is also frequently required if a new state-transition model is built using observational data that have a different measurement interval than the model's cycle length. We show that a commonly used method of converting transition probabilities to different cycle lengths is incorrect and can provide imprecise estimates of model outcomes. We present an accurate approach that is based on finding the root of a transition probability matrix using eigendecomposition. We present the underlying mathematical challenges of converting cycle length in state-transition models and provide numerical approximation methods for when the eigendecomposition method fails. Several examples and analytical proofs show that our approach is more general and leads to more accurate estimates of model outcomes than the commonly used approach. MATLAB code and a user-friendly online toolkit are made available for the implementation of the proposed methods.
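
The eigendecomposition approach admits a short worked example: to shorten a yearly cycle to monthly, take the 12th root of the transition probability matrix, P_month = V diag(lambda)^(1/12) V^-1. The 3-state matrix below is illustrative; as the paper notes, the matrix root can have negative or complex entries, in which case the numerical approximation methods apply instead.

```python
# Worked example: converting a yearly transition matrix to a monthly one
# via the matrix root computed by eigendecomposition. Matrix is illustrative.
import numpy as np

P_year = np.array([[0.70, 0.20, 0.10],   # healthy -> {healthy, sick, dead}
                   [0.00, 0.60, 0.40],   # sick    -> {healthy, sick, dead}
                   [0.00, 0.00, 1.00]])  # dead is absorbing

def matrix_root(P, k):
    """k-th root of a transition matrix via eigendecomposition."""
    eigvals, V = np.linalg.eig(P)
    root = V @ np.diag(eigvals ** (1.0 / k)) @ np.linalg.inv(V)
    return np.real_if_close(root)

P_month = matrix_root(P_year, 12)
print(np.allclose(np.linalg.matrix_power(P_month, 12), P_year))  # True

# The commonly used (incorrect) shortcut converts each probability separately,
# p_month = 1 - (1 - p_year)**(1/12), which generally does not compose back to P_year.
```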


International Conference on 3D Vision | 2015

Depth Fields: Extending Light Field Techniques to Time-of-Flight Imaging

Suren Jayasuriya; Adithya Kumar Pediredla; Sriram Sivaramakrishnan; Alyosha Molnar; Ashok Veeraraghavan

A variety of techniques such as light field, structured illumination, and time-of-flight (TOF) are commonly used for depth acquisition in consumer imaging, robotics, and many other applications. Unfortunately, each technique suffers from its individual limitations, preventing robust depth sensing. In this paper, we explore the strengths and weaknesses of combining light field and time-of-flight imaging, particularly the feasibility of an on-chip implementation as a single hybrid depth sensor. We refer to this combination as depth field imaging. Depth fields combine light field advantages such as synthetic aperture refocusing with TOF imaging advantages such as high depth resolution and coded signal processing to resolve multipath interference. We show applications including synthesizing virtual apertures for TOF imaging, improved depth mapping through partial and scattering occluders, and single frequency TOF phase unwrapping. Utilizing space, angle, and temporal coding, depth fields can improve depth sensing in the wild and generate new insights into the dimensions of light's plenoptic function.
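
One depth-field capability, synthetic aperture refocusing of TOF data, can be sketched directly: per-view depth maps indexed by (u, v, x, y) are sheared and averaged exactly as light-field refocusing shears intensity views. The data and unit-baseline shift model below are stand-ins.

```python
# Minimal sketch: shift-and-average refocusing over a 4D depth field.
# The depth field is synthetic; shifts assume a unit baseline between views.
import numpy as np

U, V, X, Y = 5, 5, 64, 64
depth_field = np.random.rand(U, V, X, Y)       # stand-in for captured TOF depth maps

def refocus(field, alpha):
    """Shift-and-average refocus; alpha sets the synthetic focal plane."""
    U, V, X, Y = field.shape
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))   # integer-pixel shear
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(field[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

refocused = refocus(depth_field, alpha=1.5)    # occluders average out of focus
print(refocused.shape)
```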


Optics Letters | 2015

Dual light field and polarization imaging using CMOS diffractive image sensors

Suren Jayasuriya; Sriram Sivaramakrishnan; Ellen Chuang; Debashree Guruaribam; Albert Wang; Alyosha Molnar

In this Letter, we present, to the best of our knowledge, the first integrated CMOS image sensor that can simultaneously perform light field and polarization imaging without the use of external filters or additional optical elements. Previous work has shown how photodetectors with two stacks of integrated metal gratings above them (called angle sensitive pixels) diffract light in a Talbot pattern to capture four-dimensional light fields. We show that, in addition to diffractive imaging, these gratings polarize incoming light, and we characterize the response of these sensors to polarization and incidence angle. Finally, we show two applications of polarization imaging: imaging stress-induced birefringence and identifying specular reflections in scenes to improve light field algorithms for those scenes.
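
A rough sketch of a joint response model, in the spirit of the characterization described above: the stacked gratings impose a sinusoidal (Talbot) modulation in incidence angle and a Malus-law modulation in polarization angle. The parameters below (angular sensitivity, modulation depths) are illustrative, not measured values.

```python
# Illustrative joint angle/polarization pixel response model; all parameters
# are hypothetical stand-ins for a measured sensor characterization.
import numpy as np

def asp_response(theta_inc, phi_pol, beta=12.0, alpha_grating=0.0,
                 m_angle=0.8, m_pol=0.5):
    """Relative pixel response vs. incidence angle and polarization angle (radians)."""
    angular = 1.0 + m_angle * np.cos(beta * theta_inc)             # Talbot modulation
    polar = 1.0 + m_pol * np.cos(2.0 * (phi_pol - alpha_grating))  # Malus-law term
    return angular * polar

thetas = np.linspace(-0.2, 0.2, 5)       # incidence angles
phis = np.linspace(0.0, np.pi, 5)        # polarization angles
print(np.round(asp_response(thetas[:, None], phis[None, :]), 2))
```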


Bulletin of Mathematical Biology | 2012

Effects of Time-Dependent Stimuli in a Competitive Neural Network Model of Perceptual Rivalry

Suren Jayasuriya; Zachary P. Kilpatrick

We analyze a competitive neural network model of perceptual rivalry that receives time-varying inputs. The time-dependence of the inputs can be discrete or smooth. Spike frequency adaptation provides negative feedback that generates network oscillations when inputs are constant in time. Oscillations that resemble perceptual rivalry involve only one population being “ON” at a time, representing the dominance of a single percept. As shown in Laing and Chow (J. Comput. Neurosci. 12(1):39–53, 2002), for sufficiently high contrast, one can derive relationships between dominance times and contrast that agree with Levelt’s propositions (Levelt in On binocular rivalry, 1965). Time-dependent stimuli give rise to novel network oscillations where both populations, a single population, or neither population is “ON” at any given time. When a single population receives an interrupted stimulus, the fundamental mode of behavior we find is phase-locking, where the temporally driven population locks its state to the stimulus. Other behaviors are analyzed as bifurcations from this forced oscillation, using fast/slow analysis that exploits the slow timescale of adaptation. When both populations receive time-varying input, we find mixtures of fusion and sole population dominance, and we partition parameter space into particular oscillation types. Finally, when a single population’s input contrast is smoothly varied in time, 1:n mode-locked states arise through period-adding bifurcations beyond phase-locking. Our results provide several testable predictions for future psychophysical experiments on perceptual rivalry.
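
A minimal sketch of the underlying model class, in the spirit of Laing and Chow (2002): two populations excite themselves, inhibit each other, and are slowed by a slow spike-frequency-adaptation variable, with an interrupted (square-wave) stimulus gating one population. All parameter values are illustrative choices, not those of the paper.

```python
# Two-population competitive rate model with slow adaptation, Euler-integrated.
# Parameters and the square-wave stimulus are illustrative stand-ins.
import numpy as np

def f(x):                                # firing-rate nonlinearity
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))

def simulate(T=40.0, dt=0.001, I1=0.6, I2=0.6, period=8.0):
    n = int(T / dt)
    u = np.zeros((n, 2))                 # population activities
    a = np.zeros(2)                      # slow adaptation variables
    w_self, w_cross, g, tau_a = 0.4, 1.0, 0.5, 2.0
    for i in range(1, n):
        t = i * dt
        # interrupted stimulus to population 1: square-wave gating
        inp = np.array([I1 * (np.sin(2 * np.pi * t / period) > 0), I2])
        u1, u2 = u[i - 1]
        du = np.array([
            -u1 + f(w_self * u1 - w_cross * u2 - g * a[0] + inp[0]),
            -u2 + f(w_self * u2 - w_cross * u1 - g * a[1] + inp[1]),
        ])
        a += dt * (u[i - 1] - a) / tau_a # adaptation tracks activity slowly
        u[i] = u[i - 1] + dt * du
    return u

activity = simulate()
print(activity.max(axis=0), activity.min(axis=0))  # alternating dominance
```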


Computer Vision and Pattern Recognition | 2017

Compressive Light Field Reconstructions Using Deep Learning

Mayank Gupta; Arjun Jauhari; Kuldeep Kulkarni; Suren Jayasuriya; Alyosha Molnar; Pavan K. Turaga

Light field imaging is limited by the computational demands of high sampling in both the spatial and angular dimensions. Single-shot light field cameras sacrifice spatial resolution to sample angular viewpoints, typically by multiplexing incoming rays onto a 2D sensor array. While this resolution can be recovered using compressive sensing, such iterative solutions are slow at processing a light field. We present a deep learning approach using a new two-branch network architecture, consisting jointly of an autoencoder and a 4D CNN, to recover a high-resolution 4D light field from a single coded 2D image. This network decreases reconstruction time significantly while achieving average PSNR values of 26-32 dB on a variety of light fields. In particular, reconstruction time is decreased from 35 minutes to 6.7 minutes compared to the dictionary-based method at equivalent visual quality. These reconstructions are performed at small sampling/compression ratios, as low as 8%, allowing for cheaper coded light field cameras. We test our network reconstructions on synthetic light fields, simulated coded measurements of real light fields captured from a Lytro Illum camera, and real coded images from a custom CMOS diffractive light field camera. The combination of compressive light field capture with deep learning opens the potential for real-time light field video acquisition systems in the future.
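
A schematic sketch of a two-branch reconstruction network, assuming PyTorch: one branch is a fully connected autoencoder on the coded 2D measurement, and the second refines its upsampled output with volumetric convolutions (a stand-in for the paper's 4D CNN, here approximated with Conv3d over stacked angular views). Layer sizes are illustrative, not the paper's architecture.

```python
# Schematic two-branch network: autoencoder branch + volumetric refinement.
# Shapes and layer counts are illustrative stand-ins.
import torch
import torch.nn as nn

class TwoBranchLF(nn.Module):
    def __init__(self, n_views=25, h=32, w=32):
        super().__init__()
        self.n_views, self.h, self.w = n_views, h, w
        m = h * w
        self.autoencoder = nn.Sequential(          # branch 1: coded image -> rough LF
            nn.Linear(m, 4 * m), nn.ReLU(),
            nn.Linear(4 * m, n_views * m),
        )
        self.refine = nn.Sequential(               # branch 2: volumetric refinement
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, coded):                      # coded: (batch, h*w)
        rough = self.autoencoder(coded)
        vol = rough.view(-1, 1, self.n_views, self.h, self.w)
        return vol + self.refine(vol)              # residual refinement

net = TwoBranchLF()
lf = net(torch.randn(2, 32 * 32))
print(lf.shape)                                    # (2, 1, 25, 32, 32)
```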


Computer Vision and Pattern Recognition | 2017

Reconstructing Intensity Images from Binary Spatial Gradient Cameras

Suren Jayasuriya; Orazio Gallo; Jinwei Gu; Timo Aila; Jan Kautz

Binary gradient cameras extract edge and temporal information directly on the sensor, allowing for low-power, low-bandwidth, and high-dynamic-range capabilities, all critical factors for the deployment of embedded computer vision systems. However, these types of images require specialized computer vision algorithms and are not easy for a human observer to interpret. In this paper, we propose to recover an intensity image from a single binary spatial gradient image with a deep autoencoder. Extensive experimental results on both simulated and real data show the effectiveness of the proposed approach.
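
The sensing model being inverted is easy to write down: a binary spatial gradient camera keeps only the sign of local intensity differences. The threshold and neighborhood choice below are illustrative; the paper's contribution is training a deep autoencoder to invert this lossy mapping.

```python
# Minimal sketch of the binary spatial gradient sensing model: only the sign
# of local differences survives. Threshold and neighborhood are illustrative.
import numpy as np

def binary_gradient(image, thresh=0.02):
    """Sign of horizontal/vertical differences, quantized to {-1, 0, +1}."""
    gx = np.diff(image, axis=1, prepend=image[:, :1])
    gy = np.diff(image, axis=0, prepend=image[:1, :])
    bx = np.sign(gx) * (np.abs(gx) > thresh)
    by = np.sign(gy) * (np.abs(gy) > thresh)
    return bx, by

image = np.random.rand(64, 64)
bx, by = binary_gradient(image)
print(bx.min(), bx.max())   # intensity is discarded; only edge signs remain
```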


IEEE Hot Chips Symposium | 2016

Experiences using a novel Python-based hardware modeling framework for computer architecture test chips

Christopher Torng; Moyang Wang; Bharath Sudheendra; Nagaraj Murali; Suren Jayasuriya; Shreesha Srinath; Taylor Pritchard; Robin Ying; Christopher Batten

This poster describes a taped-out 2 × 2 mm, 1.3M-transistor test chip in an IBM 130 nm process, designed using our new Python-based hardware modeling framework. The goal of our tapeout was to demonstrate the ability of this framework to enable agile hardware design flows.
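
As a generic, hypothetical illustration of what Python-based hardware modeling looks like (this is not the framework's actual API, which the poster does not detail), the sketch below models a clocked accumulator at cycle level; a real framework would additionally translate such a model to Verilog for tapeout.

```python
# Hypothetical, generic cycle-level hardware model in plain Python; not the
# actual API of the framework described in the poster.
class Register:
    """A clocked register: output follows input one cycle later."""
    def __init__(self, width=32):
        self.width, self.next_val, self.out = width, 0, 0

    def tick(self):                      # advance one clock edge
        self.out = self.next_val & ((1 << self.width) - 1)

class Accumulator:
    """Adds its input into a register every cycle."""
    def __init__(self, width=32):
        self.reg = Register(width)

    def step(self, inp):
        self.reg.next_val = self.reg.out + inp
        self.reg.tick()
        return self.reg.out

acc = Accumulator()
print([acc.step(x) for x in (1, 2, 3, 4)])   # [1, 3, 6, 10]
```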
