Robert LiKamWa
Rice University
Publications
Featured research published by Robert LiKamWa.
international conference on mobile systems, applications, and services | 2013
Robert LiKamWa; Bodhi Priyantha; Matthai Philipose; Lin Zhong; Paramvir Bahl
A major hurdle to frequently performing mobile computer vision tasks is the high power consumption of image sensing. In this work, we report the first publicly known experimental and analytical characterization of CMOS image sensors. We find that modern image sensors are not energy-proportional: energy per pixel is in fact inversely proportional to the frame rate and resolution of image capture, so image sensor systems fail to realize an important principle of energy-aware system design: trading quality for energy efficiency. We reveal two energy-proportional mechanisms, supported by current image sensors but unused by mobile systems: (i) using an optimal clock frequency reduces power by up to 50% and 30% for low-quality single-frame (photo) and sequential-frame (video) capture, respectively; (ii) by entering a low-power standby mode between frames, an image sensor achieves almost constant energy per pixel for video capture at low frame rates, yielding an additional 40% power reduction. We also propose architectural modifications to the image sensor that would further improve operational efficiency. Finally, we use computer vision benchmarks to show the performance and efficiency tradeoffs that can be achieved with existing image sensors. For image registration, a key primitive for image mosaicking and depth estimation, we can achieve a 96% success rate at 3 FPS and 0.1 MP resolution. At these quality metrics, an optimal clock frequency reduces image sensor power consumption by 36% and aggressive standby mode reduces power consumption by 95%.
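To make the energy-proportionality finding concrete, the following back-of-the-envelope model (a minimal sketch; the power and clock constants are illustrative assumptions, not measurements from the paper) shows why energy per pixel balloons at low frame rates when the sensor idles between frames, and flattens when it enters standby instead.

```python
# Illustrative energy model for image sensing. All constants are assumptions
# chosen for demonstration, not measurements from the paper.

PIXEL_CLOCK_HZ = 48e6   # assumed pixel clock frequency
P_ACTIVE_W = 0.30       # assumed sensor power while reading out a frame
P_IDLE_W = 0.25         # assumed power when idling between frames (no standby)
P_STANDBY_W = 0.001     # assumed power in low-power standby mode

def energy_per_pixel(resolution_mp, fps, standby):
    """Energy per pixel (joules) for video capture at a given quality."""
    pixels = resolution_mp * 1e6
    t_frame = 1.0 / fps                      # period of one frame
    t_readout = pixels / PIXEL_CLOCK_HZ      # time spent actively reading pixels
    t_between = max(t_frame - t_readout, 0)  # time spent between frames
    p_between = P_STANDBY_W if standby else P_IDLE_W
    energy = P_ACTIVE_W * t_readout + p_between * t_between
    return energy / pixels

for fps in (1, 5, 15, 30):
    no_sb = energy_per_pixel(0.3, fps, standby=False)
    sb = energy_per_pixel(0.3, fps, standby=True)
    print(f"{fps:2d} FPS: {no_sb*1e9:8.2f} nJ/px idle, {sb*1e9:6.2f} nJ/px standby")
```

Without standby, the fixed idle power is amortized over fewer pixels as the frame rate drops, which is exactly the inverse proportionality the abstract identifies; with standby, energy per pixel stays nearly constant.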
international symposium on computer architecture | 2016
Robert LiKamWa; Yunhui Hou; Julian Gao; Mia Polansky; Lin Zhong
Continuous mobile vision is limited by the inability to efficiently capture image frames and process vision features, largely due to the energy burden of analog readout circuitry, data traffic, and intensive computation. To promote efficiency, we shift early vision processing into the analog domain. The result is RedEye, an analog convolutional image sensor that performs layers of a convolutional neural network in the analog domain before quantization. We design RedEye to mitigate analog design complexity, using a modular column-parallel design to promote physical design reuse and algorithmic cyclic reuse. RedEye uses programmable mechanisms to admit noise for tunable energy reduction. Compared to conventional systems, RedEye achieves an 85% reduction in sensor energy, a 73% reduction in cloudlet-based system energy, and a 45% reduction in computation-based system energy.
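The core idea, convolving before quantizing and deliberately admitting analog noise as an energy knob, can be illustrated with a small simulation (a sketch under assumed noise levels; it models analog computation only as additive Gaussian noise and is not the RedEye circuit):

```python
import numpy as np

# Sketch of RedEye's idea: run a convolution layer "in analog" (modeled here
# as additive Gaussian read noise) and quantize only afterward. The noise
# level is a tunable knob trading accuracy for energy; the sigma value below
# is an illustrative assumption.

rng = np.random.default_rng(0)

def analog_conv2d(image, kernel, noise_sigma):
    """Valid 2-D convolution with additive analog noise, then 8-bit quantization."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    out += rng.normal(0.0, noise_sigma, out.shape)  # analog-domain noise
    return np.clip(np.round(out), 0, 255)           # quantize after convolution

image = rng.integers(0, 256, (32, 32)).astype(float)
kernel = np.ones((3, 3)) / 9.0                      # simple box filter
clean = analog_conv2d(image, kernel, noise_sigma=0.0)
noisy = analog_conv2d(image, kernel, noise_sigma=2.0)
print("mean abs error from admitted noise:", np.abs(clean - noisy).mean())
```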
asia pacific workshop on systems | 2014
Robert LiKamWa; Zhen Wang; Aaron Carroll; Felix Xiaozhu Lin; Lin Zhong
The Google Glass is a mobile device designed to be worn as eyeglasses. This form factor enables new use cases, such as hands-free video chat and web search. However, its shape also hampers its potential: (1) battery size, and therefore lifetime, is limited by the need for the device to be lightweight, and (2) high-power processing leads to significant heat, which should be limited due to the compact form factor and proximity to the user's skin. We use an Explorer Edition of Glass (XE12) to study the power and thermal characteristics of optical head-mounted display devices. We share insights and implications for limiting power draw to increase the safety and utility of head-mounted devices.
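For intuition about why sustained processing power is the binding constraint on a head-mounted device, a lumped thermal-RC model (the resistance and capacitance values are assumptions for illustration, not measurements from the Glass study) relates power draw to surface temperature over time:

```python
import math

# Lumped thermal-RC sketch: T(t) = T_amb + P * R * (1 - exp(-t / (R * C))).
# R and C below are assumed values, not measurements from the paper.

T_AMBIENT_C = 25.0
R_THERMAL = 20.0   # K/W, assumed device-to-air thermal resistance
C_THERMAL = 40.0   # J/K, assumed heat capacity of the device

def surface_temp(power_w, t_seconds):
    """Device surface temperature after sustaining a constant power draw."""
    tau = R_THERMAL * C_THERMAL
    return T_AMBIENT_C + power_w * R_THERMAL * (1 - math.exp(-t_seconds / tau))

for p in (0.5, 1.0, 2.0):
    print(f"{p} W sustained -> {surface_temp(p, 600):.1f} C after 10 min")
```

Even modest sustained power yields a steady-state temperature of T_amb + P * R, which is why heat near the skin, not just battery capacity, bounds what such a device can compute.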
architectural support for programming languages and operating systems | 2012
Felix Xiaozhu Lin; Zhen Wang; Robert LiKamWa; Lin Zhong
To accomplish frequent, simple tasks with high efficiency, it is necessary to leverage the low-power, microcontroller-like processors increasingly available on mobile systems. However, existing solutions require developers to directly program the low-power processors and carefully manage inter-processor communication. We present Reflex, a suite of compiler and runtime techniques that significantly lowers the barrier for developers to leverage such low-power processors. The heart of Reflex is a software Distributed Shared Memory (DSM) that enables shared memory objects with release consistency among code running on loosely coupled processors. To achieve high energy efficiency without significantly sacrificing performance, the Reflex DSM leverages (i) extreme architectural asymmetry between low-power processors and powerful central processors, (ii) aggressive compile-time optimization, and (iii) a minimalist runtime that supports efficient message passing and event-driven execution. We report a complete realization of Reflex that runs on a TI OMAP4430-based development platform as well as on a custom tri-processor mobile platform. Using smartphone sensing applications reported in recent literature, we show that Reflex supports a programming style very close to contemporary smartphone programming. Compared to message passing, the Reflex DSM greatly reduces the effort of programming heterogeneous smartphones, eliminating up to 38% of the source lines of application code. Compared to running the same applications on existing smartphones, Reflex reduces average system power consumption by up to 81%.
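The release-consistency contract at the heart of the Reflex DSM can be sketched in a few lines: writes are buffered locally and propagated in one batch at release, and acquire pulls in the latest released state. The class and method names below are hypothetical, chosen for illustration rather than taken from Reflex:

```python
import threading

# Toy release-consistency DSM sketch. The "home" copy stands in for the
# replica on the other processor; names and structure are hypothetical.

class SharedObject:
    def __init__(self):
        self._home = {}      # released state, visible to the other processor
        self._local = {}     # local working copy
        self._dirty = set()  # keys written since the last release
        self._lock = threading.Lock()

    def acquire(self):
        """Pull the latest released state into the local copy."""
        with self._lock:
            self._local = dict(self._home)
            self._dirty.clear()

    def write(self, key, value):
        self._local[key] = value
        self._dirty.add(key)  # buffered; not yet visible remotely

    def release(self):
        """Propagate all buffered writes in one batched update."""
        with self._lock:
            for key in self._dirty:
                self._home[key] = self._local[key]
            self._dirty.clear()

obj = SharedObject()
obj.acquire()
obj.write("accel_sample", 42)  # invisible to the peer until release
obj.release()
```

Batching writes at release points is what lets a minimalist runtime on a microcontroller-class processor avoid per-write coherence traffic.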
international conference on mobile systems, applications, and services | 2015
Robert LiKamWa; Lin Zhong
Emerging wearable devices promise a multitude of computer vision-based applications that serve users without active engagement. However, vision algorithms are known to be resource-hungry, and modern mobile systems do not support concurrent application use of the camera. Toward supporting efficient concurrency of vision applications, we report Starfish, a split-process execution system that supports concurrent vision applications by allowing them to share computation and memory objects in a secure and efficient manner. Starfish splits the vision library from an application into a separate process, called the Core, which centrally serves all vision applications. The Core shares library call results among applications, eliminating redundant computation and memory use. Starfish supports unmodified applications and unmodified libraries without needing their source code, and guarantees correctness to the applications. In doing so, Starfish improves both the performance and energy efficiency of concurrent vision applications. Using a prototype implementation on Google Glass, we experimentally demonstrate that Starfish reduces the time spent processing repeated vision library calls by 71% - 97%. When running two to ten concurrent face recognition applications at 0.3 frames per second, Starfish reduces CPU utilization by 42% - 80%. Notably, this keeps CPU utilization below 13%, even as the number of applications increases. This reduces system power consumption by 19% - 58%, as Starfish maintains power consumption at approximately 1210 mW while running the concurrent application workloads.
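The sharing mechanism can be sketched as memoization of vision-library calls in a central Core process, keyed by the function, the frame contents, and the arguments. The names and cache design below are illustrative assumptions, not Starfish code:

```python
import hashlib

# Sketch of Starfish's sharing idea: a central "Core" memoizes vision-library
# call results so concurrent applications issuing the same call on the same
# frame reuse one computation. All names here are hypothetical.

class VisionCore:
    def __init__(self):
        self._cache = {}

    def call(self, func, frame_bytes, *args):
        key = (func.__name__,
               hashlib.sha1(frame_bytes).hexdigest(),  # identify identical frames
               args)
        if key not in self._cache:                     # first caller pays the cost
            self._cache[key] = func(frame_bytes, *args)
        return self._cache[key]                        # later callers hit the cache

def detect_faces(frame_bytes, min_size):
    print("expensive detection running")               # runs once per unique call
    return [("face", 10, 20, min_size)]

core = VisionCore()
frame = b"\x00" * 1024                                 # stand-in for a camera frame
a = core.call(detect_faces, frame, 64)                 # computes
b = core.call(detect_faces, frame, 64)                 # served from cache
assert a is b
```

In the real system the Core sits in its own process and serves applications over IPC, which is what lets unmodified applications share results transparently.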
acm/ieee international conference on mobile computing and networking | 2014
David Ramirez; Robert LiKamWa; Jason Holloway
Screen-to-camera visible-light communication links are fundamentally limited by inter-symbol interference, in which the camera receives multiple overlapping symbols in a single capture exposure. By determining interference constraints, we are able to decode symbols with multi-bit depth across all three color channels. We present Styrofoam, a coding scheme that optimally satisfies these constraints by inserting blank frames into the transmission pattern. The coding scheme improves upon the state of the art in camera-based visible-light communication by: (1) ensuring a decode with at least a half-exposure of colored multi-bit symbols, (2) limiting decode latency to two transmission frames, and (3) transmitting 0.4 bytes per grid block at the slowest camera's frame rate. In doing so, we outperform peer unsynchronized VLC transmission schemes by 2.9x. Our implementation on smartphone displays and cameras achieves 69.1 kbps.
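The blank-frame idea can be sketched as follows: each symbol is held for several display frames with a guard (black) frame in between, so a camera exposure either sees one symbol for at least half its duration or is rejected rather than misread. The frame counts and decoding rule here are illustrative assumptions, not the Styrofoam scheme itself:

```python
# Toy sketch of blank-frame guarding against inter-symbol interference.

BLANK = None  # a black frame carries no symbol

def encode(symbols, repeats=2):
    """Interleave guard frames so adjacent symbols never blend in one exposure."""
    frames = []
    for s in symbols:
        frames.extend([s] * repeats)  # hold each symbol for several display frames
        frames.append(BLANK)          # guard frame between symbols
    return frames

def decode(exposure):
    """Decode one camera exposure: accept a symbol only if it dominates."""
    counts = {}
    for f in exposure:
        if f is not BLANK:
            counts[f] = counts.get(f, 0) + 1
    if not counts:
        return None                   # exposure landed entirely on guard frames
    sym, n = max(counts.items(), key=lambda kv: kv[1])
    return sym if n >= len(exposure) / 2 else None  # require a half-exposure

frames = encode(["A", "B", "C"])
print(decode(frames[0:3]))  # 'A': exposure dominated by one symbol
print(decode(frames[1:4]))  # None: ambiguous boundary exposure is rejected
```

Rejecting ambiguous exposures instead of guessing is what bounds decode latency: the same symbol is guaranteed to appear cleanly within the next transmission frames.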
international conference on mobile systems, applications, and services | 2013
Robert LiKamWa; Bodhi Priyantha; Matthai Philipose; Lin Zhong; Paramvir Bahl
A hurdle to frequently performing mobile computer vision tasks is the high energy cost of image sensing. In particular, modern image sensors are not energy-proportional; for low-resolution, low-frame-rate capture, the image sensor consumes almost the same amount of energy as it does at high resolutions and high frame rates. We reveal two system-level energy-proportional mechanisms: (i) using an optimal pixel clock frequency; (ii) entering a low-power standby mode between frames. These techniques can be implemented in the image sensor driver with minimal hardware adjustment. Further improvements can be made by designing sensors with heterogeneous hardware architectures. With energy proportionality, computer vision frameworks can be optimized for power consumption, continuously requesting low-resolution frames at low energy while only occasionally spending high energy to request high-resolution frames. This will in turn enable low-power continuous mobile vision applications.
international conference on mobile systems, applications, and services | 2014
Robert LiKamWa
Vision algorithms have enabled our camera-equipped devices to perform powerful tasks, including barcode scanning, object detection, text recognition, and face identification. Unfortunately, current systems require significant energy to capture, process, and analyze an image through the image sensor (imager), image signal processor (ISP), and application processor, limiting the longevity of such applications to short-term, on-demand use. We propose the investigation of an image processing paradigm that uses vision task information to optimize the energy cost of the image capture pipeline.
acm/ieee international conference on mobile computing and networking | 2014
Robert LiKamWa; Eddie Reyes; Lin Zhong
While computer vision algorithms and libraries have enabled and accelerated the adoption of vision processing in mobile and wearable applications, vision is a resource-hungry operation and is thus too inefficient for multiple applications to run simultaneously. However, we observe that many vision algorithms share identical sets of frames and features, computed from the same library calls, to perform their analyses. Leveraging this observation, we design a split-process architecture that retrofits existing vision libraries to allow applications to transparently share the computational, memory, and energy overhead of vision processing.
international workshop on mobile computing systems and applications | 2018
Venkatesh Kodukula; Sai Bharadwaj Medapuram; Britton Jones; Robert LiKamWa
Many researchers in academia and industry [4, 8] advocate shifting processing near the image sensor through near-sensor accelerators to reduce data movement across energy-expensive interfaces. However, near-sensor processing also heats the sensor, increasing thermal noise and hot pixels, which degrades image quality. To understand these implications, we perform an energy and thermal characterization in the context of an augmented reality case study around visual marker detection. Our characterization shows that for a near-sensor accelerator consuming 1 W of power, dynamic range drops by 16 dB, image noise increases threefold, and the number of hot pixels multiplies by 16, degrading image quality. Such degradation impairs the task accuracy of interactive perceptual applications that require high accuracy; marker detection fails for 12% of frames when degraded by 1 minute of 1 W near-sensor power consumption. To address this, we propose temperature-driven task migration, a system-level technique that partitions processing between the thermally coupled near-sensor accelerator and the thermally isolated CPU host. Leveraging the sensor's current temperature and application-driven image fidelity requirements, this technique mitigates task accuracy issues while providing gains in energy efficiency. We discuss challenges pertaining to effective, seamless migration decisions at runtime, and propose potential solutions.
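The proposed policy can be sketched as a hysteresis controller that keeps work on the near-sensor accelerator while the sensor is cool enough for the application's fidelity requirement, and migrates it to the CPU host otherwise. The temperature thresholds and fidelity levels below are illustrative assumptions, not values from the paper:

```python
# Sketch of temperature-driven task migration with hysteresis. Thresholds
# and the fidelity model are assumptions chosen for illustration.

FIDELITY_TEMP_LIMIT_C = {  # assumed max sensor temperature per fidelity level
    "high": 45.0,          # e.g., marker detection needs low sensor noise
    "medium": 55.0,
    "low": 65.0,
}
HYSTERESIS_C = 3.0         # margin to avoid ping-ponging between targets

class MigrationPolicy:
    def __init__(self):
        self.target = "near_sensor"

    def decide(self, sensor_temp_c, fidelity):
        limit = FIDELITY_TEMP_LIMIT_C[fidelity]
        if self.target == "near_sensor" and sensor_temp_c > limit:
            self.target = "cpu_host"     # migrate away; let the sensor cool
        elif self.target == "cpu_host" and sensor_temp_c < limit - HYSTERESIS_C:
            self.target = "near_sensor"  # resume energy-efficient processing
        return self.target

policy = MigrationPolicy()
for temp in (40.0, 47.0, 46.0, 41.0):
    print(f"{temp:.1f} C -> run on {policy.decide(temp, 'high')}")
```

The hysteresis margin reflects the runtime challenge the abstract raises: migration must be seamless, so the policy should not oscillate when the sensor temperature hovers near a fidelity limit.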