Branislav Kisacanin
Texas Instruments
Publication
Featured research published by Branislav Kisacanin.
Archive | 2008
Branislav Kisacanin; Shuvra S. Bhattacharyya; Sek M. Chai
Embedded Computer Vision, exemplified by the migration from powerful workstations to embedded processors in computer vision applications, is a new and emerging field that enables an associated shift in application development and implementation. This comprehensive volume brings together a wealth of experiences from leading researchers in the field of embedded computer vision, from both academic and industrial research centers, and covers a broad range of challenges and trade-offs brought about by this paradigm shift. Part I provides an exposition of basic issues and applications in the area necessary for understanding the present and future work. Part II offers chapters based on the most recent research and results. Finally, the last part looks ahead, providing a sense of what major applications could be expected in the near future, describing challenges in mobile environments, video analytics, and automotive safety applications. Features: Discusses the latest state-of-the-art techniques in embedded computer vision Presents a thorough introductory section on hardware and architectures, design methodologies, and video analytics to aid the readers understanding through the following chapters Offers emphasis on tackling important problems for society, safety, security, health, mobility, connectivity, and energy efficiency Discusses evaluation of trade-offs required to design cost-effective systems for successful products Explores the advantages of various architectures, development of high-level software frameworks and cost-effective algorithmic alternatives Examines issues of implementation on fixed-point processors, presented through an example of an automotive safety application Offers insights from leaders in the field on what future applications will be This book is a welcome collection of stand-alone articles, ideal for researchers, practitioners, and graduate students. 
It provides historical perspective, the latest research results, and a vision for future developments in the emerging field of embedded computer vision. Supplementary material can be found at http://www.embeddedvisioncentral.com.
southwest symposium on image analysis and interpretation | 2008
Branislav Kisacanin
This paper illustrates the importance of both algorithmic and embedded software techniques for an optimal embedded implementation of an image analysis and computer vision function: the integral image. A naive, straightforward implementation of the integral image on an embedded processor will likely produce an unacceptable execution time. However, by applying recursion and double buffering, one can improve execution time by several orders of magnitude. We compare execution times and memory utilization for each of the optimization techniques applied. These techniques can also be applied to implement other computer vision functions on programmable processor architectures.
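The recursive technique mentioned in this abstract can be sketched as follows. This is an illustrative Python sketch, not the paper's DSP implementation: it contrasts a naive per-pixel rectangle summation with the standard recurrence ii[y][x] = img[y][x] + ii[y-1][x] + ii[y][x-1] - ii[y-1][x-1], which reduces the work per output pixel from O(W·H) to O(1).

```python
# Illustrative sketch of integral image computation (not the paper's
# embedded implementation). Plain Python lists keep it self-contained.

def integral_naive(img):
    """O(W*H) work per output pixel: re-sum the whole rectangle each time."""
    h, w = len(img), len(img[0])
    return [[sum(img[j][i] for j in range(y + 1) for i in range(x + 1))
             for x in range(w)] for y in range(h)]

def integral_recursive(img):
    """O(1) work per output pixel via the standard recurrence."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ii[y][x] = (img[y][x]
                        + (ii[y - 1][x] if y > 0 else 0)
                        + (ii[y][x - 1] if x > 0 else 0)
                        - (ii[y - 1][x - 1] if y > 0 and x > 0 else 0))
    return ii
```

With the integral image in hand, the sum over any axis-aligned rectangle becomes four table lookups, which is what makes features such as Haar filters cheap to evaluate on embedded processors.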
Computer Vision and Image Understanding | 2007
Mathias Kölsch; Vladimir Pavlovic; Branislav Kisacanin; Thomas S. Huang
We are at the beginning of an unprecedented growth period for computer vision. Although image processing and machine vision have long had established roles in manufacturing and industrial automation, only now are we witnessing an increase in the number of applications relying on image understanding. Computer vision technologies have become more prevalent in the past decade in both the commercial and the consumer markets. Technology-friendly areas have experienced the most noticeable influx of computer vision techniques to provide more and better services, particularly in the gaming and automotive industries, but also in medicine, security, and space exploration. This special issue of Computer Vision and Image Understanding highlights one particularly promising yet challenging task for computer vision: the facilitation of human–computer interaction (HCI). Vision is an appealing input "device" owing to its non-invasiveness, its small form factor while potentially observing a large space, its ubiquity, and its software-based configurability. Vision-based interfaces (VBI) offer more natural ways to interact, akin to human–human interaction, and the flexibility to provide disabled computer users with highly specialized means of interaction. The trend is clear, as Professor Matthew Turk, a leading HCI researcher, summarizes: "There has been growing interest in turning the camera around and using computer vision to look at people, that is, to detect and recognize human faces, track heads, faces, hands, and bodies, analyze facial expression and body movement, and recognize gestures" (Communications of the ACM, January 2004, Vol. 47, No. 1). Although vision-based HCI as a research area emerged almost two decades ago, it has now reached a level of maturity that makes it a serious contender for building interaction devices and implementing means of interaction.
A much larger and vastly improved array of methods, faster and cheaper computers, better imaging chips and optics, coupled with a more detailed understanding of the human visual system have brought forth such VBIs as the EyeToy, driver drowsiness monitoring, vehicle occupant detection for safe airbag deployment, and surgery guidance.
Signal Processing-image Communication | 2010
Branislav Kisacanin; Zoran Nikolic
In the last few years, programmable architectures centered around high-end DSP processors have emerged as the platform of choice for high-volume embedded vision applications, such as automotive safety and video surveillance. Their programmability inherently addresses the problems presented by the sheer diversity of vision algorithms. This paper provides an overview of high-impact algorithmic and software techniques for embedded vision applications implemented on programmable architectures and discusses several system-level issues. We provide a general discussion and practical examples for the following categories of algorithmic techniques: fast algorithms, reduced dimensionality and mathematical shortcuts. Additionally, we discuss the importance of software techniques such as the use of fixed-point arithmetic, reduced data transfers and cache-friendly programming. In our experience, each of these techniques is a key enabler for real-time embedded vision systems.
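The fixed-point arithmetic technique discussed in this paper can be illustrated with a hypothetical sketch (not taken from the paper): on integer-only DSPs, real-valued coefficients are scaled to integers in a Q format, multiplied with integer arithmetic, and rescaled with a shift.

```python
# Hypothetical sketch of Q15 fixed-point arithmetic, a common technique
# on integer-only DSPs. Values in [-1, 1) are stored as 16-bit integers
# scaled by 2**15.

Q = 15  # number of fractional bits

def to_q15(x):
    """Convert a float in [-1, 1) to its Q15 integer representation."""
    return int(round(x * (1 << Q)))

def from_q15(x):
    """Convert a Q15 integer back to a float."""
    return x / (1 << Q)

def q15_mul(a, b):
    """Multiply two Q15 numbers with rounding: the 30-bit product is
    shifted back down by Q bits after adding half an LSB."""
    return (a * b + (1 << (Q - 1))) >> Q

a, b = to_q15(0.5), to_q15(0.25)
product = from_q15(q15_mul(a, b))  # → 0.125
```

The half-LSB addition before the shift rounds to nearest rather than truncating toward negative infinity, which avoids a systematic bias that would otherwise accumulate across long filter chains.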
signal processing systems | 2014
Jagadeesh Sankaran; Ching-Yu Hung; Branislav Kisacanin
In this paper we introduce EVE (embedded vision/vector engine), with a FlexSIMD (flexible SIMD) architecture highly optimized for embedded vision. We show how EVE can be used to meet the growing requirements of embedded vision applications in a power- and area-efficient manner. EVE's SIMD features allow it to accelerate low-level vision functions (such as image filtering, color-space conversion, pyramids, and gradients). With the added flexibility of its data accesses, EVE can also be used to accelerate many mid-level vision tasks (such as connected components, integral image, histogram, and Hough transform). Our experiments with a silicon implementation of EVE show that it performs many low- and mid-level vision functions with a 3–12x speed advantage over a C64x+ DSP, while consuming less power and area. EVE also achieves code size savings of 4–6x over a C64x+ DSP for regular loops. Thanks to its flexibility and programmability, we were able to implement two end-to-end vision applications on EVE and achieve more than a 5x application-level speedup over a C64x+. With EVE as a coprocessor next to a DSP or a general-purpose processor, algorithm developers have the option to accelerate low- and mid-level vision functions on EVE. This gives them more room to innovate and to use the DSP for new, more complex, high-level vision algorithms.
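The data-parallel style that such a SIMD engine exploits can be illustrated with a small sketch. This is a hypothetical Python simulation of fixed-width vector lanes, not EVE's actual instruction set: a low-level kernel such as RGB-to-grayscale conversion processes a whole vector of pixels per "instruction", with integer channel weights standing in for fixed-point coefficients.

```python
# Hypothetical sketch of SIMD-style processing, simulated in Python
# (not EVE's actual instruction set). A fixed number of pixels is
# handled per simulated vector operation.

LANES = 8  # vector width: pixels processed per "instruction"

def simd_rgb_to_gray(r, g, b):
    """Convert equal-length R, G, B channel lists to grayscale,
    LANES pixels at a time, using the integer weights 77/150/29
    (a fixed-point approximation of 0.299/0.587/0.114, scaled by 256)."""
    n = len(r)
    gray = [0] * n
    for base in range(0, n, LANES):
        # one simulated vector operation over up to LANES pixels
        for i in range(base, min(base + LANES, n)):
            gray[i] = (77 * r[i] + 150 * g[i] + 29 * b[i]) >> 8
    return gray
```

Because the weights sum to exactly 256, the shift by 8 maps white (255, 255, 255) to 255 with no overflow, which is the kind of bit-level invariant SIMD vision kernels are designed around.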
computer vision and pattern recognition | 2011
Goksel Dedeoglu; Branislav Kisacanin; Darnell Moore; Vinay Sharma; Andrew Miller
There is an ever-growing pressure to accelerate computer vision applications on embedded processors for wide-ranging equipment including mobile phones, network cameras, and automotive safety systems. Towards this goal, we propose a software library approach that eases common computational bottlenecks by optimizing over 60 low- and mid-level vision kernels. Optimized for a digital signal processor that is deployed in many embedded image & video processing systems, the library was designed for typical high-performance and low-power requirements. The algorithms are implemented in fixed-point arithmetic and support block-wise partitioning of video frames so that a direct memory access engine can efficiently move data between on-chip and external memory. We highlight the benefits of this library for a baseline video security application, which segments moving foreground objects from a static background. Benchmarks show a ten-fold acceleration over a bit-exact yet unoptimized C language implementation, creating more computational headroom to embed other vision algorithms.
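The block-wise partitioning described in this abstract can be sketched as follows. This is a hypothetical Python illustration of the ping-pong pattern, not the library's API: while the processor works on one block, a DMA engine fills the other, so computation and data movement overlap.

```python
# Hypothetical sketch of block-wise (ping-pong) double buffering,
# simulated sequentially in Python. On a DSP, the "DMA in" step would
# be an asynchronous transfer overlapping with the compute step.

BLOCK = 4  # rows per block, chosen so a block fits in on-chip memory

def process_block(block):
    """Stand-in for a vision kernel: threshold each pixel at 128."""
    return [[255 if p >= 128 else 0 for p in row] for row in block]

def process_frame(frame):
    """Process a frame block by block with two alternating buffers."""
    buffers = [None, None]
    out = []
    for n, start in enumerate(range(0, len(frame), BLOCK)):
        ping = n % 2
        buffers[ping] = frame[start:start + BLOCK]   # "DMA in"
        out.extend(process_block(buffers[ping]))     # compute
    return out
```

The block size is the key tuning knob: large enough to amortize DMA setup cost, small enough that two buffers fit in on-chip memory simultaneously.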
international symposium on vlsi design, automation and test | 2011
Branislav Kisacanin
This invited paper gives an overview of road safety statistics and a summary of recently deployed advanced driver assistance systems. The science and technology behind popular automotive vision systems, such as traffic sign recognition, is briefly explained, and the processing requirements of vision algorithms are presented in the context of the automotive environment. There are many opportunities for the semiconductor industry to help improve the safety of roads around the world and to contribute to a future in which all cars will be autonomous vehicles.
International Journal of Computer Vision | 2013
Zoran Živković; Nicu Sebe; Hamid K. Aghajan; Branislav Kisacanin
Computer vision for human–computer interaction has been in our living rooms for a few years now. We have witnessed an exponential growth in capability of vision systems for computer games, from Sony’s EyeToy for PlayStation 2 in 2003, Eye for PlayStation 3 in 2007, to Microsoft’s Kinect for XBOX 360 in 2010. In addition to enhancing our interaction with computer games, Kinect has had a deeper role in changing our expectations for human–machine interfaces of the future. We may need to wait a few more years to have computers like those in the science fiction movie Minority Report, but the games and hacks using Kinect show us that such interfaces are getting closer to reality. We now expect our TVs, smart phones, and tablets to have such capability and soon that will be a reality. To that end, the research in both academia and industry continues to make great strides, as demonstrated by the presentations at the Workshop on Human–Computer Interaction: Real-Time Vision Aspects of Natural User Interfaces, which the present guest editors organized at the 2011 ICCV in Barcelona. After the workshop was over, we had an open call for papers for authors to write journal articles on their work related to this topic, and the result of this effort is this Special Issue of the IJCV. The papers presented in this special issue
international symposium on circuits and systems | 2014
Sanmati Kamath; Shashank Dabral; Jagadeesh Sankaran; Brian Valentine; Branislav Kisacanin
In this paper we introduce the Embedded Vision Engine (EVE), a novel vision accelerator designed to complement digital signal processors on low- and mid-level vision algorithms. We illustrate EVE's performance on three important mid-level vision functions: the Hough transform for circles, the integral image, and computation of the rotation-invariant Binary Robust Independent Elementary Features (RBRIEF) descriptor. EVE can execute these functions 4 times faster than a state-of-the-art digital signal processor. With other vision functions accelerated 3–12x, the acceleration of these popular vision functions contributes to an application-level speedup of 5x on common automotive vision applications. This demonstrates that EVE can deliver a significant boost in performance while operating at power levels comparable to those of a DSP core.
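The Hough transform for circles mentioned above works by letting each edge point vote for the circle centers it could lie on; the accumulator peak identifies the detected center. A minimal Python sketch for a known radius (illustrative only, not EVE's implementation) is:

```python
# Illustrative sketch of the Hough transform for circles with a known
# radius: each edge point votes for candidate centers lying on a circle
# of that radius around it; the accumulator peak is the detected center.
import math

def hough_circle_center(edge_points, radius, width, height, steps=360):
    acc = {}
    for (x, y) in edge_points:
        for k in range(steps):
            theta = 2 * math.pi * k / steps
            cx = round(x - radius * math.cos(theta))
            cy = round(y - radius * math.sin(theta))
            if 0 <= cx < width and 0 <= cy < height:
                acc[(cx, cy)] = acc.get((cx, cy), 0) + 1
    return max(acc, key=acc.get)  # center with the most votes
```

Searching over unknown radii adds a third accumulator dimension, which is exactly the kind of memory- and bandwidth-heavy workload that motivates hardware acceleration of this function.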
signal processing systems | 2014
Shang-Hong Lai; Jenq Kuen Lee; Branislav Kisacanin
Embedded systems with multi-core designs have become increasingly important for signal processing in recent years. While embedded multi-core system design can significantly improve the efficiency of signal processing systems, several critical issues on the hardware and software sides of embedded multi-core systems need to be carefully addressed for different signal processing applications. Among the most important issues are hardware architecture design, software tools, programming models, and algorithm parallelization for embedded multi-core computing. Usually, system optimization for these issues is closely tied to the targeted application domain. This special issue focuses on the latest developments and technical solutions in multi-core embedded computing for signal processing, from both the hardware and software perspectives. It consists of six papers, which can be partitioned into three categories: hardware architecture design, system and programming tools, and application-specific algorithm parallelization.

First, the paper “EVE: A Flexible SIMD Coprocessor for Embedded Vision Applications” by Sankaran et al. addresses hardware architecture design for embedded vision applications. The authors present a flexible SIMD architecture optimized for embedded vision; the proposed vision co-processor is efficient in power and area and can accelerate many low- and mid-level vision tasks.

Three papers in this special issue concern system and programming tools for multi-core embedded computing. The paper “C++ Support and Applications for Embedded Multicore DSP Systems” by Kuan et al. proposes a layered design that provides code-size-aware C++ library support. This work offers C++ programming support that enhances low-level programming APIs to exploit DSPs, SIMD instructions, and DMAs on embedded multicore systems; the authors evaluate it on image blurring and JPEG compression tasks and show significant computational speed-ups. The paper “Message-Passing Programming for Embedded Multicore Signal-Processing Platforms” by Hung et al. presents a lightweight, MPI-like message-passing library with a three-layer modular design, which supports message passing on several popular embedded multi-core signal-processing platforms and provides a standard, portable, and efficient library for inter-core communication. The third paper in this category, “Design Issues in a Performance Monitor for Multi-core Embedded Systems” by Lin et al., presents a multi-core performance monitor and evaluates the effect of monitoring overhead for different types of tasks, including CPU-bound and IO-bound tasks. The paper proposes an adaptive performance monitoring mechanism that reduces the impact of the monitoring overhead on the application without sacrificing the accuracy or immediacy of the monitored information, and shows experimental results with different monitoring periods for a digital recording system.