Ákos Zarándy
University of California, Berkeley
Publications
Featured research published by Ákos Zarándy.
IEEE Circuits & Devices | 1996
Leon O. Chua; Tamás Roska; T. Kozek; Ákos Zarándy
New cellular neural network chips, with stored-program capability and analog-and-logic architecture, are poised to challenge all-digital processing. In this article, we highlight the key ideas leading to the CNN Universal Machine, using simple circuit interpretations. We also illustrate the system, software, and application aspects.
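For background, the cell dynamics underlying these chips is the standard Chua-Yang CNN state equation (not quoted in the article itself; reproduced here from the general CNN literature):

```latex
% Chua-Yang CNN cell dynamics: each cell (i,j) integrates a feedback
% template A and a control template B over its r-neighborhood, plus a bias z.
\dot{x}_{ij}(t) = -x_{ij}(t)
  + \sum_{(k,l)\in N_r(i,j)} A(i,j;k,l)\, y_{kl}(t)
  + \sum_{(k,l)\in N_r(i,j)} B(i,j;k,l)\, u_{kl}(t) + z_{ij},
\qquad y_{ij} = \tfrac{1}{2}\bigl(|x_{ij}+1| - |x_{ij}-1|\bigr)
```

The stored program of the CNN Universal Machine is essentially a sequence of such template operations interleaved with local logic steps.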
2010 12th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA 2010) | 2010
Ákos Zarándy; David Fekete; Péter Földesy; Gergely Soós; Csaba Rekeczky
A displacement-calculation algorithm is implemented on a heterogeneous sensor-processor architecture composed of a mixed-signal, medium-resolution processor array and a digital, low-resolution foveal processor array. The algorithm is designed as the initial step of an airborne navigation framework and features multi-scale, multi-fovea processing.
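As background for the coarse-to-fine idea (the paper's implementation targets the mixed-signal processor array and is not reproduced here), a minimal multi-scale global-displacement sketch in NumPy; the pyramid depth, search radius, and function names are illustrative assumptions:

```python
import numpy as np

def shift_ssd(ref, cur, dy, dx):
    """Mean squared difference between ref and cur shifted by (dy, dx)."""
    h, w = ref.shape
    ys, xs = max(0, dy), max(0, dx)
    ye, xe = min(h, h + dy), min(w, w + dx)
    a = ref[ys:ye, xs:xe].astype(float)
    b = cur[ys - dy:ye - dy, xs - dx:xe - dx].astype(float)
    return np.mean((a - b) ** 2)

def estimate_displacement(ref, cur, levels=3, radius=4):
    """Coarse-to-fine global (dy, dx): exhaustive search at the coarsest
    pyramid level, then +/-1 refinement at each finer level."""
    pyr_ref, pyr_cur = [ref], [cur]
    for _ in range(levels - 1):
        # Plain decimation; a real pipeline would low-pass filter first.
        pyr_ref.append(pyr_ref[-1][::2, ::2])
        pyr_cur.append(pyr_cur[-1][::2, ::2])
    dy = dx = 0
    for lvl in range(levels - 1, -1, -1):
        dy, dx = 2 * dy, 2 * dx          # carry estimate to the finer grid
        r = radius if lvl == levels - 1 else 1
        errs = {(dy + a, dx + b): shift_ssd(pyr_ref[lvl], pyr_cur[lvl],
                                            dy + a, dx + b)
                for a in range(-r, r + 1) for b in range(-r, r + 1)}
        dy, dx = min(errs, key=errs.get)
    return dy, dx
```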
2014 14th International Workshop on Cellular Nanoscale Networks and Their Applications, CNNA 2014 | 2014
Ákos Zarándy; Borbála Pencz; Máté Németh; Tamás Zsedrovits
Tracking multiple stationary environmental points from a moving-platform vision system enables calculation of the rotation and relative-displacement components of the ego-motion. This is very useful for small autonomous robotic platforms such as UAVs, where this data can serve as either the primary or the auxiliary navigation information. On a fast, lightweight moving platform (such as a quadcopter), the speed, power consumption, and accuracy of these calculations are critical. This paper analyzes implementations of different point-tracking approaches to find the one best suited to the Eye-RIS vision system in terms of speed and accuracy.
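Not the paper's own tracker, but a generic illustration of the underlying geometry: with stationary points tracked across two frames, and under the simplifying assumption that the induced image motion is approximately a 2D rigid transform, rotation and displacement follow from a closed-form Procrustes/Kabsch fit:

```python
import numpy as np

def rigid_motion_2d(p_prev, p_curr):
    """Least-squares 2D rotation + translation mapping p_prev -> p_curr.

    p_prev, p_curr: (N, 2) arrays of tracked point positions in two frames.
    Returns (theta, t): rotation angle in radians and translation vector.
    Classic Procrustes/Kabsch solution via SVD of the cross-covariance.
    """
    c_prev = p_prev.mean(axis=0)
    c_curr = p_curr.mean(axis=0)
    H = (p_prev - c_prev).T @ (p_curr - c_curr)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_curr - R @ c_prev
    theta = np.arctan2(R[1, 0], R[0, 0])
    return theta, t
```

For real quadcopter footage the 2D-rigid assumption holds only approximately; full 3D ego-motion estimation would use the essential matrix instead.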
2014 14th International Workshop on Cellular Nanoscale Networks and Their Applications, CNNA 2014 | 2014
Ákos Zarándy; Borbála Pencz; Máté Németh
Though UAVs can fly autonomously using GPS-based waypoint navigation, they are practically blind during these flights and are therefore considered a risk to other aircraft. Due to the limited payload of small and medium-sized UAVs, a small, low-power vision system based on a focal-plane array processor is a good candidate for identifying intruder aircraft. Identifying a remote aircraft against a sky background is already a solved problem. This paper introduces an algorithm for identifying remote aircraft against a terrain background. Some of the critical algorithmic components are implemented on the SCAMP simulator, which enables us to judge the accuracy and speed of the algorithm.
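The abstract does not detail the algorithm; as a loudly assumed baseline (not the paper's method), a common way to spot a small moving target against a moving terrain background is ego-motion-compensated frame differencing, sketched here with the background motion modeled as a pure global translation (e.g., from an estimator like estimate_displacement above):

```python
import numpy as np

def candidate_targets(prev, curr, dy, dx, thresh=20.0):
    """Flag pixels that move differently from the terrain background.

    prev, curr: consecutive grayscale frames; (dy, dx): estimated global
    background displacement. Pixels whose compensated difference exceeds
    `thresh` are candidate intruder-aircraft pixels.
    """
    comp = np.roll(prev.astype(float), shift=(dy, dx), axis=(0, 1))
    mask = np.abs(curr.astype(float) - comp) > thresh
    # np.roll wraps around the image edges; the |dy| border rows and
    # |dx| border columns should be discarded before any detection logic.
    return mask
```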
2014 14th International Workshop on Cellular Nanoscale Networks and Their Applications, CNNA 2014 | 2014
Zoltán Nagy; Ákos Zarándy; András Kiss; Máté Németh; Tamás Zsedrovits
An on-board UAV high-performance collision-avoidance system imposes severe constraints, which can be met only with carefully optimized many-core computational architectures. In this demonstration we introduce a many-core processor system that can process a 150-megapixel/s video flow to identify remote airplanes. The processor system is implemented on Xilinx Spartan-6 and Zynq SoC FPGAs and consumes less than 1 W.
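To put the 150 megapixel/s figure in perspective, a quick back-of-the-envelope conversion to frame rates (the resolutions below are illustrative assumptions, not taken from the demonstration):

```python
# Frame rates sustainable at 150 Mpixel/s for a few assumed resolutions.
THROUGHPUT = 150e6  # pixels per second

for name, (w, h) in {"WVGA 752x480": (752, 480),
                     "720p 1280x720": (1280, 720),
                     "1080p 1920x1080": (1920, 1080)}.items():
    print(f"{name}: {THROUGHPUT / (w * h):.0f} fps")
# -> roughly 415, 163, and 72 fps respectively
```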
Spatial Temporal Patterns for Action-Oriented Perception in Roving Robots | 2009
Ákos Zarándy; Csaba Rekeczky
In this chapter, the implementation of visual routines for topographic cellular processors is described in detail. Specifically, AnaLogic Computers Kft. (henceforth AnaLogic), as part of its contribution to the SPARK project, developed special visual algorithms for cognition and implemented them on AnaFocus's Eye-RIS v1.2 system, described in Chapter 8. A key requirement was that these routines run in real time, enabling the roving robots to react to their environment instantaneously. This was achieved: a library of image-processing functions was efficiently implemented on the Eye-RIS system, exploiting the capabilities of the Q-Eye chip. The resulting speedup enables real-time image processing even for complex tasks. The new functionality of the Eye-RIS v1.2 visual device enabled the implementation of several advanced visual routines, running at 200 to 1,000 frames per second (fps) on the system:
- Global displacement calculation
- Foreground-background separation based segmentation
- Active contour
- Multi-object tracking
In the following sections, each of these routines is described along with example programs. Examples are also provided showing the results of the processing, both in terms of the output produced and the performance achieved on the Eye-RIS system.
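For flavor, here is what the foreground-background separation routine amounts to in a conventional software setting; this is a generic running-average sketch with assumed parameters, not the chapter's Q-Eye template implementation:

```python
import numpy as np

class BackgroundSegmenter:
    """Foreground/background separation via a running-average background model.

    A generic illustration only; the chapter's routines run as analog
    template operations on the Q-Eye chip, not as NumPy code.
    """

    def __init__(self, first_frame, alpha=0.05, thresh=25.0):
        self.bg = first_frame.astype(float)  # background estimate
        self.alpha = alpha                   # background update rate
        self.thresh = thresh                 # foreground decision threshold

    def step(self, frame):
        frame = frame.astype(float)
        fg = np.abs(frame - self.bg) > self.thresh      # foreground mask
        # Update the background only where the scene looks static,
        # so foreground objects do not bleed into the model.
        self.bg[~fg] = (1 - self.alpha) * self.bg[~fg] + self.alpha * frame[~fg]
        return fg
```

On the Eye-RIS the same per-pixel operations map onto the processor array, which is what enables the reported 200 to 1,000 fps rates.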
Archive | 2008
Péter Földesy; Ákos Zarándy; Csaba Rekeczky
Archive | 1993
Leon O. Chua; Tamás Roska; T. Kozek; Ákos Zarándy
Archive | 1997
Ákos Zarándy; Tamás Roska
Archive | 2004
Ákos Zarándy; Csaba Rekeczky