Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jose Antonio Boluda is active.

Publication


Featured research published by Jose Antonio Boluda.


International Symposium on Circuits and Systems | 1996

Response properties of a foveated space-variant CMOS image sensor

Fernando Pardo; Jose Antonio Boluda; J.J. Perez; S. Felici; Bart Dierickx; Danny Scheffer

A new foveated CMOS image sensor has been designed and fabricated. The photocell elements transform the light into current and then, in a continuous way, into voltage without charge integration. This kind of sensing cell has already been employed in image sensors for normal cameras, but never in foveated sensors. The presented sensor tries to emulate the human eye, resulting in a sensor with a space-variant distribution of pixels. Consequences of this distribution are the different sizes of the sensing elements in the pixel matrix, the non-orthogonal shapes of the elements that make up each pixel, and, as a result, the different response of every cell in the sensor. A scaling mechanism is needed because the pixel size differs from circumference to circumference. A mechanism for current scaling is presented. This mechanism has been studied along with other effects, such as the narrow-channel effect in submicron MOS transistors, and the influence of the logarithmic response of this special kind of sensing cell. The chip has been fabricated using standard 0.7 µm CMOS technology.


Robotics and Autonomous Systems | 2001

On the advantages of combining differential algorithms and log-polar vision for detection of self-motion from a mobile robot

Jose Antonio Boluda; Juan Domingo

This paper describes the design and implementation on programmable hardware (FPGAs) of an algorithm for the detection of self-mobile objects as seen from a mobile robot. In this context, ‘self-mobile’ refers to those objects that change in the image plane due to their own movement, and not due to the movement of the camera on board the mobile robot. The method consists of adapting the original algorithm of Chen and Nandhakumar [A simple scheme for motion boundary detection, in: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, 1994] to foveal images obtained with a special camera whose optical axis points in the direction of advance. It is shown that the use of log-polar geometry simplifies the original formulation and greatly reduces the volume of data to be processed. Limitations of the algorithm due to the differential nature of the approach are discussed, relating them to the parameters of the system. Experiments are shown in which a self-mobile object is detected under several conditions.


International Conference on Electronics, Circuits and Systems | 1996

A new foveated space-variant camera for robotic applications

Jose Antonio Boluda; Fernando Pardo; T. Kayser; J.J. Perez; Joan Pelechano

A foveated camera has been designed and fabricated. The camera is implemented using a new foveated CMOS sensor which incorporates a log-polar transformation. This transformation has especially interesting properties for image processing, including selective information reduction and some invariances. The structure of this sensor permits individual access to each pixel, in exactly the same way as access to a RAM. This is a very interesting property that makes a big difference between a CCD-based camera and a CMOS-based camera. On the other hand, the CMOS nature of the sensor implies significant fixed-pattern noise due to the mismatch of the cell transistors. Fixed-pattern noise correction circuitry has been included in the camera in order to achieve good image quality. The solutions adopted to achieve a low noise-to-signal ratio are also presented. Finally, some guidelines for including this camera in an autonomous navigation system are given. The ease of computing the time-to-impact with this camera suggests the use of this sensor system for real-time applications.


Journal of Real-Time Image Processing | 2007

Change-driven data flow image processing architecture for optical flow computation

Julio C. Sosa; Jose Antonio Boluda; Fernando Pardo; Rocío Gómez-Fabela

Optical flow computation has been extensively used for motion estimation of objects in image sequences. However, most optical flow techniques are computationally intensive due to the large amount of data involved. A new change-based data flow pipelined architecture has been developed implementing the Horn and Schunck smoothness constraint; pixels of the image sequence that change significantly fire the execution of the operations related to the image processing algorithm. This strategy reduces the data and, combined with the custom hardware implemented, achieves a significant optical flow computation speed-up with no loss of accuracy. This paper presents the bases of the change-driven data flow image processing strategy, as well as the custom hardware implementation developed using an Altera Stratix PCI development board.
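The change-driven strategy described in the abstract, where only pixels whose value changes significantly trigger further processing, can be illustrated with a minimal software sketch. This is not the paper's hardware pipeline; the threshold value, frame sizes, and function name are assumptions for illustration only:

```python
import numpy as np

THRESHOLD = 10  # assumed change threshold, in grey levels

def changed_pixels(prev_frame, curr_frame, threshold=THRESHOLD):
    """Return the (row, col, new_value) triples that 'fire' processing.

    Only pixels whose absolute change exceeds the threshold are emitted,
    so static regions of the scene generate no downstream work.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    rows, cols = np.nonzero(diff > threshold)
    return [(int(r), int(c), int(curr_frame[r, c])) for r, c in zip(rows, cols)]

# A static 4x4 frame in which a single pixel brightens sharply:
prev = np.full((4, 4), 100, dtype=np.uint8)
curr = prev.copy()
curr[2, 1] = 180
fired = changed_pixels(prev, curr)
print(fired)  # only the changed pixel fires
```

In a largely static scene this list is tiny compared with the full frame, which is the source of the data reduction the abstract reports.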


Computer Analysis of Images and Patterns | 1997

Detecting Motion Independent of the Camera Movement Through a Log-Polar Differential Approach

Jose Antonio Boluda; Juan Domingo; Fernando Pardo; Joan Pelechano

This paper is concerned with a differential motion detection technique in log-polar coordinates which allows object motion to be tracked independently of the camera's ego-motion when the camera's focus lies along the movement direction. The method does not use any explicit estimation of the motion field, which can be calculated afterwards at the moving points. The method, previously formulated in Cartesian coordinates, uses log-polar coordinates, which allow the isolation of the object's movement from the image displacement due to certain camera motions. Experimental results on a sequence of real images are included, in which a moving object is detected and optical flow is calculated in log-polar coordinates only for the points of the object.


Advanced Focal Plane Arrays and Electronic Cameras | 1996

Design issues on CMOS space-variant image sensors

Fernando Pardo; Jose Antonio Boluda; J.J. Perez; Bart Dierickx; Danny Scheffer

A new image sensor, using CMOS technology, has been designed and fabricated. The pixel distribution of this sensor follows a log-polar mapping: the pixel concentration is maximum at the center and the number of pixels is reduced towards the periphery, giving a resolution of 56 rings with 128 pixels per ring. The design of this kind of sensor raises special issues regarding the space-variant nature of the pixel distribution. The main one is the varying pixel size, which requires scaling mechanisms to achieve the same output independently of pixel size. This paper presents some study results on the scaling mechanisms of this kind of sensor. A mechanism for current scaling is presented. This mechanism has been studied along with the logarithmic response of this special kind of sensing cell. The chip has been fabricated using standard 0.7 µm CMOS technology.
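The log-polar mapping the abstract describes can be sketched as a simple coordinate transform. The 56-ring, 128-pixel-per-ring resolution comes from the abstract; the fovea and outer radii are assumed values, and this is a geometric model, not the sensor's actual layout:

```python
import math

RINGS, SECTORS = 56, 128   # resolution stated in the abstract
R_MIN, R_MAX = 4.0, 128.0  # assumed foveal and outer radii, in pixels

def cartesian_to_logpolar(x, y):
    """Map an image point (relative to the optical center) to (ring, sector)."""
    r = math.hypot(x, y)
    theta = math.atan2(y, x) % (2 * math.pi)
    # The ring index grows with the logarithm of the eccentricity,
    # so pixel density is highest near the center.
    ring = int(RINGS * math.log(r / R_MIN) / math.log(R_MAX / R_MIN))
    sector = int(SECTORS * theta / (2 * math.pi))
    return ring, sector

# Doubling the eccentricity shifts the result by a fixed number of rings,
# which is why scaling becomes a shift along the ring axis in this geometry.
print(cartesian_to_logpolar(16.0, 0.0))
print(cartesian_to_logpolar(32.0, 0.0))
```

This shift-invariance under scaling (and, along the sector axis, under rotation) is one of the invariances that make the mapping attractive for image processing.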


Proceedings Fifth IEEE International Workshop on Computer Architectures for Machine Perception | 2000

Space variant vision and pipelined architecture for time to impact computation

Fernando Pardo; Isaac Llorens; Francisco Micó; Jose Antonio Boluda

Image analysis is one of the most interesting ways for a mobile vehicle to understand its environment. One of the tasks of an autonomous vehicle is to obtain accurate information about what lies in front of it, to avoid collisions or find a way to a target. This task carries real-time constraints that depend on the vehicle speed and on external object movement. The use of normal cameras, with a homogeneous (square) pixel distribution, for real-time image processing usually requires high-performance computing and high image rates. A different approach makes use of a CMOS space-variant camera that yields a high frame rate with low data bandwidth. The camera also performs the log-polar transform, simplifying some image processing algorithms. One of these simplified algorithms is the time-to-impact computation, which uses a differential algorithm. A pipelined architecture especially suited for differential image processing algorithms has also been developed using programmable FPGAs.


International Symposium on Visual Computing | 2008

On the Advantages of Asynchronous Pixel Reading and Processing for High-Speed Motion Estimation

Fernando Pardo; Jose Antonio Boluda; Francisco Vegara; Pedro Zuccarello

Biological visual systems are becoming an interesting source for the improvement of artificial visual systems. A biologically inspired read-out and pixel processing strategy is presented. This read-out mechanism is based on Selective Change-Driven (SCD) pixel processing. Pixels are individually processed and read out, instead of the classical approach where read-out and processing are based on complete frames. Changing pixels are read out and processed at short time intervals. The simulated experiments show that the response delay using this strategy is several orders of magnitude lower than with current cameras, while still keeping the same, or even tighter, bandwidth requirements.


Reconfigurable Computing and FPGAs | 2006

Change-driven Image Architecture on FPGA with adaptive threshold for Optical-Flow Computation

Julio C. Sosa; Rocío Gómez-Fabela; Jose Antonio Boluda; Fernando Pardo

Optical flow computation has been extensively used for object motion estimation in image sequences. However, most optical flow techniques are as computationally intensive as they are accurate, due to the large amount of data involved. A new strategy for image sequence processing has been developed: pixels of the image sequence that change significantly fire the execution of the operations related to the image processing algorithm. The data reduction achieved with this strategy allows a significant optical flow computation speed-up. Furthermore, FPGAs allow the implementation of a custom data-flow architecture especially suited for this strategy. The foundations of change-driven image processing are presented, as well as the custom hardware implementation on an EP20K1000C FPGA, showing the achieved performance.


IEEE Transactions on Circuits and Systems for Video Technology | 2011

Advantages of Selective Change-Driven Vision for Resource-Limited Systems

Fernando Pardo; Pedro Zuccarello; Jose Antonio Boluda; Francisco Vegara

Selective change-driven (SCD) vision is a capture/processing strategy especially suited for vision systems with limited resources and/or vision applications with real-time constraints. SCD vision capture essentially involves delivering only the pixels that have undergone the greatest change in illumination since the last time they were read out. SCD vision processing involves processing a limited pixel flow with similar results to the usual image flow, but with far lower bandwidth and processing requirements. SCD vision is thus based on pixel flow processing instead of traditional image flow processing. This is a complete change in the way video is processed, and it has a direct impact on the processing hardware required to deal with visual information. In this paper, we present the first CMOS sensor using the SCD strategy, along with a highly resource-limited system implementing an object tracking experiment. Results show that SCD vision outperforms traditional vision systems by at least one order of magnitude, with limited hardware requirements for the specific tracking experiment being tested.
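The SCD read-out policy, delivering only the pixels with the greatest illumination change since they were last read, might be modelled in software as follows. This is a sketch under assumed parameters (array sizes, function name, read-out budget); the actual selection happens on-chip in the sensor the paper presents:

```python
import numpy as np

def scd_readout(current, last_read, n_pixels):
    """Read out the n_pixels pixels with the largest change since last read.

    Returns (row, col, value) triples and updates last_read in place,
    so each pixel's 'change' is measured against its own last delivery.
    """
    change = np.abs(current.astype(np.int16) - last_read.astype(np.int16))
    flat = np.argsort(change, axis=None)[::-1][:n_pixels]  # largest changes first
    rows, cols = np.unravel_index(flat, current.shape)
    events = []
    for r, c in zip(rows, cols):
        events.append((int(r), int(c), int(current[r, c])))
        last_read[r, c] = current[r, c]  # reset this pixel's reference value
    return events

# Three pixels change, but the read-out budget only delivers the top two:
last = np.zeros((3, 3), dtype=np.uint8)
frame = np.array([[0, 5, 0], [90, 0, 0], [0, 0, 200]], dtype=np.uint8)
events = scd_readout(frame, last, 2)
print(events)
```

The fixed per-cycle budget is what keeps bandwidth bounded regardless of scene activity; small changes simply wait until they accumulate enough to rank among the largest.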

Collaboration


Dive into Jose Antonio Boluda's collaborations.

Top Co-Authors
