
Publication


Featured research published by Maria E. Angelopoulou.


Signal Processing Systems | 2008

Implementation and Comparison of the 5/3 Lifting 2D Discrete Wavelet Transform Computation Schedules on FPGAs

Maria E. Angelopoulou; Konstantinos Masselos; Peter Y. K. Cheung; Yiannis Andreopoulos

The suitability of the 2D Discrete Wavelet Transform (DWT) as a tool in image and video compression is nowadays indisputable. For the execution of the multilevel 2D DWT, several computation schedules based on different input traversal patterns have been proposed. Among these, the most commonly used in practical designs are the row-column, the line-based and the block-based. In this work, these schedules are implemented on FPGA-based platforms for the forward 2D DWT, using a lifting-based filter-bank implementation. Our designs were realized in VHDL and optimized in terms of throughput and memory requirements, in accordance with the principles of both the schedules and the lifting decomposition. The implementations are fully parameterized with respect to the size of the input image and the number of decomposition levels. We provide detailed experimental results concerning the throughput, area, memory requirements and energy dissipation associated with every point of the parameter space. These results demonstrate that the choice of schedule should depend on the given algorithmic specifications.
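
To make the lifting decomposition concrete, here is a minimal 1D sketch of the LeGall 5/3 lifting steps that all three schedules share; the paper's actual designs are 2D, multi-level, and realized in VHDL, so this Python fragment is illustrative only and assumes an even-length input with symmetric boundary extension.

```python
import numpy as np

def lifting_53_forward(x):
    """One decomposition level of the LeGall 5/3 DWT via lifting (1D).

    Minimal sketch: assumes an even-length integer signal and
    symmetric boundary extension at both ends.
    """
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    # Predict step: detail coefficients from odd samples and their even neighbours.
    even_right = np.append(even[1:], even[-1])      # symmetric boundary extension
    d = odd - ((even + even_right) >> 1)
    # Update step: approximation coefficients from even samples and nearby details.
    d_left = np.insert(d, 0, d[0])[:-1]             # symmetric boundary extension
    s = even + ((d_left + d + 2) >> 2)
    return s, d
```

A 2D decomposition level applies this transform along the rows and then along the columns; the row-column, line-based and block-based schedules differ only in the order of this traversal, and hence in how much intermediate data must be buffered.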


Computer Vision and Pattern Recognition | 2014

Backscatter Compensated Photometric Stereo with 3 Sources

Chourmouzios Tsiotsios; Maria E. Angelopoulou; Tae-Kyun Kim; Andrew J. Davison

Photometric stereo offers the possibility of object shape reconstruction via reasoning about the amount of light reflected from oriented surfaces. However, in murky media such as sea water, the illuminating light interacts with the medium and some of it is backscattered towards the camera. Due to this additive light component, the standard Photometric Stereo equations lead to poor quality shape estimation. Previous authors have attempted to reformulate the approach, but have either neglected backscatter entirely or disregarded its non-uniformity on the sensor when camera and lights are close to each other. We show that compensating effectively for the backscatter component allows a linear formulation of Photometric Stereo that recovers an accurate normal map using only 3 lights. Our backscatter compensation method for point sources can estimate the uneven backscatter directly from single images, without any prior knowledge about the characteristics of the medium or the scene. We compare our method with previous approaches through extensive experimental results, where a variety of objects are imaged in a large water tank whose turbidity is systematically increased, and show reconstruction quality that degrades little relative to clean-water results even at very significant scattering levels.
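
As a point of reference, the classic linear formulation that the backscatter compensation restores can be sketched in a few lines; the backscatter estimate B is taken as given here, whereas the paper estimates it from single images, so this is an illustration of the general technique rather than the paper's method.

```python
import numpy as np

def photometric_stereo_3(I, L, B):
    """Per-pixel linear photometric stereo with 3 sources.

    After subtracting an additive backscatter estimate B (assumed
    given here; the paper estimates it from single images), the
    intensities satisfy I - B = (rho * n) . l for each light l.

    I : (3, H, W) images, one per light
    L : (3, 3) rows are unit light direction vectors
    B : (3, H, W) backscatter estimates
    """
    _, H, W = I.shape
    rhs = (I - B).reshape(3, -1)        # stack pixels as columns
    g = np.linalg.solve(L, rhs)         # g = rho * n, one column per pixel
    rho = np.linalg.norm(g, axis=0)     # albedo
    n = g / np.maximum(rho, 1e-8)       # unit surface normals
    return n.reshape(3, H, W), rho.reshape(H, W)
```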


ACM Transactions on Reconfigurable Technology and Systems | 2009

Robust Real-Time Super-Resolution on FPGA and an Application to Video Enhancement

Maria E. Angelopoulou; Christos-Savvas Bouganis; Peter Y. K. Cheung; George A. Constantinides

The high density image sensors of state-of-the-art imaging systems provide outputs with high spatial resolution, but require long exposure times. This limits their applicability, due to the motion blur effect. Recent technological advances have led to adaptive image sensors that can combine several pixels together in real time to form a larger pixel. Larger pixels require shorter exposure times and produce high-frame-rate samples with reduced motion blur. This work proposes combining an FPGA with an adaptive image sensor to produce an output of high resolution both in space and time. The FPGA is responsible for the spatial resolution enhancement of the high-frame-rate samples using super-resolution (SR) techniques in real time. To achieve this, this article proposes utilizing the Iterative Back Projection (IBP) SR algorithm. The original IBP method is modified to account for the presence of noise, leading to an algorithm that is more robust to noise. An FPGA implementation of this algorithm is presented. The proposed architecture can serve as a general-purpose real-time resolution enhancement system, and its performance is evaluated under various noise levels.
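
For orientation, the core back-projection loop of IBP can be sketched as follows; this assumes pre-registered low-resolution frames and a simple box-average downsampling model, and omits both the motion handling and the noise-robustness modification that the article introduces.

```python
import numpy as np

def ibp_super_resolution(lr_frames, scale=2, iters=10, lam=0.1):
    """Iterative Back Projection, a minimal registration-free sketch."""
    def down(x):   # box-average downsampling by `scale`
        h, w = x.shape
        return x.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))

    def up(x):     # nearest-neighbour upsampling (back-projection kernel)
        return np.kron(x, np.ones((scale, scale)))

    hr = up(np.mean(lr_frames, axis=0))     # initial high-resolution guess
    for _ in range(iters):
        for lr in lr_frames:
            err = lr - down(hr)             # residual in the LR domain
            hr += lam * up(err)             # back-project the residual
    return hr
```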


Applied Reconfigurable Computing | 2008

FPGA-Based Real-Time Super-Resolution on an Adaptive Image Sensor

Maria E. Angelopoulou; Christos-Savvas Bouganis; Peter Y. K. Cheung; George A. Constantinides

Recent technological advances in the imaging industry have led to the production of imaging systems with high density pixel sensors. However, their long exposure times limit their applicability to static scenes, due to the motion blur effect. This work presents a system that reduces motion blur using a time-variant image sensor, which can combine several pixels together to form a larger pixel when necessary. Larger pixels require shorter exposure times and produce high-frame-rate samples with reduced motion blur. An FPGA is employed to enhance the spatial resolution of these samples using Super-Resolution (SR) techniques in real time. This work focuses on the spatial resolution enhancement block and presents an FPGA implementation of the Iterative Back Projection (IBP) SR algorithm. The proposed architecture achieves 25 fps for VGA input and can serve as a general-purpose real-time resolution enhancement system.
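
The exposure-versus-resolution trade-off that motivates the design is easy to illustrate: combining a k x k neighbourhood gathers roughly k^2 times the light, so exposure (and with it motion blur) can shrink accordingly, at the cost of spatial resolution that the SR block must then recover. A hypothetical simulation of such binning:

```python
import numpy as np

def bin_pixels(frame, k=2):
    """Simulate pixel binning on an adaptive sensor (hypothetical model).

    Summing a k x k neighbourhood models the ~k^2 gain in gathered
    light, which is what permits the shorter exposure; the lost
    spatial resolution is what the SR stage must recover.
    """
    h, w = frame.shape
    f = frame[: h - h % k, : w - w % k].astype(np.float64)
    return f.reshape(h // k, k, w // k, k).sum(axis=(1, 3))
```

At the reported operating point, the SR block must sustain 640 x 480 x 25 ≈ 7.7 Mpixels/s of output.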


Field-Programmable Technology | 2006

A comparison of 2-D discrete wavelet transform computation schedules on FPGAs

Maria E. Angelopoulou; Konstantinos Masselos; Peter Y. K. Cheung; Yiannis Andreopoulos

When it comes to the computation of the 2D discrete wavelet transform (DWT), three major computation schedules have been proposed, namely the row-column, the line-based and the block-based. In this work, the lifting-based designs of these schedules are implemented on FPGA-based platforms to execute the forward 2D DWT, and their comparison is presented. Our implementations are optimized in terms of throughput and memory requirements, in accordance with the specifications of each of the three computation schedules and the lifting decomposition. All implementations are parameterized with respect to the image size and the number of decomposition levels. Experimental results show that the suitability of each implementation for a particular application depends on the given specifications concerning throughput and hardware cost.
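
An order-of-magnitude buffering model makes the trade-off tangible; the numbers below are illustrative assumptions (for example, lines_per_level=4 as an assumed working set for the 5/3 filter), not figures from the paper.

```python
def schedule_buffer_words(W, H, lines_per_level=4, block=32):
    """Rough on-chip buffering per decomposition level (illustrative
    model only): row-column stores the full intermediate image,
    line-based keeps a few line buffers, block-based one block."""
    return {
        "row-column": W * H,                # full intermediate frame
        "line-based": lines_per_level * W,  # a handful of line buffers
        "block-based": block * block,       # one working block
    }

print(schedule_buffer_words(512, 512))
# {'row-column': 262144, 'line-based': 2048, 'block-based': 1024}
```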


IEEE Transactions on Circuits and Systems for Video Technology | 2014

Vision-Based Egomotion Estimation on FPGA for Unmanned Aerial Vehicle Navigation

Maria E. Angelopoulou; Christos-Savvas Bouganis

The use of unmanned aerial vehicles (UAVs) in commercial and warfare activities has intensified over the last decade. One of the main challenges is to enable UAVs to become as autonomous as possible. A vital component toward this direction is the robust and accurate estimation of the egomotion of the UAV. Egomotion estimation can be enhanced by equipping the UAV with a video camera, which enables vision-based egomotion estimation. However, the high computational requirements of vision-based egomotion algorithms, combined with the real-time performance and low power consumption requirements of such an application, cannot be met by general-purpose processing units. This paper presents a system architecture that employs a field-programmable gate array as the main processing platform, connected to a low-power CPU, targeting the problem of vision-based egomotion estimation in a UAV. The performance evaluation of the proposed system, using real data captured by a UAV's on-board camera, demonstrates the ability of the system to render accurate estimates of the egomotion parameters while meeting the real-time requirements imposed by the application.
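
As a taste of what an egomotion block computes, the rotational component of the instantaneous motion can be recovered from sparse optical flow without knowing scene depth; the sketch below uses the standard instantaneous-motion model and is a simplification, since the paper estimates the full 3D egomotion on the FPGA and its sign conventions may differ.

```python
import numpy as np

def rotation_from_flow(pts, flow, f):
    """Least-squares camera rotation from sparse optical flow.

    Rotation-only sketch (rotational flow is depth-independent).
    pts  : (N, 2) image coordinates (x, y), origin at principal point
    flow : (N, 2) measured flow vectors (u, v)
    f    : focal length in pixels
    """
    x, y = pts[:, 0], pts[:, 1]
    # Rows of the standard instantaneous-motion model, unknowns (wx, wy, wz).
    A_u = np.stack([x * y / f, -(f + x**2 / f), y], axis=1)
    A_v = np.stack([f + y**2 / f, -x * y / f, -x], axis=1)
    A = np.vstack([A_u, A_v])                       # (2N, 3)
    b = np.concatenate([flow[:, 0], flow[:, 1]])    # (2N,)
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega                                    # (wx, wy, wz)
```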


Machine Vision and Applications | 2014

Uncalibrated flatfielding and illumination vector estimation for photometric stereo face reconstruction

Maria E. Angelopoulou; Maria Petrou

Within the context of photometric stereo reconstruction, flatfielding may be used to compensate for the effect of the inverse-square law of light propagation on pixel brightness. This would require capturing a set of reference images in an off-line imaging session, employing a calibration device that must be captured under exactly the same conditions as the main session. Similarly, the illumination vectors on which photometric stereo relies are typically precomputed in another dedicated calibration session. In practice, implementing such off-line sessions is inconvenient and often infeasible. This work aims at enabling accurate photometric stereo reconstruction for the case of non-interactive on-line capturing of human faces. We propose unsupervised methodologies that extract all the information required for accurate face reconstruction from the images of interest themselves. Specifically, we propose an uncalibrated flatfielding and an uncalibrated illumination vector estimation methodology, and we assess their effect on photometric stereo face reconstruction. Results demonstrate that incorporating our methodologies into the photometric stereo framework halves the reconstruction error, while eliminating the need for off-line calibration.
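
For intuition, a common uncalibrated flatfielding heuristic divides out a smooth illumination field estimated from the image itself; this is offered as a stand-in illustration of the idea, not as the methodology proposed in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def uncalibrated_flatfield(img, sigma=50.0):
    """Divide out a smooth illumination field estimated from the image
    itself (a generic heuristic, not the paper's method). The mean is
    restored so the overall brightness scale is preserved."""
    field = gaussian_filter(img.astype(np.float64), sigma)
    return img / np.maximum(field, 1e-8) * field.mean()
```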


Machine Vision and Applications | 2014

Evaluating the effect of diffuse light on photometric stereo reconstruction

Maria E. Angelopoulou; Maria Petrou

Photometric stereo surface reconstruction requires each input image to be associated with a particular 3D illumination vector. This signifies that the subject should be illuminated in turn by various directional illumination sources. In real life, this directionality may be reduced by ambient illumination, which is typically present as a diffuse component of the incident light. This work assesses the photometric stereo reconstruction quality for various ratios of ambient to directional illuminance and provides a reference for the robustness of photometric stereo with respect to that illuminance ratio. In our analysis, we focus on the face reconstruction application of photometric stereo, as faces are convex objects with rich surface variation, thus providing a suitable platform for photometric stereo reconstruction quality evaluation. Results demonstrate that photometric stereo renders realistic reconstructions of the given surface for ambient illuminance as high as nine times the illuminance of the directional light component.
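
The illuminance ratio under study can be written directly into the Lambertian image formation model; below is a minimal sketch consistent with the setup described, where the quantity of interest is E_amb / E_dir.

```python
import numpy as np

def lambertian_with_ambient(n, albedo, l, E_dir, E_amb):
    """Image formation with a diffuse ambient term (minimal model):
    I = albedo * (E_dir * max(n . l, 0) + E_amb).

    n : (H, W, 3) unit surface normals
    l : (3,) unit light direction
    """
    shading = np.clip(n @ l, 0.0, None)     # Lambertian directional term
    return albedo * (E_dir * shading + E_amb)
```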


International Conference on Image Processing | 2011

Feature selection with geometric constraints for vision-based Unmanned Aerial Vehicle navigation

Maria E. Angelopoulou; Christos-Savvas Bouganis

Vision-based egomotion estimation can be employed to endow an Unmanned Aerial Vehicle (UAV) equipped with an on-board camera with navigation ability. The egomotion estimation block computes the 3D UAV motion, taking as input a 2D optical flow map that is constructed for each captured video frame. This work considers sparse optical flow estimation, and thus the navigation system developed here includes a feature selection unit, which initially identifies the points of the optical flow map. This paper demonstrates that the feature selection process, and in particular the geometry of the selected feature set, decisively determines overall system performance. Various computation schedules, which combine geometric constraints with a textural quality metric for the image features, are therefore investigated. This paper shows that imposing appropriate distance constraints in the feature selection process significantly increases the output precision of the egomotion estimation unit, thus enabling accurate vision-based UAV self-navigation.
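
One simple way to realize such distance constraints is a greedy pass over the features ranked by their textural score; this is a generic sketch, and the specific computation schedules investigated in the paper differ from it.

```python
import numpy as np

def select_features(pts, scores, min_dist, k):
    """Greedy selection of high-score features subject to a minimum
    pairwise distance (generic sketch of distance-constrained
    feature selection).

    pts : (N, 2) feature positions, scores : (N,) texture scores.
    Returns indices of up to k selected features.
    """
    order = np.argsort(scores)[::-1]          # best texture score first
    chosen = []
    for i in order:
        if all(np.linalg.norm(pts[i] - pts[j]) >= min_dist for j in chosen):
            chosen.append(i)
            if len(chosen) == k:
                break
    return chosen
```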


International Conference on Image Processing | 2008

Video enhancement on an adaptive image sensor

Maria E. Angelopoulou; Christos-Savvas Bouganis; Peter Y. K. Cheung

The high density pixel sensors of the latest imaging systems provide images with high resolution, but require long exposure times, which limit their applicability due to the motion blur effect. Recent technological advances have led to image sensors that can combine several pixels together in real time to form a larger pixel. Larger pixels require shorter exposure times and produce high-frame-rate samples with reduced motion blur. This work proposes ways of configuring such a sensor to maximize the raw information collected from the environment, and methods to process that information and enhance the final output. In particular, a super-resolution and a deconvolution-based approach to motion deblurring on an adaptive image sensor are proposed, compared and evaluated.
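
To illustrate the deconvolution-based alternative, a frequency-domain Wiener filter is the textbook instance; the blur kernel and signal-to-noise ratio are assumed given here, and the paper's actual pipeline is not reproduced.

```python
import numpy as np

def wiener_deblur(img, kernel, snr=100.0):
    """Frequency-domain Wiener deconvolution (minimal sketch).

    Assumes the motion blur kernel is known and origin-anchored,
    and regularizes inversion with a constant SNR estimate.
    """
    K = np.fft.fft2(kernel, s=img.shape)          # blur kernel spectrum
    G = np.fft.fft2(img)                          # blurred image spectrum
    H = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr) # Wiener inverse filter
    return np.real(np.fft.ifft2(G * H))
```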

Collaboration


Dive into Maria E. Angelopoulou's collaborations.

Top Co-Authors

Maria Petrou
Imperial College London

Tae-Kyun Kim
Imperial College London