Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Maria Papadogiorgaki is active.

Publication


Featured research published by Maria Papadogiorgaki.


Ultrasound in Medicine and Biology | 2008

Image Analysis Techniques for Automated IVUS Contour Detection

Maria Papadogiorgaki; Vasileios Mezaris; Yiannis S. Chatzizisis; George D. Giannoglou; Ioannis Kompatsiaris

Intravascular ultrasound (IVUS) constitutes a valuable technique for the diagnosis of coronary atherosclerosis. The detection of lumen and media-adventitia borders in IVUS images represents a necessary step towards the reliable quantitative assessment of atherosclerosis. In this work, a fully automated technique for the detection of lumen and media-adventitia borders in IVUS images is presented. It comprises two different contour-initialization steps, one for each contour of interest, followed by a procedure for refining the detected contours. Intensity information and the results of texture analysis, generated by means of a multilevel discrete wavelet frames decomposition, are used in two different techniques for contour initialization. For subsequently producing smooth contours, three techniques based on low-pass filtering and radial basis functions are introduced. The different combinations of the proposed methods are experimentally evaluated on large datasets of IVUS images derived from human coronary arteries. It is demonstrated that the proposed segmentation approaches can quickly and reliably perform automated segmentation of IVUS images.
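To make the contour-smoothing idea concrete, here is a minimal sketch, not the authors' implementation: a detected boundary is represented as radii sampled at uniform angles around the catheter center and low-pass filtered with a circular moving average (the function names and window size are hypothetical).

```python
import math

def smooth_contour(radii, window=3):
    """Circularly low-pass filter a contour given as radii sampled at
    uniform angles around the catheter center. A simple moving average
    stands in for the paper's low-pass smoothing step."""
    n = len(radii)
    half = window // 2
    out = []
    for i in range(n):
        # wrap around the contour: the first and last samples are adjacent
        s = sum(radii[(i + k) % n] for k in range(-half, half + 1))
        out.append(s / window)
    return out

def to_cartesian(radii, cx=0.0, cy=0.0):
    """Convert the radial representation back to (x, y) boundary points."""
    n = len(radii)
    return [(cx + r * math.cos(2 * math.pi * i / n),
             cy + r * math.sin(2 * math.pi * i / n))
            for i, r in enumerate(radii)]
```

A speckle-induced spike in the initialized boundary is attenuated by the averaging, while a circular contour passes through unchanged.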


International Journal of Digital Multimedia Broadcasting | 2008

Two-Level Automatic Adaptation of a Distributed User Profile for Personalized News Content Delivery

Maria Papadogiorgaki; Vasileios Papastathis; Evangelia Nidelkou; Simon Waddington; Ben Bratu; Myriam Ribiere; Ioannis Kompatsiaris

This paper presents a distributed client-server architecture for the personalized delivery of textual news content to mobile users. The user profile consists of two separate models: the long-term interests are stored in a skeleton profile on the server, and the short-term interests in a detailed profile on the handset. The user profile enables a high-level filtering of available news content on the server, followed by matching of detailed user preferences on the handset. The highest-rated items are recommended to the user by employing an efficient ranking process. The paper focuses on a two-level learning process, which is employed on the client side in order to automatically update both user profile models. It involves the use of machine learning algorithms applied to the implicit and explicit user feedback. The system's learning performance has been systematically evaluated based on data collected from regular system users.
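The matching-and-ranking step and the feedback-driven profile update can be sketched as follows. This is a toy illustration under assumed data structures (a profile as a keyword-to-weight map, items as keyword lists), not the paper's algorithm, which uses machine learning on both implicit and explicit feedback.

```python
def rank_items(items, profile, top_k=3):
    """Score each news item by summing the profile weights of the
    keywords it carries, then return the highest-rated items
    (a stand-in for the handset-side matching and ranking)."""
    def score(item):
        return sum(profile.get(kw, 0.0) for kw in item["keywords"])
    return sorted(items, key=score, reverse=True)[:top_k]

def update_profile(profile, clicked_keywords, rate=0.1):
    """Naive implicit-feedback update: nudge the weights of keywords
    from items the user actually read."""
    for kw in clicked_keywords:
        profile[kw] = profile.get(kw, 0.0) + rate
    return profile
```

In the described system this logic would run on the handset against the detailed profile, after the server's skeleton profile has already pre-filtered the candidate items.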


international workshop on semantic media adaptation and personalization | 2007

Distributed User Modeling for Personalized News Delivery in Mobile Devices

Maria Papadogiorgaki; Vasileios Papastathis; Evangelia Nidelkou; Ioannis Kompatsiaris; Simon Waddington; Ben Bratu; Myriam Ribiere

This paper presents a distributed client-server architecture for the personalized delivery of textual news content to mobile users. The user profile is distributed across client and server, enabling a high-level filtering of available content on the server, followed by matching of detailed user preferences on the handset. The high-level user preferences are stored in a skeleton profile on the server, and the low-level preferences in a detailed user profile on the handset. A learning process for the detailed user profile is employed on the handset, exploiting the implicit and explicit user feedback. The system's learning performance has been evaluated based on data collected from regular system users.


International Journal of Biomedical Engineering and Technology | 2010

IVUS image processing and semantic analysis for Cardiovascular Diseases risk prediction

Charalampos Doulaverakis; Maria Papadogiorgaki; Vasileios Mezaris; Antonis Billis; Eirini Parissi; Ioannis Kompatsiaris; Anastasios Gounaris; Yiannis S. Chatzizisis; George D. Giannoglou

The work presented in this paper is part of a system able to perform risk classification of patients based on medical image analysis and on semantically structured information from patient medical records and biochemical data. More specifically, the paper focuses on Intravascular Ultrasound (IVUS) image processing and the automated segmentation developed to extract the useful arterial boundaries. This is coupled with the design and implementation of a semantic reasoning-enabled knowledge base in OWL that integrates data from heterogeneous sources and incorporates functionality for DL classification. Performance evaluation of both the IVUS image processing and the knowledge base is discussed.
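The overall pipeline combines an image-derived measurement with record and biochemical facts before classification. The toy sketch below shows that combination only; the function names, thresholds, and rule logic are illustrative assumptions, and the actual system performs DL classification over an OWL knowledge base rather than hard-coded rules.

```python
def plaque_burden(lumen_area, vessel_area):
    """Percent plaque burden from the two segmented IVUS boundaries:
    (vessel area - lumen area) / vessel area * 100."""
    return 100.0 * (vessel_area - lumen_area) / vessel_area

def classify_risk(burden_pct, ldl_mg_dl, smoker):
    """Hypothetical rule-based stand-in for the semantic classification:
    count risk indicators from image, biochemical, and record data.
    All thresholds are made up for illustration."""
    flags = 0
    if burden_pct >= 40.0:
        flags += 1
    if ldl_mg_dl >= 160:
        flags += 1
    if smoker:
        flags += 1
    return ["low", "moderate", "high", "high"][flags]
```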


The Open Biomedical Engineering Journal | 2007

Texture Analysis and Radial Basis Function Approximation for IVUS Image Segmentation

Maria Papadogiorgaki; Vasileios Mezaris; Yiannis S. Chatzizisis; George D. Giannoglou; Ioannis Kompatsiaris

Intravascular ultrasound (IVUS) has in recent years become an important tool in both clinical and research applications. The detection of lumen and media-adventitia borders in IVUS images represents a first necessary step in the utilization of the IVUS data for the 3D reconstruction of human coronary arteries and the reliable quantitative assessment of atherosclerotic lesions. To serve this goal, a fully automated technique for the detection of lumen and media-adventitia boundaries has been developed. This comprises two different steps for contour initialization, one for each corresponding contour of interest, based on the results of texture analysis, and a procedure for approximating the initialization results with smooth continuous curves. A multilevel Discrete Wavelet Frames decomposition is used for texture analysis, whereas Radial Basis Function approximation is employed for producing smooth contours. The proposed method shows promising results compared to a previous approach for texture-based IVUS image analysis.
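The Radial Basis Function approximation step can be sketched in miniature: fit Gaussian RBFs through sparse (angle, radius) samples of an initialized contour, then evaluate the resulting smooth curve at any angle. This is a simplified, non-periodic sketch with an assumed Gaussian kernel, not the paper's exact formulation, and a production version would use a linear-algebra library instead of hand-rolled elimination.

```python
import math

def rbf_fit(angles, radii, eps=1.0):
    """Solve for weights w so that r(theta) = sum_j w_j *
    exp(-(eps*(theta - theta_j))^2) interpolates the samples.
    Plain Gaussian elimination with partial pivoting."""
    n = len(angles)
    A = [[math.exp(-(eps * (angles[i] - angles[j])) ** 2) for j in range(n)]
         for i in range(n)]
    b = list(radii)
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

def rbf_eval(theta, angles, w, eps=1.0):
    """Evaluate the smooth contour radius at angle theta."""
    return sum(wj * math.exp(-(eps * (theta - aj)) ** 2)
               for wj, aj in zip(w, angles))
```

The fitted curve passes through every sample while remaining smooth between them, which is what makes RBFs attractive for turning noisy initialization points into continuous boundaries.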


International Journal on Disability and Human Development | 2005

Sign synthesis from SignWriting notation using MPEG-4, H-Anim parameters and inverse kinematics

Maria Papadogiorgaki; Nikos Grammalidis; L. Makris

A novel approach is presented for generating VRML animation sequences from Sign Language notation, based on MPEG-4 Face and Body Animation. Sign Language notation, in the well-known SignWriting system, is provided as input and initially converted to SWML (SignWriting Markup Language), an XML-based format that has recently been developed for the storage, indexing, and processing of SignWriting notation. Each basic sign, namely a signbox, is then converted to a corresponding sequence of Body Animation Parameters (BAPs) of the MPEG-4 standard. Inverse Kinematics are also employed for synthesizing complex animation sequences (e.g. contacts). In addition, if a sign contains facial expressions, these are converted to a sequence of MPEG-4 Facial Animation Parameters (FAPs), while exact synchronization between facial and body movements is achieved. These sequences, which can also be coded and/or reproduced by MPEG-4 BAP and FAP players, are then used to animate H-Anim compliant VRML avatars, reproducing the exact gestures represented in the sign language notation. Envisaged applications include interactive information systems for persons with hearing disabilities (Web, e-mail, info-kiosks) and automatic translation of written texts to sign language (e.g. for TV newscasts).

Keywords: sign synthesis, SWML, MPEG-4 Face and Body Animation

Correspondence: Nikos Grammalidis, PhD, Dipl. Eng., Informatics and Telematics Institute, Centre for Research and Technology Hellas, 1st Km Thermi-Panorama Road, P.O. Box 361, 57001 Thermi-Thessaloniki, Greece. E-mail: [email protected]

Submitted: December 19, 2004. Revised: January 01, 2005. Accepted: January 2005.

INTRODUCTION

The SignWriting system is a writing system for deaf sign languages developed by Valerie Sutton for the Center of Sutton Movement Writing in 1974 (1). A basic design concept for this system was to represent movements as they are visually perceived, not the eventual meaning that these movements convey. In contrast, most other systems that have been proposed for writing deaf sign languages, such as HamNoSys (the Hamburg Notation System) or the Stokoe system, employ alphanumeric characters, which represent the linguistic aspects of signs. Almost all international sign languages, including American Sign Language (ASL) and Brazilian Sign Language (LIBRAS), can be represented in the SignWriting system. Each signbox (basic sign) consists of a set of graphical and schematic symbols that are highly intuitive (e.g. denoting specific head, hand or body postures, movements, or even facial expressions). The rules for combining symbols are also simple; thus the system provides a simple and effective way for people with hearing disabilities who have no special training in sign language linguistics to write in sign languages. Examples of SignWriting symbols are illustrated in Figure 1.

[Fig. 1: Three examples of representations of American Sign Language in the SignWriting system.]

An efficient representation of these graphical symbols in a computer system should facilitate tasks such as storage, processing, and even indexing of sign language notation. For this purpose, a SignWriting Markup Language (SWML) has recently been proposed (2,3). Currently, an online converter is available, allowing the conversion of signboxes in SignWriting format (produced by SignWriter, a popular SignWriting editor) to SWML format. Another important problem, which is the main focus of this paper, is the synthesis and visualization of the actual gestures and body movements that correspond to the sign language notation. Grieve-Smith (5) presented a thorough review of state-of-the-art techniques for performing synthetic animation of deaf signing gestures. Traditionally, dictionaries of sign language notation contain videos (or images) describing each signbox, yet the production of these videos is a tedious procedure and has significant storage requirements. On the other hand, recent developments in computer graphics and virtual reality, such as the new Humanoid Animation (H-Anim) (6) and MPEG-4 SNHC (4) standards, allow the fast conversion of sign language notation to Virtual Reality animation sequences.
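The core signbox-to-BAP-sequence conversion can be sketched as a table-driven mapping. Everything here is hypothetical: the symbol names, joint names, and keyframe values are invented placeholders for the actual MPEG-4 Body Animation Parameters, and the real converter additionally applies inverse kinematics for contact movements.

```python
# Hypothetical symbol-to-keyframe table: each SignWriting symbol maps to a
# short sequence of (joint, rotation_degrees) keyframes, standing in for
# the BAP sequences a real converter would emit.
SYMBOL_TO_KEYFRAMES = {
    "hand_flat_up": [("r_wrist_flexion", 0), ("r_wrist_flexion", -30)],
    "move_forward": [("r_shoulder_flexion", 10), ("r_shoulder_flexion", 45)],
}

def signbox_to_frames(symbols):
    """Concatenate the keyframe sequences of a signbox's symbols into
    one animation track (a toy version of the signbox -> BAP step);
    unknown symbols contribute nothing."""
    frames = []
    for sym in symbols:
        frames.extend(SYMBOL_TO_KEYFRAMES.get(sym, []))
    return frames
```

The resulting track would then drive an H-Anim avatar, with facial (FAP) tracks synchronized alongside when the sign includes facial expressions.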


Computers in Biology and Medicine | 2007

A novel active contour model for fully automated segmentation of intravascular ultrasound images: In vivo validation in human coronary arteries

George D. Giannoglou; Yiannis S. Chatzizisis; Vassilis Koutkias; Ioannis Kompatsiaris; Maria Papadogiorgaki; Vasileios Mezaris; Eirini Parissi; Panagiotis Diamantopoulos; Michael G. Strintzis; Nicos Maglaveras; George E. Parcharidis; George E. Louridas


Intelligent Environments, 2006. IE 06. 2nd IET International Conference on | 2006

Gesture synthesis from sign language notation using MPEG-4 humanoid animation parameters and inverse kinematics

Maria Papadogiorgaki; Nikos Grammalidis; Lambros Makris; Michael G. Strintzis


Archive | 2004

Synthesis of Virtual Reality Animations from SWML using MPEG-4 Body Animation Parameters

Maria Papadogiorgaki; Nikos Grammalidis; Nikos Sarris; Michael G. Strintzis


european signal processing conference | 2005

Text-to-sign language synthesis tool

Maria Papadogiorgaki; Nikos Grammalidis; Dimitrios Tzovaras; Michael G. Strintzis

Collaboration


Dive into Maria Papadogiorgaki's collaboration.

Top Co-Authors


Vasileios Mezaris

Aristotle University of Thessaloniki


Michael G. Strintzis

Aristotle University of Thessaloniki


Nikos Grammalidis

Aristotle University of Thessaloniki
