Markos Zampoglou
Technological Educational Institute of Crete
Publications
Featured research published by Markos Zampoglou.
International Conference on 3D Web Technology | 2013
Markos Zampoglou; Patti Spala; Konstantinos Kontakis; Athanasios G. Malamos; J. Andrew Ware
Content description is an important step in multimedia indexing and search applications. While, in the past, a large volume of research has been devoted to image, audio, and video data, 3D scenes have received relatively little attention. In this paper, we present a methodology for the automatic description of 3D scenes, based not only on textual metadata but also on their shape, structure, color, animation, lighting, viewpoint, texture, and interactivity content. Our system accepts 3D scenes as input, written in the open X3D standard for web graphics, and automatically builds MPEG-7 descriptions. In order to fully model 3D content, we draw upon our previous work, in which we extended the MPEG-7 standard with multiple 3D-specific descriptors. Here, we further extend MPEG-7, and present our approach for automatic descriptor extraction. We take advantage of the fact that both X3D and MPEG-7 are written in XML, and base our automatic extraction system on eXtensible Stylesheet Language Transformations (XSLT). We have incorporated our system into a large-scale platform for VR advertising over the web, where the benefits of automatic annotation can be twofold: authors are offered better access to stored 3D material, for editing and reuse, and end users can be provided with advertisements whose semantic content matches their profile.
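As an illustration only (not code from the paper), the following TypeScript sketch shows how an X3D document could be transformed into an MPEG-7 description with an XSLT stylesheet, using the browser's standard XSLT facilities; the scene and stylesheet file names are hypothetical placeholders, and the actual descriptor mapping lives in the stylesheet.

```typescript
// Minimal sketch (not the authors' code): applying an XSLT stylesheet to an
// X3D document to produce an MPEG-7 description, using standard browser APIs.
// The file names below are hypothetical placeholders.

async function loadXml(url: string): Promise<Document> {
  const text = await (await fetch(url)).text();
  return new DOMParser().parseFromString(text, "application/xml");
}

async function extractMpeg7Description(
  sceneUrl: string,
  stylesheetUrl: string
): Promise<string> {
  const [x3dDoc, xsltDoc] = await Promise.all([
    loadXml(sceneUrl),
    loadXml(stylesheetUrl),
  ]);

  // The stylesheet is assumed to encode the mapping from X3D nodes
  // (Shape, Material, Viewpoint, TimeSensor, ...) to MPEG-7 descriptors.
  const processor = new XSLTProcessor();
  processor.importStylesheet(xsltDoc);
  const mpeg7Doc = processor.transformToDocument(x3dDoc);

  return new XMLSerializer().serializeToString(mpeg7Doc);
}

// Hypothetical usage; neither file is an artifact of the paper.
extractMpeg7Description("scene.x3d", "x3d-to-mpeg7.xsl").then(console.log);
```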
International Conference on Telecommunications | 2014
Kostas Kapetanakis; Spyros Panagiotakis; Athanasios G. Malamos; Markos Zampoglou
Innovative technologies implemented in web browsers have led to the rise of a new era of virtual worlds. Devices with powerful hardware can now present interactive 3D virtual worlds on High Definition (HD) monitors. Recent frameworks such as X3DOM provide a plug-in-free solution for presenting interactive 3D graphics and animations within a browser. Furthermore, the MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard can be implemented in server-client applications to dynamically adapt video streaming quality and provide the best possible user experience. In this work, we present an approach that extends the adaptation methods of X3DOM by adding a mechanism to perform dynamic adaptation and achieve HD video delivery in 3D Virtual Reality (VR) worlds. We maintain the advantages of both bridged technologies and provide a web-friendly application solution that requires no software installation.
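A minimal sketch of the bridging idea, under assumptions not stated in the abstract: dash.js drives an HTML5 video element that is referenced as a texture inside an X3DOM scene. The element id, the MPD URL, and the availability of a video child inside an X3DOM texture node are all assumptions, not details taken from the paper.

```typescript
// Minimal sketch (not the paper's implementation): dash.js handles MPEG-DASH
// segment fetching and quality switching for an HTML5 <video> element that an
// X3DOM scene is assumed to reference as a texture, e.g.:
// <shape><appearance><texture><video id="adTexture"></video></texture></appearance>...</shape>
import * as dashjs from "dashjs";

function attachDashVideoTexture(mpdUrl: string): void {
  // "adTexture" is a hypothetical element id in the page's X3DOM markup.
  const video = document.getElementById("adTexture") as HTMLVideoElement;

  // dash.js performs the dynamic adaptation of the video stream.
  const player = dashjs.MediaPlayer().create();
  player.initialize(video, mpdUrl, /* autoplay */ true);
}

// Placeholder manifest URL.
attachDashVideoTexture("https://example.org/hd-ad/manifest.mpd");
```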
Multimedia Tools and Applications | 2018
Markos Zampoglou; Kostas Kapetanakis; Andreas Stamoulias; Athanasios G. Malamos; Spyros Panagiotakis
Modern Web 3D technologies allow us to display complex interactive 3D content, including models, textures, sounds and animations, using any HTML-enabled web browser. Thus, due to the device-independent nature of HTML5, the same content might have to be displayed on a wide range of different devices and environments. This means that the display of Web 3D content is faced with the same Quality of Experience (QoE) issues as other multimedia types, concerning bandwidth, computational capabilities of the end device, and content quality. In this paper, we present a framework for adaptive streaming of interactive Web 3D scenes to web clients using the MPEG-DASH standard. We offer an analysis of how the standard’s Media Presentation Description schema can be used to describe adaptive Web 3D scenes for streaming, and explore the types of metrics that can be used to maximize the user’s QoE. Then, we present a prototype client we have developed, and demonstrate how the 3D streaming process can take place over such a client. Finally, we discuss how the client framework can be used to design adaptive streaming policies that correspond to real-world scenarios.
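The following sketch is illustrative only and does not reproduce the paper's Media Presentation Description schema; it shows one plausible way adaptation logic could pick a 3D scene representation from MPD-like metadata, given a throughput estimate and a device complexity budget. All field names and numbers are assumptions.

```typescript
// Minimal sketch (assumed field names, not the paper's schema): choosing which
// quality level of a 3D scene to request next, given advertised representations,
// a measured throughput, and a rendering-complexity budget for the device.

interface SceneRepresentation {
  id: string;
  bandwidth: number;   // bits per second needed to stream this version
  triangles: number;   // geometric complexity, a proxy for GPU/CPU cost
}

function pickRepresentation(
  reps: SceneRepresentation[],
  measuredBps: number,
  maxTriangles: number
): SceneRepresentation {
  // Keep only versions the network and the device can sustain, then take the
  // richest one; fall back to the lightest version otherwise.
  const feasible = reps
    .filter(r => r.bandwidth <= measuredBps && r.triangles <= maxTriangles)
    .sort((a, b) => b.bandwidth - a.bandwidth);
  return feasible[0] ?? reps.slice().sort((a, b) => a.bandwidth - b.bandwidth)[0];
}

// Hypothetical example values.
const choice = pickRepresentation(
  [
    { id: "low", bandwidth: 500_000, triangles: 20_000 },
    { id: "mid", bandwidth: 2_000_000, triangles: 100_000 },
    { id: "high", bandwidth: 8_000_000, triangles: 500_000 },
  ],
  3_000_000,
  150_000
);
console.log(choice.id); // "mid"
```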
The Internet of Things | 2014
Markos Zampoglou; Athanasios G. Malamos; Kostas Kapetanakis; Konstantinos Kontakis; Emmanuel Sardis; George Vafiadis; Vrettos Moulos; Anastasios D. Doulamis
We present a large-scale platform for distributing Virtual Reality advertisements over the World Wide Web. The platform aims at receiving and transmitting large amounts of data over mobile and desktop devices in Smart City contexts, is based on a modular and distributed architecture to allow for scalability, and incorporates content-based search capabilities for Virtual Reality (VR) scenes to allow for content management. Data is stored on a cloud repository, so that a large amount of VR material can be kept and distributed, and follows a service-based approach of independent subsystems for the management, conversion and streaming of information. In order to function over a wide range of end devices, from mobile phones to high-end desktop PCs, the system is based on HTML5 technologies, and implements a remote rendering server to alleviate the computational burden on the end device. Furthermore, an extension of the MPEG-7 standard is used for the description and retrieval of 3D scenes from the cloud, and we have further ensured compliance of our system with a number of other structure and communication standards, to ensure extensibility and reusability of the sub-modules. The platform is a research work in progress: we present the subsystems already implemented, outline our next steps, and describe our research contributions.
Proceedings of the 19th International ACM Conference on 3D Web Technologies | 2014
Andreas Stamoulias; Athanasios G. Malamos; Markos Zampoglou; Don Brutzman
Given that physics can be fundamental for realistic and interactive Web3D applications, a number of JavaScript physics engines have been introduced in recent years. This paper presents the implementation of the rigid body physics component, as defined by the X3D specification, in the X3DOM environment, and the creation of dynamic, interactive 3D worlds. We briefly review the state of the art in current technologies for Web3D graphics, including HTML5, WebGL and X3D, and then explore the significance of physics engines in building realistic Web3D worlds. We include a comprehensive review of JavaScript physics engine libraries, summarize the significance of our implementation, and present the methodology we followed in detail. The results obtained so far from our cross-browser experiments demonstrate that real-time interactive scenes with hundreds of rigid bodies can be constructed and operated at acceptable frame rates, while allowing the user to maintain control of the scene.
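The sketch below is not the authors' X3DOM rigid body component; it illustrates the general pattern such an implementation relies on: stepping a JavaScript physics engine (here cannon-es, one library in the reviewed category) and writing the simulated poses back to X3DOM transform nodes through DOM attributes. The element id is hypothetical.

```typescript
// Minimal sketch (not the X3D RigidBodyCollection implementation): coupling a
// JavaScript physics engine to an X3DOM scene by updating <transform> DOM
// attributes each frame. "ballTransform" is a hypothetical node id.
import * as CANNON from "cannon-es";

const world = new CANNON.World();
world.gravity.set(0, -9.82, 0);

// A falling sphere and a static ground plane to collide with.
const ball = new CANNON.Body({ mass: 1, shape: new CANNON.Sphere(0.5) });
ball.position.set(0, 5, 0);
world.addBody(ball);

const ground = new CANNON.Body({ mass: 0, shape: new CANNON.Plane() });
ground.quaternion.setFromEuler(-Math.PI / 2, 0, 0); // plane normal points up
world.addBody(ground);

function quatToAxisAngle(q: CANNON.Quaternion): string {
  // X3D rotations are axis-angle strings: "x y z angle".
  const angle = 2 * Math.acos(Math.min(1, Math.max(-1, q.w)));
  const s = Math.sqrt(1 - q.w * q.w);
  return s < 1e-6 ? "0 1 0 0" : `${q.x / s} ${q.y / s} ${q.z / s} ${angle}`;
}

function animate(): void {
  world.step(1 / 60);
  const node = document.getElementById("ballTransform");
  if (node) {
    node.setAttribute(
      "translation",
      `${ball.position.x} ${ball.position.y} ${ball.position.z}`
    );
    node.setAttribute("rotation", quatToAxisAngle(ball.quaternion));
  }
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
```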
International Conference on Information, Intelligence, Systems and Applications | 2014
Kostas Kapetanakis; Markos Zampoglou; Fotis Milionis; Athanasios G. Malamos; Spyros Panagiotakis; Emmanuel Maravelakis
With the advancement of both 3D scanning technologies and Web3D, it is now feasible to convert Cultural Heritage objects and locations of interest into synthetic 3D scenes, and directly embed them in HTML pages so that users can visit them remotely, from practically any Web-enabled device. However, since such scanned scenes tend to be extremely detailed and consist of large volumes of data, browsing them can become a long, burdensome experience. While a number of progressive streaming approaches for 3D graphics have been proposed in the past, such methods tend to require a radical restructuring of the original data in order to be streamed to a web client. We implement a platform for 3D scenes that can stream any model encoded in the declarative X3DOM format without further pre-processing. We explore a number of state-of-the-art web technologies for model transmission, and compare them to the methods typically used until now. We present the advantages of each, and lay the groundwork for further extensions to our approach, towards a large-scale platform for the smooth streaming distribution of detailed 3D scenes to a large number of clients, without needing to alter the original Web3D format of the model.
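As a hedged illustration of streaming declarative content without re-encoding (not the paper's platform), the TypeScript sketch below fetches X3DOM markup fragments over HTTP and inserts them into the live DOM, which X3DOM picks up without reloading the scene. The fragment URLs and element id are placeholders.

```typescript
// Minimal sketch (assumptions, not the paper's system): progressively adding
// declarative X3DOM content to a live scene by fetching markup fragments and
// appending them to the DOM. "sceneRoot" and the URLs are hypothetical.

async function streamSceneFragments(fragmentUrls: string[]): Promise<void> {
  const sceneRoot = document.getElementById("sceneRoot");
  if (!sceneRoot) return;

  for (const url of fragmentUrls) {
    // Each fragment is assumed to be an X3DOM snippet, e.g. <shape>...</shape>.
    const markup = await (await fetch(url)).text();
    sceneRoot.insertAdjacentHTML("beforeend", markup);
  }
}

// Hypothetical usage: coarse geometry first, finer detail later.
streamSceneFragments([
  "/fragments/monument-coarse.html",
  "/fragments/monument-detail-1.html",
  "/fragments/monument-detail-2.html",
]);
```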
International Journal of Wireless Networks and Broadband Technologies (IJWNBT) | 2014
Kostas Kapetanakis; Markos Zampoglou; Athanasios G. Malamos; Spyros Panagiotakis; Emmanuel Maravelakis
Recent advances in web technologies have now created a ubiquitous environment for cross-platform and cross-device multimedia applications. Media files can now be reproduced on a wide range of devices, from mobile phones to desktop computers and web-enabled televisions, using a common infrastructure. This trend towards unifying the technological infrastructure, however, has given rise to a new array of problems resulting from the varying technological capabilities of the different devices and environments. This paper proposes an adaptive streaming framework for the display of 3D models on a wide range of web-enabled devices. The open, XML-based X3D language for 3D graphics is combined with the MPEG-DASH standard for adaptive streaming. The end result is a framework that can adaptively display 3D graphics in the face of network or computational limitations, and dynamically adapt data flow to maximize user Quality of Experience in any situation.
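A minimal, assumption-laden sketch of one ingredient such a framework needs: estimating client throughput from timed segment downloads, so that an MPEG-DASH-style adaptation loop for 3D content can decide when to switch quality levels. This is not the framework itself, and the URLs passed to it would be placeholders.

```typescript
// Minimal sketch (illustrative only): a moving-average throughput estimator
// fed by timed segment downloads, usable as input to an adaptation decision.

class ThroughputEstimator {
  private samples: number[] = [];

  // Download one segment and record its effective bits-per-second.
  async measure(url: string): Promise<number> {
    const start = performance.now();
    const bytes = (await (await fetch(url)).arrayBuffer()).byteLength;
    const seconds = (performance.now() - start) / 1000;
    const bps = (bytes * 8) / Math.max(seconds, 1e-3);
    this.samples.push(bps);
    if (this.samples.length > 5) this.samples.shift(); // short sliding window
    return bps;
  }

  // Simple moving average over the last few segments.
  estimate(): number {
    if (this.samples.length === 0) return 0;
    return this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
  }
}
```

In a real adaptation loop, the estimate would be combined with device-side metrics (frame rate, memory) before choosing the next representation to request.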
International Conference on Telecommunications | 2014
Michael Kalochristianakis; Markos Zampoglou; Konstantinos Kontakis; Kostas Kapetanakis; Athanasios G. Malamos
This paper presents a scene composition approach that allows the combinational use of standard three-dimensional objects (models) in order to create virtual worlds using X3D technologies. We extend the current state of the art by incorporating visual interactions and automatic content description capabilities into well-known Web3D authoring software. The work is an integral part of a broader research effort aiming to construct large-scale online advertising infrastructures that rely on virtual reality technologies, namely interactive 3D advertising in HTML contexts. The paper addresses a number of problems posed by the relevant technologies that make high-level scene management difficult, and the solutions we adopted to implement our approach.
The New Review of Hypermedia and Multimedia | 2014
Markos Zampoglou; Athanasios G. Malamos
In this paper, we present an organized survey of the existing literature on music information retrieval systems in which descriptor features are extracted directly from the compressed audio files, without prior decompression to pulse-code modulation format. Avoiding the decompression step and utilizing the readily available compressed-domain information can significantly lighten the computational cost of a music information retrieval system, allowing application to large-scale music databases. We identify a number of systems relying on compressed-domain information and form a systematic classification of the features they extract, the retrieval tasks they tackle and the degree to which they achieve an actual increase in the overall speed—as well as any resulting loss in accuracy. Finally, we discuss recent developments in the field, and the potential research directions they open toward ultra-fast, scalable systems.
International Journal of Interactive Multimedia and Artificial Intelligence | 2014
Michael Kalochristianakis; Markos Zampoglou; Konstantinos Kontakis; Kostas Kapetanakis; Athanasios G. Malamos
This paper presents a scene composition approach that allows the combinational use of standard three-dimensional objects, called models, in order to create X3D scenes. The module is an integral part of a broader design aiming to construct large-scale online advertising infrastructures that rely on virtual reality technologies. The architecture addresses a number of problems regarding remote rendering for low-end devices and, not least, the provision of scene composition and integration. Since viewers do not keep information regarding individual input models or scenes, composition requires mechanisms that add state to the viewing technologies. As part of this work, we extended a well-known, open-source X3D authoring tool.