Publication


Featured research published by Ioannis Charalampidis.


arXiv: Distributed, Parallel, and Cluster Computing | 2014

Micro-CernVM: slashing the cost of building and deploying virtual machines

Jakob Blomer; D. Berzano; P. Buncic; Ioannis Charalampidis; G. Ganis; Georgios Lestaris; René Meusel; V. Nicolaou

The traditional virtual machine building and deployment process is centered around the virtual machine hard disk image. The packages comprising the VM operating system are carefully selected, hard disk images are built for a variety of different hypervisors, and images have to be distributed and decompressed in order to instantiate a virtual machine. Within the HEP community, the CernVM File System has been established in order to decouple the distribution of the experiment software from the building and distribution of the VM hard disk images. We show how to get rid of such pre-built hard disk images altogether. Due to the high requirements on POSIX compliance imposed by HEP application software, CernVM-FS can also be used to host and boot a Linux operating system. This allows the use of a tiny bootable CD image that comprises only a Linux kernel, while the rest of the operating system is provided on demand by CernVM-FS. This approach speeds up the initial instantiation time and reduces virtual machine image sizes by an order of magnitude. Furthermore, security updates can be distributed instantaneously through CernVM-FS. By leveraging the fact that CernVM-FS is a versioning file system, a historic analysis environment can easily be re-spawned by selecting the corresponding CernVM-FS file system snapshot.
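To make the deployment model concrete, here is a minimal sketch (not from the paper) of booting such a kernel-only image with QEMU from Python; the ISO path is a hypothetical placeholder, and the rest of the operating system would be fetched on demand from CernVM-FS once the guest starts:

```python
import subprocess

# Hypothetical path to a Micro-CernVM-style boot image: a small ISO
# containing little more than a Linux kernel; the root file system is
# provided over the network by CernVM-FS at boot time.
BOOT_ISO = "ucernvm-bootloader.iso"

# Launch the VM with QEMU. Only the tiny ISO has to be shipped to the
# hypervisor; no multi-gigabyte hard disk image is distributed.
subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",        # use hardware virtualization if available
    "-m", "2048",         # 2 GB of guest RAM
    "-cdrom", BOOT_ISO,   # the kernel-only boot image
    "-boot", "d",         # boot from the CD-ROM device
    "-nographic",         # serial console instead of a graphical display
], check=True)
```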


arXiv: Distributed, Parallel, and Cluster Computing | 2014

PROOF as a Service on the Cloud: a Virtual Analysis Facility based on the CernVM ecosystem

D. Berzano; Jakob Blomer; P. Buncic; Ioannis Charalampidis; G. Ganis; Georgios Lestaris; René Meusel

PROOF, the Parallel ROOT Facility, is a ROOT-based framework that enables interactive parallelism for event-based tasks on a cluster of computing nodes. Although PROOF can be used simply from within a ROOT session with no additional requirements, deploying and configuring a PROOF cluster used to be far less straightforward. Recently, great effort has been put into enabling the provisioning of generic PROOF analysis facilities with zero configuration, with the added advantage of improving both stability and scalability and making the deployment operations feasible even for the end user. Since a growing share of large-scale computing resources is nowadays made available by Cloud providers in virtualized form, we have developed the Virtual PROOF-based Analysis Facility: a cluster appliance combining the solid CernVM ecosystem and PoD (PROOF on Demand), ready to be deployed on the Cloud and leveraging distinctive Cloud features such as elasticity. We will show how this approach is effective both for sysadmins, who will have little or no configuration to do to run it on their Clouds, and for end users, who are ultimately in full control of their PROOF cluster and can even easily restart it by themselves in the unfortunate event of a major failure. We will also show how elasticity leads to a more efficient and uniform usage of Cloud resources.
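As a rough illustration of the elasticity idea (hypothetical policy, thresholds, and function names, not the actual vAF/PoD implementation), a controller could size the PROOF worker pool from the backlog of queued tasks:

```python
def desired_workers(pending_tasks: int,
                    tasks_per_worker: int = 4,
                    min_workers: int = 1,
                    max_workers: int = 50) -> int:
    """Target pool size for a given backlog of queued tasks."""
    wanted = pending_tasks // tasks_per_worker
    return max(min_workers, min(max_workers, wanted))

def scaling_delta(pending_tasks: int, running_workers: int) -> int:
    """Positive: start this many cloud VMs; negative: release idle ones."""
    return desired_workers(pending_tasks) - running_workers

# Example: 120 queued tasks on a 10-worker cluster -> grow by 20 workers.
print(scaling_delta(pending_tasks=120, running_workers=10))  # prints 20
```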


Journal of Physics: Conference Series | 2011

Studying ROOT I/O performance with PROOF-Lite

C. Aguado-Sanchez; Jakob Blomer; P. Buncic; Ioannis Charalampidis; G. Ganis; M. Nabozny; F. Rademakers

Parallelism aims to improve computing performance by executing a set of computations concurrently. Since the advent of today's many-core machines, the full exploitation of the available CPU power has been one of the main challenges. In High Energy Physics (HEP) final data analysis, the bottleneck is not only the available CPU but also the available I/O bandwidth. Most of today's HEP analysis frameworks depend on ROOT I/O. In this paper we will discuss the results obtained studying ROOT I/O performance using PROOF-Lite, a parallel multi-process approach whose results can be directly applied to the generic case of many jobs running concurrently on the same machine. We will also discuss the impact of running the applications in virtual machines.
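The measurement approach can be sketched in a few lines (a simplified stand-in for the paper's actual harness, with placeholder input files): read the same files with an increasing number of worker processes, mirroring PROOF-Lite's one-process-per-core model, and observe how the aggregate throughput scales:

```python
import multiprocessing as mp
import time

CHUNK = 1 << 20  # read in 1 MiB chunks

def read_file(path: str) -> int:
    """Sequentially read one file; return the number of bytes read."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            total += len(chunk)
    return total

def throughput(paths: list[str], workers: int) -> float:
    """Aggregate read rate in MB/s with `workers` concurrent processes."""
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        nbytes = sum(pool.map(read_file, paths))
    return nbytes / (time.perf_counter() - start) / 1e6

if __name__ == "__main__":
    files = ["data0.bin", "data1.bin"]  # placeholder input files
    for n in (1, 2, 4, 8):
        print(f"{n} workers: {throughput(files, n):.1f} MB/s")
```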


arXiv: Distributed, Parallel, and Cluster Computing | 2014

CernVM Online and Cloud Gateway: a uniform interface for CernVM contextualization and deployment

Georgios Lestaris; Ioannis Charalampidis; D. Berzano; Jakob Blomer; P. Buncic; G. Ganis; René Meusel

In a virtualized environment, contextualization is the process of configuring a VM instance for the needs of various deployment use cases. Contextualization in CernVM can be done by passing a handwritten context to the user-data field of cloud APIs, when running CernVM on the cloud, or by using the CernVM web interface when running the VM locally. CernVM Online is a publicly accessible web interface that unifies these two procedures. A user is able to define, store and share CernVM contexts using CernVM Online and then apply them either in a cloud by using CernVM Cloud Gateway or on a local VM with the single-step pairing mechanism. CernVM Cloud Gateway is a distributed system that provides a single interface to multiple, different clouds (by location or type, private or public). So far, Cloud Gateway has been integrated with the OpenNebula, CloudStack, and EC2 interfaces. A user with access to a number of clouds can run CernVM Cloud agents that communicate with these clouds through their native interfaces, and then use one single interface to deploy and scale CernVM clusters. CernVM clusters are defined in CernVM Online and consist of a set of CernVM instances that are contextualized and can communicate with each other.
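To ground the "handwritten context" path the abstract mentions, here is a minimal sketch using boto3 (the AWS SDK for Python) to pass a context through the standard EC2 user-data field; the AMI ID and the context keys are illustrative placeholders, not an actual CernVM context:

```python
import boto3

# Illustrative context only: real CernVM contexts are defined and stored
# in CernVM Online, and their exact format is not reproduced here.
CONTEXT = """\
[cernvm]
users = alice:alice:secret
"""

ec2 = boto3.client("ec2", region_name="us-east-1")

# The user-data field is the generic cloud-API hook that CernVM Online
# and Cloud Gateway automate on the user's behalf.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical CernVM image ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    UserData=CONTEXT,
)
```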


Journal of Physics: Conference Series | 2015

CernVM WebAPI - Controlling Virtual Machines from the Web

Ioannis Charalampidis; D. Berzano; Jakob Blomer; P. Buncic; G. Ganis; René Meusel; Ben Segal

Lately, there has been a trend among scientific projects to look for computing resources in the volunteer community. In addition, to reduce the development effort required to port the scientific software stack to all the known platforms, the use of Virtual Machines (VMs) is becoming increasingly popular. Unfortunately, their use further complicates software installation and operation, restricting the volunteer audience to sufficiently expert people. CernVM WebAPI is a software solution addressing this specific case in a way that opens up wide new application opportunities. It offers a very simple API for setting up, controlling, and interfacing with a VM instance on the user's computer, while at the same time relieving the user of the burden of downloading, installing, and configuring the hypervisor. WebAPI comes with a lightweight JavaScript library that guides the user through the application installation process. Malicious usage is prevented by a per-domain PKI validation mechanism. In this contribution we will give an overview of this new technology, discuss its security features, and examine some test cases where it is already in use.
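The per-domain PKI validation can be pictured with a small sketch (hypothetical, not the actual WebAPI implementation) using the Python cryptography package: a site embedding the library must present material signed with the key registered for its domain before it may drive the user's hypervisor:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.serialization import load_pem_public_key

def is_trusted(domain: str, payload: bytes, signature: bytes,
               registry: dict[str, bytes]) -> bool:
    """Verify `payload` against the public key registered for `domain`.

    `registry` maps a domain name to its registered PEM public key;
    all names here are hypothetical illustrations.
    """
    pem = registry.get(domain)
    if pem is None:
        return False  # unknown domain: reject outright
    public_key = load_pem_public_key(pem)
    try:
        public_key.verify(signature, payload,
                          padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False
```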


Journal of Physics: Conference Series | 2015

Status and Roadmap of CernVM

D. Berzano; Jakob Blomer; P. Buncic; Ioannis Charalampidis; G. Ganis; René Meusel

Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteers' computers (e.g. LHC@Home 2.0). In either case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. Since version 3, the CernVM virtual machine is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype into a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on upcoming developments, including support for Scientific Linux 7, the use of container virtualization such as that provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.
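As a pointer to what the cloud-init contextualization mentioned in the roadmap looks like, here is a minimal illustrative user-data document composed in Python; the user name, repository, and commands are placeholders, not CernVM's actual configuration:

```python
# A cloud-init "#cloud-config" document, passed verbatim as instance
# user data (for example via the EC2 UserData field shown earlier).
USER_DATA = """\
#cloud-config
users:
  - name: analysis              # hypothetical login for the appliance
runcmd:
  - [ cvmfs_config, setup ]     # prepare the CernVM-FS client
  - [ mount, -t, cvmfs, sl7.cern.ch, /cvmfs/sl7.cern.ch ]  # placeholder repo
"""

print(USER_DATA)
```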


Journal of Physics: Conference Series | 2012

Managing the Virtual Machine Lifecycle of the CernVM Project

Ioannis Charalampidis; Jakob Blomer; P. Buncic; Artem Harutyunyan; D. Larsen

CernVM is a virtual software appliance designed to support the development cycle and provide a runtime environment for LHC applications. It consists of a minimal Linux distribution, a specially tuned file system designed to deliver application software on demand, and contextualization tools. The maintenance of these components involves a variety of different procedures and tools that cannot always be connected with each other. Additionally, most of these procedures need to be performed frequently. Currently, in the CernVM project, every time we build a new virtual machine image we have to perform the whole process manually, because of the heterogeneity of the tools involved. The overall process is error-prone and time-consuming. Therefore, to simplify and aid this continuous maintenance process, we are developing a framework that combines these virtually unrelated tools under a single, coherent interface. To do so, we identified all the procedures involved and their tools, tracked their dependencies, and organized them into logical groups (e.g. build, test, instantiate). These groups define the procedures that are performed throughout the lifetime of a virtual machine. In this paper we describe the Virtual Machine Lifecycle and the framework we developed (iAgent) to simplify the maintenance process.
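The grouping-with-dependencies idea can be made concrete with a toy model (stage names illustrative, not iAgent's actual interface): declare each lifecycle stage with its prerequisites and let a topological sort derive a valid execution order, rather than running the tools by hand:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Each stage maps to the set of stages that must complete before it.
STAGES = {
    "build":       set(),
    "test":        {"build"},
    "publish":     {"test"},
    "instantiate": {"publish"},
    "monitor":     {"instantiate"},
}

# Derive and run a dependency-respecting order:
# build -> test -> publish -> instantiate -> monitor
for stage in TopologicalSorter(STAGES).static_order():
    print(f"running stage: {stage}")  # invoke the corresponding tool here
```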


Archive | 2017

A Collaborative Citizen Science Platform to Bring Together Scientists, Volunteers, and Game Players

Poonam Yadav; Ioannis Charalampidis; Jeremy Cohen; John Darlington; Francois Grey


arXiv: Software Engineering | 2014

The Need for a Versioned Data Analysis Software Environment

Jakob Blomer; D. Berzano; P. Buncic; Ioannis Charalampidis; G. Ganis; Georgios Lestaris; René Meusel


Journal of Physics: Conference Series | 2012

Long-term preservation of analysis software environment

Dag Toppe Larsen; Jakob Blomer; P. Buncic; Ioannis Charalampidis; Artem Harutyunyan

Collaboration


Dive into Ioannis Charalampidis's collaboration.

Top Co-Authors

Jeremy Cohen

Imperial College London

Poonam Yadav

University of Cambridge
