Enrico Giacinto Caldarola
University of Naples Federico II
Publication
Featured research published by Enrico Giacinto Caldarola.
ACM SIGSOFT Software Engineering Notes | 2015
Enrico Giacinto Caldarola; Antonio Picariello; Daniela Castelluccia
In the past few years, a massive amount of data has been generated by the increased and ubiquitous use of Information and Communication Technologies (ICTs) in human activities and by the proliferation of smart devices and sensors, which continuously connect people and things in cyberspace. This huge bubble of data is a gold mine: it is an unlimited source of knowledge and insights about people's habits and preferences, and it has captured the attention of modern enterprises. Companies look at these data with interest, aiming to gain competitive advantage by applying analytics tools to them. In this context, a new approach is required for mastering data without the risk of ending up inside the bubble and collecting a huge, meaningless pile of junk data. Starting from a definition of these new approaches, this work outlines Big Data strategies for modern enterprises and highlights challenges, emergent solutions and open issues.
Management of Emergent Digital Ecosystems | 2015
Enrico Giacinto Caldarola; Antonio Picariello; Antonio M. Rinaldi
In recent years, the wide availability of information and knowledge models formalized through ontologies has demanded effective and efficient methodologies for reusing and integrating such models into global conceptualizations of a specific knowledge or application domain. The ability to effectively and efficiently perform knowledge reuse is a crucial factor in the development of ontologies, which are a potential solution to the problem of information standardization and a viaticum towards the realization of knowledge-based digital ecosystems. In this paper, an approach to ontology reuse based on heterogeneous matching techniques is presented; in particular, we show how the process of ontology building can be improved and simplified by automating the selection and reuse of existing data models to support the creation of digital ecosystems. The proposed approach has been applied to the food domain, specifically to food production.
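The abstract above does not include implementation details; as a minimal sketch of the "selection of existing data models" step, the snippet below scores candidate ontologies by how well their rdfs:label values cover a set of food-domain terms. The use of rdflib, the file names and the coverage heuristic are illustrative assumptions, not the authors' actual procedure.

```python
# Minimal sketch: rank candidate ontologies for reuse by how many target
# domain terms appear among their rdfs:label values. rdflib, the file names
# and the heuristic itself are illustrative assumptions.
from rdflib import Graph
from rdflib.namespace import RDFS

def label_coverage(ontology_path, domain_terms):
    """Fraction of domain terms found (as substrings) in the ontology's labels."""
    g = Graph()
    g.parse(ontology_path, format="xml")   # RDF/XML assumed
    labels = {str(o).lower() for _, _, o in g.triples((None, RDFS.label, None))}
    hits = sum(1 for term in domain_terms if any(term.lower() in lab for lab in labels))
    return hits / len(domain_terms) if domain_terms else 0.0

# Hypothetical candidates and food-production terms.
candidates = ["foodA.owl", "foodB.owl"]
terms = ["cheese", "fermentation", "pasteurization", "packaging"]
ranking = sorted(candidates, key=lambda p: label_coverage(p, terms), reverse=True)
print(ranking)
```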
International Conference on Data Technologies and Applications | 2015
Enrico Giacinto Caldarola; Antonio M. Rinaldi
For several years now, we have been living in the information era. Since almost every human activity is carried out by means of information technologies and tends to be digitized, it produces a humongous stack of data that becomes more and more attractive to different stakeholders, such as data scientists, entrepreneurs or private individuals. All of them are interested in the possibility of gaining a deep understanding of people and things by accurately and wisely analyzing the gold mine of data they produce. The reason for such interest derives from the competitive advantage and the increase in revenues expected from this deep understanding. To help analysts reveal the insights hidden behind the data, new paradigms, methodologies and tools have emerged in recent years. There has been such an explosion of technological solutions that a review of the current state of the art in the Big Data technology landscape is needed. Thus, after a characterization of the new paradigm under study, this work surveys the most widespread technologies under the Big Data umbrella through a qualitative analysis of their characterizing features.
International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management | 2015
Enrico Giacinto Caldarola; Antonio Picariello; Antonio M. Rinaldi
In the Big Data era, the visualization of large data sets is becoming an increasingly relevant task due to the great impact that data have from a human perspective. Since visualization is the phase of the data life cycle closest to the users, there is no doubt that an effective, efficient and impressive representation of the analyzed data may be as important as the analytic process itself. This paper presents an experience in importing, querying and visualizing graph databases; in particular, we describe the WordNet database as a case study, using Neo4j and Cytoscape. We describe each step of this study, focusing on the strategies used to overcome the different problems that arise, mainly due to the intricate nature of the case study. Finally, an attempt is made to define some criteria for simplifying the large-scale visualization of WordNet, providing some examples and the considerations that have arisen.
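As a concrete illustration of the importing and querying steps described above, the following sketch loads a small slice of WordNet (via NLTK) into Neo4j with the official Python driver and then walks the hypernym chain with Cypher. The node label Synset and the relationship type HYPERNYM are assumed names; the paper's actual meta-model may differ.

```python
# Minimal sketch of loading a slice of WordNet into Neo4j and querying it.
# Node/relationship names, credentials and the bolt URI are assumptions.
from neo4j import GraphDatabase
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_hypernyms(tx, synset):
    # Create the synset, its direct hypernyms, and the arcs between them.
    for hyper in synset.hypernyms():
        tx.run(
            "MERGE (a:Synset {name: $a}) "
            "MERGE (b:Synset {name: $b}) "
            "MERGE (a)-[:HYPERNYM]->(b)",
            a=synset.name(), b=hyper.name(),
        )

with driver.session() as session:
    for syn in wn.synsets("dog"):
        session.execute_write(load_hypernyms, syn)

    # Example query: walk up to three hypernym hops from 'dog.n.01'.
    result = session.run(
        "MATCH (s:Synset {name: $name})-[:HYPERNYM*1..3]->(h) RETURN h.name",
        name="dog.n.01",
    )
    print([record["h.name"] for record in result])

driver.close()
```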
International Congress on Big Data | 2016
Enrico Giacinto Caldarola; Antonio M. Rinaldi
In the Big Data era, the visualization of large data sets is becoming an increasingly relevant task due to the great impact that data have from a human perspective. Since visualization is the phase of the data life cycle closest to the users, there is no doubt that an effective, efficient and impressive representation of the analyzed data may be as important as the analytic process itself. Starting from previous experiences in importing, querying and visualizing the WordNet database with Neo4j and Cytoscape, this work aims at improving the visualization of the WordNet graph by exploiting the features and concepts behind tag clouds. The objective of this study is twofold: firstly, we argue that the proposed visualization strategy is able to put order into the messy and dense structure of nodes and edges of large knowledge bases such as WordNet, showing as much information as possible from this knowledge source in a clearer way; secondly, we think that the tag cloud approach applied to synonym rings supports human cognition in recognizing the different usages of words in natural languages such as English. In this regard, we also propose a formal strategy for evaluating the information perception of our methodology by means of a questionnaire administered to a group of users. Finally, we compare these results with those obtained from well-known representations of WordNet within existing GUIs.
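The paper's exact weighting scheme for the tag clouds is not given in the abstract; the sketch below shows one plausible way to derive tag sizes for a single WordNet synonym ring from the corpus usage counts that NLTK exposes. The linear size mapping is an illustrative assumption.

```python
# Minimal sketch: tag-cloud weights for one WordNet synonym ring (synset),
# sizing each lemma by its corpus usage count. The linear mapping from count
# to font size is an assumption, not the paper's metric.
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def tag_cloud_weights(synset_name, min_size=10, max_size=48):
    synset = wn.synset(synset_name)
    counts = {lemma.name(): lemma.count() for lemma in synset.lemmas()}
    top = max(counts.values()) or 1
    # Map raw counts to font sizes in [min_size, max_size].
    return {word: min_size + (max_size - min_size) * c / top
            for word, c in counts.items()}

print(tag_cloud_weights("car.n.01"))
# e.g. {'car': 48.0, 'auto': ..., 'automobile': ..., 'machine': ..., 'motorcar': ...}
```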
Information Reuse and Integration | 2016
Enrico Giacinto Caldarola; Antonio M. Rinaldi
In recent years, the wide availability of data and schema models formalized through different languages has demanded effective and efficient methodologies to reuse such models. One of the most challenging problems consists in integrating different models into a global conceptualization of a specific knowledge or application domain. This is a hard task to accomplish due to ambiguities, inconsistencies and heterogeneities, at different levels, that can stand in the way. The ability to effectively and efficiently perform knowledge reuse is a crucial factor in knowledge management systems, and it also represents a potential solution to the problem of information standardization and a viaticum towards the realization of the Semantic Web. In this paper, an approach to ontology reuse based on heterogeneous matching techniques is presented; in particular, we show how the process of ontology construction can be improved and simplified by automating the selection and reuse of existing data models. The proposed approach is applied to the food domain, specifically to food production.
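The abstract names heterogeneous matching techniques without detailing them; a minimal, assumed combination of a lexical (edit-based) matcher and a linguistic (WordNet-synonymy) matcher might look like the following. The threshold and the choice of taking the maximum of the two scores are illustrative, not the authors' actual settings.

```python
# Minimal sketch of a heterogeneous matcher for two concept labels:
# a lexical score (string similarity) plus a linguistic score (WordNet
# synonymy). Threshold and score combination are illustrative assumptions.
from difflib import SequenceMatcher
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def lexical_score(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def linguistic_score(a, b):
    # 1.0 if b is a synonym of a in any WordNet sense, else 0.0.
    synonyms_a = {l.name().lower() for s in wn.synsets(a) for l in s.lemmas()}
    return 1.0 if b.lower() in synonyms_a else 0.0

def match(a, b, threshold=0.8):
    score = max(lexical_score(a, b), linguistic_score(a, b))
    return score, score >= threshold

print(match("cheese", "chesse"))    # typo-like variant: matched lexically
print(match("beverage", "drink"))   # synonyms: matched linguistically
```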
International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management | 2015
Enrico Giacinto Caldarola; Antonio Picariello; Antonio M. Rinaldi
Data and information visualization is becoming strategic for the exploration and explanation of large data sets due to the great impact that data have from a human perspective. Visualization is the phase of the data life cycle closest to the users; thus, an effective, efficient and impressive representation of the analyzed data may be as important as the analytic process itself. In this paper, we present our experiences in importing, querying and visualizing graph databases, taking one of the most widespread lexical databases as a case study: WordNet. After defining a meta-model to translate WordNet entities into nodes and arcs of a labeled, oriented graph, we try to define some criteria for simplifying the large-scale visualization of the WordNet graph, providing some examples and the considerations that arise. Eventually, we suggest a new visualization strategy for WordNet synonym rings by exploiting the features and concepts behind tag clouds.
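As a rough illustration of the meta-model mentioned above, the sketch below translates one WordNet synset, its lemmas and its hypernyms into nodes and labeled, directed arcs represented as plain Python dictionaries. The node and edge label names are assumptions; the paper's meta-model may use different ones.

```python
# Minimal sketch of a meta-model turning WordNet entities into nodes and
# labeled, directed arcs. Label names (Synset, Word, MEMBER_OF, HYPERNYM)
# are assumptions for illustration only.
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def synset_subgraph(synset_name):
    synset = wn.synset(synset_name)
    nodes, edges = [], []

    nodes.append({"id": synset.name(), "label": "Synset",
                  "gloss": synset.definition()})
    for lemma in synset.lemmas():
        nodes.append({"id": lemma.name(), "label": "Word"})
        edges.append({"from": lemma.name(), "to": synset.name(),
                      "label": "MEMBER_OF"})        # word -> synonym ring
    for hyper in synset.hypernyms():
        nodes.append({"id": hyper.name(), "label": "Synset",
                      "gloss": hyper.definition()})
        edges.append({"from": synset.name(), "to": hyper.name(),
                      "label": "HYPERNYM"})         # is-a relation
    return nodes, edges

nodes, edges = synset_subgraph("dog.n.01")
print(len(nodes), "nodes,", len(edges), "edges")
```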
Archive | 2016
Enrico Giacinto Caldarola; Antonio M. Rinaldi
Today's revolutionary web, the Semantic Web, has augmented the previous one by promoting common data formats and exchange protocols in order to provide a framework that allows data to be shared and reused across application, enterprise and community boundaries. This revolution, together with the increasing digitization of the world, has led to a high availability of knowledge models, i.e., more or less formal representations of the concepts underlying a certain universe of discourse, which span a wide range of topics, fields of study and applications and are mostly heterogeneous from each other along different dimensions. As more and more outbreaks of this new revolution light up, a major challenge has come into sight: addressing the main objectives of the Semantic Web, the sharing and reuse of data, demands effective and efficient methodologies to mediate between models speaking different languages. Since ontologies are the de facto standard for representing and sharing knowledge models over the web, this paper presents a comprehensive methodology for ontology integration and reuse based on various matching techniques. The approach proposed here is supported by an ad hoc software framework whose scope is to ease the creation of new ontologies by promoting the reuse of existing ones and by automating, as much as possible, the whole ontology construction procedure.
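The chapter's framework is not reproduced here; as an assumed sketch of the integration step only, the snippet below merges two source ontologies with rdflib and records each matched class pair as an owl:equivalentClass axiom. File names, URIs and the RDF/XML serialization are hypothetical.

```python
# Minimal sketch of ontology integration: merge two source ontologies and
# link each matched class pair with owl:equivalentClass. rdflib, the file
# names and the example URIs are illustrative assumptions.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

def integrate(path_a, path_b, correspondences):
    merged = Graph()
    merged.parse(path_a, format="xml")   # RDF/XML assumed
    merged.parse(path_b, format="xml")
    for uri_a, uri_b in correspondences:
        merged.add((URIRef(uri_a), OWL.equivalentClass, URIRef(uri_b)))
    return merged

# Hypothetical matched classes from a food-domain alignment.
pairs = [("http://example.org/ontoA#Cheese",
          "http://example.org/ontoB#CheeseProduct")]
merged = integrate("ontoA.owl", "ontoB.owl", pairs)
merged.serialize("merged.owl", format="xml")
```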
International Conference on Data Technologies and Applications | 2017
Enrico Giacinto Caldarola; Antonio M. Rinaldi
In the era of Big Data, the visualization of large data sets deserves great attention. Among the main phases of the data management life cycle, i.e., storage, analytics and visualization, the last one is the most strategic, since it is the closest to the human perspective. The huge mine of data becomes a gold mine only if tricky and wise analytics algorithms are executed over the data deluge and, at the same time, the results of the analytic process are visualized in an effective, efficient and, why not, impressive way. Not surprisingly, a plethora of tools and techniques has emerged in recent years for Big Data visualization, both as part of data management systems and as software or plugins specifically devoted to data visualization. Starting from these considerations, this paper provides a survey of the most used and widespread visualization tools and techniques for large data sets, eventually presenting a synoptic view of the main functional and non-functional characteristics of the surveyed tools.
Information Reuse and Integration | 2017
Enrico Giacinto Caldarola; Antonio M. Rinaldi
We live in an increasingly connected and data-greedy world. In the last decade, informative content on the Web has grown in volume, connectivity and heterogeneity to an extent never seen before. Well-known examples of Online Multimedia Social Networks (OMSNs), such as Facebook or Twitter, demonstrate the humongous volume and complexity characterizing common scenarios of the contemporary Web. Recognizing this means, today, adopting intelligent information systems able to use data, and the links between data, to gain insights and clues from such intricate and dense networks. To address this goal, these systems should have formal models able to efficiently extract the knowledge retained in the network, even when it is not explicit. In this way, complex data can be managed and used to perform new tasks and implement innovative functionalities. This article describes the use of a semantically labelled, property-based graph model to represent the information coming from OMSNs by exploiting linguistic-semantic properties between terms and the available low-level multimedia descriptors. The multimedia features are automatically extracted using algorithms based on MPEG-7 descriptors and integrated with textual data from a general knowledge base.
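As an assumed, minimal rendering of such a property-based graph, the sketch below uses networkx to store users, multimedia items and terms as nodes, semantic labels on directed edges, and MPEG-7-like feature vectors as node properties. The specific labels and feature names are illustrative, not the article's model.

```python
# Minimal sketch of a semantically labelled property graph for an online
# multimedia social network. networkx, the node kinds, the edge labels and
# the stand-in descriptor values are illustrative assumptions.
import networkx as nx

g = nx.MultiDiGraph()

# Nodes with properties.
g.add_node("user:alice", kind="User")
g.add_node("img:42", kind="Multimedia",
           color_layout=[0.12, 0.55, 0.31],        # stand-in MPEG-7 descriptor
           edge_histogram=[0.2, 0.1, 0.4, 0.3])
g.add_node("term:sunset", kind="Term")
g.add_node("term:dusk", kind="Term")

# Semantically labelled, directed edges.
g.add_edge("user:alice", "img:42", label="POSTED")
g.add_edge("img:42", "term:sunset", label="ANNOTATED_WITH")
g.add_edge("term:sunset", "term:dusk", label="SYNONYM_OF")   # linguistic relation

# Example traversal: terms directly annotating an image.
terms = [v for _, v, d in g.out_edges("img:42", data=True)
         if d["label"] == "ANNOTATED_WITH"]
print(terms)
```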