
Publication


Featured research published by Vincenzo Cannella.


Journal of e-learning and knowledge society | 2010

Intelligent Agents supporting user interactions within self regulated learning processes

Carlo Alberto Bentivoglio; Diego Bonura; Vincenzo Cannella; Simone Carletti; Arianna Pipitone; Pier Giuseppe Rossi; Giuseppe Russo

The paper focuses on the main advantages of the definition and use of an open, modular e-learning software platform supporting the highly cognitive tasks performed by the main actors of the learning process. We present in detail the integration into the platform of two intelligent agents, one devoted to talking with the student and one to retrieving new information sources on the Web. The process is triggered when the system perceives that the student is discontented with the presented contents. The architecture is detailed, and some conclusions about the growth of the platform's overall performance are drawn.


Complex, Intelligent and Software Intensive Systems | 2008

GAIML: A New Language for Verbal and Graphical Interaction in Chatbots

Vincenzo Cannella; Giuseppe Russo

One of the aims of research in the field of human-computer interaction is the design of natural and intuitive interaction modalities. In particular, many efforts have been devoted to the development of systems able to interact with the user in natural language. Chatbots are the classical interfaces for natural language interaction. Such systems can be very sophisticated, including support for 3D avatars and speech analysis and synthesis. However, all of them present only a text area allowing the user to input her sentences. No doubt, an interaction also involving natural language can increase the comfort of the user with respect to common interfaces using only graphical widgets. However, multi-modal communication is to be preferred in all those situations where the user and the system have a tight interaction. Typical examples are cultural heritage applications (intelligent museum guides, picture browsing) or systems presenting the user with information integrated from different sources, as in the case of the iGoogle (TM) interface. In this work we present the Graphical Artificial Intelligence Markup Language, an extension of AIML allowing the merging of verbal and graphical interaction modalities. A chatbot system, Graphbot, able to support this language is also presented. The language can define personalized interface patterns that are the most suitable ones in relation to the type of data exchanged between the user and the system during the dialogue.


Mobile Information Systems | 2008

GAIML: A new language for verbal and graphical interaction in chatbots

Giuseppe Russo; Vincenzo Cannella; Daniele Peri

Natural and intuitive interaction between users and complex systems is a crucial research topic in human-computer interaction. A major direction is the definition and implementation of systems with natural language understanding capabilities. Interaction in natural language is often performed by means of systems called chatbots. A chatbot is a conversational agent with a proper knowledge base, able to interact with users. A chatbot's appearance can be very sophisticated, with 3D avatars and speech processing modules; however, the interaction between the system and the user is performed only through textual areas for inputs and replies. An interaction able to complement natural language with graphical widgets could be more effective; conversely, a graphical interaction that also involves natural language can increase the comfort of the user compared with using only graphical widgets. In many applications, multi-modal communication is to be preferred when the user and the system have a tight and complex interaction. Typical examples are cultural heritage applications (intelligent museum guides, picture browsing) or systems providing the user with integrated information taken from different and heterogeneous sources, as in the case of the iGoogle™ interface. We propose to mix the two modalities (verbal and graphical) to build systems with a reconfigurable interface that changes with respect to the particular application context. The result of this proposal is the Graphical Artificial Intelligence Markup Language (GAIML), an extension of AIML allowing both interaction modalities to be merged. In this context a suitable chatbot system called Graphbot is presented to support the language. With this language it is possible to define personalized interface patterns that are the most suitable ones in relation to the data types exchanged between the user and the system according to the context of the dialogue.
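GAIML itself is an XML dialect extending AIML; as a rough Python illustration of the underlying idea only (the names below are hypothetical, not the actual GAIML schema), an AIML-style category can pair a verbal reply with a description of the graphical widget to render alongside it:

```python
# Hypothetical sketch of the GAIML idea: each category pairs a verbal
# template with an optional widget description for the interface to render.
CATEGORIES = [
    {
        "pattern": "SHOW ME PAINTINGS BY *",
        "template": "Here are paintings by {0}.",
        # Graphical part: render a gallery widget alongside the text reply.
        "widget": {"type": "image_gallery", "query_field": 0},
    },
    {
        "pattern": "HELLO",
        "template": "Hello! Ask me about an artist.",
        "widget": None,  # purely verbal reply
    },
]

def match(user_input, categories=CATEGORIES):
    """Return (reply_text, widget_spec) for the first matching category."""
    tokens = user_input.upper().split()
    for cat in categories:
        pat = cat["pattern"].split()
        if "*" in pat:
            # naive AIML-style matching: '*' captures the remaining tokens
            head = pat[: pat.index("*")]
            if tokens[: len(head)] == head and len(tokens) > len(head):
                star = " ".join(tokens[len(head):])
                return cat["template"].format(star.title()), cat["widget"]
        elif tokens == pat:
            return cat["template"], cat["widget"]
    return "Sorry, I did not understand.", None
```

The point of the sketch is only the data shape: the reply carries both a verbal and a graphical part, and the interface chooses the widget pattern best suited to the exchanged data type.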


Complex, Intelligent and Software Intensive Systems | 2009

GUI Usability in Medical Imaging

Vincenzo Cannella; Orazio Gambino; Salvatore Vitabile

The diffusion of computer technologies in everyday life has led to the birth of standard methodologies to control their development. Indeed, the purpose of standardization procedures is to provide rules that govern technologies, leaving no space for empirical improvisation. In general, medical software manufacturers provide their applications with Graphical User Interfaces (GUIs) that are not compliant with any clear, standard usability criterion. The only guideline is the creation of GUIs inherited from the ones adopted on medical consoles, because physicians use them routinely. This paper addresses this issue: medical software interfaces should be designed to overcome the limitations described above. The DICOM standard provides useful mechanisms for transferring medical data by instantiating the related classes and methods. The standard could be extended by introducing new data and service models in order to create abstract GUI classes and instantiate them automatically depending on the medical data the physicians would like to analyze.


Conference on Human System Interactions | 2009

A map-based visualization tool to support tutors in e-learning 2.0

Vincenzo Cannella; Giuseppe Russo

Web 2.0 essentially concerns the social issues raised by the new usage of web applications, but the participative web and user-generated content induce a new way of thinking about the design of web applications themselves. This is particularly true in the field of educational systems, which are all web-based applications. Many researchers are now devoted to studying what is called e-learning 2.0, both with regard to the technological issues in the field of computer science and in relation to the impact of the web 2.0 social and psychological issues on the education process itself. One of the most crucial topics in e-learning 2.0 is how to support the teacher/tutor in avoiding cognitive overload when he/she monitors the evolution of group dynamics inside the class and decides on the proper strategies to ensure the pursuit of the learning goals. Map visualization is a good way to present information without cognitive overload. We present a map-based tool in support of the tutor that is an extension of our ITS called TutorJ. The tool allows a human tutor to have multiple map visualizations of the domain of the course, the social (forum-based) interaction between the students, and the number of topics faced by each student. The paper reports a detailed description of the architecture of the tool and a discussion of its relevance in the field of e-learning 2.0.


BICA | 2013

Comprehensive Uncertainty Management in MDPs

Vincenzo Cannella; Antonio Chella

Multistage decision-making in robots involved in real-world tasks is a process affected by uncertainty. The effects of an agent's actions in a physical environment cannot always be predicted deterministically and precisely. Moreover, observing the environment can be too onerous for a robot, and hence not continuous. Markov Decision Processes (MDPs) are a well-known solution inspired by the classic probabilistic approach to managing uncertainty. On the other hand, the inclusion of fuzzy logic and possibility theory has widened uncertainty representation. Probability, possibility, fuzzy logic, and epistemic belief allow treating different and not always superimposable facets of uncertainty. This paper presents a new extended version of the MDP, designed to manage all these kinds of uncertainty together in order to describe transitions between multi-valued fuzzy states. The motivation of this work is the design of robots that can make decisions over time in an unpredictable environment. The model is described in detail along with its computational solution.
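The paper's model extends the classical probabilistic MDP; purely as background, a minimal value-iteration sketch for a standard MDP (a toy two-state example, not the paper's fuzzy/possibilistic extension) might look as follows:

```python
# Value iteration for a classical (probabilistic) MDP -- the baseline model
# that the paper extends with fuzzy, possibilistic, and epistemic uncertainty.
# Toy example: two states s0, s1 and two actions a0, a1.

# transitions[state][action] = list of (next_state, probability)
transitions = {
    "s0": {"a0": [("s0", 0.5), ("s1", 0.5)], "a1": [("s1", 1.0)]},
    "s1": {"a0": [("s0", 1.0)], "a1": [("s1", 1.0)]},
}
# immediate reward for taking an action in a state
rewards = {("s0", "a0"): 0.0, ("s0", "a1"): 1.0,
           ("s1", "a0"): 2.0, ("s1", "a1"): 0.0}

def value_iteration(gamma=0.9, tol=1e-6):
    """Iterate the Bellman optimality update until values converge."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s in transitions:
            # Q-value of each action: reward + discounted expected next value
            q = [rewards[(s, a)] + gamma * sum(p * V[s2] for s2, p in nxt)
                 for a, nxt in transitions[s].items()]
            best = max(q)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V
```

For this example the fixed point can be checked by hand: the optimal policy alternates a1 in s0 and a0 in s1, giving V(s1) = 2.9/0.19 and V(s0) = 1 + 0.9 V(s1).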


Archive | 2011

An Emotional Talking Head for a Humoristic Chatbot

Agnese Augello; Orazio Gambino; Vincenzo Cannella; Salvatore Gaglio; Giovanni Pilato

Interest in enhancing the interface usability of applications and entertainment platforms has increased in recent years. Research in human-computer interaction on conversational agents, also known as chatbots, and on natural language dialogue systems equipped with audio-video interfaces has grown as well. One of the most pursued goals is to enhance the realism of interaction with such systems. For this reason they are provided with catchy interfaces using humanlike avatars capable of adapting their behavior according to the conversation content. Such agents can interact vocally with users by means of Automatic Speech Recognition (ASR) and Text To Speech (TTS) systems; besides, they can change their "emotions" according to the sentences entered by the user. In this framework, the visual aspect of interaction also plays a key role, leading to systems capable of performing speech synchronization with an animated face model. Such systems are called Talking Heads. Several implementations of talking heads are reported in the literature. Facial movements are simulated by rational free-form deformation in the 3D talking head developed in Kalra et al. (2006). A Cyberware scanner is used to acquire the surface of a human face in Lee et al. (1995); the surface is then converted to a triangle mesh by means of image analysis techniques oriented to finding reflectance local minima and maxima. In Waters et al. (1994) the DECface system is presented. In this work, the animation of a wireframe face model is synchronized with an audio stream provided by a TTS system. An input ASCII text is converted into a phonetic transcription and a speech synthesizer generates an audio stream. The audio server receives a query to determine the phoneme currently running, and the shape of the mouth is computed from the trajectory of the main vertexes. In this way, the audio samples are synchronized with the graphics.
A nonlinear function controls the translation of the polygonal vertices so as to simulate the mouth movements. Synchronization is achieved by calculating the deformation length of the mouth, based on the duration of a group of audio samples. BEAT (Behavior Expression Animation Toolkit), an intelligent agent with human characteristics controlled by an input text, is presented in Cassell et al. (2001). A talking head for the Web with a client-server architecture is described in Ostermann et al. (2000). The client application comprises the browser, the TTS engine, and the animation renderer.


Intelligent Systems Design and Applications | 2009

WikiArt: An Ontology-Based Information Retrieval System for Arts

Vincenzo Cannella; Orazio Gambino; Arianna Pipitone; Giuseppe Russo

The paper presents WikiArt, a new system integrating three distinct types of content about art: data, information, and knowledge, in order to automatically generate thematic paths for consulting all of its contents. WikiArt is a wiki, allowing users to cooperatively manage documents about artists, artworks, artistic movements or techniques, and so on. It is also an expert system, provided with an ontology about art, with which it is able to plan different possible ways of consulting and browsing its contents. This ability is made possible by a second part of the system's ontology, describing a collection of criteria for planning thematic paths, and by a set of rules followed by the expert system to carry out this task. WikiArt is not a semantic wiki, because the ontology has not been employed by the authors to semantically tag the documents themselves, but only their subjects. Our efforts are now devoted to extending the system to make it a semantic wiki too.


Intelligent Tutoring Systems | 2014

Fostering Teacher-Student Interaction and Learner Autonomy by the I-TUTOR Maps

Vincenzo Cannella; Laura Fedeli; Arianna Pipitone; Pier Giuseppe Rossi

The paper analyses the use of an automatically generated map as a mediator; the map visually represents the study domain of a university course and fosters co-activity between teachers and students. In our approach the role of the teacher is meant as a mediator between the student and knowledge. Mediation, rather than transmission, highlights a process in which there is no deterministic relation between teaching and learning. Learning is affected by the students' previous experiences, their own modalities of acquisition, and the inputs coming from the environment. The learning path develops when the teacher's and the students' visions approach and, partly, overlap. In this case we have co-activity. In such a process the teacher uses artifacts as mediators (Bruner). The automatically generated map can be considered a mediator. The paper describes the experimentation of the artifact to check whether its use fosters: (1) the elicitation of the different subjects' perspectives (the different students' and the teacher's), and (2) the structural coupling, that is, the creation of an empathic process between the perspectives of the teacher and the student as the way to enable co-activity processes between teaching and learning.


Journal of e-learning and knowledge society | 2014

Automatic Concept Maps Generation in Support of Educational Processes

Arianna Pipitone; Vincenzo Cannella

A VLE is a system in which three main actors can be identified: the teacher in the role of instructional designer, the tutor, and the student. Instructional designers need easy interaction for specifying the course domain structure to the system, and for controlling how well the learning materials agree with such a structure. Tutors need tools for a holistic perception of the evolution of single students and/or groups in the VLE during the learning process. Finally, students need self-regulation in terms of controlling their learning rate, reflecting on their learning strategies, and comparing themselves with other people in the class. In this work we claim that sharing an implicit representation of the knowledge about the course domain among all these actors can meet the requirements stated above, and we present a tool that has been developed as part of the I-TUTOR project according to this claim. The tool analyzes a suitable document corpus describing the course domain and generates a semantic space, which in turn is displayed as a 2D zoomable map. All the relevant concepts of the domain are depicted in the map, and learning materials can be browsed through the tool. The texts generated by students during the learning process, as well as their social activities inside the VLE, can also be placed on the map. The motivations of the work are reported, as well as the underlying AI techniques, and the whole system is explained in detail.
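As a toy illustration of the semantic-space idea (plain LSA on a tiny hand-made corpus, not the actual I-TUTOR pipeline or its corpus), a rank-2 truncated SVD of a term-document matrix yields a 2D coordinate for each term, which is the kind of layout a zoomable concept map can display:

```python
import numpy as np

# Toy corpus standing in for the course-domain documents.
corpus = [
    "markov decision process reward",
    "markov process transition reward",
    "chatbot dialogue natural language",
    "dialogue natural language interface",
]
vocab = sorted({w for doc in corpus for w in doc.split()})

# Term-document count matrix: rows = terms, columns = documents.
A = np.array([[doc.split().count(t) for doc in corpus] for t in vocab],
             dtype=float)

# Rank-2 truncated SVD (Latent Semantic Analysis): project each term
# into a 2D semantic space where co-occurring terms land close together.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
coords = U[:, :2] * S[:2]           # 2D coordinate of each term

term_xy = dict(zip(vocab, coords))  # term -> (x, y) position on the map
```

In this space, terms that share document contexts ("markov" and "reward") end up near each other, while terms from unrelated documents ("markov" and "chatbot") are placed far apart, which is what makes the 2D map readable as a concept overview.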
