Publications

Featured research published by Marco Tamassia.


annual symposium on computer-human interaction in play | 2015

Player-Computer Interaction Features for Designing Digital Play Experiences across Six Degrees of Water Contact

William L. Raffe; Marco Tamassia; Fabio Zambetta; Xiaodong Li; Sarah Jane Pell; Florian 'Floyd' Mueller

Physical games involving the use of water, or that are played in a water environment, can be found in many cultures throughout history. However, these experiences have yet to see much benefit from advancements in digital technology. With advances in waterproof interactive technology, we see great potential for digital water play. This paper provides a guide for commencing projects that aim to design and develop digital water-play experiences. A series of interaction features is provided as a result of reflecting on prior work as well as our own practice in designing playful experiences for water environments. These features are examined in terms of the effect that water has on them, in relation to a taxonomy of six degrees of water contact ranging from the player being in the vicinity of water to being completely underwater. The intent of this paper is to prompt forward thinking in the prototype design phase of digital water-play experiences, allowing designers to learn and gain inspiration from similar past projects before development begins.


computational intelligence and games | 2016

Predicting player churn in destiny: A Hidden Markov models approach to predicting player departure in a major online game

Marco Tamassia; William L. Raffe; Rafet Sifa; Anders Drachen; Fabio Zambetta; Michael Hitchens

Destiny is, to date, the most expensive digital game ever released, with a total operating budget of over half a billion US dollars. It stands as one of the main examples of AAA titles, the term used for the largest and most heavily marketed game productions in the games industry. Destiny is a blend of a shooter game and a massively multiplayer online game, and has attracted tens of millions of players. As a persistent game title, predicting retention and churn in Destiny is crucial to the running operations of the game, but prediction has not been attempted for this type of game in the past. In this paper, we present a discussion of the challenge of predicting churn in Destiny, evaluate behavioral features by the area under the ROC curve (AUC), and use Hidden Markov Models to develop a churn prediction model for the game.
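The abstract does not reproduce the model details; as a minimal sketch of the general idea, the snippet below runs the HMM forward algorithm on a hypothetical two-state model (an "engaged" and a "disengaging" latent state) over discretized weekly-playtime observations. All transition, emission, and observation values are made up for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical two-state HMM: state 0 = "engaged", state 1 = "disengaging".
# Transition matrix A, initial distribution pi, and emission matrix B over
# three discretized playtime levels (low/medium/high) are illustrative only.
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([0.8, 0.2])
B = np.array([[0.1, 0.3, 0.6],   # engaged players tend to play more
              [0.7, 0.2, 0.1]])  # disengaging players tend to play less

def state_posterior(obs):
    """Forward algorithm: filtered P(state | observations so far)."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # predict, then weight by emission
        alpha /= alpha.sum()
    return alpha

# A player whose weekly playtime drops from high to low looks "disengaging";
# the posterior mass on state 1 serves as a churn-risk score.
posterior = state_posterior([2, 2, 1, 0, 0])
churn_risk = posterior[1]
```

A declining playtime sequence pushes the filtered posterior toward the disengaging state, which is the intuition behind using HMM state estimates as churn predictors.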


international symposium on safety, security, and rescue robotics | 2013

Visual coverage using autonomous mobile robots for search and rescue applications

A. Del Bue; Marco Tamassia; F. Signorini; Vittorio Murino; Alessandro Farinelli

This paper focuses on visual sensing of 3D large-scale environments. Specifically, we consider a setting where a group of robots equipped with a camera must fully cover a surrounding area. To address this problem we propose a novel descriptor for visual coverage that aims to measure the visual information of an area based on a regular discretization of the environment into voxels. Moreover, we propose an autonomous cooperative exploration approach that controls the robot movements so as to maximize information accuracy (defined based on our visual coverage descriptor) while minimizing movement costs. Finally, we define a simulation scenario based on real visual data and on widely used robotic tools (such as ROS and Stage) to empirically evaluate our approach. Experimental results show that the proposed method outperforms a baseline random approach and an uncoordinated one, making it a valid solution for visual coverage in large-scale outdoor scenarios.
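A minimal sketch of the voxel idea, assuming a regular grid and a cubic sensing region around the camera; the paper's actual descriptor measures visual information per voxel (e.g. accounting for viewing distance and angle), which is omitted here.

```python
import numpy as np

# Toy voxel-based coverage descriptor: discretize the environment into a
# regular grid and track which voxels have been observed. Grid size and the
# cubic sensing radius are illustrative assumptions, not paper parameters.
GRID = (10, 10, 4)
observed = np.zeros(GRID, dtype=bool)

def observe(cx, cy, cz, radius=2):
    """Mark voxels within a cubic sensing radius of the camera as covered."""
    xs = slice(max(cx - radius, 0), cx + radius + 1)
    ys = slice(max(cy - radius, 0), cy + radius + 1)
    zs = slice(max(cz - radius, 0), cz + radius + 1)
    observed[xs, ys, zs] = True

def coverage():
    """Fraction of voxels carrying visual information: the coverage score."""
    return observed.mean()

observe(2, 2, 1)
observe(7, 7, 2)
```

An exploration controller would then trade the expected gain in this score against movement cost when choosing the next robot position.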


First Australasian Conference on Artificial Life and Computational Intelligence | 2015

Learning Options for an MDP from Demonstrations

Marco Tamassia; Fabio Zambetta; William L. Raffe; Xiaodong Li

The options framework provides a foundation for using hierarchical actions in reinforcement learning. At any point in time, an agent using options alongside primitive actions can decide to perform a macro-action composed of many primitive actions rather than a single primitive action. Such macro-actions can be hand-crafted or learned; previous work has learned them by exploring the environment. Here we take a different perspective and present an approach to learn options from a set of expert demonstrations. Empirical results are also presented in a setting similar to that used in other work in this area.
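For readers unfamiliar with the framework, an option is conventionally defined by an initiation set, an intra-option policy, and a termination condition. The sketch below encodes that triple on a toy one-dimensional corridor; all names and numbers are illustrative, not from the paper.

```python
from dataclasses import dataclass
from typing import Callable, Set

# Minimal encoding of an "option": an initiation set I, an intra-option
# policy pi, and a termination condition beta. States are plain ints here.
@dataclass
class Option:
    initiation: Set[int]                 # states where the option may start
    policy: Callable[[int], int]         # state -> primitive action (step)
    terminates: Callable[[int], bool]    # state -> should the option stop?

# A hypothetical "go right until the wall" option on a 1-D corridor 0..4.
go_right = Option(
    initiation={0, 1, 2, 3},
    policy=lambda s: +1,                 # single primitive action: step right
    terminates=lambda s: s == 4,
)

def run_option(opt, state):
    """Execute the option's macro-action from `state` until termination."""
    assert state in opt.initiation
    while not opt.terminates(state):
        state += opt.policy(state)       # apply the primitive action
    return state
```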


Proceedings of the Australasian Computer Science Week Multiconference | 2018

Measuring player skill using dynamic difficulty adjustment

Simon Demediuk; Marco Tamassia; William L. Raffe; Fabio Zambetta; Florian 'Floyd' Mueller; Xiaodong Li

Video games have a long history of use for educational and training purposes, as they provide increased motivation and learning for players. One limitation of using video games in this manner is that players still need to be tested outside the game environment to assess their learning outcomes. Traditionally, determining a player's skill level in a competitive game requires players to compete directly with each other. Through the application of the Adaptive Training Framework, this work presents a novel method to determine the skill level of a player after each interaction with the video game. This is done by measuring the effort of a Dynamic Difficulty Adjustment agent, without the need for direct competition between players. The experiments conducted in this research show that by measuring a player's Heuristic Value Average, we can obtain the same ranking of players as state-of-the-art ranking systems, without the need for direct competition.
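As a rough sketch of the ranking step, suppose the DDA agent logs a heuristic evaluation of the game state after each interaction; averaging those values gives a Heuristic Value Average, and sorting players by it yields a skill ranking. The per-turn values below, and the assumption that a lower average (the agent being pressed harder) indicates a stronger player, are illustrative only and not taken from the paper.

```python
# Illustrative Heuristic Value Average (HVA): the mean of the DDA agent's
# per-interaction heuristic evaluations, positive when the agent is ahead.
def heuristic_value_average(values):
    return sum(values) / len(values)

# Hypothetical logs: assumed convention is that the agent scores lower
# against stronger players, so a lower HVA means a more skilled opponent.
logs = {
    "player_a": [0.6, 0.5, 0.7, 0.4],    # agent comfortably ahead: novice
    "player_b": [-0.2, 0.1, -0.3, 0.0],  # agent pressed hard: expert
}
ranking = sorted(logs, key=lambda p: heuristic_value_average(logs[p]))
```

Here `ranking` orders players from most to least skilled under the assumed sign convention, without any head-to-head match between them.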


Proceedings of the Australasian Computer Science Week Multiconference | 2018

Player retention in league of legends: a study using survival analysis

Simon Demediuk; Alexandra Murrin; David Bulger; Michael Hitchens; Anders Drachen; William L. Raffe; Marco Tamassia

Multi-player online esports games are designed for extended durations of play, requiring substantial experience to master. Furthermore, esports game revenues are increasingly driven by in-game purchases. For esports companies, the trends in players leaving their games therefore not only provide information about potential problems in the user experience, but also impact revenue. Being able to predict when players are about to leave the game - churn prediction - is therefore an important capability for companies in the rapidly growing esports sector, as it allows them to take action to remedy churn problems. The objective of the work presented here is to understand the impact of specific behavioral characteristics on the likelihood of a player continuing to play the esports title League of Legends. Here, a solution to the problem is presented based on the application of survival analysis, using Mixed Effects Cox Regression, to predict player churn. Survival analysis forms a useful approach to the churn prediction problem as it provides churn rates as well as an assessment of the characteristics of players who are at risk of leaving the game. Hazard rates are also presented for the leading indicators, with results showing that the duration between matches played is a strong indicator of potential churn.
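The paper fits a Mixed Effects Cox regression, which is beyond a short snippet; as a lighter illustration of the survival-analysis framing (durations with right-censoring for still-active players), here is a Kaplan-Meier estimator over made-up retention data.

```python
# Kaplan-Meier survival curve: at each observed churn time, the survival
# probability is multiplied by (1 - deaths / at-risk). censored=True means
# the player was still active when observation ended (right-censoring).
def kaplan_meier(durations, censored):
    events = sorted({t for t, c in zip(durations, censored) if not c})
    surv, curve = 1.0, []
    for t in events:
        at_risk = sum(d >= t for d in durations)
        died = sum(d == t and not c for d, c in zip(durations, censored))
        surv *= 1 - died / at_risk
        curve.append((t, surv))
    return curve

# Made-up "days until the player stopped playing" data for six players.
durations = [5, 8, 8, 12, 20, 20]
censored  = [False, False, True, False, True, True]
curve = kaplan_meier(durations, censored)
```

A Cox model extends this framing by relating the hazard rate to player covariates (such as time between matches), which is where the paper's leading indicators come from.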


IEEE Transactions on Computational Intelligence and Ai in Games | 2017

Learning Options from Demonstrations: A Pac-Man Case Study

Marco Tamassia; Fabio Zambetta; William L. Raffe; Florian 'Floyd' Mueller; Xiaodong Li

Reinforcement learning (RL) is a machine learning paradigm behind many successes in games, robotics, and control applications. RL agents improve through trial-and-error, therefore undergoing a learning phase during which they perform suboptimally. Research effort has been put into optimizing behavior during this period, to reduce its duration and to maximize after-learning performance. We introduce a novel algorithm that extracts useful information from expert demonstrations (traces of interactions with the target environment) and uses it to improve performance. The algorithm detects unexpected decisions made by the expert and infers what goal the expert was pursuing. Goals are then used to bias decisions while learning. Our experiments in the video game Pac-Man provide statistically significant evidence that our method can improve final performance compared to a state-of-the-art approach.
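A toy sketch of the detection step described above: states where the expert's demonstrated action disagrees with the greedy action under a value estimate are flagged as candidate goals the expert may have been pursuing. The Q-values and demonstration below are made-up numbers, not data from the paper.

```python
import numpy as np

# Hypothetical Q-value table: rows are states, columns are actions.
Q = np.array([
    [1.0, 0.2],   # state 0: greedy action is 0
    [0.3, 0.9],   # state 1: greedy action is 1
    [0.8, 0.1],   # state 2: greedy action is 0
])
demonstration = [(0, 0), (1, 1), (2, 1)]  # (state, expert action) pairs

def unexpected_decisions(Q, demo):
    """States where the expert deviates from the greedy action w.r.t. Q."""
    return [s for s, a in demo if a != int(np.argmax(Q[s]))]

# In state 2 the expert chose action 1 against the greedy action 0, so
# state 2 is flagged as a candidate subgoal.
candidates = unexpected_decisions(Q, demonstration)
```

Such candidate states could then seed goal-biased exploration during learning, in the spirit of the paper's approach.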


computational intelligence and games | 2015

Enhancing theme park experiences through adaptive cyber-physical play

William L. Raffe; Marco Tamassia; Fabio Zambetta; Xiaodong Li; Florian 'Floyd' Mueller


european conference on artificial intelligence | 2016

Dynamic choice of state abstraction in Q-learning

Marco Tamassia; Fabio Zambetta; William L. Raffe; Florian 'Floyd' Mueller; Xiaodong Li


computational intelligence and games | 2017

Monte Carlo tree search based algorithms for dynamic difficulty adjustment

Simon Demediuk; Marco Tamassia; William L. Raffe; Fabio Zambetta; Xiaodong Li; Florian 'Floyd' Mueller

Collaboration

Top co-authors of Marco Tamassia:

Vittorio Murino (Istituto Italiano di Tecnologia)
A. Del Bue (Istituto Italiano di Tecnologia)