Daniele Liciotti
Marche Polytechnic University
Publications
Featured research published by Daniele Liciotti.
Lecture Notes in Computer Science | 2014
Daniele Liciotti; Marco Contigiani; Emanuele Frontoni; Adriano Mancini; Primo Zingaretti; Valerio Placidi
The aim of this paper is to present an integrated system consisting of an RGB-D camera and software able to monitor shoppers in intelligent retail environments. We propose an innovative low-cost smart system that can understand shoppers’ behaviour and, in particular, their interactions with the products on the shelves, with the aim of developing an automatic RGB-D technique for video analysis. The system of cameras detects the presence of people and identifies them univocally. Through the depth frames, the system detects the interactions of the shoppers with the products on the shelf and determines whether a product is picked up, whether it is taken and then put back, and finally whether there is no contact with the products. The system is low cost and easy to install, and experimental results demonstrate that its performance is satisfactory even in real environments.
Pattern Recognition Letters | 2016
Mirco Sturari; Daniele Liciotti; Roberto Pierdicca; Emanuele Frontoni; Adriano Mancini; Marco Contigiani; Primo Zingaretti
Highlights: Bluetooth beacon signals are used to provide low-cost indoor localization. A sensor fusion approach with RGB-D cameras is adopted to improve position accuracy. Customer positions allow shopper movements to be tracked and context notifications to be pushed. Customer paths are analyzed and clustered for customer behavior analysis. The system can evaluate store performance and support the retailers' decision-making process.
The development of reliable and precise indoor localization systems would considerably improve the ability to investigate shopper movements and behavior inside retail environments. Previous approaches used either computer vision technologies or the analysis of signals emitted by communication devices (beacons). While computer vision approaches provide a higher level of accuracy, beacons cover a wider operational area. In this paper, we propose a sensor fusion approach combining active radio beacons and RGB-D cameras. This system, used in an intelligent retail environment where cameras are already installed for other purposes, allows an affordable environment set-up and low operational costs for customer indoor localization and tracking. We adopt a Kalman filter to fuse localization data: radio signals emitted by beacons are used to track users' mobile devices, and RGB-D cameras are used to refine the position estimates. By combining coarse localization data from active beacons with RGB-D data from sparse cameras, we demonstrate that the indoor position estimation is strongly enhanced. The aim of this general framework is to provide retailers with useful information by analyzing consumer activities inside the store. To prove the robustness of our approach, several tests were conducted in a real indoor showroom by analyzing real customers' behavior, with encouraging results.
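The beacon/camera fusion described in this abstract can be sketched as a minimal per-axis Kalman measurement update with an identity measurement model. The noise levels `R_BEACON` and `R_CAMERA`, the function names, and the absence of a motion model are illustrative assumptions, not the paper's calibrated filter.

```python
import numpy as np

def kalman_update(x, P, z, R):
    """Per-axis Kalman measurement update (H = I, diagonal covariance)."""
    K = P / (P + R)          # Kalman gain
    x = x + K * (z - x)      # corrected state
    P = (1.0 - K) * P        # corrected covariance
    return x, P

# Hypothetical measurement noise: beacons are coarse, RGB-D is precise.
R_BEACON, R_CAMERA = 4.0, 0.05

def fuse(x, P, beacon_z=None, camera_z=None):
    """Fuse whichever position measurements are available this step."""
    if beacon_z is not None:
        x, P = kalman_update(x, P, np.asarray(beacon_z, float), R_BEACON)
    if camera_z is not None:
        x, P = kalman_update(x, P, np.asarray(camera_z, float), R_CAMERA)
    return x, P
```

Starting from a vague prior (large P), the fused estimate is pulled strongly toward the RGB-D measurement when a camera observes the person, and falls back to the coarser beacon reading elsewhere.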
international conference on multimedia and expo | 2015
Roberto Pierdicca; Daniele Liciotti; Marco Contigiani; Emanuele Frontoni; Adriano Mancini; Primo Zingaretti
The success of pervasive smart environments lies in their capacity to get visitors to interact with them, which is essential for retail stores. In this paper we describe the set-up of a low-cost system for indoor localization and customer interaction, developed with a complex infrastructure of wireless embedded sensors. The creation of a responsive store allows customers to connect the real world to their smart devices and overcomes the lack of ubiquity in public spaces; furthermore, in-venue analytics and proximity sensors make it possible to customize the user experience. First we describe the whole sensor network and go into detail on the active beacon technology adopted for this study. Then, thanks to the analytics, we present a data evaluation with the aim of determining the best sensor arrangement, according to several user tests. The results of our study demonstrate that, besides strongly enhancing human interaction, embedded localization systems can be a useful source of data collection. This paper aims to help retailers and insiders with many purposes, such as product development or improvement, segmentation strategies and human behaviour analysis in stores where embedded computing augments the environment.
ieee asme international conference on mechatronic and embedded systems and applications | 2014
Daniele Liciotti; Primo Zingaretti; Valerio Placidi
The aim of this work is to propose an integrated system consisting of an RGB-D camera and software able to monitor shoppers in intelligent retail environments. We propose an innovative low-cost intelligent system that can not only evaluate shopper behaviour but also detect shoppers' interactions with the products on the shelves, by developing automatic RGB-D techniques for video analysis. The system of cameras, located at strategic locations within the store, detects the presence of a person by identifying the blob and its centre of mass; the camera detects the person as a moving object. Through the video frames, the system detects the interactions of the shoppers with the products on the shelf and also establishes the type of interaction: whether a product is picked up, whether it is taken and then repositioned, and finally whether there is no contact with the products. Understanding shopper behaviour is very important for the marketing strategies of a retail store. The proposed architecture monitors this aspect and is low cost, easy to install and able to ensure very satisfactory results even in real environments.
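The three interaction outcomes the abstract enumerates (picked up, taken and repositioned, no contact) amount to a small decision over shelf state around a hand event. The sketch below is only a caricature of that decision: the product counts and the `hand_entered` flag are hypothetical inputs standing in for the paper's depth-based detection.

```python
def classify_interaction(count_before, count_after, hand_entered):
    """Classify a shopper-shelf interaction from product counts in a
    shelf region before/after a hand crosses the shelf plane.
    Illustrative logic only, not the paper's depth pipeline."""
    if not hand_entered:
        return "no contact"
    if count_after < count_before:
        return "product picked up"
    return "product taken and put back"
```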
international conference on communications | 2015
Daniele Liciotti; Gionata Massi; Emanuele Frontoni; Adriano Mancini; Primo Zingaretti
The aim of this work is to define a fall detection video system for indoor environments based on an RGB-D sensor and a low-power, low-cost embedded system that processes the sensor data in order to provide a description of human activities in the field of Ambient Assisted Living. Since the RGB image is highly sensitive to lighting conditions, the depth data are used to improve human activity recognition. The system is usable in a sufficiently small room and requires an RGB-D sensor located in the centre of the ceiling and an embedded system connected to a computer network. The embedded system controls the RGB-D sensor and, at the same time, classifies the images using computer vision algorithms based on the depth map. A “Water Filling” algorithm or a “Multi-Level Segmentation” algorithm is used to detect people. For each person, the system detects the position with respect to the room and also estimates the human posture. The extracted features include the height, the head size and the distance between the head and the shoulders. The system tracks a person through the frames starting from the first identification. Further, group interactions are monitored and analysed. The posture detection algorithm takes into account the distance between the person's head and the floor over time. During the experimental phase, conducted in many domestic scenarios, the proposed solution proved to be fast, accurate and able to provide a fall map for in-home fall risk assessment.
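The head-to-floor posture cue described above can be sketched as a sustained-threshold test over the per-frame head height. The 0.45 m height threshold and the minimum run length are illustrative assumptions, not values taken from the paper.

```python
def detect_fall(head_heights, fall_height=0.45, min_frames=10):
    """Flag a fall when the head stays below a height threshold
    (metres above the floor) for a sustained run of frames, so that
    brief crouches or bends are not reported as falls."""
    run = 0
    for h in head_heights:
        run = run + 1 if h < fall_height else 0
        if run >= min_frames:
            return True
    return False
```

Requiring a run of low-height frames, rather than a single one, is the usual way to separate a fall from transient postures such as tying a shoe.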
ieee asme international conference on mechatronic and embedded systems and applications | 2014
Daniele Liciotti; Giacomo Ferroni; Emanuele Frontoni; Stefano Squartini; Roberto Bonfigli; Primo Zingaretti; Francesco Piazza
In recent years, several studies on population ageing in the most advanced countries have argued that the share of people older than 65 years is steadily increasing. In order to tackle this phenomenon, a significant effort has been devoted to the development of advanced technologies for supervising domestic environments and their inhabitants, to provide them with assistance in their own homes. In this context, the present paper aims to delineate a novel, highly integrated system for advanced analysis of human behaviours. It is based on the fusion of the audio and vision frameworks developed at the Multimedia Assistive Technology Laboratory (MATeLab) of the Università Politecnica delle Marche, in order to operate in the ambient assisted living context exploiting audio-visual domain features. The existing video framework exploits vertical RGB-D sensors for people tracking, interaction analysis and user activity detection in domestic scenarios. The depth information is used to remove the effect of appearance variation and to evaluate user activities inside the home and in front of the fixtures. In addition, group interactions are monitored and analysed. On the other hand, the audio framework recognises voice commands by continuously monitoring the acoustic home environment. In addition, hands-free communication with a relative or a healthcare centre is automatically triggered when a distress call is detected. Echo and interference cancellation algorithms guarantee high-quality communication and reliable speech recognition, respectively. The system we intend to delineate thus exploits multi-domain information gathered from both the audio and video frameworks, and stores it in a remote cloud for instant processing and analysis of the scene. Related actions are consequently performed.
international conference on pattern recognition applications and methods | 2017
Daniele Liciotti; Emanuele Frontoni; Primo Zingaretti; Nicola Bellotto; Tom Duckett
Automated recognition of Activities of Daily Living (ADLs) makes it possible to identify possible health problems and apply corrective strategies in Ambient Assisted Living (AAL). ADL analysis can provide very useful information for elder care and long-term care services. This paper presents an automated RGB-D video analysis system that recognises human ADLs related to classical daily actions. The main goal is to predict the probability of the analysed subject's actions, so that abnormal behaviour can be detected. The activity detection and recognition is performed using an affordable RGB-D camera. Human activities, despite their unstructured nature, tend to have a natural hierarchical structure; for instance, making a coffee generally involves a three-step process of turning on the coffee machine, putting sugar in the cup and opening the fridge for milk. Action sequence recognition is then handled using a discriminative Hidden Markov Model (HMM). RADiaL, a dataset with RGB-D images and the 3D position of each person for training as well as evaluating the HMM, has been built and made publicly available.
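Action-sequence decoding with an HMM can be illustrated with the standard Viterbi algorithm over a plain generative model; note this is not the discriminative HMM variant the paper uses, and all the probabilities below are made-up toy values.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden action sequence for a discrete observation
    sequence. pi: initial probs (S,), A: transition probs (S, S),
    B: emission probs (S, O)."""
    S, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])   # log-prob of best path per state
    back = np.zeros((T, S), dtype=int)         # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)     # scores[prev, cur]
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):              # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With sticky transitions and emissions that strongly favour one observation per state, the decoded path simply tracks the observations.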
VAAM/FFER@ICPR | 2016
Daniele Liciotti; Marina Paolanti; Emanuele Frontoni; Adriano Mancini; Primo Zingaretti
Video analytics involves a variety of techniques to monitor, analyse and extract meaningful information from video streams. In this light, person re-identification is an important topic in scene monitoring, human-computer interaction, retail, people counting, ambient assisted living and many other computer vision research areas. The existing datasets are not suitable for activity monitoring and human behaviour analysis. For this reason we built a novel dataset for person re-identification that uses an RGB-D camera in a top-view configuration. This setup choice is primarily due to the reduction of occlusions, and it also has the advantage of being privacy preserving, because faces are not recorded by the camera. The use of an RGB-D camera allows the extraction of anthropometric features for the recognition of people passing under the camera. The paper describes in detail the collection and construction of the TVPR dataset, which comprises 100 people; for each video frame, nine depth and colour features are computed and provided together with key descriptive statistics.
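Given per-person anthropometric feature vectors of the kind the dataset provides, a naive re-identification step can be sketched as nearest-neighbour matching against an enrolled gallery. The feature choice (height, head radius, shoulder width), the identifiers and the distance threshold are illustrative assumptions, not the dataset's actual nine features.

```python
import numpy as np

def reidentify(query, gallery, max_dist=0.5):
    """Match a query anthropometric feature vector against enrolled
    people by Euclidean distance; return the closest identity, or
    None if nobody is close enough (an unseen person)."""
    if not gallery:
        return None
    ids = list(gallery)
    dists = [np.linalg.norm(np.asarray(query, float) - np.asarray(gallery[i], float))
             for i in ids]
    best = int(np.argmin(dists))
    return ids[best] if dists[best] <= max_dist else None
```

In practice the features would be normalised and the threshold tuned on validation data; the open-set reject (returning None) is what distinguishes re-identification from closed-set classification.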
Journal of Intelligent and Robotic Systems | 2018
Marina Paolanti; Daniele Liciotti; Rocco Pietrini; Adriano Mancini; Emanuele Frontoni
Understanding shopper behaviour is one of the keys to success for retailers. In particular, managers need to know which retail attributes are important to which shoppers, their main goal being to improve the consumer shopping experience. In this work, we present sCREEN (Consumer REtail ExperieNce), an intelligent mechatronic system for indoor navigation assistance in retail environments that minimizes the need for active tagging and does not require metric maps. The tracking system is based on Ultra-wideband technology. The digital devices are installed in the shopping carts and baskets, and sCREEN allows modelling and forecasting customer navigation in retail environments. This paper contributes the design of an intelligent mechatronic system that uses novel Hidden Markov Models (HMMs) for the representation of shoppers' shelf/category attraction and of usual retail scenarios, such as a product out of stock or changes in store layout. Observations are viewed as perceived intelligent system performance. By forecasting a consumer's next shelf/category attraction, the system can present item location information to the consumer, including a walking route map to the location of the product in the retail store and/or the number of the aisle in which the product is located. Effective and efficient design processes for mechatronic systems are a prerequisite for competitiveness in an intelligent retail environment. Experiments were performed in a real retail environment, a German supermarket, during business hours. A dataset with consumer trajectories, timestamps and the corresponding ground truth for training as well as evaluating the HMM has been built and made publicly available. The results in terms of Precision, Recall and F1-score demonstrate the effectiveness and suitability of our approach, with a precision value that exceeds 76% in all test cases.
international conference on intelligent autonomous systems | 2016
Daniele Liciotti; Annalisa Cenci; Emanuele Frontoni; Adriano Mancini; Primo Zingaretti
Information about the number of passengers getting on/off a vehicle is very important for public bus transport companies. In fact, operators need to estimate the number of travellers using their vehicles for marketing purposes, for evaluating transit service capacities and for allocating the proper number of buses to each line. The goal of this work is to provide a system for counting and monitoring passengers, both adults and children, at the entrance of the bus. This system is mainly based on an RGB-D sensor, located over each bus door, and image processing and understanding software. The RGB image can be strongly affected by lighting conditions, whereas depth data allow greater reliability and accuracy in people counting. The correctness and effectiveness of our method have been confirmed by experiments conducted in a real scenario. Furthermore, this approach has the advantage of being computationally inexpensive and flexible enough to obtain, in real time, statistical measures on the number of people present in the bus, with the use of an Analytical Processing System (a separate process) that accesses the data stored in the database and extracts statistical data and knowledge about the bus passengers.
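Overhead depth-based counting of the kind described above can be sketched by thresholding a height map and counting sufficiently large connected blobs. The height and area thresholds, and the plain BFS flood fill, are illustrative assumptions; the abstract does not describe the actual pipeline at this level of detail.

```python
import numpy as np
from collections import deque

def count_people(height_map, min_height=1.2, min_area=50):
    """Count head/shoulder blobs in an overhead height map (metres above
    the floor): threshold, then flood-fill 4-connected components and
    keep those large enough to be a person."""
    mask = height_map > min_height
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                area, q = 0, deque([(r, c)])   # BFS over one blob
                seen[r, c] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if area >= min_area:
                    count += 1
    return count
```

The area filter discards small above-threshold regions (bags, noise); a real implementation would use an optimised connected-components routine and add line-crossing logic to tell boarding from alighting.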