Javier Andreu
Lancaster University
Publication
Featured research published by Javier Andreu.
ieee international conference on fuzzy systems | 2010
Javier Andreu; Plamen Angelov
A new approach to real-time knowledge extraction from streaming data generated by wearable wireless accelerometers, based on a self-learning evolving fuzzy rule-based classifier, is proposed and evaluated in this paper. In experiments with real subjects we collected data from 18 different classified activities. After pre-processing and classifying the data according to the temporal sequence of activities, we achieved an accuracy of up to 99.81% in recognizing a sequence of activities. This technique allows the system to be re-trained while the application is running on the wearable intelligent/smart sensor, improving the classification rate over time without increasing the processing delay.
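The retrain-while-running idea in this abstract can be illustrated with a minimal sketch. This is an illustration only, not the eClass algorithm from the paper: a toy online nearest-mean classifier whose per-class means are updated recursively with each labelled sample, so memory stays constant and the model keeps improving while the application runs.

```python
import numpy as np

class OnlineMeanClassifier:
    """Toy stand-in for an evolving classifier: keeps a running mean
    per activity class and updates it with every labelled sample,
    so the model improves while the application is running."""

    def __init__(self):
        self.means = {}   # class label -> running mean vector
        self.counts = {}  # class label -> number of samples seen

    def predict(self, x):
        if not self.means:
            return None
        # nearest class mean in Euclidean distance
        return min(self.means, key=lambda c: np.linalg.norm(x - self.means[c]))

    def update(self, x, label):
        # recursive mean update: O(1) memory per class, no stored history
        n = self.counts.get(label, 0) + 1
        mu = self.means.get(label, np.zeros_like(x, dtype=float))
        self.means[label] = mu + (x - mu) / n
        self.counts[label] = n
```

In use, each streamed sample is first classified and then fed back as a labelled update, mirroring the predict-then-adapt loop the abstract describes.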
ambient intelligence | 2013
Javier Andreu; Plamen Angelov
This paper presents a novel approach to human activity recognition (HAR) based on complex data provided by wearable sensors. The approach aims at a more realistic system that takes into account the diversity of the population, defining a general HAR model for any type of individual. To achieve this much-needed processing capacity, it makes use of the customizable, self-adaptive and self-developing capabilities of the machine learning technique known as evolving intelligent systems. An online pre-processing model suited to real-time operation has been developed and is explained in detail in this paper. Additionally, the paper provides valuable information on sensor analysis, online feature extraction, and the evolving classifiers used for this purpose.
systems, man and cybernetics | 2011
Rashmi Dutta Baruah; Plamen Angelov; Javier Andreu
This paper presents the sequel of the evolving fuzzy rule-based classifier eClass, referred to here as the simplified evolving classifier, simpl_eClass. Like eClass, simpl_eClass comprises two different classifiers, zero order and first order (simpl_eClass0 and simpl_eClass1), which differ in the consequent part of the fuzzy rules and in the classification strategy used. The design of simpl_eClass is based on the density increment principle recently introduced in the simpl_eTS+ approach. Rule learning in simpl_eClass does not involve the computation of potential values, which makes its model update phase computationally much less expensive than that of eClass. Compared with other FRB classifiers, it retains all the advantages of eClass, such as being online and evolving and offering both zero- and first-order variants; compared with non-fuzzy classifiers, it has the advantage of interpretability and transparency (especially the zero-order type). The goals of this paper are to demonstrate the applicability of simpl_eTS+ to the classification task and to show empirically that simplifying eClass to simpl_eClass via the potential-free approach does not compromise accuracy. To this end, the classifiers are tested in several experiments on benchmark data sets. The simpl_eClass1 classifier is also applied to the real-life problem of online scene categorization for low-resource devices, benefiting from its low computational cost. The experimental results confirm that simpl_eClass achieves the accuracy of eClass while simplifying the rule learning process.
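The density-based, potential-free learning mentioned in the abstract can be sketched with the widely cited recursive density estimator used in Angelov's evolving systems. This is a sketch under the assumption that the paper uses the standard form D = 1 / (1 + ||z - mu||^2 + X - ||mu||^2); the exact update rules in simpl_eClass may differ.

```python
import numpy as np

class RecursiveDensity:
    """Recursive density estimator in the spirit of simpl_eTS+:
    the density of the current sample relative to all samples seen
    so far is computed from two running statistics only, so no
    history is stored and each update is O(1)."""

    def __init__(self, dim):
        self.k = 0
        self.mu = np.zeros(dim)  # running mean of the samples
        self.X = 0.0             # running mean of squared norms

    def update(self, z):
        # incremental updates of the two sufficient statistics
        self.k += 1
        self.mu += (z - self.mu) / self.k
        self.X += (z @ z - self.X) / self.k
        # density: ||z - mu||^2 + X - ||mu||^2 simplifies to
        # z.z - 2 z.mu + X, avoiding an explicit ||mu||^2 term
        return 1.0 / (1.0 + z @ z - 2 * (z @ self.mu) + self.X)
```

The first sample always has density 1, and samples far from the bulk of the data receive low density, which is what drives rule creation in this family of classifiers.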
ieee international conference on fuzzy systems | 2011
Javier Andreu; Rashmi Dutta Baruah; Plamen Angelov
A new approach to real-time human activity recognition (HAR) using the evolving self-learning fuzzy rule-based classifier (eClass) is described in this paper. Recursive versions of the principal component analysis (PCA) and linear discriminant analysis (LDA) pre-processing methods are coupled with eClass, leading to a new approach to HAR that requires neither computationally expensive, time-consuming pre-training nor data from many subjects. The proposed method for evolving HAR (eHAR) takes into account the specifics of each user and the possible evolution of her/his habits over time. The experimental part of the paper uses data streams from several wearable devices, which make it possible to develop a pervasive intelligence that personalizes/tunes itself to the specific user.
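A recursive PCA front-end of the kind the abstract refers to can be sketched as follows. This is an assumption about the general approach, not the paper's exact algorithm: the mean and covariance are updated per sample with Welford-style recursions, and the projection basis is re-derived on demand, so no raw data from the stream needs to be stored.

```python
import numpy as np

class RecursivePCA:
    """Minimal sketch of recursive PCA for streaming data: running
    mean and covariance are maintained incrementally; principal axes
    are the top eigenvectors of the running covariance."""

    def __init__(self, dim, n_components):
        self.k = 0
        self.mu = np.zeros(dim)
        self.cov = np.zeros((dim, dim))
        self.n_components = n_components

    def update(self, x):
        # Welford-style recursive updates of mean and covariance
        self.k += 1
        delta = x - self.mu
        self.mu += delta / self.k
        self.cov += (np.outer(delta, x - self.mu) - self.cov) / self.k

    def transform(self, x):
        # project onto the leading eigenvectors of the covariance
        vals, vecs = np.linalg.eigh(self.cov)
        order = np.argsort(vals)[::-1][:self.n_components]
        return (x - self.mu) @ vecs[:, order]
```

A recursive LDA stage would follow the same pattern, additionally maintaining per-class means to form between- and within-class scatter incrementally.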
ambient intelligence | 2013
Javier Andreu; Plamen Angelov
Ubiquitous computing presents a new paradigm of computation in which computers are embedded into the everyday environment. The vanishing of computers into the environment is achieved through sophisticated, miniaturized devices such as wearable sensors, unobtrusive sensor networks and computer vision technologies. Thanks to the low cost, small size and high computational power of these devices, pervasive sensors are now able to interact autonomously with humans as part of their day-to-day lives. One of the most fundamental areas of research built on top of ubiquitous applications is the making of human-centric intelligent spaces. Unlike other intelligent spaces, these focus on creating computing that is context-sensitive with respect to humans: the embedded intelligence provides inference mechanisms regarding humans' conditions, feelings, actions or activities. This inference is key to improving the ubiquitous interaction between humans and the electronic devices that surround them; hence, the overall success rate of these systems depends strongly on inference from the context. In other words, ubiquitous devices should be context-aware. This special issue focuses on one of the most important inference or context-aware tasks in ubiquitous applications (among others such as location, natural language processing and emotional computing): Human Activity Recognition (HAR). Within human activity awareness it is possible to distinguish different levels of recognition according to the complexity of the activities tackled: (1) gestures; (2) individual actions; (3) human-object interactions; (4) human-human interactions; (5) complex or composite activities. HAR has become a very prominent area of research, especially in fields such as medicine, security and the military, and advances in HAR within computing date back to the late 1990s.
Nevertheless, many open issues remain. From the hardware perspective, research is still intensive on improving portability, price, network communication, processing capacity and energy efficiency. Hardware devices can be classified as external (such as those in intelligent homes) or wearable. External devices have the capacity to recognize long, complex activities that comprise several actions, such as "preparing dinner", "cutting the grass" or "playing console games". Recognizing these activities depends heavily on knowledge extraction from several sensors (location sensors, tags, cameras, microphones, etc.); environments equipped with many such sensors are known as intelligent homes or smart homes. A problem with this type of device arises when users do not interact with the sensors (for example, because they are out of range). Moreover, some sensors, such as video cameras, entail further issues of privacy and scalability. From the software perspective, many challenges also remain: (1) the design of optimal feature extraction; (2) real-time and scalable inference systems; (3) computationally light, embeddable algorithms; (4) support for new users without the need for re-training. This special issue presents state-of-the-art advances from four European research centres specializing in computing. Three of the four pieces of research focus on improving the intelligent system.
ieee international conference on fuzzy systems | 2011
Javier Andreu; Rashmi Dutta Baruah; Plamen Angelov
In this paper an original approach is proposed that makes possible autonomous scene recognition performed online by an evolving self-learning classifier. Existing approaches to scene recognition are offline and are used in intelligent albums for picture categorization/selection. The emergence of powerful mobile platforms with on-board cameras and of sensor-based autonomous (robotic) systems is pushing forward the requirement for efficient self-learning and adaptive/evolving algorithms. Fast, real-time, online algorithms for categorizing the real-world environment from a live video stream are essential for understanding and situation awareness, as well as for localization and context awareness. In scene analysis the critical problem is the feature extraction mechanism for a quick description of the scene; in this paper we apply the well-known spatial envelope, or GIST, technique. Visual scenes can be quite different, but very often they can be grouped into similar types/categories. For example, pictures from different cities across the globe, e.g. Tokyo, Vancouver, New York, Moscow, Dusseldorf, etc., exhibit the similar pattern of an urban scene (high-rise buildings) despite differences in architectural style; the same applies to the beaches of Miami, the Maldives, Varna, Costa del Sol, etc. One assumption on which such automatic video classifiers can be built is to pre-train them using a large number of images from different groups, but the variety of possible scenes suggests the limitations of such an approach. We therefore use the recently proposed evolving fuzzy rule-based classifier, simpl_eClass, which is self-learning and thus updates its rules and category descriptions with each new image. In addition, it is fully recursive, computationally efficient and yet linguistically transparent.
ieee international conference on fuzzy systems | 2010
Javier Andreu; Plamen Angelov
In this paper we present the results and the algorithm used to predict a 14-day horizon for a number of time series from the NN GC1 transportation datasets [1]. Our approach applies the well-known Evolving Takagi-Sugeno (eTS) fuzzy systems [2-6] to self-learn from the time series. eTS systems are characterized by the fact that they self-learn and evolve, from the data stream, the fuzzy rule-based system that represents their structure, online and in real-time mode. This means we used every data sample from the time series only once: at any instant in time we used only a single input vector (which consists of a few data samples, as described below), and we do not iterate over or memorize the whole sequence. It should be emphasized that this is a huge practical advantage which, unfortunately, cannot be compared directly with the other NN GC1 competitors if only precision/error is taken as the criterion. It would also be worthwhile for computation time, memory usage, iterations and computational complexity to be reported and compared, to build a fuller picture of the advantages the proposed technique offers. Nevertheless, we offer a computationally light, easy-to-use approach that does not require any user- or problem-specific thresholds or parameters to be specified. Additionally, the approach is flexible not only in its structure (fuzzy rule base with automatic self-development), but also in its automatic input selection, as described below.
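The one-pass, single-use-of-each-sample workflow described here can be sketched in miniature. Under the assumption that a full eTS system also evolves multiple fuzzy rules, the sketch below shows only the single-rule case: a lagged input vector is formed from the stream, a prediction is made, and the linear consequent is then updated by recursive least squares, so each sample is seen exactly once.

```python
import numpy as np

def one_pass_forecast(series, lag=3):
    """One-pass, single-rule sketch of the eTS-style workflow:
    predict the next value from the last `lag` values, then update
    the linear model by recursive least squares (RLS).  No sample
    is stored or revisited."""
    theta = np.zeros(lag + 1)       # linear model incl. bias term
    P = np.eye(lag + 1) * 1000.0    # RLS covariance, large initial value
    preds = []
    for t in range(lag, len(series)):
        x = np.append(1.0, series[t - lag:t])  # bias + lagged inputs
        preds.append(x @ theta)                # predict before updating
        # recursive least-squares update with the true value series[t]
        Px = P @ x
        gain = Px / (1.0 + x @ Px)
        theta += gain * (series[t] - x @ theta)
        P -= np.outer(gain, Px)
    return np.array(preds), theta
```

On a noiseless linear trend the predictor locks on after a handful of samples, illustrating why the one-pass scheme is so cheap in both memory and computation.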
2012 IEEE Conference on Evolving and Adaptive Intelligent Systems | 2012
Pouria Sadeghi-Tehran; Sasmita Behera; Plamen Angelov; Javier Andreu
In this paper, a novel approach to visual self-localization in an unknown environment is presented. The proposed method makes possible the recognition of new landmarks without using GPS, any other communication links, or pre-training. An image-based self-localization technique is used to automatically label landmarks that are detected in real time by a computationally efficient and recursive algorithm. Real-time experiments were carried out in an outdoor environment at Lancaster University using a real Pioneer 3DX mobile robot in order to build a map of the local environment without using any communication links. The experimental results in real situations show the effectiveness of the proposed method.
2012 IEEE Conference on Evolving and Adaptive Intelligent Systems | 2012
Plamen Angelov; Javier Andreu; Tu Vuong
In this paper a novel application for smart phones is introduced which makes use of the recently introduced autonomous approach to visual landmark identification from video input. The problem of recognizing new or pre-defined features in a video stream surfaced a long time ago. Many workable solutions have been proposed and implemented, but few can run on an embedded mobile device with sustainable quality: they are either too slow or too computationally costly. Embedded devices such as smart mobile phones have a limited, non-scalable amount of resources, so processing power, memory and energy consumption are important considerations when coding a mobile application. This paper describes a novel recursive (and thus fast) method of automatic landmark recognition applied to such smart mobile devices. The application uses the built-in camera to retrieve information about the external context; this information is used as input to an evolving algorithm that is embedded in, and runs on, the mobile device.
Journal of Automation, Mobile Robotics and Intelligent Systems | 2011
Pouria Sadeghi-Tehran; Javier Andreu; Plamen Angelov; Xiaowei Zhou