
Publication


Featured research published by Djallel Bouneffouf.


international conference on neural information processing | 2012

A contextual-bandit algorithm for mobile context-aware recommender system

Djallel Bouneffouf; Amel Bouzeghoub; Alda Lopes Gançarski

Most existing approaches in Mobile Context-Aware Recommender Systems focus on recommending relevant items to users, taking into account contextual information such as time, location, or social aspects. However, none of them has considered the problem of the evolution of the user's content. We introduce in this paper an algorithm that tackles this dynamicity. It is based on dynamic exploration/exploitation and can adaptively balance the two aspects by deciding which user's situation is most relevant for exploration or exploitation. Within a deliberately designed offline simulation framework, we conduct evaluations with real online event log data. The experimental results demonstrate that our algorithm outperforms the surveyed algorithms.
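The abstract describes the algorithm only at a high level. As an illustration (not the paper's implementation), a minimal contextual ε-greedy loop whose exploration rate adapts to how critical the current situation is could look as follows; the class name, the relevance-based adaptation rule, and all parameters are assumptions made for this sketch.

# Minimal sketch (not the paper's implementation) of a contextual epsilon-greedy
# loop in which the exploration rate is adapted per user situation. All names
# and the relevance-based adaptation rule are illustrative assumptions.
import random
from collections import defaultdict

class DynamicEpsilonGreedy:
    def __init__(self, n_items, base_epsilon=0.1):
        self.n_items = n_items
        self.base_epsilon = base_epsilon
        self.counts = defaultdict(lambda: [0] * n_items)     # pulls per situation
        self.rewards = defaultdict(lambda: [0.0] * n_items)  # summed rewards

    def _epsilon(self, situation_relevance):
        # Explore more when the current situation is judged less critical
        # (relevance close to 0), exploit more when it is highly relevant.
        return self.base_epsilon * (1.0 - situation_relevance)

    def recommend(self, situation, situation_relevance):
        if random.random() < self._epsilon(situation_relevance):
            return random.randrange(self.n_items)             # exploration
        means = [r / c if c else 0.0
                 for r, c in zip(self.rewards[situation], self.counts[situation])]
        return max(range(self.n_items), key=means.__getitem__)  # exploitation

    def update(self, situation, item, reward):
        self.counts[situation][item] += 1
        self.rewards[situation][item] += reward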


knowledge discovery and data mining | 2012

Hybrid-ε-greedy for mobile context-aware recommender system

Djallel Bouneffouf; Amel Bouzeghoub; Alda Lopes Gançarski

The wide development of mobile applications provides a considerable amount of data of all types. In this sense, Mobile Context-aware Recommender Systems (MCRS) suggest suitable information to the user depending on her/his situation and interests. Our work consists in applying machine learning techniques and a reasoning process in order to dynamically adapt the MCRS to the evolution of the user's interests. To achieve this goal, we propose to combine a bandit algorithm and case-based reasoning in order to define a contextual recommendation process based on different context dimensions (social, temporal, and location). This paper describes our ongoing work on the implementation of an MCRS based on a hybrid-ε-greedy algorithm. It also presents preliminary results comparing the hybrid-ε-greedy and the standard ε-greedy algorithms.
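To illustrate the combination the abstract mentions, here is a minimal sketch, assuming case-based reasoning retrieves the stored situation most similar to the current context (plain cosine similarity here) and ε-greedy then picks an item using the statistics attached to the retrieved case; the data layout and function names are hypothetical, not taken from the paper.

# Illustrative sketch, not the paper's code: CBR retrieves the most similar
# stored situation, then epsilon-greedy selects an item from that case.
import math
import random

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve_case(current_context, case_base):
    # case_base: list of (context_vector, item_stats) pairs,
    # where item_stats maps item -> observed mean reward
    return max(case_base, key=lambda case: cosine(current_context, case[0]))

def hybrid_epsilon_greedy(current_context, case_base, epsilon=0.1):
    _, item_stats = retrieve_case(current_context, case_base)  # CBR step
    if random.random() < epsilon:
        return random.choice(list(item_stats))                 # explore
    # exploit: item with the best observed mean reward in the retrieved case
    return max(item_stats, key=lambda item: item_stats[item])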


advanced information networking and applications | 2012

Following the User's Interests in Mobile Context-Aware Recommender Systems: The Hybrid-ε-greedy Algorithm

Djallel Bouneffouf; Amel Bouzeghoub; Alda Lopes Gançarski

The wide development of mobile applications provides a considerable amount of data of all types (images, texts, sounds, videos, etc.). In this sense, Mobile Context-aware Recommender Systems (MCRS) suggest suitable information to the user depending on her/his situation and interests. Two key questions have to be considered: 1) how to recommend information that follows the evolution of the user's interests? 2) how to model the user's situation and its related interests? To the best of our knowledge, no existing work proposing an MCRS tries to answer both questions as we do. This paper describes ongoing work on the implementation of an MCRS based on the hybrid-ε-greedy algorithm we propose, which combines the standard ε-greedy algorithm with both content-based filtering and case-based reasoning techniques.
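As a hedged illustration of the content-based filtering component named above, the sketch below scores candidate documents against the user's profile with a simple bag-of-words cosine similarity; the real system may represent profiles and weight terms differently.

# Sketch of a content-based filtering step (assumed bag-of-words cosine score
# between the user's profile text and each candidate document).
from collections import Counter
import math

def bow_cosine(text_a, text_b):
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def content_based_rank(user_profile_text, candidate_docs):
    # candidate_docs: {doc_id: doc_text}; returns ids sorted by similarity
    return sorted(candidate_docs,
                  key=lambda d: bow_cosine(user_profile_text, candidate_docs[d]),
                  reverse=True)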


international conference on neural information processing | 2014

Contextual Bandit for Active Learning: Active Thompson Sampling

Djallel Bouneffouf; Romain Laroche; Tanguy Urvoy; Raphaël Féraud; Robin Allesiardo

The labelling of training examples is a costly task in supervised classification. Active learning strategies answer this problem by selecting the most useful unlabelled examples to train a predictive model. The choice of examples to label can be seen as a dilemma between exploration and exploitation over the data space representation. In this paper, a novel active learning strategy manages this compromise by modelling the active learning problem as a contextual bandit problem. We propose a sequential algorithm named Active Thompson Sampling (ATS) which, in each round, assigns a sampling distribution on the pool, samples one point from this distribution, and queries the oracle for this sample point's label. Experimental comparison to previously proposed active learning algorithms shows superior performance on a real application dataset.
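A minimal sketch of one ATS round as the abstract outlines it follows; drawing the per-point utility from a Gaussian centred on a model-uncertainty score is an assumption made for illustration, not the paper's exact posterior.

# One round of the assign-distribution / sample / query loop described above.
# The Gaussian utility draw is an illustrative assumption.
import random

def active_thompson_sampling_round(pool, uncertainty, oracle, rng=random.Random(0)):
    # pool: list of unlabelled examples; uncertainty: {example: float}
    sampled = {x: max(rng.gauss(uncertainty[x], 0.1), 1e-9) for x in pool}
    total = sum(sampled.values())
    weights = [sampled[x] / total for x in pool]   # sampling distribution on the pool
    chosen = rng.choices(pool, weights=weights, k=1)[0]
    return chosen, oracle(chosen)                  # query the oracle for its label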


international conference on neural information processing | 2014

A Neural Networks Committee for the Contextual Bandit Problem

Robin Allesiardo; Raphaël Féraud; Djallel Bouneffouf

This paper presents a new contextual bandit algorithm, NeuralBandit, which does not require any stationarity assumption on contexts and rewards. Several neural networks are trained to model the value of rewards given the context. Two variants, based on a multi-expert approach, are proposed to choose online the parameters of the multi-layer perceptrons. The proposed algorithms are successfully tested on a large dataset with and without stationarity of rewards.
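As a rough illustration of this idea, the sketch below trains one small neural network per arm to predict the expected reward from the context and plays the arm with the highest prediction (plus a little ε exploration); scikit-learn's MLPRegressor stands in for the paper's multi-layer perceptrons, and all hyperparameters are placeholders.

# Illustrative per-arm reward model, not the paper's exact NeuralBandit.
import random
import numpy as np
from sklearn.neural_network import MLPRegressor

class NeuralBanditSketch:
    def __init__(self, n_arms, context_dim, epsilon=0.05):
        self.epsilon = epsilon
        self.models = [MLPRegressor(hidden_layer_sizes=(16,)) for _ in range(n_arms)]
        # warm-start each model so partial_fit can be used online
        zeros = np.zeros((1, context_dim))
        for m in self.models:
            m.partial_fit(zeros, [0.0])

    def select_arm(self, context):
        # context: 1-D numpy array of length context_dim
        if random.random() < self.epsilon:
            return random.randrange(len(self.models))
        preds = [m.predict(context.reshape(1, -1))[0] for m in self.models]
        return int(np.argmax(preds))

    def update(self, arm, context, reward):
        self.models[arm].partial_fit(context.reshape(1, -1), [reward])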


international conference on neural information processing | 2013

Risk-Aware Recommender Systems

Djallel Bouneffouf; Amel Bouzeghoub; Alda Lopes Gançarski

Context-Aware Recommender Systems can naturally be modelled as an exploration/exploitation trade-off (exr/exp) problem, where the system has to choose between maximizing its expected rewards using its current knowledge (exploitation) and learning more about the unknown user's preferences to improve its knowledge (exploration). This problem has been addressed by the reinforcement learning community, but without considering the risk level of the current user's situation, where it may be dangerous to recommend items the user may not desire if the risk level is high. We introduce in this paper an algorithm named R-UCB that considers the risk level of the user's situation to adaptively balance between exr and exp. The detailed analysis of the experimental results reveals several important discoveries in the exr/exp behaviour.
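A minimal sketch of the risk-aware idea, assuming R-UCB shrinks the usual UCB exploration bonus as the situation's risk level grows; the exact scaling rule in the paper may differ.

# Risk-modulated UCB selection: high risk => less exploration (illustrative).
import math

def r_ucb_select(counts, mean_rewards, total_pulls, risk_level):
    # counts[i]: times item i was recommended; mean_rewards[i]: observed mean
    # risk_level in [0, 1]: 1 = critical situation, 0 = safe to explore
    best, best_score = None, -float("inf")
    for i, n in enumerate(counts):
        if n == 0:
            bonus = float("inf") if risk_level < 1.0 else 0.0
        else:
            bonus = (1.0 - risk_level) * math.sqrt(
                2.0 * math.log(max(total_pulls, 1)) / n)
        score = mean_rewards[i] + bonus
        if score > best_score:
            best, best_score = i, score
    return best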


australasian joint conference on artificial intelligence | 2012

Exploration / exploitation trade-off in mobile context-aware recommender systems

Djallel Bouneffouf; Amel Bouzeghoub; Alda Lopes Gançarski

The contextual bandit problem has been studied in the recommender system community, but without paying much attention to the contextual aspect of the recommendation. We introduce in this paper an algorithm that tackles this problem by modeling the Mobile Context-Aware Recommender System (MCRS) as a contextual bandit problem, relying on dynamic exploration/exploitation. Within a deliberately designed offline simulation framework, we conduct extensive evaluations with real online event log data. The experimental results and detailed analysis demonstrate that our algorithm outperforms the surveyed algorithms.


international conference on neural information processing | 2013

Contextual Bandits for Context-Based Information Retrieval

Djallel Bouneffouf; Amel Bouzeghoub; Alda Lopes Gançarski

Recently, researchers have started to model interactions between users and search engines as an online learning-to-rank problem. Such systems obtain feedback only on the few top-ranked documents. To obtain feedback on other documents, the system has to explore the non-top-ranked documents that could lead to a better solution. However, the system also needs to ensure that the quality of the result lists remains high by exploiting what is already known. Clearly, this results in an exploration/exploitation dilemma. We introduce in this paper an algorithm that tackles this dilemma in the Context-Based Information Retrieval (CBIR) area. It is based on dynamic exploration/exploitation and can adaptively balance the two aspects by deciding which user's situation is most relevant for exploration or exploitation. Within a deliberately designed online framework, we conduct evaluations with mobile users. The experimental results demonstrate that our algorithm outperforms the surveyed algorithms.
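To make the dilemma concrete in a ranking setting, here is an illustrative sketch (not the paper's algorithm) in which, with small probability, one top-k slot is given to a non-top-ranked document so that feedback can be collected on it.

# Epsilon-greedy exploration over a ranked result list (illustrative only).
import random

def explore_exploit_ranking(ranked_docs, k=10, epsilon=0.1, rng=random.Random(0)):
    top, rest = ranked_docs[:k], ranked_docs[k:]
    if rest and rng.random() < epsilon:
        slot = rng.randrange(k)            # slot to explore in
        candidate = rng.choice(rest)       # a non-top-ranked document
        top = top[:slot] + [candidate] + top[slot:k - 1]
    return top                             # otherwise the exploitation ranking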


international conference on neural information processing | 2014

Freshness-Aware Thompson Sampling

Djallel Bouneffouf

To follow the dynamicity of the user's content, researchers have recently started to model interactions between users and Context-Aware Recommender Systems (CARS) as a bandit problem, where the system needs to deal with the exploration/exploitation dilemma. In this sense, we propose to study the freshness of the user's content in CARS through the bandit problem. We introduce in this paper an algorithm named Freshness-Aware Thompson Sampling (FA-TS) that manages the recommendation of fresh documents according to the risk of the user's situation. The intensive evaluation and detailed analysis of the experimental results reveal several important discoveries in the exploration/exploitation (exr/exp) behaviour.
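A hedged sketch of the freshness-aware selection step, assuming a Beta posterior per document and a freshness bonus that is weighted down when the situation is risky; the bonus form and weights are illustrative only.

# Thompson sampling with an assumed freshness bonus (not the paper's exact FA-TS).
import random

def fa_ts_select(docs, successes, failures, freshness, risk_level,
                 rng=random.Random(0)):
    # freshness[d] in [0, 1]: 1 = newly published document
    # risk_level in [0, 1]: 1 = critical situation, little room for exploration
    def score(d):
        sample = rng.betavariate(successes[d] + 1, failures[d] + 1)
        return sample + (1.0 - risk_level) * 0.1 * freshness[d]
    return max(docs, key=score)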


international joint conference on artificial intelligence | 2017

Context Attentive Bandits: Contextual Bandit with Restricted Context

Djallel Bouneffouf; Irina Rish; Guillermo A. Cecchi; Raphaël Féraud

We consider a novel formulation of the multi-armed bandit model, which we call the contextual bandit with restricted context, where only a limited number of features can be accessed by the learner at every iteration. This novel formulation is motivated by different online problems arising in clinical trials, recommender systems, and attention modeling. Herein, we adapt the standard multi-armed bandit algorithm known as Thompson Sampling to take advantage of our restricted context setting, and propose two novel algorithms, called Thompson Sampling with Restricted Context (TSRC) and Windows Thompson Sampling with Restricted Context (WTSRC), for handling stationary and nonstationary environments, respectively. Our empirical results demonstrate the advantages of the proposed approaches on several real-life datasets.
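As an illustration of the restricted-context setting (not the exact TSRC update), the sketch below keeps a Beta posterior per feature and, at each round, samples from these posteriors to decide which k features the learner will observe; the hit/miss bookkeeping is a simplifying assumption.

# Sampling which k features to observe this round (illustrative sketch).
import random

def choose_restricted_features(feature_hits, feature_misses, k,
                               rng=random.Random(0)):
    # feature_hits/misses: per-feature counts of rounds where observing the
    # feature did / did not coincide with a reward (a simplifying assumption)
    samples = {f: rng.betavariate(feature_hits[f] + 1, feature_misses[f] + 1)
               for f in feature_hits}
    return sorted(samples, key=samples.get, reverse=True)[:k]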
