Marie Wehenkel
University of Liège
Publications
Featured research published by Marie Wehenkel.
international workshop on pattern recognition in neuroimaging | 2017
Marie Wehenkel; Christine Bastin; Christophe Phillips; Pierre Geurts
For several years, machine learning approaches have been increasingly investigated in the neuroimaging field to support the diagnosis of dementia. To this end, this work proposes a new pattern recognition technique based on brain parcelling, group selection and tree ensemble algorithms. In addition to prediction performance competitive with more traditional approaches, the method provides readily interpretable information about the brain regions involved in the prognosis of Alzheimer's disease.
Frontiers in Neuroscience | 2018
Marie Wehenkel; Antonio Sutera; Christine Bastin; Pierre Geurts; Christophe Phillips
Machine learning approaches have been increasingly used in the neuroimaging field for the design of computer-aided diagnosis systems. In this paper, we focus on the ability of these methods to provide interpretable information about the brain regions that are the most informative about the disease or condition of interest. In particular, we investigate the benefit of group-based, instead of voxel-based, analyses in the context of Random Forests. Assuming a prior division of the voxels into non-overlapping groups (defined by an atlas), we propose several procedures to derive group importances from the individual voxel importances produced by Random Forest models. We then adapt several permutation schemes to turn group importance scores into more interpretable statistical scores that allow determining the truly relevant groups in the importance rankings. The good behaviour of these methods is first assessed on artificial datasets. Then, they are applied to our own dataset of FDG-PET scans to identify the brain regions involved in the prognosis of Alzheimer's disease.
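The group-importance idea described above can be illustrated with a minimal sketch. This is not the authors' code: the toy data, the atlas array, the sum-aggregation rule and the label-permutation scheme are all illustrative assumptions standing in for one of the several procedures the paper proposes.

```python
# Illustrative sketch (not the paper's implementation): derive group
# importances from per-voxel Random Forest importances using an atlas,
# then attach a permutation p-value to each group.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy data: 200 "scans" with 30 "voxels"; only voxels 0-9 carry signal.
n_samples, n_voxels = 200, 30
X = rng.normal(size=(n_samples, n_voxels))
y = (X[:, :10].sum(axis=1) > 0).astype(int)

# Atlas: maps each voxel to one of 3 non-overlapping groups of 10 voxels.
atlas = np.repeat([0, 1, 2], 10)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def group_importance(voxel_imp, atlas):
    """Aggregate per-voxel importances within each atlas group (here: sum)."""
    return {g: voxel_imp[atlas == g].sum() for g in np.unique(atlas)}

observed = group_importance(forest.feature_importances_, atlas)

# Simple permutation scheme: refit on label-permuted data to build a null
# distribution of group importances, then compute a p-value per group.
n_perm = 20  # small for illustration; use many more in practice
null = {g: [] for g in observed}
for _ in range(n_perm):
    y_perm = rng.permutation(y)
    f = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y_perm)
    for g, v in group_importance(f.feature_importances_, atlas).items():
        null[g].append(v)

p_values = {g: (np.sum(np.array(null[g]) >= observed[g]) + 1) / (n_perm + 1)
            for g in observed}
print(observed)
print(p_values)
```

On this toy data, the signal-carrying group dominates the importance ranking and is the only group with a small p-value, which is the kind of statistically grounded ranking the paper aims for.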
Alzheimer's & Dementia | 2018
Eric Salmon; François Meyer; Marie Wehenkel; Christophe Phillips; Pierre Geurts; Christine Bastin
TDP-43), ii) covarying with CSF biomarkers (Aβ42, total tau, ptau) and iii) covarying with episodic memory scores (FCSRT, Landscape Test and CERAD Constructional Praxis recall). Results: Amyloid/Tau pathology affected mainly posterior HC, while anterior left HC was more atrophied in TDP-43 proteinopathies. We also observed a significant correlation between posterior hippocampal atrophy and AD CSF biomarker levels. In addition, visual memory scores correlated with posterior HC atrophy, whereas verbal memory correlated with both anterior and posterior hippocampal atrophy. Conclusions: These findings fit well with the hypothesis that the HC is involved in two different cortical systems that harbor different cognitive functions, which could have distinct vulnerability to different proteinopathies. Taken together, these data suggest that there is a potential differentiation along the hippocampal longitudinal axis based on the underlying pathology, which could be used as a potential biomarker to identify the underlying pathology in different neurodegenerative diseases.
Archive | 2017
Marie Wehenkel; Christine Bastin; Christophe Phillips; Pierre Geurts
Introduction: Over the last decade, a large number of computer-aided diagnosis (CAD) systems have been developed by researchers in neuroimaging to study neurodegenerative diseases and other kinds of brain disorders [1,2,3]. Briefly, machine-learning (ML) techniques help doctors to distinguish groups of people (e.g. healthy vs. diseased) by automatically identifying characteristics in the images that discriminate the groups. The challenge in the modelling of CAD systems is not only to perform well in terms of prediction but also to provide relevant information about the diagnosis, such as regions of interest in the brain that are affected by the disease.
Archive | 2018
Marie Wehenkel
Archive | 2017
Marie Wehenkel; Christine Bastin; Pierre Geurts; Christophe Phillips
Archive | 2016
Marie Wehenkel; Pierre Geurts; Christophe Phillips
Archive | 2016
Marie Wehenkel
Archive | 2015
Marie Wehenkel; Pierre Geurts; Christophe Phillips
Archive | 2015
Marie Wehenkel