Publication


Featured research published by Victor Alves.


IEEE Transactions on Medical Imaging | 2016

Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images

Sérgio Pereira; Adriano Pinto; Victor Alves; Carlos A. Silva

Among brain tumors, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. Thus, treatment planning is a key stage to improve the quality of life of oncological patients. Magnetic resonance imaging (MRI) is a widely used imaging technique to assess these tumors, but the large amount of data produced by MRI prevents manual segmentation in a reasonable time, limiting the use of precise quantitative measurements in clinical practice. Automatic and reliable segmentation methods are therefore required; however, the large spatial and structural variability among brain tumors makes automatic segmentation a challenging problem. In this paper, we propose an automatic segmentation method based on Convolutional Neural Networks (CNN), exploring small 3×3 kernels. The use of small kernels allows designing a deeper architecture, besides having a positive effect against overfitting, given the smaller number of weights in the network. We also investigated the use of intensity normalization as a pre-processing step, which, though not common in CNN-based segmentation methods, proved together with data augmentation to be very effective for brain tumor segmentation in MRI images. Our proposal was validated on the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013), simultaneously obtaining the first position for the complete, core, and enhancing regions in the Dice Similarity Coefficient metric (0.88, 0.83, 0.77) for the Challenge data set, as well as the overall first position on the online evaluation platform. We also participated in the on-site BRATS 2015 Challenge with the same model, obtaining second place, with Dice Similarity Coefficient metrics of 0.78, 0.65, and 0.75 for the complete, core, and enhancing regions, respectively.
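The abstract's argument for small kernels can be made concrete with a parameter count: two stacked 3×3 convolutions cover the same 5×5 receptive field as a single 5×5 convolution, but with fewer weights and an extra non-linearity. A minimal sketch (the channel count here is illustrative, not a value from the paper):

```python
def conv_params(k, c_in, c_out, bias=True):
    """Number of trainable weights in one 2-D convolutional layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

# One 5x5 layer vs. two stacked 3x3 layers over the same receptive field.
c = 64  # illustrative channel count
single_5x5 = conv_params(5, c, c)
stacked_3x3 = conv_params(3, c, c) + conv_params(3, c, c)
print(single_5x5, stacked_3x3)  # the stacked pair uses fewer weights
```

The saving grows with depth, which is why small kernels make deeper architectures practical under a fixed weight budget.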


Frontiers in Neuroscience | 2013

A hitchhiker's guide to diffusion tensor imaging

José Miguel Soares; Paulo Marques; Victor Alves; Nuno Sousa

Diffusion Tensor Imaging (DTI) studies are increasingly popular among clinicians and researchers as they provide unique insights into brain network connectivity. However, in order to optimize the use of DTI, several technical and methodological aspects must be factored in. These include decisions on the acquisition protocol, artifact handling, data quality control, reconstruction algorithm, visualization approaches, and quantitative analysis methodology. Furthermore, the researcher and/or clinician also needs to decide on the most suitable software tool(s) for each stage of the DTI analysis pipeline. Herein, we provide a straightforward hitchhiker's guide covering all of the workflow's major stages. Ultimately, this guide will help newcomers navigate the most critical roadblocks in the analysis and further encourage the use of DTI.


International Workshop on Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries | 2015

Deep Convolutional Neural Networks for the Segmentation of Gliomas in Multi-sequence MRI

Sérgio Pereira; Adriano Pinto; Victor Alves; Carlos A. Silva

In their most aggressive form, gliomas have a high mortality rate. Accurate segmentation is important for surgery and treatment planning, as well as for follow-up evaluation. In this paper, we propose to segment brain tumors using a Deep Convolutional Neural Network. Neural networks are known to suffer from overfitting; to address this, we use Dropout, Leaky Rectified Linear Units, and small convolutional kernels. To segment High Grade Gliomas and Low Grade Gliomas, we trained two different architectures, one for each grade. Using the proposed method, we obtained promising results on the 2015 Multimodal Brain Tumor Segmentation (BraTS) data set, as well as second place in the on-site challenge.
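The two regularizers named in the abstract have standard, generic definitions. The sketch below shows textbook leaky ReLU and inverted dropout in NumPy; the slope and dropout rate are illustrative defaults, not the paper's configuration:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Leaky ReLU: passes a small negative slope instead of zeroing
    negative inputs, which keeps gradients flowing."""
    return np.where(x > 0, x, alpha * x)

def dropout(x, p=0.5, rng=None, training=True):
    """Inverted dropout: randomly zero units during training and rescale
    by 1/(1-p) so expected activations match inference time."""
    if not training or p == 0.0:
        return x
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) >= p  # keep each unit with probability 1-p
    return x * mask / (1.0 - p)
```

At inference, dropout is simply the identity, so no rescaling pass is needed.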


Frontiers in Neuroscience | 2016

A Hitchhiker's guide to functional magnetic resonance imaging

José Miguel Soares; Ricardo José Silva Magalhães; Pedro Moreira; Alexandre Sousa; Edward Ganz; Adriana Sampaio; Victor Alves; Paulo Marques; Nuno Sousa

Functional Magnetic Resonance Imaging (fMRI) studies have become increasingly popular with both clinicians and researchers, as they are capable of providing unique insights into brain function. However, multiple technical considerations (ranging from the specifics of paradigm design to imaging artifacts, complex protocol definition, the multitude of processing and analysis methods, and intrinsic methodological limitations) must be considered and addressed in order to optimize fMRI analysis and to arrive at the most accurate and well-grounded interpretation of the data. In practice, the researcher/clinician must choose, from many available options, the most suitable software tool for each stage of the fMRI analysis pipeline. Herein we provide a straightforward guide designed to address, for each of the major stages, the techniques and tools involved in the process. We have developed this guide both to help those new to the technique overcome the most critical difficulties in its use and to serve as a resource for the neuroimaging community.


Progress in Artificial Intelligence | 2012

Evolutionary intelligence in asphalt pavement modeling and quality-of-information

José Neves; Jorge Ribeiro; Paulo A. A. Pereira; Victor Alves; José Machado; António Abelha; Paulo Novais; Cesar Analide; Manuel Filipe Santos; M. Fernández-Delgado

The analysis and development of a novel approach to asphalt pavement modeling, able to address the need to predict failure according to technical and non-technical criteria in a highway, is a hard task, namely in terms of the huge number of possible scenarios. Indeed, the current state of the art in service-life prediction is at the empiric and empiric-mechanistic levels, and does not provide a suitable answer even for a single failure criterion. Consequently, it is imperative to achieve qualified models and qualitative reasoning methods, in particular due to the need to have first-class environments at our disposal where defective information is at hand. To fulfill this goal, this paper presents a dynamic and formal model oriented to the task of making predictions under multi-failure criteria, in particular in scenarios with incomplete information; it is an intelligence tool that advances according to the quality-of-information of the extensions of the predicates that model the universe of discourse. On the other hand, the degree-of-confidence factor is also considered, a parameter that measures one's confidence in the list of characteristics presented by an asphalt pavement, set in terms of the attributes or variables that make up the arguments of the predicates referred to above.


Frontiers in Human Neuroscience | 2013

BrainCAT - a tool for automated and combined functional magnetic resonance imaging and diffusion tensor imaging brain connectivity analysis

Paulo Marques; José Miguel Soares; Victor Alves; Nuno Sousa

Multimodal neuroimaging studies have recently become a trend in the neuroimaging field and are certainly a standard for the future. Brain connectivity studies combining functional activation patterns from resting-state or task-related functional magnetic resonance imaging (fMRI) with diffusion tensor imaging (DTI) tractography are growing in popularity. However, there is a scarcity of solutions for performing optimized, intuitive, and consistent multimodal fMRI/DTI studies. Here we propose a new tool, the brain connectivity analysis tool (BrainCAT), for automated and standard multimodal analysis of combined fMRI/DTI data using freely available tools. With a friendly graphical user interface, BrainCAT aims to make data processing easier and faster, implementing a fully automated data processing pipeline and minimizing the need for user intervention, which will hopefully expand the use of combined fMRI/DTI studies. Its validity was tested in an aging study of default mode network (DMN) white matter connectivity. The results evidenced the cingulum bundle as the structural connector of the precuneus/posterior cingulate cortex and the medial frontal cortex, regions of the DMN. Moreover, mean fractional anisotropy (FA) values along the cingulum extracted with BrainCAT showed a strong correlation with FA values from manual selection of the same bundle. Taken together, these results provide evidence that BrainCAT is suitable for these analyses.


Distributed Computing and Artificial Intelligence | 2013

Web-Based Solution for Acquisition, Processing, Archiving and Diffusion of Endoscopy Studies

Isabel Laranjo; Joel Braga; Domingos Assunção; Andreia Silva; Carla Rolanda; Luís Lopes; Jorge Correia-Pinto; Victor Alves

In this paper we present a distributed solution for the acquisition, processing, archiving, and diffusion of endoscopic procedures. The goal is to provide a system capable of managing all administrative and clinical information (including audiovisual content), from the acquisition process to the search for previous exams for comparison with new cases. In this context, a device for the acquisition of endoscopic video (MIVbox) was designed, regardless of the endoscopic camera that is used. All the information is stored in a structured and standardized way, allowing its reuse and sharing. To facilitate this sharing process, the video undergoes several processing steps in order to obtain a summarized video and the respective content characteristics. The proposed solution uses an annotation system that enables content querying, thus becoming a versatile tool for research in this area. A streaming module that transmits the endoscopic video in real time is also provided.


Medical Image Analysis | 2018

Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

Sérgio Pereira; Raphael Meier; Richard McKinley; Roland Wiest; Victor Alves; Carlos A. Silva; Mauricio Reyes

Highlights
- We propose methodologies to enhance the interpretability of a machine learning system.
- The approach can yield two levels of interpretability (global and local), allowing us to assess how the system learned task-specific relations, as well as its individual predictions.
- Validation on brain tumor segmentation and penumbra estimation in acute stroke.
- Based on the evaluated clinical scenarios, the proposed approach allows us to confirm that the machine learning system learns relations coherent with expert knowledge and annotation protocols.

Abstract: Machine learning systems are achieving better performance at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust by the end user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding whether the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions, showing its ability to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images.
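As a generic illustration of "global" interpretability (not the paper's RBM/Random Forest strategy), permutation importance ranks features by how much shuffling each one degrades a fitted model's score; here a toy predictor that depends only on one feature makes the idea visible:

```python
import numpy as np

def permutation_importance(predict, X, y, rng):
    """Score drop when each feature column is shuffled in turn."""
    base = np.mean(predict(X) == y)        # baseline accuracy
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])              # break feature j's link to y
        drops.append(base - np.mean(predict(Xp) == y))
    return np.array(drops)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)              # only feature 0 is informative
predict = lambda X: (X[:, 0] > 0).astype(int)  # toy "fitted model"
imp = permutation_importance(predict, X, y, rng)
# feature 0 dominates; the uninformative features score ~0
```

The "local" level in the abstract concerns individual predictions instead, which this global summary does not capture.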


Applications and Innovations in Intelligent Systems XIII, Proceedings of AI-2005, the Twenty-fifth SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence, Cambridge, UK, December 12-14, 2005 | 2005

Web-based Medical Teaching using a Multi-Agent System

Victor Alves; José Neves; Luís Nelas; Filipe Miguel Maria Marreiros

Web-based teaching via Intelligent Tutoring Systems (ITSs) is considered one of the most successful enterprises in artificial intelligence. Indeed, there is a long list of ITSs that have been tested on humans and proven to facilitate learning, among which we may find the well-tested and well-known tutors of algebra, geometry, and computer languages. These ITSs use a variety of computational paradigms, such as production systems, Bayesian networks, schema templates, theorem proving, and explanatory reasoning. The next generation of ITSs is expected to go one step further by adopting not only more intelligent interfaces but also a focus on integration. This article describes some particularities of a tutoring system that we are developing to simulate conversational dialogue in the area of Medicine, which enables the integration of highly heterogeneous sources of information into a coherent knowledge base, either from the tutor's point of view or from the development of the discipline itself; i.e., the system's content is created automatically by the physicians as their daily work goes on. This will encourage students to articulate lengthier answers that exhibit deep reasoning, rather than deliver short statements of shallow knowledge. The goal is to take advantage of the normal functioning of the health care units to build, on the fly, a knowledge base of cases and data for teaching and research purposes.


International Journal of Medical Informatics | 2011

A logic programming approach to medical errors in imaging

Susana Rodrigues; Paulo Brandão; Luís Nelas; José Neves; Victor Alves

BACKGROUND: In 2000, the Institute of Medicine reported disturbing numbers on the scope and impact of medical error in the process of health delivery. Nevertheless, a solution to this problem may lie in the adoption of adverse event reporting and learning systems that can help to identify hazards and risks. It is crucial to apply models that identify the root causes of adverse events and enhance the sharing of knowledge and experience. Progress in the efforts to improve patient safety has been frustratingly slow. Some of this lack of progress may be attributed to the absence of systems that take into account the characteristics of information about the real world. In our daily lives, we formulate most of our decisions based on incomplete, uncertain, and even forbidden or contradictory information. One's knowledge is less based on exact facts and more on hypotheses, perceptions, or indications.

PURPOSE: From the data collected by our adverse event treatment and learning system for medical imaging, and through the use of Extended Logic Programming for knowledge representation and reasoning, together with new methodologies for problem solving, namely those based on the notion of agents and/or multi-agent systems, we intend to generate reports that identify the most relevant causes of error and define improvement strategies, drawing conclusions about the impact, place of occurrence, and form or type of event recorded in the healthcare institutions.

RESULTS AND CONCLUSIONS: The Eindhoven Classification Model was extended and adapted to the medical imaging field and used to classify the root causes of adverse events. Extended Logic Programming was used for knowledge representation with defective information, allowing for the modelling of the universe of discourse in terms of default data and knowledge. A systematization of the evolution of the body of knowledge about Quality of Information embedded in the Root Cause Analysis was accomplished. An adverse event reporting and learning system was developed based on the presented approach to medical errors in imaging. This system was deployed in two Portuguese healthcare institutions, with an appealing outcome. The system made it possible to verify that the majority of occurrences were concentrated in a few events that could be avoided. The developed system allowed automatic knowledge extraction, enabling report generation with strategies for the improvement of quality of care.
