Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Maaike de Boer is active.

Publication


Featured research published by Maaike de Boer.


Breast Cancer Research and Treatment | 2011

Safety of avoiding routine use of axillary dissection in early stage breast cancer: a systematic review

Manon J. Pepels; Johanna H. Vestjens; Maaike de Boer; Marjolein L. Smidt; Paul J. van Diest; George F. Borm; Vivianne C. G. Tjan-Heijnen

Physicians are moving away from routine axillary lymph node dissection (ALND) in clinically node-negative breast cancer. We conducted a systematic review on the safety of this policy. PubMed and the Cochrane Library were searched. Sixty-eight studies were included: studies of clinically node-negative patients in the pre-sentinel node (SN) era; observational studies of SN-negative patients, without ALND; comparative studies of SN-negative patients, with a non-ALND and an ALND group; and SN-positive studies of patients without ALND. The primary endpoint was the pooled axillary recurrence rate (ARR) of each category; the secondary endpoint was the overall survival (OS) rate. In pre-SN studies, with larger tumors and less systemic therapy, the ARR without ALND after 5–10 years of follow-up was 12–18%, with 5% reduced OS. In the observational SN-negative studies, with a median follow-up of 36 months, the pooled ARR was 0.6% (95% CI 0.6–0.8). In the comparative SN-negative studies, the pooled ARR was 0.4% (95% CI 0.2–0.6) without ALND versus 0.3% (95% CI 0.1–0.6) with ALND at 31 and 47 months, respectively, with no survival disadvantage. In SN-positive studies, the ARR was up to 1.7% (95% CI 1.0–2.7) at 30 months. For patients with an H&E-positive SN, the ARR without ALND was 5% after 23 months, which may imply rates as high as 13 and 18% after 5 and 8 years. In conclusion, this systematic review confirms the safety of omitting ALND in SN-negative patients. There is a potential role for avoiding ALND in selected SN-positive patients, but eligibility criteria and the role of systemic therapy need to be further elucidated.
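The pooled recurrence rates above can be illustrated with a small calculation. A minimal sketch, assuming simple fixed-effect pooling (total events over total patients) with a Wilson score interval; the review's actual pooling method may differ, and the cohort counts below are made up:

```python
import math

def pooled_rate_wilson(events, totals, z=1.96):
    """Pool event counts across studies and return the pooled rate
    with a 95% Wilson score confidence interval."""
    n = sum(totals)
    p = sum(events) / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, (centre - margin) / denom, (centre + margin) / denom

# Toy example: three hypothetical SN-negative cohorts without ALND.
rate, lo, hi = pooled_rate_wilson(events=[2, 1, 3], totals=[400, 250, 350])
print(f"pooled ARR = {rate:.3%} (95% CI {lo:.3%} to {hi:.3%})")
```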


European Journal of Cancer | 2016

Ultrasound is at least as good as magnetic resonance imaging in predicting tumour size post-neoadjuvant chemotherapy in breast cancer

Birgit E.P.J. Vriens; Bart de Vries; Marc Lobbes; Saskia M. van Gastel; Franchette van den Berkmortel; Tineke J. Smilde; Laurence J. C. van Warmerdam; Maaike de Boer; Dick Johan van Spronsen; Marjolein L. Smidt; Petronella G. M. Peer; Maureen J. Aarts; Vivianne C. G. Tjan-Heijnen

BACKGROUND: The aim of this study was to evaluate the accuracy of clinical imaging of the primary breast tumour post-neoadjuvant chemotherapy (NAC) relative to the post-neoadjuvant histological tumour size (gold standard), and whether this varies with breast cancer subtype. Results of both magnetic resonance imaging (MRI) and ultrasound (US) are reported.

METHODS: Patients with invasive breast cancer were enrolled in the INTENS study between 2006 and 2009. We included 182 patients, of whom data were available for post-NAC MRI (n=155), US (n=123), and histopathological tumour size.

RESULTS: MRI estimated residual tumour size with <10-mm discordance in 54% of patients, overestimated size in 28% and underestimated size in 18% of patients. With US, this was 63%, 20% and 17%, respectively. The negative predictive value in hormone receptor-positive tumours was low for both MRI and US, at 26% and 33%, respectively. The median deviation in clinical tumour size as a percentage of pathological tumour size was 63% (P25=26, P75=100) for MRI and 49% (P25=22, P75=100) for US (P=0.06).

CONCLUSIONS: In this study, US was at least as good as breast MRI in providing information on residual tumour size post-neoadjuvant chemotherapy. However, both modalities suffered from a substantial percentage of over- and underestimation of tumour size, and both showed a low negative predictive value for pathologic complete remission (Gov nr: NCT00314977).


Multimedia Tools and Applications | 2016

Knowledge based query expansion in complex multimedia event detection

Maaike de Boer; Klamer Schutte; Wessel Kraaij

A common approach in content-based video information retrieval is to perform automatic shot annotation with semantic labels using pre-trained classifiers. The visual vocabulary of state-of-the-art automatic annotation systems is limited to a few thousand concepts, which creates a semantic gap between the semantic labels and the natural language query. One of the methods to bridge this semantic gap is to expand the original user query using knowledge bases. Both common knowledge bases, such as Wikipedia, and expert knowledge bases, such as a manually created ontology, can be used. Expert knowledge bases achieve the highest performance but are only available in closed domains, because only in closed domains can all necessary information, including structure and disambiguation, be made available in a knowledge base. Common knowledge bases are often used in open domains, because they cover a lot of general information. In this research, query expansion using the common knowledge bases ConceptNet and Wikipedia is compared to an expert description of the topic, applied to content-based information retrieval of complex events. We ran experiments on the test set of TRECVID MED 2014. Results show that 1) query expansion can improve performance compared to no query expansion when the main noun of the query cannot be matched to a concept detector; 2) query expansion using expert knowledge is not necessarily better than query expansion using common knowledge; 3) ConceptNet performs slightly better than Wikipedia; and 4) late fusion can slightly improve performance. To conclude, query expansion has potential in complex event detection.
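The matching-and-expansion idea above can be sketched as follows; the detector list and knowledge-base relations are hypothetical stand-ins for real concept detectors and ConceptNet/Wikipedia lookups:

```python
# Toy sketch of knowledge-based query expansion: when a query term has
# no matching concept detector, expand it with related terms from a
# (here hard-coded, hypothetical) common knowledge base, keeping only
# terms the detector bank can actually detect.

DETECTORS = {"dog", "ball", "grass", "person"}  # available concept detectors

# Stand-in for ConceptNet/Wikipedia lookups (illustrative relations only).
KNOWLEDGE_BASE = {
    "fetch": ["dog", "ball", "throw"],
    "picnic": ["grass", "person", "food"],
}

def expand_query(query_terms):
    """Map query terms to detectable concepts, expanding via the KB
    any term that is not itself a detector."""
    concepts = set()
    for term in query_terms:
        if term in DETECTORS:
            concepts.add(term)
        else:
            concepts.update(c for c in KNOWLEDGE_BASE.get(term, [])
                            if c in DETECTORS)
    return concepts

print(expand_query(["fetch"]))         # "fetch" itself is not detectable
print(expand_query(["picnic", "dog"]))
```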


international conference on multimedia retrieval | 2016

Event Detection with Zero Example: Select the Right and Suppress the Wrong Concepts

Yi-Jie Lu; Hao Zhang; Maaike de Boer; Chong-Wah Ngo

Complex video event detection without visual examples is a very challenging issue in multimedia retrieval. We present a state-of-the-art framework for event search without any need for exemplar videos or textual metadata in the search corpus. To perform event search given only query words, the core of our framework is a large, pre-built bank of concept detectors which can understand the content of a video in terms of object, scene, action and activity concepts. Leveraging such knowledge can effectively narrow the semantic gap between the textual query and the visual content of videos. Besides the large concept bank, this paper focuses on two challenges that largely affect retrieval performance as the size of the concept bank increases: (1) how to choose the right concepts in the concept bank to accurately represent the query; and (2) if noisy concepts are inevitably chosen, how to minimize their influence. We share our novel insights on these particular problems, which pave the way for a practical system that achieved the best performance in NIST TRECVID 2015.


content based multimedia indexing | 2015

Interactive detection of incrementally learned concepts in images with ranking and semantic query interpretation

Klamer Schutte; Henri Bouma; John G. M. Schavemaker; Laura Daniele; Maya Sappelli; Gijs Koot; Pieter T. Eendebak; George Azzopardi; Martijn Spitters; Maaike de Boer; Maarten C. Kruithof; Paul Brandt

The number of networked cameras is growing exponentially. Multiple applications in different domains result in an increasing need to search semantically over video sensor data. In this paper, we present the GOOSE demonstrator, which is a real-time general-purpose search engine that allows users to pose natural language queries to retrieve corresponding images. Top-down, this demonstrator interprets queries, which are presented as an intuitive graph to collect user feedback. Bottom-up, the system automatically recognizes and localizes concepts in images and it can incrementally learn novel concepts. A smart ranking combines both and allows effective retrieval of relevant images.


International Journal of Multimedia Information Retrieval | 2016

Blind late fusion in multimedia event retrieval

Maaike de Boer; Klamer Schutte; Hao Zhang; Yi-Jie Lu; Chong-Wah Ngo; Wessel Kraaij

One of the challenges in multimedia event retrieval is the integration of data from multiple modalities. A modality is defined as a single channel of sensory input, such as visual or audio; we also refer to this as a data source. Previous research has shown that integrating different data sources can improve performance compared to using only one source, but a clear insight into the success factors of alternative fusion methods is still lacking. We introduce several new blind late fusion methods based on inversions and ratios of the state-of-the-art blind fusion methods, and compare performance both in simulations and on an international benchmark data set in multimedia event retrieval, TRECVID MED. The results show that five of the proposed methods outperform the state-of-the-art methods in a case with sufficient training examples (100 examples). The novel fusion method named JRER is not only the best method with dependent data sources, but is also a robust method in all simulations with sufficient training examples.
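As a rough illustration of blind late fusion, the sketch below normalizes per-modality scores and combines them without any trained fusion model; the inversion/ratio variants such as JRER build on the same idea but are defined in the article itself:

```python
# Minimal sketch of blind late fusion: per-modality retrieval scores
# are min-max normalized and combined without training. Average and
# product fusion are shown as the two classic blind baselines.

def normalize(scores):
    """Min-max normalize a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def fuse(score_lists, method="average"):
    """Blindly fuse per-modality score lists (one score per video)."""
    normed = [normalize(s) for s in score_lists]
    fused = []
    for per_video in zip(*normed):
        if method == "average":
            fused.append(sum(per_video) / len(per_video))
        elif method == "product":  # rewards agreement between modalities
            p = 1.0
            for s in per_video:
                p *= s
            fused.append(p)
    return fused

visual = [0.9, 0.2, 0.5]  # e.g. visual-concept scores for three videos
audio = [0.8, 0.1, 0.7]   # e.g. audio-concept scores for the same videos
print(fuse([visual, audio], "average"))
```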


ACM Transactions on Multimedia Computing, Communications, and Applications | 2017

Semantic Reasoning in Zero Example Video Event Retrieval

Maaike de Boer; Yi-Jie Lu; Hao Zhang; Klamer Schutte; Chong-Wah Ngo; Wessel Kraaij

Searching in digital video data for high-level events, such as a parade or a car accident, is challenging when the query is textual and lacks visual example images or videos. Current research in deep neural networks is highly beneficial for the retrieval of high-level events using visual examples, but without examples it is still hard to (1) determine which concepts are useful to pre-train (Vocabulary challenge) and (2) which pre-trained concept detectors are relevant for a certain unseen high-level event (Concept Selection challenge). In our article, we present our Semantic Event Retrieval System which (1) shows the importance of high-level concepts in a vocabulary for the retrieval of complex and generic high-level events and (2) uses a novel concept selection method (i-w2v) based on semantic embeddings. Our experiments on the international TRECVID Multimedia Event Detection benchmark show that a diverse vocabulary including high-level concepts improves performance on the retrieval of high-level events in videos and that our novel method outperforms a knowledge-based concept selection method.
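The concept-selection step can be sketched with plain cosine similarity over embeddings; the vectors below are toy stand-ins for real word2vec embeddings, and the paper's i-w2v method additionally handles multi-word queries incrementally:

```python
# Sketch of embedding-based concept selection: rank pre-trained concept
# detectors by cosine similarity between the query embedding and each
# concept's embedding, keeping the top-k. Concept names and 3-d vectors
# are hypothetical.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

CONCEPT_EMBEDDINGS = {
    "parade": [0.9, 0.1, 0.0],
    "marching_band": [0.8, 0.3, 0.1],
    "car": [0.0, 0.1, 0.9],
}

def select_concepts(query_vec, k=2):
    """Return the k concepts whose embeddings are closest to the query."""
    ranked = sorted(CONCEPT_EMBEDDINGS.items(),
                    key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

print(select_concepts([1.0, 0.2, 0.0]))  # a parade-like query vector
```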


Multimedia Tools and Applications | 2017

Improving video event retrieval by user feedback

Maaike de Boer; Geert Pingen; Douwe Knook; Klamer Schutte; Wessel Kraaij

In content-based video retrieval, videos are often indexed with semantic labels (concepts) using pre-trained classifiers. These pre-trained classifiers (concept detectors) are not perfect, so the labels are noisy. Additionally, the number of pre-trained classifiers is limited, and automatic methods often cannot represent the query adequately in terms of the available concepts. This problem is also apparent in the retrieval of events, such as a bike trick or a birthday party. Our solution is to obtain user feedback. This user feedback can be provided on two levels: concept level and video level. We introduce Adaptive Relevance Feedback (ARF), a method for video-level feedback based on the classical Rocchio relevance feedback method from information retrieval. Furthermore, we explore methods for concept-level feedback, such as re-weighting and Query Point Modification (QPM), as well as a method that changes the semantic space in which the concepts are represented. Methods on both levels are evaluated on the international benchmark TRECVID Multimedia Event Detection (MED) and compared to state-of-the-art methods. Results show that relevance feedback on both the concept and video level improves performance compared to using no relevance feedback; relevance feedback on the video level obtains higher performance than relevance feedback on the concept level; and our proposed ARF method on the video level outperforms a state-of-the-art k-NN method, all methods on the concept level, and even manually selected concepts.
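The classical Rocchio update that ARF builds on can be sketched as follows; the alpha/beta/gamma weights here are the textbook defaults, not the paper's adapted values, and the vectors are illustrative:

```python
# Sketch of Rocchio-style relevance feedback on video-level judgments:
# the query vector (in concept-detector space) is moved toward videos
# the user marked relevant and away from non-relevant ones.

def rocchio(query, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    """q' = alpha*q + beta*centroid(relevant) - gamma*centroid(non_relevant)."""
    dim = len(query)

    def centroid(vectors):
        if not vectors:
            return [0.0] * dim
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

    r, nr = centroid(relevant), centroid(non_relevant)
    return [alpha * query[i] + beta * r[i] - gamma * nr[i] for i in range(dim)]

q = [0.5, 0.5, 0.0]                       # initial query in concept space
rel = [[1.0, 0.0, 0.0], [0.8, 0.2, 0.0]]  # user-marked relevant videos
non = [[0.0, 0.0, 1.0]]                   # user-marked non-relevant video
print(rocchio(q, rel, non))
```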


computer analysis of images and patterns | 2015

Fast Re-ranking of Visual Search Results by Example Selection

John G. M. Schavemaker; Martijn Spitters; Gijs Koot; Maaike de Boer

In this paper we present a simple, novel method that uses state-of-the-art image concept detectors and publicly available image search engines to retrieve images for semantically more complex queries from local databases without re-indexing the database. Our low-key, data-driven method for associative recognition of unknown, or more elaborate, concepts in images allows users to select visual examples to tailor query results to their typical preferences. The method is compared with a baseline approach using ConceptNet-based semantic expansion of the query phrase to known concepts, as set by the concepts of the image concept detectors. Using the output of the image concept detector as an index for all images in the local image database, we present a quick nearest-neighbor matching scheme that can match queries swiftly via concept output vectors. We show preliminary results for a number of query phrases, followed by a general discussion.
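The nearest-neighbor matching over concept output vectors can be sketched as follows; image identifiers and vectors are illustrative:

```python
# Sketch of the quick matching idea: each database image is indexed by
# its concept-detector output vector; a query derived from user-selected
# example images is matched via nearest-neighbor search over those
# vectors, with no re-indexing of the database.

import math

INDEX = {  # image id -> concept output vector (hypothetical)
    "img1": [0.9, 0.1, 0.0],
    "img2": [0.2, 0.7, 0.1],
    "img3": [0.1, 0.1, 0.8],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest(query_vec, k=2):
    """Return the k database images closest to the query vector."""
    return sorted(INDEX, key=lambda i: euclidean(query_vec, INDEX[i]))[:k]

# Query vector built from a user-selected example image:
print(nearest([0.85, 0.15, 0.0]))
```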


Counterterrorism, Crime Fighting, Forensics, and Surveillance Technologies II | 2018

Flexible image analysis for law enforcement agencies with deep neural networks to determine: where, who and what

Henri Bouma; Bart Joosten; Maarten C. Kruithof; Maaike de Boer; Alexandru Ginsca; Benjamin Labbe; Quoc T. Vuong

Due to the increasing need for effective security measures and the integration of cameras in commercial products, a huge amount of visual data is created today. Law enforcement agencies (LEAs) are inspecting images and videos to find radicalization, propaganda for terrorist organizations and illegal products on darknet markets. This is time consuming. Instead of an undirected search, LEAs would like to adapt to new crimes and threats, and focus only on data from specific locations, persons or objects, which requires flexible interpretation of image content. Visual concept detection with deep convolutional neural networks (CNNs) is a crucial component to understand the image content. This paper has five contributions. The first contribution allows image-based geo-localization to estimate the origin of an image. CNNs and geotagged images are used to create a model that determines the location of an image by its pixel values. The second contribution enables analysis of fine-grained concepts to distinguish sub-categories in a generic concept. The proposed method encompasses data acquisition and cleaning and concept hierarchies. The third contribution is the recognition of person attributes (e.g., glasses or moustache) to enable query by textual description for a person. The person-attribute problem is treated as a specific sub-task of concept classification. The fourth contribution is an intuitive image annotation tool based on active learning. Active learning allows users to define novel concepts flexibly and train CNNs with minimal annotation effort. The fifth contribution increases the flexibility for LEAs in the query definition by using query expansion. Query expansion maps user queries to known and detectable concepts. Therefore, no prior knowledge of the detectable concepts is required for the users. 
The methods are validated on data with varying locations (popular and non-touristic locations), varying person attributes (CelebA dataset), and varying number of annotations.

Collaboration


Dive into Maaike de Boer's collaboration.

Top Co-Authors

Wessel Kraaij
Radboud University Nijmegen

Chong-Wah Ngo
City University of Hong Kong

Hao Zhang
City University of Hong Kong

Yi-Jie Lu
City University of Hong Kong

Vivianne C. G. Tjan-Heijnen
Maastricht University Medical Centre

Birgit E.P.J. Vriens
Maastricht University Medical Centre

George F. Borm
Radboud University Nijmegen Medical Centre