Hans Weda
Philips
Publication
Featured research published by Hans Weda.
acm multimedia | 2007
Prarthana Shrestha; Mauro Barbieri; Hans Weda
An increasing number of people regularly capture video at social occasions such as weddings, parties and holiday trips. As a result, multiple video recordings are made of a single event, providing different view angles and wider coverage. This offers the opportunity to produce a desired video summary of the event, combining the most favorable views from the multiple recordings. In order to mix content from different cameras, the recordings require very precise synchronization in time. This task is very tedious and is presently done manually. We present two methods to synchronize multiple videos based on the identical audio content present in the recordings. The first method uses audio classification; the synchronization between two recordings is determined by correlating the audio classes. The second method uses audio fingerprints to represent the recorded audio, and the synchronization is determined by fingerprint matches between the different recordings. The experimental results show that the audio-classification method requires recordings that are at least a couple of minutes long and have a large temporal overlap to determine the synchronization point. The audio-fingerprint method requires at least 3 seconds of overlapping audio and resulted in perfect synchronization in all examined cases.
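The fingerprint-based idea can be illustrated with a small sketch. The snippet below is not the authors' implementation; it assumes each recording has already been reduced to one integer fingerprint per audio frame (a hypothetical representation) and simply searches for the frame offset that maximizes the number of exact fingerprint matches.

```python
import numpy as np

def estimate_offset(fp_a, fp_b, max_offset):
    """Return the frame offset of recording B relative to A with the most fingerprint matches."""
    best_offset, best_matches = 0, -1
    for offset in range(-max_offset, max_offset + 1):
        # Index ranges of the overlapping part for this candidate offset
        a_start, b_start = max(0, offset), max(0, -offset)
        length = min(len(fp_a) - a_start, len(fp_b) - b_start)
        if length <= 0:
            continue
        matches = np.count_nonzero(
            fp_a[a_start:a_start + length] == fp_b[b_start:b_start + length]
        )
        if matches > best_matches:
            best_offset, best_matches = offset, matches
    return best_offset, best_matches
```

In practice a minimum-match threshold would also be needed to decide whether the two recordings overlap at all.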
IEEE Transactions on Multimedia | 2010
Prarthana Shrestha; Mauro Barbieri; Hans Weda; Dragan Sekulovski
Digital video capturing is becoming popular with the decreasing price of camcorders and the increasing availability of devices with embedded video cameras such as digital still cameras, mobile phones and PDAs. While a raw home video is often considered visually unappealing, having multiple recordings of the same event provides the opportunity to combine audio and video segments from different cameras to improve quality and aesthetics. Mixing content from different recordings requires precise synchronization among the recordings. In most present applications, synchronization is done manually and is considered a very tedious task. In this paper, we propose a novel automated synchronization approach based on detecting and matching audio and video features extracted from the recorded content. We experimentally assess three realizations of this approach on a common data set and make recommendations on their usability in practical use cases. The realizations impose no limitations on the number and movement of the cameras. Moreover, they are robust against various ambient noises and audio-visual artifacts occurring during the recordings.
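As a rough, generic illustration of offset estimation from recorded content (not one of the specific realizations evaluated in the paper), the sketch below cross-correlates the short-time audio energy envelopes of two recordings and reads the relative delay, in frames, from the correlation peak.

```python
import numpy as np

def energy_envelope(samples, frame_len=1024):
    """Sum of squared samples per non-overlapping audio frame."""
    n = len(samples) // frame_len
    frames = np.asarray(samples[:n * frame_len], dtype=np.float64).reshape(n, frame_len)
    return (frames ** 2).sum(axis=1)

def offset_in_frames(env_a, env_b):
    # Zero-mean so the correlation peak reflects envelope shape, not overall level
    a, b = env_a - env_a.mean(), env_b - env_b.mean()
    corr = np.correlate(a, b, mode="full")
    # A positive lag means recording A started earlier than recording B
    return int(np.argmax(corr)) - (len(b) - 1)
```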
European Respiratory Journal | 2014
Lieuwe D. Bos; Hans Weda; Yuanyue Wang; Hugo Knobel; Tamara Mathea Elisabeth Nijsen; Teunis Johannes Vink; Aeilko H. Zwinderman; Peter J. Sterk; Marcus J. Schultz
There is a need for biological markers of the acute respiratory distress syndrome (ARDS). Exhaled breath contains hundreds of metabolites in the gas phase, some of which reflect (patho)physiological processes. We aimed to determine the diagnostic accuracy of metabolites in exhaled breath as biomarkers of ARDS. Breath from ventilated intensive care unit patients (n=101) was analysed using gas chromatography and mass spectrometry during the first day of admission. ARDS was defined by the Berlin definition. Training and temporal validation cohorts were used. 23 patients in the training cohort (n=53) had ARDS. Three breath metabolites, octane, acetaldehyde and 3-methylheptane, could discriminate between ARDS and controls with an area under the receiver operating characteristic curve (AUC) of 0.80. Temporal external validation (19 ARDS cases in a cohort of 48) resulted in an AUC of 0.78. Discrimination was insensitive to adjustment for severity of disease, a direct or indirect cause of ARDS, comorbidities, or ventilator settings. Combination with the lung injury prediction score increased the AUC to 0.91 and improved net reclassification by 1.17. Exhaled breath analysis showed good diagnostic accuracy for ARDS, which was externally validated. These data suggest that exhaled breath analysis could be used for the diagnostic assessment of ARDS. Metabolites in the breath of ventilated patients may be used to diagnose the acute respiratory distress syndrome http://ow.ly/uWHF1
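For readers unfamiliar with the reported statistic, the snippet below shows, on synthetic data only (no relation to the study cohort), how a classifier built on three breath metabolites can be summarized by the area under the ROC curve.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Simulated intensities for octane, acetaldehyde and 3-methylheptane
controls = rng.normal(loc=[1.0, 2.0, 0.5], scale=0.5, size=(100, 3))
ards = rng.normal(loc=[1.6, 2.5, 0.9], scale=0.5, size=(100, 3))
X = np.vstack([controls, ards])
y = np.r_[np.zeros(100), np.ones(100)]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"ROC AUC on held-out data: {auc:.2f}")
```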
international conference on semantic computing | 2007
Tsvetomira Tsoneva; Mauro Barbieri; Hans Weda
The movie industry produces thousands of feature films and TV series annually; such massive data volumes would take consumers more than a lifetime to watch. Summarization of narrative media, which aims to provide concise and informative video summaries, has therefore become a popular topic of research. However, most summarization solutions so far aim to represent just the overall atmosphere of the video at the expense of the story line. In this paper we describe a novel approach for the automated creation of summaries for narrative videos. We propose an automated content-analysis and summarization framework for creating moving-image summaries, aiming to preserve the story line to the level that users can watch the summary instead of the original content. Our solution is based on textual cues available in subtitles and movie scripts. We extract features such as keywords and main characters' names and presence, and combine them in an importance function to identify the moments most relevant for preserving the story line. We develop several summarization methods and evaluate the quality of the resulting summaries in terms of user understanding and user satisfaction through a user test.
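The importance function can be sketched as follows; the weights, feature names and segment fields are assumptions for illustration, not the paper's exact formulation.

```python
def importance(segment, keyword_weights, main_characters, w_kw=0.6, w_char=0.4):
    """segment: dict with 'words' (list of str), 'speakers' (set of str) and 'start_time'."""
    kw_score = sum(keyword_weights.get(word.lower(), 0.0) for word in segment["words"])
    char_score = len(segment["speakers"] & main_characters)
    return w_kw * kw_score + w_char * char_score

def summarize(segments, keyword_weights, main_characters, target_count):
    ranked = sorted(segments, key=lambda s: importance(s, keyword_weights, main_characters),
                    reverse=True)
    # Restore temporal order so the selected moments still follow the story line
    return sorted(ranked[:target_count], key=lambda s: s["start_time"])
```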
acm multimedia | 2006
Prarthana Shrestha; Hans Weda; Mauro Barbieri; Dragan Sekulovski
Combining audiovisual sequences from different cameras requires precise alignment in time. Current synchronization techniques involve using geometrical properties such as camera positions and object features. In this paper, we present a synchronization method based on detecting flashes present in the video content. Such flashes, generated by still cameras, cause a sharp bright frame in the videos. They are detected using an adaptive threshold on luminance variation across the frames. The resulting flash sequences are searched for matches. The matching flashes indicate overlapping content, and allow determining the offset time between cameras. The experimental results from flash detection and flash sequence matching show perfect synchronization in all examined cases.
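A simplified version of the flash detector can be written as below; the window size and margin factor are assumed values, not the paper's tuned parameters.

```python
import numpy as np

def detect_flashes(mean_luma, window=25, k=4.0):
    """Flag frames whose mean luminance sharply exceeds the local baseline.

    mean_luma: 1-D array with one mean luminance value per frame.
    """
    flashes = []
    for i in range(len(mean_luma)):
        lo, hi = max(0, i - window), min(len(mean_luma), i + window + 1)
        neighbours = np.delete(mean_luma[lo:hi], i - lo)  # exclude the frame itself
        baseline, spread = neighbours.mean(), neighbours.std()
        if mean_luma[i] > baseline + k * max(spread, 1.0):  # adaptive threshold
            flashes.append(i)
    return flashes
```

The per-camera flash frame indices would then be matched across recordings, and the offset with the most coinciding flashes gives the synchronization point.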
electronic imaging | 2007
Marco Campanella; Hans Weda; Mauro Barbieri
In recent years, more and more people capture their experiences in home videos. However, home video editing is still a difficult and time-consuming task. We present the Edit While Watching system, which allows users to automatically create and change a summary of a home video in an easy, intuitive and lean-back way. Based on content analysis, the video is indexed, segmented, and combined with suitable music and editing effects. The result is an automatically generated home video summary that is shown to the user. While watching it, users can indicate whether they like certain content, so that the system adapts the summary to contain more content that is similar or related to the displayed content. During video playback users can also modify and enrich the content, immediately seeing the effects of their changes. Edit While Watching does not require a complex user interface: a TV and a few keys of a remote control are sufficient. A user study has shown that it is easy to learn and to use, although users expressed the need for more control over the editing operations and the editing process.
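The adaptation step can be pictured as a small relevance-feedback update; the cosine-similarity boost below is a hypothetical sketch of the idea, not the system's actual algorithm.

```python
import numpy as np

def boost_similar(scores, features, liked_index, strength=0.3):
    """Raise the relevance scores of segments whose features resemble the liked segment.

    scores: 1-D array of base relevance scores; features: (n_segments, n_features) array.
    """
    liked = features[liked_index]
    norms = np.linalg.norm(features, axis=1) * np.linalg.norm(liked)
    similarity = features @ liked / np.maximum(norms, 1e-9)  # cosine similarity per segment
    return scores + strength * similarity
```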
BMC Pulmonary Medicine | 2017
Pouline van Oort; Tamara Mathea Elisabeth Nijsen; Hans Weda; Hugo Knobel; Paul Dark; Tim Felton; Nicholas J. W. Rattray; Oluwasola Lawal; Waqar M. Ahmed; Craig Portsmouth; Peter J. Sterk; Marcus J. Schultz; Tetyana Zakharkina; Antonio Artigas; Pedro Póvoa; Ignacio Martin-Loeches; Stephen J. Fowler; Lieuwe D. Bos
Background: The diagnosis of ventilator-associated pneumonia (VAP) remains time-consuming and costly, the clinical tools lack specificity, and a bedside test to exclude infection in suspected patients is unavailable. Breath contains hundreds to thousands of volatile organic compounds (VOCs) that result from host and microbial metabolism as well as the environment. The present study aims to use breath VOC analysis to develop a model that can discriminate, with high sensitivity, between patients who have positive cultures and those who have negative cultures. Methods/design: The Molecular Analysis of Exhaled Breath as Diagnostic Test for Ventilator-Associated Pneumonia (BreathDx) study is a multicentre observational study. Breath and bronchial lavage samples will be collected from 100 and 53 intubated and ventilated patients suspected of VAP. Breath will be analysed using thermal desorption-gas chromatography-mass spectrometry (TD-GC-MS). The primary endpoint is the accuracy of cross-validated prediction of positive respiratory cultures in patients suspected of VAP, with a sensitivity of at least 99% (high negative predictive value). Discussion: To our knowledge, BreathDx is the first study powered to investigate whether molecular analysis of breath can be used to classify suspected VAP patients with and without positive microbiological cultures at 99% sensitivity. Trial registration: UKCRN ID number 19086, registered May 2015; also registered at www.trialregister.nl under the acronym 'BreathDx' with trial ID number NTR 6114 (retrospectively registered on 28 October 2016).
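To make the sensitivity target concrete, the sketch below (illustrative only, not the study's statistical analysis plan) shows how a decision threshold can be picked so that cross-validated sensitivity stays at or above 99%, accepting a loss of specificity in exchange for a high negative predictive value.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def threshold_for_sensitivity(X, y, target_sensitivity=0.99):
    """X: VOC feature matrix; y: 1 for a positive respiratory culture, 0 otherwise."""
    # Out-of-fold probabilities approximate performance on unseen patients
    probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method="predict_proba")[:, 1]
    positives = np.sort(probs[y == 1])
    # Highest threshold that still labels the required fraction of true positives as positive
    idx = int(np.floor((1.0 - target_sensitivity) * len(positives)))
    return positives[idx]
```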
human factors in computing systems | 2011
Paul Anthony Shrubsole; Tine Lavrysen; Maddy Janse; Hans Weda
In this case study, we designed a family game to explore whether this could be an effective and fun approach for raising the awareness of family members towards their energy use and, in the long run, to provide an effective tool for affecting their habits regarding sustainable behavior. The design of the family game implemented the metaphor of electricity as flowing liquid, fostered fun experiences and supported competitive and social elements. Dutch families with children, aged 5-11 years, participated in the design and evaluation of the concept. We obtained valuable insights into the use and understanding of electricity by the families, how the families looked at responsible behaviors around their usage and how a game could integrate into the family context in a fun way.
Journal of Breath Research | 2015
Margit Boshuizen; Jan Hendrik Leopold; Tetyana Zakharkina; Hugo Knobel; Hans Weda; Tamara Mathea Elisabeth Nijsen; Teunis Johannes Vink; Peter J. Sterk; Marcus J. Schultz; Lieuwe D. Bos
Alkanes and alkenes in the breath are produced through fatty acid peroxidation, which is initiated by reactive oxygen species. Inflammation is an important cause and effect of reactive oxygen species. We aimed to evaluate the association between fatty acid peroxidation products and inflammation of the alveolar and systemic compartments in ventilated intensive care unit (ICU) patients. Volatile organic compounds were measured by gas chromatography and mass spectrometry in the breath of newly ventilated ICU patients within 24 h after ICU admission. Cytokines were measured in non-directed bronchial lavage fluid (NBL) and plasma by cytometric bead array. Correlation coefficients were calculated and presented in heatmaps. 93 patients were included. Peroxidation products in exhaled breath were not associated with markers of inflammation in plasma, but were correlated with those in NBL. IL-6, IL-8, IL-1β and TNF-α concentrations in NBL showed inverse correlations with the peroxidation products of fatty acids. Furthermore, NBL IL-10, IL-13, GM-CSF and IFNγ demonstrated positive associations with breath alkanes and alkenes. Correlation coefficients for NBL cytokines were high for peroxidation products of n-6, n-7 and particularly n-9 fatty acids. Levels of lipid peroxidation products in the breath of ventilated ICU patients are associated with levels of inflammatory markers in NBL, but not in plasma. Alkanes and alkenes in breath seem to be associated with an anti-inflammatory, rather than a pro-inflammatory, state in the alveoli.
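The kind of correlation heatmap mentioned above can be reproduced in outline with placeholder data and column names (no relation to the study's measurements):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
breath = pd.DataFrame(rng.normal(size=(93, 3)), columns=["octane", "nonane", "decene"])
cytokines = pd.DataFrame(rng.normal(size=(93, 4)), columns=["IL-6", "IL-8", "IL-10", "TNF-a"])

# Spearman correlation of every breath compound with every NBL cytokine
corr = pd.concat([breath, cytokines], axis=1).corr(method="spearman")
corr = corr.loc[breath.columns, cytokines.columns]

plt.imshow(corr, cmap="coolwarm", vmin=-1, vmax=1)
plt.xticks(range(len(cytokines.columns)), cytokines.columns, rotation=45)
plt.yticks(range(len(breath.columns)), breath.columns)
plt.colorbar(label="Spearman correlation")
plt.tight_layout()
plt.show()
```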
international conference on multimedia and expo | 2007
Hans Weda; Mauro Barbieri
Distinguishing children from adults in digital images is beneficial for enabling ambient intelligent applications. For example, when an ambient intelligent shop window detects a child, it could adapt its content to the particular limitations and needs of children. This article presents a method to automatically detect children in digital images. The method is based on the fact that the size of a person's iris is practically constant after birth, while the head keeps growing into adulthood. Adults can therefore be distinguished from children based on the face/iris size ratio. Faces are detected using a standard Viola-Jones face detection technique. The irises are found and measured using iterative Canny edge detection and a modified circular Hough transform. Our results show an accuracy of over 80% when tested on a set of 289 real-life photographs of frontal faces.
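A rough sketch of the face/iris ratio idea is given below; the cascade file, Hough parameters and ratio threshold are assumptions for illustration, and the paper's iterative Canny edge detection with a modified Hough transform is replaced here by OpenCV's stock HoughCircles.

```python
import cv2

def looks_like_child(image_bgr, ratio_threshold=9.0):
    """Return True/False from the face-width to iris-diameter ratio, or None if nothing is found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        eye_region = gray[y:y + h // 2, x:x + w]  # irises lie in the upper half of the face
        circles = cv2.HoughCircles(eye_region, cv2.HOUGH_GRADIENT, dp=1, minDist=w // 8,
                                   param1=100, param2=20, minRadius=2, maxRadius=w // 10)
        if circles is None:
            continue
        iris_diameter = 2 * circles[0][0][2]
        # A child's iris is large relative to the face, giving a smaller ratio
        return (w / iris_diameter) < ratio_threshold
    return None
```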