Mohamed A. ElHelw
Imperial College London
Publications
Featured research published by Mohamed A. ElHelw.
Surgical Innovation | 2007
Omer Aziz; Louis Atallah; Benny Lo; Mohamed A. ElHelw; Lei Wang; Guang-Zhong Yang; Ara Darzi
Patients going home following major surgery are susceptible to complications such as wound infection, abscess formation, malnutrition, poor analgesia, and depression, all of which can develop after the fifth postoperative day and slow recovery. Although current hospital recovery monitoring systems are effective during the perioperative and early postoperative periods, they cannot be used once the patient is at home. Measuring and quantifying home recovery is currently a subjective and labor-intensive process. This case report highlights the development and piloting of a wireless body sensor network to monitor postoperative recovery at home in patients undergoing abdominal surgery. The device consists of wearable sensors (vital signs, motion) combined with miniaturized computers wirelessly linked to each other, allowing continuous monitoring of patients in a pervasive (unobtrusive) manner in any environment. Initial pilot work, with results from both a simulated home environment (with volunteers) and a real home environment (with patients), is presented.
Tests and Proofs | 2008
Mohamed A. ElHelw; Marios Nicolaou; Adrian James Chung; Guang-Zhong Yang; M. Stella Atkins
Visual realism has been a major objective of computer graphics since the inception of the field. However, the perception of visual realism is not a well-understood process and is usually attributed to a combination of visual cues and image features that are difficult to define or measure. For highly complex images, the problem is even more involved. The purpose of this paper is to present an eye-tracking study investigating the perception of visual realism of static images with different visual qualities. Eye-fixation clusters helped to define salient image features, corresponding to 3D surface details and light-transfer properties, that attract observers' attention. This enabled the definition and categorization of image attributes affecting the perception of photorealism. The dynamics of the visual behavior of different observer groups were examined by analyzing saccadic eye movements. We also demonstrated how the different image categories used in the experiments were perceived with varying degrees of visual realism. The results presented can be used as a basis for investigating the impact of individual image features on the perception of visual realism. This study suggests that post-recall or simple abstraction of visual experience is not accurate, and that eye tracking provides an effective way of determining the relevant features that affect visual realism, thus allowing for improved rendering techniques that target these features.
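As a rough illustration of the fixation-clustering step described above, the sketch below groups gaze fixations into spatial clusters whose centroids mark candidate salient image regions. The choice of DBSCAN, the pixel thresholds, and the synthetic fixation data are illustrative assumptions, not the study's actual protocol.

```python
# Minimal sketch: spatial clustering of eye-tracking fixations to locate salient
# image regions. Not the authors' pipeline; eps/min_samples and the data are assumed.
import numpy as np
from sklearn.cluster import DBSCAN

def fixation_clusters(fixations_xy, eps_px=40, min_fixations=3):
    """Group fixation points (N x 2 pixel coords) into candidate salient regions."""
    labels = DBSCAN(eps=eps_px, min_samples=min_fixations).fit_predict(fixations_xy)
    centres = {}
    for label in set(labels):
        if label == -1:                      # noise / isolated fixations
            continue
        pts = fixations_xy[labels == label]
        centres[label] = pts.mean(axis=0)    # cluster centroid = candidate salient feature
    return labels, centres

# Example: 200 synthetic fixations drawn around two image regions.
rng = np.random.default_rng(0)
fix = np.vstack([rng.normal((300, 200), 15, (100, 2)),
                 rng.normal((700, 450), 15, (100, 2))])
labels, centres = fixation_clusters(fix)
print(centres)
```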
Wearable and Implantable Body Sensor Networks | 2007
Louis Atallah; Mohamed A. ElHelw; Julien Pansiot; Danail Stoyanov; Lei Wang; Benny Lo; Guang-Zhong Yang
This paper investigates the combined use of ambient and wearable sensing for inferring changes in patient behaviour patterns. It is demonstrated that, with the use of wearable and blob-based ambient sensors, it is possible to develop an effective visualization framework allowing the observation of daily activities in a homecare environment. An effective behaviour modelling method based on Hidden Markov Models (HMMs) is proposed for highlighting changes in activity patterns. This allows sequences to be represented in a similarity space that can be used for clustering or data exploration.
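The sketch below conveys the HMM similarity-space idea in a minimal form: each discrete activity sequence is scored against a set of reference HMMs with the forward algorithm, and the resulting log-likelihood vector gives its coordinates for clustering or data exploration. The toy model parameters and activity symbols are assumptions; the paper's trained models and sensor features are not reproduced here.

```python
# Minimal sketch of an HMM-based similarity space for activity sequences.
# Reference HMM parameters and symbol alphabet are illustrative assumptions.
import numpy as np

def log_forward(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM (forward algorithm)."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # log-sum-exp over previous states, then emit the current symbol
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

def similarity_space(sequences, hmms):
    """Map each sequence to a vector of log-likelihoods, one per reference HMM."""
    return np.array([[log_forward(s, *h) for h in hmms] for s in sequences])

def make_hmm(seed, n_states=3, n_symbols=4):
    """Toy HMM over n_symbols activity labels (e.g. sit/walk/lie/stand)."""
    rng = np.random.default_rng(seed)
    pi = rng.dirichlet(np.ones(n_states))
    A = rng.dirichlet(np.ones(n_states), size=n_states)
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)
    return np.log(pi), np.log(A), np.log(B)

hmms = [make_hmm(0), make_hmm(1)]
seqs = [np.array([0, 1, 1, 2, 3]), np.array([3, 3, 2, 0, 0, 1])]
print(similarity_space(seqs, hmms))   # rows = sequences, cols = reference models
```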
International Conference on Pervasive Computing | 2009
Mohamed A. ElHelw; Julien Pansiot; Douglas G. McIlwraith; Raza Ali; Benny Lo; Louis Atallah
Pervasive healthcare provides an effective solution for monitoring the well-being of the elderly, quantifying post-operative patient recovery, and monitoring the progression of neurodegenerative diseases such as Parkinson's disease. However, developing functional pervasive systems is a complex task that entails the creation of appropriate sensing platforms, the integration of versatile technologies for data stream management, and the development of elaborate data analysis techniques. This paper describes a complete and integrated multi-sensing framework, presenting the sensing platforms, data fusion and analysis algorithms, and software architecture suitable for pervasive healthcare applications. The potential value of the proposed framework for pervasive patient monitoring is demonstrated, and initial results from our current research are described.
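One small but representative piece of such a framework is aligning asynchronous wearable and ambient readings onto a common time base before fusion and analysis. The sketch below shows one generic way to do this; the field names, sensor sources, and one-second grid are illustrative assumptions rather than the framework's actual interfaces.

```python
# Minimal sketch: resample time-sorted sensor streams onto a shared grid so that
# wearable and ambient readings can be fused row by row. Fields/sources are assumed.
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class Reading:
    timestamp: float   # seconds since epoch
    source: str        # e.g. "wearable_accel", "ambient_blob"
    value: float

def fuse(streams, step=1.0):
    """For each grid time, take the nearest earlier reading from every stream
    (or the stream's first reading if none precedes that time)."""
    start = min(r.timestamp for s in streams for r in s)
    end = max(r.timestamp for s in streams for r in s)
    grid = [start + i * step for i in range(int((end - start) / step) + 1)]
    fused = []
    for t in grid:
        row = {"t": t}
        for s in streams:
            ts = [r.timestamp for r in s]
            i = bisect_left(ts, t)
            r = s[max(i - 1, 0)] if (i == len(s) or ts[i] > t) else s[i]
            row[r.source] = r.value
        fused.append(row)
    return fused

wearable = [Reading(0.0, "wearable_accel", 0.1), Reading(2.3, "wearable_accel", 0.4)]
ambient = [Reading(1.1, "ambient_blob", 3.0)]
print(fuse([wearable, ambient]))
```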
IEEE International Conference on Information Technology and Applications in Biomedicine | 2008
Raza Ali; Mohamed A. ElHelw; Louis Atallah; Benny Lo; Guang-Zhong Yang
Pervasive sensing is set to transform the future of patient care by continuous and intelligent monitoring of patient well-being. In practice, the detection of patient activity patterns over different time resolutions can be a complicated procedure, entailing the utilisation of multi-tier software architectures and processing of large volumes of data. This paper describes a scalable, distributed software architecture that is suitable for managing continuous activity data streams generated from body sensor networks. A novel pattern mining algorithm is applied to pervasive sensing data to obtain a concise, variable-resolution representation of frequent activity patterns over time. The identification of such frequent patterns enables the observation of the inherent structure present in a patient's daily activity for analyzing routine behaviour and its deviations.
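The paper's pattern-mining algorithm is not reproduced here, but the sketch below conveys the variable-resolution idea in a generic form: an activity-label stream is downsampled into coarser time bins and frequent n-grams are counted at each resolution. The activity labels, window sizes, and support threshold are assumptions.

```python
# Minimal sketch: frequent n-gram patterns in an activity stream at several
# temporal resolutions. A generic stand-in, not the paper's mining algorithm.
from collections import Counter

def downsample(labels, factor):
    """Coarsen the stream: each bin of `factor` samples keeps its majority label."""
    return [Counter(labels[i:i + factor]).most_common(1)[0][0]
            for i in range(0, len(labels), factor)]

def frequent_patterns(labels, n=3, min_support=2):
    """Return n-grams occurring at least `min_support` times."""
    counts = Counter(tuple(labels[i:i + n]) for i in range(len(labels) - n + 1))
    return {p: c for p, c in counts.items() if c >= min_support}

stream = ["sleep"] * 8 + ["walk", "sit", "walk", "sit", "walk", "sit"] + ["sleep"] * 8
for factor in (1, 2, 4):                       # finer to coarser resolution
    coarse = downsample(stream, factor)
    print(factor, frequent_patterns(coarse))
```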
IEEE International Conference on Information Visualization | 2003
Danail Stoyanov; Mohamed A. ElHelw; Benny Lo; Adrian James Chung; Fernando Bello; Guang-Zhong Yang
In surgery, virtual and augmented reality are increasingly being used as new ways of training, preoperative planning, diagnosis and surgical navigation. Further development of virtual and augmented reality in medicine is moving towards photorealistic rendering and patient-specific modeling, permitting high-fidelity visual examination and user interaction. This coincides with current developments in computer vision and graphics, where image information is used directly to render novel views of a scene. These techniques require extensive use of geometric information about the scene. This paper provides a comprehensive review of the underlying techniques required for building patient-specific models with photorealistic rendering, and highlights some of the opportunities that image-based modeling and rendering techniques can offer in the context of minimally invasive surgery.
Medical Image Computing and Computer-Assisted Intervention | 2004
Mohamed A. ElHelw; Benny Lo; Adrian James Chung; Ara Darzi; Guang-Zhong Yang
With the increasing use of computer-based simulation for training and skills assessment, growing effort is being directed towards enhancing the visual realism of the simulation environment. Image-based modelling and rendering is a promising technique in that it attains photorealistic visual feedback while maintaining interactive response. The purpose of this paper is to extend an existing technique to simulate tissues with extensive deformation. We demonstrate that by incorporating multiple virtual cameras, a geometric proxy, and viewing projection manifolds, interactive tissue-instrument interaction can be achieved while providing photorealistic rendering. The detailed steps involved in the algorithm are introduced, and a quantitative error analysis is provided to assess the accuracy of the technique in terms of projection error through 3D image warping. Results from phantom and real laparoscope simulations demonstrate the potential clinical value of the technique.
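The sketch below illustrates the kind of projection-error analysis referred to above: pixels are warped into a second view using their true depth and again using a flat geometric proxy, and the per-pixel difference between the two warped positions measures the error the proxy introduces. The intrinsics, camera motion, and depth values are illustrative assumptions.

```python
# Minimal sketch of 3D image warping with a pinhole camera and a projection-error
# check against a flat geometric proxy. All camera parameters are assumed values.
import numpy as np

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])  # intrinsics
R = np.eye(3)                                    # relative rotation between the two views
t = np.array([0.005, 0.0, 0.0])                  # small translation (metres)

def warp(pixels, depths):
    """Warp pixel coords (N x 2) with per-pixel depth into the second view."""
    ones = np.ones((len(pixels), 1))
    rays = (np.linalg.inv(K) @ np.hstack([pixels, ones]).T).T   # back-project to unit-depth rays
    points = rays * depths[:, None]                             # 3D points in the first camera
    proj = (K @ (R @ points.T + t[:, None])).T                  # project into the second camera
    return proj[:, :2] / proj[:, 2:3]

pix = np.array([[100.0, 120.0], [400.0, 300.0], [520.0, 260.0]])
depth = np.array([0.08, 0.12, 0.10])             # e.g. laparoscopic working distances (metres)

proxy_depth = np.full_like(depth, depth.mean())  # flat geometric proxy instead of true depth
error = np.linalg.norm(warp(pix, depth) - warp(pix, proxy_depth), axis=1)
print(error)                                     # per-pixel projection error in pixels
```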
International Workshop on Medical Imaging and Virtual Reality | 2004
Mohamed A. ElHelw; Benny Lo; Ara Darzi; Guang-Zhong Yang
Computer-based surgical simulations are increasingly used for training and skills assessment. They provide an efficient and cost-effective alternative to traditional training methods. To allow for both basic and advanced skills assessment, adequate perceptual fidelity is essential for capturing the natural behavior of the operator. The level of realism in terms of object and scene appearance determines the faithfulness, and hence the degree of immersion, experienced by the trainee in the virtual world. This paper presents a novel photorealistic rendering approach based on real-time per-pixel effects using graphics hardware. Improved realism is achieved by the combined use of specular reflectance and refractance maps to model the effect of surface details and the mucous layer on the overall visual appearance of the tissue. The key steps involved in the proposed technique are described, and quantitative performance assessment results demonstrate its practical advantages.
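As a rough sketch of the per-pixel idea, the NumPy fragment below modulates a Blinn-Phong specular term with a reflectance map so that surface detail varies the highlight across a tissue tile. In the paper this runs as a fragment program on graphics hardware; the maps, light setup, and constants here are illustrative assumptions, and the refractance-map term is omitted.

```python
# Minimal per-pixel shading sketch: a specular reflectance map modulating a
# Blinn-Phong highlight over a flat tile. Illustrative values throughout.
import numpy as np

H, W = 64, 64
normals = np.dstack([np.zeros((H, W)), np.zeros((H, W)), np.ones((H, W))])   # flat surface normals
spec_map = np.random.default_rng(0).uniform(0.2, 1.0, (H, W))                # specular reflectance map

light = np.array([0.0, 0.5, 1.0]); light /= np.linalg.norm(light)            # light direction
view = np.array([0.0, 0.0, 1.0])                                             # viewer direction
half = light + view; half /= np.linalg.norm(half)                            # Blinn half-vector

n_dot_h = np.clip((normals * half).sum(axis=2), 0.0, 1.0)
shininess = 60.0                                      # tighter exponent = wetter-looking highlight
specular = spec_map * n_dot_h ** shininess            # per-pixel highlight intensity
diffuse = 0.6 * np.clip((normals * light).sum(axis=2), 0.0, 1.0)
image = np.clip(diffuse + specular, 0.0, 1.0)         # greyscale shaded tile, values in [0, 1]
```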
Medical Image Computing and Computer-Assisted Intervention | 2003
Mohamed A. ElHelw; Adrian James Chung; Ara Darzi; Guang-Zhong Yang
This paper describes a novel method for simulating soft tissue deformation with image-based rendering. It is based on associating a depth map with the texture image and incorporating micro-surface details to generate photorealistic images of soft tissue deformation. In a pre-processing step, the depth map describing the surface is separated into two distributions corresponding to macro- and micro-surface details. During interactive simulation, deformations resulting from tissue-instrument interaction are rapidly calculated by modifying a coarse mass-spring model fitted to the macro-surface structure. Micro-surface details are subsequently re-applied to the modified model with 3D image warping. The proposed technique drastically reduces the polygon count required to model the scene whilst preserving deformed small surface details, offering a high level of photorealism.
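The sketch below illustrates the pre-processing decomposition in a minimal form: a low-pass filter extracts the macro-surface to which a coarse deformable model would be fitted, and the residual holds the micro-surface detail that is re-applied after deformation. The synthetic depth map, Gaussian kernel sizes, and the Gaussian-bump "deformation" are assumptions standing in for the mass-spring model and 3D image warping.

```python
# Minimal sketch: split a depth map into macro (low-pass) and micro (residual)
# components, deform the macro part, then re-apply the micro detail.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
depth = gaussian_filter(rng.normal(50.0, 5.0, (128, 128)), sigma=8)   # smooth synthetic base surface
depth += rng.normal(0.0, 0.3, depth.shape)                            # add fine surface detail

macro = gaussian_filter(depth, sigma=6)     # macro-surface: drives the coarse deformable model
micro = depth - macro                       # micro-surface detail: re-applied after deformation

# Stand-in for the mass-spring deformation: a localized bump on the macro surface.
yy, xx = np.indices(depth.shape)
bump = 2.0 * np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 400.0)
deformed_depth = (macro + bump) + micro     # small surface details survive the deformation
```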
Medical Image Computing and Computer-Assisted Intervention | 2005
Mohamed A. ElHelw; M. Stella Atkins; Marios Nicolaou; Adrian James Chung; Guang-Zhong Yang
Computer-based simulation is an important tool for surgical skills training and assessment. In general, the degree of realism experienced by trainees is determined by the visual and biomechanical fidelity of the simulator. In minimally invasive surgery, specular reflections provide an important visual cue for tissue deformation, depth and orientation. This paper describes a novel image-based lighting technique that is particularly suitable for modeling mucous-covered tissue surfaces. We describe how noise functions can be used to control the shape of specular highlights, and how texture noise is generated and encoded in an image-based structure at a pre-processing stage. The proposed technique can be implemented at run-time on the graphics processor to efficiently attain pixel-level control and photorealism. The practical value of the technique is assessed with detailed visual scoring and cross-comparison experiments by two groups of observers.
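The sketch below conveys the noise-modulation idea in a minimal form: a pre-computed smooth noise texture perturbs the specular exponent per pixel so the highlight shape breaks up locally. The value-noise construction and all constants are illustrative assumptions; the paper encodes its noise in an image-based structure and evaluates it on the GPU.

```python
# Minimal sketch: noise-perturbed specular exponent breaking up a highlight,
# computed in NumPy rather than in a fragment shader. Values are illustrative.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(1)
noise = zoom(rng.uniform(0.0, 1.0, (8, 8)), 16, order=3)        # smooth 128x128 value-noise texture
noise = (noise - noise.min()) / (noise.max() - noise.min())     # normalise to [0, 1]

# A simple highlight profile (n.h falling off across the tile) with a noise-perturbed exponent.
n_dot_h = np.tile(np.linspace(1.0, 0.9, 128), (128, 1))
shininess = 30.0 + 60.0 * noise                                 # noise locally sharpens or softens
highlight = n_dot_h ** shininess                                # per-pixel specular term
```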