Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Karen McMenemy is active.

Publication


Featured research published by Karen McMenemy.


international conference on document analysis and recognition | 2009

Robust Extraction of Text from Camera Images

Shyama Prosad Chowdhury; Soumyadeep Dhar; Amit Kumar Das; Bhabatosh Chanda; Karen McMenemy

Text within a camera-grabbed image can contain a huge amount of metadata about that scene. Such metadata can be useful for identification, indexing and retrieval purposes. Detection of colored scene text is a new challenge for all camera-based images. Common problems for text extraction from camera-based images are the lack of prior knowledge of any kind of text features such as color, font, size and orientation. In this paper we propose a new algorithm for the extraction of text from an image which can overcome these problems. In addition, problems due to an unconstrained complex background in the scene have also been addressed. Here a new technique is applied to determine the discrete edges around the text boundaries. A novel methodology is also proposed to extract the text, exploiting its appearance in terms of color and spatial distribution.
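
The abstract does not give the edge and colour/spatial-distribution technique in detail. As a rough illustration of the general idea of flagging candidate text regions from edge structure, the Python/OpenCV sketch below finds dense edge clusters and keeps plausibly text-shaped components; the thresholds, kernel size and aspect-ratio filter are invented assumptions, not the authors' method.

```python
import cv2
import numpy as np

def candidate_text_boxes(image_bgr, min_area=50, max_aspect=15.0):
    """Rough sketch: find dense edge clusters and keep plausibly
    text-shaped connected components. Not the paper's algorithm."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    # Close small gaps so characters in a word merge into one component.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    boxes = []
    for i in range(1, n):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area and max(w, h) / max(min(w, h), 1) <= max_aspect:
            boxes.append((x, y, w, h))
    return boxes
```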


Pattern Recognition | 2008

Epipolar geometry estimation based on evolutionary agents

Mingxing Hu; Karen McMenemy; Stuart Ferguson; Gordon Dodds; Baozong Yuan

This paper presents a novel approach to epipolar geometry estimation based on evolutionary agents. In contrast to conventional nonlinear optimization methods, the proposed technique employs each agent to denote a minimal subset from which the fundamental matrix is computed, and treats the set of correspondences as a 1D cellular environment which the agents inhabit and in which they evolve. The agents execute evolutionary behaviour and evolve autonomously in a vast solution space to reach an optimal (or near-optimal) result. Three further techniques are then proposed to improve the search ability and computational efficiency of the original agents. A subset template enables agents to collaborate more efficiently with each other and to inherit accurate information from the whole agent set. The competitive evolutionary agent (CEA) and finite multiple evolutionary agent (FMEA) apply better evolutionary strategies or decision rules, each focusing on a different aspect of the evolutionary process. Experimental results with both synthetic data and real images show that the proposed agent-based approaches outperform other typical methods in terms of accuracy and speed, and are more robust to noise and outliers.
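
The abstract does not spell out the CEA and FMEA strategies, so the sketch below only illustrates the underlying idea: each agent carries a minimal subset of correspondences, scores the fundamental matrix it implies against the full correspondence set, and keeps mutations that do not reduce its fitness. The 8-point estimator, the Sampson-error fitness, the inlier threshold and the simple mutate-and-keep rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def estimate_F(pts1, pts2):
    """8-point estimate of the fundamental matrix from a set of correspondences."""
    x1, y1 = pts1[:, 0], pts1[:, 1]
    x2, y2 = pts2[:, 0], pts2[:, 1]
    A = np.column_stack([x2 * x1, x2 * y1, x2, y2 * x1, y2 * y1, y2,
                         x1, y1, np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, V2t = np.linalg.svd(F)
    S[2] = 0.0                               # enforce the rank-2 constraint
    return U @ np.diag(S) @ V2t

def sampson_error(F, pts1, pts2):
    """Per-correspondence Sampson distance, used as the agents' fitness signal."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])             # homogeneous image coordinates
    x2 = np.hstack([pts2, ones])
    Fx1 = x1 @ F.T                           # row i is F @ x1_i
    Ftx2 = x2 @ F                            # row i is F.T @ x2_i
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / (den + 1e-12)

def agent_search(pts1, pts2, n_agents=30, generations=50, inlier_thresh=1.0, seed=0):
    """Each agent holds a minimal 8-correspondence subset; per generation it
    mutates one index and keeps the change only if its inlier count does not drop."""
    rng = np.random.default_rng(seed)
    n = len(pts1)
    agents = [rng.choice(n, size=8, replace=False) for _ in range(n_agents)]

    def score(idx):
        F = estimate_F(pts1[idx], pts2[idx])
        return int(np.sum(sampson_error(F, pts1, pts2) < inlier_thresh)), F

    best_F, best_score = None, -1
    for _ in range(generations):
        for a in range(n_agents):
            s_cur, F_cur = score(agents[a])
            mutated = agents[a].copy()
            mutated[rng.integers(8)] = rng.integers(n)   # swap one correspondence
            s_new, F_new = score(mutated)
            if s_new >= s_cur:
                agents[a], s_cur, F_cur = mutated, s_new, F_new
            if s_cur > best_score:
                best_score, best_F = s_cur, F_cur
    return best_F, best_score
```

In the paper the agents additionally share information through the subset template and follow the CEA/FMEA strategies; none of that is modelled in this sketch.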


International Journal of Electrical Engineering Education | 2009

Enhancing the Teaching of Professional Practice and Key Skills in Engineering through the Use of Computer Animation

Karen McMenemy; Stuart Ferguson

This paper presents some observations on how computer animation was used in the early years of a degree program in Electrical and Electronic Engineering to enhance the teaching of key skills and professional practice. It reports the results of two case studies: the first from a first-year course which seeks to teach students how to manage and report on group projects in a professional way, and the second from a technical course on virtual reality, where the students are asked to use computer animation in a way that subliminally coerces them to come to terms with the fine detail of the mathematical principles that underlie 3D graphics and geometry, as well as the most significant principles of computer architecture and software engineering. In addition, the findings reveal that including a significant element of self- and peer-review in the assessment procedure made students more engaged with the course and led to a deeper comprehension of its material.


Leukos | 2005

Glare, Luminance, and Illuminance Measurements of Road Lighting Using Vehicle Mounted CCD Cameras

Ashraf Zatari; Gordon Dodds; Karen McMenemy; Richard Robinson

The assessment of well-designed road-lighting systems is necessary since their performance can be critically reduced by incorrect installation. During the operational life of a system, it is also necessary to assess the effects of deterioration in the luminaire fittings, changes in road surface or surroundings and changing user needs. An automated vehicle-mounted system constitutes the most practical technical solution to carry out this task. Previous research produced image acquisition and analysis systems for measuring luminance and uniformity levels of road lighting (Glenn 2000). This paper builds on this work and describes the methods employed to assess the combined parameters of luminance, illuminance and glare. A description of the system components is given, including the CCD digital video cameras, which are mounted on the test vehicle. The cameras are pre-calibrated to estimate relationships between the gray value of light images and the lighting parameters (luminance and illuminance). Appropriate infra-red and neutral density filters are employed to control the wavelength and limit the light entering the cameras. Differential GPS, 3D orientation sensors and image flow analysis are employed to accurately estimate the position of the vehicle. Automated image analysis methods are further developed to speed up the position and image analysis process. Multidirectional measurements of light output are achieved using multiple journeys and multiple cameras on the same road segment, which provide data on different observation lines. Interpolation techniques are employed to estimate the complete profile and produce isolux contours. Results produced so far indicate that lighting parameters can be measured accurately, provided that accurate 3D information of luminaires and road layout is available. Further work is reducing the dependency of the system on this a priori data and using only readily available utility data.
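
As a toy illustration of the pre-calibration step that relates camera grey values to a lighting parameter, a linear response model can be fitted to readings taken against a reference source measured with a photometer. The numbers below are invented, and the real system additionally accounts for filters, exposure and non-linear camera response.

```python
import numpy as np

# Hypothetical calibration data: grey values recorded while imaging a
# reference source whose luminance (cd/m^2) is known from a photometer.
grey_values = np.array([18.0, 41.0, 87.0, 130.0, 176.0, 221.0])
luminance   = np.array([0.9,  2.1,  4.4,  6.6,   8.9,   11.2])

# Fit a simple linear response model L ~ a*g + b over the working range.
a, b = np.polyfit(grey_values, luminance, deg=1)

def grey_to_luminance(g):
    """Map a grey value (or array of grey values) to estimated luminance."""
    return a * np.asarray(g) + b

print(grey_to_luminance(100))   # estimated luminance for a grey value of 100
```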


electronic imaging | 2003

Calibration and use of video cameras in the photometric assessment of aerodrome ground lighting

Karen McMenemy; Francis Mullin; Gordon Dodds

This paper describes the adaptation and calibration of domestic CCD cameras for a novel air-based measurement system that assesses the intensity, alignment and colour of aerodrome ground lighting (AGL) in service. The measurement system comprises calibrated domestic cameras, lenses and filters capable of examining the desired lighting area. The system has been corrected for distortion, bias, partial pixel coverage and colouration effects. The problems of inbuilt camera signal processing and dynamic range have also been studied and allowed for. The developed image processing techniques allow luminaire location and extraction between successive images. For each extracted luminaire, pixel information is automatically correlated and related to an illuminance value using a priori information. The corresponding intensity and alignment are derived using position and orientation information estimated using a vision model and differential GPS. These techniques have been applied to sequences of images collected at various aerodromes. The results reflect the belief that the highly developed enabling technologies of GPS and digital imaging can be combined to tackle further photometry problems.
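
Once an illuminance value has been recovered for a luminaire, a standard point-source photometric relation links it to luminous intensity given the camera-to-luminaire distance and the angle of incidence. The helper below is only that textbook relation with invented example values, not the paper's position- and orientation-aware pipeline.

```python
import numpy as np

def intensity_from_illuminance(E_lux, distance_m, incidence_angle_rad=0.0):
    """Point-source inverse-square relation E = I * cos(theta) / d^2,
    rearranged to estimate luminous intensity I (candela) from measured
    illuminance E (lux) at a known distance and angle of incidence."""
    return E_lux * distance_m**2 / np.cos(incidence_angle_rad)

# e.g. 0.05 lux measured 300 m from a luminaire, light arriving head-on:
print(intensity_from_illuminance(0.05, 300.0))   # ~4500 cd
```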


electronic imaging | 2006

Adding tactile realism to a virtual reality laparoscopic surgical simulator with a cost-effective human interface device

Ian Mack; S. Potts; Karen McMenemy; R. Stuart Ferguson

The laparoscopic technique for performing abdominal surgery requires a very high degree of skill in the medical practitioner. Much interest has been focused on using computer graphics to provide simulators for training surgeons. Unfortunately, these tend to be complex and have a very high cost, which limits availability and restricts the length of time over which individuals can practice their skills. With computer game technology able to provide the graphics required for a surgical simulator, the cost does not have to be high. However, graphics alone cannot serve as a training simulator. Human interface hardware, the equivalent of the force feedback joystick for a flight simulator game, is required to complete the system. This paper presents a design for a very low cost device to address this vital issue. The design encompasses: the mechanical construction, the electronic interfaces and the software protocols to mimic a laparoscopic surgical set-up. Thus the surgeon has the capability of practicing two-handed procedures with the possibility of force feedback. The force feedback and collision detection algorithms allow surgeons to practice realistic operating theatre procedures with a good degree of authenticity.
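
The abstract leaves the force feedback and collision detection algorithms unspecified; the fragment below sketches a common penalty-based haptic rendering rule, pushing back along the contact normal in proportion to penetration depth against a spherical obstacle. The geometry and the gain are illustrative assumptions, not the simulator's code.

```python
import numpy as np

def contact_force(tool_tip, sphere_centre, sphere_radius, k_contact=300.0):
    """Penalty-style contact force: if the tool tip penetrates a spherical
    obstacle, push back along the surface normal in proportion to the
    penetration depth."""
    offset = tool_tip - sphere_centre
    dist = np.linalg.norm(offset)
    penetration = sphere_radius - dist
    if penetration <= 0.0 or dist == 0.0:
        return np.zeros(3)                     # no contact, no feedback force
    normal = offset / dist                     # outward surface normal
    return k_contact * penetration * normal

# Tool tip 0.1 units inside a unit sphere -> 30 N push along +z.
print(contact_force(np.array([0.0, 0.0, 0.9]), np.zeros(3), 1.0))
```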


ieee international workshop on haptic audio visual environments and games | 2009

Interactive force sensing feedback system for remote robotic laparoscopic surgery

Ian Mack; Stuart Ferguson; Karen McMenemy; S. Potts; Alistair Dick

This paper presents hardware and software systems which have been developed to provide haptic feedback for teleoperated laparoscopic surgical robots. Surgical instruments incorporating quantum tunnelling composite force measuring sensors have been developed and mounted on a pair of Mitsubishi PA-10 industrial robots. Feedback forces are rendered on pseudo-surgical instruments based on a pair of PHANTOM Omnis, which are also used to remotely manipulate the robotic arms. The paper describes the measurement of forces applied to surgical instruments during a teleoperated procedure, in order to provide a haptic feedback channel. This force feedback channel is combined with a visual feedback channel to enable a surgeon to better perform a two-handed surgical procedure on a remote patient by more accurately controlling a pair of robot arms via a computer network.


computer analysis of images and patterns | 2009

Performance Evaluation of Airport Lighting Using Mobile Camera Techniques

Shyama Prosad Chowdhury; Karen McMenemy; Jian Xun Peng

This paper describes the use of mobile camera technology to assess the performance of Aerodrome Ground Lighting (AGL). Cameras are placed inside the cockpit of an aircraft and used to record images of the AGL during an approach to an airport. Subsequent image analysis, using the techniques proposed in this paper, will allow a performance metric to be determined for the lighting. This can be used to inform regulators whether the AGL is performing to standards, and it will also provide useful information towards the maintenance strategy for the airport. Since the cameras that are used to collect the images are mounted on a moving and vibrating platform (the plane), some image data may be affected by vibration. In the paper we illustrate techniques by which to quantify and remove the effects of vibration, and illustrate how the image data can be used to derive a performance metric for the complete AGL.
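
As a generic stand-in for the vibration-compensation step, frame-to-frame camera motion can be estimated with phase correlation; the sketch below recovers an integer pixel shift between two greyscale frames. It illustrates the kind of measurement involved, not the techniques actually proposed in the paper.

```python
import numpy as np

def frame_shift(frame_a, frame_b):
    """Estimate the integer (dy, dx) translation of frame_b relative to
    frame_a by phase correlation."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross_power = Fb * np.conj(Fa)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only phase information
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame into negative displacements.
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return dy, dx

# Quick self-check with a synthetic frame shifted by (3, -5) pixels.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(frame_shift(a, b))   # expected: (3, -5)
```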


Transactions of the Institute of Measurement and Control | 2006

Objective Measurement of the Quality of Airport Lighting whilst In Service

Karen McMenemy; Gordon Dodds

This paper contributes to the science of measurement and control by outlining a unique methodology for objectively assessing the luminous intensity of all luminaires forming an airport landing lighting pattern, whilst in service, to check that the lighting pattern conforms to the strict standards set by the UK's governing body for airports, the Civil Aviation Authority (CAA). Central to this methodology is a novel air-based measurement system consisting of charge-coupled device (CCD) cameras and dedicated image-processing software. This prototype measurement system is placed inside the cockpit of an aircraft and is used to take images of the airport landing lighting system whilst the aircraft approaches the airport. Developed image-processing techniques then allow unique luminaire identification and extraction from the hundreds of luminaires within the airport lighting pattern. The corresponding luminous intensity and alignment of each luminaire is derived using dynamic position and orientation information estimated from a visual environment model. By obtaining luminous intensity and alignment information for each luminaire within the pattern, it is then possible to derive its associated isocandela diagram. This then allows an assessment to be made regarding the luminaires' conformity to the CAA standards. The unique system has been tested at Belfast International Airport, the results of which are documented within this paper. The results indicate that further work is necessary if the system is to be fully functional. Currently it takes approximately 35 h to filter the results for one complete pattern. In addition, a maximum error of 40% is reported between derived and actual luminous intensity measurements for a given luminaire. The reasons for this discrepancy are highlighted in the discussion and improvements are currently being made to the system to account for these. However, the results and the work reported within this paper reflect the belief that digital imaging can be utilized to automate many physically large photometry problems within the fields of science and engineering. The methodology described for the development of this measurement system is unique and we believe extends the science of automated dynamic measurement systems.


Transactions of the Institute of Measurement and Control | 2011

A heuristic model for the simulation of the deformation of elastic and spongy material for virtual reality applications

K. McKenna; Karen McMenemy; Stuart Ferguson

This paper presents a practical algorithm for the simulation of interactive deformation in a 3D polygonal mesh model. The algorithm combines the conventional simulation of deformation using a spring-mass-damping model, solved by explicit numerical integration, with a set of heuristics describing certain features of the transient behaviour, to increase the speed and stability of the solution. In particular, the algorithm was designed for the simulation of synthetic environments where it is necessary to model realistically, in real time, the effect on non-rigid surfaces of being touched, pushed, pulled or squashed. Such objects can be solid or hollow, and have plastic, elastic or fabric-like properties. The algorithm is presented in an integrated form, including collision detection and adaptive refinement, so that it may be used in a self-contained way as part of a simulation loop together with human interface devices that capture data and render a realistic stereoscopic image in real time. The algorithm is designed to be used with polygonal mesh models representing complex topology, such as the human anatomy in a virtual-surgery training simulator. The paper evaluates the model behaviour qualitatively and then concludes with some examples of the use of the algorithm.
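
The heuristic layer is specific to the paper, but its conventional core, a spring-mass-damping model advanced by explicit integration, can be sketched on a simple strand of point masses. The parameters, the pinned end and the pulled node below are illustrative choices only.

```python
import numpy as np

# Minimal explicit (symplectic Euler) sketch of a spring-mass-damper strand,
# a 1D stand-in for the polygonal-mesh case; parameters are illustrative.
n         = 20        # number of point masses
mass      = 0.05      # kg per node
stiffness = 80.0      # spring constant (N/m)
damping   = 0.4       # velocity damping coefficient
rest_len  = 0.05      # rest length between neighbouring nodes (m)
dt        = 1e-3      # integration time step (s)

pos = np.column_stack([np.arange(n) * rest_len, np.zeros(n), np.zeros(n)])
vel = np.zeros_like(pos)

def step(pos, vel, pulled_node=None, pull_force=np.zeros(3)):
    """Advance the system one explicit integration step."""
    force = -damping * vel
    # Structural springs between neighbouring nodes.
    d = pos[1:] - pos[:-1]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    spring = stiffness * (length - rest_len) * d / np.maximum(length, 1e-9)
    force[:-1] += spring
    force[1:]  -= spring
    if pulled_node is not None:          # external interaction, e.g. a tool pull
        force[pulled_node] += pull_force
    vel_new = vel + dt * force / mass
    pos_new = pos + dt * vel_new         # position update uses the new velocity
    pos_new[0] = pos[0]                  # pin the first node in place
    vel_new[0] = 0.0
    return pos_new, vel_new

for _ in range(1000):                    # pull the last node sideways for 1 s
    pos, vel = step(pos, vel, pulled_node=n - 1,
                    pull_force=np.array([0.0, 0.2, 0.0]))
print(pos[-1])                           # displaced position of the pulled node
```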

Collaboration


Dive into Karen McMenemy's collaborations.

Top Co-Authors

Stuart Ferguson, Queen's University Belfast
James Niblock, Queen's University Belfast
Gordon Dodds, Queen's University Belfast
S. Potts, Royal College of Surgeons in Ireland
George W. Irwin, Queen's University Belfast
Jian-Xun Peng, Queen's University Belfast
Ian Mack, Queen's University Belfast
Alistair Dick, Royal Belfast Hospital for Sick Children
Ashraf Zatari, Queen's University Belfast