Martyn Fletcher
University of York
Publications
Featured research published by Martyn Fletcher.
Proceedings of the IEEE | 2005
Jim Austin; Robert I. Davis; Martyn Fletcher; Thomas W. Jackson; Mark Jessop; Bojian Liang; Andy Pasley
The use of search engines within the Internet is now ubiquitous. This work examines how Grid technology may affect the implementation of search engines by focusing on the Signal Data Explorer application developed within the Distributed Aircraft Maintenance Environment (DAME) project. This application uses advanced neural-network-based methods (Advanced Uncertain Reasoning Architecture (AURA) technology) to search for matching patterns in time-series vibration data originating from Rolls-Royce aeroengines (jet engines). The large volume of data associated with the problem required the development of a distributed search engine, where data is held at a number of geographically disparate locations. The paper gives a brief overview of the DAME project, the pattern matching problem, and the architecture. It also describes the Signal Data Explorer application and provides an overview of the underlying search engine technology and its use in the aeroengine health-monitoring domain.
Neural Networks | 2008
Martyn Fletcher; Bojian Liang; Leslie S. Smith; Alastair Knowles; Thomas W. Jackson; Mark Jessop; Jim Austin
In the study of information flow in the nervous system, component processes can be investigated using a range of electrophysiological and imaging techniques. Although such data is difficult and expensive to produce, it is rarely shared and collaboratively exploited. The Code Analysis, Repository and Modelling for e-Neuroscience (CARMEN) project addresses this challenge through the provision of a virtual neuroscience laboratory: an infrastructure for sharing data, tools and services. Central to the CARMEN concept are federated CARMEN nodes, which provide data and metadata storage, together with new, third-party and legacy services and tools. In this paper, we describe the CARMEN project as well as the node infrastructure and an associated thick client tool for pattern visualisation and searching, the Signal Data Explorer (SDE). We also discuss new spike detection methods, which are central to the services provided by CARMEN. The SDE is a client application which can be used to explore data in the CARMEN repository, providing data visualization, signal processing and a pattern matching capability. It performs extremely fast pattern matching and can be used to search for complex conditions composed of many different patterns across the large datasets that are typical in neuroinformatics. Searches can also be constrained by specifying text-based metadata filters. Spike detection services which use wavelet and morphology techniques are discussed, and have been shown to outperform traditional thresholding and template-based systems. A number of different spike detection and sorting techniques will be deployed as services within the CARMEN infrastructure, to allow users to benchmark their performance against a wide range of reference datasets.
The Grid 2: Blueprint for a New Computing Infrastructure (2nd ed.) | 2004
Jim Austin; Thomas W. Jackson; Martyn Fletcher; Mark Jessop; Peter Cowley; Peter Lobner
This chapter discusses the application of Grid technologies to the challenging and broadly important problem of computer-based fault diagnosis and prognosis (DP). It describes a UK e-Science project that is applying Grid technologies to the problems of diagnosing faults in Rolls-Royce aircraft engines based on sensor data recorded during flight. Fault diagnosis is fundamentally based on the monitoring and analysis of sensor data through the application of declarative and procedural knowledge. Data from sensors must be captured, often in real time, and made available to analysis systems, either remote or local. Root cause determination and prognosis may require integrating data from several different systems to build a pattern or a case that is reusable in subsequent diagnoses. To extend the capabilities of fault DP systems, it is also beneficial to archive data such that an operational log of system performance or fault conditions can be maintained.
Software - Practice and Experience | 2005
Howard Chivers; Martyn Fletcher
Risk analysis is the only effective way of making value judgments about the need for security. Established analysis methods apply to whole operational systems, taking a necessarily holistic view of security, but this makes them difficult to integrate into the design process for service-based applications, where design and implementation are independent of operational deployment. However, the most costly mistakes occur early in the development lifecycle, and effective security can be difficult to retrofit, motivating the need for early security analysis. This paper describes SeDAn (Security Design Analysis), a security risk analysis framework that is adapted for use in the design phase of service-based systems, and its application to a significant Grid-based project (Distributed Aircraft Maintenance Environment, DAME). The complete lifecycle of the risk analysis is described, and the effectiveness of the process in identifying design defects validates both the need for, and the effectiveness of, this type of analysis.
international conference on conceptual structures | 2011
Jim Austin; Thomas W. Jackson; Martyn Fletcher; Mark Jessop; Bojian Liang; Mike Weeks; Leslie S. Smith; Colin Ingram; Paul Watson
The CARMEN (Code, Analysis, Repository and Modelling for e-Neuroscience) system [1] provides a web-based portal platform through which users can share and collaboratively exploit data, analysis code and expertise in neuroscience. The system has been developed in the UK and currently supports 200 neuroscientists working in a Virtual Environment with an initial focus on electrophysiology data. The proposal here is that the CARMEN system provides an excellent base from which to develop an 'executable paper' system. CARMEN has been built by York and Newcastle Universities and is based on over 10 years' experience in the construction of eScience-based distributed technology. CARMEN started four years ago involving 20 scientific investigators (neuroscientists and computer scientists) at 11 UK universities (www.CARMEN.org.uk). The project is supported for another 4 years at York and Newcastle, along with a sister project to take the underlying technology and pilot it as a UK platform for supporting the sharing of research outputs in a generic way. An entirely natural extension to the CARMEN system would be its alignment with a publications repository. The CARMEN system is operational at https://portal.CARMEN.org.uk, where it is possible to request a login to try out the system.
Philosophical Transactions of the Royal Society A | 2012
Michael Weeks; Mark Jessop; Martyn Fletcher; Victoria J. Hodge; Thomas W. Jackson; Jim Austin
The CARMEN platform allows neuroscientists to share data, metadata, services and workflows, and to execute these services and workflows remotely via a Web portal. This paper describes how we implemented a service-based infrastructure into the CARMEN Virtual Laboratory. A Software as a Service framework was developed to allow generic new and legacy code to be deployed as services on a heterogeneous execution framework. Users can submit analysis code, typically written in Matlab, Python, C/C++ or R, as non-interactive standalone command-line applications and wrap them as services in a form suitable for deployment on the platform. The CARMEN Service Builder tool enables neuroscientists to quickly wrap their analysis software for deployment to the CARMEN platform as a service, without knowledge of the service framework or the CARMEN system. A metadata schema describes each service in terms of both system and user requirements. The search functionality allows services to be quickly discovered from the many services available. Within the platform, services may be combined into more complicated analyses using the workflow tool. CARMEN and the service infrastructure are targeted towards the neuroscience community; however, it is a generic platform and can be targeted towards any discipline.
cluster computing and the grid | 2006
Martyn Fletcher; Thomas W. Jackson; Mark Jessop; Bojian Liang; Jim Austin
We describe a high-performance grid-based signal search tool for distributed diagnostic applications, developed in conjunction with Rolls-Royce plc for civil aero-engine condition monitoring applications. With the introduction of advanced monitoring technology into engineering systems, healthcare, etc., the associated diagnostic processes are increasingly required to handle and consider vast amounts of data. An exemplar of such a diagnosis process was developed during the DAME project, which built a proof-of-concept demonstrator to assist in the enhanced diagnosis and prognosis of aero-engine conditions. In particular, it has shown the utility of an interactive viewing and high-performance distributed search tool (the Signal Data Explorer) in the aero-engine diagnostic process. The viewing and search techniques are equally applicable to other domains. The Signal Data Explorer and search services have been demonstrated on the Worldwide Universities Network to search distributed databases of electrocardiograph data.
WISE Workshops | 2011
Adriano Galati; Karim Djemame; Martyn Fletcher; Mark Jessop; Michael Weeks; Simon J. Hickinbotham; John McAvoy
The emerging transformation from a product-oriented economy to a service-oriented economy based on Cloud environments envisions new scenarios where current QoS mechanisms need to be redesigned. In such scenarios, new models to negotiate and manage Service Level Agreements (SLAs) are necessary. An SLA is a formal contract which defines, in measurable terms, acceptable service levels to be provided by the Service Provider to its customers. This is meant to guarantee that consumers' service quality expectations can be met. In fact, the level of customer satisfaction is crucial in Cloud environments, making SLAs one of the most important and active research topics. The aim of this paper is to explore the possibility of integrating an SLA approach for Cloud services based on the CMAC (Condition Monitoring on A Cloud) platform, which offers condition monitoring services in cloud computing environments to detect events on assets, as well as data storage services.
international symposium on object component service oriented real time distributed computing | 2012
Paul Townend; Colin C. Venters; Lydia Lau; Karim Djemame; Vania Dimitrova; Alison Marshall; Jie Xu; Charlie Dibsdale; Nick Taylor; Jim Austin; John McAvoy; Martyn Fletcher; Stephen Hobson
Large-scale data processing systems frequently require users to make timely and high-value business decisions based upon information that is received from a variety of heterogeneous sources. Such heterogeneity is especially true of service-oriented systems, which are often dynamic in nature and composed of multiple interacting services. However, in order to establish user trust in such systems, there is a need to determine the validity and reliability of all the data sources that go into the making of a decision. This paper analyses the concept of provenance and discusses how the establishment of personalized provenance recording and retrieval systems can be used to increase the utility of data and engender user trust in complex service-based systems. An overview of current provenance research is presented, and a real-world project to address the abstract concepts of trust and data quality in industrial and clinical settings is presented. From this, we conclude that the addition of provenance into data processing and decision making systems can have a tangible benefit to improving the trust of system users.
grid economics and business models | 2014
Adriano Galati; Karim Djemame; Martyn Fletcher; Mark Jessop; Michael Weeks; John McAvoy
The emerging transformation from a product-oriented economy to a service-oriented economy based on Cloud environments envisions new scenarios where current QoS (Quality of Service) mechanisms need to be redesigned. In such scenarios, new models to negotiate and manage Service Level Agreements (SLAs) are necessary. An SLA is a formal contract which defines, in measurable terms, acceptable service levels to be provided by the Service Provider to its customers. SLAs are an essential component in building Cloud systems where commitments and assurances are specified, implemented, monitored and possibly negotiable. This is meant to guarantee that consumers' service quality expectations can be met. In fact, the level of customer satisfaction is crucial in Cloud environments, making SLAs one of the most important and active research topics. This paper presents an SLA implementation for negotiation, monitoring and renegotiation of agreements for Cloud services based on the CMAC (Condition Monitoring on A Cloud) platform. CMAC offers condition monitoring services in cloud computing environments to detect events on assets, as well as data storage services.