Publication


Featured research published by Andrew Rowley.


Database | 2012

Argo: an integrative, interactive, text mining-based workbench supporting curation

Rafal Rak; Andrew Rowley; William J. Black; Sophia Ananiadou

Curation of biomedical literature is often supported by the automatic analysis of textual content that generally involves a sequence of individual processing components. Text mining (TM) has been used to enhance the process of manual biocuration, but has been focused on specific databases and tasks rather than an environment integrating TM tools into the curation pipeline, catering for a variety of tasks, types of information and applications. Processing components usually come from different sources and often lack interoperability. The well-established Unstructured Information Management Architecture is a framework that addresses interoperability by defining common data structures and interfaces. However, most of the efforts are targeted towards software developers and are not suitable for curators, or are otherwise inconvenient to use on a higher level of abstraction. To overcome these issues, we introduce Argo, an interoperable, integrative, interactive and collaborative system for text analysis with a convenient graphical user interface to ease the development of processing workflows and boost productivity in labour-intensive manual curation. Robust, scalable text analytics follow a modular approach, adopting component modules for distinct levels of text analysis. The user interface is available entirely through a web browser, which saves the user from going through often complicated and platform-dependent installation procedures. Argo comes with a predefined set of processing components commonly used in text analysis, while giving users the ability to deposit their own components. The system accommodates various areas and levels of user expertise, from TM and computational linguistics to ontology-based curation. One of the key functionalities of Argo is its ability to seamlessly incorporate user-interactive components, such as manual annotation editors, into otherwise completely automatic pipelines. As a use case, we demonstrate the functionality of an in-built manual annotation editor that is well suited for in-text corpus annotation tasks. Database URL: http://www.nactem.ac.uk/Argo


Nucleic Acids Research | 2014

Europe PMC: a full-text literature database for the life sciences and platform for innovation

Yuci Gou; Florian Graff; Philip Rossiter; Francesco Talo; Vid Vartak; Lee-Ann Coleman; Craig Hawkins; Anna Kinsey; Sami Mansoor; Victoria Morris; Rob Rowbotham; David A. G. Chapman; Oliver Kilian; Ross MacIntyre; Yogesh Patel; Sophia Ananiadou; William J. Black; John McNaught; Rafal Rak; Andrew Rowley; Senay Kafkas; Jyothi Katuri; Jee-Hyub Kim; Nikos Marinos; Johanna McEntyre; Andrew Morrison; Xingjun Pi

This article describes recent developments of Europe PMC (http://europepmc.org), the leading database for life science literature. Formerly known as UKPMC, the service was rebranded in November 2012 as Europe PMC to reflect the scope of the funding agencies that support it. Several new developments have enriched Europe PMC considerably since then. Europe PMC now offers RESTful web services to access both articles and grants, powerful search tools such as citation-count sort order and data citation features, a service to add publications to your ORCID, a variety of export formats, and an External Links service that enables any related resource to be linked from Europe PMC content.
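
The abstract mentions that Europe PMC exposes RESTful web services for articles and grants. As a minimal sketch of what a client call might look like, the snippet below queries the public Europe PMC search endpoint with the third-party requests library; the endpoint URL, parameter names and JSON field names reflect the publicly documented service as I understand it and are assumptions, not details taken from the article itself.

# Minimal sketch: querying the Europe PMC RESTful search service.
# Endpoint, parameters and JSON keys are assumptions based on the public
# Europe PMC web services documentation, not on the article above.
import requests

EPMC_SEARCH = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def search_europe_pmc(query, page_size=5):
    """Return (title, authorString) pairs for a free-text query."""
    params = {"query": query, "format": "json", "pageSize": page_size}
    response = requests.get(EPMC_SEARCH, params=params, timeout=30)
    response.raise_for_status()
    results = response.json().get("resultList", {}).get("result", [])
    return [(r.get("title"), r.get("authorString")) for r in results]

if __name__ == "__main__":
    for title, authors in search_europe_pmc("text mining AND biocuration"):
        print(f"{title} -- {authors}")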


Bioinformatics | 2013

A method for integrating and ranking the evidence for biochemical pathways by mining reactions from text

Makoto Miwa; Tomoko Ohta; Rafal Rak; Andrew Rowley; Douglas B. Kell; Sampo Pyysalo; Sophia Ananiadou

Motivation: To create, verify and maintain pathway models, curators must discover and assess knowledge distributed over the vast body of biological literature. Methods supporting these tasks must understand both the pathway model representations and the natural language in the literature. These methods should identify and order documents by relevance to any given pathway reaction. No existing system has addressed all aspects of this challenge.

Method: We present novel methods for associating pathway model reactions with relevant publications. Our approach extracts the reactions directly from the models and then turns them into queries for three text mining-based MEDLINE literature search systems. These queries are executed, and the resulting documents are combined and ranked according to their relevance to the reactions of interest. We manually annotate document-reaction pairs with the relevance of the document to the reaction and use this annotation to study several ranking methods, using various heuristic and machine-learning approaches.

Results: Our evaluation shows that the annotated document-reaction pairs can be used to create a rule-based document ranking system, and that machine learning can be used to rank documents by their relevance to pathway reactions. We find that a Support Vector Machine-based system outperforms several baselines and matches the performance of the rule-based system. The query extraction and ranking methods have been used to update our existing pathway search system, PathText.

Availability: An online demonstration of PathText 2 and the annotated corpus are available for research purposes at http://www.nactem.ac.uk/pathtext2/.

Contact: [email protected]

Supplementary information: Supplementary data are available at Bioinformatics online.
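
The abstract reports that an SVM-based system scores document-reaction pairs and ranks documents by relevance. The sketch below is purely illustrative and is not the authors' implementation: it shows, with scikit-learn, how hypothetical feature vectors for document-reaction pairs could be used to train an SVM and rank candidate documents by the decision value. The feature set and the training data are invented for the example.

# Illustrative sketch only: ranking documents for a pathway reaction with an
# SVM, in the spirit of the approach described above. Features and training
# data are hypothetical; the paper's actual pipeline is not reproduced.
from sklearn.svm import SVC
import numpy as np

# Each row describes a (document, reaction) pair with made-up features,
# e.g. query-term overlap, retrieval score, entity-match count.
X_train = np.array([
    [0.9, 12.3, 4],
    [0.1,  2.0, 0],
    [0.7,  8.5, 2],
    [0.2,  1.5, 1],
])
y_train = np.array([1, 0, 1, 0])  # 1 = document judged relevant to the reaction

model = SVC(kernel="linear")
model.fit(X_train, y_train)

# Rank unseen documents for one reaction by the SVM decision value.
X_candidates = np.array([[0.8, 10.0, 3], [0.3, 3.1, 1], [0.5, 6.2, 2]])
scores = model.decision_function(X_candidates)
ranking = np.argsort(-scores)  # document indices, most relevant first
print(ranking, scores[ranking])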


BMC Bioinformatics | 2015

Overview of the Cancer Genetics and Pathway Curation tasks of BioNLP Shared Task 2013

Sampo Pyysalo; Tomoko Ohta; Rafal Rak; Andrew Rowley; Hong Woo Chun; Sung Jae Jung; Sung Pil Choi; Jun’ichi Tsujii; Sophia Ananiadou

Background: Since their introduction in 2009, the BioNLP Shared Task events have been instrumental in advancing the development of methods and resources for the automatic extraction of information from the biomedical literature. In this paper, we present the Cancer Genetics (CG) and Pathway Curation (PC) tasks, two event extraction tasks introduced in the BioNLP Shared Task 2013. The CG task focuses on cancer, emphasizing the extraction of physiological and pathological processes at various levels of biological organization, and the PC task targets reactions relevant to the development of biomolecular pathway models, defining its extraction targets on the basis of established pathway representations and ontologies.

Results: Six groups participated in the CG task and two groups in the PC task, together applying a wide range of extraction approaches including both established state-of-the-art systems and newly introduced extraction methods. The best-performing systems achieved F-scores of 55% on the CG task and 53% on the PC task, demonstrating a level of performance comparable to the best results achieved in similar previously proposed tasks.

Conclusions: The results indicate that existing event extraction technology can generalize to meet the novel challenges represented by the CG and PC task settings, suggesting that extraction methods are capable of supporting the construction of knowledge bases on the molecular mechanisms of cancer and the curation of biomolecular pathway models. The CG and PC tasks continue as open challenges for all interested parties, with data, tools and resources available from the shared task homepage.


Database | 2014

Text Mining-assisted Biocuration Workflows in Argo

Rafal Rak; Riza Theresa Batista-Navarro; Andrew Rowley; Jacob Carter; Sophia Ananiadou

Biocuration activities have been broadly categorized into the selection of relevant documents, the annotation of biological concepts of interest and the identification of interactions between the concepts. Text mining has been shown to have the potential to significantly reduce the effort of biocurators in all three activities, and various semi-automatic methodologies have been integrated into curation pipelines to support them. We investigate the suitability of Argo, a workbench for building text-mining solutions with the use of a rich graphical user interface, for the process of biocuration. Central to Argo are customizable workflows that users compose by arranging available elementary analytics to form task-specific processing units. A built-in manual annotation editor is the single most used biocuration tool of the workbench, as it allows users to create annotations directly in text, as well as modify or delete annotations created by automatic processing components. Apart from syntactic and semantic analytics, the ever-growing library of components includes several data readers and consumers that support well-established as well as emerging data interchange formats such as XMI, RDF and BioC, which facilitate the interoperability of Argo with other platforms or resources. To validate the suitability of Argo for curation activities, we participated in the BioCreative IV challenge, whose purpose was to evaluate Web-based systems addressing user-defined biocuration tasks. Argo proved to have the edge over other systems in terms of flexibility of defining biocuration tasks. As expected, the versatility of the workbench inevitably lengthened the time the curators spent on learning the system before taking on the task, which may have affected the usability of Argo. Participation in the challenge gave us an opportunity to gather valuable feedback and identify areas of improvement, some of which have already been introduced. Database URL: http://argo.nactem.ac.uk


Philosophical Transactions of the Royal Society A | 2009

Dancing on the Grid: using e-Science tools to extend choreographic research

Helen Bailey; Michelle Bachler; Simon Buckingham Shum; Anja Le Blanc; Sita Popat; Andrew Rowley; Martin Turner

This paper considers the role and impact of new and emerging e-Science tools on practice-led research in dance. Specifically, it draws on findings from the e-Dance project. This 2-year project brings together an interdisciplinary team combining research in choreography, next-generation videoconferencing and human–computer interaction analysis, incorporating hypermedia and nonlinear annotations for recording and documentation.


Database | 2014

Processing biological literature with customizable Web services supporting interoperable formats

Rafal Rak; Riza Theresa Batista-Navarro; Jacob Carter; Andrew Rowley; Sophia Ananiadou

Web services have become a popular means of interconnecting solutions for processing a body of scientific literature. This has fuelled research on high-level data exchange formats suitable for a given domain and ensuring the interoperability of Web services. In this article, we focus on the biological domain and consider four interoperability formats, BioC, BioNLP, XMI and RDF, that represent domain-specific and generic representations and include well-established as well as emerging specifications. We use the formats in the context of customizable Web services created in our Web-based, text-mining workbench Argo that features an ever-growing library of elementary analytics and capabilities to build and deploy Web services straight from a convenient graphical user interface. We demonstrate a 2-fold customization of Web services: by building task-specific processing pipelines from a repository of available analytics, and by configuring services to accept and produce a combination of input and output data interchange formats. We provide qualitative evaluation of the formats as well as quantitative evaluation of automatic analytics. The latter was carried out as part of our participation in the fourth edition of the BioCreative challenge. Our analytics built into Web services for recognizing biochemical concepts in BioC collections achieved the highest combined scores out of 10 participating teams. Database URL: http://argo.nactem.ac.uk.
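
The abstract names BioC as one of the interchange formats that Argo's web services can accept and produce. As a small illustration only, the following uses Python's standard library to read passages and annotations from a BioC XML collection; the element names follow the publicly documented BioC schema as I understand it, and the file name "collection.xml" is hypothetical rather than anything referenced by the article.

# Minimal sketch: reading passages and annotations from a BioC XML collection
# with the standard library. Element names follow the public BioC schema;
# "collection.xml" is a hypothetical input file.
import xml.etree.ElementTree as ET

tree = ET.parse("collection.xml")
for document in tree.getroot().iter("document"):
    doc_id = document.findtext("id")
    for passage in document.iter("passage"):
        text = passage.findtext("text") or ""
        print(f"Document {doc_id}: {text[:60]}...")
        # Each annotation carries a type (as an infon), offsets and surface text.
        for annotation in passage.iter("annotation"):
            ann_type = None
            for infon in annotation.iter("infon"):
                if infon.get("key") == "type":
                    ann_type = infon.text
            location = annotation.find("location")
            offset = location.get("offset") if location is not None else "?"
            length = location.get("length") if location is not None else "?"
            print(f"  {ann_type}: '{annotation.findtext('text')}' @ {offset}+{length}")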


Frontiers in Neuroscience | 2017

A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine

Basabdatta Sen-Bhattacharya; Teresa Serrano-Gotarredona; Lorinc Balassa; Akash Bhattacharya; Alan B. Stokes; Andrew Rowley; Indar Sugiarto; Steve B. Furber

We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, which is a state-of-the-art digital neuromorphic hardware built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a “basic building block” for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. Synaptic layout of the model is consistent with biology. The model response is validated with existing literature reporting entrainment in steady state visually evoked potentials (SSVEP)—brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike-train with inter-spike-intervals in the range 10–50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light-emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The resolution of simulation is 0.1 ms to ensure solution accuracy for the underlying differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time is executed in 10 s real time on SpiNNaker; this is because simulations on SpiNNaker work in real time for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz), implying a reduced information transmission fidelity. These model predictions agree with recent lumped-parameter computational model-based predictions, using conventional computers. Scalability of the framework is demonstrated by a multi-node architecture consisting of three “nodes,” where each node is the “basic building block” LGN model. This 420-neuron model is tested with synthetic periodic stimulus at 10 Hz to all the nodes. The model output is the average of the outputs from all nodes, and conforms to the above-mentioned predictions of each node. Power consumption for model simulation on SpiNNaker is ≪1 W.
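
The abstract highlights the 0.1 ms time-step chosen for accuracy when solving Izhikevich's neuron equations. As a rough illustration of those equations only, and not of the SpiNNaker implementation, the sketch below integrates the standard Izhikevich model with forward Euler at that resolution; the parameter values are the commonly cited regular-spiking defaults and the input current is invented, not values from the LGN model.

# Rough illustration: the standard Izhikevich neuron equations integrated with
# forward Euler at the 0.1 ms resolution mentioned above. Not the SpiNNaker
# implementation; parameters are the common regular-spiking defaults
# (a=0.02, b=0.2, c=-65, d=8) and the input current is hypothetical.
dt = 0.1            # time-step in ms
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0
i_input = 10.0      # constant input current (arbitrary units)
spike_times = []

for step in range(int(1000 / dt)):           # simulate 1 s of model time
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_input)
    u += dt * (a * (b * v - u))
    if v >= 30.0:                            # spike: record time and reset
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in 1 s of simulated time")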


Archive | 2016

Text Mining for Semantic Search in Europe PubMed Central Labs

William J. Black; Andrew Rowley; Makoto Miwa; John McNaught; Sophia Ananiadou

This chapter describes an implemented and publicly available search aid designed for use in the context of a standard full-text retrieval service, which automatically suggests questions on the basis of what has been entered in the search engine’s query field. The service is developed on the basis of a content analysis achieved by merging information extraction of biomedical named entities with a syntactic analysis of the full text of an entire collection of scientific papers. We discuss the design and implementation of the system in contrast with alternative ways of providing a search application based on text mining for events of significance in biomedical sciences, and evaluate characteristics of the system against criteria established in collaboration with sample users.


Philosophical Transactions of the Royal Society A | 2012

Secure data sharing across portals: experiences from OneVRE.

Martin Turner; M. Jones; Meik Poschen; Rob Procter; Andrew Rowley; Tobias Schiebeck

Research and higher education are facing an ongoing transformation of practice, resulting in the need for effective collaboration and sharing of resources within and across disciplinary and geographical boundaries. Portal technologies and portal-based virtual research and learning environments (VREs and VLEs) have already become standard infrastructures within a large number of research communities and institutions. From 2004, a series of research and development projects began to ask whether an open-source videoconferencing and collaboration system could be used as a complete VRE, or as part of one. This study presents the evolution of these projects and, at the same time, describes the definition of a VRE and their possible future integration. The OneVRE portlet integration project attempted to create missing components, including adding secure and universal identity management. This moves the idea of shared data to a different level by creating a new administrative domain that is outside the control of a single local institution portal and resolves certain administrative problems of virtual organizations. We explain some of the hurdles that still need to be overcome to make this venture truly successful, so that a complete toolkit can be designed for the researcher of the future.

Collaboration


Dive into Andrew Rowley's collaborations.

Top Co-Authors

Rafal Rak
University of Manchester

Alan B. Stokes
University of Manchester

Martin Turner
University of Bedfordshire

Jacob Carter
University of Manchester