Aleksandra Nenadic
University of Manchester
Publications
Featured research published by Aleksandra Nenadic.
Nucleic Acids Research | 2013
Katherine Wolstencroft; Robert Haines; Donal Fellows; Alan R. Williams; David Withers; Stuart Owen; Stian Soiland-Reyes; Ian Dunlop; Aleksandra Nenadic; Paul Fisher; Jiten Bhagat; Khalid Belhajjame; Finn Bacall; Alex Hardisty; Abraham Nieva de la Hidalga; Maria Paula Balcazar Vargas; Shoaib Sufi; Carole A. Goble
The Taverna workflow tool suite (http://www.taverna.org.uk) is designed to combine distributed Web Services and/or local tools into complex analysis pipelines. These pipelines can be executed on local desktop machines or through larger infrastructure (such as supercomputers, Grids or cloud environments), using the Taverna Server. In bioinformatics, Taverna workflows are typically used in the areas of high-throughput omics analyses (for example, proteomics or transcriptomics), or for evidence gathering methods involving text mining or data mining. Through Taverna, scientists have access to several thousand different tools and resources that are freely available from a large range of life science institutions. Once constructed, the workflows are reusable, executable bioinformatics protocols that can be shared, reused and repurposed. A repository of public workflows is available at http://www.myexperiment.org. This article provides an update to the Taverna tool suite, highlighting new features and developments in the workbench and the Taverna Server.
BMC Bioinformatics | 2010
Wei Tan; Ravi K. Madduri; Aleksandra Nenadic; Stian Soiland-Reyes; Dinanath Sulakhe; Ian T. Foster; Carole A. Goble
Background: In the biological and medical domains, web services have made data and computational functionality accessible in a unified manner, helping to automate data pipelines that were previously run manually. Workflow technology is widely used to orchestrate multiple services in support of in-silico research. The Cancer Biomedical Informatics Grid (caBIG) is an information network enabling the sharing of cancer research resources, and caGrid is its underlying service-based computation infrastructure. caBIG requires that services be composed and orchestrated in a given sequence to realize data pipelines, often called scientific workflows.
Results: caGrid selected Taverna as its workflow execution system of choice owing to its integration with web service technology, its support for a wide range of web services, and its plug-in architecture allowing easy integration of third-party extensions. The caGrid Workflow Toolkit (the toolkit for short), an extension to the Taverna workflow system, is designed and implemented to ease building and running caGrid workflows. It supports users through the various phases of using workflows: service discovery, composition and orchestration, data access, and secure service invocation, all of which the caGrid community has identified as challenging in a multi-institutional, cross-discipline setting.
Conclusions: By extending the Taverna Workbench, the caGrid Workflow Toolkit provides a comprehensive solution for composing and coordinating services in caGrid that would otherwise remain isolated and disconnected from each other. With it, users can access more than 140 services and a rich set of features, including discovery of data and analytical services, query and transfer of data, security protections for service invocations, state management in service interactions, and sharing of workflows, experiences and best practices. The proposed solution is general enough to be applicable and reusable within other service-computing infrastructures that leverage a similar technology stack.
Nucleic Acids Research | 2016
Jon Ison; Kristoffer Rapacki; Hervé Ménager; Matúš Kalaš; Emil Rydza; Piotr Jaroslaw Chmura; Christian Anthon; Niall Beard; Karel Berka; Dan Bolser; Tim Booth; Anthony Bretaudeau; Jan Brezovsky; Rita Casadio; Gianni Cesareni; Frederik Coppens; Michael Cornell; Gianmauro Cuccuru; Kristian Davidsen; Gianluca Della Vedova; Tunca Doğan; Olivia Doppelt-Azeroual; Laura Emery; Elisabeth Gasteiger; Thomas Gatter; Tatyana Goldberg; Marie Grosjean; Björn Grüning; Manuela Helmer-Citterich; Hans Ienasescu
Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR—the European infrastructure for biological information—that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools.
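The registry described above exposes its entries for programmatic search. As a minimal sketch of how a client might filter registry results by scientific topic, the snippet below works on a canned JSON fragment; the field names (`list`, `name`, `topic`) and the tool entries are illustrative assumptions, not the registry's exact schema.

```python
import json

# Canned fragment standing in for a registry search response;
# field names and entries here are illustrative, not bio.tools' exact schema.
sample = json.loads("""
{"list": [
  {"name": "SignalP", "topic": ["Proteomics"]},
  {"name": "RNAfold", "topic": ["Transcriptomics"]}
]}
""")

def tools_for_topic(payload: dict, topic: str) -> list:
    """Return the names of registry entries annotated with the given topic."""
    return [t["name"] for t in payload["list"] if topic in t.get("topic", [])]

print(tools_for_topic(sample, "Proteomics"))
```

A real client would fetch this payload from the registry at https://bio.tools and filter on whichever annotation fields the live schema provides.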
acm symposium on applied computing | 2004
Aleksandra Nenadic; Ning Zhang; Stephen K. Barton
Communication by e-mail has become a vital part of everyday business and has replaced most of the conventional ways of communicating. Important business correspondence may require certified e-mail delivery, analogous to that provided by the conventional mail service. This paper presents a novel certified e-mail delivery protocol that provides non-repudiation of origin and non-repudiation of receipt security services to protect the communicating parties from each other's false denials that the e-mail has been sent and received. The protocol provides strong fairness to ensure that the recipient receives the e-mail if and only if the sender receives the receipt. The protocol makes use of an off-line and transparent trusted third party only in exceptional circumstances, i.e. when the communicating parties fail to complete the e-mail-for-receipt exchange due to a network failure or a party's misbehaviour. Care has been taken in the protocol design to reduce the use of expensive cryptographic operations for better efficiency and cost-effectiveness.
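The evidence tokens at the heart of such a protocol can be sketched in a few lines. This is a toy happy-path illustration, not the paper's protocol: `sign` is a placeholder keyed hash standing in for a real asymmetric digital signature (which true non-repudiation requires), and the off-line trusted third party is omitted.

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> bytes:
    """Placeholder 'signature': a keyed hash. A real protocol would use an
    asymmetric digital signature so the evidence is verifiable by anyone."""
    return hmac.new(key, data, hashlib.sha256).digest()

def certified_delivery(sender_key: bytes, recipient_key: bytes, email: bytes) -> dict:
    """Toy happy-path exchange: the recipient obtains the e-mail and the
    sender obtains a receipt bound to the same message digest."""
    digest = hashlib.sha256(email).digest()
    nro = sign(sender_key, digest)           # non-repudiation of origin
    nrr = sign(recipient_key, digest + nro)  # non-repudiation of receipt
    # In the real protocol the e-mail and the receipt are released fairly,
    # with an off-line TTP resolving aborted or disputed exchanges.
    return {"email": email, "nro": nro, "nrr": nrr}

evidence = certified_delivery(b"alice-key", b"bob-key", b"contract attached")
```

Binding both tokens to the same digest is what lets either party later prove what was sent and what was acknowledged.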
Journal of Universal Computer Science | 2005
Aleksandra Nenadic; Ning Zhang; Barry M. G. Cheetham; Carole A. Goble
Delivering electronic goods over the Internet is one of the e-commerce applications expected to proliferate in the coming years. Certified e-goods delivery is a process in which valuable e-goods are exchanged for an acknowledgement of their reception. This paper proposes an efficient security protocol for certified e-goods delivery with the following features: (1) it ensures strong fairness for the exchange of e-goods and proof of reception, (2) it ensures non-repudiation of origin and non-repudiation of receipt for the delivered e-goods, (3) it allows the receiver of the e-goods to verify, during the exchange process, that the e-goods to be received are the ones he is signing the receipt for, (4) it uses an off-line and transparent semi-trusted third party (STTP) only when disputes arise, (5) it provides confidentiality protection for the exchanged items against the STTP, and (6) it achieves these features with lower computation and communication overheads than related protocols.
international conference on information technology coding and computing | 2004
Aleksandra Nenadic; Ning Zhang; Stephen K. Barton
This paper presents an efficient security protocol for certified e-goods delivery with the following features: (1) it ensures strong fairness, (2) it ensures non-repudiation of origin and non-repudiation of receipt, (3) it allows the receiver of the e-goods to verify, during the protocol execution, that the e-goods he is about to receive are the ones he is signing the receipt for, (4) it does not require the active involvement of a fully trusted third party, but rather an off-line and transparent semi-trusted third party (STTP) only in cases of unfair behaviour by either party, and (5) it provides confidentiality protection for the exchanged items against the STTP.
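Feature (3), letting the receiver check mid-exchange that the delivered item is the one the receipt refers to, can be illustrated with a hash commitment. This is a deliberate simplification of the idea, not the paper's construction (which also keeps the items confidential from the STTP).

```python
import hashlib

def commit(e_goods: bytes) -> bytes:
    """The sender publishes a commitment to the e-goods before the exchange."""
    return hashlib.sha256(e_goods).digest()

def receipt_matches(commitment: bytes, delivered: bytes) -> bool:
    """Before signing a receipt, the receiver checks that what was delivered
    is exactly the item the commitment (and hence the receipt) refers to."""
    return hashlib.sha256(delivered).digest() == commitment

c = commit(b"ebook.pdf contents")
assert receipt_matches(c, b"ebook.pdf contents")  # genuine item accepted
assert not receipt_matches(c, b"junk bytes")      # swapped item detected
```

The receiver thus never signs a receipt for an item he has not verified, which is one ingredient of the strong-fairness guarantee.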
BMC Ecology | 2016
Alex Hardisty; Finn Bacall; Niall Beard; Maria-Paula Balcázar-Vargas; Bachir Balech; Zoltán Barcza; Sarah J. Bourlat; Renato De Giovanni; Yde de Jong; Francesca De Leo; Laura Dobor; Giacinto Donvito; Donal Fellows; Antonio Fernandez Guerra; Nuno Ferreira; Yuliya Fetyukova; Bruno Fosso; Jonathan Giddy; Carole A. Goble; Anton Güntsch; Robert Haines; Vera Hernández Ernst; Hannes Hettling; Dóra Hidy; Ferenc Horváth; Dóra Ittzés; Péter Ittzés; Andrew R. Jones; Renzo Kottmann; Robert Kulawik
Background: Making forecasts about biodiversity and giving support to policy relies increasingly on large collections of data held electronically, and on substantial computational capability and capacity to analyse, model, simulate and predict using such data. However, the physically distributed nature of data resources and of expertise in advanced analytical tools creates many challenges for the modern scientist. Across the wider biological sciences, presenting such capabilities on the Internet (as “Web services”) and using scientific workflow systems to compose them for particular tasks is a practical way to carry out robust “in silico” science. However, use of this approach in biodiversity science and ecology has thus far been quite limited.
Results: BioVeL is a virtual laboratory for data analysis and modelling in biodiversity science and ecology, freely accessible via the Internet. BioVeL includes functions for accessing and analysing data through curated Web services; for performing complex in silico analysis through exposure of R programs, workflows, and batch processing functions; for on-line collaboration through sharing of workflows and workflow runs; for experiment documentation through reproducibility and repeatability; and for computational support via seamless connections to supporting computing infrastructures. We developed and improved more than 60 Web services with significant potential in many different kinds of data analysis and modelling tasks. We composed reusable workflows using these Web services, also incorporating R programs. Deploying these tools into an easy-to-use and accessible ‘virtual laboratory’, free via the Internet, we applied the workflows in several diverse case studies. We opened the virtual laboratory for public use and, through a programme of external engagement, actively encouraged scientists and third-party application and tool developers to try out the services and contribute to the activity.
Conclusions: Our work shows we can deliver an operational, scalable and flexible Internet-based virtual laboratory to meet new demands for data processing and analysis in biodiversity science and ecology. In particular, we have successfully integrated existing and popular tools and practices from different scientific disciplines to be used in biodiversity and ecological research.
Concurrency and Computation: Practice and Experience | 2007
Ning Zhang; Li Yao; Aleksandra Nenadic; Jay Chin; Carole A. Goble; Alan L. Rector; David W. Chadwick; Sassa Otenko; Qi Shi
In a virtual organization environment, where services and data are provided and shared among organizations from different administrative domains and protected with dissimilar security policies and measures, there is a need for a flexible authentication framework that supports the use of various authentication methods and tokens. The authentication strengths derived from these methods and tokens should be incorporated into the access-control decision-making process, so that more sensitive resources are available only to users authenticated with stronger methods. This paper reports our ongoing efforts in designing and implementing such a framework to facilitate multi-level and multi-factor adaptive authentication, with fine-grained access control linked to authentication strength. The proof-of-concept prototype is designed and implemented on the Shibboleth and PERMIS infrastructures, which together specify protocols to federate authentication and authorization information and provide a policy-driven, role-based access-control decision-making capability.
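The core idea, that access decisions take authentication strength into account, can be sketched as a simple policy check. The method-to-level mapping and resource requirements below are hypothetical examples for illustration, not the policies of the Shibboleth/PERMIS prototype.

```python
# Hypothetical mapping from authentication method to an assurance level;
# a real deployment would derive this from federation policy.
METHOD_LEVEL = {"password": 1, "otp": 2, "smartcard": 3}

# Hypothetical per-resource minimum required assurance levels.
RESOURCE_MIN_LEVEL = {"public-dataset": 1, "clinical-records": 3}

def access_granted(method: str, resource: str) -> bool:
    """Grant access only if the user's authentication method meets the
    resource's minimum required assurance level; unknown methods or
    resources are denied by default."""
    required = RESOURCE_MIN_LEVEL.get(resource)
    if required is None:
        return False
    return METHOD_LEVEL.get(method, 0) >= required

print(access_granted("password", "public-dataset"))   # True
print(access_granted("password", "clinical-records")) # False
```

Denying unknown methods and resources by default reflects the fail-closed stance such frameworks typically take.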
information assurance and security | 2007
Aleksandra Nenadic; Ning Zhang; Lix Yao; Terry Morrow
The ES-LoA project, funded by the UK Joint Information Systems Committee (JISC) under its e-Infrastructure Security Programme, investigates current and future needs within the UK research and education community for a more fine-grained authorisation scheme that would allow service providers to take into account the level of confidence in identifying a remote entity requesting service access. Such a fine-grained authorisation scheme is attractive to service providers offering resources with varying levels of sensitivity and/or wishing to tailor their security protections to risk levels. Service providers may wish to restrict access to more sensitive resources to those who have gone through a more stringent authentication process, or, for the same remote entity, require the use of a stronger authentication token should the access request come from a riskier environment. In this way, the quality of an authentication instance, expressed as an authentication Level of Assurance (LoA), becomes one of the parameters used in access-control decision making. This paper surveys current worldwide efforts to define LoA and identifies gaps in existing definitions when they are applied to a federated environment.
international conference on e science | 2006
Aleksandra Nenadic; Ning Zhang; Jay Chin; Carole A. Goble
The paper describes the design of the FAME (Flexible Access Middleware Extension) architecture, aimed at providing a multi-level user authentication service for Shibboleth, which is endorsed by the Joint Information Systems Committee (JISC) as the next-generation authentication and authorisation infrastructure for the UK Higher Education community. FAME derives an authentication assurance level from the strength of the authentication token and protocol used by the user when authenticating, and feeds it to the PERMIS authorisation decision engine via Shibboleth to enable more fine-grained access control. In this way, access to resources is controlled according to the strength of the authentication method, so that more sensitive resources may require users to identify themselves using a higher level of authentication.