Anne-Marie Tassé
McGill University
Publication
Featured research published by Anne-Marie Tassé.
Emerging Themes in Epidemiology | 2013
Dany Doiron; Paul R. Burton; Yannick Marcon; Amadou Gaye; Bruce H. R. Wolffenbuttel; Markus Perola; Ronald P. Stolk; Luisa Foco; Cosetta Minelli; Melanie Waldenberger; Rolf Holle; Kirsti Kvaløy; Hans L. Hillege; Anne-Marie Tassé; Vincent Ferretti; Isabel Fortier
Background: Individual-level data pooling of large population-based studies across research centres in international research projects faces many hurdles. The BioSHaRE (Biobank Standardisation and Harmonisation for Research Excellence in the European Union) project aims to address these issues by building a collaborative group of investigators and developing tools for data harmonization, database integration and federated data analyses. Methods: Eight population-based studies in six European countries were recruited to participate in the BioSHaRE project. Through workshops, teleconferences and electronic communications, participating investigators identified a set of 96 variables targeted for harmonization to answer research questions of interest. Using each study’s questionnaires, standard operating procedures, and data dictionaries, harmonization potential was assessed. Whenever harmonization was deemed possible, processing algorithms were developed and implemented in an open-source software infrastructure to transform study-specific data into the target (i.e. harmonized) format. Harmonized datasets located on servers in each research centre across Europe were interconnected through a federated database system to perform statistical analyses. Results: Retrospective harmonization led to the generation of common-format variables for 73% of the matches considered (96 targeted variables across 8 studies). Authenticated investigators can now perform complex statistical analyses of harmonized datasets stored on distributed servers, without actually sharing individual-level data, using the DataSHIELD method. Conclusion: New Internet-based networking technologies and database management systems are providing the means to support collaborative, multi-centre research in an efficient and secure manner.
The results from this pilot project show that, given a strong collaborative relationship between participating studies, it is possible to seamlessly co-analyse internationally harmonized research databases while allowing each study to retain full control over individual-level data. We encourage additional collaborative research networks in epidemiology, public health, and the social sciences to make use of the open source tools presented herein.
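The federated principle described above, in which each study retains its individual-level data and only non-disclosive aggregates leave the site, can be sketched in a few lines. This is an illustrative toy under assumed names, not the DataSHIELD implementation: each site returns summary statistics, and the coordinating centre combines them into a pooled estimate.

```python
def local_summary(values):
    """Run at each study site: release only aggregate statistics,
    never the individual-level records themselves."""
    return {"n": len(values), "sum": sum(values)}

def pooled_mean(summaries):
    """Run at the coordinating centre: combine the per-study
    summaries into a single pooled estimate."""
    total_n = sum(s["n"] for s in summaries)
    total_sum = sum(s["sum"] for s in summaries)
    return total_sum / total_n

# Three hypothetical studies hold their participant data locally.
study_a = [1.2, 2.4, 3.1]
study_b = [2.0, 2.2]
study_c = [4.5, 3.3, 2.8, 1.9]

summaries = [local_summary(s) for s in (study_a, study_b, study_c)]
result = pooled_mean(summaries)
```

The real method supports far richer analyses (e.g. federated regression via iterative exchange of score statistics), but the design choice is the same: the disclosure boundary sits at the summary-statistic level.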
European Journal of Human Genetics | 2015
Edward S. Dove; Yann Joly; Anne-Marie Tassé; Bartha Maria Knoppers
The biggest challenge in twenty-first century data-intensive genomic science is developing vast computer infrastructure and advanced software tools to perform comprehensive analyses of genomic data sets for biomedical research and clinical practice. Researchers are increasingly turning to cloud computing both as a solution to integrate data from genomics, systems biology and biomedical data mining and as an approach to analyze data to solve biomedical problems. Although cloud computing provides several benefits such as lower costs and greater efficiency, it also raises legal and ethical issues. In this article, we discuss three key ‘points to consider’ (data control; data security, confidentiality and transfer; and accountability) based on a preliminary review of several publicly available cloud service providers’ Terms of Service. These ‘points to consider’ should be borne in mind by genomic research organizations when negotiating legal arrangements to store genomic data on a large commercial cloud service provider’s servers. Diligent genomic cloud computing means leveraging security standards and evaluation processes as a means to protect data and entails many of the same good practices that researchers should always consider in securing their local infrastructure.
Bioinformatics | 2015
Paul R. Burton; Madeleine Murtagh; Andrew W Boyd; James Bryan Williams; Edward S. Dove; Susan Wallace; Anne-Marie Tassé; Julian Little; Rex L. Chisholm; Amadou Gaye; Kristian Hveem; Anthony J. Brookes; Pat Goodwin; Jon Fistein; Martin Bobrow; Bartha Maria Knoppers
Motivation: The data that put the ‘evidence’ into ‘evidence-based medicine’ are central to developments in public health, primary and hospital care. A fundamental challenge is to site such data in repositories that can easily be accessed under appropriate technical and governance controls which are effectively audited and are viewed as trustworthy by diverse stakeholders. This demands socio-technical solutions that may easily become enmeshed in protracted debate and controversy as they encounter the norms, values, expectations and concerns of diverse stakeholders. In this context, the development of what are called ‘Data Safe Havens’ has been crucial. Unfortunately, the origins and evolution of the term have led to a range of different definitions being assumed by different groups. There is, however, an intuitively meaningful interpretation that is often assumed by those who have not previously encountered the term: a repository in which useful but potentially sensitive data may be kept securely under governance and informatics systems that are fit-for-purpose and appropriately tailored to the nature of the data being maintained, and may be accessed and utilized by legitimate users undertaking work and research contributing to biomedicine, health and/or to ongoing development of healthcare systems. Results: This review explores a fundamental question: ‘what are the specific criteria that ought reasonably to be met by a data repository if it is to be seen as consistent with this interpretation and viewed as worthy of being accorded the status of ‘Data Safe Haven’ by key stakeholders’? We propose 12 such criteria. Contact: [email protected]
Nucleic Acids Research | 2017
Laura Clarke; Susan Fairley; Xiangqun Zheng-Bradley; Ian Streeter; Emily Perry; Ernesto Lowy; Anne-Marie Tassé; Paul Flicek
The International Genome Sample Resource (IGSR; http://www.internationalgenome.org) expands in data type and population diversity the resources from the 1000 Genomes Project. IGSR represents the largest open collection of human variation data and provides easy access to these resources. IGSR was established in 2015 to maintain and extend the 1000 Genomes Project data, which has been widely used as a reference set of human variation and by researchers developing analysis methods. IGSR has mapped all of the 1000 Genomes sequence to the newest human reference (GRCh38), and will release updated variant calls to ensure maximal usefulness of the existing data. IGSR is collecting new structural variation data on the 1000 Genomes samples from long read sequencing and other technologies, and will collect relevant functional data into a single comprehensive resource. IGSR is extending coverage with new populations sequenced by collaborating groups. Here, we present the new data and analysis that IGSR has made available. We have also introduced a new data portal that increases discoverability of our data—previously only browseable through our FTP site—by focusing on particular samples, populations or data sets of interest.
Scientific Data | 2018
Moran N. Cabili; Knox Carey; Stephanie O.M. Dyke; Anthony J. Brookes; Marc Fiume; Francis Jeanson; Giselle Kerry; Alex Lash; Heidi J. Sofia; Dylan Spalding; Anne-Marie Tassé; Susheel Varma; Ravi Pandya
The volume of genomics and health data is growing rapidly, driven by sequencing for both research and clinical use. However, under current practices, the data is fragmented into many distinct datasets, and researchers must go through a separate application process for each dataset. This is time-consuming both for the researchers and the data stewards, and it reduces the velocity of research and new discoveries that could improve human health. We propose to simplify this process, by introducing a standard Library Card that identifies and authenticates researchers across all participating datasets. Each researcher would only need to apply once to establish their bona fides as a qualified researcher, and could then use the Library Card to access a wide range of datasets that use a compatible data access policy and authentication protocol.
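The access flow proposed above (one vetting step, then reuse of the credential across participating datasets) can be sketched as follows. This is a minimal illustration of the idea, not the GA4GH protocol; all identifiers and field names are invented for the example.

```python
# Credentials issued once, after a researcher's bona fides are vetted.
ISSUED_CARDS = {
    "card-001": {"researcher": "Dr. A. Example", "bona_fide": True},
}

# A participating dataset's (hypothetical) access policy.
DATASET_POLICY = {"requires_bona_fide": True}

def grant_access(card_id):
    """A dataset host validates the shared credential against its own
    policy instead of running a separate application process."""
    card = ISSUED_CARDS.get(card_id)
    if card is None:
        return False  # unknown or revoked credential
    if DATASET_POLICY["requires_bona_fide"] and not card["bona_fide"]:
        return False
    return True
```

In practice such a credential would be a signed token checked cryptographically rather than a dictionary lookup, but the design point is the same: the expensive vetting happens once, and each dataset only evaluates a policy.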
npj Genomic Medicine | 2018
J. Patrick Woolley; Emily Kirby; Josh Leslie; Francis Jeanson; Moran N. Cabili; Gregory Rushton; James G. Hazard; Vagelis Ladas; Colin D. Veal; Spencer J. Gibson; Anne-Marie Tassé; Stephanie O.M. Dyke; Clara Gaff; Adrian Thorogood; Bartha Maria Knoppers; John Wilbanks; Anthony J. Brookes
Given the data-rich nature of modern biomedical research, there is a pressing need for a systematic, structured, computer-readable way to capture, communicate, and manage sharing rules that apply to biomedical resources. This is essential for responsible recording, versioning, communication, querying, and actioning of resource sharing plans. However, lack of a common “information model” for rules and conditions that govern the sharing of materials, methods, software, data, and knowledge creates a fundamental barrier. Without this, it can be virtually impossible for Research Ethics Committees (RECs), Institutional Review Boards (IRBs), Data Access Committees (DACs), biobanks, and end users to confidently track, manage, and interpret applicable legal and ethical requirements. This raises costs and burdens of data stewardship and decreases efficient and responsible access to data, biospecimens, and other resources. To address this, the GA4GH and IRDiRC organizations sponsored the creation of the Automatable Discovery and Access Matrix (ADA-M, read simply as “Adam”). ADA-M is a comprehensive information model that provides the basis for producing structured metadata “Profiles” of regulatory conditions, thereby enabling efficient application of those conditions across regulatory spheres. Widespread use of ADA-M will aid researchers in globally searching and prescreening potential data and/or biospecimen resources for compatibility with their research plans in a responsible and efficient manner, increasing likelihood of timely DAC approvals while also significantly reducing time and effort DACs, RECs, and IRBs spend evaluating resource requests and research proposals. Extensive online documentation, software support, video guides, and an Application Programming Interface (API) for ADA-M have been made available.
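The kind of machine-actionable pre-screening that a structured profile enables can be sketched roughly as below. The field names here are invented for illustration and do not reproduce the actual ADA-M information model: a resource publishes its use conditions as structured metadata, and a research plan is checked against them automatically.

```python
# A hypothetical machine-readable profile of a resource's use conditions.
profile = {
    "resource": "example-biobank",
    "permitted_use": {"disease_specific": True, "commercial": False},
}

# A researcher's planned use, expressed in the same structured terms.
research_plan = {"use": "disease_specific", "commercial": False}

def prescreen(profile, plan):
    """Return True if the plan appears compatible with the resource's
    declared conditions; a human committee still makes the final call."""
    permitted = profile["permitted_use"]
    if plan["commercial"] and not permitted["commercial"]:
        return False
    return permitted.get(plan["use"], False)
```

Because both sides are structured rather than free text, a researcher can filter many candidate resources this way before ever submitting a request, which is the time saving the abstract describes for DACs, RECs, and IRBs.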
Current Pharmacogenomics and Personalized Medicine (formerly Current Pharmacogenomics) | 2017
Gratien Dalpé; Emily Kirby; Ida Ngueng Feze; Anne-Marie Tassé; Bartha Maria Knoppers; Pavel Hamet; Johanne Tremblay; Michael S. Phillips; Yann Joly
DOI: 10.2174/18756921156661612201
Biopreservation and Biobanking | 2016
Anne-Marie Tassé; Marianna J. Bledsoe; Lisette Giepmans; Vasiliki Rahimzadeh
Archive | 2013
Karine Sénécal; Conrad V. Fernandez; Anne-Marie Tassé; Ma'n H. Zawati; Bartha Maria Knoppers; Denise Avard