Publication


Featured research published by Andrea Stewart.


BMC Medicine | 2014

Using verbal autopsy to measure causes of death: the comparative performance of existing methods

Christopher J L Murray; Rafael Lozano; Abraham D. Flaxman; Peter T. Serina; David Phillips; Andrea Stewart; Spencer L. James; Charles Atkinson; Michael K. Freeman; Summer Lockett Ohno; Robert E. Black; Said M. Ali; Abdullah H. Baqui; Lalit Dandona; Emily Dantzer; Gary L. Darmstadt; Vinita Das; Usha Dhingra; Arup Dutta; Wafaie W. Fawzi; Sara Gómez; Bernardo Hernández; Rohina Joshi; Henry D. Kalter; Aarti Kumar; Vishwajeet Kumar; Marilla Lucero; Saurabh Mehta; Bruce Neal; Devarsetty Praveen

Background: Monitoring progress with disease and injury reduction in many populations will require widespread use of verbal autopsy (VA). Multiple methods have been developed for assigning cause of death from a VA, but their application is restricted by uncertainty about their reliability.

Methods: We investigated the validity of five automated VA methods for assigning cause of death: InterVA-4, Random Forest (RF), Simplified Symptom Pattern (SSP), Tariff method (Tariff), and King-Lu (KL), in addition to physician review of VA forms (PCVA), based on 12,535 cases from diverse populations for which the true cause of death had been reliably established. For adults, children, neonates and stillbirths, performance was assessed separately for individuals using sensitivity, specificity, Kappa, and chance-corrected concordance (CCC) and for populations using cause-specific mortality fraction (CSMF) accuracy, with and without additional diagnostic information from prior contact with health services. A total of 500 train-test splits were used to ensure that results are robust to variation in the underlying cause of death distribution.

Results: Three automated diagnostic methods, Tariff, SSP, and RF, but not InterVA-4, performed better than physician review in all age groups, study sites, and for the majority of causes of death studied. For adults, CSMF accuracy ranged from 0.764 to 0.770, compared with 0.680 for PCVA and 0.625 for InterVA; CCC varied from 49.2% to 54.1%, compared with 42.2% for PCVA and 23.8% for InterVA. For children, CSMF accuracy was 0.783 for Tariff, 0.678 for PCVA, and 0.520 for InterVA; CCC was 52.5% for Tariff, 44.5% for PCVA, and 30.3% for InterVA. For neonates, CSMF accuracy was 0.817 for Tariff, 0.719 for PCVA, and 0.629 for InterVA; CCC varied from 47.3% to 50.3% for the three automated methods, 29.3% for PCVA, and 19.4% for InterVA. The method with the highest sensitivity for a specific cause varied by cause.

Conclusions: Physician review of verbal autopsy questionnaires is less accurate than automated methods in determining both individual and population causes of death. Overall, Tariff performs as well as or better than other methods and should be widely applied in routine mortality surveillance systems with poor cause of death certification practices.
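The two headline metrics above have simple closed forms: CSMF accuracy compares predicted and true cause fractions at the population level, while chance-corrected concordance (CCC) is a per-cause sensitivity adjusted for chance agreement. A minimal sketch of both, assuming lists of true and predicted cause labels are already available (the function and variable names are illustrative, not from the study's code):

```python
from collections import Counter

def csmf_accuracy(true_causes, pred_causes):
    """CSMF accuracy: 1 - sum_j |true_frac_j - pred_frac_j| / (2 * (1 - min_j true_frac_j)).

    Assumes every cause of interest appears at least once among the true labels.
    """
    n = len(true_causes)
    true_frac = Counter(true_causes)
    pred_frac = Counter(pred_causes)
    causes = set(true_frac) | set(pred_frac)
    abs_err = sum(abs(true_frac[c] / n - pred_frac[c] / n) for c in causes)
    min_true = min(true_frac[c] / n for c in true_frac)
    return 1 - abs_err / (2 * (1 - min_true))

def chance_corrected_concordance(true_causes, pred_causes):
    """Mean CCC across causes: for each cause, (sensitivity - 1/N) / (1 - 1/N)."""
    causes = sorted(set(true_causes))
    n_causes = len(causes)
    per_cause = []
    for c in causes:
        idx = [i for i, t in enumerate(true_causes) if t == c]
        sensitivity = sum(pred_causes[i] == c for i in idx) / len(idx)
        per_cause.append((sensitivity - 1 / n_causes) / (1 - 1 / n_causes))
    return sum(per_cause) / n_causes

# Toy example with three causes:
true = ["stroke", "stroke", "tb", "injury", "injury", "injury"]
pred = ["stroke", "tb", "tb", "injury", "stroke", "injury"]
print(csmf_accuracy(true, pred), chance_corrected_concordance(true, pred))
```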


Asia-Pacific Journal of Public Health | 2016

Use of Smartphone for Verbal Autopsy: Results From a Pilot Study in Rural China

Rohina Joshi; Rasika Rampatige; J. Sun; Liping Huang; Shu Chen; Ruijun Wu; Bruce Neal; Alan D. Lopez; Andrea Stewart; Peter T. Serina; Cong Li; Jing Zhang; Jianxin Zhang; Yuhong Zhang; Lijing L. Yan

Traditionally, verbal autopsies (VA) are collected on paper-based questionnaires and reviewed by physicians for cause of death assignment, a process that is resource intensive and time consuming. The Population Health Metrics Research Consortium VA questionnaire was made available as an Android-based application, and cause of death was derived using the Tariff method. Over one year, all adult deaths occurring in 48 villages in 4 counties were identified and a VA interview was conducted using the smartphone VA application. A total of 507 adult deaths were recorded and VA interviews conducted. Cardiovascular disease was the leading cause of death (35.3%), followed by injury (14.6%) and neoplasms (13.5%). The total cost of the pilot study was USD 28,835 (USD 0.42 per capita). The interviewers found it easier to conduct interviews using smartphones. The study showed that using a smartphone application for VA interviews was feasible for implementation in rural China.
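As a quick consistency check on the cost figures, the total cost and per-capita cost imply a catchment population of roughly 69,000 people and a cost of about USD 57 per completed interview; a back-of-the-envelope sketch (the derived numbers are rounded inferences from the abstract, not figures reported by the study):

```python
total_cost_usd = 28_835    # total pilot cost reported in the abstract
cost_per_capita = 0.42     # USD per capita reported in the abstract
n_interviews = 507         # adult deaths with a completed VA interview

implied_population = total_cost_usd / cost_per_capita  # ~68,655 people
cost_per_interview = total_cost_usd / n_interviews     # ~USD 56.9 per VA
print(round(implied_population), round(cost_per_interview, 1))
```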


BMC Medicine | 2015

Validating estimates of prevalence of non-communicable diseases based on household surveys: the symptomatic diagnosis study

Spencer L. James; Minerva Romero; Dolores Ramírez-Villalobos; Sara Gómez; Kelsey Pierce; Abraham D. Flaxman; Peter T. Serina; Andrea Stewart; Christopher J L Murray; Emmanuela Gakidou; Rafael Lozano; Bernardo Hernández

Background: Easy-to-collect epidemiological information is critical for the more accurate estimation of the prevalence and burden of different non-communicable diseases around the world. Current measurement is restricted by limitations in existing measurement systems in the developing world and the lack of biometry tests for non-communicable diseases. Diagnosis based on self-reported signs and symptoms ("Symptomatic Diagnosis," or SD) analyzed with computer-based algorithms may be a promising method for collecting timely and reliable information on non-communicable disease prevalence. The objective of this study was to develop and assess the performance of a symptom-based questionnaire to estimate the prevalence of non-communicable diseases in low-resource areas.

Methods: As part of the Population Health Metrics Research Consortium study, we collected 1,379 questionnaires in Mexico from individuals who suffered from a non-communicable disease diagnosed with gold-standard criteria, or who did not suffer from any of the 10 target conditions. To diagnose non-communicable diseases from the questionnaires, we selected the Tariff method, a technique originally developed for assigning causes of death in verbal autopsy. We assessed the performance of this instrument and analytical technique at the individual and population levels.

Results: Using information on health care experience, the questionnaire achieved 66.1% (95% uncertainty interval [UI], 65.6–66.5%) chance-corrected concordance with the true diagnosis of non-communicable diseases, and 0.826 (95% UI, 0.818–0.834) accuracy in its ability to estimate the fractions of different conditions. SD is also capable of outperforming the current estimation techniques for conditions estimated by questionnaire-based methods.

Conclusions: SD is a viable method for producing estimates of the prevalence of non-communicable diseases in areas with low health information infrastructure. This technology can provide higher-resolution prevalence data, more flexible data collection, and potentially individual diagnoses for certain conditions.
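The Tariff method referenced here scores each condition by summing, over a respondent's endorsed symptoms, a robust z-score of how distinctive each symptom is for that condition in training data. The sketch below shows only that core scoring idea, under simplifying assumptions (the published method adds ranking against a training set and other refinements); the array shapes and names are illustrative:

```python
import numpy as np

def tariff_matrix(X_train, y_train):
    """Simplified tariff for each (condition, symptom) pair.

    tariff = (endorsement rate for the condition - median rate across conditions) / IQR,
    i.e. a robust z-score of how distinctive a symptom is for a condition.
    X_train: (n_records, n_symptoms) 0/1 array; y_train: condition label per record.
    """
    X_train = np.asarray(X_train)
    y_train = np.asarray(y_train)
    conditions = sorted(set(y_train))
    rates = np.array([X_train[y_train == c].mean(axis=0) for c in conditions])
    p75, p25 = np.percentile(rates, [75, 25], axis=0)
    iqr = np.where(p75 > p25, p75 - p25, 1.0)  # avoid division by zero
    return conditions, (rates - np.median(rates, axis=0)) / iqr

def symptomatic_diagnosis(x, conditions, tariffs):
    """Assign the condition whose summed tariffs over endorsed symptoms is largest."""
    scores = tariffs @ np.asarray(x)  # x is the respondent's 0/1 symptom vector
    return conditions[int(np.argmax(scores))]
```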


PLOS ONE | 2017

Implementing the PHMRC shortened questionnaire: Survey duration of open and closed questions in three sites.

Abraham D. Flaxman; Andrea Stewart; Jonathan Joseph; Nurul Alam; Saidul Alam; Hafizur Rahman Chowdhury; Saman Gamage; Hebe N. Gouda; Rohina Joshi; Marilla Lucero; Meghan D Mooney; Devarsetty Praveen; Rasika Rampatige; Hazel Remolador; Diozele Sanvictores; Peter T. Serina; Peter Kim Streatfield; Veronica Tallo; Nandalal Wijesekera; Christopher J. L. Murray; Bernardo Hernández; Alan D. Lopez; Ian Riley

Background: More countries are using verbal autopsy as a part of routine mortality surveillance. The length of time required to complete a verbal autopsy interview is a key logistical consideration for planning large-scale surveillance.

Methods: We used the PHMRC shortened questionnaire to conduct verbal autopsy interviews at three sites and collected data on the length of time required to complete the interview. This instrument uses a novel checklist of keywords to capture relevant information from the open response. The open-response section was timed separately from the section consisting of closed questions.

Results: We found the median time to complete the entire interview was approximately 25 minutes and did not vary substantially by age-specific module. The median time for the open-response section was approximately 4 minutes, and 60% of interviewees mentioned at least one keyword within the open-response section.

Conclusions: The length of time required to complete the interview was short enough for large-scale routine use. The open-response section did not add a substantial amount of time and provided useful information that can be used to increase the accuracy of cause of death predictions. The novel checklist approach further reduces the burden of transcribing and translating a large amount of free text. This makes the PHMRC instrument ideal for national mortality surveillance.
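A small sketch of how the timing data described above could be summarised, assuming one record per interview with the open and closed sections timed separately (the column names and values are illustrative, not the study's data):

```python
import pandas as pd

# Hypothetical per-interview timing records; columns are placeholders.
interviews = pd.DataFrame({
    "module": ["adult", "adult", "child", "neonate"],
    "open_response_minutes": [3.5, 5.0, 4.2, 2.8],
    "closed_questions_minutes": [22.0, 19.5, 18.0, 15.5],
    "n_keywords_checked": [2, 0, 1, 3],
})
interviews["total_minutes"] = (interviews["open_response_minutes"]
                               + interviews["closed_questions_minutes"])

# Median duration overall and by age-specific module.
print(interviews["total_minutes"].median())
print(interviews.groupby("module")["total_minutes"].median())

# Share of interviews with at least one keyword ticked in the open response.
print((interviews["n_keywords_checked"] > 0).mean())
```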


The Lancet | 2013

Ensemble modelling in verbal autopsy: the Popular Voting method

Abraham D. Flaxman; Peter T. Serina; Andrea Stewart; Spencer L. James; Alireza Vahdatpour; Bernardo Hernández Prado; Rafael Lozano; Christopher J L Murray; David E. Phillips

Background: Verbal autopsy (VA) is a highly valuable tool for assessing causes of death in resource-limited settings without medically certified death certificates. The Population Health Metrics Research Consortium (PHMRC) collected 12 535 VAs in four countries for which the true cause of death was reliably known. This project led to the development of three new computer algorithms to determine cause of death from these VAs, all of which predict underlying cause of death more accurately than the status quo: physician review. Concurrently, ensemble models, or blends of well-performing models, have been shown to have favourable predictive validity and have begun to be implemented in global health metrics settings.

Methods: We developed a simple ensemble model based on the three top performing PHMRC methods: the Simplified Symptom Pattern (SSP), the Tariff, and the Random Forest (RF). This ensemble method functions at the individual-record level, examining the predicted cause of death from the three component models and selecting cause of death by a simple majority (Popular Voting). Sensitivity analyses revealed that selecting the prediction made by RF in cases where all three models disagreed was preferable, and this ensemble method was adapted accordingly.

Findings: The Popular Voting method performed better in cause-specific mortality fraction accuracy than did any individual model alone for adults, children, and neonates, and performed better in chance-corrected concordance than did any individual model except SSP in adults. The three component models disagreed in 16% of all cases, and unanimously agreed in 47% of cases.

Interpretation: As VA continues to be an effective source of data for estimating cause of death, accurate and inexpensive methods for analysing VA interview responses are increasingly important. The recent development of the three highly accurate PHMRC computational models allows for the option of a meta-model such as the ensemble introduced here. This ensemble model for VA achieves superior performance, and could be applied to other VA samples to accurately assess the relative mortality burden from a variety of diseases and injuries.

Funding: Population Health Metrics Research Consortium.
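The decision rule described above is straightforward to express in code: take the per-record predictions of the three component models, use the majority cause when at least two agree, and fall back to the Random Forest prediction when all three disagree. A minimal sketch (function and variable names are illustrative, not from the study's code):

```python
from collections import Counter

def popular_voting(ssp_pred, tariff_pred, rf_pred):
    """Ensemble cause assignment by simple majority vote across three models.

    If at least two of SSP, Tariff, and RF agree on a record, that cause wins;
    if all three disagree, fall back to the RF prediction, as the sensitivity
    analyses described in the abstract suggest.
    """
    ensemble = []
    for votes in zip(ssp_pred, tariff_pred, rf_pred):
        cause, count = Counter(votes).most_common(1)[0]
        ensemble.append(cause if count >= 2 else votes[2])  # votes[2] is RF
    return ensemble

# Illustrative use with made-up per-record predictions:
print(popular_voting(["stroke", "tb"], ["stroke", "injury"], ["aids", "pneumonia"]))
# -> ['stroke', 'pneumonia']
```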


Population Health Metrics | 2018

Collecting verbal autopsies: improving and streamlining data collection processes using electronic tablets

Abraham D. Flaxman; Andrea Stewart; Jonathan Joseph; Nurul Alam; Sayed Saidul Alam; Hafizur Rahman Chowdhury; Meghan D Mooney; Rasika Rampatige; Hazel Remolador; Diozele Sanvictores; Peter T. Serina; Peter Kim Streatfield; Veronica Tallo; Christopher J L Murray; Bernardo Hernández; Alan D. Lopez; Ian Riley

Background: There is increasing interest in using verbal autopsy to produce nationally representative population-level estimates of causes of death. However, the burden of processing a large quantity of surveys collected with paper and pencil has been a barrier to scaling up verbal autopsy surveillance. Direct electronic data capture has been used in other large-scale surveys and can be used in verbal autopsy as well, to reduce the time and cost of going from collected data to actionable information.

Methods: We collected verbal autopsy interviews using paper and pencil and using electronic tablets at two sites, and measured the cost and time required to process the surveys for analysis. From these cost and time data, we extrapolated the costs associated with conducting large-scale surveillance with verbal autopsy.

Results: We found that the median time between data collection and data entry for surveys collected on paper and pencil was approximately 3 months; for surveys collected on electronic tablets, it was less than 2 days. For small-scale surveys, the upfront cost of purchasing electronic tablets was the primary cost and resulted in a higher total cost. For large-scale surveys, the costs associated with data entry exceeded the cost of the tablets, so electronic data capture provides both a quicker and cheaper method of data collection.

Conclusions: As countries increase verbal autopsy surveillance, it is important to consider the best way to design sustainable systems for data collection. Electronic data capture has the potential to greatly reduce the time and costs associated with data collection. For the long-term, large-scale surveillance required by national vital statistics systems, electronic data capture reduces costs and allows data to be available sooner.
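The cost comparison above amounts to a simple break-even model: paper costs scale with the number of surveys through manual data entry, while tablets carry a fixed upfront cost and little marginal cost. A sketch under entirely hypothetical unit costs (none of these figures come from the study):

```python
def total_cost_paper(n_surveys, entry_cost_per_survey):
    """Paper and pencil: cost is dominated by manual data entry per survey."""
    return n_surveys * entry_cost_per_survey

def total_cost_tablet(n_surveys, n_tablets, tablet_price, per_survey_cost=0.0):
    """Electronic capture: upfront hardware cost plus a small per-survey cost."""
    return n_tablets * tablet_price + n_surveys * per_survey_cost

# Hypothetical inputs (placeholders, not the study's measured costs):
entry_cost, n_tablets, tablet_price = 2.50, 20, 150.0
for n in (500, 5_000, 50_000):
    print(n, total_cost_paper(n, entry_cost),
          total_cost_tablet(n, n_tablets, tablet_price))
# At small n the upfront tablet cost dominates; at large n paper data entry does.
```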


BMC Medicine | 2015

A shortened verbal autopsy instrument for use in routine mortality surveillance systems

Peter T. Serina; Ian Riley; Andrea Stewart; Abraham D. Flaxman; Rafael Lozano; Meghan D Mooney; Richard Luning; Bernardo Hernández; Robert E. Black; Ramesh C. Ahuja; Nurul Alam; Sayed Saidul Alam; Said M. Ali; Charles Atkinson; Abdulla H. Baqui; Hafizur Rahman Chowdhury; Lalit Dandona; Rakhi Dandona; Emily Dantzer; Gary L. Darmstadt; Vinita Das; Usha Dhingra; Arup Dutta; Wafaie W. Fawzi; Michael B Freeman; Saman Gamage; Sara Gómez; Dilip Hensman; Spencer L. James; Rohina Joshi


BMC Medicine | 2015

Improving performance of the Tariff Method for assigning causes of death to verbal autopsies

Peter T. Serina; Ian Riley; Andrea Stewart; Spencer L. James; Abraham D. Flaxman; Rafael Lozano; Bernardo Hernández; Meghan D Mooney; Richard Luning; Robert E. Black; Ramesh C. Ahuja; Nurul Alam; Sayed Saidul Alam; Said M. Ali; Charles Atkinson; Abdulla H. Baqui; Hafizur Rahman Chowdhury; Lalit Dandona; Rakhi Dandona; Emily Dantzer; Gary L. Darmstadt; Vinita Das; Usha Dhingra; Arup Dutta; Wafaie W. Fawzi; Michael K. Freeman; Sara Gómez; Hebe N. Gouda; Rohina Joshi; Henry D. Kalter


Population Health Metrics | 2016

What is the optimal recall period for verbal autopsies? Validation study based on repeat interviews in three populations

Peter T. Serina; Ian Riley; Bernardo Hernández; Abraham D. Flaxman; Devarsetty Praveen; Veronica Tallo; Rohina Joshi; Diozele Sanvictores; Andrea Stewart; Meghan D Mooney; Christopher J L Murray; Alan D. Lopez


Population Health Metrics | 2016

The paradox of verbal autopsy in cause of death assignment: symptom question unreliability but predictive accuracy

Peter T. Serina; Ian Riley; Bernardo Hernández; Abraham D. Flaxman; Devarsetty Praveen; Veronica Tallo; Rohina Joshi; Diozele Sanvictores; Andrea Stewart; Meghan D Mooney; Christopher J L Murray; Alan D. Lopez

Collaboration


Dive into Andrea Stewart's collaborations.

Top Co-Authors

Ian Riley

University of Melbourne

Rafael Lozano

University of Washington
