Publication


Featured research published by Lifeng Lin.


Journal of Statistical Software | 2017

Performing arm-based network meta-analysis in R with the pcnetmeta package

Lifeng Lin; Jing Zhang; James S. Hodges; Haitao Chu

Network meta-analysis is a powerful approach for synthesizing direct and indirect evidence about multiple treatment comparisons from a collection of independent studies. At present, the most widely used method in network meta-analysis is contrast-based, in which a baseline treatment needs to be specified in each study, and the analysis focuses on modeling relative treatment effects (typically log odds ratios). However, population-averaged treatment-specific parameters, such as absolute risks, cannot be estimated by this method without an external data source or a separate model for a reference treatment. Recently, an arm-based network meta-analysis method has been proposed, and the R package pcnetmeta provides user-friendly functions for its implementation. This package estimates both absolute and relative effects, and can handle binary, continuous, and count outcomes.
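The distinction between arm-based and contrast-based summaries can be illustrated with a minimal Python sketch (pcnetmeta itself is an R package and is not used here; the event counts below are made up for illustration): absolute risks are estimable directly from arm-level data, while the contrast-based summary is a relative quantity such as a log odds ratio.

```python
import math

# Hypothetical arm-level data from one two-arm trial (illustrative only).
events = {"treatment": 30, "control": 20}
n = {"treatment": 100, "control": 100}

# Arm-based summary: the absolute risk in each arm is directly estimable.
risk = {arm: events[arm] / n[arm] for arm in events}

# Contrast-based summary: the log odds ratio of treatment vs. control.
def log_odds(p):
    return math.log(p / (1 - p))

log_or = log_odds(risk["treatment"]) - log_odds(risk["control"])

print(risk)               # absolute risks per arm
print(round(log_or, 3))   # relative effect (log odds ratio)
```

A contrast-based analysis pools only quantities like `log_or` across studies, which is why absolute risks require external information; an arm-based analysis models the arm-level risks themselves.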


Biometrics | 2017

Alternative measures of between-study heterogeneity in meta-analysis: Reducing the impact of outlying studies

Lifeng Lin; Haitao Chu; James S. Hodges

Meta-analysis has become a widely used tool to combine results from independent studies. The collected studies are homogeneous if they share a common underlying true effect size; otherwise, they are heterogeneous. A fixed-effect model is customarily used when the studies are deemed homogeneous, while a random-effects model is used for heterogeneous studies. Assessing heterogeneity in meta-analysis is critical for model selection and decision making. Ideally, if heterogeneity is present, it should permeate the entire collection of studies, instead of being limited to a small number of outlying studies. Outliers can have a great impact on conventional measures of heterogeneity and on the conclusions of a meta-analysis. However, no widely accepted guidelines exist for handling outliers. This article proposes several new heterogeneity measures. In the presence of outliers, the proposed measures are less affected than the conventional ones. The performance of the proposed and conventional heterogeneity measures is compared theoretically, by studying their asymptotic properties, and empirically, using simulations and case studies.
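The outlier sensitivity of the conventional measures can be seen directly from Cochran's Q and Higgins' I² (the alternative measures proposed in the paper are not reproduced here). A small numerical sketch with hypothetical effect sizes, where a single outlying study drives nearly all of the measured heterogeneity:

```python
import numpy as np

# Illustrative effect sizes and within-study variances (hypothetical data);
# the last study is an outlier.
y = np.array([0.10, 0.12, 0.08, 0.11, 0.90])
v = np.array([0.02, 0.02, 0.02, 0.02, 0.02])

def q_and_i2(y, v):
    """Cochran's Q statistic and Higgins' I^2, the conventional measures."""
    w = 1.0 / v
    mu_hat = np.sum(w * y) / np.sum(w)   # fixed-effect pooled estimate
    q = np.sum(w * (y - mu_hat) ** 2)
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100    # percent of variation beyond chance
    return q, i2

q_all, i2_all = q_and_i2(y, v)           # all five studies
q_noout, i2_noout = q_and_i2(y[:-1], v[:-1])  # outlier removed

print(round(i2_all, 1), round(i2_noout, 1))   # 84.3 0.0
```

One outlying study moves I² from 0% to above 80%, which is the instability the proposed alternative measures are designed to reduce.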


Biometrika | 2016

An adaptive two-sample test for high-dimensional means

Gongjun Xu; Lifeng Lin; Peng Wei; Wei Pan

Several two-sample tests for high-dimensional data have been proposed recently, but they are powerful only against certain alternative hypotheses. In practice, since the true alternative hypothesis is unknown, it is unclear how to choose a powerful test. We propose an adaptive test that maintains high power across a wide range of situations and study its asymptotic properties. Its finite-sample performance is compared with that of existing tests. We apply it and other tests to detect possible associations between bipolar disease and a large number of single nucleotide polymorphisms on each chromosome based on data from a genome-wide association study. Numerical studies demonstrate the superior performance and high power of the proposed test across a wide spectrum of applications.
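The adaptive idea, combining statistics that are powerful against different alternatives and calibrating their minimum p-value, can be sketched with a toy permutation test. This is an illustration of the general strategy only, not the test proposed in the paper; the data and the two candidate statistics below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical samples: 30 observations each, 200 features, with a
# sparse mean shift in the first 5 features of the second group.
n_obs, p = 30, 200
x = rng.normal(size=(n_obs, p))
y = rng.normal(size=(n_obs, p))
y[:, :5] += 2.0

def candidate_stats(a, b):
    """Two statistics powerful against different alternatives:
    a sum-type statistic (dense signals) and a max-type one (sparse)."""
    d = a.mean(axis=0) - b.mean(axis=0)
    return np.array([np.sum(d ** 2), np.max(np.abs(d))])

def adaptive_pvalue(x, y, n_perm=200, rng=rng):
    """Toy adaptive test: take the minimum permutation p-value over the
    candidate statistics and calibrate that minimum by permutation."""
    obs = candidate_stats(x, y)
    pooled = np.vstack([x, y])
    n = len(x)
    perm = np.empty((n_perm, len(obs)))
    for b in range(n_perm):
        idx = rng.permutation(len(pooled))
        perm[b] = candidate_stats(pooled[idx[:n]], pooled[idx[n:]])
    # per-statistic permutation p-values for the observed data
    p_obs = (1 + (perm >= obs).sum(axis=0)) / (1 + n_perm)
    # p-value of each permuted statistic against the same null draws
    p_each = np.array([(perm >= perm[b]).sum(axis=0)
                       for b in range(n_perm)]) / n_perm
    # calibrate the minimum p-value itself
    p_adapt = (1 + (p_each.min(axis=1) <= p_obs.min()).sum()) / (1 + n_perm)
    return p_adapt

p_val = adaptive_pvalue(x, y)
print(p_val)
```

Because the minimum is taken over candidate statistics and then recalibrated, the combined test tracks whichever candidate is powerful for the (unknown) alternative, at a modest cost in power relative to the best single test.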


Epidemiology | 2016

Sensitivity to excluding treatments in network meta-analysis

Lifeng Lin; Haitao Chu; James S. Hodges

Network meta-analysis of randomized controlled trials is increasingly used to combine both direct evidence comparing treatments within trials and indirect evidence comparing treatments across different trials. When the outcome is binary, the commonly used contrast-based network meta-analysis methods focus on relative treatment effects, such as odds ratios comparing two treatments. As shown in a recent report, when using contrast-based network meta-analysis, the impact of excluding a treatment in the network can be substantial, suggesting a methodological limitation. In addition, relative treatment effects are sometimes not sufficient for patients to make decisions. For example, it can be challenging for patients to trade off efficacy and safety for two drugs if they only know the relative effects, not the absolute effects. A recently proposed arm-based network meta-analysis, based on a missing-data framework, provides an alternative approach. It focuses on estimating population-averaged treatment-specific absolute effects. This article examines the influence of treatment exclusion empirically using 14 published network meta-analyses, for both arm- and contrast-based approaches. The difference between these two approaches is substantial, and it is almost entirely due to single-arm trials. When a treatment is removed from a contrast-based network meta-analysis, it is necessary to exclude other treatments in two-arm studies that investigated the excluded treatment; such exclusions are not necessary in arm-based network meta-analysis, leading to a substantial gain in performance.


Journal of General Internal Medicine | 2018

Empirical Comparison of Publication Bias Tests in Meta-Analysis

Lifeng Lin; Haitao Chu; Mohammad Hassan Murad; Chuan Hong; Zhiyong Qu; Stephen R. Cole; Yong Chen

Background: Decision makers rely on meta-analytic estimates to trade off benefits and harms. Publication bias impairs the validity and generalizability of such estimates. The performance of various statistical tests for publication bias has largely been compared using simulation studies and has not been systematically evaluated in empirical data.

Methods: This study compares seven commonly used publication bias tests (Begg's rank test, the trim-and-fill method, and Egger's, Tang's, Macaskill's, Deeks', and Peters' regression tests) based on 28,655 meta-analyses available in the Cochrane Library.

Results: Egger's regression test detected publication bias more frequently than the other tests (15.7% in meta-analyses of binary outcomes and 13.5% in meta-analyses of non-binary outcomes). The proportion of statistically significant publication bias tests was greater for larger meta-analyses, especially for Begg's rank test and the trim-and-fill method. The agreement among Tang's, Macaskill's, Deeks', and Peters' regression tests for binary outcomes was moderately strong (most κ's were around 0.6). Tang's and Deeks' tests had fairly similar performance (κ > 0.9). The agreement among Begg's rank test, the trim-and-fill method, and Egger's regression test was weak or moderate (κ < 0.5).

Conclusions: Given the relatively low agreement between many publication bias tests, meta-analysts should not rely on a single test and may apply multiple tests with various assumptions. Non-statistical approaches to evaluating publication bias (e.g., searching clinical trials registries, records of drug approving agencies, and scientific conference proceedings) remain essential.
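Of the tests compared, Egger's regression test is simple enough to sketch: regress the standardized effect on precision and check whether the intercept departs from zero. A minimal Python version, with hypothetical data constructed to exhibit small-study asymmetry (the other six tests are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical meta-analysis with small-study effects: studies with
# larger standard errors report systematically larger effects.
se = np.array([0.05, 0.10, 0.15, 0.20, 0.30, 0.40, 0.50])
y = 0.20 + 1.5 * se + rng.normal(0.0, 0.02, size=se.size)

def egger_test(y, se):
    """Egger's regression test: regress the standardized effect y/se on
    precision 1/se; an intercept far from zero suggests funnel-plot
    asymmetry consistent with publication bias."""
    z = y / se
    prec = 1.0 / se
    X = np.column_stack([np.ones_like(prec), prec])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / (n - k)              # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)         # coefficient covariance
    t_intercept = beta[0] / np.sqrt(cov[0, 0])    # t statistic for intercept
    return beta[0], t_intercept

intercept, t_stat = egger_test(y, se)
print(round(intercept, 2), round(t_stat, 2))
```

In an unbiased funnel, effect size is unrelated to standard error and the intercept is near zero; here the built-in small-study trend produces a clearly nonzero intercept with a large t statistic.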


BMJ Evidence-Based Medicine | 2018

The effect of publication bias magnitude and direction on the certainty in evidence

Mohammad Hassan Murad; Haitao Chu; Lifeng Lin; Zhen Wang

Publication bias occurs when studies with statistically significant results have an increased likelihood of being published. Publication bias is commonly associated with an inflated treatment effect, which lowers decision makers' certainty about the evidence. In this guide, we propose that systematic reviewers and decision makers consider the direction and magnitude of publication bias, as opposed to just a binary determination of its presence, before lowering their certainty in the evidence. The bias may not always act in the direction that exaggerates the treatment effect, and bias of trivial magnitude may not affect the decision at hand. Various statistical approaches are available to determine the direction and magnitude of publication bias.


Research Synthesis Methods | 2018

Bayesian multivariate meta-analysis of multiple factors

Lifeng Lin; Haitao Chu

In medical sciences, a disease condition is typically associated with multiple risk and protective factors. Although many studies report results for multiple factors, nearly all meta-analyses synthesize the association between each factor and the disease condition of interest separately. The collected studies usually report different subsets of factors, and the results from separate analyses of multiple factors may not be comparable because each analysis may use a different subpopulation. This can affect the selection of the most important factors when designing a multifactor intervention program. This article proposes a new concept, multivariate meta-analysis of multiple factors (MVMA-MF), to synthesize all available factors simultaneously. By borrowing information across factors, MVMA-MF can improve statistical efficiency and reduce bias compared with separate analyses when factors are missing not at random. As within-study correlations between factors are commonly unavailable from published articles, we use a Bayesian hybrid model to perform MVMA-MF, which effectively accounts for both within- and between-study correlations. The performance of MVMA-MF and the conventional methods is compared using simulations and an application to a pterygium dataset consisting of 29 studies on 8 risk factors.


Biometrics | 2018

Quantifying publication bias in meta-analysis

Lifeng Lin; Haitao Chu


Quantitative Marketing and Economics (QME) | 2018

Cross channel effects of search engine advertising on brick & mortar retail sales: Meta analysis of large scale field experiments on Google.com

Kirthi Kalyanam; John McAteer; Jonathan Marek; James A Hodges; Lifeng Lin


Journal of Clinical Epidemiology | 2018

Borrowing of strength from indirect evidence in 40 network meta-analyses

Lifeng Lin; Aiwen Xing; Michael J. Kofler; Mohammad Hassan Murad

Collaboration


Top co-authors of Lifeng Lin:

Haitao Chu (University of Minnesota)

Zhiyong Qu (Beijing Normal University)

Gongjun Xu (University of Michigan)

Peng Wei (University of Texas MD Anderson Cancer Center)