Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Yaohua Hu is active.

Publication


Featured research published by Yaohua Hu.


Methods | 2014

Inferring gene regulatory networks by integrating ChIP-seq/chip and transcriptome data via LASSO-type regularization methods

Jing Qin; Yaohua Hu; Feng Xu; Hari Krishna Yalamanchili; Junwen Wang

Inferring gene regulatory networks from gene expression data at the whole-genome level is still an arduous challenge, especially in higher organisms where the number of genes is large but the number of experimental samples is small. It is reported that the accuracy of current methods at genome scale drops significantly from Escherichia coli to Saccharomyces cerevisiae due to the increase in the number of genes. This limits the applicability of current methods to more complex genomes, like human and mouse. The least absolute shrinkage and selection operator (LASSO) is widely used for gene regulatory network inference from gene expression profiles. However, the accuracy of LASSO on large genomes is not satisfactory. In this study, we apply two extended models of LASSO, the L0 and L1/2 regularization models, to infer gene regulatory networks from both high-throughput gene expression data and transcription factor binding data in mouse embryonic stem cells (mESCs). We find that both the L0 and L1/2 regularization models significantly outperform LASSO in network inference. Incorporating interactions between transcription factors and their targets remarkably improves the prediction accuracy. The current study demonstrates the efficiency and applicability of these two models for gene regulatory network inference from integrative omics data in large genomes. The two models will help biologists study the gene regulation of higher model organisms at a genome-wide scale.
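
To make the regression step concrete, here is a minimal sketch of LASSO-based network inference on synthetic data. It uses scikit-learn's plain Lasso as a stand-in (the paper's L0 and L1/2 penalties are not available in scikit-learn) and omits the ChIP-seq/chip prior the paper uses to restrict candidate regulators; all names and sizes are illustrative.

```python
# Minimal sketch of regression-based network inference: each target
# gene's expression is regressed on transcription factor (TF)
# expression, and nonzero coefficients become candidate regulatory
# edges. Plain LASSO stands in for the paper's L0 and L1/2 penalties
# (neither is available in scikit-learn); data are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_tfs, n_genes = 30, 20, 100            # few samples, many genes
tf_expr = rng.normal(size=(n_samples, n_tfs))      # TF expression profiles
gene_expr = rng.normal(size=(n_samples, n_genes))  # target gene profiles

edges = {}
for g in range(n_genes):
    model = Lasso(alpha=0.1, max_iter=10_000)      # alpha tunes sparsity
    model.fit(tf_expr, gene_expr[:, g])
    regulators = np.flatnonzero(model.coef_)       # TFs with nonzero weight
    if regulators.size:
        edges[g] = regulators

print(f"predicted regulators for {len(edges)} of {n_genes} genes")
```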


SIAM Journal on Optimization | 2016

On Convergence Rates of Linearized Proximal Algorithms for Convex Composite Optimization with Applications

Yaohua Hu; Chong Li; Xiaoqi Yang

In the present paper, we investigate a linearized proximal algorithm (LPA) for solving a convex composite optimization problem. Each iteration of the LPA is a proximal minimization of the convex composite function with the inner function being linearized at the current iterate. The LPA has the attractive computational advantage that the solution of each subproblem is a singleton, which avoids the difficulty as in the Gauss–Newton method (GNM) of finding a solution with minimum norm among the set of minima of its subproblem, while still maintaining the same local convergence rate as that of the GNM. Under the assumptions of local weak sharp minima of order p (p ∈ [1, 2]) and a quasi-regularity condition, we establish a local superlinear convergence rate for the LPA. We also propose a globalization strategy for the LPA based on a backtracking line-search and an inexact version of the LPA. We further apply the LPA to solve a (possibly nonconvex) feasibility problem, as well as a sensor network localization problem.
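
The structure of the subproblem is easiest to see in a concrete special case. The sketch below is not the paper's algorithm: it fixes the outer convex function to f = ||·||², for which the linearized proximal subproblem has a closed-form solution (a Levenberg–Marquardt-type step), and runs it on an invented two-dimensional inner map.

```python
# Sketch of a linearized proximal iteration for min f(c(x)) in the
# special case f = ||.||^2. Linearizing c at x_k, the subproblem
#   min_d ||c(x_k) + J_k d||^2 + (1/(2*lam)) * ||d||^2
# has the closed-form solution below (a Levenberg-Marquardt-type step).
# No line search; a fixed iteration count keeps the sketch short.
import numpy as np

def lpa_squared_norm(c, jac, x0, lam=1.0, iters=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = c(x), jac(x)
        # normal equations of the regularized linearized subproblem
        H = J.T @ J + np.eye(x.size) / (2.0 * lam)
        x = x + np.linalg.solve(H, -J.T @ r)
    return x

# invented inner map: c(x) = (x0^2 + x1 - 1, x0 + x1^2 - 1)
c = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] + x[1]**2 - 1.0])
jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(lpa_squared_norm(c, jac, x0=[2.0, 2.0]))
```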


Numerical Functional Analysis and Optimization | 2015

A Subgradient Method Based on Gradient Sampling for Solving Convex Optimization Problems

Yaohua Hu; Chee-Khian Sim; X. Q. Yang

Based on the gradient sampling technique, we present a subgradient algorithm to solve the nondifferentiable convex optimization problem with an extended real-valued objective function. A feature of our algorithm is the approximation of a subgradient at a point via random sampling of (relative) gradients at nearby points, and then taking convex combinations of these (relative) gradients. We prove that our algorithm converges to an optimal solution with probability 1. Numerical results demonstrate that our algorithm performs favorably compared with existing subgradient algorithms on the applications considered.
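
A rough sketch of the sampling idea on a toy problem follows: the objective f(x) = ||Ax − b||₁, the uniform averaging of sampled gradients, and the stepsize rule are all assumptions made for illustration, not the paper's precise scheme.

```python
# Sketch of the gradient-sampling idea for the nonsmooth convex
# objective f(x) = ||Ax - b||_1, differentiable almost everywhere with
# gradient A^T sign(Ax - b). A subgradient at x is approximated by a
# convex combination (here a plain average) of gradients sampled at
# random nearby points; a diminishing stepsize drives convergence.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(40, 10))
b = rng.normal(size=40)

def sampled_subgradient(x, eps=1e-3, n_samples=20):
    grads = [A.T @ np.sign(A @ (x + eps * rng.normal(size=x.size)) - b)
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)   # uniform convex combination

x = np.zeros(10)
for k in range(1, 501):
    x -= (1.0 / k) * sampled_subgradient(x)   # diminishing stepsize
print("final objective:", np.abs(A @ x - b).sum())
```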


European Journal of Operational Research | 2015

Inexact subgradient methods for quasi-convex optimization problems

Yaohua Hu; X. Q. Yang; Chee-Khian Sim

In this paper, we consider a generic inexact subgradient algorithm to solve a nondifferentiable quasi-convex constrained optimization problem. The inexactness stems from computation errors and noise, which come from practical considerations and applications. Assuming that the computational errors and noise are deterministic and bounded, we study the effect of the inexactness on the subgradient method when the constraint set is compact or the objective function has a set of generalized weak sharp minima. In both cases, using the constant and diminishing stepsize rules, we describe convergence results in both objective values and iterates, and finite convergence to approximate optimality. We also investigate efficiency estimates of iterates and apply the inexact subgradient algorithm to solve the Cobb–Douglas production efficiency problem. The numerical results verify our theoretical analysis and show the high efficiency of our proposed algorithm, especially for large-scale problems.
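
As a hedged illustration of the setting, the sketch below minimizes the negative of a Cobb–Douglas-type production efficiency ratio (a quasi-convex function on the positive orthant) with a projected, normalized subgradient step. The finite-difference gradient stands in for the bounded inexact subgradient; the objective and all constants are invented and are not the paper's test problem.

```python
# Sketch of an inexact, normalized subgradient step for a quasi-convex
# problem: minimize f(x) = -(x1^0.3 * x2^0.5) / (1 + x1 + 2*x2), the
# negative of a Cobb-Douglas-type production efficiency ratio, which is
# quasi-convex on the positive orthant. A finite-difference gradient
# plays the role of the bounded inexact subgradient; all constants are
# invented for illustration.
import numpy as np

def f(x):
    return -(x[0]**0.3 * x[1]**0.5) / (1.0 + x[0] + 2.0 * x[1])

def fd_grad(x, h=1e-4):             # approximate ("inexact") gradient
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

lo, hi = 0.1, 10.0                  # compact constraint box
x = np.array([1.0, 1.0])
for k in range(1, 2001):
    g = fd_grad(x)
    ng = np.linalg.norm(g)
    if ng < 1e-12:                  # approximately stationary
        break
    x = np.clip(x - (1.0 / k) * g / ng, lo, hi)  # diminishing stepsize
print("x:", x, "efficiency:", -f(x))
```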


Optimization | 2017

A new linear convergence result for the iterative soft thresholding algorithm

Lufang Zhang; Yaohua Hu; Chong Li; Jen-Chih Yao

The iterative soft thresholding algorithm (ISTA) is one of the most popular optimization algorithms for solving the L1 regularized least squares problem, and its linear convergence has been investigated under the assumption of the finite basis injectivity property or a strict sparsity pattern. In this paper, we consider the L1 regularized least squares problem in finite- or infinite-dimensional Hilbert space, introduce a weaker notion of orthogonal sparsity pattern (OSP) and establish the Q-linear convergence of ISTA under the assumption of OSP. Examples are provided to illustrate cases where the linear convergence of ISTA can be established only by our result, but cannot be ensured by any existing result in the literature.
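
ISTA itself is compact enough to sketch directly. The following minimal implementation for min ½||Ax − b||² + λ||x||₁ uses the standard gradient-plus-soft-thresholding iteration with stepsize 1/||A||²; the synthetic data and parameter choices are illustrative only.

```python
# Minimal ISTA sketch for min 0.5*||Ax - b||^2 + lam*||x||_1: a
# gradient step on the smooth part followed by soft thresholding,
# with stepsize t = 1/L where L = ||A||_2^2 is the Lipschitz constant
# of the gradient of the smooth part.
import numpy as np

def soft(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, b, lam=0.5, iters=500):
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, spectral norm squared
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - t * A.T @ (A @ x - b), t * lam)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(50, 200))
x_true = np.zeros(200)
x_true[:5] = 1.0                          # sparse ground truth
b = A @ x_true
print("recovered support:", np.flatnonzero(ista(A, b)))
```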


SIAM Journal on Optimization | 2013

Quasi-Slater and Farkas–Minkowski Qualifications for Semi-infinite Programming with Applications

Chong Li; Xiaopeng Zhao; Yaohua Hu

The well-known Farkas–Minkowski (FM) type qualification plays an important role in linear semi-infinite programming and has been extensively developed by many authors in establishing optimality conditions, duality, and stability for semi-infinite programming. In this paper, we introduce the concept of the quasi-Slater condition for a semi-infinite convex inequality system and show that the Slater-type conditions imply the FM qualification under an appropriate continuity assumption on the set-valued mapping…
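
The paper's subject is constraint qualifications, but the underlying problem class is easy to illustrate: a linear semi-infinite program has finitely many variables and one constraint for every index t in an infinite set T. The sketch below solves an invented instance by the common (and crude) device of discretizing T; it illustrates only the problem class, not the paper's FM or quasi-Slater analysis.

```python
# Illustration of a linear semi-infinite program: finitely many
# variables, a constraint for every t in [0, 1]. Discretizing the
# index set T reduces it to an ordinary LP.
#   maximize x1 + x2  s.t.  x1*t + x2*exp(t) <= 1  for all t in [0, 1],
#   x1, x2 >= 0  (nonnegativity keeps this invented instance bounded).
import numpy as np
from scipy.optimize import linprog

t_grid = np.linspace(0.0, 1.0, 200)          # discretized index set T
A_ub = np.column_stack([t_grid, np.exp(t_grid)])
b_ub = np.ones_like(t_grid)
res = linprog(c=[-1.0, -1.0], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])
print("optimal x:", res.x)
```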


Journal of Global Optimization | 2015

A box-constrained differentiable penalty method for nonlinear complementarity problems

Boshi Tian; Yaohua Hu; X. Q. Yang


Quantitative Biology | 2016

Applications of integrative OMICs approaches to gene regulation studies

Jing Qin; Bin Yan; Yaohua Hu; Panwen Wang; Junwen Wang


Optimization | 2018

Iterative positive thresholding algorithm for non-negative sparse optimization

Lufang Zhang; Yaohua Hu; Carisa Kwok Wai Yu; Jinhua Wang


Optimization | 2018

Abstract convergence theorem for quasi-convex optimization problems with applications

Carisa Kwok Wai Yu; Yaohua Hu; Xiaoqi Yang; Siu-Kai Choy


Collaboration


Dive into Yaohua Hu's collaborations.

Top Co-Authors

Xiaoqi Yang, Hong Kong Polytechnic University
Jing Qin, University of Hong Kong
X. Q. Yang, Hong Kong Polytechnic University
Carisa Kwok Wai Yu, Hang Seng Management College
Bin Yan, University of Hong Kong
Chee-Khian Sim, Hong Kong Polytechnic University
Le Zhang, Hong Kong Polytechnic University
Weihua Gu, Hong Kong Polytechnic University