Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Beilun Wang is active.

Publication


Featured research published by Beilun Wang.


Pacific Symposium on Biocomputing | 2017

Deep Motif Dashboard: Visualizing and Understanding Genomic Sequences Using Deep Neural Networks.

Jack Lanchantin; Ritambhara Singh; Beilun Wang; Yanjun Qi

Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding site (TFBS) classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard), which provides a suite of visualization strategies to extract motifs, or sequence patterns, from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method computes a test sequence's saliency map, which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, because recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating a model's prediction score over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that the convolutional-recurrent architecture performs best among the three. The visualization techniques show that the CNN-RNN makes predictions by modeling both motifs and the dependencies among them.
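The saliency-map idea described above reduces to a single backward pass through a trained network. The sketch below is a minimal PyTorch illustration, not the paper's implementation: `model` is assumed to be any trained classifier that maps a one-hot encoded DNA sequence of shape (1, length, 4) to a scalar TFBS score, and the name `saliency_map` is illustrative.

```python
import torch

def saliency_map(model, one_hot_seq):
    """Per-nucleotide importance scores via first-order derivatives.

    one_hot_seq: float tensor of shape (seq_len, 4), one-hot encoded DNA
                 (columns A, C, G, T).
    model:       assumed callable returning a scalar TFBS score for a
                 batch of shape (1, seq_len, 4).
    """
    x = one_hot_seq.clone().detach().unsqueeze(0).requires_grad_(True)
    score = model(x).squeeze()      # scalar prediction for the positive class
    score.backward()                # d(score)/d(input) via autograd
    # Gradient at the nucleotide actually present at each position,
    # taken in absolute value as that position's importance.
    saliency = (x.grad * x.detach()).sum(dim=-1).abs().squeeze(0)
    return saliency                 # shape: (seq_len,)
```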


European Conference on Machine Learning | 2017

GaKCo: A Fast Gapped k-mer String Kernel Using Counting.

Ritambhara Singh; Arshdeep Sekhon; Kamran Kowsari; Jack Lanchantin; Beilun Wang; Yanjun Qi

String Kernel (SK) techniques, especially those using gapped k-mers as features (gk), have obtained great success in classifying sequences like DNA, protein, and text. However, the state-of-the-art gk-SK runs extremely slow when we increase the dictionary size (Σ) or allow more mismatches (M). This is because current gk-SK uses a trie-based algorithm to calculate co-occurrence of mismatched substrings, resulting in a time cost proportional to O(Σ^M). We propose a fast algorithm for calculating Gapped k-mer Kernel using Counting (GaKCo).
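For intuition about the gapped k-mer features mentioned above, the sketch below computes them by brute force: each length-g window contributes one feature per choice of k informative positions (the remaining g-k positions act as gaps), and the kernel value between two sequences is the dot product of their feature count vectors. The function names (`gapped_kmer_counts`, `gapped_kmer_kernel`) and example parameters are hypothetical; this is a naive illustration of the feature space, not GaKCo's fast counting algorithm.

```python
from collections import Counter
from itertools import combinations

def gapped_kmer_counts(seq, g, k):
    """Count gapped k-mer features: length-g windows with k informative
    positions; the other g-k positions act as wildcards (gaps)."""
    counts = Counter()
    for start in range(len(seq) - g + 1):
        window = seq[start:start + g]
        for informative in combinations(range(g), k):
            # Feature = which positions are informative plus the
            # characters found at them in this window.
            feature = (informative, tuple(window[i] for i in informative))
            counts[feature] += 1
    return counts

def gapped_kmer_kernel(seq_x, seq_y, g, k):
    """Kernel value = dot product of the two gapped k-mer count vectors."""
    cx = gapped_kmer_counts(seq_x, g, k)
    cy = gapped_kmer_counts(seq_y, g, k)
    return sum(cx[f] * cy[f] for f in cx.keys() & cy.keys())

# Example: small DNA sequences, g=4 windows with k=2 informative positions.
print(gapped_kmer_kernel("ACGTACGT", "ACGTTGCA", g=4, k=2))
```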


Machine Learning | 2017

A constrained ℓ1 minimization approach for estimating multiple sparse Gaussian or nonparanormal graphical models

Beilun Wang; Ritambhara Singh; Yanjun Qi


arXiv: Learning | 2017

A Theoretical Framework for Robustness of (Deep) Classifiers against Adversarial Samples

Beilun Wang; Ji Gao; Yanjun Qi


Archive | 2016

A Theoretical Framework for Robustness of (Deep) Classifiers Under Adversarial Noise.

Beilun Wang; Ji Gao; Yanjun Qi


arXiv: Learning | 2017

DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples

Ji Gao; Beilun Wang; Zeming Lin; Weilin Xu; Yanjun Qi


arXiv: Learning | 2017

DeepMask: Masking DNN Models for robustness against adversarial samples.

Ji Gao; Beilun Wang; Yanjun Qi


International Conference on Machine Learning | 2018

A Fast and Scalable Joint Estimator for Integrating Additional Knowledge in Learning Multiple Related Sparse Gaussian Graphical Models

Beilun Wang; Arshdeep Sekhon; Yanjun Qi


International Conference on Artificial Intelligence and Statistics | 2018

Fast and Scalable Learning of Sparse Changes in High-Dimensional Gaussian Graphical Model Structure

Beilun Wang; Arshdeep Sekhon; Yanjun Qi


International Conference on Artificial Intelligence and Statistics | 2017

A Fast and Scalable Joint Estimator for Learning Multiple Related Sparse Gaussian Graphical Models

Beilun Wang; Ji Gao; Yanjun Qi


Collaboration


Dive into Beilun Wang's collaborations.

Top Co-Authors

Yanjun Qi
University of Virginia

Ji Gao
University of Virginia

Kamran Kowsari
George Washington University

Zeming Lin
University of Virginia