
Publication


Featured research published by Trang Pham.


Knowledge Discovery and Data Mining | 2016

DeepCare: A Deep Dynamic Memory Model for Predictive Medicine

Trang Pham; Truyen Tran; Dinh Q. Phung; Svetha Venkatesh

Personalized predictive medicine necessitates modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, recorded in electronic medical records, are episodic and irregular in time. We introduce DeepCare, a deep dynamic neural network that reads medical records and predicts future medical outcomes. At the data level, DeepCare models patient health state trajectories with explicit memory of illness. Built on Long Short-Term Memory (LSTM), DeepCare introduces time parameterizations to handle irregular timing by moderating the forgetting and consolidation of illness memory. DeepCare also incorporates medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling and readmission prediction in diabetes, a chronic disease with a large economic burden. The results show improved modeling and risk prediction accuracy.
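
The time-moderated forgetting described in the abstract can be sketched as follows. This is an illustrative simplification, not the authors' code: it assumes the forget gate is attenuated by an exponential decay in the inter-visit gap (`decay_rate` is a hypothetical parameter), whereas the paper's time parameterizations are richer.

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-x))

def time_decayed_forget(forget_gate, delta_t, decay_rate=0.1):
    """Attenuate an LSTM forget-gate activation by the elapsed time
    between two visits: the longer the gap, the more of the stored
    illness memory is allowed to fade."""
    return forget_gate * np.exp(-decay_rate * delta_t)

f = sigmoid(0.5)                                 # forget-gate activation for one cell
recent = time_decayed_forget(f, delta_t=1.0)     # next visit 1 day later
distant = time_decayed_forget(f, delta_t=30.0)   # next visit 30 days later
```

With a 30-day gap the effective forget gate is much smaller than with a 1-day gap, so old illness memory contributes less to the next state update.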


Journal of Biomedical Informatics | 2017

Predicting healthcare trajectories from medical records: a deep learning approach.

Trang Pham; Truyen Tran; Dinh Q. Phung; Svetha Venkatesh

Personalized predictive medicine necessitates the modeling of patient illness and care processes, which inherently have long-term temporal dependencies. Healthcare observations, stored in electronic medical records, are episodic and irregular in time. We introduce DeepCare, an end-to-end deep dynamic neural network that reads medical records, stores previous illness history, infers current illness states and predicts future medical outcomes. At the data level, DeepCare represents care episodes as vectors and models patient health state trajectories by the memory of historical records. Built on Long Short-Term Memory (LSTM), DeepCare introduces methods to handle irregularly timed events by moderating the forgetting and consolidation of memory. DeepCare also explicitly models medical interventions that change the course of illness and shape future medical risk. Moving up to the health state level, historical and present health states are then aggregated through multiscale temporal pooling, before passing through a neural network that estimates future outcomes. We demonstrate the efficacy of DeepCare for disease progression modeling, intervention recommendation, and future risk prediction. On two important cohorts with heavy social and economic burden - diabetes and mental health - the results show improved prediction accuracy.


International Conference on Pattern Recognition | 2016

Faster training of very deep networks via p-norm gates

Trang Pham; Truyen Tran; Dinh Q. Phung; Svetha Venkatesh

A major contributing factor to the recent advances in deep neural networks is structural units that let sensory information and gradients propagate easily. Gating is one such structure that acts as a flow control. Gates are employed in many recent state-of-the-art recurrent models such as LSTM and GRU, and feedforward models such as Residual Nets and Highway Networks. This enables learning in very deep networks with hundreds of layers and helps achieve record-breaking results in vision (e.g., ImageNet with Residual Nets) and NLP (e.g., machine translation with GRU). However, there is limited work in analysing the role of gating in the learning process. In this paper, we propose a flexible p-norm gating scheme, which allows user-controllable flow and, as a consequence, improves the learning speed. This scheme subsumes other existing gating schemes, including those in GRU, Highway Networks and Residual Nets as special cases. Experiments on large sequence and vector datasets demonstrate that the proposed gating scheme helps improve the learning speed significantly without extra overhead.
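
A minimal sketch of such a gating scheme, assuming the transform and carry gates are tied by a p-norm constraint g**p + c**p = 1 (an assumption for illustration; the paper should be consulted for the exact formulation). With p = 1 this reduces to the familiar convex gate g*h + (1-g)*x of GRU and Highway Networks; p > 1 lets both paths pass more signal.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_norm_gate(h, x, gate_logit, p=2.0):
    """Blend the transformed signal h with the untransformed input x,
    tying the two gates so that g**p + c**p == 1.  p = 1 recovers the
    standard convex gate; larger p opens both paths wider."""
    g = sigmoid(gate_logit)
    c = (1.0 - g ** p) ** (1.0 / p)   # tied carry gate
    return g * h + c * x

convex = p_norm_gate(1.0, 1.0, 0.3, p=1.0)   # ordinary gating: weights sum to 1
wide = p_norm_gate(1.0, 1.0, 0.3, p=2.0)     # more total flow through both paths
```

The extra flow at p > 1 is what is claimed to ease gradient propagation and speed up learning in very deep stacks.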


International Conference on Software Engineering | 2018

Poster: Predicting Components for Issue Reports Using Deep Learning with Information Retrieval

Morakot Choetkiertikul; Hoa Khanh Dam; Truyen Tran; Trang Pham; Aditya K. Ghose

Assigning an issue to the correct component(s) is challenging, especially for large-scale projects which can have up to hundreds of components. We propose a prediction model which learns from historical issue reports and recommends the most relevant components for new issues. Our model uses deep learning (Long Short-Term Memory) to automatically learn semantic features representing an issue report, and combines them with traditional textual similarity features. An extensive evaluation on 142,025 issues from 11 large projects shows that our approach outperforms alternative techniques with an average 60% improvement in predictive performance.
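
The feature-combination step can be sketched as below. The feature shapes, the concatenation, and the linear output layer are illustrative stand-ins (the paper's actual architecture and feature extraction differ); only the idea of joining learned semantic features with textual-similarity features before ranking components comes from the abstract.

```python
import numpy as np

def score_components(semantic_vec, similarity_vec, W, b):
    """Concatenate LSTM-learned semantic features of an issue report with
    traditional textual-similarity features, score every project component
    with a linear layer, and return component indices ranked best-first."""
    features = np.concatenate([semantic_vec, similarity_vec])
    scores = W @ features + b      # one score per component
    return np.argsort(-scores)     # most relevant component first

# Toy example: 3 components, 2 semantic + 2 similarity features.
W = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
b = np.zeros(3)
ranking = score_components(np.array([0.2, 0.9]), np.array([0.1, 0.3]), W, b)
```

Returning a ranking rather than a single label matches the recommendation setting, where an issue may belong to several components.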


National Conference on Artificial Intelligence | 2017

Column Networks for Collective Classification

Trang Pham; Truyen Tran; Dinh Q. Phung; Svetha Venkatesh


Foundations of Software Engineering | 2016

A deep language model for software code

Hoa Khanh Dam; Truyen Tran; Trang Pham


IEEE Transactions on Software Engineering | 2018

A deep learning model for estimating story points

Morakot Choetkiertikul; Hoa Khanh Dam; Truyen Tran; Trang Pham; Aditya K. Ghose; Tim Menzies


arXiv: Machine Learning | 2017

One Size Fits Many: Column Bundle for Multi-X Learning.

Trang Pham; Truyen Tran; Svetha Venkatesh


arXiv: Software Engineering | 2018

A deep tree-based model for software defect prediction.

Hoa Khanh Dam; Trang Pham; Shien Wee Ng; Truyen Tran; John Grundy; Aditya K. Ghose; Taeksu Kim; Chul-Joo Kim


arXiv: Learning | 2018

Graph Memory Networks for Molecular Activity Prediction.

Trang Pham; Truyen Tran; Svetha Venkatesh

Collaboration


Dive into Trang Pham's collaborations.

Top Co-Authors

Hoa Khanh Dam

University of Wollongong

Shien Wee Ng

University of Wollongong
