
Publications


Featured research published by Min S. H. Aung.


Smart Materials and Structures | 2012

The pizzicato knee-joint energy harvester: characterization with biomechanical data and the effect of backpack load

Michele Pozzi; Min S. H. Aung; Meiling Zhu; Richard Jones; John Yannis Goulermas

The reduced power requirements of miniaturized electronics offer the opportunity to create devices which rely on energy harvesters for their power supply. In the case of wearable devices, human-based piezoelectric energy harvesting is particularly difficult due to the mismatch between the low frequency of human activities and the high-frequency requirements of piezoelectric transducers. We propose a piezoelectric energy harvester, to be worn on the knee-joint, that relies on the plucking technique to achieve frequency up-conversion. During a plucking action, a piezoelectric bimorph is deflected by a plectrum; when released due to loss of contact, the bimorph is free to vibrate at its resonant frequency, generating electrical energy with the highest efficiency. A prototype, featuring four PZT-5H bimorphs, was built and is here studied in a knee simulator which reproduces the gait of a human subject. Biomechanical data were collected with a marker-based motion capture system while the subject was carrying a selection of backpack loads. The paper focuses on the energy generation of the harvester and how this is affected by the backpack load. By altering the gait, the backpack load has a measurable effect on performance: at the highest load of 24 kg, a minor reduction in energy generation (7%) was observed and the output power was reduced by 10%. Both reductions are moderate enough to be practically unimportant. The average power output of the prototype is 2.06 ± 0.3 mW, which can increase significantly with further optimization.
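
The frequency up-conversion idea lends itself to a back-of-the-envelope estimate: each pluck excites a decaying oscillation at the bimorph's resonant frequency, and integrating the dissipated power over that ring-down gives a per-pluck energy. The Python sketch below is only an illustration of this reasoning, not the authors' model; the resonant frequency, damping ratio, peak voltage, load resistance and pluck rate are assumed placeholder values.

```python
import numpy as np

# Illustrative sketch (not the authors' model): after the plectrum releases the
# bimorph, the open-circuit voltage rings down as a damped sinusoid at the
# resonant frequency. Integrating V^2/R over the ring-down gives a rough
# per-pluck energy estimate. All parameter values are assumed placeholders.
f_res = 300.0     # bimorph resonant frequency [Hz] (assumed)
zeta = 0.02       # damping ratio (assumed)
V0 = 10.0         # peak voltage just after release [V] (assumed)
R_load = 100e3    # resistive load [ohm] (assumed)

t = np.linspace(0.0, 0.2, 20000)                 # 200 ms ring-down window
omega = 2.0 * np.pi * f_res
v = V0 * np.exp(-zeta * omega * t) * np.sin(omega * np.sqrt(1 - zeta**2) * t)

energy_per_pluck = np.sum(v**2 / R_load) * (t[1] - t[0])   # joules in the load
plucks_per_second = 8                                      # pluck rate (assumed)
print(f"~{energy_per_pluck * plucks_per_second * 1e3:.3f} mW average")
```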


IEEE Transactions on Neural Networks | 2009

Partial Logistic Artificial Neural Network for Competing Risks Regularized With Automatic Relevance Determination

Paulo J. G. Lisboa; Terence A. Etchells; Ian H. Jarman; Corneliu T. C. Arsene; Min S. H. Aung; Antonio Eleuteri; Azzam Taktak; Federico Ambrogi; Patrizia Boracchi; Elia Biganzoli

Time-to-event analysis is important in a wide range of applications from clinical prognosis to risk modeling for credit scoring and insurance. In risk modeling, it is sometimes necessary to make a simultaneous assessment of the hazard arising from two or more mutually exclusive factors. This paper applies Bayesian regularization with the standard approximation of the evidence to an existing neural network model for competing risks (PLANNCR), implementing automatic relevance determination (PLANNCR-ARD). The theoretical framework for the model is described and its application is illustrated with reference to local and distal recurrence of breast cancer, using the data set of Veronesi (1995).
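
For readers unfamiliar with the partial logistic approach, the sketch below illustrates the underlying idea under simplifying assumptions: survival records are expanded into one row per subject per discrete time interval, and a small network with a softmax output estimates the cause-specific hazards. It is not the authors' PLANNCR-ARD; in particular, the Bayesian automatic relevance determination is replaced by plain L2 weight decay, and the data are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Sketch of the partial logistic idea behind PLANNCR: expand survival data into
# person-period records and let a softmax network estimate, for each interval,
# the hazard of {no event, cause 1, cause 2}. ARD is replaced by L2 decay (alpha).
def expand_person_period(X, time, event):
    """event: 0 = censored, 1 = cause 1, 2 = cause 2; time: discrete interval index."""
    rows, labels = [], []
    for x, t, e in zip(X, time, event):
        for k in range(t + 1):
            rows.append(np.concatenate([x, [k]]))   # covariates + interval index
            labels.append(e if k == t else 0)       # event only in the final interval
    return np.array(rows), np.array(labels)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # synthetic covariates
time = rng.integers(0, 10, size=200)                # discretised follow-up interval
event = rng.integers(0, 3, size=200)                # 0 censored, 1/2 competing causes

Xpp, ypp = expand_person_period(X, time, event)
net = MLPClassifier(hidden_layer_sizes=(10,), alpha=1e-2, max_iter=2000, random_state=0)
net.fit(Xpp, ypp)
print(net.predict_proba(Xpp[:5]).round(3))          # per-interval cause-specific hazards
```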


IEEE Transactions on Neural Systems and Rehabilitation Engineering | 2013

Automated Detection of Instantaneous Gait Events Using Time Frequency Analysis and Manifold Embedding

Min S. H. Aung; Sibylle B. Thies; Laurence Kenney; David Howard; Ruud W. Selles; Andrew H. Findlow; John Yannis Goulermas

Accelerometry is a widely used sensing modality in human biomechanics due to its portability, non-invasiveness, and accuracy. However, difficulties lie in signal variability and interpretation in relation to biomechanical events. In walking, heel strike and toe off are the primary gait events, and their robust and accurate detection is essential for gait-related applications. This paper describes a novel and generic event detection algorithm applicable to signals from tri-axial accelerometers placed on the foot, ankle, shank or waist. Data are acquired from healthy subjects undergoing multiple walking trials on flat and inclined surfaces, as well as on smooth and tactile paving. The benchmark timings at which heel strike and toe off occur are determined using kinematic data recorded from a motion capture system. The algorithm extracts features from each of the acceleration signals using a continuous wavelet transform over a wide range of scales. A locality preserving embedding method is then applied to reduce the high dimensionality caused by the multiple scales while preserving salient features for classification. A simple Gaussian mixture model is then trained to classify each of the time samples into heel strike, toe off or no event categories. Results show good detection and temporal accuracies for different sensor locations and different walking terrains.
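
A minimal sketch of such a pipeline is given below, with stand-ins where the paper's exact components are not publicly available: PyWavelets' CWT for the wavelet features, scikit-learn's LocallyLinearEmbedding in place of the locality preserving embedding, and per-class Gaussian mixtures as the sample-wise classifier. The accelerometer signal and event labels are synthetic placeholders.

```python
import numpy as np
import pywt
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
acc = rng.normal(size=(3, 2000))            # tri-axial accelerometer signal (synthetic)
labels = rng.integers(0, 3, size=2000)      # 0 = no event, 1 = heel strike, 2 = toe off

# Continuous wavelet transform over a range of scales, per axis, one feature row per sample.
scales = np.arange(1, 33)
feats = np.vstack([pywt.cwt(axis, scales, "morl")[0] for axis in acc]).T   # (T, 3*32)

# Neighbourhood-preserving embedding as a stand-in for the paper's locality preserving method.
low_dim = LocallyLinearEmbedding(n_neighbors=10, n_components=6).fit_transform(feats)

# One Gaussian mixture per event class; classify each time sample by maximum likelihood.
gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(low_dim[labels == c])
        for c in np.unique(labels)}
scores = np.column_stack([gmms[c].score_samples(low_dim) for c in sorted(gmms)])
pred = np.array(sorted(gmms))[scores.argmax(axis=1)]
print("training agreement:", (pred == labels).mean().round(3))
```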


IEEE International Conference on Automatic Face and Gesture Recognition | 2013

Transfer learning to account for idiosyncrasy in face and body expressions

Bernardino Romera-Paredes; Min S. H. Aung; Massimiliano Pontil; Nadia Bianchi-Berthouze; Amanda C. de C. Williams; Paul J. Watson

In this paper we investigate the use of the Transfer Learning (TL) framework to extract the commonalities across a set of subjects and to learn the way each individual instantiates these commonalities, in order to model idiosyncrasy. To implement this we apply three variants of Multi Task Learning, namely Regularized Multi Task Learning (RMTL), Multi Task Feature Learning (MTFL) and Composite Multi Task Feature Learning (CMTFL). Two datasets are used; the first is a set of point-based facial expressions with annotated discrete levels of pain. The second consists of full body motion capture data taken from subjects diagnosed with chronic lower back pain. A synchronized electromyographic signal from the lumbar paraspinal muscles is taken as a pain-related behavioural indicator. We compare our approaches with Ridge Regression, a comparable model without the transfer learning property, as well as with a subtractive method for removing idiosyncrasy. The TL-based methods show statistically significant improvements in correlation coefficients between predicted model outcomes and the target values compared to baseline models. In particular, RMTL consistently outperforms all other methods; a paired t-test between RMTL and the best performing baseline method returned a maximum p-value of 2.3 × 10⁻⁴.
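
The RMTL decomposition can be illustrated compactly: each subject's (task's) weight vector is the sum of a shared component and a subject-specific deviation, with separate penalties on the two parts. The sketch below is an illustrative least-squares version fitted by gradient descent on synthetic data, not the authors' implementation; the regularization weights and learning rate are assumed.

```python
import numpy as np

# Sketch of Regularized Multi-Task Learning: w_t = w0 + v_t, where w0 captures the
# commonality across subjects and v_t the idiosyncrasy, with penalties on both.
rng = np.random.default_rng(0)
n_tasks, n_feats, n_per_task = 5, 20, 40
Xs = [rng.normal(size=(n_per_task, n_feats)) for _ in range(n_tasks)]
true_w0 = rng.normal(size=n_feats)
ys = [X @ (true_w0 + 0.3 * rng.normal(size=n_feats)) for X in Xs]   # shared + idiosyncratic truth

lam_shared, lam_task, lr = 0.1, 1.0, 5e-4
w0 = np.zeros(n_feats)
V = np.zeros((n_tasks, n_feats))
for _ in range(4000):
    g0 = 2 * lam_shared * w0
    for t, (X, y) in enumerate(zip(Xs, ys)):
        resid = X @ (w0 + V[t]) - y
        g = 2 * X.T @ resid
        g0 += g                                   # data gradient is shared by w0
        V[t] -= lr * (g + 2 * lam_task * V[t])    # task-specific deviation update
    w0 -= lr * g0                                  # shared component update

print(f"task-0 training RMSE: {np.sqrt(np.mean((Xs[0] @ (w0 + V[0]) - ys[0])**2)):.3f}")
```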


International Symposium on Neural Networks | 2007

Time-to-event analysis with artificial neural networks: An integrated analytical and rule-based study for breast cancer

Paulo J. G. Lisboa; Terence A. Etchells; Ian H. Jarman; Min S. H. Aung; Sylvie Chabaud; T. Bachelor; David Perol; Thérèse Gargi; Valérie Bourdès; Stéphane Bonnevay; Sylvie Négrier

This paper presents an analysis of censored survival data for breast cancer specific mortality and disease-free survival. There are three stages to the process, namely time-to-event modelling, risk stratification by predicted outcome and model interpretation using rule extraction. Model selection was carried out using the benchmark linear model, Cox regression, but risk staging was derived both with Cox regression and with Partial Logistic Regression Artificial Neural Networks regularised with Automatic Relevance Determination (PLANN-ARD). This analysis compares the two approaches, showing the benefit of using the neural network framework especially for patients at high risk. The neural network model also results in a smooth model of the hazard without the need for the limiting assumption of proportionality. The model predictions were verified using out-of-sample testing, with the mortality model also compared with two other prognostic models, TNG and the NPI rule model. Further verification was carried out by comparing marginal estimates of the predicted and actual cumulative hazards. It was also observed that doctors seem to treat mortality and disease-free models as equivalent, so a further analysis was performed to examine whether this is the case. The analysis was extended with automatic rule generation using Orthogonal Search Rule Extraction (OSRE). This methodology translates analytical risk scores into the language of the clinical domain, enabling direct validation of the operation of the Cox or neural network model. This paper extends the existing OSRE methodology to data sets that include continuous-valued variables.
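
The risk-staging step alone can be sketched as follows: fit the benchmark Cox model and split patients into groups by quantiles of the predicted hazard. The example uses the lifelines library and its bundled Rossi recidivism dataset as a stand-in for the breast cancer data; the PLANN-ARD network and the OSRE rule extraction are not reproduced here.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.datasets import load_rossi

# Fit the benchmark Cox model and stratify subjects by predicted-hazard quartile.
df = load_rossi()                                   # duration 'week', event 'arrest'
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")

risk = cph.predict_partial_hazard(df)               # relative risk score per subject
df["risk_group"] = pd.qcut(risk, 4, labels=["low", "mid-low", "mid-high", "high"])

# Kaplan-Meier estimate per risk group: well-separated curves indicate useful staging.
for name, grp in df.groupby("risk_group", observed=True):
    km = KaplanMeierFitter().fit(grp["week"], grp["arrest"], label=str(name))
    print(name, round(float(km.survival_function_.iloc[-1, 0]), 3))
```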


Proceedings of the 4th International Workshop on Human Behavior Understanding (Volume 8212) | 2013

MMLI: Multimodal Multiperson Corpus of Laughter in Interaction

Radoslaw Niewiadomski; Maurizio Mancini; Tobias Baur; Giovanna Varni; Harry J. Griffin; Min S. H. Aung

The aim of the Multimodal and Multiperson Corpus of Laughter in Interaction (MMLI) was to collect multimodal data of laughter with a focus on full body movements and different laughter types. It contains both induced and interactive laughs from human triads. In total we collected 500 laughter episodes from 16 participants. The data consist of 3D body position information, facial tracking, multiple audio and video channels, as well as physiological data. In this paper we discuss methodological and technical issues related to this data collection, including techniques for laughter elicitation and synchronization between different independent sources of data. We also present the enhanced visualization and segmentation tool used to segment the captured data. Finally, we present the data annotation as well as preliminary results of the analysis of nonverbal behavior patterns in laughter.
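
One of the technical issues mentioned, synchronizing independently clocked recordings, can be sketched as resampling every stream onto a shared reference timeline and checking the residual offset by cross-correlation. The example below is a hypothetical illustration with synthetic timestamps and signals, not the corpus's actual synchronization procedure.

```python
import numpy as np

# Hypothetical synchronization sketch: two independently clocked streams are
# resampled onto a common reference timeline, and any remaining offset is
# estimated from the cross-correlation peak. Signals below are synthetic.
rng = np.random.default_rng(0)
mocap_t = np.arange(0, 10, 1 / 120) + 0.031      # 120 Hz stream with a clock offset (assumed)
audio_t = np.arange(0, 10, 1 / 48)               # 48 Hz envelope stream (assumed)
mocap = np.sin(2 * np.pi * 0.5 * mocap_t) + 0.1 * rng.normal(size=mocap_t.size)
audio = np.sin(2 * np.pi * 0.5 * audio_t)

ref_t = np.arange(0, 10, 1 / 60)                 # common 60 Hz reference timeline
mocap_sync = np.interp(ref_t, mocap_t, mocap)    # linear resampling onto the reference clock
audio_sync = np.interp(ref_t, audio_t, audio)

lag = np.argmax(np.correlate(mocap_sync - mocap_sync.mean(),
                             audio_sync - audio_sync.mean(), mode="full")) - (ref_t.size - 1)
print(f"estimated residual offset: {lag / 60:.3f} s")
```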


IEEE Transactions on Affective Computing | 2015

Perception and Automatic Recognition of Laughter from Whole-Body Motion: Continuous and Categorical Perspectives

Harry J. Griffin; Min S. H. Aung; Bernardino Romera-Paredes; Ciaran McLoughlin; William Curran; Nadia Bianchi-Berthouze

Despite its importance in social interactions, laughter remains little studied in affective computing. Intelligent virtual agents are often blind to users’ laughter and unable to produce convincing laughter themselves. Respiratory, auditory, and facial laughter signals have been investigated but laughter-related body movements have received less attention. The aim of this study is threefold. First, to probe human laughter perception by analyzing patterns of categorisations of natural laughter animated on a minimal avatar. Results reveal that a low dimensional space can describe perception of laughter “types”. Second, to investigate observers’ perception of laughter (hilarious, social, awkward, fake, and non-laughter) based on animated avatars generated from natural and acted motion-capture data. Significant differences in torso and limb movements are found between animations perceived as laughter and those perceived as non-laughter. Hilarious laughter also differs from social laughter. Different body movement features were indicative of laughter in sitting and standing avatar postures. Third, to investigate automatic recognition of laughter to the same level of certainty as observers’ perceptions. Results show that the recognition rates of the Random Forest model approach human rating levels. Classification comparisons and feature importance analyses indicate an improvement in recognition of social laughter when localized features and nonlinear models are used.
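
The recognition step can be sketched with a standard Random Forest on body-movement descriptors, inspecting feature importances to see which movements drive the decision. The feature names, data and labels below are hypothetical placeholders, not the study's motion-capture features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical body-movement descriptors; the study's actual features differ.
feature_names = ["torso_lean_range", "shoulder_shake_energy", "knee_flexion_range",
                 "arm_swing_velocity", "head_bob_frequency"]
X = rng.normal(size=(300, len(feature_names)))     # synthetic feature matrix
y = rng.integers(0, 3, size=300)                   # e.g. 0 = non-laughter, 1 = social, 2 = hilarious

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))

# Feature importances indicate which movement descriptors drive the classification.
clf.fit(X, y)
for name, imp in sorted(zip(feature_names, clf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:>24s}: {imp:.3f}")
```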


IEEE Transactions on Biomedical Engineering | 2010

Spatiotemporal Visualizations for the Measurement of Oropharyngeal Transit Time From Videofluoroscopy

Min S. H. Aung; John Yannis Goulermas; Shaheen Hamdy; Maxine Power

Videofluoroscopy remains one of the mainstay methods for clinical swallowing assessment, yet its interpretation is both complex and subjective. This, in part, reflects the difficulties associated with estimating bolus transit time through the oral and pharyngeal regions by visual inspection, and problems with consistent repeatability. This paper introduces a software-only framework that automatically determines the time taken for the bolus to cross 1-D anatomical landmarks representing the oral and pharyngeal region boundaries. The user-steered live-wire delineation algorithm and straight-line annotators are used to demarcate the landmarks on a frame prior to the swallow action. The rate of change of intensity of the pixels in each landmark is used as the detection feature for bolus presence, which can be visualized on a spatiotemporal plot. Artifacts introduced by head and neck movement are removed by updating the landmark coordinates using affine parameters optimized by a genetic-algorithm-based registration method. Heuristics are applied to the spatiotemporal plot to identify the frames during which the bolus passes the landmark. Correlation coefficients between three observers visually inspecting twenty-four 5-mL single-swallow clips did not exceed 0.42, yet the same measurements taken using this framework on the same clips had correlation coefficients exceeding 0.87.
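
The detection feature itself is simple to sketch: stack the frame-to-frame intensity changes along a fixed landmark line into a spatiotemporal map and flag the frames where the change is large. The example below is a toy illustration on synthetic video; the live-wire delineation and the genetic-algorithm registration for head movement are not reproduced, and the threshold and frame rate are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
video = rng.normal(size=(100, 64, 64))              # (frames, rows, cols), synthetic stand-in
rows = np.full(30, 32); cols = np.arange(10, 40)    # landmark: a fixed line of pixel coordinates

profile = video[:, rows, cols]                      # intensity along the landmark per frame
st_plot = np.abs(np.diff(profile, axis=0))          # spatiotemporal rate-of-change map

activity = st_plot.mean(axis=1)                     # per-frame mean change along the landmark
threshold = activity.mean() + 2 * activity.std()    # simple heuristic threshold (assumed)
crossing = np.flatnonzero(activity > threshold)

if crossing.size:
    fps = 25.0                                      # frame rate (assumed)
    print(f"bolus crosses frames {crossing[0]}-{crossing[-1]}, "
          f"transit {(crossing[-1] - crossing[0]) / fps:.2f} s")
```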


International Conference of the IEEE Engineering in Medicine and Biology Society | 2007

Breast Cancer Predictions by Neural Networks Analysis: A Comparison with Logistic Regression

Valérie Bourdès; Stéphane Bonnevay; Paulo J. G. Lisboa; Min S. H. Aung; Sylvie Chabaud; Thomas Bachelot; David Perol; Sylvie Négrier

This paper presents an exploratory fixed-time study to identify the most significant covariates as a precursor to a longitudinal study of specific mortality, disease-free survival and disease recurrence. The data comprise consecutive patients diagnosed with primary breast cancer and entered into the study from 1996 at a single French clinical center, the Centre Leon Berard in Lyon, where they received standard treatment. The methodology was to compare and contrast multi-layer perceptron neural networks (NN) with logistic regression (LR), to identify key covariates and their interactions, and to compare the selected variables with those routinely used in clinical severity-of-illness indices for breast cancer. Logistic regression was chosen in this work as an accepted standard for prediction among biostatisticians, in order to evaluate the neural network. Only covariates available at the time of diagnosis and immediately following surgery were used. For comparison we used the classification performance indices AUROC (Area Under the Receiver Operating Characteristic curve), sensitivity, specificity, accuracy and positive predictive value for the two events of interest: specific mortality and disease-free survival.
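
The comparison protocol can be sketched with scikit-learn: fit a multi-layer perceptron and a logistic regression on the same covariates and report AUROC, sensitivity, specificity, accuracy and positive predictive value. The data below are synthetic stand-ins for the clinical covariates available at diagnosis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

# Synthetic stand-in for the clinical covariates and binary outcome.
X, y = make_classification(n_samples=600, n_features=12, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"LR": LogisticRegression(max_iter=1000),
          "NN": MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, prob > 0.5).ravel()
    print(f"{name}: AUROC={roc_auc_score(y_te, prob):.3f} "
          f"sens={tp/(tp+fn):.3f} spec={tn/(tn+fp):.3f} "
          f"acc={(tp+tn)/(tp+tn+fp+fn):.3f} PPV={tp/(tp+fp):.3f}")
```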


International Conference of the IEEE Engineering in Medicine and Biology Society | 2010

Automated Nonlinear Feature Generation and Classification of Foot Pressure Lesions

Tingting Mu; Todd C. Pataky; Andrew H. Findlow; Min S. H. Aung; John Yannis Goulermas

Plantar lesions induced by biomechanical dysfunction pose a considerable socioeconomic health care challenge, and failure to detect lesions early can have significant effects on patient prognoses. Most previous work on plantar lesion identification has employed the analysis of biomechanical microenvironment variables such as pressure and thermal fields. This paper focuses on foot kinematics and applies kernel principal component analysis (KPCA) for nonlinear dimensionality reduction of features, followed by Fisher's linear discriminant analysis for the classification of patients with different types of foot lesions, in order to establish an association between foot motion and lesion formation. Performance comparisons are made using leave-one-out cross-validation. Results show that the proposed method can lead to ~94% correct classification rates, with a reduction of feature dimensionality from 2100 to 46, without any manual preprocessing or elaborate feature extraction methods. The results imply that foot kinematics contain information that is highly relevant to pathology classification, and also that the nonlinear KPCA approach has considerable power in unraveling abstract biomechanical features into a relatively low-dimensional pathology-relevant space.
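
A minimal sketch of this pipeline, kernel PCA followed by Fisher's linear discriminant with leave-one-out cross-validation, is shown below using scikit-learn; the feature matrix is a synthetic placeholder for the 2100-dimensional kinematic data, and the kernel parameters are assumed.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2100))      # 60 subjects x 2100 kinematic features (synthetic)
y = rng.integers(0, 2, size=60)      # lesion class labels (synthetic)

pipe = make_pipeline(
    KernelPCA(n_components=46, kernel="rbf", gamma=1e-3),   # nonlinear reduction 2100 -> 46
    LinearDiscriminantAnalysis(),                            # Fisher's linear discriminant
)
acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```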

Collaboration


Dive into Min S. H. Aung's collaboration.

Top Co-Authors

Paulo J. G. Lisboa, Liverpool John Moores University
Azzam Taktak, Royal Liverpool University Hospital
Terence A. Etchells, Liverpool John Moores University
Antonio Eleuteri, Royal Liverpool University Hospital
Bertil Damato, Royal Liverpool University Hospital
Ian H. Jarman, Liverpool John Moores University