Ronny K. Ibrahim
University of New South Wales
Publication
Featured research published by Ronny K. Ibrahim.
International Conference on Digital Signal Processing | 2007
Ronny K. Ibrahim; Eliathamby Ambikairajah; Branko G. Celler; Nigel H. Lovell
The analysis of gait data has been a challenging problem and several new approaches have been proposed in recent years. This paper describes a novel front-end for classification of gait patterns using data obtained from a tri-axial accelerometer. The novel features consist of delta features, low and high frequency signal variations and energy variations in both frequency bands. The back-end of the system is a Gaussian mixture model based classifier. Using Bayesian adaptation, an overall classification accuracy of 96.1% was achieved for five walking patterns.
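As a rough illustration of this kind of front-end, the sketch below extracts per-frame log energies in a low and a high frequency band plus delta features from one accelerometer axis, and classifies with one Gaussian mixture model per gait class. The sampling rate, frame length, band split, and mixture count are assumptions for illustration, and the Bayesian (MAP) adaptation step is omitted.

```python
# Illustrative GMM gait-classification front-end. Sampling rate, frame
# length, band split and mixture count are assumptions, not the paper's.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.mixture import GaussianMixture

FS = 100      # assumed accelerometer sampling rate (Hz)
FRAME = 100   # assumed 1 s analysis frames

def frame_features(x, split_hz=2.0):
    """Per-frame log energy in a low and a high band, plus delta features."""
    b, a = butter(4, split_hz / (FS / 2), btype="low")
    low = filtfilt(b, a, x)
    high = x - low
    n = len(x) // FRAME
    lo = np.log([np.sum(low[i*FRAME:(i+1)*FRAME]**2) + 1e-10 for i in range(n)])
    hi = np.log([np.sum(high[i*FRAME:(i+1)*FRAME]**2) + 1e-10 for i in range(n)])
    feats = np.stack([lo, hi], axis=1)
    delta = np.diff(feats, axis=0, prepend=feats[:1])  # delta features
    return np.hstack([feats, delta])

def train(class_feats, n_mix=8):
    """One GMM per gait class (class name -> feature matrix)."""
    return {c: GaussianMixture(n_mix, covariance_type="diag").fit(f)
            for c, f in class_feats.items()}

def classify(models, feats):
    """Pick the class whose GMM gives the highest average log-likelihood."""
    return max(models, key=lambda c: models[c].score(feats))
```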
International Conference of the IEEE Engineering in Medicine and Biology Society | 2008
Ronny K. Ibrahim; Eliathamby Ambikairajah; Branko G. Celler; Nigel H. Lovell
Recent research work indicates that gait patterns are both non-linear and non-stationary signals and that they can be analyzed using empirical mode decomposition. This paper describes gait pattern classification using features obtained by performing the discrete cosine transform (DCT) on intrinsic mode functions of five different human gait patterns. The DCT provides a compact 8-dimensional feature vector for gait pattern classification. Fifty-two subjects participated in the experiment. The classification was performed using a Gaussian mixture model and an overall accuracy of 90.2% was achieved.
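A minimal sketch of the DCT-on-IMF feature idea is given below, assuming the third-party PyEMD package for empirical mode decomposition; how the per-IMF coefficients are reduced to a single 8-dimensional vector is an assumption, not necessarily the paper's method.

```python
# Sketch: DCT coefficients of intrinsic mode functions as gait features.
# PyEMD is an assumed dependency; the averaging across IMFs is illustrative.
import numpy as np
from scipy.fft import dct
from PyEMD import EMD

def dct_imf_features(accel, n_coeffs=8):
    """Decompose a gait signal into IMFs and keep the first n_coeffs
    DCT coefficients of each IMF as a compact feature vector."""
    imfs = EMD().emd(np.asarray(accel, dtype=float))
    feats = [dct(imf, norm="ortho")[:n_coeffs] for imf in imfs]
    # Reduce to one 8-dimensional vector; averaging across IMFs is one
    # plausible reduction (an assumption, not taken from the paper).
    return np.mean(feats, axis=0)
```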
Frontiers in Human Neuroscience | 2017
Pragati Rao Mandikal Vasuki; Mridula Sharma; Ronny K. Ibrahim; Joanne Arciuli
Musicians’ brains are considered to be a functional model of neuroplasticity due to the structural and functional changes associated with long-term musical training. In this study, we examined implicit extraction of statistical regularities from a continuous stream of stimuli—statistical learning (SL). We investigated whether long-term musical training is associated with better extraction of statistical cues in an auditory SL (aSL) task and a visual SL (vSL) task—both using the embedded triplet paradigm. Online measures, characterized by event related potentials (ERPs), were recorded during a familiarization phase while participants were exposed to a continuous stream of individually presented pure tones in the aSL task or individually presented cartoon figures in the vSL task. Unbeknown to participants, the stream was composed of triplets. Musicians showed advantages when compared to non-musicians in the online measure (early N1 and N400 triplet onset effects) during the aSL task. However, there were no differences between musicians and non-musicians for the vSL task. Results from the current study show that musical training is associated with enhancements in extraction of statistical cues only in the auditory domain.
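For readers unfamiliar with the embedded triplet paradigm, the sketch below generates an illustrative familiarization stream; the element labels and the no-immediate-repeat constraint are assumptions for illustration, not the study's exact stimulus set.

```python
# Illustrative embedded-triplet familiarization stream: the only cue to
# structure is the transitional probability between stream elements.
import random

TRIPLETS = [("A", "B", "C"), ("D", "E", "F"),
            ("G", "H", "I"), ("J", "K", "L")]

def triplet_stream(n_triplets=200, seed=0):
    """Concatenate triplets in pseudorandom order with no immediate repeats."""
    rng = random.Random(seed)
    stream, prev = [], None
    for _ in range(n_triplets):
        t = rng.choice([t for t in TRIPLETS if t is not prev])
        stream.extend(t)
        prev = t
    return stream
```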
Frontiers in Psychology | 2016
Catherine M. McMahon; Isabelle Boisvert; Peter de Lissa; Louise Granger; Ronny K. Ibrahim; Chi Yhun Lo; Kelly Miles; Petra L. Graham
Listening to degraded speech can be challenging and requires a continuous investment of cognitive resources, which is more challenging for those with hearing loss. However, while alpha power (8–12 Hz) and pupil dilation have been suggested as objective correlates of listening effort, it is not clear whether they assess the same cognitive processes involved, or other sensory and/or neurophysiological mechanisms that are associated with the task. Therefore, the aim of this study is to compare alpha power and pupil dilation during a sentence recognition task in 15 randomized levels of noise (-7 to +7 dB SNR) using highly intelligible (16-channel vocoded) and moderately intelligible (6-channel vocoded) speech. Twenty young normal-hearing adults participated in the study; however, due to extraneous noise, data from only 16 (10 females, 6 males; aged 19–28 years) were used in the electroencephalography (EEG) analysis and 10 in the pupil analysis. Behavioral testing of perceived effort and speech performance was assessed at 3 fixed SNRs per participant and was comparable to sentence recognition performance assessed in the physiological test session for both 16- and 6-channel vocoded sentences. Results showed a significant interaction between SNR and channel vocoding for both the alpha power and the pupil size changes. While both measures significantly decreased with more positive SNRs for the 16-channel vocoding, this was not observed with the 6-channel vocoding. The results of this study suggest that these measures may encode different processes involved in speech perception, which show similar trends for highly intelligible speech, but diverge for more spectrally degraded speech. The results to date suggest that these objective correlates of listening effort, and the cognitive processes involved in listening effort, are not yet sufficiently well understood to be used within a clinical setting.
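As an illustration of the alpha-power measure, a minimal sketch using Welch's method is given below; the channel selection, window length, and exact band edges are assumptions.

```python
# Sketch: alpha-band (8-12 Hz) power estimate for one EEG channel.
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs, band=(8.0, 12.0)):
    """Mean PSD in the alpha band for a 1-D EEG channel array."""
    f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2 s windows (assumed)
    mask = (f >= band[0]) & (f <= band[1])
    return psd[mask].mean()
```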
Trends in Hearing | 2017
Kelly Miles; Catherine M. McMahon; Isabelle Boisvert; Ronny K. Ibrahim; Peter de Lissa; Petra L. Graham; Björn Lyxell
Listening to speech in noise is effortful, particularly for people with hearing impairment. While it is known that effort is related to a complex interplay between bottom-up and top-down processes, the cognitive and neurophysiological mechanisms contributing to effortful listening remain unknown. Therefore, a reliable physiological measure to assess effort remains elusive. This study aimed to determine whether pupil dilation and alpha power change, two physiological measures suggested to index listening effort, assess similar processes. Listening effort was manipulated by parametrically varying spectral resolution (16- and 6-channel noise vocoding) and speech reception thresholds (SRT; 50% and 80%) while 19 young, normal-hearing adults performed a speech recognition task in noise. Results of off-line sentence scoring showed discrepancies between the target SRTs and the true performance obtained during the speech recognition task. For example, in the SRT80% condition, participants scored an average of 64.7%. Participants’ true performance levels were therefore used for subsequent statistical modelling. Results showed that both measures appeared to be sensitive to changes in spectral resolution (channel vocoding), while only pupil dilation was also significantly related to true performance levels (%) and task accuracy (i.e., whether the response was correctly or partially recalled). The two measures were not correlated, suggesting they each may reflect different cognitive processes involved in listening effort. This combination of findings contributes to a growing body of research aiming to develop an objective measure of listening effort.
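For context on the spectral-resolution manipulation, the sketch below implements a generic n-channel noise vocoder of the kind commonly used in such studies; the band edges, filter orders, and envelope cutoff are assumptions rather than the study's exact settings.

```python
# Sketch of a generic n-channel noise vocoder: band-limited envelopes of
# the speech modulate band-matched noise carriers. Requires fs > 2 * hi.
import numpy as np
from scipy.signal import butter, filtfilt

def noise_vocode(speech, fs, n_channels=6, lo=100.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced analysis bands
    env_b, env_a = butter(2, 50.0 / (fs / 2), btype="low")  # envelope LPF
    noise = np.random.default_rng(0).standard_normal(len(speech))
    out = np.zeros(len(speech))
    for k in range(n_channels):
        b, a = butter(3, [edges[k] / (fs/2), edges[k+1] / (fs/2)], btype="band")
        env = filtfilt(env_b, env_a, np.abs(filtfilt(b, a, speech)))
        out += np.clip(env, 0, None) * filtfilt(b, a, noise)  # modulated noise
    return out
```

Fewer channels (e.g., 6 versus 16) coarsen the spectral detail while preserving temporal envelopes, which is how the studies above vary intelligibility.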
Clinical Neurophysiology | 2017
Pragati Rao Mandikal Vasuki; Mridula Sharma; Ronny K. Ibrahim; Joanne Arciuli
OBJECTIVE The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). METHODS Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. RESULTS Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. CONCLUSION Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. SIGNIFICANCE The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments.
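As an illustration of the online ERP measure, a minimal epoch-and-average sketch time-locked to triplet onsets is shown below; the epoch window and baseline interval are assumptions.

```python
# Sketch: average EEG epochs time-locked to triplet-onset samples to form
# a per-participant ERP. Window and baseline are assumed, not the study's.
import numpy as np

def triplet_onset_erp(eeg, onsets, fs, tmin=-0.1, tmax=0.6):
    """eeg: 1-D channel array; onsets: sample indices of triplet onsets."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = np.array([eeg[o - pre:o + post] for o in onsets
                       if o - pre >= 0 and o + post <= len(eeg)])
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)  # baseline correct
    return epochs.mean(axis=0)  # grand-average across epochs
```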
PLOS ONE | 2016
Catherine M. McMahon; Ronny K. Ibrahim; Ankit Mathur
Subjective tinnitus is characterised by the conscious perception of a phantom sound. Previous studies have shown that individuals with chronic tinnitus have disrupted sound-evoked cortical tonotopic maps, time-shifted evoked auditory responses, and altered oscillatory cortical activity. The main objectives of this study were to: (i) compare sound-evoked brain responses and cortical tonotopic maps in individuals with bilateral tinnitus and those without tinnitus; and (ii) investigate whether changes in these sound-evoked responses occur with amelioration of the tinnitus percept during a 30-week tinnitus treatment program. Magnetoencephalography (MEG) recordings of 12 bilateral tinnitus participants and 10 control normal-hearing subjects reporting no tinnitus were recorded at baseline, using 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz tones presented monaurally at 70 dB SPL through insert tube phones. For the tinnitus participants, MEG recordings were obtained at 5-, 10-, 20- and 30-week time points during tinnitus treatment. Results for the 500 Hz and 1000 Hz sources (where hearing thresholds were within normal limits for all participants) showed that the tinnitus participants had significantly larger and more anteriorly located source strengths when compared to the non-tinnitus participants. During the 30-week tinnitus treatment, the participants’ 500 Hz and 1000 Hz source strengths remained higher than the non-tinnitus participants’; however, the source locations shifted towards the direction recorded from the non-tinnitus control group. Further, in the left hemisphere, there was a time-shifted association between the trajectory of change of the individual’s objective measures (source strength and anterior-posterior source location) and subjective measures (using the Tinnitus Reaction Questionnaire, TRQ). The differences in source strength between the two groups suggest that individuals with tinnitus have enhanced central gain which is not significantly influenced by the tinnitus treatment, and may result from the hearing loss per se. On the other hand, the shifts in the tonotopic map towards the non-tinnitus participants’ source location suggest that the tinnitus treatment might reduce the disruptions in the map, presumably produced by the tinnitus percept directly or indirectly. Further, the similarity in the trajectory of change across the objective and subjective parameters after time-shifting the perceptual changes by 5 weeks suggests that during or following treatment, perceptual changes in the tinnitus percept may precede neurophysiological changes. Subgroup analyses conducted by magnitude of hearing loss suggest that there were no differences in the 500 Hz and 1000 Hz source strength amplitudes for the mild-moderate compared with the mild-severe hearing loss subgroup, although the mean source strength was consistently higher for the mild-severe subgroup. Further, the mild-severe subgroup had 500 Hz and 1000 Hz source locations located more anteriorly (i.e., more disrupted compared to the control group) compared to the mild-moderate group, although this was trending towards significance only for the 500 Hz left-hemisphere source. While the small numbers of participants within the subgroup analyses reduce the statistical power, this study suggests that those with greater magnitudes of hearing loss show greater cortical disruptions with tinnitus and that tinnitus treatment appears to reduce the tonotopic map disruptions but not the source strength (or central gain).
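One illustrative way to probe the time-shifted association described above is a lagged correlation between the subjective (TRQ) and objective (source strength) trajectories across the follow-up visits; the weekly interpolation and the example numbers below are hypothetical, not data from the study.

```python
# Sketch: correlate TRQ led by `lag_weeks` against an objective trajectory
# sampled at the 5-, 10-, 20- and 30-week visits (interpolation assumed).
import numpy as np

def lagged_correlation(trq, source_strength, weeks, lag_weeks=5):
    grid = np.arange(weeks[0], weeks[-1] + 1)  # weekly grid
    trq_i = np.interp(grid, weeks, trq)
    obj_i = np.interp(grid, weeks, source_strength)
    shifted = trq_i[:-lag_weeks] if lag_weeks else trq_i  # TRQ leads
    return np.corrcoef(shifted, obj_i[lag_weeks:])[0, 1]

# Hypothetical trajectories, for illustration only:
r = lagged_correlation(trq=[60, 45, 38, 30],
                       source_strength=[12, 11, 9, 8],
                       weeks=[5, 10, 20, 30])
```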
International Conference on Acoustics, Speech, and Signal Processing | 2009
Ronny K. Ibrahim; Eliathamby Ambikairajah; Branko G. Celler; Nigel H. Lovell
The use of a wearable triaxial accelerometer for unsupervised monitoring of human movement has become a major research focus in recent years. In this paper, the relationship between accelerometry signals and human gait is analysed using a linear prediction (LP) model. We explore the use of the LP model for analysing five gait patterns and show that the LP cepstrum can be used for gait pattern classification with high accuracy. This is then compared to a filterbank-based approach for estimating the cepstral coefficients. Fifty subjects participated in the collection of gait pattern data, involving walking on level surfaces and walking up and down stairs and ramps. The results show that an overall accuracy of 93% can be achieved using features derived from the cepstral coefficients for the five different walking patterns.
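A minimal sketch of the LP-cepstrum front-end follows: autocorrelation-method LPC via the Levinson-Durbin recursion, then the standard LPC-to-cepstrum conversion. The model order and number of cepstral coefficients are assumptions, not the paper's settings.

```python
# Sketch: LP cepstrum features from an acceleration frame.
import numpy as np

def lpc(x, order):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns A(z) coefficients with a[0] = 1."""
    r = np.correlate(x, x, "full")[len(x) - 1 : len(x) + order]
    a, err = np.array([1.0]), r[0]
    for i in range(1, order + 1):
        k = -(r[i] + a[1:] @ r[i - 1:0:-1]) / err
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]
        err *= 1.0 - k * k
    return a

def lp_cepstrum(a, n_ceps=12):
    """Cepstral coefficients of the all-pole model 1/A(z)."""
    c = np.zeros(n_ceps + 1)
    for n in range(1, n_ceps + 1):
        acc = a[n] if n < len(a) else 0.0
        acc += sum((k / n) * c[k] * a[n - k]
                   for k in range(1, n) if n - k < len(a))
        c[n] = -acc
    return c[1:]
```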
International Conference of the IEEE Engineering in Medicine and Biology Society | 2010
Ronny K. Ibrahim; Vidhyasaharan Sethu; Eliathamby Ambikairajah
Much recent research work on gait pattern classification relies on static features. This paper describes the extraction of novel dynamic features as complementary features for gait pattern classification. The dynamic features are obtained by applying regression to the delta zero crossing counts (ΔZCC) of the acceleration signal. Classification using the filterbank features combined with the novel dynamic features achieved an overall accuracy of 97%, an improvement of 3% over the filterbank features alone.
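A minimal sketch of the ΔZCC dynamic features is given below, assuming per-frame zero-crossing counts followed by the standard regression-based delta from speech processing; the frame length and regression window are assumptions.

```python
# Sketch: zero-crossing counts per frame, then regression-based deltas.
import numpy as np

def zcc_per_frame(x, frame=100):
    """Zero-crossing count of the acceleration signal in each frame."""
    n = len(x) // frame
    return np.array([np.count_nonzero(np.diff(np.sign(
        x[i * frame:(i + 1) * frame])) != 0) for i in range(n)], dtype=float)

def regression_delta(feat, K=2):
    """Regression-based delta over a +/-K frame window (the usual
    delta formula from speech processing)."""
    pad = np.pad(feat, K, mode="edge")
    num = sum(k * (pad[K + k:len(pad) - K + k] - pad[K - k:len(pad) - K - k])
              for k in range(1, K + 1))
    return num / (2 * sum(k * k for k in range(1, K + 1)))
```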
Rivista Internazionale di Tecnica della Traduzione/International Journal of Translation | 2017
Jan-Louis Kruger; Stephen Doherty; Ronny K. Ibrahim
All audiovisual translation (AVT) modes mediate the audiovisual text for the audience. For audiences excluded from all or part of a visual or an auditory channel, this has significant implications in terms of comprehension and enjoyment. With subtitling (SDH in particular), we want the audiences to have the same quality of access to the characters and worlds that is afforded the hearing audience. Likewise, with AD, we want the audiences to have an equivalent experience to that afforded sighted audiences. Since the degree to which an audience becomes immersed in the story world plays an important role in this quality of access and enjoyment, it would be useful to find ways to measure immersion reliably. In this article we present a discussion on the measurement of immersion in subtitled film using a triangulation of offline and online measures. In particular, electroencephalography (EEG) as an online measure holds considerable potential in AVT research. We present the results of a pilot study in which EEG beta coherence between the prefrontal and posterior parietal cortices is used as an indication of the degree to which an audience surrenders itself to the story world and experiences the characters and events imaginatively in an immersed state. Our findings indicate that EEG beta coherence could be a valuable tool for measuring the fluctuating states of immersion in film in the presence of subtitles, but also potentially in the context of AD.
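As an illustration of the EEG measure used here, a minimal sketch of beta-band coherence between two channels follows; the channel pairing, window length, and band edges are assumptions.

```python
# Sketch: mean magnitude-squared coherence in the beta band (13-30 Hz)
# between two EEG channels, e.g. a prefrontal and a posterior parietal site.
import numpy as np
from scipy.signal import coherence

def beta_coherence(prefrontal, parietal, fs, band=(13.0, 30.0)):
    f, cxy = coherence(prefrontal, parietal, fs=fs, nperseg=int(2 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()
```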