Publications


Featured research published by Krishna Somandepalli.


Developmental Cognitive Neuroscience | 2015

Short-term test-retest reliability of resting state fMRI metrics in children with and without attention-deficit/hyperactivity disorder.

Krishna Somandepalli; Clare Kelly; Philip T. Reiss; Xi-Nian Zuo; R.C. Craddock; Chao-Gan Yan; Eva Petkova; Francisco Xavier Castellanos; Michael P. Milham; Adriana Di Martino

Highlights:
• Children with and without ADHD show moderate to high R-fMRI test–retest reliability.
• Reliability is greater in controls than in children with ADHD across most R-fMRI metrics.
• Regional ICC differences between diagnostic groups reflect underlying ADHD pathophysiology affecting both inter- and intra-subject variability.


ACM Multimedia | 2016

Online Affect Tracking with Multimodal Kalman Filters

Krishna Somandepalli; Rahul Gupta; Md Nasir; Brandon M. Booth; Sungbok Lee; Shrikanth Narayanan

Arousal and valence have been widely used to represent emotions dimensionally and measure them continuously in time. In this paper, we introduce a computational framework for tracking these affective dimensions from multimodal data as an entry to the Multimodal Affect Recognition Sub-Challenge of the 2016 Audio/Visual Emotion Challenge and Workshop (AVEC2016). We propose a linear dynamical system approach with a late fusion method that accounts for the dynamics of the affective state evolution (i.e., arousal or valence). To this end, single-modality predictions are modeled as observations in a Kalman filter formulation in order to continuously track each affective dimension. Leveraging the inter-correlations between arousal and valence, we use the predicted arousal as an additional feature to improve valence predictions. Furthermore, we propose a conditional framework to select Kalman filters of different modalities while tracking. This framework employs voicing probability and facial posture cues to detect the absence or presence of each input modality. Our multimodal fusion results on the development and the test set provide a statistically significant improvement over the baseline system from AVEC2016. The proposed approach can be potentially extended to other multimodal tasks with inter-correlated behavioral dimensions.
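The filtering idea above can be made concrete with a short sketch: each modality's frame-level prediction is treated as a noisy observation of one latent affective dimension (say, arousal) that follows random-walk dynamics, and the Kalman update fuses the modalities at every frame. This is an illustration of the general technique, not the authors' implementation; the state model, noise variances, and toy signals below are assumptions.

```python
import numpy as np

def kalman_track(obs, obs_var, q=1e-3):
    """Track a scalar affective dimension from multimodal predictions.

    obs     : (T, M) array, one column of frame-level predictions per modality
    obs_var : length-M observation-noise variance, one entry per modality
    q       : process-noise variance of the assumed random-walk state model
    """
    T, M = obs.shape
    H = np.ones((M, 1))                    # every modality observes the same state
    R = np.diag(obs_var)                   # per-modality observation noise
    x, P = 0.0, 1.0                        # initial state mean and variance
    track = np.empty(T)
    for t in range(T):
        P = P + q                          # predict step: x_t = x_{t-1} + w
        S = H @ H.T * P + R                # innovation covariance, (M, M)
        K = P * H.T @ np.linalg.inv(S)     # Kalman gain, (1, M)
        y = obs[t] - H[:, 0] * x           # per-modality innovation
        x = x + (K @ y).item()             # fused state update
        P = ((1.0 - K @ H) * P).item()     # posterior variance
        track[t] = x
    return track

# Toy usage: two modalities predicting the same latent arousal signal,
# one noisier than the other; the filter weights them accordingly.
rng = np.random.default_rng(0)
arousal = np.sin(np.linspace(0, 6, 300))
audio = arousal + rng.normal(0.0, 0.3, 300)
video = arousal + rng.normal(0.0, 0.1, 300)
fused = kalman_track(np.stack([audio, video], axis=1), obs_var=[0.09, 0.01])
```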


Journal of the Acoustical Society of America | 2017

Test–retest repeatability of human speech biomarkers from static and real-time dynamic magnetic resonance imaging

Johannes Töger; Tanner Sorensen; Krishna Somandepalli; Asterios Toutios; Sajan Goud Lingala; Shrikanth Narayanan; Krishna S. Nayak

Static anatomical and real-time dynamic magnetic resonance imaging (RT-MRI) of the upper airway is a valuable method for studying speech production in research and clinical settings. The test-retest repeatability of quantitative imaging biomarkers is an important parameter, since it limits the effect sizes and intragroup differences that can be studied. Therefore, this study aims to present a framework for determining the test-retest repeatability of quantitative speech biomarkers from static MRI and RT-MRI, and apply the framework to healthy volunteers. Subjects (n = 8, 4 females, 4 males) are imaged in two scans on the same day, including static images and dynamic RT-MRI of speech tasks. The inter-study agreement is quantified using intraclass correlation coefficient (ICC) and mean within-subject standard deviation (σe). Inter-study agreement is strong to very strong for static measures (ICC: min/median/max 0.71/0.89/0.98, σe: 0.90/2.20/6.72 mm), poor to strong for dynamic RT-MRI measures of articulator motion range (ICC: 0.26/0.75/0.90, σe: 1.6/2.5/3.6 mm), and poor to very strong for velocities (ICC: 0.21/0.56/0.93, σe: 2.2/4.4/16.7 cm/s). In conclusion, this study characterizes repeatability of static and dynamic MRI-derived speech biomarkers using state-of-the-art imaging. The introduced framework can be used to guide future development of speech biomarkers. Test-retest MRI data are provided free for research use.
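The two agreement statistics reported above, ICC and mean within-subject standard deviation (σe), can be computed from two repeated measurements per subject as sketched below. The abstract does not state which ICC variant the paper uses, so the one-way random-effects ICC(1,1) here is an illustrative assumption.

```python
import numpy as np

def icc_and_sigma_e(scan1, scan2):
    """One-way random-effects ICC(1,1) and within-subject SD for two sessions.

    scan1, scan2 : (n_subjects,) arrays of one biomarker, measured twice.
    """
    x = np.stack([scan1, scan2], axis=1)               # (n, k), k = 2 sessions
    n, k = x.shape
    grand = x.mean()
    ms_between = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    ms_within = np.sum((x - x.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    sigma_e = np.sqrt(ms_within)                       # within-subject SD
    return icc, sigma_e

# Toy check: 30 subjects, two noisy repetitions of the same measurement.
rng = np.random.default_rng(1)
base = rng.normal(50.0, 10.0, 30)
print(icc_and_sigma_e(base + rng.normal(0, 2, 30), base + rng.normal(0, 2, 30)))
```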


Journal of Attention Disorders | 2018

Is Increased Response Time Variability Related to Deficient Emotional Self-Regulation in Children With ADHD?

Shereen Elmaghrabi; Maria Nahmias; Nicoletta Adamo; Adriana Di Martino; Krishna Somandepalli; Varun Patel; Andrea McLaughlin; Virginia De Sanctis; Francisco Xavier Castellanos

Objective: Elevated response time intrasubject variability (RT-ISV) characterizes ADHD. Deficient emotional self-regulation (DESR), defined by summating Child Behavior Checklist Anxious/Depressed, Aggressive, and Attention subscale scores, has been associated with worse outcome in ADHD. To determine if DESR is differentially associated with elevated RT-ISV, we examined RT-ISV in children with ADHD with and without DESR and in typically developing children (TDC). Method: We contrasted RT-ISV during a 6-min Eriksen Flanker Task in 31 children with ADHD without DESR, 34 with ADHD with DESR, and 65 TDC. Results: Regardless of DESR, children with ADHD showed significantly greater RT-ISV than TDC (p < .001). The ADHD subgroups, defined by presence or absence of DESR, did not differ from each other. Conclusion: Increased RT-ISV characterizes ADHD regardless of comorbid DESR. Alongside similar findings in children and adults with ADHD, these results suggest that RT-ISV is related to cognitive rather than emotional dysregulation in ADHD.
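RT-ISV itself is simple to compute once trial-level response times are in hand; the most common indices are the standard deviation and the coefficient of variation of correct-trial response times. The sketch below uses those two, which is an assumption since this abstract does not name the exact index.

```python
import numpy as np

def rt_isv(rts_ms, correct):
    """Response-time intrasubject variability for one subject.

    rts_ms  : response time in milliseconds for each trial
    correct : boolean mask marking correct responses
    Returns (SD, coefficient of variation) over correct trials only.
    """
    rt = np.asarray(rts_ms, dtype=float)[np.asarray(correct, dtype=bool)]
    sd = rt.std(ddof=1)
    return sd, sd / rt.mean()
```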


Conference of the International Speech Communication Association | 2016

Articulatory Synthesis Based on Real-Time Magnetic Resonance Imaging Data.

Asterios Toutios; Tanner Sorensen; Krishna Somandepalli; Rachel Alexander; Shrikanth Narayanan

This paper presents a methodology for articulatory synthesis of running speech in American English driven by real-time magnetic resonance imaging (rtMRI) mid-sagittal vocal-tract data. At the core of the methodology is a time-domain simulation of the propagation of sound in the vocal tract developed previously by Maeda. The first step of the methodology is the automatic derivation of air-tissue boundaries from the rtMRI data. These articulatory outlines are then modified in a systematic way in order to introduce additional precision in the formation of consonantal vocal-tract constrictions. Other elements of the methodology include a previously reported set of empirical rules for setting the time-varying characteristics of the glottis and the velopharyngeal port, and a revised sagittal-to-area conversion. Results are promising towards the development of a full-fledged text-to-speech synthesis system leveraging directly observed vocal-tract dynamics.
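One concrete piece of such a pipeline is the sagittal-to-area conversion, which in classic articulatory synthesis takes a power-law form A = α·d^β applied to mid-sagittal cross-distances. The sketch below shows only that general form; the constants are placeholders, not the revised, region-dependent conversion the paper refers to.

```python
import numpy as np

def sagittal_to_area(d_mm, alpha=1.5, beta=1.5):
    """Power-law sagittal-to-area conversion, A = alpha * d**beta.

    d_mm : mid-sagittal cross-distances along the vocal tract, in mm.
    alpha and beta are placeholder constants; practical conversions vary
    them along the length of the vocal tract.
    """
    return alpha * np.power(np.asarray(d_mm, dtype=float), beta)
```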


ICMI '18: Proceedings of the 20th ACM International Conference on Multimodal Interaction | 2018

Multimodal Representation of Advertisements Using Segment-level Autoencoders

Krishna Somandepalli; Victor R. Martinez; Naveen Kumar; Shrikanth Narayanan

Automatic analysis of advertisements (ads) poses an interesting problem for learning multimodal representations. A promising direction of research is the development of deep neural network autoencoders to obtain inter-modal and intra-modal representations. In this work, we propose a system to obtain segment-level unimodal and joint representations. These features are concatenated, and then averaged across the duration of an ad to obtain a single multimodal representation. The autoencoders are trained using segments generated by time-aligning frames between the audio and video modalities with forward and backward context. In order to assess the multimodal representations, we consider the tasks of classifying an ad as funny or exciting in a publicly available dataset of 2,720 ads. For this purpose we train the segment-level autoencoders on a larger, unlabeled dataset of 9,740 ads, agnostic of the test set. Our experiments show that: 1) the multimodal representations outperform joint and unimodal representations, 2) the different representations we learn are complementary to each other, and 3) the segment-level multimodal representations perform better than classical autoencoders and cross-modal representations -- within the context of the two classification tasks. We obtain an improvement of about 5% in classification accuracy compared to a competitive baseline.
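The segment-level joint autoencoder described above can be sketched as two unimodal encoders feeding a shared bottleneck whose code must reconstruct both modalities; per-segment codes are then averaged over the ad to give one fixed-size vector. The layer sizes and PyTorch framing below are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class JointSegmentAutoencoder(nn.Module):
    """Joint (inter-modal) autoencoder over time-aligned segment features."""

    def __init__(self, d_audio=128, d_video=256, d_joint=64):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(d_audio, 64), nn.ReLU())
        self.enc_v = nn.Sequential(nn.Linear(d_video, 64), nn.ReLU())
        self.joint = nn.Linear(128, d_joint)       # shared bottleneck
        self.dec_a = nn.Linear(d_joint, d_audio)   # reconstruct audio
        self.dec_v = nn.Linear(d_joint, d_video)   # reconstruct video

    def forward(self, a, v):
        z = self.joint(torch.cat([self.enc_a(a), self.enc_v(v)], dim=-1))
        return self.dec_a(z), self.dec_v(z), z

# Usage sketch: encode 200 time-aligned segments of one ad, then average
# the joint codes over the ad's duration to get a single representation.
model = JointSegmentAutoencoder()
audio_feats = torch.randn(200, 128)
video_feats = torch.randn(200, 256)
_, _, z = model(audio_feats, video_feats)
ad_embedding = z.mean(dim=0)
```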


Psychiatry Research: Neuroimaging | 2017

Computerized cognitive training for children with neurofibromatosis type 1: A pilot resting-state fMRI study

Yuliya N. Yoncheva; Kristina K. Hardy; Daniel J. Lurie; Krishna Somandepalli; Lanbo Yang; Gilbert Vezina; Nadja Kadom; Roger J. Packer; Michael P. Milham; F. Xavier Castellanos; Maria T. Acosta

In this pilot study, we examined training effects of a computerized working memory program on resting state functional magnetic resonance imaging (fMRI) measures in children with neurofibromatosis type 1 (NF1). We contrasted pre- with post-training resting state fMRI and cognitive measures from 16 participants (nine males; 11.1 ± 2.3 years) with NF1 and documented working memory difficulties. Using non-parametric permutation test inference, we found significant regionally specific differences (family-wise error corrected) in two of four voxel-wise resting state measures: fractional amplitude of low frequency fluctuations (indexing peak-to-trough intensity of spontaneous oscillations) and regional homogeneity (indexing local intrinsic synchrony). Some cognitive task improvement was observed as well. These preliminary findings suggest that regionally specific changes in resting state fMRI indices may be associated with treatment-related cognitive amelioration in NF1. Nevertheless, current results must be interpreted with caution pending independent controlled replication.
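Of the two measures showing training effects, fALFF has a compact definition: the fraction of a voxel's spectral amplitude that falls in the low-frequency band. A minimal sketch for one voxel time series, assuming the conventional 0.01 to 0.08 Hz band (the study's exact band is not stated in this abstract):

```python
import numpy as np

def falff(ts, tr, band=(0.01, 0.08)):
    """Fractional amplitude of low-frequency fluctuations for one voxel.

    ts   : BOLD signal samples for a single voxel
    tr   : repetition time in seconds
    band : low-frequency band in Hz (conventional choice; an assumption here)
    """
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                            # remove the DC component
    amp = np.abs(np.fft.rfft(ts))                  # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    low = (freqs >= band[0]) & (freqs <= band[1])
    return amp[low].sum() / amp[1:].sum()          # low-band / total amplitude

# Example: 300 volumes acquired at TR = 2 s.
rng = np.random.default_rng(2)
print(falff(rng.normal(size=300), tr=2.0))
```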


Journal of the American Academy of Child and Adolescent Psychiatry | 2016

Mode of Anisotropy Reveals Global Diffusion Alterations in Attention-Deficit/Hyperactivity Disorder.

Yuliya N. Yoncheva; Krishna Somandepalli; Philip T. Reiss; Clare Kelly; Adriana Di Martino; Mariana Lazar; Juan Zhou; Michael P. Milham; F. Xavier Castellanos


Conference of the International Speech Communication Association | 2018

Improving Gender Identification in Movie Audio Using Cross-Domain Data.

Rajat Hebbar; Krishna Somandepalli; Shrikanth Narayanan


IEEE Transactions on Multimedia | 2018

Unsupervised Discovery of Character Dictionaries in Animation Movies

Krishna Somandepalli; Naveen Kumar; Tanaya Guha; Shrikanth Narayanan

Collaboration


Dive into Krishna Somandepalli's collaborations.

Top Co-Authors

Shrikanth Narayanan (University of Southern California)
Asterios Toutios (University of Southern California)
Francisco Xavier Castellanos (Nathan Kline Institute for Psychiatric Research)
Maria Nahmias (New York Medical College)