
Publication


Featured research published by Gerasimos Fergadiotis.


Aging, Neuropsychology, and Cognition | 2014

Global coherence in younger and older adults: Influence of cognitive processes and discourse type.

Heather Harris Wright; Anthony D. Koutsoftas; Gilson J. Capilouto; Gerasimos Fergadiotis

ABSTRACT: The purpose of the present research was to examine the influence of cognitive processes on discourse global coherence ability, measured across different discourse tasks, in younger (n = 40; 20–39 years) and older (n = 40; 70–87 years) cognitively healthy adults. Study participants produced oral language samples in response to five commonly used discourse elicitation tasks, and the samples were analyzed for maintenance of global coherence. Participants also completed memory and attention measures. Group differences on the global coherence scale were found for only one type of discourse, recounts, which also elicited the lowest global coherence scores of the five tasks. The influence of cognitive processes on maintenance of global coherence differed for the two age groups: no significant relationships were observed for the younger group, whereas for the older group, cognitive measures were related to the global coherence of stories and procedures.


Journal of Speech, Language, and Hearing Research | 2015

Development and Simulation Testing of a Computerized Adaptive Version of the Philadelphia Naming Test

William D. Hula; Stacey Kellough; Gerasimos Fergadiotis

Purpose: The purpose of this study was to develop a computerized adaptive test (CAT) version of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to reduce test length while maximizing measurement precision. This article is a direct extension of a companion article (Fergadiotis, Kellough, & Hula, 2015), in which we fitted the PNT to a 1-parameter logistic item-response-theory model and examined the validity and precision of the resulting item parameter and ability score estimates. Method: Using archival data collected from participants with aphasia, we simulated two PNT-CAT versions and two previously published static PNT short forms and compared the resulting ability score estimates to estimates obtained from the full 175-item PNT. We used a jackknife procedure to maintain independence of the samples used for item estimation and CAT simulation. Results: The PNT-CAT recovered full PNT scores with equal or better accuracy than the static short forms. Measurement precision was also greater for the PNT-CAT than for the static short forms, though comparison of adaptive and static nonoverlapping alternate forms showed minimal differences between the two approaches. Conclusion: These results suggest that CAT assessment of naming in aphasia has the potential to reduce test burden while maximizing the accuracy and precision of score estimates.
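The core loop of a CAT under a 1-parameter logistic (Rasch) model is: administer the remaining item that is most informative at the current ability estimate, record the response, and re-estimate ability. The sketch below illustrates that loop only; the item bank, starting ability of 0, and grid-search maximum-likelihood estimator are illustrative assumptions, not the published PNT-CAT design.

```python
import math
import random

def p_correct(theta, b):
    """Rasch (1PL) probability of a correct response at ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    """Fisher information of a 1PL item: P(1 - P), maximal when b = theta."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def mle_theta(responses):
    """Grid-search maximum-likelihood ability estimate on [-4, 4]."""
    grid = [x / 100.0 for x in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for b, correct in responses:
            p = p_correct(theta, b)
            ll += math.log(p) if correct else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

def run_cat(item_difficulties, true_theta, n_items=20, rng=None):
    """Simulate one adaptive administration; return the final ability estimate."""
    rng = rng or random.Random(0)
    remaining = list(item_difficulties)
    theta_hat = 0.0                      # start at the center of the ability scale
    responses = []
    for _ in range(n_items):
        # Select the unused item most informative at the current estimate.
        b = max(remaining, key=lambda b: item_information(theta_hat, b))
        remaining.remove(b)
        correct = rng.random() < p_correct(true_theta, b)   # simulated response
        responses.append((b, correct))
        # MLE is undefined for all-correct / all-incorrect response patterns,
        # so only re-estimate once both outcomes have occurred.
        if any(c for _, c in responses) and any(not c for _, c in responses):
            theta_hat = mle_theta(responses)
    return theta_hat

bank = [b / 10.0 for b in range(-20, 21)]   # 41 items with difficulties -2..2
print(run_cat(bank, true_theta=1.0))        # ability estimate from 20 adaptive items
```

Because each item is chosen where its information peaks, a short adaptive form concentrates measurement near the examinee's ability, which is why the study's simulated PNT-CAT could match or beat fixed short forms of the same length.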


Aphasiology | 2014

A core outcome set for aphasia treatment research: Obstacles, risks, and benefits

William D. Hula; Gerasimos Fergadiotis; Patrick J. Doyle

Wallace, Worrall, Rose, and Le Dorze (2014) have laid out an ambitious agenda for establishing a core outcome set (COS) for aphasia treatment research. We agree in principle that establishing a COS could be a positive development but are less sanguine on the issue than Wallace and colleagues. In our commentary, we focus on some key issues that should be considered if such an undertaking is to be successful. We believe that the single most important step in the proposed agenda will be achieving meaningful and informed consensus about which constructs should be measured. As such, particular attention must be paid to designing and implementing the process by which the consensus will be defined. While methods such as the Delphi and nominal group techniques have proven useful, they are not without shortcomings and are susceptible to flawed implementation (Cantrill, Sibbald, & Buetow, 1996; Keeney, Hasson, & McKenna, 2001). Also, we have chosen the word “construct” in the first sentence deliberately, in part because we wish to emphasise that the discussion should not necessarily be restricted to currently available outcome-measurement tools. If it were, the process could lead to counterproductive calcification of approaches to outcome measurement. Second, we note that the things to be measured that are identified as a result of a consensus process must be subject to empirical tests. We address these issues further below in our discussion of the second step in the proposed agenda. In our view, the primary obstacle to achieving consensus about what constructs should be included in a COS is the broad scope of aphasia rehabilitation. Wallace and colleagues acknowledged this, but in our opinion, they understate the challenge. In the majority of the studies cited in the target paper, clinicians are treated as a homogeneous group, but a panel of aphasiologists will likely have disparate, conflicting views about


Journal of Speech, Language, and Hearing Research | 2015

Item Response Theory Modeling of the Philadelphia Naming Test

Gerasimos Fergadiotis; Stacey Kellough; William D. Hula

Purpose: In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating explanatory variables to item difficulty. This article describes the statistical model underlying the computer adaptive PNT presented in a companion article (Hula, Kellough, & Fergadiotis, 2015). Method: Using archival data, we evaluated the fit of the PNT to 1- and 2-parameter logistic models and examined the precision of the resulting parameter estimates. We regressed the item difficulty estimates on three predictor variables: word length, age of acquisition, and contextual diversity. Results: The 2-parameter logistic model demonstrated marginally better fit, but the fit of the 1-parameter logistic model was adequate. Precision was excellent for both person ability and item difficulty estimates. Word length, age of acquisition, and contextual diversity all independently contributed to variance in item difficulty. Conclusions: Item-response-theory methods can be productively used to analyze and quantify anomia severity in aphasia. Regression of item difficulty on lexical variables supported the validity of the PNT and interpretation of anomia severity scores in the context of current word-finding models.
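The two candidate models differ only in whether item discrimination is constrained to be equal across items. In standard IRT notation, with person ability \(\theta_j\), item difficulty \(b_i\), and discrimination \(a\) (shared) or \(a_i\) (per item):

```latex
% 1PL: a single discrimination parameter a shared by all items
P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + e^{-a(\theta_j - b_i)}}

% 2PL: each item i carries its own discrimination a_i
P(X_{ij} = 1 \mid \theta_j) = \frac{1}{1 + e^{-a_i(\theta_j - b_i)}}
```

The regression reported in the Results then treats the estimated difficulties \(b_i\) as the outcome, with word length, age of acquisition, and contextual diversity as predictors.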


Aphasiology | 2016

Modelling Confrontation Naming and Discourse Performance in Aphasia

Gerasimos Fergadiotis; Heather Harris Wright

Background: It is well documented in the literature that the ability to produce discourse is what matters most to people with aphasia (PWA) and their families. However, quantifying discourse in typical clinical settings is a challenging task due to its dynamic and multiply determined nature. As a result, professionals often depend on confrontation naming tests to identify and measure impaired underlying cognitive mechanisms that are also hypothesised to be important for discourse production. Aims: The main goals of the study were to investigate the extent to which confrontation naming test (CNT) scores are predictive of (i) the proportion of paraphasias in connected speech, and (ii) the amount of information PWA can effectively communicate. Methods & Procedures: Data from 98 monolingual PWA were retrieved from AphasiaBank and analysed using structural equation modelling. Performance on CNTs was modelled as a latent variable based on the Boston Naming Test, the Western Aphasia Battery–R Naming Subtest, and the Verb Naming Test. Performance at the discourse level was modelled based on the observed proportions of paraphasias in three discourse tasks (free speech, eventcasts, and story retell). Informativeness was quantified using the percentage of Correct Information Units based on story retell. Outcomes & Results: For the first question, the regression coefficient between the two latent factors was estimated to be –0.52. When the latent factor based on the CNTs was regressed on informativeness, the estimated regression coefficient was 0.68. Conclusions: Performance on CNTs was not a strong predictor of the proportion of paraphasias produced in connected speech but was a substantially stronger predictor of the amount of information communicated by PWA. Clinicians are cautioned not to predict a speaker's performance at the discourse level after establishing CNT performance. Other anomic behaviours (e.g., pauses) during discourse production may be associated with CNT performance and should be considered in future investigations.


Journal of Speech, Language, and Hearing Research | 2015

Psychometric Evaluation of Lexical Diversity Indices: Assessing Length Effects

Gerasimos Fergadiotis; Heather Harris Wright; Samuel B. Green

PURPOSE: Several novel techniques have been developed recently to assess the breadth of a speaker's vocabulary exhibited in a language sample. The specific aim of this study was to increase our understanding of the validity of the scores generated by different lexical diversity (LD) estimation techniques. Four techniques were explored: D, Maas, measure of textual lexical diversity, and moving-average type-token ratio. METHOD: Four LD indices were estimated for language samples on 4 discourse tasks (procedures, eventcasts, story retell, and recounts) from 442 adults who are neurologically intact. The resulting data were analyzed using structural equation modeling. RESULTS: The scores for measure of textual lexical diversity and moving-average type-token ratio were stronger indicators of the LD of the language samples. The results for the other 2 techniques were consistent with the presence of method factors representing construct-irrelevant sources. CONCLUSION: These findings offer a deeper understanding of the relative validity of the 4 estimation techniques and should assist clinicians and researchers in the selection of LD measures of language samples that minimize construct-irrelevant sources.
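Of the four indices, the moving-average type-token ratio is the simplest to state precisely: slide a fixed-size window across the token sequence, compute the type-token ratio (unique words / window size) in each window, and average. A minimal sketch, assuming tokens are already segmented; the window size of 50 is an illustrative choice, not a value from the study.

```python
def mattr(tokens, window=50):
    """Moving-average type-token ratio: the mean TTR over all
    fixed-size windows, sliding one token at a time."""
    if len(tokens) < window:
        raise ValueError("sample shorter than the window")
    ttrs = [
        len(set(tokens[i:i + window])) / window
        for i in range(len(tokens) - window + 1)
    ]
    return sum(ttrs) / len(ttrs)

# A maximally repetitive sample scores 1/window; a sample with no
# repeated word scores 1.0, regardless of overall sample length.
print(mattr(["the"] * 100, window=50))           # 0.02
print(mattr([f"w{i}" for i in range(100)], window=50))  # 1.0
```

Averaging over windows is what makes the index insensitive to total sample length, which is exactly the property the study's length-effect comparison probes.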


Journal of Speech, Language, and Hearing Research | 2017

Language Sample Analysis and Elicitation Technique Effects in Bilingual Children With and Without Language Impairment

Maria Kapantzoglou; Gerasimos Fergadiotis; M. Adelaida Restrepo

Purpose: This study examined whether the language sample elicitation technique (i.e., storytelling and story-retelling tasks with pictorial support) affects lexical diversity (D), grammaticality (grammatical errors per communication unit [GE/CU]), sentence length (mean length of utterance in words [MLUw]), and sentence complexity (subordination index [SI]), which are commonly used indices for diagnosing primary language impairment in Spanish-English-speaking children in the United States. Method: Twenty bilingual Spanish-English-speaking children with typical language development and 20 with primary language impairment participated in the study. Four analyses of variance were conducted to evaluate the effect of language elicitation technique and group on D, GE/CU, MLUw, and SI. Also, 2 discriminant analyses were conducted to assess which indices were more effective for story retelling and storytelling and their classification accuracy across elicitation techniques. Results: D, MLUw, and SI were influenced by the type of elicitation technique, but GE/CU was not. The classification accuracy of language sample analysis was greater in story retelling than in storytelling, with GE/CU and D being useful indicators of language abilities in story retelling and GE/CU and SI in storytelling. Conclusion: Two indices in language sample analysis may be sufficient for diagnosis in 4- to 5-year-old bilingual Spanish-English-speaking children.
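Of the four indices, MLUw is the most mechanical to compute once utterances are segmented: total words divided by the number of utterances. The sketch below assumes whitespace tokenization and pre-segmented communication units; GE/CU and SI additionally require error and clause coding conventions that are not reproduced here.

```python
def mluw(utterances):
    """Mean length of utterance in words: total words across all
    utterances divided by the number of utterances."""
    if not utterances:
        raise ValueError("no utterances")
    total_words = sum(len(u.split()) for u in utterances)
    return total_words / len(utterances)

# Hypothetical transcript fragment, one communication unit per string.
sample = [
    "the dog ran home",          # 4 words
    "he was tired",              # 3 words
    "then he slept on the bed",  # 6 words
]
print(mluw(sample))  # (4 + 3 + 6) / 3, about 4.33
```

Because MLUw depends directly on how long children's sentences run, it is easy to see why a retelling task with a model story could shift it relative to free storytelling, which is the elicitation effect the study tests.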


Aphasiology | 2016

Semantic Knowledge Use in Discourse Produced by Individuals with Anomic Aphasia

Stephen Kintz; Heather Harris Wright; Gerasimos Fergadiotis

Background: Researchers have demonstrated that people with aphasia (PWA) have preserved semantic knowledge. However, some PWA have more impaired access to certain types of knowledge than to others. Yet all of these studies used single concepts; it has not been demonstrated whether PWA have difficulty accessing certain types of features within a discourse sample. Aims: The main goals of this study were to determine whether semantic knowledge and two category types were used differently within discourse produced by participants with anomic aphasia and healthy controls. Methods & Procedures: Participants with anomic aphasia (n = 19) and healthy controls (n = 19) told stories that were transcribed and coded for 10 types of semantic knowledge and two category types, living and non-living things. Outcomes & Results: A Poisson regression model was conducted. The results indicated a significant difference between the groups for two semantic knowledge types, sound and internal state, but no difference was found for category types. Yet the distributions of semantic knowledge and category types produced within the discourse samples were similar between the groups. Conclusions: PWA might have differential access to certain types of semantic knowledge within discourse production, but it does not rise to the level of categorical deficits. These findings extend single-concept research into the realm of discourse.


American Journal of Speech-Language Pathology | 2016

Algorithmic Classification of Five Characteristic Types of Paraphasias

Gerasimos Fergadiotis; Kyle Gorman; Steven Bedrick

Purpose: This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). Method: We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Results: Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Conclusion: Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.
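The three-stage cascade described above, lexicality first, then phonological relatedness, then semantic relatedness, can be sketched end to end. Everything below is a toy stand-in: the mini lexicon replaces the SUBTLEXus norms, the hand-made vectors replace word2vec, character overlap replaces the study's phonological-similarity metric, and the thresholds are invented for illustration.

```python
from difflib import SequenceMatcher

# Toy stand-ins for the real resources used in the study.
LEXICON = {"cat", "dog", "cap", "sofa", "couch", "banana"}
VECTORS = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.1],
    "sofa":  [0.0, 0.9, 0.3],
    "couch": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

def phonologically_related(a, b, threshold=0.5):
    """Crude string-overlap proxy for phonological similarity."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

def semantically_related(a, b, threshold=0.6):
    """Embedding-cosine proxy for semantic relatedness."""
    if a not in VECTORS or b not in VECTORS:
        return False
    return cosine(VECTORS[a], VECTORS[b]) >= threshold

def classify(target, response):
    """Route one paraphasia into one of the five error types."""
    if response not in LEXICON:        # stage 1: lexicality judgment
        return "neologistic"
    phon = phonologically_related(target, response)   # stage 2
    sem = semantically_related(target, response)      # stage 3
    if phon and sem:
        return "mixed"
    if phon:
        return "formal"
    if sem:
        return "semantic"
    return "unrelated"

print(classify("cat", "blick"))   # neologistic: not a real word
print(classify("cat", "cap"))     # formal: sounds alike, unrelated meaning
print(classify("sofa", "couch"))  # semantic: related meaning, sounds different
print(classify("cat", "banana"))  # unrelated: neither criterion met
```

The cascade structure matters: the reported accuracies compound, since a word misjudged at the lexicality stage never reaches the phonological or semantic classifiers.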


Evidence-Based Communication Assessment and Intervention | 2013

This study into the perceptions of caregivers and people with aphasia following a course on total communication yielded promising results, but there are methodological concerns

Aimee Dietz; Gerasimos Fergadiotis

Rautakoski, P. (2011). Training total communication. Aphasiology, 25(3), 344–365. http://dx.doi.org/10.1080/02687038.2010.530671. Source of funding and disclosure of interest: This study was supported by grants from the Finnish Cultural Foundation; the original author of this research reports no conflicts of interest.

Collaboration


Dive into Gerasimos Fergadiotis's collaborations.

Top Co-Authors
Stephen Kintz

East Carolina University

Aimee Dietz

University of Cincinnati
