Conal Cunningham
Mercer University
Publications
Featured research published by Conal Cunningham.
BMJ | 2004
Henry O'Connell; Ai-Vyrn Chin; Conal Cunningham; Brian A. Lawlor
Elderly people have a higher risk of completed suicide than any other age group worldwide.1 Despite this, suicide in elderly people receives relatively little attention, with public health measures, medical research, and media attention focusing on younger age groups.2 We outline the epidemiology and causal factors associated with suicidal behaviour in elderly people and summarise the current measures for prevention and management of this neglected phenomenon. We searched Medline and the Cochrane database for original research and review articles on suicide in elderly people using the search terms “suicide”, “elderly”, and “older”. From time immemorial, suicidal feelings and hopelessness have been considered part of ageing and understandable in the context of being elderly and having physical disabilities. The Ancient Greeks tolerated these attitudes in the extreme and gave elderly people the option of assisted suicide if they could plead convincingly that they had no useful role in society. Such practices were based on the assumption that once an individual had reached a certain age then they no longer had any meaningful purpose in life and would be better off dead. Although not as extreme, ageist beliefs in modern, especially industrialised, societies are based on similar assumptions. Sigmund Freud echoed such views while suffering from incurable cancer of the palate: “It may be that the gods are merciful when they make our lives more unpleasant as we grow old. In the end, death seems less intolerable than the many burdens we have to bear.” The burden of suicide is often calculated in economic terms and, specifically, loss of productivity. Despite lower rates of completed suicide in younger age groups, the absolute number of younger people dying as a result of suicide is higher than that for older people because of the current demographic structure of many societies.1 Younger …
BMJ | 2003
Henry O'Connell; Ai-Vyrn Chin; Conal Cunningham; Brian A. Lawlor
Alcohol use disorders in elderly people are common and associated with considerable morbidity. The ageing of populations worldwide means that the absolute number of older people with alcohol use disorders is on the increase, and health services need to improve their provision of age-appropriate screening and treatment methods and services. Media attention and public health initiatives related to alcohol use disorders tend to focus on younger age groups.1–3 However, alcohol use disorders are common among elderly people and are associated with notable health problems.2 Furthermore, in elderly people they are often underdetected and misdiagnosed,4 as screening instruments and diagnostic criteria are geared towards younger people.5 The ageing of populations worldwide means that the absolute number of elderly people with alcohol use disorders is on the increase,6 and a real danger exists that a “silent epidemic” may be evolving. We searched PubMed for research papers and review articles in the area of alcohol use disorders in elderly people. The prevalence of alcohol use disorders in elderly people is generally accepted to be lower than in younger people, but rates may be underestimated because of underdetection and misdiagnosis, the reasons for which are many and varied.7 The cross-sectional nature of prevalence studies also means that a cohort effect cannot be ruled out. For example, the drinking habits of Americans from the 1920s may differ substantially from those from the era after the second world war because of the effects of prohibition.8 Most prevalence studies have been carried out in North America, and results may not be generalisable to other cultures.7 Rates of alcohol use disorders also vary depending on the restrictiveness of the diagnostic criteria used, with higher rates for “excessive alcohol consumption” and “alcohol abuse” than for “alcohol dependence syndrome.” For …
Dementia and Geriatric Cognitive Disorders | 2006
Muireann Irish; Conal Cunningham; J. Bernard Walsh; Davis Coakley; Brian A. Lawlor; Ian H. Robertson; Robert F. Coen
The enhancing effect of music on autobiographical memory recall was investigated in individuals with mild Alzheimer’s disease (n = 10; Mini-Mental State Examination score >17/30) and matched healthy elderly individuals (n = 10; Mini-Mental State Examination score 25–30). Using a repeated-measures design, each participant was seen on two occasions: once in a music condition (Vivaldi’s ‘Spring’ movement from ‘The Four Seasons’) and once in a silence condition, with order counterbalanced. Considerable improvement was found in the Alzheimer’s group’s recall on the Autobiographical Memory Interview in the music condition, with a significant condition-by-group interaction (p < 0.005). There were no differences in overall arousal, measured by galvanic skin response recordings, or in attentional errors during the Sustained Attention to Response Task. A significant reduction in state anxiety was found on the State-Trait Anxiety Inventory in the music condition (p < 0.001), suggesting anxiety reduction as a potential mechanism underlying the enhancing effect of music on autobiographical memory recall.
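As a rough illustration of the analysis that this two-occasion, two-group design calls for (a stand-in, not the authors' actual method), the sketch below fits a linear mixed model with a random intercept per participant, so the condition × group interaction term captures whether the music benefit differs between the Alzheimer's and control groups. All data and column names (subject, group, condition, recall) are synthetic and hypothetical.

```python
# Minimal sketch: repeated-measures comparison via a linear mixed model.
# Synthetic data only; not the study's dataset or code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
rows = []
for subject in range(20):
    group = "AD" if subject < 10 else "control"
    baseline = rng.normal(30 if group == "AD" else 50, 5)      # per-subject baseline recall
    for condition in ("silence", "music"):
        boost = 8 if (group == "AD" and condition == "music") else 0  # built-in interaction
        rows.append({"subject": subject, "group": group, "condition": condition,
                     "recall": baseline + boost + rng.normal(0, 2)})
df = pd.DataFrame(rows)

# Random intercept per subject accounts for each participant appearing in both conditions
model = smf.mixedlm("recall ~ condition * group", data=df, groups="subject").fit()
print(model.summary())   # inspect the condition:group interaction coefficient
```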
Age and Ageing | 2008
Maura O'Sullivan; Catherine Blake; Conal Cunningham; Gerard Boyle; Ciaran Finucane
BACKGROUND: Falls are a common cause of injury and decreased functional independence in the older adult. Diagnosis and treatment of fallers require tools that accurately assess physiological parameters associated with balance. Validated clinical tools include the Berg Balance Scale (BBS) and the Timed Up and Go test (TUG); however, the BBS tends to be subjective in nature, while the TUG quantifies an individual's functional impairment but requires further subjective evaluation for balance assessment. Other quantitative alternatives to date require expensive, sophisticated equipment. Measurement of the acceleration of the centre of mass with relatively inexpensive, lightweight, body-mounted accelerometers is a potential solution to this problem. OBJECTIVES: (i) to determine whether accelerometry correlates with standard clinical tests (BBS and TUG), (ii) to characterise accelerometer responses to increasingly difficult challenges to balance and (iii) to characterise acceleration patterns between fallers and non-fallers. STUDY DESIGN AND SETTING: Torso accelerations were measured at the level of L3 using a tri-axial accelerometer under four conditions: standing unsupported with eyes open (EO), eyes closed (EC), and on a mat with eyes open (MAT EO) and closed (MAT EC). Older patients (n = 21; 8 males, 13 females) with a mean age of 78 (SD 7.6) years who attended a day hospital were recruited for this study. Patients were identified as fallers or non-fallers based on a comprehensive falls history. MEASUREMENTS: Spearman's rank correlation analysis examined the relationship between acceleration root mean square (RMS) data and the BBS, while Pearson's correlation was used with TUG scores. Differences in accelerometer RMS between fallers and non-fallers and between test conditions were examined using t-tests and non-parametric alternatives where appropriate. RESULTS: There was a stepwise increase in accelerometer RMS with increasing task complexity, and the accelerometer was able to distinguish significantly between sway responses to all test conditions except between EO and EC (P < 0.05). Acceleration data for MAT EO were significantly and inversely correlated with BBS scores (r = -0.829, P < 0.001) and positively correlated with TUG values (r = 0.621, P < 0.01). There was a significant difference in acceleration RMS for MAT EO between fallers and non-fallers (P < 0.011). CONCLUSIONS: This is the first study of its kind to show a high correlation between accelerometry, the BBS and the TUG. Accelerometry could also distinguish between sway responses to differing balance conditions and between fallers and non-fallers. Accelerometry was shown to be an efficient, quantitative alternative for the measurement of balance in older people.
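A minimal sketch of how torso-acceleration RMS can be computed and related to the BBS and TUG, assuming synthetic tri-axial traces; this is illustrative only and not the study's code. Variable names (traces, bbs_scores, tug_times) are hypothetical, and because the data are random no real correlation is expected here.

```python
# Sketch: RMS of tri-axial torso acceleration and correlation with clinical balance scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def acceleration_rms(acc_xyz):
    """RMS of the magnitude of a tri-axial acceleration trace (N samples x 3 axes)."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)   # combine x, y, z per sample
    magnitude = magnitude - magnitude.mean()      # remove the static (gravity) component
    return float(np.sqrt(np.mean(magnitude ** 2)))

# Hypothetical data: one 10 s trace at 100 Hz per participant for one condition (e.g. MAT EO)
n_participants = 21
traces = [rng.normal(0, 0.1, size=(1000, 3)) for _ in range(n_participants)]
rms_values = np.array([acceleration_rms(t) for t in traces])

bbs_scores = rng.integers(20, 57, size=n_participants)    # Berg Balance Scale (0-56, ordinal)
tug_times = rng.normal(15, 5, size=n_participants)        # Timed Up and Go, seconds

rho, p_rho = stats.spearmanr(rms_values, bbs_scores)      # ordinal scale -> Spearman
r, p_r = stats.pearsonr(rms_values, tug_times)            # continuous times -> Pearson
print(f"Spearman rho vs BBS: {rho:.2f} (p={p_rho:.3f}); Pearson r vs TUG: {r:.2f} (p={p_r:.3f})")
```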
The Journal of Clinical Endocrinology and Metabolism | 2014
Eamon Laird; Helene McNulty; Mary Ward; L. Hoey; Emeir M. McSorley; Julie M. W. Wallace; E. L. Carson; Anne M. Molloy; Martin Healy; Miriam Casey; Conal Cunningham; J. J. Strain
CONTEXT: Inadequate vitamin D status is common within elderly populations and may be implicated in the etiology of autoimmune disease and inflammation. Few studies have investigated the relationship between vitamin D status and age-related immune dysfunction in humans. OBJECTIVE: The aim of this study was to investigate the association between vitamin D status and immune markers of inflammation in a large sample of older adults. DESIGN, SETTING, AND PARTICIPANTS: An observational investigation of 957 Irish adults (>60 years of age) recruited in Northern Ireland (55°N latitude) as part of the Trinity Ulster Department of Agriculture aging cohort study. MAIN OUTCOME MEASURE: We measured serum 25-hydroxyvitamin D (25(OH)D) by liquid chromatography tandem mass spectrometry and serum cytokines IL-6, TNF-α, IL-10, and C-reactive protein (CRP) by ELISA. RESULTS: Concentrations of IL-6, CRP, and the ratios of IL-6 to IL-10 and CRP to IL-10 were significantly higher in individuals with deficient (<25 nmol/L) serum 25(OH)D compared with those with sufficient (>75 nmol/L) status after adjustment for age, sex, and body mass index (P < .05). Vitamin D status was a significant predictor of the IL-6 to IL-10 cytokine ratio, and those participants defined as deficient were significantly more likely to have an IL-6 to IL-10 ratio >2:1 compared with those defined as sufficient. CONCLUSIONS: This study demonstrated significant associations between low vitamin D status and markers of inflammation (including the ratio of IL-6 to IL-10) within elderly adults. These findings suggest that an adequate vitamin D status may be required for optimal immune function, particularly within the older adult population.
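The sketch below illustrates the general form of such an adjusted analysis, not the paper's exact model: logistic regression of an IL-6:IL-10 ratio above 2:1 on vitamin D deficiency (<25 nmol/L), adjusting for age, sex, and BMI. All data and column names are synthetic and hypothetical, so no real association is expected in the output.

```python
# Sketch: adjusted logistic regression of a high IL-6:IL-10 ratio on vitamin D deficiency.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 957
df = pd.DataFrame({
    "vitd_25ohd": rng.normal(50, 20, n).clip(5, 150),    # serum 25(OH)D, nmol/L
    "il6": rng.lognormal(0.5, 0.6, n),                   # pg/mL
    "il10": rng.lognormal(0.2, 0.6, n),                  # pg/mL
    "age": rng.integers(60, 95, n),
    "sex": rng.choice(["male", "female"], n),
    "bmi": rng.normal(27, 4, n),
})
df["deficient"] = (df["vitd_25ohd"] < 25).astype(int)         # <25 nmol/L = deficient
df["high_ratio"] = (df["il6"] / df["il10"] > 2).astype(int)   # IL-6:IL-10 ratio > 2:1

model = smf.logit("high_ratio ~ deficient + age + C(sex) + bmi", data=df).fit(disp=0)
print(model.summary())
print("Adjusted OR for deficiency:", np.exp(model.params["deficient"]))
```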
Aging & Mental Health | 2011
Damien Gallagher; Aine Ni Mhaolain; Lisa Crosby; Deirdre Ryan; Loretto Lacey; Robert F. Coen; Cathal Walsh; Davis Coakley; J. Bernard Walsh; Conal Cunningham; Brian A. Lawlor
Background: Self-efficacy is the belief that one can perform a specific task or behaviour and is a modifiable attribute which has been shown to influence health behaviours. Few studies have examined the relationship between self-efficacy for dementia-related tasks and symptoms of burden and depression in caregivers. Methods: Eighty-four patient/caregiver dyads with Alzheimer's disease were recruited through a memory clinic. Patient function, cognition and neuropsychiatric symptoms were assessed together with caregiver burden, personality, depressive symptoms, coping strategies and self-efficacy for completing tasks related to dementia care. Results: 33% (n = 28) of caregivers reported significant depressive symptoms (CES-D ≥ 10). In multivariate analyses, caregiver burden was predicted by self-efficacy for symptom management, neuroticism, patient function and neuropsychiatric symptoms, while caregiver depression was predicted by self-efficacy for symptom management, caregiver educational level, neuroticism, emotion-focused coping, dysfunctional coping and patient function. In patients with moderate to severe impairment (MMSE ≤ 20), self-efficacy for symptom management behaved as a mediator between patient neuropsychiatric symptoms and symptoms of burden and depression in caregivers. Conclusions: Further longitudinal investigation is warranted to determine whether self-efficacy might be usefully considered a target in future interventional studies to alleviate symptoms of burden and depression in Alzheimer's caregivers.
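As a rough sketch of how a mediation effect like the one described above can be tested (not the study's analysis), the example below uses the product-of-coefficients approach with a bootstrap confidence interval: the predictor-to-mediator coefficient multiplied by the mediator-to-outcome coefficient gives the indirect effect. All data and column names (npi, self_eff, burden) are synthetic and hypothetical.

```python
# Sketch: indirect (mediation) effect of neuropsychiatric symptoms on caregiver burden
# via self-efficacy, with a percentile bootstrap CI. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 84
npi = rng.normal(20, 10, n)                                       # neuropsychiatric symptoms
self_eff = 50 - 0.8 * npi + rng.normal(0, 5, n)                   # self-efficacy (mediator)
burden = 30 + 0.4 * npi - 0.5 * self_eff + rng.normal(0, 5, n)    # caregiver burden (outcome)
df = pd.DataFrame({"npi": npi, "self_eff": self_eff, "burden": burden})

def indirect_effect(d):
    a = smf.ols("self_eff ~ npi", data=d).fit().params["npi"]               # predictor -> mediator
    b = smf.ols("burden ~ self_eff + npi", data=d).fit().params["self_eff"] # mediator -> outcome
    return a * b

boot = [indirect_effect(df.sample(frac=1, replace=True)) for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```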
Clinical Chemistry | 2011
Edward Valente; John M. Scott; Per-Magne Ueland; Conal Cunningham; Miriam Casey; Anne M. Molloy
BACKGROUND: Vitamin B₁₂ deficiency is common among the elderly, and early detection is clinically important. However, clinical signs and symptoms have limited diagnostic accuracy and there is no accepted reference test method. METHODS: In elderly subjects (n = 700; age range 63-97 years), we investigated the ability of serum cobalamin, holotranscobalamin (holoTC), total homocysteine (tHcy), methylmalonic acid (MMA), serum and erythrocyte folate, and other hematologic variables to discriminate cobalamin deficiency, defined as red blood cell cobalamin <33 pmol/L. RESULTS: Serum holoTC was the best predictor, with an area under the ROC curve of 0.90 (95% CI, 0.86-0.93), significantly better (P ≤ 0.0002) than the next best predictors: serum cobalamin, 0.80 (0.75-0.85), and MMA, 0.78 (0.72-0.83). For these 3 analytes, we constructed a 3-zone partition with positive and negative zones and a deliberate indeterminate zone between them. The boundaries were the values of each test that resulted in a posttest probability of deficiency of 60% and a posttest probability of no deficiency of 98%. The proportion of indeterminate observations for holoTC, cobalamin, and MMA was 14%, 45%, and 50%, respectively. Within the holoTC indeterminate zone (defined as 20-30 pmol/L), discriminant analysis selected only erythrocyte folate, which correctly allocated 65% (58/89) of the observations. Renal dysfunction compromised the diagnostic accuracy of MMA but not of holoTC or serum cobalamin. CONCLUSIONS: This study supports the use of holoTC as the first-line diagnostic procedure for assessing vitamin B₁₂ status.
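A simplified sketch of the three-zone idea described above (not the paper's exact method): fit a logistic model of deficiency on holoTC, compute the post-test probability of deficiency across holoTC values, and treat the range where that probability falls between 2% (i.e. 98% probability of no deficiency) and 60% as the indeterminate zone. The data, prevalence, and cut-offs below are synthetic and hypothetical.

```python
# Sketch: ROC AUC plus a three-zone (positive / indeterminate / negative) partition
# based on post-test probabilities from a logistic model. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 700
deficient = rng.binomial(1, 0.15, n)                 # 1 = red-cell cobalamin < 33 pmol/L
holotc = np.where(deficient, rng.normal(18, 8, n), rng.normal(55, 20, n)).clip(1, None)
df = pd.DataFrame({"deficient": deficient, "holotc": holotc})

fit = smf.logit("deficient ~ holotc", data=df).fit(disp=0)
print("AUC:", roc_auc_score(df["deficient"], fit.predict(df)))

grid = pd.DataFrame({"holotc": np.linspace(1, 150, 500)})
p = fit.predict(grid)                                 # post-test P(deficient) across holoTC values
positive_cut = grid["holotc"][p >= 0.60].max()        # below this, P(deficient) >= 60%
negative_cut = grid["holotc"][p <= 0.02].min()        # above this, P(no deficiency) >= 98%
print(f"positive zone: holoTC < {positive_cut:.0f} pmol/L; "
      f"negative zone: holoTC > {negative_cut:.0f} pmol/L; indeterminate in between")
```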
International Journal of Geriatric Psychiatry | 2011
Damien Gallagher; Robert F. Coen; Dana Kilroy; Kate Belinski; Irene Bruce; Davis Coakley; Bernard Walsh; Conal Cunningham; Brian A. Lawlor
Depression and anxiety have been reported to be independently predictive of conversion to Alzheimer's disease (AD) in patients with mild cognitive impairment (MCI). Anxiety symptoms have been less well studied, and findings in this regard have been inconsistent. The objectives of this study are to determine which symptoms, among a range of neuropsychiatric symptoms known to occur commonly in patients with MCI, are predictive of later conversion to AD. We also wish to determine whether these symptoms track existing measures of declining cognitive and functional status or may be considered distinct and sensitive biomarkers of evolving Alzheimer's pathology.
American Journal of Alzheimer's Disease and Other Dementias | 2011
Damien Gallagher; Aine Ni Mhaolain; Lisa Crosby; Deirdre Ryan; Loretto Lacey; Robert F. Coen; Cathal Walsh; Davis Coakley; J. Bernard Walsh; Conal Cunningham; Brian A. Lawlor
The dependence scale has been designed to be sensitive to the overall care needs of the patient and is considered distinct from standard measures of functional ability in this regard. Little is known regarding the relationship between patient dependence and caregiver burden. We recruited 100 patients with Alzheimer’s disease or mild cognitive impairment and their caregivers through a memory clinic. Patient function, dependence, hours of care, cognition, neuropsychiatric symptoms, and caregiver burden were assessed. Dependence was significantly correlated with caregiver burden. Functional decline and dependence were most predictive of caregiver burden in patients with mild impairment while behavioral symptoms were most predictive in patients with moderate to severe disease. The dependence scale demonstrated good utility as a predictor of caregiver burden. Interventions to reduce caregiver burden should address patient dependence, functional decline, and behavioral symptoms while successful management of the latter becomes more critical with disease progression.
International Journal of Geriatric Psychiatry | 2010
Damien Gallagher; Aine Ni Mhaolain; Robert F. Coen; Cathal Walsh; Dana Kilroy; Kate Belinski; Irene Bruce; Davis Coakley; J. B. Walsh; Conal Cunningham; Brian A. Lawlor
The Cambridge Cognitive Examination (CAMCOG) is a brief neuropsychological battery which is well established and widely used. The utility of the CAMCOG in detecting prodromal Alzheimer's disease (AD) in patients with mild cognitive impairment (MCI) has not been determined. The objectives of this study are: to establish which subtests of cognitive domains contained within the CAMCOG are predictive of conversion to AD, to compare these with an extended version of the delayed word recall (DWR) test, and to establish optimal cut points for all measures used.
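As a hedged illustration of how an "optimal cut point" for a cognitive test score is commonly derived (the paper's own procedure may differ), the sketch below uses the ROC curve and Youden's J statistic (sensitivity + specificity - 1). The CAMCOG scores and conversion labels are synthetic and hypothetical.

```python
# Sketch: optimal cut point for a test score via Youden's J from an ROC curve. Synthetic data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(4)
n = 150
converted = rng.binomial(1, 0.4, n)                   # 1 = progressed from MCI to AD
camcog = np.where(converted, rng.normal(78, 6, n), rng.normal(88, 6, n))  # synthetic total scores

# Lower scores indicate greater impairment, so use the negated score as the "risk" value
fpr, tpr, thresholds = roc_curve(converted, -camcog)
youden_j = tpr - fpr
best = np.argmax(youden_j)

print("AUC:", roc_auc_score(converted, -camcog))
print(f"optimal cut point: CAMCOG <= {-thresholds[best]:.0f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```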