Publication


Featured research published by Belkacem Abdous.


Obstetrics & Gynecology | 2010

The Role of Uterine Closure in the Risk of Uterine Rupture

Emmanuel Bujold; Martine Goyet; Sylvie Marcoux; Normand Brassard; Beatrice Cormier; Emily F. Hamilton; Belkacem Abdous; Elhadji A. Laouan Sidi; Robert A. Kinch; Louise Miner; André Masse; Claude Fortin; Guy-Paul Gagné; André Fortier; Gilles Bastien; Robert Sabbah; Pierre Guimond; Stéphanie Roberge; Robert J. Gauthier

To the Editor: I read with great interest the latest article on the role of single compared with double stitching for uterine closure.1 The article found that one of the 10 centers had no uterine ruptures during the 10-year period. Perhaps future protocols might profit from looking at that center's apparently successful management. The study included 96 uterine ruptures after previous cesarean delivery, but only 74 of them had a known method of closure at the previous cesarean delivery, the variable being studied in this report. The 23% of cases in which the previous method of closure was unknown were irrelevant to the study. Despite this, these cases were given "matching controls" that occurred at the same time and place as the unknown-closure cases, and the controls of the unknown cases were included in all the analyses. This may reflect a desire to "prove" a predetermined outcome. In the study, to calculate how many controls were needed, it was assumed that 50% of uterine ruptures would occur in women with a previous single-suture technique, whereas only 25% of controls would have previous single-suture closure. Is it serendipitous that the conclusions match the predetermined assumptions? It stands to reason that the more you stretch the skin near scar tissue, the more likely it is to rupture. Eight large studies found that a birth weight of more than 4,000 g during a trial of labor was a significant factor for failed vaginal birth after cesarean and for uterine rupture, and one found low rates of uterine rupture with birth weights of 2,500 g or less.2–10 Case-control studies are reliable when variables with important influence are matched in the controls. Because birth weight is an already known and important factor for uterine rupture, birth weights needed to be matched between the control group and the cases. The authors write that it is impossible to control birth weight, thereby emphasizing a need to determine which closure technique is more effective. The literature has already shown that low-glycemic diets with 50 g of protein intake per day after 12 weeks of gestation result in lower birth weights without increases in stillbirth or prematurity.11 A prospective multicenter study, controlling for the important factor of birth weight, surely is required before this question can be resolved.


Archive | 2005

Dependence Properties of Meta-Elliptical Distributions

Belkacem Abdous; Christian Genest; Bruno Rémillard

A distribution is said to be meta-elliptical if its associated copula is elliptical. Various properties of these copulas are critically reviewed in terms of association measures, dependence concepts, and stochastic orderings, including tail dependence. Most results pertain to the bivariate case.
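
The construction can be made concrete with a small simulation. Below is a minimal Python sketch (not from the chapter): it samples a bivariate meta-Gaussian distribution by drawing from a Gaussian (elliptical) distribution, mapping to uniforms, and applying arbitrary inverse marginal CDFs. The correlation value and the exponential and Student-t margins are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Sample from a bivariate meta-elliptical distribution built on a Gaussian copula.
rng = np.random.default_rng(0)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1. Draw from the elliptical (here Gaussian) distribution.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)

# 2. Transform to uniforms through the elliptical margins; this is the copula.
u = stats.norm.cdf(z)

# 3. Apply the inverse CDFs of the desired (non-elliptical) margins.
x = stats.expon.ppf(u[:, 0], scale=2.0)   # exponential margin
y = stats.t.ppf(u[:, 1], df=5)            # Student-t margin

# Association measures such as Kendall's tau depend only on the copula:
# for an elliptical copula, tau = (2/pi) * arcsin(rho).
tau, _ = stats.kendalltau(x, y)
print(f"sample tau = {tau:.3f}, theory = {2 / np.pi * np.arcsin(rho):.3f}")
```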


Environmental Health Perspectives | 2011

Relation between Methylmercury Exposure and Plasma Paraoxonase Activity in Inuit Adults from Nunavik

Pierre Ayotte; Antoine Carrier; Nathalie Ouellet; Véronique Boiteau; Belkacem Abdous; Elhadji A. Laouan Sidi; Marie-Ludivine Château-Degat; Eric Dewailly

Background: Methylmercury (MeHg) exposure has been linked to an increased risk of coronary heart disease (CHD). Paraoxonase 1 (PON1), an enzyme located in the high-density lipoprotein (HDL) fraction of blood lipids, may protect against CHD by metabolizing toxic oxidized lipids associated with low-density lipoprotein and HDL. MeHg has been shown to inhibit PON1 activity in vitro, but this effect has not been studied in human populations. Objectives: This study was conducted to determine whether blood mercury levels are linked to decreased plasma PON1 activities in Inuit people, who are highly exposed to MeHg through their seafood-based diet. Methods: We measured plasma PON1 activity using a fluorogenic substrate and blood concentrations of mercury and selenium by inductively coupled plasma mass spectrometry in 896 Inuit adults. Sociodemographic, anthropometric, clinical, dietary, and lifestyle variables as well as PON1 gene variants (rs705379, rs662, rs854560) were considered as possible confounders or modifiers of the mercury–PON1 relation in multivariate analyses. Results: In a multiple regression model adjusted for age, HDL cholesterol levels, omega-3 fatty acid content of erythrocyte membranes, and PON1 variants, blood mercury concentrations were inversely associated with PON1 activities [β-coefficient = –0.063; 95% confidence interval (CI), –0.091 to –0.035; p < 0.001], whereas blood selenium concentrations were positively associated with PON1 activities (β-coefficient = 0.067; 95% CI, 0.045–0.088; p < 0.001). We found no interaction between blood mercury levels and PON1 genotypes. Conclusions: Our results suggest that MeHg exposure exerts an inhibitory effect on PON1 activity, which seems to be offset by selenium intake.
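
To make the adjusted-regression step concrete, here is a minimal sketch on synthetic data of the kind of covariate-adjusted model described in the Methods. The column names, effect sizes, genotype coding, and the OLS formulation are assumptions for illustration, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 896
df = pd.DataFrame({
    "log_hg": rng.normal(2.5, 0.8, n),     # log blood mercury (placeholder scale)
    "log_se": rng.normal(5.5, 0.4, n),     # log blood selenium
    "age": rng.uniform(18, 74, n),
    "hdl_c": rng.normal(1.4, 0.3, n),      # HDL cholesterol
    "omega3": rng.normal(8.0, 2.0, n),     # % of erythrocyte membrane fatty acids
    "rs705379": rng.integers(0, 3, n),     # hypothetical genotype coding (0/1/2)
})
# "True" model for illustration: mercury lowers PON1 activity, selenium raises it.
df["log_pon1"] = (3.0 - 0.06 * df["log_hg"] + 0.07 * df["log_se"]
                  + 0.002 * df["age"] + 0.1 * df["hdl_c"]
                  + rng.normal(0, 0.2, n))

# Multiple regression adjusted for the covariates, as in the abstract.
fit = smf.ols("log_pon1 ~ log_hg + log_se + age + hdl_c + omega3 + C(rs705379)",
              data=df).fit()
print(fit.params[["log_hg", "log_se"]])           # adjusted beta-coefficients
print(fit.conf_int().loc[["log_hg", "log_se"]])   # 95% confidence intervals
```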


Annals of Surgery | 2009

The Trauma Risk Adjustment Model: A New Model for Evaluating Trauma Care

Lynne Moore; André Lavoie; Alexis F. Turgeon; Belkacem Abdous; Natalie Le Sage; Marcel Émond; Moishe Liberman; Eric Bergeron

Summary Background Data: The Trauma and Injury Severity Score (TRISS) has been used for over 20 years for retrospective risk assessment in trauma populations. The TRISS has serious limitations, which may compromise the validity of trauma care evaluations. Objective: To derive and validate a new mortality prediction model, the trauma risk adjustment model (TRAM), and to compare the performance of the TRAM to that of the TRISS in terms of predictive validity and risk adjustment. Methods: The Quebec Trauma Registry (1998–2005), based on the mandatory participation of 59 designated provincial trauma centers, was used to derive the model. The American National Trauma Data Bank (2000–2005), based on the voluntary participation of US hospitals treating trauma, was used for the validation phase. Adult patients with blunt trauma meeting at least one of the following criteria were included: hospital stay >2 days, intensive care unit admission, death, or hospital transfer. Hospital mortality was modeled with logistic generalized additive models using cubic smoothing splines to accommodate nonlinear relations to mortality. Predictive validity was assessed with model discrimination and calibration. Risk adjustment was assessed using comparisons of risk-adjusted mortality between hospitals. Results: The TRAM generated an area under the receiver operating characteristic curve of 0.944 and a Hosmer-Lemeshow statistic of 42 in the derivation phase. In the validation phase, the TRAM demonstrated better model discrimination and calibration than the TRISS (area under the receiver operating characteristic curve = 0.942 and 0.928, P < 0.001; Hosmer-Lemeshow statistics = 127 and 256, respectively). Replacing the TRISS with the TRAM led to a mean change of 28% in hospital risk-adjusted odds ratios of mortality. Conclusions: Our results suggest that adopting the TRAM could improve the validity of trauma care evaluations and trauma outcome research.
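
As an illustration of the spline-based modeling idea, the sketch below fits a logistic model with spline-expanded continuous predictors and evaluates discrimination by AUC on synthetic data. It uses scikit-learn rather than the generalized additive model software of the study, and the variables and risk function are invented; it is not the TRAM itself.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer, StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
age = rng.uniform(16, 95, n)
injury_severity = rng.uniform(0, 75, n)
# Nonlinear "true" mortality risk, for illustration only.
logit = -6 + 0.00004 * (age - 40) ** 2 + 0.08 * injury_severity
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
X = np.column_stack([age, injury_severity])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(
    SplineTransformer(degree=3, n_knots=6),   # flexible spline terms per predictor
    StandardScaler(),
    LogisticRegression(max_iter=2000),
)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC on held-out data: {auc:.3f}")
```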


Annals of Emergency Medicine | 2008

Using Information on Preexisting Conditions to Predict Mortality From Traumatic Injury

Lynne Moore; André Lavoie; Natalie Le Sage; Eric Bergeron; Marcel Émond; Moishe Liberman; Belkacem Abdous

STUDY OBJECTIVE Preexisting conditions have been found to be an independent predictor of mortality after trauma. However, no consensus has been reached as to which indicator of preexisting condition status should be used, and the contribution of preexisting conditions to mortality prediction models is unclear. This study aims to identify the most accurate way to model preexisting condition status to predict in-hospital trauma mortality and to evaluate the potential gain of adding preexisting condition status to a standard trauma mortality prediction model. METHODS The study comprised all patients from the trauma registries of 4 Level I trauma centers. Information provided by individual preexisting conditions was compared with 3 commonly used summary measures: (1) absence/presence of any preexisting condition, (2) number of preexisting conditions, and (3) Charlson Comorbidity Index. The impact of adding preexisting condition status to 2 baseline risk models, the current standard Trauma and Injury Severity Score model and an improved model based on nonparametric transformations of quantitative variables, was evaluated by the area under the receiver operating characteristic curve. RESULTS Discrimination for predicting mortality in the improved model was as follows: baseline risk model: area under the receiver operating characteristic curve = 0.935; baseline risk model + individually modeled preexisting conditions: area under the curve = 0.941; baseline risk model + presence of any preexisting condition: area under the curve = 0.937; baseline risk model + number of preexisting conditions: area under the curve = 0.939; baseline risk model + Charlson Comorbidity Index: area under the curve = 0.938. CONCLUSION Preexisting condition status is an independent predictor of mortality from trauma that provides a modest improvement in mortality prediction. The total number of preexisting conditions is a good summary measure of preexisting condition status. The Charlson Comorbidity Index is no better than the total number of preexisting conditions and is therefore not recommended for use in trauma mortality modeling.
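
The comparison of summary measures can be sketched on synthetic data. The toy example below contrasts the AUC gained by adding an any-comorbidity flag versus a comorbidity count to a baseline logistic model; the data-generating model and effect sizes are invented for illustration and do not reproduce the registry results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 8000
severity = rng.normal(0, 1, n)                 # stand-in for baseline injury risk
n_pec = rng.poisson(1.0, n)                    # number of preexisting conditions
logit = -3.0 + 1.2 * severity + 0.25 * n_pec   # "true" model, for illustration
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def auc_for(features):
    """Fit a logistic model on the given features and return held-out AUC."""
    X = np.column_stack(features)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    m = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])

print("baseline:           ", round(auc_for([severity]), 3))
print("+ any PEC (yes/no): ", round(auc_for([severity, (n_pec > 0).astype(int)]), 3))
print("+ number of PECs:   ", round(auc_for([severity, n_pec]), 3))
```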


British Journal of Nutrition | 2013

Gene–diet interactions on plasma lipid levels in the Inuit population

Iwona Rudkowska; Eric Dewailly; Robert A. Hegele; Véronique Boiteau; Ariane Dubé-Linteau; Belkacem Abdous; Yves Giguère; Marie-Ludivine Chateau-Degat; Marie-Claude Vohl

The Inuit population is often described as being protected against CVD due to their traditional dietary patterns and their unique genetic background. The objective of the present study was to examine gene-diet interaction effects on plasma lipid levels in the Inuit population. Data from the Qanuippitaa Nunavik Health Survey (n 553) were analysed via regression models that included the following: genotypes for thirty-five known polymorphisms (SNP) from twenty genes related to lipid metabolism; dietary fat intake, including total fat (TotFat) and saturated fat (SatFat), estimated from a FFQ; and plasma lipid levels, namely total cholesterol (TC), LDL-cholesterol (LDL-C), HDL-cholesterol (HDL-C) and TAG. The results demonstrate that allele frequencies were different in the Inuit population compared with the Caucasian population. Further, seven SNP (APOA1 -75G/A (rs670), APOB XbaI (rs693), AGT M235T (rs699), LIPC 480C/T (rs1800588), APOA1 84T/C (rs5070), PPARG2 -618C/G (rs10865710) and APOE 219G/T (rs405509)) in interaction with TotFat and SatFat were significantly associated with one or two plasma lipid parameters. Another four SNP (APOC3 3238C>G (rs5128), CETP I405V (rs5882), CYP1A1 A4889G (rs1048943) and ABCA1 Arg219Lys (rs2230806)) in interaction with either TotFat or SatFat intake were significantly associated with one plasma lipid variable. Further, an additive effect of these SNP in interaction with TotFat or SatFat intake was significantly associated with higher TC, LDL-C or TAG levels, as well as with lower HDL-C levels. In conclusion, the present study supports the notion that gene-diet interactions play an important role in modifying plasma lipid levels in the Inuit population.
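
A gene-by-diet interaction test of the kind described can be sketched with a single regression term. The example below uses statsmodels on synthetic data; the SNP coding, column names, and effect sizes are assumptions chosen purely for illustration, not the survey analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 553
df = pd.DataFrame({
    "snp": rng.integers(0, 3, n),       # 0/1/2 copies of the minor allele (additive coding)
    "satfat": rng.normal(25, 8, n),     # saturated fat intake, g/day (placeholder)
    "age": rng.uniform(18, 74, n),
    "sex": rng.integers(0, 2, n),
})
# "True" model with a genotype-by-diet interaction, for illustration only.
df["ldl_c"] = (2.5 + 0.02 * df["satfat"] + 0.1 * df["snp"]
               + 0.015 * df["snp"] * df["satfat"]
               + 0.01 * df["age"] + rng.normal(0, 0.6, n))

# The snp:satfat term estimates how the dietary-fat effect differs by genotype.
fit = smf.ols("ldl_c ~ snp * satfat + age + C(sex)", data=df).fit()
print(fit.summary().tables[1])
```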


Journal of Trauma-injury Infection and Critical Care | 2008

Consensus or Data-Derived Anatomic Injury Severity Scoring?

Lynne Moore; André Lavoie; Natalie Le Sage; Eric Bergeron; Marcel Émond; Belkacem Abdous

BACKGROUND Anatomic injury severity scores can be grouped into two classes: consensus-derived and data-derived. The former, including the Injury Severity Score (ISS), the New Injury Severity Score (NISS), and the Anatomic Profile Score (APS), are based on the severity scores of the Abbreviated Injury Scale (AIS), assigned by clinical experts. The latter, including the International Classification of Diseases Injury Severity Score (ICISS) and the Trauma Registry Abbreviated Injury Scale Score (TRAIS), are based on survival probabilities calculated in large trauma databases. We aimed to compare the predictive accuracy of consensus-derived and data-derived severity scores when considered alone and in combination with age and physiologic status. METHODS Analyses were based on 25,111 patients from the trauma registries of the four Level I trauma centers in the province of Quebec, Canada, abstracted between April 1998 and March 2005. The predictive validity of each severity score was evaluated in logistic regression models predicting hospital mortality, using measures of discrimination (area under the receiver operating characteristic curve [AUC]) and calibration (Hosmer-Lemeshow statistic [HL]). RESULTS Data-derived scores had consistently better predictive accuracy than consensus-derived scores in univariate models (p < 0.0001), but very little difference between scores was observed in models including information on age and physiologic status. The difference in AUC between the least accurate severity score (ISS) and the most accurate severity score (TRAIS) was 15% in anatomic-only models but fell to 2% in models including age and physiologic status. CONCLUSIONS Data-derived scores provide more accurate mortality prediction than consensus-derived scores when only anatomic injury severity is considered but offer little advantage if age and physiologic status are taken into account. This may be because data-derived scores are not an independent measure of anatomic injury severity.
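
The calibration measure used here, the Hosmer-Lemeshow statistic, can be written down directly. The sketch below implements the usual decile-of-risk formulation; the exact grouping scheme is an assumption, and the predictions are synthetic, so this is not code from the paper.

```python
import numpy as np
from scipy import stats

def hosmer_lemeshow(y_true, y_prob, n_groups=10):
    """Decile-of-risk Hosmer-Lemeshow chi-square statistic and p-value."""
    order = np.argsort(y_prob)
    y_true, y_prob = np.asarray(y_true)[order], np.asarray(y_prob)[order]
    groups = np.array_split(np.arange(len(y_prob)), n_groups)
    hl = 0.0
    for g in groups:
        obs = y_true[g].sum()        # observed deaths in the risk group
        exp = y_prob[g].sum()        # expected deaths in the risk group
        n_g = len(g)
        p_bar = exp / n_g
        hl += (obs - exp) ** 2 / (n_g * p_bar * (1 - p_bar))
    p_value = stats.chi2.sf(hl, df=n_groups - 2)
    return hl, p_value

# Example on synthetic, well-calibrated predictions: HL should be small.
rng = np.random.default_rng(4)
p = rng.uniform(0.01, 0.5, 5000)
y = rng.binomial(1, p)
print(hosmer_lemeshow(y, p))
```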


Journal of Epidemiology and Community Health | 2014

Household crowding is associated with higher allostatic load among the Inuit

Mylène Riva; Pierrich Plusquellec; Robert-Paul Juster; Elhadji A. Laouan-Sidi; Belkacem Abdous; Michel Lucas; Serge Déry; Eric Dewailly

Background Household crowding is an important problem in some Aboriginal communities and is reaching particularly high levels among the circumpolar Inuit. Living in overcrowded conditions may endanger health via stress pathophysiology. This study examines whether higher household crowding is associated with stress-related physiological dysregulations among the Inuit. Methods Cross-sectional data on 822 Inuit adults were taken from the 2004 Qanuippitaa? How are we? Nunavik Inuit Health Survey. Chronic stress was measured using the concept of allostatic load (AL), representing the multisystemic biological ‘wear and tear’ of chronic stress. A summary index of AL was constructed from 14 physiological indicators compiled into a traditional count-based index and a binary variable contrasting people at risk on at least seven physiological indicators. Household crowding was measured using indicators of household size (total number of people and number of children per house) and overcrowding, defined as more than one person per room. Data were analysed using weighted generalised estimating equations controlling for participants’ age, sex, income, diet and involvement in traditional activities. Results Higher household crowding was significantly associated with elevated AL levels and with greater odds of being at risk on at least seven physiological indicators, especially among women and independently of individuals’ characteristics. Conclusions This study demonstrates that household crowding is a source of chronic stress among the Inuit of Nunavik. Differential housing conditions are shown to be a marker of health inequalities in this population. Housing conditions are a critical public health issue in many Aboriginal communities and must be investigated further to inform healthy and sustainable housing strategies.
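
The count-based index can be sketched in a few lines. The example below flags the high-risk quartile of each of 14 synthetic indicators, sums the flags into a 0-14 allostatic load count, and derives the binary at-risk-on-at-least-seven variable. Indicator names, risk directions, and cutoffs are placeholders, not the survey's definitions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n = 822
indicators = [f"indicator_{i}" for i in range(14)]   # e.g. blood pressure, lipids, cortisol...
df = pd.DataFrame(rng.normal(size=(n, 14)), columns=indicators)

# High risk = top quartile of each indicator (direction assumed uniform here;
# in practice some indicators would use the bottom quartile instead).
at_risk = df.ge(df.quantile(0.75), axis=1).astype(int)

df["allostatic_load"] = at_risk.sum(axis=1)          # count-based index, 0-14
df["high_al"] = (df["allostatic_load"] >= 7).astype(int)
print(df[["allostatic_load", "high_al"]].describe())
```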


Epidemiology | 2005

Environmental Tobacco Smoke and Risk of Adult Leukemia

Khaled Kasim; Patrick Levallois; Belkacem Abdous; Pierre Auger; Kenneth C. Johnson

Background: The role of environmental tobacco smoke (ETS) in the causation of lung and breast cancer has been repeatedly evaluated over recent years. In contrast, its impact on the risk of adult leukemia has received little attention. Methods: We used lifetime residential and occupational ETS exposure histories from a population-based sample of 1068 incident, histologically confirmed adult leukemia cases and 5039 population controls aged 20 to 74 years to evaluate the relationship between ETS exposure and adult leukemia risk among nonsmokers in Canada. Duration of exposure and a smoker-years index were used as indices of ETS exposure. We restricted our analysis to the 266 case and 1326 control subjects who reported being lifetime nonsmokers and provided residential ETS exposure history for at least 75% of their lifetime. Results: No association was found for most leukemia subtypes, in particular acute myeloid leukemia. In contrast, the risk of chronic lymphocytic leukemia was clearly associated with ETS exposure, with an adjusted odds ratio of 2.3 (95% confidence interval = 1.2–4.5) for more than 83 smoker-years of residential exposure and 2.4 (1.3–4.3) for more than 72 smoker-years of occupational exposure. There was a dose–response relationship for chronic lymphocytic leukemia with both indices of exposure. In time-window exposure analyses, risk was not higher with more recent exposure. Conclusions: Regular long-term ETS exposure may be a risk factor for chronic lymphocytic leukemia.
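
For orientation, the sketch below computes an unadjusted odds ratio with a Wald 95% confidence interval from a hypothetical 2x2 exposure-by-case table; this is only the basic quantity behind the adjusted estimates reported above, and the counts are invented.

```python
import numpy as np

# rows: exposed / unexposed; columns: cases / controls (hypothetical counts)
a, b = 40, 170    # exposed cases, exposed controls
c, d = 60, 600    # unexposed cases, unexposed controls

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)          # Wald standard error
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```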


Applied Psychological Measurement | 2011

Accuracy of Person-Fit Statistics: A Monte Carlo Study of the Influence of Aberrance Rates.

Christina St-Onge; Pierre Valois; Belkacem Abdous; Stéphane Germain

Using a Monte Carlo experimental design, this research examined the relationship between answer patterns’ aberrance rates and the accuracy of person-fit statistics (PFS). It was observed that as the aberrance rate increased, the detection rates of PFS also increased until, in some situations, a peak was reached, after which detection rates decreased with further increases in aberrance rates. Furthermore, the results suggest that ECI2Z was somewhat more robust to high levels of aberrance than lz, HT, and U3 when cheating was simulated. The results of this study shed light on a limitation of PFS analysis.
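
One of the statistics compared, lz, has a simple closed form. The sketch below implements the standard standardized log-likelihood formulation under a two-parameter logistic IRT model and contrasts a model-consistent response pattern with a simulated cheating pattern; this is an assumption about the common textbook form, not the study's simulation code.

```python
import numpy as np

def lz_statistic(responses, theta, a, b):
    """Standardized log-likelihood person-fit statistic lz for one examinee.

    responses : 0/1 item scores; theta : ability; a, b : 2PL item parameters.
    """
    p = 1 / (1 + np.exp(-a * (theta - b)))           # 2PL response probabilities
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    expected = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    variance = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)
    return (l0 - expected) / np.sqrt(variance)        # large negative => aberrant

rng = np.random.default_rng(6)
a = rng.uniform(0.8, 2.0, 40)                         # item discriminations
b = rng.normal(0, 1, 40)                              # item difficulties
theta = 0.5
p = 1 / (1 + np.exp(-a * (theta - b)))
normal = rng.binomial(1, p)                           # model-consistent responses
cheating = normal.copy()
cheating[np.argsort(b)[-10:]] = 1                     # correct answers on the 10 hardest items
print("lz (normal):  ", round(lz_statistic(normal, theta, a, b), 2))
print("lz (cheating):", round(lz_statistic(cheating, theta, a, b), 2))
```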

Collaboration


Dive into Belkacem Abdous's collaborations.

Top Co-Authors


Diane Bélanger

Institut national de la recherche scientifique


Moishe Liberman

Montreal General Hospital
