Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ullrika Sahlin is active.

Publication


Featured research published by Ullrika Sahlin.


Acta Paediatrica | 2013

The quality of the outdoor environment influences children's health - a cross-sectional study of preschools

Margareta Söderström; Cecilia Boldemann; Ullrika Sahlin; Fredrika Mårtensson; Anders Raustorp; Margareta Blennow

To test how the quality of the outdoor environment of child day care centres (DCCs) influences children's health.


Molecular Informatics | 2014

Applicability domain dependent predictive uncertainty in QSAR regressions

Ullrika Sahlin; Nina Jeliazkova; Tomas Öberg

Predictive models used in decision making, such as QSARs in chemical regulation or drug discovery, call for evaluated approaches to quantitatively assess associated uncertainty in predictions. Uncertainty in less reliable predictions may be captured by locally varying predictive errors. In the current study, model-based bootstrapping was combined with analogy reasoning to generate predictive distributions varying in magnitude over a model's domain of applicability. A resampling experiment based on PLS regressions on four QSAR data sets demonstrated that predictive errors assessed by k nearest neighbour or weighted PRedicted Error Sum of Squares (PRESS) on samples of external test data or by internal cross-validation improved the performance of the uncertainty assessment. Analogy using similarity defined by Euclidean distances, or differences in standard deviation in perturbed predictions, resulted in better performances than similarity defined by distance to, or density of, the training data. Locally assessed predictive distributions had on average at least as good coverage as a Gaussian distribution with variance assessed from the PRESS. R code is provided that evaluates performances of the suggested algorithms to assess predictive error based on log likelihood scores and empirical coverage graphs, and which applies these to derive confidence intervals or samples from the predictive distributions of query compounds.
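The paper's evaluation code is in R; the snippet below is a hedged, language-swapped sketch of the k-nearest-neighbour idea only, not the published algorithm: the local predictive error of a query compound is taken as the root-mean-square of the cross-validated residuals of its k nearest training neighbours and used to scale a Gaussian predictive interval. The PLS model, neighbour count and interval level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import NearestNeighbors

def knn_predictive_interval(X_train, y_train, X_query, k=5, alpha=0.05, n_components=2):
    """Scale a Gaussian predictive interval by the RMS of the cross-validated
    residuals of each query's k nearest training neighbours (illustrative sketch)."""
    model = PLSRegression(n_components=n_components).fit(X_train, y_train)

    # Cross-validated predictions and residuals on the training set
    y_cv = cross_val_predict(PLSRegression(n_components=n_components),
                             X_train, y_train, cv=5).ravel()
    residuals = y_train - y_cv

    # k nearest training neighbours of each query compound in descriptor space
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(X_query)

    # Locally varying predictive error
    local_sd = np.sqrt((residuals[idx] ** 2).mean(axis=1))

    y_hat = model.predict(X_query).ravel()
    z = norm.ppf(1 - alpha / 2)
    return y_hat, y_hat - z * local_sd, y_hat + z * local_sd
```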


Molecular Informatics | 2011

A risk assessment perspective of current practice in characterizing uncertainties in QSAR regression predictions

Ullrika Sahlin; Monika Filipsson; Tomas Öberg

The European REACH legislation accepts the use of non‐testing methods, such as QSARs, to inform chemical risk assessment. In this paper, we aim to initiate a discussion on the characterization of predictive uncertainty from QSAR regressions. For the purpose of decision making, we discuss applications from the perspective of applying QSARs to support probabilistic risk assessment. Predictive uncertainty is characterized by a wide variety of methods, ranging from pure expert judgement based on variability in experimental data, through data‐driven statistical inference, to the use of probabilistic QSAR models. Model uncertainty is dealt with by assessing confidence in predictions and by building consensus models. The characterization of predictive uncertainty would benefit from a probabilistic formulation of QSAR models (e.g. generalized linear models, conditional density estimators or Bayesian models). This would allow predictive uncertainty to be quantified as probability distributions, such as Bayesian predictive posteriors, and likelihood‐based methods to address model uncertainty. QSAR regression models with point estimates as output may be turned into a probabilistic framework without any loss of validity from a chemical point of view. A QSAR model for use in probabilistic risk assessment needs to be validated for its ability to make reliable predictions and to quantify associated uncertainty.
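As a hedged illustration of the probabilistic reformulation discussed above (not code from the paper), a point-estimate QSAR regression can be replaced by a Bayesian linear model whose output for each query compound is a predictive distribution rather than a single value; the descriptor matrix below is synthetic and purely for demonstration.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

# Synthetic descriptor matrix (rows = compounds, columns = descriptors)
# and measured log-activities; illustrative only.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 4))
y_train = X_train @ np.array([0.8, -0.3, 0.1, 0.5]) + rng.normal(scale=0.2, size=50)

# A Bayesian linear QSAR: the output is a predictive distribution,
# not just a point estimate.
model = BayesianRidge().fit(X_train, y_train)

X_query = rng.normal(size=(3, 4))
mean, std = model.predict(X_query, return_std=True)
for m, s in zip(mean, std):
    print(f"predicted activity {m:.2f} +/- {1.96 * s:.2f} (95% predictive interval)")
```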


Journal of Chemical Information and Modeling | 2012

PLS-Optimal: A stepwise D-Optimal design based on latent variables

Stefan Brandmaier; Ullrika Sahlin; Igor V. Tetko; Tomas Öberg

Several applications, such as risk assessment within REACH or drug discovery, require reliable methods for the design of experiments and efficient testing strategies. Keeping the number of experiments as low as possible is important from both a financial and an ethical point of view, as exhaustive testing of compounds requires significant financial resources and animal lives. With a large initial set of compounds, experimental design techniques can be used to select a representative subset for testing. Once measured, these compounds can be used to develop quantitative structure-activity relationship models to predict properties of the remaining compounds. This reduces the required resources and time. D-Optimal design is frequently used to select an optimal set of compounds by analyzing data variance. We developed a new sequential approach to apply a D-Optimal design to latent variables derived from a partial least squares (PLS) model instead of principal components. The stepwise procedure selects a new set of molecules to be measured after each previous measurement cycle. We show that application of the D-Optimal selection generates models with a significantly improved performance on four different data sets with end points relevant for REACH. Compared to those derived from principal components, PLS models derived from the selection on latent variables had a lower root-mean-square error and a higher Q2 and R2. This improvement is statistically significant, especially for the small number of compounds selected.
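A minimal sketch of the selection idea, under the assumption of a greedy stepwise search (the published PLS-Optimal procedure may differ in detail): candidate compounds are projected onto the latent variables of a PLS model fitted to the compounds measured so far, and at each step the candidate that most increases the determinant of the latent-space information matrix (the D-optimality criterion) is added. Function and parameter names are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def d_optimal_on_latent_variables(X_measured, y_measured, X_candidates,
                                  n_select=5, n_components=2):
    """Greedy D-optimal selection in the latent-variable space of a PLS model
    (illustrative sketch of the stepwise idea, not the published algorithm)."""
    pls = PLSRegression(n_components=n_components).fit(X_measured, y_measured)
    T_design = pls.transform(X_measured)        # scores of measured compounds
    T_candidates = pls.transform(X_candidates)  # scores of the candidates

    selected = []
    for _ in range(n_select):
        best_i, best_det = None, -np.inf
        for i in range(len(T_candidates)):
            if i in selected:
                continue
            T_trial = np.vstack([T_design, T_candidates[i]])
            det = np.linalg.det(T_trial.T @ T_trial)  # D-optimality criterion
            if det > best_det:
                best_i, best_det = i, det
        selected.append(best_i)
        T_design = np.vstack([T_design, T_candidates[best_i]])
    return selected
```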


Ecology and Evolution | 2017

Pollinator population size and pollination ecosystem service responses to enhancing floral and nesting resources

Johanna Häussler; Ullrika Sahlin; Charlotte Baey; Henrik G. Smith; Yann Clough

Modeling pollination ecosystem services requires a spatially explicit, process-based approach because they depend on both the behavioral responses of pollinators to the amount and spatial arrangement of habitat and on the within- and between-season dynamics of pollinator populations in response to land use. We describe a novel pollinator model predicting flower visitation rates by wild central-place foragers (e.g., nesting bees) in spatially explicit landscapes. The model goes beyond existing approaches by: (1) integrating preferential use of more rewarding floral and nesting resources; (2) considering population growth over time; (3) allowing different dispersal distances for workers and reproductives; (4) providing visitation rates for use in crop pollination models. We use the model to estimate the effect of establishing grassy field margins offering nesting resources and a low quantity of flower resources, and/or late-flowering flower strips offering no nesting resources but abundant flowers, on bumble bee populations and visitation rates to flowers in landscapes that differ in amounts of linear seminatural habitats and early mass-flowering crops. Flower strips were three times more effective in increasing pollinator populations and visitation rates than field margins, and this effect increased over time. Late-blooming flower strips increased early-season visitation rates, but decreased visitation rates in other late-season flowers. Increases in population size over time in response to flower strips and amounts of linear seminatural habitats reduced this apparent competition for pollinators. Our spatially explicit, process-based model generates emergent patterns reflecting empirical observations, such that adding flower resources may have contrasting short- and long-term effects due to apparent competition for pollinators and pollinator population size increase. It allows exploring these effects and comparing effect sizes in ways not possible with other existing models. Future applications include species comparisons, analysis of the sensitivity of predictions to life-history traits, as well as large-scale management intervention and policy assessment.
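The central-place-foraging component can be illustrated with a simple distance-decay sketch (an assumption for illustration only; the published model additionally includes population dynamics, separate worker and reproductive dispersal, and within-season detail): visits from a nest are distributed over flower patches in proportion to floral reward weighted by exponential decay with distance.

```python
import numpy as np

def visitation_rates(nest_xy, patch_xy, floral_reward, n_foragers=100.0, decay=500.0):
    """Distribute visits from one nest over flower patches in proportion to
    reward * exp(-distance / decay). Distance-decay sketch only."""
    distances = np.linalg.norm(patch_xy - nest_xy, axis=1)   # metres
    attractiveness = floral_reward * np.exp(-distances / decay)
    return n_foragers * attractiveness / attractiveness.sum()

# Example: a high-reward flower strip (middle patch) draws visits away from
# the other patches, illustrating apparent competition for pollinators.
patches = np.array([[100.0, 0.0], [300.0, 0.0], [800.0, 0.0]])
rewards = np.array([1.0, 5.0, 1.0])
print(visitation_rates(np.array([0.0, 0.0]), patches, rewards))
```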


Journal of Computer-aided Molecular Design | 2015

Assessment of uncertainty in chemical models by Bayesian probabilities: Why, when, how?

Ullrika Sahlin

A prediction of a chemical property or activity is subject to uncertainty. Which type of uncertainties to consider, whether to account for them in a differentiated manner and with which methods, depends on the practical context. In chemical modelling, general guidance of the assessment of uncertainty is hindered by the high variety in underlying modelling algorithms, high-dimensionality problems, the acknowledgement of both qualitative and quantitative dimensions of uncertainty, and the fact that statistics offers alternative principles for uncertainty quantification. Here, a view of the assessment of uncertainty in predictions is presented with the aim to overcome these issues. The assessment sets out to quantify uncertainty representing error in predictions and is based on probability modelling of errors where uncertainty is measured by Bayesian probabilities. Even though well motivated, the choice to use Bayesian probabilities is a challenge to statistics and chemical modelling. Fully Bayesian modelling, Bayesian meta-modelling and bootstrapping are discussed as possible approaches. Deciding how to assess uncertainty is an active choice, and should not be constrained by traditions or lack of validated and reliable ways of doing it.
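Of the approaches mentioned, bootstrapping is the simplest to sketch. The illustration below assumes a generic regression model and is not tied to any particular chemical model: refitting on resampled training data yields a spread of predictions that can be read as predictive uncertainty.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def bootstrap_predictions(X_train, y_train, X_query, n_boot=500, seed=0):
    """Sample predictive uncertainty by refitting a model on bootstrap
    resamples of the training data (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    preds = np.empty((n_boot, len(X_query)))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample with replacement
        preds[b] = LinearRegression().fit(X_train[idx], y_train[idx]).predict(X_query)
    return preds  # column j holds the bootstrap spread for query compound j

# A 95% interval per query compound could then be read off with
# np.percentile(preds, [2.5, 97.5], axis=0).
```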


Journal of Risk Research | 2017

A note on EFSA’s ongoing efforts to increase transparency of uncertainty in scientific opinions

Ullrika Sahlin; Matthias C. M. Troffaes

This is a comment on Lofstedt and Bouder’s paper, which explores the prospects of evidence based uncertainty analysis in Europe, focusing on the ongoing development on uncertainty analysis at the European Food Safety Authority (EFSA). We very much welcome a discussion on the need to develop better treatment and communication of uncertainty in risk analysis, as we believe that such discussion is long overdue. Lofstedt and Bouder raise many relevant points, in particular the call for evidence based uncertainty analysis. However, there is need to distinguish different types of communication in the discussion and facilitate – not diminish – the description and communication of uncertainty between risk assessors and decision-makers. We find that EFSA has taken steps toward a novel approach to guide their scientific experts and risk assessors in uncertainty analysis based on a modern and scientific view on uncertainty.


PLOS ONE | 2018

Implications of accounting for management intensity on carbon and nitrogen balances of European grasslands

Jan Hendrik Blanke; Niklas Boke-Olén; Stefan Olin; Ullrika Sahlin; Mats Lindeskog; Veiko Lehsten

European managed grasslands are amongst the most productive in the world. Besides temperature and the amount and timing of precipitation, grass production is also highly controlled by applications of nitrogen fertilizers and land management to sustain a high productivity. Since management characteristics of pastures vary greatly across Europe, land-use intensity and their projections are critical input variables in earth system modeling when examining and predicting the effects of increasingly intensified agricultural and livestock systems on the environment. In this study, we aim to improve the representation of pastures in the dynamic global vegetation model LPJ-GUESS. This is done by incorporating daily carbon allocation for grasses as a foundation to further implement daily land management routines and land-use intensity data into the model to discriminate between intensively and extensively used regions. We further compare our new simulations with leaf area index observations, reported regional grassland productivity, and simulations conducted with the vegetation model ORCHIDEE-GM. Additionally, we analyze the implications of including pasture fertilization and daily management compared to the standard version of LPJ-GUESS. Our results demonstrate that grassland productivity cannot be adequately captured without including land-use intensity data in form of nitrogen applications. Using this type of information improved spatial patterns of grassland productivity significantly compared to standard LPJ-GUESS. In general, simulations for net primary productivity, net ecosystem carbon balance and nitrogen leaching were considerably increased in the extended version. Finally, the adapted version of LPJ-GUESS, driven with projections of climate and land-use intensity, simulated an increase in potential grassland productivity until 2050 for several agro-climatic regions, most notably for the Mediterranean North, the Mediterranean South, the Atlantic Central and the Atlantic South.


International Journal of Life Cycle Assessment | 2018

The potential to use QSAR to populate ecotoxicity characterisation factors for simplified LCIA and chemical prioritisation

Hanna Holmquist; Jenny Lexén; Magnus Rahmberg; Ullrika Sahlin; Julia Grönholdt Palm; Tomas Rydberg

Purpose: Today's chemical society uses and emits an enormous number of different, potentially ecotoxic, chemicals to the environment. The vast majority of substances do not have characterisation factors describing their ecotoxicity potential. A first-stage, high-throughput screening tool is needed to prioritise which substances need further measures.
Methods: USEtox characterisation factors were calculated in this work based on data generated by quantitative structure-activity relationship (QSAR) models to expand substance coverage where characterisation factors were missing. Existing QSAR models for physico-chemical data and ecotoxicity were used, and to further fill data gaps, an algae QSAR model was developed. The existing USEtox characterisation factors were used as reference to evaluate the impact from the use of QSARs to generate input data to USEtox, with focus on ecotoxicity data. An inventory of chemicals that make up the Swedish societal stock of plastic additives, and their associated predicted emissions, was used as a case study to rank chemicals according to their ecotoxicity potential.
Results and discussion: For the 210 chemicals in the inventory, only 41 had characterisation factors in the USEtox database. With the use of QSAR-generated substance data, an additional 89 characterisation factors could be calculated, substantially improving substance coverage in the ranking. The choice of QSAR model was shown to be important for the reliability of the results, but even with the best-correlated model results, the discrepancies between characterisation factors based on estimated data and experimental data were very large.
Conclusions: The use of QSAR-estimated data as a basis for calculating characterisation factors, and the further use of those factors for ranking based on ecotoxicity potential, was assessed as a feasible way to gather substance data for large datasets. However, further research and development of guidance on how to make use of estimated data is needed to improve the accuracy of the results.
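To make the characterisation-factor step concrete, the sketch below derives an ecotoxicity effect factor from hypothetical QSAR-predicted EC50 values, using the common USEtox convention that the effect factor is 0.5 divided by the geometric mean of chronic EC50s (HC50). This illustrates the principle only; the paper's workflow, and a full characterisation factor, additionally involve fate and exposure factors.

```python
import numpy as np

def ecotox_effect_factor(ec50_mg_per_L):
    """Effect factor (PAF*m3/kg) from chronic EC50 values across species,
    following the usual USEtox convention EF = 0.5 / HC50, where HC50 is the
    geometric mean of the EC50s. Illustrative sketch only."""
    hc50_mg_per_L = np.exp(np.mean(np.log(ec50_mg_per_L)))  # geometric mean
    return 0.5 / (hc50_mg_per_L * 1e-3)                     # mg/L -> kg/m3

# Hypothetical QSAR-predicted chronic EC50s for algae, daphnids and fish (mg/L)
print(ecotox_effect_factor(np.array([0.8, 2.5, 12.0])))
```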


Risk, Reliability and Safety: Innovating Theory and Practice - Proceedings of the 26th European Safety and Reliability Conference, ESREL 2016 | 2017

Small data and conflicting information

Ullrika Sahlin

Things are seldom ideal. The quality of the information underlying risk assessment and decision support is no exception. Quantitative measures of uncertainty are extremely useful, but a problem with them is that qualitative aspects of uncertainty, e.g. those related to weaknesses in background knowledge or deep uncertainty (Cox 2012), are difficult to address.

Collaboration


Dive into Ullrika Sahlin's collaboration.

Top Co-Authors

Ester Papa

University of Insubria

Laura Golsteijn

Radboud University Nijmegen
