4th EUROPEAN DOCTORAL COLLEGE
ON ENVIRONMENT AND HEALTH
6-8 June 2016, Rennes, France



LECTURES

“The exposome: a lighthouse for environmental research or a dangerous Lorelei?”
Rémy Slama (Inserm, Institute of Advanced Biosciences, Grenoble, France)

The Exposome concept encompasses all environmental exposures one undergoes from conception to death. This concept, which calls for a holistic view of exposures, is in line with the development of 'omics technologies to assess genetic polymorphisms, gene expression, metabolites and more, and adds another layer to the stack of 'omics layers spanning from the microbiome to the genome, methylome and down to the metabolome.
The promises of the Exposome include a thorough description of the exposures of human populations and of their associations with specific factors such as socio-demographic characteristics (in line with the notion of environmental justice) and, when it comes to studying associations between the Exposome and health, the end of publication bias and selective reporting of results.
Huge challenges face those tempted by the Exposome adventure, both in terms of exposure assessment (e.g., do we have tools to successfully increase the number of exposures assessed without simultaneously increasing exposure misclassification? Can the temporal component of the Exposome be efficiently characterized?) and statistical modelling (e.g., can the false discovery rate really be controlled? Is there enough power to identify synergy between exposures? Are there efficient approaches to map cross-omics relations?). Without immense efforts and creativity from the research community, and financial support equivalent to that provided to establish the sequence of the human genome, the cruise towards the Exposome is likely to end on the cliffs of the Lorelei.
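As a minimal, hedged illustration of the multiple-testing question raised above (not part of the lecture): the Python sketch below simulates an exposome-wide screening of 200 exposures against a continuous outcome and applies the Benjamini-Hochberg procedure to control the false discovery rate. All data, exposure indices and effect sizes are simulated placeholders.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 1000, 200                      # subjects, candidate exposures
X = rng.normal(size=(n, p))           # standardized exposure matrix
beta = np.zeros(p)
beta[:5] = 0.3                        # only 5 exposures are truly associated
y = X @ beta + rng.normal(size=n)     # continuous health outcome

# Univariate (ExWAS-style) screening: one p-value per exposure.
pvals = np.empty(p)
for j in range(p):
    r = np.corrcoef(X[:, j], y)[0, 1]
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    pvals[j] = 2 * stats.t.sf(abs(t), df=n - 2)

# Benjamini-Hochberg: reject the k smallest p-values with p_(k) <= (k/p) * q.
q = 0.05
order = np.argsort(pvals)
thresholds = q * np.arange(1, p + 1) / p
passed = pvals[order] <= thresholds
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
print(f"{k} exposures flagged at FDR {q}:", sorted(order[:k].tolist()))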



Integrating modeling and monitoring for better exposure assessment: Experiences from the German health-related Environmental Monitoring program
Andre Conrad (Federal Environment Agency, Department for Environmental Hygiene, Berlin, Germany)

People are exposed to various health-relevant substances through the environment. Prominent examples are plasticizers in food, heavy metals in drinking water, and benzene in indoor air. The two main approaches for describing and quantifying human exposures are human biomonitoring (HBM) and exposure modelling (EM).
The German health-related environmental monitoring program - consisting of the German Environmental Survey (GerES) and the German Environmental Specimen Bank (ESB) - is also a key source for the German Exposure factors database (RefXP). Current HBM results from GerES and ESB and updated exposure models based on RefXP can be combined for an enhanced analysis of environmental health issues.
The lecture demonstrates how integrating HBM and EM provides a more complete view of exposures and relevant exposure pathways, and yields information on the potential effect of policy measures aimed at improving environmental health.
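A minimal sketch of this kind of integration, assuming generic textbook formulas rather than the actual GerES/ESB/RefXP models: a forward exposure-model estimate (concentration times intake rate, summed over pathways and divided by body weight) is compared with an intake back-calculated from a urinary biomarker. All parameter values are illustrative placeholders.

# Forward model (EM): daily intake summed over pathways, dose = C * IR / BW.
body_weight_kg = 70.0
pathways = {
    # pathway: (concentration in the medium, daily intake rate of the medium)
    "food  (mg/kg * kg/day)": (0.020, 1.5),
    "water (mg/L  * L/day)":  (0.002, 2.0),
    "dust  (mg/g  * g/day)":  (0.010, 0.05),
}
modelled_dose = sum(c * ir for c, ir in pathways.values()) / body_weight_kg

# Backward estimate (HBM): urinary excretion converted to intake,
# dose = (C_urine * V_urine) / (F_UE * BW), with F_UE the urinary excretion
# fraction (assumed here, for illustration, to be 0.7).
c_urine_mg_per_L, v_urine_L, f_ue = 0.015, 1.6, 0.7
hbm_dose = (c_urine_mg_per_L * v_urine_L) / (f_ue * body_weight_kg)

print(f"Modelled intake : {modelled_dose:.4f} mg/kg bw/day")
print(f"HBM-based intake: {hbm_dose:.4f} mg/kg bw/day")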



“Spatiotemporal analyses tools for studying environmental and social health inequalities”
Séverine Deguen (UMR Inserm 1085-IRSET/E9, Rennes, France)

Several studies have documented that more deprived populations tend to live in areas characterized by higher levels of environmental pollution and may therefore face an increased risk of adverse health outcomes. Yet, time trends and geographic patterns of this disproportionate distribution of the environmental burden remain poorly assessed, especially in Europe.
Because of the considerable recent growth of large data sets routinely collected for public health, social policy or environmental purposes, the use of spatially referenced data in environmental epidemiological studies is gaining an important place in etiologic investigations. In this context, an overview of the value of spatiotemporal approaches for investigating environmental and social health inequalities will be given.



“Appropriate choice of biological matrices for exposure assessment in early life”
Claire Philippat (U 1209 Inserm, Grenoble, France)

Concentrations of chemicals (or their metabolites) in human biological samples (biomonitoring) provide an estimate of the dose that actually enters the body (internal dose) from all sources and routes of exposure. Biomarkers are extensively used to assess exposure to chemicals with multiple sources of exposure. The pros and cons of the various biological matrices that can be used to measure biomarkers will be reviewed. Exposure measurement error will also be discussed.
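One facet of exposure measurement error can be sketched under simple assumptions (classical error, a single spot sample, an assumed intraclass correlation of 0.4; this toy simulation is not the lecture's material): the exposure-outcome association estimated with an error-prone biomarker is attenuated roughly by the intraclass correlation coefficient.

import numpy as np

rng = np.random.default_rng(1)
n = 5000
icc = 0.4                                   # assumed between/total variance ratio
true_exposure = rng.normal(size=n)          # long-term average exposure level
# A single spot sample adds within-person noise consistent with the assumed ICC.
spot_sample = true_exposure + rng.normal(scale=np.sqrt((1 - icc) / icc), size=n)

beta_true = 0.5
outcome = beta_true * true_exposure + rng.normal(size=n)

slope_true = np.polyfit(true_exposure, outcome, 1)[0]
slope_spot = np.polyfit(spot_sample, outcome, 1)[0]
print("slope with true long-term exposure:", round(slope_true, 2))
print("slope with single spot biomarker  :", round(slope_spot, 2),
      "(expected about", round(beta_true * icc, 2), ")")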



“Physiologically Based Pharmacokinetic models (PBPK): an awesome tool for the exposure assessment in environmental epidemiology”
Claude Emond (Ecole de santé publique, Université de Montréal, Canada)

Exposure assessment plays a critical role in environmental epidemiology, and biomarkers are useful for monitoring or measuring internal exposure. However, biomarkers only reflect current or recent exposure, depending on the half-life of the compounds considered; they do not inform on the duration of exposure. In toxicology, physiologically based pharmacokinetic (PBPK) models can help provide information about the present or past exposure of a population, and more specifically during the sensitive windows of life.
During this lecture, we will review the main steps of PBPK modelling, including model structure and development, and how these elements work together to accurately determine exposure for environmental epidemiology risk assessments. We will then analyze several examples from the literature to demonstrate how PBPK models can support epidemiologists when conducting exposure assessments.
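A minimal, generic PBPK-style sketch (Python with scipy; a toy two-compartment model with placeholder physiological values, not a validated model for any specific compound) showing how continuous intake, blood-fat exchange and hepatic clearance are written as ordinary differential equations and solved over time.

import numpy as np
from scipy.integrate import solve_ivp

BW = 70.0                    # body weight, kg
V_blood, V_fat = 5.0, 15.0   # compartment volumes, L
Q_fat = 20.0                 # blood flow to fat, L/h
P_fat = 50.0                 # fat:blood partition coefficient
CL_hep = 2.0                 # hepatic clearance, L/h
dose_rate = 0.001 * BW       # continuous oral intake, mg/h

def pbpk(t, y):
    c_blood, c_fat = y
    # Flow-limited exchange between blood and fat; elimination from blood.
    dc_blood = (dose_rate + Q_fat * (c_fat / P_fat - c_blood) - CL_hep * c_blood) / V_blood
    dc_fat = Q_fat * (c_blood - c_fat / P_fat) / V_fat
    return [dc_blood, dc_fat]

t_days = 365
sol = solve_ivp(pbpk, [0, 24 * t_days], [0.0, 0.0])
c_blood_end, c_fat_end = sol.y[:, -1]
print(f"After {t_days} days: blood {c_blood_end:.3f} mg/L, fat {c_fat_end:.3f} mg/L")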


“Optimising bio-accessibility tests used in the exposure assessment of organic pollutants”
Chris Collins (University of Reading, United Kingdom)

Bio-accessibility studies have been widely used as a research tool to determine potential human exposure to ingested contaminants. More recently they have been applied in practice for soil-borne toxic elements.
This lecture will review the application of bio-accessibility tests across a range of organic pollutants and contaminated matrices. Important factors are: the physiological relevance of the test, the components in the gut media, the size fraction chosen for the test and whether it contains a sorptive sink. Bio-accessibility is also a function of the composition of the matrix (e.g. organic carbon content of soils) and the physicochemical characteristics of the pollutant under test. Despite the widespread use of these tests, a large number of formats are in use and there are very few validation studies with animal models. We propose a unified format for a bio-accessibility test for organic pollutants.
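A minimal sketch of how a bio-accessibility measurement feeds into an ingestion exposure estimate, assuming hypothetical values and a generic test format rather than the unified protocol proposed here: the bio-accessible fraction is the mass recovered in the simulated gut fluid divided by the total mass in the matrix, and it scales the ingested dose.

# Hypothetical soil-borne organic pollutant; all numbers are placeholders.
total_conc_soil = 12.0      # mg pollutant per kg soil
extracted_conc = 3.0        # mg/kg recovered in the simulated gut fluid
bioaccessible_fraction = extracted_conc / total_conc_soil   # = 0.25

soil_ingestion = 50e-6      # kg soil ingested per day (child, hand-to-mouth)
body_weight = 15.0          # kg

dose_total = total_conc_soil * soil_ingestion / body_weight
dose_bioaccessible = dose_total * bioaccessible_fraction
print(f"Bio-accessible fraction: {bioaccessible_fraction:.0%}")
print(f"Dose, total vs. bio-accessible: {dose_total:.2e} vs. {dose_bioaccessible:.2e} mg/kg bw/day")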



“Identification of mixtures from combined exposures”
Amélie Crépet (ANSES-French Agency for Food, Environmental and Occupational Health Safety, Maisons-Alfort, France)

Because of the large number of chemicals found in the environment, individuals are exposed daily to complex mixtures of chemicals which can interact and cause adverse health effects. Chemical mixtures may therefore pose a risk to humans, and this risk is difficult to characterize. One reason lies in the multitude of possible combinations of chemicals, for which testing combined toxicological effects is unrealistic. For this reason, risk assessment is usually performed for chemicals belonging to the same chemical family and sharing the same mode of action. However, such mixtures may not reflect the reality of exposures.
After an introduction to the new challenges related to assessing exposure to mixtures, recently developed methods to define mixtures from combined exposures will be explained. Probabilistic methods to calculate co-exposures by combining food surveys with concentration data will be detailed first. Then three methods making it possible to select mixtures and to cluster individuals into sub-groups with similar profiles will be presented. The first approach is based on the decomposition of the co-exposure matrix into two matrices to extract the main mixtures relevant for further study. A second approach includes a clustering of individuals with similar food patterns to determine the food vectors of the mixtures. The last method combines exposure levels with data on the toxicity of the substances to characterize mixtures. The methods will be illustrated with examples on pesticide residues in food and on substances analyzed in the total diet study 2 (TDS2).
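One way to implement the matrix-decomposition step described above is non-negative matrix factorization (NMF); the sketch below (Python with scikit-learn, on simulated non-negative co-exposure data, not the TDS2 data) splits an individuals-by-substances co-exposure matrix into individual loadings and the substance profiles of the main mixtures. This is an illustrative assumption about the decomposition, not the exact algorithm used by the speaker.

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
n_individuals, n_substances, n_mixtures = 500, 30, 4

# Simulate non-negative co-exposures driven by a few latent mixtures.
W_true = rng.gamma(1.0, 1.0, size=(n_individuals, n_mixtures))
H_true = rng.gamma(0.5, 1.0, size=(n_mixtures, n_substances))
E = W_true @ H_true + rng.gamma(0.1, 0.1, size=(n_individuals, n_substances))

model = NMF(n_components=n_mixtures, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(E)        # individual contributions to each mixture
H = model.components_             # substance profile of each mixture

for k, profile in enumerate(H):
    top = np.argsort(profile)[::-1][:5]
    print(f"Mixture {k}: dominant substances (indices) {top.tolist()}")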



“Dealing with multiple exposures: grouping chemicals based on mechanistic data”
Nathalie Bonvallot (UMR Inserm 1085-IRSET/E9, Rennes, France)

The identification of relevant mixtures of contaminants is first based on an analysis of exposure profiles in a population, combining contamination data with human behavior data. But for risk assessment purposes, the identification of these mixtures is not sufficient. One of the major current challenges is how to select the substances of interest among the identified exposure profiles.
The consideration of a common mechanism of action based on the additivity hypothesis is common for some chemical families such as dioxins, furans, or polycyclic aromatic hydrocarbons. But generalizing this method is not easy when toxicological data are incomplete and when environmental contamination is studied as a whole. Different agencies working in the field of public health and food safety (EFSA, ANSES, RIVM) have initiated work on grouping pesticides used in Europe. The general methodology for classifying pesticides into cumulative assessment groups is based on identifying compounds that exhibit similar toxicological properties in a specific organ or system at a macroscopic scale. More recent work presents a framework for identifying pollutants to be included in a cumulative risk assessment approach, taking indoor environments as a case study. This approach, based on the definition of adverse outcome pathways, makes it possible to link adverse effects observed at the organism level with cell damage induced by molecular key events. Examples of neurotoxic chemicals will be shown.
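A generic sketch of the dose-addition (additivity) hypothesis mentioned above, with invented substances and values rather than a real cumulative assessment group: exposures are scaled by relative potency factors, summed, and compared with the reference dose of the index compound.

# Exposure (mg/kg bw/day) and relative potency factor (RPF) vs. an index compound.
# All substances and numbers below are hypothetical.
group = {
    "substance_A": {"exposure": 0.004, "rpf": 1.0},   # index compound
    "substance_B": {"exposure": 0.010, "rpf": 0.1},
    "substance_C": {"exposure": 0.001, "rpf": 3.0},
}
reference_dose_index = 0.01   # mg/kg bw/day, for the index compound

cumulative_exposure = sum(s["exposure"] * s["rpf"] for s in group.values())
hazard_index = cumulative_exposure / reference_dose_index
print(f"Index-compound-equivalent exposure: {cumulative_exposure:.4f} mg/kg bw/day")
print(f"Hazard index: {hazard_index:.2f} ({'above' if hazard_index > 1 else 'below'} 1)")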



“Methodological overview of statistical models to derive OMICS-based biomarkers”
Mark Chadeau-Hyam (Imperial College, London, United Kingdom)

Recent advances in high-throughput 'omics' technologies have given rise to a wealth of novel high-dimensional data (ranging from thousands to hundreds of thousands of variables), each demonstrating complex correlation structures. These data, comprising genetic, epigenetic and transcriptomic profiles, offer great potential for measuring the abundance of biologically relevant molecules over the whole biological system. The analysis of such complex data raises strong statistical challenges relating to the fact that the number of predictors exceeds the number of observations. Statistical models to explore such complex data sets are now established and include univariate approaches coupled with multiple testing correction strategies, dimensionality reduction techniques, and variable selection approaches. We will explore the vast range of methods from these three complementary fields and describe the main features, advantages and limitations of the most popular approaches in computational epidemiology.
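As a hedged illustration of one of the three families listed above (variable selection): the sketch below applies LASSO-penalized regression with a cross-validated penalty to a simulated p >> n omics matrix; feature indices and effect sizes are invented for the example and do not come from the lecture.

import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(3)
n, p = 150, 2000                        # subjects, omics features (p >> n)
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:10] = 0.8                         # only 10 truly informative features
y = X @ beta + rng.normal(size=n)

# Penalty strength chosen by cross-validation; most coefficients shrink to zero.
model = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"Selected {selected.size} features; truly causal ones recovered:",
      sorted(set(selected.tolist()) & set(range(10))))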





 

Updated on 15/06/2017