Evolutionary Medicine
Miriam Nývltová Fišáková
Department of Physiology

The Modern Nutrition Transition

We see, then, that the advent of agriculture should not be equated with dietary security or abundance; in fact, it was often associated with the opposite. Agricultural populations were often more prone to shortfalls in both energy and specific micronutrients than were their foraging ancestors. The nutritional abundance that is now the hallmark of many modern agricultural societies, and which many of us take for granted, is in fact a far more recent, and in many regions ongoing, development. It is the product of a set of changes in diet and lifestyle that have been described as the nutrition transition (see Popkin 2001). In contrast, food security remains a critical issue for many lower-income countries with rapidly growing populations.

Many factors contribute to the nutrition transition. One relates to the cheaper production of refined carbohydrates and fats, in particular fats of plant origin. Government subsidies for certain crops such as corn, introduced for political reasons, further incentivize farmers and food producers. Greater trans-national trade, the development of intensive industrial farming practices that reduce the cost of crop production, and the burgeoning market of populations moving from rural to urban environments all make a contribution. Economic considerations also matter: processing and ultra-processing increase both the value (and thus the retail price) of foods and their shelf life. The additional profits lead food companies to focus increasingly on such production rather than on the provision of minimally processed, usually less energy-dense and healthier, foods. Advertising and communication also play a part in making these products desirable: for evidence we need only look to the success of fast-food chains and of the marketing of sweetened beverages. But we must also recognize that these trends occurred against the backdrop of human societies that were willing to make the transition. It is unfortunate, for example, that the traditional scarcity of fats for cooking in poor communities has encouraged overconsumption of these items as their cost has declined and their availability increased. In many such populations, as income rises so does the intake of additional high-fat foods and sweetened beverages.

China provides a good example of this transition. During the 1970s, in the period after the Cultural Revolution, food insecurity was a major problem. There was no television, limited bus and other mass transportation, and little food trade with other countries. Very little processed food existed, and most rural and urban occupations were very labor-intensive: oxen were used for plowing, and factories still required huge amounts of human labor to move stock and equipment about. However, work and life in China have changed dramatically in recent decades. Tractors are now available on many farms and fork-lift trucks in factories; the internet has arrived in offices along with printers, fax machines, and modern telephone systems. By 2000, soft drinks and processed foods were consumed everywhere and nearly 90% of all homes had a television. Hong Kong- and Western-based advertising have been on the increase and may be received on many televisions as well as seen on billboards and in magazines. Use of the bicycle has declined while use of public transport and cars has increased.
In the course of one generation, the lives of millions of Chinese people were transformed from a subsistence agriculture-based economy to a modern, industrialized one. Data on the incidence of obesity in Chinese children demonstrate the consequences of this. The overall prevalence of overweight and obesity in childhood increased from 2.2% in 1981–5 to 20.6% in 2006–10 (see Yu et al. 2012). The increase largely occurred in urban areas, in which children were nearly twice as likely to be obese as their rural counterparts. Similar changes are also seen in other low- to middle-income countries, for example in Oceania, Latin America, elsewhere in Asia, and parts of the Middle East (see Ng et al. 2014). Refugees or migrants who were born in conditions of poor nutrition but who then moved to nutritionally abundant environments are likewise at greater risk of diabetes, obesity, and cardiovascular disease.

Well Fed but Poorly Nourished

The foraging populations we describe were generally considered to be free from chronic diseases like diabetes and obesity until the adoption of more Westernized dietary and lifestyle practices in recent generations. In fact, the relatively high meat consumption of foragers may cause some readers to pause for thought. The amount of animal products in the diet of foragers appears to far exceed the intake in most Western industrialized nations. In the USA, where typical per-capita food consumption is of the order of 2700 kcal/day, 15% of energy is from meat, 10% from dairy, and the remaining 75% from flour, sweeteners, oils, fruits, vegetables, and other items (a short calculation below turns these percentages into absolute figures). Americans appear to consume far fewer calories from meat than most foragers. If excessive meat consumption is indeed bad for our health, how then do we explain the rise of chronic metabolic disease in societies like the USA and the foragers' comparatively disease-free existence?

One important point is that not all meat is alike. Wild meat of the sort that foragers consume is lean (protein dense) and typically provides about half as much of its energy from fat as does meat from grain-fed domestic cattle. Through time, human herding cultures have selectively bred the animals that produce the fattiest, tastiest meat. Then, in the post-war period, as meat has been marketed to a consumer society, the trend towards taste and textural qualities has rapidly grown. This has maximized factors that improve flavor, such as marbling, which involves the deposition of fat droplets (triglycerides) within and between the muscle cells. In addition, industrial herding practices in places like the USA often involve feeding animals grain, such as corn, which is grown in abundance with generous federal subsidies, rather than the grasses that are their natural food resource. Compared with grass-fed beef, grain-fed beef is higher in fat, has more saturated and monounsaturated fats, and is also very low in beneficial polyunsaturated fats. As a result of these changes, what many of us call meat today is very different from the meat that has probably been an important component of the hominin diet for several million years.

These changes have consequences for our health. Not only are we eating more calories as a result of the greater energy density of domesticated meats, but we now consume higher levels of the saturated fatty acids that elevate cholesterol, which is implicated in cardiovascular disease.
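The US intake figures quoted above can be turned into absolute energy amounts with simple arithmetic, as in the Python sketch below. The forager values in the sketch are hypothetical placeholders chosen for illustration; they are not figures from the text.

# Energy from each food source, using the US figures quoted in the text.
US_INTAKE_KCAL = 2700                                      # per-capita kcal/day
US_SHARES = {"meat": 0.15, "dairy": 0.10, "other": 0.75}   # shares of energy

for source, share in US_SHARES.items():
    print(f"US energy from {source}: {share * US_INTAKE_KCAL:.0f} kcal/day")
# meat: 405 kcal/day, dairy: 270 kcal/day, other: 2025 kcal/day

# Hypothetical forager comparison (assumed values, not from the text): if
# animal foods supplied half of a 2500 kcal/day intake, that would be
# about 1250 kcal/day from meat, roughly three times the US figure.
FORAGER_INTAKE_KCAL = 2500
FORAGER_MEAT_SHARE = 0.5
print(f"Forager energy from meat (assumed): "
      f"{FORAGER_MEAT_SHARE * FORAGER_INTAKE_KCAL:.0f} kcal/day")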
In addition to the well-known deleterious effects of an excessive intake of saturated fatty acids, recent work is revealing in greater detail the importance of specific fatty acids, and especially of the ratios between fatty acids, as key to conditions such as atherosclerosis. The omega-3 fatty acids are particularly important. These are not produced within the body to any great extent and thus are essential. Omega-3 fatty acids have been shown to have a wide range of effects, such as reducing clotting and inflammation. Omega-6 fatty acids, which are also essential, can tip the balance in the other direction, encouraging inflammation. It appears that the effects of these fatty acids on health depend critically on their balance within the body, and that inappropriately high omega-6/omega-3 ratios in the diet can be harmful. It is notable, then, that grain-fed beef is significantly lower in omega-3 fatty acids than grass-fed beef (see Daley et al. 2010). The ratio of omega-6 to omega-3 fatty acids has increased substantially in human breast milk over the past 20 years, demonstrating the subtle ways in which these changing food behaviors can affect our biology.

The public health push to reduce consumption of meat and meat products in many countries has encouraged consumption of vegetable oils. Vegetable oils in general tend to be higher in monounsaturated and polyunsaturated fats, which have more favorable effects on risk factors such as cholesterol profiles than the saturated fats present in meat or poultry. But some commonly used vegetable oils have fatty acid profiles and other properties that are also not beneficial. For instance, soybean and corn oils, which are mass produced and widely used in processed foods, have very high omega-6/omega-3 ratios. Palm oil, the most widely used vegetable oil, and coconut oil both have very high levels of saturated fat.

Cattle are foraging animals that have been bred to keep their noses down, seeking out the most nutritious grasses in the field. Almost all beef cattle in Australia start their lives in paddocks at their mother's side as part of a larger herd. When weaned, they join a new herd of younger animals. Some remain in the paddocks eating pasture for the rest of their lives, and some are sent to feedlots, where they have a constant supply of fresh water and nutrient-rich, grain-based feed.

Grain-fed beef cattle spend a minimum of 70 days, and often 100 days or more, eating a mix of grain and plant-based feed prior to processing. Animals fed on grain for 300 days or more are destined for the top end of the market. They can be housed in purpose-built feedlots, where they can rest and sleep on beds of straw with shelter from the sun and rain. They can also be grain-fed in open fields where they are free to roam. Australia has some of the strictest animal welfare and environmental measures in place with regard to feedlots: effluent ponds treat the manure, and the bedding is used as compost. The meat from grain-fed beef cattle is considered by many to have better consistency of quality, to be more tender, and to have better marbling. Many customers prefer grain-fed beef, as the marbled fat has a clean white appearance and the meat has a juicy mouthfeel with a lovely, nutty flavour from the grain.

Some diners prefer the flavour the pasture gives to grass-fed beef. Grass-fed beef accounts for roughly two thirds of production in Australia.
Natural compounds in the grass, such as beta-carotenes, add to the flavour of the meat. They can also add a yellow tinge to the fat, something some diners like and others do not. Grass-fed beef comes from cattle that free-range for their entire lives. They can have supplementary feeding of hay, lucerne, or silage, but for the most part they live on grass. Many consider this a more natural way of raising an animal and better for its well-being. While the beef from grass-fed cattle can have more nuanced flavours, grass-fed beef is more likely to be influenced by weather and climate. Good rainfall and a mild climate can mean better pasture and better beef, while drier times can affect the quality of the steak on the plate.

Our diets have changed in other ways that also influence our health and risk of metabolic disease. Modern foragers acquire most of their carbohydrates from fruits and starchy tubers. Western societies increasingly rely on refined sugars and sweeteners, which differ from naturally occurring sugars and starches in that the body metabolizes them more rapidly, giving them a high glycemic index with consequent effects on homeostatic processes such as insulin release (a worked example of how the glycemic index is calculated appears at the end of this subsection). The process of refining involves extracting the sugars from the surrounding food matrix, which is often the source of important micronutrients and of fiber, which itself reduces the glycemic index.

Even as our diets increasingly focus on refined, energy-dense foods, our level of physical activity has decreased in recent decades in response to several important global economic and social trends. As nomadic foragers, our ancestors spent much of their time on the move, and our modern life increasingly deviates from this in fundamental ways. Physical activity can be broken down into that engaged in at work, during leisure time, and in locomotion, and all have seen dramatic changes in the past few generations. The percentage of individuals who engage in physical activity at work has greatly declined with the rising importance of the service sector and the mechanization of manufacturing processes. Leisure time is also increasingly sedentary, focused on activities like watching television or playing video games rather than physically active pursuits. With the advent and wide distribution of the internal combustion engine, we now rely to a great degree upon the hydrocarbons in fossil fuels, rather than our own dietary energy, to move from place to place.

These trends, the so-called nutrition transition, are sweeping the globe. Although there clearly is no single Paleolithic-type diet or pattern of activities that will keep us healthy, we see that studies of our evolutionary past and of modern foragers do give us important insights into the aspects of our lifestyle and diet that have changed the most, and that consequently are most likely to be involved in the modern epidemic of metabolic disease. The Western diet and lifestyle, with its chronic imbalance of intake and expenditure, appears to have exceeded our homeostatic capacity to cope metabolically, helping drive the global pattern of weight gain and its associated problems. The modern epidemic of metabolic disease is not merely a function of calories, but also of the composition of what we eat. Modern dietary change has led to other, no less important, imbalances and sources of novelty, such as our increasing consumption of nonprotein calories like processed sugars and alcohol, and of foods that are not only high in fat but also have an imbalanced fatty acid composition.
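The glycemic index mentioned above is conventionally calculated by comparing the blood-glucose response to a test food against the response to a reference glucose load containing the same amount of available carbohydrate. The Python sketch below shows the standard incremental area-under-the-curve calculation; the measurement times and glucose values are hypothetical illustrations, not data from the text.

def incremental_auc(times_min, glucose_mmol):
    """Incremental area under the blood-glucose curve (trapezoidal rule),
    counting only the excursion above the fasting (time-zero) baseline."""
    baseline = glucose_mmol[0]
    excess = [max(g - baseline, 0.0) for g in glucose_mmol]
    area = 0.0
    for i in range(1, len(times_min)):
        area += (excess[i - 1] + excess[i]) / 2.0 * (times_min[i] - times_min[i - 1])
    return area

def glycemic_index(times_min, test_response, reference_response):
    """GI = 100 * iAUC(test food) / iAUC(reference glucose), for portions
    containing equal available carbohydrate."""
    return (100.0 * incremental_auc(times_min, test_response)
            / incremental_auc(times_min, reference_response))

# Hypothetical two-hour responses (mmol/L) to 50 g available carbohydrate:
t = [0, 15, 30, 45, 60, 90, 120]                  # minutes after eating
reference = [5.0, 7.5, 8.5, 8.0, 7.0, 6.0, 5.2]   # glucose drink (GI defined as 100)
slow_food = [5.0, 5.8, 6.4, 6.3, 6.0, 5.6, 5.2]   # slowly absorbed food, lower GI
print(f"GI of test food: {glycemic_index(t, slow_food, reference):.0f}")  # about 45

Rapidly metabolized refined sugars produce a tall, early glucose peak and hence a large incremental area relative to the reference, which is what gives them their high glycemic index.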
All of these changes have been rapid, leading to unprecedented global changes in human metabolism and health. It is also important to remember that even in high-income countries many members of the population, especially those of low socioeconomic status living in the parts of urban environments sometimes called "food deserts," have a diet deficient in micronutrients and high-quality protein, and experience food insecurity.

How Can Change in the Environment Increase Disease Risk?

These lifestyle transitions have been implicated as causal in the changing patterns of obesity and of diseases such as type 2 diabetes and cardiovascular disease. The evolutionary question is: why has this change in environment increased the risk of disease? Many of us are now living in environments that are novel in an evolutionary sense, and the underlying consensus is that the disease syndromes are the outcome of a mismatch between the human capacity to maintain metabolic homeostasis and the modern energetic and nutritional environments. Two sets of possibilities exist to explain how this mismatch may arise, and they are not mutually exclusive (see Low and Gluckman 2016). First, because the contemporary environment is novel in evolutionary terms, there has never been selection to match our genotype to it; rather, our genotype was primarily selected and adapted for the range of Paleolithic environments. But there are additional questions: was the genotype selected because it allowed the individual to survive bottlenecks produced by famine, and does it now have deleterious consequences manifest in obesity and its complications? Or did the rate of change in the environment outstrip the capacity of selection to match the organism's physiology to its environment in a more general sense, with obesity and metabolic disease the outcomes of this mismatch?

A "Thrifty" Genotype?

The hypotheses developed to explain the rise of metabolic and related diseases in recent history were among the earliest attempts to use evolutionary principles to understand human health and disease. In 1962 the geneticist James Neel proposed that such diseases arose through patterns of inherited genes that may have been previously advantageous in human evolution but were now disadvantageous. He referred to these genes as "thrifty genes" because they were proposed to be associated with energy-saving characteristics such as insulin resistance in muscle or the propensity to deposit fat (see Neel 1962). He reasoned that such genes would have conferred an adaptive advantage in times of famine and perhaps little disadvantage in times of plenty because, as we saw for many other chronic diseases, there would have been little selection operating to maintain health into later life, as in general these diseases do not inhibit reproduction. Neel proposed that these genetically mediated thrifty traits that were useful to our ancestors would produce inappropriate effects, including obesity, in the modern world. He argued that this would be particularly manifest in populations in parts of the world where famine had been frequent. When such populations underwent a nutrition transition from a hunter-gatherer lifestyle to a modern lifestyle of chronic dietary abundance, they would be at a particular disadvantage metabolically. The "thrifty genotype hypothesis" led to an early focus on predominantly genetic explanations for the high prevalence of metabolic disorders in some populations.
What properties would we expect of a thrifty gene in the sense originally proposed by Neel? First, we would expect the gene to demonstrate (and indeed probably to have been identified by) significant linkage to some trait that could represent thriftiness: perhaps BMI or glucose tolerance. Second, we might expect plausibility of gene function, i.e., that it would code for a protein or regulatory RNA associated with some aspect of the control of metabolic partitioning (e.g., insulin secretion or action, a key metabolic regulatory enzyme, or some facet of adipocyte biology). Third, and ideally, we might expect population genetic studies to show some signal of selection on the gene, perhaps for a "risk" allele in populations particularly exposed to cycles of feast and famine in historical times, or even for a "protective" modern allele where the ancestral allele is now detrimental.

Clearly, susceptibility to obesity and type 2 diabetes varies between individuals in the population, and there is evidence for heritability of these traits. Over 800 candidate genes associated with obesity or with type 2 diabetes have been identified (see Dai et al. 2013). However, of these only a small number have been confirmed by replicate studies in different populations with large enough samples to give assurance that the associations are indeed real. Moreover, even the strongest of these associations typically explains only a very small amount of the variation in susceptibility to obesity or diabetes in a population (of the order of 1–3%). And the mere demonstration that variation in obesity is related to genes does not prove that these genes were selected in the human gene pool because of advantages during famine, as Neel originally proposed. Thus, the thrifty gene hypothesis remains just that: a hypothesis without direct support.

In fact, putting Neel's ideas under greater scrutiny gives further reason to pause. We have seen that the rise of agriculture ushered in poorer health and periodic nutritional stress, which is expected given the greater reliance upon a more precarious, narrower selection of crops in such populations. If hunter-gatherers were not particularly prone to famine, then could we adapt Neel's hypothesis to propose that the more widespread and severe episodes of famine that have occurred since the introduction of monocrop agriculture have been a major driver of selection for thrifty metabolism? We know from studies of lactase persistence and malaria resistance genes that selection over such a timescale can leave physiologically important signatures in the human genome. Indeed, it has been suggested that selection in response to major famine events caused by failure of the Indian monsoon is the cause of the tendency of individuals of South Asian origin to deposit more of their energy in metabolically labile visceral fat (see Wells 2007).

Others have questioned on a number of grounds whether the selective advantage of a putative thrifty gene would be sufficient to cause it to spread widely. It is argued that the frequency and severity of famine have been insufficient, that relatively few famine events have occurred in most populations, that deaths during famine are predominantly caused by factors other than starvation, and that mortality in famines disproportionately affects the young and the old rather than individuals of reproductive age (see Speakman 2006).
In addition, obesity is rare in well-nourished individuals from present-day hunter-gatherer and subsistence agricultural societies, who would have been expected to inherit such thrifty genes. If foraging populations do not put down excess calories as fat during times of plenty, how could the putative thrifty alleles underlying human metabolism have increased survival during periods of famine?

Additional doubt is cast upon the assumed role of famine when we consider the development of body fat. In humans, fat makes up a larger percentage of weight at birth than in any other mammal. This is followed by a continued period of rapid fat deposition during the early post-natal months. In well-nourished populations, adiposity reaches peak levels during the first year of life before gradually declining in childhood, when humans reach their lowest level of body fat in the life cycle (see Davies and Preece 1989). If the threat of famine is what drove the human tendency to build up fat reserves, it is not obvious why children's bodies should do so little to prepare for these difficult periods. The lower priority placed upon maintaining an energy reserve by middle childhood suggests that the background risk of starvation faced by our ancestors ("famine") was small in comparison with the nutritional stress during the preceding developmental period of infancy. Indeed, infancy is marked by often intensive nutritional stress associated with weaning and the related problem of infectious disease, making nutritional disruption and nutrition-related mortality common at this age (see Kuzawa et al. 2007). Because all individuals who successfully pass on their genes to offspring must have survived this early-life nutritional bottleneck, there is likely to have been selection for building up protective fat reserves at this age in large-brained humans. In summary, while the thrifty gene hypothesis was valuable in developing evolutionary concepts related to metabolic disease, the hypothesis itself has largely been discarded.

Does Evolutionary Novelty Explain Current Patterns of Metabolic Disease and Obesity?

The preceding sections provide the background for considering more recent concepts of how our evolutionary history helps explain the changing patterns of obesity and metabolic disease, in particular the recent increases in these conditions associated with the rapid nutrition and socioeconomic transition in Western countries over the last 100 years, and the transition that is now occurring in low- and middle-income countries. To recapitulate, our ancestors evolved in the absence of agriculture and a stable source of carbohydrates. Our species is characterized by being able to cope with an omnivorous diet and with a metabolism and behavior evolved on the basis of regular access to food. Selection acted to match our physiology to these characteristics and to the diet of the Paleolithic foragers from whom we are descended. Although, as discussed earlier, the concept of a typical hunter-gatherer is misleading, it is generally accepted that we were selected within, and adapted to, environments characterized by a relatively high protein intake and a low intake of sugars and fat. Frank obesity would have been a remote possibility. It has been estimated that hunter-gatherers may expend up to 2500 kcal each day gathering food.
Since the Neolithic, the human diet has changed dramatically, at first in association with the development of agriculture and then, since the industrial and technological revolutions, in association with the development of various forms of highly refined foods. In parallel, the physical work expended to "earn" this food has been declining, again at an accelerating rate.

These two changes have occurred against a background of a marked increase in life expectancy across virtually all populations as public health measures take effect: more dramatically in high-income countries, but increasingly also in low- and middle-income countries. It is important to note that because peak reproduction occurs well before middle age, health and reproductive fitness are not identical. There will not be great selection pressure against health consequences occurring in middle age if reproductive competence at an earlier age has not been affected, although the effects on kin fitness conferred by the presence of older individuals in social groups may provide some fitness advantage for longevity. The increasing incidence of metabolic disorders among younger people refutes one possible explanation for the epidemic, namely that it is the direct result of a longer lifespan, but it does raise the question of how such early-onset disease will influence the reproductive fitness of affected individuals.

The simplest evolutionary explanation of this epidemic is therefore that our species is facing a nutritional environment that is entirely novel in evolutionary history and for which our metabolism has not been selected. Instead, our metabolic repertoire of genes is based on that which would have been best adapted for the Paleolithic. For instance, a metabolism selected for a high-protein, low-fat, and low-carbohydrate diet might result in obesity and metabolic disease when confronted with a low-protein, high-fat, and high-sugar diet. This is the simplest form of evolutionary mismatch, in which cultural and dietary change have outpaced the capacity of evolutionary processes to compensate (see Low et al. 2015). What seems clear is that the environmental setting is critical, regardless of genetic background. Pima Indians who have maintained their traditional way of life do not have high rates of type 2 diabetes, but their contemporaries consuming a different diet and engaging in reduced levels of physical activity frequently develop the disease.

The risk of developing diabetes can vary markedly across populations: this also requires an explanation. In some cases this can be traced to genetic differences in local populations that have been confronted with distinct nutritional histories, selecting for unique mutations that influence their disease risk. At a given BMI, populations from the Indian subcontinent are more likely to develop type 2 diabetes than are Europeans, and it has been proposed that monsoon cycles could have created famine conditions that might have selected for such a trait. In themselves these observations still fit with a simple model of mismatch caused by evolutionary novelty, but one in which sensitivity to the novel environment might differ genetically across populations.

Developmental Mismatch: A Contributing Factor

It has been obvious since Hippocratic times that events in early life may have lifelong consequences.
During the past 25 years, much research has emerged to show that early human development is responsive to metabolic signals, and that a mismatch between these signals and the actual environment in which the individual later lives could contribute to the rise of metabolic disorders such as diabetes and heart disease (see Gluckman et al. 2015). Population-based studies were the first to suggest that the developmental environment plays a role in the risk of later disease. Evidence for such effects came in several forms. The most common was the finding that individuals of lower birthweight tend to have higher rates of later metabolic disorders, including hypertension, diabetes, and cardiovascular diseases (reviewed in Godfrey 2006). These studies are by their nature very long term: how else can we relate what happened to a person when they were a fetus or a growing child to their health as an adult? One of the most important approaches was to find historical records of birthweight and early growth after birth from a period over 50 years ago, and then to trace the subsequent adults.

In the early 1980s, David Barker, an epidemiologist from Southampton, and his colleagues were investigating the rates of mortality from coronary heart disease and other vascular diseases in England and Wales over the previous decade (1968–78). They noticed that the highest rates of mortality did not occur in the areas of greatest contemporary affluence, in the southeast of Britain for example. Rather, the highest rates occurred in the northwest of England and in South Wales, which were areas of high unemployment and poor social conditions, particularly at the time the adults they were studying had been born. Furthermore, there was a strong geographical correlation between rates of infant mortality (a marker of poverty, and indirectly of maternal nutrition) in the early 1900s and rates of death from coronary heart disease several decades later. What might account for such an association between early life and late-life mortality? These observations were very remote in time, and likely to be influenced by unmeasured factors, such as differences in smoking, diet, or lifestyle, which were not quantifiable.

The next step was to move from gross correlations over integrated populations and across time to find out what happened to a specific population in which individuals could be studied throughout their lives. What was needed was a set of records about the growth and development of children in the early part of the century that could be linked specifically to the causes of death in later life. The largest set of records found related to the English county of Hertfordshire. These included weight at birth, weight at 1 year of age, and whether the baby was weaned at 1 year. The ledgers had been maintained from 1911 to 1945. Barker and his colleagues used the National Health Service Central Register at Southport to trace 16,000 men and women born in Hertfordshire between 1911 and 1930, and to determine their causes of death. The results were controversial (see Barker et al. 1989; Osmond et al. 1993). What they found, for both men and women, was that the risk of death from heart disease was doubled in individuals born at a weight of less than 5.5 lb (2.5 kg) compared with those born at a weight of 9–9.5 lb (4.1–4.3 kg).
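A "doubled risk" in a cohort study of this kind is a relative risk: the death rate in the low-birthweight group divided by the death rate in the reference group. The minimal Python sketch below shows the calculation; the counts are hypothetical illustrations, not the actual Hertfordshire figures.

def relative_risk(deaths_exposed, n_exposed, deaths_reference, n_reference):
    """Death rate in the exposed group divided by that in the reference group."""
    return (deaths_exposed / n_exposed) / (deaths_reference / n_reference)

# Hypothetical counts for illustration only (not the study's actual data):
rr = relative_risk(deaths_exposed=50, n_exposed=500,       # born < 2.5 kg
                   deaths_reference=25, n_reference=500)   # born 4.1-4.3 kg
print(f"Relative risk of coronary death: {rr:.1f}")        # 2.0, i.e. doubled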
While there was much controversy about these initial findings, they in fact built on many earlier observations suggesting that conditions during pregnancy and early life have long-term health consequences. Over the next two decades, a large body of confirmatory epidemiological, prospective clinical, and experimental studies supported the general model that an adverse start to life, which might be proxied by a low birthweight, was associated with a greater risk of later obesity, cardiovascular disease, and insulin resistance. Other workers showed that excess nutrition in infancy, such as that associated with formula feeding, also led to altered disease risk. Given that poor fetal growth is generally associated with some rapid rebound growth in infancy, these are probably different perspectives on the same phenomenon. However, further evaluation of the developmental data showed that the relationship between developmental growth patterns and disease risk is not a function of severe intrauterine growth impairment; rather, there is a continuous change in risk across the full range of developmental growth patterns. This suggests that disease risk is not a consequence of developmental disruption but is part of a developmentally plastic process that originally had an adaptive purpose. Indeed, subsequent studies have shown relationships between the conditions of pregnancy, such as maternal caloric intake, and the later pathophysiology of the offspring (see Gale et al. 2006). The first studies were conducted in high-income countries, so of course the question arises of how relevant they are to low- to middle-income countries. There have now been studies conducted in several of these countries, including India, China, and parts of South America, and they all show similar trends. Finally, evidence from modern famines further supports the role of the intrauterine environment in determining health in later life.

Maladaptive Consequences of an Adaptive Process

These epidemiological studies could not provide evidence about the biological basis of the relationship, but a consensus arose that poor fetal or early-life experience altered development in such a way that growth was affected and the propensity to develop disease in later life was also influenced. The term "programming," previously and variously used to reflect the putative intergenerational effects of gestational diabetes (see later) and then the later consequences of formula feeding versus breastfeeding, was adopted to describe the phenomenon. Subsequently, Barker and Hales developed the thrifty phenotype hypothesis, in explicit reference to Neel's earlier proposal of a thrifty genotype, to explain programming (see Hales and Barker 1992).

Their concept was that the fetus adjusts its biology in response to signals from its mother of poor nutrition, allowing it to survive until birth (i.e., an immediately adaptive response) but then predisposing it to the adverse consequences of such programmed thriftiness in adulthood. They proposed that the likely mechanism was the in utero induction of insulin resistance, as insulin is known to be a key regulator of fetal growth. However, that model failed to fit several aspects of subsequent data. It assumed that fetal growth retardation was key to the process, yet it soon became clear that most people who later develop obesity and metabolic compromise were not growth retarded at birth.
Further, infants who are small at birth have insulin hypersensitivity, and insulin resistance only appears at about 3 years of age (see Mericq et al. 2005). Nevertheless, the conceptual framework of the thrifty phenotype has inspired considerable research, which has led to our current understanding of the developmental basis of later disease risk.

Animal studies confirmed and extended the results and helped dissect the underlying molecular mechanisms. The initial studies were performed in rats. If a rat fetus was undernourished in utero, because the pregnant dam was fed a reduced-energy diet or simply an unbalanced diet (e.g., one with a low protein content), it became hypertensive and obese as an adult (see Langley and Jackson 1994). These adult rats were also shown to have insulin resistance and shorter life expectancies than those whose mothers had been fed a balanced diet in pregnancy. The effect was magnified if the rat was placed on a high-fat diet after weaning, echoing the human situation of dietary abundance and demonstrating that the interaction between the fetal and post-natal environments determined the outcome (see Vickers et al. 2000).

As experimental work proceeded, it became linked to other fields of biological enquiry, especially evolutionary developmental biology. The realization was growing that development was an important and under-represented component in the explanation of metabolic disease. But a key feature of the link with metabolic disease was that it was not just about those with a low birthweight. The epidemiological studies had shown a continuous relationship between birth size and later disease risk, present even in those of above-average birthweight. Epidemiologists had also shown that metabolic programming could be induced by changes in the fetal environment that did not affect birthweight (see Gale et al. 2006). Further epidemiological and clinical studies showed that smaller infants had relatively more visceral fat at birth, although their subcutaneous fat was reduced. Such observations, together with the broad incidence of metabolic disease in the population, suggested that the developmental component did not represent the outcome of a pathological process involving disruption of fetal development. It was rather a maladaptive outcome of the generally adaptive processes of developmental plasticity.

Indeed, there is now ample experimental evidence in animals, and growing clinical evidence, that lifelong epigenetic changes in genes associated with systems such as insulin sensitivity, glucose metabolism, and the glucocorticoid axis underpin the developmental induction of metabolic risk (see Low et al. 2014). In humans, individuals exposed pre-natally to the Dutch Hunger Winter famine of 1944 showed differential methylation levels at gene loci implicated in growth and in metabolic and cardiovascular disease risk nearly six decades after exposure, showing that a transient environmental exposure can indeed have long-lasting molecular consequences (see Heijmans et al. 2008). Other epigenetic marks have been found in umbilical cord tissue that relate to body composition in pre-pubertal children.

Developmental Plasticity and Mismatched Signals Across the Life Course

How can these developmental components be understood in evolutionary terms?
The general paradigm we have put forward is that early developmental cues induce an adaptive, developmentally plastic response that sets the individual's physiology in the expectation of living in a nutrient-poor or merely adequate environment, but the organism then ends up in a more nutritionally plentiful environment than predicted. This general model suggests that these processes are not pathological in origin, but are fundamental to biological variation operating in the normal range of ecological cues. There are other developmental pathways that reflect evolutionarily novel exposures in development for which an adaptive explanation is inappropriate, including maternal obesity, gestational diabetes, and feeding with infant formula.

We explained how the processes of developmental plasticity act to allow one genotype to give rise to a range of phenotypes, and how organisms use plasticity as an alternative or additional process to adapt to environments, particularly among species faced with change during their life course. Developmentally plastic responses are induced by external cues (e.g., maternal nutrition), and depending on the fidelity of the relationship between the cue and the future environment there may be effects on fitness or health. It was hypothesized that if the developing organism predicts a limited nutritional environment in the future, it might be appropriate to use the mechanisms of plasticity to adjust growth patterns so that later body function is optimized for a limiting nutritional environment. Conversely, if the fetus predicts a later environment with plentiful nutrition, it is appropriate to have a metabolic system set up with different expectations. In the former situation of predicted nutritional threat, an appropriate response would include reduced investment in somatic growth (e.g., reduced muscle mass), a preference for high-fat foods, metabolic settings that favor fat deposition in times of energy excess, and altered endocrine, behavioral, and vascular controls such that the organism has reduced insulin secretion and sensitivity. Given that evolution is driven by the fitness imperative, anticipation of a threatening environment might be expected to accelerate the timing of maturation and commit more resources to reproduction, perhaps even at a cost to other traits that improve longevity (e.g., by investing less in cellular or DNA repair; see Gluckman and Hanson 2006a). Depending on the severity of the initial cue, the organism may respond only with predictive adaptation, but if the challenge in early life is more severe it will induce the immediate adaptive responses of reduced growth and/or early delivery, thus explaining the original birthweight relationships found in epidemiological studies.

The general hypothesis of predictive adaptive responses was initially tested in rats. It was shown that neonatal rats that had been born to undernourished mothers could be "tricked" into thinking they were in a high-nutrition environment by being injected with the adipokine leptin. This prevented the animals from developing obesity, insulin resistance, and the other features of metabolic compromise, along with the associated epigenetic changes, even when maintained on a high-fat diet through life (see Vickers et al. 2005; Gluckman et al. 2007b). But could such a hypothesis be tested in humans?
Studies in a population where severe undernutrition is common showed that being born small was protective against the risk of dying from infant undernutrition, and this could best be interpreted as supporting the evolutionary hypothesis (see Forrester et al. 2012). The adaptive advantage of a predictive response depends on the fidelity of the prediction. If correct predictions lead to greater chances of growth and survival to reproduce, this would explain why the underlying anticipatory and plastic processes have been selected through evolution. Modeling work shows that developmental forecasting does not have to be particularly accurate to confer a selective advantage (see Jablonka et al. 1995; the toy calculation at the end of this subsection illustrates the point). When fetal nutrition is not a reliable cue of external conditions, whether as a result of maternal disease, placental inadequacy or malfunction, or simply because the environment has changed notably between birth and later life, the result can be a phenotype that is not well suited to meeting the challenges of its environment, placing the individual at greater risk of disease. In the metabolic domain, a phenotype of increased insulin resistance, reduced muscle mass, and increased propensity to store fat is precisely the background on which susceptibility to metabolic disease would be enhanced in a later nutritional environment of high energy availability.

Because of maternal constraint, it is reasonable to suggest that most individuals are sensitized to an obesogenic environment, because the fetus will be biased towards predicting the lower-nutrition environments that may have been typical of earlier epochs. Maternal constraint refers to a set of somewhat poorly defined mechanisms by which fetal growth is limited by maternal size (see Gluckman and Hanson 2004). It probably involves limits on uterine blood flow and placental–fetal hormone interactions. The outcome is that genetic factors are less important than maternal factors in determining fetal growth: indeed, embryo transplant experiments in animals, and studies of humans born to surrogate mothers, show that the maternal phenotype rather than genotype is the primary determinant of birth size. These studies also show that fetuses do not grow to their maximal genetic potential because of these constraining mechanisms (see Hanson and Godfrey 2008), and thus are perhaps generally signaled to expect limited nutrition after weaning.

The developmental mismatch model proposes that, in evolutionary terms, there is an advantage in using the processes of developmental plasticity to adjust the set points for metabolic homeostasis to match the predicted environment of the mature organism. This process could have had adaptive advantage in environments that were reasonably stable over decades. But the fidelity of the prediction may increasingly be lost, because the mechanisms of maternal constraint limit the forecast that is possible, particularly as the post-natal environment becomes abundant in evolutionarily unprecedented ways. Additionally, modern medical care allows a greater range of fetuses to survive, many with greater degrees of maternal constraint (such as twins and those with extreme growth retardation) and perhaps therefore with a greater risk of mismatch. Maternal ill-health and placental dysfunction are other ways in which the maternal–placental transduction of environmental information can lead to a faulty prediction, and progress in obstetrics and pediatrics allows a far greater range of babies to survive to adulthood.
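The claim that a developmental forecast need not be very accurate to be favored by selection can be illustrated with a toy calculation. The Python sketch below is a minimal model of our own devising, not the Jablonka et al. model: a plastic organism sets its phenotype from a prenatal cue that matches the adult environment with a given fidelity, and its expected fitness is compared with that of a fixed, non-plastic phenotype. The payoff values are arbitrary assumptions.

# Toy model: expected fitness of a plastic "forecasting" strategy versus a
# fixed phenotype. Two environments are equally likely; a matched phenotype
# earns W_MATCH, a mismatched one W_MISMATCH (assumed payoffs).
W_MATCH, W_MISMATCH = 1.0, 0.6

def fitness_plastic(cue_fidelity):
    """Expected fitness when the phenotype follows a prenatal cue that is
    correct with probability cue_fidelity."""
    return cue_fidelity * W_MATCH + (1 - cue_fidelity) * W_MISMATCH

def fitness_fixed():
    """Expected fitness of a fixed phenotype that matches only one of the
    two equally likely environments."""
    return 0.5 * W_MATCH + 0.5 * W_MISMATCH

for fidelity in (0.5, 0.6, 0.75, 0.9):
    print(f"cue fidelity {fidelity:.2f}: "
          f"plastic {fitness_plastic(fidelity):.2f} vs fixed {fitness_fixed():.2f}")
# Plasticity outperforms the fixed strategy whenever fidelity exceeds 0.5:
# the forecast only needs to beat chance, not be highly accurate.

Fuller models add costs of plasticity and environmental autocorrelation, but the qualitative conclusion, that even modest cue fidelity can favor anticipatory development, is the point at issue here.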
While the expansion of experimental and clinical research and the definition of underlying epigenetic mechanisms have established the view that developmental pathways are important, the relative importance of these pathways in contributing to the risk of metabolic disease remains uncertain. Clearly, the disease risk would not exist were it not for the large changes in diet and lifestyle. It is important to note that this model does not imply that developmental factors cause obesity or type 2 diabetes; rather, it explains how developmental factors change the sensitivity of the individual to the obesogenic environment in which they will live as older children and adults. Indeed, recent epidemiological studies show that there is an interaction between developmental and evolutionary mismatch, such that adult lifestyle interacts with birthweight to compound the risk of developing type 2 diabetes (see Li et al. 2015).

Evolutionary Novelty in Development

The developing human may, additionally, be exposed to entirely novel exposures, such as infant formula based on cows' milk. It is well established that feeding with infant formula predisposes to obesity, probably through mechanisms other than adaptive developmental plasticity, and that breastfeeding has a protective effect (see Cattaneo and Cogoy 2012). When infants are breastfed they regulate their own food intake. This is not the case when they are fed by bottle, and this may change the development of appetite control. Human breast milk has a number of components, including milk oligosaccharides, neurohormones such as leptin, and other hormones such as IGF1, that may have an impact on the infant's metabolic and satiation biology and development. Furthermore, human breast milk supports a different gut microbiota from that of formula (see Penders et al. 2006); these differences persist for some time after birth and may have important long-term effects. Finally, the macronutrient content of infant formula is very different from that of breast milk, and may make obesity more likely to develop.

A second example of probable evolutionary novelty that is of rapidly emerging public health importance is gestational diabetes mellitus. Hyperglycemia in a pregnant woman leads to fetal hyperglycemia and hyperinsulinemia, resulting in adiposity of the offspring. Insulin is known to be adipogenic in the late-gestation fetus and infant. As a consequence, infants of diabetic mothers have an increased fat mass, in proportion to the degree of maternal hyperglycemia. The larger number of fat cells in infancy confers a greater risk of obesity through childhood. Beyond this, there may also be epigenetic changes that further predispose to a later risk of diabetes. It has been suggested that gestational diabetes is an evolutionarily novel exposure (see Ma et al. 2013). Some degree of insulin resistance is normal in pregnancy; it is induced by placental production of growth hormone and placental lactogen, which cause mild insulin resistance in the pregnant woman. This promotes the transfer of maternal glucose to the fetus to support fetal growth. But unlike that of other nutrients such as amino acids and fatty acids, placental glucose transfer has no upper limit. Gestational diabetes itself is more likely in women who had been born small and are consequently at greater risk in an obesogenic environment, thus reflecting earlier developmental impacts on their own lives.
Left untreated, gestational diabetes results in fetal macrosomia, which in the absence of obstetric intervention leads to dystocia, posing a severe risk to both mother and child. That there is no limit on glucose transfer suggests that there has been no selective pressure for one, which might imply that gestational diabetes is itself a manifestation of modernity. Indeed, maternal obesity and gestational diabetes often coexist, and correlations have been shown between maternal BMI and offspring adiposity.

Animal research is now starting to suggest that paternal obesity and insulin resistance, experimentally induced in male rodents, can also pass risk to the offspring, even when the mothers are lean and fed a balanced diet (see Fullston et al. 2013; Wei et al. 2014b). This is one of the growing pieces of evidence for epigenetic inheritance.

One of the most active areas of current research concerns the role of the gut microbiome in health. Recent work has shown that women whose gut microbiota were dominated by the bacterial phylum Firmicutes had different epigenetic marks in genes functionally associated with obesity, cardiovascular diseases, and inflammation. A human infant that is delivered vaginally will acquire a microbiota similar to that of its mother's vagina, while infants delivered by cesarean section have a very different microbiota (see Funkhouser and Bordenstein 2013). There is now good evidence that the risk of later overweight/obesity is up to 25% greater in babies who were delivered by cesarean section (see Darmasseelane et al. 2014). Although the causative nature of the association remains to be tested, this probably represents another example of a health risk arising from exposure to an evolutionary novelty, namely loss of exposure to the maternal vaginal microbiota.

Other evolutionarily novel exposures include environmental chemicals such as endocrine disruptors, which mimic or block hormonal action and are found in many common modern-day household products. Many of these endocrine disruptors interfere with lipid metabolism and adipogenesis, and they have been implicated in the growing obesity epidemic. Recent data in rats suggest that the adverse effects of maternal obesity on the health of the offspring may be mitigated by supplementing the mother's diet during pregnancy with methyl donors, taurine, or a cocktail of antioxidants (see Vickers and Sloboda 2012). The specific mechanisms by which this occurs remain unclear, but these findings provide important proof of principle for the reversibility of developmental metabolic effects induced by evolutionarily novel exposures. In light of the modern-day obesity epidemic, the potential public health utility of these findings may be substantial.