Monday, September 22, 2014

Nature vs nurture, and human behavior: where’s the science?

Despite more than a century of work in genetics, sociology, psychology, anthropology, economics, and other disciplines that focus on human behavior, the turning of the Nature-Nurture cycle continues.  Is human behavior hard-wired in our genomes, or is it a pattern of responses created from birth onward by experience?  How can it be, given the masses of data that state-of-the-art science has produced, and the quantity of ink that has been spilled to answer the question, that neither side yields an inch?  Indeed, how can there still be sides?  

We can express the debate roughly as follows:
Does there exist in every human being, underlying the behavior that is shaped by the social influences around him or her, an inborn predisposition that education might modify but cannot really change? Is the view that denies this, asserting that we are born as a tabula rasa to be totally molded by life experience, held by people who have never recognized the obvious fact that we are not born with blank faces, or who have never compared two infants even a few days old and observed that they are not born with blank temperaments to be molded at will? Are there, varying essentially without limit among individuals, genomic causes of behavior that cannot be overridden by a person's life experience? Is experience ever the key to understanding a person's behavior, or can genomic knowledge predict the nature of each person?
The prevailing view in our current age, in love as it is with technology and genomics, the Nature view, invokes Darwinian evolutionary theory as its justification for genetic determinism, and asserts that if we do enough genome sequencing, individual behaviors will be predictable from birth, much as many in the biomedical community argue that disease risk will be predictable from birth.  In anthropology, this view goes by the rubric ‘Human Biodiversity’, the assertion that it is the bio component that drives our diversity, as opposed to our culture and environment.  This is extended to the argument that Darwinian theory implies that different ‘races’ are necessarily different in their inherent skills, temperaments, and the like, just as they are in their skin color and hair form.

Opposed to that is the Nurture view that argues that some physical traits and disease predispositions have genetic components, but that our basic nature and behavior are molded by our social, political, and economic contexts.  This view argues that measures of value and achievement, and behaviors leading to economic or family success or to addiction and antisocial behavior, are social constructs rather than reflections of hard-wired natures.

There is nothing new here.  In reaction to the eugenics movement and the Nazi horrors, which were rationalized in terms of Darwinian inherency, there was a strong turn towards the tabula rasa view, that early experience makes us what we are.  This was how psychology was generally taught in the post-war generations.

Both views can be traced back to precursors, including Freud and many others on the Nurture side and of course Darwin and his relative Francis Galton on the Nature side.  However, we are, after all, supposedly in the Age of Science, and it is a bit surprising that we have not moved very far at all with respect to the Nature-Nurture question.

Another way to say it
In fact, that lack of progress, despite enormous investment in supposedly scientific research to answer the question, explicitly or implicitly, is not hard to document.  And the documentation goes beyond the ivory labs of scientists to the broader public, where the question has been of interest for more than a century.  Here is a way to express the same question, which I have just chanced upon:
Does there exist in every human being, beneath that outward and visible character which is shaped into form by the social influences surrounding us, an inward, invisible disposition, which is part of ourselves, which education may indirectly modify, but can never hope to change? Is the philosophy which denies this and asserts that we are born with dispositions like blank sheets of paper a philosophy which has failed to remark that we are not born with blank faces—a philosophy which has never compared together two infants of a few days old, and has never observed that those infants are not born with blank tempers for mothers and nurses to fill up at will? Are there, infinitely varying with each individual, inbred forces of Good and Evil in all of us, deep down below the reach of mortal encouragement and mortal repression—hidden Good and hidden Evil, both alike at the mercy of the liberating opportunity and the sufficient temptation? Within these earthly limits, is earthly Circumstance ever the key; and can no human vigilance warn us beforehand of the forces imprisoned in ourselves which that key may unlock?
Where does this somewhat stilted form of the issue come from?  It's from Wilkie Collins' novel No Name, published in serial form in Charles Dickens' journal All the Year Round in 1862-3, just three years after On the Origin of Species (though with no reference to it; Collins was not involved in scientific discussions at the time).  That was about 150 years, or six generations of science, ago!

No Name is a neglected masterpiece, a compelling novel about social conditions in England at the time (related to rules of inheritance, marriage, and illegitimacy).  It was not a commentary on science, but this paragraph is an essentially modern expression of the very debate that still goes on in the world generally, and in science specifically.

Where we are today
We have tons more facts, on both sides, but not a whit more nuance in the generally expressed views on the subject.  Indeed, do we have much more actual science, or is each side just enumerating additional data that often barely deserve being called 'fact'?  One major reason for the persistence of non-answers is the inordinately dichotomous world-views of scientists, predilections based on various aspects of their technical specialties but also on their sociopolitical views.  Another reason is simply the tribal nature of humans generally, and in particular of opinions about issues that affect societal policy, such as where to invest public funds in regard to things like education or welfare, how one views social inequality, the origins of proper (socially admired rather than antisocial) behavior, and the like.  We all have our perspectives and our interests, regardless of whether we work as scientists or in other occupations.  In light of this discussion, I probably should be asking whether my world-view and career in science are the result of my genes, or of the fact that there were lots of jobs (and science was more fun and less about 'productivity' and money than it is now) when I got my degree?

Not everyone sees this question in a completely polarized way; some instead propose Nature via Nurture or other acknowledgements of the role of both genes and environment (an obvious truth, since DNA is in itself basically inert).  But if you look carefully you'll almost always be able to detect their political beliefs, and thus their predilection for one side or the other.  They pay lip service to environment but basically want to do a lot of DNA sequencing (or tests for 'epigenetic' changes in DNA), or they want to do sociological studies and minimize the genetic component (or opportunistically say they're testing a given variant in many environments).  We are all trained as specialists with particular interests, and science in general has been based on an enumerative, reductionist approach that is not good at integrative studies of complex interactions.

The bottom line for me is that we should all recognize the uncertain nature of the science, even perhaps that science doesn’t pose its questions in ways that have scientific answers. But we also should recognize that behavior and attitudes towards it affect how society allocates resources, including status and privilege and power hierarchies.  For that reason, scientists should treat the subject with a bit more caution and circumspection—much as we do things like experiments testing whether genetic engineering of viruses could make them more pandemic, and other areas where what someone does in a lab, for curiosity or any other reason, can spill over to have implications for the lives of the rest of us—who are paying the bills for that science.

However, for the same reason that the research affects society, we can’t expect scientists, who are privileged members of that society, to monitor themselves.  And the persistence of the same important questions, from the beginning of the scientific age to the present, should teach a lesson in humility as well.

Friday, September 19, 2014

Faith in science? Industrialized agriculture and antibiotic resistance

Someone asked me the other day on Twitter whether I thought that the words "science" and "belief" were compatible.  I said yes, though I know that a lot of scientists think (...believe...) that faith has nothing to do with science.  Science is facts, faith is religion, based on sacred texts and the like, which are basically hearsay without empirically acceptable evidence.  But, the history of science indicates that this distinction is far from being so simple -- there was a time when people believed that the moon was made of cheese, diseases were caused by bad air, Newton was right about physics, the continents didn't move.  And these beliefs were based on empirical evidence, observation -- dare I say 'facts'? -- not mere guesswork.

In that light, two recent pieces about the role of agriculture in the rise of antibiotic resistance are interesting.  The New York Times described a new study in the Journal of Occupational and Environmental Medicine ("Persistence of livestock-associated antibiotic-resistant Staphylococcus aureus among industrial hog operation workers in North Carolina over 14 days," Nadimpalli et al.) that reports that workers at industrial hog farms can carry antibiotic-resistant bacteria, Staphylococcus aureus, in their nostrils for up to four days.
Twenty-two workers provided 327 samples. S. aureus carriage end points did not change with time away from work (mean 49 h; range greater than 0-96 h). Ten workers were persistent and six were intermittent carriers of livestock-associated S. aureus. Six workers were persistent and three intermittent carriers of livestock-associated multidrug-resistant S. aureus. One worker persistently carried livestock-associated methicillin-resistant S. aureus. Six workers were non-carriers of livestock-associated S. aureus. Eighty-two per cent of livestock-associated S. aureus demonstrated resistance to tetracycline. A majority of livestock-associated S. aureus isolates (n=169) were CC398 (68%) while 31% were CC9. No CC398 and one CC9 isolate was detected among scn-positive isolates.
As the NYT piece notes, eighty-six percent of this sample of hog farm workers carried the bacteria for at least 24 hours, compared with about one-third of the non-farm-worker population.

This is a problem because the resistant variety of S. aureus, MRSA, has made its way into hospitals and is responsible for thousands of deaths.  Further, many people believe that industrial farming is the cause of much of the antibiotic resistance that is now becoming such a problem, because animals are fed antibiotics to speed their growth, and many of those antibiotics are used to treat human diseases.  Indeed, the majority of the antibiotics used in the industrialized world are given to animals.  When bacteria on the farm become resistant to antibiotics, as this study shows, they don't necessarily stay on the farm.  How they spread has been difficult to document, but might include consumption of contaminated meat, and Nadimpalli et al. report another pathway.

Hog farm; Wikipedia
Responding to the increase in antibiotic resistance that many believe industrial farming to be responsible for, the US Food and Drug Administration this year put a voluntary ban on the use of antibiotics for growth promotion.  Critics saw this as a weak response to a very large problem, but pharmaceutical companies and some farmers say it will do what it is meant to do: reduce the use of antibiotics for non-medical purposes, and thus reduce the possible evolution of resistant bacteria that are harmful to humans.  Of course, one always has to ask the political question of who wields the power and influence over any sort of decision that may affect a particular industry.

But much of this is controversial. Is agricultural use of antibiotics in fact to blame for the problem, or is it overuse of antibiotics by the medical system?  Indeed, there's less of a problem in, say, Scandinavian countries where for decades physicians have prescribed antibiotics at a much lower rate than they have done in the US. Do resistant bacteria really spread in considerable numbers from farm to city?  This may be less controversial with the publication of the Nadimpalli et al. paper, but critics will say that the sample size was small and anyway, documenting a mechanism doesn't mean this is what has happened.

We all tend to pick and choose facts to support our convictions.  Indeed, if you look at how scientists, in any field, cling to their explanations, 'convictions' is perhaps a muted term for what is being clung to.  How we think about these questions may well reflect what we believe more generally about the food system, how or even whether animals should be farmed for meat, whether we patronize farmers' markets rather than buy industrially produced food, and so forth, rather than what we, or anyone, actually know about the causes of antibiotic resistance.  That is, our personal sociopolitical positions seem clearly to be correlated with, if not strongly influencing, our scientific positions.

Yesterday, an opinion piece by Iowa veterinarian and pig farmer Howard Hill appeared in our local paper, and in papers around the country.  Hill believes that farmers are being unfairly blamed for antibiotic resistance in humans.
...the claim that "70 to 80 percent of all antibiotics sold in the United States each year are used in livestock" is a straw man. More than a third of those drugs aren't used in human medicine, another third are not considered highly important to human medicine, and most of them aren't used for growth promotion. Critics also ignore the fact that there are a lot more cows, pigs and chickens than people. In 2011, for example, 30 million pounds of antibiotics were sold for use in more than 3 billion livestock and poultry, compared with 7 million pounds for 311 million people, meaning each person used nearly five times more antibiotics than were used in each food animal.
Is he making selective use of the data?  Yes, but isn't everyone who talks about this issue?  And does that make our assertions wrong?  Doesn't prior belief influence our understanding of what the data show?
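As a quick illustration of how framing matters, here is the straight per-head arithmetic implied by the totals Hill quotes.  This is a sketch only; it ignores body weight, lifespan, and drug class, any of which may be how the op-ed reaches its "nearly five times" figure:

```python
# Straight per-head arithmetic on the totals quoted in the op-ed:
# 2011 US sales of 30 million lb of antibiotics for ~3 billion food
# animals, vs 7 million lb for ~311 million people.  This ignores
# body weight, lifespan, and drug class, so it is only one of several
# possible ways to frame the same numbers.

lb_animal_total, n_animals = 30e6, 3e9
lb_human_total, n_humans = 7e6, 311e6

per_animal = lb_animal_total / n_animals   # lb per animal per year
per_person = lb_human_total / n_humans     # lb per person per year

print(f"per animal: {per_animal:.4f} lb")
print(f"per person: {per_person:.4f} lb")
print(f"person/animal ratio: {per_person / per_animal:.2f}")
```

The plain per-head ratio comes out around 2.3, not 5, so the op-ed is presumably weighting the comparison in some other way; which framing one prefers is exactly the kind of judgment at issue here.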

While Rome burns
President Obama yesterday issued an executive order aimed at combating antibiotic resistance.  The order accepts that industrial agriculture may have a role in increasing resistance, but it adds little to the FDA order of several months ago:
The Food and Drug Administration (FDA) in HHS, in coordination with the Department of Agriculture (USDA), shall continue taking steps to eliminate the use of medically important classes of antibiotics for growth promotion purposes in food-producing animals.
Not many teeth here.  Years ago Europe took much the same approach, requiring that the use of antibiotics for growth promotion be reduced, but a lot of antibiotic use was simply reclassified as medical, as many expected would happen in the US following the FDA announcement last December, which we blogged about here, and as many expect again with this Executive Order.

Again in Scandinavia, the use of antibiotics for growth promotion has been banned, beginning in Sweden in 1986, but farmers have not suffered.  According to a piece in the BCMJ in 2011:
In 1986, Sweden became the first country to regulate the withdrawal of antibiotics used in food animal production. By 2009, Swedish sales of antibiotics for use in agriculture were reduced from an average of 45 tons of active substance to 15 tons. Sweden was followed by Denmark, the United Kingdom, and the Netherlands. 
Danish swine and poultry production continued to flourish with gradual reductions of antibiotic use beginning in 1992 and continuing to 2008 (latest data). During this time, Danish farmers increased swine production by 47% while reducing antimicrobial use by 51%. As well, poultry production increased slightly while reducing antimicrobial use by 90%. Denmark remains one of the largest pork ex­porters in the world.
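A back-of-the-envelope calculation on the quoted Danish swine figures makes the point even more starkly: if production rose 47% while total antimicrobial use fell 51%, use per unit of production fell by about two-thirds.  A sketch of that arithmetic, using only the percentages quoted above:

```python
# Antibiotic use per unit of swine production implied by the quoted
# Danish figures (1992-2008): production up 47%, total use down 51%.

production_ratio = 1.47  # 2008 production relative to 1992
use_ratio = 0.49         # 2008 antimicrobial use relative to 1992

use_per_unit = use_ratio / production_ratio
print(f"use per unit of production: {use_per_unit:.2f} of the 1992 level")
print(f"reduction per unit of production: {1 - use_per_unit:.0%}")
```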
So, whether or not growth-promoting antibiotic use in animals is a major cause of resistance need not really be settled, and we needn't even continue to have the discussion.  If there is any chance it is, why not ban it entirely?  Experience in Scandinavia suggests there won't be dire economic consequences -- unless you're a pharmaceutical company making antibiotics for animals.

Faith in science
We have often written here about the economic interests that drive the course of Big Science.  Can we have faith in science if there is considerable vested interest in science?  People are, after all, only human, and people of all faiths, including science, defend their faiths.  Further, it's often impossible to disentangle belief from vested interest.  If you've got a hammer, or a hammer to sell, everything looks like a nail.

Thursday, September 18, 2014

Malaria control now may not foretell the future

There are hailstorms, landslides, droughts, malaria and...the State. These are inescapable evils: such there always have been and there always will be.
                     Carlo Levi, Christ Stopped at Eboli: The Story of a Year, 1945
Malaria was once endemic in southern Europe, the UK, and the Americas.  It was in Greece at least 4000 years ago, and reached the Americas with or shortly after Columbus, eventually becoming an 'inescapable evil' when the slave trade was at its height.  With mosquito control due to DDT, the cleaning up of standing water, and other measures, the disease was essentially eliminated in the US and Canada and some parts of South America by 1950.  But the factors that maintain malaria in a population are complex, and it's possible that with climate change, the global transfer of goods, and increasing immigration from endemic areas, this and other mosquito-borne diseases could return.

A recent episode of the BBC Radio 4 program, Costing the Earth, discussed the increasing spread of a number of mosquito species in the UK and throughout Europe.  In particular, the Asian tiger mosquito (Aedes albopictus), a voracious biter, has been spreading north from the Mediterranean for several decades.  It doesn't carry malaria, but it is a vector for other significant diseases including Yellow fever, Chikungunya, and nematodes that cause filariasis.  

Asian tiger mosquito; Wikipedia
The Costing the Earth episode pointed out that the mosquito is spreading for multiple reasons, perhaps a perfect storm of causation: temperatures are warming; sustained drought in the UK has meant that more and more people are collecting rain in buckets to water their gardens, and these standing pots of water have turned out to be a great reservoir for mosquito breeding; sustained wetland restoration is under way in the country; and tires shipped into the UK are frequently full of stagnant water, and thus of mosquitoes, so a number of species are entering the country by this route.  And so on.  But mosquitoes that carry and transmit malaria still live in areas where malaria is no longer endemic, including the UK and the US.  So it's not the absence of the vector that explains the absence of disease.

The dynamics of epidemics are well-understood, in mathematical, ecological and cultural detail (Aeon magazine has just published an excellent and accessible description of the mathematics of epidemics).  In the case of malaria, generally speaking, to maintain the disease in a population the population must be over a given size, there must be a reservoir of infected individuals for the mosquitoes to feed on, there must be a large enough mosquito population to transmit the disease to enough susceptible individuals, the fatality rate can't be so high that the parasite is quickly eliminated by death, environmental conditions must be favorable to the vector, and so on.  And, cultural factors must be favorable as well, so that, e.g., mosquitoes can find people to feed on or to infect at the right time of day or night.  That is, there are multiple factors required to maintain disease in a population, and often the infection can be halted by interfering with any one of them.
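The threshold idea behind all of this can be made concrete with the classic Ross-Macdonald expression for malaria's basic reproduction number, R0: transmission is sustained only when R0 > 1, and because the factors multiply, shrinking any single one can break the chain.  Here is a minimal sketch of that formula, with invented illustrative parameter values, not real data:

```python
from math import log

def ross_macdonald_r0(m, a, b, c, p, n, r):
    """Basic reproduction number in the classic Ross-Macdonald
    framework for malaria.

    m: mosquitoes per human
    a: human bites per mosquito per day
    b: mosquito-to-human transmission probability per infectious bite
    c: human-to-mosquito transmission probability per bite
    p: daily mosquito survival probability
    n: extrinsic incubation period in days
    r: human recovery rate per day
    """
    # a is squared because a mosquito must bite twice (once to acquire
    # the parasite, once to transmit it); p**n is the chance it survives
    # incubation; 1/-log(p) is its expected remaining lifespan in days.
    return (m * a**2 * b * c * p**n) / (r * -log(p))

# Illustrative values: plenty of vectors, but low daily survival makes
# p**n tiny, holding R0 below the epidemic threshold of 1.
r0 = ross_macdonald_r0(m=10, a=0.2, b=0.3, c=0.3, p=0.7, n=12, r=0.01)
print(f"R0 = {r0:.2f} -> {'sustained' if r0 > 1 else 'dies out'}")
```

Because the terms multiply, reducing any one factor (vector density m, biting rate a, or daily survival p, which enters both through pⁿ and the lifespan term) can push R0 below 1; raising p to 0.9 in the same sketch sends R0 well above 1.  That multiplicativity is the mathematical face of the point above: interfering with any single link can halt the infection.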

So, eliminating the vector, the mosquitoes that carry the malarial parasite, is one approach to eliminating malaria, and, like anything that breaks the chain of infection, vector control can successfully control the disease.  But it's not required.  Reducing the prevalence of the disease to a level at which the host/vector ratio is no longer sufficient to transmit the disease widely, as was done in the US and Europe in the 1940s, is another way.  So, we in non-endemic areas of the world live with, and are bitten by, potential vectors with no fear of infection with malaria, though now West Nile virus and Chikungunya are another story.  Indeed, this map shows the distribution of vectors or potential vectors around the globe.

            Global Distribution of Dominant or Potentially Important Malaria Vectors

From Kiszewski et al., 2004. American Journal of Tropical Medicine and Hygiene 70(5):486-498, via the CDC.
Thus, where malaria has been eliminated, continued control depends on there being too few infected individuals to sustain the infection.  Of course, this could change.  Southern Europe may be seeing the re-introduction of malaria, for example -- the disease has been spreading in Greece now for the first time in four decades.  In this case it can be blamed on the economic crisis and austerity, because the government has not been able to pay for mosquito spraying, but the spread of malaria can't be blamed on mosquitoes alone; they've got to have an increased number of infected people to feed on as well.  And it's not just malaria -- other heretofore neglected or unexpected tropical diseases are reaching the shores of the US and Europe, Chikungunya being one example.

We live in a dynamic, changing, interconnected world.  Malaria rates seem to have been declining in Africa and Asia, and research into better prevention and control is ongoing, and researchers can claim many successes.  But, like the proverbial fluttering of the butterfly wing that unpredictably causes climatic chaos far away, if innocuous acts like saving rain water in the UK can have widespread and unpredicted effects like increasing the spread of previously unknown mosquitoes, and thus potentially disease, it's hard to reliably predict that malaria will remain controlled in currently non-endemic areas.  This is not to be alarmist, but simply to point out that it's a bit hubristic to believe we can predict anything so complex, particularly when it requires predicting future environments.  We've said the same many times before about predicting complex disease from genes.

Carlo Levi (1902 - 1975) was an Italian painter, writer and physician.  Because of his political activism during the fascist era, he was exiled to a small southern town in Lucania, where he spent several years painting, writing and attending to the medical needs of the inhabitants there.  He wrote about this time in his book, Christ Stopped at Eboli, published in 1945.  It's a fascinating story; political, ethnographic, scientific (or quasi so).  Malaria was a fact of life in southern Italy at the time,  and Levi mentions it often in the book.  Including in this passage:
In this region malaria is a scourge of truly alarming proportions; it spares no one and when it is not properly cared for it can last a lifetime.  Productive capacity is lowered, the race is weakened, the savings of the poor are devoured; the result is a poverty so dismal and abject that it amounts to slavery without hope of emancipation.  Malaria arises from the impoverishment of the deforested clayey land, from neglected water, and inefficient tilling of the soil; in its turn it generates in a vicious circle the poverty of the peasants.  Public works on a large scale are necessary to uproot it.  The four main rivers of Lucania ... besides a host of lesser streams, should be dammed up; trees should be planted on the mountainsides; good doctors, hospitals, visiting nurses, medicines, and preventive measures should be made available to all.  Even improvements on a limited scale would have some effect...  But a general apathy prevails and the peasants continue to sicken and die.  
Levi may not have entirely understood the cause of malaria, but he clearly understood the vicious cycle of malaria and poverty, which he witnessed every day, all around him in exile.  As Dan Parker has written here on MT numerous times, economic development itself may be one of the best preventives we know.  But it may not always be enough.  It doesn't much matter which link in the chain of infection is broken; once repaired, we may have to figure out new ways to break the chain again.

Wednesday, September 17, 2014

Antibiotic resistance: Move the money (and control it)!

The BBC Radio 4 program Discovery had a two-part series (August 18 and 25th) on the real health danger that we face and the research challenge it presents.  No, not Big Data mapping of diabetes or cancer, or athletic ability or intelligence.  Instead, it was about an impending biomedical disaster, one that essentially trivializes much of what we are throwing away resources on: antibiotic resistance.

A crisis of antibiotic resistance seems imminent, or at least inevitable, both in terms of issues like the treatment of disease in hospital patients and in the control of spreadable diseases.  This doesn't seem too speculative.  Some strains of infectious bacteria are already resistant to multiple antibiotics, and these are often contracted by hospital patients who were there for some non-infectious reason, and some infections are not responding to antibiotics at all.

If we no longer have antibiotics, of course, the simplest infection can become life-threatening again; surgery, chemotherapy, kidney dialysis, even an ear infection will become risky, and infectious diseases will once again be the killers in the rich world that they once were.

The antibiotic Novamoxin; Wikimedia Commons

Pharmaceutical firms simply aren't investing in antibiotic development as needed.  Not surprisingly, the reasons are commercial: an antibiotic that becomes widely used may be profitable, but not nearly as much as anticancer agents, or recreational drugs like Viagra, or anti-balding cream.  And if an antibiotic is widely used, its price may be low, but resistance is sure to evolve.  If it's saved for the most dire cases, then sales will be low, the cost too high to bear, and there's not enough profit for the company.

The cost of development and testing and the limited duration of patent exclusiveness present additional issues.  So, nobody is investing heavily in antibiotic development, not even governments that don't have quite the greedy commercial motive for what they do.

The Ebola epidemic is another biomedical disaster that has caught international medical care unprepared.  This is a virus, but there is basically no known antiviral agent; one with some effectiveness seems to be in the works and there are some other stop-gap strategies, but nothing systematic.  But the problem, dangers, and challenges are analogous to the fight against pathogenic bacteria.  Indeed, lately there's been discussion of the possibility--or inevitability?--that Ebola will evolve an ability to be transmitted via the air rather than just physical contact with infected persons.  But of course this is a repeat of the story of SARS, MERS, and other emerging infectious diseases, and surely not the last.

So the question for potential investigators or those worried about the looming disasters becomes: where is the money to solve these problems going to come from?  The answer isn't wholly simple, but it isn't wholly top secret either.

Move the money!
Developed countries are spending countless coinage on the many chronic, often late-onset diseases that have fed the research establishment for the past generation or so.  These are the playgrounds of genomics and other 'omics approaches, and more and more resources are being claimed by advocates of huge, long-term, exhaustive, enumerative 'Big Data' projects--projects that will be costly, hard to stop, and almost certainly will have low cost-benefit or diminishing returns.

We already know this basic truth from experience.  Worse, in our view, many or even most of these disorders have experienced major secular trends in recent decades, showing that they are due to environmental and behavioral patterns and exposures, not inherent genetic or related 'omic ones. They do not demand costly technical research.  Changing behavior or exposures is a real challenge but has been solved in various important instances, including iodized salt, fluoridated water, the campaign against smoking, urban pollution, seat belts/air bags, cycle helmets and much else.  It doesn't require fancy laboratories.

Unfortunately, if we continue to let the monied interests drive the health care system, we may not get antibiotic development.  The profit motive, evil enough in itself, isn't enough apparently, and some of the reasons are even reasonable.  So we have to do it as a society, for societal good rather than profit. If funds are tight and we can't have everything, then we should put the funds we have where the major risks are, and that is not in late-onset, avoidable or deferrable diseases.

Let's not forget that the reason we have those diseases is that we have enjoyed a century or so of hygiene, antibiotics, and vaccines.  The 'old man's friend', pneumonia, and things like it were kept at bay (here, at least; the developing world didn't get the kind of attention we pay to ourselves).  But if we dawdle because we're so enamored of high-tech glamour and the sales pitches of the university community (and the for-profit pharmas), and because of our perfectly natural fear of the complex degenerative diseases, we could be making a devil's bargain.

Instead, what we need to do is move the funds from those diseases to the major, globally connected problem of infectious diseases (and similar problems combating evolving pests and infections that affect our food crops and animals as well).  We need a major shift in our investment.  Indeed, quite unlike the current Big Data approach, combatting infectious diseases actually has a potentially quick, identifiable, major payoff.  Some actual bang for the buck. We'll need somehow to circumvent the profit and short-term greed side of things as well.  Of course, shutting down some labs will cost jobs and have other economic repercussions; but the shift will open others, so the argument of job-protection is a vacuous one.

"What?!" some might say, "Move funds from my nice shiny omics Big Data lab to work on problems in poverty-stricken places where only missionaries want to go?" Well, no, not even that.  If plagues return, it won't matter who you are or where you live, or if you have or might get cancer or diabetes or dementia when you get old, or if you've got engineered designer genes for being a scientist or athlete.

The battle to control infectious diseases is one for the ages--all ages of people.  It perhaps is urgent.  It requires resources to win, if 'winning' is even possible.  But we have the resources and we know where they are.

Tuesday, September 16, 2014

Akenfield, and lessons for now-age sustainability movements?

In the 1960s I was stationed as an Air Force weather officer in eastern England (Suffolk, or East Anglia).  I had my off-base lodgings in the intellectual town of Aldeburgh, on the shingle-beach of the North Sea coast.  Aldeburgh is a North Sea fishing town, but more notably the home of the distinguished composer Benjamin Britten, and was a long-time or passing-through place of many notable artists, writers and scholars in the early 20th century.  But Aldeburgh is something of an exception: East Anglia is basically a kind of wetlands rural agricultural area--scenic if you are just passing through, but a place of farming business if you live there.

Aldeburgh village and beach (Wikipedia)

In 1969 the author Ronald Blythe published a book, Akenfield: Portrait of an English Village, of reminiscences of Suffolk folk of various ages and professions.  That was when I was living there, but I didn't learn of the book until recently.  Akenfield is a fictitious name for a village, but the book's stories, told by the locals, are real.  This book is an evocative one, capturing the mood--and change--of an English village's way of life, as seen by people of all ages and occupations.  In your mind's eye, you can hear the birds and the livestock call, and see the farmers, shepherds, smiths and so on plying their trades.

Those familiar with Wendell Berry's work about American farm life, or Aldo Leopold's work on Nature and the American landscape, largely about the Midwest a half-century or more ago, will find Akenfield to have a similar mix of nostalgia by the old-timers, commitment by forward-looking younger people, deep love and dedication for the land, yet recognition of the harsh realities of the onset of industrial farming and the leaving of the land by the young, who headed for better-paying jobs in urban trades and factories.

Suffolk farm by Edward Seago, 1930s (Wiki images)
Tractors replaced horse-drawn plows, and many farm laborers were replaced by machines.  Posh landowners have been replaced by more business-like owners.  Produce and livestock are processed through the landscape on a rapid, no-nonsense (and generally unsentimental) scale, unlike the mixed, small-scale, less commercial farming that had gone before.  At the time, the villagers largely located themselves in relation to the two World Wars that had so affected England: their roles in the wars, rationing and hardship, and so on.

That was then....and still is, now
By the 1960s, large-scale business-farming had taken root.  Many of the issues discussed by the Akenfielders would sound the same today:  animals being treated in what for humans or pets would be considered horridly inhumane ways, people being driven off the land by machinery, generalists or money people replacing skilled craftsmen, the new rough treatment of the land compared to the mixed-crop, smaller-scale farming of the past.  Chicken and hog farms had already become jails for their inhabitants, who may never see the light of day in their short, measured lifespans.

1969 was nearly 50 years ago!  In Akenfield in the '60s there were a few who clung to the older ways, who loved the land and refused to leave it, whose needs were simple and commitment great.  This was not for political reasons, but for local traditional ones.  I can't say much about how things may have changed in East Anglia since then, except that my last (also nostalgic) trip through there to Aldeburgh was in 2006, and the hog lots one passed were large.  No rustic slow-paced life!

These musings strike me as relevant to much that is happening today.  Industrial, now genomic-driven agriculture is dominating and many will say devastating not only the nature of agricultural life but also the land itself.  Soil is being lost, monocropping risks major pest devastation, and large farms have become huge impersonal businesses.  And of course livestock practices are every bit as inhumane as ever they may have been.  Of course the argument now, as then, is that more is being produced to feed more people (and there are now a lot more to feed in the world).

At the same time, some are trying to raise the alarm about what may happen if this continues.  Under the banner of 'sustainability', people are attempting to organize resistance to the Monsantification of the land, as one might put it.  There are small farmers who sell humanely raised, local, often organic, small-scale farm products.  There are those trying to use the land in a long-term sustainable way.

Is it pushing analogy too far to liken these scattered and often struggling movements to those who held on to traditional life a half-century ago?  They passed from the scene (as did communes, 'small is beautiful', and other such movements of the '70s protest era). Will the current movements flourish, or are they like the trades of old, destined to pass into history?  If they do, will the industrial model sustain life, or destroy it?

Friday, September 12, 2014

It's not genetic after all! (but who's listening?)

Is it time to predict the gradual out-cycling of a focus on genetic causation and a return of environmental causation, in our mainstream scientific dialog (and funding-pot)?  Such a recycling is bound to happen--even if, say, genetics' claims were correct.  Why?  Because the new generation has to have something to show they're smarter than their elders, and because the abuse of genetic determinism by society is a nearly inevitable consequence of the fervid love-affair we're now having with genomics and its glittering technology.  But maybe there's another reason:  maybe genetics really has been oversold!  Is it possible?

Bees and societal (in)determination
Honey bee harvesting is a social phenomenon, and experiments by various authors have found that only a fraction (in some studies, 20%) of the workers actually do most of the work.  But a recent controlled study reported in the journal Animal Behaviour by Tenczar et al. (vol. 95, pp. 41-48, 2014, but paywalled) found that if those 'busy bees' are removed, others step in to fill the work gap.  The gist of the evidence seems to be that among the gatherer work force (and presumably other castes as well, though that's not reported), there is a spectrum of contribution, and it's condition- or context-dependent.  As the paper says:
These bees resembled elite workers reported in a number of other species. However, our results also show that honeybee foraging activity level is flexibly adjusted during a bee's lifetime, suggesting that in honeybees, elitism does not involve a distinct subcaste of foragers but rather stems from an extreme of a range of individual activity levels that are continuously adjusted and may be influenced by environmental cues.  . . . these results support the view that individual workers continuously adjust their activity level to ensure that the colony's nutritional needs are being adequately and efficiently met, and that the net activity of the whole foraging population is likely to be one of the factors that influences this decision. 
The authors discuss the fact that these patterns have been studied, with varying levels of rigor, in many species of social insects.  While it is not clear that genetic differences are never partly responsible, the evidence is that social roles are not rigidly pre-programmed.  This study was presented by the NYTimes with a captivating video from the authors, but while that was nice and led us to the research story itself, the Times characterized this as a system allowing upward social mobility.  That's a bit of pandering to middle-class readership, and it didn't really critique this work in the context of today's prevailing genetic-deterministic viewpoint. However, the idea of context-dependent roles, based on the needs and opportunities in society at large, is noteworthy and of course is something that also happens in humans.

Honeybee; Wikimedia Commons

This of course raises the question of how the bees perceive the colony's needs or the different roles--or, if the role pattern is a spectrum of activity in each bee, how each bee knows when and what to do.  This relates to the bees' brains' ability to digest quite complex information and make decisions, something very interesting to try to understand, and something we wrote about here not long ago.

Genes for IQ?
A new paper in PNAS reports the results of a large study of the genetics of IQ.  Essentially, the authors found three genes with very small effects and unknown functional association with cognition.  Indeed, one of the genes may not even be a gene. To sort this all out, of course, they say they would need a sample of a million people.  One of the authors, faced with this mountain of chaff, is quoted this way in the story:
Benjamin says that he and his colleagues knew from the outset that their efforts might come up empty handed. But the discovery that traits such as intelligence are influenced by many genes, each having a very small effect, should help to guide future studies and also temper expectations of what they will deliver. “We haven’t found nothing,” he says.
Nice try!  But the truth is that that is just what they have found: nothing.  Or, at least, nothing new; no thing.  We knew very well that this was the most likely sort of finding.  We have countless precedents, including the results of countless earlier searches for genes for intelligence (and, for that matter, similar findings for most psychological/behavioral traits).  For trait after trait, from normal ones like stature and IQ to body weight and major diseases of all sorts, we find polygenic control--countless contributing genetic factors with individually minimal effects. This even though the heritability of the trait is usually substantial, meaning that variation in genes together accounts for a non-trivial fraction of the overall variation in the trait (the environment and other factors contribute the rest, usually around 60-70%).  

But heritability is a persistently subtle and misunderstood (or ignored) measure. Even with nontrivial overall heritability, the aggregate nature of the measure means we cannot say in any given individual whether his/her IQ is based on this or that particular gene, or is some specifiable percent due to genes (a notion that is itself difficult to make sense of when referring to an individual).  And heritability is often measured after taking out, or controlling for, the major real causal factors, such as age and sex.  Arguing for a sample of a million, if allowed and funded, is a huge fool's errand and a corrupt way to spend money (because it's mainly to keep professors off the street of unemployment).
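For readers who want the bookkeeping behind these points, the standard quantitative-genetics definition (this is textbook notation, not anything from the studies discussed here) partitions the variance of a trait across a population:

```latex
% Total phenotypic variance split into genetic and environmental components
V_P = V_G + V_E

% The genetic component itself decomposes further
% (additive, dominance, and interaction/epistatic variance)
V_G = V_A + V_D + V_I

% Broad-sense heritability: share of variance from all genetic sources
H^2 = \frac{V_G}{V_P}

% Narrow-sense heritability: additive genetic variance only,
% the quantity most mapping studies implicitly target
h^2 = \frac{V_A}{V_P}
```

Note that these are ratios of variances estimated across a population.  A heritability of, say, 0.3-0.4 (consistent with 60-70% of variation coming from environment and other factors) says nothing about how much of any one person's trait value is "due to genes"--which is exactly the misreading discussed above.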

Yet the issues in these cases are subtle, because we also know of many different individual genes that, when seriously mutated, cause direct, major, usually congenital damage to traits like intelligence.  Yet few if any of these genes show up in these mega-mapping studies.  It is this sort of landscape of elusive complexity that we need to address, rather than just building expensive Big Data resources that will largely be obsolete before the DNA sequence is even analyzed, based on the daydream that we are not, knowingly, chasing rainbows.

The primary question one thinks to ask is whether 'intelligence' is a biologically meaningful trait.  If not, even if it can be measured and be affected by genes, it isn't really an 'it' and one can't be surprised that no strong genetic influences are found even if the measure is stable and heritable.  Asking about the genetic basis of intelligence under such circumstances is not asking a well-posed question.

Baby stories
The other day we posted about the recent Science issue on non-genetic influences on parenting: environmental effects on traits, how long-term and subtle they can be, and how they are not Genetic in the sense of the G-rush we are currently experiencing.  The stories are many and diverse, and they tell the same tale.  

Here the fascinating question is how the various environmental factors could influence a fetus in factor-specific ways that even relate to the factor itself (e.g., maternal diet affecting the future baby's obesity level, or the mother eating garlic or being exposed to odors affecting taste preference or specific odor-related behavior in the child).  To answer such questions we have to know more than just about a gene or two.

So, why aren't these findings grabbing headlines?
The bee story made the front-page of the NYTimes, but mainly because of the video and not because it is a counter to the strong genomic hard-wiring ethos so often promoted by scientists these days.  Likewise, the baby influences made the cover of Science, but we didn't see a Hot-News blare announcing that genetics isn't, after all, everything.  And of course the IQ story didn't make that clear either, given that the author said he wanted studies of a million to find the real genetic causes of IQ.  And, determinists say this isn't going to change their mind about the genetics of intelligence, because it's definitely genetic.  

Will we, or when will we, see people begin to back off their claims of strong genetic determinism, and begin addressing the really hard questions concerning how complex genomes interact with complex environments to produce what we are clearly observing?  In my opinion, these questions cannot be addressed from a genetic, or from an environmental, or from a simple Gene + Environment point of view.

Wednesday, September 10, 2014

The Turner Oak effect: unexpected explanations

We are just catching up on a backlog of reading after three weeks away, which means that while the immediate subject of this post may be a bit dated, the topic certainly is not. The August 15 special issue of Science, which we're just now reading, is so fascinating that we can't let it go unremarked.  The issue, called "Parenting: A legacy that transcends genes," provides example after example of the effects of environmental factors on development, taste preferences, the way the brain works, disease risk, and many other aspects of life.  We can't of course evaluate the reliability of all of these results, but the evidence does seem to be pointing strongly in the direction of a mix of genes and environment in explaining the effects of parenting on growth and development.

We don't know that mounting such a strong challenge to the idea that genes are predominantly what make us who we are was the editors' intention, but the subtitle suggests that, and in our view, that's certainly what they have done.  Indeed, we can't help noting that this is an unintended but eloquent counterpoint to Nicholas Wade's view of life, in which everything including the kitchen sink is genetic (or at least we assume he'd say this, since sinks are designed by Eurasians who are because of natural selection genetically of superior inventiveness).

Cover of Science, Aug 15, 2014
Given the papers in this special issue, it's clear that more and more is being learned about how extra-genetic factors affect growth and development. What the mother eats in the days around conception, uterine conditions before conception, conditions during development, components of breast milk, ways of parenting and so forth all apparently affect the growth, development and health of a child.  In vitro fertilization may have life-long effects including risk of disease, starvation during pregnancy may affect risk of disease in offspring, what a mother eats while she's pregnant can influence her child's taste for specific foods, lack of parental care during infancy and early childhood can have lifelong effects, maternal mental illness may affect the development of the fetal brain, and so on.

Lane et al. write about "Parenting from before conception".  Infant health, they write, seems to be particularly influenced by conditions during 'fertilization and the first zygotic divisions, [when] the embryo is sensitive to signals from the mother's reproductive tract.'
The oviductal fluid surrounding the embryo varies according to maternal nutritional, metabolic, and inflammatory parameters, providing a microcosm that reflects the outside world. In responding to these environmental cues, the embryo exerts a high degree of developmental plasticity and can, within a discrete range, modulate its metabolism, gene expression, and rate of cell division. In this way, the maternal tract and the embryo collaborate to generate a developmental trajectory adapted to suit the anticipated external environment, to maximize survival and fitness of the organism. But if the resulting phenotype is a poor match for conditions after birth, or if adaptation constrains capacity to withstand later challenges, offspring are at risk.
Maternal diet at conception has a major impact on the developmental program. Reduced protein content for just the first 3 days of embryogenesis retards cell proliferation and skews the balance of cell lineage differentiation in the blastocyst.  The effect of nutritional disturbance at conception persists through implantation and influences placental development and nutrient transfer capacity, then after birth, the neonate gains weight more rapidly, developing higher systolic blood pressure and elevated anxiety.
Some of the effect is epigenetic, that is, modifications to the DNA structure that affect gene expression.  And some of the effect is, Lane et al. write, on oocyte mitochondria.  These organelles, "powerhouses of the cell", support blastocyst formation.  Their location and activity levels are known to respond to the mother's nutritional status, and ultimately affect the health of the child, as well as affecting gene expression in the brain, among other things.  Epigenetic effects on sperm, influenced by environmental conditions, also can affect the developing embryo.  But it's the "epi" in epigenetic that tells the tale: it's not the genetic (DNA sequence) variants that cause the trait difference, but variation in the use of the same sequence.

Many of the essays in this issue use the word 'plasticity', meaning that developing embryos are able to respond to various and varying environmental conditions.  If conditions are too extreme, of course, the embryo can't survive, but in general, how an embryo responds to immediate conditions may have lifelong effects.  From the review by Rilling and Young ("The biology of mammalian parenting and its effect on offspring social development"):
Parenting... shapes the neural development of the infant social brain. Recent work suggests that many of the principles governing parental behavior and its effect on infant development are conserved from rodent to humans.
That parenting has a strong effect on the infant's physiology, and that the effects of parent/child interactions have evolved to be strong, is not a surprise, of course, given that parenting in mammals is essential for the survival of the offspring.  And plasticity, or adaptability, is a fundamental principle of life.  We have referred to this as 'facultativeness' in the past.  Organisms that are able to adapt to changing environments -- within survivable limits -- are much better equipped to survive and evolve.  Indeed, the final piece in this special section on parenting is titled "The evolution of flexible parenting."  Parenting behaviors among many species are well-documented to respond to environmental changes.  Put another way, it is plasticity, not genomic hard-wiring, that is most adaptive in these ways.

So, with all these examples of the interdigitation of nature and nurture, can we declare the death of genetic determinism?  Well, no.  Genetic determinism is alive and well, thanks in large part to Mendel and the resulting expectation that there are genes for traits that are out there to be found.  But in many ways, we've become prisoners of Mendel -- while many genes have been found to be associated with disease, we know very well that most traits are polygenic, and/or due to gene-environment interaction, and we've known this for a century.  So the idea that the effect of parenting might transcend genes shouldn't be surprising.  And the idea that there might be factors that we haven't predicted that affect traits such as diseases or how brains work shouldn't be surprising, either.

The BBC recently aired an excellent 25-part program called "Plants: From Roots to Riches" about the history of Kew Gardens, and because the gardens have been so central to botany for so long, about the history of botany in general.  The series is still accessible online, and well worth a listen.  I bring this up because a story told on one of the episodes struck me as a very apropos lesson about causation.  A "Great Storm" hit the UK in 1987.  This was a hurricane that did tremendous damage, including killing millions of trees, 700 at Kew alone.

Before the storm, arborists had been concerned about a 200-year-old tree at the Gardens, the Turner Oak.  It was clearly not well; leaves were stunted and growth was slow, but it wasn't clear what was wrong with it.  During the storm, the tree was uprooted completely and tossed into the air, but as luck would have it, it came back to earth right in the hole its exodus had created.  The arborists decided it didn't need as much attention as many other trees in the gardens after the storm, though, so they left it until they were finished tending to the others.  That was three years later, at which time they discovered that the tree was thriving, growing again, and looking healthier than it had in decades.

Quercus x turneri at Kew Gardens; Royal Botanic Gardens

Why?  The arborists eventually realized that all the foot traffic at the Gardens had compacted the soil to the extent that the roots, and thus the tree, were suffering.  It turns out that the soil around a tree must be aerated if the tree is to thrive.

I love this serendipitous discovery.  A tree was ailing, no one knew why, until an unexpected event uncovered the explanation, and it turned out to be something that no one had thought to consider.  Many of the discoveries reported in the August 15 issue of Science strike me as of the same ilk.  Scientists have been looking for genes 'for' diabetes, taste, mental illness, obesity, and so on for decades now, and the explanation for these conditions may be instead events that happen even before conception, where it never occurred to anyone to look before.

There are numerous other examples; a few years ago it was reported that age at death (for late-life, not infant, mortality) is affected by the month in which someone is born.  The authors, for some reason, did not follow up this potentially very important finding.  Maybe the effect is due to seasonal foods consumed by the mother during what turn out to be the riskier months of conception--if so, there should be lifelong evidence, if we but looked for it, of accelerated disease prodromes like obesity, hypertension, and the like.

Perhaps the Turner Oak effect should be a thing -- it might encourage investigators to explicitly look for the unexpected.  What causes asthma? Could it be disposable diapers?  Who knows?  Broccoli has never been blamed for anything -- maybe it's time for broccoli to be implicated in some disease.  The problem is that we don't think to look because we all 'know' that broccoli is good for us.

Some ideas are kooky, but when it turns out that some kooky ideas really do seem to explain cause and effect, it means we shouldn't always be looking in the same place for our answers (the drunk under the lamppost phenomenon).  The cause and effect relationships described in the parenting issue of Science involve some unexpected environmental effects on gene expression -- epigenetic effects of various kinds -- and plasticity, meaning that cross-talk between genes and environment creates a give-and-take that can't be called genes or environment alone.  We don't know that these are final answers, but we know that we should expand our range of expected possibilities.

Perhaps the Turner Oak effect should guide more of our thinking in science.