Thursday, January 31, 2013

URGENT! (a note on bourgeois science rhetoric)

The other day we saw reportage of a story by some anthropologists that showed how deeply, and subtly the current science culture has become a lobbying and advertising culture, even as it tries, genuinely, to answer questions about Nature.

Barefoot Pheidippides along the Marathon Road
This particular story was an investigation of some African peoples that was done to support the authors' idea that running barefoot, rather than Nike-shod, is good for us because that's how we evolved (barefoot, not Nike-shod, nor Avia-shod either!).

We're not commenting on the merits of the story, because we have no reason to doubt the authors' analysis or data.  They are capable people and the question, though not earth-shaking, is a legitimate subject for looking at the effects of our evolutionary past.

No, what got our attention in regard to MT commentary was something the authors said to the reporter at the end of the interview.  Rather than claiming that with this research we now understand the evolution of human running, they pointed to inconsistencies or incompleteness in the data, and said that this shows how much we still 'need' to learn.

Need?

They didn't say 'how much we do not yet know' or how difficult it is to answer such questions relating to the past.  No, we need to know more.

This is the seepage of subtle, relentless lobbying into the fabric of contemporary science.  We have absolutely no 'need' for this knowledge.  The word also tends to suggest that what they have learned so far is of some vital importance to humankind, rather than simply what it should be viewed as: a perfectly interesting question about human function and evolution.  But it's pretty far down the need list.

These authors are not particularly blameworthy for slipping this kind of rhetoric in when describing their work.  It is routine, indeed part of the tactics, of scientists these days.  Most papers, and nearly every interview with the media, stress how we 'need' to do more research, and these days that clearly means larger, longer studies.  The genetic basis of disease X, no matter what it is or how rare or how serious, is treated as if we urgently 'need' to work out what we don't yet know.

Saying that it would be helpful to know more, that there is always more that can be learned, or that we'd like to continue working on this project are perfectly legitimate views.  It's up to society to decide which cases are pressing enough to be given the funds.

But to claim that understanding cancer, diabetes, or what makes someone a good athlete are in any sense 'urgent' is to play a game of self-importance.  Certainly it might be good if we could eliminate, say, diabetes or cancer and, for those of us alive and with the years ticking, the sooner the better for us.  But for society?  There are many more short-term problems that in the same sense could be called 'urgent', such as world hunger and many traits that truly are genetic, rather than the ones grabbing the bulk of life-science funds these days.  Of course we rant about this regularly.

And there is the inevitable fact that if we did actually use genomics to eliminate cancer and diabetes, as those of us who pushed for it because it was urgent (for us personally) would like, some other, later, longer, more lingering diseases of even-older age would replace them.

Meanwhile, think of the impact on quality of life of so many children here and worldwide because of the way we use our resources.  And perhaps we might consider--just consider, mind you!--moving funds to energy or climate or ecological or infectious disease or social-epidemiological efforts, to do what we can about problems that we perhaps more sensibly really do need to address.

Wednesday, January 30, 2013

Here's one on the top of the Disgusting charts

If this story is true, it is to us as disgusting and inexcusable a story as one can imagine.  As reported in the LA Times, the minister of health in Israel has just revised guidelines for the use of the birth control drug Depo-Provera, now requiring better informed consent when it is prescribed.
The ministry's new policy comes in response to a controversy exposed last month by local investigative journalist Gal Gabbay, who reported that Jewish Ethiopian women awaiting emigration to Israel in transit camps in Ethiopia were coaxed into the treatment with little medical explanation and led to understand this was a condition for moving to Israel.
Many women continued to receive the drug after moving to Israel, although they were often not informed of its effects. 

Whether or not this was officially sanctioned by the Israeli government is not yet known, but the high number of women given Depo-Provera, and the fact that, again according to the LA Times, 60% of the women given the treatment were Ethiopian, and most of the rest women in detention, suggests that it was at least someone's official policy.  The reason being suggested is that if Israel didn't control births among this immigrant group, Ethiopians would pollute the true Jewish gene pool.  After the story hit the news, or, as the LA Times put it, "rocked by scandal," the government revised its guidelines.  Which suggests that there are at least some decent people in the Israeli government.  And it raises the question of how anyone in that population could possibly think to do this sort of thing without crystal clear agreement on the part of the recipients (victims?)--much less justify it in his/her own mind.

This is so completely a replay, in reverse, of what Hitler visited on the Jews to preserve the pure Aryan race, as to be totally jaw-dropping.  If even Israel's European Jews can't escape the trap of vicious racism, and justify a recurrence of eugenics, then what hope is there for anybody!

We have written about our concerns, and have often been asked what we think, about the potential for genomic data to be used in ways that might revisit the eugenics era, in which privileged scientists and health professionals used a Darwin-rationalized belief in people's inherited worth to decide who deserved to have life.  Judgments by the elite on who's worthy and who isn't are a commonplace in human history, and genetic and evolutionary justifications are just some of the more recent excuses for exploitation and elitism.  This shows the need for vigilance, and we shouldn't overlook the possibility that real good may come of genomics if there are strong laws to prevent abuse.  But many just conveniently dismiss the idea that abuse will be commonplace, arguing that we should be allowed to do whatever of this kind of research we want, because we would never repeat the sins of our recent forebears.....

The Israeli story isn't the only recent one.  A few years ago the Brits were suggesting the use of genetics to determine who should be allowed to immigrate (based on detection of some countries of origin).  Even if it was stopped before being implemented, the fact that it was seriously considered was concerning.  But to hear that the Israelis, of all people, would reinvent the involuntary sterilization of the unworthy, shows that raising the issue of a new era of eugenics is not just being Chicken Little and making up issues where they really don't exist.

To us it is important to have real, strict, enforced laws on what science is allowed to investigate.  Yes, it's censorship!  But we already prohibit all sorts of research on humans, and even, to a lesser extent, on animals.  We are not allowed to cry Fire! in a theater unless there is a fire.  Slander is a kind of speech that isn't free.  We couldn't, for example, put study subjects in a tank of ice water to see how long they survived, the kind of thing that was done by the Nazis.  Millions of people who died in the Holocaust--many of them the Jewish ancestors or relatives of today's Israelis--would have had a chance at life had there been such restrictions in the first half of the 20th century.

Is there any place for sympathy?
We've been unapologetically critical of this decision to protect the Chosen race (should we call them, say, neo-Aryans?).  There is consistent historical precedent for people to develop tribal solidarity, and hence to be suspicious and wary of incoming people who are not members of the tribe.  That may be  only natural: tribal identity is a way of building cooperation and trust, order and solidarity.  Outsiders can pose various levels of unknown threat.  That's why immigrants are so widely distrusted and opposed today and in the past, and why ethnic warfare is not a new invention.  Nor is elitism based on  inherent worth--it's in Plato and other Greek classical authors, and the earliest formal writing on such topics.

From this point of view, it is wholly understandable that, now that it's their turn in power, the European-derived Israeli power structure would feel threatened not just by Palestinian Muslims but even by those 'other' kinds of Jews that may not look or act like proper (i.e., clean-cut, educated European) Jews, that is, the ones in power.  It is also quite understandable--which is not the same as excusable--that those in power would want to pretend to welcome all members of their religious tribe, but not really want to be overwhelmed (i.e., out-reproduced) by them.

The lesson is perhaps an anthropological one: that no group, even those who were the victims within living memory of as  evil a genocide as was ever perpetrated, can be trusted to behave differently when they are in power.  The bottom line is that we have to be strong and vigilant in opposing this kind of action.

Unfortunately, this has implications for immigration policies worldwide.  The reality must be addressed that people do tend to oppose immigration.  Policy to incorporate or limit foreigners has to be based on how the locals feel at any given time.  Every population is capable of acting in a seriously discriminatory way.  There are no easy answers, but the lesson of the eugenics era, and its reported echo even where it should be most eschewed with horror, is one all of us should learn.  And we should be careful about basing our decisions on Darwin or genetics.

Tuesday, January 29, 2013

Scientific fraud and why it happens--or how could it NOT?

A piece by Jennifer Couzin-Frankel, "Shaking Up Science," in last week's Science hits hard. Couzin-Frankel writes of two journal editors and their quest to understand cheating in science, and the pressures that have made it so widespread.  From our point of view, they've gotten it just about right.

Ferric Fang and Arturo Casadevall, immunologists and microbiologists, are editor and past editor of the journal Infection and Immunity.  They were just doing their jobs until, 5 years ago or so, it occurred to them to wonder what was going wrong with science, that people weren't loving it like they once had.
Discovery for its own sake was being sidelined by a push to publish in high-impact journals. Funding was scarcer than ever. Scientists focused on narrow fields and often couldn't communicate their professional passions at a cocktail party.
They began to wonder what they could change, to bring back the passion.  So, they started writing opinion pieces on the subject, people started reading them, and it took off.  To date, they've written 14 and are still going.

They first wrote about the pressure to get grants, and problems with peer review and so on, but one day they, as editors, received a letter telling them that one of the papers they'd published was fraudulent.  They broadened their interest, then, to include how grant pressures might lead to fraud, and they've written a lot about that since.

They wondered whether the prestige of a journal might encourage more fraud, or at least cutting of corners, because high-impact journals are better for careers and so scientists might be more willing to compromise their ethics to publish in high-impact places.  They in fact discovered a robust correlation between prestige and number of papers retracted for fraud.  This didn't surprise them.  As they said in their paper about this in Infection and Immunity in 2011:
Articles may be retracted when their findings are no longer considered trustworthy due to scientific misconduct or error, they plagiarize previously published work, or are found to violate ethical guidelines. Using a novel measure that we call the “retraction index,” we found that the frequency of retraction varies among journals and shows a strong correlation with the journal impact factor. Although retractions are relatively rare, the retraction process is essential for correcting the literature and maintaining trust in the scientific process.
They published a paper in PNAS last October about retractions*, and why papers are pulled.  Retractions are still rare--about 1 in every 10,000 papers is retracted--but they are becoming more frequent; whether this is because fraud is increasing or because detection is more efficient is not something they can answer, although they suspect the former.  They reported that 65% of retractions are due to misconduct -- plagiarism, falsification, duplicate publication and the like -- and only 21% due to error.
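For the curious, here's roughly what a 'retraction index' amounts to in practice.  This is a toy sketch in Python, with invented journal names and counts rather than their data; the index used here is simply retractions per 1,000 published articles, which can then be compared with impact factor.

```python
# Toy illustration of a retraction index vs. impact factor (all numbers invented).
from scipy.stats import spearmanr

journals = {
    # name: (articles published, retractions, impact factor) -- hypothetical
    "Journal A": (12000, 1, 3.2),
    "Journal B": (8000, 3, 9.1),
    "Journal C": (5000, 6, 30.0),
    "Journal D": (20000, 2, 4.5),
    "Journal E": (3000, 4, 35.5),
}

# retractions per 1,000 published articles
retraction_index = {
    name: 1000.0 * retracted / published
    for name, (published, retracted, _) in journals.items()
}

impact = [journals[name][2] for name in journals]
index = [retraction_index[name] for name in journals]
rho, p = spearmanr(impact, index)
print(retraction_index)
print(f"Spearman correlation between impact factor and retraction index: {rho:.2f} (p = {p:.2f})")
```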

Fang and Casadevall have thought a lot about why misconduct happens, and why it seems to be increasing, and they largely put it down to flaws in the system, not bad people.  Many departments, for example, will only interview candidates who have been first authors on Science or Nature papers,  meaning, as they put it, that the editors of these journals are making all manner of hiring and career decisions.  And of course the pressure to get grants is widespread and pernicious, largely because universities depend so heavily on overhead from grants.

Fig. 1: Gender distribution of scientists committing misconduct.  The percentage of scientists sanctioned by the U.S. Office of Research Integrity who are male, stratified by rank, compared with the percentage of males in the overall United States scientific workforce (error bars show standard deviations; blue and green bars are from NSF data, 1999–2006).  Source: Fang, Bennett and Casadevall, mBio, 2013
Fang and Casadevall's most recent publication is online in the journal that Dr Casadevall now edits, mBio.  Here, they document, among other things, the gender difference in the proportion of scientists who commit fraud.  Two thirds are male, and they do it at all ages and stages of their careers, which means, as they say, that it's not only when scientists are in training that ethical issues need to be addressed, but throughout scientific careers.  And, it's important, as they say, because:
As many of humanity’s greatest problems require scientific solutions, it is critical for the scientific enterprise to function optimally. Misconduct threatens the scientific enterprise by undermining trust in the validity of scientific findings.
It also slows down progress and costs a lot of money.  Fraudulent results, because they aren't replicable by other labs, are unlikely to form the foundation of future work, but it takes time and money for others to pursue a fraudulent lead and fail.  And it's usually taxpayer money, not to mention valuable career-building time.

How could we not?
Given the System that we, especially senior investigators like ourselves, have acquiesced to or even encouraged, step by step, these findings are no wonder at all.  In a middle-class environment, where ordinary people, not just the idle rich, can do science, where we have to do it in universities rather than our basements, where it is costly, and where we have to persuade the public to pay for it (and our salaries), it would be astounding if we didn't see what we see!

Further, given that all of this puts institutions into the position of needing to insist on imposing the rat-race on its employees, while trying to be 'objective' in evaluating faculty performance by using score-counting rather than judging real creativity (hard to do once organizations get too big, and always hard to avoid insider power structures), and when scaling up brings in more money than having fewer people working at a slower pace, on more focused problems, again we get what we have agreed to over the last few decades.  We agreed to it because when it started a half-century and more ago, there was far less demand on the available supply of resources, and we believed as a nation that government-supported science would help us beat the USSR in the cold war, solve epidemiological problems, and so on.  At the time, something like 50% of grant applications were funded.  The system that developed served all our interests.  We remember this directly from our own experience.

But as the system grew into a System, it put Malthusian pressures on everyone, flooding the market with more people than it could support and forcing them into relentless competition with each other.  It even pushed us to take on too many graduate students and post docs--to boost our status, do our work for us, and enable us to get more grants, and as a mark of our prestige--whether or not there were ample jobs for them, and it imposed intense pressure to work 24/7.  That this led to shaving the truth is only to be expected.

But more important to us than all of the issues raised above is that the truth is shaved in much more pernicious ways than outright fraud.  The latter may be more common than widely realized, but it is still relatively rare compared to investigators knowingly, and as policy, exaggerating their findings.  Negative or less-than-positive results tend to be understated or omitted, or journals won't publish them.  After all, a negative result can't be used to generate a grant to keep doing the same sort of study so we can keep our careers going.  Complicated issues are buried in 'supplemental information' that, at least sometimes, shows that the main paper's claims are over-stated.  Marginal positive findings are blown all out of proportion by various kinds of obfuscating rhetoric and statistical blizzardry.

And of course, the media and science journalists (too many of whom have scant understanding of science, or whose own jobs and sense of prestige pressures them to breathless overstatement) are hungry for the Big Headline story, again 24/7.  It's a mutual reinforcement game of truth-shading. It's not just restricted to science, either.  In an individualistic, profit-based, advertising-driven society like ours, science is just doing what's the going thing for everybody else.  We're all shopping in the same mall.

Investigators, even the most senior, absolutely know that they're playing the game.  Students are trained in 'grantsmanship' from shortly after they arrive.  One regularly hears the smirking comment that to get funded one proposes to do work s/he's basically already done.  And on and on.  None of this dishonor is at all surprising nor, within the profession, is it at all a secret.

Outright fraud is only the tip of the proverbial problem.  The rest--the dissembling and so on, and the play of vested interests because of the need to keep the funds flowing--is more subtle and harder to find or document.  The System can wear down anyone who really wants to try to document its behavior.  But it is widespread if not nearly universal.  Only some--the super-human, or those of us too senior to play the game and who feel liberated to speak about it--can resist.

In this sense, the explanation of science's behavior provides no excuse for it, and yet explains why the problem is not our fault!  It is a System, as we've noted, into which people are brought as junior investigators.  Their forebears (like us) built it, but newcomers inherit it as part of their professional environment.  They must play the game from Day One to survive in their careers for which they've trained so long.  Only by grass-roots demands to scale back, slow down, be more responsible in our statements, reduce the size of the over-stuffed research establishment, and so on, will we return to a more honorable, or more clearly honest, way of doing business.  But right now, the grass roots are quivering in the winds of our times, and there is no rebellion yet in sight.


------------------------------------------------
*A list of retractions is maintained by two science journalists at Retraction Watch.  They welcome tips and comments.

Monday, January 28, 2013

Genomic analysis results: understanding, or Fairy Dust?

We are daily seeing claims of major discoveries from genomics and other 'omics' kinds of studies.  These are being proclaimed by their investigators as if they have waved a magic wand and solved critical human problems 'urgently' in need of solution.

Yet many realize that GWAS and other omics, or idea-free, methods have provided a much lower yield than was promised or expected.  This is expressed in terms like 'hidden heritability', referring to the familial clustering that should be genetic but for which specific genes cannot be found, or for which only many individually trivial contributing genome regions are identified.  In fact, this is what we should have expected, based on long-standing evolutionary theory and ideas about genetics.  We've posted many, many times about this.

The evidence is consistent.  Many genes interact to produce biological traits, in humans as well as other species including yeast and bacteria, and plants.  These genes have to be regulated to control the timing and amount of their expression in cells, and gene regulation involves many interactions among genes and other DNA regions where regulatory proteins bind.  Each of the functional DNA regions that are involved is subject to mutation that, if not lethal, can circulate in the population over generations.

This is known as 'polygenic' variation.  The word simply means a great many contributing genetic elements that mainly have individually tiny effects.  Findings from GWAS and other types of studies consistently point to evidence for just this kind of polygenic control.  But the frustrating thing (for proponents of genetics-are-everything and of personalized genomic medicine, etc.) is that with many individually trivial contributions, each person's genotype is different and each case of the 'same' disease is due to different genotypes and/or environmental exposures.
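For readers who would rather see this than take our word for it, here is a toy simulation, with made-up numbers and no pretense of modeling any real trait, of how hundreds of individually trivial contributions add up to a smooth, substantially 'heritable' trait in which no single locus explains much of anything.

```python
# Toy polygenic trait: many loci, each with a tiny additive effect, plus noise.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_loci = 10_000, 500
freqs = rng.uniform(0.05, 0.95, n_loci)     # allele frequencies (invented)
effects = rng.normal(0, 0.05, n_loci)       # tiny per-locus effects (invented)

genotypes = rng.binomial(2, freqs, size=(n_people, n_loci))   # 0/1/2 copies per locus
genetic_value = genotypes @ effects
environment = rng.normal(0, genetic_value.std(), n_people)    # makes 'heritability' ~50%
trait = genetic_value + environment

# Each locus explains only a sliver of the trait variance, even though
# together the loci account for about half of it.
per_locus_r2 = [np.corrcoef(genotypes[:, j], trait)[0, 1] ** 2 for j in range(n_loci)]
print(f"largest share of variance explained by any single locus: {max(per_locus_r2):.4f}")
print(f"share of variance that is genetic overall: {genetic_value.var() / trait.var():.2f}")
```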

At the same time, major mutational changes in contributing genes can yield a serious effect that proper analysis can assign to that specific gene.  Our methods identify these, and we generally refer to their effects as 'Mendelian'.  These are often due to changes that inactivate ('knock out') the gene.  This success in easily identifying the large causes, while only problematically finding the small ones, suggests that the reason there appears to be so much genetic control (reflected in measures like family correlation or heritability) is simply what we think it is: traits really are polygenic.

But can life be that complicated??
In the face of this apparent complexity, many argue that life can't really be that complex.  One may feel that it's just not plausible that hundreds or thousands of genes can be the explanation for traits that show orderly value distributions in populations.  That orderliness, and the relatively orderly nature of evolution, and the fact that a trait can be knocked out by single genes, all might be seen as indicating that life must have been able to evolve our complex traits in a way that is not so complex after all.  We're just not understanding--yet!

The usual approach to this view is to argue that we just need longer, larger, costlier studies, or more kinds of 'omics' approaches--like epigenetics, copy number variation, nutrigenomic, microbiomic studies and the like.  Then, this view goes, we'll (whew!) finally identify essentially all the causal elements.  But if things really are polygenic, this may be hopeless.

But do we really know what's going on--whether or not causation is totally enumerable, as the current belief system holds?  We know this belief system is based in part--perhaps in major part--on the kinds of professional vested interests and paucity of better ideas that we write about often here.  One way to view this is simply to assert, yet again, that for nearly a century we have had the right kinds of knowledge and the right interpretation, even if we lacked the technology to document those ideas, and that recent technologies are showing just what we expected to find.  Despite resistance to the contrary, the idea that we can reduce complexity to simple genetics, or even omics, is largely based on wishful thinking.

But what about an alternative?

Fairy dust; source: Wikimedia
Suppose this is one of those situations in which we are documenting the hell out of trivia, because our theoretical understanding leads us there--we try to force the current 'paradigm' to fit facts that really don't fit!  But perhaps not only are the wishful-thinkers wrong, but so are those of us who have been arguing that we see what we expected to see and that, alas, life really is as complex as our polygene theory suggests it is.

If both sides of these issues are wrong, perhaps there is some other explanation for what seems like the tractable theory of life's coherence, something other than many tiny contributing factors.  Could there be some force or factor--call it 'Fairy Dust'--that we simply have not discovered but that underlies what we are struggling to understand?

Such factors would be analogous to those that were discovered in other 'paradigm shifts', or revolutionary changes in scientific gestalt, that have happened over time.  We might refer to quantum effects, dark energy or dark matter, gravity particles, and so on as exemplars of such factors in other sciences. It could be some kind of 'field' or 'force' whose nature we don't know of or even suspect.  Or just another way of thinking about what we know already.

One can never deny the possibility that Fairy Dust exists.  But neither can we propose studies to find it, as if we knew it existed.  And it's understandable, given human nature and the history of science, that we'll press ahead, ever more intently, trying herd-like to force things to fit our theory, or trying to outwit our theory, the way that's par for the course now, until someone somehow stumbles on the insight required to identify the fairy dust and improve our biological explanations.  This is just how Thomas Kuhn described the way science works (as we posted about yesterday).

But don't hold your breath, because to us, right now, it does not seem that our explanations are missing any such thing.

Friday, January 25, 2013

Depressing facts not to get depressed over: zero genes for depression?

Depression seems to run in families, but despite numerous attempts, few causal genes have been identified and even fewer have been replicated.  Why?

Twin and family studies have estimated the heritability of depression--the proportion of its variation attributable to genetic variation--to be 31-42%, according to a newly published report in Biological Psychiatry.  This is a report of a meta-analysis that pooled results from 17 genomewide association studies (GWAS), including a total of 34,549 people of European origin with depression somewhere along a continuum defined by the answers to 20 questions as to extent of depressed mood, feelings of guilt and worthlessness, feelings of helplessness and hopelessness, psychomotor retardation, loss of appetite, and sleep disturbance.  This study differs from most others in that it used symptoms rather than diagnosis.

They also attempted to replicate their findings with 5 studies that used a different assessment of depressive symptoms, replication being the requirement these days for validating a study.  Finally, they did a combined meta-analysis of a number of discovery and replication studies, for a total of 51,258 individuals.
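For those unfamiliar with how such pooling works, the standard machinery is an inverse-variance-weighted ('fixed effect') combination of each SNP's estimated effect across studies.  We don't know the authors' exact pipeline, so the sketch below uses invented effect sizes and standard errors purely to show the arithmetic.

```python
# Fixed-effect, inverse-variance-weighted meta-analysis for one SNP
# (effect estimates and standard errors below are invented).
import math

studies = [(0.04, 0.03), (0.01, 0.02), (-0.02, 0.05), (0.03, 0.04)]  # (beta, SE) per study

weights = [1.0 / se**2 for _, se in studies]
beta_pooled = sum(w * b for (b, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))
z = beta_pooled / se_pooled
p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value from the normal distribution

print(f"pooled beta = {beta_pooled:.3f}, SE = {se_pooled:.3f}, z = {z:.2f}, p = {p:.2g}")
```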

 Summed up bluntly by Science News:
A massive effort to uncover genes involved in depression has largely failed. By combing through the DNA of 34,549 volunteers, an international team of 86 scientists hoped to uncover genetic influences that affect a person’s vulnerability to depression. But the analysis turned up nothing.
Well, not exactly nothing.  The authors report one hit when they pooled all the studies, but it was not replicated in any other study, and it was in a genomic region that includes no genes.  That's not necessarily meaningless, but it is difficult to follow up.

 The authors wonder whether using the depression scale might explain this lack of hits. Were their cases so heterogeneous that this prevented them from finding a major effect? If they'd included only people who scored high on the scale, perhaps this would have narrowed down possible causal genes, or eliminated cases of depression that might not have a genetic basis. The authors wrote:
The approach of studying depression on a continuum has the advantage that not only information on extremes is used but that all available information is exploited. Van der Sluis et al. showed that if the phenotypic variation among cases, as well as the variation among control subjects, is used, this greatly increases the power to detect genetic variants. 
But, given that no other study has replicably found genes with major effect, it is unlikely that this explains the lack of significant findings. Their replication studies may have added a different phenotype to the mix, too, and therefore additional genetic heterogeneity. The authors also point out that gene-gene interaction or gene-environment interactions might explain depression, and hinder GWAS.

 Of course, the authors (naturally, and with total scientific disinterestedness) say that the way to do this right is to do a larger study of, say, 50,000 cases. But of course this won't eliminate any of the issues, and will only increase the heterogeneity problem.

Naive expectations
Showing that this isn't just evidence of sloppy workmanship is the fact that similar results have been found for most other psychiatric or neurological disorders, with few exceptions, and that even 'mechanical' traits like the skeleton, metabolism, and so on, that are easier to define and measure show the same level of complexity. Even in yeast.

Is it fair to point out that we have known for a mere 94 years that traits without major genetic contributions can nonetheless cluster in families, so that substantial heritability is not an indicator that mapping will find such genes?  While families with many cases in many generations and in collateral relatives (like cousins) do raise hopes of such effects, at least in those families, it was long ago also shown that polygenic causation (which is what we have here, if the evidence is to be believed) can mimic Mendelian transmission in families.

So this study, like so many costly others, tells us what we knew. Indeed, we already knew we knew that before this (and so many other) study was done.

When nothing is something--unless it's not something you want!
We're in the age in which we boast about our various name-dropping technologies that we used in our latest elephantine study. We make sure our listeners know that we've used 'massively parallel' or 'Next-generation' sequencing to get data from massive numbers of people. The idea is that signal-to-noise ratios are well-behaved, so that increasing sample sizes will find increasingly weaker signals.
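The belief rests on a standard power calculation, which is worth seeing in rough form because it makes the assumption explicit: if a real signal of a given size is there, a big enough sample will reach it.  The sketch below uses the usual chi-squared approximation for a quantitative trait (non-centrality roughly the sample size times the fraction of variance explained) and the conventional 5e-8 genome-wide threshold; the effect sizes are our assumptions, not anyone's published figures.

```python
# Rough sample sizes needed to detect ever-weaker signals at genome-wide significance.
from scipy.stats import chi2, ncx2

alpha = 5e-8                              # conventional genome-wide threshold
crit = chi2.ppf(1 - alpha, df=1)          # critical value of the 1-df test

def power(n, r2):
    # association chi-squared is approximately non-central with ncp ~ n * r2
    return 1 - ncx2.cdf(crit, df=1, nc=n * r2)

for r2 in (0.01, 0.001, 0.0001):          # variance explained by one variant (assumed)
    n = 1000
    while power(n, r2) < 0.8:
        n += 1000
    print(f"variance explained {r2:.2%}: roughly {n:,} samples for 80% power")
```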

There is no reason to believe this, when it comes to genetics, and indeed there are good reasons to think the contrary. But that doesn't slow down the current modus operandi: generating massively incremental data.

Actually, on the other hand, the 'nothing' that is being found is nothing to be depressed about! Instead, it's convincing evidence for complexity--that is, that many minor causes interact and combine to generate results, so that each case is different in detail, both in terms of the trait itself and the genotype that contributes to it. It's the nothing we don't want because what we want is something we can cash in on: build a career as the discoverer of the Big Fact, make and profit from a patent, make a pharmaceutical bonanza.

Instead, it's the mouse that roared. We should be taking credit for showing how life really seems to be, which all these GWAS have in fact shown, but few are willing to accept it. Of course, we needn't do that thousands of times, except that that's what we're doing because of the paucity of better ideas. But that's how humans, at least humans in our kind of culture, seem to operate.

Thursday, January 24, 2013

A 'paradigm shift' in science....or a maneuver?

Thomas Kuhn's 1962 book The Structure of  Scientific Revolutions suggested that most of the time we practice 'normal' science, in which we take our current working theory--he called it a 'paradigm'--and try to learn as much as we can.  We spend our time at the frontiers of knowledge, and at some point we have to work harder and harder to make facts fit the theory.  Something is missing, we don't know what, but we insist on forcing the facts to fit.

Then, for reasons hard to account for but in a way that happens regularly enough that it's a pattern Kuhn could outline (even if rare), someone has a major insight, and shows how a totally unexpected new way to view things can account for the facts that had heretofore been so problematic.  Everyone excitedly jumps onto the new bandwagon, and a 'paradigm shift' has occurred. Even then, some old facts may not be as well accounted for, or the new paradigm may just explain issues of contemporary concern, leaving older questions behind.   But the herd follows rapidly, and an era of new 'normal science' begins.

The most famous paradigm shifts involve people like Newton and Galileo in classical physics, Darwin in biology, Einstein and relativity, and the discovery of continental drift.  Because historians and philosophers of science have in a sense glamorized the rare genius who leads such changes, the term 'paradigm shift' has become almost pedestrian:  we all naturally want to be living--and participating--in an important time in history, and far, far too many people declare paradigm shifts far too frequently (often humbly referring to their own work).  It's become a kind of label to justify whatever one is doing, a lobbying tactic, or a bit of wishful thinking.

Is 'omics' a paradigm shift?
The idea that empiricism (observation), rather than just thinking, was the secret to understanding the world grew out of the Enlightenment period in Europe, starting about 400 years ago.  But pure empiricism--just gathering data--was rejected in the sense that the idea was for the facts to lead to theoretical generalizations, the discovery of the 'laws' of Nature, which is what science is all about.  This led to the formation of the 'scientific method': forming hypotheses based on current theory, setting up studies specifically to test the hypothesis, and adjusting the theory according to the results.

The 17th-19th centuries were largely spent gathering data from around the world, a first rather extensive kind of exploration.  But by the 20th century such 'Victorian beetle collection' was sneered at, and the view was that to do real science you must be constrained by orderly hypothesis-driven research.  Data alone would not reveal the theory.

With advances in molecular and computing technology, and the complexity of life being documented, things changed.  In the 'omic' era, which began with genomics, the ethos has changed.  Now we are again enamored of massive data collection unburdened by the necessity to specify what we think is going on in any but the most generic terms.  The first omics effort, sequencing the human genome, led to copy-cat omics of all sorts (microbiomics, nutrigenomics, proteomics, .....) in which expensive and extensive technology is thrown at a problem in the hope that fundamental patterns will be revealed.

We now openly aver, if not brag, that we are not doing 'hypothesis-driven' research, as if there is now something wrong with having focused ideas!  Indeed, we now often treat 'targeted' research as a kind of after-omics specialty activity.  Whether this is good or not, I recently heard a speaker refer to the  omics approach as a 'paradigm shift'.  Is that justified?

We must acknowledge that, before we could even dream about genomic-scale DNA sequencing and the like, genetic function and the complex genome had perplexed us in many ways.  If we had no 'candidate' genes in mind--no specific genetic hypothesis--for some purpose, such as understanding a complex disease, but were convinced for some reason that genetic variation must be involved, what was the best way to find the gene(s)?  The answer was to go back to 'Victorian beetle collection': just grab everything you can and hope the pieces fall into place.  It was, given the new technology, a feeling of hope that this might help (even though we had many reasons to believe that we would find what we indeed did find, as some of us were writing even then).

The era of Big Science
Omics approaches are not just naked confessions of ignorance.  If that were the case, one might say that we should not fund such largely purposeless research.  No, more is involved.  Since the Manhattan Project and a few others, it did not escape scientists' attention that big, long, too-large-to-be-canceled projects could sequester huge amounts of funding.  We shouldn't have to belabor this point here: the way universities and investigators, their salaries and careers, became dependent on, if not addicted to, external grants, the politics of getting started down a costly path enabling one to argue that to stop now would throw away the money so-far invested (e.g., current Higgs Boson/Large Hadron Collider arguments?).  Professors are not dummies, and they know how to strategize to secure funds!

It is fair to ask two questions here:
First, could something more beneficial have been done, perhaps for less cost, in some other way?  Omics-scale research of course does lead to discoveries, at least some of which might not happen or might take a long time to occur.  After the money's been spent and the hundred-author papers published in prestige journals, one can always look back, identify what's been found, and argue that that justifies the cost. 

Second, is this approach likely to generate importantly transformative understanding of Nature?  This is a debatable point, but many have said, and we generally agree, that the System created by Big Science is almost guaranteed to generate incremental rather than conceptually innovative results.  (E.g., economist Tim Harford talked about this last week on BBC Radio 4, comparing the risk-taker science of the Howard Hughes Institutes with the safe and incremental science of the NIH.)  Propose what's big in scale (to impress reviewers or reporters), but safe--you know you'll get some results!  If you compare 250,000 diabetics to 500,000 non-diabetic controls and search for genetic differences, across a genome of 3.1 billion nucleotides, you are bound to get some result (even if it is that no gene stands out as a major causal factor, that is a 'result').  It is safe.
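The arithmetic behind 'bound to get some result' is simple enough to lay out.  Assuming, for illustration, about a million roughly independent common variants tested:

```python
# Expected number of purely chance 'hits' at various significance thresholds,
# assuming ~1 million independent tests (a conventional ballpark, not a measured number).
n_tests = 1_000_000
for alpha in (0.05, 1e-5, 5e-8):
    print(f"alpha = {alpha:g}: about {n_tests * alpha:g} false positives expected if nothing is real")
# This is why GWAS adopted the 5e-8 genome-wide threshold: it keeps the expected
# number of pure-noise hits well below one.
```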

This is not providing a daring return on society's largesse, but it is the way things largely work these days.  We post about this regularly, of course.  Permanent, factory-like, incremental, over-claimed, budget-inflated activity has become what too many feel is necessary in order to protect careers.  Rarely do they admit this openly, of course, as it would be self-defeating.  But it is very well known, and almost universally acknowledged off the record, that this strategy of convenience seriously under-performs, yet it remains the way to do business.

Hypothesis-free?
This sort of Big Science is often said to be 'hypothesis free'.  That is a big turn away from classical Enlightenment science in which you had to state your theory and then test it.  Indeed, this change itself has been called a 'paradigm shift'.

In fact, even the omics approach is not really theory- or hypothesis-free.  It assumes, though often not stated in this way, that genes do cause the trait, and the omics data will find them.  It is hypothesis-free only in the sense that we don't have to say in advance which gene(s) we think are involved.  Pleading ignorant has become accepted as a kind of insight.

For better or worse, this is certainly a change in how we do business, and it is also a change in our 'gestalt' or worldview about science.  But it does not constitute a new paradigm about the nature of Nature!  Nothing theoretical changes just because we now have factories that can systematically churn out reams of data.  Indeed, the theories of life that we had decades ago, even a century ago, have not fundamentally changed, even though they remain incomplete and imperfect and we have enormous amounts of new understanding of genes and what they do.

The shift to 'omics' has generated masses of data we didn't have before.  What good that will do remains to be seen, as does whether it is the right way to build a science Establishment that generates good for society.  However that turns out, Big Science is certainly a strategy shift, but it has so far generated no sort of paradigm shift.

Wednesday, January 23, 2013

The unexamined walk is not worth taking

I've been reading Robert Macfarlane's beautiful book, The Old Ways: A Journey on Foot, about his travels in Britain and beyond, from walks around his home in Cambridge, England, to the bird islands of the north, to ancient seaways where he, no sailor, sails with friends, and on to Spain and even Palestine.  Macfarlane's is a deep, broad, poetic, idiosyncratic and haunting voice and there is much to take in.

His description of his barefoot walks is whimsical (his friend walks barefoot through nettles because it's chili for the feet); his walk across the Broomway, a path in Essex passable only when the tide is out, where many a careless wanderer has met his end, shows a bit of a stubborn, defiant nature that I can't help but believe gets him far.  He travels alone, he enlists the aid of wizened sailors or lifelong inhabitants of the moors, he walks with friends.  But he is always observing.

It's an observation that he tosses off when writing of his travels between bird islands in northern Scotland that catches my attention today.  He is sailing from the Isle of Lewis with four experienced sailors on a 75 year old boat, the Jubilee.
I watched gangs of skua pursue single gannets: their method was to fly above a gannet, drop onto its back, force it down onto the sea, smash its skull with their beaks until the gannet was dizzied, then paddle its head underwater with their feet until it vomited up the contents of its stomach, which the skua then ate. 
This is curious indeed and raises many questions.  It's not the most straightforward way to catch fish, after all.  What else do skua eat, and how else do they procure it?  And are they always so brutal?  How on earth could they learn this rather exotic and complex way of dining?

Great Skua; Wikipedia
In my wanderings through what's written about skua, they are most frequently described as "avian pirates."  The fancy word for this is kleptoparasite, but as long as we're anthropomorphizing, "bully" or even "brute" might be more appropriate. Not only do they smash gannets' skulls, but they have been seen to grab them by the wing in mid-flight so that they stall and fall into the sea, whence the poor birds disgorge the fish they've just caught, freeing it for the skua to scavenge.  And it's not just gannets; they also steal from puffins or terns.

Skua don't always steal what they eat, though.  They do fish for themselves, usually splash diving for surface fish, and they often trail fishing boats to scavenge discards or innards.  But they don't live on fish alone.  They eat a wide variety of things, including other seabirds--chicks, fledglings and adults, large and small.  They also eat eggs, goose barnacles (tons and tons of goose barnacles), sand eels when they are available, and small mammals.

Gannet; Wikipedia
Skua are generalists, omnivores, but individuals also might specialize, preferring sand eels to barnacles, or whitefish to herring. And huge changes in what they eat over time have been documented, so they certainly can adapt to changes in the availability of foods.

Thieves, pirates, kleptoparasites.  Brutes.  Whatever you want to call them, they don't sound like very nice birds.  But are they worse than the gannet, who you might pity given how he is treated by his near-brethren?  The gannet is a hunter, too, hunting fish, that of course it eats alive, often swallowing the poor fish before surfacing after a dive.  And, people on the Isle of Lewis eat gannets, too (though Macfarlane writes: "I know a Lewisian crofter who, when I asked him whether he liked gannet meat, replied, "I gave a piece to the dog and it spent all week licking its arse to take away the taste.")  So we might just be pots calling the kettle black.

It could be, and surely is argued that kleptoparasitic behavior is innate: skua are born to harass gannets.  Not all seabirds treat each other this way, after all, so it must be something in the skua genome that makes them do this.

But, skua aren't only brutes.  Yes, they do treat gannets abominably, and they do dive bomb people who get too close to their nests, which from a person's perspective might seem brutish, but they are doing it to protect their young.  Which means that they do have some warm and fuzzy feelings. Indeed, like other seabirds, skua invest proportionately more in their young than do land birds.

What part of their behavior would be inborn, anyway?  They don't always smash the gannet's skull, they don't always grab their wing, they don't only eat by stealing, they don't only eat fish, indeed their diet can change from year to year, and they aren't always brutes.  So, exactly what kind of protein would a protein-coding gene be coding for to produce this kind of behavior?

Frigate bird; Wikipedia
Perhaps it's something that skua have been teaching their young for millennia, a behavior passed from generation to generation, not inborn at all. But if so, you'd think it might have caught on among other species of seabirds, so everyone would be dive bombing the poor gannet, a particularly skittish bird. Other seabirds are kleptoparasites, most notably the frigate bird, and many will steal another bird's food when the opportunity arises, but the skua version is not a generally shared behavior.  

So, of course what this all brings up is the question of whether there are genes for behavior or not. Or, is behavior an emergent property for which the groundwork is laid by genes that give us all the ability to scavenge and steal, care well for our young, and a whole host of things besides?  If so, we aren't going to find genes 'for' kleptoparasitism, just as we aren't going to find genes for good parenting -- or bad.

Life evolved depending on one creature living on the flesh of another, by and large.  It's a fundamental legacy of our common ancestry; we all share the building blocks of life, and get, or steal, them from others to survive.  No plant or animal likes being eaten alive, often in a cruel way (since other than humans, species haven't got abattoirs that are at least somewhat terror-free and quick in their killing).  It may be cruel for us to maim, kill, or torture one another for reasons other than food, as is so often what humans do.  But killing for food is what we have to do, whether we are eating plants or animals.  This was one of the observations that led Darwin to his theory that adaptations arise via harsh competition.  It's not the only fact of life, but it's a fundamental one.

Are there genes 'for' survival? 

Tuesday, January 22, 2013

Report from the frozen north

I have just returned from co-teaching a course that we call Logical Reasoning in Human Genetics, in Helsinki, Finland.  As mentioned in an earlier post about 10 days ago, I have done this jointly with Joe Terwilliger several times before, 3 of them also in Helsinki, but also in Berlin, Maracaibo, and Madrid.  There have been some additional lecturers in most of these--this time Joe Lee from Columbia, and Markus Perola and Tero Hiekkalinna from Finland.

Our point is to explain why approaches to understanding the nature of genomic causation have run into difficulties that all but the most glassy-eyed advocates or know-nothing ostriches are willing to acknowledge.  Complexity has evolutionary and mechanistic origins, and the challenge of identifying causal elements in this context is serious.  Confirmation criteria, so fundamental to science, are problematic because different samples have different evolutionary histories, and most genetic variants that affect the kinds of disease and behavior traits we want to understand are rare and have individually very small effects.  Standard scientific methods that rest on repeatability and statistical significance criteria are often stymied by these facts.

The 60 students in the mini-course, of various levels of seniority, including post-docs and professionals as well as graduate students, seemed to find the lectures we gave conceptually eye-opening.  If we did our job, that is because many in biomedical genetics do not have a very sophisticated understanding of evolutionary genetics, and because wishful thinking has led to many types of biased reporting or inadequately interpreted studies.  There is also, of course, stress on the results that are successful, at the expense of the much greater number of studies that find very little of relevance.

We have, of course, our own view of this field, and we post that view often.  But many people seem simply to be unaware of  the issues involved and their implications.  So we think there is merit to continue to critique the field (in the proper sense of the term, meaning to provide as objective an evaluation of strengths and weaknesses, successes and failures, as we can).

This will happen again only after the jet-lag is over!

Monday, January 21, 2013

Disease driven poverty


In a few of my previous blog posts I’ve discussed the relationship between poverty and infectious disease.  Many of the most prevalent and severe infectious diseases in the world disproportionately affect the world’s poor.  Part of the reason is that the necessary resources aren’t available for tackling such diseases.  A lot of money is currently spent (wasted?) on designing biomedical ‘cures’ for diseases that persist in some places (usually economically poor places) while having already been eradicated in other places (usually economically rich).  It is my position that diseases such as malaria and tuberculosis remain major threats to some populations simply because of the way that financial resources are allocated in our extremely heterogeneous world.

But there is another angle to this story.

Not only does poverty lead to poor health, but poor health can also lead to poverty.  Quite frequently, that is, the arrows point both ways and the reality is a system in which there are “positive” feedback loops.  There is a growing literature on this type of system which is frequently referred to as a “poverty trap.”  Much of this literature has been in economics, where mathematical models have indicated that populations with infectious diseases are less able to ‘develop’ economically.

With economic development at the population level, e.g. with the growth of average income levels, we tend to see an increase in overall life expectancy at age 0.  Most likely this indicates a relationship between improved health and increased wealth.  However, most of the models that actually look at this relationship are either ecological (they are looking at the entire population and frequently assume homogeneity within the population) or at an individual level.  A few models have also looked at community or household levels.

One major problem with models of all types is that results can change when we change our unit of scale.  The effects of poverty on disease, for example, might be different if we look at a community level rather than a province/state level or even consider an entire nation to be a single population.  This is a problem known as the ecological fallacy (and is closely related to the modifiable areal unit problem) in which causal relationships at the population level don’t explain what is happening at, say, the individual level.  
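A made-up numerical example can make the fallacy concrete: in the sketch below, richer households within each community are less often sick, yet the richer community has the higher prevalence, so an analysis of community averages points the opposite way from the individual-level relationship.  All numbers are invented.

```python
# Ecological fallacy in miniature: within-community and between-community
# income-disease relationships can point in opposite directions.
communities = {
    # community: list of (income, sick) per household -- hypothetical data
    "A": [(1, 1), (1, 0), (2, 0), (2, 0), (3, 0), (3, 0)],   # poorer community, low prevalence
    "B": [(4, 1), (4, 1), (5, 1), (5, 0), (6, 1), (6, 0)],   # richer community, high prevalence
}

for name, households in communities.items():
    mean_income = sum(inc for inc, _ in households) / len(households)
    prevalence = sum(sick for _, sick in households) / len(households)
    print(f"community {name}: mean income {mean_income:.1f}, prevalence {prevalence:.0%}")

# Community-level view: the richer community looks sicker.
# Household-level view (within each community): richer households are sick less often.
```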

Regardless of these problems and issues, there does appear to be a feedback loop between poverty and disease.

And this is an interesting thing from an anthropological view.  First off, poverty can mean different things to different people.  For example, to some, poverty means “not modern.”  Some indigenous groups actually choose to live in a traditional house rather than a more modern one.  In my opinion, “traditional” (or not modern) does not equal poverty, but it does get mistaken as poverty.  Poverty can also be a relative thing; something that becomes apparent when you don’t have as much stuff as the people with whom you are coming into contact.  Clearly this may lead to psychological and sociological issues, but it might also explain gradients in outcomes (relative health?)  Finally, there is a type of poverty that exists where people are simply unable to put food on the table.  While there can be some argument about the effects of modernization and relative poverty, I would suggest that this final type of poverty is unambiguous and its negative effects are less debatable.  

In poverty trap models we are frequently interested in investigating and understanding threshold levels below or above which different equilibria are reached (there are quite a few relatively new papers out that are excellent references; see Bonds, Keenan, Rohani, & Sachs, 2010; Plucinski, Ngonghala, Getz, & Bonds, 2013; Wood, in press).  Perhaps it is easiest to consider things at the level of the household.

An already poor or marginally poor household in which the major breadwinner is afflicted by severe disease is plagued with multiple problems.  For example, aside from the risk of infection for other household members, if that person is afflicted by malaria or dengue fever, they may not be able to work for several weeks.  A household on the margin of poverty may then fall just enough behind in money and/or food to fall into true poverty.  Households that are already poor may fall even further.

And an important aspect of this situation is that not only is the person who is actually infected met with further troubles; the entire household is also afflicted.  Furthermore, there tends to be heterogeneity in these effects even within households.  That is, poor households may see things such as greater infant mortality, and this effect may be exacerbated when there is a shortage of food or resources in the household.

In poverty trap models, there are usually equilibrium points in poverty levels that, once reached, are quite difficult to break.  From Bonds et al. (2010):

What may be most important in these debates is therefore not whether the effect of health on poverty is more significant than that of poverty on health, but whether the combined effect is powerful enough to generate self-perpetuating patterns of development or the persistence of poverty.  

Children who grow up in households with frequent food shortages may not have the same physical or cognitive abilities as others.  Their immune systems, already taxed by years of exposure to pathogens, may not be able to fight off diseases as well as their healthy counterparts.  Therefore, when they begin their own households, they are already behind in the nutrition, health, and economic game.  And once again, when adults in the new household fall ill and cannot put food on the table, the children will be disproportionately affected.  

This cyclical pattern, where disease leads to poverty and poverty can lead to disproportionate disease, provides a perfect storm in which there aren’t enough resources to keep from getting sick, where once sick you are likely to fall further into poverty, and once you fall further into poverty you are even further away from pulling yourself and your family out.  This leads to the maintenance of poverty and sickness across the generations.
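A deliberately crude toy model--not the one in the papers cited here, and with entirely invented parameters--shows how such a feedback can send households that start on either side of a threshold toward very different fates:

```python
# Toy disease-poverty feedback: income grows modestly when healthy, the chance
# of illness rises as income falls, and illness erodes earnings.  Parameters invented.
def simulate(income, years=50):
    for _ in range(years):
        p_sick = 0.6 / (1.0 + 4.0 * income)   # poorer -> illness more likely
        erosion = 0.5 * p_sick                 # expected share of the year's earnings lost
        income *= (1.08 - erosion)             # 8% growth in a fully healthy year
    return income

for start in (0.3, 0.5, 0.8, 1.5):
    print(f"starting income {start:.1f} -> after 50 years: {simulate(start):.2f}")
# Households starting below the threshold spiral toward zero (the trap); those above
# it grow -- two very different long-run outcomes from one simple feedback loop.
```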

And this story could perhaps get even more complicated when we consider some evolutionary implications.  For example, populations that have historically been afflicted with malaria also tend to have high proportions of blood and blood-related disorders that seem to protect against malaria.  Almost all of these disorders are harmful in some cases (for example, in homozygotes).  Therefore the evolutionary history of disease can lead to a situation where some individuals are actually plagued with sickness from the very beginning of life.  Paradoxically, under situations of heavy malaria burden, some people with these disorders will apparently be healthier than their non-affected counterparts.  I don’t know whether the side effects of these disorders are enough to lead to poverty traps on their own.
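The textbook arithmetic of heterozygote advantage shows how such disorders persist.  With illustrative (not measured) fitness costs--t to non-carrier homozygotes exposed to malaria, s to affected homozygotes--the harmful allele settles at an equilibrium frequency of q* = t/(s + t):

```python
# Balanced polymorphism under heterozygote advantage (fitness costs assumed, not measured).
t = 0.15   # fitness cost of malaria to non-carrier homozygotes
s = 0.80   # fitness cost to homozygotes affected by the blood disorder
q_eq = t / (s + t)                 # equilibrium frequency of the 'harmful' allele
affected_at_birth = q_eq ** 2      # Hardy-Weinberg share of affected homozygotes
print(f"equilibrium allele frequency: {q_eq:.2f}")
print(f"expected share born with the disorder: {affected_at_birth:.1%}")
```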

Finally, in an age when many scientists appear to be looking "for the gene for (fill in your favorite thing to study)", poverty traps and households are interesting things to ponder.  Poverty, and the apparent predisposition of household members to succumb to disease, can look like a genetic effect.  If it runs in families, and both poverty and sickness certainly do, then it can look a whole lot like there is a genetic reason for it.  I think poverty trap models are therefore a nice illustration of how we could arrive at the same phenotype (poverty and sickness) from purely socio-economic and ecological factors.

REFERENCES:

Bonds, M. H., Keenan, D. C., Rohani, P., & Sachs, J. D. (2010). Poverty trap formed by the ecology of infectious diseases. Proceedings of the Royal Society B: Biological Sciences, 277(1685), 1185–92. doi:10.1098/rspb.2009.1778

Plucinski, M. M., Ngonghala, C. N., Getz, W. M., & Bonds, M. H. (2013). Clusters of poverty and disease emerge from feedbacks on an epidemiological network. Journal of The Royal Society Interface, 10(80). doi:10.1098/rsif.2012.0656

Wood, J. (in press). The Biodemography of Subsistence Farming: Population, Food and Family. Cambridge University Press.

Friday, January 18, 2013

Burrowing into the unending nature vs nurture debate

Behavior modules
Two interesting stories about behavior this week.  You've probably read about the mouse that builds its burrow in a stereotypical way.  The Nature paper describing this (by Weber, Peterson, and Hoekstra) was reported in The New York Times as a story about how genes control complex behavior.

The research was motivated by the question of how genetics influences the evolution of complex behaviors, but the researchers also wonder whether the environment influences variation in heritable behavior, and whether many or few genetic changes affect behavioral evolution.

The authors describe the burrow built by oldfield mice as having a long entrance tunnel and an escape tunnel, and they say that tunnel length is consistent wherever these mice build, although tunnel depth depends on the soil.




Deer mice make a smaller, simpler burrow, and when the two species are interbred in the lab, the offspring reproduce the more complex burrow of the oldfield mouse, though with varying length.  First-generation backcross mice (hybrid offspring then bred back to the parental strain) build burrows of varying length, and not all of them build escape tunnels, which suggests to the researchers that these behaviors are separately determined.

The researchers then looked for genetic loci that might be influencing these behaviors and found three different loci that seem to contribute to tunnel length, and another that influences the building of escape tunnels.  As explained in the NYT:
All complicated behaviors are affected by many things, Dr. Hoekstra said, so these regions of DNA do not determine tunnel architecture and length by themselves. But tunnel length is about 30 percent inherited, she said, and the three locations account for about half of that variation. The rest is determined by many tiny genetic effects. As for the one location that affected whether or not mice dug an escape tunnel, if a short-burrow mouse had the long-burrow DNA region, it was 40 percent more likely to dig a complete escape tunnel. 
Well, if tunnel length is about 30 percent inherited (and there is real variation in tunnel length: average length is 181 cm, with a standard deviation of 53 cm), and the three locations account for about half of that variation, or perhaps 15% of the total variation in tunnel-building, then the rest of the causation is not due to a few major genes.  And if it's 70% non-genetic, one wonders how the environment specifies that a burrow will have an escape tunnel.
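To spell that arithmetic out (using only the numbers as reported, so everything here is approximate), a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope variance accounting for tunnel length, using the figures
# as reported (mean 181 cm, SD 53 cm, heritability ~0.30, three loci explaining
# ~half of the heritable part).  Purely illustrative.

sd_cm = 53.0
total_variance = sd_cm ** 2                  # ~2809 cm^2 of variation among burrows
heritability = 0.30                          # share attributed to genetic differences
qtl_share_of_heritable = 0.5                 # the three mapped loci

heritable_variance = heritability * total_variance           # ~843 cm^2
qtl_variance = qtl_share_of_heritable * heritable_variance   # ~421 cm^2
unexplained = total_variance - qtl_variance                  # ~2388 cm^2

print(f"variance explained by the three loci: {qtl_variance / total_variance:.0%}")
print(f"variance not explained by them:       {unexplained / total_variance:.0%}")
```

On those figures, the three mapped loci account for roughly 15% of the variation in tunnel length, leaving about 85% to the many tiny genetic effects and to whatever the environment is doing.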

The authors conclude that "discrete genetic modules" control complex behaviors and that, "Together, these results suggest that complex behaviours—in this case, a classic ‘extended phenotype'—can evolve through multiple genetic changes each affecting distinct behaviour modules."  One could challenge this rather strong genetic-determinism view.  But how genes (much less environments) determine this type of thing is interesting and challenging to think about.

Overcoming behavior modules
Manduca sexta, or Hawkmoth;
Wikimedia Commons
The second story this week is from a paper in Science reporting that while the hawkmoth seems to innately prefer the nectar of night-blooming flowers, it can learn to collect nectar from other sources.  These moths seem to innately prefer flowers that give off a specific class of aromatic compounds, or at least the aromatics of their preferred flowers are processed similarly by the moth's olfactory system.

But the researchers wondered whether olfactory response necessarily reflects the moth's preference.  To test this, they exposed naive moths to paper flowers scented with different classes of compounds from those they generally prefer.  The moths were then observed to visit flowers emitting these scents at frequencies similar to those they innately prefer.  The authors conclude that "olfactory conditioning provided moths with flexibility in their foraging behavior but did not extinguish their innate preferences for scents from the moth-pollinated flowers." They suggest that "olfactory conditioning may operate in an olfactory 'channel' separate from, but parallel to, that involved in the innate responses."  Modularity again.

So much for exquisitely fine-tuned co-evolution of flower and pollinator. 

Glass half full/half empty
There's something for everyone here. The mouse researchers conclude that behavior is modular and evolves by the mixing and matching of genetic modules. Maybe, but there's plenty of non-heritable influence on mouse burrowing behavior as well, if tunnel length is only "about 30 percent inherited."

Modularity is a fundamental principle of life, and the idea that behavior, as well as morphological traits, can be modular is certainly a possibility -- every 4th grade class has its clown and its bully and its teacher's pet, after all.  But this doesn't mean that it must be genetically determined.  It could be, but the evidence isn't yet there. Maybe the non-mapped variation has to do with general cognitive function and how an animal scopes out its environment and decides what would be a good strategy for living in it.

We are all ultimately reducible to genes, in the sense that they are the basis of who and what we are.  But the hawkmoth study, showing that even a lowly moth can easily overcome "innate" behavior with a whiff of something it wasn't born to crave, is yet further evidence that organisms are inherently adaptable, another fundamental principle of life.  Given that environments can change quickly and unpredictably, that's a good thing.

Thursday, January 17, 2013

The height of fashion: fMRI and high-impact publishing

Are some parts of the brain sexier than others?  Apparently there's no small amount of suspicion in fMRI labs that this is true, so Tim Behrens, a neuroscientist at the Wellcome Trust Centre for Neuroimaging, University College London, and the Functional MRI of the Brain Centre, University of Oxford, and his colleagues set out to answer this question.  Behrens talked about his study the other day with Quentin Cooper on BBC Radio 4's "Material World." The results were also published in the January Trends in Cognitive Sciences.

You'd think all parts of the brain would be equally interesting, but apparently some labs are right to be suspicious that this isn't so. Behrens and colleagues analysed over 7000 brain imaging studies and concluded that some regions are indeed sexier than others, and studies of these regions tend to be published in higher impact journals than others.

Behrens et al. correlated the area of the brain subjected to fMRI in each of these 7000 papers with the impact factor of the journal in which the paper was published.
The champion of the popularity contest was the presupplementary motor area (pre-SMA), defeating its nearest contender, the dorsolateral prefrontal cortex, by the considerable margin of 25%. Further lowering the frequency threshold to ‘half-a-pre-SMA’ revealed a network of brain regions commonly activated in studies of attention and executive function, including the frontal operculum and/or insula, and the intraparietal sulcus. The only intruders on this cognitive panacea were the hand area of primary motor cortex and Broca’s area, both in the left hemisphere only.
As Behrens explained it to Cooper, the most fashionable region is the anterior insula, the part of the brain that seems to be associated with empathy.  Further, studies of the parts of the brain that are associated with, in Behrens' example, how you might fall in love are more inherently interesting and thus more likely to be published in a top-tier journal than a study of which part of the brain is involved in controlling, say, movement of your little finger.

Behrens and company were also able to identify the least fashionable parts of the brain.  
Leading the way in ignominy was the secondary somatosensory area (Z = 4.4, P < 5 × 10^-6), but the supplementary motor area was almost equally disgraced (Z = 4.25, P < 5 × 10^-6).  Researchers unfortunate enough to find activity in these regions can expect to be published in a journal with approximately half the impact of their most celebrated colleagues (mean impact factors of approximately 5 compared with approximately 9).
They also looked at keyword associations with impact factor.  Words like emotion, semantic, reward, recognition, attention, face, explicit, and recall clustered together, as did execution, fixation, vibrotactile, inhibition, Stroop, saccades, covert and so on.  And, as the authors wrote, "We leave it to the reader to ... decide which set of words was positively correlated with impact factor, and which exhibited a negative correlation."
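We don't have Behrens and colleagues' data or code, but the flavor of the exercise -- comparing the impact factors of papers that do and don't use a given keyword -- is easy to sketch.  Everything below, from the field names to the toy records, is invented for illustration.

```python
# Toy sketch of a keyword-vs-impact-factor comparison in the spirit of the
# Behrens et al. analysis.  The records are invented; a real analysis would use
# thousands of papers and the impact factors of the journals they appeared in.

from statistics import mean

papers = [
    {"impact_factor": 9.2, "keywords": {"reward", "emotion", "face"}},
    {"impact_factor": 8.1, "keywords": {"attention", "recall"}},
    {"impact_factor": 4.9, "keywords": {"saccades", "fixation"}},
    {"impact_factor": 5.3, "keywords": {"vibrotactile", "inhibition"}},
    {"impact_factor": 7.8, "keywords": {"emotion", "recognition"}},
]

def mean_if_with_and_without(keyword, papers):
    """Mean impact factor of papers that mention the keyword vs those that don't."""
    with_kw = [p["impact_factor"] for p in papers if keyword in p["keywords"]]
    without = [p["impact_factor"] for p in papers if keyword not in p["keywords"]]
    return mean(with_kw), mean(without)

for kw in ("emotion", "vibrotactile"):
    m_with, m_without = mean_if_with_and_without(kw, papers)
    print(f"{kw:12s} mean IF with: {m_with:.1f}   without: {m_without:.1f}")
```

Something along these lines, done over 7000 papers and with proper statistics, is presumably what lies behind the rankings quoted above.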

fMRI of the less sexy primary visual cortex, 
extrastriate visual cortex and lateral 
geniculate body; Wikimedia
Cooper tried to argue that some of this apparent fashionability might actually follow function.  Any kind of study of the human body, he suggested, would focus on the more significant functions: the left side of the chest rather than the right side, say.  But Behrens reminded him that it isn't just that there's more research into the big questions, which there is; it's that papers about some of the big questions are primarily what get published in the fashionable journals.  Maybe, he suggested, because some topics are harder to sell to Radio 4.

Indeed, there is a broader issue, which Cooper also pointed out, and that is that there are certainly funding fads.  "Nano-science," "climate change," "translational medicine," and so on are perhaps buzzwords that help your chances of funding and publication in top-tier journals.  There have been times when it has been best not to include the word "evolution" in your grant title.

But who is setting the fashions and who is following?  Are journal editors, funding agencies, or technological innovation the trendsetters?  Scientists themselves?  Is the idea to game the system?  The rewards of publishing in top-tier journals are great, after all -- further funding, raises and promotions -- so, why not?  Anyone can do in their own field what Behrens et al. did with fMRI studies: figure out the high-impact keywords and follow the money.  Some people surely will and do make their next grant decisions this way, maybe particularly in big-money fields like genetics.

But, in fact, most people won't.  There were 7000 fMRI studies published, after all (whatever you think of the value of such work), most of them in lower tier journals, which means that a lot of people are interested in parts of the brain that are not fashionable (well, ok, fMRI itself is fashionable, in the fashionable field of neuroscience).  As long as there's money for a topic, someone's going to be studying it.  Some scientists must still be doing what they do for the love of the science.  Backwaters can have their appeal.  And scientists in those backwaters, self-selected to avoid the rapids as they are, might well be happier for it. 

Wednesday, January 16, 2013

Epigenetics -- what else don't we know?

Epigenetics -- a hot topic
Epigenetics* is a buzzword these days, in genetics, yes, but it is appealing enough that it is trending in general usage as well, for better or worse.  While the idea that factors other than changes in DNA can affect development is an old one, and the term 'epigenetics' was coined by C.H. Waddington in the 1940s, epigenetics is now more often taken to mean changes in gene expression that don't involve changes in the DNA sequence itself.  Generally this involves chemical modification of nucleotides, or of the proteins that package them, that causes a gene to be silenced, and many instances have been documented, pathological and not.

Epigenetic mechanisms. Source: Wikimedia Commons
But, much as, say, political scientists invoke genetics to explain why we vote the way we do, or economists to explain our economic behavior, people are invoking epigenetics to explain things like the 'culture of poverty' (due to epigenetic changes because of maternal malnutrition during pregnancy), or psychiatric diseases that seem to be genetic but for which genes have not been identified, or even homosexuality (due to epigenetic signaling that is usually beneficial to the parent, but can be transferred to a fetus of the opposite sex, affecting subsequent sexual behavior). 

Epigenetics is appealing because it's not strict genetic reductionism, and finding genes 'for' traits, particularly behaviors, has proven to be frustratingly difficult (well, at least when done properly), and yet epigenetics explains traits in terms of tractable biological markers, such as DNA methylation or histone modification.  Whether epigenetics is always correctly applied is another matter.

Epigenetics documented
But the actual, documentable science of epigenetics marches on.  A paper in the Dec 14 issue of Science ("Epigenetic Regulation by Long Noncoding RNAs," Lee) adds a new twist to the epigenetic story, addressing long noncoding RNAs and asking about their function.  Thousands of lncRNAs have been found in the last five years, and their functions are only beginning to be documented.

The ENCODE project has shown that 70-90% of the mammalian genome is transcribed, yielding a very large 'transcriptome' of long (defined as greater than 100 nucleotides) noncoding RNA (lncRNA), and the view that this 'pervasive transcription' is real seems to be emerging as the consensus (though see this post for differences of opinion).

ENCODE, again, has shown that there are about 10 "isoforms overlapping any previously annotated genes, thereby challenging the traditional definition of a gene."  That is, whereas the classic view of messenger RNA is that it is transcribed from coding regions within protein-coding genes -- which comprise only about 1% of the genome -- now it seems that most of the non-coding genome is also transcribed, and that the transcripts start and stop in unexpected places, given accepted wisdom.  These transcripts are "often poorly conserved, unstable, and/or present in few copies" and whether they always have a function is unknown, though Lee suggests that much of this RNA is involved in epigenetic regulation of gene expression. 

lncRNAs were first seen to play a role in genomic imprinting and in inactivation of the X chromosome (XCI), with the X-inactive-specific transcript (XIST/Xist) being among the first lncRNAs identified in mammals.  Males have one X chromosome and females have two, which means that genes on the X chromosome could be expressed twice as much in females as in males.  This doesn't happen, however, and lncRNA is part of the reason.
Nowhere is the abundance of lncRNA more evident than the X-inactivation center (Xic). To balance X-chromosome gene expression between males and females, the Xic on the mammalian X chromosome controls the initiation steps of XCI through a series of RNA-based switches. Today, the Xic serves as a model for understanding epigenetic regulation by lncRNA.
Xist codes for a long piece of RNA that is never translated; the RNA coats the inactive X-chromosome, which results in the silencing of its genes, though it is complicated -- Xist itself is regulated by two other lncRNAs, with downstream effects.  

Lee writes that
"[A]lthough lncRNAs now dominate the Xic, this region was once coding.  Evolution of random XCI 150 million years ago in eutherian mammals coincided with a shift from coding to noncoding space, suggesting that lncRNAs offer distinct advantages over proteins for some forms of epigenetic regulation." 
Why lncRNAs rather than the usual, ubiquitous regulatory elements that turn genes on?  LncRNAs are so long that each one effectively addresses a unique location in the genome.  Transcription factors, on the other hand, bind short DNA motifs that are often found in thousands of places in the genome.
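A back-of-the-envelope calculation shows why length buys specificity.  Assuming (unrealistically) a random genome with equal base frequencies, a sequence of length L is expected to occur by chance about (3 × 10^9) / 4^L times in a 3-billion-base genome: hundreds of thousands of times for a typical 6-base transcription-factor motif, essentially never for anything lncRNA-sized.

```python
# Expected number of chance matches for a sequence of length L in a genome of
# ~3 billion bases, assuming (unrealistically) random sequence with equal base
# frequencies: roughly genome_size / 4**L.

genome_size = 3_000_000_000  # ~3 Gb haploid mammalian genome

for L in (6, 8, 10, 100):
    expected = genome_size / 4 ** L
    print(f"length {L:>3} nt: about {expected:.3g} chance matches expected")
```

Real genomes are far from random, so these counts are only indicative, but the orders of magnitude make the point.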

Lee presents several classes of lncRNA: those involved in genomic imprinting, in which only one of the two inherited copies of a gene is expressed; those involved in non-allelically regulated loci; those that activate gene expression as well as those that repress it; and those found within pseudogenes, from which they silence or activate the still-functional form of the gene.  Lee concludes that much is yet to be discovered about these molecules.  "Indeed, the Wild West is a rich landscape waiting to unfold."

And it is unfolding at a fast rate.  E.g., two papers in last week's issue of Science document epigenetic involvement in embryonic stem cell pluripotency (here), and epigenetic influences on gene expression as a response to metabolic state (here).  Epigenetics is being over-applied, and too often improperly applied, by people who'd like a simple yet rigorous scientific explanation for a complex trait, but, like genetics itself, the field is developing to the point where it is possible to begin to sort fact from fiction.

What we don't know
But, just as single genes aren't the explanation for every trait or illness, epigenetics isn't going to be the explanation either.  Naturally enough, we always try to make complete stories from incomplete data -- before we knew about short RNAs or DNA methylation, we thought we understood gene expression.  But, every age has its 'normal science,' as Thomas Kuhn called it in "The Structure of Scientific Revolutions."  It's only when enough challenges arise to that normal science that understanding can broaden and build those challenges into a new and different picture.  The picture may change to encompass new data, but the sense that we actually understand does not; we always think we do.  We never know how much we don't know. 

------------------------------------
*Nice introductions to epigenetics:
Neuroscientist Kevin Mitchell nicely sums up the history of epigenetics, and its current trendiness -- and misapplication -- in a two-part blog series starting here.

Here's a quick YouTube discussion.

Tuesday, January 15, 2013

Not to defend fast food, but does it really cause asthma?

Here's a case for confounding variables if there ever was one: a new study suggests that 'fast foods' cause asthma, rhinoconjunctivitis and eczema.  Indeed, the BBC tells us that teens who eat fast foods three or more times a week are 39% more likely to have severe asthma than kids who eat three or more servings of fruit each week. Sounds pretty clearly causal, doesn't it?  Well, maybe not.

The results are from a long-standing international study of asthma, begun in 1991 because of concern over increasing asthma rates.  Called ISAAC, the International Study of Asthma and Allergies in Childhood, the study is based in New Zealand but is associated with centers all over the world.  Asthma incidence began to rise in the mid-1980s, and the trend has yet to be satisfactorily explained by environmental epidemiology.  And of course, many tens of millions of dollars have been spent looking for a genetic cause, without success.

An ISAAC study published in 2004 reported that eating hamburgers was associated with increased risk.  Other studies have found some association with maternal diet during pregnancy, as well as childhood diet. 

Results from the latest paper relied on questionnaires translated into 53 languages, and included more than 500,000 children from 54 countries.  Along with lifestyle questions, teenagers and the parents of younger children were asked about symptoms of asthma, watery itchy eyes and eczema.  And, quoting from the paper, they were asked the following about diet: ‘In the past 12 months, how often, on average, did you (did your child) eat or drink the following: meat; seafood; fruit; vegetables (green and root); pulses (peas, beans, lentils); cereal; pasta (including bread); rice; butter; margarine; nuts; potatoes; milk; eggs and fast food/burgers?’.  They could choose 'never or occasionally,' 'once or twice per week,' or '≥3 times per week.'

Covariates considered included exercise, television watching, maternal education, maternal smoking in the first year of life and current maternal smoking.

While the BBC story makes a case for fast food being the cause of asthma, rhinoconjunctivitis and eczema, the story is much less clear if you look at the details actually presented in the paper.  In teenagers, fruit was 'protective' whether eaten once or twice per week or three or more times a week, and
[m]ilk was inversely associated with current wheeze once or twice per week, severe asthma ≥3 times per week, VQ [current wheeze], current and severe rhinoconjunctivitis once or twice per week and current and severe eczema once or twice per week as well as ≥3 times per week for current eczema. Vegetable consumption was also inversely associated with current wheeze ≥3 times per week and VQ once or twice per week and ≥3 times per week.
So, milk was protective if consumed once or twice per week, but not if consumed three or more times per week.

'Risk factor foods' were butter, fast food, margarine and pasta eaten three or more times per week.  And seafood was positively associated with 'severe asthma, current and severe rhinoconjunctivitis and current and severe eczema. Butter, margarine, nuts, pasta and pulses [legumes] were positively associated with one or more conditions.'  If there is some consistent causal factor -- dairy fat, say -- why butter but not milk?  Why not eggs or meat, if it's just fat that's the risk factor?

In younger children (6-7),
For all centres combined, eggs, fruit, meat and milk ≥3 times per week were inversely associated with all three conditions, current and severe. Cereal ≥3 times per week was inversely associated with severe asthma. Vegetables once or twice per week and ≥3 times per week were inversely associated with current and severe wheeze as well as current and severe rhinoconjunctivitis and ≥3 times per week with current eczema.
So, eggs and meat are protective in younger children, whereas they aren't for the older children (aged 13-14).  Indeed, the only risk-factor food for younger children is fast food, and fast food is also the only risk-factor category the two age groups have in common.  Which, of course, could include a large variety of things.

To add to the difficulty of making sense of these data, "there is considerable variation for some foods between centres within countries."  Indeed, take a look at the odds ratios (ORs), the measure of the strength of association between exposure and outcome, for fruits and fast foods by age and country.  They are, by and large, all over the place.  The odds ratio for severe eczema among fruit-eaters in Mexico, say, compared with children who never eat fruit, ranges across centers from 0.15 to 2.0; that is, from much lower risk than never eating fruit to twice the risk.  The odds ratio for current wheeze with fruit in Spain ranges from 0.50 to 2.0.  The overall data may well suggest a trend, but it's not one that is reliably evident center by center, or even within each country.
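For readers less used to odds ratios: an OR compares the odds of an outcome in the exposed group with the odds in the unexposed group, computed from a simple 2×2 table.  The counts below are invented purely to show the arithmetic and to reproduce ORs of roughly 0.15 and 2.0; they are not the ISAAC numbers.

```python
# Odds ratio from a 2x2 exposure-by-outcome table.  The counts are invented for
# illustration only, not taken from the ISAAC study.

def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """OR = (a/b) / (c/d) for the standard 2x2 exposure-by-outcome table."""
    odds_exposed = exposed_cases / exposed_noncases
    odds_unexposed = unexposed_cases / unexposed_noncases
    return odds_exposed / odds_unexposed

# Hypothetical centre A: fruit-eaters have much lower odds of severe eczema.
print(round(odds_ratio(15, 985, 90, 910), 2))   # ~0.15

# Hypothetical centre B: fruit-eaters have about twice the odds.
print(round(odds_ratio(60, 940, 30, 970), 2))   # ~2.06
```

An OR of 1.0 means no association; the trouble the paper runs into is that the same food can sit well below 1.0 in one centre and well above it in another.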

So, apart from the question of why this story has been picked up by the media (well, we know why -- anything that indicts fast food has the potential to be sensational), how is it that these results can be all over the place like this?

For one thing, and we've said this numerous times in numerous posts, food questionnaires are notoriously unreliable.  Do you know how many times per week you consumed pasta in the past 12 months?  Or fruit? 

And, who is likely to eat more fast food?  Kids in cities.  Higher risk of asthma in cities has been documented over and over again.  It isn't clear why that is; allergy to cockroaches has been suggested, as has proximity to asthmagenic industry, though such exposure-based explanations sit awkwardly with the other widely held hypothesis, that too much cleanliness is causal (this is the Hygiene Hypothesis, which we blogged about here), and kids on farms have frequently been found to be at lower risk of asthma, for whatever reason.  Ok, it's possible that fast food is the reason, but it's also possible (and we'd say likely) that it's a confounding variable, a factor that stands in for an actual but unknown and unmeasured risk factor that kids in cities are more exposed to than kids in rural areas.  We aren't saying that's so, just suggesting it's a possibility; whether this study shows this rural/urban disparity is not clear.

The asthma epidemic is relatively recent.  Having fresh fruits and vegetables year-round is also relatively recent, so most grandparents and great-grandparents were disease-free for some other reason.  Unless, of course, fruit and veg are cancelling out the effect of fast foods in kids who eat both, which we can't tell from the report; but that does seem a rather contorted explanation.

It is not our purpose to defend fast food here, but really, how unhealthy is it, relatively speaking?  Yes, it packs a lot of calories, and for a lot of people that's not a good thing, but what else might be wrong with fast food?  And we are talking about fast food as a whole -- Big Macs, fries, fish sandwiches, milk shakes; everything lumped into one.  The authors write, 'Biologically plausible mechanisms for the relationship between fast food consumption and asthma and allergic disease could be related to higher saturated fatty acids, trans fatty acids, sodium, carbohydrates and sugar levels of fast food and possibly preservatives.'  Maybe, but if so, what have fast foods got that bacon, beef, any desserts, and so on don't?

And, the authors suggest that fruits and vegetables are loaded with antioxidants, which are said to be protective -- against almost everything.  Except that multiple large double blind studies have found that they aren't.  So, again, it's possible that fruit and vegetable consumption is a proxy for some other unidentified protective factor.  Or not. 

Given that the data used in this study are recall data, the diseases self-reported, the causal reasoning purely speculative, and that there's a possible problem with multiple confounding variables, we're going to go out on a limb here and suggest that, based on this study, the diet/asthma-eczema-rhinoconjunctivitis connection is a tenuous one at best.