Thursday, September 29, 2011

Neanderthals are Soxier than ever

With the mounting evidence that Neanderthals are just humans, it's easier than ever to compare them to the Red Sox.

With their large brows and their super-jock bods, we have generally held Neanderthals apart as a separate, brutish species. And under the immortal wicked assumption that brawn cannot also have brain, there is practically an industry built on lampooning some members of our own species in comparison.

Exhibit A. (So meta, really.)


Back in Damon's Sox days, the Red Sox even called themselves a "bunch of idiots." (Causing cavemen, who were not idiotic at all, to roll over in their museum drawers.)

But the more that Neanderthal genes are studied (and cross-your-fingers that these results really are based on ancient caveman DNA and not modern lab-rat dandruff), the harder it is to set them apart as a separate species.

That is, the more we know about their DNA, the more fossils that are found to bridge the gaps, and the more artifacts that are found to blur cultural differences, well the less sense it makes to consider Neanderthals as being any more different from other Paleolithic humans than the Red Sox are from the rest of us.

And so this raises the question: if Neanderthals are just humans, is it appropriate anymore to refer to Neanderthal-human sex as "interbreeding"?

If "the Neanderthals" were just like" the French" or "the Inuit" then describing their extra-population mating behavior as interbreeding would be something you'd probably only do while wearing a white pointy hat or holding your right arm out in front of you like a hemiplegic Frankenstein.

So we could keep calling Neanderthals Neanderthals...okay sure. But instead of thinking of them as a separate animal from neighboring humans, we could just think of them as a separate baseball team.

Neanderthals had their own look, their own strategies, their own traditions, but when they got together with other humans they understood them well enough to play the same games. They could hit a homer just fine, in both senses of the phrase.

This year's Red Sox couldn't have made a link to Neanderthals more complete. They just collapsed--shrouded in mystery with hardly anyone able to really explain why--while others with less muscle managed to go on.

Neanderthals are Soxier than ever. But even though the derogatory stereotype no longer holds, fans would probably flip that thought the other way around.

Further reading: 
http://www.theatlantic.com/entertainment/archive/2011/09/the-red-sox-werent-cursed-they-were-just-terrible/245717/

Parallel pillars point to "Paranthropus"

Are you lying awake at night wondering whether you should call robust australopiths "Australopithecus" or "Paranthropus"?

SK 48 - P. robustus from Swartkrans, South Africa
This is just what you need to help you fall asleep quickly. It's a recent article in PNAS called "CT-based study of internal structure of the anterior pillar in extinct hominins and its implications for the phylogeny of robust Australopithecus."

The researchers looked at the inside of traits (the anterior pillars on either side of the nose) that look similar on the outside in two South African species (A. africanus and its neighbor robustus) and determined that they're different. They conclude that the pillars are different because they develop differently, due to genetic differences, and so they are not the same structure in the two homs.

Without that link to africanus anymore, robustus does not need to be reconstructed as part of a South African lineage of Australopithecus. Instead, it can be appreciated for all its obvious, massively toothy similarities to another well-known robust species, boisei from East Africa. 

Before this study, based heavily on the anterior pillar link, it looked like robustus and boisei may have evolved in parallel or convergently from local africanus (South) and afarensis (East), respectively. This is why they were kept in Australopithecus but were always labeled as "robust australopiths." This study indicates that they did not evolve in parallel and supports the lumping of all robusts into one genus with a single (monophyletic) origin: Paranthropus.

But to even more strongly support that robustus and boisei evolved from a common ancestor--to support one genus for the "robust australopiths"--they needed to look at the internal structure of that same region of the face in some boiseis and girlseis and show us that the similarities there with robustus beat out the differences between africanus and robustus.

And they did that...how satisfying!

And guess what they found? The boiseis look like the robustuses on the inside even though the outside is different. 

Teeth and jaws are no longer parallels between East and South; instead, anterior pillars are a parallel within the South. The case is strengthened for lumping australopiths with the huge teeth and jaws into their own genus with a single origin. Where that origin took place is another question for another time...

Wednesday, September 28, 2011

Another metaphor for our time

Billboard seen in Arles, France

Homeopathic beer is totally watered down

Take a beer.

Sorry for the dilution scare.
Make it a great one like La Fin du Monde.

Take away all the beer molecules by diluting it like crazy.

Leave the vibrations behind.

Drink that water and you should have all the healing effects of real beer without all the calories or addictive effects.
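
For the arithmetically inclined, here's a minimal back-of-the-envelope sketch of why "take away all the beer molecules" really means all of them. It assumes a standard homeopathic "30C" potency (thirty successive 1:100 dilutions) and roughly a mole of beer molecules to start -- both numbers are illustrative assumptions, not anything off a real remedy label.

```python
# Minimal sketch: expected number of beer molecules left after a
# homeopathic-style serial dilution. The starting amount and the "30C"
# scheme (thirty 1:100 dilutions) are illustrative assumptions.

AVOGADRO = 6.022e23        # molecules per mole
STARTING_MOLES = 1.0       # assume roughly a mole of "beer molecules" to start
DILUTION_FACTOR = 100      # each "C" step dilutes 1 part into 100
STEPS = 30                 # a common homeopathic potency is 30C

molecules = AVOGADRO * STARTING_MOLES
for step in range(1, STEPS + 1):
    molecules /= DILUTION_FACTOR
    if molecules < 1:
        print(f"Expected count drops below one molecule at dilution {step}.")
        break

final = AVOGADRO * STARTING_MOLES / DILUTION_FACTOR ** STEPS
print(f"Expected molecules after all {STEPS} dilutions: {final:.1e}")
```

Somewhere around the twelfth dilution you're statistically out of beer; by the thirtieth you're left with water and wishful thinking.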

It sounds magical.

So if homeopathy really worked wouldn't somebody be selling homeopathic beer at Whole Foods by now?

Of course they would, but it doesn't work. You'd drink the "beer," not get a smidgen of relief from the human condition, and demand your money back. Diluteries all over the country would have to shut down before they knew what happened.

Then why do people still buy piles of homeopathic medicines?

Compared to beer, it's harder to gauge or track the effects of headache pills; if you take a pill for a headache, it's hard to tell whether your headache was relieved by the pill or just by your own body over time.  Homeopathic meds can also provide relief as much as placebos can (which can be effective!). So homeopathic medicines for headaches will keep selling, and will keep enriching the people who stock the shelves at Whole Foods with water pills.

Sadly, homeopathic beer would never work. Even though beer is just sweet sweet medicine.

Next time James Randi and friends stage an overdose party, maybe they should wash those pills down with some of it. Maybe they already do.

Sunday, September 25, 2011

Metaphor(um) in Tarragona

While Ken and I were in Catalunya last week, we spent a day wandering around the Roman ruins in Tarragona, once one of the largest cities in the Roman empire.  The current Old City is built within what was once a wall surrounding what was once the forum.

And it was fascinating: the remains of the amphitheatre, the wall around the city, the sports arena, the forum, the arches everywhere, leading from Roman centers to medieval alleys to modern streets.


It was also very evocative, for many reasons, not the least of which was to see Tarragona through the lens of evolution: the Roman wall, undoubtedly built on a site where the Spaniards of the day were living, to which a tower was added in medieval times, and which is now one wall of a modern museum, all seemed strikingly parallel to, say, the stem vertebrate whose arms were recruited for wings, and to which then perhaps pink feathers, long legs and neck were added on.  It is how one thing is built upon another in this particular history, changing or melding functions as time went on.
The amphitheatre was used for centuries as a quarry, with the stones removed and scavenged in recruitment for other purposes.  The hall in what was the forum was used to house prisoners from the early 1800s up through Franco's time, when hundreds were executed there.  Brutal perhaps, as though a civic building were recruited for evil purposes, but of course the Romans had prisoners too, and treated them brutally as well.  So this particular function didn't evolve as much as the rest!

The ruins are now a tourist site, used to feed many Tarragonans, and the city itself.  That's a modern use, yes, but tourists are even known to have visited the Egyptian pyramids when they were relatively new, and on into classical times, when the pyramids were as old to people then as Roman ruins are to us....so re-use for tourism isn't really new, either.

Flamingos, in La Camargue, southern France
Of course, genes may be conserved and used as they have been perhaps for eons, but they might also be recruited to new functions, or duplicated, or they can become non-functional pseudogenes.  And any trait or organ can evolve to include a new or additional function.

The famous analogy of the 'spandrels of San Marco', which stressed how one thing, originally with one function, can be recruited for another function, is somewhat similar.  But here in Tarragona, one culture finds different uses as well as building upon a previous structure's original uses.  The scavenging of blocks of amphitheatre stone to build later houses shows that sometimes the original use is simply discarded, more or less as a gene might take on a completely different, or additional, use over time.

In any case, it was a day for tourism that triggered thoughts about how life works.

Friday, September 23, 2011

Sperm, Meth, Rock'n'Roll

If you're interested in epigenetics at all--which  you should be if you're interested in how evolution works and if you're following any of this Lamarckian Renaissance--then you'll be interested in at least trying to read this paper: Sperm Methylation Profiles Reveal Features of Epigenetic Inheritance and Evolution in Primates by Molaro et al.

This is the "graphical abstract" and as you can see, all their findings are as crystal clear as their importance.

(That was sarcasm. Although, if you stare at this long enough it does start to make sense through the alien acronyms and jargon.)

The authors report, based on sperm studies, that methylation has evolved separately in chimps and humans and has diverged (as we'd expect) and they also explain how methylation changes can drive changes to the DNA sequence.

If you're interested in reading more into this topic, Eva Jablonka is just one scientist I know of who ascribes such evolutionary importance to methylation and maybe there are others (that I am unaware of because this is outside my area).  And she had a nice review in QRB about inheriting epigenetic changes.

Methylation as a force of evolution? Rock'n'roll!

Thursday, September 22, 2011

Taking the piss out of paleoanthropology

This is pretty cool.  John, Lee et al. are calling it the "Malapa soft tissue project" and (*gasp*) it's about finding the truth, a quest that unites all of us scientists and science-lovers, rather than finding your version of the truth that can be pitted against someone else's who may or may not have been mucking around in your territory.

Makes my proposal for "Blind paleoanthropology" that I outlined here seem not so ridiculous... maybe.

Update: After reading through the FAQ (scroll down to see it here), it sounds more like a new-fangled way to find collaborators rather than a new-fangled way of doing science. But as people join in, we'll see how this unfolds.

Wednesday, September 21, 2011

If it causes harm in mutant mice, should it cause alarm?



I clicked on the link to the article called "Is it folly to take folic acid?" and I'm betting many of you would too if you saw a headline like that. 


The provocative title reeks of those that tell us bananas might actually be poisonous or that we should beware the dangers of broccoli.

The story opens with an acknowledgment that pregnant women 'round the world are encouraged to take folic acid supplements to reduce risk of birth defects. The government even fortifies our food because it's so effective.

But wait, there's "a new mouse study [that] shows that folic acid supplementation can itself sometimes increase the risk of birth defects or even cause the death of embryos."

What? How could that be? The results are actually from experiments "in mice genetically predisposed to giving birth to embryos with neural tube defects."

So the lesson is not to stop taking folic acid supplements if you're pregnant; it's to stop giving folic acid supplements to mutant mice. 

Now tell me... why the scary headline next to the photo of the blissful pregnant woman? 

Because I clicked on it, didn't I?

Tuesday, September 20, 2011

New ancestors never get old


The media frenzy last week over the new Australopithecus sediba papers coincided with all of my new faculty events on campus.  Whenever I was asked to introduce myself and to describe what I teach and study, every response I heard had to do with the recent reports of fossil hominin discoveries from Malapa, South Africa.

"I just read about some fossils on msNBC. So that's what you're into?" … "How about those new finds! Wow!"

And although anything just as well preserved as the sediba stuff, but from 18 million years earlier (like, say, new fossil primates from Rusinga Island), would be greeted with relatively weak popping of eyeballs and with barely perceptible gaping of jaws, I'm pleased to be so quickly matched to my field of study. I'm delighted that paleoanthropology's so prominent in the media. Who doesn't want their passion to be bathed in limelight? (Besides independent music fans.)



But why does paleoanthropology seem to garner and even hog so much attention? This is a sometimes perplexing pattern in the science news, and here's a little back and forth between Ken and me inspired by this question.


Ken: What, if anything, is this [the new A. sediba finds and analyses] really saying that we didn't already basically know?

Holly: We don't know much about the time frame between 2.5 and 1.5 million years ago, and this is when a lot of changes take place that trend toward human-ness and away from ape-ness. The Malapa fossils (A. sediba) preserve both cranial and postcranial bits of single individuals, so proportions of brain size, body size, limbs, teeth, etc. can be studied. That, again, is a rare opportunity for the hominin fossil record of this time period, which is composed mainly of cranial and dental remains that, even with a well-preserved braincase for estimating brain size, don't come with associated postcranial or skeletal bits for putting that brain size in context. The Malapa homs had small, early hominin (read: ape) sized brains at a time in hominin history when we hypothesize and even expect hominins to have relatively larger brains. Many would also expect them to have larger brains given the modern aspects of their hands and pelvis. Furthermore, the skeleton—from the hand to the pelvis to the foot—shows a new mosaic of human-like, biped-like, Homo-like, australopith-like, and ape-like traits. Lucy (A. afarensis) is famous for showing a mosaic of these things, but the Malapa finds show a different mix. These finds are a window into the process of evolution, into the order in which traits like ours appeared in the past. And even if it's a tiny little momentary portal, it's still a peek!

Ken: Why is it that a single new find can 'revolutionize' our understanding of 'what makes us human' and all that BS? If we are so vulnerable to a single new find, then why do we insist on making such bold claims about what we currently have?

Holly: I think that some of it has to do with the colorful personalities of the people who are drawn to paleoanthropology—the scientists, writers, reporters, critics, fans etc. If you care so much about your fossil relatives, that’s pretty narcissistic isn’t it? I’m not sure new finds revolutionize anybody’s understanding of “what makes them human”  unless they’re new to paleoanthropology. First-timers to the fossil evidence are probably blown away and their world-views are changed, but long-time fans would probably describe their new perspective as hypothesis creation or story change. And yes, new finds do re-write the story of human evolution if your story includes details beyond “We share a common ancestor with chimpanzees about 6 million years ago and since then bipedalism and big brains evolved.” But even that story could be re-written if we find fossil evidence that bipedalism preceded the LCA (last common ancestor) with chimpanzees.

Why do we “make such bold claims about what we currently have?” I don’t think paleoanthropology holds the monopoly on this behavior. As we discuss on the MT quite a bit, this is part of the game of science and academia, of getting tenure and promotion, of getting grant money.

I also think that paleoanthropologists are simply excited about their finds and can’t help but share that with the rest of us. If that excitement draws science-deniers closer to the light, then that’s a huge bonus.

I think there are only so many ways to make sausage. I'm pretty sure that's not the right idiom, but I'm trying to say that there are only so many ways to garner interest for news reports on new fossil hominins... so they all start to sound the same. But if you read further and learn the ins and outs of paleoanthropology, then each new fossil is interesting in that informed context, from that informed perspective. If you do a search, all the phrases are the same for fossil news, but if you search for reports on raging fires you find just as much repetition/unoriginality. There are only so many words and phrases at our disposal.

Ken: Beyond the circus atmosphere of each successive 'revolutionary' claim, what do we basically know about our origins that seems solid?

Holly: Humans share a common ancestor with chimpanzees that lived between 8 and 4 million years ago (mya). It lived in Africa. It may or may not have been bipedal, but the evidence right now indicates it was not. Since that split, for four million years (until about 2 mya), hominin evolution took place exclusively in Africa (the record is largest from East and South Africa), during which time body size remained fairly small (with rare exceptions) and human-style bipedal traits accumulated in the skeleton. Teeth are rather large during that time, but they trend smaller as time passes, especially the canine. About 3.5 mya a separate lineage splits from the australopiths and from the lineage that ends in us. That separate lineage is the robust australopiths, who had large teeth and jaws used for a harder and/or tougher diet than contemporaneous hominins. The robusts lived only in Africa and disappear around 1 mya. The first cut-marked bones and stone tools show up around 2.5 mya in Ethiopia, signalling a change in hominin behavior/ecology/diet. As we discussed here, some much earlier cut-marked bones were found at another Ethiopian site 0.8 mya earlier. These isolated "firsts" are followed by a much more regular and ever-denser record of tool production and meat-processing after 2 mya. Just after 2 mya we find the first hominin fossils outside of Africa—in Georgia and Indonesia. The brain gets slightly larger, but not by much, at this time, and the skeleton, by about 1.5 mya, is close to ours except for the brain size (still about half to three-quarters of ours), tooth size (still a bit large), and the shoulder anatomy, which doesn't seem to have the full range of motion of ours (which would mean throwing wasn't "modern"). Since the first stone tools appear, there is a fairly steady increase in technological complexity to the present, although the Acheulean "handaxe" industry certainly persists for quite a while (perhaps a million years), suggesting that big bulky stone bifaces were a lot more useful than they appear. Alongside the technological development is ever-increasing body size, with variation present, but the maximum size increases over time. There is also ever-increasing brain size to the present. And this is all happening as we find hominins inhabiting distant and diverse landscapes.*

Ken: Or is this [ A. sediba] basically just another specimen, even if it's interesting?

Holly: Nope. It’s not just another specimen. We don’t have many skeletons let alone skeletons with cranial remains and teeth associated with them. Let alone more than one skeleton associated, giving us some idea of variation within a species, or at least a family. These fossils are truly extraordinary and deserve all the fanfare.

Do all new hominin fossils deserve so much fanfare? I'm sure answers vary widely person-to-person. Some of us hyperventilate over half a molar; others could never hear of a fossil their whole life and die incredibly happy. I don't go ga-ga when stock prices go up, I don't swoon when I read headlines that Brad dumped Angelina, I don't drop everything when Marc Jacobs makes new pants, but all reports of those stories certainly come off just as obnoxious as those about new hominin fossils. It's just show biz. And I'm fine with show biz if it gets people interested in evolution, which can snowball into them learning about evolution. I love that new fossils even make the news when there are stocks, and celebs, and fashion shows hogging so much of the attention.



*I left out so many established/cool details because this post is already too long.

Friday, September 16, 2011

Visca Barça!

Ken and I are off for the next 2 weeks.  He's giving a talk on Monday in association with the Universitat Pompeu Fabra in Barcelona (details here, if you happen to be in Barcelona and have a free hour).  But, Holly's got a lot planned, and MT is sure to be lively for the next 2 weeks. 

Thursday, September 15, 2011

Evolution as ideology?

For individuals, there are no life-or-death consequences to denying evolution or global warming.  Your worldview may help you orient your life, comfort or intimidate you, or whatever.  But (in our country, at least) you don't go to jail for being stupid, for not accepting the truth of evolution (or for being too zealously deterministic about how evolution works).

Society may gain if people adhere to some coherent ideology, even if it's contrary to facts that are already known.  After all, society functioned up until 200 years or so ago, without any knowledge of evolution in the modern sense.  The Greeks did perfectly well before Christ was born, and the Christian world did perfectly well before Mohammed or Joseph Smith came along to claim to provide correctives.

Of course, 'perfectly well' means society functioned.   They had awful murder and mayhem, disease and unfairness and dishonor, as well as good things in life.  But there's no evidence that society does better with one ideology than another--on a general level.

Even in a purely religious state, that denies evolution of species, health care research on the 'evolution' of antibiotic resistance can go on, and so on.  Things might be a bit different in regard to global warming, because the future may well depend in various ways on whether we act according to warming warnings or not.  But even if we are climate ostriches, society will adjust to the disruptions and trauma.  We've always had disruptions and trauma.

So, if we're not just being tribal about our own ideology, what is the point of railing against those who don't believe in evolution?  That such is a benighted way to view the world, even willfully so since the facts are there for anyone to see, shouldn't upset us....unless it's because we feel threatened by that particular tribalism (remembering, for example, witch trials and auto da fe, done in the name of religion).

When one takes a broader anthropological perspective, is railing against evolution any more harmful to people's lives than hating the Yankees?

Of course, for those of us who try, at least, to understand the real world as it actually is, including the fascinating facts of evolution, those beliefs are important to cling to, at least for personal edification.  And if we make our living that way, it's not so different from making a living as a minister....

Wednesday, September 14, 2011

The Individual and the Group

Doctors treat patients one by one, but public health is about the whole population, or at least subsets of it treated as aggregates.  This is something we've touched on before, including here this week.   Generally, the latter is a higher-level abstraction of risk relative to individuals.  But doctors are not particularly trained in how to apply aggregate data to individuals, or at least there are important, often subtle differences between the two perspectives.  An article at the Huffington Post nicely discusses this from a physician's point of view.

But the same issues may apply to evolution.  A given trait, such as presence of some condition, like brilliance of feather color, or some level of a quantitative variable like blood pressure or stature, may have a net or average reproductive success, and such success rates may vary by the value of the trait.  The success rate can be a matter of chance or may be due to systematic functional effects of the trait value on reproductive success; the latter case is what we mean by 'natural selection'.  We view species today as aggregates, but each individual has its own trait value, so the distinction between population and individual is important on both contemporary and evolutionary time scales, and in similar ways.

In evolution, our models generally ascribe a relative fitness to a given trait, compared to the variation in the rest of the population.  This is because the frequency of trait values in the future depends on what is transferred from this generation to the next.  But in public health, things are somewhat different.  Values (judgments, treatment decisions, and so on) are made on individuals and for individuals, without regard to their effect on the future or on the whole population.  Your doctor treats your tuberculosis, not the population's.

However, in both cases there are risks, or probabilities, involved.  In evolution, what is the probability of having a particular number of children for someone with a given trait?  Note that we refer to traits, not genes -- the effect on contributing genes is an indirect result of what happens to individuals who bear them.  In medicine it is the risk of getting a particular disease for someone with a given level of exposure to some risk factor, or of a given response to therapy.

But while we need individual predictions, and evolution selects on individuals, risks are estimated from populations.  So things are a bit circular, or at least not straightforward.  This provides much to think about.  Risks that are large are easy, and behave just as Statistics 101 said they would.  But risks that are small are hard to distinguish from chance, or from other small risks, and that is not so easy to deal with. Unfortunately, small risks are often what we most have to deal with, both in medicine and evolution.

What about risks that are hard to detect, or perhaps even impossible, such as the risk of cancer from dental x-rays?  Should you avoid such exposures because radiation is clearly proven, at the cell level, to cause mutations and mutations can cause cancer?  Who decides what an acceptable risk is, or the statistical criteria for saying that there is, in fact, a risk?  We usually use significance tests for this, but they're subjective judgments.

In evolutionary terms, change due to selection accumulates over generations, but so do chance changes.  If selective differences between contributing genotypes are very small, chance (genetic drift) can be a major force for change.
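
To make that concrete, here's a minimal sketch using a toy Wright-Fisher-style simulation; the population size, starting frequency, and selection coefficients are illustrative assumptions, not estimates of anything real. The point it illustrates: when the selective difference is tiny, an allele's fate looks essentially the same as under pure drift.

```python
# Toy Wright-Fisher simulation: how often an allele starting at 5% is lost,
# under pure drift versus weak or stronger selection in its favor.
# All parameters here are illustrative assumptions.
import random

def fraction_lost(pop_size, generations, start_freq, s, replicates, seed=1):
    """Fraction of replicate populations in which the allele disappears."""
    rng = random.Random(seed)
    lost = 0
    for _ in range(replicates):
        freq = start_freq
        for _ in range(generations):
            # selection: weight the favored allele by (1 + s)
            w = freq * (1 + s)
            p = w / (w + (1 - freq))
            # drift: binomial sampling of 2N gene copies for the next generation
            copies = sum(rng.random() < p for _ in range(2 * pop_size))
            freq = copies / (2 * pop_size)
            if freq in (0.0, 1.0):
                break
        if freq == 0.0:
            lost += 1
    return lost / replicates

for s in (0.0, 0.001, 0.05):
    result = fraction_lost(pop_size=100, generations=200, start_freq=0.05,
                           s=s, replicates=100)
    print(f"selection coefficient {s:>5}: allele lost in {result:.0%} of runs")
```

In runs like these, s = 0.001 behaves about the same as no selection at all; only the much larger selective difference reliably changes the allele's fate.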

And what about even large risks?  We've mused on this before.  If the risk that someone with your cholesterol level will have a stroke is, say, 15%, does that mean a die is going to be rolled with 6 sides, and if it comes up 6 you're a goner?  Or does it mean that 15% of people like you are certain to have a stroke, and the others just as certain not to (but that we don't know how to tell who's who)?  Public health, or aggregate perspectives in evolutionary biology, don't concern themselves with this, even though of course they know that everything happens to individuals.
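
Here's a minimal sketch of why those two readings are so hard to pull apart from aggregate data alone; the 15% comes from the example above, while the population size and the coin-flip model are illustrative assumptions.

```python
# Two readings of "your risk of stroke is 15%":
#   (a) everyone shares the same 15% chance, and chance decides who is hit;
#   (b) an unidentifiable 15% of people are certain to be hit, the rest never.
# Illustrative sketch; everything except the 15% is an assumption.
import random

rng = random.Random(42)
POPULATION = 100_000
RISK = 0.15

# (a) shared-risk model: each person independently "rolls the die"
shared_risk_cases = sum(rng.random() < RISK for _ in range(POPULATION))

# (b) hidden-subset model: a fixed 15% are destined to be affected
hidden_subset_cases = int(POPULATION * RISK)

print(f"Shared-risk model:   {shared_risk_cases:6d} cases "
      f"({shared_risk_cases / POPULATION:.1%})")
print(f"Hidden-subset model: {hidden_subset_cases:6d} cases "
      f"({hidden_subset_cases / POPULATION:.1%})")
# The population-level counts come out essentially identical, which is why
# aggregate statistics alone can't say which picture holds for any individual.
```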

But doctors and those who are arguing that some particular genetic variation is important in evolution have to think about the individual level.

Monday, September 12, 2011

Other perspectives on causation--genetic or otherwise

There's a thought-provoking article, "Epidemiology, epigenetics and the 'Gloomy Prospect': embracing randomness in population health research and practice," in the August issue of the International Journal of Epidemiology (IJE) by George Davey Smith, one of the smartest, most thoughtful -- not to mention prolific -- people in the field of epidemiology these days.  He's in the School of Social and Community Medicine at the University of Bristol in the UK, and the paper is the published version of the 2011 IEA John Snow Lecture which he gave at the World Conference of Epidemiology in Edinburgh this past summer.  In the paper he addresses some of the same issues of causation that we often blog about (most recently here) and publish elsewhere (e.g. here, in Genetics, and here, in the IJE), and in doing so he touches on what we think is one of the most overlooked actors in much of life: randomness, or chance.

George specifically addresses epidemiology, the field of public health that has to do with the understanding of patterns of disease, ideally so that public health measures can be instituted to prevent disease outbreaks.  But, his points are equally applicable to many other areas, certainly including genetics.

Epidemiology is a field that uses population-level data to understand disease in aggregate.  This is how risk factors like smoking are discovered, and how events such as food poisoning epidemics or outbreaks of cholera are explained.  And the field has a long history of success in explaining many disease outbreaks, and identifying many significant risk factors for which public health measures (clean water, e.g., or anti-smoking campaigns) have been implemented.

This is all well and good, and perhaps useful to policy makers.  But, as with genetic studies, the amount of variation in risk that's explained by epidemiological studies is often small, and the usefulness of the population-level approach is limited when it comes to predicting outcomes for individuals, for them or their doctors to optimize their chances of avoiding nasty diseases, and this is the subject of Davey Smith's paper. He uses Winnie Langley as his example; she smoked for 95 years -- why didn't she get lung cancer?  (The actual provenance of these photos isn't clear because they're all over the web, but we got this one here, and the one below here.)

The purpose of epidemiology, as a branch of public health, is to identify causes of disease that can be eliminated or attenuated, to prevent disease.  This is a lot easier when the causes have major effects. Indeed, epidemiology, like genetics, is most successful at dealing with causes with large effects such as infectious agents, cigarettes, or obesity, the equivalent of genes for diseases such as cystic fibrosis or Tay-Sachs or the periodic paralyses.  Though, a major difference is that clearly genetic diseases are much rarer than diseases with widespread environmental causes.  But the point is the same -- current methods in both fields are much better at finding causes that pack a wallop.  Even those, such as dietary salt or cholesterol, are not as straightforward as their public image.

Can the risk factors that epidemiologists or geneticists do identify be translated into predicting who will or will not get sick?  Not definitively in either case, although some rare alleles, such as for Huntington's or PKU, come close.  In general, however, the answer is no -- despite what direct-to-consumer genetic testing companies would like to sell you.  At least, the probabilities are usually low, and the estimates of those probabilities not very stable or precise, since many factors, including changeable environmental exposures, affect what a given genotype may do.  We've written a lot about why what we know about evolution means this must be true, and after much discussion in his paper of why this is so in epidemiology, Davey Smith makes the same point.

Most epidemiological research, like genetic research, is based on the belief that if we just identify more risk factors/genes, we'll be able to account for enough of the variance in risk of our favorite disease that we will be able to predict who will get it.  Genetic epidemiology, 'life course epidemiology', social epidemiology, and so on, are all attempts to expand the universe of risk factors such that eventually the field captures them all, from the uterine environment to old age.

But, as Davey Smith points out -- and we think it's fair to say, as we've pointed out numerous times over many years ourselves -- there is much too much randomness in life to ever reach this goal, even assuming all those replicable risk factors people are now looking for could be found.
The chance events that contribute to disease aetiology can be analysed at many levels, from the social to the molecular. Consider Winnie; why has she managed to smoke for 93 years without developing lung cancer? Perhaps her genotype is particularly resilient in this regard? Or perhaps many years ago the postman called at one particular minute rather than another, and when she opened the door a blast of wind caused Winnie to cough, and through this dislodge a metaplastic cell from her alveoli? Individual biographies would involve a multitude of such events, and even the most enthusiastic lifecourse epidemiologist could not hope to capture them.  Perhaps chance is an under-appreciated contributor to the epidemiology of disease.
He nicely dismantles the idea that siblings' shared environments will be a major clue to risk of most diseases, because, for one thing, it turns out that we share about as much with our siblings as we do with people who grow up in other households.  In large part this is because chance or stochastic events are much larger components of what happens to us than generally assumed.  Current methods tend to allow for statistical noise, but not for the essential role that chance plays in our lives, from the cellular level on up.  This has long been known, but scant attention has been paid to it by reductionist sciences like epidemiology and genetics.

Davey Smith points out that epigenetics is the current fad, based on the hope that by finding epigenetic mechanisms we'll soon be able to explain what now just looks like chance, but that this is a false hope.  He makes further points in this long paper, including offering an evolutionary explanation for the centrality of chance in life (it's advantageous to have a variable genotype given that environments are changeable), and so on.

Davey Smith concludes that the purpose of epidemiology after all is not to predict the fate of individuals but to provide population-level statistics.
For our purposes, it is immaterial whether there is true ontological indeterminacy—that events occur for which there is no immediate cause—or whether there is merely epistemological indeterminacy: that each and every aspect of life (from every single one of Winnie’s coughs down to each apparently stochastic subcellular molecular event) cannot be documented and known in an epidemiological context. Luckily, epidemiology is a group rather than individual level discipline, and it is at this level that knowledge is sought; thus averages are what we collect and estimate, even when using apparently individual-level data.
The point of the discipline is to "provide simple, understandable and statistically tractable higher-order regularities".

We're with George up to this point.  Indeed, when epidemiology can point to causes that public health measures can deal with (clean water, window screens, vaccination campaigns) -- that is, population-level causes that are amenable to population-level controls -- it has done its job, and done it well.  But why hasn't environmental epidemiology explained the asthma epidemic satisfactorily?  Even with population-level data.  And why don't the large population-level studies of hormone replacement therapy, or of calcium and vitamin D, yield the same results?  Again, this is equivalent to the failings of GWAS (genome wide association studies).  And who can predict heart disease in the future, when so many cultural changes are under way, involving the dynamics of lifetime exposures to risk factors known and unknown?

Part of the problem is that main effects can differ among populations -- even assuming we agree on what a 'population' is and how one defines and samples it, and that the population-specific effect is not due to changeable population-specific environments.  The ApoE 4 gene variant is associated with Alzheimer's disease in European-derived populations, but much less so in African Americans, for example.  And the same risk variant, which is relatively infrequent in humans, is the standard in our close primate relatives.  Causation is relative, even when strong.  So even the population-based view of epidemiology is often problematic.

There is another point about randomness.  Sometimes, what we mean is that there is a distribution of probabilities of outcomes, as in 1's or 6's in rolls of dice.  There, we know that one has a 1/6 chance of a specific result, the probabilities (risks, in this context) are known and predictable, even if each individual's outcome isn't specifically knowable in advance.  But many chance ('random') factors have no such underlying theoretical distribution of this kind -- the probability you'll be struck by lightning, or that some part of some artery will be clogged by cholesterol plaque.  Dealing with that kind of randomness is far more problematic, yet that is likely to be the major role of probabilism.  In that case, all we can do is estimate risk from past experience and hope the same applies to the future....but we know, in changeable environments, that it won't.

The same kinds of statements apply with even more force when we're trying to infer evolutionary history and how today's genes and their effects got here.  It is a humbling lesson that is difficult to accept, even if the evidence for it is very strong.

As for Winnie, she may not be that much of an outlier after all, perhaps in fact confirming that epidemiological methods can work when it comes to risk factors with large effects.  She may have smoked all her life, but she said she was too poor to smoke more than 5 cigarettes a day, and after turning 100, smoked only 1.

Thursday, September 8, 2011

Get thin through social stress!

Brown fat burns energy, and leads to weight loss.  White fat stores energy and leads to weight gain.  Unfortunately for people trying to lose weight, brown fat is primarily found in babies or in adults after chronic exposure to extreme cold.  Indeed, figuring out how to turn white fat to brown is apparently the ultimate quest in obesity research.

But now researchers at Ohio State think they're on to something.  As reported here, a paper in the September Cell Metabolism describes a study that enriched the environments of lab mice.  They put 15-20 mice together in large containers that included exercise equipment, tunnels, toys, huts, a maze, and unlimited food and water.  Control mice, on the other hand, were kept in the usual lab environment; 5 to a small cage, no toys or treadmills, unlimited food and water. 
Key findings include the following:
* Enriched animals showed a significant reduction in abdominal white fat mass (49 percent less than controls).
* Exercise (running in a wheel) alone did not account for the changes in body composition and metabolism of enriched animals.
* Fed a high fat diet (45 percent fat), enriched animals gained 29 percent less weight than control mice and remained lean, with no change in food intake. Enriched animals also had a higher body temperature, suggesting that greater energy output, not suppressed appetite, led to the resistance to obesity.
The researchers interpret this to mean that the enriched social environment triggers the "hypothalamic-adipocyte axis", which in turn triggers the production of "brown-fat-like cells" within white fat.  The researchers report that the protein responsible is BDNF, or Brain-Derived Neurotrophic Factor.  According to a story on the BBC website, the idea is that the complex social environment, things to do, friends to do them with, is challenging, and maintaining a network of friends is stressful -- but in a good way.
This stress is a positive thing for the body, prompting it to produce BDNF and convert the white fat to brown, preventing it building up and causing obesity. 
(Though it is curious that recent research has suggested that it's our social networks that make us fat.  If we have friends who are overweight, we're likely to be overweight as well.  Apparently we need to choose our social networks carefully.  Just any old social stress won't make us thin.) 

The lead author is quoted as saying that he "hopes that his research will help to communicate the impact of social psychology on health in the future, and promote lifestyle changes as an alternative to highly invasive gastric surgery."  Though, he is a neuroscientist, and apparently is working on a way to stimulate BDNF production surgically.

Stripped to the biophysiological bare bones, this sounds potentially promising for people who are trying to lose weight but can't. Of course lifestyle changes are often hard for people to make, and harder to maintain.  Still, it's poignant that the interventions being contemplated as a result of this study are medical -- surgery and/or pharmaceutical ways to trigger BDNF production, rather than the obvious one; "enriching" our lifestyles to include the changes that are already well-known to reduce obesity.

Of course, whether any genes are involved in these responses, or whether this has any bearing on evolution, is unlikely to be seriously important, but one can predict that, if this finding is taken seriously, geneticists will be quick to jump on it....for reasons easy to see, and whether that's a good way to spend research resources or not.

Wednesday, September 7, 2011

Science run amok?

Yesterday's post on the scientist on autopilot touched on the subject of scientific publishing.  We wanted to follow up on our suggestion that, at least in a technology-driven field like genetics, it might be a problem if scientists are thinking less and publishing more.

We scientists, especially those whose jobs or even salaries depend on grants, have found self-interest (at least in the short term) in a proliferation of data, technologies, and journals (and headline-blaring public media).  They allow us to compete for funds in the current system as it's developed, because they give both the reality and the illusion of prolific productivity.

Several recent pieces in The Guardian have discussed the ethics and economics of modern scientific publishing.  The first was a piece published on Aug 29 by George Monbiot, "Academic publishers make Murdoch look like a socialist", and it reverberated through the internet.
Who are the most ruthless capitalists in the western world? Whose monopolistic practices make Walmart look like a corner shop and Rupert Murdoch a socialist? You won't guess the answer in a month of Sundays. While there are plenty of candidates, my vote goes not to the banks, the oil companies or the health insurers, but – wait for it – to academic publishers. Theirs might sound like a fusty and insignificant sector. It is anything but. Of all corporate scams, the racket they run is most urgently in need of referral to the competition authorities.
His point is that the public pays to generate the knowledge, and then must pay to see the results. Accessing even a single article in a journal online if you don't have access through an institution can cost you $30-40 or more.  And, university libraries pay thousands of dollars, sometimes tens of thousands, to subscribe to these journals. 
What we see here is pure rentier capitalism: monopolising a public resource then charging exorbitant fees to use it. Another term for it is economic parasitism. To obtain the knowledge for which we have already paid, we must surrender our feu to the lairds of learning.
The 'solution', open-access journals, is too often exorbitantly expensive for authors (they can charge on the order of $2-3000), which of course is yet another way that the public pays, through grants that pay the publication fees, for science.  And it has done nothing to knock the 'top-notch', or 'high-impact' journals off their monopolistic perches.  The bean counters at universities still reward publication in Nature or Science or Cell over publication in the PLoS journals.  And scientists still turn to the 'top-notch' journals first.  Monbiot heavily criticizes publishers for taking extreme advantage of their monopoly.
The returns are astronomical: in the past financial year, for example, Elsevier's operating profit margin was 36% (£724m on revenues of £2bn). They result from a stranglehold on the market. Elsevier, Springer and Wiley, who have bought up many of their competitors, now publish 42% of journal articles.
They can't be blamed, really, since they're in business to make money.  But we in the profession, not they, have allowed the system to develop in this way.

Monbiot's solutions?
In the short term, governments should refer the academic publishers to their competition watchdogs, and insist that all papers arising from publicly funded research are placed in a free public database. In the longer term, they should work with researchers to cut out the middleman altogether, creating – along the lines proposed by Björn Brembs of Berlin's Freie Universität – a single global archive of academic literature and data. Peer-review would be overseen by an independent body. It could be funded by the library budgets which are currently being diverted into the hands of privateers.
The system is being challenged, as Ben Goldacre writes in his Guardian column, Bad Science. "Digital activist" Aaron Swartz at Harvard's Centre for Ethics has apparently attempted to download millions of archived journal articles, to which he would then allow free access.  He has been "accused of theft of intellectual property on a grand scale."

Totally free access may be ethically appealing, but it's not sustainable.  Someone has to pay to publish science, either the producer or the consumer.  Monbiot would say that the problem is that the publishers charge way too much, as evidenced by their high profit margins.  Indeed, most of the work of publishing a scientific paper is done by unpaid reviewers, with the editing done, unpaid, by the author for the 'privilege' of then paying the publication charges.  Open access, in which the authors pay to publish, might seem to solve the problem of cost to the consumer, but fees are high, and can compromise the integrity of the whole endeavor.  And the scientists will pay the cost from their grants, not their own pockets.

David Colquhoun in a piece in Monday's Guardian comes at the issue from a different angle.  He says that peer review doesn't work very well anymore, given the vast quantity of journals and papers now (he cites this statistic: "an estimated 1.3 million papers in 23,750 journals in 2006"). 
The blame for this sad situation lies with the people who have imposed a publish-or-perish culture, namely research funders and senior people in universities. To have "written" 800 papers is regarded as something to boast about rather than being rather shameful. University PR departments encourage exaggerated claims, and hard-pressed authors go along with them.
The result is an equally vast quantity of poor papers and bad science, and, as he says, the only beneficiaries of the system as it now stands are publishers.  The administrators and grant system can't be blamed, as we allowed this system to develop over the past 40 or so years, because it snuck in gently when money was plentiful and scientists less so.  Once administrators saw the overhead and other score-counting opportunities, they naturally seized on them.  And with exceedingly competitive grant funds, and an unwillingness to be realistic and fund more or less randomly, we pretend to use criteria like publication counts as evidence of quality.

Colquhoun suggests that a solution is for authors to publish their own papers on their websites and open them up for comments, though he believes this is unlikely to work, as Nature tried this a few years ago and it fell flat.  So instead he proposes a quota system:
I'd suggest scientists should limit themselves to an average of two original papers a year. They should also be limited to holding one research grant at a time. Anyone who thought their work necessitated more than this would have to be scrutinised very carefully. It's well known that small research groups give better value than big ones, so that should be the rule.
And then, he says, the papers might actually be read rather than just counted.

There's no question that quality and quantity are conflated, but as long as the pressures on scientists are to bring in overhead (er, to do important research), and to publish -- anything -- rather than on quality, the system isn't going to change. But, quality could be better ensured by making scientists responsible for the claims they make in their grant applications and publications, setting limits on the number of grants a researcher can have, even a lifetime limit, reducing the amount of overhead universities are allowed to charge, or other kinds of caps and limitations.

Also, we can hardly tell, with the proliferation of results, which are going to be important (whatever that means), what of each paper is grain rather than chaff, and which are junk. Here we're not referring to fraud and fabrication. But when we haven't the time or ability to judge the details, how can we judge the actual importance? Replication is one good way, but it can these days rarely be done very convincingly. And how can we recognize the unsung gems hidden in the morass of legitimate, but incremental, if over-claimed, results?

Of course, history decades from now will make some of these judgments. It may not recognize the gems that should have been followed up, however. So there are many things to think about--and that's hard to do in a system that forces us to flood the market and hence systematically takes away the time we have to do that kind of measured reflection.

Tuesday, September 6, 2011

Science on autopilot

Are scientists' science skills getting rusty?

A new study has shown that in recent years aircraft accidents, including serious fatal ones, have been due not to malfunctioning of the plane but to mistakes by the pilots.  Not because they're asleep at the wheel, but because they've no longer been trained at the wheel.  The pilots are given less and less time flying planes, and more and more training using the autopilot, the onboard computer that does everything for them, so they can flirt with the flight attendant (well, that's illegal now, so maybe they read comics).  Computers are taking over where thought and judgment turn out to be better.
"What happens is, you don't actually hand-fly or manipulate the controls, whether it's a control yoke or a sidestick controller," Hiatt said. "Therefore, your computer skills get greatly enhanced, but your flying skills start to get rusty."
How is this relevant to science?  In our field, it's highly relevant, at least conceptually.  We are moving, or being pushed, more and more towards collecting some samples (blood or tissue samples), processing them with a robotic instrument to extract DNA or protein or something of that sort, sending that off to an even bigger, more expensive, and more wholly automated instrument that will generate far more data than a person can actually look at.

Then, this mass of results is fed into some ready-made, often online computer software, often on a large network of computers somewhere 'out there', which then sends back ready-made results files.  DNA sequencing and genotyping provide one example.  Each blood sample from a single person, or leaf from a single plant, is processed to extract DNA, the DNA is sent to a microchip-based sequencer or genotyper, and the millions of results are processed by a computer to sieve out unreliable 'calls'.  Then this file is sent to a genome browser or other high-data-throughput device that summarizes the results.  So how many variable sites are there between the two copies of the genome in this person, or among the persons whose DNA was submitted?  How many of those are in the Human Genome or Mouse databases?  What is each of their variant frequencies?  The variants are then statistically analyzed, again by sophisticated if 'canned' statistical packages, to find those that are associated with each other in the sample, and those that are associated with some trait, like a disease or a crop plant's yield.
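
As a purely hypothetical illustration of the kind of canned step buried in that pipeline, here is a minimal sketch of sieving variant calls by a quality score and tallying what survives. The record format, the cutoff, and the data are invented for the example; they are not taken from any real instrument or software package.

```python
# Hypothetical sketch of one automated step in a genotyping pipeline:
# sieve out low-confidence variant calls and tally what's left.
# The record format, quality cutoff, and data are invented for illustration.

MIN_QUALITY = 30  # an assumed Phred-style confidence threshold

# (chromosome, position, reference allele, alternate allele, call quality)
raw_calls = [
    ("chr1", 10177, "A", "AC", 45),
    ("chr1", 10352, "T", "TA", 12),   # low confidence: sieved out
    ("chr2", 20301, "G", "A", 88),
    ("chr2", 20411, "C", "T", 29),    # just under the cutoff
    ("chr3", 30842, "T", "C", 67),
]

filtered = [call for call in raw_calls if call[4] >= MIN_QUALITY]

per_chromosome = {}
for chrom, *_ in filtered:
    per_chromosome[chrom] = per_chromosome.get(chrom, 0) + 1

print(f"{len(filtered)} of {len(raw_calls)} calls pass the quality filter")
for chrom, count in sorted(per_chromosome.items()):
    print(f"  {chrom}: {count} variant site(s)")
```

The point isn't these few lines; it's that choices like that cutoff are usually made upstream, by someone else's defaults, and most of us never look under the hood.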

The blurry-eyed scientist at the end of this process stares at masses of results for hours. Then s/he writes up something to describe the findings for a journal, leaving gobs of data out of the paper but submitting them along with the main paper as Supplemental material that, when the paper is published, will be available online at the journal's web page.

But this is not all.  Now, the journal's web page is a centralized reviewing program that helps the editors assign reviewers, process the manuscript, receive the reviews, and notify everybody of the status of the paper and the reviews and the revisions.  Not only that, the author gets automated messages about the paper's status, and even (of course) instructions as to how to pay the publication cost.

Open access online journals do have their upside.  They are open access, for one thing, so that, in principle, results are accessible to far more people than fee-for-access and/or print-only journals.  And publication is generally faster than for print -- so all our cutting-edge results are out there as quickly as possible.

But when the author pays for publication, it's hard to see that the quality of the journal isn't compromised by the conflict of interest this creates for the publisher.  It's a lovely you-scratch-my-back-I'll-scratch-yours system, serving the insatiable need for investigators to publish, and the need of the publisher to turn a profit.  

In some ways this system does not make the science--DNA sequencing and analysis--bad, because something so intricate may be better handled by software and hardware developed by narrow but highly technical professionals.  But ordinary scientists--most of us--might be much more likely to blunder through mistakes of all kinds that would be difficult for readers, who haven't the time, technology, or talent, to detect.

The data may become part of public data bases (a very positive trend in life sciences) but may have layers of errors in them, undetected by the scientist.  Whether or not the interpretation is correct, probably the centralized technology does at least constrain errors up to a point -- but not completely.

But investigators are in a sense losing the primary skills that could be important in better designing and carrying out studies.  Like students who no longer know how to do multiplication, logarithms and the like by hand, most biologists know how to manage equipment but not how to do the primary work, such as writing computer programs specifically tailored to the problems they are working on.  (Many lab methods have become so automated, or off-the-shelf, that scientists may not even know how to do simple things like pour gels or make buffers anymore, either.)  This can force analysis into what some large lab's programmers decided was the right way to do the analysis. Assumptions differ, program algorithms differ, and options differ.  Local judgments are being suspended to a considerable extent.  If you don't have to think about the problem except to decide whether to use Structure or Admixmap, Haploview or SeattleSNPs, or which analytic package to use, each with its own often very complex, subtle, or even unstated operating assumptions, then you are at least one step removed from a deeper understanding.

Hi-tech science is so high-tech that it is also to a considerable extent unrealistic to expect it to be other than the kind of operation it is today.  Where the balance of knowledge, skills, and centralization should lie is a matter of judgment. But the haste, the numbers of 'errata' regularly published in journals, and the overwhelming amounts of data being produced through a proliferating array of journals raise important questions.

Friday, September 2, 2011

Who's at risk and why?

The problem of determining causation is one of our ongoing themes here on MT, so, with that in mind, given what is known about the causes of heart disease, why is this man at risk of heart disease?  The article on the CNN website begins:
If you're not overweight, eat pretty well and exercise now and then, you might think you're in good heart health. But doctors say you don't have to look like a heart attack waiting to happen to be one.
Tom Bare, 54, is a case in point. The high school science teacher was thin, active and ate well, but still needed open-heart surgery this spring to bypass blocked coronary arteries.
The risk factors for heart disease are well-known -- obesity, smoking, high cholesterol, inactivity, diabetes, family history.  It's primarily a lifestyle disease.  But Tom Bare was doing everything right -- he was even taking statins to lower his cholesterol (which they did, from 300 to 125) -- yet because of his family history (probably a mix of genetic background and decades of environmental exposures), he was worried.  And it turned out he had good reason to be, given the plaque in his arteries.  So he underwent bypass surgery, and is now hoping he can avoid the heart attack that once seemed inevitable.

Bill Clinton has gone even further.  If heart disease is a lifestyle disease, it's clear why he had it.  He loved junk food and was overweight.  After surgery for heart disease, he has now drastically changed his diet.  He calls himself a vegan, and eats no meat, dairy, eggs or added oil.  He has lost 20 pounds and says he is healthier than he has ever been.  And is hoping he has dramatically reduced his risk of dying of heart disease. Indeed, Clinton is now following the "heart attack-proof" diet, if you believe the man selling it. 

Two men, one with the classic risk factors, the other doing everything currently considered to be protective.  Why are they both at risk?

With most common chronic diseases, causation is worked out at the population level and then applied to individuals, who may resemble the average at-risk person determined in aggregate not at all, spot-on, or somewhere in between (as with both Bare and Clinton, neither of whom had all the major risk factors).  Someone -- epidemiologists, or clinicians translating epidemiological data into clinical practice -- has to make a judgment call as to when a patient looks enough like the study subjects with the disease, that is, has enough risk factors, to warrant intervention.

These people could be at genetic risk, but risk not involving known genes.  Small effects at hundreds of genes could be contributing in ways we can't specifically identify but that might be reflected in the family history.  Or not.  This is the problem with genetically based prediction, and the same applies to environmental exposures, surely not all of which are identified, measured, or even accurately measurable.

You can calculate your own risk of heart disease using one of many heart attack risk calculators on the web.  Here's one -- enter a few numbers and you'll get your probability of having a heart attack in the next 10 years.

But what if the calculator tells you your risk is 30%?  The site explains that this means that 30 of 100 people with your level of risk will have a heart attack in the next 10 years.  So, does that mean that all 100 of you share the same level of risk, and bad luck or unidentified factors will push 30 of you over the edge, or that 70 of you are not in fact at risk, while the unlucky 30 are at 100% risk?  And if 70 of you aren't at risk, why are you being treated as though you are?
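To make the distinction concrete, here is a minimal simulation sketch (ours, not taken from any risk-calculator site; the cohort size and probabilities are purely illustrative, written in Python).  It shows that the two readings of "30% risk" -- everyone equally at risk, versus a hidden split between the doomed and the safe -- produce essentially the same population-level count of heart attacks, which is why aggregate data alone can't tell them apart.

    import random

    random.seed(1)
    N = 100_000  # hypothetical cohort size, purely illustrative

    # Reading 1: all N people genuinely share the same 30% ten-year risk.
    shared = sum(1 for _ in range(N) if random.random() < 0.30)

    # Reading 2: 30% of people are (unknowingly) at ~100% risk, the rest at ~0%.
    split = 0
    for _ in range(N):
        high_risk_group = random.random() < 0.30   # hidden group membership
        individual_risk = 1.0 if high_risk_group else 0.0
        if random.random() < individual_risk:
            split += 1

    print(f"shared-risk reading: {shared / N:.3f} of the cohort has a heart attack")
    print(f"split-risk reading:  {split / N:.3f} of the cohort has a heart attack")

Either way, roughly 30,000 of the 100,000 simulated people have a heart attack; only information beyond the aggregate rate -- unmeasured genes, exposures, or plain luck -- could tell you who belongs in which group, if the groups even exist.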

It's understandable that clinicians will treat anyone with some risk as though they are at 100% risk, and suggest the patient do whatever it takes to bring that risk down, but this skirts the question of how to reliably predict who will have a heart attack and who will not -- something that cannot be known if people change their behavior when told they are at risk, and if luck or unidentified factors are at play.  From a public health point of view, that's perfectly ok, but it does make understanding causation rather murky.

Thursday, September 1, 2011

Killing malaria

Malaria kills more children per year than any other human disease.  Indeed, it has been surmised that no other single cause has killed more people in all of human history.  And killer malaria makes people very sick long before it kills them.  Decades of studies have documented many different genetically based forms of resistance -- evidence of the toll that malaria-related natural selection has taken -- but that hasn't knocked the disease off its pedestal as the gold-medal killer.

But rates of malarial morbidity and mortality have been declining rapidly in parts of sub-Saharan Africa, including Eritrea, Rwanda, Zanzibar, Pemba, mainland Tanzania, Kenya and Zambia (as reported here), and in some cases it's not at all clear why.  Some of the decline is due to widespread use of bed netting to prevent mosquito bites, some to improved medical treatment, and some to use of pesticides, but the decline is also being seen in areas where none of this is happening.  A paper in the current issue of Malaria Journal by Meyrowitsch et al. suggests:
...other factors not related to intervention could potentially have an impact on mosquito vectors, and thereby reduce transmission, which subsequently will result in reductions in number of infected cases. Among these factors are urbanization, changes in agricultural practices and land use, and economic development resulting in e.g. improved housing construction.
Or, the decline might also be attributable to a decrease in the mosquito population due to changing rainfall patterns caused by climate change, a hypothesis tested by Meyrowitsch et al.  They collected mosquitoes weekly in light traps in 50 households in northeast Tanzania, an area with no organized mosquito control, over two separate study periods (1998-2001 and 2003-2009).  It's a rural area; the study communities have around 1000 inhabitants, and people live in "mud-walled houses thatched with dried coconut leaves."  There are generally two rainy periods per year here, a long one in March-June and a shorter one in October-November.

Insect counts showed a marked decrease in the mosquito population over the 11-year study period (the primary mosquito vectors for malaria in sub-Saharan Africa are Anopheles gambiae and Anopheles funestus).
The average number of Anopheles gambiae and Anopheles funestus per trap decreased by 76.8% and 55.3%, respectively over the 1st period, and by 99.7% and 99.8% over the 2nd period. During the last year of sampling (2009), the use of 2368 traps produced a total of only 14 Anopheline mosquitoes. With the exception of the decline in An. gambiae during the 1st period, the results did not reveal any statistical association between mean trend in monthly rainfall and declining malaria vector populations.
Below are the tables of results for the two sampling periods.  If you look at rows 5 and 6, the total mosquito counts by year, you'll notice that the decline is not linear.  Instead, it seems something dramatic happened between 1998/9 and 1999/2000, the year with the most significant decline in mosquito numbers, and then again between 2004 and 2005.  Fewer traps were used in the first period than in the second, so the two periods aren't totally comparable; let's just stick with the second.  After 2006, the number of An. gambiae rose again, and then something happened between 2008 and 2009 to drastically reduce insect numbers.  And it doesn't seem to be differences in total rainfall.  The authors don't discuss this apparent flux, but instead treat the decline as a general trend.  But it might be that something distinct happened in 1999 and 2004 that explains the sharp declines, which could be overlooked by treating the decline as linear.

[Tables from Meyrowitsch et al.: mosquito counts per trap, by species and year, for the two sampling periods (1998-2001 and 2003-2009)]

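To see why treating the decline as a single trend can mislead, here is a small sketch in Python, using made-up trap counts that are NOT the paper's numbers, comparing the overall percent decline with the year-over-year changes.  A single catastrophic year can dominate what looks, in summary, like a steady downward slide.

    # Hypothetical yearly An. gambiae trap totals -- NOT the paper's numbers,
    # just invented to illustrate the arithmetic.
    counts = {2003: 1200, 2004: 1100, 2005: 300, 2006: 250,
              2007: 400, 2008: 350, 2009: 4}

    years = sorted(counts)
    overall = (counts[years[-1]] - counts[years[0]]) / counts[years[0]] * 100
    print(f"overall change {years[0]}-{years[-1]}: {overall:.1f}%")

    # Year-over-year changes show where the sharp drops actually occur.
    for prev, curr in zip(years, years[1:]):
        change = (counts[curr] - counts[prev]) / counts[prev] * 100
        print(f"{prev} -> {curr}: {change:+.1f}%")

With invented numbers like these, the overall figure is a near-total decline, yet almost all of it happens in just two years -- the same kind of pattern described above for the published tables, and exactly what a linear-trend summary would hide.
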
In any case, clearly the mosquito population has dropped precipitously.  The authors don't have morbidity and mortality statistics for the study period, but they assume both fell as well.  And these same kinds of results have been reported for various other parts of sub-Saharan Africa.  Indeed, they point out that a study on the island of Pemba, Tanzania, found that malaria transmission began to fall before the start of the malaria control program there.

They conclude that the unpredictability of the rainfall, resulting presumably from climate change, could be the cause of these declining mosquito counts, rather than absolute differences in monthly rainfall.  And/or the decline may be due to:
...changes in socio-ecological conditions in the study area (e.g. changes in temperature, ability for water to pool, deforestation or land-use, change in the use of agricultural pesticides or insecticide-like compounds not directly applied for targeting malaria vectors, improved house constructions or changes related to agricultural activities). An increase in predatorily pressure on the mosquito population (e.g. birds or invertebrates) or an insect pathogen that specifically targeted mosquitoes, e.g. a bacterial, viral or fungi infection, could also potentially have induced the observed declines.
This decline in mosquito numbers is very interesting in its own right, at least to us, not to mention very important if it signals the beginning of a lasting decline in malarial infection.  But it's also interesting that the reason for the decline is so elusive -- another instance of the difficulty of determining causation.  Indeed, the explanation for the elimination of malaria from the United States early in the 20th century is still debated.  If the decline is in fact a trend, leading to elimination of the mosquito in areas where malaria has been endemic, it doesn't matter so much why it happened -- again, as in the US.  If, however, the population numbers are going to continue to jump around, potentially rising again, it's very important to figure out why.  And, as the authors note, if malaria is going to stick around, even at lower levels, and children aren't going to be exposed as frequently, they won't develop immunity, so the few infections they do get will make them sicker.

One can say that the rate of malaria is not just about physical ecology or human biological susceptibility, but also about human culture.  If that's the explanation, it's very curious and interesting.  In the first place, endemic malaria may have been due to the spread of settled agriculture, which exposed land to the pooling of water where nearby mosquitoes could breed.  So culture enabled malaria to rise.  And now, if these suggestions are true, culture is leading to its decline.  In both cases much of this, at least, was unintended (as with global warming).  And if mass-scale agriculture or the global human population eventually declines, fewer people sharing the world's resources may ironically mean a higher risk of malaria, as in the bad old days.