Friday, February 26, 2010

Why does genetic determinism persist, in spite of the evidence?

What's so wrong with genetic determinism anyway? Isn't it just political correctness to claim that who we are isn't determined by our genes? Genes are the basis of every trait, aren't they? And, they follow rules so we can predict who'll have what. Lots of diseases are caused by mutational variants at single genes, so traits like behaviors and intelligence and skin color must also be reducible to single genes. And, indeed, we can even retrodict the reason that a given trait evolved, since, obviously, natural selection has molded all living things into what they are today, yes?

Well, no, even if the conventional wisdom is that we can predict pretty much anything from genes. And people believe it in droves: thousands are happily paying genotyping companies to have their disease risk determined, and at least 15,000 people have volunteered to have their whole genome sequenced by the 100,000 Genome Project. The predictions are often expressed in terms of probabilities, but they are vended and perceived as causal -- if you're a regular here, you know what we think about this.

And genetic explanations for many traits, including behaviors, are being adopted far and wide. Just type "gene for" or "genes for" into Google, and see how many hits you get (hint: you'll find that it's close to 10 million, and Donald Trump has the genes for success). Delve even briefly into those results and you'll see that political scientists now tell us that how we vote, and whether or not we vote, is genetically determined, while economists and psychologists tell us that 'the warrior gene' makes us aggressive (except when it doesn't) and influences whether or not we belong to a gang (if we happen to be male). The list is long and growing -- just do this same search again next week, and you'll see. (By the way, the selection against 'good genes' via the deaths of warriors, while cowards stayed home dining well and taking the warriors' women, has worried societal thinkers from Plato through Darwin and the Nazis to the present. But if that was truly selective at the genetic level, why has society not become all cowards?)

Given how much we've learned about the complex polygenic, gene-by-environment nature of complex traits (e.g., see our discussion of GWAS vs lifestyle questionnaires as predictors of disease risk; indeed, a study of 19,000 women published in the Journal of the American Medical Association last week reports that risk of cardiovascular disease can't be predicted from genes identified by GWAS), why is determinism still so appealing, especially given the potential risk (see our recent posts about eugenics, here and here)? Why won't it lose its grip on contemporary thinking (just as the other side of the pendulum, environmentalism, gripped the post-WWII generation)?

Certainly some of it is just momentum -- when the expensive infrastructure for genetic treasure hunting is built, be it in service to political science departments, psychology, epidemiology, anthropology or economics, it's hard to dismantle, even when the evidence against this approach mounts and the payoffs turn out to be slight. 

And too, it's all too easy to construct scenarios to explain positive results, even non-significant ones. Making selective, after-the-fact claims of success is just what homeopathic medicine, phrenology, and other nonsense therapies do. The temptation seems to be too great for many to resist, particularly in a field like anthropology, with its focus on human evolution and how we got to be what we are. It must be genetic, and it must be due to natural selection. And generally these stories can't be tested, so we can make up anything we want! So, we've got genes for long distance running in East Africans because of their history of cattle rustling, genes for opposable thumbs because, well, because of almost any explanation for why they make us human, and even genes for ping pong prowess in China (although the proposed adaptive reason for this escapes us at the moment, but there is one).

But hold on. These kinds of Just-So stories needn't be so rampant, if researchers would just ask themselves a few simple questions. First, when trying to explain the association of a gene with a trait, a researcher should wonder, if the trait of choice is really so adaptive (meaning that people with the trait consistently had more children than those without, over many generations), why the trait still varies after so many generations of selective pressure. Perhaps selection wasn't all that strong, and the trait not so adaptive after all. Was the trait present in mammals before there ever were humans? If so, no human-specific arguments bear much weight.

Further, and this is just good science, how can the chosen evolutionary explanation be tested? How would we know why a trait evolved? We can't know enough about environmental pressures at the time the variant arose, nor about how they varied through time, to be truly convinced that we know how or why a trait was selected. So, if you've constructed an explanatory framework that involves assumptions about environmental pressures tens or hundreds of thousands of years ago, how will you test it? And why should a complex trait like thumbs or language or head size be due to a single gene anyway? If not, what is its genetic and evolutionary basis?

As we say in The Mermaid's Tale, natural selection certainly happens, but in general it's weak, usually very very weak, and fitful rather than a steady, simple driving 'force'. And the idea that complex traits are due to single genes, or even a few genes, has been shown over and over not to be so, even if there are determined efforts to make them genetically predictable. So why do these selective genes-for scenarios persist?

We live in a fundamentalist age, and there's more than a hint of simplistic fundamentalism to these selective scenarios: Darwin was right, and it's heresy to question Darwin. Therefore, everything's due to natural selection -- even though Darwin himself cautioned against that kind of thinking. And the legacy of success at finding single genes for many (rare, usually pediatric) diseases has lulled us into thinking that all traits should be due to a single gene, so why we're human and no longer chimps is probably due to a single gene, or at least a single trait (the use of language, brain size, uprightness, thumbs, tool use, and so on), for which we'll surely find the gene. People may deny that their explanations are so simplistic, and say they recognize complexity, but look at what they say and how they act.

Fundamentalism is dangerous, no matter its form. And genetic fundamentalism, as the history of eugenics has taught us, is no exception. Throw in traits fraught with societal import, such as intelligence or criminality -- which can't even be defined meaningfully (does anyone ever look for genes for white collar crime?), much less reduced to single genes -- and it can be explosive.

The underlying problem is a hunger for simple answers to satisfy various desires: easy prediction, pharmaceutical bonanzas, simple explanations of cause and origin. The scientific challenge is to understand how genetic mechanisms and genetic variation work as part of a mix of causation, and how traits affected by such a mix evolve.

A lot hinges on the meaning of 'determinism'. The usual concept is the same as cause, in the sense that we say gravity determines how fast something will fall, or temperature when something will melt or boil. But in genetics it can be a confusion of probabilistic association and physical cause. The problem is too big for one post, but probabilistic association -- you have a 27% chance of being diabetic if you have such-and-such genotype -- is a very different kind of determination. It is usually misleadingly simple in many ways, especially when many factors, measured and unmeasured, are involved. Rarely does it mean strict causation.
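
To see how different this is from strict causation, a toy calculation helps. The 27% comes from the sentence above; the baseline risk and genotype frequency below are invented purely for illustration:

```python
# Toy numbers: the 27% is from the text; the overall risk and genotype
# frequency are assumptions made up here for illustration only.
p_d_given_g = 0.27    # risk of disease if you carry the genotype
p_d         = 0.10    # assumed overall population risk
p_g         = 0.20    # assumed frequency of the genotype

# Most carriers never develop the disease:
print(f"healthy carriers: {1 - p_d_given_g:.0%}")                   # 73%

# And carriers account for only part of all cases (Bayes' rule):
p_g_given_d = p_d_given_g * p_g / p_d
print(f"share of cases carrying the genotype: {p_g_given_d:.0%}")   # 54%
```

So even a genotype that more than doubles your risk neither dooms its carriers nor accounts for most of the disease -- 'determination' in only the loosest sense.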

Thursday, February 25, 2010

Lab on a chip

An example of a fine use of technology and research money -- medical labs on paper 'chips', at the cost of a penny each, of great potential use in places with little or no medical care.  We blogged about a similar kind of thing here.  (Showing you that we aren't complete naysayers!)

Nature's typical Big Mistake

Well, we had hoped it wouldn't happen, but in the media-hype age we guess it's unavoidable. Our copy of Nature arrived today, the issue with the genome sequences of Archbishop Desmond Tutu and several Khoisan-speaking 'Bushmen' (it has long been protocol to use the former, cultural-linguistic name rather than the latter, originally potentially pejorative one), which we blogged about here. The cover, showing San individuals marching across open country, bows and arrows in hand, makes nice anthropology in classical National Geographic style, setting the stage.

The cover's main heading, "Southern African Genomes", is fine. But the photo and the subheading are not, especially for a leading science publication that should know better. The subheading is "Genetic variance in the oldest known modern human lineage." That description is completely wrong, and in a way that falls into the classical colonialist habit of thinking of people like the San as 'other', relative to our noble selves. The authors knew this because we'd talked to them about it (they're here at Penn State), so we presume the cover was Nature's editorial (or Sales Pandering Department) decision, not the authors'.

The San genomes are no older than any other human genomes, of course -- including yours, whoever you are who may be looking at this post. We all go back to the same set of common ancestors. The proper description would be something to the effect that the San carry humans' most divergent sequences, meaning that San sequences are more different from each other, and have a deeper common origin, than those of other populations; another way to put it is that it's been longer since they shared common ancestry. That reflects the isolation of small local populations, the result of being displaced by the fairly recent Bantu expansions, among other demographic factors, but in no way are today's San 'ancient'. It's sad to see this misconception, with the accompanying picture, perpetuated here. But it's a persistence of classical thinking in our society.

Now besides calling them ancient, is it just gratuitous political correctness for us to object to the San being pictured on Nature's cover? Well, ask yourself why they didn't put a picture of Desmond Tutu (not necessarily nude) as their cover instead? Why the exotica of four naked arrow-toting San marching through the tall grass? This is classical gawking at the inferior 'other'.

On the other hand, the San live the way photographed, and they supposedly gave 'informed consent' to being studied (and depicted on a magazine with worldwide circulation?). There happens to be a clear-cut historical relevance here. In the 1800s, leading European scientists brought a Khoisan woman -- Sarah Baartman was her 'western' name -- to Europe and exhibited her (naked) for Europeans to gawk at like a museum specimen, especially her large buttocks and breasts, and extended labia minora (the 'Hottentot apron'), which titillated Victorians. She was dubbed the 'Hottentot Venus'.

Is this more of the same colonialist exploitation? It is hard to say it's not. Where does legitimate anthropology end and exploitation begin? It's not a new question. Nature is clearly being exploitive. They could have used the sampled San's faces (which were in the article itself). It's inevitably related to the power differential and the fact that it's us studying them. Would we agree to be studied if some of them, in their native state, arrived at our doorstep asking if we'd be in their study?

Whether and how anyone is harmed by this kind of thinking only the future will tell. Whether it's been detrimental to indigenous populations in the past is also debatable, because clear-cut exploitation has hardly needed such anthropological imagery to proceed. Fortunately, these kinds of whole genome sequence data are rapidly becoming so common that their circus value will properly diminish. Soon it'll just be science, and then how valuable the actual information gathered is or isn't will be determined by the usual, and proper, determinants of scientific impact.


'The' gene? For what?


There is yet another story in the news these days, about a diabetes drug that apparently raises the risk of heart disease. In this case, the chemical known as rosiglitazone binds to a receptor called PPAR-gamma in fat cells. Variation in the PPAR-gamma gene has been found by several studies to be associated with higher risk of type 2 (adult-onset) diabetes, the form of the disease in which cells do not respond to circulating insulin, leading to elevated glucose in the blood, and to pathological consequences.

However, there is some evidence accumulating, according to the news stories, that rosiglitazone may increase the risk of heart attacks.

Whether or not this turns out to be true, it is but one of many stories about side effects. Indeed, as a follow-up story suggests, there are some boys-will-be-boys elements of questionable ethics -- where the company was hesitant to accept or acknowledge at least one study that showed the elevated heart-attack risk -- resembling many other stories of yielding to heavily vested interests, an aspect of the ethics of science that could use some serious scrutiny. Again, we see the news stories but can't claim to know where the preponderance of the facts lies in this case.

Separate from any ethical questions, most drugs, if you read the small print, can have many side effects. The small print in drug inserts may list several effects that are there to fend off lawyers rather than being true biological effects of the drug, since if an effect is rare in a drug trial it will be difficult to prove a true causal connection.
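
A quick calculation shows why rarity defeats detection. The incidence and trial size below are invented for illustration, not taken from any actual trial:

```python
# How hard is it to even see a rare side effect in a trial?
# (illustrative numbers, assumed by us)
rate = 1 / 10_000    # assumed true incidence of the side effect
n = 3_000            # assumed number of patients in the trial

p_at_least_one = 1 - (1 - rate) ** n
print(f"{p_at_least_one:.0%}")   # ~26%: most such trials see no case at all
```

And even when a lone case does turn up, distinguishing it from background incidence requires far larger numbers still, which is why such effects surface only after a drug is on the market.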

Above is one image of one section across one stage (mid-gestation) of a mouse embryo, stained (purple) to show cells expressing PPAR-gamma. This is from the great gene-usage site GenePaint.org that shows the mouse mid-gestation expression of around 20 thousand genes. Go there and you can look at all sections in two different mouse embryos available for this gene (here's one).

The point is that even at one stage in one strain of animal, the gene is expressed in many tissues. Ironically, but not necessarily atypically, the gene is not expressed in the heart at this stage. However, this is part of a system of energy-related cellular functions that are used by most if not all cells, and the gene is in fact expressed in most cells at some level (see Wikipedia for PPARg).

Regardless of how the drug-reaction story turns out, it and many others like it show a point that we tried to make in The Mermaid's Tale, and that we were by no means the first to observe: many genes -- probably the vast majority of genes -- are expressed in several different tissues at different times or under various circumstances. Signaling and related cell-communication and gene-expression regulating systems (of which PPAR is an example) are often used in many different types of cells. It is not the gene, but the combination of genes, that has functional effects.

Thus, it is totally to be expected that a drug that targets a gene product can, or probably will, have multiple effects. The effect may be positive relative to disease in one type of cell, but harmful in other cells. Usually, doses are adjusted to minimize such effects, but humans and our experiences and cellular contexts are so variable that it is difficult to avoid some cost for a given benefit.

This is part of the way life is organized. It's how life evolved. It's how organisms with many different tissues and organs evolved--because an organism is the result of exquisitely complex and highly stereotyped cell-to-cell interactions.

Attempts to target just one gene in one type of cell are a real challenge as a result. That's why even targeting a mutant gene in a given type of cancer has proved to be so challenging, as another story in the NY Times has discussed.

Wednesday, February 24, 2010

Would a new eugenics backfire?

Using genotypes to predict phenotypes is an activity that will only increase. In the past, traits that were assumed to be genetic were used to commit all sorts of offenses, minor or major, against the independence or well-being of individuals, and groups were targeted on the assumption that individuals bore the traits assigned to their group.

This was the eugenics era, and these measures were justified on darwinian and social grounds that the unfit were not worthy to live as 'normal' or 'fit' people do. The current era of genetic determinism does not seem likely to lead to mass executions the way the twisting of eugenics did in the 20th century. But what forms it will take, and the degree to which genetic information will be used, are probably unpredictable.

Whether it will be institutional (government policy, insurance or employment structures, etc., that are based on genotype), or whether it will be implemented by individuals, only time will tell. Clearly however, parents will be using such information in regard to their reproductive patterns--a matter of individual 'eugenics'. That is already happening.

If parents who carry dangerous mutations abort or prevent the birth of offspring with seriously impairing disease, is this good for society? Here, we do not deal with the question of whether such persons deserve life or should be viewed as defective, any more than those of us who, say, wear glasses or are born deaf are.

One subtle point was once known to all geneticists, but may have been forgotten, depending on how genetically-based abortion or in vitro fertilization (IVF) programs are carried out. It is called reproductive compensation, and it goes like this:

Remember yesterday's post, where we discussed a gene with a normal D allele and a recessive disease allele, d. Let's say that dd individuals have the disease and can't reproduce, so the source of the allele in the population is the Dd 'carriers' of the harmful allele. If screening is done to detect dd fetuses, which are aborted, then no more cases of disease will be produced. This would also prevent those two copies of the 'd' allele from being present in the next generation. Two Dd parents will produce 1/4 dd offspring, so a lot of 'd's will disappear in a single generation of screening, a rapid form of beneficial artificial selection.

But if the screening accepts Dd offspring, rather than aborting them, there will continue to be carriers in the population. Reproductive compensation occurs if parents selectively remove the 1/4 of their offspring that are affected but, because they can do this kind of testing and prevention, allow the birth of more Dd carriers. In this case, the frequency of the 'd' allele can actually increase in the next generation, through an excess production of carrier children by parents jubilant that they won't bear diseased children. In the long run, unless the screening program is always maintained, there can be an increase of disease at some point in the future. If we do this for countless genes for which we develop screening tests, the total risk of such offspring could increase greatly. How serious this problem is, is a legitimate question. It was one that was discussed in the eugenics era and even afterward.
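
To make the dynamics concrete, here is a minimal toy simulation -- our own illustrative sketch, not any published model -- assuming a recessive allele whose dd carriers never reproduce, random mating, and two conceptions per couple; the starting frequency, population size, and generation count are all arbitrary assumptions:

```python
import random

Q0, N, GENS = 0.10, 20000, 25   # starting 'd' frequency, adults, generations (all assumed)

def founder():
    # Build the starting adult population; dd individuals never reach adulthood.
    pop = []
    while len(pop) < N:
        g = tuple('d' if random.random() < Q0 else 'D' for _ in range(2))
        if g != ('d', 'd'):
            pop.append(g)
    return pop

def next_generation(adults, compensate, family_size=2):
    # Random mating; each couple conceives family_size children.
    random.shuffle(adults)
    children = []
    for mom, dad in zip(adults[::2], adults[1::2]):
        for _ in range(family_size):
            child = (random.choice(mom), random.choice(dad))
            while child == ('d', 'd'):                # an affected conception is...
                if not compensate:
                    child = None                      # ...simply lost, or
                    break
                child = (random.choice(mom), random.choice(dad))  # ...replaced by a new draw
            if child is not None:
                children.append(child)
    return children

def d_freq(pop):
    return sum(g.count('d') for g in pop) / (2 * len(pop))

for compensate in (False, True):
    random.seed(42)                                   # identical starting conditions
    pop = founder()
    for _ in range(GENS):
        pop = next_generation(pop, compensate)
    print(f"compensation={compensate}: q = {d_freq(pop):.3f} after {GENS} generations")
```

With exact one-for-one replacement, 'd' merely declines more slowly than it would without screening; if screened couples also chose larger families than average, the frequency could rise outright, which is the worry raised above.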

We don't happen to know how such screening programs handle the Dd conceptions. But if screening removes or prevents the birth of Dd as well as dd children, then a true (and rapid) reduction of harmful alleles will occur. IVF might automatically do this, but if people who can't afford, or are not aware of, the fancier and costlier technology rely on abortion, they would have to abort 3/4 of all their fetuses to prevent new copies of 'd' in their offspring, which they are most unlikely to do! This would mean an advantage for the wealthy and privileged....not exactly a new phenomenon.

A second issue is what the societal impact would be if, say, everyone decided to genetically engineer children to be good baseball players, or good guitarists (or fearless soldiers, or intelligent investors, or mathematicians). To each parent this may seem a good goal, but in society at large, it would likely be a case of the endless evolutionary rat race. All those brilliant mathematicians would not be particularly brilliant in the eyes of society. There will still only be so many jobs in math departments.

Instead, in a society of mathematicians there would simply evolve some new way of competing for resources and success. If there are too many born with Olympic sprinter genes, of course they can't all be on the Olympic team, or the Olympics would have to change. Or we'd just get bored with sprints. Or the best times would simply decrease with a similar proportion of winners as we now have.

The point is that of unintended consequences. Evolution works with what's present. If I dream of producing an Isaac Newton, and nobody else does, I might have a successful genius whose Nobel Prize I can boast of. But if everybody produces Isaac Newtons, society will simply have to develop different criteria for its prizes. It would be like grade inflation: as in Garrison Keillor's Lake Wobegon, all the children will be Nobel Prizewinners!

In this sense, eugenic dreams can be illusions, or almost certainly would be illusions.

Tuesday, February 23, 2010

Eugenics continued -- its modern guises

We have mentioned eugenics in several recent posts (e.g., here and here), but it's a thorny subject worth pursuing more fully. Indeed, 'Deadly Medicine', an exhibit from the US Holocaust Museum, is at Penn State for the next few months, reminding us that eugenics was alive and well in the US before the Nazis put it into practice.

There are two faces of eugenics. One is the goal that parents have of not bearing undesirable children. Normally, that means children with serious disabling disease. It can be a noble wish and even a noble act. So long as abortion or in vitro fertilization (IVF) and embryo screening are considered morally acceptable options, and the suffering of the embryo or fetus is not considered to be great, genetic screening can achieve this end. Premarital screening also works, if couples can be dissuaded from marriage (or from child-bearing) if they could produce such offspring (and, indeed, an AP story reporting that many genetic diseases are in decline because of prenatal testing was widely published on Feb 17).

If two carriers of recessive alleles (genetic variants) that are potentially harmful have children, then 1/4 of their offspring would inherit the 'bad' allele from each parent and would be affected with the disease. Call the defective allele 'd' and the normal one 'D'. Each parent is a Dd genotype. Randomly picking one of the alleles from each parent gives a 1/2 chance of picking d from each, or 1/4 chance of picking d's from both. These are the classical proportions discovered by Mendel.
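
These proportions are easy to verify by brute force; here is a minimal Python sketch (ours, purely illustrative) enumerating the four equally likely outcomes of a Dd x Dd mating:

```python
from itertools import product

# Each Dd parent transmits 'D' or 'd' with equal probability.
parental_alleles = ('D', 'd')

# The four equally likely offspring genotypes: DD, Dd, dD, dd.
offspring = [a + b for a, b in product(parental_alleles, parental_alleles)]

p_affected = offspring.count('dd') / len(offspring)                      # two 'd' copies
p_carrier = sum(g in ('Dd', 'dD') for g in offspring) / len(offspring)  # one 'd' copy

print(p_affected, p_carrier)   # 0.25 0.5
```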

There are many single-gene diseases for which, if the 'd' alleles are known, parents can be screened to determine if they are at risk of producing a dd offspring. Not implanting such an embryo, or aborting such a fetus, prevents the birth of someone carrying the disease.

Tay-Sachs disease in the Jewish population, and the serious anemias called thalassemias, are examples of diseases that have actually been substantially reduced, or even nearly eliminated, by such screening programs.

The fact is that, like much else in life, genes have many -- often hundreds of -- alleles, and only a few of them are known to be seriously harmful. Others confer lesser risk, and for most of them we simply don't know. The moral issues associated with aborting fetuses carrying alleles of uncertain effect are comparably complicated. Decisions are clear only for the few alleles whose effects are known.

Most of us these days would consider this not just acceptable but a good form of personal eugenics. Others, however, object morally to any abortion, to making God's decisions, or to labeling disabled people as somehow not fully human or not worth living. A whole field of 'disability studies' deals with these issues, and much is being written about the degree to which meaningful, valuable lives can be lived by those formerly considered unworthy--hence questioning blanket policies of aborting them. These are ways in which the morality of eugenics enters society.

Is there a place to draw the line as to what counts as disease? If we ever learn to evaluate a fetus's IQ from its genotype, would it be OK to abort 'stupid' fetuses? If so, should parents be allowed to determine the decision point on the IQ scale? What about, say, musical or athletic ability? What about engineering genes into an embryo to give it traits you want your children to have?

Most if not all of the humans who have had their whole genomes sequenced carry 'disease' genotypes at some genes. The co-discoverer of DNA's structure, Jim Watson, is one. As someone has quipped, if his own technology had been available to his parents, in the current climate of personal eugenics, he would have been aborted!

What about sex? In some countries there is reportedly substantial prenatal testing and aborting of female fetuses, because sons are more valuable to the parents than daughters. Is that OK? As long as the parents, and not society, make the decision?  Of course, this can have ramifications into succeeding generations, as males find it difficult to marry, creating a sort of pendulum effect as to which sex is more valued.

The fears of eugenics imposed by society, as was done in the first half of the 20th century, spook many people as we increasingly enter the genetic age, in which the belief is strong that genes generally will predict one's traits, one's identity as a person. We might not impose gas chambers on people because of their group membership, but what about policy involving insurance, employment, access to education, and the like, aimed at individuals? Or, in the classical example, to screen potential immigrants? Will subtle or unsubtle pressure be imposed on those who would choose not to screen, on the grounds that their impaired offspring will be a burden on the health care system and hence on everyone? (That's a classic argument used by the original eugenics movement, and vigorously used in Germany to justify the 'euthanasia' of countless thousands of 'defective' societal burdens.)

These are issues we'll have to face in the future. And we don't need to raise the Nazi holocaust specter to see that they will be important. They involve our concepts of genetic causation, our concepts of personal value, and, in subtle ways, our concepts of societal responsibility. And there will be a great increase in the number of genetic variants that have high predictive power.

A lot of the eugenics movement in the last century, and even the rationale for the holocaust, was framed around evolutionary and Darwinian ideas. Nature eliminates the unfit, so why can't we help Nature out? If we don't, will our advanced medical and social-net system lead to the gradual pollution of the human gene pool? If so, it is not just the right of individual parents to decide, but society's, because society pays the bills, economically and in terms of its collective abilities, for decreasing overall 'fitness'. These were the classic arguments. Is there a risk that renewed darwinian determinism -- of some form specific to our time, not last century's -- will lead society in similar directions?

These issues will not go away. But, in our next post, we'll raise important additional questions about what the impact of screening is in terms of the human gene pool, and what the impact on society of widespread personal eugenics might be.

Monday, February 22, 2010

Ibn Khaldun, part II: Early premonitions about evolution

In our last post, we described the profound understanding of the 14th century Islamic scholar Ibn Khaldun that the nature of human life and culture is environmentally determined. Khaldun discussed many things in his 1377 AD work, the Muqaddimah (or Prolegomenon), including the nature of racial variation.

He ascribed skin, hair, and eye color variation to adaptation to sunlight and climate. He knew of the basic patterns of geographic variation in these traits. His explanations were of course not genetic in the modern sense, which wasn't possible in his time, except to the extent that he knew the traits were heritable. Yet.....

Instead of genes, Khaldun explained things in Galenic terms, of balance among the humours relative to the local environment. Of course, today we use the idea of natural selection in a similar way: selection, it could be said, favors or disfavors people in terms of their physiological state, in a sense, the balance of their physiological traits relative to the local environment. That is different in technical detail, but is not so very different in concept!

Khaldun also described, again without anything close to adequate genetic theory, the effects of admixture on traits like skin color, as they changed over the generations as people moved up and down along the Nile.

But in a more global and we think remarkable way, Khaldun's insight led to premonitions about evolution itself. He was a devout, literal Muslim, but he wrote:

One should look at the world of creation. It started out from the minerals and progressed, in an ingenious, gradual manner to plants and animals...The animal world then widens, its species become numerous, and, in a gradual process of creation, it finally leads to man, who is able to think and reflect. The higher stage of man is reached from the world of monkeys, in which both sagacity and perception are found, but which has not reached the stage of actual reflection and thinking. At this point we come to the first stage of man after the world of monkeys. This is as far as our physical observation extends.

So what, you say? Anybody can have some guesswork idea that one species came from another, and a lot of people did. Khaldun was a historian and professional bureaucrat, not a biologist. He may have made observations, or repeated accepted lore, but did no direct biology of his own. Still, what Darwin and Wallace did was provide a gradual process of adaptation to environments to explain biological change. What about Khaldun?

He uses the word creation, but clearly has a gradual process in mind, not separate creation events: he twice stresses gradual, rather than a phenomenon of special creation. He is, of course, invoking a traditional hierarchical and progress-based view of life, one consistent with Islamic thought, and present to some extent in Darwin, too. He also viewed races in hierarchical ways, some being closer to animal states than others (Darwin also had a racial hierarchy, with similar peoples on its upper and lower ends). But Khaldun's forms of life evolve into other forms, and his account begins with the origin of life from primordial non-living materials.

Likewise, as we said above, he includes continuing evolution within species, and of course gives particular attention to adaptive evolutionary change within our species, that is, leading to racial variation. Darwin's ideas were expressed totally differently, but were not so far advanced as one might think, relative to Khaldun.

Darwin's idea of inheritance -- the blending of a person's mother's and father's causal 'gemmules', which moved from every part of the body to the gonads -- lent itself to quantitative theory. But Darwin's gemmules were modified by the environment. Not so different in concept, given the technologies available at the time, from Khaldun's centuries earlier.

Khaldun was simply expressing the 1800-year-old theory due to Hippocrates, which Galen accepted, as did Lamarck, and which Darwin's theory, too, basically echoed 500 years after Khaldun: your humoural state is secreted by your tissues and travels to the sperm or egg and influences your offspring.

This is remarkable in many ways, but two are particularly important. The first, perhaps, is that this writing is not mentioned in the leading histories of biology or of evolution that we have consulted (e.g., Eiseley's Darwin's Century, Greene's The Death of Adam, Mayr's Growth of Biological Thought, not even Gould's megatome The Structure of Evolutionary Theory). One might accuse western historians of science of wearing rather restrictive blinkers!

In fact, Khaldun is well known to western historians, and was, at least at one time, to anthropologists. Khaldun's environmental determinism is mentioned briefly in Marvin Harris' classic book The Rise of Anthropological Theory, and credited with influencing 18th century thinkers about culture and its processes of change. But this kind of thinking is not exactly of high profile in anthropology these days.

The second remarkable fact is that Muslims know of Khaldun's thoughts, and you can find web sites proclaiming them to argue to a skeptical world that Muslims cannot be accused of the kind of benighted anti-evolutionism that many Christian fundamentalists embrace. In fact, in searching such sites to see what they said, we learned of one Ibn Arabi, someone we had not known of before, who said the same thing as Khaldun, but several centuries earlier. The similarity makes it likely that the view was widespread, repeated by many others, and may be well-known to Islamic scholars today (though certainly not to us). Khaldun may have been repeating what he'd read, filtering it through his time's Galenic causal terms -- we don't know.

Ibn Khaldun's work is widely available, which makes the short memory and restricted search space of our understanding of the origin of ideas all the more striking. In turn, a faulty educational system leads us to continually have to rediscover things (and proclaim new insight, of course!) that should be credited properly, and that should not have required so much costly research to learn.

We can't claim any personal wisdom, and have no grounds for hubris, in writing these thoughts. We had to learn of Khaldun's general work through its presentation on a BBC Radio 4 program ("In Our Time") that we happened to listen to. The program only discussed his theory of environmental determinism of history, not anything about biological evolution. But it sent us straight to the library to read Khaldun for ourselves.

And that experience should teach those of us in science, who would like to understand the truth, another lesson in humility.


Friday, February 19, 2010

Ibn Khaldun, Part I: a brief disquisition on memory loss.

We're in a genetic determinism age, when even scientists who should know better are attributing almost any human trait or aspect of society to genes. Darwinian views on the evolution of behavior are rife.

We think this is greatly overstated in two ways. First, traits that are influenced by many genes (typically called 'complex' or 'multifactorial') are also affected by environments (a generic term for all things other than gene(s) one is looking at), and the trait value of an individual is not well predicted from genetic data. Second, many assumptions about what we're 'hard wired' to be like, that is, what we're genetically determined 'for', are short-sighted, based typically on the here-and-now back-strapolated to evolutionary time. Explanatory scenarios are (we think) too often built mainly on speculative Just-So stories.

In the 19th century, leading social scientists clearly knew that culture and society for the most part were to be accounted for in social terms, not genetic or psychological ones. However, the thrill of the evolutionary chase, in a cultural climate that has forgotten the awful determinism of eugenics and is enthralled with molecular reductionism, has led to serious under-training and amnesia in this regard. Geneticists, mainly trained to run high-throughput gear, may perhaps be excused. But not the social scientists who are jumping into bed with the geneticists (we really wouldn't want to spoil the fun of these rolls in the hay if we didn't think they could have serious ramifications).

But this amnesia is nothing compared to the lost memory of one of the first people to notice and clearly understand the role of environment -- in this case cultural and physical -- in shaping human societal behavior. We have recently learned, again thanks to the BBC, that that person was the 14th century Islamic philosopher and historian, Ibn Khaldun. (This is from a program called "In Our Time", and if you don't already know of it, you might give it a try. It is a phenomenal, intelligent program, as an editorial in the NYTimes recently said. Podcasts and online listening are available.)

Khaldun was the first in western thought to develop a general theory of the processes of societal history. His main point was that there are regularities in social structure that are repeated, predictable, and explicable in terms of social environments. History, he said, should be about its essential processes, not just enumerate its unique, local events.

His most famous work is the Muqaddimah, also called the Prolegomenon, written while he was in a 3-year desert retreat and published in 1377 AD. In it Khaldun presents his theory, using what was known of the history of empires and so on at the time, and focusing on details from the historic relationships between the urban dynasties and the desert Bedouins of the region of North Africa known as the Maghreb, a region he knew intimately from experience. The Muqaddimah was an extended introduction to his much longer work on the details, the Kitab al-'Ibar; its central concept is asabiyya, meaning 'group feeling' or 'social cohesion'.

Humans, being individually frail relative to the rigors of survival, must cooperate. An essential 'group feeling' holds desert society together so this can be accomplished. From time to time, this solidarity is the basis for the formation of sedentary, urban civilizations. These are large and complex, and require even more cooperation, because unlike desert society, urban society requires specialists (rulers, officials, tradesmen, craftsmen, and the like). Desert people live economically stark but self-sufficient lives. Urbanites are dependent on their 'infrastructure', as we would say today.

Khaldun shows how urban society, with its royalty and class structure (needed to keep the peace among otherwise contentious people seeking self-interest), becomes sedentary and much wealthier than desert society. People increasingly forget their cultural roots and become lazy, selfish, antagonistic toward each other, and so on. They grow to covet luxury (theirs, or their neighbor's). This requires officials, tax collectors, judges, police, and so on. The rulers become decadent and lose touch with their origins and their responsibilities. After only a few generations (Khaldun says four), the dynasty becomes 'senile'. At its fringes, and due to independent group feelings, a new political movement arises, which displaces the senile dynasty with its own dynasty and the cycle repeats. The Vandals invade and conquer Rome!

The important point in all this for our blog is that this view is one of strong environmental determinism. The same people (that is, the same genotypes) live entirely different lifestyles in all sorts of ways, depending on their physical and cultural environment. One can argue with any of his specific points, but the general picture Khaldun paints is cogent. It is simply an obvious fact that it is the environment, broadly defined, that affects so much of our behavior. The nature and relative amounts of cooperation vs competition, sharing vs sharp dealing, self-protection and risk-taking, and so on are environmentally determined, even if the mix of people includes those with genotypes that may predispose them, relatively, to more or less of a given kind of behavior.

There is of course variation, and that has a genetic component, by which some are smarter than others, or more nervous, or more fidgety. But these are minor variations given the overall patterns of culture, and countless GWAS and other studies have shown how weakly predictable from genotypes such traits -- behavioral, morphological, or disease-related -- are. And if we can't predict them (at least not yet, or not without high-level computation), then Nature -- in the form of natural selection -- would be comparably nondiscriminating.

These are lessons we should learn today: our short-term, here-and-now perspective is not trustworthy when it comes to asserting what humans are programmed or evolved 'for'. We have a repertoire of responses that can lead to a wide variety of conditions that, in turn, affect our behavior and our biology.

Khaldun even makes the observation--in 1377!--that some epidemiologists seem to claim to have discovered based on fancy and expensive studies today: sedentary culture with its excess of luxury and dietary overkill leads to obesity and vulnerability to disease. That, in some ways, is a lesson geneticists, driven by vested interests, shunt to the background, despite the obvious fact that it's a major rather than minor factor in public health.

Khaldun explicitly worked from a Galenic point of view. That is, he saw human health in terms of the balance, or lack of it, among the four humours. This of course colored his view and makes some of his writing seem quaint. It is certainly dated, though we should not be complacent in thinking that our present view won't seem comparably dated and quaint a few centuries from now.

He also suggested that the Bedouins were healthy because of their spare, tough life style. Clearly he exaggerated and did not have actual demographic data. At that time, too, he could not consider infectious diseases (microbes weren't known), but given the problem of enough clean water and population densities of cities, such diseases would likely have strengthened his case. That is, the burden of infectious diseases would likely have been worse in cities than in the desert.

We're not lauding a 14th century scholar for having perfect insight that needs no revision today. But we would assert that short-sighted thinking is very costly these days, in terms of objectives we claim to have in regard to health. His observations about political dynasties may have lessons, too. Many would argue that the American global era is coming to an end in part because of our wasteful, lazy, sedentary, and complacent view of the world.

Khaldun made some other remarkable observations about evolution itself nearly 500 years before Darwin and Wallace (though he didn't use that word--at least not in the English translation we have), including discussion of how racial variation was the result of adaptation over time. We'll talk about that in part II, our next post.

Thursday, February 18, 2010

African diversity

The latest news from the GenoSphere is the paper in Nature reporting the 'whole genome' sequence of several Africans (described here, among many other places). We take a particular interest in this paper because some of the lead investigators are friends of ours here at Penn State, where much of the sequencing and, in particular, annotation (analysis) work on whole genome and ancient DNA genome analysis is being done.

One is Archbishop Desmond Tutu; the others are San ("Bushmen"). The rationale for this choice, besides its manifest value as a publicity stunt, is that Bishop Tutu legitimately represents the culturally Bantu part of southern Africa while the San represent another language group. This set of sequences thus, at least in a crude way, spans the two major and most different populations of the African continent. We know very well that there are differences between these populations in terms of genetic variation (see Sarah Tishkoff's recent paper in Science, 2009; PubMed ID 19407144). So this study fleshes the picture out in much greater detail (to the extent that can be done with so few sampled subjects).

What this study shows is, as every other human whole genome sequence has shown, a multitude of new sequence variants -- new in three ways: some will be sequencing errors, some will be rare variants in the population that simply haven't yet been seen, and some will really be new: unique mutations arising between the subjects' parents and themselves.

No major 'disease' variants were seen. Of course, as Tutu's case dramatically shows, he doesn't have any major genetic diseases, but we knew that without needing DNA, from the fact that he's elderly (all 5 subjects were around 80 years old) and in good health. He's had some diseases in the past, but apparently nothing diagnosable genetically yet -- his reaction to this was reportedly 'immense relief', in spite of the fact that he's 80 and has had cancer and TB, albeit treatable so far; a reminder to the rest of us that non-genetic diseases are probably more likely to fell us than anything lurking in our genomes.

The San individuals, too, are adults in good health. In their harsh environment, if you have a real genetic disorder you're not around to be sequenced. Variants for adult milk drinking ability, lighter skin color, or malarial resistance were not found, but that shows mainly what's expected. Skin color is genetically complex, they haven't been fighting malaria, and they're not in the northeast African populations that have adapted to life-long milk drinking.

Whether major disease or other simple trait variants are found in any given person (especially an older healthy adult) will depend entirely on the major variants circulating in their population, their frequency and strength of effect, and the luck of their genetic draw in sampling them from their respective population. Some of the first people sequenced do carry such variants, but others don't. Interesting, informative, and expected.

The great sequence differences among the San individuals were also to be expected. For historical reasons, they have accumulated more divergence than other African populations. Probably, they are today a relict population that survived being shoved into the Kalahari desert by the expansion of the more technically powerful Bantu speakers, which is known to have occurred not too many centuries past. Prior to that, they may have been living in small, widely scattered bands across much of sub-Saharan Africa, with little gene flow (marriage exchange) between them. Thus, new variation would arise (as it does everywhere) but stay very local. That, plus chance (genetic drift), allows differences to accumulate between San populations. Why so many variants were found is curious, and may or may not be due to sequencing errors (time will tell), but the deep split seems robust.

The one rather loose statement being made about these samples, and one that is potentially dangerous, is that the San have the most 'ancient' human DNA lineages. That's patently false. We have all had the same time since our common human ancestors. There have likely not even been more generations in the San than in other humans (this would depend on the average age at which San, and others, have borne half their children).

The San are not in any way more anciently human than the rest of us; their lineage is not older, but has simply been more isolated from the rest. It does represent an interesting subject of study, and one long known, for which we now have good data.

These new data go into the bookshelf of whole human genome sequences.

Genome sequencing now can, or at least should, be moved from the Melodrama Dept to real, routine science. The technology is racing ahead, and soon around $1500 or less will buy you such a sequence, indeed one at 40x coverage -- meaning that each part of the genome will have been sequenced independently 40 times (on average), greatly increasing the accuracy of the billions of basepair 'calls' from the sequencing device.
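
To see why coverage matters, here is a back-of-the-envelope sketch -- ours, with illustrative numbers not taken from the paper -- assuming independent reads, a ~1% per-base read error rate, and a naive majority-vote consensus (real variant callers are far more sophisticated):

```python
from math import comb, exp, factorial

READ_ERR = 0.01   # assumed per-base error rate of a single read (illustrative)
MEAN_COV = 40     # average number of reads covering each site

def p_majority_wrong(k, e=READ_ERR):
    # Chance that more than half of k independent reads are wrong at a site.
    return sum(comb(k, i) * e**i * (1 - e)**(k - i) for i in range(k // 2 + 1, k + 1))

def p_low_coverage(t, lam=MEAN_COV):
    # Under Poisson-distributed coverage, fraction of sites seen fewer than t times.
    return sum(exp(-lam) * lam**k / factorial(k) for k in range(t))

print(p_majority_wrong(1))   # 0.01: a lone read misleads 1% of the time
print(p_majority_wrong(9))   # ~1e-8: a wrong majority among 9 reads is vanishingly rare
print(p_low_coverage(10))    # at 40x mean, essentially no sites get fewer than 10 reads
```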

(There are some technical issues with the current paper that we can just mention. It is at a lower coverage level, and hence perhaps slightly more error prone than 40x coverage would be. And not all the sequences reported were 'whole genome'; some were just the protein coding 'exome' parts. These are minor points relative to this post.)

We're soon to be whelmed, if not overwhelmed by such sequences, most of which will go into growing data bases where geneticists can analyze the pattern of variation in all sorts of ways. Some disease-related information will surely result, but the hype, hype, hype about how each sequence reveals important disease information will stop. Or it should stop, at least. We can get on with our work, without the TV crews and material-hungry journalists.

This paper is interesting and is a first stage in getting better data on variation in Africa. So far it shows what we knew to be the case about Bantus, San, and admixture between them in southern Africa. The sequence shows much variation between the groups, and especially among the San, but we've known that for more than 20 years (PNAS, 1989, vol 86, pages 9350-54; PubMed ID 2594772). That work was done by old friends of ours, Henry Harpending and Linda Vigilant, who, like the current genome sequence authors, were also here at Penn State (and in our own Anthropology Department) at the time.

The lack of major new findings is what would be expected, but demonstrates that we really are learning things from the past generation of studies of genetic variation. The details of future sequences will be worth waiting for, but when they come they will not be worth shouting about.

Wednesday, February 17, 2010

Guest blogger, Jennifer Wagner, JD: Enforcing forced definitions of race

Most of us didn’t need to go to law school to learn that the Supreme Court in its infamous Dred Scott case ruled that blacks or African Americans were not citizens. Stated another way, the Court ruled that only white persons were entitled to automatic citizenship by birthright. What is perhaps less well-known, however, is the story of how the law (Congress and the courts) helped reinforce popular views on race. Ian Haney-Lopez deserves much credit for telling this story in White by Law: The Legal Construction of Race, and I encourage everyone to read it. For present purposes, I will do my best to highlight the major events on the intersection of race, anthropological/scientific expertise, and the law.

I mentioned the Dred Scott opinion and its announcement that a prerequisite for birthright citizenship was being white. Birthright citizenship was not broadened to include any individual born on U.S. soil until passage of the Civil Rights Act of 1866, and that Act – like many of the other legal reforms of the day – was a reform in theory more so than practice. It wasn’t until 1898 that birthright citizenship was interpreted as including all individuals born on U.S. soil (even those individuals whose parents were ineligible for naturalization).

So we’ve discussed how birthright citizenship was dictated by race concepts. But there is another type of citizenship: naturalized citizenship. Congress established the rules for naturalization in 1790, making naturalized citizenship available to “any alien, being a free white person.” Notice that Congress did not refer to any “European” or even any “Caucasian” person. Congress steadfastly maintained this prerequisite that naturalization applicants be white. In 1870 Congress loosened restrictions on naturalized citizenship to include not only white persons but also those persons of African descent (Act of July 14, 1870, Ch. 255, §7, 16 Stat. 254).

To recap, then, before 1866 no non-whites were eligible for birthright citizenship and before 1870 no non-whites were eligible for naturalized citizenship. Yet what did this mean? Notice how the options for claiming eligibility for naturalized citizenship were now based on somewhat different considerations: appearances (specifically, skin color) and ancestry (specifically, geographic origins). Why wasn’t the prerequisite based on race (or something like race) deleted instead of enumerating an exhaustive list of two options? If Congress had done so, it would have had the effect of permitting citizenship (and legal rights that accompany such citizen status) to Native Americans and Asians. What does this all mean? It means that when people immigrated to the United States and sought to naturalize, the immigrants had to claim they were “white” or claim they were of African descent. For obvious reasons, the former was the preferred approach.

In the first case to challenge the prerequisites (In re Ah Yup, 1878), a California court quoted anthropological and scientific expertise to legitimize its decision that the Chinese applicant was not white and, therefore, not eligible for citizenship. A Texas court (In re Rodriguez, 1897), considering a Mexican applicant, turned to anthropological and scientific expertise as well, but ultimately held that while Mexicans wouldn’t be considered white by “strict scientific definition” they are eligible for naturalization as “white,” thanks to treaties related to the U.S. expansion through Florida and the Southwest. These cases were relatively clear. They had predictable outcomes. The confusion of race, skin color, ancestry, and nationality in America’s history – including legal adjudications – should now be readily apparent.

At the turn of the 20th Century, whiteness was not equated with a single race, or even with being of European ancestry or having geographical origins in any European nation. Recall that William Z. Ripley convinced many of the existence of three distinct European races of Nordics, Alpines, and Mediterraneans (determined largely on the basis of head shape, skin color and stature), and that Comte de Gobineau’s attribution of behavioral and physical attributes to Aryan speakers later inspired the Nazis. The Dillingham Commission Report of 1911 illustrates how American policy was not immune from heavy reliance on eugenics and rising nativist sentiment.

Early U.S. immigrants were predominantly from Northern and Western Europe. It’s unsurprising, then, that not all immigrants were seen as equally “suitable” for naturalized citizenship and that distinctions were made between new immigrants and old immigrants. When immigration patterns changed, the courts ran into problems. Immigrants of the late 19th Century were predominantly from Southern and Eastern Europe as well as from Asia, and they were not treated the same as “old” immigrants. Yet immigrants from Europe, regardless of region, were scientifically considered Caucasian. The courts waffled on how to treat Syrians and Asian Indians, for example. The courts had to decide whether to continue to follow scientific opinion (thereby expanding who is white by making the term synonymous with Caucasian) or whether to preserve common knowledge about who is white. I encourage you to read Americans in Waiting: The Lost Story of Immigration and Citizenship in the United States, written by my colleague Hiroshi Motomura, for a fascinating account of immigration policy in America.

The transition in prerequisite cases from white (i.e. skin color) to Caucasian (i.e. race) is noted to have started in 1909 with the case of In Re Najour. Ian Haney Lopez has described this as follows: "The Najour court reasoned syllogistically from Caucasian to 'white' to citizen. Doing so, it tied the 'white person' restriction to a rapidly expanding anthropological classification. Herein lies the significance to the courts of the strict equation of 'white' and 'caucasian.' By making persons from North Africa to Oceania 'white,' the broad definition of Caucasian employed by Judge Newman [who decided In Re Najour] arguably vitiated the restrictive impulse animating the 'white person' bar, and thus undercut the prerequisite laws. If courts accepted that all those categorized as Caucasians were 'white persons,' many people generally seen as non-White would become White, at least for purposes of citizenship." White by Law: The Legal Construction of Race, at 51.

Not all cases followed this reasoning however, which is why the two Supreme Court rulings in 1922 and 1923 (Ozawa v. United States, 260 U.S. 178, and United States v. Thind, 261 U.S. 204) were so meaningful. In Ozawa, the Supreme Court defined “white” as the equivalent of members of the Caucasian race and denied a Japanese applicant citizenship. Just three months later, in Thind, the Supreme Court denied an Asian Indian applicant citizenship, retreated from the use of Caucasian for white, and rejected anthropological/scientific expertise. The Court explained, “the average well informed white American would learn with some degree of astonishment that the race to which he belongs is made up of such heterogeneous elements.” (Thind at 211)

So by 1923 the courts were saying that scientific definitions of race did not matter, and that popular notions of race and whiteness were what counted. In 1924 the National Origins Act set up a strict quota system to limit the less “suitable” immigrants from Southern and Eastern Europe. In essence, as Mae Ngai has stated, “‘the law constructed a white American race, in which persons of European descent shared a common whiteness distinct from those deemed to be not white.’” (as quoted in Americans in Waiting: The Lost Story of Immigration and Citizenship in the United States, at 127)

So after 1923 the courts decided citizenship, and legally constructed and reinforced the concept of race, (1) without regard to anthropological and scientific knowledge about human variation, or about the non-existence of human races, and (2) by accepting popular conceptions of race, that is, perceived differences in individuals’ character on the basis of skin color. It was not until passage of the Nationality Act of 1940 that birthright citizenship was freed from racial restrictions, and naturalized citizenship was not freed from them until passage of the Immigration and Nationality Act of 1952.

Native Americans, of course, were not U.S. immigrants, yet they faced similar struggles with confused notions of skin color, race, and ancestry. That story, however, must be told another day.

Jennifer Wagner, JD

Racing to confusion

One of the most important and contentious subjects in today's American society is that of race -- or is it ethnicity, or is it color, or ....? There are good, natural reasons to want to belong to a group, whatever name you use for it. It can give you the feeling, and the protection, of solidarity. It can help you form alliances. It can get you legal entitlement rights. It can be a path to status (high or low, depending).

But the term race also has biological connotations, of genetic continuity, or even of genetic inherency (if one thinks of traits like hair form or skin color). Viewed in evolutionary terms, as many have viewed it, natural selection will have favored 'better' traits within the evolution of each race (whatever that is), and, by the very same reasoning, must also have favored the traits of one race over another. That's because natural selection picks what's 'best' at any given time, and it's unlikely that, relative to any trait society values today, all groups (whatever they are) would have evolved to be identical. That doesn't mean the differences would be great, of course.

Race was central, from Darwin's time through WWII, to eugenics, a scientific movement concerned with these evolutionary issues. The term itself was coined by Darwin's cousin Francis Galton. Scientists, generally from the privileged parts of society, expressed long-standing concerns about the masses of the less worthy (a concern that Thomas Malthus, Darwin's inspiration, and Darwin himself shared).

The Nazi regime used mass eugenics to justify its tyranny, but elsewhere, too, thousands of people were incarcerated, sterilized, or even killed because of the supposed characteristics of their race -- that is, in what the Germans called racial 'hygiene', individuals were singled out on the assumption that any member of a race had the traits of that race. That was not the whole story, of course. Individuals within a society who were judged its 'worst' were also picked out for similar treatment. Invoking Darwin and natural selection, eugenicists piously presumed to help evolution along by deciding who should thrive and who should not, and by trying to make policy accordingly.

An important question, besides the concern about having to pay to support the starving lower classes, was the threat seen coming from immigrants into the advanced European societies. Our best and most respected scientists viewed races as real, evolutionarily determined, biological categories. Races were treated just like classical 'type' specimens in biology. Every person was assumed to be a member of a pure race, or to be admixed among pure races. This, of course, assumes that there are, or were, such things as pure races to be admixed from. And that assumption was certainly, and explicitly, made.

Mixing of different types was considered an important issue. In its classical form, marriages between individuals closer than first cousins were illegal. In the US there were even laws against two epileptics marrying. From a genetic and evolutionary perspective, too, it was natural to think that if members of two races mixed, their hybrid offspring would bear some of the genes -- and hence traits -- of their inferior-race parent. Inter-racial marriage was illegal, and, believe it or not, how much mixture should be tolerated became a serious topic of interest.

In our age, in which genetic ancestry estimation from DNA sequence is a major recreation (with some legal entitlement elements that may be quite serious), the subject is certainly alive and well in all of its senses. Immigrants now, as then, are viewed as a potential source of the degradation of society. Somebody had to bar the door! But what criterion would be used?

It's too big a subject to go into in this post, but Jennifer Wagner, a colleague of ours who is a lawyer currently writing a doctoral thesis on her specialty interest, human rights law and concepts of genetic ancestry, has provided a summary of some of the slippery legal (court-related) issues involved in defining who can immigrate and who can become a citizen. Those legal decisions formalized the rather informal writings of the eugenicists, and allowed related policy to be implemented. We'll put up Jen's guest post tomorrow, and it's worth checking back for, as she is very thoughtful and knowledgeable on this subject (among others!).

Tuesday, February 16, 2010

I saw Mummy kissing mosquitos!

In a thrilling example of what the Journal of the American Medical Association considers relevant (to sales), we now know what killed King Tut. Here's the typically measured reaction reported in the story about the story.

"This is very exciting that we can take modern technology and learn more about Egyptian history," said Howard Markel, a medical historian at the University of Michigan's Center for the History of Medicine.

Well, the facts: From a morphological exam of dear ol' Tut, we already knew that he had problems with his spine that might be (hint-hint) attributed to Marfan syndrome or some other genetic (and thus real) disease. Thus, to really be sure we're not imagining his abnormal spinal curvature, we need some DNA to spice up our investigation (which will make it real science!). We're less sure that DNA would be necessary to prove that he had a deformed toe and a broken leg, but that will probably come in the next release from this genetics project. Be patient! You can't expect the investigators to give away the whole store in the first release.

Now, the story that we saw did not happen to mention the lack of any genetic finding relative to Tut's spinal anomaly (nor the toe), so it did not enlighten us on that score. Presumably we'll have to settle for the direct, if untrustworthy, evidence of the naked eye rather than for some complex genetic allele (when we already know quite well that most such alleles have poor predictive value).

But there's another bombshell in this same story: the investigators detected DNA sequence evidence of infection with the malaria parasite. Let's assume that this is not contamination from one of the many possible sources that have arisen since the tomb was discovered.

That Tut died young is undeniable (by looking at him and, believe it or not, despite the paucity of telomere-length evidence from his DNA!!). Of course, even the story notes that many mummies of various adult ages at death have already been shown to have been exposed to malaria. Thus, the rigorous conclusion was drawn that Tut may have died of malaria and a broken leg (and broken his Mummy's heart!).

Now if all this seems like good logic, rigorous science, or something beyond material for editorial hype and television programs, then you've hit the wrong blog!

Still, it's nice to see what it takes to send thrills up the spines (if not the toes) of reporters and the august JAMA readership. We can only hope that's not what your doctor is doing instead of keeping up with his profession.

(If you detect a nuance of satire in this post--tut, tut!--we hope you'll forgive us)

Monday, February 15, 2010

The poetry of science

As we've said before, BBC radio is an amazing resource. Three episodes of the World Service program Discovery explore "What Scientists Believe". Philosopher of science Stephen Webster interviews six scientists in an effort to discover how their beliefs, values, and personalities influence the work they choose to do. In the final episode, he talks with Andrew Gosler, a zoologist who has spent decades exploring a wood near Oxford, England, with a particular interest in a little bird called the Great Tit (the photo here is by Luc Viatour). He also talks with Piers Ingram, an applied mathematician who does medical research by modeling cell behavior.

Webster accompanied Gosler into the wood in an effort to find out what makes him tick. Gosler said he'd come to zoology from birdwatching as a child. Though he grew up in London, a relative began taking him birding when he was eight, and he discovered that he simply needed to know the names of the birds and flowers he saw around him. This need stayed with him. So now he makes his living by spending time in the woods, finding answers to questions that interest him (such as why the Great Tit's eggs are more speckled in nests lower down the hill, and when it is that Tits put on fat), but he also finds peace and spiritual meaning there. Webster asked him what he'd miss most if he could no longer go to the wood. Only his salary, he said -- he'd quit his job before he'd sit at a desk all day.

Which is just where Piers Ingram finds himself: in front of a computer, building models of cellular function. He said, among many other things, that he would like to apply his models to organs, or even to whole organisms, but that he has a lot more to learn before he can do that. To Ingram, it's not of the essence to connect with what he studies at the organismal level, as it is for Gosler. He probably never was driven to learn the name of everything he saw in the woods, and we don't know whether he finds peace or spiritual meaning in his job. Though, clearly, he loves it.

EO Wilson has said that the problem with modern biologists is that they don't know the difference between moles and voles (moles are how you quantify DNA, aren't they?). And we've told the story here before about the Kawasaki mouse in our lab. Gosler is probably older, and perhaps even a lot older, than Ingram -- and as an organismal biologist, he probably earns significantly less -- but he surely could tell a mole from a vole, and would know that a mouse needs innards to survive. Gosler's overriding questions have to do with conservation and climate change, while Ingram's have to do with finding medical applications for what he learns about cells. One can predict that Ingram would have a greater chance of 'success' than Gosler, because his goals are much more immediate and finite and within his control. Does this make him a better scientist? No, it makes him a more pragmatic one.

Scientists, and of course especially those caught up in the heavily reductionist, grant-driven, and hence productivity-driven world, often scoff at such thoughts. They denigrate 'organismal' or even 'ecological' thinkers as soft-headed, and their work as vague or unimportant -- a pastime rather than real science. From a molecular-deterministic point of view, they may be right. More knowledge about mechanism comes, and comes faster, from experimental than from observational research, for example, because experiments are more focused, typically on one variable at a time.

But the world is made of whole organisms and, as we stress in Mermaid's Tale, of interactions among cooperating components at all levels, from DNA up to the global biosphere. Molecular understanding is not the only kind of understanding. And, because our brains are the result of interactions par excellence, the satisfaction of higher levels of knowledge is a natural fact, and a part of our evolution and our existence.

So it will be a sad day when there's no longer a place for poetry in science.


Friday, February 12, 2010

Remember the olive oil!

A new study shows that a 'Mediterranean' diet is good for your mental health. This gets CNN headlines, but it's about as much of a surprise as the Grinch stealing Christmas. Things that are good for your cardiovascular health are good for reducing the risk of stroke and so on, and that includes reducing the risk of dementias.

Mediterranean diets are high in olive oil (and wine!):
A Mediterranean diet includes a lot of fruit, vegetables and fish, olive oil, legumes and cereals, and fewer dishes containing dairy, meat, poultry, and saturated fatty acids than other diets. It also involves small to moderate amounts of alcohol.
So says this story. It of course tempers the alcohol part, as any good Puritan American story must. Wine for breakfast and lunch, common in the Middle Sea countries, could never openly be advocated.

Still, high-fat foods like cheese, preserved meats, and so on are known to be associated with high risk of cardiovascular disease (CVD). About 20 years ago, people in North Karelia, a region of Finland that had the world's highest CVD risk, were asked to swap diets with a study population in Italy that had much lower risk (because its members lived on, rather than died from, a Mediterranean diet). The Italians, not demented, demurred. Salt fish and cheese, reindeer and bear meat are all delicious, but they can plug you up fast, apparently. The Finns are no fools either, however, and the North Karelians made a sea change, trying a more Mediterranean than Baltic diet. Over a 20-year period, their CVD rates declined by about 75%! No, that's not a typo!

So this current story is entirely credible, but it's not much of a new story. Fortunately, when the lovely people from the lovely climate go shopping, not being demented, they are able to remember the olive oil (and the wine). You should, too!

On the other hand, you should probably pay less attention to the latest attempts by media releases to get you to notice scientists' latest claims of dramatic findings.

Bon appetit!

Thursday, February 11, 2010

Take two aspirin and get some rest? No, it's normal. No, you need a sedative....

One indicator of the nature of a science is its standard texts. A text is usually a 'safe' work, a statement of core knowledge that is solid and reliable. The main facts and theories in texts are usually very slow to change, and they should be.

In medicine, things are similar. A medical manual, sometimes called a vade mecum, is a ready reference for diagnosis or treatment. It should be the solid underpinning on which daily, immediate decisions are made -- solid knowledge rather than research speculations or tentative ideas. But often things are not that way, and this provides an important lesson for areas of science with policy implications relevant to this blog, and to our concern here with the nature of biological and genetic knowledge.

The NYTimes has a story on the forthcoming, long-awaited revision of the Diagnostic and Statistical Manual of Mental Disorders. According to the article, major revisions will be made in diagnostic criteria, with all that implies about ideas concerning underlying cause -- and this is highly relevant to accepted medical treatment, to health-care costs, and to the interpretation of scientific studies.

If the classification of mental disorders is being revised in a major way, so that tomorrow's diagnosis and treatment are greatly different from today's, yet without major new 'facts' about mental illness having been discovered, what does this say about the state of knowledge in this field? Largely, it says that the knowledge is on very shaky ground. And it raises an important question: should we believe this new version?

We noted in a recent post that we are all, even scientists, often even within our own fields, helpless in the face of the barrage of facts in the huge research literature. We must trust the integrity of research. When research is distorted for political opportunism (as by the anti-climate-change Luddites) or for personal gain (as in conflicts of interest in medically or pharmaceutically related research), we are all losers, lost in seas of uncertainty.

In this case, however, the lost feeling comes not from a sense of fraud or conflict of interest, but from a sense that the field itself is a sea of uncertainty. Its guides represent fragile current knowledge, often argued out in the journals on the basis of belief rather than fact, because the facts are so elusive. GWAS mapping results for psychiatric disease (or, rather, their basic lack of definitiveness), presented in an era that accepts genes as metaphors for all types of causation in life -- and in society -- do not provide clear guidelines for understanding.

We must have faith in the literature, but who can have it in this area? There is clearly a need for more research on the brain, but perhaps also on the society that judges and evaluates behavior, because that, too, is part of the picture. If parents, teachers, doctors, therapists, drug companies, hospitals, schools, retirement centers, and the like all have a stake (often financial, but sometimes psychological) in these decisions, then it is hard to see how we can emerge from the forest of uncertainty.

Wednesday, February 10, 2010

Will the climate ever change?

There is yet another story in the NYTimes on the climategate issue, this time an accusation by skeptics that the IPCC (the Intergovernmental Panel on Climate Change) has been sloppy and that its leader should resign. Of course skeptics would say that!

Science depends on skeptics. In fact, all scientists should be skeptics! Instead, most scientists are skeptics about details (especially of others' work!), but not about deeper issues. There is too much circling of the wagons, or playing it safe so as not to upset the status quo that works for them -- in this case, the grant system.

But the kind of skepticism in this story is not scientific, not responsible, and not honorable. It is like current US Republican tactics in their desperate pursuit of power, a feeling of importance, wealth, or whatever. It takes some faults in the system (and any system has faults, since they're all run by humans) and treats those faults as if they undermine the basic premise of important human-affected climate change. There are problems with some uses of the scientific reports, as one might expect in an issue as large as climate change, with so many investigators involved. And there are accusations, which based on the story seem rather thin, that the director of the IPCC has conflicts of interest. He may or may not have financial conflicts, but he of course is, and probably must be, an advocate for recognizing the reality of climate change.

Assailing him and the IPCC for flaws that are trivial relative to the real issues is a flaw in reasoning, but reasoning is not what these particular critics are up to.

Sadly, our current political system works by demagoguery rather than on the basis of honor and facts. Societies always have to deal with vanity, greed, inequality, and the like. But an educated democracy ought, at least, to be able to deal nobly with threats to our shared, communal interests. Profiteering by these deniers is no different from war profiteering: the deniers want to keep things the same -- that is, good for their selfish interests -- even when trauma threatens.

This is important. We are all, even us scientists (yes, even us!), helpless in the face of knowledge outside the narrow spheres of our own experience and expertise. As we sit here gazing out at two feet of snow (and more falling), with the temperature around -5 degrees, how can we judge the truth of statements about climate change? We must rely on the integrity of the reported results of science. We happen to accept the seemingly overwhelming evidence that global warming is a fact and is due in part to human agency. But we have zero knowledge of our own on that subject -- zero! We must judge based on faith in the integrity of science publishing.

What we do about climate change is, and needs to be, political. But when we can't keep the facts themselves honest, or when they are distorted for self-serving ends, we are all lost in a blizzard of uncertainty. Manipulating data is a threat, but so is the demagoguery that comes with politicizing the facts.