Tuesday, July 12, 2016

In Memoriam: Al Knudson, a modest, under-recognized founder of cancer genetics (and more)

My first job, as a young faculty member, was in the Graduate School of Biomedical Sciences at the University of Texas Health Science Center in Houston.  Our small Center for Demographic and Population Genetics was part of the Graduate School, and it was small enough that we got to know, and interact with, the Dean.  And what a dean he was!

The great and good Al Knudson (1922-2016); Google images.
It was a small graduate school, so Dr Knudson was still active in research--cancer research.  One of the first talks I heard down there in Houston, when I still didn't have my first pair of cowboy boots, y'all, presented an interesting idea about the causes of cancer.

Radiation was a known carcinogen, as were some chemicals, and there were various ideas about how carcinogenesis worked at the gene level.  The basic idea was that these agents caused genetic mutations that led cells to misbehave and, though abnormal, escape detection by the immune system.  More mutations meant more cancer risk, and this was consistent with 'multi-hit' ideas of cancer.  More mutations take longer to accumulate, which was consistent with the increasing risk of cancer with age.  But genetics was still very rudimentary then compared to now, and direct testing was primitive at best.  And there were some curious exceptions.  An interesting fact was that some cancers seemed familial, arising in close relatives, and typically at earlier ages than the sporadic versions of what seemed to be the same type of tumor.  Why?

One example was the eye cancer retinoblastoma (Rb), which arose mostly in young children, usually as isolated cases; but there were affected families in which Rb was often present at birth.  Knudson's idea was that in affected families one harmful allele was being transmitted, but the disease did not arise until a second mutation occurred.  Al published a quantitative mutational model of the onset-age pattern in a PNAS paper in 1971, just before I arrived in Houston, but by chance I had heard him present the work at the time of my job interview.

The basic idea was a 2-hit hypothesis, in which you could inherit one Rb mutation and then only had to 'wait' for one of your embryonic retinal cells to suffer the bad luck of a hit in the normal copy in order for a cancer to develop.  That waiting time accounted for the earlier onset of familial cases, because they only had to 'wait' for one mutation, whereas sporadic cases needed to experience two Rb hits in the same cell lineage.
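To see how onset ages fall out of this, here is a rough back-of-envelope version (my sketch of the logic, not Knudson's exact 1971 formulation; $N$ and $\mu$ are illustrative placeholders).  Suppose each of $N$ embryonic retinal cell lineages suffers inactivating hits at a rate $\mu$ per year.  A familial carrier needs only one somatic hit, while a sporadic case needs two in the same lineage, so the expected tumor counts grow roughly as

$$m_{\text{familial}}(t) \approx N\mu t, \qquad m_{\text{sporadic}}(t) \approx N(\mu t)^2, \qquad P(\text{affected by age } t) = 1 - e^{-m(t)}.$$

That linear-versus-quadratic difference predicts exactly what Knudson saw: earlier onset, and frequently bilateral tumors, in carriers, versus later, usually unilateral tumors in sporadic cases.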

This was a profound insight.  It accounted for the epidemiological findings in which some forms of cancer cluster in families (e.g., some breast and colorectal cancers), yet most cases are sporadic.  It was shown at roughly that time, by clever work in those crude days of human genetics, that tumors were clonal--the tumor, even when it had spread, was the descendant of a single aberrant (mutated) cell.

It did not take long for this sort of thinking, along with various methods for detection, to find the Rb gene....and other genes related to cancer.  This eventually included genomewide tests for loss of detectable variation at microsatellite sites, which continued to confirm the idea, far beyond those types of cancer that seem to be caused largely by changes in a single gene.  The idea of somatic mutation caused by environmental factors was complemented by the idea that it is common to inherit genotypes that are partially altered but insufficient by themselves to cause cancer, so that the tumor only arises later in life, after environmentally caused (or stochastic) further mutations occur.

Knudson's basic 2-hit idea was quickly generalized to 'multi-hit' models of cancer, and the discovery that the cancer in a given individual is clonal led to models in which combinations of inherited mutations (present in every cell) and those that occur somatically seemed to account for the basic biology of cancer.  Many of the individual genes whose mutation puts a person at very elevated risk of one or more forms of cancer have since been identified, and newer technology has allowed their functional nature (and the reason for their role in cancer) to be found.  Some are involved in DNA repair or the control of cell division, and it's understandable why their mutational loss is dangerous.

The sources of variation in these genes may vary, but cancer as a combination of inherited and somatically generated mutations is a, if not the, prevailing general model for its biological nature and epidemiology, and shows why tumors are somatic evolutionary phenomena at the gene level.  But his nugget of an idea triggered much broader work in human genetics that, once technology caught up to the challenge, led to our understanding (and, too often, convenient ignoring) of the role of combined inherited and somatically induced variation as a major cause of the common, complex disorders for which genomewide mapping has become a routine approach.

I was still in Houston when Dr Knudson moved to the Fox Chase Cancer Center in Philadelphia.  We missed him, but over the following decades he continued to contribute to the understanding of cancer.  His inspiring, gentle, and generous nature was an exception in the snake-pit that has become so common in the 'business model' of so many biomedical research circles.

Al's foundational work earned him many honors.  But he didn't get the one I think he most richly deserved: his quiet, transformative role in understanding cancer, and the much broader impact on human genetics that followed from it, merited a Nobel Prize.

Tuesday, July 5, 2016

When scientific theory constrains

It's good from time to time to reflect on how we know what we think we know.  And to remember that, as at any other time in history, much of what we now think is true will sooner or later be found to be false or, often, only inaccurately or partially correct.  Some of this is because values change -- not so long ago homosexuality was considered to be an illness, e.g.  Some is because of new discoveries -- when archaea were first discovered they were thought to be exotic microbes inhabiting only extreme environments, but now they're known to live in all environments, even in and on us.  And of course these are just two of countless examples.

But what we think we know can be influenced by our assumptions about what we think is true, too. It's all too easy to look at data and interpret it in a way that makes sense to us, even if there are multiple possible interpretations.  This can be a particular problem in social science, when we've got a favorite theory and the data can be seen to confirm it; this is perhaps easiest to notice if you yourself aren't wedded to any of the theories.  But it's also true in biology. It is understandable that we want to assert that we now know something, and are rewarded for insight and discoveries, rather than more humbly hesitating to make claims.

Charitable giving
The other day I was listening to the BBC Radio 4 program Analysis on the charitable impulse.  Why do people give to charity?  It turns out that a lot of psychological research has been done on this, to the point that charities are now able to manipulate us into giving.  If you call your favorite NPR station to donate during a fund drive, e.g., if you're told that the caller just before you gave a lot of money, you're more likely to make a larger donation than if you're told the previous caller pledged a small amount.

A 1931 advertisement for the British charity, Barnardo's Homes; Wikipedia

Or, if an advertisement pictures one child, and tells us the story of that one child, we're more likely to donate than if we're told about 30,000 needy children.  This works even if we're told the story of two children, one after the other.  But, according to one of the researchers, if we're shown two children at once, and told that if we give, the money will randomly go to just one of the children, we're less likely to give.  This researcher interpreted this to mean that two is too many.

But there seem to me to be other possible interpretations, given that the experiment changes more than one variable.  Perhaps it's that we don't like the idea that someone else will choose who gets our money.  Or that we feel uncomfortable knowing that we've helped only one child when two are needy.  But surely the explanation is something other than that two is too many, given that in 2004 so many people around the world donated so much money to organizations helping tsunami victims that many had to start turning donations away.  These were anonymous victims, in great numbers.  Though, as the program noted, people weren't nearly as generous to the great number of victims of the earthquake in Nepal in 2015, with no obvious explanation.

The researcher did seem to be wedded to his one-versus-too-many interpretation, despite the contradictory data.  In fact, I would suggest that the methods, at least as presented, don't allow him to legitimately draw any conclusion.  Yet he readily did.

Thinness microbes?
The Food Programme on BBC Radio 4 is on to the microbiome in a big way.  Two recent episodes (here and here) explore the connection between gut microbes, food, and health, and the program promises to update us as new understanding develops.  As we all know by now, the microbiome, the bug intimates that accompany us through life, in and on our bodies, may affect our health, our weight, our behavior, and perhaps much more.  Or not.


Pseudomonas aeruginosa, Enterococcus faecalis and Staphylococcus aureus on Tryptic Soy Agar.  Wikipedia

Obesity, asthma, atopy, periodontal disease, rheumatoid arthritis, Parkinson's, Alzheimer's, autism, and many, many more conditions have been linked with, or are suggested to be linked with, in one way or another, our microbiome.  Perhaps we're hosting the wrong microbes, or not a diverse enough set of microbes, or we wipe the good ones out, along with the bad, with antibiotics or alcohol; and what we eat may have a lot to do with this.

One of the researchers interviewed for the program was experimenting with a set of identical twins in Scotland.  He varied their diets, having them eat, for example, lots of junk food and alcohol, or a very fibrous diet, and documented changes in their gut microbiomes, which apparently can change pretty quickly with changes in diet.  The most diverse microbiome was associated with the high-fiber diet.  Researchers seem to feel that diversity is good.

Along with a lot of enthusiasm and hype, though, mostly what we've got in microbiome research so far is correlations.  Thin people tend to have a different set of microbes than obese people, and people with a given neurological disease might statistically share a specific subset of microbes.  But this tells us nothing about cause and effect -- which came first, the microbiome or the condition?  And because the microbiome can change quickly and often, how long and how consistently would an organism have to reside in our gut before it causes a disease?
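A toy simulation makes the point concrete.  This is my own sketch with entirely made-up numbers ('diversity' here is just an arbitrary score): two opposite causal stories produce the same cross-sectional pattern, so the observed correlation cannot tell us which came first.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# World A: low microbiome diversity causes disease (microbiome first)
div_a = rng.normal(0, 1, n)
sick_a = rng.random(n) < 1 / (1 + np.exp(div_a))  # lower diversity -> sicker

# World B: disease suppresses microbiome diversity (condition first)
sick_b = rng.random(n) < 0.5
div_b = rng.normal(0, 1, n) - 1.0 * sick_b        # sickness lowers diversity

for world, div, sick in [("A", div_a, sick_a), ("B", div_b, sick_b)]:
    print(f"World {world}: mean diversity {div[sick].mean():+.2f} in sick, "
          f"{div[~sick].mean():+.2f} in well")
# Both worlds print 'sick people have lower diversity'; a snapshot
# correlation cannot distinguish the two causal directions.
```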

There was some discussion of probiotics in the second program, the assumption being that controlling our microbiome affects our health.  Perhaps we'll soon have probiotic yogurt or kefir or even a pill that keeps us thin, or prevents Alzheimer's disease.  Indeed, this was the logical conclusion from all the preceding discussion.

But one of the researchers, inadvertently I think, suggested that perhaps this reductionist conclusion was unwarranted.  He cautioned that thinking about probiotic pills rather than lifestyle might be counterproductive.  But except for factors with large effects such as smoking, the effect of "lifestyle" on health is rarely obvious.  We know that poverty, for example, is associated with ill health, but it's not so easy to tease out how and why.  And if the microbiome really does directly influence our health, as so many are promising, the only interesting relevant thing about lifestyle would be how it changes our microbiomic makeup.  Otherwise, we're talking about complexity, multiple factors with small effects -- genes, environmental factors, diet, and so on -- and all bets about probiotics and "the thinness microbiome" are off.  But the caution was, to my mind, an important warning about the problem of assuming we know what we think we know; in this case, that the microbiome is the ultimate cause of disease.

The problem of theory
These are just two examples of the problem of assumption-driven science.  They are fairly trivial, but if you are primed to notice, you'll see the problem all around you.  Social science research is essentially the interpretation of observational data from within a theoretical framework.  Psychologists might interpret observations from the perspective of behavioral, or cognitive, or biological psychology, e.g., and anthropologists, at least historically, from, say, a functionalist or materialist or biological or post-modernist perspective.  Even physicists interpret data based on whether they are string theorists or particle physicists.

And biologists' theoretical framework?  I would suggest that two big assumptions that biologists make are reductionism and, let's call it, biological uniformitarianism.  We believe we can reduce causation to a single factor, and we assume that we can extrapolate our findings from the mouse or zebrafish we're working on to other mice, fish, and species, or from one or some people to all people.  That is, we assume invariance rather than expecting variation.  There is plenty of evidence that by now we should know better.

True, most biologists would probably say that evolutionary theory is their theoretical framework, and many would add that traits are here because they're adaptive, because of natural selection.  Evolution does connect people to each other and people to other species, but it has done so by working on differences, not replicated identity, and there is no rule for the nature or number of those differences, or for extrapolating from one species or individual to another.  We know nothing to contradict evolutionary theory, but that every trait is adaptive is an assumption, and a pervasive one.

Theory and assumption can guide us, but they can also improperly constrain how we think about our data, which is why it's good to remind ourselves from time to time to think about how we know what we think we know. As scientists we should always be challenging and testing our assumptions and theories, not depending on them to tell us that we're right.

Thursday, June 30, 2016

Mammoth ego above all

From time to time our genetics colleagues propose projects that perhaps belong more to Hollywood or the video industry.  At least that way no inadvertent harm could be done.  But, hell, a big ego splash or some book royalties might make one a bit reckless.

I refer here to ideas about cloning extinct animal species from DNA rescued from preserved bone or tissues.  Some recent species, like Dodos, might be directly clonable in some way, and in some senses doing so could have some value--assuming that making one or a few of them is good for someone: the animals themselves, that is, not just zoo-goers.

However, how do you do this?  The best idea would be to take an entire cell, re-set its DNA/RNA status to that of a fertilized egg, and let 'er rip.  But how?  Unless you can make that cell develop all by itself in a Petri dish, the answer isn't obvious.  The usual idea, at least for mammals, is to find a surrogate mother.

Generally the approach has been, for mammals at least, to extract a complete nuclear genome from a fossil specimen, take a surrogate mother's egg cell (from its ovary), remove its nuclear DNA, replace it with the fossil's DNA, and implant the resulting embryo in a hormonally prepared mother for gestation.  This is basically the 'Dolly' approach, and to my (clearly incomplete) knowledge it is still the basic idea.  But there are problems with this, if you really want to revive a species as opposed just to getting a farm-worthy sheep or the like.

One problem is that you need the host species' egg, and that means its mitochondrial DNA (mtDNA, which is not in the nucleus) and the RNA and other contents of the host egg cell.  Even if you can use an entire intact nuclear genome, the resulting new individual is necessarily a hybrid between the host mother's egg and the exogenous DNA.  For agriculture, there are some well-known problems, such as the fact that the resulting cell will reflect some aspects of the host, like mutations in its mtDNA or the RNA from its nucleus, and so on.  The new individual is in some senses already a bit old, for example.

These problems can probably be dealt with, or perhaps have already been.  But with extinct species there are other problems.  Among them is that the DNA is usually not entirely intact in any cell.  In fossils or extinct species, it may be degraded into very small fragments.  By sequencing currently living relative species, we can identify roughly what part of the genome those fragments come from, and if we have several samples we could perhaps find each necessary fragment in some sample.  That's still not the same as having intact chromosomes, and almost certainly some large subset will be missing.  Then, however, you have to find the surrogate mother, and if the species is extinct you have to use a current, similar species to do this.
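As a toy illustration of that fragment-placement idea (my sketch; the reference and reads below are made up, and real pipelines use aligners such as BWA that tolerate mismatches and ancient-DNA damage):

```python
# Place short ancient-DNA fragments on a living relative's reference
# sequence by exact matching, then report how much is still uncovered.
ref = "ATGCGTACGTTAGCCGATCGATTACGGCATGCAATTCGCGGCTA"
fragments = ["CGTACGTT", "GATCGATT", "GGCATGCA", "TTCGCGG"]

covered = [False] * len(ref)
for frag in fragments:
    pos = ref.find(frag)          # place fragment by exact match
    if pos != -1:
        for i in range(pos, pos + len(frag)):
            covered[i] = True

print(f"reference coverage: {sum(covered) / len(ref):.0%}")
# Even with every fragment placed, large stretches remain uncovered --
# the 'missing large subset' problem described above.
```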

Mammoths and Neanderthals . . . . 
Let's for the moment ignore the fact that the cloned individual in these cases really won't be a member of its original species, but some kind of hybrid or chimera.  Suppose the attempt were made.  What then?

Two favorite species for this raw, I'd say cruel, Frankenstein egotism--and for its simplistic portrayal of the idea to the public--are Neanderthals and mammoths.  Maybe the suggestions have only been publicity stunts, given the real issues in the world that serious geneticists could address, but they seem to have been offered seriously, and they show me how science needs to be kept under control.

Neanderthals and mammoths are social beings.  Real ones were gestated in mothers of their own kind, eating their natural diet, in their natural social (and hence hormonal, physiological) circumstances, and born to families and nurturing, educating, protecting, natural social groups.  Once born, they were incorporated into the group in a way that they presumably evolved to be.

How should a cloned Neanderthal be treated?  It would be basically human: recall that we all carry some Neanderthal inheritance through extensive inter-mating way back then. Would it have civil rights, such as to education, a home, voting?  Could one experiment on it, to test aspects of its physiology? Could it be put on display in a zoo or some university museum?

What about mere mammoths?  One leading Mary Shelley imitator has suggested that cloned mammoths could be plunked back into 'natural' conditions in the Arctic, where there aren't (currently) too many humans around to destroy their environment--assuming, in a fantasy way, that commercial tour groups would not be immediately organized to go gawk at them (including 'scientists' who would want to publish papers in Nature on their behavior), and that they wouldn't be kept in zoos or labs in our august universities.  Such innocent-sounding ideas as cloning these extinct beings are far more egotism and ignorance--or willful avoidance of realities--than any sort of sound proposal.

Like humans, mammoths are social beings.  The Arctic today would not be a hospitable or normal place for a mammoth.  A surrogate elephant mother would not gestate a normal mammoth, in ways large or small that we can't really anticipate or even know.  The clone would be plunked down by itself, or in a pack of clones that had no normal social life to be raised in.  Even if it had a mother who loved it rather than rejecting it as a freak, she would not be a mammoth mother, nor would its life replicate the mammoth's natural existence as a herd species.  It is cruelty to suggest otherwise.

In memoriam, or a lesson never learned?
Let's not forget one Ota Benga, an African man, obtained from a slave trader, exhibited at the St Louis World's Fair in 1904, and then put on display--is there a better way to state this?--in the Bronx Zoo!



From Benga Wikipedia page

Baartman, from her Wikipedia page

Nor let us forget poor Saartjie Baartman, a southern African native, put on display, naked, around early 19th-century Europe to exhibit her exaggerated private parts.  Traveling freak shows, might one say?

It is to be noted that the people involved were leaders of their time, and people responsible for such dehumanizing practices have included prominent scientists at major universities.  Anthropologist Franz Boas requested that explorer Robert Peary bring an Inuk back from the Arctic, to be studied by scientists at Columbia.
Minik Wallace (also called Minik or Mene) (ca. 1890 – October 29, 1918) was an Inuk brought as a child in 1897 from Greenland to New York with his father and others by the explorer Robert Peary. The six Inuit were studied by staff of the American Museum of Natural History, which had custody. The adults and one child died soon of tuberculosis (TB), and one young man was returned to Greenland. After deceiving Minik by a staged burial, the museum put the skeleton of his father on exhibit. Adopted by the museum's building superintendent, William Wallace, Minik did not return to Greenland until after 1910. A few years later, he came back to the United States, where he lived and worked until dying of influenza in the 1918 pandemic.  (Wikipedia)

. . . . . and nobody to defend them
In our age of science, the idea of cloning extinct species seems exciting and appealing on the surface, a kindly sort of rescue--until you think about it carefully.  As you can tell, I feel quite strongly about this.  It's playing games with other sentient organisms' lives for no seriously justifiable purpose.  If it's hard enough to justify or excuse what we allow to be done with mice and other species for research purposes (and I include myself among the accused), this is beyond beyond.

To me, personally, these juicy cloning suggestions reflect the cold, selfish arrogance of science, and of scientists in leading universities.  The poor target species have nobody to defend them, and that is why these proposals, irresponsibly presented in the media, can get off the ground.  One sees little censure of these ghoulish proposals, whose presumptive nature is essentially a continuation of the eugenics movement.

The video industry is as close as we need to get to these notions.  Let us not forget our history.

Thursday, June 23, 2016

Full-scale Disneyland (with canals!), and sustainability issues

We recently returned from a 2-week trip to Italy.  Two of our children and their spouses live in Europe.  One couple lives in a small town in northern Italy, the other in central Switzerland.  The latter drove down to Italy where we all enjoyed seeing each other, which is not easy given the distances.  But while this vacation/family gathering was very pleasant and, to us, important, it raises some less pleasant thoughts about sustainability in our time in history.

My concerns are personal, but in a sense also global, and to some extent they relate to societal inequity: not everybody can just drop a few thou and travel across the ocean for a couple of weeks' dinners with family.  But beyond unfairness, my concerns are about other issues.  This was a very energy-bad vacation, and we weren't alone!

We flew from the east coast to Venice, the most convenient airport for our purposes.  We flew on a large plane, maybe 2/3 full of passengers, largely people taking cruises in the Mediterranean.  As we all know, flying is one of the worst contributions an individual can make to global warming.  The trip was about 3700 miles each way, not including the various train and car junkets we took during those two weeks.

And then there's Venice itself.  We stayed a couple of days there to recover from jet lag, and to see the sights.  We'd been there for a science meeting once before.  Bella Venezia!  Once home to a world-leading trade empire, to many great cultural and architectural wonders, and of course to its romantic lacework of canals.  The glory days were then, but what is the city today?  Venice takes in something like 100,000 tourists a day, well more than the number of people who actually live there.  The piazzas, side streets, walkways, and bridges--and they are very scenic indeed--are often a shoulder-to-shoulder river of tourists.  They (and we) sightsee in museums, shop, eat, shop, stay in hotels, eat, and shop.  It is obvious that a huge amount of money pours into the city, every day, all year, and has been doing so for decades if not centuries.

Even forgetting their thought-provoking historical value and more trivial entertainment value, and just thinking of them as Disneyesque curiosities for selfie-ops, these museums, shops, and hotels are staffed by an army of people who earn their living from the tourist trade.  So while Venice is in a sense unique and beautiful, it is also in a perhaps deeper sense something of a fake, a touristic Potemkin village, a hyper, full-time, full-scale Disneyland entertainment park, there today mainly to pluck the pockets of the relatively idle affluent and wasteful denizens of our planet (I certainly include myself in that category!).

St Mark's Square, Venice; By Nino Barbieri - Own work

Venice is but one rather small city on the global tourist map.  If you think about the amount of fuel used to transport everyone to, from, and around Venice (and even take into account that the gondolas don't require fossil fuel!), and then multiply that by the hundreds of tourist sites around the world, you have to wonder what hope there is for containing global warming.  There is no sign of self-restraint of any kind here--even on departure to return home, the airport luxury shops do a booming business as tourists part with whatever dollars they've not yet spent.

But what can one seriously do?
It is easy to chastise people who take such totally needless trips, even if accompanied by a self-incorporating mea culpa.  After all, this really is a nearly total luxury.  For most of human history those relatives who moved or sailed far away never saw their family again and corresponded by mail (if at all, if there was such a thing as 'mail').  That was just how life was!  Our family get-togethers are a new, pure luxury.  In a seriously conservation-dedicated world, we could dispense at least with the purely sightseeing, self-indulging kinds of global vacationing.  That would seem like something trivial, a luxury that a resource-conscious world could easily forego.  But even if we all were so equitable, fair, future-aware, and so on, things aren't nearly so simple.

The world is crowded with people and much of it is industrialized, with the number of people who live on the land, as subsistence farmers, declining every year.  We have hugely diverse economies, in a sense creating occupations that earn money so we can swap that for food and so on.  Most of it isn't really necessary.  Among these non-food related activities is tourism, which is huge because so many people are now wealthy and idle enough to take global junkets.

In turn that means that much of the world depends on travel and sightseeing.  Countless peoples' livelihoods are involved.  This is in a sense quite antithetical to global sustainability.  If we seriously slowed down travel to save fossil fuels and reduce warming, then the people involved in tourism, air travel, and cruise ships, and in the manufacture and operation of planes, ships, trains, and buses, and of their ports and terminals, would lose their jobs.  The makers and sellers of tourist-related goods and services--Venetian carnival masks, post-cards, luxury shopping goods, hotel supplies, restaurant foods--and the chefs, waiters, menu printers, clerks, and so on, would be hit.  Venice, already a shell of its former self, would cease to have a reason to exist.  Even those who deliver all these goods during the night, and those who remove the trash, invisible to the tourists sleeping quietly in their beds, would be affected.  Society would somehow have to do something about their employment needs.

This means that the idea of just paring back on consumption really is a dream--or, as every even mild economic depression shows, a nightmare.  And just the one example of tourism, essentially a luxury trade, involves countless thousands of people.  Needless to say, all of this is grossly unfair to the huge majority of people living on or below the margins.  It shows the inadvertent implications, even the distanced cruelty, of those idealists who want quick changes in sustainability directions.

It is difficult to have a non-selfish moral position on these issues.  If we say "let's change things slowly so as not to be too disruptive to too many people", the normal human tendency is to think the problem isn't so real, and not even go along with 'slowly' with much dedication. That's why car companies begin making and hawking, and consumers purchasing, bigger cars and trucks the moment gas prices drop.  [I insert this post-posting editorial change because today's NY Times had a story about the return of gas-guzzlers, in the same spirit of what this post is about]


If we say 'we must rush' then too many will find rationales for not going along ('OK, it's a good idea, but I can't do it--I have to see my family overseas!').  So where is a feasible ground to be found, and to what extent should we personally expect to be affected by it?  What will we give up for the cause?  The question, for me, is not abstractly how much one must cut out of what one does, but how much I must cut!  That gets pretty close to home, so to speak.

I can't help but add a rather gratuitous, if snide, side comment.  The problems are compounded in an ironic way.  We have agricultural sustainability issues, as everyone by now should know.  The 'developed' world suffers common diseases largely due to bad nutrition, and that means over-eating.  So while much of the world barely scrapes by, many in the rich world waddle along largely overweight (these are not the minority of people struggling with genetic or epigenetic problems that make weight control a real challenge).  The obesity epidemic is why we hear complaints about airplane seats being too small!  So I remark snarkily that, as a consequence, one reason air travel is so environmentally unfriendly is the countless tons of human bulk being transported daily across the oceans in tourist-filled aircraft.  One thing leads to another.

We just took what was clearly a very energy-bad trip, no matter how understandable our desire to be with family and our decision to go.  We could, of course, have talked with our family members via Skype--indeed, we already do that often.  I complain that leaders in sustainability and climate change, including the very organization that documents it for the UN, fly all over the world and meet in fancy hotels to discuss the problem and tell everyone what we all (that is, everyone but themselves) must do to 'save the planet'.  The leading spokespersons for sustainability and climate-change avoidance could set a very public example and work only via Skype!

In the context of global conservation, sustainability, and climate issues, who should feel guilty about what?  If 'do as I say, not as I do' is not acceptable, then what justifies our personal exceptionalism?  For me, the answers are far from clear.

Tuesday, May 31, 2016

Genes: convenient tokens of our time

My post today, perhaps typically cranky, was triggered by an essay at Aeon about the influence that the film Still Alice has had on thinking about Alzheimer's Disease (AD).  As the piece puts it, the film presents AD as a genetic disease with a simple, predictable, doom-like genetic cause.  The authors argue that the movie is more than entertainment.  It's a portrayal that raises an important ethical issue, because it is very misleading to leave the impression that AD is a predictable genetic disease.  That's because clear genetic causation, and thus the simple 'we can test for it' representation, applies only to a small fraction of AD.  The film badly misrepresents the overall reality of this awful form of the disease (a good treatment of Alzheimer's disease and its history is Margaret Lock's thoughtful The Alzheimer Conundrum, 2013, Princeton University Press).

While focusing on AD, the Aeon piece makes strong statements about our obsession with genes, in ways that we think can be readily generalized.  In a nutshell, genes have become the convenient tokens of our time.

Symboling is a key to making us 'human'
If there is any one thing that most distinguishes our human species from others, it may be the use of language as a symbolic way to perceive the world and communicate to others.  Symboling has long been said by anthropologists to be an important key to our evolution and the development of culture, itself based on language.

Symbol and metaphor are used not just to represent the world and to communicate about it, but also to sort out our social structure and our relationships with each other and the world.  Language is largely the manipulation or invocation of symbols.  In a species that understands future events and generalities, like death and sex, in abstract terms, the symbols of language can be reassuring or starkly threatening.  We can use them to soothe ourselves or to manipulate others, and they can also be used in the societal dance around who has power, influence, and resources.

Symbols represent a perception of reality, but a symbol is not in itself reality.  It is our filter, on or around which we base our interactions and even our material lives.  And science is as thoroughly influenced by symbols as any other human endeavor.

Science is, like religion, a part of our culture that purports to lead us to understand and generalize about the world.  But because science is itself a cultural endeavor, it is also part and parcel of the hierarchy and empire building we do in general, part of a cultural machinery that includes self-promotion and mutually reinforcing service industries, including the news media and even scientific journals themselves.

The current, and even growing, pressures to maintain factory-like 'productivity', in terms of grants coming in and papers going out, are largely at odds with the fundamental purpose of science (as opposed to 'technology').  Unlike designing a better product, in the important, leading-edge areas of science we don't know where we're going.  That is indeed the reason it is science.  Exploring the unknown is what really good science is about.  That's not naturally an assembly-line process, because the latter depends on using known facts.  However, our society is increasingly forcing science to be like a factory, with a rather short-term kind of fiscal accountability.

Our culture, like any culture, creates symbols to use as tokens as we go about our lives.  Tokens are reassuring or explanatory symbols, and we naturally use them in the maneuvering for various resources that culture is so often about.  Nowadays, a central token is the gene.

DNA; Wikipedia

Genes as symbols
Genes are proffered as the irrefutable, ubiquitous cause of things, the salvation, the explanation, in ways rather similar to the way God and miracles are proffered by religion.  Genes conveniently lend themselves to manipulation by technology, and technology sells in our industrial culture.  Genes are specific rather than vague, are enumerable, and can be seen as real core 'data' to explain the world.  Genes are widely used as ultimate blameworthy causes, responsible for disease, which comes to be defined as what happens when genes go 'wrong'.  Being literally unseen, like angels, genes can take on an aura of pervasive power and mystery.  The incantation by scientists is that if we can only be enabled to find them, we can even fix them (with CRISPR or some other promised panacea), exorcising their evil.  All of this invocation of fundamental causal tokens is particulate enough to be marketable for grants and research proposals, great for publishing in journals and for news media to gawk at in wonder.  Genes provide impressively mysterious tokens whose manipulation scientists can promise will all but create miracles.  Genes stand for life's Book of Truth, much as sacred texts have traditionally done and, for many, still do.

Genes provide fundamental symbolic tokens in theories of life--its essence, its evolution, of human behavior, of good and evil traits, of atoms of causation from which everything follows. They lurk in the background, responsible for all good and evil.  So in our age in human history, it is not surprising that reports of finding genes 'for' this or that have unbelievable explanatory panache.  It's not a trivial aspect of this symbolic role that people (including scientists) have to take others' word for what they claim as insights.

This token does, of course, have underlying reality
We're in the age of science, so it is only to be expected that we'll have tokens relevant to this endeavor.  That we have our symbols around which to build other aspects of our culture doesn't mean that the biology of genes is being made up out of whole cloth.  Unlike religion, where things can be 'verified' only by claims of communication with God, genes can, at least in principle, be checked and claims about them tested.  Genes obviously do have major and fundamental roles in life.  If that isn't true, we are really misperceiving the fundamentals of our existence.  So, even when complexities of causation are daunting, we can claim and blame what we want on genes and in a sense be correct, at least at some level.  That enhances and endorses the token value of genes.

Genes do have great sticking power.  The Aeon piece about AD is just one of countless daily examples.  A fraction of cases of AD are so closely associated with the presence of some known variants in a couple of genes that true causation--whatever the mechanism--seems an entirely plausible explanation.  Likewise, there are hundreds or thousands of disorders that seem clearly to be inherited as the result of malfunction of one or two specific genes.  The cultural move we are stressing here is the extension of these clearly causative findings to the idea that causation can generally be enumerated in convenient ways, mainly by peoples' inherited genomes, with other aspects of biological causation treated as rather superficial or incidental.  That, in a sense, is typical of deeply held cultural icons or tokens.

The problem with genes as tokens is that they are invoked generally or generically in the competition for cultural resources, material and symbolic.  Personally, we think there are issues, genetic issues in fact, that deserve greater investment than the easier-to-invoke bigger-is-better approach.  They include a much more intense attack on the many traits that we already know, without any serious doubt, are tractably genetic--due to one or only a couple of genes--and which real genetic therapy might therefore treat or prevent effectively.  By contrast, most traits, even if they are affected by genetic variation as all traits must be, are predominantly due to environmental or chance causative factors.  We have ways to avoid many diseases that don't require genetic approaches, but as vague entities they're perfect subjects for invoking the gene token, and policy in the industrial world clearly shows this.

Some progress does of course occur because of genetically based research, but the promise far outpaces the reality of genetic cures.  Yet genes are the material tokens that keep the motor running far beyond the actual level of progress.  They effectively reflect our time--our molecular, computer, technological culture imagery, our love of scale and size, and the material grandeur they generate.

Every culture, every generation has its tokens and belief systems.  Genes are among ours.  They're never perfect.  People seek hope, and what velvet robes and gilded cathedrals and mosques provide for many, humming laboratories provide for a growing number of others.

Tokens, symbols and metaphors: they drive much of what people do, even in science.

Monday, May 30, 2016

Cancer moonshot and slow-learners

Motivated by the early death of Vice President Biden's son from cancer, President Obama recently announced a new health initiative which he's calling the cancer 'moonshot'.  This is like a second Nixonian 'war' on cancer, but using a seemingly more benign metaphor (though cancer is so awful that treating it as a 'war' seems apt in that sense).  Last week the NYTimes printed an op-ed piece that pointed out one of the major issues and illusions hidden by the rhetoric of the new attack on cancer, as with the old:  curing one cancer may extend a person's life, but it also increases his or her chances of a second cancer, since risks of cancer rise with age.

Cancers 'compete' with each other for our lives
The op-ed's main point is that the more earlier-onset cancers we cure, the more late-onset, less tractable tumors we'll see.  In that sense, cancers 'compete' with each other for our lives.  The first occurrence would get us unless the medical establishment stops it, thus opening the door for some subsequent Rogue Cell to generate a new tumor at some later time in the person's life.  It is entirely right and appropriate to point this out, but the issues are subtle (though not at all secret).

First, the rise in risk of some cancers slows with age.  Under normal environmental conditions, cancers increase in frequency with age because they are generally due to the accumulation of multiple mutations of various sorts, so that the more cell-years of exposure, the more mutations will arise.  At some point, one of our billions of cells acquires a set of mutational changes that lead it to stop obeying the rules of restraint in form and cell division that are appropriate for the normal function of its particular tissue.  A tumor results from a combination of exposure to mutagens and mutations that occur simply by DNA replication errors--totally chance events--when cells divide.  As the tumor grows it acquires further mutations that lead it to spread, resist chemotherapy, etc.
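The classic quantitative version of this accumulation argument is the multistage, Armitage-Doll-style formulation (a sketch consistent with the reasoning here, not something from the op-ed): if a cell lineage must pass through $k$ rate-limiting mutational steps, the $i$-th occurring at a constant rate $\mu_i$, then incidence at age $t$ rises as a power of age,

$$I(t) \;\propto\; \mu_1 \mu_2 \cdots \mu_k \, t^{\,k-1}.$$

Fits of adult carcinoma incidence to this form originally suggested roughly six or seven such steps, and any late-life slowing of exposure or of cell division flattens the curve below that power law--which is the slowing discussed next.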

The slowing is important, but the reasons are subtle.  The attack on cells by lifestyle-related mutagens like radiation or chemicals in the environment becomes reduced in intensity as people age and simplify their lives, slowing a lot of the exposure to these risk factors.  Moreover, cell division rates--cell divisions being the times when mutations arise--themselves slow down, so the rate of accumulation of new mutations, whether by chance or by exposures, slows.  This decrease in the increase of risk with age at least tempers the caution that curing cancers in adults will leave them alive for many years and hence at risk for many more cancers (though it will surely leave them vulnerable to some!).


Apollo 11, the first mission to land humans on the Moon; Wikipedia

Competing causes: more to the story, but nothing at all new
There's an important issue not mentioned in the article, one that is much more important in an indirect way.  Perhaps the authors of the op-ed didn't think of it, chose not to mention it, or, as specialists, simply weren't aware of it.  But it's not at all secret; indeed it is something we ourselves studied for many years, and have blogged about here before: anything that reduces early-onset diseases increases the number of late-onset diseases.  So curing cancer early on (which is what the op-ed was about) increases risk for every later-onset disease, not just cancer.  In the same way, as we've noted before, reducing heart disease or auto accident rates or snake-bite deaths will increase dementia, heart disease, diabetes, and cancer--all other later-onset diseases--simply because more people will live to be at risk.  This is the Catch-22 of biomedical intervention.
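A minimal simulation makes the Catch-22 concrete.  This is my own sketch, with made-up, purely illustrative hazard rates rather than real epidemiological data: remove an 'early' cause of death and simply count how many more people live long enough to develop a late-onset disease.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200_000          # simulated people
MAX_AGE = 110        # follow everyone to this age (or death)

def lifetime_late_onset_cases(cure_early: bool) -> int:
    """Discrete-time (yearly) competing-risks simulation: how many of
    N people are ever diagnosed with the late-onset disease."""
    alive = np.ones(N, dtype=bool)
    diagnosed = np.zeros(N, dtype=bool)
    for age in range(MAX_AGE):
        # Made-up hazards, chosen only for illustration:
        h_early = 0.0 if cure_early else (0.01 if age > 40 else 0.0)
        h_late = 1e-6 * age ** 2.5         # late-onset disease, rises steeply
        h_other = 0.002 + 1e-5 * age ** 2  # all other causes of death
        diagnosed |= alive & (rng.random(N) < h_late)
        alive &= rng.random(N) >= (h_early + h_other)
    return int(diagnosed.sum())

print("late-onset cases, early cause present:", lifetime_late_onset_cases(False))
print("late-onset cases, early cause 'cured':", lifetime_late_onset_cases(True))
# The second count is reliably larger: curing the early cause leaves
# more people alive at the ages where the late disease strikes.
```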

In this sense all the marketing rhetoric about 'precision' genomic medicine is playing a game with the public, and the game is for money--research money among other things.  There's no cure for mortality or the reality of aging.  Whether due to genetic variants or lifestyle, we are at increasing risk for the panoply of diseases as we age, simply because exposure durations increase.  And every victory of medicine at earlier ages is a defeat for late-age experience.  Even were we to suppose that massive CRISPRization could cure every disease as it arose, and people's functions didn't diminish with age, the world would be so massively overpopulated as to make ghastly science fiction movies seem like Bugs Bunny cartoons.

But the conundrum is that, because of the obvious and understandable fact that nobody wants major early-onset diseases, it seems wholly reasonable to attack them with all the research and therapeutic vigor at our disposal.  The earlier and more severe the disease, the greater the gain in satisfactory life-years.  But the huge investment that NIH and its university clients make in genomics and you-name-it related to late-age diseases is almost sure to backfire in these ways.  Cancer is but one example.

People should be aware of these things.  The statistical aspects of competing causes have long been part of demographic and public-health theory.  Even early in the computer era many leading demographers were working on the quantitative implications of competing causes of death and disease, and similar points were very clear at the time.  The relevance to cancer, as outlined above, was also obvious.  I know this first-hand, because I was involved in it myself early in my career.  It was an important part of theorizing, superficial as well as thoughtful, about the nature of aging and species-specific lifespan, and much else.  The hard realities of competing causes have been part of the actuarial field since, well, more or less since the actuarial field began.  It is a sober lesson that apparently nobody wants to hear.  So it should not be written about as if it were a surprise, or a new discovery or realization.  Instead, the question--and it is in every way a fair question--should be why we cannot digest this lesson.  Is it because of our normal human frailty of wishful thinking about death and disease, or because it is not convenient for the biomedical industries to put this sober reality front and center?

It's hard to accept mortality, and that life is finite.  Some people want to live as long as possible, no matter the state of their health, and will reach for any life-raft at any age when they're ill.  But a growing number are signing Do Not Resuscitate documents, and the hospice movement, which aids those with terminal conditions who want to die in peace rather than wired to a hospital bed, continues to grow.  None of us wants a society like that in Anthony Trollope's 1881 dystopian novel The Fixed Period, in which at age 67 everyone is given a nice comfortable exit--at least that was the policy until it hit too close to home for those who legislated it.  But we don't want uncomfortable, slow deaths, either.

The problem of competing causes is a serious but subtle one, and health policy should reflect the realities of life, and of death.  I wouldn't bet on that, however, because there is nothing to suggest that humans as a collective electorate are ready or able to face up to the facts when golden promises are being made by legislators, bureaucrats, pharmas, and so on.  But science and scientists should be devoted to truth, even when truth isn't convenient to their interests or for the public to hear.

Thursday, May 19, 2016

Another look at 'complexity'

A fascinating and clear description of one contemporary problem of the sciences involved in 'complexity' can be found in an excellent discussion of how brains work--or rather, of how brains don't work--in yesterday's Aeon Magazine essay ("The Empty Brain," by Robert Epstein).  Despite the ubiquity of the metaphor, brains are not computers.  Newborn babies, Epstein says, are born with brains that can learn, respond to the environment, and change as they grow.
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
We are absolutely unqualified to discuss, or even comment on, the details of the neurobiology discussed.  Indeed, even the author himself doesn't provide any sort of explanation of how brains actually work, relying on general hand-waving terms that are almost tautologically true, as when he says that experiences 'change' the brain.  Such change involves countless neural connections (it must, since what else is there in the brain that is relevant?), and would be entirely different in two different people.

In dismissing the computer metaphor as a fad based on current culture, which seems like a very apt critique, he substitutes vague assertions without giving a better explanation.  So, even if we don't somehow 'store' an image of things in some 'place' in the brain, we obviously do somehow retain the ability to recall it.  If the data-processing imagery is misleading, what else could there be?

We have no idea!  But one important thing this essay reveals is that the problem of understanding multiple-component phenomena is a general one.  The issues with the brain seem essentially the same as the issues in genomics, which we write about all the time, in which causation of the 'same' trait in different people is not due to the same causal factors (and we are struggling to figure out what those factors are in the first place).

A human brain, but what is it?  Wikipedia

In some fields, like physics, chemistry, and cosmology, each item of a given kind--an electron, a field, a photon, a mass--is identical, and their interactions are replicable (if current understanding is correct).  For complexities like the interactions or curves of motion among many galaxies, each with many stars, planets, and interstellar material and energy, the computational and mathematical details are far too intricate and extensive for simple solutions.  So one has to break the pattern down into subsets and simulate them on a computer.  This seems to work well, however, and the reason is that the laws of behavior in physics apply equally to every object or component.

Biology is made of molecules, and at their level of course the same must be true.  But at anything close to the level at which we need understanding, replicability is often very weak, except in the general sense that each person is 'more or less' alike in physiology, neural structures, and so on.  At the level of underlying causation, however, we know that we're generally each different, often in ways that are important.  This applies to normal development, to health, and even to behavior.  Evolution works by screening differences, because that's how new species and adaptations and so on arise.  So it is difference that is fundamental to us, and part of that is that each individual with the 'same' trait has it for different reasons.  Those reasons may be nearly the same or very different--we have no a priori way to know, and no general theory that is of much use in predicting.  We should stop pouring resources into projects that nibble away at tiny details, a convenient distraction from the hard thinking that we should be doing (as well as from addressing many clearly tractable problems in genetics and behavior, where causal factors are strong and well-known).

What are the issues?
There are several issues here, and it's important to ask how we might think about them.  Our current scientific legacy has us trying to identify fundamental causal units, and then to show how they 'add up' to produce the trait we are interested in.  'Add up' means they act independently, each factor possibly having, in a given individual, its own particular strength (for example, variants at multiple contributing genes, with each person carrying a unique set of variants, and each variant having some specifiable independent effect).  When one speaks of 'interactions' in this context, what is usually meant is that two (usually) factors combine beyond just adding up.  The classical example within a given gene is 'dominance', in which the effect of the Aa genotype is not just the sum of the A and the a effects.  Statistical methods allow for two-way interactions in roughly this way, by including terms like z·A×B (some quantitative coefficient z times the A and B states in the individual), assuming that the interaction is the same in every A-B instance (z is constant).
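In generic notation (a sketch of the kind of model meant here, not any particular study's), the trait value $y$ of an individual is modeled as

$$y \;=\; \beta_0 + \beta_A x_A + \beta_B x_B + z\,(x_A \times x_B) + \varepsilon,$$

where $x_A$ and $x_B$ code the individual's states at the two factors, the $\beta$'s are their independent effects, and the single constant $z$ carries the entire 'interaction'--the same $z$ for everyone, with no higher-order terms and no feedback.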

This is very generic (not based on any theory of how these factors interact), but for general inference that they do act in relevant ways, it seems fine.  Theories of causality invoke such patterns as paths of factor interaction, but they almost always make various consequential simplifications: that interactions are only pair-wise; that there is no looping (the presence of A and B sets up the effect, but A and B don't keep interacting in ways that might change it, and there's no feedback from other factors); and that the sizes of effects are fixed rather than different in each individual context.

For discovery purposes this may be fine in many multivariate situations, and that's what the statistical package industry is about.  But the assumptions may not be accurate, and/or the number and complexity of interactions may be too great to be usefully inferred from practical data--too many interactions for achievable sample sizes, parameters affected by unmeasured variables, individual effects too small to reach statistical 'significance' but in aggregate accounting for the bulk of effects, and so on.

These are not newly discovered issues, but often they can only be found by looking under the rug, where they've been conveniently swept because our statistical industry doesn't, and cannot, adequately deal with them.  This is not a fault of the statistics, except in the sense that the models are not accurate enough; in really complex situations, which seem to be the rule rather than the exception, such modeling is simply not an appropriate way to make inferences.

We need, or should seek, something different.  But what?
Finding better approaches is not easy, because we don't know what form they should take.  Can we just tweak what we have, or are we asking the wrong sorts of questions for the methods we know about?  Are our notions of causality somehow fundamentally inadequate?  We don't know the answers.  But what we now do have is knowledge of the causal landscape that we face.  It tells us that enumerative approaches are what we know how to do, but also that they are not an optimal way to achieve understanding.  The Aeon essay describes yet another such situation, so we know that we face the same sort of problem--which we call 'complexity', as a not very helpful catchword--in many areas.  Modern science has shown this to us.  Now we need to use appropriate science to figure it out.