Wednesday, May 14, 2014

Red wine, resveratrol, and the credibility factor

On Tuesday we blogged about the latest Hot News that resveratrol, the long-touted, purportedly disease-protective ingredient in red wine and other foods, isn't protective after all.  The study was a modest-sized one in the Chianti region of Italy, where the wine is good and the sunshine warm and welcoming.

Though not large or in any sense exhaustive, this study made the news beyond Vatican City only, we might suggest, because it seemed to contradict the literature that has for many years praised resveratrol, and red wine, for their supposed physiological or biochemical benefits.  Naturally, the news media were hot to seize on the new study.  It stirred controversy, a media favorite.

But how do we know which to believe?  Should this study cast doubt on the prior studies' results?  If it is a good and reliable result, one can see why it could be reported as News.  But hold on.

On what possible grounds would this study be viewed as definitive, or at least definitive enough, to cast doubts on the extensive prior work?  If a responsible reporter were contacted by the author or a journal about this, why should the reporter believe or even report these results?

Given the numerous studies supporting a protective effect for resveratrol, on what grounds would this one be viewed as credible?  Or should the media assume there may be problems with the study, and make that possibility the headline and basis of the story?  One can think of many issues with the epistemology of this kind of study, such as confounding, including socioeconomic confounding, as we mentioned in our original post on this story.  Why isn't the media's first reaction that there must be something wrong with the latest results, and that the study's bottom line is incorrect?

Indeed, why shouldn't the media treat this as a junk study, not worth reporting, since it did not seem to offer any clear or definitive reason for countering prior results?  Or at least ask what other factors may have been correlated with exposure to resveratrol that could negate its normally reported effect?

One possibility is that the result is just a statistical fluke: the actual resveratrol effect in the Chianti population is protective, but the roll of the sampling dice by chance turned up an opposite result.  How would we know?
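As a rough illustration of the point, here is a minimal simulation sketch.  Every number in it is invented purely for illustration (the effect size, the noise level, the sample size); it shows only how sampling variation alone can flip the sign of an estimated effect in a modest-sized study:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical numbers, chosen only for illustration: a modestly
    # protective true effect, estimated in a modest-sized study with
    # noisy individual outcomes.
    true_effect = -0.10   # negative = protective
    n_subjects = 400      # a "modest-sized" study
    n_studies = 10_000    # simulated replications of that study

    sign_flips = 0
    for _ in range(n_studies):
        # Each subject's outcome = true effect + individual noise.
        outcomes = true_effect + rng.normal(0.0, 1.0, size=n_subjects)
        if outcomes.mean() > 0:   # estimate points the "wrong" way
            sign_flips += 1

    print(f"Estimates with the wrong sign: {100 * sign_flips / n_studies:.1f}%")

With these made-up numbers, a small but real fraction of replications flips sign by chance alone; the smaller the study and the weaker the effect, the more often that happens.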

We can put this another way: doesn't the coverage show that the media, and perhaps some scientists, treat this as more definitive than the prior work, and that that assumption is what seems to make it newsworthy?  One can ask whether we shouldn't expect this one study to be used as a rationale for funding proposals for yet more, larger studies to resolve this 'controversy'.  Or should we not expect that next week, or next year, another study will appear, and be given credibility by the media and its authors, that will reverse the current wisdom?

The idea that one factor has an important net effect that will reliably be realized is the kind of reductionistic thinking we often criticize, including in our prior resveratrol post.  No author would admit to thinking in a one-factor way, because we all know that would invite criticism, but this story shows that just that sort of approach underlies the work.  At some point, if this is the kind of result we get and we don't assume the current study is flawed, we should say we know enough to conclude whether or not resveratrol has a protective effect, and move on to other studies rather than repetitions of this one.

Or, if for some reason we actually give credence to this new story, we need to ask why so many prior studies were junk.  Then the question arises of whom to believe, and of when we actually have an answer, rather than just another rationale for not moving on to something more important, or at least to questions that have actual answers.

Or it's possible that both results are correct: resveratrol is protective in some populations and some contexts, and not in others.  As always, it's hard to know how to interpret contradictory results, and too often the latest results are given the most credence for no reason other than that they are the latest.

Meanwhile, drink red wine if you like red wine, but then don't drive, because that is a risk factor we should actually believe!

5 comments:

Anonymous said...

"On what possible grounds would this study be viewed as definitive, or at least definitive enough, to cast doubts on the extensive prior work?"

This is an excellent question, and post in general. The problem with stories like this is that they force the debate to be between two polar opposites: (1) no effect at all versus (2) one constant effect (in both magnitude and sign). However, it makes sense to think about effects as random variables, and this provides a way to think about these stories. Another way to describe it is "context dependence", but I think Andrew Gelman's "varying treatment effects" is better because it suggests practical statistical tools for improving our thinking about this stuff: e.g. http://www.stat.columbia.edu/~gelman/research/published/bayes_management.pdf.

So in this case, a more reasonable response might be "where does this new study's estimate of the effect of resveratrol fall in the distribution of past estimates?"
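To make that concrete, here's a toy sketch of what I mean.  All of the numbers are invented placeholders, not real estimates from the resveratrol literature:

    import numpy as np

    # Invented placeholder estimates from hypothetical past studies
    # (say, log hazard ratios, where negative = protective).
    past_estimates = np.array([-0.12, -0.08, -0.15, -0.05, -0.10,
                               -0.20, -0.02, -0.09, -0.11, -0.07])
    new_estimate = 0.03   # the "contradictory" new study, also invented

    # Where does the new estimate fall relative to past ones?
    percentile = 100 * np.mean(past_estimates < new_estimate)
    z = (new_estimate - past_estimates.mean()) / past_estimates.std(ddof=1)

    print(f"New estimate exceeds {percentile:.0f}% of past estimates "
          f"and sits {z:.1f} SDs above their mean.")

If the new estimate sits far out in the tail of past estimates, that alone is a reason to suspect a fluke or a study-quality problem before rewriting the headline.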

Ken Weiss said...

We try to deal with just some of these points today. We did not know of the Gelman paper. The questions involve whether there are other, unknown factors that affect the risk factor's effect; whether this study is a poor one for some reason that doesn't apply to the many seemingly consistent (positive) prior studies; whether it's a statistical fluke; and so on. Today's post doesn't pose the question in the same words, but is consistent with it.

However, there is also a question of the quality of samples and studies, for this and past studies alike, so a 'distribution' is perhaps not the best way to think of the problem, which is more complex. That is, the risk is not just a random variable, and even if it were, what makes it 'random'? We don't have the answers, but the questions are fair ones to ask.

This to me makes the idea that we can be saved by Bayes largely wishful thinking, I must say.

If you don't already know of it, John Ioannidis has written about some issues in study quality (in a somewhat different context, involving studies published in China relative to their use, or not, in meta-analysis, etc.).

Anonymous said...

"Our today's post doesn't pose the question in the same words, but is consistent. "

Completely agree, I just like Gelman's 'varying treatment effects' term, and wanted to share.

I too agree that data quality issues are important.

In response to "...what makes it 'random'?": there are simple, old, and (in my opinion) correct answers to this general problem, e.g. context dependence, unmeasured variables, lurking variables, etc. So I guess I'm equating randomness with ignorance, or with a lack of relevant, good-quality data.

"This to me makes the idea that we can be saved by Bayes largely wishful thinking, I must say. "

Absolutely. I completely agree. I did not mean to come across as a "now if you only had the correct statistical paradigm..." kind of person. At bottom, I view all these statistical issues as secondary to the psychology of data interpretation. My position is that explicitly viewing treatment effects as neither zero nor constant is a step in the right direction. That is, to the extent that it makes sense to ask quantitative questions like "what is the effect of the rate of red wine consumption on health indicator X?", it is much more honest to view such effects as random, or varying. Anything but constant.
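A minimal sketch of what I mean, with every number invented for illustration: each context (population, diet, dose, how the outcome is measured) gets its own true effect, drawn from a distribution rather than fixed at one constant value.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy "varying treatment effects" model. All numbers are invented.
    mean_effect = -0.08   # average effect across contexts (protective)
    effect_sd = 0.10      # how much the true effect varies by context

    # Each context (population, diet, dose, measurement...) has its
    # own true effect, drawn from a distribution.
    true_effects = rng.normal(mean_effect, effect_sd, size=20)

    # Under this model, some contexts can genuinely show a null or even
    # harmful effect while the average effect stays protective.
    print(f"Non-protective contexts: {(true_effects >= 0).sum()} of 20")

On this view, a Chianti-style result and the earlier positive literature could both be 'correct' without either being junk.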

It's funny you mention John Ioannidis, because although I really respect his work, sometimes I feel that he is a little too much of a Bayes-booster (http://stevencarlislewalker.wordpress.com/2013/11/12/another-attack-on-frequentist-statistics/).

In general, I was not critiquing your post at all. I completely agreed, and just felt like sharing.

Ken Weiss said...

We greatly appreciate thoughtful discussion such as yours in this case. When method takes priority over the substantive and even conceptual issues, we think things are on the wrong track. Sometimes, of course, it doesn't matter, since the 'wrongness' is minor. But in the cases we pick on, we hope to focus on situations where the underlying real issues are overriding...but inconvenient to face up to.

Ken Weiss said...

We don't know Ioannidis, but while his work is always thought-provoking, he seems, too, to have a vested interest in areas that are often problematic, and in a non-regular and hence probably non-Bayesian way...such as the staunch defense of meta-analysis in situations where the assumptions, such as approximate replication across different studies, are questionable, to say the least.