Tuesday 29 July 2008

Are we Underrating the Anecdotal?

There is an ancient British superstition which says if a child rides on a bear's back it will be protected from whooping-cough. Perhaps it’s just as well we destroyed the bears and their habitat centuries ago or we’d have a serious health and safety issue on our hands.

Such absurd beliefs can arise because of how we are programmed. There is a survival advantage to making associations between things that turn out to be true (such as poisonous berries and death) but no particular disadvantage to making associations that turn out to be false. This is how superstition develops. If I rub olive oil into a wound and it doesn’t get infected, why not do it every time I get a wound – just in case that’s what provided the protection? If you are interested in a full explanation, see this article in Scientific American.

Evolution has taught us to hedge our bets. So when someone tells their doctor about something they are convinced is curing their condition, the doctor is right to be cautious.

On the other hand, you sometimes read about a study finding evidence to support a piece of received wisdom and can almost hear the cries of “Well I could have told them that!” being uttered in kitchens up and down the country. I guess if you have an infinite number of monkeys randomly making superstitious connections, eventually they will come up with some genuine cures.

As someone with a science degree, I do have a reasonable grasp of the meaning and importance of controlled experimentation and empirical measurement. But as someone with a keen interest in nutrition who has experimented extensively with my own diet, I am aware that anecdotal evidence often contradicts the advice given by the establishment; when that happens, it’s hard not to question whether sufficient weight is given to the potential value of that evidence.

Some years back a friend of mine was diagnosed with ulcerative colitis. It came as no surprise, since it ran in his family. Purely by chance, someone introduced him to a ‘juicing’ diet, which involved living, more or less, on the juice of various fruits and vegetables.

My friend decided to do some research. Naturally, he did not rent a medical facility and conduct controlled clinical trials with a recruited sample of colitis sufferers. Instead he read the testimonies of fellow sufferers who had tried juicing.

There are concerns a doctor would quite rightly raise about adopting radical diets like this. Nevertheless, my friend went ahead with it for several months, over the course of which his symptoms started to disappear. I know, I know – what else was he doing during this period? It’s likely that his whole outlook on life had changed. Maybe, as well as adopting the diet, he had stopped sleeping only three hours a night and pouring half a litre of scotch on his cornflakes.

Nevertheless, whilst the symptoms were gradually disappearing for whatever reason, his specialist was suggesting fairly powerful drugs that would help slow down, but not cure, the illness, and telling my friend to prepare for the possibility of a life with a colostomy bag; he was not very interested in my friend’s supposed miracle cure.

Disillusioned by the advice he was getting from his specialist, but excited by his apparent reprieve from a life with a colostomy bag, my friend spoke to someone he knew in the medical profession, who laughed and said “Don’t expect them to be interested in anything that’s not in the textbooks.”

I understand why medical professionals would feel like this. If they took the time to fully explore every patient-reported treatment, they would barely have time to see a fraction of the patients they are expected to handle. Moreover, both practically and ethically, doctors can only recommend treatment for which there is clinical evidence and can only sanction lifestyle choices the establishment generally accepts to be safe. My friend’s specialist would have been irresponsible to say “Wow, sounds like that worked for a bunch of people and it sort of makes sense – so I would stick with the juicing instead of taking these clinically proven drugs.”

Don’t get me wrong, clinical researchers have looked into (and no doubt continue to look into) the connection between diet and ulcerative colitis and it would be a mistake for me to try to establish just how much was known when my friend was advised; but for me this story is important because it illustrates how a collection of anecdotes can be a powerful indicator that clinical research is required.

It’s possible my friend’s story was one of many that percolated into the medical research community and contributed to the anecdotal ‘noise’ driving clinical research on the condition – but given the reaction of his specialist, I doubt it; and this is what concerns me. The percolation process feels inherently inefficient. Do all the anecdotes get through? Many anecdotes may not even get as far as a specialist if patients have the impression there is no interest.

When my friend was researching his condition, the internet was embryonic and certainly lacked the kind of community-based interaction that now abounds. Fifteen years later there are millions of voices talking about the things that matter to them most – and health comes pretty high up the list. The internet is a huge database of case studies – the question is whether we are taking full advantage of it. The market research industry (an obvious candidate to leverage this medium) is only just beginning to tap into the potential of social media as a research tool. It seems a safe bet that the medical profession is no further forward. The difference is, the stakes are so much higher for the latter. If market researchers don’t have a big push on harnessing social media, Acme sells fewer widgets. If the medical profession doesn’t, progress in researching important health issues is slower, so more people suffer.

But as well as being under-used in aggregate, are individual anecdotes sometimes given less importance than they deserve?

There was recent controversy over Dr John Biffa’s criticism of a diabetes organisation. He implied they were wilfully giving bad dietary advice to diabetics and subsequently received a grilling here and elsewhere. It was a pity that Biffa dealt so poorly with an issue that has been widely debated on the web – are low carbohydrate diets a better way to treat diabetes than the low fat diet recommended by most diabetes organisations?

If you trawl the web for testimonies on the subject, it is possible to find many from people who say they have self-treated type 2 diabetes with a low carbohydrate diet - and people appear to have been talking about it for a long time. Indeed I was unable to find any testimonies from people for whom such a diet had failed. I came away with the impression that this is the best diet for diabetics.

Of course this means nothing – it’s circumstantial, anecdotal and subject to recall bias and the placebo effect (not to mention my own ability with Google). There are also testimonies from diabetics whose lives have been improved by the recommended low fat diet. Nevertheless, it seems that one organisation is indeed changing their advice, and it would be fair to say that a good deal of research is now taking place in the area.

My question is this: can’t a single anecdote be considered powerful in its own right if it contradicts advice positioned as being for everyone in a certain group? For example, if an organisation is recommending a diet for people with a certain condition, yet one person is shown to improve significantly on another diet, doesn’t that prove the advice is flawed? Surely the advice should either be re-positioned or withdrawn, pending further research? If we believe it does, then Biffa’s insinuations of chicanery sullied a valid criticism of the quality of the advice this organisation gives to diabetics.

To return to the question about anecdotes driving research, I would like to offer a final thought. Trials to evaluate the effects of diet are often expensive and difficult to run, and for potential funders like pharmaceutical firms there is rarely a pot of gold at the end of the rainbow; but this is all the more reason why we should make sure the research that is done is the right research. Instead of ignoring people who publish internet testimonials about themselves and others on the basis that it’s ‘purely anecdotal’, why not find a way to proactively engage with them? This would create a double benefit.

First, there would be the potential for more case studies to drive research in the right direction. The internet is teeming with people documenting their own experiences and supposed cures. Does the fact that they are talking about it on the web instead of in a consulting room make their story any less valid as a case study?

Second, by engaging with people who are potentially disillusioned with the medical establishment, a better understanding between all concerned could be fostered – it is a PR opportunity not to be missed. After all, in this case it may turn out they had a point.

11 comments:

Asclepius said...

On some issues even scientific evidence cannot change opinion in the medical establishment:

http://tierneylab.blogs.nytimes.com/2008/07/21/good-news-on-saturated-fat/

In such cases it is unlikely that anecdotal evidence would have any influence amongst medical types at all.

I can envisage a scenario where science falls way behind belief/anecdotal evidence - we can see this happening with nutrition already.

The danger this presents lies not so much in the implications of self-diagnosis/medication by the general population (it is good that we experiment individually with nutrition - in a responsible way) as in the damage done to our faith in the medical and scientific community.

Asclepius said...

Another example of science playing catch up!

http://www.sciencedaily.com/releases/2008/07/080728192811.htm

Anonymous said...

I have to say I can’t really agree with your dismissal of the scientific method as the best way to provide objective data on medical conditions. I can understand your frustration that personal cases and evidence are being lost in the forest of other data noise, but there is a reason why social networking sites have not been widely used in academic research (into which medical research falls). Inherently it is hard to prove things using qualitative data. It is rare to find conclusive acceptance of a situation when there is not also some quantitative information that has probably been subjected to a statistical test of some kind. If you want to take on the basis of science in your blog, then so be it, but that’s a hard war to win. If on the other hand, you want to start that process of gathering anecdotal data in a rigorous enough way for a researcher to draw some solid conclusions from, then that would be a step forward…



“My question is this: can’t a single anecdote be considered powerful in its own right if it contradicts advice positioned as being for everyone in a certain group? For example, if an organisation is recommending a diet for people with a certain condition, yet one person is shown to improve significantly on another diet, doesn’t that prove the advice is flawed?”



Unfortunately not. This is just an example of hypothesis testing error. Roughly speaking, if a larger number of people (say 10%) all showed improvements then you might legitimately question the original null hypothesis that the two diets are similarly effective for the defined population; i.e. it’s entirely believable that one person out of 100 doesn’t respond as expected, but much less likely as the number increases.
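As a rough illustration of why the numbers matter (the group size and background improvement rate below are invented purely for the example - this is a sketch, not data from any real trial):

```python
# Made-up numbers: suppose each patient has a 5% chance of improving
# anyway (placebo, natural remission), whatever diet they follow.
# How surprising is it to see k improvers out of 100 if the diet
# itself does nothing? The binomial survival function answers this.
from scipy.stats import binom

n = 100          # hypothetical group size
p_chance = 0.05  # assumed background rate of improvement under the null

for k in (1, 10, 25):
    # P(at least k improvers out of n) under the null hypothesis
    prob = binom.sf(k - 1, n, p_chance)
    print(f"P(>= {k:>2} improvers by chance alone) = {prob:.4f}")
```

A single improver is all but guaranteed by chance alone; ten is already unlikely; twenty-five would be essentially impossible under the null - which is why one case proves very little.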

Methuselah said...

Keep in mind here that nowhere am I dismissing the scientific method as the best way to do things, but rather querying whether the choice of area to research is sufficiently informed by case studies. I am certainly not suggesting we can prove anything with qualitative data - just that ignoring it is bad.

I take your point about null hypotheses - but I am making a very specific point here and did choose my words quite carefully. I am not necessarily saying that a hypothesis is being contradicted by a single case study - what I am saying is that the advice is flawed. In other words, whilst it's fine for a hypothesis to remain 'not disproven', it is not acceptable for advice to be issued to diabetics if it does not apply, quite literally, to all diabetics. I am saying that it would be better to issue no advice at all.

If you have time, you may enjoy this video, which is very pertinent to what we are talking about and which, as a fellow 'scientist', you will appreciate.

Gary Taubes

Seth Roberts said...

"I understand why medical professionals would feel like this. If they took the time to fully explore every patient-reported treatment, they would barely have time to see a fraction of the patients they are expected to handle. Moreover, both practically and ethically, doctors can only recommend treatment for which there is clinical evidence and can only sanction lifestyle choices the establishment generally accepts to be safe."

Good post. I agree with your overall point. In this particular paragraph you let doctors off too easy. Why do doctors exist? To help people? Or to protect their dignity? I think the former. I don't think dismissing everything any patient tells them about this or that miracle cure is completely compatible with doing the best possible job to help people. As for "practically and ethically" here I completely disagree. Doctors are allowed to think for themselves. Like everyone else, they pay a price for doing so -- but they can.

Methuselah said...

Seth,

You may be surprised to know that I find myself agreeing with your last point in spite of the fact that it disagrees with one of the points in my post. I think I wanted to have as many readers 'stay with me' on the central theme and as a result was overly conciliatory on points like this one. You're absolutely right - doctors are human beings with minds of their own and we know there are individual doctors who do engage more fully with people on 'non-standard' approaches and who do not fall foul of ethical or legal problems.

It's easy to aim a scatter gun at the medical / research community at large, as I have done in this post, and avoid the wrath of any individuals, but I guess it's a bit cowardly. If I am honest, I do believe there are many doctors who fail patients in this way - and it's not the ones who do so out of ignorance I deplore, but those who do it out of laziness.

Methuselah said...

...although this article does highlight the perils of more public advocacy...

The Perils of Crossing the Establishment Boundaries on Dietary Advice

Anonymous said...

One thing that hasn't been mentioned is that the low fat/high carb diet that has long been advocated by prominent diabetes foundations in the US and the UK is *NOT meant for the TREATMENT of diabetes*. They know full well that the high carbohydrate content of that diet exacerbates hyperglycemia in diabetic patients (that's where the pharmaceuticals come in). Though for some reason the mainstream dieticians seem to think that dietary carbohydrates are the only source of energy the body can use (I exaggerate, but only a little).

The low fat/high carb diet is advised because of heart disease. They advocate that particular diet because they mistakenly believe that a *high fat/low carb diet will cause heart disease*, for which hyperglycemics are already at higher risk.

And conversely, they believe that a low fat/high carb diet helps to prevent heart disease. However, there is no proof that this is true; in fact, the uncontrolled BG that a high carb diet promotes (along with other factors) may actually contribute to the development of CVD.

So the ADA and Diabetes UK advocate a diet long known to harm diabetics, and instead load them up with drugs that have serious side-effect potential and only modestly improve blood glucose control - all because they are trying to protect diabetic patients from heart disease, even though the uncontrolled BG of diabetes itself contributes to heart disease.

Crazy!

Methuselah said...

Anna - yes, crazy indeed. There are a lot of issues around this (such as the ones you raise) which I carefully avoided getting into in this post so that I could deliver the central message without drawing fire from medical experts on the details... but I may revisit them later. I think others like Gary Taubes have done a great job of demolishing the argument for the low fat diet to treat diabetes, but I think to change the minds of the establishment we need as many voices to chime in as possible.

thequickbrownfox said...

Excellent post. This is something I have been wondering about a lot myself recently, particularly since reading about the health benefits of a low carb, high fat diet. I agree that more attention should be paid to anecdotal evidence for the reasons you describe (along with the limitations you describe). However, I think you've left out some of the large flaws in statistics-based controlled, randomised, double-blind etc. trials. As good as they are in some situations, they should not be seen as the only form of research. To me the most obvious flaw is that they try to look at as large a group as possible, and the larger the group the more diluted the information you can get from each individual. They focus on what the group has in common and de-emphasise their differences. This ignores the fact that each human is an incredibly complex and unique being. So in the end the results can be so vague and subject to statistical manipulation that in many cases they can be unwittingly biased towards the researchers' preconceptions. This is particularly likely to be true in diet-based studies, because all food contains countless different substances which combine with each other in countless different ways, and even the form in which they are eaten is very significant. This is much more complicated than a drug trial, in which everyone in the test group is introduced to a single new molecule. Actually, this is also a reason I feel that diet-based treatments have the potential to be much more effective than drug-based ones.
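A crude, made-up illustration of that dilution effect (every number here is invented for the sake of the example): if half a group responds strongly to a dietary change and the other half doesn't respond at all, the headline average can hide both facts.

```python
# Made-up numbers: a 'treatment effect' that differs wildly between two
# hidden sub-groups, and the single group average that blurs them together.
import numpy as np

rng = np.random.default_rng(0)

n = 200
responders = rng.normal(loc=8.0, scale=3.0, size=n // 2)       # assumed to benefit a lot
non_responders = rng.normal(loc=-1.0, scale=3.0, size=n // 2)  # assumed to see no real benefit
whole_group = np.concatenate([responders, non_responders])

print(f"headline average effect:      {whole_group.mean():+.1f}")
print(f"average among responders:     {responders.mean():+.1f}")
print(f"average among non-responders: {non_responders.mean():+.1f}")
```

The headline figure sits somewhere in the middle and describes nobody in particular, which is roughly the complaint being made above.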

Another factor that makes the misinterpretation of a study's results even worse is the fact that the researcher's interpretation is usually attached to the raw data of the study. In my opinion they should be two different things. The data collector should publish the data and methodology without any reference to its meaning, and then he/she should supply an analysis in a separate document; others should do the same, with no extra weight being given to any analysis except for that assigned by the person browsing these documents, dependent on how coherent the analysis is or the reputation of the author. Ideally, all of these would be published online and linked appropriately to make this as clear, simple and transparent as possible. Oh, and available to the public. Does something like this exist already?

Due to the above problems I think the "medical establishment" can be very slow to react to changes in science, and now, with the advent of the internet, people are moving forward on their own - particularly those with serious conditions, whose medical alternative is highly disruptive treatment with a low rate of success and a slow death. For these people it is worth the risk to try out the wackiest new anecdotal-evidence-based ideas out there, and this has sometimes led to success. There are cases of people curing or halting their terminal illnesses. By dismissing these as anecdotal instead of attempting to understand them, the medical community risks alienating their patients, who will become more inclined to follow each other's advice. I believe this is already happening, and while it is in some respects a good thing, an unfortunate side effect is that they may also seek treatment from the people who really are quacks and charge money for it.

Methuselah said...

QBF - you make some really good points here. I do think that if the medical establishment is not careful they will end up being marginalised by their own slowness to adopt new ideas as the collective power of internet-driven testimony gathers momentum.
