Posted: Jun 16, 2012 5:13 am
by epepke
Shrunk wrote:
epepke wrote: There are some disturbing factors in cancer reporting and research. Many professionals seem to think that early detection is important for effective treatment. It certainly seems plausible that catching something early should require a less severe intervention, which is more likely to work. But is this true? There's a confounding, purely numerical problem: cancer survival rates are counted and reported as the amount of time survived after detection. So if someone's cancer is detected two years earlier, that person will be reported as having lived two years longer with the cancer, completely regardless of the effectiveness of the actual treatment. I've seen two meta-studies on this. One concluded that all of the apparent extra survival time was due to this effect, and the other concluded that most of it was.

That's correct, but of no relevance to the issue of survival times with chemo vs. no treatment in people who have already been diagnosed with cancer. Right?
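The numerical effect described in the quote above is what epidemiologists call lead-time bias. A minimal sketch with hypothetical numbers shows how earlier detection inflates measured survival even when the outcome is identical:

```python
# Lead-time bias in miniature: one patient, one date of death,
# two hypothetical detection dates. All numbers are made up.

def measured_survival_years(detection_year, death_year):
    """Survival as it appears in the statistics: time from detection to death."""
    return death_year - detection_year

death = 2012            # the outcome is the same in both scenarios
late_detection = 2010   # tumour found two years before death
early_detection = 2008  # the same tumour found two years earlier by screening

late = measured_survival_years(late_detection, death)
early = measured_survival_years(early_detection, death)

# The early-detection patient appears to survive twice as long,
# even though nothing about the treatment changed when they died.
print(late, early)  # 2 4
```

So a study comparing survival times between screened and unscreened groups overstates the benefit of screening unless it corrects for the earlier starting point of the clock.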

It would be of no relevance if, and only if, there were clear studies that very carefully controlled for this effect. Such studies would, rather obviously, be valid independently of the effect.

I don't know if there are such studies. If there are, then I think that decisions based on them would be perfectly valid. I don't know, however, that this is the case.

What I think I know is that most decisions made about chemotherapy aren't so careful. I think that a culture of casual chemotherapy and radiotherapy has evolved on the basis of not being particularly careful.

You may think I'm being anal or picky, but I think this is fairly important.

ETA: Note that I do not know this for certain. However, bald assertions that cancers, or cancers of certain kinds, can be treated with chemotherapy if caught early enough strike me as a bit vapid. I don't know what evidence, if any, there is.

This is quite similar to some of the hassles I have with, e.g., Mr. Samsa. A lot, it seems to me, gets applied willy-nilly as "best practice" according to a goodness-by-definition. The evidence presented seems to consist of cases where there is some limited evidence of something working in some cases. There often turns out to be very little evidence that is distinguishable from confirmation bias.

Practitioners, moreover, are assumed to do the right things. When I point out cases where practitioners do wrong things, I am told that they are incompetent, with the implication that they therefore do not count. But they do count, because they are doing things that harm people. They are part of the social game, and they seem to justify their decisions (or have them justified for them) in ways largely indistinguishable from the ways people justify the good things.

I did science for many years, and I know how damnably hard it is to get your stuff right. By the time it gets to clinical practice, though, this difficulty seems to get filtered out.