Critical Thinking

I will preface this by saying that I do not consider myself a scientist. I do have an undergrad in chemistry and a minor in mathematics, and early in my career I held a fellowship in a combined MS-PhD biochemistry program. But while I count myself as an initiate in the sciences, I have not worked in the field in some decades, and I have not had the practice that comes from long years in research.

But some habits and approaches don’t leave you, no matter how far out of the field you are. I still want to see results replicated a minimum of three times, independently. I still go by what one of my professors told me: you’ll tend to find what you are looking for in the lab, which is why you need independent replication. And in the lab I learned a good many valuable things, one of which is that you do not argue with what reality is telling you.

Evaluating what reality is telling you requires what is called “critical thinking”. Oddly enough, finding a clear description of what this entails has not been easy. Nevertheless, the following appear to be essential components: an awareness of the ways in which human judgment can go awry, tools to correct for those tendencies, and tools to evaluate what is being presented. Asking for sources is a good beginning (there was a time when asking for someone’s source was apt to result in an astonished reaction of “don’t you trust me?”). So is the question, “How do you know this is accurate?” (This is not, by the way, intended to be anything like a complete summation of critical thinking. Please see the resources page.)

Some miscellaneous items.

What follows is my current understanding of two concepts that are critical in evaluating medical treatment: what “random” means, and what “regression to the mean” means.

Not entirely random thoughts on randomness.

Brian Caffo, PhD, of Johns Hopkins gives a thoughtful discussion of “random” in Mathematical Biostatistics Bootcamp 1 (Coursera). In the first session he discusses how experiments basically rely on a particular analytical and predictive model. What I took away from his discussion is that when we talk about “random” in the context of experimental science, we mean factors that fall outside of a model’s capacity to predict, quantify, or allow for.

The concept of random is somewhat difficult for humans, who have a bent for organizing events into neat storylines and prefer predictability and purpose. Random, in casual use, can take on overtones of “without cause or purpose”. This, I think, can provoke resistance in many people whenever the idea of random comes up – which it does when evaluating whether or not treatments work. Yet even erratic, unpredictable factors have immediate preceding events that could charitably be classified as causes. We just don’t have a model that allows us to predict when and how these events will occur.
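
To make that a bit more concrete, here is a toy sketch of my own in a few lines of Python (this is not from Dr. Caffo’s course, and the numbers are invented): data are generated from a straight line plus noise, a line is fitted, and the “random” part is whatever is left over in the residuals, the variation the model cannot predict or account for.

  # Toy illustration (invented numbers): 'random' as the leftover variation a model
  # cannot predict. Generate data from a line plus noise, fit a line, and look at
  # what the fitted model fails to account for.
  import numpy as np

  rng = np.random.default_rng(42)
  x = np.linspace(0, 10, 50)
  y = 2.0 * x + 5.0 + rng.normal(scale=3.0, size=x.size)   # signal plus unmodeled noise

  slope, intercept = np.polyfit(x, y, deg=1)               # the model: a straight line
  predicted = slope * x + intercept
  residuals = y - predicted                                 # the part the model cannot explain

  print(f"fitted line: y = {slope:.2f}x + {intercept:.2f}")
  print(f"spread of the 'random' residuals: {residuals.std():.2f}")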

Some notes on regression to the mean

Regression to the mean is often cited as an explanation for why someone gets better after a treatment; they were going to get better anyway, illnesses fluctuate, etc. The idea is that there is a baseline to which all events return, whether it’s athletic performance or someone’s energy level (or weight). It’s crucial to note that when speaking of an individual, this baseline refers to their individual baseline, not a group average. A diabetic will have daily fluctuations in their serum glucose, just as non-diabetics do. But, the diabetic’s average or baseline serum glucose is going to be higher than that of a non-diabetic.
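
A small simulation of my own may help here (the glucose numbers are made up for illustration and are not clinical figures): each person’s readings fluctuate around their own baseline, and an unusually high day tends to be followed by a day closer to that personal baseline, not to anyone else’s.

  # Made-up illustration of regression to an individual baseline: after an unusually
  # high reading, the next day's reading tends to fall back toward that person's own
  # average, which for the diabetic is simply a higher average.
  import numpy as np

  rng = np.random.default_rng(0)
  days = 365

  def daily_readings(baseline, spread=15):
      # independent daily fluctuations around one person's own baseline
      return rng.normal(loc=baseline, scale=spread, size=days)

  for label, baseline in [("non-diabetic", 90), ("diabetic", 160)]:
      glucose = daily_readings(baseline)
      high = np.where(glucose[:-1] > baseline + 20)[0]     # unusually high days
      print(f"{label}: mean on high days = {glucose[high].mean():.0f}, "
            f"mean the next day = {glucose[high + 1].mean():.0f}, "
            f"personal baseline = {baseline}")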

Absent treatment, illnesses that are not self-limiting will worsen, not improve, even if symptoms fluctuate from day to day. Left to themselves, cancers rarely if ever shrink; diabetes worsens; cavities in teeth do not miraculously reverse course. Torn ligaments rarely decide to suddenly fluctuate into an un-torn state. So far, overweight people do not wake up suddenly finding that they are twenty pounds lighter. Etc.

The fluctuations involved in regression to the mean are fluctuations due to random factors – i.e. those that are not readily predicted by the course of the illness or the model for it. Unless these factors modify the disease process, repair physiology, or heal structures, they are not likely to result in sustained improvement. If someone’s blood sugar improves for three weeks running once they start taking metformin, that’s not regression to the mean.
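
As a rough sketch of that distinction (again with invented numbers and an assumed, rather than measured, treatment effect), the contrast looks something like this: a sustained shift after treatment sits below the patient’s entire pre-treatment range day after day, which ordinary fluctuation around the old baseline will not do.

  # Invented numbers, assumed treatment effect: contrast ordinary fluctuation around an
  # old baseline with a sustained downward shift after treatment begins.
  import numpy as np

  rng = np.random.default_rng(1)
  baseline, spread = 180, 15
  pre = rng.normal(baseline, spread, size=60)         # two months before treatment
  post = rng.normal(baseline - 40, spread, size=21)   # three weeks after; assumed ~40 mg/dL effect

  print(f"pre-treatment mean : {pre.mean():.0f} (best single day {pre.min():.0f})")
  print(f"post-treatment mean: {post.mean():.0f}")
  print(f"post-treatment days better than any pre-treatment day: "
        f"{(post < pre.min()).sum()} of {post.size}")
  # A genuine shift stacks up many days below anything seen before treatment;
  # fluctuation around the old baseline does not.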

I do see this concept misused in popular understanding, however, with any and all improvement sometimes attributed to “regression to the mean”.

When distinguishing between improvement due to “regression to the mean” and improvement due to treatment, it’s important to have the following pieces of information:

  • the normal variations in symptoms for that particular individual – both their duration (how long they last) and their intensity (how much the symptoms tend to worsen or improve);
  • the normal variations in symptoms for that condition, if known;
  • the natural course of that particular condition.

Ideally, studies of treatments should last long enough to allow for these fluctuations. In practice, I tell patients that we are looking for improvement that lasts longer than what they have experienced before and/or amounts to more improvement than they have had before (i.e. sustained improvement beyond what they have previously experienced). Where objective measures of the disease process are available (e.g. labs, imaging, or functional measures such as the ability to carry out daily living activities), these should also be used as appropriate to verify the subjective sense of improvement.
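
For what it is worth, that rule of thumb can be sketched crudely in code (the thresholds, names, and the assumption that lower values are better are my own illustration, not a validated clinical method): the improvement should go deeper than the patient’s best prior measurement and last longer than any of their prior good stretches.

  # Crude sketch of the rule of thumb above; thresholds and names are illustrative only.
  # Assumes lower values are better (e.g. glucose or a pain score).
  import numpy as np
  from itertools import groupby

  def longest_run(flags):
      # length of the longest stretch of consecutive True values
      return max((sum(1 for _ in group) for flag, group in groupby(flags) if flag), default=0)

  def looks_like_sustained_improvement(pre, post):
      pre, post = np.asarray(pre, dtype=float), np.asarray(post, dtype=float)
      best_prior = pre.min()                           # best value measured before treatment
      prior_good_days = pre <= np.percentile(pre, 25)  # the patient's own better-than-usual days
      deeper = post.min() < best_prior                 # more improvement than ever before
      longer = longest_run(post < best_prior) > longest_run(prior_good_days)  # and for longer
      return deeper and longer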

By Les Witherspoon

Formerly practicing naturopathic doctor. Views are my own and do not speak for any employers or clients, nor for the profession at large.