Recently, I had the honor to correspond with Dr. Prasanta Bandyopadhyay about my realist review of ketogenic diets (KDs) for cancer patients (Beneficial effects of KDs for cancer patients (Klement 2017)). Dr. Bandyopadhyay is a professor of Philosophy in the Department of History, Philosophy, and Religious Studies at Montana State University, USA (check out his homepage here). Together with Gordon Brittan Jr. and Mark L. Taper he has written one of the most inspiring books I have recently read: Belief, Evidence and Uncertainty – Problems of Epistemic Inference. In my review, I had taken the concepts of evidence and confirmation developed in this book to summarize the available evidence for any putative anti-tumor effects of KDs in cancer patients and whether we should believe that such effects are “real” (in the sense of occurring in real world settings).
In the following I reproduce the email discussion I had with Dr. Bandyopadhyay about my paper, which I think could be of interest to other readers as well. I greatly appreciate the comments he sent me, as they motivated me to dive deeper into the philosophical issues associated with the methodology of my paper. Here are his initial comments (C), my answers (A) to them, and his quick responses to my answers:
C1: You use the word “realism” in two senses. One is the ontological sense, in which the things you study exist in the world independently of our theories or an agent’s beliefs. The other is the day-to-day sense (one page before your conclusion, second column, last paragraph, line five). You also used it in this second sense in the first sentence of the section called “Material and methods.”
A1: You refer to my sentence “I tried to objectively evaluate alternative hypotheses appearing realistic within the context of each particular study, but did not consider any form of ad hoc hypothesis such as a spontaneous remission as likely.”
This was meant to justify my assignment of evidence to the usual reader of this medical journal and is indeed meant in the day-to-day sense of “realistic”. However, there is also a metaphysical side to it. “Metaphysical” hypotheses should be excluded from scientific investigations, since it would always be possible to invent one that fits the data perfectly and thus has the evidence on its side compared to “realistic” (again in the day-to-day sense) hypotheses whose plausibility is based on human intuition, experience and expectations (in short, scientific reasoning). Even worse, by definition there is no way to falsify a metaphysical hypothesis (Popper 1985). I therefore thank you for your remark and admit that it would have been better to use the words “scientifically plausible” or “conceivable” (in the sense of Carnap; see e.g. Bradley 2017) instead of “realistic”.
The first sentence of the Materials & Methods section reads “Realist reviews were originally designed and still are mainly used for investigating complex policy interventions.” Here it is not so clear that I would refer to the day-to-day sense of the word since realist reviews are deeply rooted within the metaphysical sense of realism by focusing on explanation of complex interventions rather than judgment and decision making. Such explanation involves mechanisms that are constituted by ontological entities or aspects, and this is the connection to metaphysics.
Answer to A1: I don’t have anything new to say about your worry concerning the use of the word “realism.”
C2: When you first introduce the definition of confirmation, you follow the standard approach to confirmation, including ours. In the next line, you compare H1 and H2. Unless H1 and H2 are mutually exclusive and jointly exhaustive, you won’t be able to use Bayes’ theorem that way. However, others do use measures similar to the one you have used.
A2: You refer to the passage on page 3 where I state that hypothesis H1 is better confirmed by the data than H2 if P(H1|D) > P(H2|D). The comparative phrase “better confirmed … than” makes the implicit assumption that both hypotheses are confirmed by the data, i.e. P(H1|D) > P(H1) & P(H2|D) > P(H2). Your definition of confirmation (which I adopted in this paper) is that data D confirm H if and only if P(H|D) > P(H). If I assume that both hypotheses have the same prior probability, P(H1) = P(H2), it follows from
(i) P(H1|D) > P(H2|D) > P(H2)
that P(H1|D)−P(H1) > P(H2|D)−P(H2) or, in words, that the degree of confirmation for H1 is higher than the degree of confirmation for H2.
Therefore the only assumption needed for my statement is that both hypotheses have the same prior probability, not that they are mutually exclusive and jointly exhaustive.
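The equal-priors argument is easy to check numerically. A minimal Python sketch (the numbers are illustrative, not taken from the paper):

```python
# Numerical check: with equal priors, P(H1|D) > P(H2|D) > P(H2) implies that
# the confirmation increment P(Hi|D) - P(Hi) is larger for H1 than for H2.
# Illustrative numbers only.

prior = 0.3                       # P(H1) = P(H2) = prior (equal-priors assumption)
post_h1, post_h2 = 0.6, 0.4       # chosen so that P(H1|D) > P(H2|D) > P(H2)

conf_h1 = post_h1 - prior         # degree of confirmation for H1
conf_h2 = post_h2 - prior         # degree of confirmation for H2

assert post_h1 > post_h2 > prior  # condition (i)
assert conf_h1 > conf_h2          # H1 is better confirmed than H2
print(round(conf_h1, 2), round(conf_h2, 2))  # prints 0.3 0.1
```

Since the same prior is subtracted from both posteriors, the ordering of the posteriors carries over directly to the ordering of the confirmation increments.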
Answer to A2: This is very interesting to me. What you say is that the claim Pr(H1|D) > Pr(H2|D) assumes that H1 and H2 are separately confirmed by the same data. Of course, you don’t mean that if the latter condition holds (that is, both H1 and H2 are confirmed by the same data), then it follows that Pr(H1|D) > Pr(H2|D). I am just curious about your claim, i.e., that if the priors on H1 and H2 are equal, the conclusion follows from (i). Do you have a proof for it? I will check it myself sometime.
C3: Discussion section, second paragraph. You call the answer “always” subjective. If you would like to make that strong a claim, then I would suggest you add “provided the priors are not anchored in frequency-based priors.”
A3: You refer to my statement that the answer to the question “Given the available data, to which degree should we believe that KDs have beneficial effects for cancer patients?” depends on the prior probability that we give to this hypothesis and insofar is always subjective. I think that even frequency-based priors are subjective in this context, because every “real-world” frequency we can derive from data is uncertain. In fact, I argued for a sort of frequency-based prior, saying that we could use the proportion of available animal studies with a “positive” outcome as a prior for the hypothesis that KDs have positive effects in humans. Nevertheless, the outcome of every new animal study would change that prior, and therefore its adoption is subjective in the sense that we are aware of that uncertainty. Ideally, frequency-based priors would account for the uncertainties in the frequency estimates by not taking a fixed value, but using a probability distribution characterized by some hyperparameters that specify the uncertainty. Again, such a specification would be subjective.
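The last point, encoding uncertainty in a frequency-based prior via a distribution with hyperparameters, can be sketched with a Beta distribution over the success probability. The study counts below are hypothetical, not taken from the review:

```python
# Sketch: a frequency-based but uncertainty-aware prior.
# Suppose k of n animal studies had a "positive" outcome (hypothetical counts).
# Instead of the fixed prior k/n, use a Beta(k+1, n-k+1) distribution whose
# spread expresses how uncertain that frequency estimate is.

k, n = 14, 20                     # hypothetical study counts

a, b = k + 1, n - k + 1           # hyperparameters of the Beta prior
mean = a / (a + b)                # point estimate of the frequency
var = (a * b) / ((a + b) ** 2 * (a + b + 1))  # uncertainty around it

print(round(mean, 3), round(var, 4))  # prints 0.682 0.0094

# A new positive animal study simply shifts the hyperparameters --
# the "prior" itself moves, which is the subjectivity described above.
a2, b2 = a + 1, b
print(round(a2 / (a2 + b2), 3))       # prints 0.696
```

The choice of the uniform Beta(1, 1) starting point (the "+1" pseudo-counts) is itself a subjective decision, which is exactly the point made in the answer.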
Answer to A3: I don’t have any comment on 3.
C4: Next page. The ontological aspects (left column, at the middle of that paragraph) are connected to the causation question. Do you want to say that because we don’t have a good handle on the causation question, we are not entitled to say much about the ontological aspects?
A4: What I wanted to say is the other way around: Since we don’t know enough about the ontological aspects we are not in a position to explain and predict the effects of a KD intervention in individual patients. The causation question was stated as: “What are the mechanisms that researchers put forward to explain their findings?” A good mechanistic understanding of the causal connections between the ontological aspects constituting a mechanism would be helpful for explaining and predicting outcomes of the intervention (i.e. a KD), independent of whether we believe these aspects to be organized within distinct “levels” or not (Weinberger 2017). (The view that mechanisms are organized in levels with causal connections occurring among intra-level components that in turn constitute the next-higher level and thereby provide constitutive explanation for a phenomenon is widely held among the “new mechanists” such as Craver & Bechtel 2007.)
Answer to A4: I see why you say that it is the other way round. Because we may not know the ontological relationships holding in the world (here you are an ontological realist), we may not be able to know the appropriate causal mechanism at work. OK.
C5: Your comments about Cartwright. You are right to point out that there could be a confusion between data and evidence in her quote. You could explore whether she confuses confirmation with evidence. Just to let you know, my statistical ecologist co-author does use these types of LR-based models in his scientific research. So it is not just a philosopher’s pipe-dream. The only difference is that his are much more mathematically challenging.
A5: I am currently working on a more mathematically oriented Bayesian approach to synthesize evidence from different sources especially for more complex (complementary) medical interventions such as KDs.
Answer to A5: No comment on your response to Cartwright. Regarding your comment in A5: this will be very interesting. We discussed in chapter eight how we can gather data from different sources, and how that accumulation of data is different from Bayesian updating. Of course, we are assuming independence of the data.
C6: The end of that paragraph. We have a meta-justification for using the LR as an evidence measure. Please see Lele (2004) for that justification from our monograph.
A6: Thank you for this hint.
C7: There is a theorem we touched on in our monograph concerning when we can go from confirmation to evidence and from evidence to confirmation. It holds in a special case.
A7: Thank you for this hint. This is the special case when the two hypotheses are mutually exclusive and jointly exhaustive. In this case, if the data provide evidential support for H against –H, i.e. P(D|H) > P(D|–H), then it follows from Bayes’ theorem that P(H|D) > P(H) (also stated recently in Bradley 2017). The derivation is as follows:
(i) P(D|H) > P(D|–H)
(ii) P(H) = 1–P(–H) & P(H|D) = 1–P(–H|D) (because H and –H are mutually exclusive and jointly exhaustive)
(iii) P(D|H) = P(H|D) P(D)/P(H) & P(D|–H) = P(–H|D) P(D)/P(–H)
(i)&(ii)&(iii) ⇒ P(H|D)/P(H) > P(–H|D)/P(–H)
⇒ P(H|D) (1–P(H)) > (1–P(H|D)) P(H)
⇒ P(H|D) – P(H|D) P(H) > P(H) – P(H|D) P(H)
⇒ P(H|D) > P(H)
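As a numerical sanity check on the derivation, here is a small Python illustration (the prior and likelihoods are arbitrary):

```python
# Check: if P(D|H) > P(D|-H) (evidence for H over -H), then P(H|D) > P(H),
# provided H and -H are mutually exclusive and jointly exhaustive.
# Arbitrary illustrative numbers.

p_h = 0.2                         # prior P(H); P(-H) = 1 - p_h
lik_h, lik_not_h = 0.8, 0.3       # P(D|H) > P(D|-H): premise (i)

p_d = lik_h * p_h + lik_not_h * (1 - p_h)  # law of total probability
post_h = lik_h * p_h / p_d                 # Bayes' theorem

assert lik_h > lik_not_h          # evidence for H against -H
assert post_h > p_h               # conclusion: D confirms H
print(round(post_h, 3))           # prints 0.4
```

Any combination with P(D|H) > P(D|–H) and 0 < P(H) < 1 yields the same qualitative result, which is what the algebraic derivation above establishes in general.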
On page 38 of your monograph, where this relation between evidence and confirmation is indicated, you make an important point about the nature of this relationship:
“Finally, although evidence is accompanied by confirmation and vice versa when the hypotheses being compared are mutually exclusive and jointly exhaustive, even then the relation is not linear. Indeed, in sample cases they can vary dramatically. A hypothesis for which the evidence is very strong may not be very well confirmed, a claim that is very well confirmed may have no more than weak evidence going for it.”
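That non-linear relationship is easy to illustrate numerically. A hypothetical example (the numbers are mine, not from the monograph):

```python
# A large likelihood ratio (strong evidence) can coexist with a low posterior
# (weak confirmation in absolute terms) when the prior is very small.
# Hypothetical numbers.

p_h = 0.001                       # very skeptical prior on H
lik_h, lik_not_h = 1.0, 0.01      # likelihood ratio of 100: strong evidence

p_d = lik_h * p_h + lik_not_h * (1 - p_h)
post_h = lik_h * p_h / p_d        # posterior of H after seeing D

assert post_h > p_h               # D confirms H...
assert post_h < 0.1               # ...yet H remains improbable overall
print(round(post_h, 3))           # prints 0.091
```

So the evidence for H over –H is strong (LR = 100) and confirmation formally holds, but the hypothesis is still poorly supported in absolute terms, matching the quoted passage.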
Bradley D. 2017. Carnap’s epistemological critique of metaphysics. Synthese.
Craver CF, Bechtel W. 2007. Top-down causation without top-down causes. Biol Philos. 22:547–563.
Popper K. 1985. Realism and the Aim of Science. Bartley WW III, editor. London and New York: Routledge.
Weinberger N. 2017. Mechanisms without mechanistic explanation. Synthese. In press.