I’ve invented a way to break bad news gradually, instead of all at once. Suppose there is a big question—“Are you breaking up with me?” or “Do I have hepatitis, Doc?” The traditional algorithm is decidedly O(1); the girlfriend or the doctor simply says “yes” or “no,” and the news is broken. It would be nice if we could delay the news, so that the answer became gradually more clear as time passed. Here’s a procedure to do just that.

At each timestep, the doctor (say) flips a coin and hides the outcome from the patient. If it is heads, he simply says “heads.” If it is tails **and the patient has hepatitis**, he says “heads.” If it is tails **and the patient does not have hepatitis**, he says “tails.”
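The doctor’s rule at each timestep can be sketched in a few lines of Python. (The function name `doctor_response` is my own invention for illustration.)

```python
import random

def doctor_response(has_hepatitis: bool) -> str:
    """One timestep of the news-delaying protocol.

    The doctor flips a fair coin in private. Heads -> say "heads".
    Tails -> say "heads" if the patient has hepatitis, else "tails".
    """
    coin = random.choice(["heads", "tails"])
    if coin == "heads" or has_hepatitis:
        return "heads"
    return "tails"
```

Note that a patient with hepatitis hears “heads” every single time, while a healthy patient hears “heads” only when the hidden coin actually landed heads.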

Let’s analyze this from the patient’s point of view, supposing that both answers start out equally likely in his mind. Suppose there have been *N* timesteps. If the doctor ever says “tails,” then the patient knows he’s in the clear. So the interesting question is how the patient’s degree of belief changes when the doctor has said “heads” every time for *N* timesteps.

Using Bayes’s theorem and some algebra, you can show how the patient’s belief evolves. In order to get *N* “heads” responses given no hepatitis, the coin would have to land heads-up *N* times, which has probability (1/2)^*N*. Given hepatitis, the doctor says “heads” every time with probability 1. After a line of algebra, we get

P(hepatitis | *N* “heads”) = (1/2) / ((1/2) + (1/2)·(1/2)^*N*) = 2^*N* / (2^*N* + 1).
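The Bayesian update is easy to check numerically. Here is a sketch (the helper name `posterior_hepatitis` is mine), computing the posterior directly from the two likelihoods and the flat prior:

```python
def posterior_hepatitis(n: int) -> float:
    """P(hepatitis | n consecutive "heads" responses), with a 50/50 prior.

    With hepatitis the doctor says "heads" with probability 1;
    without it, n "heads" in a row requires n heads-up coin flips,
    which has probability (1/2)**n.
    """
    prior = 0.5
    likelihood_sick = 1.0
    likelihood_well = 0.5 ** n
    evidence = prior * likelihood_sick + prior * likelihood_well
    return prior * likelihood_sick / evidence
```

This agrees with the closed form 2^*n* / (2^*n* + 1): at *n* = 0 the posterior is 1/2, and at *n* = 3 it is 8/9.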

This approaches 100% as *N* tends toward infinity, which is what we expected. On the other hand, if the patient doesn’t have hepatitis then we expect a “tails” to come up after only 2 timesteps on average, since the wait for the first tails is geometric with success probability 1/2.
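That last claim is easy to confirm by simulation. A quick Monte Carlo sketch (helper name `timesteps_until_tails` is mine) of the healthy patient’s wait:

```python
import random

def timesteps_until_tails(rng: random.Random) -> int:
    """Number of timesteps until the hidden coin first lands tails,
    at which point a healthy patient hears "tails" and is in the clear."""
    t = 0
    while True:
        t += 1
        if rng.random() < 0.5:  # coin lands tails
            return t

rng = random.Random(0)
trials = 100_000
avg = sum(timesteps_until_tails(rng) for _ in range(trials)) / trials
# avg should be close to 2, the mean of a geometric distribution with p = 1/2
```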