Correspondence between beliefs and predictions
It’s obvious that a set of beliefs can imply certain predictions (“make your beliefs pay rent”, etc.), but what about the reverse? Given an arbitrary statement X, is it possible to encode whether I believe X by writing down a series of predictions?
Another way to phrase this is to consider the following scenario. I want to find out whether Alice believes X. Alice doesn’t want to tell me whether she believes X. However, she has agreed to make predictions about whatever questions I ask, and to make those predictions truthfully. Can I determine whether she believes X based on those predictions?
To take a trivial example, if X is itself a prediction, like “The sun will rise tomorrow”, I can just pass it along to Alice, and she will have to answer truthfully.
Another example of a statement is “This is a fair coin”. The statement itself is not a prediction, so I’m not allowed to ask about it directly. To find out whether Alice believes the statement, I can instead ask for her prediction on “If I toss this coin 1,000,000 times, it will come up heads roughly half the time”, or something similar.
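To make the coin example concrete, here is a minimal sketch of how the belief cashes out as a probability for that prediction. The ±1% tolerance for “roughly half” and the 0.52 bias used for contrast are my own illustrative choices, not part of the original question: under a “fair coin” belief the prediction gets probability essentially 1, while even a small believed bias pushes it to essentially 0, so Alice’s stated probability separates the two beliefs.

```python
# Sketch: translating "this is a fair coin" into a probability for the
# testable prediction "roughly half of 1,000,000 tosses come up heads".
from scipy.stats import binom

N = 1_000_000
lo, hi = int(0.49 * N), int(0.51 * N)  # "roughly half" = within 1% of the tosses (assumed tolerance)

def prob_roughly_half(p_heads: float) -> float:
    """Probability that the number of heads lands in [lo, hi]
    if each toss comes up heads with probability p_heads."""
    return binom.cdf(hi, N, p_heads) - binom.cdf(lo - 1, N, p_heads)

print(prob_roughly_half(0.50))  # fair coin: ~1.0, so Alice should assign the prediction near-certainty
print(prob_roughly_half(0.52))  # slightly biased coin: ~0.0, so she should assign it near-zero probability
```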
One strategy is to “go meta” by asking for Alice’s prediction on “Alice will believe X one minute from now”, but I think that’s kind of “boring”, in the sense that it goes against the spirit of my original query. So we might stipulate that we are not allowed to ask Alice to predict her own future belief states.