It is well known that the usual versions of probability kinematics have serious limitations. According to the classical notion of conditioning, when one learns a piece of information A its probability rises to its maximum (one). Moreover, no further instance of learning will be capable of defeating A: once a piece of information is learned, one should be maximally confident about it, and this confidence should remain unaltered forever. It is clear that many instances of learning cannot be accommodated in this Procrustean bed. There are various ways of amending this limited picture by enriching the Bayesian machinery. For example, one can appeal to a notion of primitive conditional probability capable of making sense of conditioning on zero-measure events. But detailed consideration of this alternative leads to similar limitations: the picture of learning that thus arises remains cumulative. There are many ways of overcoming these important limitations; Williamson considers one of them in his essay reprinted in the section on Bayesian epistemology. One of the lessons learned in recent years is that there is no apparent way of circumventing this rigidity of Bayesianism without introducing a qualitative doxastic or epistemic notion as a primitive alongside probability. Here are two examples: Williamson proposes a model where knowledge is a primitive, while Levi appeals to a primitive notion of full belief.
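The rigidity described above can be made concrete with a small sketch of classical conditionalization over a toy finite probability space (an illustrative example, not drawn from the text itself): once an event A is learned, its probability becomes one, and any further conditioning on an event of positive probability leaves that verdict intact.

```python
from itertools import product

# Toy probability space: four equiprobable worlds over two propositions A and B.
# Each world is a pair (a, b) of truth values.
worlds = {w: 0.25 for w in product([True, False], repeat=2)}

def prob(p, event):
    """Probability of an event (a set of worlds) under distribution p."""
    return sum(q for w, q in p.items() if w in event)

def condition(p, event):
    """Classical conditionalization on an event of positive probability."""
    pe = prob(p, event)
    if pe == 0:
        raise ValueError("cannot condition on a zero-probability event")
    return {w: (q / pe if w in event else 0.0) for w, q in p.items()}

A = {w for w in worlds if w[0]}  # worlds where A holds
B = {w for w in worlds if w[1]}  # worlds where B holds

p1 = condition(worlds, A)  # learning A raises its probability to one
p2 = condition(p1, B)      # later learning cannot defeat A

print(prob(p1, A))  # 1.0
print(prob(p2, A))  # still 1.0: learning by conditioning is cumulative
```

Note that `condition` simply fails on zero-probability events; the primitive conditional probabilities mentioned above are precisely an attempt to make sense of that case, though, as the text observes, the resulting picture of learning remains cumulative.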
Arló-Costa, H., Hendricks, V. F., & van Benthem, J. (2016). Introduction. In H. Arló-Costa, V. F. Hendricks & J. van Benthem (eds.), Readings in Formal Epistemology, Dordrecht: Springer, pp. 189-193.