Minimizing an f-divergence and Jeffrey's Rule

MathOverflow Asked by jw7642 on November 24, 2021

My question is about f-divergences and Richard Jeffrey’s (1965) rule for updating probabilities in the light of partial information.

The set-up:

  • Let $p: \mathcal{F} \rightarrow [0,1]$ be a probability function on a finite algebra of propositions.
  • Suppose that the probabilities of the cells $E_{i}$ of a partition in $\mathcal{F}$ shift from their prior values $p(E_{i})$ to posterior values $p'(E_{i}) = k_{i}$.
  • Jeffrey’s Rule then says, for all $X$ in $\mathcal{F}$,
    $p'(X) = \sum_{i} p(X \mid E_{i}) \, p'(E_{i})$.
    In other words, it offers a fairly straightforward generalization of Bayesian conditioning to partial information (a toy numerical sketch follows the list).
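
To make the rule concrete, here is a minimal numerical sketch in Python (my own toy example; the four-element space, the partition, and the new weights $k_{i}$ are made up purely for illustration):

    # Jeffrey's Rule on a four-element space partitioned into
    # E1 = {w1, w2} and E2 = {w3, w4}.
    prior = {"w1": 0.1, "w2": 0.3, "w3": 0.4, "w4": 0.2}
    partition = {"E1": ["w1", "w2"], "E2": ["w3", "w4"]}
    new_cell_probs = {"E1": 0.7, "E2": 0.3}   # the constraint p'(E_i) = k_i

    def jeffrey_update(prior, partition, new_cell_probs):
        """p'(w) = p(w | E_i) * p'(E_i), where E_i is the cell containing w."""
        posterior = {}
        for cell, members in partition.items():
            cell_prior = sum(prior[w] for w in members)
            for w in members:
                posterior[w] = prior[w] / cell_prior * new_cell_probs[cell]
        return posterior

    print(jeffrey_update(prior, partition, new_cell_probs))
    # ≈ {'w1': 0.175, 'w2': 0.525, 'w3': 0.2, 'w4': 0.1}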

The concept of an f-divergence seems to be a fairly natural generalization of the Kullback-Leibler divergence, and minimizing the Kullback-Leibler divergence between the prior and posterior probability functions, subject to the constraint above, is known to agree with Jeffrey’s Rule (Williams, 1980).
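
To fix notation (the standard definitions, stated over a finite space, with the usual conventions when $p(\omega) = 0$): an f-divergence is

$$D_{f}(q \| p) = \sum_{\omega} p(\omega) \, f\!\left(\frac{q(\omega)}{p(\omega)}\right), \qquad f \text{ convex}, \; f(1) = 0,$$

and $f(t) = t \log t$ recovers the Kullback-Leibler divergence. The minimization I have in mind is

$$p' = \underset{q}{\arg\min} \; D_{f}(q \| p) \quad \text{subject to} \quad q(E_{i}) = k_{i} \text{ for every cell } E_{i}.$$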

Here is where I get stuck.
I have seen it written that "minimizing an arbitrary f-divergence subject to the constraint $p'(E_{i}) = k_{i}$ is equivalent to updating by Jeffrey’s Rule". However, I can only find proofs going in one direction, namely, that minimizing any f-divergence agrees with Jeffrey’s Rule (e.g., Diaconis and Zabell, 1982, Theorem 6.1).
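
As a quick numerical sanity check of that direction (my own toy computation, reusing the prior and constraint from the sketch above; scipy is assumed available and does the constrained minimization), minimizing the KL divergence does land on the Jeffrey posterior:

    # Minimize D_KL(q || p) subject to q(E_i) = k_i and compare with Jeffrey's Rule.
    import numpy as np
    from scipy.optimize import minimize

    p = np.array([0.1, 0.3, 0.4, 0.2])            # prior on {w1, w2, w3, w4}
    cells = [np.array([0, 1]), np.array([2, 3])]  # E1 = {w1, w2}, E2 = {w3, w4}
    k = np.array([0.7, 0.3])                      # constraint q(E_i) = k_i

    def kl(q):
        return float(np.sum(q * np.log(q / p)))

    # The cell constraints already force sum(q) = 1, since the k_i sum to 1.
    constraints = [{"type": "eq", "fun": lambda q, c=c, ki=ki: q[c].sum() - ki}
                   for c, ki in zip(cells, k)]
    res = minimize(kl, x0=p, bounds=[(1e-9, 1.0)] * len(p), constraints=constraints)

    jeffrey = np.concatenate([p[c] / p[c].sum() * ki for c, ki in zip(cells, k)])
    print(np.round(res.x, 4))    # ≈ [0.175 0.525 0.2 0.1]
    print(np.round(jeffrey, 4))  #   [0.175 0.525 0.2 0.1]  -- the Jeffrey posterior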

Q: Is it also true that only f-divergences agree with Jeffrey’s Rule? Or might there be some non-f-divergence $\mathcal{D}$ such that minimizing it subject to the same constraint also agrees with Jeffrey’s Rule?
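
For contrast, here is the sort of disagreement I would expect from a typical non-f-divergence: plain squared Euclidean distance (a Bregman divergence which, as far as I can tell, is not an f-divergence) minimized under the same constraint shifts each cell additively rather than proportionally, so on the same toy example it does not return the Jeffrey posterior:

    # Minimize sum_w (q(w) - p(w))^2 subject to q(E_i) = k_i.
    import numpy as np
    from scipy.optimize import minimize

    p = np.array([0.1, 0.3, 0.4, 0.2])
    cells = [np.array([0, 1]), np.array([2, 3])]
    k = np.array([0.7, 0.3])

    def sq_dist(q):
        return float(np.sum((q - p) ** 2))

    constraints = [{"type": "eq", "fun": lambda q, c=c, ki=ki: q[c].sum() - ki}
                   for c, ki in zip(cells, k)]
    res = minimize(sq_dist, x0=p, bounds=[(0.0, 1.0)] * len(p), constraints=constraints)

    print(np.round(res.x, 4))  # ≈ [0.25 0.45 0.25 0.05] -- additive shift within each cell
    # Jeffrey's Rule gives       [0.175 0.525 0.2 0.1], so this choice of D does not agree.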

Any pointers would be awesome.

Refs:

  • Jeffrey, R. C. (1965). The Logic of Decision. McGraw-Hill.
  • Williams, P. M. (1980). Bayesian Conditionalisation and the Principle of Minimum Information. The British Journal for the Philosophy of Science, 31(2), 131–144.
  • Diaconis, P., & Zabell, S. L. (1982). Updating Subjective Probability. Journal of the American Statistical Association, 77(380), 822–830.