Updates from Overleaf
ludwigbothmann committed Nov 22, 2023
1 parent 939583b commit b0aa3b2
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions slides/information-theory/slides-info-kl.tex
@@ -85,7 +85,7 @@

First, we could simply see KL as the expected log-difference between $p(x)$ and $q(x)$:

- $$ D_{KL}(p \| q) = \E_{X \sim p}[\log(p(x)) - \log(q(x))].$$
+ $$ D_{KL}(p \| q) = \E_{X \sim p}[\log(p(X)) - \log(q(X))].$$

This is why we integrate out with respect to the data distribution $p$.
A \enquote{good} approximation $q(x)$ should minimize its difference from $p(x)$.
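
To make the expectation concrete, here is a small worked example (an illustrative sketch, not part of the committed file; the Bernoulli parameters $\theta_p$ and $\theta_q$ are introduced only for this sketch). If $p$ and $q$ are Bernoulli with success probabilities $\theta_p$ and $\theta_q$, the expectation over $X \sim p$ reduces to a two-term sum:

$$ D_{KL}(p \| q) = \theta_p \log\frac{\theta_p}{\theta_q} + (1 - \theta_p) \log\frac{1 - \theta_p}{1 - \theta_q}. $$

For instance, $\theta_p = 0.5$ and $\theta_q = 0.25$ give $0.5 \log 2 + 0.5 \log \frac{2}{3} \approx 0.144$ nats.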
@@ -168,7 +168,7 @@

But maybe we want to pose the question "How different is $q$ from $p$?" by formulating it as:
"If we sample many data from $p$, how easily can we see that $p$ is better than $q$ through LR, on average?"
- $$ \E_p \left[\log \frac{p(x)}{q(x)}\right] $$
+ $$ \E_p \left[\log \frac{p(X)}{q(X)}\right] $$
That expected log-LR is exactly KL!
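
This sampling view also suggests a direct estimator (again an illustrative sketch, not part of the commit; the i.i.d. sample $x_1, \dots, x_n$ is introduced only for illustration). If $x_1, \dots, x_n \sim p$, the law of large numbers gives

$$ \frac{1}{n} \sum_{i=1}^{n} \log \frac{p(x_i)}{q(x_i)} \longrightarrow \E_p \left[\log \frac{p(X)}{q(X)}\right] = D_{KL}(p \| q) \quad \text{as } n \to \infty, $$

so the average log-LR over data drawn from $p$ converges to exactly the KL divergence.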

\framebreak
