
Relating quantum max-relative entropy to classical maximum entropy

Quantum Computing Asked on January 23, 2021

The quantum max-relative entropy between two states is defined as

$$D_{\max}(\rho \| \sigma) := \log \min \{\lambda : \rho \leq \lambda \sigma\},$$

where $\rho \leq \sigma$ should be read as "$\sigma - \rho$ is positive semidefinite". In other words, $D_{\max}$ is the logarithm of the smallest positive real number $\lambda$ that satisfies $\rho \leq \lambda\sigma$.
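For concreteness, here is a minimal numerical sketch of this definition (my own illustration; the helper name `d_max` is hypothetical). It assumes $\sigma$ is full rank, in which case the smallest admissible $\lambda$ is the largest eigenvalue of $\sigma^{-1/2}\rho\,\sigma^{-1/2}$.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def d_max(rho, sigma, base=2):
    """Max-relative entropy D_max(rho || sigma); assumes sigma is full rank."""
    sigma_inv_sqrt = fractional_matrix_power(sigma, -0.5)
    # Smallest lambda with rho <= lambda * sigma is the top eigenvalue of
    # sigma^{-1/2} rho sigma^{-1/2}.
    lam = np.max(np.linalg.eigvalsh(sigma_inv_sqrt @ rho @ sigma_inv_sqrt))
    return np.log(lam) / np.log(base)

# Example: a pure state against the maximally mixed qubit state.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
sigma = np.eye(2) / 2                      # I/2
print(d_max(rho, sigma))                   # 1.0, i.e. log2(2)
```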

In classical information theory, the maximum entropy principle designates the normal distribution as the best choice among all candidate distributions with a given mean and variance, because it maximizes the differential (Shannon) entropy,

$$H(X) = -\int_{-\infty}^{\infty} f(x) \ln f(x) \, dx,$$
where $f(x)$ is the probability density function of the random variable $X$.
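As a quick numerical illustration of this (not part of the original question), one can integrate the formula above for a standard normal, compare it with the closed form $\tfrac{1}{2}\ln(2\pi e)$, and check that a uniform density with the same variance has lower differential entropy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def differential_entropy(pdf, lo, hi):
    """H(X) = -integral of f(x) ln f(x) dx, evaluated numerically on [lo, hi]."""
    return quad(lambda x: -pdf(x) * np.log(pdf(x)), lo, hi)[0]

h_normal = differential_entropy(norm.pdf, -10, 10)
print(h_normal, 0.5 * np.log(2 * np.pi * np.e))   # both approx. 1.4189 nats

# A uniform density on [-sqrt(3), sqrt(3)] also has unit variance, but lower entropy.
h_uniform = np.log(2 * np.sqrt(3.0))              # approx. 1.2425 nats
print(h_uniform)
```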

Can the first measure be extended to probability distributions, rather than quantum states, to coincide with the second? How are quantum max-relative entropy and maximum entropy related, given that maximum entropy, in the classical sense, represents a highly disordered and unconcentrated state?

One Answer

As far as I'm aware there isn't much of a meaningful connection. The corresponding entropy for $D_{\max}$ is the min-entropy (written $H_{\min}$ or $H_{\infty}$). It measures a sort of 'worst case' uncertainty, whereas the Shannon or von Neumann entropies measure an average uncertainty. To answer your first question: the quantum relative entropies or divergences are defined as generalizations of divergences from classical information theory; see the definitions of $D_{\infty}$ for continuous or discrete variables.
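To illustrate that correspondence (a sketch of my own, not from the original answer): for a state $\rho$, the min-entropy is $H_{\min}(\rho) = -\log \lambda_{\max}(\rho)$, which is exactly $-D_{\max}(\rho \| I)$, and it is never larger than the von Neumann entropy.

```python
import numpy as np

def h_min(rho, base=2):
    """Min-entropy H_min(rho) = -log of the largest eigenvalue of rho."""
    return -np.log(np.max(np.linalg.eigvalsh(rho))) / np.log(base)

def von_neumann(rho, base=2):
    """Von Neumann entropy, dropping numerically zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return -np.sum(evals * np.log(evals)) / np.log(base)

rho = np.diag([0.9, 0.1])     # a fairly "peaked" qubit state
print(h_min(rho))             # approx. 0.152 (worst-case uncertainty, = -D_max(rho || I))
print(von_neumann(rho))       # approx. 0.469 (the average uncertainty is larger)
```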


Relative entropies (also called divergences) are not entropies like the standard Shannon entropy. Notice that they take in two arguments $\rho$ and $\sigma$, as opposed to something like the Shannon entropy which only has a single probability distribution as an argument (or the von Neumann entropy with quantum states).

However, you can define these 'standard' entropies from the divergences. You can think of the divergences as being a generalization of entropy. For example, let's take two probability distributions $p$ and $q$. The Kullback-Leibler divergence is defined (for discrete distributions) as
$$ D(p\|q) = \sum_x p(x) \log\frac{p(x)}{q(x)}. $$
Now we can define the Shannon entropy in terms of this divergence by setting the second argument to be a uniform distribution. Doing so we get
$$ \begin{aligned} D(p\|U) &= \sum_x p(x) \log |X| p(x) \\ &= \sum_x p(x) (\log p(x) + \log|X|) \\ &= -H(X) + \log|X|. \end{aligned} $$
Rearranging, we get $H(X) = \log|X| - D(p\|U)$. We can do a similar thing with the quantum version of the Kullback-Leibler divergence to define the von Neumann entropy. Similarly, we can use $D_{\max}$ (quantum or classical) to define a min-entropy $H_{\min}$ (quantum or classical). To summarize, the divergences (or relative entropies) are generalizations of the standard entropies, from which the standard entropies can be recovered. Note that the divergences are extremely useful; they can also be used to define conditional entropies and other quantities like the mutual information.
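Here is a small numerical check of the identity $H(X) = \log|X| - D(p\|U)$ derived above, on an example distribution of my own choosing:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])      # an arbitrary example distribution
u = np.full_like(p, 1 / len(p))              # uniform distribution U on 4 outcomes

shannon = -np.sum(p * np.log2(p))            # H(X) = 1.75 bits
kl_to_uniform = np.sum(p * np.log2(p / u))   # D(p || U) = 0.25 bits
print(shannon, np.log2(len(p)) - kl_to_uniform)   # both print 1.75
```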

You can think of a divergence as measuring a distance between its two arguments (note that it is not a metric, though). The max divergence is the largest of the divergences and thus gives an overly generous measure of the distance. Its corresponding 'standard' entropy $H_{\min}$ is the smallest of the 'standard' entropies, as it gives an overly generous measure of how much we know about the argument. To clarify, when I said above that $H_{\min}$ gives a worst-case uncertainty, I was thinking from the perspective of cryptography, where it is most commonly used. In cryptography you often want to measure the knowledge an adversary has about some secret, and $H_{\min}$ returns the smallest uncertainty for the adversary. For security it's best to overestimate the knowledge of an eavesdropper.
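A classical toy example of this 'worst case' reading (my own illustration): $H_{\min}(X) = -\log_2 \max_x p(x)$ is set entirely by the adversary's single best guess, while the Shannon entropy averages over all outcomes and can look far more reassuring.

```python
import numpy as np

def min_entropy(p):
    """H_min(X) = -log2 of the single most likely outcome."""
    return -np.log2(np.max(p))

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A 16-valued "secret" heavily biased toward one value.
p = np.array([0.5] + [0.5 / 15] * 15)
print(min_entropy(p))      # 1.0 bit -- the adversary guesses right half the time
print(shannon_entropy(p))  # approx. 2.95 bits -- the average makes it look far more secure
```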

Correct answer by Rammus on January 23, 2021
