State-dependent information

In information theory, state-dependent information is the generic name given to the family of state-dependent measures whose expectation over states equals the mutual information.

State-dependent information measures often appear in neuroscience applications.

Let $X$ and $Y$ be random variables and let $y$ be a state (a particular outcome) of $Y$. The state-dependent information between a random variable $X$ and a state $y$ is written as $i(X; y)$. There are currently three known varieties of state-dependent information. They are usually denoted as $i_1$, $i_2$, and $i_3$. In expectation over $p(y)$, each of $i_1$ and $i_2$ recovers the mutual information $I(X; Y)$; $i_3$ recovers it in expectation over the joint $p(x, y)$.
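As a running illustration for the three measures below, here is a minimal NumPy sketch of a small discrete joint distribution together with the marginals and conditionals each definition needs. The array values and names (`p_xy`, `p_x`, `p_y`, `p_x_given_y`, `I_xy`) are arbitrary illustrative choices, not notation from the literature; the later sketches reuse them.

```python
import numpy as np

# A small joint distribution p(x, y) with |X| = 3 and |Y| = 2 states
# (arbitrary positive entries, normalized to sum to 1).
p_xy = np.array([[0.20, 0.05],
                 [0.10, 0.25],
                 [0.15, 0.25]])
assert np.isclose(p_xy.sum(), 1.0)

p_x = p_xy.sum(axis=1)       # marginal p(x)
p_y = p_xy.sum(axis=0)       # marginal p(y)
p_x_given_y = p_xy / p_y     # column y holds the conditional p(x | y)

# Mutual information I(X; Y) in bits -- the quantity each
# state-dependent measure must recover in expectation.
I_xy = np.sum(p_xy * np.log2(p_xy / np.outer(p_x, p_y)))
```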

Specific-Surprise

"Specific-surprise", is defined by a Kullback–Leibler divergence,

.
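A direct transcription of this definition, continuing the running NumPy example above (`specific_surprise` is a hypothetical helper name). Averaging over $p(y)$ recovers $I(X; Y)$, which is the sense in which the measure converges in expectation to the mutual information.

```python
def specific_surprise(p_x_given_y, p_x, y):
    """i1(X; y) = D_KL[ p(X|y) || p(X) ], in bits."""
    q = p_x_given_y[:, y]
    return np.sum(q * np.log2(q / p_x))

i1 = np.array([specific_surprise(p_x_given_y, p_x, y)
               for y in range(len(p_y))])
assert np.all(i1 >= 0)                     # specific-surprise is nonnegative
assert np.isclose(np.sum(p_y * i1), I_xy)  # E_y[ i1 ] = I(X; Y)
```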

As a special case of the chain rule for Kullback–Leibler divergences, specific-surprise follows the chain rule for variables. Using $Z$ as a second random variable, this is specifically

$$i_1(X, Z; y) = i_1(X; y) + \sum_{x} p(x \mid y)\, D_{\mathrm{KL}}\!\left[\, p(Z \mid x, y) \,\big\|\, p(Z \mid x) \,\right],$$

where the second term is the expected specific-surprise about $Z$ once $X$ is known.
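A numerical check of this chain rule on a randomly drawn three-variable joint $p(x, z, y)$, continuing the running example; the construction below is purely an illustrative sketch (variable names and the seeded RNG are arbitrary).

```python
rng = np.random.default_rng(0)
p_xzy = rng.random((3, 4, 2))
p_xzy /= p_xzy.sum()                   # joint p(x, z, y), strictly positive

p_y3 = p_xzy.sum(axis=(0, 1))          # p(y)
p_xz = p_xzy.sum(axis=2)               # p(x, z)
p_x3 = p_xz.sum(axis=1)                # p(x)

y = 0
p_xz_y = p_xzy[:, :, y] / p_y3[y]      # p(x, z | y)
p_x_y = p_xz_y.sum(axis=1)             # p(x | y)

# Left side: i1(X, Z; y) = D_KL[ p(X,Z|y) || p(X,Z) ]
lhs = np.sum(p_xz_y * np.log2(p_xz_y / p_xz))

# Right side: i1(X; y) plus the conditional term averaged over states of X
i1_x = np.sum(p_x_y * np.log2(p_x_y / p_x3))
p_z_xy = p_xz_y / p_x_y[:, None]       # p(z | x, y)
p_z_x = p_xz / p_x3[:, None]           # p(z | x)
cond = np.sum(p_x_y[:, None] * p_z_xy * np.log2(p_z_xy / p_z_x))

assert np.isclose(lhs, i1_x + cond)
```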

Intuitively, specific-surprise is thought of as “how much did my beliefs about $X$ change upon learning that $Y = y$?”, which is zero when there is no change. It is nonnegative. Specific-surprise has also been called “Bayesian surprise”.

Specific-Information

"Specific-information", is defined by a difference of entropies,

.
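The corresponding transcription for $i_2$, again continuing the running example (`entropy` and `specific_information` are hypothetical helper names). Its average over $p(y)$ is $H(X) - H(X \mid Y) = I(X; Y)$.

```python
def entropy(p):
    """Shannon entropy in bits."""
    return -np.sum(p * np.log2(p))

def specific_information(p_x_given_y, p_x, y):
    """i2(X; y) = H(X) - H(X | y), in bits."""
    return entropy(p_x) - entropy(p_x_given_y[:, y])

i2 = np.array([specific_information(p_x_given_y, p_x, y)
               for y in range(len(p_y))])
assert np.isclose(np.sum(p_y * i2), I_xy)  # E_y[ i2 ] = I(X; Y)
```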

Specific-information follows the chain rule for states. Using $z$ as a state of a second random variable $Z$, this is specifically

$$i_2(X; y, z) = i_2(X; y) + i_2(X; z \mid y),$$

where $i_2(X; z \mid y) = H(X \mid y) - H(X \mid y, z)$, so the identity follows by telescoping the entropy differences.
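Because each term is a difference of (conditional) entropies, the chain rule can be checked in a few lines, reusing the three-variable joint and the `entropy` helper from the sketches above:

```python
z = 1
p_z_y = p_xz_y.sum(axis=0)             # p(z | y)
p_x_yz = p_xz_y[:, z] / p_z_y[z]       # p(x | y, z)

i2_yz = entropy(p_x3) - entropy(p_x_yz)          # i2(X; y, z)
i2_y = entropy(p_x3) - entropy(p_x_y)            # i2(X; y)
i2_z_given_y = entropy(p_x_y) - entropy(p_x_yz)  # i2(X; z | y)
assert np.isclose(i2_yz, i2_y + i2_z_given_y)
```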

Specific-information is interpreted as “how much did the uncertainty about $X$ change upon learning $Y = y$?” It can be positive or negative. When $X$ follows a uniform distribution, $i_1$ and $i_2$ are equivalent.
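A quick numerical check of this equivalence: construct a joint whose $X$-marginal is uniform by fixing an arbitrary conditional $p(y \mid x)$ and dividing by $|X|$ (an illustrative construction, reusing the helpers defined above).

```python
p_y_given_x = np.array([[0.9, 0.1],    # arbitrary rows summing to 1
                        [0.4, 0.6],
                        [0.2, 0.8]])
p_xy_u = p_y_given_x / 3.0             # joint with uniform p(x) = 1/3
p_x_u = p_xy_u.sum(axis=1)
p_y_u = p_xy_u.sum(axis=0)
p_x_given_y_u = p_xy_u / p_y_u

for y in range(len(p_y_u)):
    assert np.isclose(specific_surprise(p_x_given_y_u, p_x_u, y),
                      specific_information(p_x_given_y_u, p_x_u, y))
```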

State-Specific-Information

The state-specific information, $i_3$, is a synonym for the pointwise mutual information,

$$i_3(x; y) = \log \frac{p(x, y)}{p(x)\, p(y)} = \log \frac{p(x \mid y)}{p(x)}.$$
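Unlike $i_1$ and $i_2$, this measure is pointwise in both arguments, and its expectation is taken over the joint $p(x, y)$. Continuing the running example (`pmi` is a hypothetical helper name):

```python
def pmi(p_xy, p_x, p_y, x, y):
    """i3(x; y) = log p(x, y) / (p(x) p(y)), in bits."""
    return np.log2(p_xy[x, y] / (p_x[x] * p_y[y]))

i3 = np.array([[pmi(p_xy, p_x, p_y, x, y) for y in range(len(p_y))]
               for x in range(len(p_x))])
assert np.isclose(np.sum(p_xy * i3), I_xy)   # E_{x,y}[ i3 ] = I(X; Y)
```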
