\documentclass[conference,a4paper]{IEEEtran}
\usepackage{cite}
\usepackage[cmex10]{amsmath}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{epstopdf}
\usepackage{url}
\usepackage{listings}
%\usepackage[expectangle]{tools}
\usepackage{tools}
\usepackage{tikz}
\usetikzlibrary{calc}
\usetikzlibrary{matrix}
\usetikzlibrary{patterns}
\usetikzlibrary{arrows}

\let\citep=\cite
\newcommand{\colfig}[2][1]{\includegraphics[width=#1\linewidth]{ifigs/#2}}%
\newcommand\preals{\reals_+}
\newcommand\X{\mathcal{X}}
\newcommand\Y{\mathcal{Y}}
\newcommand\domS{\mathcal{S}}
\newcommand\A{\mathcal{A}}
\newcommand\Data{\mathcal{D}}
\newcommand\rvm[1]{\mathrm{#1}}
\newcommand\sps{\,.\,}
\newcommand\Ipred{\mathcal{I}_{\mathrm{pred}}}
\newcommand\Ix{\mathcal{I}}
\newcommand\IXZ{\overline{\underline{\mathcal{I}}}}
\newcommand\x{\vec{x}}
\newcommand\Ham[1]{\mathcal{H}_{#1}}
\newcommand\subsets[2]{[#1]^{(k)}}
\def\bet(#1,#2){#1..#2}


\def\ev(#1=#2){#1\!\!=\!#2}
\newcommand\rv[1]{\Omega \to #1}
\newcommand\ceq{\!\!=\!}
\newcommand\cmin{\!-\!}
\newcommand\modulo[2]{#1\!\!\!\!\!\mod#2}

\newcommand\sumitoN{\sum_{i=1}^N}
\newcommand\sumktoK{\sum_{k=1}^K}
\newcommand\sumjtoK{\sum_{j=1}^K}
\newcommand\sumalpha{\sum_{\alpha\in\A}}
\newcommand\prodktoK{\prod_{k=1}^K}
\newcommand\prodjtoK{\prod_{j=1}^K}

\newcommand\past[1]{\overset{\rule{0pt}{0.2em}\smash{\leftarrow}}{#1}}
\newcommand\fut[1]{\overset{\rule{0pt}{0.1em}\smash{\rightarrow}}{#1}}
\newcommand\parity[2]{P^{#1}_{2,#2}}

%\usepackage[parfill]{parskip}
\begin{document}
\title{Cognitive Music Modelling: an Information Dynamics Approach}

\author{
\IEEEauthorblockN{Samer A. Abdallah, Henrik Ekeus, Peter Foster}
\IEEEauthorblockN{Andrew Robertson and Mark D. Plumbley}
\IEEEauthorblockA{Centre for Digital Music\\
Queen Mary University of London\\
Mile End Road, London E1 4NS\\
Email:}}

\maketitle
\begin{abstract}
People take in information when perceiving music, and with it they
continually build predictive models of what is going to happen next.
Information-theoretic measures of these models relate closely to how
we perceive music, so an information-theoretic approach to music
cognition is a fruitful avenue of research. In this paper, we review
the theoretical foundations of information dynamics and discuss a few
emerging areas of application.
\end{abstract}


\section{Introduction}
\label{s:Intro}

\subsection{Expectation and surprise in music}
One of the effects of listening to music is to create
expectations of what is to come next, which may be fulfilled
immediately, after some delay, or not at all as the case may be.
This is the thesis put forward by, amongst others, music theorists
L. B. Meyer \cite{Meyer67} and Narmour \citep{Narmour77}, but was
recognised much earlier; for example,
it was elegantly put by Hanslick \cite{Hanslick1854} in the
nineteenth century:
\begin{quote}
`The most important factor in the mental process which accompanies the
act of listening to music, and which converts it to a source of pleasure,
is \ldots the intellectual satisfaction
which the listener derives from continually following and anticipating
the composer's intentions---now, to see his expectations fulfilled, and
now, to find himself agreeably mistaken.'
%It is a matter of course that
%this intellectual flux and reflux, this perpetual giving and receiving
%takes place unconsciously, and with the rapidity of lightning-flashes.'
\end{quote}
An essential aspect of this is that music is experienced as a phenomenon
that `unfolds' in time, rather than being apprehended as a static object
presented in its entirety. Meyer argued that musical experience depends
on how we change and revise our conceptions \emph{as events happen}, on
how expectation and prediction interact with occurrence, and that, to a
large degree, the way to understand the effect of music is to focus on
this `kinetics' of expectation and surprise.

Prediction and expectation are essentially probabilistic concepts
and can be treated mathematically using probability theory.
We suppose that when we listen to music, expectations are created on the basis
of our familiarity with various styles of music and our ability to
detect and learn statistical regularities in the music as they emerge.
There is experimental evidence that human listeners are able to internalise
statistical knowledge about musical structure, \eg
\citep{SaffranJohnsonAslin1999,EerolaToiviainenKrumhansl2002}, and also
that statistical models can form an effective basis for computational
analysis of music, \eg
\cite{ConklinWitten95,PonsfordWigginsMellish1999,Pearce2005}.


\comment{
The business of making predictions and assessing surprise is essentially
one of reasoning under conditions of uncertainty and manipulating
degrees of belief about the various propositions which may or may not
hold, and, as has been argued elsewhere \cite{Cox1946,Jaynes27}, best
quantified in terms of Bayesian probability theory.
Thus, we suppose that
when we listen to music, expectations are created on the basis of our
familiarity with various stylistic norms that apply to music in general,
the particular style (or styles) of music that seem best to fit the piece
we are listening to, and
the emerging structures peculiar to the current piece. There is
experimental evidence that human listeners are able to internalise
statistical knowledge about musical structure, \eg
\citep{SaffranJohnsonAslin1999,EerolaToiviainenKrumhansl2002}, and also
that statistical models can form an effective basis for computational
analysis of music, \eg
\cite{ConklinWitten95,PonsfordWigginsMellish1999,Pearce2005}.
}

\subsection{Music and information theory}
With a probabilistic framework for music modelling and prediction in hand,
we are in a position to apply Shannon's quantitative information theory
\cite{Shannon48}.
\comment{
which provides us with a number of measures, such as entropy
and mutual information, which are suitable for quantifying states of
uncertainty and surprise, and thus could potentially enable us to build
quantitative models of the listening process described above. They are
what Berlyne \cite{Berlyne71} called `collative variables' since they are
to do with patterns of occurrence rather than medium-specific details.
Berlyne sought to show that the collative variables are closely related to
perceptual qualities like complexity, tension, interestingness,
and even aesthetic value, not just in music, but in other temporal
or visual media.
The relevance of information theory to music and art has
also been addressed by researchers from the 1950s onwards
\cite{Youngblood58,CoonsKraehenbuehl1958,Cohen1962,HillerBean66,Moles66,Meyer67}.
}
The relationship between information theory and music and art in general
has been a subject of interest since the 1950s
\cite{Youngblood58,CoonsKraehenbuehl1958,HillerBean66,Moles66,Meyer67,Cohen1962}.
The general thesis is that perceptible qualities and subjective
states like uncertainty, surprise, complexity, tension, and interestingness
are closely related to
information-theoretic quantities like entropy, relative entropy,
and mutual information.
% and are major determinants of the overall experience.
Berlyne \cite{Berlyne71} called such quantities `collative variables', since
they are to do with patterns of occurrence rather than medium-specific details,
and developed the ideas of `information aesthetics' in an experimental setting.
% Berlyne's `new experimental aesthetics', the `information-aestheticians'.

% Listeners then experience greater or lesser levels of surprise
% in response to departures from these norms.
% By careful manipulation
% of the material, the composer can thus define, and induce within the
% listener, a temporal programme of varying
% levels of uncertainty, ambiguity and surprise.


\subsection{Information dynamic approach}

Bringing the various strands together, our working hypothesis is that as a
listener (to which we will refer as `it') listens to a piece of music, it maintains
a dynamically evolving probabilistic model that enables it to make predictions
about how the piece will continue, relying on both its previous experience
of music and the immediate context of the piece. As events unfold, it revises
its probabilistic belief state, which includes predictive
distributions over possible future events. These
% distributions and changes in distributions
can be characterised in terms of a handful of information-theoretic
measures such as entropy and relative entropy. By tracing the
evolution of these measures, we obtain a representation which captures much
of the significant structure of the music.

One of the consequences of this approach is that regardless of the details of
the sensory input or even which sensory modality is being processed, the resulting
analysis is in terms of the same units: quantities of information (bits) and
rates of information flow (bits per second). The probabilistic and information
theoretic concepts in terms of which the analysis is framed are universal to all sorts
of data.
In addition, when adaptive probabilistic models are used, expectations are
created mainly in response to \emph{patterns} of occurrence,
rather than the details of which specific things occur.
Together, these suggest that an information dynamic analysis captures a
high level of \emph{abstraction}, and could be used to
make structural comparisons between different temporal media,
such as music, film, animation, and dance.
% analyse and compare information
% flow in different temporal media regardless of whether they are auditory,
% visual or otherwise.

Another consequence is that the information dynamic approach gives us a principled way
to address the notion of \emph{subjectivity}, since the analysis is dependent on the
probability model the observer starts off with, which may depend on prior experience
or other factors, and which may change over time. Thus, inter-subject variability and
variation in subjects' responses over time are
fundamental to the theory.

%modelling the creative process, which often alternates between generative
%and selective or evaluative phases \cite{Boden1990}, and would have
%applications in tools for computer aided composition.


\section{Theoretical review}

\subsection{Entropy and information in sequences}
Let $X$ denote some variable whose value is initially unknown to our
hypothetical observer. We will treat $X$ mathematically as a random variable,
with a value to be drawn from some set (or \emph{alphabet}) $\A$ and a
probability distribution representing the observer's beliefs about the
true value of $X$.
In this case, the observer's uncertainty about $X$ can be quantified
as the entropy of the random variable, $H(X)$. For a discrete variable
with probability mass function $p:\A \to [0,1]$, this is
\begin{equation}
  H(X) = \sum_{x\in\A} -p(x) \log p(x) = \expect{-\log p(X)},
\end{equation}
where $\expect{}$ is the expectation operator. The negative log-probability
$\ell(x) = -\log p(x)$ of a particular value $x$ can usefully be thought of as
the \emph{surprisingness} of the value $x$ should it be observed, and
hence the entropy is the expected surprisingness.
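As a concrete illustration (ours, not code from the cited papers), both
quantities can be computed directly from a probability mass function; the
following Python sketch uses base-2 logarithms so that the results are in bits:
\begin{lstlisting}[language=Python]
import numpy as np

def surprisingness(p, x):
    """ell(x) = -log2 p(x): surprisingness of observing value x."""
    return -np.log2(p[x])

def entropy(p):
    """H(X) = E[-log2 p(X)]: expected surprisingness, in bits."""
    p = np.asarray(p)
    return float(-np.sum(p[p > 0] * np.log2(p[p > 0])))

p = [0.5, 0.25, 0.25]         # beliefs about X over a 3-symbol alphabet
print(surprisingness(p, 1))   # 2.0 bits
print(entropy(p))             # 1.5 bits
\end{lstlisting}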

Now suppose that the observer receives some new data $\Data$ that
causes a revision of its beliefs about $X$. The \emph{information}
in this new data \emph{about} $X$ can be quantified as the
Kullback-Leibler (KL) divergence between the prior and posterior
distributions $p(x)$ and $p(x|\Data)$ respectively:
\begin{equation}
  \mathcal{I}_{\Data\to X} = D(p_{X|\Data} || p_{X})
    = \sum_{x\in\A} p(x|\Data) \log \frac{p(x|\Data)}{p(x)}.
\end{equation}
If there are multiple variables $X_1, X_2$
\etc which the observer believes to be dependent, then the observation of
one may change its beliefs and hence yield information about the
others.
The relationships between the various joint entropies, conditional
entropies, mutual informations and conditional mutual informations
can be visualised in Venn diagram-like \emph{information diagrams}
or I-diagrams \cite{Yeung1991}; see, for example, the three-variable
I-diagram in \figrf{venn-example}.
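A minimal numerical sketch of this information gain, continuing the Python
example above (the prior and posterior here are invented for illustration):
\begin{lstlisting}[language=Python]
def information_about(posterior, prior):
    """I_{D->X} = D(p_{X|D} || p_X), in bits."""
    post, pri = np.asarray(posterior), np.asarray(prior)
    mask = post > 0
    return float(np.sum(post[mask] * np.log2(post[mask] / pri[mask])))

prior = [0.5, 0.25, 0.25]
posterior = [0.1, 0.8, 0.1]   # revised beliefs after observing D
print(information_about(posterior, prior))   # about 0.98 bits
\end{lstlisting}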


\begin{fig}{venn-example}
\newcommand\rad{2.2em}%
\newcommand\circo{circle (3.4em)}%
\newcommand\labrad{4.3em}
\newcommand\bound{(-6em,-5em) rectangle (6em,6em)}
\newcommand\colsep{\ }
\newcommand\clipin[1]{\clip (#1) \circo;}%
\newcommand\clipout[1]{\clip \bound (#1) \circo;}%
\newcommand\cliptwo[3]{%
\begin{scope}
\clipin{#1};
\clipin{#2};
\clipout{#3};
\fill[black!30] \bound;
\end{scope}
}%
\newcommand\clipone[3]{%
\begin{scope}
\clipin{#1};
\clipout{#2};
\clipout{#3};
\fill[black!15] \bound;
\end{scope}
}%
\begin{tabular}{c@{\colsep}c}
\begin{tikzpicture}[baseline=0pt]
\coordinate (p1) at (90:\rad);
\coordinate (p2) at (210:\rad);
\coordinate (p3) at (-30:\rad);
\clipone{p1}{p2}{p3};
\clipone{p2}{p3}{p1};
\clipone{p3}{p1}{p2};
\cliptwo{p1}{p2}{p3};
\cliptwo{p2}{p3}{p1};
\cliptwo{p3}{p1}{p2};
\begin{scope}
\clip (p1) \circo;
\clip (p2) \circo;
\clip (p3) \circo;
\fill[black!45] \bound;
\end{scope}
\draw (p1) \circo;
\draw (p2) \circo;
\draw (p3) \circo;
\path
(barycentric cs:p3=1,p1=-0.2,p2=-0.1) +(0ex,0) node {$I_{3|12}$}
(barycentric cs:p1=1,p2=-0.2,p3=-0.1) +(0ex,0) node {$I_{1|23}$}
(barycentric cs:p2=1,p3=-0.2,p1=-0.1) +(0ex,0) node {$I_{2|13}$}
(barycentric cs:p3=1,p2=1,p1=-0.55) +(0ex,0) node {$I_{23|1}$}
(barycentric cs:p1=1,p3=1,p2=-0.55) +(0ex,0) node {$I_{13|2}$}
(barycentric cs:p2=1,p1=1,p3=-0.55) +(0ex,0) node {$I_{12|3}$}
(barycentric cs:p3=1,p2=1,p1=1) node {$I_{123}$}
;
\path
(p1) +(140:\labrad) node {$X_1$}
(p2) +(-140:\labrad) node {$X_2$}
(p3) +(-40:\labrad) node {$X_3$};
\end{tikzpicture}
&
\parbox{0.5\linewidth}{
\small
\begin{align*}
I_{1|23} &= H(X_1|X_2,X_3) \\
I_{13|2} &= I(X_1;X_3|X_2) \\
I_{1|23} + I_{13|2} &= H(X_1|X_2) \\
I_{12|3} + I_{123} &= I(X_1;X_2)
\end{align*}
}
\end{tabular}
\caption{
Information diagram visualisation of entropies and mutual informations
for three random variables $X_1$, $X_2$ and $X_3$. The areas of
the three circles represent $H(X_1)$, $H(X_2)$ and $H(X_3)$ respectively.
The total shaded area is the joint entropy $H(X_1,X_2,X_3)$.
The central area $I_{123}$ is the co-information \cite{McGill1954}.
Some other information measures are indicated in the legend.
}
\end{fig}
[Adopting notation of recent Binding information paper.]
\subsection{`Anatomy of a bit' stuff}
Entropy rates, redundancy, predictive information etc.
Information diagrams.

\begin{fig}{predinfo-bg}
\newcommand\subfig[2]{\shortstack{#2\\[0.75em]#1}}
\newcommand\rad{1.8em}%
\newcommand\ovoid[1]{%
++(-#1,\rad)
-- ++(2 * #1,0em) arc (90:-90:\rad)
-- ++(-2 * #1,0em) arc (270:90:\rad)
}%
\newcommand\axis{2.75em}%
\newcommand\olap{0.85em}%
\newcommand\offs{3.6em}
\newcommand\colsep{\hspace{5em}}
\newcommand\longblob{\ovoid{\axis}}
\newcommand\shortblob{\ovoid{1.75em}}
\begin{tabular}{c@{\colsep}c}
\subfig{(a) excess entropy}{%
\newcommand\blob{\longblob}
\begin{tikzpicture}
\coordinate (p1) at (-\offs,0em);
\coordinate (p2) at (\offs,0em);
\begin{scope}
\clip (p1) \blob;
\clip (p2) \blob;
\fill[lightgray] (-1,-1) rectangle (1,1);
\end{scope}
\draw (p1) +(-0.5em,0em) node{\shortstack{infinite\\past}} \blob;
\draw (p2) +(0.5em,0em) node{\shortstack{infinite\\future}} \blob;
\path (0,0) node (future) {$E$};
\path (p1) +(-2em,\rad) node [anchor=south] {$\ldots,X_{-1}$};
\path (p2) +(2em,\rad) node [anchor=south] {$X_0,\ldots$};
\end{tikzpicture}%
}%
\\[1.25em]
\subfig{(b) predictive information rate $b_\mu$}{%
\begin{tikzpicture}%[baseline=-1em]
\newcommand\rc{2.1em}
\newcommand\throw{2.5em}
\coordinate (p1) at (210:1.5em);
\coordinate (p2) at (90:0.7em);
\coordinate (p3) at (-30:1.5em);
\newcommand\bound{(-7em,-2.6em) rectangle (7em,3.0em)}
\newcommand\present{(p2) circle (\rc)}
\newcommand\thepast{(p1) ++(-\throw,0) \ovoid{\throw}}
\newcommand\future{(p3) ++(\throw,0) \ovoid{\throw}}
\newcommand\fillclipped[2]{%
\begin{scope}[even odd rule]
\foreach \thing in {#2} {\clip \thing;}
\fill[black!#1] \bound;
\end{scope}%
}%
\fillclipped{30}{\present,\future,\bound \thepast}
\fillclipped{15}{\present,\bound \future,\bound \thepast}
\draw \future;
\fillclipped{45}{\present,\thepast}
\draw \thepast;
\draw \present;
\node at (barycentric cs:p2=1,p1=-0.17,p3=-0.17) {$r_\mu$};
\node at (barycentric cs:p1=-0.4,p2=1.0,p3=1) {$b_\mu$};
\node at (barycentric cs:p3=0,p2=1,p1=1.2) [shape=rectangle,fill=black!45,inner sep=1pt]{$\rho_\mu$};
\path (p2) +(140:3em) node {$X_0$};
% \node at (barycentric cs:p3=0,p2=1,p1=1) {$\rho_\mu$};
\path (p3) +(3em,0em) node {\shortstack{infinite\\future}};
\path (p1) +(-3em,0em) node {\shortstack{infinite\\past}};
\path (p1) +(-4em,\rad) node [anchor=south] {$\ldots,X_{-1}$};
\path (p3) +(4em,\rad) node [anchor=south] {$X_1,\ldots$};
\end{tikzpicture}}%
\\[0.5em]
\end{tabular}
\caption{
Venn diagram representation of several information measures for
stationary random processes. Each circle or oval represents a random
variable or sequence of random variables relative to time $t=0$. Overlapped areas
correspond to various mutual informations as in \Figrf{venn-example}.
In (b), the circle represents the `present'. Its total area is
$H(X_0)=\rho_\mu+r_\mu+b_\mu$, where $\rho_\mu$ is the multi-information
rate, $r_\mu$ is the residual entropy rate, and $b_\mu$ is the predictive
information rate. The entropy rate is $h_\mu = r_\mu+b_\mu$.
}
\end{fig}

\subsection{Predictive information rate}
In previous work \cite{AbdallahPlumbley2009}, we introduced
% examined several
% information-theoretic measures that could be used to characterise
% not only random processes (\ie, an ensemble of possible sequences),
% but also the dynamic progress of specific realisations of such processes.
% One of these measures was
%
the \emph{predictive information rate}
(PIR), which is the average information
in one observation about the infinite future given the infinite past.
If $\past{X}_t=(\ldots,X_{t-2},X_{t-1})$ denotes the variables
before time $t$,
and $\fut{X}_t = (X_{t+1},X_{t+2},\ldots)$ denotes
those after $t$,
the PIR at time $t$ is defined as a conditional mutual information:
\begin{equation}
  \label{eq:PIR}
  \IXZ_t \define I(X_t;\fut{X}_t|\past{X}_t) = H(\fut{X}_t|\past{X}_t) - H(\fut{X}_t|X_t,\past{X}_t).
\end{equation}
% (The underline/overline notation follows that of \cite[\S 3]{AbdallahPlumbley2009}.)
% Hence, $\Ix_t$ quantifies the \emph{new}
% information gained about the future from the observation at time $t$.
Equation \eqrf{PIR} can be read as the average reduction
in uncertainty about the future on learning $X_t$, given the past.
Due to the symmetry of the mutual information, it can also be written
as
\begin{equation}
% \IXZ_t
  I(X_t;\fut{X}_t|\past{X}_t) = H(X_t|\past{X}_t) - H(X_t|\fut{X}_t,\past{X}_t).
% \label{<++>}
\end{equation}
% If $X$ is stationary, then
Now, in the shift-invariant case, $H(X_t|\past{X}_t)$
is the familiar entropy rate $h_\mu$, but $H(X_t|\fut{X}_t,\past{X}_t)$,
the conditional entropy of one variable given \emph{all} the others
in the sequence, future as well as past, is what
we called the \emph{residual entropy rate} $r_\mu$ in \cite{AbdallahPlumbley2010},
and what Verd{\'u} and Weissman \cite{VerduWeissman2006} had previously identified
as the \emph{erasure entropy rate}.
% It is not expressible in terms of the block entropy function $H(\cdot)$.
It can be defined as the limit
\begin{equation}
  \label{eq:residual-entropy-rate}
  r_\mu \define \lim_{N\tends\infty} H(X_{\bet(-N,N)}) - H(X_{\bet(-N,-1)},X_{\bet(1,N)}).
\end{equation}
The second term, $H(X_{\bet(-N,-1)},X_{\bet(1,N)})$,
is the joint entropy of two non-adjacent blocks, each of length $N$, with a
gap between them,
and cannot be expressed as a function of block entropies alone.
% In order to associate it with the concept of \emph{binding information} which
% we will define in \secrf{binding-info}, we
Thus, the shift-invariant PIR (which we will write as $b_\mu$) is the difference between
the entropy rate and the erasure entropy rate: $b_\mu = h_\mu - r_\mu$.
These relationships are illustrated in \Figrf{predinfo-bg}, along with
several of the information measures we have discussed so far.


\begin{fig}{wundt}
\raisebox{-4em}{\colfig[0.43]{wundt}}
% {\ \shortstack{{\Large$\longrightarrow$}\\ {\scriptsize\emph{exposure}}}\ }
{\ {\large$\longrightarrow$}\ }
\raisebox{-4em}{\colfig[0.43]{wundt2}}
\caption{
The Wundt curve relating randomness/complexity with
perceived value. Repeated exposure sometimes results
in a move to the left along the curve \cite{Berlyne71}.
}
\end{fig}

\subsection{Other sequential information measures}

James et al.\ \cite{JamesEllisonCrutchfield2011} study the predictive information
rate and also examine some related measures. In particular, they identify
$\sigma_\mu$, the difference between the multi-information rate and the excess
entropy, as an interesting quantity that measures the predictive benefit of
model-building (that is, maintaining an internal state summarising past
observations in order to make better predictions). They also identify
$w_\mu = \rho_\mu + b_{\mu}$, which they call the \emph{local exogenous
information}.

\subsection{First order Markov chains}
These are the simplest non-trivial models to which information dynamics methods
can be applied. In \cite{AbdallahPlumbley2009}, we showed that the predictive information
rate can be expressed simply in terms of entropy rates of the Markov chain.
If we let $a$ denote the transition matrix of the Markov chain, and $h_a$ its
entropy rate, then its predictive information rate $b_a$ is
\begin{equation}
  b_a = h_{a^2} - h_a,
\end{equation}
where $a^2 = aa$, the transition matrix squared, is the transition matrix
of the `skip one' Markov chain obtained by leaving out every other observation.
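As an illustrative sketch (ours, not code from the cited paper), these
quantities are straightforward to compute numerically for a row-stochastic
transition matrix with $a_{ij} = \Pr(X_{t+1}=j \mid X_t=i)$:
\begin{lstlisting}[language=Python]
import numpy as np

def stationary(a):
    """Stationary distribution pi satisfying pi a = pi."""
    vals, vecs = np.linalg.eig(a.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def entropy_rate(a):
    """h_a = -sum_i pi_i sum_j a_ij log2 a_ij, in bits."""
    pi = stationary(a)
    terms = np.where(a > 0, a * np.log2(np.where(a > 0, a, 1.0)), 0.0)
    return float(-np.sum(pi[:, None] * terms))

def pir(a):
    """Predictive information rate b_a = h_{a^2} - h_a."""
    return entropy_rate(a @ a) - entropy_rate(a)

a = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(entropy_rate(a), pir(a))
\end{lstlisting}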

\subsection{Higher order Markov chains}
Second and higher order Markov chains can be treated in a similar way by transforming
to a first order representation of the high order Markov chain. If we are dealing
with an $N$th order model, this is done by forming a new alphabet of possible observations
consisting of all possible $N$-tuples of symbols from the base alphabet. An observation
in this new model represents a block of $N$ observations from the base model. The next
observation represents the block of $N$ obtained by shifting the previous block along
by one step. The new Markov chain is parameterised by a sparse $K^N\times K^N$
transition matrix $\hat{a}$, and the PIR of the original chain is then
\begin{equation}
  b_{\hat{a}} = h_{\hat{a}^{N+1}} - N h_{\hat{a}},
\end{equation}
where $\hat{a}^{N+1}$ is the $(N+1)$th power of the transition matrix.
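Continuing the sketch above, the lifting to a first-order representation can
be written out explicitly for a second-order model (again our own illustration;
here \texttt{p[i, j, k]} denotes $\Pr(X_{t+1}=k \mid X_{t-1}=i, X_t=j)$):
\begin{lstlisting}[language=Python]
from numpy.linalg import matrix_power

def lift_order2(p, K):
    """First-order transition matrix over pair-states (x_{t-1}, x_t),
    indexed as i*K + j; only (i,j) -> (j,k) transitions are non-zero."""
    a_hat = np.zeros((K * K, K * K))
    for i in range(K):
        for j in range(K):
            for k in range(K):
                a_hat[i * K + j, j * K + k] = p[i, j, k]
    return a_hat

def pir_order2(p, K):
    """PIR of an order-2 chain (N = 2): h_{a_hat^3} - 2 h_{a_hat}."""
    a_hat = lift_order2(p, K)
    return entropy_rate(matrix_power(a_hat, 3)) - 2 * entropy_rate(a_hat)
\end{lstlisting}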



\section{Information Dynamics in Analysis}

\subsection{Musicological Analysis}
refer to the work with the analysis of minimalist pieces

\begin{fig}{twopages}
% \colfig[0.96]{matbase/fig9471} % update from mbc paper
\colfig[0.97]{matbase/fig72663}\\ % later update from mbc paper (Keith's new picks)
\vspace*{1em}
\colfig[0.97]{matbase/fig13377} % rule based analysis
\caption{Analysis of \emph{Two Pages}.
The thick vertical lines are the part boundaries as indicated in
the score by the composer.
The thin grey lines
indicate changes in the melodic `figures' of which the piece is
constructed. In the `model information rate' panel, the black asterisks
mark the
six most surprising moments selected by Keith Potter.
The bottom panel shows a rule-based boundary strength analysis computed
using Cambouropoulos' LBDM.
All information measures are in nats and time is in notes.
}
\end{fig}

\begin{fig}{metre}
\scalebox{1}[0.8]{%
\begin{tabular}{cc}
\colfig[0.45]{matbase/fig36859} & \colfig[0.45]{matbase/fig88658} \\
\colfig[0.45]{matbase/fig48061} & \colfig[0.45]{matbase/fig46367} \\
\colfig[0.45]{matbase/fig99042} & \colfig[0.45]{matbase/fig87490}
% \colfig[0.46]{matbase/fig56807} & \colfig[0.48]{matbase/fig27144} \\
% \colfig[0.46]{matbase/fig87574} & \colfig[0.48]{matbase/fig13651} \\
% \colfig[0.44]{matbase/fig19913} & \colfig[0.46]{matbase/fig66144} \\
% \colfig[0.48]{matbase/fig73098} & \colfig[0.48]{matbase/fig57141} \\
% \colfig[0.48]{matbase/fig25703} & \colfig[0.48]{matbase/fig72080} \\
% \colfig[0.48]{matbase/fig9142} & \colfig[0.48]{matbase/fig27751}

\end{tabular}%
}
\caption{Metrical analysis by computing average surprisingness and
informativeness of notes at different periodicities (\ie hypothetical
bar lengths) and phases (\ie positions within a bar).
}
\end{fig}

\subsection{Content analysis/Sound Categorisation}
Using information dynamics it is possible to segment music; the resulting
structure can then be used to search large data sets and to determine musical
structure for the purpose of playlist navigation and search.
\emph{Peter}

\subsection{Beat Tracking}
\emph{Andrew}



\section{Information dynamics as compositional aid}

In addition to applying information dynamics to analysis, it is also possible
to use this approach in design, such as the composition of musical materials. By
providing a framework for linking information theoretic measures to the control
of generative processes, it becomes possible to steer the output of these processes
to match criteria defined by these measures. For instance, the outputs of a
stochastic musical process could be filtered to match constraints defined by a
set of information theoretic measures.

The use of stochastic processes for the generation of musical material has been
widespread for decades -- Iannis Xenakis applied probabilistic mathematical
models to the creation of musical materials, including the formulation of a
theory of Markovian Stochastic Music. However, we can use information dynamics
measures to explore and interface with such processes at the high and abstract
level of expectation, randomness and predictability. The Melody Triangle is
such a system.

\subsection{The Melody Triangle}
The Melody Triangle is an exploratory interface for the discovery of melodic
content, where the input -- positions within a triangle -- directly maps to
information theoretic measures associated with the output.
The measures are the entropy rate, redundancy and predictive information rate
of the random process used to generate the sequence of notes.
These are all related to the predictability of the sequence and as such
address the notions of expectation and surprise in the perception of
music.\emph{self-plagiarised}

Before the Melody Triangle can be used, it has to be `populated' with possible
parameter values for the melody generators. These are then plotted in a 3d
statistical space of redundancy, entropy rate and predictive information rate.
In our case we generated thousands of transition matrices, representing first-order
Markov chains, by a random sampling method. In figure \ref{InfoDynEngine} we see
a representation of how these matrices are distributed in the 3d statistical
space; each one of these points corresponds to a transition
matrix.\emph{self-plagiarised}
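A minimal sketch of this population step, reusing the Markov chain helpers
above (the Dirichlet sampling scheme and the use of the multi-information rate
$\rho_\mu = H(X_0) - h_a$ as the redundancy coordinate are our assumptions for
illustration; the text above says only `a random sampling method'):
\begin{lstlisting}[language=Python]
def triangle_coords(a):
    """(redundancy, entropy rate, PIR) in bits for one matrix."""
    pi = stationary(a)
    h0 = float(-np.sum(pi * np.log2(np.where(pi > 0, pi, 1.0))))
    h = entropy_rate(a)
    return h0 - h, h, pir(a)

rng = np.random.default_rng(0)
K = 8   # alphabet size
# sparse Dirichlet rows give a wide spread of transition matrices
population = [rng.dirichlet(0.1 * np.ones(K), size=K)
              for _ in range(5000)]
points = np.array([triangle_coords(a) for a in population])
\end{lstlisting}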

\begin{figure}
\centering
\includegraphics[width=\linewidth]{figs/mtriscat}
\caption{The population of transition matrices distributed along three axes of
redundancy, entropy rate and predictive information rate (all measured in bits).
The concentrations of points along the redundancy axis correspond
to Markov chains which are roughly periodic with periods of 2 (redundancy 1 bit),
3, 4, \etc all the way to period 8 (redundancy 3 bits). The colour of each point
represents its PIR---note that the highest values are found at intermediate entropy
and redundancy, and that the distribution as a whole makes a curved triangle. Although
not visible in this plot, it is largely hollow in the middle.
\label{InfoDynEngine}}
\end{figure}


When we look at the distribution of transition matrices plotted in this space,
we see that it forms an arch shape that is fairly thin. It thus becomes a
reasonable approximation to pretend that it is just a sheet in two dimensions,
and so we stretch out this curved arc into a flat triangle. It is this triangular
sheet that is our `Melody Triangle' and forms the interface by which the system
is controlled. \emph{self-plagiarised}

When the Melody Triangle is used, regardless of whether as a screen-based
system or as an interactive installation, it involves a mapping to this statistical
space. When the user, through the interface, selects a position within the
triangle, the corresponding transition matrix is returned. Figure \ref{TheTriangle}
shows how the triangle maps to different measures of redundancy, entropy rate
and predictive information rate.\emph{self-plagiarised}
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{figs/TheTriangle.pdf}
\caption{The Melody Triangle\label{TheTriangle}}
\end{figure}
The three corners correspond to different extremes of predictability and
unpredictability, which could be loosely characterised as `periodicity', `noise'
and `repetition'. Melodies from the `noise' corner have no discernible pattern;
they have high entropy rate, low predictive information rate and low redundancy.
These melodies are essentially totally random. Melodies along the `periodicity'
to `repetition' edge are all deterministic loops that get shorter as we approach
the `repetition' corner, until each becomes just one repeating note. It is the
areas in between the extremes that provide the more `interesting' melodies. That
is, those that have some level of unpredictability, but are not completely random.
Or, conversely, that are predictable, but not entirely so. This triangular
space allows for an intuitive exploration of expectation and surprise in temporal
sequences based on a simple model of how one might guess the next event given
the previous one.\emph{self-plagiarised}



Any number of interfaces could be developed for the Melody Triangle. We have
developed two: a standard screen-based interface where a user moves tokens with
a mouse in and around a triangle on screen, and a multi-user interactive
installation where a Kinect camera tracks individuals in a space and maps their
positions in the space to the triangle.
Each visitor generates a melody, and can collaborate with their co-visitors
to generate musical textures -- a playful yet informative way to explore
expectation and surprise in music.

As a screen-based interface the Melody Triangle can serve as a composition tool.
A triangle is drawn on the screen, and screen space is thus mapped to the statistical
space of the Melody Triangle.
A number of round tokens, each representing a melody, can be dragged in and
around the triangle. When a token is dragged into the triangle, the system
starts generating a sequence of notes with statistical properties that
correspond to its position in the triangle.\emph{self-plagiarised}

In this mode, the Melody Triangle can be used as a kind of composition assistant
for the generation of interesting musical textures and melodies. However, unlike
other computer aided composition tools or programming environments, here the
composer engages with music on the high and abstract level of expectation,
randomness and predictability.\emph{self-plagiarised}
samer@23 700
hekeus@13 701
hekeus@13 702 Additionally the Melody Triangle serves as an effective tool for experimental investigations into musical preference and their relationship to the information dynamics models.
samer@4 703
hekeus@13 704 %As the Melody Triangle essentially operates on a stream of symbols, it it is possible to apply the melody triangle to the design of non-sonic content.
hekeus@13 705
\section{Musical Preference and Information Dynamics}
We carried out a preliminary study that sought to identify any correlation between
aesthetic preference and the information-theoretic measures of the Melody
Triangle. In this study participants were asked to use the screen-based interface,
but it was simplified so that all they could do was move tokens around. To help
discount visual biases, the axes of the triangle were randomly rearranged
for each participant.\emph{self-plagiarised}

The study was divided into two parts. The first investigated musical preference
with respect to single melodies at different tempos. In the second part of the
study, a background melody was playing and the participants were asked to continue
playing with the system under the implicit assumption that they would try to find
a second melody that works well with the background melody. For each participant
this was done four times, each with a different background melody from four
different areas of the Melody Triangle. For all parts of the study the participants
were asked to signal, by pressing the space bar, whenever they liked what they
were hearing.\emph{self-plagiarised}

\emph{todo - results}

\section{Information Dynamics as Evaluative Feedback Mechanism}

\emph{todo - code the info dyn evaluator :) }

It is possible to use information dynamics measures to develop a kind of `critic'
that would evaluate a stream of symbols. For instance, we could develop a system
to notify us if a stream of symbols is too boring, either because it is too
repetitive or too chaotic. This could be used to evaluate pre-composed
streams of symbols, or even to provide real-time feedback in an
improvisatory setup.
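As a first sketch of such a critic (our own illustration, not a system from
the literature; the thresholds and the exponential-discounting scheme are
invented for the example, and the \texttt{entropy\_rate} helper from the
Markov chain sketch above is reused), one could maintain adaptive bigram
counts over the symbol stream and flag segments whose estimated, normalised
entropy rate drifts too low or too high:
\begin{lstlisting}[language=Python]
def boredom_critic(stream, K, low=0.2, high=0.9, halflife=50):
    """Label each step 'repetitive', 'chaotic' or 'ok' by comparing a
    running entropy-rate estimate (normalised by log2 K) to thresholds."""
    counts = np.ones((K, K))          # additive smoothing
    decay = 0.5 ** (1.0 / halflife)   # forget old evidence
    verdicts = []
    for prev, x in zip(stream, stream[1:]):
        counts *= decay
        counts[prev, x] += 1.0
        a = counts / counts.sum(axis=1, keepdims=True)
        h = entropy_rate(a) / np.log2(K)
        verdicts.append('repetitive' if h < low else
                        'chaotic' if h > high else 'ok')
    return verdicts
\end{lstlisting}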

\emph{comparable system} Gordon Pask's Musicolour (1953) applied a similar notion
of boredom in its design. The Musicolour would react to audio input through a
microphone by flashing coloured lights. Rather than a direct mapping of sound
to light, Pask designed the device to be a partner to a performing musician. It
would adapt its lighting pattern based on the rhythms and frequencies it would
hear, quickly `learning' to flash in time with the music. However, Pask endowed
the device with the ability to `be bored'; if the rhythmic and frequency content
of the input remained the same for too long it would listen for other rhythms
and frequencies, only lighting when it heard these. As the Musicolour would
`get bored', the musician would have to change and vary their playing, eliciting
new and unexpected outputs in trying to keep the Musicolour interested.

In a similar vein, our \emph{Information Dynamics Critic} (name?) allows for an
evaluative measure of an input stream, albeit with a more sophisticated
notion of boredom that \dots




\section{Conclusion}

\bibliographystyle{unsrt}
{\bibliography{all,c4dm,nime}}
\end{document}