\documentclass[conference,a4paper]{IEEEtran}
\usepackage{cite}
\usepackage[cmex10]{amsmath}
\usepackage{graphicx}
\usepackage{amssymb}
\usepackage{epstopdf}
\usepackage{url}
\usepackage{listings}
%\usepackage[expectangle]{tools}
\usepackage{tools}
\usepackage{tikz}
\usetikzlibrary{calc}
\usetikzlibrary{matrix}
\usetikzlibrary{patterns}
\usetikzlibrary{arrows}

\let\citep=\cite
\newcommand{\colfig}[2][1]{\includegraphics[width=#1\linewidth]{figs/#2}}%
\newcommand\preals{\reals_+}
\newcommand\X{\mathcal{X}}
\newcommand\Y{\mathcal{Y}}
\newcommand\domS{\mathcal{S}}
\newcommand\A{\mathcal{A}}
\newcommand\rvm[1]{\mathrm{#1}}
\newcommand\sps{\,.\,}
\newcommand\Ipred{\mathcal{I}_{\mathrm{pred}}}
\newcommand\Ix{\mathcal{I}}
\newcommand\IXZ{\overline{\underline{\mathcal{I}}}}
\newcommand\x{\vec{x}}
\newcommand\Ham[1]{\mathcal{H}_{#1}}
\newcommand\subsets[2]{[#1]^{(#2)}}
\def\bet(#1,#2){#1..#2}


\def\ev(#1=#2){#1\!\!=\!#2}
\newcommand\rv[1]{\Omega \to #1}
\newcommand\ceq{\!\!=\!}
\newcommand\cmin{\!-\!}
\newcommand\modulo[2]{#1\!\!\!\!\!\mod#2}

\newcommand\sumitoN{\sum_{i=1}^N}
\newcommand\sumktoK{\sum_{k=1}^K}
\newcommand\sumjtoK{\sum_{j=1}^K}
\newcommand\sumalpha{\sum_{\alpha\in\A}}
\newcommand\prodktoK{\prod_{k=1}^K}
\newcommand\prodjtoK{\prod_{j=1}^K}

\newcommand\past[1]{\overset{\rule{0pt}{0.2em}\smash{\leftarrow}}{#1}}
\newcommand\fut[1]{\overset{\rule{0pt}{0.1em}\smash{\rightarrow}}{#1}}
\newcommand\parity[2]{P^{#1}_{2,#2}}

%\usepackage[parfill]{parskip}

\begin{document}
\title{Cognitive Music Modelling: an Information Dynamics Approach}

\author{
	\IEEEauthorblockN{Samer A. Abdallah, Henrik Ekeus, Peter Foster}
	\IEEEauthorblockN{Andrew Robertson and Mark D. Plumbley}
	\IEEEauthorblockA{Centre for Digital Music\\
		Queen Mary University of London\\
		Mile End Road, London E1 4NS\\
		Email:}}

\maketitle
\begin{abstract}
	When perceiving music, people take in information and use it to continually
	build predictive models of what is going to happen next. Information-theoretic
	measures of these predictions relate closely to how we perceive music, making
	an information-theoretic approach to music cognition a fruitful avenue of
	research. In this paper, we review the theoretical foundations of information
	dynamics and discuss a few emerging areas of application.
\end{abstract}


\section{Expectation and surprise in music}
\label{s:Intro}

	One of the effects of listening to music is to create 
	expectations of what is to come next, which may be fulfilled
	immediately, after some delay, or not at all as the case may be.
	This is the thesis put forward by, amongst others, the music theorists 
	L. B. Meyer \cite{Meyer67} and Narmour \citep{Narmour77}, but it was
	recognised much earlier; for example, 
	it was elegantly put by Hanslick \cite{Hanslick1854} in the
	nineteenth century:
	\begin{quote}
			`The most important factor in the mental process which accompanies the
			act of listening to music, and which converts it to a source of pleasure, 
			is \ldots the intellectual satisfaction 
			which the listener derives from continually following and anticipating 
			the composer's intentions---now, to see his expectations fulfilled, and 
			now, to find himself agreeably mistaken.' 
			%It is a matter of course that 
			%this intellectual flux and reflux, this perpetual giving and receiving 
			%takes place unconsciously, and with the rapidity of lightning-flashes.'
	\end{quote}
	An essential aspect of this is that music is experienced as a phenomenon
	that `unfolds' in time, rather than being apprehended as a static object
	presented in its entirety. Meyer argued that musical experience depends
	on how we change and revise our conceptions \emph{as events happen}, on
	how expectation and prediction interact with occurrence, and that, to a
	large degree, the way to understand the effect of music is to focus on
	this `kinetics' of expectation and surprise.

	The business of making predictions and assessing surprise is essentially
	one of reasoning under conditions of uncertainty and manipulating
	degrees of belief about the various propositions which may or may not
	hold, and, as has been argued elsewhere \cite{Cox1946,Jaynes27}, is best
	quantified in terms of Bayesian probability theory.
   Thus, we suppose that
	when we listen to music, expectations are created on the basis of our
	familiarity with various stylistic norms, that is, using models that
	encode the statistics of music in general, the particular styles of
	music that seem best to fit the piece we happen to be listening to, and
	the emerging structures peculiar to the current piece.  There is
	experimental evidence that human listeners are able to internalise
	statistical knowledge about musical structure, \eg
	\citep{SaffranJohnsonAslin1999,EerolaToiviainenKrumhansl2002}, and also
	that statistical models can form an effective basis for computational
	analysis of music, \eg
	\cite{ConklinWitten95,PonsfordWigginsMellish1999,Pearce2005}.

	\subsection{Music and information theory}
	Given a probabilistic framework for music modelling and prediction,
	it is a small step to apply quantitative information theory \cite{Shannon48} to
	the models at hand.
	The relationship between information theory and music and art in general has been the 
	subject of some interest since the 1950s 
	\cite{Youngblood58,CoonsKraehenbuehl1958,HillerBean66,Moles66,Meyer67,Cohen1962}. 
	The general thesis is that perceptible qualities and subjective
	states like uncertainty, surprise, complexity, tension, and interestingness
	are closely related to 
	information-theoretic quantities like entropy, relative entropy,
	and mutual information.
%	and are major determinants of the overall experience.
	Berlyne \cite{Berlyne71} called such quantities `collative variables', since 
	they are to do with patterns of occurrence rather than medium-specific details,
	and developed the ideas of `information aesthetics' in an experimental setting.  
%	Berlyne's `new experimental aesthetics', the `information-aestheticians'.

%	Listeners then experience greater or lesser levels of surprise
%	in response to departures from these norms. 
%	By careful manipulation
%	of the material, the composer can thus define, and induce within the
%	listener, a temporal programme of varying
%	levels of uncertainty, ambiguity and surprise. 


	Previous work in this area \cite{Berlyne74} treated the various 
	information theoretic quantities
	such as entropy as if they were intrinsic properties of the stimulus---subjects
	were presented with a sequence of tones with `high entropy', or a visual pattern
	with `low entropy'. These values were determined from some known `objective'
	probability model of the stimuli,%
	\footnote{%
		The notion of objective probabilities and whether or not they can
		usefully be said to exist is the subject of some debate, with advocates of 
		subjective probabilities including de Finetti \cite{deFinetti}.
		Accordingly, we will treat the concept of `true' or `objective' probability 
		models with a grain of salt and not rely on them in our 
		theoretical development.}%
	or from simple statistical analyses such as
	computing empirical distributions. Our approach is explicitly to consider the role
	of the observer in perception, and more specifically, to consider estimates of
	entropy \etc with respect to \emph{subjective} probabilities.
\subsection{Information dynamic approach}

	Bringing the various strands together, our working hypothesis is that
	as a listener (to which we will refer, gender-neutrally, as `it')
	listens to a piece of music, it maintains a dynamically evolving statistical
	model that enables it to make predictions about how the piece will
	continue, relying on both its previous experience of music and the immediate
	context of the piece.
	As events unfold, it revises its model and hence its probabilistic belief state,
	which includes predictive distributions over future observations.
	These distributions, and the changes in them, can be characterised in terms of a handful of
	information-theoretic measures such as entropy and relative entropy.
%	to measure uncertainty and information. %, that is, changes in predictive distributions maintained by the model.
	By tracing the evolution of these measures, we obtain a representation
	which captures much of the significant structure of the
	music. 
	This approach has a number of features which we list below.

		\emph{Abstraction}:
	Because it is sensitive mainly to \emph{patterns} of occurrence, 
	rather than the details of which specific things occur, 
	it operates at a level of abstraction removed from the details of the sensory 
	experience and the medium through which it was received, suggesting that the 
	same approach could, in principle, be used to analyse and compare information 
	flow in different temporal media regardless of whether they are auditory, 
	visual or otherwise. 

		\emph{Generality}: the approach is applicable to any probabilistic model.

		\emph{Subjectivity}:
	Since the analysis is dependent on the probability model the observer brings to the
	problem, which may depend on prior experience or other factors, and which may change
	over time, inter-subject variability and variation in subjects' responses over time are 
	fundamental to the theory. It is essentially a theory of subjective response.
			
	%modelling the creative process, which often alternates between generative
	%and selective or evaluative phases \cite{Boden1990}, and would have
	%applications in tools for computer aided composition.


\section{Theoretical review}

	In this section, we summarise the definitions of some of the relevant quantities
	in information dynamics and show how they can be computed in some simple probabilistic
	models (namely, first and higher-order Markov chains, and Gaussian processes [Peter?]).

	\begin{fig}{venn-example}
		\newcommand\rad{2.2em}%
		\newcommand\circo{circle (3.4em)}%
		\newcommand\labrad{4.3em}
		\newcommand\bound{(-6em,-5em) rectangle (6em,6em)}
		\newcommand\colsep{\ }
		\newcommand\clipin[1]{\clip (#1) \circo;}%
		\newcommand\clipout[1]{\clip \bound (#1) \circo;}%
		\newcommand\cliptwo[3]{%
			\begin{scope}
				\clipin{#1};
				\clipin{#2};
				\clipout{#3};
				\fill[black!30] \bound;
			\end{scope}
		}%
		\newcommand\clipone[3]{%
			\begin{scope}
				\clipin{#1};
				\clipout{#2};
				\clipout{#3};
				\fill[black!15] \bound;
			\end{scope}
		}%
		\begin{tabular}{c@{\colsep}c}
			\begin{tikzpicture}[baseline=0pt]
				\coordinate (p1) at (90:\rad);
				\coordinate (p2) at (210:\rad);
				\coordinate (p3) at (-30:\rad);
				\clipone{p1}{p2}{p3};
				\clipone{p2}{p3}{p1};
				\clipone{p3}{p1}{p2};
				\cliptwo{p1}{p2}{p3};
				\cliptwo{p2}{p3}{p1};
				\cliptwo{p3}{p1}{p2};
            \begin{scope}
               \clip (p1) \circo;
               \clip (p2) \circo;
               \clip (p3) \circo;
               \fill[black!45] \bound;
            \end{scope}
				\draw (p1) \circo;
				\draw (p2) \circo;
				\draw (p3) \circo;
				\path 
					(barycentric cs:p3=1,p1=-0.2,p2=-0.1) +(0ex,0) node {$I_{3|12}$}
					(barycentric cs:p1=1,p2=-0.2,p3=-0.1) +(0ex,0) node {$I_{1|23}$}
					(barycentric cs:p2=1,p3=-0.2,p1=-0.1) +(0ex,0) node {$I_{2|13}$}
					(barycentric cs:p3=1,p2=1,p1=-0.55) +(0ex,0) node {$I_{23|1}$}
					(barycentric cs:p1=1,p3=1,p2=-0.55) +(0ex,0) node {$I_{13|2}$}
					(barycentric cs:p2=1,p1=1,p3=-0.55) +(0ex,0) node {$I_{12|3}$}
					(barycentric cs:p3=1,p2=1,p1=1) node {$I_{123}$}
					;
				\path
					(p1) +(140:\labrad) node {$X_1$}
					(p2) +(-140:\labrad) node {$X_2$}
					(p3) +(-40:\labrad) node {$X_3$};
			\end{tikzpicture}
			&
			\parbox{0.5\linewidth}{
				\small
				\begin{align*}
					I_{1|23} &= H(X_1|X_2,X_3) \\
					I_{13|2} &= I(X_1;X_3|X_2) \\
					I_{1|23} + I_{13|2} &= H(X_1|X_2) \\
					I_{12|3} + I_{123} &= I(X_1;X_2) 
				\end{align*}
			}
		\end{tabular}
		\caption{
		Venn diagram visualisation of entropies and mutual informations
		for three random variables $X_1$, $X_2$ and $X_3$. The areas of 
		the three circles represent $H(X_1)$, $H(X_2)$ and $H(X_3)$ respectively.
		The total shaded area is the joint entropy $H(X_1,X_2,X_3)$.
		The central area $I_{123}$ is the co-information \cite{McGill1954}.
		Some other information measures are indicated in the legend.
		}
	\end{fig}
	[Adopting notation of recent Binding information paper.]
	\subsection{`Anatomy of a bit' measures}
	Entropy rates, redundancy, predictive information etc.
	Information diagrams.

 	\begin{fig}{predinfo-bg}
		\newcommand\subfig[2]{\shortstack{#2\\[0.75em]#1}}
		\newcommand\rad{1.8em}%
		\newcommand\ovoid[1]{%
			++(-#1,\rad) 
			-- ++(2 * #1,0em) arc (90:-90:\rad)
 			-- ++(-2 * #1,0em) arc (270:90:\rad) 
		}%
		\newcommand\axis{2.75em}%
		\newcommand\olap{0.85em}%
		\newcommand\offs{3.6em}
		\newcommand\colsep{\hspace{5em}}
		\newcommand\longblob{\ovoid{\axis}}
		\newcommand\shortblob{\ovoid{1.75em}}
		\begin{tabular}{c@{\colsep}c}
			\subfig{(a) excess entropy}{%
				\newcommand\blob{\longblob}
				\begin{tikzpicture}
					\coordinate (p1) at (-\offs,0em);
					\coordinate (p2) at (\offs,0em);
					\begin{scope}
						\clip (p1) \blob;
						\clip (p2) \blob;
						\fill[lightgray] (-1,-1) rectangle (1,1);
					\end{scope}
					\draw (p1) +(-0.5em,0em) node{\shortstack{infinite\\past}} \blob;
					\draw (p2) +(0.5em,0em) node{\shortstack{infinite\\future}} \blob;
					\path (0,0) node (future) {$E$};
					\path (p1) +(-2em,\rad) node [anchor=south] {$\ldots,X_{-1}$};
					\path (p2) +(2em,\rad) node [anchor=south] {$X_0,\ldots$};
				\end{tikzpicture}%
			}%
			\\[1.25em]
			\subfig{(b) predictive information rate $b_\mu$}{%
				\begin{tikzpicture}%[baseline=-1em]
					\newcommand\rc{2.1em}
					\newcommand\throw{2.5em}
					\coordinate (p1) at (210:1.5em);
					\coordinate (p2) at (90:0.7em);
					\coordinate (p3) at (-30:1.5em);
					\newcommand\bound{(-7em,-2.6em) rectangle (7em,3.0em)}
					\newcommand\present{(p2) circle (\rc)}
					\newcommand\thepast{(p1) ++(-\throw,0) \ovoid{\throw}}
					\newcommand\future{(p3) ++(\throw,0) \ovoid{\throw}}
					\newcommand\fillclipped[2]{%
						\begin{scope}[even odd rule]
							\foreach \thing in {#2} {\clip \thing;}
							\fill[black!#1] \bound;
						\end{scope}%
					}%
					\fillclipped{30}{\present,\future,\bound \thepast}
					\fillclipped{15}{\present,\bound \future,\bound \thepast}
					\draw \future;
					\fillclipped{45}{\present,\thepast}
					\draw \thepast;
					\draw \present;
					\node at (barycentric cs:p2=1,p1=-0.17,p3=-0.17) {$r_\mu$};
					\node at (barycentric cs:p1=-0.4,p2=1.0,p3=1) {$b_\mu$};
					\node at (barycentric cs:p3=0,p2=1,p1=1.2) [shape=rectangle,fill=black!45,inner sep=1pt]{$\rho_\mu$};
					\path (p2) +(140:3em) node {$X_0$};
	%            \node at (barycentric cs:p3=0,p2=1,p1=1) {$\rho_\mu$};
					\path (p3) +(3em,0em) node  {\shortstack{infinite\\future}};
					\path (p1) +(-3em,0em) node  {\shortstack{infinite\\past}};
					\path (p1) +(-4em,\rad) node [anchor=south] {$\ldots,X_{-1}$};
					\path (p3) +(4em,\rad) node [anchor=south] {$X_1,\ldots$};
				\end{tikzpicture}}%
				\\[0.5em]
		\end{tabular}
		\caption{
		Venn diagram representation of several information measures for
		stationary random processes. Each circle or oval represents a random
		variable or sequence of random variables relative to time $t=0$. Overlapped areas
		correspond to various mutual informations, as in \Figrf{venn-example}.
		In (b), the circle represents the `present'. Its total area is
		$H(X_0)=H(1)=\rho_\mu+r_\mu+b_\mu$, where $\rho_\mu$ is the multi-information
		rate, $r_\mu$ is the residual entropy rate, and $b_\mu$ is the predictive
		information rate. The entropy rate is $h_\mu = r_\mu+b_\mu$.
		}
	\end{fig}

	\paragraph{Predictive information rate}
	In previous work \cite{AbdallahPlumbley2009}, we introduced 
%	examined several
%	information-theoretic measures that could be used to characterise
%	not only random processes (\ie, an ensemble of possible sequences),
%	but also the dynamic progress of specific realisations of such processes.
%	One of these measures was 
%
	the \emph{predictive information rate}
	(PIR), which is the average information 
	in one observation about the infinite future given the infinite past.
	If $\past{X}_t=(\ldots,X_{t-2},X_{t-1})$ denotes the variables
	before time $t$, 
	and $\fut{X}_t = (X_{t+1},X_{t+2},\ldots)$ denotes
	those after $t$,
	the PIR at time $t$ is defined as a conditional mutual information: 
	\begin{equation}
		\label{eq:PIR}
		\IXZ_t \define I(X_t;\fut{X}_t|\past{X}_t) = H(\fut{X}_t|\past{X}_t) - H(\fut{X}_t|X_t,\past{X}_t).
	\end{equation}
%	(The underline/overline notation follows that of \cite[\S 3]{AbdallahPlumbley2009}.)
%	Hence, $\Ix_t$ quantifies the \emph{new}
%	information gained about the future from the observation at time $t$. 
	Equation \eqrf{PIR} can be read as the average reduction
	in uncertainty about the future on learning $X_t$, given the past. 
	Due to the symmetry of the mutual information, it can also be written
	as 
	\begin{equation}
%		\IXZ_t 
		I(X_t;\fut{X}_t|\past{X}_t) = H(X_t|\past{X}_t) - H(X_t|\fut{X}_t,\past{X}_t).
%		\label{<++>}
	\end{equation}
%	If $X$ is stationary, then 
	Now, in the shift-invariant case, $H(X_t|\past{X}_t)$ 
	is the familiar entropy rate $h_\mu$, but $H(X_t|\fut{X}_t,\past{X}_t)$, 
	the conditional entropy of one variable given \emph{all} the others 
	in the sequence, future as well as past, is what 
	we called the \emph{residual entropy rate} $r_\mu$ in \cite{AbdallahPlumbley2010},
	but which was previously identified by Verd{\'u} and Weissman \cite{VerduWeissman2006} as the
	\emph{erasure entropy rate}.
%	It is not expressible in terms of the block entropy function $H(\cdot)$.
	It can be defined as the limit
	\begin{equation}
		\label{eq:residual-entropy-rate}
		r_\mu \define \lim_{N\tends\infty} H(X_{\bet(-N,N)}) - H(X_{\bet(-N,-1)},X_{\bet(1,N)}).
	\end{equation}
	The second term, $H(X_{\bet(-N,-1)},X_{\bet(1,N)})$, 
	is the joint entropy of two non-adjacent blocks each of length $N$ with a 
	gap between them, 
	and cannot be expressed as a function of block entropies alone.
%	In order to associate it with the concept of \emph{binding information} which
%	we will define in \secrf{binding-info}, we 
	Thus, the shift-invariant PIR (which we will write as $b_\mu$) is the difference between 
	the entropy rate and the erasure entropy rate: $b_\mu = h_\mu - r_\mu$.
	These relationships are illustrated in \Figrf{predinfo-bg}, along with
	several of the information measures we have discussed so far.


	\subsection{First order Markov chains}
	These are the simplest non-trivial models to which information dynamics methods
	can be applied. In \cite{AbdallahPlumbley2009} we showed that the predictive information
	rate can be expressed simply in terms of the entropy rate of the Markov chain.
	If we let $a$ denote the transition matrix of the Markov chain, and $h_a$ its
	entropy rate, then its predictive information rate $b_a$ is
	\begin{equation}
		b_a = h_{a^2} - h_a,
	\end{equation}
	where $a^2 = aa$, the transition matrix squared, is the transition matrix
	of the `skip one' Markov chain obtained by leaving out every other observation.
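	As a concrete illustration (a minimal sketch, not part of the original work),
	the following computes $h_a$ and $b_a$ numerically for a given row-stochastic
	transition matrix; it assumes NumPy and an ergodic chain.
\begin{lstlisting}[language=Python]
import numpy as np

def stationary(a):
    # Stationary distribution of a row-stochastic matrix a, taken as
    # the dominant left eigenvector, normalised to sum to 1.
    vals, vecs = np.linalg.eig(a.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def entropy_rate(a):
    # h_a = -sum_i pi_i sum_j a_ij log2 a_ij  (bits per observation)
    pi = stationary(a)
    logs = np.log2(np.where(a > 0, a, 1.0))  # zero entries contribute 0
    return float(-pi @ (a * logs).sum(axis=1))

def pir(a):
    # b_a = h_{a^2} - h_a, where a @ a is the two-step transition matrix.
    return entropy_rate(a @ a) - entropy_rate(a)

a = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # a 'sticky' two-state chain
print(entropy_rate(a), pir(a))
\end{lstlisting}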

	\subsection{Higher order Markov chains}
	Second and higher order Markov chains can be treated in a similar way by transforming
	them to a first order representation. If we are dealing
	with an $N$th order model, this is done by forming a new alphabet of possible observations 
	consisting of all possible $N$-tuples of symbols from the base alphabet. An observation
	in this new model represents a block of $N$ observations from the base model. The next
	observation represents the block of $N$ obtained by shifting the previous block along
	by one step. The new Markov chain is parameterised by a sparse $K^N\times K^N$
	transition matrix $\hat{a}$, where $K$ is the size of the base alphabet. The
	predictive information rate is then
	\begin{equation}
		b_{\hat{a}} = h_{\hat{a}^{N+1}} - N h_{\hat{a}},
	\end{equation}
	where $\hat{a}^{N+1}$ is the $(N{+}1)$th power of the transition matrix.  
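	As an illustrative sketch (not the original implementation), the lifted matrix
	$\hat{a}$ can be built from the conditional distributions $p(y \mid \mathrm{context})$,
	here assumed to be supplied as a Python dictionary keyed by length-$N$ context
	tuples, and the formula above applied directly; the entropy-rate helper is
	repeated for completeness and assumes an ergodic lifted chain.
\begin{lstlisting}[language=Python]
import numpy as np
from itertools import product

def entropy_rate(a):
    # Entropy rate (bits) of a row-stochastic matrix a,
    # as in the first-order sketch above.
    vals, vecs = np.linalg.eig(a.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi = pi / pi.sum()
    logs = np.log2(np.where(a > 0, a, 1.0))
    return float(-pi @ (a * logs).sum(axis=1))

def lift(p, K, N):
    # First-order representation of an order-N chain over K symbols:
    # states are length-N blocks; block (x1,...,xN) moves to
    # (x2,...,xN,y) with probability p[(x1,...,xN)][y].
    blocks = list(product(range(K), repeat=N))
    index = {b: i for i, b in enumerate(blocks)}
    a_hat = np.zeros((K**N, K**N))
    for b in blocks:
        for y in range(K):
            a_hat[index[b], index[b[1:] + (y,)]] = p[b][y]
    return a_hat

def pir_high_order(p, K, N):
    # b = h(a_hat^(N+1)) - N * h(a_hat), as in the equation above.
    a_hat = lift(p, K, N)
    return (entropy_rate(np.linalg.matrix_power(a_hat, N + 1))
            - N * entropy_rate(a_hat))
\end{lstlisting}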
		


\section{Information Dynamics in Analysis}

 	\subsection{Musicological Analysis}
	Refer to the work on the analysis of minimalist pieces.
	
	\subsection{Content analysis/Sound Categorisation}
	Information dynamics can be used to segment music; such segmentations can then support search over large data sets and the determination of musical structure for the purposes of playlist navigation and search. 
	\emph{Peter}

\subsection{Beat Tracking}
 \emph{Andrew}  


\section{Information Dynamics as Design Tool}

In addition to applying information dynamics to analysis, it is also possible to use this approach in design, such as the composition of musical materials.
By providing a framework for linking information-theoretic measures to the control of generative processes, it becomes possible to steer the output of these processes to match criteria defined by those measures. 
For instance, the outputs of a stochastic musical process could be filtered to match constraints defined by a set of information-theoretic measures, as in the sketch below.  
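
One very simple realisation of such filtering (a hypothetical sketch, not the system described below) is rejection filtering: generate candidate note sequences and keep only those whose estimated entropy rate falls within a target band.
\begin{lstlisting}[language=Python]
import numpy as np

def empirical_entropy_rate(seq, K, pseudo=0.1):
    # Rough entropy-rate estimate (bits per note) from smoothed
    # bigram counts of a symbol sequence over an alphabet of size K.
    counts = np.full((K, K), pseudo)
    for x, y in zip(seq[:-1], seq[1:]):
        counts[x, y] += 1
    a = counts / counts.sum(axis=1, keepdims=True)
    pi = counts.sum(axis=1) / counts.sum()
    return float(-pi @ (a * np.log2(a)).sum(axis=1))

def filtered(generate, K, lo=1.0, hi=2.5, n=10):
    # Keep n sequences whose estimated entropy rate lies in [lo, hi].
    kept = []
    while len(kept) < n:
        seq = generate()
        if lo <= empirical_entropy_rate(seq, K) <= hi:
            kept.append(seq)
    return kept

def toy_melody(rng, K=12, length=256):
    # Toy generator: repeat the previous note with a per-sequence
    # probability, otherwise jump to a uniformly random note.
    stay = rng.uniform()
    seq = [int(rng.integers(K))]
    for _ in range(length - 1):
        seq.append(seq[-1] if rng.uniform() < stay
                   else int(rng.integers(K)))
    return seq

rng = np.random.default_rng(1)
melodies = filtered(lambda: toy_melody(rng), K=12)
\end{lstlisting}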

The use of stochastic processes for the generation of musical material has been widespread for decades -- Iannis Xenakis, for instance, applied probabilistic mathematical models to the creation of musical materials, including in the formulation of a theory of Markovian Stochastic Music.
However, information dynamics measures allow us to explore and interface with such processes at the higher, more abstract level of expectation, randomness and predictability. 
The Melody Triangle is such a system.

 \subsection{The Melody Triangle}
 The Melody Triangle is an exploratory interface for the discovery of melodic content, where the input -- a position within a triangle -- maps directly to information-theoretic measures associated with the output.
 The measures are the entropy rate, redundancy and predictive information rate of the random process used to generate the sequence of notes. 
These are all related to the predictability of the sequence and as such address the notions of expectation and surprise in the perception of music.\emph{self-plagiarised}
 	
Before the Melody Triangle can be used, it has to be `populated' with possible parameter values for the melody generators.
These are then plotted in a 3-D statistical space of redundancy, entropy rate and predictive information rate. 
In our case we generated thousands of transition matrices, representing first-order Markov chains, by a random sampling method. In figure \ref{InfoDynEngine} we see how these matrices are distributed in the 3-D statistical space; each point corresponds to a transition matrix.\emph{self-plagiarised}
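
A minimal sketch of this population step is given below (illustrative only: Dirichlet-sampled rows stand in for the random sampling method, which is not specified here, and redundancy is taken to be the difference between the marginal entropy of the stationary distribution and the entropy rate).
\begin{lstlisting}[language=Python]
import numpy as np

def stationary(a):
    # Dominant left eigenvector of a row-stochastic matrix a.
    vals, vecs = np.linalg.eig(a.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

def entropy_rate(a):
    # h = -sum_i pi_i sum_j a_ij log2 a_ij  (bits per note).
    pi = stationary(a)
    logs = np.log2(np.where(a > 0, a, 1.0))
    return float(-pi @ (a * logs).sum(axis=1))

rng = np.random.default_rng(0)
K = 8                    # alphabet size (one choice; not fixed by the text)
population = []
for _ in range(5000):
    a = rng.dirichlet(np.full(K, 0.1), size=K)  # sparse-ish random rows
    pi, h = stationary(a), entropy_rate(a)
    rho = float(-pi @ np.log2(np.where(pi > 0, pi, 1.0))) - h  # redundancy
    b = entropy_rate(a @ a) - h                 # predictive information rate
    population.append((rho, h, b, a))
\end{lstlisting}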

\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{MatrixDistribution.png}
\caption{The population of transition matrices distributed along three axes of redundancy, entropy rate and predictive information rate.  Note how the distribution forms a curved triangle-like plane floating in 3-D space.  \label{InfoDynEngine}}
\end{figure}
 
	
When we look at the distribution of transition matrices plotted in this space, we see that it forms an arch shape that is fairly thin. 
It thus becomes a reasonable approximation to treat it as a sheet in two dimensions, and so we stretch out this curved arc into a flat triangle. 
It is this triangular sheet that is our `Melody Triangle' and forms the interface by which the system is controlled. \emph{self-plagiarised}
	
When the Melody Triangle is used, whether as a screen-based system or as an interactive installation, it involves a mapping to this statistical space. 
When the user, through the interface, selects a position within the triangle, the corresponding transition matrix is returned; a sketch of this lookup is given below. 
Figure \ref{TheTriangle} shows how the triangle maps to different measures of redundancy, entropy rate and predictive information rate.\emph{self-plagiarised}
 \begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{TheTriangle.pdf}
\caption{The Melody Triangle\label{TheTriangle}}
\end{figure}	
The corners correspond to three different extremes of predictability and unpredictability, which could be loosely characterised as `periodicity', `noise' and `repetition'. 
Melodies from the `noise' corner have no discernible pattern; they have high entropy rate, low predictive information rate and low redundancy. 
These melodies are essentially totally random. 
Melodies along the `periodicity' to `repetition' edge are deterministic loops that get shorter as we approach the `repetition' corner, until each becomes just one repeating note. 
It is the areas in between the extremes that provide the more `interesting' melodies: 
those that have some level of unpredictability, but are not completely random, or, conversely, that are predictable, but not entirely so. 
This triangular space allows for an intuitive exploration of expectation and surprise in temporal sequences based on a simple model of how one might guess the next event given the previous one.\emph{self-plagiarised}
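
The position-to-matrix lookup mentioned above can be sketched as follows (purely illustrative: the actual flattening of the curved sheet is not reproduced here, so the two leading principal components of the sampled measures stand in for the triangle coordinates).
\begin{lstlisting}[language=Python]
import numpy as np

def flatten(measures):
    # measures: (M, 3) array of (redundancy, entropy rate, PIR) values,
    # e.g. from the population sketch above. Projects onto the two
    # leading principal components as stand-in 2-D triangle coordinates.
    c = measures - measures.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    return c @ vt[:2].T

def matrix_at(position, coords, matrices):
    # Return the sampled transition matrix whose 2-D coordinates lie
    # nearest to the selected position in the triangle.
    d = np.sum((coords - np.asarray(position)) ** 2, axis=1)
    return matrices[int(np.argmin(d))]
\end{lstlisting}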
	
	
	
Any number of interfaces could be developed for the Melody Triangle.  
We have developed two: a standard screen-based interface where a user moves tokens with a mouse in and around a triangle on screen, and a multi-user interactive installation in which a Kinect camera tracks individuals in a space and maps their positions to positions in the triangle. 
 Each visitor generates a melody, and can collaborate with their co-visitors to generate musical textures -- a playful yet informative way to explore expectation and surprise in music.  
	
As a screen-based interface, the Melody Triangle can serve as a composition tool.  
A triangle is drawn on the screen, with screen space thus mapped to the statistical space of the Melody Triangle.
 A number of round tokens, each representing a melody, can be dragged in and around the triangle. 
 When a token is dragged into the triangle, the system starts generating a sequence of notes with statistical properties that correspond to its position in the triangle.\emph{self-plagiarised}  
	
In this mode, the Melody Triangle can be used as a kind of composition assistant for the generation of interesting musical textures and melodies. 
However, unlike other computer-aided composition tools or programming environments, here the composer engages with music at the higher, more abstract level of expectation, randomness and predictability.\emph{self-plagiarised}
	  
	
Additionally, the Melody Triangle serves as an effective tool for experimental investigations into musical preference and its relationship to information dynamics measures.

	%As the Melody Triangle essentially operates on a stream of symbols, it it is possible to apply the melody triangle to the design of non-sonic content.
	
\section{Musical Preference and Information Dynamics}
We carried out a preliminary study that sought to identify any correlation between aesthetic preference and the information-theoretic measures of the Melody Triangle. 
In this study, participants were asked to use the screen-based interface, simplified so that all they could do was move tokens around. 
To help discount visual biases, the axes of the triangle were randomly rearranged for each participant.\emph{self-plagiarised}

The study was divided into two parts; the first investigated musical preference with respect to single melodies at different tempos. 
In the second part of the study, a background melody was played and participants were asked to find a second melody that `works well' with it. 
For each participant this was done four times, each with a different background melody taken from a different area of the Melody Triangle. 
In both parts of the study, participants were asked to `mark', by pressing the space bar, whenever they liked what they were hearing.\emph{self-plagiarised}   

\emph{todo - results}

\section{Information Dynamics as Evaluative Feedback Mechanism}

\emph{todo - code the info dyn evaluator :) }
	
It is possible to use information dynamics measures to develop a kind of `critic' that evaluates a stream of symbols. 
For instance, we could develop a system that notifies us if a stream of symbols is becoming boring, either because it is too repetitive or too chaotic.  
This could be used to evaluate pre-composed streams of symbols, or even to provide real-time feedback in an improvisatory setting.  
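
As a sketch of how such a critic might look (a hypothetical design, not an implemented system), one could maintain an adaptive first-order model of the incoming symbols and flag the stream whenever its smoothed surprisal stays too low (too repetitive) or too high (too chaotic):
\begin{lstlisting}[language=Python]
import numpy as np

class BoredomCritic:
    # Adaptive first-order model of a symbol stream; reports 'boredom'
    # when the smoothed per-symbol surprisal leaves a target band.
    def __init__(self, K, low=0.5, high=3.0, smoothing=0.95):
        self.counts = np.ones((K, K))   # add-one smoothed bigram counts
        self.prev, self.avg = None, None
        self.low, self.high, self.smoothing = low, high, smoothing

    def observe(self, x):
        if self.prev is not None:
            p = self.counts[self.prev] / self.counts[self.prev].sum()
            surprisal = -np.log2(p[x])  # information in this observation
            self.avg = (surprisal if self.avg is None else
                        self.smoothing * self.avg
                        + (1 - self.smoothing) * surprisal)
            self.counts[self.prev, x] += 1
        self.prev = x

    def bored(self):
        # Too repetitive (below low) or too chaotic (above high).
        return self.avg is not None and not (self.low < self.avg < self.high)
\end{lstlisting}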

\emph{comparable system}  Gordon Pask's Musicolour (1953) applied a similar notion of boredom in its design.
The Musicolour would react to audio input through a microphone by flashing coloured lights.  
Rather than a direct mapping of sound to light, Pask designed the device to be a partner to a performing musician.  
It would adapt its lighting pattern based on the rhythms and frequencies it would hear, quickly `learning' to flash in time with the music.  
However, Pask endowed the device with the ability to `be bored'; if the rhythmic and frequency content of the input remained the same for too long, it would listen for other rhythms and frequencies, only lighting when it heard these.  
As the Musicolour would `get bored', the musician would have to change and vary their playing, eliciting new and unexpected outputs in trying to keep the Musicolour interested.
	
In a similar vein, our \emph{Information Dynamics Critic} (name?) provides an evaluative measure of an input stream, but with a more sophisticated notion of boredom that \dots
	

   

\section{Conclusion}

\bibliographystyle{unsrt}
{\bibliography{all,c4dm,nime}}
\end{document}