<html>
<head>
<title>
Netlab Reference Manual gpfwd
</title>
</head>
<body>
<H1> gpfwd
</H1>
<h2>
Purpose
</h2>
Forward propagation through a Gaussian Process.

<p><h2>
Synopsis
</h2>
<PRE>
y = gpfwd(net, x)
[y, sigsq] = gpfwd(net, x)
[y, sigsq] = gpfwd(net, x, cninv)
</PRE>


<p><h2>
Description
</h2>
<CODE>y = gpfwd(net, x)</CODE> takes a Gaussian Process data structure <CODE>net</CODE>
together with a matrix <CODE>x</CODE> of input vectors, and forward propagates the
inputs through the model to generate a matrix <CODE>y</CODE> of output vectors. Each
row of <CODE>x</CODE> corresponds to one input vector and each row of <CODE>y</CODE>
corresponds to one output vector. This assumes that the training data (both inputs
and targets) has been stored in <CODE>net</CODE> by a call to <CODE>gpinit</CODE>;
these are needed to compute the training data covariance matrix.

<p><CODE>[y, sigsq] = gpfwd(net, x)</CODE> also returns a column vector <CODE>sigsq</CODE> of
conditional variances (squared error bars), with one value for each input pattern in <CODE>x</CODE>.

<p><CODE>[y, sigsq] = gpfwd(net, x, cninv)</CODE> uses the pre-computed inverse covariance
matrix <CODE>cninv</CODE> in the forward propagation. This increases efficiency if
several calls to <CODE>gpfwd</CODE> are made.

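<p>As an illustration (not part of the original manual page), the inverse training
covariance can be computed once and passed to every subsequent call. This sketch
assumes the standard Netlab behaviour that <CODE>gpinit</CODE> stores the training
inputs in the <CODE>net.tr_in</CODE> field and that <CODE>gpcovar</CODE> returns the
training data covariance matrix; <CODE>xtest1</CODE> and <CODE>xtest2</CODE> are
hypothetical test sets:
<PRE>

% Sketch: reuse one inverse covariance matrix across several gpfwd calls
cn = gpcovar(net, net.tr_in);    % covariance of the stored training inputs
cninv = inv(cn);                 % invert it once
[y1, sigsq1] = gpfwd(net, xtest1, cninv);   % reuse cninv for each test set
[y2, sigsq2] = gpfwd(net, xtest2, cninv);
</PRE>
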
<p><h2>
Example
</h2>
The following code creates a Gaussian Process, trains it, and then plots the
predictions on a test set with one standard deviation error bars:
<PRE>

% x, t: training inputs and targets; xtest: test inputs; options: Netlab options vector
net = gp(1, 'sqexp');                     % GP with 1-d inputs and squared exponential covariance
net = gpinit(net, x, t);                  % store the training data in net
net = netopt(net, options, x, t, 'scg');  % optimise hyperparameters with scaled conjugate gradients
[pred, sigsq] = gpfwd(net, xtest);        % predictions and conditional variances
plot(xtest, pred, '-k');
hold on
plot(xtest, pred+sqrt(sigsq), '-b', xtest, pred-sqrt(sigsq), '-b');
</PRE>


<p><h2>
See Also
</h2>
<CODE><a href="gp.htm">gp</a></CODE>, <CODE><a href="demgp.htm">demgp</a></CODE>, <CODE><a href="gpinit.htm">gpinit</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>