gpfwd

Synopsis

    y = gpfwd(net, x)
    [y, sigsq] = gpfwd(net, x)
    [y, sigsq] = gpfwd(net, x, cninv)
Description

y = gpfwd(net, x) takes a Gaussian Process data structure net together with a matrix x of input vectors, and forward propagates the inputs through the model to generate a matrix y of output vectors. Each row of x corresponds to one input vector and each row of y corresponds to one output vector. This assumes that the training data (both inputs and targets) has been stored in net by a call to gpinit; these are needed to compute the training data covariance matrix.
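As a minimal sketch of the basic call (assuming hypothetical training data x and t, and test inputs xtest, each with one pattern per row):

    % Hypothetical data: x is ntrain-by-nin, t is ntrain-by-1, xtest is ntest-by-nin.
    nin = size(x, 2);
    net = gp(nin, 'sqexp');    % GP with squared exponential covariance function
    net = gpinit(net, x, t);   % store training data in net (required by gpfwd)
    y = gpfwd(net, xtest);     % y has one output row per row of xtest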
[y, sigsq] = gpfwd(net, x) also generates a column vector sigsq of conditional variances (or squared error bars), one for each input pattern.
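For example (a sketch, continuing with the hypothetical net and xtest above):

    [y, sigsq] = gpfwd(net, xtest);  % sigsq(i) is the conditional variance at xtest(i,:)
    errbar = sqrt(sigsq);            % standard-deviation error bars on the predictions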
[y, sigsq] = gpfwd(net, x, cninv) uses the pre-computed inverse covariance matrix cninv in the forward propagation. This increases efficiency if several calls to gpfwd are made.
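A sketch of this usage (assuming, as one way of forming cninv, that the covariance of the training inputs x passed to gpinit is computed with gpcovar and inverted once; xtest1 and xtest2 are hypothetical test sets):

    cninv = inv(gpcovar(net, x));              % invert the training covariance once
    [y1, sigsq1] = gpfwd(net, xtest1, cninv);  % reuse cninv across several calls
    [y2, sigsq2] = gpfwd(net, xtest2, cninv);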
Example

    net = gp(1, 'sqexp');
    net = gpinit(net, x, t);
    net = netopt(net, options, x, t, 'scg');
    [pred, sigsq] = gpfwd(net, xtest);
    plot(xtest, pred, '-k');
    hold on
    plot(xtest, pred+sqrt(sigsq), '-b', xtest, pred-sqrt(sigsq), '-b');
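Here the GP is created, initialised with training data, and trained with scaled conjugate gradients; the plot then shows the predicted mean in black, bracketed in blue by one-standard-deviation error bars (the square roots of the conditional variances in sigsq). The variables x, t, xtest and options are assumed to be defined beforehand.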
See Also

gp, demgp, gpinit

Copyright (c) Ian T Nabney (1996-9)