gpfwd

Forward propagation through Gaussian Process.
y = gpfwd(net, x)
[y, sigsq] = gpfwd(net, x)
[y, sigsq] = gpfwd(net, x, cninv)
y = gpfwd(net, x) takes a Gaussian Process data structure net together with a matrix x of input vectors, and forward propagates the inputs through the model to generate a matrix y of output vectors. Each row of x corresponds to one input vector and each row of y corresponds to one output vector. This assumes that the training data (both inputs and targets) has been stored in net by a call to gpinit; these are needed to compute the training data covariance matrix.
[y, sigsq] = gpfwd(net, x) also generates a column vector sigsq of conditional variances (or squared error bars), where each value corresponds to a pattern.
[y, sigsq] = gpfwd(net, x, cninv) uses the pre-computed inverse covariance matrix cninv in the forward propagation. This increases efficiency if several calls to gpfwd are made.
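The quantities returned by gpfwd follow the standard Gaussian process prediction equations: the mean is k*' * Cn^-1 * t and the conditional variance is k** - k*' * Cn^-1 * k*, where Cn is the training-data covariance matrix. The NumPy sketch below is illustrative only (function names such as gp_predict and sqexp_kernel, and the kernel parameters, are assumptions, not Netlab's internals); it shows why reusing the pre-computed inverse covariance, as the cninv argument allows, avoids repeating the expensive matrix inversion.

```python
import numpy as np

def sqexp_kernel(xa, xb, bias=1e-2, vert=1.0, inv_width=1.0):
    """Squared-exponential covariance between rows of xa and xb.
    Parameter names are illustrative, not Netlab's."""
    d2 = ((xa[:, None, :] - xb[None, :, :]) ** 2).sum(-1)
    return vert * np.exp(-0.5 * inv_width * d2) + bias

def gp_predict(x_train, t_train, x_test, noise=1e-2, cninv=None):
    """Predictive mean and conditional variance; reuses cninv if supplied."""
    if cninv is None:
        # Training-data covariance (kernel plus noise on the diagonal),
        # inverted once: this is the step a pre-computed cninv skips.
        cn = sqexp_kernel(x_train, x_train) + noise * np.eye(len(x_train))
        cninv = np.linalg.inv(cn)
    k_star = sqexp_kernel(x_test, x_train)              # test/train covariances
    k_ss = sqexp_kernel(x_test, x_test).diagonal() + noise
    y = k_star @ cninv @ t_train                        # predictive mean
    # Diagonal of k* Cn^-1 k*', i.e. one conditional variance per pattern.
    sigsq = k_ss - np.einsum('ij,jk,ik->i', k_star, cninv, k_star)
    return y, sigsq, cninv

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)[:, None]
t = np.sin(2 * np.pi * x[:, 0]) + 0.1 * rng.standard_normal(20)
xtest = np.linspace(0, 1, 50)[:, None]

# First call computes the inverse; the second reuses it, like passing cninv.
y, sigsq, cninv = gp_predict(x, t, xtest)
y2, sigsq2, _ = gp_predict(x, t, xtest, cninv=cninv)
```

Reusing the inverse gives identical predictions while doing only O(n^2) work per call instead of the O(n^3) inversion.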
For example, the following code creates a GP with a squared-exponential covariance function, trains it, and plots the predictions on a test set together with one-standard-deviation error bars:

net = gp(1, 'sqexp');
net = gpinit(net, x, t);
net = netopt(net, options, x, t, 'scg');
[pred, sigsq] = gpfwd(net, xtest);
plot(xtest, pred, '-k');
hold on
plot(xtest, pred+sqrt(sigsq), '-b', xtest, pred-sqrt(sigsq), '-b');
See also: gp, demgp, gpinit

Copyright (c) Ian T Nabney (1996-9)