<html>
<head>
<title>
Netlab Reference Manual rbffwd
</title>
</head>
<body>
<H1> rbffwd
</H1>
<h2>
Purpose
</h2>
Forward propagation through RBF network with linear outputs.

<p><h2>
Synopsis
</h2>
<PRE>
a = rbffwd(net, x)
[a, z, n2] = rbffwd(net, x)
</PRE>


<p><h2>
Description
</h2>
<CODE>a = rbffwd(net, x)</CODE> takes a network data structure
<CODE>net</CODE> and a matrix <CODE>x</CODE> of input
vectors and forward propagates the inputs through the network to generate
a matrix <CODE>a</CODE> of output vectors. Each row of <CODE>x</CODE> corresponds to one
input vector and each row of <CODE>a</CODE> contains the corresponding output vector.
The activation function that is used is determined by <CODE>net.actfn</CODE>.

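<p>For illustration, a minimal sketch of a forward pass (the network size and the
<CODE>'gaussian'</CODE> basis function here are assumptions chosen for the example,
not requirements of <CODE>rbffwd</CODE>):
<PRE>

% Create a small RBF network and forward propagate some random inputs.
% 2 inputs, 5 hidden units, 1 linear output; 'gaussian' basis functions.
net = rbf(2, 5, 1, 'gaussian');
x = randn(10, 2);        % 10 input vectors, one per row
a = rbffwd(net, x);      % a is 10 x 1: one output row per input row
</PRE>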
<p><CODE>[a, z, n2] = rbffwd(net, x)</CODE> also generates a matrix <CODE>z</CODE> of
the hidden unit activations where each row corresponds to one pattern.
These hidden unit activations represent the <CODE>design matrix</CODE> for
the RBF. The matrix <CODE>n2</CODE> contains the squared distances between each
basis function centre and each pattern, where each row corresponds
to a data point.

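<p>A short sketch of the extra return values, continuing the network from the
sketch above (the shapes follow from the description of <CODE>z</CODE> and <CODE>n2</CODE>):
<PRE>

[a, z, n2] = rbffwd(net, x);
size(z)     % ndata x nhidden: design matrix of hidden unit activations
size(n2)    % ndata x nhidden: squared distance from each pattern to each centre
</PRE>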
<p><h2>
Examples
</h2>
<PRE>

[a, z] = rbffwd(net, x);

nd = size(z);                              % nd(2) is the number of hidden units
temp = pinv([z ones(size(x, 1), 1)]) * t;
net.w2 = temp(1:nd(2), :);
net.b2 = temp(nd(2) + 1, :);
</PRE>

Here <CODE>x</CODE> is the input data, <CODE>t</CODE> contains the target values, and we use the
pseudo-inverse to find the output weights and biases.
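<p>As a rough check of the resulting fit (a sketch assuming the <CODE>x</CODE>, <CODE>t</CODE>
and updated <CODE>net</CODE> from the example above):
<PRE>

a = rbffwd(net, x);             % outputs with the new output weights and biases
sse = sum(sum((a - t).^2));     % sum-of-squares error of the least-squares fit
</PRE>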

<p><h2>
See Also
</h2>
<CODE><a href="rbf.htm">rbf</a></CODE>, <CODE><a href="rbferr.htm">rbferr</a></CODE>, <CODE><a href="rbfgrad.htm">rbfgrad</a></CODE>, <CODE><a href="rbfpak.htm">rbfpak</a></CODE>, <CODE><a href="rbftrain.htm">rbftrain</a></CODE>, <CODE><a href="rbfunpak.htm">rbfunpak</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)


</body>
</html>