<html>
<head>
<title>
Netlab Reference Manual somtrain
</title>
</head>
<body>
<H1> somtrain
</H1>
<h2>
Purpose
</h2>
Kohonen training algorithm for SOM.

<p><h2>
Synopsis
</h2>
<PRE>
net = somtrain(net, options, x)
</PRE>

<p><h2>
Description
</h2>
<CODE>net = somtrain(net, options, x)</CODE> uses Kohonen's algorithm to
train a SOM. Both on-line and batch algorithms are implemented.
The learning rate (for the on-line algorithm) and the neighbourhood size
both decay linearly during training. No error function is minimised, so
there is no termination criterion other than the number of epochs, but the
sum-of-squares error is computed and returned in <CODE>options(8)</CODE>.
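<p>For instance, a minimal sketch of an on-line training run (assuming a
data matrix <CODE>x</CODE> with one pattern per row) might look like this:
<PRE>

nin = size(x, 2);        % input dimension
net = som(nin, [8, 7]);  % create an 8x7 map
options = foptions;      % default options vector
options(1) = 1;          % display error, alpha and nsize during training
options(14) = 20;        % 20 passes through the pattern set
net = somtrain(net, options, x);
</PRE>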
<p>The optional parameters have the following interpretations.

<p>If <CODE>options(1)</CODE> is set to 1, the error value is displayed
during training, together with the learning rate <CODE>alpha</CODE> and the
neighbourhood size <CODE>nsize</CODE>. Otherwise nothing is displayed.
<p><CODE>options(5)</CODE> determines whether the patterns are sampled
randomly with replacement. If it is 1, patterns are sampled randomly; if it
is 0 (the default), they are presented in order. This is only relevant to
the on-line algorithm.
<p><CODE>options(6)</CODE> determines whether the on-line or batch algorithm
is used: 1 selects the batch algorithm, while 0 (the default) selects the
on-line algorithm. A sketch of both choices follows.
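<p>A brief sketch, reusing <CODE>net</CODE>, <CODE>options</CODE> and
<CODE>x</CODE> from the sketch above:
<PRE>

options(6) = 1;          % batch algorithm (learning rate is not used)
net = somtrain(net, options, x);

options(6) = 0;          % on-line algorithm
options(5) = 1;          % sample patterns randomly with replacement
net = somtrain(net, options, x);
</PRE>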
<p><CODE>options(14)</CODE> is the maximum number of iterations (passes through
the complete pattern set); default 100.

<p><CODE>options(15)</CODE> is the final neighbourhood size; default value is the
same as the initial neighbourhood size.

<p><CODE>options(16)</CODE> is the final learning rate; default value is the same
as the initial learning rate.

<p><CODE>options(17)</CODE> is the initial neighbourhood size; default 0.5*maximum
map size.

<p><CODE>options(18)</CODE> is the initial learning rate; default 0.9. This parameter
must be positive.
<p><h2>
Examples
</h2>
The following example performs on-line training on a SOM in two stages:
ordering and convergence.
<PRE>

net = som(nin, [8, 7]);
options = foptions;

% Ordering phase
options(1) = 1;
options(14) = 50;
options(18) = 0.9;  % Initial learning rate
options(16) = 0.05; % Final learning rate
options(17) = 8;    % Initial neighbourhood size
options(15) = 1;    % Final neighbourhood size
net2 = somtrain(net, options, x);

% Convergence phase
options(14) = 400;
options(18) = 0.05; % Initial learning rate
options(16) = 0.01; % Final learning rate
options(17) = 0;    % Initial neighbourhood size
options(15) = 0;    % Final neighbourhood size
net3 = somtrain(net2, options, x);
</PRE>
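<p>After training, the map can be applied to new data with
<CODE><a href="somfwd.htm">somfwd</a></CODE>. As a sketch (assuming the
usual Netlab convention that <CODE>somfwd</CODE> returns the squared
distances to each node and the index of the winning node for each pattern):
<PRE>

[d2, win_nodes] = somfwd(net3, x); % forward propagation through the map
hist(win_nodes, 8*7);              % occupancy of the 56 nodes of the 8x7 map
</PRE>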
<p><h2>
See Also
</h2>
<CODE><a href="kmeans.htm">kmeans</a></CODE>, <CODE><a href="som.htm">som</a></CODE>, <CODE><a href="somfwd.htm">somfwd</a></CODE><hr>
<b>Pages:</b>
<a href="index.htm">Index</a>
<hr>
<p>Copyright (c) Ian T Nabney (1996-9)

</body>
</html>