%DEMOLGD1 Demonstrate simple MLP optimisation with on-line gradient descent
%
% Description
% The problem consists of one input variable X and one target variable
% T with data generated by sampling X at equal intervals and then
% generating target data by computing SIN(2*PI*X) and adding Gaussian
% noise. A 2-layer network with linear outputs is trained by minimising
% a sum-of-squares error function using on-line gradient descent.
%
% See also
% DEMMLP1, OLGD
%

% Copyright (c) Ian T Nabney (1996-2001)


% Generate the matrix of inputs x and targets t.

ndata = 20; % Number of data points.
noise = 0.2; % Standard deviation of noise distribution.
x = [0:1/(ndata - 1):1]';
randn('state', 42);
rand('state', 42);
t = sin(2*pi*x) + noise*randn(ndata, 1);
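% Note: randn('state', ...) and rand('state', ...) are the legacy seeding
% calls from the MATLAB releases NETLAB targets. On current releases a
% single rng(42) call gives reproducibility instead (though with a
% different generator, so the exact noise values would differ):
% rng(42);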

clc
disp('This demonstration illustrates the use of the on-line gradient')
disp('descent algorithm to train a Multi-Layer Perceptron network for')
disp('regression problems. It is intended to illustrate the drawbacks')
disp('of this algorithm compared to more powerful non-linear optimisation')
disp('algorithms, such as conjugate gradients.')
disp(' ')
disp('First we generate the data from a noisy sine function and construct')
disp('the network.')
disp(' ')
disp('Press any key to continue.')
pause
% Set up network parameters.
nin = 1; % Number of inputs.
nhidden = 3; % Number of hidden units.
nout = 1; % Number of outputs.
alpha = 0.01; % Coefficient of weight-decay prior (set here but never
              % passed to MLP below, so no weight decay is applied).

% Create and initialise network weight vector.
net = mlp(nin, nhidden, nout, 'linear');
% Initialise weights reasonably close to 0
net = mlpinit(net, 10);
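% mlpinit samples the initial weights from a zero-mean Gaussian whose
% inverse variance is given by its second argument, so the value 10 used
% above corresponds to a standard deviation of 1/sqrt(10), roughly 0.32,
% which keeps the initial network mapping close to linear.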

% Set up vector of options for the optimiser.
options = foptions;
options(1) = 1; % This provides display of error values.
options(14) = 20; % Number of training cycles.
options(18) = 0.1; % Learning rate
options(17) = 0.4; % Momentum
options(5) = 1; % Do randomise pattern order
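% For reference, each on-line update in OLGD takes the standard
% gradient-descent-with-momentum form, with eta = options(18),
% mu = options(17), and E_n the error on the current pattern:
%
%   dw = -eta*gradient(E_n) + mu*dw_previous;
%   w  = w + dw;
%
% so one training cycle applies ndata such updates, one per pattern.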
clc
disp('Then we set the options for the training algorithm.')
disp(['In the first phase of training, which lasts for ',...
    num2str(options(14)), ' cycles,'])
disp(['the learning rate is ', num2str(options(18)), ...
    ' and the momentum is ', num2str(options(17)), '.'])
disp('The error values are displayed at the end of each pass through the')
disp('entire pattern set.')
disp(' ')
disp('Press any key to continue.')
pause

% Train using on-line gradient descent
[net, options] = olgd(net, options, x, t);

% Now allow the learning rate to decay, keeping the momentum term
options(2) = 0; % Termination tolerance on weight changes.
options(3) = 0; % Termination tolerance on error changes.
options(17) = 0.4; % Momentum (unchanged from the first phase)
options(5) = 1; % Randomise pattern order
options(6) = 1; % Set learning rate decay on
options(14) = 200; % Maximum number of training cycles.
options(18) = 0.1; % Initial learning rate
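% With options(6) set, OLGD anneals the learning rate towards zero as
% roughly eta(t) = eta_0/t over the run (the '1/t' decay mentioned in
% the message below), the classic schedule for stochastic gradient
% methods.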

disp(['In the second phase of training, which lasts for up to ',...
    num2str(options(14)), ' cycles,'])
disp(['the learning rate starts at ', num2str(options(18)), ...
    ', decaying at 1/t, and the momentum is ', num2str(options(17)), '.'])
disp(' ')
disp('Press any key to continue.')
pause
[net, options] = olgd(net, options, x, t);
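% Under the standard foptions convention the returned vector reports the
% final error value in options(8); uncomment to inspect it:
% fprintf('Final sum-of-squares error: %g\n', options(8));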

clc
disp('Now we plot the data, underlying function, and network outputs')
disp('on a single graph to compare the results.')
disp(' ')
disp('Press any key to continue.')
pause

% Plot the data, the original function, and the trained network function.
plotvals = [0:0.01:1]';
y = mlpfwd(net, plotvals);
fh1 = figure;
plot(x, t, 'ob')
hold on
axis([0 1 -1.5 1.5])
fplot('sin(2*pi*x)', [0 1], '--g')
plot(plotvals, y, '-r')
legend('data', 'function', 'network');
hold off
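% As a quick numerical check (not part of the original demo), the RMS
% deviation of the network from the true function on the plotting grid
% could be computed like this; uncomment to print it:
% rmsdev = sqrt(mean((y - sin(2*pi*plotvals)).^2));
% fprintf('RMS deviation from sin(2*pi*x): %g\n', rmsdev);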

disp('Note the very poor fit to the data: this should be compared with')
disp('the results obtained in demmlp1.')
disp(' ')
disp('Press any key to exit.')
pause
close(fh1);
clear all;