%DEMEV2 Demonstrate Bayesian classification for the MLP.
%
% Description
% A synthetic two-class two-dimensional dataset X is sampled from a
% mixture of four Gaussians. Each class is associated with two of the
% Gaussians so that the optimal decision boundary is non-linear. A 2-
% layer network with logistic outputs is trained by minimizing the
% cross-entropy error function with an isotropic Gaussian regularizer (one
% hyperparameter for each of the four standard weight groups), using
% the scaled conjugate gradient optimizer. The hyperparameter vector
% ALPHA is re-estimated using the function EVIDENCE. A graph
% is plotted of the optimal, regularised, and unregularised decision
% boundaries. A further plot of the moderated versus unmoderated
% contours is generated.
%
% See also
% EVIDENCE, MLP, SCG, DEMARD, DEMMLP2
%

% Copyright (c) Ian T Nabney (1996-2001)


clc;

disp('This program demonstrates the use of the evidence procedure on')
disp('a two-class problem. It also shows the improved generalisation')
disp('performance that can be achieved with moderated outputs; that is,')
disp('predictions where an approximate integration over the true')
disp('posterior distribution is carried out.')
disp(' ')
disp('First we generate a synthetic dataset with two-dimensional input')
disp('sampled from a mixture of four Gaussians. Each class is')
disp('associated with two of the Gaussians so that the optimal decision')
disp('boundary is non-linear.')
disp(' ')
disp('Press any key to see a plot of the data.')
pause;

% Generate the matrix of inputs x and targets t.
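% Because class 1 is generated by the first two mixture components and
% class 2 by the remaining two, the Bayes-optimal rule assigns a point x
% to class 1 whenever P(C1|x) = P(comp 1|x) + P(comp 2|x) > 0.5.  This
% posterior is computed below with GMMPOST, and its 0.5 contour is
% plotted later as the optimal decision boundary.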

rand('state', 423);
randn('state', 423);

ClassSymbol1 = 'r.';
ClassSymbol2 = 'y.';
PointSize = 12;
titleSize = 10;

fh1 = figure;
set(fh1, 'Name', 'True Data Distribution');
whitebg(fh1, 'k');

%
% Generate the data
%
n=200;

% Set up mixture model: 2d data with four centres
% Class 1 is first two centres, class 2 from the other two
mix = gmm(2, 4, 'full');
mix.priors = [0.25 0.25 0.25 0.25];
mix.centres = [0 -0.1; 1.5 0; 1 1; 1 -1];
mix.covars(:,:,1) = [0.625 -0.2165; -0.2165 0.875];
mix.covars(:,:,2) = [0.25 0; 0 0.25];
mix.covars(:,:,3) = [0.2241 -0.1368; -0.1368 0.9759];
mix.covars(:,:,4) = [0.2375 0.1516; 0.1516 0.4125];

[data, label] = gmmsamp(mix, n);

%
% Calculate some useful axis limits
%
x0 = min(data(:,1));
x1 = max(data(:,1));
y0 = min(data(:,2));
y1 = max(data(:,2));
dx = x1-x0;
dy = y1-y0;
expand = 5/100;    % Add on 5 percent each way
x0 = x0 - dx*expand;
x1 = x1 + dx*expand;
y0 = y0 - dy*expand;
y1 = y1 + dy*expand;
resolution = 100;
step = dx/resolution;
xrange = [x0:step:x1];
yrange = [y0:step:y1];
%
% Generate the grid
%
[X Y]=meshgrid([x0:step:x1],[y0:step:y1]);
%
% Calculate the class conditional densities, the unconditional densities and
% the posterior probabilities
%
px_j = gmmactiv(mix, [X(:) Y(:)]);
px = reshape(px_j*(mix.priors)',size(X));
post = gmmpost(mix, [X(:) Y(:)]);
p1_x = reshape(post(:, 1) + post(:, 2), size(X));
p2_x = reshape(post(:, 3) + post(:, 4), size(X));

plot(data((label<=2),1),data(label<=2,2),ClassSymbol1, 'MarkerSize', ...
  PointSize)
hold on
axis([x0 x1 y0 y1])
plot(data((label>2),1),data(label>2,2),ClassSymbol2, 'MarkerSize', ...
  PointSize)

% Convert targets to 0-1 encoding
target=[label<=2];
disp(' ')
disp('Press any key to continue')
pause; clc;

disp('Next we create a two-layer MLP network with 6 hidden units and')
disp('one logistic output. We use a separate inverse variance')
disp('hyperparameter for each group of weights (first-layer weights,')
disp('first-layer biases, second-layer weights, second-layer bias), and')
disp('the weights are optimised with the scaled conjugate gradient')
disp('algorithm. After each 100 iterations the hyperparameters are')
disp('re-estimated twice. There are eight cycles of the whole algorithm.')
disp(' ')
disp('Press any key to train the network and determine the hyperparameters.')
pause;

% Set up network parameters.
nin = 2;          % Number of inputs.
nhidden = 6;      % Number of hidden units.
nout = 1;         % Number of outputs.
alpha = 0.01;     % Initial prior hyperparameter.
aw1 = 0.01;
ab1 = 0.01;
aw2 = 0.01;
ab2 = 0.01;

% Create and initialize network weight vector.
prior = mlpprior(nin, nhidden, nout, aw1, ab1, aw2, ab2);
net = mlp(nin, nhidden, nout, 'logistic', prior);
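% As an illustrative aside: the columns of prior.index select the four
% standard Netlab weight groups (first-layer weights, first-layer biases,
% second-layer weights, second-layer bias), and the Gaussian regulariser
% is 0.5*sum_i alpha_i*||w_i||^2 taken over those groups, so each
% hyperparameter governs one group of weights.
fprintf(1, 'Weights per hyperparameter group: %s\n', ...
  num2str(full(sum(prior.index))));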

% Set up vector of options for the optimiser.
nouter = 8;           % Number of outer loops.
ninner = 2;           % Number of inner loops.
options = foptions;   % Default options vector.
options(1) = 1;       % This provides display of error values.
options(2) = 1.0e-5;  % Absolute precision for weights.
options(3) = 1.0e-5;  % Precision for objective function.
options(14) = 100;    % Number of training cycles in inner loop.

% Train using scaled conjugate gradients, re-estimating the alpha hyperparameters.
for k = 1:nouter
  net = netopt(net, options, data, target, 'scg');
  [net, gamma] = evidence(net, data, target, ninner);
  fprintf(1, '\nRe-estimation cycle %d:\n', k);
  disp(['  alpha = ', num2str(net.alpha')]);
  fprintf(1, '  gamma = %8.5f\n\n', gamma);
  disp(' ')
  disp('Press any key to continue.')
  pause;
end

disp(' ')
disp('Network training and hyperparameter re-estimation are now complete.')
disp('Notice that the final error value is close to the number of data')
disp(['points (', num2str(n), ') divided by two.'])
disp('Also, the hyperparameter values differ, which suggests that a single')
disp('hyperparameter would not be so effective.')
disp(' ')
disp('Now we train an MLP without Bayesian regularisation on the same')
disp('dataset using 400 iterations of scaled conjugate gradient.')
disp(' ')
disp('Press any key to train the network by maximum likelihood.')
pause;
% Train standard network
net2 = mlp(nin, nhidden, nout, 'logistic');
options(14) = 400;
net2 = netopt(net2, options, data, target, 'scg');
y2g = mlpfwd(net2, [X(:), Y(:)]);
y2g = reshape(y2g(:, 1), size(X));

disp(' ')
disp('We can now plot the function represented by the trained networks.')
disp('We show the decision boundaries (output = 0.5) and the optimal')
disp('decision boundary given by applying Bayes'' theorem to the true')
disp('data model.')
disp(' ')
disp('Press any key to add the boundaries to the plot.')
pause;

% Evaluate predictions.
[yg, ymodg] = mlpevfwd(net, data, target, [X(:) Y(:)]);
yg = reshape(yg(:,1),size(X));
ymodg = reshape(ymodg(:,1),size(X));
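% The second output of MLPEVFWD above is the moderated prediction: rather
% than squashing the most probable activation a directly, it approximately
% integrates the logistic sigmoid over the Gaussian posterior of a.  A
% minimal sketch of the usual MacKay-style approximation, using a
% hypothetical activation a_demo and posterior variances s2_demo (the
% toolbox computes the variance from the inverse Hessian):
a_demo = 2.0;                         % example output activation
s2_demo = [0 1 4];                    % increasing posterior variance of a
kappa = 1./sqrt(1 + (pi/8)*s2_demo);  % moderation factor, kappa <= 1
fprintf(1, 'sigmoid(2) moderated with s2 = 0, 1, 4: %s\n', ...
  num2str(1./(1 + exp(-kappa*a_demo)), 3));
% Larger variance pulls the prediction towards 0.5, which is why the
% moderated contours plotted below are more widely spaced than the
% unmoderated ones.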

% Bayesian decision boundary
[cB, hB] = contour(xrange,yrange,p1_x,[0.5 0.5],'b-');
[cNb, hNb] = contour(xrange,yrange,yg,[0.5 0.5],'r-');
[cN, hN] = contour(xrange,yrange,y2g,[0.5 0.5],'g-');
set(hB, 'LineWidth', 2);
set(hNb, 'LineWidth', 2);
set(hN, 'LineWidth', 2);
Chandles = [hB(1) hNb(1) hN(1)];
legend(Chandles, 'Bayes', ...
  'Reg. Network', 'Network', 3);

disp(' ')
disp('Note how the regularised network predictions are closer to the')
disp('optimal decision boundary, while the unregularised network is')
disp('overtrained.')

disp(' ')
disp('We will now compare moderated and unmoderated outputs for the');
disp('regularised network by showing the contour plot of the posterior')
disp('probability estimates.')
disp(' ')
disp('The first plot shows the moderated predictions and the second')
disp('shows the standard (unmoderated) predictions from the same network.')
disp('These agree at the level 0.5.')
disp('Press any key to continue')
pause
levels = 0:0.1:1;
fh4 = figure;
set(fh4, 'Name', 'Moderated outputs');
hold on
plot(data((label<=2),1),data(label<=2,2),'r.', 'MarkerSize', PointSize)
plot(data((label>2),1),data(label>2,2),'y.', 'MarkerSize', PointSize)

[cNby, hNby] = contour(xrange, yrange, ymodg, levels, 'k-');
set(hNby, 'LineWidth', 1);

fh5 = figure;
set(fh5, 'Name', 'Unmoderated outputs');
hold on
plot(data((label<=2),1),data(label<=2,2),'r.', 'MarkerSize', PointSize)
plot(data((label>2),1),data(label>2,2),'y.', 'MarkerSize', PointSize)

[cNbm, hNbm] = contour(xrange, yrange, yg, levels, 'k-');
set(hNbm, 'LineWidth', 1);

disp(' ')
disp('Note how the moderated contours are more widely spaced. This shows')
disp('that there is a larger region where the outputs are close to 0.5')
disp('and a smaller region where the outputs are close to 0 or 1.')
disp(' ')
disp('Press any key to exit')
pause
close(fh1);
close(fh4);
close(fh5);