function [net, options] = somtrain(net, options, x)
%SOMTRAIN Kohonen training algorithm for SOM.
%
%   Description
%   [NET, OPTIONS] = SOMTRAIN(NET, OPTIONS, X) uses Kohonen's algorithm
%   to train a SOM. Both on-line and batch algorithms are implemented.
%   The learning rate (for on-line) and neighbourhood size decay
%   linearly. There is no error function minimised during training (so
%   there is no termination criterion other than the number of epochs),
%   but the sum-of-squares is computed and returned in OPTIONS(8).
%
%   The optional parameters have the following interpretations.
%
%   OPTIONS(1) is set to 1 to display error values; also logs learning
%   rate ALPHA and neighbourhood size NSIZE. Otherwise nothing is
%   displayed.
%
%   OPTIONS(5) determines whether the patterns are sampled randomly
%   with replacement. If it is 0 (the default), then patterns are
%   sampled in order. This is only relevant to the on-line algorithm.
%
%   OPTIONS(6) determines whether the on-line or batch algorithm is
%   used. If it is 1 then the batch algorithm is used. If it is 0 (the
%   default) then the on-line algorithm is used.
%
%   OPTIONS(14) is the maximum number of iterations (passes through the
%   complete pattern set); default 100.
%
%   OPTIONS(15) is the final neighbourhood size; default value is the
%   same as the initial neighbourhood size.
%
%   OPTIONS(16) is the final learning rate; default value is the same
%   as the initial learning rate.
%
%   OPTIONS(17) is the initial neighbourhood size; default 0.5*maximum
%   map size.
%
%   OPTIONS(18) is the initial learning rate; default 0.9. This
%   parameter must be positive.
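%
%   Example
%   An illustrative call (not from the original help text; X is assumed
%   here to be an NDATA x 2 data matrix, while SOM and FOPTIONS are the
%   standard Netlab network constructor and default options vector):
%   train an 8x8 map on-line for 50 passes with parameter logging
%   enabled.
%
%    net = som(2, [8, 8]);
%    options = foptions;
%    options(1) = 1;        % Log ALPHA, NSIZE and error each iteration
%    options(14) = 50;      % 50 passes through the data
%    [net, options] = somtrain(net, options, x);
%
%   The final sum-of-squares error is then available in OPTIONS(8).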
%
%   See also
%   KMEANS, SOM, SOMFWD
%

%   Copyright (c) Ian T Nabney (1996-2001)

% Check arguments for consistency
errstring = consist(net, 'som', x);
if ~isempty(errstring)
  error(errstring);
end

% Set number of iterations in convergence phase
if (~options(14))
  options(14) = 100;
end
niters = options(14);

% Learning rate must be positive
if (options(18) > 0)
  alpha_first = options(18);
else
  alpha_first = 0.9;
end
% Final learning rate must be no greater than initial learning rate
if (options(16) > alpha_first | options(16) < 0)
  alpha_last = alpha_first;
else
  alpha_last = options(16);
end

% Neighbourhood size
if (options(17) >= 0)
  nsize_first = options(17);
else
  nsize_first = max(net.map_dim)/2;
end
% Final neighbourhood size must be no greater than initial size
if (options(15) > nsize_first | options(15) < 0)
  nsize_last = nsize_first;
else
  nsize_last = options(15);
end

ndata = size(x, 1);

if options(6)
  % Batch algorithm
  H = zeros(ndata, net.num_nodes);
end
% Put weights into matrix form
tempw = sompak(net);

% Then carry out training
j = 1;
while j <= niters
  if options(6)
    % Batch version of algorithm
    alpha = 0.0;
    frac_done = (niters - j)/niters;
    % Compute neighbourhood
    nsize = round((nsize_first - nsize_last)*frac_done + nsize_last);

    % Find winning node: put weights back into net so that we can
    % call somunpak
    net = somunpak(net, tempw);
    [temp, bnode] = somfwd(net, x);
    for k = 1:ndata
      H(k, :) = reshape(net.inode_dist(:, :, bnode(k)) <= nsize, ...
        1, net.num_nodes);
    end
    s = sum(H, 1);
    for k = 1:net.num_nodes
      if s(k) > 0
        tempw(k, :) = sum((H(:, k)*ones(1, net.nin)).*x, 1)/s(k);
      end
    end
  else
    % On-line version of algorithm
    if options(5)
      % Randomise order of pattern presentation: with replacement
      pnum = ceil(rand(ndata, 1).*ndata);
    else
      pnum = 1:ndata;
    end
    % Cycle through dataset
    for k = 1:ndata
      % Fraction done
      frac_done = (((niters+1)*ndata) - (j*ndata + k))/((niters+1)*ndata);
      % Compute learning rate
      alpha = (alpha_first - alpha_last)*frac_done + alpha_last;
      % Compute neighbourhood
      nsize = round((nsize_first - nsize_last)*frac_done + nsize_last);
      % Find best node
      pat_diff = ones(net.num_nodes, 1)*x(pnum(k), :) - tempw;
      [temp, bnode] = min(sum(abs(pat_diff), 2));

      % Now update neighbourhood
      neighbourhood = (net.inode_dist(:, :, bnode) <= nsize);
      tempw = tempw + ...
        ((alpha*(neighbourhood(:)))*ones(1, net.nin)).*pat_diff;
    end
  end
  if options(1)
    % Print iteration information
    fprintf(1, 'Iteration %d; alpha = %f, nsize = %f. ', j, alpha, ...
      nsize);
    % Print sum squared error to nearest node
    d2 = dist2(tempw, x);
    fprintf(1, 'Error = %f\n', sum(min(d2)));
  end
  j = j + 1;
end

net = somunpak(net, tempw);
options(8) = sum(min(dist2(tempw, x)));