function [Y, Loss] = separationOraclePrecAtK(q, D, pos, neg, k)
%
% [Y, Loss] = separationOraclePrecAtK(q, D, pos, neg, k)
%
%   q   = index of the query point
%   D   = the current distance matrix
%   pos = indices of relevant results for q
%   neg = indices of irrelevant results for q
%   k   = length of the list to consider
%
%   Y is a permutation of 1:n corresponding to the maximally
%   violated constraint
%
%   Loss is the loss for Y; in this case, 1 - Prec@k(Y)


    % First, sort the documents in descending order of W'Phi(q,x)
    % Phi = - (X(q) - X(x)) * (X(q) - X(x))'

    % Sort the positive documents
    ScorePos        = - D(pos,q);
    [Vpos, Ipos]    = sort(full(ScorePos'), 'descend');
    Ipos            = pos(Ipos);

    % Sort the negative documents
    ScoreNeg        = - D(neg,q);
    [Vneg, Ineg]    = sort(full(ScoreNeg'), 'descend');
    Ineg            = neg(Ineg);

    % Now, solve the DP for the interleaving

    numPos  = length(pos);
    numNeg  = length(neg);
    n       = numPos + numNeg;

    cVpos   = cumsum(Vpos);
    cVneg   = cumsum(Vneg);


    % If we don't have enough positive (or negative) examples, scale k down
    k = min([k, numPos, numNeg]);

    % Algorithm:
    %   For each precision score in 0, 1/k, 2/k, ..., 1
    %       calculate the maximum discriminant score at that precision level
    Precision       = (0:(1/k):1)';
    Discriminant    = zeros(k+1, 1);
    NegsBefore      = zeros(numPos, k+1);

    % For 0 precision, all positives go after the first k negatives.
    % binarysearch is an external helper: for each score in its first
    % argument, it counts the entries of the second argument ranked above it.

    NegsBefore(:,1) = k + binarysearch(Vpos, Vneg(k+1:end));

    Discriminant(1) = Vpos * (numNeg - 2 * NegsBefore(:,1)) + numPos * cVneg(end) ...
                        - 2 * sum(cVneg(NegsBefore((NegsBefore(:,1) > 0), 1)));


    % For precision (a-1)/k, swap the (a-1)'th positive doc
    % into the top (k-a) negative docs

    for a = 2:(k+1)
        NegsBefore(:,a) = NegsBefore(:,a-1);

        % We have a-1 positives, and k - (a-1) negatives
        NegsBefore(a-1, a) = binarysearch(Vpos(a-1), Vneg(1:(k-a+1)));

        % There were NegsBefore(a-1,a-1) negatives before positive (a-1);
        % now there are NegsBefore(a-1,a)
        Discriminant(a) = Discriminant(a-1) ...
                            + 2 * (NegsBefore(a-1,a-1) - NegsBefore(a-1,a)) * Vpos(a-1);

        if NegsBefore(a-1,a-1) > 0
            Discriminant(a) = Discriminant(a) + 2 * cVneg(NegsBefore(a-1,a-1));
        end
        if NegsBefore(a-1,a) > 0
            Discriminant(a) = Discriminant(a) - 2 * cVneg(NegsBefore(a-1,a));
        end
    end

    % Normalize discriminant scores
    Discriminant    = Discriminant / (numPos * numNeg);
    [s, x]          = max(Discriminant - Precision);

    % Now we know that there are x-1 relevant docs in the max ranking.
    % Construct Y from NegsBefore(:,x)

    Y               = nan * ones(n,1);
    Y((1:numPos)' + NegsBefore(:,x)) = Ipos;
    Y(isnan(Y))     = Ineg;

    % Compute loss for this list
    Loss            = 1 - Precision(x);
end