utils_algorithms

Module Contents

class MinNormSolver
MAX_ITER = 1000
STOP_CRIT = 1e-05
_min_norm_element_from2(v1v1, v1v2, v2v2)

Analytical solution for min_c |c x_1 + (1-c) x_2|_2^2 over c in [0, 1]; d is the optimized distance (objective). The arguments are the inner products v1v1 = <x1, x1>, v1v2 = <x1, x2>, v2v2 = <x2, x2>.
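A minimal sketch of this closed form (illustrative, not necessarily the module's exact code; the clipping to [0, 1] and the degenerate x1 == x2 branch are assumptions):

```python
def min_norm_element_from2(v1v1, v1v2, v2v2):
    # Minimize f(c) = c^2*v1v1 + 2c(1-c)*v1v2 + (1-c)^2*v2v2 over c in [0, 1].
    denom = v1v1 - 2.0 * v1v2 + v2v2       # = |x1 - x2|^2 >= 0
    if denom <= 0.0:                       # degenerate case: x1 == x2
        return 0.5, v1v1                   # any c yields the same point
    c = (v2v2 - v1v2) / denom              # unconstrained stationary point
    c = min(max(c, 0.0), 1.0)              # clip to the feasible interval
    d = c * c * v1v1 + 2.0 * c * (1.0 - c) * v1v2 + (1.0 - c) ** 2 * v2v2
    return c, d
```

For example, two orthogonal unit vectors (v1v1 = v2v2 = 1, v1v2 = 0) give c = 0.5 and d = 0.5.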

_min_norm_2d(dps)

Find the minimum norm solution as a combination of two points. This is correct only in 2D, i.e. min_c |sum_i c_i x_i|_2^2 s.t. sum_i c_i = 1, 1 >= c_i >= 0 for all i, and c_i + c_j = 1.0 for some pair i, j.
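A sketch of the pairwise search this describes, assuming dps is a dict mapping index pairs to inner products, dps[(i, j)] = <x_i, x_j> (that key layout is an assumption), and reusing min_norm_element_from2 from the sketch above:

```python
def min_norm_2d(dps, n):
    # Try every pair (i, j) and keep the pair whose two-point
    # min-norm solution has the smallest objective d.
    best = None  # ((i, j), c, d)
    for i in range(n):
        for j in range(i + 1, n):
            c, d = min_norm_element_from2(dps[(i, i)], dps[(i, j)], dps[(j, j)])
            if best is None or d < best[2]:
                best = ((i, j), c, d)
    return best
```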

_min_norm_2d_accelerated(dps)
_projection2simplex(y)

Given y, solves argmin_z |y - z|_2 s.t. sum_i z_i = 1, 1 >= z_i >= 0 for all i, i.e. the Euclidean projection of y onto the probability simplex.
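One standard way to compute this projection is the sort-and-threshold algorithm; a sketch (not necessarily this module's implementation):

```python
import numpy as np

def projection2simplex(y):
    # Euclidean projection of y onto {z : sum_i z_i = 1, z_i >= 0}.
    y = np.asarray(y, dtype=float)
    m = len(y)
    u = np.sort(y)[::-1]                        # coordinates in descending order
    css = np.cumsum(u)                          # running sums of the sorted values
    k = np.arange(1, m + 1)
    rho = np.nonzero(u * k > css - 1.0)[0][-1]  # last k with u_k > (css_k - 1)/k
    tau = (css[rho] - 1.0) / (rho + 1)          # shift that makes the result sum to 1
    return np.maximum(y - tau, 0.0)
```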

_next_point(grad, n)
find_min_norm_element(vecs)

Given a list of vectors (vecs), this method finds the minimum norm element in their convex hull, i.e. min |u|_2 s.t. u = sum_i c_i vecs[i] and sum_i c_i = 1. The approach is geometric: the key fact is that if d_{i,j} = min |u|_2 s.t. u = c x_i + (1-c) x_j, then the solution lies in (0, d_{i,j}). Hence, we find the best 2-task solution and then run projected gradient descent until convergence.
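A compact sketch of such a projected-gradient loop (illustrative: the step size and stopping test are assumptions, the defaults mirror MAX_ITER and STOP_CRIT, and it reuses the projection2simplex sketch above):

```python
import numpy as np

def min_norm_pgd(vecs, max_iter=1000, stop_crit=1e-5):
    # Minimize |V^T c|_2^2 = c^T M c over the simplex by projected
    # gradient descent on the Gram matrix M of the input vectors.
    V = np.stack(vecs)                       # n x d matrix of vectors
    M = V @ V.T                              # M[i, j] = <x_i, x_j>
    c = np.full(len(vecs), 1.0 / len(vecs))  # feasible starting point
    step = 1.0 / max(2.0 * np.linalg.norm(M, 2), 1e-12)  # 1/L, grad = 2*M*c
    for _ in range(max_iter):
        c_new = projection2simplex(c - step * (2.0 * M @ c))
        if np.abs(c_new - c).sum() < stop_crit:
            c = c_new
            break
        c = c_new
    return c, float(c @ M @ c)               # weights c_i and squared norm
```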

find_min_norm_element_FW(vecs)

Given a list of vectors (vecs), this method finds the minimum norm element in their convex hull, i.e. min |u|_2 s.t. u = sum_i c_i vecs[i] and sum_i c_i = 1. The approach is geometric: the key fact is that if d_{i,j} = min |u|_2 s.t. u = c x_i + (1-c) x_j, then the solution lies in (0, d_{i,j}). Hence, we find the best 2-task solution and then run Frank-Wolfe until convergence.
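A sketch of the Frank-Wolfe iteration this describes (illustrative; the two-point line search is inlined and the stopping rule is an assumption):

```python
import numpy as np

def min_norm_fw(vecs, max_iter=1000, stop_crit=1e-5):
    # Frank-Wolfe on f(c) = c^T M c over the simplex: each step moves toward
    # the vertex e_t that minimizes the linearized objective, with the step
    # size given by the analytical two-point solution.
    V = np.stack(vecs)
    M = V @ V.T                              # Gram matrix of the input vectors
    c = np.full(len(vecs), 1.0 / len(vecs))
    for _ in range(max_iter):
        Mc = M @ c
        t = int(np.argmin(Mc))               # best vertex for gradient 2*M*c
        # Line search over gamma for |gamma*u + (1-gamma)*x_t|^2, u = V^T c:
        v1v1, v1v2, v2v2 = float(c @ Mc), float(Mc[t]), float(M[t, t])
        denom = v1v1 - 2.0 * v1v2 + v2v2
        gamma = 1.0 if denom <= 0.0 else min(max((v2v2 - v1v2) / denom, 0.0), 1.0)
        c_new = gamma * c
        c_new[t] += 1.0 - gamma              # move (1 - gamma) of the mass to e_t
        if np.abs(c_new - c).sum() < stop_crit:
            return c_new, float(c_new @ M @ c_new)
        c = c_new
    return c, float(c @ M @ c)
```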