- init_factor(W_norm, XW, Y, z)
Compute the initial value of W, following the original author's code
- proximal_gradient_descent(X, Y, z, **kwargs)
This function implements supervised sparse feature selection via the l2,1-norm, i.e., it solves
min_{W} ||XW - Y||_F^2 + z * ||W||_{2,1}
(a minimal iteration sketch appears after the reference below)
Input
-----
X: {numpy array}, shape (n_samples, n_features)
input data, guaranteed to be a numpy array
Y: {numpy array}, shape (n_samples, n_classes)
input class labels, each row is a one-hot-coding class label
z: {float}
regularization parameter
kwargs: {dictionary}
verbose: {boolean}
True if the user wants to print the objective function value at each iteration, False otherwise
Output
------
W: {numpy array}, shape (n_features, n_classes)
weight matrix
obj: {numpy array}, shape (n_iterations,)
objective function value at each iteration
value_gamma: {numpy array}, shape (n_iterations,)
step size used at each iteration
Reference
---------
Liu, Jun, et al. "Multi-task feature learning via efficient l2,1-norm minimization." UAI 2009.
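
Below is a minimal sketch of the proximal gradient iteration for this objective, not the author's implementation: it uses a fixed step size (the reciprocal of the Lipschitz constant of the smooth term) instead of the adaptive step size (value_gamma) returned by proximal_gradient_descent, and the names l21_prox, prox_grad_l21_sketch, max_iter, and tol are illustrative, not part of this module.

    import numpy as np

    def l21_prox(V, t):
        # Row-wise soft thresholding: proximal operator of t * ||.||_{2,1}.
        row_norms = np.linalg.norm(V, axis=1, keepdims=True)
        scale = np.maximum(0.0, 1.0 - t / np.maximum(row_norms, 1e-12))
        return scale * V

    def prox_grad_l21_sketch(X, Y, z, max_iter=100, tol=1e-6):
        n_features, n_classes = X.shape[1], Y.shape[1]
        W = np.zeros((n_features, n_classes))
        # Fixed step size: 1 / Lipschitz constant of the gradient of ||XW - Y||_F^2.
        step = 1.0 / (2.0 * np.linalg.norm(X, 2) ** 2)
        obj = []
        for _ in range(max_iter):
            grad = 2.0 * X.T @ (X @ W - Y)               # gradient of the smooth term
            W_new = l21_prox(W - step * grad, step * z)  # proximal step on z * ||W||_{2,1}
            obj.append(np.linalg.norm(X @ W_new - Y, 'fro') ** 2
                       + z * np.sum(np.linalg.norm(W_new, axis=1)))
            if np.linalg.norm(W_new - W, 'fro') < tol:   # stop when W stabilizes
                W = W_new
                break
            W = W_new
        return W, np.array(obj)

Each row of W that is driven to zero corresponds to a feature that is discarded, which is what makes the l2,1 penalty suitable for feature selection.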
|