- proximal_gradient_descent(X, Y, z, **kwargs)
- This function implements supervised sparse feature selection via the l2,1 norm, i.e., it solves
    min_{W,c} sum_{i} log(1 + exp(-y_i*(W'*x_i + c))) + z*||W||_{2,1}
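The ||W||_{2,1} term is what induces row sparsity in W. Its proximal operator has a closed form: row-wise soft thresholding, which shrinks each row of W toward zero in Euclidean norm. A minimal sketch (the name `prox_l21` is illustrative, not part of this module):

```python
import numpy as np

def prox_l21(W, t):
    """Proximal operator of t * ||W||_{2,1}: shrink each row of W
    by t in Euclidean norm; rows with norm <= t become exactly zero."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)   # (n_features, 1)
    scale = np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12))
    return scale * W
```

Rows whose norm falls below the threshold are zeroed entirely, which is why the solution selects whole features rather than individual weights.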
Input
-----
X: {numpy array}, shape (n_samples, n_features)
input data
    Y: {numpy array}, shape (n_samples, n_classes)
        input class labels; each row is a one-hot encoded class label, guaranteed to be a numpy array
z: {float}
regularization parameter
kwargs: {dictionary}
verbose: {boolean}
            True to print the objective function value at each iteration, False otherwise
Output
------
W: {numpy array}, shape (n_features, n_classes)
weight matrix
obj: {numpy array}, shape (n_iterations,)
objective function value during iterations
    value_gamma: {numpy array}, shape (n_iterations,)
        step size used at each iteration
Reference:
Liu, Jun, et al. "Multi-Task Feature Learning Via Efficient l2,1-Norm Minimization." UAI. 2009.
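The loop below is an illustrative, fixed-step proximal gradient sketch for the objective above, assuming one-hot labels converted to per-class +/-1 targets. It is a simplification: the cited solver of Liu et al. uses a line search and an accelerated scheme, and all function names here are hypothetical.

```python
import numpy as np

def prox_l21(W, t):
    # row-wise soft thresholding: proximal operator of t * ||W||_{2,1}
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return np.maximum(0.0, 1.0 - t / np.maximum(norms, 1e-12)) * W

def pgd_l21_logistic(X, Y, z, n_iter=200):
    """Minimize sum_i log(1 + exp(-y_i*(W'*x_i + c))) + z*||W||_{2,1}
    with a fixed step size (1/Lipschitz bound of the smooth part)."""
    n_samples, n_features = X.shape
    n_classes = Y.shape[1]
    Ypm = 2.0 * Y - 1.0                         # one-hot -> +/-1 per class
    W = np.zeros((n_features, n_classes))
    c = np.zeros(n_classes)
    # logistic-loss Hessian bound: ||[X, 1]||_2^2 / 4
    L = (np.linalg.norm(X, 2) ** 2 + n_samples) / 4.0
    step = 1.0 / L
    obj = []
    for _ in range(n_iter):
        M = X @ W + c                           # margins, (n_samples, n_classes)
        obj.append(np.sum(np.log1p(np.exp(-Ypm * M)))
                   + z * np.sum(np.linalg.norm(W, axis=1)))
        S = 1.0 / (1.0 + np.exp(Ypm * M))       # sigma(-Ypm * M)
        G = -(X.T @ (Ypm * S))                  # gradient w.r.t. W
        gc = -(Ypm * S).sum(axis=0)             # gradient w.r.t. c
        W = prox_l21(W - step * G, step * z)    # gradient step + prox
        c = c - step * gc                       # c is unregularized
    return W, c, np.array(obj)
```

With a step size no larger than 1/L, the recorded objective is non-increasing, which is a useful sanity check when comparing against the `obj` array returned by the real solver.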