- tree_fs(X, y, z, idx, **kwargs)
- This function implements tree-structured group lasso regularization with least-square loss, i.e.,
min_{w} ||Xw-y||_2^2 + z\sum_{i}\sum_{j} h_{j}^{i} ||w_{G_{j}^{i}}||_2, where h_{j}^{i} is the weight for the j-th group
at the i-th level of the tree (the root node is at level 0)
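The objective above can be evaluated directly. The following is an illustrative sketch (not part of the package): `tree_lasso_objective` is a hypothetical helper, assuming 0-based inclusive start/end indices in idx and the leaf-node shorthand described in the notes below.

```python
import numpy as np

def tree_lasso_objective(X, y, w, z, idx):
    """Sketch: evaluate ||Xw - y||_2^2 + z * sum_j h_j * ||w_{G_j}||_2."""
    # Least-square loss term
    loss = np.sum((X.dot(w) - y) ** 2)
    penalty = 0.0
    for start, end, weight in idx.T:
        if start == -1:
            # Leaf-node shorthand: every feature is its own singleton group,
            # so the group penalty reduces to weight * sum_i |w_i|
            penalty += weight * np.sum(np.abs(w))
        else:
            # L2 norm of the coefficients in group [start, end] (inclusive)
            penalty += weight * np.linalg.norm(w[int(start):int(end) + 1])
    return loss + z * penalty
```

For a single group covering all features with weight 1, this reduces to the familiar group-lasso objective ||Xw - y||_2^2 + z||w||_2.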
Input
-----
X: {numpy array}, shape (n_samples, n_features)
input data
y: {numpy array}, shape (n_samples,)
input class labels or regression target
z: {float}
regularization parameter of the L2 norm for the non-overlapping groups
idx: {numpy array}, shape (3, n_nodes)
3*n_nodes matrix, where n_nodes denotes the number of nodes of the tree
idx[0, :] contains the starting index of each group
idx[1, :] contains the ending index of each group
idx[2, :] contains the corresponding group weight (h_{j}^{i})
kwargs: {dictionary}
verbose: {boolean}
True if the user wants to print out the objective function value in each iteration, False if not
Output
------
w: {numpy array}, shape (n_features,)
weight vector
obj: {numpy array}, shape (n_iterations,)
objective function value during iterations
value_gamma: {numpy array}, shape (n_iterations,)
suitable step size during iterations
Note for input parameter idx:
(1) For idx, if each entry in w is a leaf node of the tree and the weights for all leaf nodes are the same, then
idx[0,0] = -1 and idx[1,0] = -1, and idx[2,0] denotes the common weight
(2) In idx, the features of the left subtree have smaller indices than those of the right subtree (idx[0,i] is always smaller than idx[1,i])
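To make the idx conventions concrete, here is an illustrative sketch of the matrix for a hypothetical 4-feature tree with two depth-1 groups; the group weights shown are arbitrary illustrative values, and 0-based inclusive start/end indices are assumed.

```python
import numpy as np

# Hypothetical tree over 4 features:
#   level 0 (root):  G_1^0 = {0, 1, 2, 3}
#   level 1:         G_1^1 = {0, 1},  G_2^1 = {2, 3}
# The first column uses the leaf-node shorthand: idx[0,0] = idx[1,0] = -1,
# and idx[2,0] holds the common weight shared by all leaf nodes.
idx = np.array([
    [-1., 0., 2., 0.],  # starting index of each group
    [-1., 1., 3., 3.],  # ending index of each group (inclusive)
    [ 1., 1., 1., 1.],  # group weights h_{j}^{i} (illustrative)
])
```

With such an idx, the documented call is w, obj, value_gamma = tree_fs(X, y, z, idx, verbose=False).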
Reference:
Liu, Jun, et al. "Moreau-Yosida Regularization for Grouped Tree Structure Learning." NIPS. 2010.
Liu, Jun, et al. "SLEP: Sparse Learning with Efficient Projections." http://www.public.asu.edu/~jye02/Software/SLEP, 2009.