Lasso Regression

Optimization
Supervised

Difficulty: 7 | Problem written by ankita
Regularization is a technique that discourages learning an overly complex or flexible model, reducing the risk of overfitting.

Lasso regression is implemented by adding a regularization term to the loss function. Here we use the linear least-squares loss; the regularization term is the L1 norm of the weights:

\(L(W, \alpha) = ||X \cdot W - y||^{2} + \alpha ||W||_{1}\)
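As a minimal sketch (assuming X, W, and y are NumPy arrays of shapes (n, m), (m, 1), and (n, 1) respectively), this loss can be computed as:

```python
import numpy as np

def lasso_loss(X, y, W, alpha):
    """Squared-error loss plus L1 penalty: ||X.W - y||^2 + alpha * ||W||_1."""
    residual = X @ W - y
    return float(np.sum(residual ** 2) + alpha * np.sum(np.abs(W)))
```

The function name `lasso_loss` is illustrative, not part of the problem's required interface.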

We will apply batch gradient descent, updating W at each step using the following derivative of the loss with respect to W.

WS = NumPy array with the same shape as W. The ith element is 1 if W[i] >= 0, otherwise -1.

n is the number of training examples.

\(\frac{\partial L}{\partial W} = (-2 * X^{T} \cdot (y - X \cdot W) + \alpha * W_S) / n\)
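A single update step following this formula might be sketched as below (the helper name `gradient_step` is an assumption for illustration):

```python
import numpy as np

def gradient_step(X, y, W, alpha, lr):
    """One batch gradient-descent update using the subgradient of the L1 term."""
    n = X.shape[0]
    WS = np.where(W >= 0, 1.0, -1.0)               # +1 where W[i] >= 0, else -1
    dW = (-2 * X.T @ (y - X @ W) + alpha * WS) / n  # derivative from the formula above
    return W - lr * dW
```

With alpha = 0 and W already at the least-squares solution, the residual y - X.W is zero, so the step leaves W unchanged.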

We expect you to implement the whole algorithm manually and return the 'W' corresponding to the best-suited α, along with the loss for those parameters.

Input:

X: an array of training examples

y: an array of outputs, one for each training example

alpha: a list of α values to evaluate; pick the one yielding the minimum loss

iter: number of iterations

lr: learning rate

Output:

For each α, compute 'W' and the corresponding loss (using the loss function above) on the whole training data, then return the α, loss score, and 'W' for the α that achieves the minimum training loss.

The output should be in the following order:

α, loss score, NumPy array of 'W'.

Hint:

The prediction is X.W, where X has shape (n, m) and y has shape (n, 1). It follows that W must have shape (m, 1), i.e. a 2-D array.
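Putting the pieces together, a possible sketch of the full routine follows. Note it initializes W to zeros, which is an assumption: the problem does not specify the initialization (so results may differ from the expected output if the grader assumes a different starting point), and the function name `lasso_grid_search` is illustrative.

```python
import numpy as np

def lasso_grid_search(X, y, alphas, iterations, lr):
    """For each alpha, run batch gradient descent and return the tuple
    (alpha, loss, W) for the alpha with the lowest training loss.
    W is initialized to zeros; the problem leaves this choice open."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    n, m = X.shape
    best = None
    for alpha in alphas:
        W = np.zeros((m, 1))
        for _ in range(iterations):
            WS = np.where(W >= 0, 1.0, -1.0)
            dW = (-2 * X.T @ (y - X @ W) + alpha * WS) / n
            W = W - lr * dW
        loss = float(np.sum((X @ W - y) ** 2) + alpha * np.sum(np.abs(W)))
        if best is None or loss < best[1]:
            best = (alpha, loss, W)
    return best
```

On a tiny 1-D example where y = 2x exactly, a small α fits the data closely while a large α shrinks W toward zero and incurs a much higher loss, so the small α is returned.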

Sample Input:
<class 'list'>
X: [[0.55345954, 0.26978505, 0.99572193, 0.17879061, 0.13353172], [0.12972127, 0.89863166, 0.97875147, 0.61299997, 0.88425275], [0.08780339, 0.90976317, 0.68283976, 0.02670151, 0.30560837], [0.82365932, 0.87099191, 0.52195797, 0.52162298, 0.40034739], [0.70355801, 0.89146552, 0.38555787, 0.07339327, 0.16111809], [0.27560237, 0.92967928, 0.6460444, 0.46355679, 0.69999201], [0.86036116, 0.66422329, 0.69960402, 0.7787864, 0.67299241], [0.86554358, 0.43671475, 0.0406369, 0.09743328, 0.13477061], [0.22106352, 0.57616507, 0.43354926, 0.63722607, 0.89919981], [0.30758308, 0.40788758, 0.0811379, 0.35161535, 0.37144102]]
<class 'list'>
Y: [[0.12954958], [0.88900561], [0.15619786], [0.19463617], [0.25362551], [0.81332185], [0.59385747], [0.63010439], [0.4483], [0.16408941]]
<class 'list'>
alpha: [1, 0.1, 10, 20]
<class 'int'>
iter: 1000
<class 'float'>
lr: 0.001

Expected Output:
<class 'tuple'>
(0.1, 0.5849272455767055, array([[0.0933715 ], [0.22456382], [0.12905883], [0.12445942], [0.21719877]]))


Comments
mo_venouziou • 2 months, 1 week ago

The instructions also do not specify the desired initialization approach for W.

NumPy has already been imported as np (import numpy as np).