
Stochastic Gradient Descent

Tags: Linear Algebra, Neural Networks, Optimization, Supervised

Difficulty: 7 | Problem written by ankita
Various optimization algorithms find a local minimum of a differentiable function by minimizing the cost function of the problem. Gradient descent is one such algorithm: it minimizes the cost function by iteratively updating the weights of the model.

The gradient descent algorithm multiplies the gradient by a learning rate and subtracts the result from the current point to determine the next point on the way to a local minimum.

In stochastic gradient descent, the error is calculated for one training example at a time, and the parameters are updated after evaluating each example.
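Concretely, writing the learning rate as \(\eta\) (the lr input below) and the loss as \(L\), a single gradient-descent step can be written as \(W \leftarrow W - \eta \, \nabla_{W} L(W)\); in the stochastic variant, \(L\) is the loss of a single training example rather than of the full dataset.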

In this problem, we expect you to implement the stochastic gradient descent algorithm manually. The cost/loss function will be the mean squared loss for linear regression:

Loss = \(\tfrac{1}{2n}\sum_{i=1}^{n}(y_{i}-f(x_{i}))^{2}\)

You have to initialize all the weights to zero to meet the desired output.
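For SGD the weights are updated using the loss of a single example. Assuming the per-example loss \(L_{i} = \tfrac{1}{2}(y_{i} - x_{i}W)^{2}\) (a choice consistent with the update rule in the algorithm below), its gradient with respect to W is

\(\frac{\partial L_{i}}{\partial W} = x_{i}^{T}(x_{i}W - y_{i}) = x_{i}^{T}(y_{p} - y_{i})\)

which is exactly the derivative used in step 4 of the algorithm.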

Input:
x: an array of training examples
y: an array of output corresponding to each training example
lr: the learning rate for the algorithm
it: number of complete traversals of the dataset (can be considered as the number of epochs)

Output:
A list of updated weights after every iteration/epoch. Do not include the first W that is an array of zeros.

The first element of W should be \(w_{0}\), i.e., if \(Y = wX + w_{0}\) then W = [\(w_{0}\), elements of w].
To get the above W, prepend a column of ones to X (i.e., stack the ones as the first column of X).
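For example, one way to prepend the column of ones with NumPy (a sketch; the variable names are assumptions) is:

    X = np.array(x, dtype=float)
    X = np.hstack([np.ones((X.shape[0], 1)), X])   # first column of ones corresponds to the bias w0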

Algorithm:

1. Initialize W to zeros

2. For each training example \(X_{i}\) in X, do the following:

3. Calculate the predicted value y_p as \(X_{i} \cdot W\)

4. Now, use this y_p and the actual y to calculate the derivative of the loss function with respect to W:

   Derivative of loss with respect to W = \(X_{i}^{T} \cdot (y_p - y)\)

   Here, \(\cdot\) denotes the dot product and \(X_{i}^{T}\) is the transpose of \(X_{i}\).

5. Update the weights using the formula: W = W - learning_rate*derivative of the loss function with respect to W.

6. Repeat steps 2–5 for the number of iterations/epochs specified by it (see the sketch after this list).
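As an illustration, here is a minimal NumPy sketch of the procedure above. The function name sgd and the internal variable names are assumptions; the problem only fixes the inputs x, y, lr, it and the returned list of per-epoch weight arrays.

    import numpy as np

    def sgd(x, y, lr, it):
        X = np.array(x, dtype=float)
        Y = np.array(y, dtype=float)
        # Prepend a column of ones so the first weight acts as the bias w0.
        X = np.hstack([np.ones((X.shape[0], 1)), X])
        # Initialize all weights (including the bias) to zero.
        W = np.zeros((X.shape[1], 1))
        history = []
        for _ in range(it):                        # one full pass over the data per epoch
            for i in range(X.shape[0]):
                Xi = X[i:i + 1]                    # keep Xi as a (1, d) row vector
                y_p = Xi.dot(W)                    # predicted value for this example
                grad = Xi.T.dot(y_p - Y[i:i + 1])  # derivative of the per-example loss w.r.t. W
                W = W - lr * grad                  # gradient step
            history.append(W.copy())               # record weights after every epoch
        return history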

Sample Input:
<class 'list'>
x: [[0.56351121, 0.99432814], [0.98461591, 0.2181135], [0.72111435, 0.9264389]]
<class 'list'>
y: [[0.74592086], [0.3833196], [0.37965535]]
<class 'float'>
lr: 0.001
<class 'int'>
it: 2

Expected Output:
<class 'list'>
[array([[0.00150511], [0.00106845], [0.00117445]]), array([[0.00300078], [0.00212988], [0.00234193]])]


Note: numpy has already been imported as np (import numpy as np).