Gradient Descent with Momentum
Gradient descent with momentum is an enhanced version of the gradient descent algorithm in which the current update step is a weighted sum of the current gradient and the previous gradients.
In this problem, you are asked to return the gradient given:
grads: 1D vector containing the current gradient (position 0) and the previous gradients (positions [1:end])
beta: the weight of each component
You can use this formula:
Example input: grads = [0.9, 1, 0.88, 0.84, 0.89]
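The exact formula is not shown here, but one common reading of "weighted sum of the gradients" is a geometric weighting: the current gradient (index 0) gets weight 1, and each older gradient is down-weighted by a further factor of beta. A minimal sketch under that assumption (the function name and the choice beta = 0.5 are illustrative, not from the problem statement):

```python
import numpy as np

def momentum_gradient(grads, beta):
    # Assumed weighting: grads[i] contributes with weight beta**i,
    # so the current gradient has weight 1 and older gradients decay.
    grads = np.asarray(grads, dtype=float)
    weights = beta ** np.arange(len(grads))
    return float(np.sum(weights * grads))

# Using the example vector from the problem statement with beta = 0.5:
result = momentum_gradient([0.9, 1, 0.88, 0.84, 0.89], beta=0.5)
print(result)  # 1.780625
```

If the intended formula is the standard exponentially weighted average (v = beta * v_prev + (1 - beta) * g), the weights would instead carry a (1 - beta) factor; check the problem's formula before submitting.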
numpy has already been imported as np (import numpy as np).