###### Optimization

Difficulty: 2 | Problem written by zeyad_omar

Gradient descent with momentum is an enhanced version of gradient descent in which the current update is a weighted sum of the current and all previous gradients.

grads: 1D vector containing the current gradient (position 0) and the previous gradients (positions [1:end])
beta: the weight applied to each component

You can use this formula:

$$v = \sum_{i=0}^{n-1} \beta^{\,i} \cdot \text{grads}[i]$$

##### Sample Input:
<class 'list'>
grads: [0.9, 1, 0.88, 0.84, 0.89]
<class 'float'>
beta: 0.5

##### Expected Output:
<class 'float'>
1.780625
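A minimal sketch of this weighted sum, assuming the formula v = Σ βⁱ·grads[i] implied by the sample input and output (the function name `momentum_step` is mine, not part of the problem statement):

```python
def momentum_step(grads, beta):
    """Weighted sum of the current and previous gradients.

    grads[0] is the current gradient (weight beta**0 = 1),
    grads[i] is the gradient from i steps ago (weight beta**i).
    """
    return sum((beta ** i) * g for i, g in enumerate(grads))

# Sample input from the problem: yields 1.780625
print(momentum_step([0.9, 1, 0.88, 0.84, 0.89], 0.5))
```

Checking against the sample: 0.9 + 0.5·1 + 0.25·0.88 + 0.125·0.84 + 0.0625·0.89 = 1.780625, matching the expected output.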
