
Gradient Descent with Momentum

Tags: Fundamentals, Neural Networks, Optimization

Difficulty: 2 | Problem written by zeyad_omar
Problem reported in interviews at: Amazon, Apple, Facebook, Google, Netflix

Gradient descent with momentum is an enhanced version of gradient descent in which the update step is an exponentially weighted sum of the current gradient and the previous gradients.

In this problem, you are asked to return the momentum-weighted gradient given:

grads: a 1D vector containing the current gradient (position 0) followed by the previous gradients (positions 1 to end)
beta: the decay factor that weights each component


You can use this formula:

V = Σ_{i=0}^{n−1} β^i · grads[i]

where n is the length of grads, so grads[0] (the current gradient) gets weight 1 and each older gradient is discounted by one more factor of β.
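A minimal sketch of one way to compute this with numpy (the function name momentum_gradient and its signature are illustrative, not a required interface):

import numpy as np

def momentum_gradient(grads, beta):
    # V = sum(beta**i * grads[i]) for i = 0..n-1,
    # where grads[0] is the current (newest) gradient.
    grads = np.asarray(grads, dtype=float)
    weights = beta ** np.arange(len(grads))  # [1, beta, beta**2, ...]
    return float(np.dot(weights, grads))

print(momentum_gradient([0.9, 1, 0.88, 0.84, 0.89], 0.5))  # 1.780625

This is just the unrolled form of the usual momentum recurrence v = g + beta * v_prev, evaluated with grads[0] as the newest gradient.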

Sample Input:
grads: [0.9, 1, 0.88, 0.84, 0.89] (a Python list)
beta: 0.5

Expected Output:
1.780625 (a Python float)
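Check: with beta = 0.5 the weights are 1, 0.5, 0.25, 0.125 and 0.0625, so V = 1·0.9 + 0.5·1 + 0.25·0.88 + 0.125·0.84 + 0.0625·0.89 = 0.9 + 0.5 + 0.22 + 0.105 + 0.055625 = 1.780625.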



Note: numpy has already been imported as np (import numpy as np).