Forward and Backward Propagation in a Neural Network

Neural Networks

Difficulty: 7 | Problem written by ankita

Neural networks are trained with a feed-forward pass, which computes node activations layer by layer, and backpropagation, which computes the derivative of the loss function with respect to each weight and bias.

In this problem, you are required to implement the feed-forward and backpropagation algorithms of a custom-defined fully connected neural network.

The activation function must be sigmoid.

The loss function must be binary cross-entropy loss.

The weights and biases must be initialized to zeros before training.
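As a reference, the required sigmoid activation, its derivative, and the binary cross-entropy loss can be sketched as follows (a minimal sketch; the helper names are illustrative, not part of the required interface):

```python
import numpy as np

def sigmoid(z):
    # Plain sigmoid; numerically fine for this problem's input range.
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_deriv(a):
    # Derivative expressed in terms of the activation a = sigmoid(z).
    return a * (1.0 - a)

def bce_loss(y_true, y_pred, eps=1e-12):
    # Mean binary cross-entropy; eps guards against log(0).
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
```

Note that with all-zero weights every pre-activation is 0, so every activation is sigmoid(0) = 0.5.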


X: a matrix of training samples, one row per sample

y: labels of the training data

l: list of number of neurons in each hidden layer


After one forward and one backward pass over the complete dataset, return the derivative values of:

deriv_w: list of derivatives of the loss function with respect to the weights of each layer (hidden and output)

deriv_b: list of derivatives of the loss function with respect to the biases of each layer (hidden and output)

For example:

X = [[2.55337307, 1.52481329], [0.95618789, 1.22932837]]

y = [0,1]

l = [2] (one hidden layer with 2 neurons)

The output layer is not included in l; its size must be set by the user to the number of classes in y.


deriv_w = [array([[0., 0.], [0., 0.]]), array([[ 0.25, -0.25], [ 0.25, -0.25]])]

deriv_b = [array([[0., 0.]]), array([[ 0.5, -0.5]])]


I would recommend watching this video (https://www.youtube.com/watch?v=x_Eamf8MHwU) by Andrew Ng to understand how to calculate the derivatives with respect to the weights and biases using delta matrices.

You can also refer to the Wikipedia article on backpropagation (https://en.wikipedia.org/wiki/Backpropagation) for a better understanding of the algorithm.
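The delta-matrix recursion described in those references can be sketched as follows (the names `delta_next`, `W_next`, and `a` are illustrative: the next layer's delta, the next layer's weights, and this layer's stored activations):

```python
import numpy as np

def hidden_delta(delta_next, W_next, a):
    # delta^l = (delta^{l+1} @ W_{l+1}.T) * sigmoid'(z^l),
    # with sigmoid' written in terms of the stored activation a = sigmoid(z^l).
    return (delta_next @ W_next.T) * a * (1.0 - a)
```

With zero-initialized weights, `W_next` is all zeros, so every hidden-layer delta (and hence every hidden-layer gradient) comes out exactly zero after the first pass.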

Sample Input:
<class 'list'>
X: [[2.55337307, 1.52481329], [0.95618789, 1.22932837], [0.75296472, 3.24716693], [-0.93797213, 1.26415069], [-0.39155179, 2.39860195], [0.71028504, 0.72796597], [-3.63427663, 1.35134052], [-0.24226226, -1.84743763], [2.96405404, -0.68521863], [-2.83764855, -1.61462203]]
y: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0]
l: [2]

Expected Output:
<class 'tuple'>
([array([[0., 0.], [0., 0.]]), array([[0.05, 0.05], [0.05, 0.05]])], [array([[0., 0.]]), array([[0.1, 0.1]])])
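Under one common set of conventions (one-hot targets, sigmoid at every layer, gradients averaged over the batch, and the output delta simplifying to activations minus targets for sigmoid with cross-entropy), a full pass can be sketched as below. This is a sketch under those assumptions, not the grader's reference solution; the grader's exact conventions (one-hot column order, averaging versus summing over samples) may differ, so treat the expected output above as ground truth.

```python
import numpy as np

def train_one_pass(X, y, l):
    # One forward and one backward pass with zero-initialized weights and biases.
    # Layer sizes: input -> hidden layers in l -> output (number of classes in y).
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)  # one-hot targets

    sizes = [X.shape[1]] + list(l) + [len(classes)]
    # Zero initialization, as the problem requires.
    W = [np.zeros((sizes[i], sizes[i + 1])) for i in range(len(sizes) - 1)]
    b = [np.zeros((1, sizes[i + 1])) for i in range(len(sizes) - 1)]

    # Forward pass: sigmoid activation at every layer.
    activations = [X]
    for Wi, bi in zip(W, b):
        activations.append(1.0 / (1.0 + np.exp(-(activations[-1] @ Wi + bi))))

    # Backward pass: for sigmoid + binary cross-entropy, the output
    # delta simplifies to (a - Y); hidden deltas use the recursion.
    m = X.shape[0]
    delta = activations[-1] - Y
    deriv_w, deriv_b = [], []
    for i in range(len(W) - 1, -1, -1):
        deriv_w.insert(0, activations[i].T @ delta / m)
        deriv_b.insert(0, delta.mean(axis=0, keepdims=True))
        if i > 0:
            a = activations[i]
            delta = (delta @ W[i].T) * a * (1.0 - a)
    return deriv_w, deriv_b
```

Because the weights start at zero, every activation is 0.5 and every hidden-layer delta vanishes, which is why the gradient arrays for all layers before the last are exactly zero in the examples above.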



Note: numpy has already been imported as np (import numpy as np).