
Derivative of Sigmoid for Backpropagation

Neural Networks

Difficulty: 2 | Problem written by Mr. Umair
Derivatives are central to backpropagation. Gradient descent computes the optimal set of weights using the derivative of the sigmoid function, since sigmoid is the activation used by the output neuron in a neural network.

Given an input value to a neuron, find "the neuron's ability to learn" by calculating the derivative of the sigmoid function at that specific input value.

Note: You can use the built-in math.exp function to calculate the sigmoid value.

Example Input:

Input to Neuron (x) = -2

Example Output:

Derivative of Sigmoid = 0.1049935854035065

 

Sample Input:
<class 'int'>
x: -2

Expected Output:
<class 'float'>
0.1049935854035065
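One way to approach this (a sketch, not the official MLPro solution): compute s = sigmoid(x), then apply the identity sigma'(x) = sigma(x) * (1 - sigma(x)). The function name sigmoid_derivative below is just an illustrative choice.

```python
import math

def sigmoid_derivative(x):
    # sigmoid(x) = 1 / (1 + e^(-x))
    s = 1.0 / (1.0 + math.exp(-x))
    # Identity: sigma'(x) = sigma(x) * (1 - sigma(x))
    return s * (1.0 - s)

print(sigmoid_derivative(-2))  # approximately 0.1049935854035065
```

The identity avoids differentiating the quotient directly: since sigma'(x) = e^(-x) / (1 + e^(-x))^2, factoring gives sigma(x) * (1 - sigma(x)), so only one call to math.exp is needed.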

