Added Gradient Descent [Python] #348
Conversation
Please see comments
gradient_descent/gradient_descent.py
Outdated
:param data_input_tuple: Input tuple of a particular example
:return: Value of hypothesis function at that point.
Note that parameter input value is fixed as 1.
Also known as 'biased input' inn ML terminology and the parameter associated with it
Change "inn" to "in".
@yashLadha Please review again.
gradient_descent/gradient_descent.py
Outdated
while True:
    j = j+1
    temp_parameter_vector = [0, 0, 0, 0]
    for i in range(0, len(parameter_vector)):
for i in range(0, len(parameter_vector)):
can be changed to for i in range(len(parameter_vector)):
gradient_descent/gradient_descent.py
Outdated
def test_gradient_descent():
    for i in range(len(test_data)):
        print("Actual output value:",  output(i, 'test'))
print("Actual output value:", output(i, 'test'))
should be changed to print("Actual output value:", output(i, 'test'))
i.e. remove one space between ,
and output..
gradient_descent/gradient_descent.py
Outdated
relative_error_limit = 0
j = 0
while True:
    j = j+1
j = j+1 can be changed to j += 1.
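Taken together, those style suggestions would make the update loop look roughly like the sketch below. The gradient helper, the starting values, and the iteration cap are assumptions made only so the skeleton runs on its own; they are not taken from the PR.

# Sketch only, not the PR's exact code. _calculate_gradient(i) is an assumed
# helper returning the partial derivative of the cost w.r.t. parameter i;
# here it is stubbed out so the skeleton is self-contained.
learning_rate = 0.00015
parameter_vector = [2, 4, 1, 5]

def _calculate_gradient(i):
    return 0.0  # placeholder; the real gradient depends on the training data

j = 0
while True:
    j += 1
    temp_parameter_vector = [0, 0, 0, 0]
    for i in range(len(parameter_vector)):
        temp_parameter_vector[i] = parameter_vector[i] - learning_rate * _calculate_gradient(i)
    parameter_vector = temp_parameter_vector
    if j >= 10000:  # arbitrary cap for the sketch; real code would test the error limits instead
        break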
@aashutoshrathi Please review again.
gradient_descent/gradient_descent.py
Outdated
hyp_val = 0
for i in range(len(parameter_vector) - 1):
    hyp_val = hyp_val + data_input_tuple[i]*parameter_vector[i+1]
hyp_val = hyp_val + 1*parameter_vector[0]
@Prakash2403 Sir, why is this 1*parameter_vector[0] and not parameter_vector[0]?
Lines 35 and 36 can be changed to the += format.
As I have mentioned in the comments, there is always a biased input in artificial neural networks or any ML hypothesis, whose value is fixed as 1. I wanted to make this fact explicit in the code too, which is why I wrote 1*parameter_vector[0] instead of parameter_vector[0]. But now I guess it's better to keep it as parameter_vector[0]. Thanks for pointing it out.
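For illustration, here is a minimal sketch of that intent, reusing the names from this PR but not claiming to be its exact code, and applying the += form suggested above: parameter_vector[0] multiplies an implicit biased input of 1, while the remaining parameters multiply the example's actual feature values.

# Sketch only: hypothesis value with the biased input folded into the first parameter.
def _hypothesis_value(data_input_tuple, parameter_vector):
    hyp_val = parameter_vector[0]          # biased input: 1 * parameter_vector[0]
    for i in range(len(parameter_vector) - 1):
        hyp_val += data_input_tuple[i] * parameter_vector[i + 1]
    return hyp_val

# Example: 3 features plus the bias parameter in front.
print(_hypothesis_value((1.0, 2.0, 3.0), [0.5, 1.0, 1.0, 1.0]))  # -> 6.5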
@aashutoshrathi Please review again.
gradient_descent/gradient_descent.py
Outdated
:return: Value of hypothesis function at that point.
Note that there is an 'biased input' whose value is fixed as 1.
It is not explicitly mentioned in input data.. But, ML hypothesis functions use it.
So, we have to take care of it separately. Line 36 takes care of it.
Remove Trailing Whitespaces
Where is the issue exactly?
At the end of Line 31
Got it, thanks.
@aashutoshrathi Please review again.
Just update your branch
- Added open source datasets
Please see comments
gradient_descent/gradient_descent.py
Outdated
plt.ylabel('Output Values')
plt.xlim([-1, 7])
plt.legend()
plt.show()
Won't the program fail to exit if you haven't closed the plot explicitly? This may keep the travis-ci build from completing. Add a timer or something so that it gets closed.
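A minimal sketch of one way to do that is shown below, assuming a short pause is acceptable as the "timer"; the 3-second value is an arbitrary choice for the sketch, not something the PR specifies.

# Sketch: show the current figure without blocking, then close it so the process can exit.
import matplotlib.pyplot as plt

def show_plot_briefly(seconds=3):
    """Display the current figure non-blocking, then close it after a delay."""
    plt.show(block=False)   # return control to the script immediately
    plt.pause(seconds)      # keep the window visible for a few seconds
    plt.close('all')        # close every figure so the script (and a CI run) can finish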
Please see comments
gradient_descent/gradient_descent.py
Outdated
parameter_vector = numpy.asmatrix([2, 4, 1, 5, 4, 1, 2, 2, 3, 1, 1, 2]).transpose()
learning_rate = 0.00015
absolute_error_limit = 0.000015
absolute_error_limit = 0.000015/4
Why is this so? Why are you not using the simplified version?
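For reference, a common simplified form of such a stopping test is a combined tolerance check on successive parameter vectors. The sketch below is an assumption about the intent, including how the two limits shown in the diff would map to atol and rtol; it is not the PR's actual comparison.

# Sketch: convergence test using absolute/relative tolerances on successive parameters.
import numpy

def has_converged(new_params, old_params, atol=0.000015, rtol=0.000015 / 4):
    """True when every parameter changed by no more than atol + rtol * |old| (assumed limits)."""
    return numpy.allclose(new_params, old_params, atol=atol, rtol=rtol)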
gradient_descent/gradient_descent.py
Outdated
""" | ||
Downloads test and train data from GitHub repository | ||
""" | ||
import requests |
Put the import where all the other declarations are defined.
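In other words, requests would sit with the module-level imports rather than inside the download function. A sketch of that layout follows; the helper name and body are assumed, not taken from the PR.

# Sketch: all imports grouped at the top of gradient_descent.py.
import requests

def download_data(url, destination):
    """Download one dataset file from the given URL to a local path (assumed helper)."""
    response = requests.get(url)
    response.raise_for_status()
    with open(destination, 'wb') as data_file:
        data_file.write(response.content)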
There are conflicts in the branch.
@yashLadha Please review again.
@Prakash2403 Have you passed a parameter for plotting the graph?
Please see comments
gradient_descent/gradient_descent.py
Outdated
hypothesis_output = _hypothesis_value(input_data,
                                      parameter_vector=parameter_vector)
num_examples = len(output_data)
plt.plot(range(num_examples), actual_output, 'r', label='Actual Output')
Check for the argument passed for plotting the graph.
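The check in question could look roughly like the sketch below; only the len(sys.argv) == 2 test is visible in the diff, everything else here is assumed.

# Sketch: only plot when an extra command-line argument is supplied.
import sys
import matplotlib.pyplot as plt

def maybe_plot(actual_output, hypothesis_output):
    """Plot actual vs. hypothesis output only if a plotting argument was passed."""
    if len(sys.argv) == 2:  # e.g. `python gradient_descent.py plot`
        num_examples = len(actual_output)
        plt.plot(range(num_examples), actual_output, 'r', label='Actual Output')
        plt.plot(range(num_examples), hypothesis_output, 'b', label='Hypothesis Output')
        plt.legend()
        plt.show()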
Add matplotlib to requirements. @Prakash2403
- Added matplotlib in python3 requirements.txt
@yashLadha Please review again.
Nice Work 😄 👍
actual_output = output_data
hypothesis_output = _hypothesis_value(input_data,
                                      parameter_vector=parameter_vector)
if len(sys.argv) == 2:
Nice Work
@Prakash2403 Can you please rebase this PR? matplotlib has already been added.
@aviaryan Sir, there is an ongoing discussion regarding this issue on Gitter, so it is best not to merge the PR right now.
@Prakash2403 Sir, please update this PR.
Please reopen if you wish to continue.
Fixes #345
By submitting this pull request I confirm I've read and complied with the below declarations.
My PR title follows the form Added {Algorithm/DS name} [{Language}], not Update README.md or Added new code.