OK, I'm looking at `mxnet_backend.py`, especially at how gradients are calculated, since I'm developing a custom optimizer. However, `K.gradients` does not implement any kind of call into mx:
```python
def gradients(loss, variables):
    """Returns the gradients of `variables` (list of tensor variables)
    with regard to `loss`.
    """
    raise NotImplementedError
```
So my question is: how are gradients actually calculated in my optimizer, say in a call like `grads = K.gradients(loss, params)`?
I mean, how the heck does even SGD work with the mx backend?
Thanks