I have been following the examples in RNN-Denoise. However, when testing the quantized model, I noticed that the quantized output decays very quickly toward zero:

Resetting the states of each RNN layer after each prediction makes the output more reasonable, but still does not give a good noise-reduction result (see the sketch below for how I reset the states):

Is there something I am missing? Note that the model used is the one provided in the repo, and the rest of the code is identical to the examples in RNN-Denoise. Thanks!
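For clarity, this is roughly the per-prediction state reset I mean, shown as a Keras-style sketch (the model path and the `frames` array are placeholders, not the repo's actual code; the quantized NNoM path would need the equivalent state clearing on its side):

```python
import numpy as np
from tensorflow import keras

# Hypothetical placeholders: the actual model path and preprocessed
# feature frames come from the RNN-Denoise example scripts.
model = keras.models.load_model("weights.h5")
frames = np.load("features.npy")  # shape: (n_frames, n_features)

outputs = []
for frame in frames:
    # batch=1, timestep=1, matching a stateful model built with batch size 1
    y = model.predict(frame[np.newaxis, np.newaxis, :], verbose=0)
    outputs.append(y)
    # Reset the hidden state of every recurrent layer after each prediction
    for layer in model.layers:
        if isinstance(layer, (keras.layers.GRU, keras.layers.LSTM,
                              keras.layers.SimpleRNN)):
            layer.reset_states()
```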
Note that the model works well in Python Keras with stateful=True. The input for the screenshot above is one of the noisy audio samples from the training dataset, preprocessed with the same methods listed in the repo.
Hi @Tom2096
Since the training isn't quantization-aware, the weight distribution can become extreme. Keras performs well in this scenario because it uses floating-point representations, but after quantization there is a loss of weight resolution. Try stopping the training after a very short period, maybe just 1 or 2 epochs, to see if there is an improvement.
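To make the resolution point concrete, here is a rough illustration (my own sketch, not code from the repo) of how a single extreme weight shrinks the fractional precision available to every other weight under a power-of-two int8 scheme similar in spirit to NNoM's per-layer Q-format selection:

```python
import numpy as np

def quantize_int8(w):
    # Pick the number of fractional bits so the largest magnitude
    # still fits in a signed 8-bit integer.
    dec_bits = 7 - int(np.ceil(np.log2(np.max(np.abs(w)))))
    q = np.clip(np.round(w * 2**dec_bits), -128, 127)
    return q / 2**dec_bits, dec_bits

np.random.seed(0)
well_behaved = np.random.uniform(-0.9, 0.9, 1000)
extreme = np.append(well_behaved, 7.5)  # one outlier weight

for name, w in [("well-behaved", well_behaved), ("extreme", extreme)]:
    wq, dec_bits = quantize_int8(w)
    err = np.max(np.abs(w - wq))
    print(f"{name}: dec_bits={dec_bits}, max quantization error={err:.4f}")
```

One outlier forces the fractional bits down (here from 7 to 4), so the quantization step for all the small weights grows by roughly an order of magnitude, which is why stopping training early, before the weights drift to extremes, can help.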