Hello everybody,
I am currently trying to convert RGAN to MXNet Gluon, but I still cannot reproduce the results, so I suspect there are issues in my code.
Some parts of the TensorFlow code are still confusing to me. It would be very helpful if you could answer a few questions.
In the generator (line 278) we have:
logits_2d = tf.matmul(rnn_outputs_2d, W_out_G) + b_out_G
and in the discriminator (line 317) we have:
logits = tf.einsum('ijk,km', rnn_outputs, W_out_D) + b_out_D
I understand that both perform a weighted sum that projects the hidden dimension, but what is the difference between matmul and einsum in this case?
Are W_out/b_out static tensors, or are they updated during training?
Could these operations be replaced by Dense/Linear layers? They are able to reduce dimensions as well.
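For context, here is my current understanding of the matmul/einsum difference, as a minimal NumPy sketch (the shapes are made up for illustration, not taken from the repo): matmul needs a 2-D input, so the RNN output has to be flattened first, while einsum can apply the same projection directly to the 3-D tensor.

```python
import numpy as np

# Hypothetical shapes for illustration: batch=2, seq_len=5, hidden=4, out_dim=3
rng = np.random.default_rng(0)
rnn_outputs = rng.standard_normal((2, 5, 4))  # 3-D: (batch, time, hidden)
W_out = rng.standard_normal((4, 3))
b_out = rng.standard_normal(3)

# einsum projects the last axis of the 3-D tensor directly
logits_einsum = np.einsum('ijk,km->ijm', rnn_outputs, W_out) + b_out

# matmul requires a 2-D operand: flatten (batch, time) together, project, reshape back
rnn_outputs_2d = rnn_outputs.reshape(-1, 4)              # (batch*time, hidden)
logits_matmul = (rnn_outputs_2d @ W_out + b_out).reshape(2, 5, 3)

# Both paths produce the same logits
assert np.allclose(logits_einsum, logits_matmul)
```

If that equivalence holds, I would expect a Gluon Dense layer with flatten=False (which applies the projection to the last axis of an N-D input) to do the same job, but please correct me if the original code relies on something more than a per-timestep linear projection.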
Thank you