
Output layer size in the meta-training and meta-testing phases #2

AhmedFrikha opened this issue May 7, 2020 · 1 comment

@AhmedFrikha

My understanding is that you use a single fully connected layer on top of the neuro-modulated representations.

  1. Does this output layer have 963 nodes during meta-training, since you are performing 963-class classification during meta-training?
  2. If yes, how do you use the meta-learned output layer to learn/perform 600-class classification at meta-testing time? Or do you randomly initialize a new output layer with 600 nodes?
@ZhichenML

Hi, I was wondering if you have addressed this concern. From the code, the network has an output layer mapping from 2304 units to 1000 units, where the 1000 units cover both the 963 training classes and the 600 testing classes. It is really weird to me!
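The two alternatives the thread is debating can be sketched as follows. This is a minimal NumPy illustration, not the repository's actual code: the dimensions (2304 features, 1000 output units, 600 test classes) come from the comment above, while the weight initialization and variable names are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

FEATURE_DIM = 2304     # size of the neuromodulated representation (per the comment)
TOTAL_CLASSES = 1000   # output layer width reported in the code
N_TEST_CLASSES = 600   # classes used at meta-test time

# Meta-trained output head: a single fully connected layer, 2304 -> 1000.
W_meta = rng.standard_normal((TOTAL_CLASSES, FEATURE_DIM)) * 0.01

# Reading 1 (what the commenter sees in the code): keep the shared 1000-way
# head and simply produce logits over all 1000 units at meta-test time.
features = rng.standard_normal(FEATURE_DIM)
logits_shared = W_meta @ features          # shape (1000,)

# Reading 2 (the alternative the issue asks about): discard the meta-trained
# head and randomly initialize a fresh 600-way head for the test classes.
W_fresh = rng.standard_normal((N_TEST_CLASSES, FEATURE_DIM)) * 0.01
logits_fresh = W_fresh @ features          # shape (600,)

print(logits_shared.shape)  # (1000,)
print(logits_fresh.shape)   # (600,)
```

Under reading 1, the 1000-way head would have to reserve disjoint index ranges for training and test classes, which is presumably what the commenter finds odd, since 963 + 600 exceeds 1000.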
