question about parameter 'biascoef' #20
Comments
Sergey Demyanov (replying to lixuan's email of Fri, Jun 5, 2015, 3:19 PM):
Hi, Xuan Li. biascoef is just a coefficient that adapts the learning rate for the biases. If you […]
Regards,
Sergey Demyanov, PhD candidate, http://www.demyanov.net/
Xuan Li:
Thank you for your explanation.
Sergey Demyanov (replying to lixuan's email of Fri, Jun 5, 2015, 5:59 PM):
No, but it should be very easy to do. Just take a look at how biascoef works.
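As background for the exchange above, here is a minimal sketch of what a per-layer bias learning-rate coefficient such as biascoef is generally understood to mean in a plain SGD step; the update rule and the variable names are assumptions for illustration, not the toolbox's actual implementation.

% Minimal sketch (assumption, not the toolbox's code): biascoef rescales the
% learning rate used for a layer's biases, so biascoef = 0 freezes the biases
% while the weights keep training with the usual rate alpha.
alpha    = 0.1;                        % params.alpha from the config below
biascoef = 0;                          % per-layer value from the layer structs below
w = randn(3, 3);   dw = randn(3, 3);   % a layer's weights and their gradient
b = randn(3, 1);   db = randn(3, 1);   % the layer's biases and their gradient
w = w - alpha * dw;                    % weight update: independent of biascoef
b = b - biascoef * alpha * db;         % bias update: zero step when biascoef == 0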
Xuan Li:
Hi Sergey,
There is a new problem in my work.
I found that the parameter 'biascoef' may not work as described in the README file.
These are my params and layer structure:
params.batchsize = 128;
params.epochs = 1;
params.alpha = 0.1;
params.momentum = 0.9;
params.lossfun = 'logreg';
params.shuffle = 1;
params.seed = 0;
dropout = 0.5;
layers = {
    struct('type', 'i', 'mapsize', kXSize(1:2), 'outputmaps', kXSize(3)) % 32
    struct('type', 'c', 'filtersize', [3 3], 'outputmaps', 32, 'function', 'sigm', 'dropout', dropout, 'biascoef', 0) % 30
    struct('type', 's', 'scale', [2 2], 'function', 'max', 'stride', [2 2], 'dropout', dropout) % 15
    struct('type', 'c', 'filtersize', [2 2], 'outputmaps', 64, 'function', 'sigm', 'dropout', dropout, 'biascoef', 0) % 14
    struct('type', 's', 'scale', [2 2], 'function', 'max', 'stride', [2 2], 'dropout', dropout) % 7
    struct('type', 'c', 'filtersize', [2 2], 'outputmaps', 64, 'function', 'sigm', 'dropout', dropout, 'biascoef', 0) % 6
    struct('type', 's', 'scale', [2 2], 'function', 'max', 'stride', [2 2], 'dropout', dropout) % 3
    struct('type', 'c', 'filtersize', [2 2], 'outputmaps', 128, 'function', 'sigm', 'dropout', dropout, 'biascoef', 0) % 2
    struct('type', 'f', 'length', 256, 'function', 'sigm', 'dropout', dropout, 'biascoef', 0)
    struct('type', 'f', 'length', kOutputs, 'function', 'soft', 'biascoef', 0)
};
I have set 'biascoef' to 0 for all the convolutional and fully connected layers, but the loss and the test prediction accuracy still change after each training epoch.
As far as I know, the network weights should not change when all 'biascoef' params are set to 0, but that seems inconsistent with what I observe here.
Yours,
Xuan Li.
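A simple way to check whether the layer weights themselves (as opposed to the biases) stay fixed is to snapshot the weight values before and after a single epoch and compare the two snapshots, however the toolbox exposes them. The helper below is a hypothetical sketch under that assumption, not part of the library's API.

% Hypothetical helper (name and calling convention are assumptions): compare two
% weight snapshots, e.g. taken before and after one training epoch, given as
% plain numeric arrays or vectors.
function report_weight_change(w_before, w_after)
  d = abs(w_after(:) - w_before(:));
  fprintf('max |delta w| = %g, mean |delta w| = %g\n', max(d), mean(d));
end

If the reported change is nonzero for the convolutional and fully connected weights, the moving loss is consistent with biascoef affecting only the bias updates, as described in the comments above.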