
Visualizing Intermediate Layer Activations #1

Open
NassimaD opened this issue Jun 10, 2019 · 3 comments

@NassimaD
I'm getting this error in the Visualizing Intermediate Layer Activations part. I used this model:
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, Dropout, GlobalAveragePooling2D
from keras.models import Model

def M_Model():
    # clse is the number of output classes, defined elsewhere
    base_model = InceptionV3(weights=None, include_top=False, input_shape=(3, 224, 224))
    # Classification block
    x = GlobalAveragePooling2D(name='avg_pool')(base_model.output)
    x = Dense(128, activation='relu')(x)
    x = Dropout(0.2)(x)
    x = Dense(clse, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=x)
    return model

and the error is:
display_grid[col * size: (col + 1) * size, row * size: (row + 1) * size] = channel_image
ValueError: could not broadcast input array from shape (32,111) into shape (32,32)

@anktplwl91
Owner

Hi @NassimaD
This is probably because your image size does not match the display-grid slot you are trying to fill: the error shows a (32, 111) channel_image being written into a (32, 32) slot of display_grid, so NumPy cannot broadcast one into the other. You might want to check the parameters images_per_row, n_cols, n_features and size, since they are used to create display_grid:

n_features = layer_activation.shape[-1]   # number of channels in this layer's output
size = layer_activation.shape[1]          # spatial size (height/width) of each feature map
n_cols = n_features // images_per_row     # number of grid rows needed for all channels
display_grid = np.zeros((size * n_cols, images_per_row * size))
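To see the mismatch concretely, here is a small self-contained reproduction (illustrative only, plain NumPy rather than the notebook code) that writes a (32, 111) feature map into a (32, 32) grid slot, which raises exactly the broadcast error above:

import numpy as np

# Grid built with the parameters from the snippet above
size, images_per_row, n_cols = 32, 16, 2
display_grid = np.zeros((size * n_cols, images_per_row * size))

# A (32, 111) slice, as reported in the traceback
channel_image = np.zeros((32, 111))

try:
    display_grid[0:size, 0:size] = channel_image
except ValueError as e:
    print(e)  # could not broadcast input array from shape (32,111) into shape (32,32)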

@NassimaD
Author

Hi,
I have used the same equations with images_per_row = 16. Could you help me select the appropriate hyperparameters for an input size of (224, 224, 3)?
Thanks.

@anktplwl91
Owner

Hi @NassimaD

I ran the same code with an image input size of (224, 224, 3) and it runs fine with no errors. Instead of running the whole notebook, try running just the part below with any image.


import numpy as np
import matplotlib.pyplot as plt
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Input, Dense
from keras.models import Model
from keras.preprocessing import image

inp = Input((224, 224, 3))

inception_model = InceptionV3(include_top=False, input_tensor=inp, pooling='avg')
x = Dense(2)(inception_model.output)

model = Model(inp, x)

# Collect the outputs of the first 50 layers
layer_outputs = [layer.output for layer in model.layers[:50]]

test_image = "image_path"  # path to any test image
img = image.load_img(test_image, target_size=(224, 224))
img_tensor = image.img_to_array(img)
img_tensor = np.expand_dims(img_tensor, axis=0)
img_tensor /= 255.

# Model that maps the input image to the intermediate activations
activation_model = Model(inputs=model.input, outputs=layer_outputs)
activations = activation_model.predict(img_tensor)

layer_names = ['conv2d_1', 'activation_1', 'conv2d_4', 'activation_4', 'conv2d_9', 'activation_9']
activ_list = [activations[1], activations[3], activations[11], activations[13], activations[18], activations[20]]

images_per_row = 16

for layer_name, layer_activation in zip(layer_names, activ_list):
    n_features = layer_activation.shape[-1]   # number of channels
    size = layer_activation.shape[1]          # spatial size of each feature map
    n_cols = n_features // images_per_row
    display_grid = np.zeros((size * n_cols, images_per_row * size))

    for col in range(n_cols):
        for row in range(images_per_row):
            channel_image = layer_activation[0, :, :, col * images_per_row + row]
            # Normalize the channel to the 0-255 range for display
            channel_image -= channel_image.mean()
            channel_image /= channel_image.std()
            channel_image *= 64
            channel_image += 128
            channel_image = np.clip(channel_image, 0, 255).astype('uint8')
            display_grid[col * size : (col + 1) * size, row * size : (row + 1) * size] = channel_image

    scale = 1. / size
    plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0]))
    plt.title(layer_name)
    plt.grid(False)
    plt.imshow(display_grid, aspect='auto', cmap='plasma')

plt.show()

If you are having trouble with how the size of display_grid is calculated, it's actually simple. Since a grid is formed for every activation layer, let's take the first activation layer, which in this case comes from the first convolution layer (a Conv2D) in the InceptionV3 network.

So, n_features = 32 (the number of filters in the first convolution layer)
size = 111 (from the formula O = ((W - K + 2P) / S) + 1, where W is the input image size, K the filter size, P the padding and S the stride; here W = 224, K = 3, P = 0, S = 2, so O = floor((224 - 3) / 2) + 1 = 111)
n_cols = 32 // 16 = 2
display_grid = np.zeros((111 * 2, 16 * 111))

Then, this grid is filled row-wise with images.
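As a quick sanity check on that arithmetic, here is a tiny helper (illustrative; conv_output_size is not part of the notebook) that evaluates the formula above for InceptionV3's first convolution:

def conv_output_size(W, K, P, S):
    # O = ((W - K + 2P) / S) + 1, with floor division as in 'valid' convolution
    return (W - K + 2 * P) // S + 1

print(conv_output_size(224, 3, 0, 2))  # 111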

You can confirm the values of n_features and size using model.summary() after creating the model.
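For example (a minimal sketch, assuming a standard Keras install), the first Conv2D row of the summary should show an output shape of (None, 111, 111, 32), matching the values above:

from keras.applications.inception_v3 import InceptionV3

model = InceptionV3(include_top=False, input_shape=(224, 224, 3))
model.summary()  # look for the first Conv2D layer: (None, 111, 111, 32)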
