Model.call() fails on GraphConvolution layer, cannot connect to other models #41
Comments
Thanks for this bug report! I do not have an immediate solution other than suggesting you try a different Keras version (this package was developed with Keras 1.0.9 in mind) -- maybe the API changed in the meantime.
If you find a solution, please let me know!
…On Fri, Feb 22, 2019 at 1:22 AM Robert Beck ***@***.***> wrote:
Dear Thomas,
Thank you for sharing this interesting package. When trying to link a GraphConvolution model to other models via the call() function, I ran into an error. Here is the minimal code for reproducing it, with the same input structure as in train.py:

```python
from keras.layers import Input
from keras.models import Model
from keras.optimizers import Adam
from kegra.layers.graph import GraphConvolution

featureInput = Input(shape=(1,))
adjacencyInput = Input(shape=(None, None), batch_shape=(None, None), sparse=False)
support = 1
output = GraphConvolution(1, support, activation='linear')([featureInput, adjacencyInput])

# Compile model
graphConvModel = Model(inputs=[featureInput, adjacencyInput], outputs=output)
graphConvModel.compile(loss='mean_squared_error', optimizer=Adam(lr=1e-4))
```

The model compiles successfully, and I can train and predict with it. However, when I try to run the call function, for example like this: `graphConvModel([featureInput, adjacencyInput])`, I get the following error message:
```
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-826-c1aa401b2630> in <module>()
----> 1 graphConvModel([featureInput,adjacencyInput])

~\Anaconda3\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
    455                 # Actually call the layer,
    456                 # collecting output(s), mask(s), and shape(s).
--> 457                 output = self.call(inputs, **kwargs)
    458                 output_mask = self.compute_mask(inputs, previous_mask)
    459

~\Anaconda3\lib\site-packages\keras\engine\network.py in call(self, inputs, mask)
    562             return self._output_tensor_cache[cache_key]
    563         else:
--> 564             output_tensors, _, _ = self.run_internal_graph(inputs, masks)
    565             return output_tensors
    566

~\Anaconda3\lib\site-packages\keras\engine\network.py in run_internal_graph(self, inputs, masks)
    759                             'and output masks. Layer ' + str(layer.name) + ' has'
    760                             ' ' + str(len(output_tensors)) + ' output tensors '
--> 761                             'and ' + str(len(output_masks)) + ' output masks.')
    762         # Update model updates and losses:
    763         # Keep track of updates that depend on the inputs

Exception: Layers should have equal number of output tensors and output masks. Layer graph_convolution_90 has 1 output tensors and 2 output masks.
```
With multiple GraphConvolution layers, the error always occurs at the first layer. Changing node counts does not make a difference. I suspect that the batch-shape difference between the two inputs is why there are 2 output masks, but I could not find a combination of the shape and batch_shape arguments of the inputs that would both compile successfully and avoid the issue.
Setup details:
Keras version: 2.2.4
Tensorflow version: 1.12.0
Sincerely,
Robert Beck
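[Editor's note: the suspicion above about the two output masks can be illustrated without Keras. In Keras 2.2.x, a layer fed a list of inputs can end up reporting one mask per input, while run_internal_graph() demands one mask per output tensor. The sketch below only mimics that bookkeeping; FakeGraphConvolution and run_internal_graph_check are illustrative stand-ins, not kegra or Keras code.]

```python
# Minimal sketch (no Keras required) of why the exception fires and how a
# compute_mask override returning a single mask would silence it.
# All names here are hypothetical, chosen to mirror the traceback above.

class FakeGraphConvolution:
    def call(self, inputs):
        features, adjacency = inputs
        return ["output"]                      # one output tensor

    def compute_mask(self, inputs, mask=None):
        # Default-style behaviour for a layer fed a list of inputs:
        # one mask per input -> two masks here.
        return [None for _ in inputs]

def run_internal_graph_check(layer, inputs):
    # Re-creates the consistency check from network.run_internal_graph().
    outputs = layer.call(inputs)
    masks = layer.compute_mask(inputs)
    if not isinstance(masks, list):
        masks = [masks]
    if len(outputs) != len(masks):
        raise Exception("Layers should have equal number of output "
                        "tensors and output masks.")
    return outputs, masks

layer = FakeGraphConvolution()
inputs = ["features", "adjacency"]
try:
    run_internal_graph_check(layer, inputs)    # 1 tensor vs. 2 masks
except Exception as e:
    print(e)

# A candidate fix: return a single mask for the single output tensor.
layer.compute_mask = lambda inputs, mask=None: None
outputs, masks = run_internal_graph_check(layer, inputs)
print(len(outputs), len(masks))                # 1 1
```

If this diagnosis holds, overriding compute_mask on the real GraphConvolution layer to return None would be the analogous change, though that is untested here.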
Thanks for the quick reply! I am currently unable to revert to 1.0.9 because of my other dependencies. However, from the Keras code it is clear that the run_internal_graph() function at the time did not raise this exception; in fact, there is a TODO comment specifically about adding this check later.
I do not have a solution currently, only a workaround: I explicitly build the combined model with all layers and pre-learned weights to avoid Model.call(), but this is not very practical. If I find a good solution, I'll comment here again.
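[Editor's note: a minimal sketch of this weight-transfer workaround pattern, assuming Keras's layer get_weights()/set_weights() interface. DummyLayer below is a pure-Python stand-in for a Keras layer; the layer lists and weight values are illustrative, not from kegra.]

```python
# Rebuild the combined architecture and copy learned weights layer by
# layer, instead of calling the trained sub-model on new tensors.

class DummyLayer:
    """Stand-in mimicking Keras's get_weights()/set_weights() API."""
    def __init__(self):
        self.weights = [0.0]
    def get_weights(self):
        return list(self.weights)
    def set_weights(self, w):
        self.weights = list(w)

# "Trained" sub-model: pretend training left these weights behind.
trained = [DummyLayer(), DummyLayer()]
trained[0].set_weights([1.5])
trained[1].set_weights([-0.3])

# Freshly built combined model re-declares the same layers plus extras.
combined = [DummyLayer(), DummyLayer(), DummyLayer()]

# Copy the pre-learned weights into the matching layers.
for src, dst in zip(trained, combined):
    dst.set_weights(src.get_weights())

print([l.get_weights() for l in combined])  # [[1.5], [-0.3], [0.0]]
```

With real Keras models the same loop would pair layers of the trained sub-model with the corresponding layers of the rebuilt combined model, which is workable but, as noted above, not very practical.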