Trying to set up network for regression task #34

Could you possibly point me to where to start with this, so I can set it up for a regression task? I plan on using L2_loss.
I have multiple samples of data that lie on the same graph; however, the features at each node are vectors. I was wondering how to get around this, since I'm dealing with 3D tensors:
X: (Samples x Nodes x Features)
Y: (Samples x Nodes x Features) as well
Most of the operations in both implementations seem to only work for X being a matrix, unless I'm mistaken.
Of course, my adjacency and Laplacian will remain 2D matrices for all operations (and so will the Chebyshev polynomials).
For regression, simply swap out the loss function (L2 instead of cross-entropy should do fine).
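For concreteness, a minimal sketch of what the loss swap might look like in Keras. The `Dense` stand-in model and the hyperparameters are placeholders, not the repo's actual GCN; the point is only that regression needs nothing beyond a different loss passed to `compile()`:

```python
import numpy as np
from tensorflow import keras

# Stand-in model: a single dense layer mapping node features to targets.
# In practice this would be the GCN assembled from this repo's layers; the
# only change needed for regression is the loss passed to compile().
model = keras.Sequential([keras.layers.Dense(1, input_shape=(16,))])

model.compile(loss='mean_squared_error',  # L2 loss instead of cross-entropy
              optimizer=keras.optimizers.Adam())

# Dummy regression data: 100 nodes with 16 features each, scalar targets.
X = np.random.randn(100, 16).astype('float32')
y = np.random.randn(100, 1).astype('float32')
model.fit(X, y, epochs=2, verbose=0)
```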
If you have multiple separate graph instances, just concatenate their feature matrices. You can accordingly 'concatenate' the adjacency matrices into a large N_total x N_total matrix with block-diagonal structure, where N_total is the total number of nodes after concatenation.
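A sketch of that block-diagonal batching, using SciPy (the graph sizes and variable names are illustrative, not from the repo):

```python
import numpy as np
import scipy.sparse as sp

# Three separate graphs with 3, 4, and 2 nodes, 5 features per node.
adjacencies = [sp.random(n, n, density=0.5, format='csr') for n in (3, 4, 2)]
features = [np.random.randn(n, 5) for n in (3, 4, 2)]

# Stack the feature matrices along the node axis: (N_total x F).
X = np.vstack(features)

# 'Concatenate' the adjacencies into one block-diagonal
# (N_total x N_total) matrix; nodes of different graphs land in
# disjoint blocks, so no information flows between graphs.
A = sp.block_diag(adjacencies, format='csr')

print(X.shape)  # (9, 5)
print(A.shape)  # (9, 9)
```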
Thanks for the tip! Will try now. Also, have you worked on pooling for this? I know Defferrard et al. built their ChebNet with pooling. Just wondering if there is a Keras implementation yet.
Have a look at my previous work:
https://scholar.google.com/citations?user=83HL5FwAAAAJ
Aside from global gated pooling (in the recent MolGAN paper) and some form of self-attentive pooling (in the NRI paper), we haven't done much along these lines.
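In case it helps, a rough NumPy sketch of the idea behind global gated pooling, paraphrased from the MolGAN-style aggregation rather than taken from any released code (the function and weight names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def global_gated_pool(X, W_gate, W_feat):
    """Collapse node features (N x F) into a single graph-level vector.

    Each node proposes a value vector tanh(X @ W_feat), and a gate
    sigmoid(X @ W_gate) decides how much of it enters the graph-level sum.
    """
    gates = sigmoid(X @ W_gate)    # (N x F'), entries in (0, 1)
    values = np.tanh(X @ W_feat)   # (N x F')
    return np.tanh((gates * values).sum(axis=0))  # (F',)

# Example: pool 9 nodes with 5 features into an 8-dim graph embedding.
rng = np.random.default_rng(0)
X = rng.standard_normal((9, 5))
h_graph = global_gated_pool(X,
                            rng.standard_normal((5, 8)),
                            rng.standard_normal((5, 8)))
print(h_graph.shape)  # (8,)
```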