
Does VNet support to segment more than 2 classes? #15

Open
John1231983 opened this issue Feb 16, 2017 · 11 comments

Comments

@John1231983

Hello, I am grateful for your code. I am working on brain segmentation, which has 4 classes: WM, GM, CSF, and background. You use Dice as the loss function to maximize. It is a very good idea, but it may only work for 2 classes. How can I apply your VNet to brain segmentation? In which layers do I need to change the number of outputs? (Maybe we cannot use the Dice layer, because it only supports two classes.) Thank you in advance.

@gattia

gattia commented Feb 16, 2017

I haven't been using this library/code, but I have been working on creating a network based on v-net as well as u-net.

The u-net example doesn't use a softmax like v-net does. It instead uses a sigmoid activation and produces a single volume, and Dice is computed for that single volume (instead of for the 2 volumes from the softmax in v-net).

I am currently playing with a binary segmentation (the same as v-net does) and want to expand to multiple labels, as you are asking about. I've been using the sigmoid activation and am assuming that it would be possible to output a labelmap for each region of interest (0 = background, 1 = structure of interest). Well, I know it would; you would just have to change the convolution at the end to have shape n,1,1,1, where n = number of labels of interest. I would assume you could calculate a Dice coefficient for each individual labelmap and average those to get your error term.
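The averaging scheme described above (per-label sigmoid maps, one Dice score each, averaged into a single error term) can be sketched in NumPy; this is an illustration of the idea, not code from either repository:

```python
import numpy as np

def soft_dice_per_label(pred, target, eps=1e-5):
    """Soft Dice for multi-label segmentation.

    pred, target: arrays of shape (n_labels, D, H, W), values in [0, 1]
    (e.g. one sigmoid output channel per structure of interest).
    Returns one Dice score per label; averaging them gives a single
    scalar error term, as suggested above.
    """
    axes = tuple(range(1, pred.ndim))        # reduce over the spatial dims
    intersection = np.sum(pred * target, axis=axes)
    denom = np.sum(pred, axis=axes) + np.sum(target, axis=axes)
    return (2.0 * intersection + eps) / (denom + eps)

# two labels, tiny 2x2x2 volumes
target = np.zeros((2, 2, 2, 2))
target[0, 0] = 1.0                           # label 0 fills one slab
pred = target.copy()                         # perfect prediction
dice = soft_dice_per_label(pred, target)
loss = 1.0 - dice.mean()                     # quantity to minimise
```

The small epsilon keeps empty labels (all-zero target and prediction) from producing a division by zero.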

@faustomilletari
Owner

faustomilletari commented Feb 16, 2017 via email

@gattia

gattia commented Feb 16, 2017

Hi,

I'm not positive what the original u-net was using. I am basing this off of a reconstruction of u-net by @jocicmarko, and I am using Keras to do so. You can check out my repo, vnet-implementation. It is somewhat of a mess, as it was not intended to be shared. It uses Keras to implement something close to v-net. I am using a sigmoid activation at the end, the same way the 2D reproduction of u-net that I referenced does. It requires keras as well as keras_contrib, because Deconvolution3D is not yet part of the main Keras library. I believe it can use either Theano or TensorFlow as the backend; I am using TensorFlow.

@jocicmarko

@gattia You can use Upsampling3D and then Convolution3D instead of Deconvolution3D. There are some indications that these work similarly.
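The substitution @jocicmarko describes replaces the learned stride of a transposed convolution with fixed nearest-neighbour upsampling followed by an ordinary (learned) convolution. The upsampling half of that pair is simple enough to sketch in plain NumPy (an illustration of what Keras's UpSampling3D does, not code from the thread):

```python
import numpy as np

def upsample3d(x, factor=2):
    """Nearest-neighbour 3D upsampling: each voxel is repeated
    `factor` times along every spatial axis, which is the fixed,
    parameter-free half of the Upsampling3D + Convolution3D pair."""
    for axis in range(3):
        x = np.repeat(x, factor, axis=axis)
    return x

vol = np.arange(8, dtype=float).reshape(2, 2, 2)
up = upsample3d(vol)
# up has shape (4, 4, 4); a trainable 3x3x3 convolution applied
# afterwards plays the role of the transposed convolution's kernel.
```

The design trade-off: Deconvolution3D learns upsampling and filtering jointly, while the two-step form fixes the upsampling and learns only the filter, which in practice is often reported to behave similarly.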

@John1231983
Author

Hi all, could we use the Dice loss for 4 classes? For example, a brain image: CSF, WM, GM, and background. I am looking for a way to use VNet's Dice loss on brain subjects.

@faustomilletari
Owner

faustomilletari commented Jul 5, 2017 via email

@John1231983
Author

Great reference. I just quickly looked at it. It seems to compute the backward pass for Dice automatically. I think that is a nice point of TensorFlow in comparison with Caffe. Do you think the automatically computed gradient in TensorFlow is correct for the Dice loss case?

@faustomilletari
Owner

faustomilletari commented Jul 5, 2017 via email

@John1231983
Author

Thanks for your suggestion. However, I am new to TensorFlow and would like to ask you one more question. It may not be related to this thread, but it is related to your experiments.
As you know, a layer includes a forward and a backward pass. For example, your Dice layer includes both, implemented as a custom Python layer. However, when I look at the TensorFlow loss layer, it only defines the forward pass (the Dice formulation); I did not see any code for the backward pass. Does TensorFlow compute it automatically?

This is his Dice loss:

import numpy as np
import tensorflow as tf

def dice(pred, labels):
    # pred:   logits of shape [n_voxels, n_classes]
    # labels: int64 class index per voxel, shape [n_voxels]
    n_voxels = labels.get_shape()[0].value
    n_classes = pred.get_shape()[1].value
    pred = tf.nn.softmax(pred)
    # construct sparse one-hot matrix for labels to save space
    ids = tf.constant(np.arange(n_voxels), dtype=tf.int64)
    ids = tf.stack([ids, labels], axis=1)
    one_hot = tf.SparseTensor(indices=ids,
                              values=[1.0] * n_voxels,
                              dense_shape=[n_voxels, n_classes])
    # per-class soft Dice: 2*|pred . label| / (|pred|^2 + |label|)
    dice_numerator = 2.0 * tf.sparse_reduce_sum(one_hot * pred,
                                                reduction_axes=[0])
    dice_denominator = (tf.reduce_sum(tf.square(pred), reduction_indices=[0]) +
                        tf.sparse_reduce_sum(one_hot, reduction_axes=[0]))
    epsilon_denominator = 0.00001

    dice_score = dice_numerator / (dice_denominator + epsilon_denominator)
    dice_score.set_shape([n_classes])
    # minimising (1 - mean Dice over classes)
    return 1.0 - tf.reduce_mean(dice_score)
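(Note: TensorFlow builds the backward pass automatically from the ops used in the forward definition, so no hand-written gradient is needed. As an illustration of what that autodiff gradient looks like for a binary soft Dice term, here is a pure-NumPy sketch with hypothetical helper names, checked against finite differences; this is not the repository's code.)

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-5):
    """1 - soft Dice for the binary case, analogous to the loss above."""
    num = 2.0 * np.sum(pred * target)
    den = np.sum(pred * pred) + np.sum(target) + eps
    return 1.0 - num / den

def soft_dice_grad(pred, target, eps=1e-5):
    """Analytic gradient wrt pred -- the quantity autodiff derives."""
    num = 2.0 * np.sum(pred * target)
    den = np.sum(pred * pred) + np.sum(target) + eps
    # quotient rule on num/den, with d(den)/d(pred_i) = 2 * pred_i
    return -(2.0 * target * den - num * 2.0 * pred) / den ** 2

rng = np.random.default_rng(0)
pred = rng.random(10)
target = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0], dtype=float)

g = soft_dice_grad(pred, target)
# central finite differences should match the analytic gradient closely
h = 1e-6
fd = np.array([
    (soft_dice_loss(pred + h * np.eye(10)[i], target)
     - soft_dice_loss(pred - h * np.eye(10)[i], target)) / (2 * h)
    for i in range(10)
])
```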

@faustomilletari
Owner

faustomilletari commented Jul 5, 2017 via email

@PiaoLiangHXD

@gattia I cannot find your v-net Keras implementation now. Since Keras 2 has been released, have you continued your implementation work?
