Question about mergeResultChannels #44

Open
xupoplar opened this issue Mar 7, 2018 · 1 comment
xupoplar commented Mar 7, 2018

My understanding is that the image features are calculated on each colour channel, and then it calls:
ImagePlus merged = mergeResultChannels(results);
wholeStack.addSlice(merged.getTitle(), merged.getImageStack().getProcessor(1));

I also noticed that in the output training data there is only one value per feature (Sobel, for example):

@Attribute original numeric
@Attribute Hue numeric
@Attribute Saturation numeric
@Attribute Brightness numeric

@Attribute Sobel_filter_0.0 numeric
@Attribute Sobel_filter_1.0 numeric
@Attribute Sobel_filter_2.0 numeric
@Attribute Sobel_filter_4.0 numeric
@Attribute Sobel_filter_8.0 numeric
@Attribute Sobel_filter_16.0 numeric
@Attribute class {'class 1','class 2'}

But I don't understand how mergeResultChannels works. Does it merge the features calculated on the 3 colour channels into one before training? What kind of logic does it use to do the merge? I found the code here, but I don't know what it does.

/**
 * Merge input channels if they are more than 1
 * @param channels results channels
 * @return result image 
 */
ImagePlus mergeResultChannels(final ImagePlus[] channels) 
{
	if(channels.length > 1)
	{						
		ImageStack mergedColorStack = mergeStacks(channels[0].getImageStack(), channels[1].getImageStack(), channels[2].getImageStack());
		
		ImagePlus merged = new ImagePlus(channels[0].getTitle(), mergedColorStack); 
		
		for(int n = 1; n <= merged.getImageStackSize(); n++)
			merged.getImageStack().setSliceLabel(channels[0].getImageStack().getSliceLabel(n), n);
		
		return merged;
	}
	else
		return channels[0];
}

iarganda self-assigned this Mar 9, 2018

iarganda (Collaborator) commented Mar 9, 2018

Hello @xupoplar
Yes, the feature values of each channel are stored separately as an ImageStack of ColorProcessor objects. Then, to create the Weka DenseInstance objects, the three channel values are combined using regular averaging: (R + G + B) / 3.
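
For illustration, a minimal sketch of that averaging step (a hypothetical helper, not the plugin's actual source; pixelInstance, featureStack, x and y are made-up names, and featureStack is assumed to be the merged stack of ColorProcessor slices, one slice per feature):

import ij.ImageStack;
import ij.process.ColorProcessor;
import weka.core.DenseInstance;

/**
 * Hypothetical sketch: build the attribute values for one pixel by averaging
 * the R, G and B values packed into each ColorProcessor feature slice.
 * @param featureStack merged stack of ColorProcessor slices (one per feature)
 * @param x pixel x coordinate
 * @param y pixel y coordinate
 * @return instance with one averaged value per feature plus a class slot
 */
static DenseInstance pixelInstance(final ImageStack featureStack, final int x, final int y)
{
	final int numFeatures = featureStack.getSize();
	final double[] values = new double[numFeatures + 1]; // last slot is the class attribute
	for(int i = 0; i < numFeatures; i++)
	{
		final ColorProcessor cp = (ColorProcessor) featureStack.getProcessor(i + 1);
		final int rgb = cp.get(x, y);      // packed 0xRRGGBB pixel
		final int r = (rgb >> 16) & 0xff;
		final int g = (rgb >> 8) & 0xff;
		final int b = rgb & 0xff;
		values[i] = (r + g + b) / 3.0;     // regular average of the three channels
	}
	return new DenseInstance(1.0, values); // weight 1, values go into the training set
}

This is why each feature (e.g. Sobel_filter_0.0) appears as a single numeric attribute in the ARFF file: the per-channel responses are collapsed into one averaged value per pixel.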
