How to resolve variability in results due to different random seeds? #201
Comments
The issue may be due to the sample rather than the procedure. It would help if you showed the plot for one of the runs.
Thanks for the response. I've attached GIFs looping through some example plots below (all from the same tumor/normal pair, run with 10 different seeds for SNP sampling). The average ploidy values cluster around 1.8 or 3.6, which suggests to me that the program is struggling to pick an appropriate value for dipLogR. For reference, here are the statistics for the 10 runs of this sample (different from the table in my initial post above):
Runs 5, 7 and 8 seem different from the rest and don't seem right. You should look at what the flags are for each fit. You are not specifying the dipLogR being used in the fit when you generate the spider plot; without it, it is hard to assess the quality of the fit.
Hi again, I agree; runs 5/7/8 are definitely the outliers. I should mention that the GIFs loop through the 10 iterations in order of increasing ploidy; the actual order of runs is (3, 10, 9, 4, 1, 2, 6, 7, 8, 5). All of the … You're correct about the spider plots: I accidentally generated them without specifying the dipLogR from each fit, so I've regenerated them with it. Now it's much clearer to see the difference between the first 7 runs (largely similar CNs inferred) and the last 3 runs (the outliers). Do you have any suggestions for general principles to use in evaluating the spider plots and selecting the most reasonable CN profile?
Thanks for creating this tool! The workflow is clear and easy to follow.
I've noticed some concerning variability when I set different random seeds. I understand from #53 that preProcSample randomly selects SNPs from the input matrix, which affects downstream results. I ran 10 tests on the same SNP matrix file with seeds set from 1 to 10 and collected summary statistics after running the whole workflow. The results are below:
run n_snps n_hets pct_hets n_segs dipLogR purity ploidy
1 422984 33283 0.07868619 386 -0.8568592 0.8300489 3.954321
2 422849 33294 0.07873733 350 -0.9517581 0.8853103 4.110511
3 422796 33265 0.07867861 368 -0.9274981 0.8898704 4.027205
4 422947 33281 0.07868835 356 -0.6327828 0.9080758 3.212570
5 422948 33294 0.07871890 335 -0.9326662 0.8872234 4.048639
6 422914 33272 0.07867321 363 -0.9533264 0.8823996 4.122240
7 422891 33281 0.07869877 357 -0.9079667 0.9072686 3.931950
8 422910 33294 0.07872597 358 -0.6944477 0.9000032 3.373917
9 422837 33274 0.07869226 352 -0.7299356 0.9003956 3.462835
10 422731 33292 0.07875457 351 -0.9501981 0.8891565 4.096679
The minimal variation in the first 4 columns makes sense to me as a result of randomizing the SNP selection. However, the ranges of dipLogR, purity, and ploidy are somewhat concerning (similar to the results noted in #59). The spread in ploidy is the hardest to interpret; I'm not sure how confident to be in any given result when the values vary so much.
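To make the contrast concrete, here is a short Python sketch (my own, with the values copied from the table above) that expresses the spread of each column as a coefficient of variation; the SNP counts vary on the order of hundredths of a percent, while ploidy varies by roughly 9%:

```python
from statistics import mean, stdev

# Values copied from the 10-seed summary table above.
n_snps = [422984, 422849, 422796, 422947, 422948,
          422914, 422891, 422910, 422837, 422731]
ploidy = [3.954321, 4.110511, 4.027205, 3.212570, 4.048639,
          4.122240, 3.931950, 3.373917, 3.462835, 4.096679]

def cv(xs):
    """Coefficient of variation: sample sd divided by the mean."""
    return stdev(xs) / mean(xs)

print(f"CV of n_snps: {cv(n_snps):.6f}")  # on the order of 1e-4
print(f"CV of ploidy: {cv(ploidy):.3f}")  # roughly 0.09
```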
My instinct is to use these random tests to infer a distribution of values for purity and ploidy, and report the mean or median as the most likely result. Any other suggestions for rationalizing these differences?
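For reference, a minimal sketch of the pooled estimates suggested above (again using the purity and ploidy columns from the table; whether a single pooled number is meaningful when the runs fall into distinct clusters is exactly the open question):

```python
from statistics import mean, median

# Purity and ploidy across the 10 seeded runs (copied from the table above).
purity = [0.8300489, 0.8853103, 0.8898704, 0.9080758, 0.8872234,
          0.8823996, 0.9072686, 0.9000032, 0.9003956, 0.8891565]
ploidy = [3.954321, 4.110511, 4.027205, 3.212570, 4.048639,
          4.122240, 3.931950, 3.373917, 3.462835, 4.096679]

print(f"purity: mean={mean(purity):.3f}, median={median(purity):.3f}")
print(f"ploidy: mean={mean(ploidy):.3f}, median={median(ploidy):.3f}")
# The median is less sensitive to the three runs (4, 8, 9) whose
# dipLogR and ploidy differ markedly from the other seven.
```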
On the surface, these ranges may not seem that significant, but I'm also noticing pretty big differences in the CN/CF plots with different random seeds (e.g., large chromosomal segments being called at very disparate CNs across different iterations). I can share these plots if that would be helpful. The resulting spider plots are pretty similar overall; does it make sense to use those to select the "best" fit and consider that iteration the most likely "true" result for downstream analysis? Just want to make sure I'm making accurate inferences in light of this apparent variability.
Any insights would be greatly appreciated! Thanks.