Controlling (high) uLTRA RAM usage #15

Open · pre-mRNA opened this issue Jun 23, 2022 · 1 comment

pre-mRNA commented Jun 23, 2022

Hi,

I'm running uLTRA on a cluster with 48 CPU cores and 196 GB RAM per node.

If I call uLTRA like this to align direct RNA reads to an indexed mammalian genome:

uLTRA align "${genome}" "${reads}" "${output_directory}" --index "${ultraIndex}" --ont --t 48

I notice that the uLTRA subprocesses collectively use more RAM than is available, and I get out-of-memory errors from slaMEM and other tools, causing the job to fail.

The current workaround is to limit the number of CPUs used so that I don't exceed the node's maximum RAM.
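For example (the thread count here is illustrative; the value that actually stays under the memory limit will presumably depend on the genome and read set):

uLTRA align "${genome}" "${reads}" "${output_directory}" --index "${ultraIndex}" --ont --t 24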

Is there any way to control uLTRA's maximum memory usage, either per CPU core or overall?

Thanks!

ksahlin (Owner) commented Jul 5, 2022

Hi @pre-mRNA,

There is no solution that would yield identical results. Reducing the number of cores seems like the best option.

Another alternative is to use the parameter --use_NAM_seeds (based on strobemer seeds), which has a fixed peak memory roughly equivalent to running uLTRA with 18 cores (at least on the human genome); this option is also faster. With it, you should be able to specify --t 48.

However, this parameter does not guarantee identical alignments. I observed that uLTRA's alignments with --use_NAM_seeds were slightly worse than the default on the datasets I tried. With --use_NAM_seeds, uLTRA uses StrobeMap instead of slaMEM to find matches. StrobeMap is installed automatically if you installed uLTRA through conda. For details about --use_NAM_seeds, see the section "New since v0.0.4" in the README.
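Applied to your command above, the invocation would look something like this (a sketch; only the seeding flag is added, all other arguments unchanged):

uLTRA align "${genome}" "${reads}" "${output_directory}" --index "${ultraIndex}" --ont --use_NAM_seeds --t 48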
