Add support for generating taxprofiler/funcscan input samplesheets for preprocessed FASTQs/FASTAs #688

Draft
wants to merge 19 commits into
base: dev
Choose a base branch
from
Draft
Show file tree
Hide file tree
Changes from 12 commits
Commits
Show all changes
19 commits
Select commit Hold shift + click to select a range
4 changes: 4 additions & 0 deletions conf/test_hybrid.config
@@ -27,4 +27,8 @@ params {
skip_gtdbtk = true
gtdbtk_min_completeness = 0
skip_concoct = true

// Generate downstream samplesheets
generate_downstream_samplesheets = true
generate_pipeline_samplesheets = null
}
27 changes: 27 additions & 0 deletions docs/output.md
@@ -26,6 +26,9 @@ The pipeline is built using [Nextflow](https://www.nextflow.io/) and processes d

Note that when specifying the parameter `--coassemble_group`, the group ID, or more precisely the term `group-[group_id]`, will be used instead of the sample ID in the corresponding output filenames/directories of the assembly and downstream processes.

The pipeline can also generate input samplesheets for downstream pipelines.
These are stored in `<outdir>/downstream_samplesheets`.

## Quality control

These steps trim away the adapter sequences present in the input reads, trim away low-quality bases, and discard reads that are too short.
@@ -720,6 +723,9 @@ Because of aDNA damage, _de novo_ assemblers sometimes struggle to call a correc

</details>

The pipeline can also generate input samplesheets for downstream pipelines.
These are stored in `<outdir>/downstream_samplesheets`.

### MultiQC

<details markdown="1">
@@ -764,3 +770,24 @@ Summary tool-specific plots and tables of following tools are currently displaye
</details>

[Nextflow](https://www.nextflow.io/docs/latest/tracing.html) provides excellent functionality for generating various reports relevant to the running and execution of the pipeline. This will allow you to troubleshoot errors with the running of the pipeline, and also provide you with other information such as launch commands, run times and resource usage.

### Downstream samplesheets

The pipeline can also generate input files for the following downstream pipelines:

- [nf-core/funcscan](https://nf-co.re/funcscan)
- [nf-core/taxprofiler](https://nf-co.re/taxprofiler)

<details markdown="1">
<summary>Output files</summary>

- `downstream_samplesheets/`
  - `taxprofiler.csv`: Partially filled-out nf-core/taxprofiler `--input` CSV with paths to the preprocessed reads (adapter-trimmed, host-removed, etc.) in `.fastq.gz` format, i.e. the direct input to MEGAHIT, SPAdes, and SPAdesHybrid (see the example rows below).
  - `funcscan.csv`: Filled-out nf-core/funcscan `--input` CSV with absolute paths to the assembled contig FASTA files produced by nf-core/mag, i.e. the direct output of MEGAHIT, SPAdes, and SPAdesHybrid, not bins.

</details>
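For illustration, a generated `taxprofiler.csv` might contain rows like the following. The sample name and paths are hypothetical; the columns are the ones written by the new samplesheet-generation subworkflow, with `instrument_platform` and `fasta` left empty for the user to fill in:

```
sample,run_accession,instrument_platform,fastq_1,fastq_2,fasta
sample1,sample1,,/path/to/outdir/QC_shortreads/fastp/sample1/sample1_1.fastq.gz,/path/to/outdir/QC_shortreads/fastp/sample1/sample1_2.fastq.gz,
```

A generated `funcscan.csv` might look like this (again with hypothetical values):

```
sample,fasta
sample1,/path/to/outdir/Assembly/MEGAHIT/MEGAHIT-sample1.contigs.fa.gz
```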

:::warning
Any generated downstream samplesheet is provided on a 'best effort' basis and is not guaranteed to work straight out of the box!
It may not be complete (e.g. some columns may need to be filled in manually).
:::
3 changes: 3 additions & 0 deletions nextflow.config
@@ -196,6 +196,9 @@ params {
validationShowHiddenParams = false
validate_params = true

// Generate downstream samplesheets
generate_downstream_samplesheets = false
generate_pipeline_samplesheets = null
}

// Load base.config by default for all pipelines
23 changes: 23 additions & 0 deletions nextflow_schema.json
@@ -83,6 +83,26 @@
}
}
},
"generate_samplesheet_options": {
"title": "Downstream pipeline samplesheet generation options",
"type": "object",
"fa_icon": "fas fa-align-justify",
"description": "Options for generating input samplesheets for complementary downstream pipelines.",
"properties": {
"generate_downstream_samplesheets": {
"type": "boolean",
"description": "Turn on generation of samplesheets for downstream pipelines.",
"fa_icon": "fas fa-toggle-on"
},
"generate_pipeline_samplesheets": {
"type": "string",
"description": "Specify which pipeline to generate a samplesheet for.",
"help": "Note that the nf-core/funcscan samplesheet will only include paths to raw assemblies, not bins\n\nThe nf-core/taxprofiler samplesheet will include of paths the pre-processed reads that are used are used as input for _de novo_ assembly.",
"fa_icon": "fas fa-toolbox",
"pattern": "^(taxprofiler|funcscan)(?:,(taxprofiler|funcscan)){0,1}"
}
}
},
"institutional_config_options": {
"title": "Institutional config options",
"type": "object",
@@ -920,6 +940,9 @@
{
"$ref": "#/definitions/reference_genome_options"
},
{
"$ref": "#/definitions/generate_samplesheet_options"
},
{
"$ref": "#/definitions/institutional_config_options"
},
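As a usage illustration (not part of the diff), the two new parameters could be enabled together in a custom config passed with `-c`; the value shown matches the schema pattern above:

```groovy
// Hypothetical user config enabling samplesheet generation for both supported pipelines.
params {
    generate_downstream_samplesheets = true
    generate_pipeline_samplesheets   = 'taxprofiler,funcscan'
}
```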
92 changes: 92 additions & 0 deletions subworkflows/local/generate_downstream_samplesheets/main.nf
Contributor comment:
It looks like @jfy133 used only one workflow, which will selectively generate samplesheets based on params.generate_pipeline_samplesheets. Do you think it would be best to keep that consistent?

Contributor comment:
Also, since FastQ files are being pulled from the publishDir, it might be a good idea to include options that override user inputs for params.publish_dir_mode (so that it is always 'copy' if a samplesheet is generated) and params.save_clipped_reads, params.save_phixremoved_reads, etc., so that the preprocessed FastQ files are published to params.outdir if a downstream samplesheet is generated.
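A minimal sketch of the kind of guard this comment suggests, assuming it would sit alongside the pipeline's other parameter validation; the warning text and placement are illustrative and not part of this PR:

```groovy
// Hypothetical guard: downstream samplesheets point at files under --outdir,
// so warn if the publish mode would leave only symlinks behind.
if (params.generate_downstream_samplesheets && params.publish_dir_mode != 'copy') {
    log.warn("[nf-core/mag] Downstream samplesheets reference files in --outdir; consider --publish_dir_mode 'copy' so the referenced FastQ/FASTA files persist.")
}
```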

@@ -0,0 +1,92 @@
//
// Subworkflow for generating downstream pipeline input samplesheets (nf-core/mag)
//

workflow SAMPLESHEET_TAXPROFILER {
take:
ch_reads

main:
format = 'csv'

// Pick the publish directory of the last short-read preprocessing step that was enabled,
// checked from the latest step (BBNorm) back to the earliest (fastp).
def fastq_rel_path = '/'
if (params.bbnorm) {
fastq_rel_path = '/bbmap/bbnorm/'
} else if (!params.keep_phix) {
fastq_rel_path = '/QC_shortreads/remove_phix/'
} else if (params.host_fasta) {
fastq_rel_path = '/QC_shortreads/remove_host/'
} else if (!params.skip_clipping) {
fastq_rel_path = '/QC_shortreads/fastp/'
}

ch_list_for_samplesheet = ch_reads
.map {
meta, fastq ->
def sample = meta.id
def run_accession = meta.id
def instrument_platform = ""
def fastq_1 = file(params.outdir).toString() + fastq_rel_path + meta.id + '/' + fastq[0].getName()
def fastq_2 = file(params.outdir).toString() + fastq_rel_path + meta.id + '/' + fastq[1].getName()
def fasta = ""
[ sample: sample, run_accession: run_accession, instrument_platform: instrument_platform, fastq_1: fastq_1, fastq_2: fastq_2, fasta: fasta ]
}
.tap{ ch_colnames }

channelToSamplesheet(ch_list_for_samplesheet, "${params.outdir}/downstream_samplesheets/taxprofiler", format)

}

workflow SAMPLESHEET_FUNCSCAN {
take:
ch_assemblies

main:
format = 'csv'

ch_list_for_samplesheet = ch_assemblies
Member comment:

Next thing which I don't think will be so complicated is to add another input channel for bins, and here make an if/else statement if they want to send just the raw assemblies (all contigs) or binned contigs to the samplesheet.

It will need another pipeline level parameter too though --generate_samplesheet_funcscan_seqtype or something

.map {
meta, filename ->
def sample = meta.id
def fasta = file(params.outdir).toString() + '/Assembly/' + meta.assembler + '/' + filename.getName()
[ sample: sample, fasta: fasta ]
}
.tap{ ch_colnames }

channelToSamplesheet(ch_list_for_samplesheet, "${params.outdir}/downstream_samplesheets/funcscan", format)
}
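A minimal sketch of the bins-versus-assemblies switch proposed in the review comment above; the parameter name `generate_samplesheet_funcscan_seqtype` and the `ch_bins` channel are assumptions taken from that comment and are not part of this PR:

```groovy
// Hypothetical: select which sequences feed the funcscan samplesheet.
// 'bins' would require a bins channel to be passed into this subworkflow.
if (params.generate_samplesheet_funcscan_seqtype == 'bins') {
    SAMPLESHEET_FUNCSCAN(ch_bins)        // binned contigs
} else {
    SAMPLESHEET_FUNCSCAN(ch_assemblies)  // raw assemblies (all contigs)
}
```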

workflow GENERATE_DOWNSTREAM_SAMPLESHEETS {
take:
ch_reads
ch_assemblies

main:
def downstreampipeline_names = params.generate_pipeline_samplesheets.split(",")
Member comment:
I've also implemented the same system in createtaxdb now, but with an additional input validation thing that you should also adopt here (i.e., to check that someone doesn't add an unsupported pipeline, or makes a typo).

Check the utils_nfcore_createtaxdb_pipeline file there

Author comment:
Done


if ( downstreampipeline_names.contains('taxprofiler') && params.save_clipped_reads ) { // save_clipped_reads must be true
SAMPLESHEET_TAXPROFILER(ch_reads)
}

if ( downstreampipeline_names.contains('funcscan') ) {
SAMPLESHEET_FUNCSCAN(ch_assemblies)
}
}

// Write a channel of row maps to a samplesheet file: the header line comes from the keys
// of the first element, followed by one line per element, joined with the format's separator.
def channelToSamplesheet(ch_list_for_samplesheet, path, format) {
def format_sep = [csv: ",", tsv: "\t", txt: "\t"][format]

def ch_header = ch_list_for_samplesheet

ch_header
.first()
.map { it.keySet().join(format_sep) }
.concat(ch_list_for_samplesheet.map { it.values().join(format_sep) })
.collectFile(
name: "${path}.${format}",
newLine: true,
sort: false
)
}
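The review thread above asks for validation that `--generate_pipeline_samplesheets` only contains supported pipeline names. A minimal sketch of what such a check might look like, assuming it would live next to the other parameter validation; the helper name is illustrative and not part of this PR:

```groovy
// Hypothetical value check for --generate_pipeline_samplesheets (illustrative only).
def validatePipelineSamplesheetNames(String requested) {
    def supported   = ['taxprofiler', 'funcscan']
    def unsupported = requested.split(',').findAll { !(it in supported) }
    if (unsupported) {
        error("[nf-core/mag] Unsupported value(s) for --generate_pipeline_samplesheets: ${unsupported.join(', ')}. Supported: ${supported.join(', ')}.")
    }
}
```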
18 changes: 18 additions & 0 deletions subworkflows/local/utils_nfcore_mag_pipeline/main.nf
@@ -118,6 +118,11 @@ workflow PIPELINE_INITIALISATION {
//
validateInputParameters(
hybrid

// Validate samplesheet generation parameters
if (params.generate_downstream_samplesheets && !params.generate_pipeline_samplesheets) {
error('[nf-core/createtaxdb] If supplying `--generate_downstream_samplesheets`, you must also specify which pipeline to generate for with `--generate_pipeline_samplesheets! Check input.')
Author comment:
nf-core/mag ?

}
)

// Validate PRE-ASSEMBLED CONTIG input when supplied
@@ -330,6 +335,19 @@ def validateInputParameters(hybrid) {
if (params.save_mmseqs_db && !params.metaeuk_mmseqs_db) {
error('[nf-core/mag] ERROR: Invalid parameter combination: --save_mmseqs_db supplied but no database has been requested for download with --metaeuk_mmseqs_db!')
}

// Validate samplesheet generation parameters
if (params.generate_downstream_samplesheets && !params.generate_pipeline_samplesheets) {
error('[nf-core/mag] If supplying `--generate_downstream_samplesheets`, you must also specify which pipeline to generate for with `--generate_pipeline_samplesheets`! Check input.')
}

if (params.generate_downstream_samplesheets && !params.save_clipped_reads) {
error('[nf-core/mag] Supplied --generate_downstream_samplesheets but missing --save_clipped_reads (mandatory for reads.gz output).')
}

if (params.generate_downstream_samplesheets && params.save_clipped_reads && (params.bbnorm || !params.keep_phix || params.host_fasta || params.skip_clipping)) {
error('[nf-core/mag] Supplied --generate_downstream_samplesheets and --save_clipped_reads is true, but you also need one of the following: --bbnorm true, or --keep_phix false, or --host_fasta true, or --skip_clipping true.')
}
}

//
8 changes: 8 additions & 0 deletions workflows/mag.nf
@@ -25,6 +25,7 @@ include { ANCIENT_DNA_ASSEMBLY_VALIDATION } from '../subworkflows/local/ancient_
include { DOMAIN_CLASSIFICATION } from '../subworkflows/local/domain_classification'
include { DEPTHS } from '../subworkflows/local/depths'
include { LONGREAD_PREPROCESSING } from '../subworkflows/local/longread_preprocessing'
Author comment:
spacing :D

include { GENERATE_DOWNSTREAM_SAMPLESHEETS } from '../subworkflows/local/generate_downstream_samplesheets/main.nf'

//
// MODULE: Installed directly from nf-core/modules
@@ -958,6 +959,13 @@ workflow MAG {
}
}

//
// Samplesheet generation
//
if ( params.generate_downstream_samplesheets ) {
GENERATE_DOWNSTREAM_SAMPLESHEETS ( ch_short_reads_assembly, ch_assemblies )
}

//
// Collate and save software versions
//