Fix URLs and simplify citation (#152)
* Update readme

* Fix url link

* Fix url link in output

* Fix url link in output

* Fix url link in citation

* Add space to badges

* Update CHANGELOG

* Add white background to images

* Update Changelog
LouisLeNezet authored Oct 30, 2024
1 parent 10985a2 commit 70e24ec
Showing 14 changed files with 3,003 additions and 3,019 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -63,6 +63,7 @@ Initial release of nf-core/phaseimpute, created with the [nf-core](https://nf-co
- [#144](https://github.com/nf-core/phaseimpute/pull/144) - Documentation updates
- [#148](https://github.com/nf-core/phaseimpute/pull/148) - Fix awsfulltest github action for manual dispatch
- [#149](https://github.com/nf-core/phaseimpute/pull/149) - Remove the map file from the awsfulltest
- [#152](https://github.com/nf-core/phaseimpute/pull/152) - Fix URLs in the documentation, remove the tools citation from the README, and use a white background for all images in the documentation.

### `Fixed`

2 changes: 1 addition & 1 deletion CITATIONS.md
@@ -26,7 +26,7 @@

> Davies, R. W., Flint, J., Myers, S., & Mott, R. (2016). Rapid genotype imputation from sequence without reference panels. Nature Genetics 48, 965–969.
- [Shapeit](https://odelaneau.github.io/shapeit5/)
- [Shapeit](https://doi.org/10.1038/s41588-023-01415-w)

> Hofmeister RJ, Ribeiro DM, Rubinacci S., Delaneau O. (2023). Accurate rare variant phasing of whole-genome and whole-exome sequencing data in the UK Biobank. Nature Genetics doi: https://doi.org/10.1038/s41588-023-01415-w
59 changes: 13 additions & 46 deletions README.md

Large diffs are not rendered by default.

Binary file modified docs/images/InputSoftware_compatibility.png
Binary file modified docs/images/metro/Impute.png
Binary file modified docs/images/metro/MetroMap.png
2,952 changes: 3 additions & 2,949 deletions docs/images/metro/MetroMap.svg
2,968 changes: 2,968 additions & 0 deletions docs/images/metro/MetroMap_animated.svg
Binary file modified docs/images/metro/PanelPrep.png
Binary file modified docs/images/metro/Simulate.png
Binary file modified docs/images/metro/Validate.png
12 changes: 6 additions & 6 deletions docs/images/metro/txt2image.md
@@ -12,10 +12,10 @@ To use drawio

```bash
drawio --version
drawio docs/images/metro/MetroMap.xml --export --format png --page-index 2 --layers 1 --output docs/images/metro/MetroMap.png --scale 3 --transparent
drawio docs/images/metro/MetroMap.xml --export --format svg --page-index 2 --layers 2,3,4,5 --output docs/images/metro/MetroMap.svg --transparent
drawio docs/images/metro/MetroMap.xml --export --format png --page-index 3 --layers 1 --output docs/images/metro/Simulate.png --scale 3 --transparent
drawio docs/images/metro/MetroMap.xml --export --format png --page-index 4 --layers 0 --output docs/images/metro/PanelPrep.png --scale 3 --transparent
drawio docs/images/metro/MetroMap.xml --export --format png --page-index 5 --layers 1 --output docs/images/metro/Impute.png --scale 3 --transparent
drawio docs/images/metro/MetroMap.xml --export --format png --page-index 6 --layers 0 --output docs/images/metro/Validate.png --scale 3 --transparent
drawio docs/images/metro/MetroMap.xml --export --format png --page-index 2 --layers 1 --output docs/images/metro/MetroMap.png --scale 3
drawio docs/images/metro/MetroMap.xml --export --format svg --page-index 2 --layers 2,3,4,5 --output docs/images/metro/MetroMap.svg
drawio docs/images/metro/MetroMap.xml --export --format png --page-index 3 --layers 1 --output docs/images/metro/Simulate.png --scale 3
drawio docs/images/metro/MetroMap.xml --export --format png --page-index 4 --layers 0 --output docs/images/metro/PanelPrep.png --scale 3
drawio docs/images/metro/MetroMap.xml --export --format png --page-index 5 --layers 1 --output docs/images/metro/Impute.png --scale 3
drawio docs/images/metro/MetroMap.xml --export --format png --page-index 6 --layers 0 --output docs/images/metro/Validate.png --scale 3
```
22 changes: 10 additions & 12 deletions docs/output.md
@@ -12,12 +12,11 @@ The directories listed below will be created in the results directory after the

This step of the pipeline performs QC of the reference panel data and produces the files required for imputation (`--steps impute`). It has two optional modes: reference panel phasing with SHAPEIT5 and removal of specified samples from the reference panel.

- [Remove Multiallelics](#multiallelics) - Remove multiallelic sites from the reference panel
- [Convert](#convert) - Convert reference panel to .hap and .legend files
- [Posfile](#posfile) - Produce a TSV with the list of positions to genotype (for STITCH/QUILT)
- [Sites](#sites) - Produce a TSV with the list of positions to genotype (for GLIMPSE1)
- [Glimpse Chunk](#glimpse) - Create chunks of the reference panel
- [CSV](#csv) - Obtain a CSV from this step
- [Normalize reference panel](#panel-directory) - Remove multiallelic sites from the reference panel and compute allele frequencies if needed
- [Convert](#haplegend-directory) - Convert reference panel to .hap and .legend files
- [Posfile](#sites-directory) - Produce a TSV with the list of positions to genotype for the different tools
- [Chromosomes chunks](#chunks-directory) - Create chunks of the reference panel
- [CSV](#csv-directory) - Obtain a CSV from this step

The directory structure from `--steps panelprep` is:

@@ -55,7 +54,7 @@ A directory containing the reference panel per chromosome after preprocessing. T

</details>

[bcftools](https://samtools.github.io/bcftools/bcftools.html) aids in the conversion of vcf files to .hap and .legend files. A .samples file is also generated. Once that you have generated the hap and legend files for your reference panel, you can skip the reference preparation steps and directly submit these files for imputation. The hap and legend files are input files used with `--tools quilt`.
[`bcftools convert`](https://samtools.github.io/bcftools/bcftools.html#convert) aids in the conversion of VCF files to .hap and .legend files. A .samples file is also generated. Once you have generated the hap and legend files for your reference panel, you can skip the reference preparation steps and directly submit these files for imputation. The hap and legend files are input files used with `--tools quilt`.
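
As a rough sketch, the conversion described above can be reproduced manually with `bcftools convert` and its `--haplegendsample` option (file names here are illustrative, not the pipeline's actual outputs):

```bash
# Convert a phased reference panel VCF into .hap/.legend/.samples files;
# the "panel_chr21" prefix and input path are examples only.
bcftools convert panel_chr21.vcf.gz --haplegendsample panel_chr21
# Writes panel_chr21.hap.gz, panel_chr21.legend.gz and panel_chr21.samples
```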

### Sites directory

@@ -72,9 +71,9 @@ A directory containing the reference panel per chromosome after preprocessing. T

</details>

[bcftools query](https://samtools.github.io/bcftools/bcftools.html) produces VCF (`*.vcf.gz`) files per chromosome. These QCed VCFs can be gathered into a csv and used with all the tools in `--steps impute` using the flag `--panel`.
[`bcftools query`](https://samtools.github.io/bcftools/bcftools.html#query) produces VCF (`*.vcf.gz`) files per chromosome. These QCed VCFs can be gathered into a CSV and used with all the tools in `--steps impute` using the flag `--panel`.

In addition, [bcftools query](https://samtools.github.io/bcftools/bcftools.html) produces tab-delimited files (`*_tsv.txt`) and, together with the VCFs, they can be gathered into a samplesheet and directly submitted for imputation with `--tools glimpse1,stitch` and `--posfile`.
In addition, [bcftools query](https://samtools.github.io/bcftools/bcftools.html#query) produces tab-delimited files (`*_tsv.txt`) and, together with the VCFs, they can be gathered into a samplesheet and directly submitted for imputation with `--tools glimpse1,stitch` and `--posfile`.

### Chunks directory

@@ -86,7 +85,7 @@

</details>

[Glimpse1 chunk](https://odelaneau.github.io/GLIMPSE/) defines chunks where to run imputation. For further reading and documentation see the [Glimpse1 documentation](https://odelaneau.github.io/GLIMPSE/glimpse1/commands.html). Once that you have generated the chunks for your reference panel, you can skip the reference preparation steps and directly submit this file for imputation.
[Glimpse1 chunk](https://odelaneau.github.io/GLIMPSE/glimpse1/) defines chunks where to run imputation. For further reading and documentation see the [Glimpse1 documentation](https://odelaneau.github.io/GLIMPSE/glimpse1/commands.html). Once you have generated the chunks for your reference panel, you can skip the reference preparation steps and directly submit this file for imputation.
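
As an illustration, a standalone GLIMPSE1 chunking run might look as follows; the window and buffer sizes and file names are assumptions, not the pipeline's defaults:

```bash
# Split chromosome 21 of the reference panel into overlapping imputation chunks
GLIMPSE_chunk --input panel_chr21_sites.vcf.gz --region chr21 \
  --window-size 2000000 --buffer-size 200000 \
  --output chunks_chr21.txt
```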

### CSV directory

@@ -117,13 +116,12 @@ The results from steps impute will have the following directory structure:

</details>

[bcftools concat](https://samtools.github.io/bcftools/bcftools.html) will produce a single VCF from a list of imputed VCFs in chunks.
[`bcftools concat`](https://samtools.github.io/bcftools/bcftools.html#concat) will produce a single VCF from a list of imputed VCFs in chunks.
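
A minimal sketch of this concatenation step, assuming the per-chunk VCFs are listed one per line in a text file (file names are illustrative):

```bash
# Concatenate per-chunk imputed VCFs into one compressed VCF and index it
bcftools concat --file-list imputed_chunks.txt -Oz -o imputed_merged.vcf.gz
bcftools index imputed_merged.vcf.gz
```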

## Reports

Reports contain useful metrics and pipeline information for the different modes.

- [Pipeline information](#pipeline-information) - Report metrics generated during the workflow execution
- [MultiQC](#multiqc) - Aggregate report describing results and QC from the whole pipeline
- [Pipeline information](#pipeline-information) - Report metrics generated during the workflow execution

6 changes: 1 addition & 5 deletions docs/usage.md
@@ -95,7 +95,7 @@ panel,chr,vcf,index,hap,legend
| `hap` | Full path to ".hap.gz" compressed file containing the reference panel haplotypes in ["haps" format](https://www.cog-genomics.org/plink/2.0/formats#haps). (Required by QUILT) |
| `legend` | Full path to ".legend.gz" compressed file containing the reference panel sites in ["legend" format](https://www.cog-genomics.org/plink/2.0/formats#legend). (Required by QUILT, GLIMPSE1 and STITCH) |

The `legend` file should be a TSV with the following structure, similar to that from [BCFTOOLS convert documentation](https://samtools.github.io/bcftools/bcftools.html#convert) with the `--haplegendsample` command : File is space separated with a header ("id,position,a0,a1"), one row per SNP, with the following columns:
The `legend` file should be a TSV with the following structure, similar to that from the [`bcftools convert` documentation](https://samtools.github.io/bcftools/bcftools.html#convert) with the `--haplegendsample` command: the file is space-separated with a header (`id`, `position`, `a0`, `a1`), one row per SNP, with the following columns:

- Column 1: variant ID in the form `chromosome:position_<ref allele>_<alternate allele>`
- Column 2: physical position (sorted from smallest to largest)
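
For illustration, the first rows of such a legend file might look like this (coordinates and alleles are hypothetical):

```
id position a0 a1
chr21:5030578_C_T 5030578 C T
chr21:5030588_G_A 5030588 G A
```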
@@ -496,10 +496,8 @@ nextflow pull nf-core/phaseimpute

### Reproducibility

It is a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you'll be running the same version of the pipeline, even if there have been changes to the code since.

First, go to the [nf-core/phaseimpute releases page](https://github.com/nf-core/phaseimpute/releases) and find the latest pipeline version - numeric only (e.g. `1.3.1`). Then specify this when running the pipeline with `-r` (one hyphen) - e.g. `-r 1.3.1`. Of course, you can switch to another version by changing the number after the `-r` flag.
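
A hedged example of a pinned run (the version number and parameters shown are placeholders):

```bash
# Run an exact pipeline release for reproducibility; adjust the version,
# profile and parameters to your setup.
nextflow run nf-core/phaseimpute -r 1.3.1 -profile docker \
    --input samplesheet.csv --outdir results
```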

This version number will be logged in reports when you run the pipeline, so that you'll know what you used when you look back in the future. For example, at the bottom of the MultiQC reports.
@@ -520,7 +518,6 @@ These options are part of Nextflow and use a _single_ hyphen (pipeline parameter

Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments.

Several generic profiles are bundled with the pipeline which instruct the pipeline to use software packaged using different methods (Docker, Singularity, Podman, Shifter, Charliecloud, Apptainer, Conda) - see below.

:::info
@@ -556,7 +553,6 @@ If `-profile` is not specified, the pipeline will run locally and expect all sof

### `-resume`

Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously. For input to be considered the same, not only the names must be identical but the files' contents as well. For more info about this parameter, see [this blog post](https://www.nextflow.io/blog/2019/demystifying-nextflow-resume.html).

You can also supply a run name to resume a specific run: `-resume [run-name]`. Use the `nextflow log` command to show previous run names.
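
For example (the run name shown is a placeholder; use a name reported by `nextflow log`):

```bash
# List previous runs, then resume one of them by name
nextflow log
nextflow run nf-core/phaseimpute -profile docker \
    --input samplesheet.csv --outdir results -resume boring_euler
```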
