General installation and configuration guide for setting up the EpiDiverse analysis pipelines
To start using the EpiDiverse analysis pipelines, follow the steps below:
Nextflow runs on most POSIX systems (Linux, macOS, etc.). It can be installed by running the following commands:
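```bash
# Nextflow requires a recent Java runtime to be available
curl -s https://get.nextflow.io | bash

# Make the executable available on your $PATH (target directory is illustrative)
mv nextflow ~/bin/
```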
See nextflow.io for further instructions on how to install and configure Nextflow itself.
The pipelines themselves need no installation - Nextflow will automatically fetch them from GitHub if e.g. `epidiverse/wgbs` is specified as the pipeline name.
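For example, the following command would fetch and launch the wgbs pipeline (parameters are placeholders):

```bash
# Nextflow downloads and caches the pipeline from GitHub on first use
nextflow run epidiverse/wgbs [PARAMETERS]
```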
The above method requires an internet connection so that Nextflow can download the pipeline files. If you're running on a system that has no internet connection, you'll need to download and transfer the pipeline files manually using the following (pseudo)code:
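```bash
# Download the pipeline release on a machine with internet access
wget https://github.com/EpiDiverse/[PIPELINE]/archive/[VERSION].zip

# Transfer the archive to the offline system, then unpack and run
unzip [VERSION].zip
nextflow run [PIPELINE]-[VERSION]/ [PARAMETERS]
```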
NB: Please replace `[PIPELINE]`, `[VERSION]`, and `[PARAMETERS]` as necessary, depending on the latest release from e.g. https://github.com/EpiDiverse/wgbs/releases
If you would like to make changes to the pipeline, it's best to make a fork on GitHub and then clone the files. Once cloned, you can run the pipeline directly as above.
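For example (GitHub username is illustrative):

```bash
# Clone your fork and run the pipeline from the local copy
git clone https://github.com/<your-username>/wgbs.git
nextflow run ./wgbs [PARAMETERS]
```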
By default, the pipelines run with the `-profile standard` configuration profile. This uses a number of sensible defaults for process requirements and is suitable for running on a simple (if powerful!) basic server. You can see this configuration in `conf/base.config` from the base directory of each pipeline repository.
Be warned of two important points about the default configuration:
* The default profile uses the `local` executor. All jobs are run in the login session. If you're using a simple server, this may be fine. If you're using a compute cluster, this is bad as all jobs will run on the head node. See the Nextflow docs for information about running with other hardware backends; most job scheduler systems are natively supported.
* Nextflow will expect all software to be installed and available on the `$PATH`.
Nextflow can be configured to run on a wide range of different computational infrastructures. In addition to pipeline-specific parameters, it is likely that you will need to define system-specific options. Whilst most parameters can be specified on the command line, it is usually sensible to create a configuration file for your environment. A template for such a config can be found in `assets/custom.config` from the base directory of each pipeline repository.
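As a minimal sketch, a custom config for a SLURM cluster might look like the following (the queue name and resource values are illustrative, not EpiDiverse defaults):

```groovy
// Hypothetical custom.config for a SLURM cluster
process {
    executor = 'slurm'   // submit jobs to SLURM instead of the local executor
    queue = 'batch'      // cluster-specific partition/queue name
    cpus = 2             // default CPUs per process
    memory = '8 GB'      // default memory per process
}
```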
If you are the only person to be running this pipeline, you can create your config file as `~/.nextflow/config` and it will be applied every time you run Nextflow. Alternatively, save the file anywhere and reference it when running the pipeline with `-config /path/to/config`.
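For example:

```bash
# Apply a custom config saved outside the pipeline directory
nextflow run epidiverse/wgbs -config /path/to/config [PARAMETERS]
```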
If you think that there are other people using the pipeline who would benefit from your configuration (e.g. other common cluster setups), please let us know. We can add a new preset configuration profile which can be used by specifying `-profile <name>` when running the pipeline.
The pipelines already come with several such config profiles - see the installation appendices and usage documentation for more information.
If you're unable to use either Docker or Singularity but you have conda installed, you can use the Bioconda environment that comes with the pipeline. Using the predefined `-profile conda` configuration when running the pipeline will take care of this automatically.
If you prefer to build your own environment, running this command will create a new conda environment with all of the required software installed:
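```bash
# Create a conda environment from the file shipped with the pipeline
conda env create -f env/environment.yml

# Activate it before running the pipeline
# (the environment name is defined inside environment.yml)
conda activate [PIPELINE]
```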
The `env/environment.yml` file can be found in the base directory of the pipeline repository. Note that you may need to download this file from the GitHub project page if Nextflow is automatically fetching the pipeline files. Ensure that the Bioconda environment file version matches the pipeline version that you run.
With either Docker or Singularity installed, you can use the predefined `-profile docker` or `-profile singularity` configurations when running the pipeline to take care of software dependencies automatically, using the official container pulled from Docker Hub.
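For example:

```bash
# Resolve all software dependencies via the official container
nextflow run epidiverse/wgbs -profile singularity [PARAMETERS]
```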
If you prefer to use your own container, run the pipeline with the option `-with-singularity <container>` or `-with-docker <container>`, pointing towards a specific image; it will be automatically fetched and used.
If running offline with Singularity, you'll need to download and transfer the Singularity image first:
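```bash
# Pull the image on a machine with internet access
# (the container address is illustrative; see the pipeline docs for the official image)
singularity pull epidiverse-wgbs.img docker://epidiverse/wgbs
```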
Once transferred, use `-with-singularity` but specify the path to the image file:
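```bash
# Point Nextflow at the transferred image file (paths are illustrative)
nextflow run [PIPELINE]-[VERSION]/ -with-singularity /path/to/epidiverse-wgbs.img [PARAMETERS]
```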
To run the pipeline on the EpiDiverse servers (`epi` or `diverse`), use the command line flag `-profile epi` or `-profile diverse` respectively. This tells Nextflow to submit jobs using the SLURM job executor and to use a pre-built conda environment for software dependencies.
There are also three shortcuts available for EpiDiverse species, which can be used in place of `--reference` in pipelines that require a reference genome:

* `--thlaspi`
* `--fragaria`
* `--populus`
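For example:

```bash
# Use the Thlaspi reference shortcut on the EpiDiverse 'epi' server
nextflow run epidiverse/wgbs -profile epi --thlaspi [PARAMETERS]
```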