Commit 56f92f0d authored by Paul McCarthy

ENH: Section on CUDA projects

parent 778c9151
construction of the corresponding conda-package name. For example:

| FSL project | Conda package   |
|-------------|-----------------|
| `NewNifti`  | `fsl-newnifti`  |
There are a small number of exceptions to the above conventions. For example,
the `fdt` project is built into two conda packages - the `fsl-fdt` package,
providing CPU-only executables, and the `fsl-fdt-gpu` package, providing
GPU/CUDA-enabled executables.
## A note on FSL CUDA projects
Most FSL CUDA projects provide both GPU-enabled and CPU-only executables. For
example, the `fsl/fdt` project provides a range of CPU-only executables,
including `dtifit` and `vecreg`, in addition to GPU-enabled executables such
as `xfibres_gpu`.
To accommodate this convention, multiple conda recipes are used for these
"hybrid" projects. For example, the `fsl/fdt` project is built from two
separate recipes:
- `fsl/conda/fsl-fdt`, which builds the CPU-only executables - these recipes
  are built as `linux` and `macos` packages.
- `fsl/conda/fsl-fdt-gpu`, which builds the GPU-enabled executables - these
  recipes are built as `linux-cuda-X.Y` and `macos-cuda-X.Y` packages.
## Automatically generate a conda recipe for a FSL project
conda recipe, but rather on the specific issues that need to be considered
when writing a conda recipe for a FSL project. More general information on
creating conda recipes can be found at these websites:
FSL projects are broadly divided into one of the following categories:
- **`Makefile`-based project**: FSL projects which use a FSL-style
  `Makefile` to compile and install their deliverables (shared libraries,
  scripts, and compiled executables).
- **`Makefile`-based CUDA project**: FSL projects which use a FSL-style
  `Makefile`, and which provide GPU-accelerated executables linked against the
  CUDA toolkit.
- **``-based project**: FSL projects which are written in Python,
  and which have a `` file which is used to build the project as
  a Python package.
The mechanisms by which projects in these categories are built are slightly
different and, therefore, the conda recipes for projects from different
categories will look slightly different. Examples of `Makefile`-based, CUDA,
and ``-based FSL projects, and associated conda recipes, can be
found in the `examples/cpp`, `examples/cuda`, and `examples/python`
sub-directories respectively. Some important details are highlighted below.
### Writing the `meta.yaml` file
The `meta.yaml` file contains metadata about your project, including:
- The conda package name
- The URL of the project git repository
- The version number
as `jinja2` variables using the following syntax:

```
{% set name       = '<conda-package-name>' %}
{% set version    = '<version-number>' %}
{% set repository = '<repository-url>' %}
{% set build      = '0' %}
```
Then use these variables within the recipe YAML, e.g.:

```
name:    {{ name }}
version: {{ version }}
```
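Putting these pieces together, a minimal `meta.yaml` might look something like
the sketch below. The package name, version, and section contents are
illustrative only - real FSL recipes contain more than this:

```
{% set name       = 'fsl-newnifti' %}
{% set version    = '1.0.0' %}
{% set repository = '<repository-url>' %}
{% set build      = '0' %}

package:
  name:    {{ name }}
  version: {{ version }}

source:
  git_url: {{ repository }}
  git_rev: {{ version }}

build:
  number: {{ build }}
```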
setting the `FSLCONDA_REPOSITORY` and `FSLCONDA_REVISION` variables.
shared library dependencies of the project must be present at the time of
compilation, and at run time. This means that the dependencies of your project
may need to be listed twice within the `requirements` section - once under
`host` (or `build`), and again under `run`. We can avoid having to list
dependencies twice by specifying `run_exports` in the dependency recipes. The
`run_exports` section is a trick which can be used within `meta.yaml`, which
essentially allows us to define a dependency as a build-time dependency, and
have it automatically propagated as a run-time dependency.
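For example, a shared library recipe can advertise itself as a run-time
dependency of anything compiled against it like so (a sketch using a
hypothetical `libfoo` package - the pinning expression follows standard
conda-build conventions):

```
build:
  run_exports:
    # Any package which lists libfoo as a host
    # dependency automatically gains a matching
    # run dependency on libfoo.
    - {{ pin_subpackage('libfoo', max_pin='x.x') }}
```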
So if you are writing a conda recipe for a C/C++ project which provides shared
libraries, consider specifying `run_exports` in its recipe, so that dependent
projects do not need to list it twice.
including FSL initialisation scripts and the `Makefile` machinery. As such it
must be installed in order to build `Makefile`-based FSL projects, and must be
present at run time for most FSL commands to function. All FSL conda recipes
must therefore list `fsl-base` as a requirement. C/C++ projects will also need
to have a C++ compiler installed at build time. For example:
```
  - {{ compiler('cxx') }}
  - make
  - fsl-base >=2101.0
```
### Recipes for CUDA projects
> **Note:** Mechanisms for building CUDA packages on macOS do not currently
> exist.
Separate packages are created from `Makefile`-based CUDA projects for
different versions of CUDA. To enable this, the `meta.yaml` file for a CUDA
project needs to contain a couple of additional elements. First, when a CUDA
package is built, an environment variable called `CUDA_VER` will be set in the
environment, containing the `major.minor` CUDA version the package is being
built against. The `meta.yaml` file can read this environment variable in as
a `jinja2` variable like so:
```
{{ '{% set cuda_version = os.environ["CUDA_VER"] %}' }}
```
Built CUDA packages must be labelled with the version of CUDA they were built
against. This is accomplished by adding the CUDA version to the package
build string, like so:
```
  number: {{ '{{ build }}' }}
  string: {{ 'h{{ PKG_HASH }}_cuda{{ cuda_version }}_{{ PKG_BUILDNUM }}' }}
```
When a CUDA package is built, the `nvcc` compiler must already be present on
the build machine, as it cannot be installed through `conda`. However, the
CUDA toolkit *can* be installed through conda, in addition to a "shim" `nvcc`
compiler package which allows `conda` to integrate itself with the externally
installed `nvcc` compiler. A further complication is that different versions
of `nvcc` are compatible with different versions of `gcc`, so the version of
`gcc` that should be installed depends on the version of CUDA against which
the package is being built.
To handle all of these complications, a template `requirements` section for
a CUDA project needs to look something like this:
```
  - fsl-base >=2101.0

  # Different versions of nvcc need
  # different versions of gcc
  {{ '{% if cuda_version in ("9.2", "10.0") %}' }}
  - {{ '{{ compiler("cxx") }} 7.*' }}  # [linux]
  {{ '{% elif cuda_version in ("10.1", "10.2") %}' }}
  - {{ '{{ compiler("cxx") }} 8.*' }}  # [linux]
  {{ '{% elif cuda_version in ("11.0", "11.1") %}' }}
  - {{ '{{ compiler("cxx") }} 9.*' }}  # [linux]
  {{ '{% endif %}' }}
  - make

  # The nvcc_linux-64 package is a shim
  # which will use the system-provided
  # nvcc, but in a manner that is
  # integrated with conda.
  - nvcc_linux-64 {{ '{{ cuda_version }}' }}  # [linux]
```
### Defining the build process
If you are writing a recipe for a ``-based FSL project, a ``
script is generally not required.
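For ``-based recipes, the build step can often be expressed directly
in `meta.yaml` instead of a separate script. The snippet below is a sketch
following common conda packaging practice, not taken from a specific FSL
recipe:

```
build:
  # {{ PYTHON }} expands to the build environment's
  # Python interpreter; --no-deps prevents pip from
  # pulling in dependencies behind conda's back.
  script: {{ PYTHON }} -m pip install . --no-deps -vv
```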
A critical requirement of "hybrid" FSL CUDA projects, which provide both
CPU-only and GPU-capable executables, is that the project `Makefile` must be
able to conditionally compile only the CPU components, **or** only the GPU
components, provided by the project.
For example, the `fdt` project `Makefile` allows
`cpu` and `gpu` flags to be specified to control which parts of the project
are compiled - `make cpu=1` will compile only the CPU components of the `fdt`
project, whereas `make cpu=0 gpu=1` will compile only the GPU components.
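The `cpu`/`gpu` flag convention can be sketched in simplified `Makefile` form.
This is illustrative only - the real `fdt` `Makefile`, its target names, and
its compilation rules differ:

```make
# Hypothetical sketch of conditional CPU/GPU compilation.
cpu ?= 1
gpu ?= 0

CPU_TARGETS := dtifit vecreg
GPU_TARGETS := xfibres_gpu

TARGETS :=
ifeq ($(cpu),1)
  TARGETS += $(CPU_TARGETS)
endif
ifeq ($(gpu),1)
  TARGETS += $(GPU_TARGETS)
endif

# Only the selected components are compiled/installed;
# the real per-target compilation rules are elided here.
all:     $(TARGETS)
install: $(TARGETS)
```

With this structure, `make cpu=1` builds only `$(CPU_TARGETS)`, and
`make cpu=0 gpu=1` builds only `$(GPU_TARGETS)`.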
> The specific `make` invocations that need to be made depend on how the
> project `Makefile` is written.
The `` scripts for the `fsl-fdt` and `fsl-fdt-gpu` recipes take
advantage of this mechanism, so that the `fsl-fdt` recipe will only compile
the CPU components, and the `fsl-fdt-gpu` recipe will only compile the GPU
components.
The `` script for the `fsl-fdt` recipe therefore looks the same as
the template `` script above, except the `make` invocations are as
follows:

```
make cpu=1
make cpu=1 install
```
And the `make` invocations in the `` script for the `fsl-fdt-gpu`
recipe are as follows:

```
make cpu=0 gpu=1
make cpu=0 gpu=1 install
```
### `` and `` scripts.