Parallel processing in Python
While Python has built-in support for threading and parallelisation via the threading and multiprocessing modules (covered in advanced_programming/threading.ipynb), there are a range of third-party libraries which you can use to improve the performance of your code.
Do you really need to parallelise your code?
Before diving in and tearing your code apart, you should think very carefully about your problem, and the hardware you are using to run your code. For example, if you are processing MRI data for a number of subjects on the FMRIB cluster (where each node on the cluster has fairly modest hardware specs, meaning that within-process parallelisation will have limited benefits), the most efficient option may be to write your code in a single-threaded manner, and to parallelise across data sets, processing one data set on each cluster node.
Your best option may simply be to repeatedly call fsl_sub - see the section below for more details on this approach.
Profiling your code
Once you have decided that you need to parallelise some steps in your program, STOP! As the great Donald Knuth once said:
Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time:
premature optimization is the root of all evil.
Yet we should not pass up our opportunities in that critical 3%.
What Knuth is essentially saying is that there is no point in optimising or rewriting a piece of code unless it is actually going to have a positive impact on performance. In other words, before you refactor your code, you need to ensure that you understand where the bottlenecks are, so that you know which parts of the code should be re-written.
One way in which we can find those bottlenecks is through profiling - running our code, and timing each part of it, to identify which parts are causing our program to run slowly.
As a simple example, consider this code. It is running slowly, and we want to know why:
import numpy as np

def process_data(datafile):
    data   = np.loadtxt(datafile)
    result = data.mean()
    return result

print(process_data('mydata.txt'))
The mydata.txt file can be generated by running this code:
import numpy as np
data = np.random.random((10000, 1000))
np.savetxt('mydata.txt', data)
For small functions, you can profile code manually by using the time module, e.g.:
import time
start = time.time()
result = process_data('mydata.txt')
end = time.time()
print(f'Total #seconds: {end - start}')
print(result)
We could also do the same within the process_data function, calculating and reporting the timings for each individual section. But there is a library called line_profiler which can do this for you - it will time every single line of code, and print a report for you:
%load_ext line_profiler
%lprun -f process_data process_data('mydata.txt')
line_profiler is not installed as part of FSL by default, but you can install it with this command:
$FSLDIR/bin/conda install -p $FSLDIR line_profiler
And you can also use it from the command-line, if you are not working in a Jupyter notebook. You can learn more about line_profiler at https://github.com/pyutils/line_profiler
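If you are working outside of a notebook, line_profiler can also be used programmatically. Here is a minimal sketch, assuming the process_data function defined above (the exact report format may differ between line_profiler versions):
from line_profiler import LineProfiler

# Wrap the function we want to profile - calling the wrapped
# version records per-line timings
profiler = LineProfiler()
wrapped  = profiler(process_data)

wrapped('mydata.txt')

# Print the per-line timing report
profiler.print_stats()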
We can see that most of the processing time was actually in loading the data from file, and not in the data processing (simply calculating the mean in this toy example). Without profiling your code, you may have naively thought that the data processing step was the culprit, and needed to be optimised. But what profiling has told us here is that our problem lies elsewhere, and perhaps we need to think about changing how we are storing our data.
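As a purely illustrative sketch of such a change, one option might be to store the array in Numpy's binary .npy format, which is typically much faster to load than a text file (the mydata.npy file name is just an example):
import numpy as np

# Convert the text file into Numpy's binary format once...
data = np.loadtxt('mydata.txt')
np.save('mydata.npy', data)

# ...and subsequent loads will be much faster than np.loadtxt
data = np.load('mydata.npy')
print(data.mean())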
But let's assume from this point on that we do need to optimise our code...
JobLib
https://joblib.readthedocs.io/en/stable/
JobLib has been around for a while, and is a simple and lightweight library with two main features:
- An API for "embarrassingly parallel" tasks.
- An API for caching/re-using the results of time-consuming computations (we won't discuss this in depth in this notebook, but a small sketch is shown below, and you can read more in the JobLib documentation).
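Here is the caching sketch referred to above - the cachedir location and expensive_calculation function are just examples:
from joblib import Memory

# Cache results in a directory on disk
memory = Memory('cachedir', verbose=0)

@memory.cache
def expensive_calculation(x):
    # Imagine something time-consuming here
    return x ** 2

# The first call runs the function; repeated calls with the
# same argument load the cached result from disk instead
print(expensive_calculation(10))
print(expensive_calculation(10))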
On the surface, JobLib does not provide any functionality that cannot already be accomplished with built-in libraries such as multiprocessing and functools. However, the JobLib developers have put a lot of effort into ensuring that JobLib:
- is easy to use,
- works consistently across all of the major platforms, and
- works as efficiently as possible with Numpy arrays
We can use the JobLib API by importing the joblib package (we'll also use numpy and time in this example):
import numpy as np
import joblib
import time
Now, let's say that we have a collection of numpy arrays containing image data for a group of subjects:
datasets = [np.random.random((91, 109, 91)) for i in range(10)]
And we want to calculate some metric on each image. We simply need to define a function which contains the calculation we want to perform (the time.sleep call is there just to simulate a complex calculation):
def do_calculation(input_data):
    time.sleep(2)
    return input_data.mean()
We can now use joblib.Parallel and joblib.delayed to parallelise those calculations. joblib.Parallel sets up a pool of worker processes, and joblib.delayed schedules a single instantiation of the function, and associated data, which is to be executed. We can execute a sequence of delayed tasks by passing them to the Parallel pool:
with joblib.Parallel(n_jobs=-1) as pool:
    tasks   = [joblib.delayed(do_calculation)(d) for d in datasets]
    results = pool(tasks)

print(results)
Just like with multiprocessing, JobLib is susceptible to the same problems with regard to sharing data between processes (these problems are discussed in the advanced_programming/threading.ipynb notebook). In the example above, each dataset is serialised and copied from the main process to the worker processes, and then the result copied back. This behaviour could be a performance bottleneck for your own task, or you may be working on a system with limited memory which is incapable of storing several copies of the data.
To deal with this, we can use memory-mapped Numpy arrays. This is a feature built into Numpy, and supported by JobLib, which involves storing your data in your file system instead of in memory. This allows your data to be simultaneously read and written by multiple processes.
Some other options for sharing data between processes are discussed in the advanced_programming/threading.ipynb notebook.
You can create numpy.memmap arrays directly, but JobLib will also automatically convert any Numpy arrays which are larger than a specified threshold into memory-mapped arrays. This threshold defaults to 1MB, and you can change it via the max_nbytes option when you create a joblib.Parallel pool.
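For example, here is a small sketch which changes the threshold to 50 megabytes (the value is just an example):
# Arrays larger than ~50MB will automatically be memory-mapped
with joblib.Parallel(n_jobs=-1, max_nbytes='50M') as pool:
    results = pool(joblib.delayed(do_calculation)(d) for d in datasets)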
To see this in action, imagine that you have some 4D fMRI data, and wish to fit a complicated model to the time series at each voxel. We will write our model fitting function so that it works on one slice at a time:
def fit_model(indata, outdata, sliceidx):
    print(f'Fitting model at slice {sliceidx}')
    time.sleep(1)
    outdata[:, :, sliceidx] = indata[:, :, sliceidx, :].mean() + sliceidx
Now we will load our 4D data, and pre-allocate another array to store the fitted model parameters.
# Imagine that we have loaded this data from a file
data = np.random.random((91, 109, 91, 50)).astype(np.float32)

# Pre-allocate space to store the fitted model parameters
model = np.memmap('model.mmap',
                  shape=(91, 109, 91),
                  dtype=np.float32,
                  mode='w+')

# Fit our model, processing slices in parallel
with joblib.Parallel(n_jobs=-1) as pool:
    pool(joblib.delayed(fit_model)(data, model, slc) for slc in range(91))

print(model)
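Once the workers have finished, you may want to make sure that the results have been written to disk, or copy them back into a normal in-memory array. A small sketch, assuming the model memory-map from above:
# Make sure that all fitted values have been written to model.mmap
model.flush()

# If needed, copy the results back into a regular in-memory array
fitted = np.array(model)
print(fitted.mean())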
Dask
Dask is a very powerful library which you can use to parallelise your code, and to distribute computation across multiple machines. You can use Dask locally on your own computer, on HPC clusters (e.g. SLURM and SGE) and with cloud compute platforms (e.g. AWS, Azure).
Dask has two main components:
- APIs for defining tasks/jobs - there is a low-level API comparable to that provided by JobLib, but Dask also has sophisticated high-level APIs for working with Pandas and Numpy-style data.
- A task scheduler which builds a graph of all the tasks that need to be executed, and which manages their execution, either locally or remotely.
We will introduce the Numpy API and the low-level API, and then demonstrate how to use Dask to perform calculations on an SGE cluster.
Dask Numpy API
https://docs.dask.org/en/stable/array.html
To use the Dask Numpy API, simply import the dask.array package, and use it instead of the numpy package:
import numpy as np
import dask.array as da
data = da.random.random((1000, 1000, 1000, 20)).astype(np.float32)
data
If you do the numbers, you will realise that the above call has created an array which requires 74 gigabytes of memory - this is far more memory than what is available in most consumer level computers.
The call would almost certainly fail if you made it using np.random.random instead of da.random.random, as Numpy would attempt to create the entire data set (and in fact would temporarily require up to 150 gigabytes of memory, as Numpy uses double-precision floating point (float64) values by default).
However, this worked because:
- Dask automatically splits arrays into smaller chunks - in the above code, data has been split into 1331 smaller arrays, each of which has shape (94, 94, 94, 10), and requires only 64 megabytes (you can inspect and control this chunking - see the short sketch below).
- Most operations in the dask.array package are lazily evaluated. The above call has not actually created any data - Dask has just created a set of jobs which describe how to create a large array of random values.
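Here is the chunking sketch referred to above - inspecting the chunk layout that Dask chose, and specifying one explicitly (the chunk shape used here is just an example):
# The chunk sizes that Dask chose for us
print(data.chunks)

# We can also choose the chunk size ourselves when creating an array
data2 = da.random.random((1000, 1000, 1000, 20),
                         chunks=(250, 250, 250, 10)).astype(np.float32)
print(data2.chunks)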
When using dask.array, nothing is actually executed until you call the compute() function. For example, we can try calculating the mean of our large array:
m = data.mean()
m
But again, this will not actually calculate the mean - it just defines a task which, when executed, will calculate the mean over the data array.
We can execute that task by calling compute() on the result:
print(m.compute())
Dask arrays support most of the functionality of Numpy arrays, but there are a few exceptions, most notably the numpy.linalg package.
Remember that Dask also provides a Pandas-style API, accessible in the dask.dataframe package - you can read about it at https://docs.dask.org/en/stable/dataframe.html.
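As a flavour of that API, here is a minimal sketch - the subjects-*.csv files are hypothetical, and as with dask.array nothing is actually read until compute() is called:
import dask.dataframe as dd

# Lazily read a (hypothetical) collection of CSV files, one partition per file
df = dd.read_csv('subjects-*.csv')

# Compute summary statistics across all of the files
print(df.describe().compute())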
Low-level Dask API
In addition to the Numpy and Pandas APIs, Dask also has a lower-level interface which allows you to create and lazily execute a pipeline made up of Python functions. This API is based around dask.delayed
, which is a Python decorator that can be applied to any function, and which tells Dask that a function should be lazily executed.
As a very simple example (taken from the Dask documentation), consider this simple numerical task, where we have a set of numbers and, for each number x, we want to calculate (x * x), and then add all of the results together (i.e. the sum of squares). We'll start by defining a function for each of the operations that we need to perform:
def square(x):
    return x * x

def sum(values):
    total = 0
    for v in values:
        total = total + v
    return total
We could solve this problem without parallelism by using conventional Python code, which might look something like the following:
data = [1, 2, 3, 4, 5]

output = []
for x in data:
    s = square(x)
    output.append(s)

total = sum(output)
print(total)
However, this problem is inherently parallelisable - we are independently performing the same series of steps to each of our inputs, before the final tallying. We can use the dask.delayed function to take advantage of this:
import dask

output = []
for x in data:
    a = dask.delayed(square)(x)
    output.append(a)

total = dask.delayed(sum)(output)
We have not actually performed any computations yet. What we have done is built a graph of operations that we need to perform. Dask refers to this as a task graph. Dask keeps track of the dependencies of each function that we add, and so is able to determine the order in which they need to be executed, and also which steps do not depend on each other and so can be executed in parallel. Dask can even visualise this task graph for us:
total.visualize()
And then when we are ready, we can call compute() to actually do the calculation:
total.compute()
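By default Dask will pick a sensible local scheduler for delayed tasks, but you can request a specific one when calling compute() - a small sketch:
# Run the task graph using separate processes rather than threads
print(total.compute(scheduler='processes'))

# Or run everything sequentially in the current process - useful for debugging
print(total.compute(scheduler='synchronous'))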
For a more realistic example, let's imagine that we have T1 MRI images for five subjects, and we want to perform basic structural preprocessing on each of them (reorientation, FOV reduction, and brain extraction). Here we're creating this example data set (all of the T1 images are just a copy of the bighead.nii.gz image, from the FSL course data).
import os
import shutil

os.makedirs('braindata', exist_ok=True)

for i in range(1, 6):
    shutil.copy('../../applications/fslpy/bighead.nii.gz', f'braindata/{i:02d}.nii.gz')
And now we can build our pipeline. The fslpy library has a collection of functions which we can use to call the FSL commands that we need.
We need to do a little work, as by default dask.delayed assumes that the dependencies of a function (i.e. other delayed functions) are passed to the function as arguments. There are other methods of dealing with this, but often the easiest option is simply to write a few small functions that define each of our tasks, in a manner that satisfies this assumption:
import fsl.wrappers as fw

def reorient(input, output):
    fw.fslreorient2std(input, output)
    return output

def fov(input, output):
    fw.robustfov(input, output)
    return output

def bet(input, output):
    fw.bet(input, output)
    return output
Again we use dask.delayed to build up a graph of tasks that need executing, starting from the input files, and ending with our final brain-extracted outputs. Again, we are not actually executing anything here - we're just building up a task graph that Dask will then execute for us later:
import glob
import dask
import fsl.data.image as fslimage

inputs = list(glob.glob('braindata/??.nii.gz'))
tasks  = []

for input in inputs:
    basename = fslimage.removeExt(input)
    r = dask.delayed(reorient)(input, f'{basename}_reorient.nii.gz')
    f = dask.delayed(fov)(r, f'{basename}_fov.nii.gz')
    b = dask.delayed(bet)(f, f'{basename}_brain.nii.gz')
    tasks.append(b)
In the previous example we had a single output (the result of summing the squared input values) upon which we called visualize(). Here we have a list of independent tasks (in the tasks list). However, we can still visualise them by passing the entire list to the dask.visualize() function:
dask.visualize(*tasks)
And, similarly, we can call dask.compute to run them all at once:
outputs = dask.compute(*tasks)
print(outputs)
Distributing computation with Dask
In the examples above, Dask was running your tasks in parallel on your local machine. But it is easy to instruct Dask to distribute your computations across multiple machines. For example, Dask has support for executing tasks on an SGE or SLURM cluster, which we have available in Oxford at FMRIB and the BDI.
To use this functionality, we need an additional library called dask-jobqueue which, at the moment, is not installed as part of FSL.
Note that the code cells below will not work if you are running this notebook on your own computer (unless you happen to have an SGE cluster system installed on your laptop).
The first step is to create an SGECluster (or SLURMCluster) object. You need to populate this object with information about a single job in your workflow. At a minimum, you must specify the number of cores and memory that are available to a single job:
from dask_jobqueue import SGECluster
cluster = SGECluster(cores=2, memory='16GB')
You can also set a range of other options, including specifying a queue name (e.g. queue='short.q'), or specifying the total amount of time that your job will run for (e.g. walltime='01:00:00' for one hour).
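For example, here is a sketch which combines these options (the queue name and limits are assumptions - adapt them to your own cluster):
cluster = SGECluster(cores=2,
                     memory='16GB',
                     queue='short.q',
                     walltime='01:00:00')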
The next step is to create some jobs by calling the scale() method:
cluster.scale(jobs=5)
Behind the scenes, the scale() method uses qsub to create five "worker processes", each running on a different cluster node. In the first instance, these worker processes will be sitting idle, doing nothing, and waiting for you to give them a task to run.
The final step is to create a "client" object. This is an important step which configures Dask to use the cluster we have just set up:
client = cluster.get_client()
After creating a client, any Dask computations that we request will be performed on the cluster by our worker processes:
import dask.array as da
data = da.random.random((1000, 1000, 1000, 10))
print(data.mean().compute())
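When you have finished, it is a good idea to shut down the workers so that the cluster resources are released - a small sketch:
# Shut down the worker jobs, and disconnect from the cluster
client.close()
cluster.close()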
fsl-pipe
https://open.win.ox.ac.uk/pages/fsl/fsl-pipe/
fsl-pipe is a Python library (written by our very own Michiel Cottaar) which builds upon another library called file-tree, and allows you to write an analysis pipeline in a declarative manner. A pipeline is defined by:
- A file tree which defines the directory structure of the input and output files of the pipeline.
- A set of recipes (Python functions) describing how each of the pipeline outputs should be produced.
In a similar vein to Dask task graphs, fsl-pipe will automatically determine which recipes should be executed, and in what order, to produce whichever output file(s) you request. You can instruct fsl-pipe to run your recipes locally, to parallelise or distribute them using Dask, or to execute them on a cluster using fsl_sub.
For example, let's again imagine that we have some T1 images upon which we wish to perform basic structural preprocessing, and which are arranged in the following manner:
subjectA/
    T1w.nii.gz
subjectB/
    T1w.nii.gz
subjectC/
    T1w.nii.gz
The code cell below will automatically create a dummy data set with the above structure:
import os
import shutil

for subj in 'ABC':
    subjdir = f'mydata/subject{subj}'
    os.makedirs(subjdir, exist_ok=True)
    shutil.copy('../../applications/fslpy/bighead.nii.gz', f'{subjdir}/T1w.nii.gz')
We must first describe the structure of our data, and save that description as a file-tree file (e.g. mydata.tree). This file contains a placeholder for the subject ID, and gives both the input and any desired output files unique identifiers:
%%writefile mydata.tree
subject{subject}
    T1w.nii.gz          (t1)
    T1w_brain.nii.gz    (t1_brain)
    T1w_fov.nii.gz      (t1_fov)
    T1w_reorient.nii.gz (t1_reorient)
Now we need to define our recipes, the individual processing steps that we want to perform, and that will generate our output files. This is similar to what we did above when we were using Dask - we define functions which perform each step.
from fsl.wrappers import bet, fslreorient2std, robustfov
from fsl_pipe import Pipeline, In, Out
def reorient(t1 : In, t1_reorient : Out):
    fslreorient2std(t1, t1_reorient)

def fov(t1_reorient : In, t1_fov : Out):
    robustfov(t1_reorient, t1_fov)

def brain_extract(t1_fov : In, t1_brain : Out):
    bet(t1_fov, t1_brain)
Note that we have also annotated the inputs and outputs of each function, and have used the same identifiers that we used in our mydata.tree file above. By doing this, fsl-pipe is automatically able to determine the steps that would be required in order to generate the output files that we request.
Once we have defined all of our recipes, we need to create a Pipeline object, and add all of our functions to it:
pipe = Pipeline()
pipe(reorient)
pipe(fov)
pipe(brain_extract)
Note that it is also possible (and equivalent) to create the Pipeline before defining our recipe functions, and to use the pipeline as a decorator, e.g.:
pipe = Pipeline()

@pipe
def reorient(t1 : In, t1_reorient : Out):
    ...
We now need to create a FileTree object which is used to generate file paths for the input and output files. The update_glob('t1') method instructs the FileTree to scan the file system, and to generate file paths for all T1 images that are present.
from file_tree import FileTree
tree = FileTree.read('mydata.tree', './mydata/').update_glob('t1')
Then it is very easy to run our pipeline:
jobs = pipe.generate_jobs(tree)
jobs.run()
The default behaviour when using fsl-pipe locally is for one task to be executed at a time. However, if you run fsl-pipe on the cluster, it will automatically submit the jobs using fsl_sub. You can also tell fsl-pipe to execute the pipeline using Dask, which will cause any independent jobs to be executed in parallel, e.g.:
jobs.run(method='dask')
The above example is just one way of using fsl-pipe - the library has several powerful features, including its own command-line interface, and the ability to skip jobs for output files that already exist.
fsl-sub
You can use the venerable fsl_sub to run several tasks in parallel, both on your local machine and on an HPC cluster, such as the ones available to us at FMRIB and the BDI. fsl_sub is typically called from the command-line, e.g.:
for dataset in mydata/sub-*; do
    fsl_sub --jobram 16 ./my_processing_script.py ${dataset}
done
If you are working on a local machine, fsl_sub will block until your command has completed. You can run multiple commands in parallel by using "array tasks" - save each of the commands you want to run to a text file, e.g.:
echo "./my_processing_script.py mydata/sub-01" >  tasks.txt
echo "./my_processing_script.py mydata/sub-02" >> tasks.txt
echo "./my_processing_script.py mydata/sub-03" >> tasks.txt
echo "./my_processing_script.py mydata/sub-04" >> tasks.txt
echo "./my_processing_script.py mydata/sub-05" >> tasks.txt
And then run fsl_sub -t tasks.txt - each of your commands will be executed in parallel. If you are working on a cluster, fsl_sub will schedule all of the commands to be executed simultaneously.
You can also call fsl_sub from Python by using functions from the fslpy library:
from glob import glob
from fsl.wrappers import fsl_sub

for dataset in glob('mydata/sub-*'):
    fsl_sub(f'./my_processing_script.py {dataset}', jobram=16)
And the fslpy wrapper functions allow you to run FSL commands with fsl_sub, and to specify job dependencies. For example, to submit a collection of jobs, you can pass submit=True, or a dictionary of fsl_sub options:
from fsl.data.image import removeExt
from fsl.wrappers import bet
from glob import glob
for t1 in glob('braindata/??.nii.gz'):
    t1 = removeExt(t1)
    bet(t1, f'{t1}_brain', submit={'jobram' : 16})
You can also specify dependencies by using the ID of a previously submitted job, e.g.:
from fsl.data.image import removeExt
from fsl.wrappers import robustfov, bet
from glob import glob
for t1 in glob('braindata/??.nii.gz'):
    t1  = removeExt(t1)
    jid = robustfov(t1, f'{t1}_fov', submit=True)
    bet(f'{t1}_fov', f'{t1}_brain', submit={'jobram' : 16, 'jobhold' : jid})