FSL / win-pytreat · Commits

Commit 70ea6379
authored 5 years ago by Paul McCarthy

ENH: updates to threading practical

parent 6cba8b3a

Showing 2 changed files with 64 additions and 44 deletions:

* advanced_topics/07_threading.ipynb (+32, −22)
* advanced_topics/07_threading.md (+32, −22)

advanced_topics/07_threading.ipynb

%% Cell type:markdown id: tags:

# Threading and parallel processing

The Python language has built-in support for multi-threading in the
[`threading`](https://docs.python.org/3/library/threading.html) module, and
true parallelism in the
[`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html)
module. If you want to be impressed, skip straight to the section on
[`multiprocessing`](#multiprocessing).

> *Note*: If you are familiar with a "real" programming language such as C++
> or Java, you might be disappointed with the native support for parallelism in
> Python. Python threads do not run in parallel because of the Global
> Interpreter Lock, and if you use `multiprocessing`, be prepared to either
> bear the performance hit of copying data between processes, or jump through
> hoops in order to share data between processes.
>
> This limitation *might* be solved in a future Python release by way of
> [*sub-interpreters*](https://www.python.org/dev/peps/pep-0554/), but the
> author of this practical is not holding his breath.

* [Threading](#threading)
    * [Subclassing `Thread`](#subclassing-thread)
    * [Daemon threads](#daemon-threads)
    * [Thread synchronisation](#thread-synchronisation)
        * [`Lock`](#lock)
        * [`Event`](#event)
    * [The Global Interpreter Lock (GIL)](#the-global-interpreter-lock-gil)
* [Multiprocessing](#multiprocessing)
    * [`threading`-equivalent API](#threading-equivalent-api)
    * [Higher-level API - the `multiprocessing.Pool`](#higher-level-api-the-multiprocessing-pool)
        * [`Pool.map`](#pool-map)
        * [`Pool.apply_async`](#pool-apply-async)
* [Sharing data between processes](#sharing-data-between-processes)
    * [Read-only sharing](#read-only-sharing)
    * [Read/write sharing](#read-write-sharing)


<a class="anchor" id="threading"></a>
## Threading

The [`threading`](https://docs.python.org/3/library/threading.html) module
provides a traditional multi-threading API that should be familiar to you if
you have worked with threads in other languages.

Running a task in a separate thread in Python is easy - simply create a
`Thread` object, and pass it the function or method that you want it to
run. Then call its `start` method:

%% Cell type:code id: tags:

```
import time
import threading

def longRunningTask(niters):
    for i in range(niters):
        if i % 2 == 0: print('Tick')
        else:          print('Tock')
        time.sleep(0.5)

t = threading.Thread(target=longRunningTask, args=(8,))

t.start()

while t.is_alive():
    time.sleep(0.4)
    print('Waiting for thread to finish...')

print('Finished!')
```

%% Cell type:markdown id: tags:

You can also `join` a thread, which will block execution in the current thread
until the thread that has been `join`ed has finished:

%% Cell type:code id: tags:

```
t = threading.Thread(target=longRunningTask, args=(6, ))
t.start()

print('Joining thread ...')
t.join()
print('Finished!')
```

%% Cell type:markdown id: tags:

<a class="anchor" id="subclassing-thread"></a>
### Subclassing `Thread`

It is also possible to sub-class the `Thread` class, and override its `run`
method:

%% Cell type:code id: tags:

```
class LongRunningThread(threading.Thread):
    def __init__(self, niters, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.niters = niters

    def run(self):
        for i in range(self.niters):
            if i % 2 == 0: print('Tick')
            else:          print('Tock')
            time.sleep(0.5)

t = LongRunningThread(6)
t.start()
t.join()
print('Done')
```

%% Cell type:markdown id: tags:

<a class="anchor" id="daemon-threads"></a>
### Daemon threads

By default, a Python application will not exit until _all_ active threads have
finished execution. If you want to run a task in the background for the
duration of your application, you can mark it as a `daemon` thread - when all
non-daemon threads in a Python application have finished, all daemon threads
will be halted, and the application will exit.

You can mark a thread as being a daemon by setting an attribute on it after
creation:

%% Cell type:code id: tags:

```
t = threading.Thread(target=longRunningTask)
t.daemon = True
```

%% Cell type:markdown id: tags:

See the [`Thread`
documentation](https://docs.python.org/3/library/threading.html#thread-objects)
for more details.


<a class="anchor" id="thread-synchronisation"></a>
### Thread synchronisation

The `threading` module provides some useful thread-synchronisation primitives
- the `Lock`, `RLock` (re-entrant `Lock`), and `Event` classes. The
`threading` module also provides `Condition` and `Semaphore` classes - refer
to the [documentation](https://docs.python.org/3/library/threading.html) for
more details.


<a class="anchor" id="lock"></a>
#### `Lock`

The [`Lock`](https://docs.python.org/3/library/threading.html#lock-objects)
class (and its re-entrant version, the
[`RLock`](https://docs.python.org/3/library/threading.html#rlock-objects))
prevents a block of code from being accessed by more than one thread at a
time. For example, if we have multiple threads running this `task` function,
their [outputs](https://www.youtube.com/watch?v=F5fUFnfPpYU) will inevitably
become intertwined:

%% Cell type:code id: tags:

```
def task():
    for i in range(5):
        print('{} Woozle '.format(i), end='')
        time.sleep(0.1)
        print('Wuzzle')

threads = [threading.Thread(target=task) for i in range(5)]

for t in threads:
    t.start()
```

%% Cell type:markdown id: tags:

But if we protect the critical section with a `Lock` object, the output will
look more sensible:

%% Cell type:code id: tags:

```
lock = threading.Lock()

def task():
    for i in range(5):
        with lock:
            print('{} Woozle '.format(i), end='')
            time.sleep(0.1)
            print('Wuzzle')

threads = [threading.Thread(target=task) for i in range(5)]

for t in threads:
    t.start()
```

%% Cell type:markdown id: tags:

> Instead of using a `Lock` object in a `with` statement, it is also possible
> to manually call its `acquire` and `release` methods:
>
>     def task():
>         for i in range(5):
>             lock.acquire()
>             print('{} Woozle '.format(i), end='')
>             time.sleep(0.1)
>             print('Wuzzle')
>             lock.release()

Python does not have any built-in constructs to implement `Lock`-based mutual
exclusion across several functions or methods - each function/method must
explicitly acquire/release a shared `Lock` instance. However, it is relatively
straightforward to implement a decorator which does this for you:

%% Cell type:code id: tags:

```
def mutex(func, lock):
    def wrapper(*args):
        with lock:
            func(*args)
    return wrapper

class MyClass(object):

    def __init__(self):
        lock = threading.Lock()
        self.safeFunc1 = mutex(self.safeFunc1, lock)
        self.safeFunc2 = mutex(self.safeFunc2, lock)

    def safeFunc1(self):
        time.sleep(0.1)
        print('safeFunc1 start')
        time.sleep(0.2)
        print('safeFunc1 end')

    def safeFunc2(self):
        time.sleep(0.1)
        print('safeFunc2 start')
        time.sleep(0.2)
        print('safeFunc2 end')

mc = MyClass()

f1threads = [threading.Thread(target=mc.safeFunc1) for i in range(4)]
f2threads = [threading.Thread(target=mc.safeFunc2) for i in range(4)]

for t in f1threads + f2threads:
    t.start()
```

%% Cell type:markdown id: tags:

Try removing the `mutex` lock from the two methods in the above code, and see
what it does to the output.


<a class="anchor" id="event"></a>
#### `Event`

The
[`Event`](https://docs.python.org/3/library/threading.html#event-objects)
class is essentially a boolean [semaphore][semaphore-wiki]. It can be used to
signal events between threads. Threads can `wait` on the event, and be awoken
when the event is `set` by another thread:

[semaphore-wiki]: https://en.wikipedia.org/wiki/Semaphore_(programming)

%% Cell type:code id: tags:

```
import numpy as np

processingFinished = threading.Event()

def processData(data):
    print('Processing data ...')
    time.sleep(2)
    print('Result: {}'.format(data.mean()))
    processingFinished.set()

data = np.random.randint(1, 100, 100)

t = threading.Thread(target=processData, args=(data,))
t.start()

processingFinished.wait()
print('Processing finished!')
```

%% Cell type:markdown id: tags:

<a class="anchor" id="the-global-interpreter-lock-gil"></a>
### The Global Interpreter Lock (GIL)

The [*Global Interpreter
Lock*](https://docs.python.org/3/c-api/init.html#thread-state-and-the-global-interpreter-lock)
is an implementation detail of [CPython](https://github.com/python/cpython)
(the official Python interpreter). The GIL means that a multi-threaded
program written in pure Python is not able to take advantage of multiple
cores - this essentially means that only one thread may be executing at any
point in time.

The `threading` module does still have its uses though, as this GIL problem
does not affect tasks which involve calls to system or natively compiled
libraries (e.g. file and network I/O, Numpy operations, etc.). So you can,
for example, perform some expensive processing on a Numpy array in a thread
running on one core, whilst having another thread (e.g. user interaction)
running on another core.
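
For example (a minimal sketch, not part of the original practical - the
`bigdot` function and array sizes are our own), a BLAS-backed Numpy operation
releases the GIL, so the main thread remains responsive while the worker
thread computes:

%% Cell type:code id: tags:

```
# Hypothetical illustration: numpy's dot product releases the GIL
# while the underlying BLAS routine runs, so the worker thread can
# execute in parallel with the main thread.
import threading
import numpy as np

def bigdot():
    a = np.random.random((2000, 2000))
    print('Dot product sum: {:0.2f}'.format(np.dot(a, a).sum()))

t = threading.Thread(target=bigdot)
t.start()

while t.is_alive():
    print('Main thread is still responsive...')
    t.join(0.5)
```

%% Cell type:markdown id: tags: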

<a class="anchor" id="multiprocessing"></a>
## Multiprocessing

For true parallelism, you should check out the
[`multiprocessing`](https://docs.python.org/3/library/multiprocessing.html)
module.

The `multiprocessing` module spawns sub-processes, rather than threads, and so
is not subject to the GIL constraints that the `threading` module suffers
from. It provides two APIs - a "traditional" equivalent to that provided by
the `threading` module, and a powerful higher-level API.

> Python also provides the
> [`concurrent.futures`](https://docs.python.org/3/library/concurrent.futures.html)
> module, which offers a simpler alternative API to `multiprocessing`. It
> offers no functionality over `multiprocessing`, so is not covered here.

<a class="anchor" id="threading-equivalent-api"></a>
### `threading`-equivalent API

The
[`Process`](https://docs.python.org/3/library/multiprocessing.html#the-process-class)
class is the `multiprocessing` equivalent of the
[`threading.Thread`](https://docs.python.org/3/library/threading.html#thread-objects)
class. `multiprocessing` also has equivalents of the [`Lock` and `Event`
classes](https://docs.python.org/3/library/multiprocessing.html#synchronization-between-processes),
and the other synchronisation primitives provided by `threading`.

So you can simply replace `threading.Thread` with `multiprocessing.Process`,
and you will have true parallelism.

Because your "threads" are now independent processes, you need to be a little
careful about how to share information across them. If you only need to share
small amounts of data, you can use the [`Queue` and `Pipe`
classes](https://docs.python.org/3/library/multiprocessing.html#exchanging-objects-between-processes),
in the `multiprocessing` module. If you are working with large amounts of data
where copying between processes is not feasible, things become more
complicated, but read on...
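
As a quick illustration of the traditional API (a sketch of our own devising,
not from the practical - the `worker` function is hypothetical), a `Queue` can
be used to pass a result from a child process back to the parent:

%% Cell type:code id: tags:

```
# Hypothetical example: run a function in a child process, and
# retrieve its result through a multiprocessing.Queue.
import multiprocessing as mp

def worker(q):
    # this runs in the child process
    q.put(sum(range(1000000)))

if __name__ == '__main__':
    q = mp.Queue()
    p = mp.Process(target=worker, args=(q,))
    p.start()
    print('Result from child:', q.get())
    p.join()
```

%% Cell type:markdown id: tags: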

<a class="anchor" id="higher-level-api-the-multiprocessing-pool"></a>
### Higher-level API - the `multiprocessing.Pool`

The real advantages of `multiprocessing` lie in its higher level API, centered
around the [`Pool`
class](https://docs.python.org/3/library/multiprocessing.html#using-a-pool-of-workers).

Essentially, you create a `Pool` of worker processes - you specify the number
of processes when you create the pool. Once you have created a `Pool`, you can
use its methods to automatically parallelise tasks. The most useful are the
`map`, `starmap` and `apply_async` methods.

The `Pool` class is a context manager, so can be used in a `with` statement,
e.g.:

> ```
> with mp.Pool(processes=16) as pool:
>     # do stuff with the pool
> ```

It is possible to create a `Pool` outside of a `with` statement, but in this
case you must ensure that you call its `close` method when you are finished.
Using a `Pool` in a `with` statement is therefore recommended, because you know
that it will be shut down correctly, even in the event of an error.
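
For instance (a minimal sketch, assuming nothing beyond the standard library -
this snippet is not taken from the practical), manual shutdown looks like
this:

%% Cell type:code id: tags:

```
# Hypothetical example: a Pool created outside of a with statement
# must be shut down explicitly. close() stops the pool accepting new
# jobs; join() waits for the worker processes to exit.
import multiprocessing as mp

pool = mp.Pool(processes=4)
try:
    results = pool.map(abs, [-1, -2, -3])
finally:
    pool.close()
    pool.join()

print(results)
```

%% Cell type:markdown id: tags: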

> The best number of processes to use for a `Pool` will depend on the system
> you are running on (number of cores), and the tasks you are running (e.g.
> I/O bound or CPU bound).


<a class="anchor" id="pool-map"></a>
#### `Pool.map`

The
[`Pool.map`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.map)
method is the multiprocessing equivalent of the built-in
[`map`](https://docs.python.org/3/library/functions.html#map) function - it
is given a function, and a sequence, and it applies the function to each
element in the sequence.

%% Cell type:code id: tags:

```
import time
import multiprocessing as mp
import numpy as np

def crunchImage(imgfile):

    # Load a nifti image, do stuff
    # to it. Use your imagination
    # to fill in this function.
    time.sleep(2)

    # numpy's random number generator
    # will be initialised in the same
    # way in each process, so let's
    # re-seed it.
    np.random.seed()

    result = np.random.randint(1, 100, 1)

    print(imgfile, ':', result)

    return result

imgfiles = ['{:02d}.nii.gz'.format(i) for i in range(20)]

print('Crunching images...')

start = time.time()

with mp.Pool(processes=16) as p:
    results = p.map(crunchImage, imgfiles)

end = time.time()

print('Total execution time: {:0.2f} seconds'.format(end - start))
```

%% Cell type:markdown id: tags:

The `Pool.map` method only works with functions that accept one argument, such
as our `crunchImage` function above. If you have a function which accepts
multiple arguments, use the
[`Pool.starmap`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.starmap)
method instead:

%% Cell type:code id: tags:

```
def crunchImage(imgfile, modality):
    time.sleep(2)

    np.random.seed()

    if modality == 't1':
        result = np.random.randint(1, 100, 1)
    elif modality == 't2':
        result = np.random.randint(100, 200, 1)

    print(imgfile, ': ', result)

    return result

imgfiles   = ['t1_{:02d}.nii.gz'.format(i) for i in range(10)] + \
             ['t2_{:02d}.nii.gz'.format(i) for i in range(10)]
modalities = ['t1'] * 10 + ['t2'] * 10

args = [(f, m) for f, m in zip(imgfiles, modalities)]

print('Crunching images...')

start = time.time()

with mp.Pool(processes=16) as pool:
    results = pool.starmap(crunchImage, args)

end = time.time()

print('Total execution time: {:0.2f} seconds'.format(end - start))
```

%% Cell type:markdown id: tags:

The `map` and `starmap` methods also have asynchronous equivalents `map_async`
and `starmap_async`, which return immediately. Refer to the
[`Pool`](https://docs.python.org/3/library/multiprocessing.html#module-multiprocessing.pool)
documentation for more details.
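
For example (a small sketch of our own, not from the practical - `square` is a
hypothetical helper), `map_async` hands back an `AsyncResult` straight away,
and the results are collected later with `get`:

%% Cell type:code id: tags:

```
# Hypothetical example: map_async returns immediately, so the main
# process is free to do other work before collecting the results.
import multiprocessing as mp

def square(x):
    return x * x

with mp.Pool(processes=4) as pool:
    asyncres = pool.map_async(square, range(10))
    print('Jobs submitted - free to do other work...')
    print('Results:', asyncres.get())
```

%% Cell type:markdown id: tags: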

<a class="anchor" id="pool-apply-async"></a>
#### `Pool.apply_async`

The
[`Pool.apply`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.apply)
method will execute a function on one of the processes, and block until it has
finished. The
[`Pool.apply_async`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.Pool.apply_async)
method returns immediately, and is thus more suited to asynchronously
scheduling multiple jobs to run in parallel.

`apply_async` returns an object of type
[`AsyncResult`](https://docs.python.org/3/library/multiprocessing.html#multiprocessing.pool.AsyncResult).
An `AsyncResult` object has `wait` and `get` methods which will block until
the job has completed.

%% Cell type:code id: tags:

```
import time
import multiprocessing as mp
import numpy as np

def linear_registration(src, ref):
    time.sleep(1)
    return np.eye(4)

def nonlinear_registration(src, ref, affine):
    time.sleep(3)

    # this number represents a non-linear warp
    # field - use your imagination people!
    np.random.seed()
    return np.random.randint(1, 100, 1)

t1s = ['{:02d}_t1.nii.gz'.format(i) for i in range(20)]
std = 'MNI152_T1_2mm.nii.gz'

print('Running structural-to-standard registration '
      'on {} subjects...'.format(len(t1s)))

# Run linear registration on all the T1s.
start = time.time()
with mp.Pool(processes=16) as pool:

    # We build a list of AsyncResult objects
    linresults = [pool.apply_async(linear_registration, (t1, std))
                  for t1 in t1s]

    # Then we wait for each job to finish,
    # and replace its AsyncResult object
    # with the actual result - an affine
    # transformation matrix.
    for i, r in enumerate(linresults):
        linresults[i] = r.get()

end = time.time()

print('Linear registrations completed in '
      '{:0.2f} seconds'.format(end - start))

# Run non-linear registration on all the T1s,
# using the linear registrations to initialise.
start = time.time()
with mp.Pool(processes=16) as pool:
    nlinresults = [pool.apply_async(nonlinear_registration, (t1, std, aff))
                   for (t1, aff) in zip(t1s, linresults)]

    # Wait for each non-linear reg to finish,
    # and store the resulting warp field.
    for i, r in enumerate(nlinresults):
        nlinresults[i] = r.get()

end = time.time()

print('Non-linear registrations completed in '
      '{:0.2f} seconds'.format(end - start))

print('Non linear registrations:')
for t1, result in zip(t1s, nlinresults):
    print(t1, ':', result)
```

%% Cell type:markdown id: tags:

<a class="anchor" id="sharing-data-between-processes"></a>
## Sharing data between processes

When you use the `Pool.map` method (or any of the other methods we have shown)
to run a function on a sequence of items, those items must be copied into the
memory of each of the child processes. When the child processes are finished,
the data that they return then has to be copied back to the parent process.

Any items which you wish to pass to a function that is executed by a `Pool`
must be *pickleable*<sup>1</sup> - the built-in
[`pickle`](https://docs.python.org/3/library/pickle.html) module is used by
`multiprocessing` to serialise and de-serialise the data passed to and
returned from a child process. The majority of standard Python types (`list`,
`dict`, `str` etc), and Numpy arrays can be pickled and unpickled, so you only
need to worry about this detail if you are passing objects of a custom type
(e.g. instances of classes that you have written, or that are defined in some
third-party library).

> <sup>1</sup>*Pickleable* is the term used in the Python world to refer to
> something that is *serialisable* - basically, the process of converting an
> in-memory object into a binary form that can be stored and/or transmitted.
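
To make this concrete (a tiny sketch of our own, not part of the practical),
this round-trip is essentially what `multiprocessing` performs on every
argument and return value:

%% Cell type:code id: tags:

```
# Hypothetical illustration of pickling: serialise an object to
# bytes, then reconstruct an equivalent copy from those bytes.
import pickle
import numpy as np

obj   = {'name': 'T1', 'data': np.arange(5)}
blob  = pickle.dumps(obj)   # serialise to a byte string
clone = pickle.loads(blob)  # de-serialise a copy
print(clone['name'], clone['data'])
```

%% Cell type:markdown id: tags: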

There is obviously some overhead in copying data back and forth between the
main process and the worker processes; this may or may not be a problem. For
most computationally intensive tasks, this communication overhead is not
important - the performance bottleneck is typically going to be the
computation time, rather than I/O between the parent and child processes.

However, if you are working with a large dataset, you have determined that
copying data between processes is having a substantial impact on your
performance, and instead wish to *share* a single copy of the data between
the processes, you will need to:

1. Structure your code so that the data you want to share is accessible at
   the *module level*.
2. Define/create/load the data *before* creating the `Pool`.

This is because, when you create a `Pool`, what actually happens is that the
process your Python script is running in will [**fork**][wiki-fork] itself -
the child processes that are created are used as the worker processes by the
`Pool`. And if you create/load your data in your main process *before* this
fork occurs, all of the child processes will inherit the memory space of the
main process, and will therefore have (read-only) access to the data, without
any copying required.

[wiki-fork]: https://en.wikipedia.org/wiki/Fork_(system_call)

<a class="anchor" id="read-only-sharing"></a>
### Read-only sharing

Let's see this in action with a simple example. We'll start by defining a
horrible little helper function which allows us to track the total memory
usage:

%% Cell type:code id: tags:

```
import sys
import subprocess as sp

def memusage(msg):
    if sys.platform == 'darwin':
        total = sp.run(['sysctl', 'hw.memsize'], capture_output=True).stdout.decode()
        total = int(total.split()[1]) // 1048576
        usage = sp.run('vm_stat', capture_output=True).stdout.decode()
        usage = usage.strip().split('\n')
        usage = [l.split(':') for l in usage]
        usage = {k.strip() : v.strip() for k, v in usage}
        usage = int(usage['Pages free'][:-1]) / 256.0
        usage = int(total - usage)
    else:
        stdout = sp.run(['free', '--mega'], capture_output=True).stdout.decode()
        stdout = stdout.split('\n')[1].split()
        total  = int(stdout[1])
        usage  = int(stdout[2])
    print('Memory usage {}: {} / {} MB'.format(msg, usage, total))
```

%% Cell type:markdown id: tags:

Now our task is simply to calculate the sum of a large array of numbers. We're
going to create a big chunk of data, and process it in chunks, keeping track
of memory usage as the task progresses:

%% Cell type:code id: tags:

```
import time
import multiprocessing as mp
import numpy as np

memusage('before creating data')

# allocate 500MB of data
data = np.random.random(500 * (1048576 // 8))

# Assign nelems values to each worker
# process (hard-coded so we need 12
# jobs to complete the task)
nelems = len(data) // 12

memusage('after creating data')

# Each job processes nelems values,
# starting from the specified offset
def process_chunk(offset):
    time.sleep(1)
    return data[offset:offset + nelems].sum()

# Generate an offset into the data for each job -
# we will call process_chunk for each offset
offsets = range(0, len(data), nelems)

# Create our worker process pool
with mp.Pool(4) as pool:
    results = pool.map_async(process_chunk, offsets)

    # Wait for all of the jobs to finish
    elapsed = 0
    while not results.ready():
        memusage('after {} seconds'.format(elapsed))
        time.sleep(1)
        elapsed += 1

    results = results.get()

print('Total sum:   ', sum(results))
print('Sanity check:', data.sum())
```

%% Cell type:markdown id: tags:

You should be able to see that only one copy of `data` is created, and is
shared by all of the worker processes without any copying taking place.

So things are reasonably straightforward if you only need read-only access to
your data. But what if your worker processes need to be able to modify the
data? Go back to the code block above and:

1. Modify the `process_chunk` function so that it modifies every element of
   its assigned portion of the data before the call to `time.sleep`. For
   example:

   > ```
   > data[offset:offset + nelems] += 1
   > ```

2. Restart the Jupyter notebook kernel (*Kernel -> Restart*) - this example is
   somewhat dependent on the behaviour of the Python garbage collector, so it
   helps to start afresh.

3. Re-run the two code blocks, and watch what happens to the memory usage.

What happened? Well, you are seeing [copy-on-write][wiki-copy-on-write] in
action. When the `process_chunk` function is invoked, it is given a reference
to the original data array in the memory space of the parent process. But as
soon as an attempt is made to modify it, a copy of the data, in the memory
space of the child process, is created. The modifications are then applied to
this child process copy, and not to the original copy. So the total memory
usage has blown out to twice as much as before, and the changes made by each
child process are being lost!

[wiki-copy-on-write]: https://en.wikipedia.org/wiki/Copy-on-write


<a class="anchor" id="read-write-sharing"></a>
### Read/write sharing

> If you have worked with a real programming language with true parallelism
> and shared memory via within-process multi-threading, feel free to take a
> break at this point. Breathe. Relax. Go punch a hole in a wall. I've been
> coding in Python for years, and this still makes me angry. Sometimes
> ... don't tell anyone I said this ... I even find myself wishing I were
> coding in *Java* instead of Python. Ugh. I need to take a shower.

In order to truly share memory between multiple processes, the
`multiprocessing` module provides the [`Value`, `Array`, and `RawArray`
classes](https://docs.python.org/3/library/multiprocessing.html#shared-ctypes-objects),
which allow you to share individual values, or arrays of values, respectively.

The `Array` and `RawArray` classes essentially wrap a typed pointer (from the
built-in [`ctypes`](https://docs.python.org/3/library/ctypes.html) module) to
a block of memory. We can use the `Array` or `RawArray` class to share a Numpy
array between our worker processes. The difference between an `Array` and a
`RawArray` is that the former offers low-level synchronised
(i.e. process-safe) access to the shared memory. This is necessary if your
child processes will be modifying the same parts of your data.

> If you need fine-grained control over synchronising access to shared data by
> multiple processes, all of the [synchronisation
> primitives](https://docs.python.org/3/library/multiprocessing.html#synchronization-between-processes)
> from the `multiprocessing` module are at your disposal.

The requirements for sharing memory between processes still apply here - we
need to make our data accessible at the *module level*, and we need to create
our data before creating the `Pool`. And to achieve read and write capability,
we also need to make sure that our input and output arrays are located in
shared memory - we can do this via the `Array` or `RawArray`.

As an example, let's say we want to parallelise processing of an image by
having each worker process perform calculations on a chunk of the image.
First, let's define a function which does the calculation on a specified set
of image coordinates:

%% Cell type:code id: tags:

```
import multiprocessing as mp
import ctypes
import numpy as np

np.set_printoptions(suppress=True)

def process_chunk(shape, idxs):

    # Get references to our
    # input/output data, and
    # create Numpy array views
    # into them.
    sindata  = process_chunk.input_data
    soutdata = process_chunk.output_data
    indata   = np.ctypeslib.as_array(sindata) .reshape(shape)
    outdata  = np.ctypeslib.as_array(soutdata).reshape(shape)

    # Do the calculation on
    # the specified voxels
    outdata[idxs] = indata[idxs] ** 2
```
%% Cell type:markdown id: tags:

Rather than passing the input and output data arrays in as arguments to the
`process_chunk` function, we set them as attributes of the `process_chunk`
function. This makes the input/output data accessible at the module level,
which is required in order to share the data between the main process and the
child processes.
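As an aside, another common way of achieving the same thing (not used in this
practical) is to set these attributes via the `initializer` argument to the
`Pool` - a rough sketch, assuming `sindata` and `soutdata` have already been
created with `mp.RawArray`:

```
import multiprocessing as mp

def init_worker(sindata, soutdata):
    # Runs once in each worker process, attaching the
    # shared arrays to the worker function over there.
    process_chunk.input_data  = sindata
    process_chunk.output_data = soutdata

# The shared arrays are handed to every worker process
# when the pool is created, instead of being inherited
# as module-level state.
pool = mp.Pool(processes=8,
               initializer=init_worker,
               initargs=(sindata, soutdata))
```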
Now let's define a second function which processes an entire image. It does
the following:

1. Initialises shared memory areas to store the input and output data.
2. Copies the input data into shared memory.
3. Sets the input and output data as attributes of the `process_chunk`
   function.
4. Creates sets of indices into the input data which, for each worker process,
   specify the portion of the data that it is responsible for.
5. Creates a worker pool, and runs the `process_chunk` function for each set
   of indices.
%% Cell type:code id: tags:

```
def process_dataset(data):

    nprocs   = 8
    origData = data

    # Create arrays to store the
    # input and output data
    sindata  = mp.RawArray(ctypes.c_double, data.size)
    soutdata = mp.RawArray(ctypes.c_double, data.size)
    data     = np.ctypeslib.as_array(sindata).reshape(data.shape)
    outdata  = np.ctypeslib.as_array(soutdata).reshape(data.shape)

    # Copy the input data
    # into shared memory
    data[:] = origData

    # Make the input/output data
    # accessible to the process_chunk
    # function. This must be done
    # *before* the worker pool is
    # created - even though we are
    # doing things differently to the
    # read-only example, we are still
    # making the data arrays accessible
    # at the *module* level, so the
    # memory they are stored in can be
    # shared with the child processes.
    process_chunk.input_data  = sindata
    process_chunk.output_data = soutdata

    # number of voxels to be computed
    # by each worker process.
    nvox = int(data.size / nprocs)

    # Generate coordinates for
    # every voxel in the image
    xlen, ylen, zlen = data.shape
    xs, ys, zs = np.meshgrid(np.arange(xlen),
                             np.arange(ylen),
                             np.arange(zlen))

    xs = xs.flatten()
    ys = ys.flatten()
    zs = zs.flatten()

    # We're going to pass each worker
    # process a list of indices, which
    # specify the data items which that
    # worker process needs to compute.
    xs = [xs[nvox * i:nvox * i + nvox] for i in range(nprocs)] + [xs[nvox * nprocs:]]
    ys = [ys[nvox * i:nvox * i + nvox] for i in range(nprocs)] + [ys[nvox * nprocs:]]
    zs = [zs[nvox * i:nvox * i + nvox] for i in range(nprocs)] + [zs[nvox * nprocs:]]

    # Build the argument lists for
    # each worker process.
    args = [(data.shape, (x, y, z)) for x, y, z in zip(xs, ys, zs)]

    # Create a pool of worker
    # processes and run the jobs.
    with mp.Pool(processes=nprocs) as pool:
        pool.starmap(process_chunk, args)

    return outdata
```
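Note the use of `Pool.starmap` above - it behaves like `Pool.map`, but
unpacks each element of its argument list into positional arguments. A
trivial, self-contained illustration (our own, not from the practical):

```
import multiprocessing as mp

def add(a, b):
    return a + b

if __name__ == '__main__':
    with mp.Pool(2) as pool:
        # Each tuple is unpacked, i.e. the pool
        # calls add(1, 2), add(3, 4), add(5, 6)
        print(pool.starmap(add, [(1, 2), (3, 4), (5, 6)]))  # [3, 7, 11]
```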
%% Cell type:markdown id: tags:

Now we can call our `process_dataset` function just like any other function:
%% Cell type:code id: tags:

```
indata  = np.array(np.arange(64).reshape((4, 4, 4)), dtype=np.float64)
outdata = process_dataset(indata)

print('Input')
print(indata)
print('Output')
print(outdata)
```
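If all has gone well, the output should simply contain the square of every
input value - a quick sanity check (our addition, not part of the original
practical):

```
assert np.all(outdata == indata ** 2)
```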
...
advanced_topics/07_threading.md
... any copying required.
Let's see this in action with a simple example. We'll start by defining a
horrible little helper function which allows us to track the total memory
usage:
```
import sys
import subprocess as sp

def memusage(msg):
    if sys.platform == 'darwin':
        # macOS: total memory (in bytes) from sysctl,
        # converted to megabytes
        total = sp.run(['sysctl', 'hw.memsize'], capture_output=True).stdout.decode()
        total = int(total.split()[1]) // 1048576
        # Free memory from vm_stat - 4096-byte pages,
        # so 256 pages per megabyte
        usage = sp.run('vm_stat', capture_output=True).stdout.decode()
        usage = usage.strip().split('\n')
        usage = [l.split(':') for l in usage]
        usage = {k.strip() : v.strip() for k, v in usage}
        usage = int(usage['Pages free'][:-1]) / 256.0
        usage = int(total - usage)
    else:
        # Linux: parse the output of free --mega
        stdout = sp.run(['free', '--mega'], capture_output=True).stdout.decode()
        stdout = stdout.split('\n')[1].split()
        total  = int(stdout[1])
        usage  = int(stdout[2])
    print('Memory usage {}: {} / {} MB'.format(msg, usage, total))
```
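For example, calling `memusage('before')` on a Linux machine might print
something along these lines (the numbers shown here are made up):

```
memusage('before')
# Memory usage before: 3121 / 15951 MB
```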
... your data. But what if your worker processes need to be able to modify the
data? Go back to the code block above and:
1. Modify the `process_chunk` function so that it modifies every element of
   its assigned portion of the data before the call to `time.sleep`. For
   example:

   > ```
   > data[offset:offset + nelems] += 1
...