%% Cell type:code id: tags:
``` python
%matplotlib nbagg
import numpy as np
import nibabel as nib
import matplotlib.pyplot as plt
```
%% Cell type:markdown id: tags:
# Nibabel: loading MRI volumes
## Image file formats beyond NIFTI
Nibabel can also read:
- ANALYZE (plain, SPM99, SPM2 and later)
- Freesurfer MGH/MGZ format
- MINC1 and MINC2
- limited support for [DICOM](http://nipy.org/nibabel/dicom/dicom.html#dicom)
- Philips PAR/REC
You can get the data and affine for all these formats using:
> ```
> img = nib.load(<filename>)
> affine = img.affine
> data = img.get_data() # repeated calls of get_data() will return a cached version
> ```
Other metadata is available through `img.header`, but will be format-specific.
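For example, for a NIFTI image the header gives you the voxel sizes and the on-disk data type (a quick sketch using methods available on most nibabel header classes):
> ```
> hdr = img.header
> hdr.get_zooms()       # voxel sizes (plus the time step for 4D images)
> hdr.get_data_dtype()  # data type used on disk
> ```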
## Accessing part of the data
Running `nibabel.load` will only read in the header, not the full data.
You can exploit this using the [`dataobj`](http://nipy.org/nibabel/nibabel_images.html#array-proxies-and-proxy-images) object:
%% Cell type:code id: tags:
``` python
img = nib.load('100307/T1w.nii.gz')
zslice = img.dataobj[:, :, 50] # only loads the slice with k=50
plt.imshow(zslice.T, cmap='gray')
```
%% Cell type:markdown id: tags:
## Surfaces in nibabel
Nibabel supports surfaces in both [Freesurfer](http://nipy.org/nibabel/reference/nibabel.freesurfer.html#module-nibabel.freesurfer)
and [GIFTI](http://nipy.org/nibabel/reference/nibabel.gifti.html#module-nibabel.gifti) formats.
![Gifti tree structure](gifti_tree.png)
%% Cell type:code id: tags:
``` python
img = nib.load('100307/fsaverage_LR32k/100307.L.white.32k_fs_LR.surf.gii')
vertices, faces = [darray.data for darray in img.darrays]
vertices, faces
```
%% Cell type:code id: tags:
``` python
from nibabel import freesurfer
freesurfer.io.write_geometry('lh.white', vertices, faces)
```
%% Cell type:code id: tags:
``` python
thickness = nib.load('100307/fsaverage_LR32k/100307.L.thickness.32k_fs_LR.shape.gii').darrays[0].data
thickness
```
%% Output
array([2.9977965, 2.7661548, 3.0803757, ..., 2.47089 , 2.2333388,
2.363499 ], dtype=float32)
%% Cell type:markdown id: tags:
## [Tractography streamlines](http://nipy.org/nibabel/reference/nibabel.streamlines.html#module-nibabel.streamlines) in nibabel
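A minimal sketch of loading a tractography file (the filename here is only a placeholder):
> ```
> streams = nib.streamlines.load('tracts.trk')   # TRK and TCK files are supported
> streamlines = streams.streamlines              # sequence of (N, 3) coordinate arrays
> len(streamlines), streamlines[0].shape
> ```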
# [CIFTI](https://github.com/MichielCottaar/cifti): easy creation/manipulation
This module allows for straightforward creation of CIFTI files and for reading and manipulating existing ones.
The CIFTI format is used in brain imaging to store data acquired across the brain volume (in voxels) and/or
the brain surface (in vertices). The format is unique in that it can store data from both the volume and the
surface, as opposed to NIfTI, which only covers the brain volume, and GIFTI, which only covers the brain surface.
See http://www.nitrc.org/projects/cifti for the specification of the CIFTI format.
Each type of CIFTI axis describing the rows/columns of a CIFTI matrix is given its own class:
- `BrainModel`: each row/column is a voxel or vertex
- `Parcels`: each row/column is a group of voxels and/or vertices
- `Series`: each row/column is a timepoint, which increases monotonically
- `Scalar`: each row/column has a unique name (with optional meta-data)
- `Label`: each row/column has a unique name and label table (with optional meta-data)
All of these classes are derived from `Axis`.
Reading a CIFTI file (through `read`) returns a matrix and a pair of axes describing the rows and columns of the matrix.
Similarly, writing a CIFTI file (through `write`) requires a matrix and a pair of axes.
CIFTI axes of the same type can be concatenated by adding them together.
Numpy indexing also works on them (except for `Series` objects, which have to remain monotonically increasing or decreasing).
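So reading an existing file looks roughly like this (a sketch; the filename is a placeholder):
> ```
> import cifti
> data, axes = cifti.read('example.dtseries.nii')
> row_axis, col_axis = axes      # e.g. a Series and a BrainModel for a .dtseries.nii
> data.shape                     # (rows, columns), matching the two axes
> ```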
## Installation
This package can be installed directly from GitHub using:
%% Cell type:code id: tags:
``` python
!pip install git+https://github.com/MichielCottaar/cifti.git
```
%% Cell type:markdown id: tags:
![Cifti tree structure](cifti_tree.png)
## Examples
%% Cell type:code id: tags:
``` python
import cifti
ctx = thickness != 0                                  # vertices with non-zero thickness form the cortical mask
arr = np.random.rand(ctx.sum())                       # one random value per cortical vertex
bm_ctx = cifti.BrainModel.from_mask(ctx, name='CortexLeft')
sc = cifti.Scalar.from_names(['random'])              # a single scalar map called 'random'
cifti.write('random_ctx.dscalar.nii', arr[None, :], (sc, bm_ctx))
```
%% Cell type:code id: tags:
``` python
!wb_view 100307/fsaverage_LR32k/100307.*.32k_fs_LR.surf.gii random_ctx.dscalar.nii
```
%% Cell type:code id: tags:
``` python
img = nib.load('100307/aparc+aseg.nii.gz')
cerebellum = img.get_data() == 8   # voxels labelled 8 correspond to the left cerebellum
bm_cerebellum = cifti.BrainModel.from_mask(cerebellum, name='CerebellumLeft', affine=img.affine)
bm = bm_ctx + bm_cerebellum        # concatenate the surface and volume axes
sc = cifti.Scalar.from_names(['random'])
arr = np.random.rand(len(bm))
cifti.write('random_ctx_cerebellum.dscalar.nii', arr[None, :], (sc, bm))
sc = cifti.Scalar.from_names(['random1', 'random2'])   # now two scalar maps
arr = np.random.rand(2, len(bm))
cifti.write('random_ctx_cerebellum.dscalar.nii', arr, (sc, bm))
```
%% Cell type:code id: tags:
``` python
!wb_view 100307/fsaverage_LR32k/100307.*.32k_fs_LR.surf.gii 100307/T1w.nii.gz random_ctx_cerebellum.dscalar.nii
```
%% Cell type:code id: tags:
``` python
# dense 'connectome': absolute thickness difference between every pair of cortical vertices
arr = abs(thickness[ctx, None] - thickness[None, ctx])
cifti.write('diff_thickness.dconn.nii', arr, (bm_ctx, bm_ctx))
```
%% Cell type:code id: tags:
``` python
!wb_view 100307/fsaverage_LR32k/100307.*.32k_fs_LR.surf.gii diff_thickness.dconn.nii
```
%% Cell type:markdown id: tags:
Let's finally create a parcellated label file:
%% Cell type:code id: tags:
``` python
parcels = cifti.Parcels.from_brain_models([('thin', bm_ctx[thickness[ctx] < 2]),
                                           ('cerebellum', bm_cerebellum),
                                           ('thick', bm_ctx[thickness[ctx] > 3]),
                                           ])
scl = cifti.Scalar.from_names(['rgb'])
label = scl.to_label([{1: ('red', (1, 0, 0, 1)),
                       2: ('green', (0, 1, 0, 1)),
                       3: ('blue', (0, 0, 1, 1))}])
arr = np.array([[1, 2, 3]])
cifti.write('labels.plabel.nii', arr, (label, parcels))
```
%% Cell type:code id: tags:
``` python
!wb_view 100307/fsaverage_LR32k/100307.*.32k_fs_LR.surf.gii 100307/T1w.nii.gz labels.plabel.nii
```
%% Cell type:markdown id: tags:
# Main scientific python libraries
See https://scipy.org/
Most of these packages have dropped, or are in the process of dropping, support for Python 2.
So use Python 3!
## [Numpy](http://www.numpy.org/): arrays
This is the main library underlying (nearly) all of the scientific python ecosystem.
See the tutorial in the beginner session or [the official numpy tutorial](https://docs.scipy.org/doc/numpy-dev/user/quickstart.html) for usage details.
The usual nickname for numpy is `np`:
%% Cell type:code id: tags:
```
import numpy as np
```
%% Cell type:markdown id: tags:
Numpy includes support for:
- N-dimensional arrays with various datatypes
- masked arrays
- matrices
- structured/record arrays
- basic functions (e.g., sin, log, arctan, polynomials)
- basic linear algebra
- random number generation
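For example, a quick sketch touching a few of these features:
> ```
> a = np.arange(12).reshape(3, 4)        # N-dimensional array
> np.ma.masked_less(a, 5)                # masked array: values below 5 are hidden
> np.random.randn(3, 4)                  # random number generation
> np.linalg.inv(np.eye(3) * 2)           # basic linear algebra
> ```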
## [Scipy](https://scipy.org/scipylib/index.html): most general scientific tools
At the top level this module includes all of the basic functionality from numpy.
You could import this as `sp`, but you might as well import numpy directly.
%% Cell type:code id: tags:
```
import scipy as sp
```
%% Cell type:markdown id: tags:
The main strength in scipy lies in its sub-packages:
%% Cell type:code id: tags:
```
from scipy import optimize

def costfunc(params):
    return params[0] ** 2 * (params[1] - 3) ** 2 + (params[0] - 2) ** 2

optimize.minimize(costfunc, x0=[0, 0], method='l-bfgs-b')
```
%% Cell type:markdown id: tags:
Tutorials for all sub-packages can be found [here](https://docs.scipy.org/doc/scipy-1.0.0/reference/).
## [Matplotlib](https://matplotlib.org/): Main plotting library
%% Cell type:code id: tags:
```
import matplotlib as mpl
mpl.use('nbagg')
import matplotlib.pyplot as plt
```
%% Cell type:markdown id: tags:
The matplotlib tutorials are [here](https://matplotlib.org/tutorials/index.html)
%% Cell type:code id: tags:
```
x = np.linspace(0, 2, 100)
plt.plot(x, x, label='linear')
plt.plot(x, x**2, label='quadratic')
plt.plot(x, x**3, label='cubic')
plt.xlabel('x label')
plt.ylabel('y label')
plt.title("Simple Plot")
plt.legend()
plt.show()
```
%% Cell type:markdown id: tags:
Alternatives:
- [Mayavi](http://docs.enthought.com/mayavi/mayavi/): 3D plotting (hard to install)
- [Bokeh](https://bokeh.pydata.org/en/latest/) among many others: interactive plots in the browser (i.e., in javascript)
## [Ipython](http://ipython.org/)/[Jupyter](https://jupyter.org/) notebook: interactive python environments
There are many [useful extensions available](https://github.com/ipython-contrib/jupyter_contrib_nbextensions).
## [Pandas](https://pandas.pydata.org/): Analyzing "clean" data
Once your data is in tabular form (e.g., Biobank IDPs), you want to use pandas dataframes to analyze it.
This brings most of the functionality of R into python.
Pandas has excellent support for:
- fast IO to many tabular formats
- accurate handling of missing data
- many, many routines to handle data
  - grouping by categorical data (e.g., male/female, or age groups)
  - joining/merging data
  - time series support
- statistical models through [statsmodels](http://www.statsmodels.org/stable/index.html)
- plotting through [seaborn](https://seaborn.pydata.org/)
- scaling out with [dask](https://dask.pydata.org/en/latest/) if your data is too big for memory (or if you want to run in parallel)
You should also install `numexpr` and `bottleneck` for optimal performance.
For the documentation check [here](http://pandas.pydata.org/pandas-docs/stable/index.html).
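As a minimal sketch of the dataframe workflow (toy data, not Biobank IDPs):
> ```
> import pandas as pd
> df = pd.DataFrame({'sex': ['M', 'F', 'F', 'M'],
>                    'age': [23, 31, np.nan, 47],   # missing data stored as NaN
>                    'score': [0.3, 0.7, 0.5, 0.9]})
> df.groupby('sex').mean()                          # group by categorical data
> df.to_csv('toy_data.csv')                         # fast IO to many tabular formats
> ```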
### Adjusted example from statsmodels tutorial
%% Cell type:code id: tags:
```
import statsmodels.api as sm
import statsmodels.formula.api as smf
import numpy as np
```
%% Cell type:code id: tags:
```
df = sm.datasets.get_rdataset("Guerry", "HistData").data
df
```
%% Cell type:code id: tags:
```
df.describe()
```
%% Cell type:code id: tags:
```
df.groupby('Region').mean()
```
%% Cell type:code id: tags:
```
results = smf.ols('Lottery ~ Literacy + np.log(Pop1831)', data=df).fit()
results.summary()
```
%% Cell type:code id: tags:
```
df['log_pop'] = np.log(df.Pop1831)
df
```
%% Cell type:code id: tags:
```
results = smf.ols('Lottery ~ Literacy + log_pop', data=df).fit()
results.summary()
```
%% Cell type:code id: tags:
```
results = smf.ols('Lottery ~ Literacy + np.log(Pop1831) + Region', data=df).fit()
results.summary()
```
%% Cell type:code id: tags:
```
results = smf.ols('Lottery ~ Literacy + np.log(Pop1831) + Region + Region * Literacy', data=df).fit()
results.summary()
```
%% Cell type:code id: tags:
```
%matplotlib nbagg
import seaborn as sns
sns.pairplot(df, hue="Region", vars=('Lottery', 'Literacy', 'log_pop'))
```
%% Cell type:markdown id: tags:
## [Sympy](http://www.sympy.org/en/index.html): Symbolic programming
%% Cell type:code id: tags:
```
import sympy as sym # no standard nickname
```
%% Cell type:code id: tags:
```
x, a, b, c = sym.symbols('x, a, b, c')
sym.solve(a * x ** 2 + b * x + c, x)
```
%% Cell type:code id: tags:
```
sym.integrate(x/(x**2+a*x+2), x)
```
%% Cell type:code id: tags:
```
f = sym.utilities.lambdify((x, a), sym.integrate((x**2+a*x+2), x))
f(np.random.rand(10), np.random.rand(10))
```
%% Cell type:markdown id: tags:
# Other topics
## [Argparse](https://docs.python.org/3.6/howto/argparse.html): Command line arguments
%% Cell type:code id: tags:
```
%%writefile test_argparse.py
import argparse

def main():
    parser = argparse.ArgumentParser(description="calculate X to the power of Y")
    parser.add_argument("-v", "--verbose", action="store_true")
    parser.add_argument("x", type=int, help="the base")
    parser.add_argument("y", type=int, help="the exponent")
    args = parser.parse_args()
    answer = args.x**args.y

    if args.verbose:
        print("{} to the power {} equals {}".format(args.x, args.y, answer))
    else:
        print("{}^{} == {}".format(args.x, args.y, answer))

if __name__ == '__main__':
    main()
```
%% Cell type:code id: tags:
```
%run test_argparse.py 3 8 -v
```
%% Cell type:code id: tags:
```
%run test_argparse.py -h
```
%% Cell type:code id: tags:
```
%run test_argparse.py 3 8.5 -q
```
%% Cell type:markdown id: tags:
### [Gooey](https://github.com/chriskiehl/Gooey): GUI from command line tool
%% Cell type:code id: tags:
```
%%writefile test_gooey.py
import argparse
from gooey import Gooey

@Gooey
def main():
    parser = argparse.ArgumentParser(description="calculate X to the power of Y")
    parser.add_argument("-v", "--verbose", action="store_true")
    parser.add_argument("x", type=int, help="the base")
    parser.add_argument("y", type=int, help="the exponent")
    args = parser.parse_args()
    answer = args.x**args.y

    if args.verbose:
        print("{} to the power {} equals {}".format(args.x, args.y, answer))
    else:
        print("{}^{} == {}".format(args.x, args.y, answer))

if __name__ == '__main__':
    main()
```
%% Cell type:code id: tags:
```
%run test_gooey.py
```
%% Cell type:code id: tags:
```
!gcoord_gui
```
%% Cell type:markdown id: tags:
## [Jinja2](http://jinja.pocoo.org/docs/2.10/): HTML generation
%% Cell type:code id: tags:
```
%%writefile image_list.jinja2
<!DOCTYPE html>
<html lang="en">
<head>
    {% block head %}
    <title>{{ title }}</title>
    {% endblock %}
</head>
<body>
    <div id="content">
        {% block content %}
        {% for description, filenames in images %}
        <p>
            {{ description }}
        </p>
        {% for filename in filenames %}
        <a href="{{ filename }}">
            <img src="{{ filename }}">
        </a>
        {% endfor %}
        {% endfor %}
        {% endblock %}
    </div>
    <footer>
        Created on {{ time }}
    </footer>
</body>
</html>
```
%% Cell type:code id: tags:
```
def plot_sine(amplitude, frequency):
    x = np.linspace(0, 2 * np.pi, 100)
    y = amplitude * np.sin(frequency * x)
    plt.plot(x, y)
    plt.xticks([0, np.pi, 2 * np.pi], ['0', r'$\pi$', r'$2 \pi$'])
    plt.ylim(-1.1, 1.1)
    filename = 'plots/A{:.2f}_F{:.2f}.png'.format(amplitude, frequency)
    plt.title('A={:.2f}, F={:.2f}'.format(amplitude, frequency))
    plt.savefig(filename)
    plt.close(plt.gcf())
    return filename

!mkdir plots
amplitudes = [plot_sine(A, 1.) for A in [0.1, 0.3, 0.7, 1.0]]
frequencies = [plot_sine(1., F) for F in [1, 2, 3, 4, 5, 6]]
```
%% Cell type:code id: tags:
```
from jinja2 import Environment, FileSystemLoader
from datetime import datetime

loader = FileSystemLoader('.')
env = Environment(loader=loader)
template = env.get_template('image_list.jinja2')

images = [
    ('Varying the amplitude', amplitudes),
    ('Varying the frequency', frequencies),
]

with open('image_list.html', 'w') as f:
    f.write(template.render(title='Lots of sines',
                            images=images, time=datetime.now()))
```
%% Cell type:code id: tags:
```
!open image_list.html
```
%% Cell type:markdown id: tags:
## Neuroimage packages
The [nipy](http://nipy.org/) ecosystem covers most of these.
### [CIFTI](https://github.com/MichielCottaar/cifti): easy creation/manipulation
%% Cell type:code id: tags:
```
import nibabel
thickness = nibabel.load('100307/fsaverage_LR32k/100307.L.thickness.32k_fs_LR.shape.gii').darrays[0].data
thickness
```
%% Cell type:code id: tags:
```
import cifti
ctx = thickness != 0
arr = np.random.rand(ctx.sum())
bm_ctx = cifti.BrainModel.from_mask(ctx, name='CortexLeft')
sc = cifti.Scalar.from_names(['random'])
cifti.write('random_ctx.dscalar.nii', arr[None, :], (sc, bm_ctx))
```
%% Cell type:code id: tags:
```
!wb_view 100307/fsaverage_LR32k/100307.*.32k_fs_LR.surf.gii random_ctx.dscalar.nii
```
%% Cell type:code id: tags:
```
img = nibabel.load('100307/aparc+aseg.nii.gz')
cerebellum = img.get_data() == 8
bm = bm_ctx + cifti.BrainModel.from_mask(cerebellum, name='CerebellumLeft', affine=img.affine)
sc = cifti.Scalar.from_names(['random'])
arr = np.random.rand(len(bm))
cifti.write('random_ctx_cerebellum.dscalar.nii', arr[None, :], (sc, bm))
```
%% Cell type:code id: tags:
```
!wb_view 100307/fsaverage_LR32k/100307.*.32k_fs_LR.surf.gii 100307/aparc+aseg.nii.gz random_ctx_cerebellum.dscalar.nii
```
%% Cell type:code id: tags:
```
arr = abs(thickness[ctx, None] - thickness[None, ctx])
cifti.write('diff_thickness.dconn.nii', arr, (bm_ctx, bm_ctx))
```
%% Cell type:code id: tags:
```
!wb_view 100307/fsaverage_LR32k/100307.*.32k_fs_LR.surf.gii diff_thickness.dconn.nii
```
%% Cell type:markdown id: tags:
## [networkx](https://networkx.github.io/): graph theory
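For example, a minimal sketch of building and querying a graph with networkx:
> ```
> import networkx as nx
> g = nx.Graph()
> g.add_edges_from([('A', 'B'), ('B', 'C'), ('C', 'D'), ('A', 'D')])
> nx.shortest_path(g, 'A', 'C')   # shortest path between two nodes
> nx.degree_centrality(g)        # simple graph-theory measures
> ```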
## GUI
- [tkinter](https://docs.python.org/3.6/library/tkinter.html): thin wrapper around Tcl/Tk; included in python
- [wxpython](https://www.wxpython.org/): Wrapper around the C++ wxWidgets library
%% Cell type:code id: tags:
```
%%writefile wx_hello_world.py
#!/usr/bin/env python
"""
Hello World, but with more meat.
"""
import wx


class HelloFrame(wx.Frame):
    """
    A Frame that says Hello World
    """

    def __init__(self, *args, **kw):
        # ensure the parent's __init__ is called
        super(HelloFrame, self).__init__(*args, **kw)

        # create a panel in the frame
        pnl = wx.Panel(self)

        # and put some text with a larger bold font on it
        st = wx.StaticText(pnl, label="Hello World!", pos=(25, 25))
        font = st.GetFont()
        font.PointSize += 10
        font = font.Bold()
        st.SetFont(font)

        # create a menu bar
        self.makeMenuBar()

        # and a status bar
        self.CreateStatusBar()
        self.SetStatusText("Welcome to wxPython!")

    def makeMenuBar(self):
        """
        A menu bar is composed of menus, which are composed of menu items.
        This method builds a set of menus and binds handlers to be called
        when the menu item is selected.
        """
        # Make a file menu with Hello and Exit items
        fileMenu = wx.Menu()
        # The "\t..." syntax defines an accelerator key that also triggers
        # the same event
        helloItem = fileMenu.Append(-1, "&Hello...\tCtrl-H",
                                    "Help string shown in status bar for this menu item")
        fileMenu.AppendSeparator()
        # When using a stock ID we don't need to specify the menu item's
        # label
        exitItem = fileMenu.Append(wx.ID_EXIT)

        # Now a help menu for the about item
        helpMenu = wx.Menu()
        aboutItem = helpMenu.Append(wx.ID_ABOUT)

        # Make the menu bar and add the two menus to it. The '&' defines
        # that the next letter is the "mnemonic" for the menu item. On the
        # platforms that support it those letters are underlined and can be
        # triggered from the keyboard.
        menuBar = wx.MenuBar()
        menuBar.Append(fileMenu, "&File")
        menuBar.Append(helpMenu, "&Help")

        # Give the menu bar to the frame
        self.SetMenuBar(menuBar)

        # Finally, associate a handler function with the EVT_MENU event for
        # each of the menu items. That means that when that menu item is
        # activated then the associated handler function will be called.
        self.Bind(wx.EVT_MENU, self.OnHello, helloItem)
        self.Bind(wx.EVT_MENU, self.OnExit, exitItem)
        self.Bind(wx.EVT_MENU, self.OnAbout, aboutItem)

    def OnExit(self, event):
        """Close the frame, terminating the application."""
        self.Close(True)

    def OnHello(self, event):
        """Say hello to the user."""
        wx.MessageBox("Hello again from wxPython")

    def OnAbout(self, event):
        """Display an About Dialog"""
        wx.MessageBox("This is a wxPython Hello World sample",
                      "About Hello World 2",
                      wx.OK | wx.ICON_INFORMATION)


if __name__ == '__main__':
    # When this module is run (not imported) then create the app, the
    # frame, show it, and start the event loop.
    app = wx.App()
    frm = HelloFrame(None, title='Hello World 2')
    frm.Show()
    app.MainLoop()
```
%% Cell type:code id: tags:
```
%run wx_hello_world.py
```
%% Cell type:markdown id: tags:
## Machine learning
- scikit-learn
- theano/tensorflow/pytorch
- keras
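For example, a minimal scikit-learn sketch (fitting a classifier to one of its bundled datasets):
> ```
> from sklearn.datasets import load_iris
> from sklearn.linear_model import LogisticRegression
> X, y = load_iris(return_X_y=True)
> model = LogisticRegression().fit(X, y)   # fit a simple classifier
> model.score(X, y)                        # accuracy on the training data
> ```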
## [Pycuda](https://documen.tician.de/pycuda/): Programming the GPU
%% Cell type:code id: tags:
```
import pycuda.autoinit
import pycuda.driver as drv
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void multiply_them(double *dest, double *a, double *b)
{
  const int i = threadIdx.x;
  dest[i] = a[i] * b[i];
}
""")

multiply_them = mod.get_function("multiply_them")

a = np.random.randn(400)
b = np.random.randn(400)
dest = np.zeros_like(a)
multiply_them(
    drv.Out(dest), drv.In(a), drv.In(b),
    block=(400, 1, 1), grid=(1, 1))

print(dest - a * b)
```
%% Cell type:markdown id: tags:
## Testing
- [unittest](https://docs.python.org/3.6/library/unittest.html): python built-in testing
> ```
> import unittest
>
> class TestStringMethods(unittest.TestCase):
>
>     def test_upper(self):
>         self.assertEqual('foo'.upper(), 'FOO')
>
>     def test_isupper(self):
>         self.assertTrue('FOO'.isupper())
>         self.assertFalse('Foo'.isupper())
>
>     def test_split(self):
>         s = 'hello world'
>         self.assertEqual(s.split(), ['hello', 'world'])
>         # check that s.split fails when the separator is not a string
>         with self.assertRaises(TypeError):
>             s.split(2)
>
> if __name__ == '__main__':
>     unittest.main()
> ```
- [doctest](https://docs.python.org/3.6/library/doctest.html): checks the example usage in the documentation
> ```
> def factorial(n):
>     """Return the factorial of n, an exact integer >= 0.
>
>     >>> [factorial(n) for n in range(6)]
>     [1, 1, 2, 6, 24, 120]
>     >>> factorial(30)
>     265252859812191058636308480000000
>     >>> factorial(-1)
>     Traceback (most recent call last):
>         ...
>     ValueError: n must be >= 0
>
>     Factorials of floats are OK, but the float must be an exact integer:
>     >>> factorial(30.1)
>     Traceback (most recent call last):
>         ...
>     ValueError: n must be exact integer
>     >>> factorial(30.0)
>     265252859812191058636308480000000
>
>     It must also not be ridiculously large:
>     >>> factorial(1e100)
>     Traceback (most recent call last):
>         ...
>     OverflowError: n too large
>     """
>
>     import math
>     if not n >= 0:
>         raise ValueError("n must be >= 0")
>     if math.floor(n) != n:
>         raise ValueError("n must be exact integer")
>     if n+1 == n:  # catch a value like 1e300
>         raise OverflowError("n too large")
>     result = 1
>     factor = 2
>     while factor <= n:
>         result *= factor
>         factor += 1
>     return result
>
>
> if __name__ == "__main__":
>     import doctest
>     doctest.testmod()
> ```
Two external packages provide more convenient unit tests:
- [py.test](https://docs.pytest.org/en/latest/)
- [nose2](http://nose2.readthedocs.io/en/latest/usage.html)
> ```
> # content of test_sample.py
> def inc(x):
>     return x + 1
>
> def test_answer():
>     assert inc(3) == 5
> ```
- [coverage](https://coverage.readthedocs.io/en/coverage-4.5.1/): measures which part of the code is covered by the tests
## Linters
Linters check the code for any syntax errors, [style errors](https://www.python.org/dev/peps/pep-0008/), unused variables, unreachable code, etc.
- [pylint](https://pypi.python.org/pypi/pylint): most extensive linter
- [pyflakes](https://pypi.python.org/pypi/pyflakes): if you think pylint is too strict
- [pep8](https://pypi.python.org/pypi/pep8): just checks for style errors
- [mypy](http://mypy-lang.org/): adding explicit typing to python
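For example, a minimal sketch of the explicit typing that mypy checks:
> ```
> def add(x: int, y: int) -> int:
>     return x + y
>
> add(1, 2)      # fine
> add(1, 'two')  # flagged by mypy: argument 2 has an incompatible type
> ```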