If you’re interested in contributing to netCDF-SCM, we’d love to have you on board! This section of the docs details how to get set up to contribute and how best to communicate.


All contributions are welcome; some possibilities include:

  • tutorials (or support questions which, once solved, result in a new tutorial :D)

  • blog posts

  • improving the documentation

  • bug reports

  • feature requests

  • pull requests

Please report issues or discuss feature requests in the netCDF-SCM issue tracker. If your issue is a feature request or a bug, please use the templates available, otherwise, simply open a normal issue :)

As a contributor, please follow the conventions set out in the rest of this section.

Getting setup

To get set up as a developer, we recommend the following steps (if any of these tools are unfamiliar, please see the resources we recommend in Development tools):

  1. Install conda and make

  2. Run make conda-environment; if that fails, you can try doing it manually by reading the commands from the Makefile

  3. Make sure the tests pass by running make test; as above, if that fails you can try doing it manually by reading the commands from the Makefile

Getting help

Whilst developing, unexpected things can go wrong (that’s why it’s called ‘developing’; if we knew what we were doing, it would already be ‘developed’). Normally, the fastest way to solve an issue is to contact us via the issue tracker. The other option is to debug it yourself. For this purpose, we provide a list of the tools we use during our development as starting points for your search to find out what has gone wrong.

Development tools

This list of development tools is what we rely on to develop netCDF-SCM reliably and reproducibly. It gives you a few starting points in case things do go inexplicably wrong and you want to work out why. We include links with each of these tools to starting points that we think are useful, in case you want to learn more.

  • Git

  • Make

  • Conda virtual environments
    • note the common gotcha that source activate has now changed to conda activate

    • we use conda instead of pure pip environments because they help us deal with Iris’ dependencies: if you want to learn more about pip and pip virtual environments, check out this introduction

  • Tests
    • we use a blend of pytest and the inbuilt Python testing capabilities for our tests, so check out what we’ve already done in tests to get a feel for how it works

  • Continuous integration (CI)
    • we use GitLab CI for our CI but there are a number of good providers

  • Jupyter Notebooks
    • we’d recommend simply installing jupyter (conda install jupyter) in your virtual environment

  • Sphinx
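
As a sketch of the pytest style mentioned above (the helper being tested here, region_mean, is purely illustrative and not part of netCDF-SCM):

```python
# pytest discovers files named test_*.py and functions named test_*,
# so plain functions with bare asserts are enough for simple cases
def region_mean(values):
    """Illustrative stand-in for a netCDF-SCM helper: mean of a region's values."""
    return sum(values) / len(values)


def test_region_mean_simple():
    assert region_mean([1.0, 2.0, 3.0]) == 2.0


def test_region_mean_single_value():
    assert region_mean([4.2]) == 4.2
```

Running make test (or pytest directly) from the repository root collects and runs functions like these.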

Other tools

We also use some other tools which aren’t necessarily the most familiar. Here we provide a list of these along with useful resources.

  • Regular expressions
    • we use an online regular expression tester to help us write and check our regular expressions, make sure the language is set to Python to make your life easy!
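
Setting the language matters because Python’s regular expression flavour has conventions of its own, e.g. (?P<name>...) named groups. A quick sketch using the standard re module (the filename pattern is made up for this example):

```python
import re

# Illustrative pattern for filenames like "tas_Amon_examplemodel_rcp85.nc",
# using Python-flavour named groups (?P<name>...)
FILENAME_REGEX = re.compile(
    r"^(?P<variable>[^_]+)_(?P<table>[^_]+)_(?P<model>[^_]+)_(?P<scenario>[^_.]+)\.nc$"
)

match = FILENAME_REGEX.match("tas_Amon_examplemodel_rcp85.nc")
print(match.group("variable"))  # -> tas
print(match.group("model"))  # -> examplemodel
```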


To help us focus on what the code does, not how it looks, we use a couple of automatic formatting tools. These automatically format the code for us and tell us where the errors are. To use them, after setting yourself up (see Getting setup), simply run make black and make flake8. Note that make black can only be run if you have committed all your work, i.e. your working directory is ‘clean’. This restriction ensures that you don’t format code without being able to undo it, just in case something goes wrong.

Building the docs

After setting yourself up (see Getting setup), building the docs is as simple as running make docs (note, run make -B docs to force the docs to rebuild and ignore make when it says ‘… index.html is up to date’). This will build the docs for you. You can preview them by opening docs/build/html/index.html in a browser.

For documentation we use Sphinx. To get ourselves started with Sphinx, we started with this example then used Sphinx’s getting started guide.


To get Sphinx to generate PDFs (rarely worth the hassle), you require Latexmk. On a Mac this can be installed with sudo tlmgr install latexmk. You will most likely also need to install some other packages (if you don’t have the full distribution). You can check which package contains any missing files with tlmgr search --global --file [filename]. You can then install the packages with sudo tlmgr install [package].

Docstring style

For our docstrings we use numpy style docstrings. For more information on these, here is the full guide and the quick reference we also use.
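
As a quick sketch of what numpy style looks like in practice (the function itself is illustrative only, not part of netCDF-SCM’s API):

```python
def scale(values, factor=2.0):
    """
    Scale a sequence of values by a constant factor.

    Parameters
    ----------
    values : list of float
        Values to scale.
    factor : float, optional
        Multiplier applied to each value (default 2.0).

    Returns
    -------
    list of float
        Scaled copy of ``values``.
    """
    return [v * factor for v in values]
```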


The steps to release a new version of netCDF-SCM are shown below. Please do all the steps below and all the steps for both release platforms.

First step

  1. Test installation with dependencies: make test-install

  2. Update CHANGELOG.rst:

    • add a header for the new version between master and the latest bullet point

    • this should leave the section underneath the master header empty

  3. git add .

  4. git commit -m "Prepare for release of vX.Y.Z"

  5. git tag vX.Y.Z

  6. Test that the version updated as intended with make test-install


If uploading to PyPI, do the following (otherwise skip these steps):

  1. make publish-on-testpypi

  2. Go to test PyPI and check that the new release is as intended. If it isn’t, stop and debug.

  3. Test the install with make test-testpypi-install (this doesn’t test all the imports as most required packages are not on test PyPI).

Assuming test PyPI worked, now upload to the main repository

  1. make publish-on-pypi

  2. Go to netCDF-SCM’s PyPI and check that the new release is as intended.

  3. Test the install with make test-pypi-install (a pip only install will throw warnings about Iris not being installed, that’s fine).

Push to repository

Finally, push the tags and the repository

  1. git push

  2. git push --tags


  1. If you haven’t already, fork the netCDF-SCM conda feedstock. In your fork, add the feedstock upstream with git remote add upstream (upstream should now appear in the output of git remote -v)

  2. Update your fork’s master to the upstream master with:

    1. git checkout master

    2. git fetch upstream

    3. git reset --hard upstream/master

  3. Create a new branch in the feedstock for the version you want to bump to.

  4. Edit recipe/meta.yaml and update:

    • version number in line 1 (don’t include the ‘v’ in the version tag)

    • the build number to zero (you should only be here if releasing a new version)

    • update sha256 in line 9 (you can get the sha from netCDF-SCM’s PyPI by clicking on ‘Download files’ on the left and then clicking on ‘SHA256’ of the .tar.gz file to copy it to the clipboard)

  5. git add .

  6. git commit -m "Update to vX.Y.Z"

  7. git push

  8. Make a PR into the netCDF-SCM conda feedstock

  9. If the PR passes (give it at least 10 minutes to run all the CI), merge

  10. Double check that the version has increased (this can take a few minutes to update)
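
If you’d rather not copy the sha256 from PyPI by hand, the hash in step 4 can also be computed locally from the downloaded .tar.gz; a minimal sketch using only the standard library:

```python
import hashlib


def sha256_of_file(path):
    """Return the hex sha256 of a file, reading it in chunks to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


# e.g. sha256_of_file("netcdf-scm-X.Y.Z.tar.gz") for the release tarball
```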

Archiving on zenodo

  1. Create a clean version of the repo with git clean -xdf (note, this deletes all files not tracked by git so use with care; a dry run can be done with git clean -ndf)

  2. Tar the repo

    VERSION=`python -c 'import netcdf_scm; print(netcdf_scm.__version__)'` \
        && tar --exclude='./.git' -czvf "netcdf-scm-${VERSION}.tar.gz" .
  3. Run the zenodo script to get the curl command for the file to upload: python scripts/ <file-to-upload>

  4. The above script spits out a curl command, run this command (having set the ZENODO_TOKEN environment variable first) to upload your archive

  5. Go to the upload, read it through and finalise it by pushing ‘Publish’

Why is there a Makefile in a pure Python repository?

Whilst it may not be standard practice, a Makefile is a simple way to automate general setup (environment setup in particular). Hence we have one here which basically acts as a notes file for how to do all those little jobs which we often forget, e.g. setting up environments, running tests (and making sure we’re in the right environment), building docs, and setting up auxiliary bits and pieces.

Why did we choose a BSD 2-Clause License?

We want to ensure that our code can be used and shared as easily as possible. Whilst we love transparency, we didn’t want to force all future users to also comply with a stronger license such as AGPL. Hence the choice we made.

We recommend Morin et al. 2012 for more information for scientists about open-source software licenses.