Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

The changes listed in this file are categorised as follows:

  • Added: new features

  • Changed: changes in existing functionality

  • Deprecated: soon-to-be removed features

  • Removed: now removed features

  • Fixed: any bug fixes

  • Security: in case of vulnerabilities

master

Fixed

  • (!86) Removed reference to masks in docs/source/usage/weights.ipynb; the idea of masking is referred to as weights throughout netCDF-SCM

v2.1.0 - 2021-03-31

Changed

  • (!83) Raise or log a warning if the branch times cannot be verified, rather than raising a NotImplementedError (partially addresses #61)

  • (!80) Require xarray<0.17 until xarray #5050 is resolved

Fixed

  • (!84) Look for parent data submitted under a different institution id rather than immediately raising an IOError if no parent data under the same institution id is found

  • (!81) During stitching select the parent with the latest version if multiple parents are found (closes #59)

v2.0.2 - 2021-02-25

Added

  • (!79) Ability to crunch files if parent_time_units metadata is missing (closes #56)

Fixed

  • Incorrect paper link in README

v2.0.1 - 2021-02-25

Added

  • (!77) DOI reference to paper in crunched files (and anything derived from crunched files)

  • (!77) Tweaks to paper following proofs

v2.0.0 - 2021-01-19

Added

  • (!76) Added missing modules to documentation

  • (!72) v2 paper revisions round 2

  • (!67) v2 paper revisions

  • (!69) Added AR6 reference regions

  • (!68) “30-yr-running-mean” and “30-yr-running-mean-dedrift” normalisation options when stitching

  • (!68) nyears keyword argument when initialising netcdf_scm.normalisation.NormaliserRunningMean and netcdf_scm.normalisation.NormaliserRunningMeanDedrift so that the number of years used when calculating the running mean is now arbitrary (the default value is 21, so there is no change to the default behaviour); see the sketch after this list

  • (!32) First submission to Earth System Science Data (ESSD)

  • (!56) Instructions and scripts for doing zenodo releases

  • (!40) Add netcdf_scm.citing module (closes #39)

  • (!35) Add netcdf_scm.retractions module (closes #29)

  • (!51) Add normalisation module to docs

  • (!49) Add progress bar to directory sorting so it’s obvious when things are going very slowly

  • (!46) Add netcdf_scm.errors to docs (closes #41)

  • (!43) Add normalisation method 21-yr-running-mean-dedrift

  • (!39) Put basic license checking tools in new module: netcdf_scm.citing (closes #30)

  • (!34) Add convenience .MAG reader (netcdf_scm.io.load_mag_file) which automatically fills in metadata. Also adds netcdf_scm.io.get_scmcube_helper to the ‘public’ API.

  • (!25) Add regular test of conda installation

  • (!30) Added scipy to dependencies so that pip install works

  • (!26) Added 21-year running mean normalisation option

  • (!22) Allow user to choose weighting scheme in CLI

  • (!17) Add netcdf_scm.weights.AreaWeightCalculator

  • (!16) Add CMIP5 stitching support

  • (!8) Add process id to logging calls (fixes #13)

  • (!1) Add netcdf-scm-stitch so e.g. historical and scenario files can be joined and also normalised against e.g. piControl

  • (#108 (github)) Optimise wranglers and add regression tests

  • (#107 (github)) Add wrangling options for average/point start/mid/end year time manipulations for .MAG and .IN files

  • (#104 (github)) Allow wranglers to also handle unit conversions (see #101 (github))

  • (#102 (github)) Keep effective area as metadata when calculating SCM timeseries (see #100 (github))

  • (#98 (github)) Add support for reading CMIP6 concentration GMNHSH data

  • (#95 (github)) Add support for CO2 flux data (fgco2) reading, in the process simplifying crunching and improving lazy weights

  • (#87 (github)) Add support for crunching data with a height co-ordinate

  • (#84 (github)) Add ability to crunch land, ocean and atmosphere data separately (and sensibly)

  • (#75 (github)) Check land_mask_threshold is sensible when retrieving land mask (automatically update if not)

  • (#69 (github)) Add El Nino 3.4 mask

  • (#66 (github)) Add devops tools and refactor to pass new standards

  • (#62 (github)) Add netcdf-scm format and crunch to this by default

  • (#61 (github)) Add land fraction when crunching scm timeseries cubes
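
The !68, !43 and !26 entries above describe the running-mean normalisation options and the nyears keyword argument. A minimal sketch of how the normalisers might be initialised, assuming nothing beyond the class names and keyword given in those entries (any other constructor arguments are omitted):

    # Hedged sketch: the class names and the ``nyears`` keyword come from the
    # changelog entries above; the surrounding usage is illustrative only.
    from netcdf_scm.normalisation import (
        NormaliserRunningMean,
        NormaliserRunningMeanDedrift,
    )

    default_normaliser = NormaliserRunningMean()  # 21-year window, the unchanged default
    dedrift_30yr = NormaliserRunningMeanDedrift(nyears=30)  # e.g. the "30-yr-running-mean-dedrift" option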

Changed

  • (!73) Handling of invalid regions while crunching. If crunching requests regions which aren’t compatible with a file, a warning will be raised but the crunching will continue with all the valid regions it can. Previously, if invalid regions were requested, the crunch would fail and no regions would be crunched for that file.

  • (!73) Renamed netcdf_scm.weights.InvalidWeights to netcdf_scm.weights.InvalidWeightsError and ensured that all weights-related errors are now raised as netcdf_scm.weights.InvalidWeightsError rather than being a mix of netcdf_scm.weights.InvalidWeightsError and ValueError as was previously the case.

  • (!73) netcdf_scm.iris_cube_wrappers.ScmCube.get_scm_timeseries_cubes() will now raise a netcdf_scm.weights.InvalidWeightsError if none of the requested regions have valid weights.

  • (!73) Improved logging handling so only netCDF-SCM’s logger is used by netCDF-SCM, with the root logger never being used.

  • (!71) Rename prefix for AR6 regions from World|AR6 regions to World|AR6

  • (!70) Update default land-fraction cube, netcdf_scm.weights.default_land_ocean_weights.nc, so it is based on CMIP6 data and e.g. the Caspian Sea and the Great Lakes are no longer treated as purely land

  • (!5) Use xarray to load crunched netCDF files in netcdf_scm.io.load_scmrun(), reducing load time by about a factor of 3

  • (!64) Upgraded to pymagicc 2.0.0rc5 and changed all use of scmdata.ScmDataFrame to scmdata.ScmRun

  • (!64) Renamed netcdf_scm.io.load_scmdataframe to netcdf_scm.io.load_scmrun; this function now automatically drops the “todo” column on reading (see the sketch after this list)

  • (!62) Changed command-line interface to use groups rather than hyphens. The commands change as follows: netcdf-scm-crunch -> netcdf-scm crunch, netcdf-scm-stitch -> netcdf-scm stitch, netcdf-scm-wrangle -> netcdf-scm wrangle.

  • (!60) Target journal for v2 paper

  • (!55) Added check that region areas are sensible when calculating SCM timeseries cubes (see ScmCube._sanity_check_area(), closes #34)

  • (!52) Put notebooks into documentation, hence moved them from notebooks to docs/source/usage

  • (!48) Work around erroneous whitespace in parent metadata when stitching (closes #36)

  • (!47) Rework CHANGELOG to follow Keep a Changelog (closes #27)

  • (!45) Move from https://gitlab.com/znicholls/netcdf-scm to https://gitlab.com/netcdf-scm/netcdf-scm

  • (!38) Split out normalisation module: netcdf_scm.normalisation (closes #31)

  • (!37) Do not duplicate files into a flat directory when wrangling and stitching (closes #33)

  • (!31) Rename SCMCube to ScmCube. Also use “netCDF” rather than “NetCDF” throughout.

  • (!28) Move multiple stitching utility functions into the ‘public’ API

  • (!29) Parallelise directory sorting when crunching

  • (!27) Refactored stitching to module to make room for new normalisation method

  • (!24) Parallelise unit, integration and regression tests in CI to reduce run time

  • (!23) Split netcdf_scm.cli into smaller parts

  • (!21) Remove use of contourf in notebooks as it can give odd results

  • (!20) Update weight retrieval so that non-area weights are normalised (fixes #11)

  • (!19) Update notebooks and refactor so cubes can have multiple weights calculators

  • (#106 (github)) Upgrade to new Pymagicc release

  • (#105 (github)) Upgrade to new Pylint release

  • (#99 (github)) Switch to BSD-3-Clause license

  • (#92 (github)) Shrink test files (having moved entire repository to use git lfs properly)

  • (#90 (github)) Rely on iris for lazy crunching

  • (#89 (github)) Change crunching thresholds to be based on data size rather than number of years

  • (#82 (github)) Prepare to add land data handling

  • (#81 (github)) Refactor masks to use weighting instead of masking, doing all the renaming in the process

  • (#80 (github)) Refactor to avoid import conftest in tests

  • (#77 (github)) Refactor netcdf_scm.masks.get_area_mask logic to make multi-dimensional co-ordinate support easier

  • (#72 (github)) Monkey patch iris to speed up crunching and go back to linear regridding of default sftlf mask

  • (#70 (github)) Dynamically decide whether to handle data lazily (fix regression tests in process)

  • (#64 (github)) Update logging to make post analysis easier and output clearer

  • (#63 (github)) Switch to using cmor name for variable in SCM timeseries output and put standard name in standard_variable_name

  • (#58 (github)) Lock tuningstruc wrangling so it can only wrangle to flat tuningstrucs, also includes:

    • turning off all wrangling in preparation for re-doing crunching format

    • adding default sftlf cube

  • (#50 (github)) Make pyam-iamc a core dependency
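
The !64 and !5 entries above cover the move to scmdata.ScmRun and the renamed loader. A minimal sketch of reading a crunched file, assuming load_scmrun takes the path to a crunched netCDF-SCM file and returns an scmdata.ScmRun (the file name below is hypothetical):

    # Hedged sketch: the function and class names come from the entries above;
    # the path and the downstream call are illustrative only.
    from netcdf_scm.io import load_scmrun

    run = load_scmrun("crunched/netcdf-scm_tas_Amon_EXAMPLE-MODEL_historical_r1i1p1f1_gn_185001-201412.nc")
    # the "todo" column has already been dropped on reading
    print(run.get_unique_meta("variable"))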

Fixed

  • (!75) Check regionmask version before trying to access regionmask’s AR6 region definitions

  • (!66) Upgraded to scmdata 0.7

  • (!59) Updated SCMCube.lat_lon_shape so it is better able to handle non-standard datasets

  • (!58) Upgraded to pymagicc>=2.0.0rc3 to ensure pint compatible unit handling when writing .MAG files

  • (!57) Include cmip5 reference csv in package (closes #43)

  • (!36) Ensure areas are only calculated based on non-masked data (fixes bugs identified in #35 and #37)

  • (!33) Fix bug in stitching.get_branch_time where wrong time units were used when converting raw time to datetime

  • (!18) Hotfix tests

  • (!15) Fixed bug in unit conversion which caused it to fail for hfds

  • (!14) Fixed stitching error when the start year is 1 (#15)

  • (!13) Make cube concatenation work around small errors in raw data metadata

  • (!12) Fixed stitched .MAG filename bug identified in #14

  • (!10) Add support for esm* experiments when stitching (fixes #2)

  • (!11) Add ability to read CanESM5 ocean data with depth and ‘extra’ co-ordinates. Also:

    • split regression testing into smaller pieces so memory requirements aren’t so high

  • (!9) Add ability to read CanESM5 ocean data, making handling of ‘extra’ co-ordinates more robust

  • (!6) Allow hfds crunching to work by handling extra ocean data coordinates properly

  • (#114 (github)) Ensure that default sftlf file is included in wheel

  • (#111 (github)) Write tuningstrucs with data in columns rather than rows

  • (#97 (github)) Add support for tuningstruc data which has been transposed

  • (#88 (github)) Fix bug when reading more than one multi-dimensional file in a directory

  • (#74 (github)) Fix bug in mask generation

  • (#67 (github)) Fix crunching filenaming, tidy up more and add catch for IPSL time_origin time variable attribute

  • (#55 (github)) Hotfix docs so they build properly

Removed

  • (!62) netcdf_scm.cli_utils._init_logging; netCDF-SCM will now only initialise a logger when used from the command line, giving users full control of logging again

  • (!61) Redundant files

  • (!42) Remove redundant test files (leftover from previous behaviour)

v1.0.0 - 2019-05-21

Changed

  • (#49 (github)) Make bandit only check src

  • (#45 (github)) Refactor the masking of regions into a module allowing for more regions to be added as needed

Added

  • (#48 (github)) Add isort to checks

  • (#47 (github)) Add regression tests on crunching output to ensure stability. Also:

    • fixes minor docs bug

    • updates default regexp option in crunch and wrangle to avoid fx files

    • refactors cli.py a touch to reduce duplication

    • avoids collections deprecation warning in mat4py

Fixed

  • (#46 (github)) Fix a number of bugs in netcdf-scm-wrangle’s data handling when converting to tuningstrucs

v0.7.3 - 2019-05-16

Changed

  • (#44 (github)) Speed up crunching by forcing data to load before applying masks, not each time a mask is applied

v0.7.2 - 2019-05-16

Changed

  • (#43 (github)) Speed up crunching, in particular remove string parsing to convert cftime to python datetime

v0.7.1 - 2019-05-15

Added

  • (#42 (github)) Add netcdf-scm-wrangle command line interface

Fixed

  • (#41 (github)) Fixed bug in path handling of CMIP6OutputCube

v0.6.2 - 2019-05-14

Added

  • (#39 (github)) Add netcdf-scm-crunch command line interface

v0.6.1 - 2019-05-13

Added

  • (#29 (github)) Put crunching script into formal test suite which confirms results against KNMI data; however, there are no docs or formal examples until #6 (github) is closed

  • (#28 (github)) Added cmip5 crunching script example; not tested, so use with caution until #6 (github) is closed

Changed

  • (#40 (github)) Upgrade to pyam v0.2.0

  • (#38 (github)) Update to using openscm releases and hence drop Python3.6 support

  • (#37 (github)) Adjusted read in of gregorian with 0 reference to give all data from year 1 back

  • (#34 (github)) Move to new openscm naming i.e. returning ScmDataFrame rather than OpenSCMDataFrameBase

  • (#32 (github)) Move to returning OpenSCMDataFrameBase rather than pandas DataFrame when crunching to scm format

Fixed

  • (#35 (github)) Fixed bug which prevented SCMCube from crunching to scm timeseries with default earth radius when areacella cube was missing

  • (#29 (github)) Fixed bug identified in #30 (github)

v0.5.1 - 2018-11-12

Changed

  • (#26 (github)) Expose directory and filename parsers directly

v0.4.3 - 2018-11-12

Changed

  • Move import cftime into same block as iris imports

v0.4.2 - 2018-11-12

Changed

  • Update setup.py to install dependencies so that non-Iris dependent functionality can be run from a pip install

v0.4.1 - 2018-11-12

Added

  • (#23 (github)) Added ability to handle cubes with invalid calendar (e.g. CMIP6 historical concentrations cubes)

  • (#20 (github)) Added CMIP6Input4MIPsCube and CMIP6OutputCube which add compatibility with CMIP6 data

v0.3.1 - 2018-11-05

Added

  • (#15 (github)) Add ability to load from a directory with data that is saved in multiple timeslice files, also adds:

    • adds regular expressions section to development part of docs

    • adds an example script of how to crunch netCDF files into SCM csvs

  • (#13 (github)) Add load_from_path method to SCMCube

  • (#10 (github)) Add land/ocean and hemisphere splits to _get_scm_masks outputs

Changed

  • (#17 (github)) Update to crunch global and hemispheric means even if land-surface fraction data is missing

  • (#16 (github)) Tidy up experimental crunching script

  • (#14 (github)) Streamline install process

  • (#12 (github)) Update to use output format that is compatible with pyam

  • Update netcdftime to cftime to track name change

v0.2.4 - 2018-10-15

Added

  • Include simple tests in package

v0.2.3 - 2018-10-15

Added

  • Include LICENSE in package

v0.2.2 - 2018-10-15

Added

  • Add conda dev environment details

v0.2.1 - 2018-10-15

Changed

  • Update setup.py to reflect actual supported python versions

v0.2.0 - 2018-10-14

Added

  • (#4 (github)) Add work done elsewhere previously

    • SCMCube base class for handling netCDF files

      • reading, cutting and manipulating files for SCM use

    • MarbleCMIP5Cube for handling CMIP5 netCDF files within a particular directory structure

    • automatic loading and use of surface land fraction and cell area files

    • returns timeseries data, once processed, in pandas DataFrames rather than netCDF format for easier use

    • demonstration notebook of how this first step works

    • CI for entire repository including notebooks

    • automatic documentation with Sphinx

v0.0.1 - 2018-10-05

Added

  • initial release

v0.0 - 2018-10-05

Added

  • dummy release