All Nengo Bones configuration is done through the .nengobones.yml file. In this notebook we will demonstrate the different configuration options available and how to use them to customize behaviour for different use cases.
First we’ll define some utility functions we’ll use throughout this notebook:
[1]:
def write_yml(contents):
    """Create a sample .nengobones.yml file from a string."""
    nengo_yml = "pkg_name: example\nrepo_name: nengo-bones/example\n"
    nengo_yml += contents
    with open(".nengobones.yml", "w") as f:
        f.write(nengo_yml)
def display_contents(filename, sections=None):
    """
    Display the contents of a file.

    The 'sections' argument filters the file to show only the
    specified sections.
    """
    with open(filename) as f:
        data = f.readlines()
    # strip out blank lines
    data = [x for x in data if x.strip() != ""]
    if sections is None:
        display_data = data
    else:
        # pull out only the requested sections
        display_data = []
        for sec in sections:
            # find the start of the section (either a bash command check
            # or an unindented key in a yml file)
            for i, line in enumerate(data):
                if ('"$COMMAND" == "%s"' % sec in line
                        or line.startswith(sec)):
                    section_data = data[i:]
                    break
            # truncate at the next unindented line (start of next section)
            for i, line in enumerate(section_data[1:]):
                if not line.startswith(" "):
                    section_data = section_data[:i + 1]
                    break
            display_data.extend(section_data)
    print("".join(display_data).rstrip("\n"))
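To illustrate the section filtering that display_contents performs, here is a small self-contained sketch of the same idea (operating on a made-up list of lines rather than a real file): keep lines from the matching header up to the next unindented line.

```python
def filter_section(lines, name):
    """Minimal standalone version of the section filtering used above."""
    out = []
    in_section = False
    for line in lines:
        if line.startswith(name):
            in_section = True  # found the section header
        elif in_section and not line.startswith(" "):
            break  # an unindented line starts the next section
        if in_section:
            out.append(line)
    return out

sample = ["env:", "  global:", '    - SCRIPT="test"', "jobs:", "  include:"]
print(filter_section(sample, "env"))
# → ['env:', '  global:', '    - SCRIPT="test"']
```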
Nengo Bones uses a continuous integration (CI) setup wherein build jobs are associated with different shell scripts, and each script defines commands to be executed at different stages during the build process. These scripts are configured through the ci_scripts section of .nengobones.yml. The basic step is to add an entry with - template: scriptname for each script template that we want to be rendered. As an example, in this section we will use the “test” template, but all the options we describe here work for any of the templated scripts.
Normally the .nengobones.yml file is a text file sitting in the root directory of the repository. For demonstration purposes, in this notebook we will be generating different config files on-the-fly using the utility functions from above.
[2]:
nengobones_yml = """
ci_scripts:
  - template: test
"""
write_yml(nengobones_yml) # create .nengobones.yml file
!generate-bones ci-scripts # call the generate-bones script
display_contents("test.sh") # display the contents of the generated file
#!/usr/bin/env bash
# Automatically generated by nengo-bones, do not edit this file directly
# Version: 0.1.0
NAME=$0
COMMAND=$1
STATUS=0  # used to exit with non-zero status if any command fails
exe() {
    echo "\$ $*";
    # remove empty spaces from args
    args=( "$@" )
    for i in "${!args[@]}"; do
        [ -n "${args[$i]}" ] || unset "args[$i]"
    done
    "${args[@]}" || STATUS=1;
}
if [[ ! -e example ]]; then
    echo "Run this script from the root directory of this repository"
    exit 1
fi
if [[ "$COMMAND" == "install" ]]; then
    :
    exe pip install "pytest>=3.6.0" "pytest-xdist>=1.16.0"
    exe pip install -e ".[tests]"
elif [[ "$COMMAND" == "after_install" ]]; then
    :
elif [[ "$COMMAND" == "before_script" ]]; then
    :
elif [[ "$COMMAND" == "script" ]]; then
    # shellcheck disable=SC2086
    exe pytest example -v -n 2 --color=yes --durations 20 $TEST_ARGS
elif [[ "$COMMAND" == "before_cache" ]]; then
    exe conda clean --all
elif [[ "$COMMAND" == "after_success" ]]; then
    :
elif [[ "$COMMAND" == "after_failure" ]]; then
    :
elif [[ "$COMMAND" == "before_deploy" ]]; then
    :
elif [[ "$COMMAND" == "after_deploy" ]]; then
    :
elif [[ "$COMMAND" == "after_script" ]]; then
    :
elif [[ -z "$COMMAND" ]]; then
    echo "$NAME requires a command like 'install' or 'script'"
else
    echo "$NAME does not define $COMMAND"
fi
if [[ "$COMMAND" != "script" && -n "$TRAVIS" && "$STATUS" -ne "0" ]]; then
    travis_terminate "$STATUS"
fi
exit "$STATUS"
There is a lot of information in that file that we don’t really need to worry about (that’s the whole point of Nengo Bones, to take care of those details for us). We can see that the overall structure is made up of behaviour defined for different build stages (e.g. “install”, “after_install”, “before_script”, etc.). In most cases all the important action happens in the “install” and “script” stages:
[3]:
display_contents("test.sh", sections=["install", "script"])
if [[ "$COMMAND" == "install" ]]; then
    :
    exe pip install "pytest>=3.6.0" "pytest-xdist>=1.16.0"
    exe pip install -e ".[tests]"
elif [[ "$COMMAND" == "script" ]]; then
    # shellcheck disable=SC2086
    exe pytest example -v -n 2 --color=yes --durations 20 $TEST_ARGS
In the “install” stage we install any of the package requirements needed to run the script. In this case we can see we’re installing pytest, pytest-xdist, and the current package (including any optional dependencies defined in the [tests] extras_require directive).
If we need to add extra packages to this default installation, that can be done with the pip_install or conda_install configuration options (to install packages through pip or conda, respectively):
[4]:
nengobones_yml = """
ci_scripts:
  - template: test
    pip_install:
      - an-extra-pip-package
    conda_install:
      - an-extra-conda-package
"""
write_yml(nengobones_yml)
!generate-bones ci-scripts
display_contents("test.sh", sections=["install"])
if [[ "$COMMAND" == "install" ]]; then
    exe conda install -q "an-extra-conda-package"
    exe pip install "an-extra-pip-package"
    exe pip install "pytest>=3.6.0" "pytest-xdist>=1.16.0"
    exe pip install -e ".[tests]"
Note that requirements should generally be added to the package requirements defined in setup.py (in which case they would be automatically installed in the pip install -e ".[tests]" step). That way anyone can easily install the necessary packages and run the tests without having to go through the CI scripts. The pip_install and conda_install options are only necessary if that isn’t feasible for some reason.
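For instance, the setup.py of this hypothetical example package might declare its test dependencies as follows (a sketch only; the package name and requirement list are placeholders, and in a real setup.py these keyword arguments would be passed to setuptools.setup):

```python
# Sketch of the relevant keyword arguments for a hypothetical setup.py;
# a real setup.py would pass these to setuptools.setup(**setup_kwargs).
setup_kwargs = dict(
    name="example",
    packages=["example"],
    install_requires=[],  # runtime dependencies would go here
    extras_require={
        # installed by `pip install -e ".[tests]"`
        "tests": ["pytest>=3.6.0", "pytest-xdist>=1.16.0"],
    },
)
```

With test dependencies declared this way, contributors can install everything needed to run the test suite with a single pip command, without touching the CI scripts.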
The “script” stage is where the main work of the script is done. In the case of the “test” script, this means calling pytest to run the test suite:
[5]:
display_contents("test.sh", sections=["script"])
elif [[ "$COMMAND" == "script" ]]; then
    # shellcheck disable=SC2086
    exe pytest example -v -n 2 --color=yes --durations 20 $TEST_ARGS
Ignore the $TEST_ARGS part for now. We will cover that when we come to the .travis.yml configuration.
As with the “install” stage, the “script” stage can also be customized if we want to add extra commands, either before or after the main script body:
[6]:
nengobones_yml = """
ci_scripts:
  - template: test
    pre_commands:
      - echo "this command will run at the beginning"
    post_commands:
      - echo "this command will run at the end"
"""
write_yml(nengobones_yml)
!generate-bones ci-scripts
display_contents("test.sh", sections=["script"])
elif [[ "$COMMAND" == "script" ]]; then
    exe echo "this command will run at the beginning"
    # shellcheck disable=SC2086
    exe pytest example -v -n 2 --color=yes --durations 20 $TEST_ARGS
    exe echo "this command will run at the end"
We can also use the same template multiple times, passing different options to generate different output scripts. In this case we need to use the output_name config option to distinguish the different rendered scripts:
[7]:
nengobones_yml = """
ci_scripts:
  - template: test
    pre_commands:
      - echo "this is test"
  - template: test
    output_name: test2
    pre_commands:
      - echo "this is test2"
"""
write_yml(nengobones_yml)
!generate-bones ci-scripts
print("Contents of test.sh")
print("-------------------")
display_contents("test.sh", sections=["script"])
print("\nContents of test2.sh")
print("--------------------")
display_contents("test2.sh", sections=["script"])
Contents of test.sh
-------------------
elif [[ "$COMMAND" == "script" ]]; then
    exe echo "this is test"
    # shellcheck disable=SC2086
    exe pytest example -v -n 2 --color=yes --durations 20 $TEST_ARGS
Contents of test2.sh
--------------------
elif [[ "$COMMAND" == "script" ]]; then
    exe echo "this is test2"
    # shellcheck disable=SC2086
    exe pytest example -v -n 2 --color=yes --durations 20 $TEST_ARGS
The “test” script also has some configuration options specific to that script. First, we can collect coverage information while tests are running by setting coverage: true:
[8]:
nengobones_yml = """
ci_scripts:
  - template: test
    coverage: true
"""
write_yml(nengobones_yml)
!generate-bones ci-scripts
display_contents("test.sh", sections=["install", "script", "after_script"])
if [[ "$COMMAND" == "install" ]]; then
    :
    exe pip install "pytest>=3.6.0" "pytest-xdist>=1.16.0"
    exe pip install "pytest-cov>=2.6.0"
    exe pip install -e ".[tests]"
elif [[ "$COMMAND" == "script" ]]; then
    # shellcheck disable=SC2086
    exe pytest example -v -n 2 --color=yes --durations 20 --cov=example --cov-append --cov-report=term-missing $TEST_ARGS
elif [[ "$COMMAND" == "after_script" ]]; then
    exe eval "bash <(curl -s https://codecov.io/bash)"
Note that the script is now installing extra packages, adding extra arguments to pytest, and uploading the results to Codecov at the end.
Nengo backends also often want to run the core Nengo tests in addition to their own test suite. This can be accomplished by setting nengo_tests: true:
[9]:
nengobones_yml = """
ci_scripts:
  - template: test
    nengo_tests: true
"""
write_yml(nengobones_yml)
!generate-bones ci-scripts
display_contents("test.sh", sections=["script"])
elif [[ "$COMMAND" == "script" ]]; then
    # shellcheck disable=SC2086
    exe pytest example -v -n 2 --color=yes --durations 20 $TEST_ARGS
    # shellcheck disable=SC2086
    exe pytest --pyargs nengo -v -n 2 --color=yes --durations 20 $TEST_ARGS
As mentioned above, the general idea is that TravisCI will create a number of build jobs, and each job will call one of the scripts we generated. TravisCI is configured through its own .travis.yml file, so what we will be doing is using .nengobones.yml to generate .travis.yml. This is done through the travis_yml section:
[10]:
nengobones_yml = """
travis_yml:
  jobs: []
"""
write_yml(nengobones_yml)
!generate-bones travis-yml
display_contents(".travis.yml")
# Automatically generated by nengo-bones, do not edit this file directly
# Version: 0.1.0
language: python
python:
  - "3.6"
notifications:
  email:
    on_success: change
    on_failure: change
# cache:
#   directories:
#     - $HOME/miniconda
dist: trusty
env:
  global:
    - SCRIPT="test"
    - PYTHON_VERSION="3.6"
    - PIP_UPGRADE="true"  # always upgrade to latest version
    - PIP_UPGRADE_STRATEGY="eager"  # upgrade all dependencies
    - TEST_ARGS=""
    - COV_CORE_SOURCE=  # early start pytest-cov engine
    - COV_CORE_CONFIG=.coveragerc
    - COV_CORE_DATAFILE=.coverage.eager
    - BRANCH_NAME="${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}"
jobs:
  include:
before_install:
  # export travis_terminate for use in scripts
  - export -f travis_terminate
        _travis_terminate_linux
        _travis_terminate_osx
        _travis_terminate_unix
        _travis_terminate_windows
  # install/run nengo-bones
  - pip install nengo-bones
  - generate-bones --output-dir .ci --template-dir .ci ci-scripts
  - check-bones
  # set up conda
  - export PATH="$HOME/miniconda/bin:$PATH"
  - if ! [[ -d $HOME/miniconda/envs/test ]]; then
      rm -rf $HOME/miniconda;
      wget -q http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh -O miniconda.sh;
      bash miniconda.sh -b -p $HOME/miniconda;
      conda update -q -y conda;
      conda create -q -y -n test python=$PYTHON_VERSION pip;
    else
      conda update -q -y conda;
    fi
  - conda config --set always_yes yes --set changeps1 no
  - source activate test
  # upgrade pip
  - pip install pip
  # display environment info
  - conda info -a
  - conda list -e
  - pip freeze
install:
  - .ci/$SCRIPT.sh install
  - conda list -e
  - pip freeze
after_install:
  - .ci/$SCRIPT.sh after_install
before_script:
  - .ci/$SCRIPT.sh before_script
script:
  - .ci/$SCRIPT.sh script
before_cache:
  - .ci/$SCRIPT.sh before_cache
after_success:
  - .ci/$SCRIPT.sh after_success
after_failure:
  - .ci/$SCRIPT.sh after_failure
before_deploy:
  - .ci/$SCRIPT.sh before_deploy
after_deploy:
  - .ci/$SCRIPT.sh after_deploy
after_script:
  - .ci/$SCRIPT.sh after_script
As before, there is a lot of detail here that we don’t need to worry about because Nengo Bones will take care of it for us. The important part to understand is the overall structure.
First, we create some global variables:
[11]:
display_contents(".travis.yml", sections=["env"])
env:
  global:
    - SCRIPT="test"
    - PYTHON_VERSION="3.6"
    - PIP_UPGRADE="true"  # always upgrade to latest version
    - PIP_UPGRADE_STRATEGY="eager"  # upgrade all dependencies
    - TEST_ARGS=""
    - COV_CORE_SOURCE=  # early start pytest-cov engine
    - COV_CORE_CONFIG=.coveragerc
    - COV_CORE_DATAFILE=.coverage.eager
    - BRANCH_NAME="${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}"
This shows the default variables. We can add arbitrary variables to this list through the global_vars option:
[12]:
nengobones_yml = """
travis_yml:
  global_vars:
    VAR0: val0
    VAR1: val1
  jobs: []
"""
write_yml(nengobones_yml)
!generate-bones travis-yml
display_contents(".travis.yml", sections=["env"])
env:
  global:
    - SCRIPT="test"
    - PYTHON_VERSION="3.6"
    - PIP_UPGRADE="true"  # always upgrade to latest version
    - PIP_UPGRADE_STRATEGY="eager"  # upgrade all dependencies
    - TEST_ARGS=""
    - COV_CORE_SOURCE=  # early start pytest-cov engine
    - COV_CORE_CONFIG=.coveragerc
    - COV_CORE_DATAFILE=.coverage.eager
    - BRANCH_NAME="${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}"
    - VAR0="val0"
    - VAR1="val1"
Note that these variables will be set in order, with later variables overwriting earlier ones. So if, for example, we wanted to change the default script, we could do that by setting:
[13]:
nengobones_yml = """
travis_yml:
  global_vars:
    SCRIPT: not-test
  jobs: []
"""
write_yml(nengobones_yml)
!generate-bones travis-yml
display_contents(".travis.yml", sections=["env"])
env:
  global:
    - SCRIPT="test"
    - PYTHON_VERSION="3.6"
    - PIP_UPGRADE="true"  # always upgrade to latest version
    - PIP_UPGRADE_STRATEGY="eager"  # upgrade all dependencies
    - TEST_ARGS=""
    - COV_CORE_SOURCE=  # early start pytest-cov engine
    - COV_CORE_CONFIG=.coveragerc
    - COV_CORE_DATAFILE=.coverage.eager
    - BRANCH_NAME="${TRAVIS_PULL_REQUEST_BRANCH:-$TRAVIS_BRANCH}"
    - SCRIPT="not-test"
The next important part of the .travis.yml is the “jobs” section.
[14]:
display_contents(".travis.yml", sections=["jobs"])
jobs:
  include:
It is currently empty because we haven’t defined any jobs in our .nengobones.yml. This is done via the jobs section of travis_yml, which specifies the builds we want to run during CI. The most important part of each job is specifying which script we want to run. For example, we could create a single job that runs the “test” script:
[15]:
nengobones_yml = """
travis_yml:
  jobs:
    - script: test
"""
write_yml(nengobones_yml)
!generate-bones travis-yml
display_contents(".travis.yml", sections=["jobs"])
jobs:
  include:
    -
      env:
        SCRIPT="test"
We can see that this causes the $SCRIPT variable to be set to "test", which will then cause test.sh to be run during the various build steps. Similarly, we can set the python_version variable to change the Python version being used for that job. For example, we could create two jobs that run the test suite with different Python versions:
[16]:
nengobones_yml = """
travis_yml:
  jobs:
    - script: test
      python_version: 3.5
    - script: test
      python_version: 3.6
"""
write_yml(nengobones_yml)
!generate-bones travis-yml
display_contents(".travis.yml", sections=["jobs"])
jobs:
  include:
    -
      env:
        SCRIPT="test"
        PYTHON_VERSION="3.5"
    -
      env:
        SCRIPT="test"
        PYTHON_VERSION="3.6"
The test_args option sets the $TEST_ARGS environment variable that we saw in the test.sh script above. Recall that this will be passed to the pytest command, like pytest my_project ... $TEST_ARGS. This can be useful for projects that define custom pytest arguments that they might want to set for different jobs, e.g.:
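As a sketch of where such a custom argument might come from (this is hypothetical; the flag name simply mirrors the example config here), a project's conftest.py could register the flag with pytest so that tests can read it:

```python
# conftest.py (hypothetical sketch of a project defining a custom flag)
def pytest_addoption(parser):
    # register the flag so that `pytest --do-a-special-test` is accepted
    parser.addoption(
        "--do-a-special-test",
        action="store_true",
        help="run the special tests for this project",
    )
```

A test can then check request.config.getoption("--do-a-special-test") and skip itself when the flag is absent, so only jobs that set test_args: --do-a-special-test exercise those tests.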
[17]:
nengobones_yml = """
travis_yml:
  jobs:
    - script: test
      test_args: --do-a-special-test
"""
write_yml(nengobones_yml)
!generate-bones travis-yml
display_contents(".travis.yml", sections=["jobs"])
jobs:
  include:
    -
      env:
        SCRIPT="test"
        TEST_ARGS="--do-a-special-test"
Note that all these options are simply shortcuts for setting environment variables. We can also set arbitrary environment variables through the env option:
[18]:
nengobones_yml = """
travis_yml:
  jobs:
    - script: test
      env:
        VAR0: val0
"""
write_yml(nengobones_yml)
!generate-bones travis-yml
display_contents(".travis.yml", sections=["jobs"])
jobs:
  include:
    -
      env:
        VAR0="val0"
        SCRIPT="test"
Any other options set for a job will be passed directly on to the .travis.yml config for that job. This opens up a wide range of configuration options (see https://docs.travis-ci.com/ for more information). As an example, we could divide jobs into different stages through the stage option:
[19]:
nengobones_yml = """
travis_yml:
  jobs:
    - script: test
      stage: stage 0
    - script: docs
      stage: stage 1
"""
write_yml(nengobones_yml)
!generate-bones travis-yml
display_contents(".travis.yml", sections=["jobs"])
jobs:
  include:
    -
      env:
        SCRIPT="test"
      stage: stage 0
    -
      env:
        SCRIPT="docs"
      stage: stage 1
Nengo Bones can also configure TravisCI to automatically deploy releases to PyPI. This is activated by setting the pypi_user option, which should be the username for the PyPI account that will be used to upload the releases. You will also need to go into the TravisCI settings for your repo and add a secure environment variable named PYPI_TOKEN containing that account’s password. Finally, you will need to make sure you are building the deploy.sh script by adding - template: deploy to your ci_scripts.
[20]:
nengobones_yml = """
ci_scripts:
  - template: test
  - template: deploy
travis_yml:
  jobs:
    - script: test
  pypi_user: test_user
"""
write_yml(nengobones_yml)
!generate-bones travis-yml
display_contents(".travis.yml", sections=["jobs"])
jobs:
  include:
    -
      env:
        SCRIPT="test"
    - stage: deploy
      if: branch =~ ^release-candidate-* OR tag =~ ^v[0-9]*
      env: SCRIPT="deploy"
      cache: false
      deploy:
        - provider: pypi
          server: https://test.pypi.org/legacy/
          user: test_user
          password: $PYPI_TOKEN
          distributions: "sdist "
          on:
            all_branches: true
            condition: $TRAVIS_BRANCH =~ ^release-candidate-*
            condition: $TRAVIS_TAG = ""
        - provider: pypi
          user: test_user
          password: $PYPI_TOKEN
          distributions: "sdist "
          on:
            all_branches: true
            condition: $TRAVIS_TAG =~ ^v[0-9]*
This will trigger an automatic PyPI release for any tagged commit. You can also do a test release by pushing to a branch named release-candidate-x.y.z (where x.y.z is the version number you want to test).
The only other setting related to the automatic deployment is deploy_dists, which sets the distributions that will be built and uploaded with each release. We can see above that the default is an sdist-only release, but we could, for example, add wheels by modifying deploy_dists:
[21]:
nengobones_yml = """
ci_scripts:
  - template: test
  - template: deploy
travis_yml:
  jobs:
    - script: test
  pypi_user: test_user
  deploy_dists:
    - sdist
    - bdist_wheel
"""
write_yml(nengobones_yml)
!generate-bones travis-yml
display_contents(".travis.yml", sections=["jobs"])
jobs:
  include:
    -
      env:
        SCRIPT="test"
    - stage: deploy
      if: branch =~ ^release-candidate-* OR tag =~ ^v[0-9]*
      env: SCRIPT="deploy"
      cache: false
      deploy:
        - provider: pypi
          server: https://test.pypi.org/legacy/
          user: test_user
          password: $PYPI_TOKEN
          distributions: "sdist bdist_wheel "
          on:
            all_branches: true
            condition: $TRAVIS_BRANCH =~ ^release-candidate-*
            condition: $TRAVIS_TAG = ""
        - provider: pypi
          user: test_user
          password: $PYPI_TOKEN
          distributions: "sdist bdist_wheel "
          on:
            all_branches: true
            condition: $TRAVIS_TAG =~ ^v[0-9]*
The next section of the .travis.yml file sets up the python environment:
[22]:
display_contents(".travis.yml", sections=["before_install"])
before_install:
  # export travis_terminate for use in scripts
  - export -f travis_terminate
        _travis_terminate_linux
        _travis_terminate_osx
        _travis_terminate_unix
        _travis_terminate_windows
  # install/run nengo-bones
  - pip install nengo-bones
  - generate-bones --output-dir .ci --template-dir .ci ci-scripts
  - check-bones
  # set up conda
  - export PATH="$HOME/miniconda/bin:$PATH"
  - if ! [[ -d $HOME/miniconda/envs/test ]]; then
      rm -rf $HOME/miniconda;
      wget -q http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh -O miniconda.sh;
      bash miniconda.sh -b -p $HOME/miniconda;
      conda update -q -y conda;
      conda create -q -y -n test python=$PYTHON_VERSION pip;
    else
      conda update -q -y conda;
    fi
  - conda config --set always_yes yes --set changeps1 no
  - source activate test
  # upgrade pip
  - pip install pip
  # display environment info
  - conda info -a
  - conda list -e
  - pip freeze
Note that we are calling generate-bones ... ci-scripts in this section in order to automatically generate all the scripts defined in the ci_scripts section of .nengobones.yml, as discussed above. We also call check-bones, which will give an error if any of the templated files are out of date. There are no configuration options for this section (although note that it is using the $PYTHON_VERSION variable we set in the global variables above).
Finally we call our templated script in each of the build steps, which will execute the behaviour we defined in those scripts.
[23]:
display_contents(".travis.yml", sections=[
    "install", "after_install", "before_script", "script"
])
print("...")
install:
  - .ci/$SCRIPT.sh install
  - conda list -e
  - pip freeze
after_install:
  - .ci/$SCRIPT.sh after_install
before_script:
  - .ci/$SCRIPT.sh before_script
script:
  - .ci/$SCRIPT.sh script
...
There are no configuration options for this section (instead this behaviour is controlled by configuring the CI scripts, as described in the ci_scripts section above).
Nengo Bones will automatically upload coverage reports to Codecov if the coverage: true option is set on the test script config. Codecov reads the .codecov.yml configuration file, which Nengo Bones also has a template for; it is controlled through the codecov_yml option.
[24]:
nengobones_yml = """
codecov_yml: {}
"""
write_yml(nengobones_yml)
!generate-bones codecov-yml
display_contents(".codecov.yml")
# Automatically generated by nengo-bones, do not edit this file directly
# Version: 0.1.0
codecov:
  ci:
    - "!ci.appveyor.com"
  notify:
    require_ci_to_pass: no
coverage:
  status:
    project:
      default:
        enabled: yes
        target: auto
    patch:
      default:
        enabled: yes
        target: 100%
    changes: no
The first thing to note is the !ci.appveyor.com line. This tells Codecov not to wait for a coverage report from AppVeyor (generally we assume that all the coverage is being checked on TravisCI). This can be disabled by setting skip_appveyor: false:
[25]:
nengobones_yml = """
codecov_yml:
  skip_appveyor: false
"""
write_yml(nengobones_yml)
!generate-bones codecov-yml
display_contents(".codecov.yml", sections=["codecov"])
codecov:
  notify:
    require_ci_to_pass: no
The other configuration options have to do with the target coverage at either the project or patch level. These determine whether the codecov PR status checks will be marked as passing or failing. The “project” coverage is the total lines of code covered by tests for the whole project. The default value is “auto”, meaning that the status check will pass if the total coverage is >= the coverage in the base branch of the PR. The “patch” coverage only looks at the lines of code modified in the PR. The default value is “100%”, meaning that every modified line of code needs to be covered by a test for the check to pass.
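As a quick worked example of the two checks (illustrative numbers only, not from any real project):

```python
# "project" coverage: covered lines / total lines, across the whole project
project_coverage = 100 * 840 / 1000  # 84.0%

# "patch" coverage: covered modified lines / modified lines in the PR
patch_coverage = 100 * 9 / 10  # 90.0%

# With the default patch target of 100%, this PR's patch check would fail
# (one modified line is untested); a patch target of 90% would pass.
print(project_coverage, patch_coverage)
```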
These can be modified by the abs_target and diff_target config options, respectively:
[26]:
nengobones_yml = """
codecov_yml:
  abs_target: 80%
  diff_target: 90%
"""
write_yml(nengobones_yml)
!generate-bones codecov-yml
display_contents(".codecov.yml", sections=["coverage"])
coverage:
  status:
    project:
      default:
        enabled: yes
        target: 80%
    patch:
      default:
        enabled: yes
        target: 90%
    changes: no