GitLab-CI

What is continuous integration?

Continuous integration (CI) is the practice of running an automated pipeline of scripts to build and test a project after every change.

This allows maintainers to identify bugs early in the development cycle, ensuring that all code that is pushed into the main development branch is compliant with the requirements of the project.

Continuous deployment

Continuous deployment (CD) takes this another step further by automating the process of deploying an application to production after every change.

If configured correctly (this is the default), CI pipelines will run for every merge request, meaning the modified code can be built and tested before changes are accepted into the repository.
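If you need finer control over when jobs run, GitLab supports rules on individual jobs. As a sketch (using GitLab's predefined CI/CD variables), a job can be restricted to merge request pipelines like this:

```yaml
test:
  script:
    - python hello_world.py
  rules:
    # only run this job in pipelines created for a merge request
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
```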

The fork-branch-merge-request workflow presented earlier can then be augmented to include CI at all stages:

GitLab workflow with CI

How to enable CI on GitLab

All GitLab CI/CD is configured with a YAML-format file called .gitlab-ci.yml in the root of the project repository.

https://docs.gitlab.com/ee/ci/yaml/README.html

Examples

Running a simple script

A basic example could just check that your script runs without failing. Consider the example from https://git.ligo.org/duncanmmacleod/gitlab-example/tree/ci:

Simple GitLab-CI configuration

image: python

test:
  script:
    - python hello_world.py

Here, a single job called test is configured to run python hello_world.py. If that command exits with code 0, the job passes, otherwise it fails.

image: python

We use image: python to declare to GitLab-CI that this job should run inside a python container.

The default is image: docker:latest.

An example of this pipeline running is here:

https://git.ligo.org/duncanmmacleod/gitlab-example/pipelines/83702

Building a (python) package

A more complicated example is available on the package branch of the same gitlab-example repository:

https://git.ligo.org/duncanmmacleod/gitlab-example/tree/package

Here the full YAML configuration is:

Building and testing a Python package

stages:
  - build
  - test

image: python

build:
  stage: build
  script:
    - python setup.py sdist --dist-dir .
  artifacts:
    paths:
      - "hello-world-*.tar.*"

.test: &test
  stage: test
  dependencies:
    - build
  script:
    - python -m pip install hello-world-*.tar.*
    - hello-world

test:3.9:
  <<: *test
  image: python:3.9

test:3.10:
  <<: *test
  image: python:3.10

Here we define a more complicated pipeline with two stages:

[Pipeline graph: the build job feeds into both the test:3.9 and test:3.10 jobs]

The build job uses artifacts to store its output (the built tarball) so that it can be used in later jobs.

This configuration also uses a YAML anchor (&test) to define a template for the test jobs, called .test, then merges it into the actual test jobs using the <<: *test alias.
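The same pattern can also be written with GitLab's extends keyword, which is generally preferred over YAML anchors in newer configurations. A sketch of the equivalent test jobs:

```yaml
# hidden job (leading '.') used as a template
.test:
  stage: test
  dependencies:
    - build
  script:
    - python -m pip install hello-world-*.tar.*
    - hello-world

# each real job inherits the template via 'extends'
test:3.9:
  extends: .test
  image: python:3.9

test:3.10:
  extends: .test
  image: python:3.10
```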

An example of this pipeline running is here:

https://git.ligo.org/duncanmmacleod/gitlab-example/pipelines/83703

Other examples

gwdatafind/gwdatafind (python package with automated tests): https://git.ligo.org/gwdatafind/gwdatafind/pipelines/61271

lscsoft/bayeswave (cmake build with binary packaging and tests): https://git.ligo.org/lscsoft/bayeswave/pipelines/82380

lscsoft/lalsuite (multi-package, multi-distribution package suite with scheduled nightly jobs): https://git.ligo.org/lscsoft/lalsuite/pipelines/81275

emfollow/gwcelery (python package with documentation, tests, and continuous deployment to multiple locations): https://git.ligo.org/emfollow/gwcelery/pipelines/82277

GitLab-CI Templates

The IGWN Computing and Software Working Group maintains a suite of GitLab-CI templates that can be used to simplify common operations across a number of projects. Please see

https://computing.docs.ligo.org/gitlab-ci-templates/

For example, we can simplify the above Building a (python) package example by using the .python:build template for the build job:

include:
  - project: computing/gitlab-ci-templates
    file: python.yml

stages:
  - build
  - test

build:
  extends:
    # https://computing.docs.ligo.org/gitlab-ci-templates/python/#.python:build
    - .python:build
  stage: build

test:
   ...

Here the entire job configuration is delegated to the template. This has the benefit of simplifying this YAML file, and also means that any improvements in build techniques applied by Computing and Software to the job templates are automatically picked up by any pipelines that include them.

You can see a full example of the gitlab-ci-templates used to configure an end-to-end CI pipeline in a newer version of the .gitlab-ci.yml for gwdatafind/gwdatafind (pipeline).

GitLab Pages

GitLab supports building static web content with CI and hosting it through GitLab with an extension called Pages.

An example of this can be seen for this webpage:

https://git.ligo.org/duncanmmacleod/gitlab-tutorial/

For more details, see

https://about.gitlab.com/product/pages/
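A minimal Pages configuration is just a CI job named pages that publishes a public/ directory as an artifact. A sketch (the html/ source directory here is hypothetical):

```yaml
pages:
  stage: deploy
  script:
    # copy the static site content into the public/ directory,
    # which is the directory GitLab Pages serves from
    - mkdir -p public
    - cp -r html/* public/
  artifacts:
    paths:
      - public
```

The job must be called pages, and the published content must be placed in a directory called public for GitLab to serve it.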

Advanced concepts

Multi-project pipelines

GitLab CI allows specifying triggers in one project that will launch a CI/CD pipeline in a downstream project. For example:

test:
  stage: test
  script: make check

downstream:
  stage: deploy
  trigger: my/otherproject

In this example, after the test job succeeds in the test stage, the downstream job starts. GitLab then creates a downstream pipeline in the my/otherproject project.

This is used by Computing and Software to rebuild downstream Docker containers whenever the base container is rebuilt.

See Multi-project pipelines in the official GitLab Docs for full details.

Runners and tags

In GitLab-CI a runner is the name given to the machine that actually executes the jobs in a CI/CD pipeline. Each pipeline can run jobs on multiple runners, and each runner can run jobs from multiple pipelines concurrently (depending on its configuration).

Linux (default)

By default on the IGWN GitLab instance, jobs will run on a Linux x86_64 runner that supports the docker executor. If you do not specify a tag, the build will be randomly assigned to any of the Linux runners. There are site-specific tags to ensure that your build runs either at UWM (using the uwm tag) or at Caltech (using the cit tag). By default each build has access to 4 CPU cores and 10 GB of RAM; if you require more memory than this, the highmem tag can be used, which gives your build access to 12 CPU cores and 25 GB of RAM. Further details of these runners are given in the table below:

| Site | Slots | Cores | RAM   | Architecture | CPU                    | Tag     |
|------|-------|-------|-------|--------------|------------------------|---------|
| UWM  | 15    | 4     | 10 GB | x86_64       | Intel Xeon Silver 4116 | uwm     |
| CIT  | 32    | 4     | 10 GB | x86_64       | AMD EPYC 7402          | cit     |
| CIT  | 4     | 12    | 25 GB | x86_64       | AMD EPYC 7402          | highmem |

For these runners you need to specify which Docker container to use for your build. If a container image is not specified (using the image keyword in your .gitlab-ci.yml file), the docker:latest image will be used; this is a very basic Linux image with Docker available that is only really suitable for building Docker containers.
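Runners are selected with the tags keyword on a job. For example, a sketch of a job requesting the high-memory Linux runners:

```yaml
build:
  image: python
  tags:
    # run this job on the high-memory (12-core, 25 GB) runners at CIT
    - highmem
  script:
    - python setup.py sdist
```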

macOS

There are a total of 6 macOS runners, using both x86_64 and ARM64 CPUs; details of these machines are given in the table below:

| Site | Slots | Cores | RAM   | Architecture | CPU             | Tag                   | Operating System        |
|------|-------|-------|-------|--------------|-----------------|-----------------------|-------------------------|
| CIT  | 2     | 4     | 8 GB  | x86_64       | Intel (Haswell) | macos_catalina_x86_64 | macOS 10.15 (Catalina)  |
| CIT  | 2     | 4     | 8 GB  | x86_64       | Intel (Haswell) | macos_bigsur_x86_64   | macOS 11 (Big Sur)      |
| CIT  | 2     | 8     | 16 GB | ARM64        | M1              | macos_bigsur_arm64    | macOS 11 (Big Sur)      |

macOS version-independent tags

If you do not have a preference for which version of macOS is used, you can also use the tags macos_x86_64 and macos_arm64 to match any runner with an x86_64 or ARM64 CPU respectively. For compatibility with the previous configuration, the tag macos can also be used to match any of the x86_64 macOS machines; use of this tag is deprecated and it will be removed at some point.
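For example, a sketch of a job targeting any Apple Silicon macOS runner (the job name and script here are hypothetical, reusing the hello_world.py example from earlier):

```yaml
test:macos:
  tags:
    # run on any ARM64 (Apple Silicon) macOS runner
    - macos_arm64
  script:
    - python3 hello_world.py
```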