Workflow

This page describes the available parameters for defining workflows in Daisytuner. Workflows control how and when your benchmarks are executed on our cluster, daisyfield.

The on Section

This section specifies conditions that trigger the workflow. The syntax closely follows GitHub Actions workflows:

| Parameter | Values | Description |
| --- | --- | --- |
| push | branches with a list of branch names, or tags with a list of tags | Specifies workflow triggers upon pushes to specific branches or tags. Syntax is identical to GitHub Actions’ push events. |
| pull_request | types (e.g., opened, synchronize, ready_for_review) | Specifies workflow triggers for pull request events. Available event types match GitHub Actions pull request triggers. |

Example:

on:
  push:
    branches:
      - main
      - develop
  pull_request:
    types: [opened, reopened, synchronize]
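
Since the syntax follows GitHub Actions, push triggers can also match tags instead of branches. A minimal sketch, assuming GitHub Actions-style glob patterns for tag names:

on:
  push:
    tags:
      - v1.*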

The parameters Section

The parameters section defines job-specific execution settings:

| Parameter | Type | Description |
| --- | --- | --- |
| partitions | List of strings | Specifies which cluster partitions your benchmarks should run on. See the Partitions Reference for available options. |
| timeout (optional) | Integer | Maximum allowed execution time in minutes before the benchmark job is automatically terminated. The default value is 20 minutes. |

Example:

parameters:
  partitions:
    - bellis5
  timeout: 40
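
Because partitions is a list, a single workflow can target more than one partition. A sketch with a second, hypothetical partition name (see the Partitions Reference for actual options):

parameters:
  partitions:
    - bellis5
    - bellis6 # hypothetical partition name
  timeout: 40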

The steps Section

This section contains two subsections: build and run.

The build Subsection

Contains commands to set up the environment and compile your benchmarks. The subsection expects a multiline string with shell commands executed sequentially.

steps:
  build: |
    sudo apt-get update
    sudo apt-get install -y libblas-dev
    docc -O2 benchmark.c -o benchmark.out
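
The build subsection is plain shell, so other toolchains can presumably be used as well; note, however, that loop-level analysis (loops: true in the run subsection) requires compiling with DOCC. A sketch assuming gcc is available on the partition image:

steps:
  build: |
    sudo apt-get update
    sudo apt-get install -y build-essential libblas-dev
    gcc -O2 benchmark.c -o benchmark.out -lblas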

The run Subsection

Defines individual benchmarks, including commands and measurement configurations. Each benchmark entry supports the following parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| command | String | Command to execute the benchmark. |
| measurements | Integer | Number of repeated runs to ensure statistical reliability. |
| metrics | List of strings | Performance metrics collected during benchmark execution. Available metrics depend on the selected partition; see the Partitions Reference. |
| profiler (optional) | String | Profiling tool used to collect performance metrics. Currently supports perf, ncu, and pyspy. |
| threshold (optional) | Number | Absolute runtime threshold in seconds. Jobs exceeding this threshold are marked as failed. |
| loops (optional) | Boolean | Enables the collection of profiling data for loop nests, used to construct precise roofline models. See Automatic Loop-Level Analysis with DOCC. Only available for code compiled with DOCC. |
| kernels (optional) | Boolean | Enables the collection of profiling data for individual compute kernels, which can be visualized in an interactive flame graph. Only available on GPU partitions. |

Example:

steps:
  run:
    benchmark_1:
      command: ./benchmark.out --size 1000
      measurements: 5
      threshold: 0.05
      profiler: perf
      loops: true
      kernels: false
      metrics:
        - flop_dp
        - memory_volume
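
Putting the sections together, a complete workflow file might look as follows. This is a minimal end-to-end sketch assembled from the examples above; the partition name, build commands, and metrics are illustrative rather than prescriptive:

on:
  push:
    branches:
      - main

parameters:
  partitions:
    - bellis5
  timeout: 40

steps:
  build: |
    sudo apt-get update
    sudo apt-get install -y libblas-dev
    docc -O2 benchmark.c -o benchmark.out
  run:
    benchmark_1:
      command: ./benchmark.out --size 1000
      measurements: 5
      profiler: perf
      metrics:
        - flop_dp
        - memory_volume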