
GitHub Actions

This integration automatically publishes performance metrics from your GitHub Actions CI/CD workflows, making it easier to track the performance of your software over time. Thresholds can be used to fail builds when performance metrics violate your defined limits.

Setup

Sign up for a Latency Lingo account.

Locate your API key in account settings and add it to your GitHub repository as a secret named LATENCY_LINGO_API_KEY. Next, add the GitHub Action as a step in your workflow.
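
If you prefer the command line, the secret can also be added with the GitHub CLI instead of the web UI. This is a sketch that assumes gh is installed and authenticated for the repository:

gh secret set LATENCY_LINGO_API_KEY --body "<your-api-key>"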

Example Workflow

on: [push]
jobs:
  performance-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3.2.0
      - name: Latency Lingo
        uses: latency-lingo/github-action@v0.0.2
        with:
          api-key: ${{ secrets.LATENCY_LINGO_API_KEY }}
          file: jmeter-results.jtl
          label: Github Actions Test
          format: jmeter

The input arguments mimic the flags provided to the publish CLI command.
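
As a rough point of reference, the action step above is analogous to invoking the publish command directly. The sketch below assumes the CLI flag names mirror the action inputs shown above:

latency-lingo publish \
  --api-key "$LATENCY_LINGO_API_KEY" \
  --file jmeter-results.jtl \
  --label "Github Actions Test" \
  --format jmeter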

The value of file should reference a file containing your test results; this depends on your test runner and your strategy for running tests in CI/CD. Reach out to support if you need help wiring this up.
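
For example, if you run JMeter on the runner itself, a preceding step could produce jmeter-results.jtl in non-GUI mode. This is only a sketch: it assumes JMeter is installed on the runner and that a test plan named test-plan.jmx is checked into the repository:

      - name: Run JMeter test plan
        run: jmeter -n -t test-plan.jmx -l jmeter-results.jtl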

With this workflow, every time you push code to your repository, the performance test metrics will be automatically published to your Latency Lingo account. You can view the metrics by logging in to your account and navigating to the dashboard.

The action also evaluates any thresholds for the test scenario referenced. If any of the thresholds are violated, the action will fail the build.

Badge

Latency Lingo also supports a GitHub badge that previews results from the latest test run in a scenario.

(GitHub badge example)

To add the badge to your repository, add the following markdown to your README. Be sure to replace <test_scenario_id> with your test scenario ID, which appears in the output of the publish CLI command; if you can't find it, reach out to support.

[![Latency Lingo Badge](https://img.shields.io/endpoint?url=https://latency-lingo.web.app/v2/test.getLastRunResultForGithub?scenarioId=<test_scenario_id>)](https://latencylingo.com/test-scenarios/<test_scenario_id>/latest-run)

The value of the badge changes depending on whether the scenario has thresholds configured. With thresholds configured, it shows how many thresholds passed for the latest run. Without thresholds, it shows the latest run's overall P90 response time.

With these default settings, the badge links to the latest test run for the scenario. See the shields.io documentation for options to change the badge style.
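
For example, appending a style query parameter to the shields.io URL changes the rendering (flat-square shown here):

[![Latency Lingo Badge](https://img.shields.io/endpoint?url=https://latency-lingo.web.app/v2/test.getLastRunResultForGithub?scenarioId=<test_scenario_id>&style=flat-square)](https://latencylingo.com/test-scenarios/<test_scenario_id>/latest-run)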

Troubleshooting

If you have any issues with the integration, you can check the logs in the "Actions" tab of your repository. If you are still experiencing problems, you can contact support for assistance.
