Performance test reports depend on the metrics you publish using the CLI. Internally, the CLI parses the contents of your input file into rollup metrics and then sends them to our Metric APIs.
Rollup metrics are a way to summarize data points over a given time period. Metrics are rolled up in time intervals of 5 seconds, 30 seconds, 1 minute, 5 minutes, and 30 minutes. These windows let us capture insights from the granular 5-second rollups while still rendering charts quickly with the larger rollups.
The downside of rollups is that the individual data points from the original input file are not available for analytics. This is a known limitation that we keep for cost reasons, though it may be addressed in the future.
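To make the rollup process concrete, here is a minimal sketch of bucketing raw data points into 5-second windows. The data shape and field names are assumptions for illustration, not the CLI's actual internals.

```python
from collections import defaultdict

# Illustrative sketch: bucket raw data points into 5-second rollup windows.
# Each point is (timestamp_seconds, response_time_ms); names are hypothetical.
def rollup(points, window=5):
    buckets = defaultdict(list)
    for ts, rt in points:
        # Align each timestamp to the start of its window.
        buckets[ts - (ts % window)].append(rt)
    # Summarize each window: request count and average response time.
    return {
        start: {"count": len(rts), "avg": sum(rts) / len(rts)}
        for start, rts in buckets.items()
    }

# Three data points fall into two windows: [0, 5) and [5, 10).
summary = rollup([(0, 120), (2, 80), (7, 95)])
```

Once a window is summarized this way, only the summary is retained, which is why the original data points are no longer available.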
We collect the following metrics from the input file:
- Request count: The number of requests sent.
- Failure count: The number of requests that failed.
- Virtual users: The number of active virtual users.
- Response times: The response times for requests, stored as the following measurements: avg, min, max, p50, p75, p90, p95, and p99. Percentiles are calculated from the data points within each rollup window.
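As a sketch of how a percentile can be computed within a single rollup window, the example below uses the nearest-rank method; the interpolation the CLI actually uses may differ.

```python
import math

# Hypothetical sketch: nearest-rank percentile over the response times
# collected within one rollup window.
def percentile(samples, p):
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

window = [120, 80, 95, 200, 150]  # response times (ms) in one window
p50 = percentile(window, 50)  # 120
p95 = percentile(window, 95)  # 200
```

Because percentiles are computed per window, they reflect only the data points that arrived inside that window, which is what makes them accurate at the 5-second granularity.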
## Metrics by Operation
For improved insights, we also store all of the above metrics grouped by operation. Response-time percentiles are calculated accurately within the rollup window for each operation.
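The per-operation grouping can be sketched as follows. The record fields ("operation", "success", "rt") are assumptions for this example, not the CLI's actual schema.

```python
from collections import defaultdict

# Illustrative sketch: accumulate the same rollup metrics per operation.
requests = [
    {"operation": "GET /users", "success": True, "rt": 120},
    {"operation": "GET /users", "success": False, "rt": 300},
    {"operation": "POST /orders", "success": True, "rt": 95},
]

by_op = defaultdict(lambda: {"requests": 0, "failures": 0, "rts": []})
for r in requests:
    m = by_op[r["operation"]]
    m["requests"] += 1          # request count
    m["failures"] += not r["success"]  # failure count
    m["rts"].append(r["rt"])    # response times for percentile calculation
```

Each operation's response-time list is then rolled up with the same percentile calculation used for the overall metrics.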