Flurry Documentation

Core Concepts

Introduction

Getting started with Flurry begins with creating or joining an organisation. Organisations are collaborative spaces where users can work together on projects. Admins can invite new members and manage roles. Within each organisation, projects are created to group related work. Each project can contain multiple collections, which serve as containers for your testing logic and scenarios.

Collections

Collections are at the core of Flurry. When using the CLI, you initiate a collection run. This collection contains your testing building blocks: tasks. Tasks are units of action, such as making an API call or listening for a callback. They come in two flavors: dispatchers, which handle outgoing actions like sending requests, and listeners, which handle incoming data such as webhooks or event stream messages.

Flows

Tasks are combined into flows. Flows represent different testing scenarios. Only flows that are toggled to be active are executed when you run a collection, which means you can work on a flow and only enable it when it is ready. Tasks can be run in groups or individually. Grouped tasks are executed in two concurrent phases. All dispatchers are executed first (in parallel), while listeners are executed after all dispatchers have completed (also in parallel).

Environments

Environments allow you to configure different testing contexts. Each environment contains variables that can be referenced in your tasks using the {{variableName}} syntax. This enables you to test the same collection against different API endpoints, authentication credentials, or configurations without duplicating your test logic. When running via the CLI, Flurry also reads your system process environment variables (for example, values exported in your shell) in addition to the environments defined in the collection. If a variable name matches one defined in the collection environment, the collection variable takes precedence. A Local environment is available in the UI, but is not available in the CLI. The values in the Local environment are only stored in the browser. This is useful for initial HTTP request testing outside the CLI, without needing to save sensitive data to the backend.
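The precedence rule above can be sketched as a small resolver. This is illustrative only, not Flurry's actual implementation: `resolveTemplate` and the two variable maps are hypothetical names used to show the lookup order.

```javascript
// Illustrative sketch of {{variableName}} resolution (not Flurry's internal
// code). Collection environment variables take precedence over process
// environment variables, matching the behavior described above.
function resolveTemplate(template, collectionVars, processVars) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) => {
    if (name in collectionVars) return collectionVars[name]; // collection wins
    if (name in processVars) return processVars[name];       // fallback
    return match; // unresolved placeholders are left as-is
  });
}

// Example: BASE_URL comes from the collection, API_TOKEN from the shell.
const url = resolveTemplate(
  "{{BASE_URL}}/users?token={{API_TOKEN}}",
  { BASE_URL: "https://staging.example.com" },  // collection environment
  { API_TOKEN: "abc123", BASE_URL: "ignored" }  // process environment
);
// url === "https://staging.example.com/users?token=abc123"
```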

Assertions

Relevant task types can have assertions that validate the data they produce. Simple checks can be configured quickly in the UI: define the path to the field, the assertion type, the data type, and the expected value. You can create these simple comparisons or write complex custom JavaScript validation scripts.

Task Types

Dispatchers

Dispatchers are tasks that initiate outgoing actions, such as making HTTP requests or publishing messages to queues.

HTTP Requests

Make API calls to your endpoints.

  • Configuration:

    Set HTTP method, request URL, headers, body content, and query parameters. Define optional timeout and request delay.
  • Polling:

    Retry requests with configurable intervals and stop conditions.
  • Assertions:

    Validate response status, headers, and body content.
  • Pre-request:

    Run scripts before the request is sent. Pre-request scripts can set environment variables. Scripts can also be named for easier identification.
  • Post-request:

    Run scripts after the request completes (and after assertions). Post-request scripts receive the full response root and can set environment variables for subsequent tasks. Scripts can also be named for easier identification.

Listeners

Listeners are tasks that handle async incoming data from external sources like webhooks and event streams.

Callbacks

Listen for HTTP callbacks or webhooks sent from your application.

  • Matching Rules:

    Define rules to match against incoming data. These are configured using the familiar structure of assertions.
  • Assertions:

    Validate payload structure and content.
  • Timeouts:

    Configure how long to wait for expected callbacks.
  • Post-callback:

    Process values from the matched payload as variables to use in subsequent tasks.

In addition to being a task type, callbacks are the main ingestion process for HTTP messages (webhooks, notifications, third-party events) that need to be correlated back to a Flurry collection run. All callbacks in a collection share the same callback endpoint on the Flurry API. This endpoint requires two query parameters to identify the target collection and environment.

Callback API contract

  • POST https://api.flurrytest.com/api/v1/callback?collectionId=<COLLECTION_ID>&environment=<ENVIRONMENT_NAME>: ingest a single callback message for a specific collection + environment.
  • Body: any payload is accepted and stored as a raw string. Callbacks are configured with a body content type, and the body is parsed accordingly. Support for content types other than JSON is coming soon.
  • Response: JSON of the form { success: true, events: CallbackEvent[], total: number, collectionId, environment }.
  • Retention: callback events are automatically cleaned up after ~5 minutes if they are not processed by a collection run.

Auth model

  • Requests made to the callback endpoint must include an Authorization: Bearer <JWT> header.
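Putting the contract and auth model together, a caller builds a POST request against the shared callback endpoint. The sketch below only assembles the URL and request options; `buildCallbackRequest` is an illustrative helper (not part of any Flurry SDK), and the payload and token values are placeholders.

```javascript
// Sketch of posting a callback message to Flurry from your application.
// Endpoint, query parameters, and Authorization header follow the contract
// above; collectionId, environment, token, and payload are placeholders.
function buildCallbackRequest(collectionId, environment, token, payload) {
  const url =
    "https://api.flurrytest.com/api/v1/callback" +
    `?collectionId=${encodeURIComponent(collectionId)}` +
    `&environment=${encodeURIComponent(environment)}`;
  return {
    url,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`, // JWT API key with Callbacks scope
      },
      body: JSON.stringify(payload),
    },
  };
}

// Usage (in your application, once a real token is configured):
// const { url, options } = buildCallbackRequest("YOUR_COLLECTION_ID", "preproduction", token, { orderId: 42 });
// await fetch(url, options);
```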

Generate an API key

  • Open a collection in the UI and go to API Keys.
  • Click Generate New API Key and select Callbacks.
  • Copy the token from the one-time display dialog and configure your application to include the header Authorization: Bearer <TOKEN>.
  • The callback URL (including the required collectionId and an environment placeholder) is shown in the collection Overview section.

AWS Kinesis

Connect to a Kinesis data stream.

  • Stream Configuration:

    Define the stream name and region.
  • Body Format & Encoding:

    Configure the expected format and encoding for event data.
  • Matching:

    Set rules to match incoming record data. This uses the same structure as assertions.
  • Assertions:

    Validate the content of incoming data.
  • Post-event:

    Set payload values as variables that can be used in follow up tasks.
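A post-event script follows the same shape as the other scripts documented in the Scripts section. This sketch assumes the matched record payload carries an `orderId` field; the field and variable names are purely illustrative, and the function is named here only so it can be referenced.

```javascript
// Illustrative Kinesis post-event script. context.data is the matched
// record payload (parsed JSON); the orderId field is a made-up example.
function postEvent({ data, variables }) {
  return {
    setVariables: {
      lastOrderId: data?.orderId,                    // promote a payload field
      eventCount: (variables["eventCount"] || 0) + 1 // running counter
    },
  };
}
```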

Assertions

Overview

Assertions are validation rules that test whether your API responses, webhook payloads, or event data meet your expectations. They allow you to verify data values, check response structure, validate business logic, and ensure your integrations work correctly. Tasks can have multiple assertions to validate different aspects of the received data.

Data Sources

Body

The response body or payload data. Most commonly used for validating API response content. Default source for data from data streams.

{"userId": 123, "name": "John", "orders": [...] }

Headers

HTTP headers or event metadata. Useful for validating content types, authentication headers, or custom metadata.

content-type: application/json
x-api-version: v2

Response

HTTP response metadata like status codes. Only available for HTTP request tasks.

status: 200
ok: true
statusText: "OK"

Path Navigation

Dot notation

Navigate nested objects using dot notation:

user.profile.email

Accesses: {"user": {"profile": {"email": "..."}}}

Array access

Access array elements with bracket notation:

users[0].name
orders[1].total

Gets the first user's name or second order's total

Root-level arrays

For responses that are arrays at the root:

[0].userId
[2].status

Access items directly from root array: [{"userId": 1}, ...]

Empty path

Leave path empty to work with the entire response:

Path: (empty)
Type: length

Validates the length of the entire response array
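The four path styles above can be sketched with a single resolver. This is an illustrative model, not Flurry's internal path engine; `resolvePath` is a hypothetical name.

```javascript
// Sketch of assertion-path resolution (illustrative). Supports dot
// notation, [n] bracket access, root-level arrays, and an empty path
// meaning "the entire response".
function resolvePath(value, path) {
  if (!path) return value; // empty path: whole response
  const segments = path
    .replace(/\[(\d+)\]/g, ".$1") // users[0].name -> users.0.name
    .split(".")
    .filter((s) => s !== "");     // handles a leading "[0]" on root arrays
  return segments.reduce((acc, key) => acc?.[key], value);
}

// resolvePath({ user: { profile: { email: "a@b.c" } } }, "user.profile.email")
// resolvePath([{ userId: 1 }, { userId: 2 }], "[1].userId")
```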

Assertion Types

Comparison Operators

  • ==: Equal (exact match)
  • !=: Not equal (value differs)
  • >: Greater than
  • <: Less than
  • >=: Greater than or equal
  • <=: Less than or equal

Special Types

  • Length: Validates array or string length
  • Exists: Checks if the field is present
  • Script: Custom JavaScript validation
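The comparison operators and special types can be modeled as a single evaluator. This is a sketch for intuition, not Flurry's implementation; `actual` stands for the value found at the configured path.

```javascript
// Illustrative evaluator for the assertion types above (not Flurry's
// internal code). `actual` is the value resolved from the configured path.
function evaluateAssertion(actual, type, expected) {
  switch (type) {
    case "==": return actual === expected;
    case "!=": return actual !== expected;
    case ">":  return actual > expected;
    case "<":  return actual < expected;
    case ">=": return actual >= expected;
    case "<=": return actual <= expected;
    case "length": return (actual?.length ?? -1) === expected; // arrays/strings
    case "exists": return actual !== undefined && actual !== null;
    default: throw new Error(`Unknown assertion type: ${type}`);
  }
}

// evaluateAssertion(200, "==", 200)         -> true
// evaluateAssertion([1, 2, 3], "length", 3) -> true
```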

Scripts (Custom JavaScript)

Scripts are used in multiple places in Flurry: pre-request scripts, post-request scripts, and assertion scripts. Listeners can also run post-processing scripts (for example post-callback and post-event scripts). Scripts can set environment variables for use in later tasks. Assertion scripts can also return a boolean indicating pass/fail.

Execution context

Scripts are functions that receive a single context parameter:

function(context) {
  const data = context.data;
  const variables = context.variables;

  // ...your logic...

  // For pre/post request scripts: return { setVariables: { ... } }
  // For assertion scripts: return boolean OR { passed, setVariables? }
}

Setting variables

Scripts set variables by returning a setVariables object. Assertion scripts can also set variables when they return the object. Object keys are the variable names, and values are the variable values. Existing variables with the same name are overwritten.

return {
  setVariables: {
    timestamp: Date.now()
  }
};

Where scripts run (and what context.data is)

  • HTTP pre-request scripts:
    context.data is undefined.
  • HTTP post-request scripts (runs after assertions):
    context.data is the full response object.
  • HTTP assertion scripts:
    context.data is also the full response object. Assertion scripts do not use the Source/Path settings in the UI; a script must access data.body explicitly to read the response body.
  • Callback post-callback scripts:
    context.data is the matched callback payload (parsed JSON or raw text, depending on the callback body format).
  • Kinesis post-event scripts:
    context.data is the matched record payload (parsed JSON).
  • SQS pre-request scripts:
    context.data is undefined.

Example: HTTP post-request script

function({ data, variables }) {
  const userId = data?.body?.userId;

  return {
    setVariables: {
      currentUserId: userId,
      lastStatus: data?.response?.status
    }
  };
}

Example: HTTP pre-request script

function({ variables }) {
  const counter = variables["COUNTER_VALUE"] || 0;

  return {
    setVariables: {
      COUNTER_VALUE: counter + 1
    }
  };
}

Example: HTTP script assertion

function({ data }) {
  const ok = data?.response?.status === 200;
  const hasUserId = !!data?.body?.userId;

  return {
    passed: ok && hasUserId
  };
}

Flows

Flow Orchestration

Flows organize your tasks into coherent testing scenarios.

Execution Modes

Sequential Mode

Tasks run one after another in order. Each task waits for the previous one to complete before starting.

Grouped Mode

Tasks are executed in two phases: all dispatchers run in parallel first, then all listeners run in parallel.
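The two-phase model can be sketched with promises: start every dispatcher concurrently, wait for all of them, then start every listener concurrently. This is an illustrative model of the scheduling rule, not Flurry's runner; the task shape (`kind`, `run`) is made up for the example.

```javascript
// Sketch of grouped execution: all dispatchers run in parallel first, and
// listeners only start once every dispatcher has completed.
async function runGroup(tasks) {
  const dispatchers = tasks.filter((t) => t.kind === "dispatcher");
  const listeners = tasks.filter((t) => t.kind === "listener");

  await Promise.all(dispatchers.map((t) => t.run())); // phase 1: in parallel
  await Promise.all(listeners.map((t) => t.run()));   // phase 2: in parallel
}
```

Sequential mode would instead `await t.run()` for each task in order, one at a time.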

Flow Controls

  • Active/Inactive: Toggle flows on or off to control which scenarios are executed during a collection run.

Running a Collection

CLI Usage

Generating an API key

  • Open a collection in the UI and go to API Keys.
  • Click Generate New API Key and select CLI.
  • Copy the token from the one-time display dialog and configure your environment to use it. You can either pass it via the --api-key command line option or set the environment variable FLURRYTEST_API_KEY.

Basic Command

npx flurry-cli@latest --collection-id=YOUR_COLLECTION_ID --api-key=YOUR_API_KEY --environment=preproduction

Command Line Options

  • --collection-id - The collection ID to run (required). Find this in your collection's settings.
  • --api-key - Your API key for authentication (required). Alternatively, set FLURRYTEST_API_KEY in your process environment.
  • --environment - The environment to use (required)
  • --run-flow-id - Run a single flow by ID (optional)
  • --verbose - Set to true for detailed output (optional)
  • --fail-fast - Set to true to exit the test run as soon as an assertion fails (optional)

GitHub Actions Integration

You can integrate Flurry collection runs into your GitHub Actions CI/CD pipelines. If you have not yet configured the aws-actions/configure-aws-credentials step, start by setting up the IAM role and OIDC trust relationship as described in this AWS blog post. Below is a simple workflow example that runs a Flurry test collection on pushes to a specific branch. Ensure that the role has the necessary permissions to access the resources defined in your Flurry tasks.

name: Flurry Test

on:
  push:
    branches:
      - <YOUR_BRANCH>

permissions:
  contents: read
  id-token: write

jobs:
  flurry-test:
    runs-on: ubuntu-latest
    environment: <YOUR_GITHUB_ENVIRONMENT>

    steps:
      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@main
        with:
          role-to-assume: ${{ vars.MY_ROLE }}
          aws-region: ${{ vars.AWS_REGION }}

      - name: Run Flurry collection
        env:
          FLURRYTEST_API_KEY: ${{ secrets.FLURRY_API_KEY }}
          FLURRY_COLLECTION_ID: ${{ vars.FLURRY_COLLECTION_ID }}
        run: |
          npx flurry-cli@latest \
            --collection-id=${FLURRY_COLLECTION_ID} \
            --environment=preproduction