Latency injection adds a configurable delay to a mock response before GoDizzy returns it to your agent. Instead of responding immediately, GoDizzy waits for a random duration within a range you define — giving you a realistic, variable delay that exercises your agent’s timing-dependent logic.

Why inject latency

Real tool APIs are not always fast. Testing your agent only against instant responses misses an entire class of production failures:
  • Retry logic — Does your agent retry on a slow response, and does it back off correctly?
  • Timeout handling — If a tool takes 10 seconds, does your agent time out gracefully or hang?
  • Cost of slow tools — Slow tool calls can cascade into longer end-to-end agent runs. Measuring this in a controlled environment lets you tune budgets before it affects real users.
  • Graceful degradation — Can your agent fall back to a cached result or a simpler strategy when a tool is sluggish?
With latency injection you reproduce all of these scenarios deterministically, in your dev or staging environment, without rate-limiting or abusing a real API.

How it works

Each mock response has two latency fields:
  • Min latency (ms) — The minimum delay, in milliseconds.
  • Max latency (ms) — The maximum delay, in milliseconds.
When GoDizzy serves the mock response, it picks a random integer within [min, max] and waits that many milliseconds before sending the response.
The delay is randomized on every request. If you set min to 200 and max to 500, one request might wait 214 ms and the next might wait 487 ms. This variability is intentional — it more accurately reflects real-world network and service jitter than a fixed delay.
Set both fields to 0 to disable latency and return the mock response immediately.
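The selection behavior described above can be sketched in a few lines of Python. This is illustrative only: GoDizzy's internals are not published, and `inject_latency` is a hypothetical function name, not part of any GoDizzy API.

```python
import random
import time

def inject_latency(min_ms: int, max_ms: int) -> int:
    """Pick a random delay in [min_ms, max_ms] and sleep for it.

    Mirrors the documented behavior: an inclusive integer range,
    re-randomized on every call, with 0/0 meaning "no delay".
    """
    if not (0 <= min_ms <= max_ms):
        raise ValueError("require 0 <= min <= max")
    delay_ms = random.randint(min_ms, max_ms)  # inclusive on both ends
    time.sleep(delay_ms / 1000.0)
    return delay_ms
```

Note that `random.randint` is inclusive on both ends, matching the [min, max] range described above, and that setting both bounds to 0 sleeps for zero milliseconds.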

Configure latency on a mock rule

1. Open the mock rule editor

Navigate to your route collection, find the rule you want to add latency to, and open the rule editor. The rule must have its action set to Mock — latency only applies to mock rules.
2. Set the min and max latency

Enter values in the Min latency (ms) and Max latency (ms) fields. Both values must be non-negative integers, and min must be less than or equal to max.
3. Save

Click Save. GoDizzy creates a new version of the mock response with the updated latency range.
Latency injection only applies to mock action rules. Proxy rules forward the request to your target endpoint directly — GoDizzy does not add artificial delay to proxied traffic.

Examples

Simulate normal but variable API response time

Use a moderate range to exercise retry policies that should only trigger on true timeouts, not normal slowness:
  • Min latency: 200 ms
  • Max latency: 500 ms
Your agent sees a realistic 200–500 ms response time. A retry policy whose trigger threshold falls inside this range will fire on these ordinary responses, revealing that the threshold is set too aggressively.
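A quick agent-side check for the misconfiguration described above is to assert that the retry threshold sits above the mock's latency range. This is a sketch; `RETRY_THRESHOLD_MS` and `should_retry` are hypothetical names for your agent's own retry setting and predicate.

```python
# Hypothetical agent setting: retry any tool call slower than this.
RETRY_THRESHOLD_MS = 1000

# The latency range configured on the mock rule above.
MOCK_MIN_MS, MOCK_MAX_MS = 200, 500

def should_retry(elapsed_ms: float) -> bool:
    """Retry only when a call is slower than the configured threshold."""
    return elapsed_ms > RETRY_THRESHOLD_MS

# Ordinary slowness inside the mock range must not trigger a retry.
for elapsed in (MOCK_MIN_MS, 350, MOCK_MAX_MS):
    assert not should_retry(elapsed), f"retry fired on normal latency {elapsed} ms"
```

If any of these assertions fail against your real retry predicate, the threshold is tuned for instant responses rather than realistic ones.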

Simulate a near-timeout scenario

Push the delay close to your agent’s configured timeout to see how it behaves at the boundary:
  • Min latency: 9000 ms
  • Max latency: 10000 ms
If your agent has a 10-second timeout, some requests will succeed just under the wire and others will time out. This reveals race conditions in your timeout-handling code.
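A small simulation shows why this range splits traffic into successes and timeouts. The `NETWORK_OVERHEAD_MS` figure is an assumption standing in for transport cost; the point is that outcomes land on both sides of the boundary.

```python
import random

random.seed(7)
AGENT_TIMEOUT_MS = 10_000                  # hypothetical agent-side timeout
MOCK_MIN_MS, MOCK_MAX_MS = 9_000, 10_000   # the range configured above
NETWORK_OVERHEAD_MS = 50                   # assumed transport cost per call

results = []
for _ in range(1_000):
    total = random.randint(MOCK_MIN_MS, MOCK_MAX_MS) + NETWORK_OVERHEAD_MS
    results.append("timeout" if total > AGENT_TIMEOUT_MS else "ok")

# Both outcomes occur, which is exactly what exposes boundary races.
print(results.count("ok"), "succeeded;", results.count("timeout"), "timed out")
```

Because the mix of outcomes is unpredictable per request, this setup surfaces code paths that only run when a timeout interrupts work already in flight.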

Simulate a fully timed-out tool

Set the delay beyond your agent’s timeout threshold to guarantee every request times out:
  • Min latency: 15000 ms
  • Max latency: 15000 ms
Setting min and max to the same value produces a fixed delay. Use this when you need every request to fail in exactly the same way, such as in a regression test for graceful degradation.
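A regression test for graceful degradation under this configuration might look like the sketch below. `call_tool` and `CACHE` are hypothetical stand-ins for your agent's tool client and fallback cache; with min = max = 15000 ms on the mock, every call exceeds the agent's timeout, modeled here as an unconditional `TimeoutError`.

```python
# Hypothetical cached fallback result for the "weather" tool.
CACHE = {"weather": {"temp_c": 21, "stale": True}}

def call_tool(name: str) -> dict:
    # Stand-in for the mocked tool: the fixed 15 s delay guarantees
    # the agent's timeout fires on every request.
    raise TimeoutError(f"{name} exceeded timeout")

def call_with_fallback(name: str) -> dict:
    """Degrade to the cached result when the tool times out."""
    try:
        return call_tool(name)
    except TimeoutError:
        return CACHE[name]

result = call_with_fallback("weather")
assert result["stale"], "expected the cached fallback"
```

Because the mock fails identically every time, this test never flakes, which is what makes the fixed delay suitable for regression suites.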

Simulate a rate-limited response with realistic backoff delay

Combine latency with a 429 status code to test both the error handling and the timing of your agent’s retry-after logic:
  • Status code: 429
  • Min latency: 800 ms
  • Max latency: 2000 ms
  • Body: {"error": "rate_limited", "retry_after": {{$randomInt(5,30)}}}
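On the agent side, the logic this mock exercises is parsing the 429 body and honoring its `retry_after` hint. A minimal sketch, assuming the JSON body shape shown above (the sample fixes the templated `retry_after` at 12 for illustration):

```python
import json

# Example body as the template above might render it.
body = '{"error": "rate_limited", "retry_after": 12}'

def backoff_seconds(status: int, body_text: str, default: float = 1.0) -> float:
    """Honor the server-suggested retry_after on a 429; otherwise use a default."""
    if status == 429:
        payload = json.loads(body_text)
        return float(payload.get("retry_after", default))
    return default

assert backoff_seconds(429, body) == 12.0
```

Combined with the 800–2000 ms injected latency, this verifies both that the agent reads the hint correctly and that its wall-clock wait includes the slow response itself.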

Use cases by scenario

Tune retry policies

Inject a delay that is longer than your agent’s retry threshold. Confirm retries fire when expected and not on fast responses.

Test timeout handling

Set min/max near or beyond the agent’s configured timeout. Verify your agent does not hang and returns a sensible error.

Measure cascading cost

Run a realistic latency range across all tool rules and benchmark end-to-end agent run time. Use the data to set per-tool latency budgets.
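A benchmarking harness for this can be very small. The sketch below uses a deliberately short latency range so it runs quickly; `mock_tool_call` is a hypothetical stand-in for one mocked tool invocation.

```python
import random
import time

random.seed(0)

def mock_tool_call(min_ms: int, max_ms: int) -> None:
    """Stand-in for one mocked tool call with injected latency."""
    time.sleep(random.randint(min_ms, max_ms) / 1000.0)

# Benchmark an agent run that makes five sequential tool calls.
start = time.perf_counter()
for _ in range(5):
    mock_tool_call(20, 50)   # small range so the sketch runs quickly
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"end-to-end run: {elapsed_ms:.0f} ms across 5 tool calls")
```

Scaling the range up to production-like values (for example 200–500 ms) and repeating the run gives you the data to set per-tool latency budgets.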

Reproduce flaky tool behavior

A wide min/max range (for example, 50–3000 ms) mimics the jitter of an unstable third-party API without relying on real infrastructure.