
Tune network settings

This walkthrough illustrates the steps a benchmark designer must follow to configure the latency of the data sources of a Benchmark specification.


In this walkthrough we assume that you have already prepared the following:

  • A Benchmark specification whose network parameters you want to change.

We have already prepared several benchmarks to use. If you want to create your own benchmark specification, check out this guide.

Step 1 - Inject latency for each source endpoint

KOBE allows simulating network traffic for all sources of the benchmark. For every source dataset of the benchmark, you can:

  • inject delay in the connection between the given source endpoint and the federation engine.
  • inject delay in the connection between the given source endpoint and another source endpoint.

The reason for injecting delays between the federated sources is that every SPARQL endpoint can issue a SPARQL query to any other endpoint using the SERVICE keyword.
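For illustration, the following query delegates part of its evaluation to a second endpoint with SERVICE; the endpoint URL below is only a placeholder:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?s ?label WHERE {
  ?s a ?type .                        # evaluated at the local endpoint
  SERVICE <http://ds2:8890/sparql> {  # evaluated at the remote endpoint
    ?s rdfs:label ?label .
  }
}
```

A delay injected between the two sources would thus affect the traffic generated by the SERVICE clause.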

The latency of each source can be configured using the following delay parameters. The functionality of these parameters is offered by Istio. Check this link for more information.

  • The fixedDelaySec and fixedDelayMSec fields indicate the amount of delay in seconds and in milliseconds, respectively.
  • The percentage field can be used to only delay a certain percentage of requests.
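Under the hood, these parameters correspond to Istio's HTTP fault injection. As a rough sketch (the exact VirtualService that KOBE generates may differ, and the host name ds1 is only a placeholder), a delay of 1 second on 100% of requests maps to an Istio rule like:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: ds1
spec:
  hosts:
    - ds1
  http:
    - fault:
        delay:
          fixedDelay: 1s   # from fixedDelaySec: 1
          percentage:
            value: 100     # from percentage: 100
      route:
        - destination:
            host: ds1
```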

Your benchmark specification can be extended to define the latency of the sources as follows:

# In this example we will use two datasets, ds1 and ds2.
    - name: ds1
      # adds 1 second of delay before forwarding all responses to the federator
      fixedDelaySec: 1
      percentage: 100
      # adds 2 sec of delay before forwarding 50% of the responses to the source ds2
      - datasetSource: ds2
        fixedDelaySec: 2
        percentage: 50
      # ... add remaining parameters for ds1

    - name: ds2
      # ... add remaining parameters for ds2

Check the following link, in which we illustrate a simple working example with delays:

This benchmark contains three SPARQL queries and two datasets (namely toy1 and toy2). All responses from toy1 to the federator are delayed by 2 seconds and 150 milliseconds, all responses from toy2 to the federator are delayed by 2 seconds, and 50% of the responses from toy1 to toy2 are delayed by 3 seconds.
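Using the delay parameters described above, the relevant fragment of this benchmark's specification would look roughly as follows (a sketch; see the actual specification for the complete dataset entries):

```yaml
    - name: toy1
      # all responses to the federator delayed by 2 sec and 150 msec
      fixedDelaySec: 2
      fixedDelayMSec: 150
      percentage: 100
      # 50% of the responses from toy1 to toy2 delayed by 3 sec
      - datasetSource: toy2
        fixedDelaySec: 3
        percentage: 50
      # ... remaining parameters for toy1

    - name: toy2
      # all responses to the federator delayed by 2 sec
      fixedDelaySec: 2
      percentage: 100
      # ... remaining parameters for toy2
```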


We have already prepared a benchmark specification with delays to experiment with:


We plan to define more benchmark specifications in the future. We place all benchmark specifications in the examples/ directory under a subdirectory with the prefix benchmark-*.