Clustering Overview

Loadcoder can be clustered over several remote servers. The main idea behind this is that you can distribute your load test over more resources than you have in your own workstation. This becomes important if you design tests that use a lot of CPU and memory. Since Loadcoder measures response times, it is crucial that enough resources are available throughout the entire execution.

Architecture


Description

The cluster can be set up in various ways, but the underlying technique is the same. The cluster infrastructure is managed as code with a Controller application that calls the Docker API at each node. You can customize how many Loadcoder instances you would like, which Docker image versions to use and of course where the cluster shall execute. See the list below for a more detailed description:

The Workstation

This is typically your computer, where you have your IDE and develop the tests. You will also manage the cluster from here.

The Master and the Workers

The Master and the Worker nodes are machines running Linux with the Docker API enabled. Follow the instructions under Initial Cluster Setup to set up these machines.

Controller

The Controller is a Java application you use from your workstation to manage your Loadcoder cluster. You can for instance set up the entire cluster, start a performance test and tear the test down, fully automated, with just a few lines of code. Follow the Controller instructions for how to create and use it.

Loadcoder

The cluster distributes Loadcoder instances as Docker containers through the Docker API. Check out the Loadcoder container description for further information.

Host volume

Host volume is a persistent volume shared between the host machine and the Loadcoder container. Read the Host volume description for more information.

Deployment

This section describes different ways of deploying the cluster.

If your clustered Loadcoder test doesn't work as expected, please consult the Troubleshooting section.

Run entire cluster at your Workstation

It is possible to run the entire cluster locally at your Workstation. This deployment is recommended for early performance tests and as a good first step to understand how the Loadcoder cluster works.

Keep in mind that your workstation will run both the Master and the Worker containers on top of everything else that is running, so be aware of the amount of resources being used.

  1. Make sure your Workstation is set up both as a Master and a Worker, according to Initial Cluster Setup
  2. Create a Load Test project and configure your Workstation as the single node.

Run entire cluster at a remote machine

  1. Make sure the intended machine is set up both as a Master and a Worker, according to Initial Cluster Setup
  2. Create a Load Test project and configure the remote machine as the single node.

Run cluster distributed on several remote machines

  1. Make sure the intended machines are set up both as Master and Worker, according to Initial Cluster Setup
  2. Create a Load Test project and configure all machines as nodes. Choose one of them to be the Master node (see the configuration sketch after this list).
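As a rough sketch (not an authoritative configuration), the infrastructure part of loadcoder.conf for a distributed setup could look like the example below. It reuses the hostnames and IPs from the /etc/hosts example in Initial Cluster Setup and assumes that additional nodes follow the same node.<n>.* pattern as the single-node configuration shown later in this document; see the Cluster Configuration Documentation for the authoritative format.

################ INFRASTRUCTURE ################
# Node 1 is chosen as the Master node
cluster.masternode=1

node.1.host=masternode
node.1.dockerapi.port=2375
node.2.host=workernode1
node.2.dockerapi.port=2375

# Host-to-IP mappings, used if the hostnames can't be resolved through DNS
hostip.masternode=192.168.1.100
hostip.workernode1=192.168.1.101

For a distributed cluster, the Docker API on each node should be protected with MTLS as described under Cluster Security, which is why docker.mtls is not set to false here.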

Initial Cluster Setup

This section describes how to set up the prerequisites on the Master and Worker nodes.

You need to pay attention to security before setting up the cluster. Consider the security measures listed on the Cluster Security page to stay safe!

System Requirements

These are the lowest recommended specs needed to set up a machine:

All machines need to be able to reach the Master node over the network.

Master specific

Worker

OS

As of today, Loadcoder Cluster needs to be executed on Linux. The following list contains the Linux distributions and versions that have been verified to be able to execute the cluster.

Domain lookup

While not required, it is recommended to use machines that can be identified by hostname through a DNS that all the nodes (including the workstation) can reach. Using DNS hostnames simplifies the cluster infrastructure configuration, which otherwise needs to be done by using local host-to-IP mappings. See the cluster configuration page for more details on how to configure the cluster infrastructure.

If your cluster machines can't be found through a DNS, you can set up local host-to-IP lookups manually. In Linux this is done in the file /etc/hosts. In this case, add the host/IP mappings on all the machines in your cluster, like below:

192.168.1.100       masternode
192.168.1.101       workernode1

Docker API

The Docker API is a service that takes requests and performs Docker operations. Edit the following file:

$ vi /lib/systemd/system/docker.service

Comment out the following line with a # (or remove it) like this:

# ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Paste the following line after the line above. Note that since the Docker API will be set up with an unsecured connection, it is crucial that you use nothing else but the IP 127.0.0.1 here. This will only work if you run the entire cluster on one machine (your Workstation). If you want to distribute your cluster over remote machines, it is highly recommended to set up the Docker API with MTLS according to the Cluster Security instructions.

ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375

Then execute the following commands to activate the Docker API:

$ systemctl daemon-reload
$ service docker stop
$ service docker start

Verify that the port is up and listening on 127.0.0.1:2375:

$ netstat -an | grep 2375
tcp        0      0 127.0.0.1:2375          0.0.0.0:*               LISTEN
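You can also verify that the Docker API itself answers. The /version endpoint is part of the standard Docker Engine API, so a plain HTTP request against it should return a JSON document with version information (the exact content depends on your Docker version):

$ curl http://127.0.0.1:2375/version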

The Load Test project

This section describes how to build each cluster related component of your load test project.

Load Test Project

Instead of starting from scratch, you can always use the Loadcoder examples zip below. It contains some useful examples to start building your own clustered Load Test project from.

loadcoder-examples.zip
Controller

The Controller is a Java application that you will use to manage the Loadcoder cluster. There is no GUI. Everything is done through code and configuration. Below is an example of a Controller app.

import java.io.File;
import com.loadcoder.cluster.clients.grafana.GrafanaClient;
import com.loadcoder.cluster.clients.docker.LoadcoderCluster;
import static com.loadcoder.cluster.clients.docker.MasterContainers.*;

public class Controller {

	public static void main(String[] args){
		LoadcoderCluster cluster = new LoadcoderCluster();
		//Creates and starts Grafana, InfluxDB, Loadship and also Artifactory
		cluster.setupMaster();
		
		//Send this Maven project as a zip file to the Loadship server
		//cluster.uploadTest(new File("."));
		
		//Start a new clustered Loadcoder test
		//cluster.startNewExecution(1);
		
		//Create a Grafana Dashboard based on the data that the test wrote to InfluxDB
		//GrafanaClient grafana = cluster.getGrafanaClient(cluster.getInfluxDBClient("LoadcoderClusterTests", "InfluxReportTest"));
		//grafana.createGrafanaDashboard();
		
		//Stops and removes Grafana, InfluxDB and Loadship
		//cluster.stopExecution();
		
	}
}
loadcoder.conf

loadcoder.conf is the default configuration file that Loadcoder will try to find as a resource.

Below is a minimal configuration file that will work when running the cluster locally (at your Workstation). The configuration file can be extended with customized ports and additional nodes. Visit the Cluster Configuration Documentation for the full description.

Also note that a hostip mapping is configured and used as the host value for the master node. In order for this to work, the hostname master must be resolvable from the Workstation, either through DNS or by adding it to the file /etc/hosts.
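For example, with the hostip.master value used in the configuration below, the corresponding /etc/hosts entry on the Workstation would look like this (the IP is the one from the sample configuration and will of course differ in your environment):

192.168.1.104       master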

Note that docker.mtls below is set to false, which is insecure. MTLS shall only be disabled in situations where the Docker API can't be reached by anyone else. If you remove this parameter, or set the value to true, the Docker client will try to authenticate with MTLS. See Cluster Security for information on how to set up MTLS.

################## CONTAINERS ##################
influxdb.image=influxdb:1.7.10
grafana.image=grafana/grafana:5.4.3
loadship.image=loadcoderhub/loadcoder:1.0.0
loadcoder.image=loadcoderhub/loadcoder:1.0.0


################ INFRASTRUCTURE ################
cluster.masternode=1
docker.mtls=false

node.1.host=master
node.1.dockerapi.port=2375

hostip.master=192.168.1.104
test.sh

test.sh is a required file that must exist in order for your test to work. It is a bash script that will be executed inside each Loadcoder container that you start.

This is where you decide how the actual Loadcoder test command shall be executed. The recommended way is to run the test in the Maven test phase, as shown below.

Be creative! Design your script as you want it.

#!/bin/bash
echo "Create this test script however you like it!"
mvn -Dtest=MyLoadTest -Dloadcoder.configuration=cluster_configuration.conf test > /root/host-volume/$LOADCODER_CLUSTER_INSTANCE_ID.log
Loadcoder Test

If implemented correctly, Loadcoder tests will run just as well locally as within the cluster, with one exception: in the cluster you do not have access to a display. This means that you cannot use the RuntimeChart or ResultChart graphs, as they are GUI components that will try to start inside the clustered Docker container. This won't work!

Instead, use the InfluxDB and Grafana integration to show your Loadcoder results. The example below shows how to call the method storeAndConsumeResultRuntime so that all results are reported at runtime into the InfluxDB database configured in your Loadcoder configuration file.

import org.testng.annotations.Test;
import com.loadcoder.cluster.clients.docker.LoadcoderCluster;
import com.loadcoder.cluster.clients.influxdb.InfluxDBClient;
import com.loadcoder.load.LoadUtility;
import com.loadcoder.load.scenario.ExecutionBuilder;
import com.loadcoder.load.scenario.Load;
import com.loadcoder.load.scenario.LoadBuilder;
import com.loadcoder.load.scenario.LoadScenario;
import static com.loadcoder.statics.Statics.*;
public class InfluxReportTest {

  @Test
  public void influxReporterTest() {
    LoadScenario ls = new LoadScenario() {

      @Override
      public void loadScenario() {
        load("simple-transaction", () -> { LoadUtility.sleep(54);
        }).perform();
      }
    };

    Load l = new LoadBuilder(ls)
    .stopDecision(duration(120 * SECOND))
    .throttle(2, PER_SECOND, SHARED).build();
    
    //THIS IS WHERE YOU IMPLEMENT HOW THE LOADCODER TEST REPORTS THE RESULT TO THE INFLUXDB.
    new ExecutionBuilder(l)
    .storeAndConsumeResultRuntime(InfluxDBClient
    .setupInfluxDataConsumer(new LoadcoderCluster(), "LoadcoderClusterTests", "InfluxReportTest"))
    
    .build().execute().andWait();
  }
}
The zip

The test is distributed by zipping the Load Test project and then sending it to the Loadship container. Loadship will store it in memory and make it available for download. The containers that will run Loadcoder start by downloading and unzipping the Load Test project from Loadship and then executing test.sh.
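In terms of the Controller shown earlier, the upload and execution steps correspond to the uploadTest and startNewExecution calls. Below is a minimal sketch that isolates just those two calls (the class name is only for illustration):

import java.io.File;
import com.loadcoder.cluster.clients.docker.LoadcoderCluster;

public class UploadAndStart {

	public static void main(String[] args) {
		LoadcoderCluster cluster = new LoadcoderCluster();
		//Zip this Maven project and send it to the Loadship container
		cluster.uploadTest(new File("."));
		//Start one Loadcoder container, which downloads the zip and executes test.sh
		cluster.startNewExecution(1);
	}
}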

Maven Configuration

If your load test project depends on other artifacts managed in your local Maven repository, the Maven settings need to be configured accordingly. This can be done in two different ways.

Loadcoder Container

The Loadcoder load test will be distributed to the Worker nodes through the Docker API. Each instance will run inside a container built from the image loadcoderhub/loadcoder. It will download the test package previously uploaded to the Loadship service and execute test.sh.

The image comes with the applications below at hand for your load test execution.

If you need other versions or additional tools, you can get them by building your own Loadcoder image. The image definition is version controlled in the Loadcoder GitHub project under the module loadcoder-cluster.
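If you only need to add tools on top of the published image, a minimal Dockerfile sketch could look like the one below. This is just an illustration; you can equally well modify the version-controlled image definition instead, and the RUN step is a placeholder for whatever installation your extra tooling requires.

FROM loadcoderhub/loadcoder:1.0.0
# Add additional tools or alternative versions here, for example:
# RUN <installation commands for your extra tooling>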

Host volume

The host volume is a storage volume that can be accessed both from the host machine and from inside the Loadcoder container. It is created during the creation of the Loadcoder container and its content is persisted even if the containers are deleted. The Host volume is used to persist logs and the local Maven repository by default, but can be used for other things as well. The volume is mounted inside the Loadcoder container at:

/root/host-volume

By using the docker inspect command with the Loadcoder container ID and grepping for "Source", you can find where the volume is mounted on the host machine:

$ docker inspect 8313f0b2e9c1 | grep Source
                "Source": "/var/lib/docker/volumes/LoadcoderVolume/_data",

If you redirect the output from the command that starts the test to a log file on the host volume (as shown in test.sh), you have an easy way of accessing the test output from the machine where the container is running.
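For example, by combining the Source path from docker inspect with the log file name used in test.sh, you can follow the output of a running test directly from the host (the instance id below is a placeholder for the value of LOADCODER_CLUSTER_INSTANCE_ID):

$ tail -f /var/lib/docker/volumes/LoadcoderVolume/_data/<LOADCODER_CLUSTER_INSTANCE_ID>.log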

Grafana & InfluxDB

Grafana is a web service for data visualization. InfluxDB is a time series database. Together they compose the reporting mechanism of the Loadcoder Cluster. It works like this:

  1. When a Loadcoder test is started with the InfluxDB reporting mechanism, it will first make sure that there is a database within InfluxDB with a name that corresponds to the group name and the test name stated in the call to InfluxDBClient.setupInfluxDataConsumer:
    new ExecutionBuilder(load).storeAndConsumeResultRuntime(
      InfluxDBClient.setupInfluxDataConsumer(new LoadcoderCluster(), "LoadcoderClusterTests", "InfluxReportTest"))
      .build().execute().andWait();
    If the database doesn't exist, it will be created. Then the test starts and Loadcoder will continuously report the transaction results to InfluxDB throughout the test.
  2. When the test has successfully started, a Grafana Dashboard can be created that shows the results. This can be done from the Controller class by calling the GrafanaClient:
    LoadcoderCluster client = new LoadcoderCluster();
    GrafanaClient grafana = client.getGrafanaClient(
      client.getInfluxDBClient("LoadcoderClusterTests", "InfluxReportTest"));
    grafana.createGrafanaDashboard("2020.*");
    The group name and the test name are reused here, together with a regexp pattern to match the execution id of the test. If not specified otherwise, this will be a String containing the date and time. The first thing that happens is that InfluxDB is called to find the transactions that have been reported by the test. The purpose of this is to find the transaction names. It is therefore important that the load test has started and already reported results for each transaction name before the dashboard is created.
  3. Once the information about the transactions has been collected from the database, Grafana is called to create the datasource and the dashboard.
  4. The dashboard can be viewed by logging in to the Grafana web (http://localhost:3000 if running Grafana at localhost with the default port. The default user and password is admin / admin). Go to Dashboard -> Manage to find the dashboards, grouped as directories with names according to the group name you used during creation.
Grafana Dashboard

You can easily create new graphs for your load tests by using the GrafanaHelper.

Offline mode

A Loadcoder cluster can partly be used offline (without access to the internet). This section explains how.

Architecture

Running the cluster in offline mode is done by letting the Master and the Worker nodes access the required online services through a dedicated machine, described below as the Internet Accessor. Clustered Loadcoder tests need internet access for two reasons:

The Internet Accessor

There may be good reasons not to connect the cluster nodes directly to the internet. By introducing an Internet Accessor node, you keep your cluster offline while it remains fully functioning. The Internet Accessor can be any machine (even the Master node) that can be reached from the Master and the Worker nodes, and it will run one local Maven repository and one Docker registry.