Gateway Load Test Report

This article presents the performance of the DCE 5.0 Cloud-Native Gateway in different scenarios, helping you allocate appropriate resources to the gateway based on your needs.

Test Environment

Before starting the test, deploy DCE 5.0, download and install the testing tool, and prepare the stress-testing machines.

| Object | Role | Description |
| --- | --- | --- |
| DCE 5.0 Cloud-Native Gateway | Test Object | Deployed in master-slave mode, located at 172.30.120.211 |
| Locust | Testing Tool | Runs in a 1+4 master-slave distributed mode; each of the four stress-testing machines is configured with 8 cores and 8 G |
| Nginx | Demo service for testing gateway performance | Accessed through the DCE 5.0 Cloud-Native Gateway, access address: http://172.30.120.211:30296/ (a deployment sketch follows this table) |
| contour | Control plane of the DCE 5.0 Cloud-Native Gateway | Version 1.23.1 |
| envoy | Data plane of the DCE 5.0 Cloud-Native Gateway | Version 1.24.0 |
| Global Management | Component that the DCE 5.0 Cloud-Native Gateway depends on | Version 0.12.1 |
| Container Management | Component that the DCE 5.0 Cloud-Native Gateway depends on | Version 0.13.1 |
| Microservice Engine | Component that the DCE 5.0 Cloud-Native Gateway depends on | Version 0.15.1 |
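
The Nginx demo backend referenced above can be deployed in any number of ways; a minimal sketch with kubectl is shown below. The namespace and deployment name are hypothetical placeholders for illustration, and exposing the service through the gateway at port 30296 under the external.nginx domain is configured separately in the DCE 5.0 Cloud-Native Gateway.

    # Minimal sketch: three Nginx replicas with no resource limits (hypothetical names).
    kubectl create namespace nginx-demo
    kubectl -n nginx-demo create deployment nginx --image=nginx --replicas=3
    kubectl -n nginx-demo expose deployment nginx --port=80 --target-port=80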

Performance Indicators

  • Throughput (RPS): the number of requests processed per second. Combined with CPU utilization, it indicates the maximum number of requests per second that the DCE 5.0 Cloud-Native Gateway can process under a given resource configuration. The higher the throughput, the better the gateway's performance (a rough way to estimate the full-load ceiling from these two indicators is sketched after this list).
  • CPU utilization: the CPU usage of the DCE 5.0 Cloud-Native Gateway instance while it processes a given number of concurrent requests during the test. When CPU usage exceeds 90%, the gateway is considered to be approaching full load, and the throughput (RPS) at that point is taken as the maximum it can handle normally with the current configuration.
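
As a rough illustration of how these two indicators combine (a simplified linear extrapolation, not the exact method used for the conclusions in this report): if a configuration sustains about 5700 RPS at roughly 80% CPU, its full-load ceiling can be estimated by dividing throughput by CPU utilization.

    # Rough full-load estimate: measured RPS divided by measured CPU utilization.
    # 5700 RPS at ~80% CPU suggests a ceiling of roughly 7100 RPS; the measured
    # limit tends to be somewhat lower because scaling is not perfectly linear.
    awk 'BEGIN { printf "estimated max RPS: %.0f\n", 5700 / 0.80 }'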

Test Script

  • Run the following command on the Locust Web machine to collect the stress-testing results:

    docker run -p 8089:8089 --network=host -v $PWD:/mnt/locust locustio/locust -f /mnt/locust/gateway-external-nginx.py --master
    
  • Run the following command on the Locust stress-testing machine to simulate user access and perform stress testing:

    docker run -p 8089:8089 --network=host -v $PWD:/mnt/locust locustio/locust -f /mnt/locust/gateway-external-nginx.py --worker --master-host=172.30.120.210
    
  • Stress-testing script gateway-external-nginx.py (a headless variant of the commands above is sketched after the script):

    from locust import task
    from locust.contrib.fasthttp import FastHttpUser

    # FastHttpUser uses a faster HTTP client than the default HttpUser, which helps
    # the load generator itself sustain a high request rate.
    class ShellCard(FastHttpUser):

        host = "http://172.30.120.211:30296"  # Access address of the tested service

        @task
        def test(self):
            # The Host header matches the domain configured on the gateway, so the
            # gateway routes the request to the backing Nginx service.
            header = {"Host": "external.nginx"}
            self.client.get("/", headers=header)
    
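If you prefer to drive the same distributed test without the Locust web UI, Locust also supports a headless mode. The sketch below reuses the script and worker layout above; the user count (-u), spawn rate (-r), run time (-t), and expected worker count are illustrative values rather than settings taken from this report.

    # Headless master: wait for 4 workers, then run 8 users for 5 minutes.
    docker run --network=host -v $PWD:/mnt/locust locustio/locust \
        -f /mnt/locust/gateway-external-nginx.py --master --headless \
        --expect-workers 4 -u 8 -r 8 -t 5m

The worker commands stay the same as above. In headless mode, Locust prints the aggregated statistics to the console, which is convenient for scripted, repeatable test rounds.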

Test Nginx Throughput: Three Replicas, No Resource Limitations

Test Results

| Number of Concurrent Users | Throughput (RPS) | CPU Utilization | Analysis |
| --- | --- | --- | --- |
| 4 | 4300 | 58%-70% | Resources are not fully utilized; the gateway can theoretically handle more requests. |
| 8 | 5700 | 77%-83% | A small amount of resources is still idle; the number of concurrent requests can be increased to probe the throughput limit. |
| 12 | 6700 | 90%-95% | Only a very small amount of resources is idle and the CPU is close to full load, so this is considered the maximum throughput. |

Success

Based on the above, when three replicas of the tested service are deployed with no resource limits, the DCE 5.0 Cloud-Native Gateway can process approximately 6,000 to 7,000 requests per second, which is excellent performance compared with similar products.

Test Process Screenshots

  • Concurrent Users = 4

  • Concurrent Users = 8

  • Concurrent Users = 12

Investigating the Impact of Contour Resource Configuration on Envoy Performance

The DCE 5.0 Cloud-Native Gateway is developed and optimized on top of the open-source projects Contour and Envoy: Contour serves as the gateway's control plane, and Envoy serves as its data plane.

When creating the DCE 5.0 Cloud-Native Gateway, the system requires that the gateway be configured with no less than 1 core and 1 G of resources. Therefore, in this test, the minimum resource limit for Contour is set to 1 core and 1 G.

To better isolate the impact of Contour's resource configuration, Envoy's resource limit is set to 6 cores and 3 G, so that Envoy itself always has ample headroom and the results are not skewed by insufficient Envoy resources.
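
For reference, resource limits such as these can also be pinned directly on the gateway's underlying workloads with kubectl. The namespace and workload names below are hypothetical placeholders rather than values from this environment (if Envoy runs as a DaemonSet instead of a Deployment, adjust the resource type accordingly); in practice, the same limits can be set in the DCE 5.0 interface when creating or editing the gateway.

    # Hypothetical names -- adjust to the actual gateway instance.
    kubectl -n my-gateway-ns set resources deployment my-gateway-contour \
        --limits=cpu=1,memory=1Gi
    kubectl -n my-gateway-ns set resources deployment my-gateway-envoy \
        --limits=cpu=6,memory=3Gi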

To keep the load on the stress-testing machines at a normal level, the number of Locust users is set to 8 by default.

Test Results

| Contour Resource Specification | Throughput (RPS) | CPU Utilization | Analysis |
| --- | --- | --- | --- |
| 1 core 1 G | 3700 | 53%-69% | As Contour's resources grow from 1 core 1 G to 2 cores 1 G and then 3 cores 2 G, the gateway's maximum throughput stays at around 3700 (fluctuating by only about 100 RPS), and CPU utilization stays at roughly 50% to 70%, with very little overall change. |
| 2 cores 1 G | 3600 | 55%-72% | |
| 3 cores 2 G | 3800 | 57%-69% | |

Success

This indicates that Contour's resource configuration has almost no impact on Envoy's performance.

Test Process Screenshots

  • contour: 1 core 1 G

  • contour: 2 cores 1 G

  • contour: 3 cores 2 G

Investigating the Impact of Envoy Resource Configuration on Throughput

Envoy is fixed at 1 replica, Contour is configured with 1 core and 1 G, and the tested Nginx service runs 3 replicas with no resource limits.
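
The replica counts for this setup can be fixed with kubectl as sketched below; the namespace and workload names are again hypothetical placeholders.

    # Hypothetical names: one Envoy replica for the gateway, three Nginx replicas.
    kubectl -n my-gateway-ns scale deployment my-gateway-envoy --replicas=1
    kubectl -n nginx-demo scale deployment nginx --replicas=3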

Test Results

| Envoy Resource Specification | Concurrent Users | Throughput (RPS) | CPU Utilization | Analysis |
| --- | --- | --- | --- | --- |
| 1 core 1 G | 4 | 1016 | 18%-22% | With Envoy's resources unchanged, the gateway throughput changes little even when the number of concurrent users doubles. |
| 1 core 1 G | 8 | 1181 | 19%-20% | |
| 1 core 1 G | 16 | 1090 | 19%-22% | |
| 2 cores 1 G | 4 | 2103 | 28%-41% | With 4 concurrent users, increasing the configuration from 1 core to 2 cores raises throughput by about 1000. |
| 2 cores 1 G | 8 | 2284 | 38%-47% | |
| 3 cores 1 G | 8 | 3355 | 59%-70% | With 8 concurrent users, increasing the configuration from 1 core to 3 cores raises throughput by about 2000. |
| 3 cores 1 G | 12 | 3552 | 52%-59% | |
| 4 cores 2 G | 8 | 3497 | 58%-80% | With 12 concurrent users, increasing the configuration from 3 cores to 5 cores raises throughput by about 1000. |
| 4 cores 2 G | 12 | 4250 | 78%-86% | |
| 5 cores 2 G | 8 | 3573 | 60%-81% | |
| 5 cores 2 G | 12 | 4698 | 68%-78% | |
| 6 cores 2 G | 12 | 4574 | 78%-85% | With 6 cores and 2 G, throughput reaches about 5400 and CPU usage exceeds 90%, close to full load. |
| 6 cores 2 G | 16 | 5401 | Above 90% | |

Success

In conclusion:

  • Envoy's CPU configuration is the determining factor for throughput.
  • Under the current stress-test resources, the throughput of accessing Nginx through Envoy can reach more than 80% of the throughput of accessing Nginx directly (a sketch for reproducing the direct-access baseline follows this list).
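
The direct-access baseline mentioned above can be measured with the same Locust script by pointing its host attribute straight at the Nginx service instead of at the gateway. The address in the comment below is a hypothetical direct endpoint (for example a NodePort exposed by the Nginx Service), not one taken from this report, and depending on the Nginx configuration the Host header override in the script may need to be removed for a direct run.

    # Single-process headless baseline against Nginx directly, after pointing the
    # script's host attribute at a hypothetical direct Nginx address
    # (e.g. http://172.30.120.212:30080 via a NodePort on the Nginx Service).
    docker run --network=host -v $PWD:/mnt/locust locustio/locust \
        -f /mnt/locust/gateway-external-nginx.py --headless -u 8 -r 8 -t 5m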

Test Process Screenshots

Envoy 1 core 1 G

  • Concurrent Users = 4

  • Concurrent Users = 8

  • Concurrent Users = 16

Envoy 2 cores 1 G

  • Concurrent Users = 4

  • Concurrent Users = 8

Envoy 3 cores 1 G

  • Concurrent Users = 8

  • Concurrent Users = 12

Envoy 4 cores 2 G

  • Concurrent Users = 8

  • Concurrent Users = 12

Envoy 5 cores 2 G

  • Concurrent Users = 8

  • Concurrent Users = 12

Envoy 6 cores 2 G

  • Concurrent Users = 12

  • Concurrent Users = 16
