HTTP POST benchmarking / stress-testing an API behind HAProxy with siege



I was looking for a way to stress-test a SOAP API running behind a HAProxy load balancer using HTTP POST requests. In my usual stress-testing scenarios (using GET) I've been using "wrk" and "httperf" for years. But this week I came across something else: siege.

According to the description, siege is:

HTTP regression testing and benchmarking utility
 Siege is an regression test and benchmark utility. It can stress test a single
 URL with a user defined number of simulated users, or it can read many URLs
 into memory and stress them simultaneously. The program reports the total
 number of hits recorded, bytes transferred, response time, concurrency, and
 return status. Siege supports HTTP/1.0 and 1.1 protocols, the GET and POST
 directives, cookies, transaction logging, and basic authentication. Its
 features are configurable on a per user basis.

Installation on Debian and Ubuntu is very easy, as siege is part of the standard repositories:

$ sudo apt-get install siege

To run a benchmarking (stress-) test using POST data, I used the following command:

# siege -b --concurrent=10 --content-type="text/xml;charset=utf-8" -H "Soapaction: ''" 'http://localhost:18382/ POST < /tmp/xxx' -t 10S

So let's break that down:

-b: Run a benchmark/stress test (no delays between requests, as there would be for a normal Internet user)
--concurrent=n: Simulate n concurrent users; here I chose 10 concurrent users
--content-type: Define the content type. This can be quite important when testing a POST (to send the data in the correct format)
-H: Additional HTTP headers can be sent by using -H (multiple times)
-t: How long siege should run; here I chose 10 seconds (10S)

Watch out for the destination URL syntax: it must be enclosed in single quotes and also contain the request method:

'URL[:port][URI] METHOD'

Additionally, I pointed siege directly at the data stored in the file /tmp/xxx to be sent as the POST body:

$ cat /tmp/xxx
InternetClaudio KuenzlerHello worldThis is a test.

This can of course be in any format you want (for example JSON), as long as the destination API correctly handles the data.
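For example, posting a JSON payload instead works the same way; only the content type changes. The payload file name and its content below are purely illustrative, and the siege call only runs if siege is installed:

```shell
# Hypothetical JSON payload (file name and fields are examples)
cat > /tmp/payload.json <<'EOF'
{"customer": "Claudio Kuenzler", "message": "Hello world"}
EOF

# Same kind of siege invocation, with a JSON content type
# (guarded so the snippet is harmless where siege is not installed)
if command -v siege >/dev/null 2>&1; then
  siege -b --concurrent=10 --content-type="application/json" \
    'http://localhost:18382/ POST < /tmp/payload.json' -t 10S
fi
```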

The siege command above prints statistics after it finishes its run:

# siege -b --concurrent=10 --content-type="text/xml;charset=utf-8" -H "Soapaction: ''" 'http://localhost:18382/ POST < /tmp/xxx' -t 10S
** SIEGE 3.0.8
** Preparing 10 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                 224 hits
Availability:              100.00 %
Elapsed time:                9.60 secs
Data transferred:            0.10 MB
Response time:                0.23 secs
Transaction rate:           23.33 trans/sec
Throughput:                0.01 MB/sec
Concurrency:                5.41
Successful transactions:         224
Failed transactions:               0
Longest transaction:            4.91
Shortest transaction:            0.00

FILE: /var/log/siege.log
You can disable this annoying message by editing
the .siegerc file in your home directory; change
the directive 'show-logfile' to false.

Most of them are pretty self-explanatory (otherwise consult the man page, which is well documented), but just to add some notes:

Transactions: siege was able to hit the target 224 times (HTTP responses with a status below 500)
Elapsed time: The test ran for 9.6 seconds
Successful transactions: All 224 transactions were successful
Availability: ... resulting in 100% availability

I mentioned before that I was testing a SOAP API balanced through a HAProxy load balancer (listening on port 18382, as you probably noticed).
In this particular setup the SOAP servers can only handle a maximum of 6 concurrent connections each. In HAProxy I set the maxconn value for each backend server to "6" and configured a minimal queue with a very low queuing timeout (I don't want requests to pile up in a queue; instead, HAProxy should deliver an error). The HAProxy backend runs with 16 servers, each allowing 6 concurrent connections. This makes a total of 96 possible concurrent connections (16 x 6). Let's try siege with 100 concurrent users; there should be some failed requests.
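In HAProxy terms, the relevant backend settings could look roughly like this. This is only a sketch: the backend name, server names, addresses, ports, and the queue/timeout values are hypothetical, not the actual configuration from this setup.

```haproxy
backend soap_servers
    # Fail fast instead of letting requests pile up in the queue
    # (hypothetical value)
    timeout queue 1s
    # Each backend server capped at 6 concurrent connections,
    # with a minimal per-server queue
    server soap01 10.0.0.1:8080 maxconn 6 maxqueue 1
    server soap02 10.0.0.2:8080 maxconn 6 maxqueue 1
    # ... 16 servers in total = 96 possible concurrent connections
```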

# siege -b --concurrent=100 --content-type="text/xml;charset=utf-8" -H "Soapaction: ''" 'http://localhost:18382/ POST < /tmp/xxx' -t 10S
** SIEGE 3.0.8
** Preparing 100 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                 224 hits
Availability:               39.16 %
Elapsed time:                9.35 secs
Data transferred:            0.13 MB
Response time:                1.52 secs
Transaction rate:           23.96 trans/sec
Throughput:                0.01 MB/sec
Concurrency:               36.34
Successful transactions:         224
Failed transactions:             348
Longest transaction:            5.09
Shortest transaction:            0.00

As in the previous test, siege was able to hit the target 224 times.
However this time there are 348 failed and only 224 successful transactions, resulting in an availability of 39.16%.
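The availability figure is simply the share of successful transactions out of all transactions, which is easy to double-check:

```shell
# 224 successful out of 224 + 348 = 572 total transactions
awk 'BEGIN { printf "%.2f%%\n", 224 * 100 / (224 + 348) }'
# → 39.16%
```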

And now the same with the max concurrent connections of 96:

# siege -b --concurrent=96 --content-type="text/xml;charset=utf-8" -H "Soapaction: ''" 'http://localhost:18382/ POST < /tmp/xxx' -t 10S
** SIEGE 3.0.8
** Preparing 96 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                 224 hits
Availability:              100.00 %
Elapsed time:                9.38 secs
Data transferred:            0.10 MB
Response time:                1.52 secs
Transaction rate:           23.88 trans/sec
Throughput:                0.01 MB/sec
Concurrency:               36.20
Successful transactions:         224
Failed transactions:               0
Longest transaction:            5.09
Shortest transaction:            0.00

100% success rate here: the 96 concurrent users match exactly the number of connections HAProxy allows, so no request was rejected.

By the way: siege also lets you compare previous test results by checking the log file it creates:

# cat /var/log/siege.log
Date & Time,  Trans,  Elap Time,  Data Trans,  Resp Time,  Trans Rate,  Throughput,  Concurrent,    OKAY,   Failed
2017-08-03 08:56:46,    224,       9.19,           0,       1.52,       24.37,        0.00,       36.95,     224,     340
2017-08-03 09:04:01,    224,       9.35,           0,       1.52,       23.96,        0.00,       36.34,     224,     348
2017-08-03 09:05:24,    224,       9.03,           0,       1.58,       24.81,        0.00,       39.19,     224,     336
2017-08-03 09:05:39,    224,       9.23,           0,       1.50,       24.27,        0.00,       36.50,     224,     329
2017-08-03 09:06:55,    224,       9.49,           0,       1.45,       23.60,        0.00,       34.23,     224,       0
2017-08-03 09:08:03,    224,       9.38,           0,       1.52,       23.88,        0.00,       36.20,     224,       0
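Since the log is CSV, it is easy to post-process. As a small sketch (assuming the column layout shown above, with OKAY and Failed in fields 9 and 10), the availability of each logged run can be recomputed with awk; the sample file below just reuses two of the lines from the log:

```shell
# Two sample lines in siege.log format (taken from the runs above)
cat > /tmp/siege-sample.log <<'EOF'
Date & Time,  Trans,  Elap Time,  Data Trans,  Resp Time,  Trans Rate,  Throughput,  Concurrent,    OKAY,   Failed
2017-08-03 09:04:01,    224,       9.35,           0,       1.52,       23.96,        0.00,       36.34,     224,     348
2017-08-03 09:08:03,    224,       9.38,           0,       1.52,       23.88,        0.00,       36.20,     224,       0
EOF

# Availability per run = OKAY / (OKAY + Failed); skip the header line
awk -F',' 'NR > 1 {
    ok = $9 + 0; fail = $10 + 0
    printf "%s  availability=%.2f%%\n", $1, ok * 100 / (ok + fail)
}' /tmp/siege-sample.log
```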

Hands down a very good tool for HTTP benchmarking/stress-testing, and not just for POST requests.



Comments (newest first)

Juanga Covas wrote on Jan 20th, 2022:

Thanks! I knew about "siege", but being able to massively repeat POSTs was a day saver for me.

