Magento2 Load Balancer Health Check for AWS ELB



The Problem

Amazon Web Services (AWS) Elastic Load Balancing (ELB) performs a back-end 'Health Check' against each compute resource, expecting a success code such as HTTP 200 'OK' before it starts sending traffic.

Out of the box, the eCommerce CMS Magento 2 caters for two potential causes of failure in a typical LEMP (Linux, Nginx, MySQL & PHP) stack -

  • PHP fails = 502 Bad Gateway
  • MySQL fails = 503 Service Unavailable
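The status-code mapping above can be sketched as a tiny helper (the function name and wording are illustrative only, not part of Magento or the health check):

```php
<?php
// Hypothetical helper: map the HTTP status code seen by the load
// balancer to the failure domain described above.
function classify_status(int $code): string
{
    switch ($code) {
        case 200:
            return 'healthy';
        case 502:
            return 'PHP-FPM down (Bad Gateway)';
        case 503:
            return 'MySQL down (Service Unavailable)';
        default:
            return 'unknown failure';
    }
}
```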

The Solution

How are others doing it?

A few minutes of Googling turned up this approach: https://serverfault.com/questions/578984/how-to-set-up-elb-health-checks-with-multiple-applications-running-on-each-ec2-i

Adapting it to the Magento2 API URI '/rest/default/schema' results in the following snippet:

## All AWS Health Checks from the ELBs arrive at the default server.
## Forward these requests to the appropriate configuration on this host.
  location /health-check/ {
    rewrite ^/health-check/(?<domain>[a-zA-Z0-9\.]+) /rest/default/schema break;
    # Lie about the incoming protocol, to avoid the backend issuing a 301
    #  insecure->secure redirect, which would not be considered successful.
    proxy_set_header X-Forwarded-Proto 'https';
    proxy_set_header Host $domain;
    proxy_pass http://127.0.0.1;
  }

Why is this useful?

Unfortunately the ELB does not allow us to set custom headers on its requests to Nginx, so the above must live in the default server block - more on that in a minute. The way this location ingests the URL lets us specify the intended Host header, which is very useful if running more than one site on each server. So now, by calling this check in the format http://{EC2 Public IP}/health-check/{base url}, we retrieve a (poorly formatted) JSON-encoded response.
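The probe URL format described above can be expressed as a trivial builder (the function name and example addresses are illustrative assumptions):

```php
<?php
// Illustrative only: assemble the probe URL in the format
// http://{EC2 Public IP}/health-check/{base url} described above.
function health_check_url(string $instance_ip, string $base_host): string
{
    return sprintf('http://%s/health-check/%s', $instance_ip, $base_host);
}
```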

What are the problems running in this configuration? 

It is a useful building block that we can extend, but it does not cover all of the failure domains of a Magento2 store. These include -

  • Being administratively placed in maintenance mode ('php bin/magento maintenance:enable')
  • /vendor folder missing dependencies (returns: Autoload error Vendor autoload is not found. Please run 'composer install' under application root directory)
  • Redis server becoming unavailable (returns: An error has happened during application run. See exception log for details. Could not write error message to log. Please use developer mode to see the message)

In all of these scenarios the above health check still returns 'HTTP 200 OK', with the error message in the body, so live traffic is still routed to nodes that are not functioning correctly.
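The distinguishing signal is the response body: a healthy API returns JSON, while the failures above return plain-text error messages. That core check can be isolated as a one-liner (hypothetical function name):

```php
<?php
// A healthy Magento 2 API response decodes as JSON; the plain-text
// error bodies listed above do not.
function is_valid_json(string $body): bool
{
    json_decode($body);
    return json_last_error() === JSON_ERROR_NONE;
}
```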

So how can we cater to these additional failure domains? Enter some simple PHP JSON decoding.

<?php
// Proxy the local health-check location and validate that the body
// is JSON. A healthy Magento 2 API returns JSON; the failure modes
// above return plain-text error messages with an HTTP 200.

// Only allow the characters the Nginx rewrite accepts.
$hostname = preg_replace('/[^a-zA-Z0-9.]/', '', $_REQUEST['hostname'] ?? '');
$url = 'http://127.0.0.1/health-check/' . $hostname;

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$data = curl_exec($ch);
curl_close($ch);

json_decode((string) $data);

if ($data !== false && json_last_error() === JSON_ERROR_NONE) {
    http_response_code(200);
} else {
    http_response_code(418);
}

PHP is the natural language choice, given this is what Magento2 uses natively. You can place the above inside an index.php file in an alternative location on the server - and use a separate PHP pool and user combination for additional security if required. It calls the initial solution and parses the result to test for valid JSON. In normal circumstances this correctly routes traffic to functioning nodes, but when any of the previously mentioned failure conditions are met it instead tells the Load Balancer 'HTTP 418 I AM A TEAPOT', stopping traffic from being routed.
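For the separate-pool suggestion, a dedicated PHP-FPM pool might look something like the sketch below - the pool name, user, socket path and process limits here are all illustrative assumptions, not values the health check requires:

```ini
; Hypothetical dedicated pool for the health-check script only.
[lbstatus]
user = lbstatus
group = lbstatus
listen = /var/run/php-fpm-lbstatus.sock
listen.owner = nginx
listen.group = nginx
pm = static
pm.max_children = 2
```

The Nginx server block would then point fastcgi_pass at this pool's socket instead of the shared Magento backend, keeping the health-check script isolated from the store's PHP workers.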

As an added bonus, the data being sent back to the load balancer is only 5 bytes, with the full API response staying locally within the server. 

Full Nginx Config

# ELB Health Check Config
server {
  listen 80 default_server;
  server_name  _;
  index index.php;
  root /var/www/lbstatus;
  access_log off;
  location = /favicon.ico {access_log off; log_not_found off;}
  location ~ index\.php$ {
    try_files $uri =404;
    fastcgi_intercept_errors on;
    fastcgi_pass fastcgi_backend;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  }

  ## All AWS Health Checks from the ELBs arrive at the default server.
  ## Forward these requests to the appropriate configuration on this host.
  location /health-check/ {
    rewrite ^/health-check/(?<domain>[a-zA-Z0-9\.]+) /rest/default/schema break;
    # Lie about the incoming protocol, to avoid the backend issuing a 301
    #  insecure->secure redirect, which would not be considered successful.
    proxy_set_header X-Forwarded-Proto 'https';
    proxy_set_header Host $domain;
    proxy_pass http://127.0.0.1;
  }
}

So there we go: two levels of 'Health Check' within the Nginx default server configuration - one injecting the Host header to return the API JSON (when working), and a simple PHP script checking that the response is valid JSON - let us tell the ELB when Magento2 is healthy. The health check URI configured in the ELB becomes '/?hostname={your base url}', and you should see 200 OK responses in the logs, presuming everything is ship shape.

Failure Domains Not Catered For

  • Front end issues, eg. broken CSS, JS, images
  • Products missing from categories, often Indexing issues

If you have any solutions that might help us programmatically address them, please get in touch and let us know in the comments.




