HAproxy Loadbalancer

Introduction

Where a Keepalived-based approach to Galera load balancing is not feasible, the next best alternative is HAproxy.

System Design

We present a solution where each OX node runs its own HAproxy instance. This way we avoid the need for failover IPs or IP forwarding, whose unavailability is often the reason the Keepalived-based approach cannot be used.

We create two HAproxy "listeners": a round-robin one for the read requests and an active/passive one for the write requests.
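With the configuration below, the read listener binds to 127.0.0.1:3306 and the write listener to 127.0.0.1:3307 on every OX node. As a sketch of the OX side, assuming the standard configdb.properties keys (path, database name and credentials will differ per installation), the application would then use the local listeners like this:

 # excerpt from configdb.properties (illustrative)
 readUrl=jdbc:mysql://127.0.0.1:3306/configdb
 writeUrl=jdbc:mysql://127.0.0.1:3307/configdb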

Software Installation

HAproxy is shipped with all common distributions and can be installed from the standard package repositories.
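For example (the package is simply called haproxy on both distribution families):

 # yum install haproxy        # RHEL/CentOS
 # apt-get install haproxy    # Debian/Ubuntu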

Configuration

The following is a HAproxy configuration file, assuming the Galera nodes have the IPs 192.168.1.101 through 192.168.1.103:

 global
     log 127.0.0.1 local0 notice
     user haproxy
     group haproxy
     # TUNING
     # this is not recommended by the haproxy authors, but seems to improve performance for me
     #nbproc 4
 
 defaults
     log global
     retries           3
     maxconn           256000
     timeout connect   60000
     timeout client    120000
     timeout server    120000
     no option httpclose
     option            dontlognull
     option            redispatch
     option            allbackups
 
 listen mysql-cluster
     bind 127.0.0.1:3306
     mode tcp
     balance roundrobin
     option httpchk
     server dav-db1 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db2 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db3 192.168.1.103:3306 check port 9200 inter 12000 rise 3 fall 3
 
 listen mysql-failover
     bind 127.0.0.1:3307
     mode tcp
     balance roundrobin
     option httpchk
     server dav-db1 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db2 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3 backup
     server dav-db3 192.168.1.103:3306 check port 9200 inter 12000 rise 3 fall 3 backup
 
 #
 # can configure a stats interface here, but if you do so,
 # change the username / password
 #
 #listen stats
 #    bind 0.0.0.0:8080
 #    mode http
 #    stats enable
 #    stats uri /
 #    stats realm Strictly\ Private
 #    stats auth user:pass
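Before (re)starting the service, the configuration can be syntax-checked with HAproxy's built-in check mode (assuming the file lives at /etc/haproxy/haproxy.cfg, the usual default location):

 # haproxy -c -f /etc/haproxy/haproxy.cfg
 Configuration file is valid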

You can see that we use the httpchk option, which means that HAproxy makes HTTP requests to determine node health. Therefore we need to configure something on the Galera nodes that answers those requests.

The Percona Galera packages ship with a script, /usr/bin/clustercheck, which can be called like this:

 # /usr/bin/clustercheck <username> <password>
 HTTP/1.1 200 OK
 Content-Type: text/plain
 Connection: close
 Content-Length: 40
 
 Percona XtraDB Cluster Node is synced.
 # 
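When the node is not synced, clustercheck responds with an HTTP 503 instead, which is what causes HAproxy to take the node out of rotation. Roughly (the exact message wording may vary between script versions):

 # /usr/bin/clustercheck <username> <password>
 HTTP/1.1 503 Service Unavailable
 Content-Type: text/plain
 Connection: close
 Content-Length: 44
 
 Percona XtraDB Cluster Node is not synced.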

They also ship an xinetd service definition. On RHEL/CentOS the service additionally needs to be added to /etc/services.
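An entry in /etc/services maps the service name to its port; a line like the following should suffice (the name and port must match the xinetd definition):

 mysqlchk        9200/tcp                # mysqlchk

The xinetd service definition looks like this: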

 # default: on
 # description: mysqlchk
 # this is a config for xinetd, place it in /etc/xinetd.d/
 service mysqlchk
 {
         disable         = no
         flags           = REUSE
         socket_type     = stream
         port            = 9200
         wait            = no
         user            = nobody
         server          = /usr/bin/clustercheck
         server_args     = username password
         log_on_failure  += USERID
         only_from       = 0.0.0.0/0
         per_source      = UNLIMITED
         type            = UNLISTED
 }
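Replace username and password in the server_args line with the credentials of the MySQL user that clustercheck should authenticate with. After restarting xinetd, the health check can be verified from any OX node; a quick sketch, assuming curl is installed and using the first Galera node from the configuration above:

 # service xinetd restart
 # curl http://192.168.1.101:9200/
 Percona XtraDB Cluster Node is synced.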