HAproxy Loadbalancer

Introduction

Where a Keepalived-based approach to Galera load balancing is not feasible, the next best alternative is to use HAproxy.

System Design

We present a solution where each OX node runs its own HAproxy instance. This way no failover IPs or IP forwarding are needed, which is often the reason why the Keepalived-based approach is unavailable.

We create two HAproxy "listeners": a round-robin one for read requests and an active/passive one for write requests.
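
The read listener will be bound to 127.0.0.1:3306 and the write listener to 127.0.0.1:3307 (see the configuration below), so the local OX instance simply uses two different ports on localhost for its read and write connections. A minimal sketch of what this could look like, assuming the usual configdb.properties JDBC URLs; file location, property names and database name depend on your installation:

 # /opt/open-xchange/etc/configdb.properties (illustrative only)
 # reads go through the round-robin listener ...
 readUrl=jdbc:mysql://127.0.0.1:3306/configdb
 # ... writes through the active/passive failover listener
 writeUrl=jdbc:mysql://127.0.0.1:3307/configdb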

Software Installation

HAproxy should be shipped with your distribution.

Wheezy note: haproxy is provided in wheezy-backports, see http://haproxy.debian.net/

Short version:

 echo "deb http://http.debian.net/debian wheezy-backports main" > /etc/apt/sources.list.d/wheezy-backports.list
 apt-get update
 apt-get -t wheezy-backports install haproxy
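
On RHEL/CentOS based systems (which also come up in the health check section below), HAproxy can usually be installed from the distribution or EPEL repositories; a sketch, assuming yum and SysV init:

 yum install haproxy
 chkconfig haproxy on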

Configuration

The following is an HAproxy configuration file, assuming the Galera nodes have the IPs 192.168.1.101 to 192.168.1.103:

 global
     log 127.0.0.1     local0
     log 127.0.0.1     local1 notice
     # this is not recommended by the haproxy authors, but seems to improve performance for me
     #nbproc 4
     maxconn           256000
     spread-checks     5
     daemon
     stats socket      /var/lib/haproxy/stats 
 
 defaults
     log               global
     retries           3
     maxconn           256000
     timeout connect   60000
     timeout client    120000
     timeout server    120000
     option            dontlognull
     option            redispatch
     option            allbackups
     # the http options are not needed here
     # but may be reasonable if you use haproxy also for some OX HTTP proxying
     mode              http
     no option         httpclose
 
 listen mysql-cluster
     bind 127.0.0.1:3306
     mode tcp
     balance roundrobin
     option httpchk
     server dav-db1 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db2 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db3 192.168.1.103:3306 check port 9200 inter 12000 rise 3 fall 3
 
 listen mysql-failover
     bind 127.0.0.1:3307
     mode tcp
     balance roundrobin
     option httpchk
     server dav-db1 192.168.1.101:3306 check port 9200 inter 12000 rise 3 fall 3
     server dav-db2 192.168.1.102:3306 check port 9200 inter 12000 rise 3 fall 3 backup
     server dav-db3 192.168.1.103:3306 check port 9200 inter 12000 rise 3 fall 3 backup
 
 #
 # can configure a stats interface here, but if you do so,
 # change the username / password
 #
 #listen stats
 #    bind 0.0.0.0:8080
 #    mode http
 #    stats enable
 #    stats uri /
 #    stats realm Strictly\ Private
 #    stats auth user:pass
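
Before (re)starting the service, it is worth letting HAproxy validate the file. A sketch, assuming the configuration was saved to the default /etc/haproxy/haproxy.cfg (on Debian Wheezy you may additionally need ENABLED=1 in /etc/default/haproxy):

 # check the configuration for syntax errors, then restart
 haproxy -c -f /etc/haproxy/haproxy.cfg
 service haproxy restart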

You can see we use the httpchk option, which means that HAproxy makes HTTP requests to determine node health. Therefore we need to configure something that answers those requests on port 9200.

The Percona Galera packages ship with a script /usr/bin/clustercheck which can be called like this:

 # /usr/bin/clustercheck <username> <password>
 HTTP/1.1 200 OK
 Content-Type: text/plain
 Connection: close
 Content-Length: 40
 
 Percona XtraDB Cluster Node is synced.
 # 

They also ship an xinetd service definition. On RHEL/CentOS the service needs to be added to /etc/services.
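
For example, a line like the following could be appended to /etc/services, assuming the service is named mysqlchk as in the xinetd definition further down and uses port 9200 as in the HAproxy configuration above:

 mysqlchk        9200/tcp        # mysqlchk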

You need a MySQL user for this service. Create it as follows:

 mysql -e 'grant process on *.* to "clustercheck"@"localhost" identified by "<password>";'

Of course, substitute "<password>" here (and in the following) with a reasonable password.
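
With the user in place, you can verify that the check script accepts the new credentials by calling it directly, as in the example above:

 /usr/bin/clustercheck clustercheck <password>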

Then adjust the xinetd configuration:

 # default: on
 # description: mysqlchk
 # this is a config for xinetd, place it in /etc/xinetd.d/
 service mysqlchk
 {
         disable         = no
         flags           = REUSE
         socket_type     = stream
         port            = 9200
         wait            = no
         user            = nobody
         server          = /usr/bin/clustercheck
         server_args     = clustercheck <password>
         log_on_failure  += USERID
         only_from       = 0.0.0.0/0
         per_source      = UNLIMITED
         type            = UNLISTED
 }
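
Restart xinetd on each Galera node; the check should then be reachable over plain HTTP on port 9200, which is exactly what HAproxy's httpchk probes. A quick way to verify this, using the example IPs from above:

 # on each Galera node
 service xinetd restart
 # on an OX node
 curl -i http://192.168.1.101:9200/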