Load balancing and clustering


Load balancing and clustering Open-Xchange

General

Open-Xchange Server 6 is primarily built for the Software-as-a-Service world. Hosting and telecommunication providers around the world use Open-Xchange to offer hosted services to their customers. Open-Xchange Server 6 scales both vertically and horizontally: to meet growing resource requirements, you can either use a more powerful server or add more machines. While upgrading a single-server installation inevitably reaches a point where costs rise faster than performance gains, adding simple machines to the installation keeps costs growing linearly at the price of slightly more complex administration. Besides the fiscal benefit of using medium-sized servers, another key argument for clustering is service availability: single nodes can go down for maintenance without affecting the availability of the overall service. A typical scenario for clustering is virtualization, where multiple nodes can provide resources on demand.

One of the main principles of Open-Xchange Server 6 is the ability to utilize several medium-sized servers. This guide outlines the basic principles of clustering Open-Xchange Server instances and of load balancing to utilize all nodes of a cluster.

Requirements

Since clustering and load balancing are advanced topics, skills in operating system and Open-Xchange Server 6 administration are required. To gain those skills, please refer to the documentation repository and to general system administration literature. This guide sets up five machines in total, so it is recommended to practice in a virtualized environment first. When rolling out the setup, use real hardware or an enterprise-grade virtualization solution like VMware ESX or Citrix XEN. The following types of servers will be set up:

  • 1 Webserver (Apache)
  • 2 Groupware nodes (Open-Xchange Server 6)
  • 2 Database servers (MySQL Master/Slave)

To maintain consistency throughout the guide, each system gets a unique name which can be set as its hostname; a sample /etc/hosts sketch follows the list below. The IP addresses are also used throughout the guide, but they may differ in the actual network setup. All systems run Debian GNU/Linux 5.0 (Lenny); any other supported platform works as well. All assumptions and instructions about system configuration are based on a minimal installation of the operating system. This guide is valid for Open-Xchange 6.10.

  • web (10.20.30.210)
  • oxgw01 (10.20.30.213)
  • oxgw02 (10.20.30.215)
  • dbmaster (10.20.30.217)
  • dbslave (10.20.30.219)
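If the names are not resolvable via DNS, a minimal sketch of /etc/hosts entries (matching the list above) can be added on each machine:

10.20.30.210    web
10.20.30.213    oxgw01
10.20.30.215    oxgw02
10.20.30.217    dbmaster
10.20.30.219    dbslave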

After finishing the guide, the setup will provide the following load balancing and clustering features:

  • Session load balancing
  • Open-Xchange clustering
  • Database master/slave replication
  • Database read/write separation
  • Distributed file storage
  • Remote logging

Concepts

Master/Slave database setup

Start up both database machines and install the MySQL server package

$ apt-get install mysql-server

During the installation, a dialog will show up to set a password for the MySQL 'root' user. Please set a strong password here.
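Optionally, the installation can be hardened further with the mysql_secure_installation script shipped with the MySQL packages, which removes anonymous accounts and the test database. This is an optional step, not strictly required for this setup:

$ mysql_secure_installation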

Master configuration

Open the MySQL configuration file with your favorite editor

$ vim /etc/mysql/my.cnf

Modify or enable the following configuration options

bind-address            = 10.20.30.217
server-id               = 1
log_bin                 = /var/log/mysql/mysql-bin.log

"bindaddress" specifies the network address where MySQL is listening for network connections. Since the MySQL slave and both Open-Xchange Servers are dedicated machines it is required to have the master accessible through the network. "server-od" is just a number within a environment with multiple MySQL servers. It needs to be unique for each server. "log_bin" enables the MySQL binary log which is required for Master/Slave replication. In general every statement triggered at the database is stored there to get distributed through the database cluster.

To apply the configuration changes, restart the MySQL server.

$ /etc/init.d/mysql restart

Then log in to MySQL with the credentials set during the MySQL installation

$ mysql -u root -p
Enter password:
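As an optional check, the settings from above can now be verified from within MySQL; with this guide's configuration, server_id should report 1 and log_bin should report ON:

mysql> SHOW VARIABLES LIKE 'server_id';
mysql> SHOW VARIABLES LIKE 'log_bin';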

Configure replication permissions for the MySQL slave server by creating the MySQL user "replication". This account is used by the MySQL slave to fetch database updates from the master. Please choose a strong password here.

mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'10.20.30.219' IDENTIFIED BY 'secret';
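As an optional verification, MySQL can list the privileges just recorded for the new account:

mysql> SHOW GRANTS FOR 'replication'@'10.20.30.219';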

Now set up access for the Open-Xchange Server database user 'openexchange' to configdb and the oxdb for both groupware server addresses. These databases do not yet exist, but will be created during the Open-Xchange Server installation.

mysql> GRANT ALL PRIVILEGES ON configdb.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON oxdb.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON configdb.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON oxdb.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret' WITH GRANT OPTION;
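Once the groupware nodes are set up, remote access can later be tested from one of them. A quick check, assuming the mysql command line client is installed on oxgw01 (no database needs to exist for the login itself to succeed):

$ mysql -h 10.20.30.217 -u openexchange -p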

Verify that the MySQL master is writing a binary log and take note of the File and Position values

mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 |     1082 |              |                  |
+------------------+----------+--------------+------------------+

Copy the MySQL master binary log and the index file to the slave. This is required for initial synchronization.

$ scp /var/log/mysql/mysql-bin.* root@10.20.30.219:/var/log/mysql
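On a freshly installed master this copy is consistent because no writes are happening yet. If the master already carries data, the binary log position should be frozen while copying; an optional precaution, not part of the original procedure:

mysql> FLUSH TABLES WITH READ LOCK;

With the lock held, re-run SHOW MASTER STATUS, copy the files, and then release the lock:

mysql> UNLOCK TABLES;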

Slave configuration

Make the mysql system user the owner of the binary logs that have just been copied to the slave.

$ chown mysql:adm /var/log/mysql/*

Open the MySQL configuration file with your favorite editor

$ vim /etc/mysql/my.cnf

Modify or enable the following configuration options. Just like the master, the slave requires a unique server-id and needs to listen on an external network address. Activating the binary log is not required on the slave.

bind-address            = 10.20.30.219
server-id               = 2

To apply the configuration changes, restart the MySQL server.

$ /etc/init.d/mysql restart

Then log in to MySQL with the credentials set during the MySQL installation

$ mysql -u root -p

Configure the replication from the master based on the 'replication' user and the master's binary log status. The values for MASTER_LOG_FILE and MASTER_LOG_POS must match the output of the SHOW MASTER STATUS command on the MySQL master.

mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;

Now set up access for the Open-Xchange Server database user 'openexchange' to configdb and the oxdb for both groupware server addresses. These databases do not yet exist, but will be created during the Open-Xchange Server installation.

mysql> GRANT ALL PRIVILEGES ON configdb.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON oxdb.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON configdb.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON oxdb.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret' WITH GRANT OPTION;

Start the MySQL slave replication

mysql> START SLAVE;

Check the slave status; it can sometimes take a while until the replication starts. Slave_IO_Running: Yes indicates that the MySQL slave is exchanging data with the MySQL master.

mysql> SHOW SLAVE STATUS\G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
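The same output contains further fields that help when the replication does not come up: on a healthy slave, Seconds_Behind_Master should be a small number and Last_Error should be empty. Illustrative values:

Seconds_Behind_Master: 0
Last_Error: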

Also check the syslog to see whether the replication has been successfully started

$ tail -fn20 /var/log/syslog
Jul 26 19:03:45 dbslave mysqld[4718]: 090726 19:03:45 [Note] Slave I/O thread: connected to master 'replication@10.20.30.217:3306',  replication started in log 'mysql-bin.000001' at position 1082

Testing Master/Slave

On the master, create a new database in MySQL:

mysql> CREATE DATABASE foo;

Check if this database is available on the slave:

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| foo                |
| mysql              |
+--------------------+

Delete the database on the master

mysql> DROP DATABASE foo;

Check if the database has been removed on the slave

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
+--------------------+

Distributed file storage

Clustering Open-Xchange

Session load balancing

Remote logging