Template:OXLoadBalancingClustering Database
Overview
You can choose between Galera or Master/Slave replication. We recommend Galera for higher redundancy, easier operations, and synchronous semantics (so you can run OX without our "replication monitor").
Galera database setup
Preparations
Our configuration process includes wiping and reinitializing the datadir. This is usually not a problem in a fresh installation. If you want to upgrade an existing database to a Galera cluster, please be prepared to wipe the datadir, i.e. take a mysqldump for later restoration into the properly configured cluster.
Depending on the flavor of the current database, this can be something like:
# mariadb or oracle mysql without GTIDs
mysqldump --databases configdb oxdb_{5..14} > backup.sql
# mysql 5.6 with GTIDs... we don't want GTIDs here
mysqldump --databases --set-gtid-purged=OFF configdb oxdb_{5..14} > backup.sql
Be sure to verify the list of databases.
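For example, you can cross-check against the full list of databases on the current server (add -u/-p credentials if needed; this is just a sanity check before taking the dump):

mysql -e "SHOW DATABASES;"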
Installation
Note: the commands below are worked out for Debian Jessie systems. The extrapolation to RHEL-ish systems should be straightforward.
Percona XtraDB Cluster
wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
apt-get update
apt-cache search percona
# 5.6
apt-get install percona-xtradb-cluster-56
# 5.7
apt-get install percona-xtradb-cluster-57
# The service is getting started automatically, but we need to reconfigure, reinitialize, and restart later.
service mysql stop
MariaDB Galera Cluster
wget https://downloads.mariadb.com/MariaDB/mariadb_repo_setup
# 10.0
bash mariadb_repo_setup --mariadb-server-version=10.0
apt-get install mariadb-galera-server percona-xtrabackup socat
# 10.1: socat is now a dependency
bash mariadb_repo_setup --mariadb-server-version=10.1
apt-get install mariadb-server percona-xtrabackup-24
# The service is getting started automatically, but we need to reconfigure, reinitialize, and restart later.
service mysql stop
Configuration
Galera-specific MySQL configuration advice is included in our main MySQL configuration article. Please consult that page for configuration information.
That page suggests a setup where we add three custom config files to /etc/mysql/ox.conf.d/: ox.cnf for general tuning/sizing, wsrep.cnf for cluster-wide Galera configuration, and host.cnf for host-specific settings.
Adjust the general settings and tunings in ox.cnf according to your sizing etc.
Adjust wsrep.cnf to reflect local paths, cluster member addresses, etc.
Adjust host.cnf to give node-local IPs, etc.
Version-specific hints:
# percona 5.6: unknown variable 'pxc_strict_mode=ENFORCING' ... unset that one
# mariadb 10.1: add wsrep_on=ON
# mariadb 10.0 and 10.1: set wsrep_node_incoming_address=192.168.1.22:3306 in host.cnf, otherwise the status variable wsrep_incoming_addresses might not be shown correctly
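To illustrate what ends up in wsrep.cnf, here is a minimal sketch; the cluster name, node IPs, and provider path are placeholders (the library path differs between Percona and MariaDB packages), and the authoritative recommendations are on the MySQL configuration page linked above:

[mysqld]
# settings required by Galera
binlog_format = ROW
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
# wsrep/Galera settings (example values)
wsrep_provider = /usr/lib/galera3/libgalera_smm.so
wsrep_cluster_name = ox_galera
wsrep_cluster_address = gcomm://192.168.1.21,192.168.1.22,192.168.1.23
wsrep_sst_method = xtrabackup-v2
# wsrep_sst_auth is set to the SST user created in the "First node" section below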
Some settings we recommend changing require that the database be re-initialized. We assume you don't have data there (since we are covering a fresh install) or that you have taken a backup for later restore as explained above in the Preparations section.
cd /var/lib/
mv mysql mysql.old.datadir
mkdir mysql
chown mysql.mysql mysql
# percona 5.6, mariadb 10.0 and 10.1
mysql_install_db
# percona 5.7
mysqld --initialize-insecure --user=mysql
(Don't be worried about the insecure option; it just means we set the database root password in the next steps.)
Cluster startup
Typically, on startup a Galera node tries to join a cluster, and if that fails, it exits. Thus, when no cluster nodes are running, the first cluster node to be started needs to be told not to try to join a cluster, but rather to bootstrap a new one. The exact arguments vary from version to version and from flavor to flavor.
First node
So we initialize the cluster bootstrap on the first node:
# percona 5.6, 5.7
service mysql bootstrap-pxc
# mariadb 10.0
service mysql bootstrap
# mariadb 10.1: service mysql bootstrap seems to be broken, does not pass the necessary options to mysqld
galera_new_cluster
Run mysql_secure_installation for a "secure by default" installation:
mysql_secure_installation
The further steps in this guide omit the -u and -p arguments to the MySQL client. Rather than passing credentials on the command line (see https://dev.mysql.com/doc/refman/5.7/en/password-security-user.html), it is recommended to place them in a file like /root/.my.cnf:
[client]
user=root
password=wip9Phae3Beijeed
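Since this file contains the database root password, it is a good idea to restrict its permissions, for example:

chmod 600 /root/.my.cnf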
We need a Galera replication user (used for state snapshot transfers, SST):
CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'OpIdjijwef0';
-- percona 5.6, mariadb 10.0
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
-- percona 5.7, mariadb 10.1
GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
FLUSH PRIVILEGES;
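These credentials have to match what the SST configuration uses. Assuming an xtrabackup-based SST method as sketched in the Configuration section above, the corresponding wsrep.cnf entry would be:

wsrep_sst_auth = sstuser:OpIdjijwef0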
The MariaDB-provided startup scripts use (under Debian) the distro's mechanism of verifying startup/shutdown via a system user, so we create that as well:
# mariadb 10.0, 10.1
GRANT ALL PRIVILEGES ON *.* TO "debian-sys-maint"@"localhost" IDENTIFIED BY "adBexthTsI5TaEps";
If you do this, you need to synchronize the /etc/mysql/debian.cnf file from the first node to the other nodes as well.
Other nodes
On the other nodes, we only need to restart the service now to trigger a full state transfer from the first node to each of them.
We recommend doing this serially, letting one state transfer complete before starting the next.
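In other words, something like the following, waiting for each node to report Synced (see the verification section below) before moving on:

# on node 2, then -- once node 2 has synced -- on node 3
service mysql restart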
First node (continued)
Only applicable if you used galera_new_cluster before rather than the service script: in order to get the systemctl status consistent, restart the service on the first node:
# mariadb 10.1: restart the service so that the systemctl status is consistent
mysqladmin shutdown
service mysql bootstrap
Verify the replication
The key tool to verify replication status is
mysql> show status like "%wsrep%";
This gives a lot of output. In particular, you want to verify:
+------------------------------+--------------------------------------+
| Variable_name | Value |
+------------------------------+--------------------------------------+
| wsrep_cluster_size | 3 |
| wsrep_cluster_status | Primary |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_ready | ON |
+------------------------------+--------------------------------------+
You can also explicitly verify replication by creating/inserting databases, tables, and rows on one node and selecting them on the other nodes.
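A hypothetical smoke test (the database name is arbitrary) could look like this:

# on the first node
mysql -e "CREATE DATABASE replcheck; CREATE TABLE replcheck.t (id INT PRIMARY KEY); INSERT INTO replcheck.t VALUES (1);"
# on another node -- this should return the row inserted above
mysql -e "SELECT * FROM replcheck.t;"
# cleanup, on any node
mysql -e "DROP DATABASE replcheck;"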
Troubleshooting
The logs are helpful. Always.
Common mistakes are listed below.
If the Galera module does not get loaded at all:
- Configuration settings in my.cnf which are incompatible to Galera
- Wrong path of the shared object providing the Galera plugin in wsrep.cnf (wsrep_provider)
If the first node starts, but the second / third nodes can not be added to the cluster:
- User for the replication not created correctly on the first Galera node
- SST fails due to missing / wrong-version prerequisite packages (not everything is hardcoded in package dependencies -- make sure you have percona-xtrabackup installed in the correct version, and also socat). If SST fails, do not only look into MySQL's primary error logs, but also into the log files from the SST tool in /var/lib/mysql on the donor node.
Notes about configuring OX for use with Galera
Write requests
Open-Xchange supports Galera as a database backend only in the configuration where all writes are directed to one Galera node. For availability, it makes sense not to configure one Galera node's IP address directly, but rather to employ some HA solution which offers active-passive functionality. Options for this are discussed below.
Read requests
Read requests can be directed to any node in the Galera cluster. Our standard approach is to recommend a loadbalancer implementing round-robin over all nodes in the Galera cluster for the read requests. But you can also choose to use a dedicated read node (the same node as, or a different node from, the write node). Each of the approaches has its own advantages.
- Load balancer based setup: Read requests get distributed round-robin between the Galera nodes. Theoretically, by distributing the load of the read requests, you benefit from lower latencies and more throughput. But this has not been benchmarked yet. For a discussion of available loadbalancers, see the next section. OX-wise, in this configuration, you have two alternatives (summarized in the sketch after this list):
- The Galera option wsrep_causal_reads=1 enables you to configure OX with its replication monitor disabled (com.openexchange.database.replicationMonitor=false in configdb.properties). This is the setup which seems to perform best in our experience, as turning off the replication monitor significantly reduces the commits on the DB and thus the write operations per second on the underlying storage, which outweighs the drawback of higher commit latency due to the fully synchronous mode.
- Alternatively, you can run Galera with wsrep_causal_reads=0 when switching on OX builtin replication monitor. This is also a valid setup.
- Use a designated floating IP for the read requests: This eliminates the need for a loadbalancer. With this option you will not gain any performance, but the quantitative benefit is unclear anyhow.
- Use the floating IP for the writes also for the reads: In this scenario, you direct all database queries only to one Galera node, and the other two nodes are only getting queries in case of a failure of that node. In this case, you can even use wsrep_causal_reads=0 while still having OX builtin replication monitor switched off. However we do not expect this option to be superior to the round-robin loadbalancer approach.
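For reference, the two alternatives from the loadbalancer based setup pair up as follows (file locations as used elsewhere in this guide):

# Alternative 1: causal reads on, OX replication monitor off
#   wsrep.cnf:            wsrep_causal_reads = 1
#   configdb.properties:  com.openexchange.database.replicationMonitor=false
# Alternative 2: causal reads off, OX replication monitor on
#   wsrep.cnf:            wsrep_causal_reads = 0
#   configdb.properties:  com.openexchange.database.replicationMonitor=true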
Loadbalancer options
While the JDBC driver has some round-robin load balancing capabilities built in, we don't recommend it for production use since it lacks the ability to check the Galera nodes' health states.
Loadbalancers used for OX -> Galera loadbalancing should be able to implement active-passive instances for the write requests and active-active (round-robin) instances for the read requests. (If they cannot implement active-passive, you can still use a floating IP for that purpose.) Furthermore, it is required to configure node health checks not only on the TCP level (by a simple connect), but to query the Galera health status periodically, evaluating Galera WSREP status variables. Otherwise split-brain scenarios or other bad states cannot be detected. For an example of such a health check, see our Clustercheck page.
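As a minimal manual illustration of what such a check evaluates (the full clustercheck script referenced above does more, e.g. exposing the result over HTTP):

# a healthy node reports "Synced" / "Primary" / "ON"
mysql -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"
mysql -e "SHOW STATUS LIKE 'wsrep_cluster_status';"
mysql -e "SHOW STATUS LIKE 'wsrep_ready';"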
Some customers use loadbalancing appliances. If the (virtual) infrastructure offers "loadbalancer" instances, it is important to check that they satisfy the given requirements; often this is not the case. In particular, a simple "DNS round robin" approach is not viable.
LVS/ipvsadm/keepalived
If you want to create your own loadbalancers based on Linux, we usually recommend LVS (Linux Virtual Server) controlled by Keepalived. LVS is a set of kernel modules implementing an L4 loadbalancer which performs quite well. Keepalived is a userspace daemon to control LVS rules, using health checks to reconfigure LVS rules if required. Keepalived / LVS requires one (or, for availability, two) dedicated Linux nodes to run on. This can be a disadvantage for some installations, but usually it pays off. We provide some configuration information on Keepalived here.
MariaDB Maxscale
Since Maxscale became GA in 2015, it seems to have undergone significant stability, performance and functional improvements. We are currently experimenting with Maxscale and share our installation / configuration knowledge here. It looks quite promising and might become the standard replacement for HAproxy, while we still presume Keepalived offers superior robustness and performance, at the cost of requiring one (or more) dedicated loadbalancer nodes.
HAproxy
In cases where the Keepalived based approach is not feasible due to its requirements on the infrastructure, it is also possible to use an HAproxy based solution where HAproxy processes run on each of the OX nodes, configured with one round-robin and one active/passive instance. OX then connects to the local HAproxy instances. It is vital to configure HAproxy timeouts different from the defaults; otherwise HAproxy will kill active DB connections, causing errors. Be aware that in large installations the number of (distributed) HAproxy instances can get quite large. Some configuration hints for HAproxy are available here.
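Purely as an illustration of the timeout point (the values here are placeholders; see the HAproxy page linked above for the recommended configuration), the idea is to raise the client/server timeouts far above the defaults so that pooled but idle JDBC connections are not cut:

defaults
    mode tcp
    timeout connect 5s
    timeout client  8h
    timeout server  8h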
Master/Slave database setup
While we also support "legacy" (pre-GTID) Master/Slave replication, we recommend using GTID based replication for easier setup and failure recovery. Support for GTID based replication has been added with OX 7.8.0.
GTID has been available since MySQL 5.6, so no 5.5 installation instructions below, sorry. We try to be generic in this documentation (thus, applicable to Oracle Community Edition and MariaDB) and point out differences where needed. Note: Instructions below include information about Oracle Community MySQL 5.7 which is not yet formally supported.
Preparations
Our configuration process includes wiping and reinitializing the datadir. This is usually not a problem in a fresh installation. If you want to upgrade an existing database to GTID master-slave, please be prepared to wipe the datadir, i.e. take a mysqldump for later restoration into the properly configured master.
Depending on the flavor of the current database, this can be something like:
# mariadb or oracle mysql without GTIDs
mysqldump --databases configdb oxdb_{5..14} > backup.sql
# mysql 5.6 with GTIDs... we dont want GTIDs here
mysqldump --databases --set-gtid-purged=OFF configdb oxdb_{5..14} > backup.sql
Be sure to verify the list of databases.
Installation
Notes for installing from "upstream" follow. When using the DB which comes with the OS, see the corresponding upstream section.
Software installation is identical for master and slave.
This documentation has been created and verified on Debian Jessie. We assume the extrapolation to RPM-based distributions is straightforward. Consult the upstream documentation.
MariaDB 10.0, 10.1
Following https://downloads.mariadb.org/mariadb/repositories/
Basic software installation:
apt-get install software-properties-common
apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db
# use 10.0 or 10.1 in the next line
# there are multiple mirrors available, see the mariadb downloads page
add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://ftp.hosteurope.de/mirror/mariadb.org/repo/10.1/debian jessie main'
apt-get update
apt-get install mariadb-server
The MariaDB package scripts start the service. So we need to stop it in order to apply our custom config.
service mysql stop
Oracle Community Edition
Following https://dev.mysql.com/doc/mysql-apt-repo-quick-guide/en/
# download the mysql-apt-config package from the page above first
dpkg -i mysql-apt-config_0.8.7-1_all.deb
# select a version in the graphical dialog; supported as of the time of writing are 5.6 and 5.7
apt-get update
apt-get install mysql-server
Configuration
Configuration as per configuration files is also identical for master and slave.
Consult My.cnf for general recommendations how to configure databases for usage with OX.
For GTID based replication, make sure you add some configurables to a new /etc/mysql/ox.conf.d/gtid.cnf file (assuming you are following our proposed scheme of adding a !includedir /etc/mysql/ox.conf.d/ directive to /etc/mysql/my.cnf):
# GTID
log-bin=mysql-bin
server-id=...
log_slave_updates = ON
For Oracle Community Edition, we also need to add:
enforce_gtid_consistency = ON
gtid_mode = ON
(GTID mode is on by default on MariaDB.)
Use a unique server-id for each server, like 1 for the master and 2 for the slave. For more complicated setups (like multiple slaves), adjust accordingly.
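So, for a simple two-node setup, the node-specific part of gtid.cnf would differ only in this line:

# gtid.cnf on the master
server-id = 1
# gtid.cnf on the slave
server-id = 2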
Since applying our configuration / sizing requires reinitialization of the MySQL datadir, we wipe/recreate it. Caution: this assumes we are running an empty database. If there is data in the database you want to keep, use mysqldump. See Preparation section above.
So, to initialize the datadir:
cd /var/lib/
mv mysql mysql.old.datadir
mkdir mysql
chown mysql.mysql mysql
(When coming from an existing installation, be sure to also wipe old binlogs; they can confuse the server on startup. Their location varies by configuration.)
The step to initialize the datadir is different for the different DBs:
# MariaDB 10.0, 10.1
mysql_install_db
# Oracle 5.6
mysql_install_db -u mysql
# Oracle 5.7
mysqld --initialize-insecure --user=mysql
(Don't be worried about the insecure option; it just means we set the database root password in the next steps.)
Then:
service mysql restart
mysql_secure_installation
We want to emphasize running the last step to get a "secure by default" installation.
Steps up to here apply to both the designated master and slave. The next steps will apply to the master.
Replication Setup
Master Setup
Create a replication user on the master (but, as always, pick your own password, and use the same password in the slave setup below):
mysql -e "CREATE USER 'repl'@'gtid-slave.localdomain' IDENTIFIED BY 'IvIjyoffod2'; GRANT REPLICATION SLAVE ON *.* TO 'repl'@'gtid-slave.localdomain';"
Now would also be the time to restore a previously created mysqldump, or to add other users you need for administration, monitoring, etc. (like debian-sys-maint@localhost, for example). Adding the OX users is explained below ("Creating Open-Xchange user").
# If you took a dump for restore before
mysql < backup.sql
To prepare for the initial sync of the slave, set the master read-only:
mysql -e "SET @@global.read_only = ON;"
Create a dump to initialize the slave:
# MariaDB
mysqldump --all-databases --triggers --routines --events --master-data --gtid > master.sql
# Oracle
mysqldump --all-databases --triggers --routines --events --set-gtid-purged=ON > master.sql
Transfer to the slave:
scp master.sql gtid-slave:
Slave Setup
Configure the replication master settings. Note that with GTID we don't need complicated binlog position settings etc.
Yet again DB-specific (use the repl user password from above):
# MariaDB
mysql -e 'CHANGE MASTER TO MASTER_HOST="gtid-master.localdomain", MASTER_USER="repl", MASTER_PASSWORD="IvIjyoffod2";'
# Oracle
mysql -e "CHANGE MASTER TO MASTER_HOST='gtid-master.localdomain', MASTER_USER='repl', MASTER_PASSWORD='IvIjyoffod2', MASTER_AUTO_POSITION=1;"
# https://www.percona.com/blog/2013/02/08/how-to-createrestore-a-slave-using-gtid-replication-in-mysql-5-6/
mysql -e "RESET MASTER;"
Read the master dump:
mysql < master.sql
Start replication on the slave:
mysql -e 'START SLAVE;'
mysql -e 'SHOW SLAVE STATUS\G'
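In the SHOW SLAVE STATUS output, the fields to check in particular are (abridged; the exact set of fields varies by version):

             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0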
Master Setup (continued)
Finally, unset read-only on the master:
# on the master
mysql -e "SET @@global.read_only = OFF;"
Configure OX to use with Master/Slave replication
Not much special wisdom here. OX was designed to be used with master/slave databases. For the ConfigDB, configdb.properties allows configuration of a readUrl and writeUrl (both of which are set to the correct values if you use oxinstaller with the correct arguments --configdb-readhost, --configdb-writehost).
(Obviously, the master is for writing and the slave is for reading.)
For the individual UserDBs, use registerdatabase -m true for the masters and registerdatabase -m false -M ... for the respective slaves.
Be sure to have the replication monitor enabled in configdb.properties: com.openexchange.database.replicationMonitor=true (which it is by default). While GTID replication can show synchronous semantics, it is specified to silently fall back to asynchronous behavior in certain circumstances, so synchrony is not guaranteed.
We recommend, though, not to register the databases directly by their native hostname or IP, but rather to use some kind of HA system in order to be able to easily move a floating/failover IP from the master to the slave in case of master failure. Configuring and running such systems (like corosync/pacemaker or keepalived) is out of scope of this documentation, however.
Creating Open-Xchange user
Now set up access for the Open-Xchange Server database user 'openexchange' to configdb and the oxdb for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.
Note: The IPs in this example belong to the two different Open-Xchange Servers, please adjust them accordingly. And use a real password.
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'IntyoyntOat1';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'IntyoyntOat1';