https://oxpedia.org/wiki/api.php?action=feedcontributions&user=Jstricker&feedformat=atom
Open-Xchange - User contributions [en] 2024-03-29T11:46:53Z User contributions MediaWiki 1.31.0
https://oxpedia.org/wiki/index.php?title=AppSuite:Version_Support_Commitment&diff=21204
AppSuite:Version Support Commitment 2016-01-04T10:09:27Z <p>Jstricker: /* Open-Xchange Releases Support Commitment */</p>
<hr />
<div>= Open-Xchange Version Support Commitment =<br />
<br />
== Open-Xchange Versions ==<br />
<br />
A detailed description of the different versions of Open-Xchange can be found in the Support Definitions document included in the contract. <br />
<br />
* '''New Generation''': A new release that may contain major feature changes, new architecture or different technology. <br />
* '''Major Release''': A major update of Open-Xchange's Product that will normally include all the changes provided by Minor Releases for the current version. Major Releases are cumulative, so a customer has to install the latest Major Release to benefit from all changes that are available for this Open-Xchange Product. Major Releases also provide functional enhancements. A Licensee is therefore encouraged to install all Major Releases as soon as feasible.<br />
* '''Minor Release''': A change of Open-Xchange's Product that is released on an as-needed basis, containing minor feature enhancements as well as solutions for known problems. Minor Releases go through quality assurance testing, and APIs are not changed. <br />
* '''Patch Release''': A change to Open-Xchange's Product, to temporarily fix a Problem. “Patch Releases” are one-offs, special one-time builds not fully regression tested and/or recertified. The software change will be applied to the next formal Minor- or Major Release of the Software. Patch Releases are cumulative in nature, thus every new Patch Release will contain all former software changes released earlier as Patch Releases for the relevant Minor- or Major Release.<br />
** '''Private Patch Release''': A Private Patch Release is a Patch Release that is built for a specific customer only.<br />
** '''Public Patch Release''': A Public Patch Release is a Patch Release which is delivered to all customers.<br />
<br />
== Support Commitment ==<br />
<br />
=== Major Releases ===<br />
A Major Release cycle will be supported for six months after First Customer Shipment (FCS) of the following Major Release.<br />
<br />
=== Minor Releases ===<br />
Within a Major Release cycle, support is always available for the most recent Minor Release. To provide sufficient time for the update, Open-Xchange will support the two most recent Minor Releases (e.g. 7.0.1 and 7.0.2) in parallel for four weeks after the First Customer Shipment (FCS) of the most recent Minor Release.<br />
<br />
=== Patch Releases ===<br />
Patch Releases are cumulative, which means each Patch Release also contains all fixes from the earlier Patch Releases within a Minor Release cycle. Therefore, only the latest Patch Release (public or private) is supported. <br />
<br />
== Time Bar Example ==<br />
<br />
[[File:support_committment_overview.png|900px]]<br />
<br />
== Open-Xchange Releases Support Commitment ==<br />
<br />
{| border="1" cellpadding="3" cellspacing="0"<br />
!align="left" |Release<br />
!align="left" |First Customer Shipment (FCS)<br />
!align="left" |Support Commitment<br />
|-<br />
|v7.6 / 6.22.7<br />
|2014-06-25<br />
|<span style="color:#FF0000"> Discontinued Support (since 2014-11-12)</span><br />
|-<br />
|v7.6.1 / 6.22.8<br />
|2014-10-15<br />
|<span style="color:#FF0000"> Discontinued Support (since 2015-04-13)</span><br />
|-<br />
|v7.6.2 / 6.22.9<br />
|2015-03-16<br />
|<span style="color:#FF0000"> Discontinued Support (since 2015-12-30)</span><br />
|-<br />
|v7.8.0 / 6.22.10<br />
|2015-10-07<br />
|<span style="color:#008800"> Support Commitment</span><br />
|-<br />
|v7.6.3 / 6.22.9<br />
|2015-12-02<br />
|<span style="color:#008800"> Support Commitment (until 2016-04-07)</span><br />
|-<br />
|}</div>

https://oxpedia.org/wiki/index.php?title=ConfigCascade&diff=17335
ConfigCascade 2014-03-27T08:36:21Z <p>Jstricker: /* Specifying Configuration - Context Scope and User Scope */</p>
<hr />
<div>= Introduction to the config cascade =<br />
<br />
The config cascade is a configuration system that allows administrators to selectively override configuration parameters on context and user level. This means a configuration option can vary between groups of contexts, specific contexts or users.<br />
<br />
== Who should read this document? ==<br />
<br />
If you are tasked with designing and maintaining the configuration of an OX server or cluster, the information contained in this document will acquaint you with the options in OX configuration design. <br />
<br />
== Core Concepts - Configuration Scope ==<br />
<br />
The config cascade differentiates between four scopes of configuration: '''Server''', '''ContextSet''', '''Context''' and '''User''', with each more specific scope overriding the ones before it. To determine the active value of a parameter, the config cascade checks whether the parameter is defined in the most specific scope and, if not, falls back to the next, more general scope. This means that a value in the '''User''' scope can override the more general value from the '''Context''' scope, which in turn overrides the value of a context set configuration, which itself overrides a server-wide configuration.<br />
<br />
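The resolution order described above can be sketched in a few lines of shell. This is purely an illustration of the lookup logic; the per-scope ''.properties'' files are invented for the example and do not reflect how OX actually stores these values:

```shell
#!/bin/sh
# Illustration only: resolve a property by walking the scopes from most
# specific to most general, stopping at the first scope that defines it.
lookup() {
    prop="$1"
    for scope in user context contextset server; do
        # each scope is modeled as a flat key=value file (hypothetical)
        if [ -f "$scope.properties" ] && grep -q "^$prop=" "$scope.properties"; then
            grep "^$prop=" "$scope.properties" | head -n1 | cut -d= -f2
            return 0
        fi
    done
    return 1
}

# server-wide default, overridden in the Context scope
echo "com.openexchange.messaging.facebook=false" > server.properties
echo "com.openexchange.messaging.facebook=true"  > context.properties
lookup com.openexchange.messaging.facebook   # prints "true": Context beats Server
```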
== Core Concepts - Context Taxonomy ==<br />
<br />
When deciding on configuration options it usually makes sense to group contexts according to certain criteria. Typical uses would be to group contexts by offering ('''webmail''', '''groupware_standard''', '''groupware_plus'''), by country ('''de''', '''fr''', '''es'''), by brand ('''coolhosting''', '''supremehosting''') or by membership in a "friendly users" group that sometimes gets early access to features so you can judge whether they are appropriate for rollout ('''beta'''). You can then specify configuration options that only take effect if a context is part of one of these groups. For example, the default hostname varies by both country and brand, with the French coolhosting domain name being "coolhosting.fr", while the Spanish one is "coolhosting.es", or, for the second brand, "supremehosting.fr" and "supremehosting.es" respectively. How can you go about classifying a context?<br />
<br />
Using the command line tools you can specify the taxonomy/types parameter:<br />
<br />
'''createcontext ... -i 12 --taxonomy/types=webmail,coolhosting,de'''<br />
<br />
which would tag context 12 with the types webmail, coolhosting and de. This is also available in "'''changecontext'''". In RMI the equivalent is to call '''Context#setUserAttribute("taxonomy", "types", "webmail,coolhosting,de")'''. We will later see how configuration options can be specified for these types of contexts. <br />
<br />
== Specifying Configuration - Server Scope ==<br />
<br />
The most general scope is the '''Server''' scope. Every value that can be overridden along the config cascade should also be defined with a default value in the Server scope. This is done using the usual configuration methods of the server: .properties files in the config directory (usually /opt/open-xchange/etc, or /opt/open-xchange/etc/groupware in versions up to 6.20.7). Let's consider the property "'''com.openexchange.messaging.facebook'''", which governs whether Facebook messaging should be available in a given installation. Since we consider this to be a premium feature, we'll disable this on the server level:<br />
<br />
'''facebookmessaging.properties:'''<br />
com.openexchange.messaging.facebook=false<br />
<br />
Later we will see how to enable it for certain groups of contexts.<br />
<br />
== Specifying Configuration - Context Set Scope ==<br />
<br />
As we saw, you can classify contexts into groups. These groups will now be used to specify certain configuration options. Let's consider this setup:<br />
<br />
<br />
Context 12: webmail,de,beta<br />
Context 13: groupware_plus,es<br />
Context 14: groupware_plus,fr,beta<br />
<br />
<br />
Let's say, we want to roll out the facebook functionality to those contexts, that have the groupware_plus product and are part of our "friendly users". For this, you can specify a configuration that overrides the server setting like this:<br />
<br />
Create a file called '''/opt/open-xchange/etc/contextSets/messaging.yml''' and add the following block:<br />
<br />
experimental_gw_plus:<br />
withTags: groupware_plus & beta<br />
com.openexchange.messaging.facebook: true<br />
<br />
Let's go through this line by line. The first line introduces a configuration block that will be used for certain contexts. The name doesn't matter, except that it may only be used once per file. Choose a good mnemonic here, so that a future you or someone else can guess what is going on in this configuration block. <br />
<br />
The second line specifies the criterion used to decide whether a context belongs to this group of contexts. In this case, a context having both the groupware_plus and beta tags will be considered part of this group. In the withTags expression you can use boolean logic (with & for "and", | for "or", and brackets to group expressions). It's best not to go overboard with this, though: if the boolean expressions become too complex, it's usually an indication that you could use another classification for the contexts.<br />
<br />
Which tags does a context have? Firstly, and most obviously, those specified in its taxonomy/types list. But that is not the whole story. The users' module access permissions are also transformed into tags and applied to the context (at runtime). So if a user has access to the tasks module and the infostore module, the context will be considered to be tagged with ucTask and ucInfostore as well. This is sometimes enough to determine if a context is part of a certain offering, but more explicit tagging of contexts according to the offering keeps things readable. Lastly, the configuration parameter "com.openexchange.config.cascade.types" (which is itself config cascade enabled) adds its value to the tag list, so, for example:<br />
<br />
friendly_users:<br />
withTags: groupware_plus & beta<br />
com.openexchange.config.cascade.types: friendly_and_paying<br />
<br />
This would add the friendly_and_paying tag to all contexts already classified as groupware_plus and beta. Since this value can also be specified on the user level, you could classify users irrespective of their contexts, should the need arise. <br />
<br />
The third line then specifies the setting to override. You can specify all properties to override in this block, so if we wanted to enable both facebook and twitter messaging for these contexts, we'd use the following configuration:<br />
<br />
experimental_gw_plus:<br />
withTags: groupware_plus & beta<br />
com.openexchange.messaging.facebook: true<br />
com.openexchange.messaging.twitter: true<br />
<br />
Most configuration use cases can probably be handled with the context sets system. Only if a configuration is truly unique for just one context or user should the other options be pursued.<br />
<br />
== Specifying Configuration - Context Scope and User Scope ==<br />
<br />
Configuration options can be overridden on user and context level, using a dynamic property. For example:<br />
<br />
$ createcontext [...] --config/com.openexchange.messaging.facebook=true<br />
$ changecontext [...] --config/com.openexchange.messaging.facebook=true<br />
<br />
$ createuser [...] --config/com.openexchange.messaging.facebook=true<br />
$ changeuser [...] --config/com.openexchange.messaging.facebook=true<br />
<br />
Depending on the number of users and contexts in your system, this could pose a problem further down the road when you need to update this value for a large number of users.<br />
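When you do later need to update such a value for a large number of contexts, one pragmatic approach is a shell loop over the provisioning tool. The function below is only a sketch: ''contexts.txt'' (one numeric context ID per line) and the admin credentials are placeholders you would adapt to your installation:

```shell
# Sketch: apply the same config override to every context ID listed in
# contexts.txt (hypothetical input file, one numeric ID per line).
apply_override() {
    while read -r ctx; do
        changecontext -A oxadminmaster -P secret -c "$ctx" \
            --config/com.openexchange.messaging.facebook=true
    done < contexts.txt
}
```

For anything beyond a handful of contexts, a context-set tag is usually the better long-term answer, since it avoids touching each context individually.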
<br />
To remove such a setting again the following syntax can be used:<br />
<br />
$ changecontext [...] --remove-config/com.openexchange.messaging.facebook<br />
$ changeuser [...] --remove-config/com.openexchange.messaging.facebook<br />
<br />
== UI Properties ==<br />
<br />
A common use case for the OX configuration system is to allow fine-tuning of the UI by providing configuration data on the backend. All properties defined in properties files below <tt>/opt/open-xchange/etc/groupware/settings</tt> are transported to the UI and are config cascade enabled. So every customization you can specify for the UI using these settings, can also be selectively overridden with the config cascade.<br />
<br />
Since the config cascade only overrides existing settings, whether a property is a UI property or a server property is automatically determined by the directory in which the corresponding <tt>.properties</tt> file is found. For example if <tt>/opt/open-xchange/etc/settings/appsuite.properties</tt> contains the setting<br />
<br />
io.ox/core//theme=default<br />
<br />
Then you can overwrite it for any context (or user, context set, etc.):<br />
<br />
$ changecontext [...] --config/io.ox/core//theme=org.example.theme<br />
<br />
== Further Reading ==<br />
<br />
[[ConfigCascadeCookbook]] - Collects typical configuration scenarios and how to handle them using the config cascade.</div>Jstrickerhttps://oxpedia.org/wiki/index.php?title=Template:OXLoadBalancingClustering_Database&diff=7278Template:OXLoadBalancingClustering Database2011-02-23T12:56:15Z<p>Jstricker: /* Second Master configuration */</p>
<hr />
<div>== Master/Master database setup ==<br />
Even though OX handles the database servers as master and slave, you should configure them as a master/master setup.<br />
<br />
Start up both database machines and install the MySQL server packages<br />
$ apt-get install mysql-server<br />
<br />
During the installation, a dialog will show up to set a password for the MySQL 'root' user. Please set a strong password here.<br />
<br />
=== First Master configuration ===<br />
The first server is a master in this context and the second one is the slave.<br />
<br />
Open the MySQL configuration file with your favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the mysqld section<br />
bind-address = 0.0.0.0<br />
server-id = 1<br />
log-bin = /var/log/mysql/mysql-bin.log<br />
<br />
* ''bind-address'' specifies the network address where MySQL is listening for network connections. Since the MySQL slave and both Open-Xchange Servers are dedicated machines, it is required to have the master accessible through the network.<br />
* ''server-id'' is just a number within an environment with multiple MySQL servers. It needs to be unique for each server.<br />
* ''log-bin'' enables the MySQL binary log, which is required for Master/Master replication. In general, every statement triggered at the database is stored there to get distributed through the database cluster.<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then log in to MySQL with the credentials set during the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure replication permissions for the MySQL slave server and the MySQL user "replication". This account is used by the MySQL slave to get database updates from the master. Please choose a strong password here.<br />
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'10.20.30.219' IDENTIFIED BY 'secret';<br />
<br />
Now setup access for the Open-Xchange Server database user ''openexchange'' to configdb and the groupware database for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.217) verify that the MySQL master is writing a binary log and remember the values<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 |     1082 |              |                  |<br />
+------------------+----------+--------------+------------------+<br />
<br />
Copy the MySQL master binary log and the index file to the slave. This is required for initial synchronization.<br />
$ scp /var/log/mysql/mysql-bin.* root@10.20.30.219:/var/log/mysql<br />
<br />
On the slave (10.20.30.219) set the MySQL system user as owner to the binary log that has just been copied to the slave.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.219) set the server as a slave of 10.20.30.217. Replace the log file information by the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
On the slave (10.20.30.219) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.219) check the Slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes".<br />
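Since the full SHOW SLAVE STATUS output is long, the two thread states can also be checked non-interactively from the shell. The helper below is one possible way to filter the vertical \G output:

```shell
# Print only the replication thread states; both lines should say "Yes".
check_slave_threads() {
    mysql -u root -p -e "SHOW SLAVE STATUS\G" \
        | grep -E 'Slave_(IO|SQL)_Running:'
}
```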
<br />
=== Second Master configuration ===<br />
The first server is a slave in this context and the second one is the master.<br />
<br />
Open the MySQL configuration file with your favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the mysqld section. Just like the other server, this one requires a unique ''server-id'' and needs to listen on an external network address. Although a pure slave would not need the binary log, it is enabled here as well because in a master/master setup this server also acts as a master.<br />
bind-address = 0.0.0.0<br />
server-id = 2<br />
log-bin = /var/log/mysql/mysql-bin.log<br />
<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then log in to MySQL with the credentials set during the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure the replication from the master based on the 'replication' user and the master's binary log status. The values for ''MASTER_LOG_FILE'' and ''MASTER_LOG_POS'' must equal the output of the ''SHOW MASTER STATUS'' command at the MySQL master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
Now setup access for the Open-Xchange Server database user 'openexchange' to configdb and the oxdb for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.219) verify that the MySQL master is writing a binary log and remember the values<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 |     1082 |              |                  |<br />
+------------------+----------+--------------+------------------+<br />
<br />
On the slave (10.20.30.217) make sure the MySQL system user owns the binary log files.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.217) set the server as a slave of 10.20.30.219. Replace the log file information by the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.219', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
On the slave (10.20.30.217) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.217) check the Slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes".<br />
<br />
Also check the syslog to see whether the replication has been successfully started<br />
$ tail -fn20 /var/log/syslog<br />
Jul 26 19:03:45 dbslave mysqld[4718]: 090726 19:03:45 [Note] Slave I/O thread: connected to master 'replication@10.20.30.217:3306', replication started in log 'mysql-bin.000001' at position 1082<br />
<br />
=== Testing Master/Master ===<br />
<br />
On the first master, create a new database in MySQL:<br />
mysql> CREATE DATABASE foo;<br />
<br />
Check if this database is available on the second master:<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database |<br />
+--------------------+<br />
| information_schema |<br />
| foo |<br />
| mysql |<br />
+--------------------+<br />
<br />
Delete the database on the second master:<br />
mysql> DROP DATABASE foo;<br />
<br />
Check if the database has been removed at first master<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database |<br />
+--------------------+<br />
| information_schema |<br />
| mysql |<br />
+--------------------+</div>Jstrickerhttps://oxpedia.org/wiki/index.php?title=Template:OXLoadBalancingClustering_Database&diff=7277Template:OXLoadBalancingClustering Database2011-02-23T12:56:02Z<p>Jstricker: /* Second Master configuration */</p>
<hr />
<div>== Master/Master database setup ==<br />
Even if the OX handles the database servers as master and slave, you should configure them as a master/master setup.<br />
<br />
Startup both database machines and install the mysql server packages<br />
$ apt-get install mysql-server<br />
<br />
During the installation, a dialog will show up to set a password for the MySQL 'root' user. Please set a strong password here.<br />
<br />
=== First Master configuration ===<br />
The first server is a master in this context and the second one is the slave.<br />
<br />
Open the MySQL configuration file with you favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the mysqld-section<br />
bind-address = 0.0.0.0<br />
server-id = 1<br />
log-bin = /var/log/mysql/mysql-bin.log<br />
<br />
* ''bindaddress'' specifies the network address where MySQL is listening for network connections. Since the MySQL slave and both Open-Xchange Servers are dedicated machines it is required to have the master accessible through the network.<br />
* ''server-id'' is just a number within a environment with multiple MySQL servers. It needs to be unique for each server.<br />
* ''log-bin'' enables the MySQL binary log which is required for Master/Master replication. In general every statement triggered at the database is stored there to get distributed through the database cluster.<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then login to MySQL with the credentials given at the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure replication permissions for the MySQL slave server and the MySQL user "replication". This account is used by the MySQL slave to get database updates from the master. Please choose a strong password here.<br />
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'10.20.30.219' IDENTIFIED BY 'secret';<br />
<br />
Now setup access for the Open-Xchange Server database user ''openexchange'' to configdb and the groupware database for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.217) verify that the MySQL master is writing a binary log and note the values of ''File'' and ''Position''<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 |     1082 |              |                  |<br />
+------------------+----------+--------------+------------------+<br />
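In scripted setups the ''File'' and ''Position'' values are usually extracted rather than copied by hand. An illustrative sketch, where sample text stands in for the output of a live <code>mysql -u root -p -e 'SHOW MASTER STATUS'</code> call and the host, user, and password are the example values from this guide:<br />

```shell
# Parse the File and Position columns from SHOW MASTER STATUS output
# and print the matching CHANGE MASTER statement.
master_status='File  Position  Binlog_Do_DB  Binlog_Ignore_DB
mysql-bin.000001  1082'

# Row 2 of the batch output: field 1 = File, field 2 = Position.
log_file=$(printf '%s\n' "$master_status" | awk 'NR==2 {print $1}')
log_pos=$(printf '%s\n' "$master_status" | awk 'NR==2 {print $2}')

echo "CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication'," \
     "MASTER_PASSWORD='secret', MASTER_LOG_FILE='$log_file', MASTER_LOG_POS=$log_pos;"
```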
<br />
Copy the MySQL master binary log and the index file to the slave. This is required for initial synchronization.<br />
$ scp /var/log/mysql/mysql-bin.* root@10.20.30.219:/var/log/mysql<br />
<br />
On the slave (10.20.30.219) make the MySQL system user the owner of the binary log files that have just been copied to the slave.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.219) set the server up as a slave of 10.20.30.217. Replace the log file information with the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
On the slave (10.20.30.219) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.219) check the Slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should both be set to "Yes".<br />
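This check is easy to automate. A hypothetical sketch, assuming the vertical output format of ''SHOW SLAVE STATUS\G''; the sample text below stands in for a live server:<br />

```shell
# Return success only if both replication threads report Yes.
replication_healthy() {
  printf '%s\n' "$1" | grep -q 'Slave_IO_Running: Yes' &&
  printf '%s\n' "$1" | grep -q 'Slave_SQL_Running: Yes'
}

# Sample of the relevant SHOW SLAVE STATUS\G lines.
slave_status='             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0'

if replication_healthy "$slave_status"; then
  echo "replication running"
else
  echo "replication BROKEN"
fi
```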
<br />
=== Second Master configuration ===<br />
The first server is a slave in this context and the second one is the master.<br />
<br />
Open the MySQL configuration file with your favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the [mysqld] section. Just like the other server, this one requires a unique ''server-id'' and needs to listen on an external network address. Since this server also acts as a master in the Master/Master setup, the binary log is enabled here as well.<br />
bind-address = 0.0.0.0<br />
server-id = 2<br />
log-bin = /var/log/mysql/mysql-bin.log<br />
<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then log in to MySQL with the credentials set during the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure the replication from the master based on the 'replication' user and the master's binary log status. The values for ''MASTER_LOG_FILE'' and ''MASTER_LOG_POS'' must match the output of the ''SHOW MASTER STATUS'' command on the MySQL master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
Now set up access for the Open-Xchange Server database user 'openexchange' to configdb and the oxdb for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.219) verify that the MySQL master is writing a binary log and note the values of ''File'' and ''Position''<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 |     1082 |              |                  |<br />
+------------------+----------+--------------+------------------+<br />
<br />
On the slave (10.20.30.217) make the MySQL system user the owner of the binary log files that have just been copied to the slave.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.217) set the server up as a slave of 10.20.30.219. Replace the log file information with the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.219', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
On the slave (10.20.30.217) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.217) check the slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should both be set to "Yes".<br />
<br />
Also check the syslog to verify that the replication has been started successfully<br />
$ tail -fn20 /var/log/syslog<br />
Jul 26 19:03:45 dbslave mysqld[4718]: 090726 19:03:45 [Note] Slave I/O thread: connected to master 'replication@10.20.30.217:3306', replication started in log 'mysql-bin.000001' at position 1082<br />
<br />
=== Testing Master/Master ===<br />
<br />
On the first master, create a new database in MySQL:<br />
mysql> CREATE DATABASE foo;<br />
<br />
Check if this database is available on the second master:<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database           |<br />
+--------------------+<br />
| information_schema |<br />
| foo                |<br />
| mysql              |<br />
+--------------------+<br />
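This availability check can also be scripted. A self-contained sketch, where sample text stands in for the output of a live <code>mysql -N -e 'SHOW DATABASES'</code> call (one database name per line):<br />

```shell
# Check whether a database name appears in SHOW DATABASES output
# supplied on stdin, one name per line.
db_exists() {
  # $1 = database name to look for
  grep -qx "$1"
}

# Sample SHOW DATABASES output as seen on the second master.
sample='information_schema
foo
mysql'

if printf '%s\n' "$sample" | db_exists foo; then
  echo "foo replicated to this server"
fi
```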
<br />
Delete the database on the second master:<br />
mysql> DROP DATABASE foo;<br />
<br />
Check if the database has been removed on the first master<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database           |<br />
+--------------------+<br />
| information_schema |<br />
| mysql              |<br />
+--------------------+</div>Jstrickerhttps://oxpedia.org/wiki/index.php?title=Template:OXLoadBalancingClustering_Database&diff=7276Template:OXLoadBalancingClustering Database2011-02-23T11:38:51Z<p>Jstricker: /* Second Master configuration */</p>
<hr />
<div>== Master/Master database setup ==<br />
Even if the OX handles the database servers as master and slave, you should configure them as a master/master setup.<br />
<br />
Startup both database machines and install the mysql server packages<br />
$ apt-get install mysql-server<br />
<br />
During the installation, a dialog will show up to set a password for the MySQL 'root' user. Please set a strong password here.<br />
<br />
=== First Master configuration ===<br />
The first server is a master in this context and the second one is the slave.<br />
<br />
Open the MySQL configuration file with you favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the mysqld-section<br />
bind-address = 0.0.0.0<br />
server-id = 1<br />
log-bin = /var/log/mysql/mysql-bin.log<br />
<br />
* ''bindaddress'' specifies the network address where MySQL is listening for network connections. Since the MySQL slave and both Open-Xchange Servers are dedicated machines it is required to have the master accessible through the network.<br />
* ''server-id'' is just a number within a environment with multiple MySQL servers. It needs to be unique for each server.<br />
* ''log-bin'' enables the MySQL binary log which is required for Master/Master replication. In general every statement triggered at the database is stored there to get distributed through the database cluster.<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then login to MySQL with the credentials given at the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure replication permissions for the MySQL slave server and the MySQL user "replication". This account is used by the MySQL slave to get database updates from the master. Please choose a strong password here.<br />
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'10.20.30.219' IDENTIFIED BY 'secret';<br />
<br />
Now setup access for the Open-Xchange Server database user ''openexchange'' to configdb and the groupware database for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.217) verify that the MySQL master is writing a binary log and remember the values<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 | 1082| | |<br />
+------------------+----------+--------------+------------------+<br />
<br />
Copy the MySQL master binary log and the index file to the slave. This is required for initial synchronization.<br />
$ scp /var/log/mysql/mysql-bin.* root@10.20.30.219:/var/log/mysql<br />
<br />
On the slave (10.20.30.219) set the MySQL system user as owner to the binary log that has just been copied to the slave.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.219) set the server as a slave of 10.20.30.217. Replace the log file information by the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
On the slave (10.20.30.219) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.219) check the Slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes".<br />
<br />
=== Second Master configuration ===<br />
The first server is a slave in this context and the second one is the master.<br />
<br />
Open the MySQL configuration file with you favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the mysqld-ection. Just like the other server, this one requires a unique ''server-id'' and needs to listen to an external network address. Activating the binary log is not required at the slave.<br />
bindaddress = 0.0.0.0<br />
server-id = 2<br />
log-bin = /var/log/mysql/mysql-bin.log<br />
<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then login to MySQL with the credentials given at the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure the replication from the master based on the 'replication' user and the masters binary log status. The values for ''MASTER_LOG_FILE'' and ''MASTER_LOG_POS'' must equal the output of the ''SHOW MASTER STATUS'' command at the MySQL master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
Now setup access for the Open-Xchange Server database user 'openexchange' to configdb and the oxdb for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.219) verify that the MySQL master is writing a binary log and remember the values<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 | 1082| | |<br />
+------------------+----------+--------------+------------------+<br />
<br />
On the slave (10.20.30.217) set the MySQL system user as owner to the binary log that has just been copied to the slave.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.217) set the server as a slave of 10.20.30.219. Replace the log file information by the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.219', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
On the slave (10.20.30.217) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.21) check the Slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes".<br />
<br />
Also check the syslog if the replication has been sucessfully started<br />
$ tail -fn20 /var/log/syslog<br />
Jul 26 19:03:45 dbslave mysqld[4718]: 090726 19:03:45 [Note] Slave I/O thread: connected to master 'replication@10.20.30.217:3306', replication started in log 'mysql-bin.000001' at position 1082<br />
<br />
=== Testing Master/Master ===<br />
<br />
On the first master, create a new database in MySQL:<br />
mysql> CREATE DATABASE foo;<br />
<br />
Check if this database is available on the second master:<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database |<br />
+--------------------+<br />
| information_schema |<br />
| foo |<br />
| mysql |<br />
+--------------------+<br />
<br />
Delete the database on the second master:<br />
mysql> DROP DATABASE foo;<br />
<br />
Check if the database has been removed at first master<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database |<br />
+--------------------+<br />
| information_schema |<br />
| mysql |<br />
+--------------------+</div>Jstrickerhttps://oxpedia.org/wiki/index.php?title=Template:OXLoadBalancingClustering_Database&diff=7275Template:OXLoadBalancingClustering Database2011-02-23T11:38:17Z<p>Jstricker: /* First Master configuration */</p>
<hr />
<div>== Master/Master database setup ==<br />
Even if the OX handles the database servers as master and slave, you should configure them as a master/master setup.<br />
<br />
Startup both database machines and install the mysql server packages<br />
$ apt-get install mysql-server<br />
<br />
During the installation, a dialog will show up to set a password for the MySQL 'root' user. Please set a strong password here.<br />
<br />
=== First Master configuration ===<br />
The first server is a master in this context and the second one is the slave.<br />
<br />
Open the MySQL configuration file with you favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the mysqld-section<br />
bind-address = 0.0.0.0<br />
server-id = 1<br />
log-bin = /var/log/mysql/mysql-bin.log<br />
<br />
* ''bindaddress'' specifies the network address where MySQL is listening for network connections. Since the MySQL slave and both Open-Xchange Servers are dedicated machines it is required to have the master accessible through the network.<br />
* ''server-id'' is just a number within a environment with multiple MySQL servers. It needs to be unique for each server.<br />
* ''log-bin'' enables the MySQL binary log which is required for Master/Master replication. In general every statement triggered at the database is stored there to get distributed through the database cluster.<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then login to MySQL with the credentials given at the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure replication permissions for the MySQL slave server and the MySQL user "replication". This account is used by the MySQL slave to get database updates from the master. Please choose a strong password here.<br />
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'10.20.30.219' IDENTIFIED BY 'secret';<br />
<br />
Now setup access for the Open-Xchange Server database user ''openexchange'' to configdb and the groupware database for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.217) verify that the MySQL master is writing a binary log and remember the values<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 | 1082| | |<br />
+------------------+----------+--------------+------------------+<br />
<br />
Copy the MySQL master binary log and the index file to the slave. This is required for initial synchronization.<br />
$ scp /var/log/mysql/mysql-bin.* root@10.20.30.219:/var/log/mysql<br />
<br />
On the slave (10.20.30.219) set the MySQL system user as owner to the binary log that has just been copied to the slave.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.219) set the server as a slave of 10.20.30.217. Replace the log file information by the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
On the slave (10.20.30.219) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.219) check the Slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes".<br />
<br />
=== Second Master configuration ===<br />
The first server is a slave in this context and the second one is the master.<br />
<br />
Open the MySQL configuration file with you favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the mysqld-ection. Just like the other server, this one requires a unique ''server-id'' and needs to listen to an external network address. Activating the binary log is not required at the slave.<br />
bindaddress = 10.20.30.219<br />
server-id = 2<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then login to MySQL with the credentials given at the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure the replication from the master based on the 'replication' user and the masters binary log status. The values for ''MASTER_LOG_FILE'' and ''MASTER_LOG_POS'' must equal the output of the ''SHOW MASTER STATUS'' command at the MySQL master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
Now setup access for the Open-Xchange Server database user 'openexchange' to configdb and the oxdb for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.219) verify that the MySQL master is writing a binary log and remember the values<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 | 1082| | |<br />
+------------------+----------+--------------+------------------+<br />
<br />
On the slave (10.20.30.217) set the MySQL system user as owner to the binary log that has just been copied to the slave.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.217) set the server as a slave of 10.20.30.219. Replace the log file information by the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.219', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
On the slave (10.20.30.217) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.21) check the Slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes".<br />
<br />
Also check the syslog if the replication has been sucessfully started<br />
$ tail -fn20 /var/log/syslog<br />
Jul 26 19:03:45 dbslave mysqld[4718]: 090726 19:03:45 [Note] Slave I/O thread: connected to master 'replication@10.20.30.217:3306', replication started in log 'mysql-bin.000001' at position 1082<br />
<br />
=== Testing Master/Master ===<br />
<br />
On the first master, create a new database in MySQL:<br />
mysql> CREATE DATABASE foo;<br />
<br />
Check if this database is available on the second master:<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database |<br />
+--------------------+<br />
| information_schema |<br />
| foo |<br />
| mysql |<br />
+--------------------+<br />
<br />
Delete the database on the second master:<br />
mysql> DROP DATABASE foo;<br />
<br />
Check if the database has been removed at first master<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database |<br />
+--------------------+<br />
| information_schema |<br />
| mysql |<br />
+--------------------+</div>Jstrickerhttps://oxpedia.org/wiki/index.php?title=Template:OXLoadBalancingClustering_Database&diff=7274Template:OXLoadBalancingClustering Database2011-02-23T11:38:09Z<p>Jstricker: /* First Master configuration */</p>
<hr />
<div>== Master/Master database setup ==<br />
Even if the OX handles the database servers as master and slave, you should configure them as a master/master setup.<br />
<br />
Startup both database machines and install the mysql server packages<br />
$ apt-get install mysql-server<br />
<br />
During the installation, a dialog will show up to set a password for the MySQL 'root' user. Please set a strong password here.<br />
<br />
=== First Master configuration ===<br />
The first server is a master in this context and the second one is the slave.<br />
<br />
Open the MySQL configuration file with you favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the mysqld-section<br />
bind-address = 0.0.0.0<br />
server-id = 1<br />
log-bin = /var/log/mysql/mysql-bin.log<br />
<br />
* ''bindaddress'' specifies the network address where MySQL is listening for network connections. Since the MySQL slave and both Open-Xchange Servers are dedicated machines it is required to have the master accessible through the network.<br />
* ''server-id'' is just a number within a environment with multiple MySQL servers. It needs to be unique for each server.<br />
* ''log-bin'' enables the MySQL binary log which is required for Master/Master replication. In general every statement triggered at the database is stored there to get distributed through the database cluster.<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then login to MySQL with the credentials given at the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure replication permissions for the MySQL slave server and the MySQL user "replication". This account is used by the MySQL slave to get database updates from the master. Please choose a strong password here.<br />
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'10.20.30.219' IDENTIFIED BY 'secret';<br />
<br />
Now setup access for the Open-Xchange Server database user ''openexchange'' to configdb and the groupware database for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.217) verify that the MySQL master is writing a binary log and remember the values<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 | 1082| | |<br />
+------------------+----------+--------------+------------------+<br />
<br />
Copy the MySQL master binary log and the index file to the slave. This is required for initial synchronization.<br />
$ scp /var/log/mysql/mysql-bin.* root@10.20.30.219:/var/log/mysql<br />
<br />
On the slave (10.20.30.219) set the MySQL system user as owner to the binary log that has just been copied to the slave.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.219) set the server as a slave of 10.20.30.217. Replace the log file information by the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
On the slave (10.20.30.219) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.219) check the Slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes".<br />
<br />
=== Second Master configuration ===<br />
The first server is a slave in this context and the second one is the master.<br />
<br />
Open the MySQL configuration file with you favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the mysqld-ection. Just like the other server, this one requires a unique ''server-id'' and needs to listen to an external network address. Activating the binary log is not required at the slave.<br />
bindaddress = 10.20.30.219<br />
server-id = 2<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then login to MySQL with the credentials given at the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure the replication from the master based on the 'replication' user and the masters binary log status. The values for ''MASTER_LOG_FILE'' and ''MASTER_LOG_POS'' must equal the output of the ''SHOW MASTER STATUS'' command at the MySQL master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
Now setup access for the Open-Xchange Server database user 'openexchange' to configdb and the oxdb for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.219) verify that the MySQL master is writing a binary log and remember the values<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 | 1082| | |<br />
+------------------+----------+--------------+------------------+<br />
<br />
On the slave (10.20.30.217) set the MySQL system user as owner to the binary log that has just been copied to the slave.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.217) set the server as a slave of 10.20.30.219. Replace the log file information by the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.219', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
On the slave (10.20.30.217) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.21) check the Slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes".<br />
<br />
Also check the syslog if the replication has been sucessfully started<br />
$ tail -fn20 /var/log/syslog<br />
Jul 26 19:03:45 dbslave mysqld[4718]: 090726 19:03:45 [Note] Slave I/O thread: connected to master 'replication@10.20.30.217:3306', replication started in log 'mysql-bin.000001' at position 1082<br />
<br />
=== Testing Master/Master ===<br />
<br />
On the first master, create a new database in MySQL:<br />
mysql> CREATE DATABASE foo;<br />
<br />
Check if this database is available on the second master:<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database |<br />
+--------------------+<br />
| information_schema |<br />
| foo |<br />
| mysql |<br />
+--------------------+<br />
<br />
Delete the database on the second master:<br />
mysql> DROP DATABASE foo;<br />
<br />
Check if the database has been removed at first master<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database |<br />
+--------------------+<br />
| information_schema |<br />
| mysql |<br />
+--------------------+</div>Jstrickerhttps://oxpedia.org/wiki/index.php?title=Template:OXLoadBalancingClustering_Database&diff=7273Template:OXLoadBalancingClustering Database2011-02-23T11:37:55Z<p>Jstricker: /* First Master configuration */</p>
<hr />
<div>== Master/Master database setup ==<br />
Even if the OX handles the database servers as master and slave, you should configure them as a master/master setup.<br />
<br />
Startup both database machines and install the mysql server packages<br />
$ apt-get install mysql-server<br />
<br />
During the installation, a dialog will show up to set a password for the MySQL 'root' user. Please set a strong password here.<br />
<br />
=== First Master configuration ===<br />
The first server is a master in this context and the second one is the slave.<br />
<br />
Open the MySQL configuration file with you favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the mysqld-section<br />
bind-address = 0.0.0.0<br />
server-id = 1<br />
log-bin = /var/log/mysql/mysql-bin.log<br />
<br />
* ''bindaddress'' specifies the network address where MySQL is listening for network connections. Since the MySQL slave and both Open-Xchange Servers are dedicated machines it is required to have the master accessible through the network.<br />
* ''server-id'' is just a number within a environment with multiple MySQL servers. It needs to be unique for each server.<br />
* ''log-bin'' enables the MySQL binary log which is required for Master/Master replication. In general every statement triggered at the database is stored there to get distributed through the database cluster.<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then login to MySQL with the credentials given at the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure replication permissions for the MySQL slave server and the MySQL user "replication". This account is used by the MySQL slave to get database updates from the master. Please choose a strong password here.<br />
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'10.20.30.219' IDENTIFIED BY 'secret';<br />
<br />
Now set up access for the Open-Xchange Server database user ''openexchange'' to configdb and the groupware database for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.217) verify that the MySQL master is writing a binary log and note the values of ''File'' and ''Position''<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 |     1082 |              |                  |<br />
+------------------+----------+--------------+------------------+<br />
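The ''File'' and ''Position'' values have to be reused verbatim in the ''CHANGE MASTER'' statement on the slave. When scripting the setup, they can be extracted from the output of <code>mysql -e 'SHOW MASTER STATUS'</code>; the helper below is a hypothetical sketch that assumes the tab-separated output format of the mysql command line client.<br />

```shell
# Hypothetical helper: extract File and Position from SHOW MASTER STATUS
# output, e.g.: mysql -u root -p -e 'SHOW MASTER STATUS' | parse_master_status
parse_master_status() {
  # line 1 is the column header; line 2 holds file name and position
  awk 'NR==2 {print $1, $2}'
}

# Example with captured output instead of a live server
printf 'File\tPosition\tBinlog_Do_DB\tBinlog_Ignore_DB\nmysql-bin.000001\t1082\t\t\n' \
  | parse_master_status
# prints: mysql-bin.000001 1082
```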
<br />
Copy the MySQL master binary log and the index file to the slave. This is required for initial synchronization.<br />
$ scp /var/log/mysql/mysql-bin.* root@10.20.30.219:/var/log/mysql<br />
<br />
On the slave (10.20.30.219) make the MySQL system user the owner of the binary log files that have just been copied to the slave.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.219) set the server as a slave of 10.20.30.217. Replace the log file information with the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
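When the setup is automated, the statement above can be generated from the noted values instead of being typed by hand. The snippet below is a sketch; host, password and log coordinates are the example values used throughout this guide.<br />

```shell
# Sketch: compose the CHANGE MASTER statement from the noted master status
MASTER_HOST='10.20.30.217'
MASTER_LOG_FILE='mysql-bin.000001'
MASTER_LOG_POS=1082

change_master_sql() {
  printf "CHANGE MASTER TO MASTER_HOST='%s', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='%s', MASTER_LOG_POS=%s;" \
    "$MASTER_HOST" "$MASTER_LOG_FILE" "$MASTER_LOG_POS"
}

# The result can then be piped into the mysql client, e.g.:
#   change_master_sql | mysql -u root -p
change_master_sql
```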
<br />
On the slave (10.20.30.219) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.219) check the Slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes".<br />
<br />
=== Second Master configuration ===<br />
The first server is a slave in this context and the second one is the master.<br />
<br />
Open the MySQL configuration file with your favorite editor<br />
$ vim /etc/mysql/my.cnf<br />
<br />
Modify or enable the following configuration options in the mysqld-section. Just like the other server, this one requires a unique ''server-id'' and needs to listen on an external network address. Also enable the binary log here, since this server acts as a master as well and its master status is read in the following steps.<br />
bind-address = 10.20.30.219<br />
server-id = 2<br />
log-bin = /var/log/mysql/mysql-bin.log<br />
<br />
To apply the configuration changes, restart the MySQL server.<br />
$ /etc/init.d/mysql restart<br />
<br />
Then log in to MySQL with the credentials set during the MySQL installation process<br />
$ mysql -u root -p<br />
Enter password:<br />
<br />
Configure replication permissions for the first server and the MySQL user "replication". This account is used by the first server to get database updates from this master. Please choose a strong password here.<br />
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication'@'10.20.30.217' IDENTIFIED BY 'secret';<br />
<br />
Configure the replication from the first master based on the 'replication' user and the master's binary log status. The values for ''MASTER_LOG_FILE'' and ''MASTER_LOG_POS'' must equal the output of the ''SHOW MASTER STATUS'' command at the MySQL master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.217', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
Now set up access for the Open-Xchange Server database user 'openexchange' to configdb and the oxdb for both groupware server addresses. These databases do not exist yet, but will be created during the Open-Xchange Server installation.<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.213' IDENTIFIED BY 'secret';<br />
mysql> GRANT ALL PRIVILEGES ON *.* TO 'openexchange'@'10.20.30.215' IDENTIFIED BY 'secret';<br />
<br />
On the master (10.20.30.219) verify that the MySQL master is writing a binary log and note the values of ''File'' and ''Position''<br />
mysql> SHOW MASTER STATUS;<br />
+------------------+----------+--------------+------------------+<br />
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |<br />
+------------------+----------+--------------+------------------+<br />
| mysql-bin.000001 |     1082 |              |                  |<br />
+------------------+----------+--------------+------------------+<br />
<br />
On the slave (10.20.30.217) make sure the MySQL system user owns the binary log files.<br />
$ chown mysql:adm /var/log/mysql/*<br />
<br />
On the slave (10.20.30.217) set the server as a slave of 10.20.30.219. Replace the log file information with the values you retrieved from the master.<br />
mysql> CHANGE MASTER TO MASTER_HOST='10.20.30.219', MASTER_USER='replication', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1082;<br />
<br />
On the slave (10.20.30.217) start the MySQL slave replication<br />
mysql> START SLAVE;<br />
<br />
On the slave (10.20.30.217) check the Slave status<br />
mysql> SHOW SLAVE STATUS;<br />
<br />
"Slave_IO_Running" and "Slave_SQL_Running" should be set to "yes".<br />
<br />
Also check the syslog to verify that the replication has been successfully started<br />
$ tail -fn20 /var/log/syslog<br />
Jul 26 19:03:45 dbslave mysqld[4718]: 090726 19:03:45 [Note] Slave I/O thread: connected to master 'replication@10.20.30.217:3306', replication started in log 'mysql-bin.000001' at position 1082<br />
<br />
=== Testing Master/Master ===<br />
<br />
On the first master, create a new database in MySQL:<br />
mysql> CREATE DATABASE foo;<br />
<br />
Check if this database is available on the second master:<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database |<br />
+--------------------+<br />
| information_schema |<br />
| foo |<br />
| mysql |<br />
+--------------------+<br />
<br />
Delete the database on the second master:<br />
mysql> DROP DATABASE foo;<br />
<br />
Check if the database has been removed on the first master<br />
mysql> SHOW DATABASES;<br />
+--------------------+<br />
| Database |<br />
+--------------------+<br />
| information_schema |<br />
| mysql |<br />
+--------------------+</div>Jstricker