Install Guide
About this document
The aim of this document is to improve on the existing quickinstall guides: to be more structured, to provide a more extensive view of "single node and beyond" topics, to follow existing best practices more closely (security-wise, but not only), and to point out what needs to be changed in clustered installations.
Most of the commands in this document therefore assume a high-level design of "single-node, all-in-one".
This document was created on Debian Stretch (which, as of the time of writing, is not yet officially supported), but it should work as-is on Jessie as well. Porting to RHEL/SLES/... is TODO.
Preparations
System update
You want to start on latest patchlevel of your OS:
apt-get update
apt-get dist-upgrade
apt-get install less vim pwgen apt-transport-https
# or
yum update
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install vim less pwgen wget
reboot
Pregenerate passwords
This guide aims to provide copy-paste-ready commands which create installations with no default passwords.
We will pre-generate some passwords which will live as dotfiles in /root.
pwgen -c -n 16 1 > /root/.oxmasterpw
pwgen -c -n 16 1 > /root/.oxadminpw
pwgen -c -n 16 1 > /root/.oxuserpw
pwgen -c -n 16 1 > /root/.dbpw
pwgen -c -n 16 1 > /root/.dbrootpw
Prepare database
In real-world installations this will probably be multiple Galera clusters of a supported flavor and version. For educational purposes, a standalone DB on our single-node machine is sufficient.
Even for single-node setups, don't forget to apply database tuning; see our oxpedia article for default tunings. Note that you typically need to re-initialize the MySQL datadir after changing InnoDB sizing values, and subsequently start the service:
mysql_install_db
service mysql restart
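As an illustration, InnoDB sizing values of the kind the tuning article covers might look like the fragment below. The file name and all numbers are placeholders, not recommendations; consult the oxpedia tuning article for values appropriate to your hardware.

```ini
# /etc/mysql/conf.d/ox-tuning.cnf -- illustrative values only
[mysqld]
innodb_buffer_pool_size = 2G     # a large share of RAM on a dedicated DB host
innodb_log_file_size    = 256M   # changing this is what forces the datadir re-init
max_connections         = 500
```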
We aim to create secure-by-default documentation, so here we go: run mysql_secure_installation and choose every security-relevant option, but leave the root password empty in this step, as we set it in the next step:
# leave the root password empty in mysql_secure_installation as we set it in the subsequent step
mysql_secure_installation
# now, configure the password from /root/.dbrootpw
mysql -e "UPDATE mysql.user SET Password=PASSWORD('$(cat /root/.dbrootpw)') WHERE User='root'; FLUSH PRIVILEGES;"
cat >/root/.my.cnf <<EOF
[client]
user=root
password=$(cat /root/.dbrootpw)
EOF
MySQL 5.7: the UPDATE statement above no longer works (the Password column has been removed); set the password with ALTER USER instead:
mysql -e "ALTER USER USER() IDENTIFIED BY '$(cat /root/.dbrootpw)';"
These credentials also need to be put in /etc/mysql/debian.cnf.
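For reference, a sketch of the relevant /etc/mysql/debian.cnf entries; the section layout is assumed from a stock Debian install, and only the user/password lines need to change:

```ini
# /etc/mysql/debian.cnf -- sketch only, adjust to the sections present in your file
[client]
host     = localhost
user     = root
password = <contents of /root/.dbrootpw>
[mysql_upgrade]
host     = localhost
user     = root
password = <contents of /root/.dbrootpw>
```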
Cluster note
If you run multiple DB clusters, repeat this on each node analogously. Just be aware that the copy-paste command above expects the /root/.dbrootpw file.
Prepare OX user
While the packages will create the user automatically if it does not exist, we want to prepare the filestore now, and therefore need the user up front.
useradd -r open-xchange
Cluster Note
You should hard-wire the user ID and group ID to the same fixed value on all nodes. Otherwise, if you want to use an NFS filestore, you'll run into permission problems, unless you use NFSv4/Kerberos/idmapd.
groupadd -r -g 999 open-xchange
useradd -r -g 999 -u 999 open-xchange
Prepare filestore
There are several options here.
Single-Node: local directory
For a single-node installation, you can just prepare a local directory:
mkdir /var/opt/filestore
chown open-xchange:open-xchange /var/opt/filestore
NFS
If using NFS:
Setup on the NFS server:
apt-get install nfs-kernel-server
service nfs-kernel-server restart
Configure /etc/exports. This covers traditional IP-based access control; krb5 or other security configuration is out of scope for this document.
mkdir /var/opt/filestore
chown open-xchange:open-xchange /var/opt/filestore
echo "/var/opt/filestore 192.168.1.0/24(rw,sync,fsid=0,no_subtree_check)" >> /etc/exports
exportfs -a
Clients can then mount using
mkdir /var/opt/filestore
mount -t nfs -o vers=4 nfs-server:/filestore /var/opt/filestore
Or using fstab entries like
nfs-server:/filestore /var/opt/filestore nfs4 defaults 0 0
Object Store
You can use an object store. For lab environments Ceph is a convenient option. For demo/educational purposes a "single-node Ceph cluster", even co-located on your "single-node machine", is reasonable, but its setup is out of scope for this document. If you want to use this, be prepared to provide the endpoint, bucket name, access key, and secret key.
No filestore
If you don't want to provide a filestore, you can configure OX later to run without one. (Q: do we still need a dummy registerfilestore on a local directory in that event?)
Prepare mail system
Formally out of scope of this document.
If you need to create a testing dovecot/postfix setup, you can use our performance testing sample config.
Install OX software
You need an ldb user and password for updates and proprietary repositories. If you don't have such a user, you can still install the free components; however, you will get a lot of "authentication failed" warnings from the apt tools unless you deconfigure the closed repositories.
wget http://software.open-xchange.com/oxbuildkey.pub -O - | apt-key add -
wget -O /etc/apt/sources.list.d/ox.list http://software.open-xchange.com/products/DebianJessie.list
ldbuser=...
ldbpassword=...
sed -i -e "s/LDBUSER:LDBPASSWORD/$ldbuser:$ldbpassword/" /etc/apt/sources.list.d/ox.list
apt-get update
apt-get install open-xchange open-xchange-authentication-database open-xchange-grizzly \
  open-xchange-admin open-xchange-appsuite-backend open-xchange-appsuite-manifest open-xchange-appsuite
#
# or
#
wget -O /etc/yum.repos.d/ox.repo http://software.open-xchange.com/products/RHEL7.repo
ldbuser=...
ldbpassword=...
sed -i -e "s/LDBUSER:LDBPASSWORD/$ldbuser:$ldbpassword/" /etc/yum.repos.d/ox.repo
yum install open-xchange open-xchange-authentication-database open-xchange-grizzly \
  open-xchange-admin open-xchange-appsuite-backend open-xchange-appsuite-manifest open-xchange-appsuite
Cluster note
- If you want separate frontend (Apache) and middleware (open-xchange) systems, install the packages which require Apache as a dependency on the frontend nodes, and the packages which require Java as a dependency on the middleware nodes. Currently this results in the split:
- Frontend nodes: open-xchange-appsuite
- Middleware nodes: everything else
- If you want to use an object store, install the corresponding open-xchange-filestore-xyz package, like open-xchange-filestore-s3
- For hazelcast session storage, install also open-xchange-sessionstorage-hazelcast
Install database schemas
If the DB runs on localhost and you have root access, you can use
/opt/open-xchange/sbin/initconfigdb --configdb-pass="$(cat /root/.dbpw)" -a
Cluster note
Create the DB users on all write instances manually. https://oxpedia.org/wiki/index.php?title=Template:OXLoadBalancingClustering_Database#Creating_Open-Xchange_user
mysql -e "GRANT CREATE, LOCK TABLES, REFERENCES, INDEX, DROP, DELETE, ALTER, SELECT, UPDATE, INSERT, CREATE TEMPORARY TABLES, SHOW VIEW, SHOW DATABASES ON *.* TO 'openexchange'@'%' IDENTIFIED BY '$(cat /root/.dbpw)' WITH GRANT OPTION;"
Run initconfigdb with some more options:
/opt/open-xchange/sbin/initconfigdb --configdb-user=openexchange --configdb-pass="$(cat /root/.dbpw)" --configdb-host=configdb-writehost
(initconfigdb needs to be run only once on one cluster node)
Initial configuration
/opt/open-xchange/sbin/oxinstaller --add-license=YOUR-OX-LICENSE-CODE --servername=oxserver --configdb-pass="$(cat /root/.dbpw)" --master-pass="$(cat /root/.oxmasterpw)" --network-listener-host=localhost --servermemory 1024
servername is more like a cluster name and needs to be the same on all nodes.
servermemory should be adjusted to reflect the expected number of concurrent active sessions; the sizing assumption is 4 MB per session.
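The sizing assumption above can be turned into a quick calculation. The session count and base footprint below are hypothetical examples, not recommendations:

```shell
# Back-of-the-envelope --servermemory calculation (all numbers illustrative)
sessions=2000          # expected concurrent active sessions (assumption)
per_session_mb=4       # sizing assumption from this guide
base_mb=512            # hypothetical base JVM footprint
servermemory=$((base_mb + sessions * per_session_mb))
echo "$servermemory"   # pass this value to oxinstaller --servermemory
```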
Cluster Note
--configdb-readhost=...
--configdb-writehost=...
--imapserver=...
--smtpserver=...
--mail-login-src=<login|mail|name>
--mail-server-src=<user|global>
--transport-server-src=<user|global>
--jkroute=APP1
--object-link-hostname=[service DNS name like ox.example.com]
--extras-link=[1]
--name-of-oxcluster=[something unique per cluster, like business-staging; see --servername]
--network-listener-host=<localhost|*>
oxinstaller needs to be run on each cluster node with identical options besides the jkroute, which must be unique per cluster node and match the corresponding apache option, see below.
In a cluster you also want to configure hazelcast; see AppSuite:Running_a_cluster#Configuration. Most prominent options are
com.openexchange.hazelcast.enabled=true
com.openexchange.hazelcast.group.name=<reasonable unique group name>
com.openexchange.hazelcast.group.password=<unique password, CHANGE THE SHIPPED DEFAULT PASSWORD!>
com.openexchange.hazelcast.network.join=static # static is recommended over multicast for robustness
com.openexchange.hazelcast.network.join.static.nodes=... # configure your nodes as a comma-separated list; a bootstrapping subset is acceptable
com.openexchange.hazelcast.network.interfaces=... # pick your subnet; wildcards (*) and ranges (-) can be used
Start the service:
systemctl restart open-xchange
Cluster Note
Start the service on every cluster node.
Registering stuff
Register the "server":
/opt/open-xchange/sbin/registerserver -n oxserver -A oxadminmaster -P "$(cat /root/.oxmasterpw)"
Cluster Note
All the register* commands need to be issued only once per cluster, as their effect is to insert the corresponding rows into the configdb.
And the filestore:
/opt/open-xchange/sbin/registerfilestore -A oxadminmaster -P "$(cat /root/.oxmasterpw)" -t file:/var/opt/filestore -s 1000000 -x 1000000
Cluster Note
If you chose an object store, the corresponding registerfilestore line reads as follows (Ceph radosgw example):
/opt/open-xchange/sbin/registerfilestore -A oxadminmaster -P "$(cat /root/.oxmasterpw)" -t s3://radosgw -s 1000000 -x 1000000
It requires configuration of the object store in filestore-s3.properties:
com.openexchange.filestore.s3.radosgw.endpoint=http://localhost:7480
com.openexchange.filestore.s3.radosgw.bucketName=oxbucket
com.openexchange.filestore.s3.radosgw.region=eu-west-1
com.openexchange.filestore.s3.radosgw.pathStyleAccess=true
com.openexchange.filestore.s3.radosgw.accessKey=...
com.openexchange.filestore.s3.radosgw.secretKey=...
com.openexchange.filestore.s3.radosgw.encryption=none
com.openexchange.filestore.s3.radosgw.signerOverride=S3SignerType
com.openexchange.filestore.s3.radosgw.chunkSize=5MB
Changing this file requires another restart:
service open-xchange restart
And the database:
/opt/open-xchange/sbin/registerdatabase -A oxadminmaster -P "$(cat /root/.oxmasterpw)" -n oxdb -p "$(cat /root/.dbpw)" -m true
Cluster Note
You probably have multiple DB clusters with read and write URLs. Register each of them by first registering the master, then registering the slave URL with the corresponding master ID:
/opt/open-xchange/sbin/registerdatabase -A oxadminmaster -P "$(cat /root/.oxmasterpw)" -n oxdb -p "$(cat /root/.dbpw)" -m true
This command gives as output the id of the registered db master, like
database 3 registered
Here, the id is 3. Use this id as argument of the -M switch of the next command:
/opt/open-xchange/sbin/registerdatabase -A oxadminmaster -P "$(cat /root/.oxmasterpw)" -n oxdbr -p "$(cat /root/.dbpw)" -m false -M 3
Configure Apache
Create config files /etc/apache2/conf-enabled/proxy_http.conf, /etc/apache2/sites-enabled/000-default.conf by copy-pasting as explained in AppSuite:Open-Xchange_Installation_Guide_for_Debian_8.0#Configure_services
Make sure you are using mpm_event. Apply concurrent connections tuning as described in Tune_apache2_for_more_concurrent_connections.
Configure modules and restart:
a2enmod proxy proxy_http proxy_balancer expires deflate headers rewrite mime setenvif lbmethod_byrequests
systemctl restart apache2
Cluster Note
Make sure that each middleware node gets its own unique value for com.openexchange.server.backendRoute in server.properties, e.g. APP1, APP2, APP3, etc.
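For example, on the first middleware node the property would be set like this (the route name must match the corresponding Apache route):

```properties
# /opt/open-xchange/etc/server.properties on node ox1
com.openexchange.server.backendRoute=APP1
```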
Configure the routes in the HTTP proxy definitions like:
BalancerMember http://ox1:8009 timeout=100 smax=0 ttl=60 retry=60 loadfactor=50 route=APP1
BalancerMember http://ox2:8009 timeout=100 smax=0 ttl=60 retry=60 loadfactor=50 route=APP2
BalancerMember http://ox3:8009 timeout=100 smax=0 ttl=60 retry=60 loadfactor=50 route=APP3
If you colocate Apache on the middleware nodes, you might want to minimize cross-node routing by setting, on each node, loadfactor=50 for the local node and status=+H instead of a loadfactor for the other nodes. E.g. on node ox1:
BalancerMember http://ox1:8009 timeout=100 smax=0 ttl=60 retry=60 loadfactor=50 route=APP1
BalancerMember http://ox2:8009 timeout=100 smax=0 ttl=60 retry=60 status=+H route=APP2
BalancerMember http://ox3:8009 timeout=100 smax=0 ttl=60 retry=60 status=+H route=APP3
However, this requires host-specific Apache HTTP proxy configuration files.
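Since the only per-node difference is which BalancerMember keeps a loadfactor, such host-specific snippets can be generated. The node list, route naming, and output handling below are assumptions for illustration:

```shell
# Hypothetical generator for a per-node BalancerMember snippet.
# Adapt node names, the local node, and the output path to your setup.
nodes="ox1 ox2 ox3"
local_node="ox1"
out=$(mktemp)
i=0
for node in $nodes; do
  i=$((i + 1))
  if [ "$node" = "$local_node" ]; then
    extra="loadfactor=50"    # keep local traffic on this node
  else
    extra="status=+H"        # other nodes act as hot standby only
  fi
  echo "BalancerMember http://$node:8009 timeout=100 smax=0 ttl=60 retry=60 $extra route=APP$i" >> "$out"
done
cat "$out"
```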
Run touch-appsuite with one identical timestamp on each frontend node:
timestamp=$(date -u +%Y%m%d.%H%M%S)
for node in $all_frontend_nodes; do
  ssh $node /opt/open-xchange/sbin/touch-appsuite --timestamp=$timestamp
done
Provision a Test User
Provision a sample context and user:
/opt/open-xchange/sbin/createcontext -c 1 -A oxadminmaster -P $(cat /root/.oxmasterpw) \
  -N localdomain -u oxadmin -d "Admin User" -g Admin -s User -p $(cat /root/.oxadminpw) \
  -e oxadmin@localdomain -q 100 --access-combination-name groupware_premium
/opt/open-xchange/sbin/createuser -c 1 -A oxadmin -P $(cat /root/.oxadminpw) \
  -u testuser -d "Test User" -g Test -s User -p $(cat /root/.oxuserpw) \
  -e testuser@localdomain --access-combination-name groupware_premium
Context_Preprovisioning#Sample_Script provides an example of how to provision a large number of contexts quickly using fast mode.