Configuration of Resource Limits
Overview
Several ways exist to restrict resources on a Linux system, from a global level down to users/groups or even shells and the processes started by them.
Sysctl
Sysctl is used to modify kernel parameters at runtime, e.g. to set the system-wide maximum number of open file handles:
$ sysctl -w fs.file-max=100000
To set such parameters permanently, append them to the main configuration file and reload the settings:
$ echo fs.file-max=100000 >> /etc/sysctl.conf
$ sysctl -p
More information can be found via man sysctl.
Limits.conf
Allows restricting resources on a global, group or user level. E.g.:
$ cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
*          soft    nproc     1024
From man limits.conf:
Also, please note that all limit settings are set per login. They are not global, nor are they permanent; existing only for the duration of the session.
The limits per login are applied via the PAM stack; see man pam and man pam_limits for more details. As those limits are bound to sessions, they don't affect most daemons started by our supported init systems or init utilities. Most state that they are ignored by design, see upstart, systemd and start-stop-daemon.
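Whether such a session limit actually takes effect can be checked from inside a login session for the service user, for example like this (a small sketch; it assumes the su PAM config loads pam_limits and that /bin/bash is available, since the open-xchange user usually has no login shell):
$ su - open-xchange -s /bin/bash -c 'ulimit -n; ulimit -u'
ulimit -n prints the open file limit and ulimit -u the process limit for that session.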
Ulimit
From man bash:
ulimit [-HSTabcdefilmnpqrstuvx [limit]]
       Provides control over the resources available to the shell and to processes started by it, on systems that allow such control.
This is what we use in our System V compatible init scripts to increase resources for the open-xchange process across multiple distributions. Currently, only the maximum number of processes and the maximum number of open file descriptors available to a single user are increased via ulimit. The values are specified in /opt/open-xchange/etc/ox-scriptconf.sh.
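As a rough illustration of the mechanism, the init script sources that file and raises the limits with ulimit before the daemon is started; the variable names NRFILES and NPROC below are placeholders, not necessarily the names used in ox-scriptconf.sh:
# illustrative excerpt of a System V init script; NRFILES/NPROC are placeholder names
. /opt/open-xchange/etc/ox-scriptconf.sh
ulimit -n ${NRFILES:-65536}   # maximum number of open file descriptors
ulimit -u ${NPROC:-65536}     # maximum number of processes
# the open-xchange daemon is started afterwards and inherits these limits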
Systemd
Control Groups
Control groups should only affect the OX middleware if you create/manage them yourself or if you are using a modern distribution that already uses systemd as init.
Citing from the kernel cgroup documentation:
1-2. What is cgroup?
cgroup is a mechanism to organize processes hierarchically and distribute system resources along the hierarchy in a controlled and configurable manner.
cgroup is largely composed of two parts - the core and controllers. cgroup core is primarily responsible for hierarchically organizing processes. A cgroup controller is usually responsible for distributing a specific type of system resource along the hierarchy although there are utility controllers which serve purposes other than resource distribution.
cgroups form a tree structure and every process in the system belongs to one and only one cgroup. All threads of a process belong to the same cgroup. On creation, all processes are put in the cgroup that the parent process belongs to at the time. A process can be migrated to another cgroup. Migration of a process doesn't affect already existing descendant processes.
Following certain structural constraints, controllers may be enabled or disabled selectively on a cgroup. All controller behaviors are hierarchical - if a controller is enabled on a cgroup, it affects all processes which belong to the cgroups consisting the inclusive sub-hierarchy of the cgroup. When a controller is enabled on a nested cgroup, it always restricts the resource distribution further. The restrictions set closer to the root in the hierarchy can not be overridden from further away.
Processes are thus organized into a tree structure of control groups, and controllers are responsible for distributing resources along that tree. So what kinds of controllers exist?
5. Controllers
5-1. CPU
The "cpu" controllers regulates distribution of CPU cycles. This controller implements weight and absolute bandwidth limit models for normal scheduling policy and absolute bandwidth allocation model for realtime scheduling policy.
5-2. Memory
The "memory" controller regulates distribution of memory. ... While not completely water-tight, all major memory usages by a given cgroup are tracked so that the total memory consumption can be accounted and controlled to a reasonable extent.
5-3. IO
The "io" controller regulates the distribution of IO resources. This controller implements both weight based and absolute bandwidth or IOPS limit distribution; however, weight based distribution is available only if cfq-iosched is in use and neither scheme is available for blk-mq devices.
The open-xchange service is simply put into the default system.slice without applying further limits.
singlenode$ systemd-cgls --no-pager
├─1 /sbin/init
├─system.slice
│ ├─avahi-daemon.service
│ │ ├─501 avahi-daemon: running [singlenode]
│ │ └─514 avahi-daemon: chroot helper
│ ├─console-kit-daemon.service
│ │ └─16164 /usr/sbin/console-kit-daemon --no-daemon
│ ├─dbus.service
│ │ └─508 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
│ ├─munin-node.service
│ │ └─4290 /usr/bin/perl -wT /usr/sbin/munin-node
│ ├─open-xchange.service
│ │ └─6037 /usr/bin/java -Dsun.net.inetaddr.ttl=3600 -Dnetworkaddress.cache.ttl=3600 -Dnetworkaddress.cache.negative.ttl=10 ...
To check all the details, use:
singlenode:~ # systemctl show system.slice
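To see which cgroup the open-xchange process itself belongs to, you can also read its cgroup file under /proc directly (a small sketch, reusing the PID file shown in the verification section below):
singlenode:~ # read pid < /var/run/open-xchange.pid
singlenode:~ # cat /proc/$pid/cgroup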
Limits besides control groups
Besides control groups, systemd allows you to apply other limits to the execution environment of your service. Here we can apply the limits that would normally be applied via limits.conf or ulimit. Systemd uses setrlimit for this. The options that we set by default are:
* LimitNOFILE
* LimitNPROC
You can check this by looking at the default service file that is shipped:
singlenode:~ # cat /usr/lib/systemd/system/open-xchange.service
[Unit]
After=remote-fs.target
After=time-sync.target ypbind.service sendmail.service cyrus.service

[Service]
User=open-xchange
PermissionsStartOnly=true
TimeoutStartSec=0
ExecStartPre=/opt/open-xchange/sbin/triggerupdatethemes -u
ExecStart=/opt/open-xchange/sbin/open-xchange
ExecStop=/opt/open-xchange/sbin/shutdown -w
ExecReload=/opt/open-xchange/sbin/triggerreloadconfiguration -d
KillMode=process
LimitNOFILE=65536
LimitNPROC=65536

[Install]
WantedBy=multi-user.target
Drop-in configs
Drop-in configs allow administrators to easily override the default service unit files. If you want to change the default limits or add additional ones, have a look at:
singlenode:~ # cat /etc/systemd/system/open-xchange.service.d/limits.conf
# Override and add options in this file
# See systemd.exec(5) for other limits
[Service]
#LimitNPROC=65536
#LimitNOFILE=65536
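After editing the drop-in, systemd needs to re-read the unit files and the service has to be restarted before changed limits take effect; a typical sequence looks like this:
singlenode:~ # systemctl daemon-reload
singlenode:~ # systemctl restart open-xchange
singlenode:~ # systemctl show open-xchange | grep -E 'LimitNOFILE|LimitNPROC'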
Open-Xchange middleware on specific distros
The support for the mentioned mechanisms of resource control differs depending on the distribution and the init system in use.
Debian 7
- Init
- System V style
- OX Configurable Limits/Defaults
- nofile, nproc
The mentioned limits can be configured via /opt/open-xchange/etc/ox-scriptconf.sh. The limits are applied via ulimit in the service's init script. The open-xchange service is finally started via start-stop-daemon, which doesn't consider /etc/security/limits.*
RHEL 6 / CentOS 6
- Init
- Upstart, System V compatible
- OX Configurable Limits/Defaults
- nofile, nproc
The mentioned limits can be configured via /opt/open-xchange/etc/ox-scriptconf.sh. The limits are applied via ulimit in the service's init script. Furthermore, as the open-xchange service is finally started via su ... open-xchange on this distro, a user session is opened via su/PAM, and the default CentOS PAM config reads the /etc/security/limits.* configuration by loading the PAM stack like:
- /etc/pam.d/su
- -> /etc/pam.d/system-auth
- -> pam_limits.so
If NPROC isn't configured for the open-xchange-server, it's restricted to 1024 globally by default to prevent accidental fork bombs (see /etc/security/limits.d/90-nproc.conf), which can result in severe problems for modern multithreaded applications.
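One way to lift that restriction for the service user only is an additional snippet in /etc/security/limits.d; the file name and values below are just an example:
singlenode:~ # cat /etc/security/limits.d/91-open-xchange.conf
# example override: raise the process limit for the open-xchange user
open-xchange    soft    nproc    65536
open-xchange    hard    nproc    65536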
RHEL 7 / CentOS 7 / Debian 8 / SLE 12
- Init
- Systemd
- OX Configurable Limits/Defaults
- nofile, nproc
For systemd, the default limits are configured directly in the service's unit file that is shipped by OX and located at /usr/lib/systemd/system/open-xchange.service. The drop-in config to override or extend the default unit file is located at /etc/systemd/system/open-xchange.service.d/limits.conf. systemd.exec(5) documents many options that admins can use to adapt the default service to their specific needs.
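As an example of such an adaptation, the drop-in could raise the file descriptor limit further and allow core dumps; the values below are purely illustrative:
singlenode:~ # cat /etc/systemd/system/open-xchange.service.d/limits.conf
[Service]
LimitNOFILE=131072
LimitCORE=infinity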
Verify limits
System V
singlenode:~ # read pid < /var/run/open-xchange.pid
singlenode:~ # cat /proc/$pid/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        0                    unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             65536                65536                processes
Max open files            65536                65536                files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       24254                24254                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
Systemd
singlenode:~ # systemctl show open-xchange | grep Limit
StartLimitInterval=10000000
StartLimitBurst=5
StartLimitAction=none
MemoryLimit=18446744073709551615
LimitCPU=18446744073709551615
LimitFSIZE=18446744073709551615
LimitDATA=18446744073709551615
LimitSTACK=18446744073709551615
LimitCORE=18446744073709551615
LimitRSS=18446744073709551615
LimitNOFILE=65536
LimitAS=18446744073709551615
LimitNPROC=65536
LimitMEMLOCK=65536
LimitLOCKS=18446744073709551615
LimitSIGPENDING=19827
LimitMSGQUEUE=819200
LimitNICE=0
LimitRTPRIO=0
LimitRTTIME=18446744073709551615