Dovecot:Main Page Dovecot
Revision as of 08:39, 16 February 2017


Product Information/Overview


Dovecot Pro Overview

Dovecot Pro provides a reliable and scalable mail solution with enterprise-level stability and scalability on top of the open source version. The product is built on the experience the Dovecot team has gained from working for many years with the largest ISPs in the world. A Dovecot Pro license enables the Object Storage Plugin, which supports various object storages.

Dovecot Pro makes it easy to switch from many existing IMAP and POP3 servers by allowing transparent migration of users onto the Dovecot platform.

Dovecot Pro Product Components

Main Dovecot Pro Product components are:

  • Dovecot Pro Mail
  • Dovecot Object Storage
  • Dovecot Full Text Search
  • Dovecot Pro Vault
  • Dovecot Migration Framework

Dovecot Pro Mail


Depending on the size of the installation, the Dovecot mail system contains servers in different roles: Proxies, Directors and Backends. Dovecot's stateless design allows any of the components to be lost, or shut down for maintenance or upgrade, without affecting service availability. All users on the same site still receive service; only the capacity providing the service is diminished, meaning the service may slow down if the system is under heavy load at that time. The Dovecot architecture can scale both horizontally and vertically. The recommended network topology has:

  • Dovecot Proxy in public network,
  • Dovecot Director and Dovecot Backend in private network,
  • Object storage typically in Storage network.

Backends can use independent local storage or shared storage, such as NFS or object storage. All Dovecot components run the same application but are configured to take on different roles. The Dovecot components are explained below in more detail.

Dovecot Proxy

Dovecot Proxies act as frontend IMAP/POP3/LMTP servers for client connections. Dovecot Proxies perform a user database lookup against LDAP to validate the user and to look up the user's parameters. In multi-site installations, the Dovecot Proxy servers' main function is to forward the user to the load balancer in front of the Dovecot Directors of the site where the user is located. The Dovecot Proxy also decrypts TLS/SSL sessions, which consume more CPU than the user lookup and traffic forwarding.

Dovecot Director

Directors listen to IMAP/POP3/LMTP protocols and balance load and provide high-availability for the Dovecot Backends.

The main difference between a regular load balancer and the Dovecot Director is that the director makes sure a single user is never accessed by different Backends at the same time. This is needed to avoid user data corruption. The Dovecot Directors take care of this by sharing information about users and user sessions while those sessions are active; each Dovecot Director knows which Backend is handling the existing session of any user. In front of the Dovecot Directors there needs to be a load balancer to provide high availability (HA) for them. Dovecot Directors are stateless, meaning that any one of them can be switched off for maintenance without users noticing. This also makes it possible to update the Dovecot components without interrupting user sessions, making the operation transparent to the user.

Dovecot Backend

Dovecot Backend does all the hard work of reading and writing mails to storage and handling all of the IMAP/POP3/LMTP protocols. Dovecot Backend is connected to mail storage, typically filesystem or cloud storage, where user mails and mail indexes are stored.

As a user connects to Dovecot to read mail, the user's mail indexes are fetched from the mail storage. The mail indexes are updated as long as the session is valid, which makes fetching the user's mail fast. When the session expires or the user logs off, the updated index is stored back to storage, waiting for the next login. Dovecot Backends are stateless, making it possible for user connections to be served by any Backend.

Dovecot Object Storage


Dovecot Pro supports storing mail in several object storage solutions. These range from managed cloud storage services such as Amazon S3 to locally hosted object storage solutions such as Scality.

If mails are stored in object storage, they can be accessed from any Backend, which, like the other servers, is completely stateless. All sessions of the same user are directed to the same Backend by the Dovecot Director to prevent data corruption. When a user session reaches the Dovecot Backend, the user's indexes are loaded from object storage into the Backend's local cache. When the session is over, the indexes are uploaded back to object storage. In the unlucky event of a Backend failure, the next Backend can continue where the failed one left off, since there are no locking problems (in contrast to NFS) and the indexes can be merged and updated by the Dovecot Backend as soon as the servers are back online.

NFS storage is also supported, but it does not require the Dovecot Object Storage plugin.

Object Storage Advantages

The obox plugin is optimized for cloud technologies by enabling long-term email data storage in cloud storage solutions. The obox plugin tracks which index files have been altered or are needed locally and uploads/downloads them to and from object storage only as necessary. This usage pattern leverages the object storage paradigm most efficiently, as opposed to a more traditional block storage strategy. The obox plugin consists of three major components. The first is an sdbox-like mailbox format: each message is stored in its own “file” (a discrete object), and indexes are bundled into separate discrete objects stored in object storage. The second is a collection of drivers that implement support for various object storages, such as Amazon S3 and Scality sproxyd; there is additionally an "fscache" driver that implements a local filesystem cache for mail objects. The third is metadata storage for index files and other metadata, such as Sieve scripts, which synchronizes these files between a local cache and the object storage.
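As a purely illustrative sketch, the three components map to configuration roughly as follows. All names, URLs and driver strings here are hypothetical placeholders; the exact mail_location and obox_fs syntax depends on your Dovecot Pro version and chosen object storage:

```
# dovecot.conf sketch (hypothetical values): obox mailbox format with an
# fscache wrapper in front of an S3-style driver.
mail_location = obox:%u

plugin {
  # local filesystem cache for mail objects, then the storage driver
  obox_fs = fscache:1G:/var/cache/mails:s3:https://ACCESSKEY:SECRET@s3.example.com/?bucket=mails
}
```

Consult the Dovecot Pro backend configuration manual for the exact driver string for your storage.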

Dovecot Full Text Search


As the amount and importance of information stored in email messages is increasing in people’s everyday lives, searching through those messages is becoming ever more important. At the same time mobile clients add their own restrictions for what can be done on the client side.

The ever-diversifying client software also tests the limits of the IMAP protocol and current server implementations. When an indexing Backend is not present, searches fall back on slow sequential searches through all message headers or text. Thus efficient and feature rich server side searching has grown in importance.

A two-part reimplementation of the Dovecot indexing and search architecture has been designed. This is to provide more customizability for searching and a more unified range of features for all existing FTS plugins. Additionally a renewed Dovecot native implementation of the full FTS stack is provided for better performance and scaling for large mail volumes.

Full text indexing and search

Triggers for Full Text Search (FTS) indexing are configurable. It can be started on demand when searching, automatically when new messages arrive or as a batch job. Full text search has the following features:

  • Indexes can be stored in the Object Storage
  • Smaller indexes compared to all current search Backends, to improve search performance
  • Avoids indexing duplicate data by using word stemming and normalization and by skipping bad characters and stop words
  • Part of the Dovecot mail server Backend; no 3rd party software needed
  • No extra Java virtual machine needed for search
  • Substring search for partial word matches

Dovecot's standard IMAP SEARCH TEXT/BODY parameters use the FTS indexes. Searches through message headers benefit from Dovecot's fast message index cache implementation, which often contains the necessary information. Optionally header searches can also be done from FTS indexes.

Search Parameters

Search queries from the webmail user interface are passed on to the Dovecot search Backend. Details of the possible search queries can be found here: http://wiki.dovecot.org/Tools/Doveadm/SearchQuery

Dovecot Pro Vault


The storage for Dovecot Vault is based on an existing obox-based cluster. Emails are stored in a read-only namespace. Apart from the regular obox configuration of the storage cluster, the namespace is configured as hidden and its containing mailbox is created automatically.

Read-only access to Vault e-mails

All mails stored in the archive namespace have to be read-only. This is achieved using the ACL plugin to prevent deleting emails.
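A minimal sketch of such a setup, assuming the ACL plugin's vfile backend with a global ACL file (paths and the exact rights to grant are illustrative, not from the original document):

```
# dovecot.conf sketch (hypothetical paths): enable the ACL plugin and
# point it at a global ACL directory.
mail_plugins = $mail_plugins acl

plugin {
  acl = vfile:/etc/dovecot/global-acls
}

# Global ACL file for the archive namespace: grant only lookup, read and
# status rights, so messages cannot be expunged or flagged as deleted.
#   owner lrs
```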

Incoming e-mails

Incoming e-mails are delivered to Dovecot via LMTP. The “Vault” plugin performs the job of storing the emails in the archive namespace first, and if that succeeds it begins the actual mail delivery, including running any Sieve scripts.

Outgoing e-mails

The SMTP submission server (e.g. Postfix) is used to catch outgoing e-mails, which are then BCCed to Dovecot's LMTP on another port. LMTP then delivers these e-mails to the archive namespace as usual. The LMTP service for outgoing e-mails is configured to execute a Sieve filtering script that stores the email in the archive namespace and adds the “\Seen” flag to it.

Vault Encryption

Emails stored in the archive namespace can be encrypted using the “mail-crypt” plugin. Encryption will be done using Elliptic Curve Cryptography.

Dovecot Migration Framework


The Dovecot migration framework controls mailbox content migration from a centralized management system. The framework provides a centralized repository to limit the number of Backends running mailbox dsync processes and the number of parallel dsync processes each Backend is allowed to run. It also gathers statistics from each Backend. The framework provides a JSON/REST API for integration into the customer's existing environment and management tools, and can optionally send statistics to a Graphite server for progress visualisation.

The Migration Framework can be used for both Content and User Migration phases.

Key features

  • Queuing and distribution of accounts to migrate with locking for multiple Backends running the actual migration.
  • Centralized status display
  • Centralized statistics collection
  • Centralized control of number of Backends doing the actual content copy
  • Centralized control on number of parallel copies each Backend is allowed to run
  • Centralized log collection
  • Progress visualization with 3rd party Graphite possible

Dovecot Download and Install

The repository access is available only by using a customer-specific username and password. We preserve the right to suspend a user account if the maximum number of servers (50) is exceeded. A warning email is sent to the account owner before this happens. If you need more than the allowed number of connections, don't hesitate to contact our sales (sales@dovecot.fi).

If you have any problems with the object storage plugins, send your report to <qa(at)dovecot.fi>. You can report other Dovecot related bugs to our public community mailing list <dovecot(at)dovecot.org>.

Repository configuration for RedHat and CentOS


Configure the repositories in a yum repository file (typically /etc/yum.repos.d/dovecot.repo). Only the name lines are shown here; the baseurl lines containing your customer-specific credentials are omitted:

name=RHEL $releasever - $basearch - Dovecot Oy
name=RHEL $releasever - $basearch - Dovecot 3rd party Packages

The stable-2.2 repository points to the latest stable Dovecot version. Only the latest patch releases are stored in this repository. If you want to install older releases, you need to refer explicitly to the minor version number. For example, v2.2.20.1 can still be installed from the stable-2.2 repository, but to install v2.2.19.2 (or v2.2.19.1) you need to change stable-2.2 to 2.2.19:


You can see all the available Dovecot enterprise packages with:

yum search dovecot-ee

Commonly you want to install at least:

yum install dovecot-ee dovecot-ee-pigeonhole dovecot-ee-managesieve

Note that the “dovecot-ee-obox” package still points to the obsolete obox version 1. For now you need to install the “dovecot-ee-obox2” package explicitly.

See also /etc/sysconfig/dovecot for some startup settings.

Repository configuration for Debian and Ubuntu

Install the apt repository gpg key:

wget -O - https://apt.dovecot.fi/dovecot-gpg.key | apt-key add -

Add your distribution-specific line to /etc/apt/sources.list.d/dovecot.list:

  • Debian 6.0 Squeeze:
deb https://USERNAME:PASSWORD@apt.dovecot.fi/stable-2.2/debian/squeeze squeeze main
deb https://USERNAME:PASSWORD@apt.dovecot.fi/3rdparty/debian/squeeze squeeze main
  • Debian 7.0 Wheezy:
deb https://USERNAME:PASSWORD@apt.dovecot.fi/stable-2.2/debian/wheezy wheezy main
deb https://USERNAME:PASSWORD@apt.dovecot.fi/3rdparty/debian/wheezy wheezy main
  • Debian 8.0 Jessie:
deb https://USERNAME:PASSWORD@apt.dovecot.fi/stable-2.2/debian/jessie jessie main
deb https://USERNAME:PASSWORD@apt.dovecot.fi/3rdparty/debian/jessie jessie main
  • Ubuntu 12.04 Precise:
deb https://USERNAME:PASSWORD@apt.dovecot.fi/stable-2.2/ubuntu/precise precise main
deb https://USERNAME:PASSWORD@apt.dovecot.fi/3rdparty/ubuntu/precise precise main
  • Ubuntu 14.04 Trusty:
deb https://USERNAME:PASSWORD@apt.dovecot.fi/stable-2.2/ubuntu/trusty trusty main
deb https://USERNAME:PASSWORD@apt.dovecot.fi/3rdparty/ubuntu/trusty trusty main

The stable-2.2 repository points to the latest stable Dovecot version. Only the latest patch releases are stored in this repository. If you want to install older releases, you need to refer explicitly to the minor version number. For example, v2.2.20.1 can still be installed from the stable-2.2 repository, but to install v2.2.19.2 (or v2.2.19.1) on Ubuntu Trusty you need to change stable-2.2 to 2.2.19:

deb https://USERNAME:PASSWORD@apt.dovecot.fi/2.2.19/ubuntu/trusty trusty main

You can see all the available Dovecot enterprise packages with:

apt-cache search dovecot-ee

Commonly you want to install at least:

apt-get install dovecot-ee-core dovecot-ee-imapd dovecot-ee-pop3d dovecot-ee-lmtpd dovecot-ee-sieve dovecot-ee-managesieved

Note that the “dovecot-ee-obox” package still points to the obsolete obox version 1. For now you need to install the “dovecot-ee-obox2” package explicitly.

Important: You need to enable Dovecot startup by setting ENABLED=y in /etc/default/dovecot. This file also contains some other startup settings.
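As a sketch, the setting can be flipped with sed. The example below works on a temporary copy so it is safe to try; on a real server you would point sed at /etc/default/dovecot itself:

```shell
# Demonstrate enabling Dovecot startup by rewriting the ENABLED line.
cfg=$(mktemp)
printf 'ENABLED=n\n' > "$cfg"             # stand-in for /etc/default/dovecot
sed -i 's/^ENABLED=.*/ENABLED=y/' "$cfg"
grep '^ENABLED=' "$cfg"                   # prints: ENABLED=y
rm -f "$cfg"
```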

Administration Guide

Dovecot Cluster Architecture

Dovecot Proxy

Dovecot Proxies are IMAP/POP3/LMTP proxies that are typically only needed in multi-site setups. Their job is simply to look up the user's current site from the passdb and proxy the connection to that site's Dovecot Director cluster. The user is also typically authenticated at this stage.

If the storage between sites is replicated, it's possible to do a site failover. Deciding when to fail over can be either a manual process or done by an automated watchdog. The failover shouldn't be done too quickly, because it causes a large load spike when many users start logging into the failover site, where they have no local caches. So typically the watchdog script should wait at least 5 minutes to see if the network between sites comes back up before deciding that the other site is down.

Once it’s decided that a site failover should be done, the passdb needs to be updated to switch the affected users’ site to the fallback site. Normally this is done with LDAP passdb by keeping track of username -> virtual site ID and virtual site ID -> IP address. Each physical site would have about 10-100 virtual site IDs. On failover the failed site’s virtual IDs’ IP addresses are updated. This way only a few records are updated instead of potentially millions of user records. Having multiple virtual site IDs per physical site has two advantages: 1) If there are more than two physical sites, it allows distributing the failed site’s users to multiple failover sites. 2) When the original site comes back up the users can be restored to it one virtual site at a time to avoid a load spike.

Note that during a split brain both sites may decide that the other site isn’t available and redirect all incoming connections to the local site. This means that both sites could modify the same mailbox simultaneously. With the Dovecot Object Storage backend this behavior is fine. When split brain is over the changes will be merged, so there is no data loss. The merging reduces the performance temporarily though, so it shouldn’t be relied on during normal operation.

If you wish to reduce the amount of needed hardware, Dovecot Proxies don’t necessarily need to be separated from Dovecot Directors. A single Dovecot instance can perform both operations. The only downside is that it slightly complicates understanding what the server is doing.

Dovecot Director

Dovecot Directors are IMAP/POP3/LMTP proxies that do load balancing and high-availability for the Dovecot Backends. They perform a job similar to a stateful load balancer: The main difference between a regular load balancer and Dovecot Director is that the director makes sure that a single user is never accessed by different backends at the same time. This is needed to keep the performance good and to avoid potential problems. In front of Dovecot directors there needs to be a load balancer to provide high availability for them.

Dovecot Directors connect to each other over TCP in a ring formation (each director connects to the next one, and the last one connects back to the first). This ring is used to distribute the current global state of the cluster, so any of the directors can die without losing state.

Normally the directors determine the backend server for a user based on the MD5 hash of the username. This usually gives a good distribution of users to backends and it's very efficient for the directors: usually a director can determine the correct backend for a user without talking to any other directors. Only in some special situations, such as when a backend has recently been removed, will the director cluster temporarily perform worse with slightly higher latency, because the directors need to talk to each other to determine the current state. Getting back to normal usually takes less than a second.

When a user logs in, the user will be assigned to a specific backend if it’s not already done. This assignment will last for 15 minutes after the user’s last session has closed. Afterwards it’s possible that the user may end up in a different backend. It’s also possible to explicitly move users around in the cluster (doveadm director move).

It’s possible to assign different amounts of work to different backend servers by changing their “vhost count”. By default each backend has it set to 100. If you want one backend to handle double the number of users, set its vhost count to 200; for half the number of users, set it to 50. For example, if the vhost counts for 3 backends are A=50, B=100, C=200, the probabilities of the backends getting connections are:

  • A: 50/(50+100+200) = 14%
  • B: 100/(50+100+200) = 29%
  • C: 200/(50+100+200) = 57%

Changing the vhost count affects only newly assigned users, so it doesn’t have an immediate effect. Running doveadm director flush causes the existing connections to be moved immediately.
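As a sketch, the probabilities above are just each backend's vhost count divided by the cluster total (the A/B/C counts are the hypothetical values from the example; note that shell integer division truncates, so B shows 28 where the text rounds to 29):

```shell
# Probability of each backend receiving a newly assigned user is its
# vhost count divided by the sum of all vhost counts.
a=50; b=100; c=200
total=$((a + b + c))                      # 350
printf 'A: %d%%\n' $((100 * a / total))   # 14
printf 'B: %d%%\n' $((100 * b / total))   # 28 (28.57 truncated)
printf 'C: %d%%\n' $((100 * c / total))   # 57
```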

Dovecot Backend

The Dovecot Backend does all the hard work of reading and writing mails to storage and handling all of the IMAP/POP3/LMTP protocols. Dovecot Backend is connected to the object storage where users’ mails and mail indexes are stored.

As a user connects to Dovecot to read mail, the user's mail indexes are fetched from the object storage and cached in the local filesystem. The mail indexes are updated locally while the user makes mailbox modifications. The modified local indexes are uploaded back to object storage in the background every 5 minutes, except for LMTP mail deliveries. For LMTP mail deliveries the indexes are uploaded only every 10th mail (the obox_max_rescan_mail_count setting) to avoid unnecessary object storage writes. The index updates for LMTP deliveries don't contain anything that can't be recreated from the mails themselves.

Dovecot Backends are stateless, so should a server crash, the only thing lost for the logged-in users is their recent message flag updates. When a user next logs in, to another backend, the indexes are fetched again from the object storage into the local cache. Because LMTP mail deliveries don't update indexes immediately, the email objects are also listed once for each accessed folder to find out if there are any newly delivered mails that don't exist in the index yet.

Dovecot backends attempt to do as much in the local cache as possible to minimise object storage I/O; the larger the local cache, the less object storage I/O there is. Typically each backend should have at least 2 MB of local cache per active user (e.g. if there are 100 000 users per backend who are receiving mails or accessing mails within 15 minutes, there should be at least 200 GB of local cache on the backend). It's important that the local cache doesn't become a bottleneck, so ideally it would use SSDs. Alternatives are an in-memory disk (tmpfs) or a filesystem on a SAN that provides enough disk IOPS. (NFS should not be used for the local cache.) Dovecot never fsyncs when writing to the local cache, so after a server crash the cache may be inconsistent or corrupted. This is why the caches should be deleted at server boot.
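The cache-sizing rule of thumb above is a simple multiplication; as a sketch, using the figures from the text:

```shell
# Rough local cache sizing: ~2 MB per active user (per the text).
users=100000          # active users per backend
mb_per_user=2
echo "$((users * mb_per_user / 1000)) GB"   # prints: 200 GB
```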

Password databases (passdb) and User Databases (userdb)

Dovecot splits all authentication lookups into two categories:

  • passdb lookups most importantly authenticate the user. They also provide any other pre-login information needed for the user, such as:
    • Which server the user is proxied to.
    • Whether the user is allowed to log in at all (temporarily or permanently).
  • userdb lookups retrieve post-login information specific to the user. This may include:
    • Mailbox location information
    • Quota limit
    • Overriding settings for the user (almost any setting can be overridden)
Passdb lookups are done by:

                       Dovecot Director    Dovecot Backend
  IMAP & POP3 logins         yes                yes
  LMTP mail delivery         yes                 -
  doveadm commands           yes                 -

Userdb lookups are done by:

                       Dovecot Director    Dovecot Backend
  IMAP & POP3 logins          -                 yes
  LMTP mail delivery          -                 yes
  doveadm commands            -                 yes

Prefetch Userdb

During IMAP & POP3 logins to a Dovecot backend, both passdb and userdb lookups are performed. To avoid two LDAP lookups, a prefetch userdb is used. This simply means that the passdb lookup is configured to return both passdb and userdb fields, with the userdb fields prefixed with the “userdb_” string. This slightly complicates the configuration, because userdb changes must then be made to both the passdb and userdb configuration.
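A minimal sketch of what this can look like with an LDAP passdb (the LDAP attribute names and returned fields here are illustrative, not from the original document):

```
# dovecot-ldap.conf.ext sketch: the passdb lookup also returns userdb
# fields, prefixed with "userdb_", so no separate userdb LDAP lookup
# is needed at login time.
pass_attrs = uid=user, userPassword=password, \
  homeDirectory=userdb_home, uidNumber=userdb_uid, gidNumber=userdb_gid

# dovecot.conf: the prefetch userdb consumes the fields returned above.
userdb {
  driver = prefetch
}
```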

Object Storage Plugin

Dovecot obox format is split into two main categories: mail object handling and index object handling.

Mail Objects

The mail object handling is easy enough: Each mail is stored in its own separate object. The object name is a uniquely generated name, which we call object ID (OID). The mails are also cached locally using a fscache wrapper, which uses a global cache directory with a configurable max size. If the object storage access is fast, this cache doesn’t need to be very large, but it should still exist. A small cache that usually stays in memory is likely good (e.g. 1 GB).

The mail object names look like: user-hash/user@domain/mailboxes/folder-guid/oid

For example: b5/899/user@example.com/mailboxes/00d7d12ea08a3153175e0000dfbea952/d88ff1001d4bf753a1b800001accfe22
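As an illustration, the components of the example object name above can be pulled apart with standard shell tools (the first two path components together form the user-hash):

```shell
# Split the example mail object path into its components.
path="b5/899/user@example.com/mailboxes/00d7d12ea08a3153175e0000dfbea952/d88ff1001d4bf753a1b800001accfe22"
user=$(echo "$path" | cut -d/ -f3)         # user@domain
folder_guid=$(echo "$path" | cut -d/ -f5)  # folder GUID
oid=$(echo "$path" | cut -d/ -f6)          # object ID (OID) of the mail
echo "user=$user"
echo "folder-guid=$folder_guid"
echo "oid=$oid"
```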

Index Objects

Dovecot obox format uses the normal Dovecot index file formats, except they are packed into index bundles when they are stored to object storage. The indexes are written lazily to the object storage in order to minimize the object storage I/O.

There are two types of index bundles: base bundles and diff bundles. The base bundles may be large and are updated somewhat rarely. The diff bundles contain the latest changes since the base bundle and are the ones usually updated. This is done to avoid constantly uploading large index objects when very little has changed.

All objects are created with unique object names. This guarantees that two servers can't accidentally overwrite each other's changes. Instead, what happens is that there may be two conflicting index bundle objects. If Dovecot notices such a conflict, it merges the conflicting indexes using the dsync algorithm without data loss. This allows active-active multi-site setups to run safely during a split brain.

The base index object names look like: user-hash/user@domain/mailboxes/folder-guid/idx/bundle.timestamp-secs.timestamp-usecs.unique-id

For example: b5/899/user@example.com/mailboxes/00d7d12ea08a3153175e0000dfbea952/idx/bundle.53f74dc2.0fbcf.c96d802b5d4df75307bb00001accfe22

The diff index object names look the same, except another “-unique-id” is appended after the base bundle name.

Example Use Cases

Example 1: Receiving a Mail

  1. Mail is sent by a user using an email client, which sends the mail to the user’s own MTA (Mail Transport Agent).
  2. Mail is received by the destination user’s MTA.
  3. MTA performs antispam and antivirus checks and potentially rejects the mail or tags it with extra headers to indicate it’s spam.
  4. Mail is sent to the Dovecot Proxy with LMTP protocol.
    1. The proxy is chosen by load balancer.
  5. Dovecot Proxy performs a passdb lookup (from LDAP) to find out the user’s primary site.
  6. Dovecot Proxy forwards the LMTP connection to the correct site’s director cluster (local or remote).
    1. The director is chosen by load balancer (e.g. HAproxy).
  7. Dovecot Director looks up or assigns a Dovecot backend for the user and forwards the LMTP connection to the Dovecot Backend.
  8. Dovecot Backend performs userdb lookup to find where and how to save the mail.
  9. Dovecot Backend saves the mail to object storage:
    1. Check if the user’s local cache is up-to-date (list user’s index objects)
    2. If not, fetch the user’s index objects to local cache (1-2 GETs)
    3. Check if the INBOX exists in local cache
      1. If not, fetch the INBOX’s index objects to local cache (1-2 GETs). Also list email objects in INBOX to find any new emails that don’t exist in the index yet (backend failover).
    4. For each new email object found, lookup their GUID and add it to index (1 HEAD per new email)
    5. There is normally a maximum of 10 new email objects (obox_max_rescan_mail_count setting)
    6. Upload the mail to object storage
      1. Write the mail to local fscache
    7. Modify the local indexes
      1. Usually the indexes aren’t uploaded, but every 10th mail (obox_max_rescan_mail_count setting) the indexes are uploaded to object storage (1 PUT + 1 DELETE)

Example 2: Reading a Mail

  1. User connects to Dovecot cluster with an IMAP client, possibly via a webmail.
  2. The IMAP client connects to Dovecot Director IMAP Proxy
    1. The proxy is chosen by load balancer.
  3. Dovecot Proxy performs a passdb lookup (from LDAP) to find out the user’s primary site.
    1. During migration the passdb lookup would direct non-migrated users to the old system.
  4. Dovecot Proxy forwards the IMAP connection to the correct site’s director cluster (local or remote).
    1. The director is chosen by load balancer (e.g. HAproxy).
  5. Dovecot Director looks up or assigns a Dovecot backend for the user and forwards the IMAP connection to the Dovecot Backend.
  6. Dovecot Backend performs a userdb lookup to find where and how to access user’s mails.
  7. Dovecot Backend checks if the user’s local cache is up-to-date (list user’s index objects)
    1. If not, fetch the user’s root index objects to local cache (1-2 GETs)
  8. The IMAP client opens INBOX folder.
    1. Dovecot Backend checks if the INBOX exists in local cache
      1. If not, fetch the INBOX’s index objects to local cache (1-2 GETs). Also list email objects in INBOX to find any new emails that don’t exist in the index yet (backend failover).
        1. For each new email object found, lookup their GUID and add it to index (1 HEAD per new email)
        2. There is normally a maximum of 10 new email objects (obox_max_rescan_mail_count setting).
  9. The IMAP client fetches metadata (e.g. headers) for new emails.
    1. Dovecot usually replies to these from the locally cached INBOX indexes without object storage access.
  10. The IMAP client fetches bodies for the new email(s).
    1. Dovecot looks up if the mail is already in local fscache and serves from there if possible.
    2. Otherwise, Dovecot retrieves the mail from object storage and writes it to local fscache (1 GET)
  11. The IMAP client sets a \Seen flag for a mail.
    1. Dovecot updates the local index.
    2. The modified index will be uploaded to object storage within the next 5 minutes (1 PUT + 1 DELETE)
  12. The IMAP client logs out.

Director Administration

Directors can be managed using the “doveadm director” commands. See “doveadm help director” man page for the full command parameters.

Backend Modifications

The backends can be changed with:

  • doveadm director add: Add a new backend or change an existing one’s vhost count.
    • New servers should also be added to the director_mail_servers setting in dovecot.conf so a cluster restart will know about it.
  • doveadm director update: Update the vhost count of an existing backend. The only difference to “doveadm director add” is that it’s not possible to accidentally add a new backend.
  • doveadm director up: Mark a backend as being “up”. This is the default state. This is usually updated automatically by dovemon.
  • doveadm director down: Mark a backend as being “down”. This is effectively the same as changing the vhost count to 0. This is usually updated automatically by dovemon.
  • doveadm director remove: Remove a backend entirely. This should be used only if you permanently remove a server.
  • doveadm director flush: Move users in one specific backend, or in all backends, to the backend matching the user’s current hash. This is needed after the “down” command, or when setting the vhost count to 0, to actually remove the existing user assignments from the host.

The backend health checking is usually done by the dovemon script, which automatically scans the backends and determines if they are up or down and uses these doveadm commands to update the backend states. See the “Dovecot Pro Director Configuration Manual” for more information about dovemon.

You can see the current backend state with doveadm director status command without parameters. If you want to see which backend a user is currently assigned to and where it may end up being in future, use doveadm director status user@domain.

Cleanly Removing Backend

The cleanest way to take down a working backend server is to:

  • doveadm director update ip-addr 0
    • Stop sending new users to this backend. Wait here as long as possible for the existing connections to die (at least a few minutes would be ideal).
  • On the backend server: doveadm metacache flushall
    • Flush all pending metacache changes to object storage.
  • doveadm director flush ip-addr
    • Forget about the last users assigned to the backend and move them elsewhere.
  • On the backend server: doveadm metacache flushall
    • Final flush to make sure there are no more metacache changes.
  • If the server is permanently removed:
    • doveadm director remove ip-addr
    • Remove the server from the director_mail_servers setting in dovecot.conf.
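The removal sequence above can be sketched as a small shell script. This is a hypothetical helper, not part of Dovecot: the backend IP, the ssh-based remote flush, and the drain delay are all assumptions to adjust per site. The run wrapper only prints each command so the sequence can be reviewed as a dry run first.

```shell
#!/bin/sh
# Sketch of the clean backend removal steps (hypothetical IP and delay).
# "run" prints each command; swap in run() { "$@"; } to actually execute.
run() { echo "+ $*"; }

BACKEND=10.1.2.3          # backend being taken down (assumption)
DRAIN_SECS=300            # how long to wait for connections to die

run doveadm director update "$BACKEND" 0       # stop sending new users here
run sleep "$DRAIN_SECS"                        # let existing connections drain
run ssh "$BACKEND" doveadm metacache flushall  # flush pending index changes
run doveadm director flush "$BACKEND"          # move remaining users elsewhere
run ssh "$BACKEND" doveadm metacache flushall  # final flush
# Only if the server is being permanently removed:
run doveadm director remove "$BACKEND"         # then drop it from director_mail_servers
```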

Director Ring Modifications

A new director server is added by:

  • Add the server to the director_servers setting so that the director is remembered even after a cluster restart.
  • Use doveadm director ring add to add the director to an already running ring.

A director server can be removed with doveadm director ring remove. You can see the current ring state with doveadm director ring status.

Director Disaster Recovery

Director servers share the same global state. This means that if there is a bug, it will probably end up affecting the entire director cluster. Although the director is nowadays pretty well tested, something new and unexpected may still happen. This chapter explains how to fix such situations if they ever occur.

If the director ring has somehow become confused and the ring’s connections don’t look correct, you can restart the directors that are connected to the wrong servers (service dovecot restart). Directors should always automatically retry connecting to their correct neighbors after failures, so this manual restarting isn’t normally necessary.

Full Director State Reset

If the directors start crashing or logging errors and failing user logins, there are two ways the service could be restored:

  • doveadm director flush -F resets all the users’ state immediately. Note that this command shouldn’t be used unless absolutely necessary, because it immediately forgets all the existing user assignments without killing any existing connections. This means that any active user could be simultaneously accessed by different backends.
  • A safer way is to shut down the entire director cluster and start it back up from a zero state. This may also be necessary if the forced director flush doesn’t work for some reason. Note that it’s not enough to simply restart each director separately, because after the restart it will receive the earlier state from the next running director. All the directors must be shut down first.

Mailbox Administration

Doveadm Mailbox Commands

These commands should be run on one of the Dovecot directors. The director is then responsible for forwarding the command to be run in the correct backend. This guarantees that two backend servers don’t attempt to modify the same user’s mailbox at the same time (which might cause problems).

  • doveadm fetch: Fetch mail contents or metadata.
    • doveadm search does the same as doveadm fetch ‘mailbox-guid uid’. It’s useful for quick checks where you don’t want to write the full fetch command.
  • doveadm copy & doveadm move: Copy or move mails to another folder, potentially for another user.
  • doveadm deduplicate: Deduplicate mails either by their GUID or by Message-Id: header.
  • doveadm expunge: Expunge mails (without moving them to Trash).
  • doveadm flags add/remove/replace: Update IMAP flags for a mail.
  • doveadm force-resync: Try to fix a broken mailbox (or verify that all is ok).
  • doveadm index: Index any mails that aren’t indexed yet. Mainly useful if full text search indexing is enabled.
  • doveadm mailbox list: List a user’s folders.
  • doveadm mailbox create/delete/rename: Modify folders.
  • doveadm mailbox subscribe/unsubscribe: Modify IMAP folder subscriptions.
  • doveadm mailbox status: Quickly look up folder metadata (# of mails, # of unseen mails, etc).

Object Storage Mailbox Format Administration

The object storage plugin administration is mainly related to making sure that the mail cache and the index cache perform efficiently and they don’t take up all the disk space.

The mail cache size is specified in the plugin { obox_fs } setting as the parameter to fscache. Usually with a fast object storage this should be a relatively small value, such as 1 GB. It’s not a user-visible problem if the fscache runs out of disk space (although it will log some errors in that case), so it might be a good idea to use a separate partition for it. If needed, you may also manually delete parts or all of the fscache with the standard rm command. Afterwards you should run doveadm fscache rescan so the fscache index knows the correct size again.

The index cache size is specified in the metacache_max_size setting. This should ideally be as large as possible to reduce both object storage GETs for the indexes and also local filesystem writes when the indexes are unpacked to local cache. You can also manually clean some older indexes from cache by running doveadm metacache clean command.

If multiple backends do changes to the same mailbox at the same time, Dovecot will eventually perform a dsync-merge for the indexes. Due to dsync being quite a complicated algorithm there’s a chance that the merging may trigger a bug/crash that won’t fix itself automatically. If this happens, the bug should be reported to get it properly fixed, but a quick workaround is to run: doveadm -o plugin/metacache_disable_merging=yes force-resync -u user@domain INBOX

Moving/Migrating/Converting/Exporting/Importing Mailboxes

Almost everything related to moving/converting mail accounts can be done using the dsync tool. It can do either one-way or two-way synchronization of mailboxes. See doveadm help sync and doveadm help backup for more information. http://wiki2.dovecot.org/Migration/Dsync also describes how to migrate mails from other IMAP/POP3 servers.

Mails can also be imported to an existing mailbox using the doveadm import command. The new mails will be appended to their respective folders, creating the folders if necessary. It’s also possible to give a prefix for the new folders, such as “backup-restored-20140824/”.

Mails can also be continuously replicated between two Dovecot servers using the replicator service. See http://wiki2.dovecot.org/Replication for more information.


Session ID Tracking

Each IMAP, POP3 and LMTP connection has its own unique session ID. This ID is logged on all the lines and passed between Dovecot services, which allows tracking it all the way through the directors to the backends and their various processes. The session IDs look like <ggPiljkBBAAAAAAAAAAAAAAAAAAAAAAB>.


Logging

If problems are happening, it’s much easier to see what’s going wrong if all the errors are logged into a separate log file, so you can quickly see all of them at once. With rsyslog you can configure this with:

mail.* -/var/log/dovecot.log
mail.warning;mail.error;mail.crit -/var/log/dovecot.err

Another thing that often needs to be changed is to disable flood control in rsyslog. Dovecot may log a lot, especially with debug logging enabled, and rsyslog’s default settings often lose log messages.

Another way to look at recent Dovecot errors is to run doveadm log error, which shows up to the last 1000 errors logged by Dovecot since it was last started.

Authentication Debugging

Most importantly set auth_debug=yes, which makes Dovecot log a debug line for just about anything related to authentication. If you’re having problems with passwords, you can also set auth_debug_passwords=yes which will log them in plaintext.

For easily testing authentication, use: doveadm auth test user@domain password

For looking up userdb information for a user, use: doveadm user user@domain

For simulating a full login with both passdb and userdb lookup, use: doveadm auth login user@domain password

Mail Debugging

Setting mail_debug=yes will make Dovecot log all kinds of things about mailbox initialization. Note that it won’t increase error logging at all, so if you’re having some random problems it’s unlikely to provide any help.

If there are any problems with a mailbox, Dovecot should automatically fix them. If that doesn’t work for any reason, you can also manually request fixing a mailbox by running: doveadm force-resync -u user@domain INBOX. Replace INBOX with the folder that is having problems, or use ‘*’ if all folders should be fixed.

Users may sometimes complain that they have lost emails. The problem is almost always that this was done by one of the user’s email clients accidentally. Especially accidentally configuring a POP3 client to a new device that deletes the mails after downloading them. For this reason it’s very useful to enable the mail_log plugin and enable logging for all the events that may cause mails to be lost. This way it’s always possible to find out from the logs what exactly caused messages to be deleted.

If you’re familiar enough with Dovecot’s index files, you can use the doveadm dump command to look at their contents in human-readable format and possibly determine if there is something wrong in them.


Crashes

Dovecot has been designed to rather crash than continue in a potentially unsafe manner that could cause data loss. Most crashes happen just once and retrying the operation will succeed, so even if you see them it’s usually not a big problem. Of course, all crashes are bugs that should eventually be fixed, so feel free to report them even if they’re not causing any visible problems. Crash reports are best accompanied by a gdb backtrace as described in http://dovecot.org/bugreport.html

Instead of crashing, there have been some rare bugs where a Dovecot process could go into an infinite loop, causing it to use 100% CPU. If you detect such a process, it would again be very helpful to get a gdb backtrace of the running process:

  • gdb -p pid-of-process
  • bt full

After getting the backtrace, you can just kill -9 the process.


Quota

User’s current quota usage can be looked up with: doveadm quota get -u user@domain

User’s current quota may sometimes be wrong for various reasons (typically only after some other problems). The quota can be recalculated with: doveadm quota recalc -u user@domain


Sieve

When Sieve scripts are uploaded using the ManageSieve service, they’re immediately compiled and the script upload will fail if any problems were detected. Not all problems can be detected at compile time however, so it’s also possible that the Sieve script will fail during runtime. In this case the errors will be written to the .dovecot.sieve.log file (right next to the .dovecot.sieve file itself in user’s home directory).

Stress Testing

The easiest way to stress test Dovecot is to use the imaptest tool: http://imapwiki.org/ImapTest. It can be used to flood a server with random commands, and it can also attempt to mimic a large number of real-world clients.

API status: In Development

Dovecot Advanced Documentation

PLEASE NOTE: This documentation is a work in progress and not finalized yet.

Dovecot Pro Backend Configuration

See dovecot-backend.conf for the full example configuration file. This document explains the settings grouped into logical sections.

Note that this is not an exhaustive list of options that may need to be set for a Backend.

Generic Settings

protocols = imap pop3 lmtp sieve

Protocols to enable.

verbose_proctitle = yes

Show state information in process titles (in “ps” output).

mail_log_prefix = "%s(%u)<%{session}>: "

Include the session string in all log messages to make it easier to match log lines together.


Authentication Settings

Note that the Proxy or Director has already verified the authentication (in the reference Dovecot architecture the password has been switched to a master password at this point), so we don’t really need to do it again. We could, in fact, avoid the password checking entirely, but for extra security it’s still done in this document.

auth_mechanisms = plain login

Enables the PLAIN and LOGIN authentication mechanisms. The LOGIN mechanism is obsolete, but still used by old Outlooks and some Microsoft phones.

service anvil {
  unix_listener anvil-auth-penalty {
    mode = 0
  }
}

Disable authentication penalty. Proxy/Director already handled this.

auth_cache_size = 100M

Specifies the amount of memory used for authentication caching (passdb and userdb lookups).

login_trusted_networks =

Space-separated list of IP/network ranges that contain the Dovecot Directors. This setting allows Directors to forward the client’s original IP address and session ID to the Backends.

mail_max_userip_connections = 10

Maximum number of simultaneous IMAP/POP3 connections allowed for the same user from the same IP address (10 = 10 IMAP + 10 POP3)

ssl = no

disable_plaintext_auth = no

Proxy/Director already decrypted the SSL connections. The Backends will always see only plaintext connections.

LDAP Authentication

See http://wiki.dovecot.org/AuthDatabase/LDAP for more details.

passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}

userdb {
  driver = prefetch
}

userdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}


These enable LDAP to be used as passdb and userdb. The userdb prefetch allows IMAP/POP3 logins to do only a single LDAP lookup by returning the userdb information already in the passdb lookup. http://wiki.dovecot.org/UserDatabase/Prefetch has more details on the prefetch userdb. 

LDAP Backend Configuration

The included dovecot-ldap-backend.conf.ext can be used as a template for /etc/dovecot/dovecot-ldap.conf.ext. Its most important settings are:

hosts = ldap.example.com

dn = cn=admin,dc=example,dc=com

dnpass = secret

base = dc=example,dc=com

Configure how the LDAP server is reached.

auth_bind = yes

Use LDAP authentication binding for verifying users’ passwords.

blocking = yes

Use auth worker processes to perform LDAP lookups in order to use multiple concurrent LDAP connections. Otherwise only a single LDAP connection is used.

pass_attrs = \

  =user=%{ldap:mailRoutingAddress}, \

  =password=%{ldap:userPassword}, \


Normalize the username to exactly the mailRoutingAddress field’s value regardless of how the pass_filter found the user. The userdb_quota_rule is used by the userdb prefetch to return the userdb values. If other userdb fields are wanted, they must be placed in both user_attrs (without the “userdb_” prefix) and pass_attrs (with the “userdb_” prefix).

user_attrs = \

  =user=%{ldap:mailRoutingAddress}, \


Returns userdb fields when prefetch userdb wasn’t used (LMTP & doveadm). The username is again normalized in case user_filter found it via some other means.

pass_filter = (mailRoutingAddress=%u)

user_filter = (mailRoutingAddress=%u)

How to find the user for passdb and userdb lookups.

iterate_attrs = mailRoutingAddress=user

iterate_filter = (objectClass=smiMessageRecipient)

How to iterate through all the valid usernames.

Mail Location Settings (Object Storage)

See http://wiki.dovecot.org/MailLocation for more details.

Note: these settings are assuming that message data is being stored in object storage (obox mailbox).  These settings should not be used if a block storage driver (e.g. mdbox) is being used. 

mail_home = /var/vmail/%2Mu/%u

Specifies the location for the local mail cache directory. This will contain Dovecot index files and it needs to be high performance (e.g. SSD storage).  Alternatively, if there is enough memory available to hold all concurrent users’ data at once, a tmpfs would work as well. The “%2Mu” takes the first 2 chars of the MD5 hash of the username so everything isn’t in one directory.
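As an illustration of the “%2Mu” expansion described above, the hashed home directory can be computed by hand. This is a sketch using coreutils md5sum, not Dovecot’s actual variable-expansion code:

```shell
# Compute mail_home = /var/vmail/%2Mu/%u manually (illustration only).
USERNAME=user@example.com
MD5=$(printf '%s' "$USERNAME" | md5sum | cut -c1-32)   # MD5 hex of the username
PREFIX=$(printf '%s' "$MD5" | cut -c1-2)               # %2Mu: first 2 hex chars
MAIL_HOME="/var/vmail/$PREFIX/$USERNAME"
echo "$MAIL_HOME"
```

With 2 hex characters the users are spread over 256 top-level directories.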

mail_uid = email

mail_gid = email

UNIX UID & GID which are used to access the local cache mail files.

mail_fsync = never

We can disable fsync()ing for better performance. It’s not a problem if locally cached index file modifications are lost.

mail_temp_dir = /tmp

Directory where downloaded/uploaded mails are temporarily stored. Ideally all of these would stay in memory and never hit the disk, but in some situations the mails may have to be kept for a somewhat longer time and they end up on disk. So there should be enough disk space available in the temporary filesystem.

mailbox_list_index = yes

Enable mailbox list indexes. This is required with obox format.

Namespace Settings

See http://wiki.dovecot.org/Namespaces for more details.

namespace inbox {
  prefix =
  separator = /
  inbox = yes
}


Configure the INBOX namespace with specified IMAP namespace prefix and separator. When migrating from an existing system the prefix and separator must match exactly what the old system used. Otherwise clients may download all mails again or become otherwise confused.

namespace inbox {
  mailbox Drafts {
    special_use = \Drafts
    auto = create
  }
  mailbox Junk {
    special_use = \Junk
    auto = create
  }
  mailbox Trash {
    special_use = \Trash
    auto = create
  }
  mailbox Sent {
    special_use = \Sent
    auto = create
  }
}



These can be used to automatically create some default folders for all users with the auto=create setting. The autocreated folders aren’t automatically subscribed, though; that can be done with the auto=subscribe setting. The autocreated/autosubscribed folders can’t be deleted or unsubscribed by the users.

The special_use setting specifies the IMAP SPECIAL-USE (RFC 6154) flags for the folders. Some newer IMAP clients can use these to automatically configure themselves to use the server-provided default folder names. See http://imapwiki.org/SpecialUse

Obox Settings

mail_plugins = $mail_plugins obox

Enable obox plugin.

mail_prefetch_count = 10

How many mails to download in parallel from object storage. A higher number improves the performance, but also increases the local disk usage and number of used file descriptors. This setting is also the default for obox_max_parallel_* settings below.

plugin {
  # Store object IDs to Dovecot indexes (will become default later)
  obox_use_object_ids = yes

  # How much disk space metacache can use before old data is cleaned up.
  # This should usually fill up most of the available disk space.
  metacache_max_space = 200G

  # Avoid uploading indexes at the cost of more GETs on failures
  # (will become default later)
  metacache_delay_uploads = yes

  # How often to upload modified indexes to object storage?
  # This is done in the background. Default 5 min.
  #metacache_upload_interval = 5min

  # If delayed index uploads are enabled, upload indexes anyway
  # after this many mails have been saved. Default 10.
  #obox_max_rescan_mail_count = 10

  # Override mail_prefetch_count setting for write, copy and delete
  # operations.
  #obox_max_parallel_writes = $mail_prefetch_count
  #obox_max_parallel_copies = $mail_prefetch_count
  #obox_max_parallel_deletes = $mail_prefetch_count

  # If user's index cache was accessed max this many seconds ago, assume
  # it's up-to-date and there's no need to refresh it from object storage.
  # Default 2 seconds.
  metacache_close_delay = 2s

  # If activated, when an unexpected 404 is found when retrieving a
  # message from object storage, Dovecot will rescan the mailbox by
  # listing its objects. If the 404-object is still listed in this query,
  # Dovecot issues a HEAD to determine if the message actually exists.
  # If this HEAD request returns a 404, the message is dropped from
  # the index. The message object is not removed from the object
  # storage. THIS SHOULD NORMALLY NOT BE ACTIVATED.
  #obox_autofix_storage = no
}


The rest of the obox settings are specific to the object storage backend that is used.


fscache

plugin {
  obox_fs = fscache:1G:/var/cache/mails:…
}


All of the object storage Backends should be set up to use fscache with at least some amount of disk space, otherwise some operations will be very inefficient (such as IMAP client downloading a mail in small pieces). The fscache is also ideally large enough that when a mail is delivered, any IMAP and POP3 client that is actively downloading the mails should download it from the cache.

Other than that, the fscache doesn’t usually need to be very large. It’s more useful to give the extra disk space to the metacache (metacache_max_space setting).

Note that if fscache sees cache write failures (e.g. out of disk space) those will cause client-visible errors. The disk space usage also isn’t strictly enforced due to race conditions, so if you set fscache limit to 1 GB it may temporarily grow above it. So make sure that the fscache always has some extra disk space available for writing (e.g. a 1 GB fscache mounted on a 1.1 GB mount point).


Compression

plugin {
  obox_index_fs = compress:gz:6:…
}


All of the object storage backends should be set up to compress index bundle objects. This commonly shrinks the indexes down to 20-30% of the original size with gzip -6 compression. It’s also possible to use other compression algorithms. The level parameter must be between 1..9. See http://wiki.dovecot.org/Plugins/Zlib for the current list of supported algorithms.
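To get a feel for the gzip level 6 compression mentioned above, here is a rough shell illustration on synthetic, highly repetitive index-like data. Real index bundles are binary and will compress less predictably; the exact sizes here are not representative.

```shell
# Compare raw vs gzip-6 size of 2000 repetitive index-like lines.
ORIG=$(yes 'uid=1234 guid=abcdef flags=\Seen' | head -2000 | wc -c)
COMP=$(yes 'uid=1234 guid=abcdef flags=\Seen' | head -2000 | gzip -6 | wc -c)
echo "raw=$ORIG gzipped=$COMP"
```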

Note: Currently, there is no compression auto-detection for index bundles.  Therefore, all index bundles must either be compressed (or uncompressed) in object storage; mixing and matching compressed index bundles is not possible automatically.

Email object (a/k/a message blob data) compression should be done with the zlib plugin instead of via the “compress” fs wrapper.  Example:

# See http://wiki.dovecot.org/Plugins/Zlib

mail_plugins = $mail_plugins zlib

plugin {
  zlib_save = gz
  zlib_save_level = 6
}


Compression status of email object data is auto-detected.  Therefore, zlib_save may safely be added to a currently existing system; existing non-compressed mail objects will be identified correctly.

HTTP-based object storages

The HTTP-based object storages use an HTTP URL to specify how the object storage is accessed. The parameters are specified as URL-style parameters, such as http://url/?param1=value1&param2=value2. The parameters common to all object storages include:

  • connect_timeout_msecs=<ms>: Timeout for establishing a TCP connection (default: 5s)
  • max_connect_retries=<n>: Number of connect retries (default: 2)
  • timeout_msecs=<ms>: Timeout for receiving an HTTP response (default: 10s)
  • max_retries=<n>: Max number of HTTP request retries (default: 4)
  • addhdr=<name>:<value>: Add the specified header to all HTTP requests. This can only be specified once. This may be useful for load balancing purposes.

Scality CDMI

mail_location = obox:%2Mu/%2.3Mu/%u:INDEX=~/:CONTROL=~/

We’ll use 2 + 3 chars of the MD5 of the username at the beginning of each object path to improve performance. These directories should be pre-created in CDMI. The index and control dirs have to point to the user’s home directory.

plugin {
  obox_fs = fscache:1G:/var/cache/mails:scality:http://scality-url/mails/?addhdr=X-Dovecot-
  obox_index_fs = compress:gz:6:scality:http://scality-url/mails/?addhdr=X-Dovecot-

  # With bulk-delete and bulk-link enabled, these can be large:
  obox_max_parallel_copies = 100
  obox_max_parallel_deletes = 100
}


The X-Dovecot-Hash header is important for CDMI load balancer stickiness.

Amazon S3

mail_location = obox:%2Mu/%2.3Mu/%u:INDEX=~/:CONTROL=~/

We’ll use 2 + 3 chars of the MD5 of the username at the beginning of each object path to improve performance. The index and control dirs have to point to the user’s home directory.

plugin {
  obox_fs = fscache:1G:/var/cache/mails:s3:https://ACCESSKEY:SECRET@BUCKETNAME.s3.amazonaws.com/
  obox_index_fs = compress:gz:6:s3:https://ACCESSKEY:SECRET@BUCKETNAME.s3.amazonaws.com/
}


Get ACCESSKEY and SECRET from http://aws.amazon.com/ -> My account -> Security credentials -> Access credentials. Create the BUCKETNAME from AWS Management Console -> S3 -> Create Bucket.

If the ACCESSKEY or SECRET contains any special characters, they can be %hex-encoded. Note that dovecot.conf handles %variable expansion internally as well, so % needs to be escaped as %% and ‘:’ needs to be escaped as %%3A.
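The double escaping described above can be sketched with sed (a hypothetical helper, not a Dovecot tool; the example secret is made up). The % characters must be encoded first so they aren’t re-encoded, then the URL-special characters, and finally every remaining % is doubled for dovecot.conf:

```shell
# Escape an S3 secret for use inside dovecot.conf (illustration only).
SECRET='se:cr/et'
# 1) %hex-encode: '%' first (to avoid re-encoding), then ':' and '/'.
ENCODED=$(printf '%s' "$SECRET" | sed -e 's/%/%25/g' -e 's/:/%3A/g' -e 's|/|%2F|g')
# 2) Double every % so dovecot.conf's own %variable expansion leaves it alone.
CONF_VALUE=$(printf '%s' "$ENCODED" | sed 's/%/%%/g')
echo "$CONF_VALUE"   # se%%3Acr%%2Fet
```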

Mail Event Logging

See http://wiki.dovecot.org/Plugins/MailLog for more details.

mail_plugins = $mail_plugins notify mail_log

Enable the mail_log plugin.

plugin {
  mail_log_events = delete undelete expunge copy mailbox_delete mailbox_rename
  mail_log_fields = uid box msgid size from
}


Log a line about events that may cause messages to be deleted. This is commonly useful when debugging why users have lost messages.


Quota

See http://wiki.dovecot.org/Quota/Configuration for more details.

mail_plugins = $mail_plugins quota

Enable quota plugin for tracking and enforcing the quota.

protocol imap {
  mail_plugins = $mail_plugins imap_quota
}


Enable the IMAP QUOTA extension, allowing IMAP clients to ask for the current quota usage.

plugin {

  quota = count:User quota

Track the current quota usage in Dovecot’s index files.

  quota_vsizes = yes

Required by quota=count backend. Indicates that the quota plugin should use “virtual sizes” rather than “physical sizes” when calculating message sizes. 

  quota_warning = storage=100%% quota-warning 100 %u

  quota_warning2 = storage=95%% quota-warning 95 %u

  quota_warning3 = -storage=100%% quota-warning below %u

Configure quota warning scripts to be triggered at specific quota usage levels. Note that %% needs to be written twice to avoid %variable expansion. For example, at 95% usage a warning email could be sent to the user. At 100% an external SMTP database could be updated to reject incoming mails directly, and the -storage=100%% rule triggers when usage drops back below 100% so mails can be allowed again. The “quota-warning” refers to the quota-warning UNIX socket, which connects to the Dovecot script service described below.

}

service quota-warning {
  executable = script /usr/local/bin/quota-warning.sh
  user = email
  unix_listener quota-warning {
  }
}



Example quota-warning service which executes quota-warning.sh script.
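A minimal sketch of such a quota-warning.sh follows. The postmaster address, LDA path, and quota backend string are placeholders to adjust per site; the delivery command is shown commented out because it requires a running Dovecot:

```shell
#!/bin/sh
# Hypothetical quota-warning.sh, invoked as "quota-warning.sh <percent> <user>"
# per the quota_warning settings above.
build_warning() {
  # $1 = percentage (or "below"), $2 = username
  printf 'From: postmaster@example.com\nTo: %s\nSubject: Quota warning\n\nYour mailbox is now %s%% full.\n' "$2" "$1"
}

PERCENT=${1:-95}
USER=${2:-user@example.com}
build_warning "$PERCENT" "$USER"
# Real delivery would pipe the message to dovecot-lda with quota enforcement
# disabled, so the warning itself isn't rejected by a full mailbox, e.g.:
#   build_warning "$PERCENT" "$USER" | /usr/libexec/dovecot/dovecot-lda \
#     -d "$USER" -o "plugin/quota=count:User quota:noenforcing"
```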

You may also want to use the quota_clone plugin to keep track of all the users’ quotas in an efficient database. (It’s very slow to query every user’s quota from the index files directly.) See http://wiki.dovecot.org/Plugins/QuotaClone


IMAP

imap_client_workarounds = tb-extra-mailbox-sep tb-lsub-flags

Enable some workarounds for Thunderbird.


POP3

See http://wiki.dovecot.org/POP3Server for more details.

pop3_no_flag_updates = yes

Improve performance by not updating the IMAP \Seen flag whenever downloading mails via POP3.

pop3_client_workarounds = outlook-no-nuls oe-ns-eoh

Enable some workarounds for Outlook clients so they won’t hang on unexpected data.

pop3_uidl_format = %g

Use message GUID as POP3 UIDL. For old mails their UIDLs must be migrated using the migration scripts.

LMTP & Sieve

postmaster_address = postmaster@%d

Email address to use in the From: field for outgoing email rejections. The %d variable expands to the recipient domain.

submission_host = smtp-out.example.com:25

SMTP server which is used for sending email rejects, Sieve forwards, vacations, etc. Alternatively, sendmail_path setting can be used to send mails using the sendmail binary.

protocol lmtp {
  mail_plugins = $mail_plugins sieve
}


Enable Sieve plugin.

Dovecot Pro Director Configuration

Generic Settings

protocols = imap pop3 lmtp sieve

Protocols to enable.

verbose_proctitle = yes

Show state information in process titles (in “ps” output).


Authentication Settings

See http://wiki2.dovecot.org/Authentication for more details.

auth_mechanisms = plain login

Enables the PLAIN and LOGIN authentication mechanisms. The LOGIN mechanism is obsolete, but still used by old Outlooks and some Microsoft phones.

auth_verbose = yes

Log a line for each authentication attempt failure.

auth_verbose_passwords = sha1:6

Log the password hashed and truncated for failed authentication attempts. For example the SHA1 hash for “pass” is 9d4e1e23bd5b727046a9e3b4b7db57bd8d6ee684 but because of :6 we only log “9d4e1e”. This can be useful for detecting brute force authentication attempts without logging the users’ actual passwords.
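The logged value can be reproduced with standard tools. This is an illustration using coreutils sha1sum, not Dovecot’s own implementation:

```shell
# Reproduce what auth_verbose_passwords = sha1:6 logs for the password "pass".
PASS=pass
HASH=$(printf '%s' "$PASS" | sha1sum | cut -c1-40)   # full SHA1 hex digest
LOGGED=$(printf '%s' "$HASH" | cut -c1-6)            # what the :6 truncation logs
echo "$LOGGED"   # 9d4e1e
```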

service anvil {
  unix_listener anvil-auth-penalty {
    mode = 0
  }
}



Disable authentication penalty. This is explained in http://wiki2.dovecot.org/Authentication/Penalty

auth_cache_size = 100M

Specifies the amount of memory used for authentication caching (passdb and userdb lookups).

LDAP Authentication

See http://wiki2.dovecot.org/AuthDatabase/LDAP for more details. Note that a director proxy doesn’t need userdb configuration (unlike backends).

passdb {
  args = /etc/dovecot/dovecot-ldap.conf.ext
  driver = ldap
}


This enables LDAP to be used as passdb.

The included dovecot-ldap-director.conf.ext can be used as a template for /etc/dovecot/dovecot-ldap.conf.ext. Its most important settings are:

hosts = ldap.example.com

dn = cn=admin,dc=example,dc=com

dnpass = secret

base = dc=example,dc=com

Configure how the LDAP server is reached.

auth_bind = yes

Use LDAP authentication binding for verifying users’ passwords.

blocking = yes

Use auth worker processes to perform LDAP lookups in order to use multiple concurrent LDAP connections. Otherwise only a single LDAP connection is used.

pass_attrs = \

  =proxy=y, \

  =proxy_timeout=10, \

  =user=%{ldap:mailRoutingAddress}, \


Normalize the username to exactly the mailRoutingAddress field’s value regardless of how the pass_filter found the user.

pass_filter = (mailRoutingAddress=%u)

iterate_attrs = mailRoutingAddress=user

iterate_filter = (objectClass=messageStoreRecipient)

How to iterate through all the valid usernames.

Director Configuration

See http://wiki2.dovecot.org/Director for more details.

director_mail_servers = dovecot-backends.example.com

This setting contains a space-separated list of Dovecot backends’ IP addresses or DNS names. One DNS entry may contain multiple IP addresses (which is maybe the simplest way to configure them).

director_servers = dovecot-directors.example.com

This setting contains a space-separated list of Dovecot directors’ IP addresses or DNS names. One DNS entry may contain multiple IP addresses (which is maybe the simplest way to configure them).

director_consistent_hashing = yes

This setting enables consistent hashing in the director. It reduces users being moved around when backends are changed. This will be the default setting in v2.3.

auth_socket_path = director-userdb

service director {
  fifo_listener login/proxy-notify {
    mode = 0600
    user = $default_login_user
  }
  inet_listener {
    port = 9090
  }
  unix_listener director-userdb {
    mode = 0600
  }
  unix_listener login/director {
    mode = 0666
  }
  unix_listener director-admin {
    mode = 0600
  }
}

service ipc {
  unix_listener ipc {
    user = dovecot
  }
}

service imap-login {
  executable = imap-login director
}

service pop3-login {
  executable = pop3-login director
}

service managesieve-login {
  executable = managesieve-login director
}


All these settings configure the Dovecot director. They don’t usually need to be modified, except the TCP port 9090 may be changed. It is used for the directors’ internal communication.

You’ll also need to install poolmon (or equivalent) monitor script: https://github.com/brandond/poolmon 

Dovecot Proxy Configuration

See http://wiki2.dovecot.org/PasswordDatabase/ExtraFields/Proxy for more details.

login_trusted_networks =

Include Dovecot Proxy’s IP addresses/network so they can pass through the session ID and the client’s original IP address. If Open-Xchange is connecting to Dovecot Directors, it’s also useful to provide OX’s IPs/network here for passing through its session ID and the web browser’s original IP address.

lmtp_proxy = yes

Enable LMTP to do proxying by doing passdb lookups (instead of only userdb lookups).

login_proxy_max_disconnect_delay = 30 secs

This setting is used to avoid load spikes caused by reconnecting clients after a backend server has died or been restarted. Instead of disconnecting all the clients at the same time, the disconnections are spread over a longer time period. (v2.2.19+)

#doveadm_password =

This configures the doveadm server’s password. It can be used to access users’ mailboxes and do various other things, so it should be kept secret.

doveadm_port = 24245

service doveadm {
  inet_listener {
    port = 24245
  }
}



These settings configure the doveadm port used both when acting as a doveadm client and as a doveadm server.

service lmtp {
  inet_listener lmtp {
    port = 24
  }
}



These settings configure the LMTP port to use.

service imap-login {
  service_count = 0
  process_min_avail = 4
  process_limit = 4
}


These three settings put the imap-login processes into “high performance mode”, as explained in http://wiki2.dovecot.org/LoginProcess. Change the 4 to the number of CPU cores on the server.

service pop3-login {
  service_count = 0
  process_min_avail = 4
  process_limit = 4
}


Enable high performance mode for POP3 as well (as explained above).

SSL Configuration

See http://wiki2.dovecot.org/SSL for more details.

disable_plaintext_auth = no

This controls whether plaintext authentication is allowed without SSL/TLS. With “no”, clients may authenticate in plaintext; with “yes”, clients are required to always use SSL/TLS.

ssl_cert = </etc/dovecot/dovecot.crt

ssl_key = </etc/dovecot/dovecot.key

SSL certificate and SSL secret key files. You must use the “<” prefix so Dovecot reads the cert/key from the file. (Without “<”, Dovecot assumes that the certificate is included directly in dovecot.conf.)

For using different SSL certificates for different IP addresses, you can put them inside local {} blocks, giving the IP address or network as the block parameter (the addresses below are examples):

local 10.0.0.1 {
  ssl_cert = </etc/dovecot/dovecot.crt
  ssl_key = </etc/dovecot/dovecot.key
}

local 10.0.0.2 {
  ssl_cert = </etc/dovecot/dovecot2.crt
  ssl_key = </etc/dovecot/dovecot2.key
}


If you need different SSL certificates for the IMAP and POP3 protocols, you can put them inside protocol {} blocks (again, the local block’s address is an example):

local 10.0.0.1 {

  protocol imap {
    ssl_cert = </etc/dovecot/dovecot-imap.crt
    ssl_key = </etc/dovecot/dovecot-imap.key
  }

  protocol pop3 {
    ssl_cert = </etc/dovecot/dovecot-pop3.crt
    ssl_key = </etc/dovecot/dovecot-pop3.key
  }
}



Dovecot also supports the TLS SNI extension for serving different SSL certificates based on the server name when using only a single IP address, but SNI isn’t yet supported by all clients, so it may not be very useful. It can nevertheless be configured by using local_name imap.example.com {} blocks.
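A hypothetical SNI configuration might look like this (the server name and file paths are illustrative):

```
local_name imap.example.com {
  ssl_cert = </etc/dovecot/imap.example.com.crt
  ssl_key = </etc/dovecot/imap.example.com.key
}
```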

Dovemon monitoring tool

Dovemon is a backend monitoring tool for director hosts. It monitors backend responses and disables/enables backends if they stop responding. (Requires Dovecot v2.2.19 or later. For older versions use poolmon.)

Configuration file: /etc/dovecot/dovemon/config.yml:

loglevel: 4

syslog_facility: local5

director_admin_socket: /var/run/dovecot/director-admin

poll_imap: yes

poll_pop3: no

poll_lmtp: no

imap_ssl: no

pop3_ssl: no

lmtp_ssl: no

interval: 10

timeout: 3

retry_count: 3

loglevel: 0-4

  • Logging verbosity level

syslog_facility: local5

  • Syslog facility to use when logging

director_admin_socket: /var/run/dovecot/director-admin

  • director-admin unix socket used for director admin communication. director-admin unix listener service needs to be configured in dovecot.conf

poll_imap: yes/no

  • use imap connection to poll backend

poll_pop3: yes/no                                                                   

  • use pop3 connection to poll backend

poll_lmtp: yes/no

  • use lmtp connection to poll backend

imap_ssl: yes/no

  • use ssl connection for imap poll

pop3_ssl: yes/no

  • use ssl connection for pop3 poll

lmtp_ssl: yes/no

  • use ssl connection for lmtp poll

interval: 0-n

  • poll interval in seconds

timeout: 0-n

  • timeout in seconds for each poll

retry_count: 0-n

  • number of failed polls before issuing HOST-DOWN for the backend

Test accounts file: /etc/dovecot/dovemon/test.accounts.yml

10.0.0.1:
        username: user0001
        password: tosivaikeasalasana

10.0.0.2:
        username: user0002
        password: tosivaikeasalasana

This file allows configuring a separate test account for each backend. The backend must be specified using the same IP address as what “doveadm director status” shows for it.

dovemon issues HOST-DOWN for a backend after 3 (retry_count in the config) consecutive failed polls, and issues HOST-UP for the backend on the first successful poll if the backend is currently marked down.
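This decision rule can be sketched as a tiny state machine (an illustration of the rule above, not dovemon’s actual code):

```python
def step(state, poll_ok, retry_count=3):
    # state = (host_up, consecutive_failures); returns the next state.
    host_up, failures = state
    if poll_ok:
        # First successful poll clears the failure count and marks
        # the host up (HOST-UP if it was down).
        return (True, 0)
    failures += 1
    if failures >= retry_count:
        # retry_count consecutive failures => HOST-DOWN.
        return (False, failures)
    return (host_up, failures)

state = (True, 0)
for _ in range(3):            # three failed polls in a row
    state = step(state, False)
print(state[0])               # False: HOST-DOWN issued
state = step(state, True)     # one successful poll
print(state[0])               # True: HOST-UP issued
```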

OS Configuration

The default Linux configuration is usually quite good. The only thing needed for large installations is to increase /proc/sys/net/ipv4/ip_local_port_range to provide more local ports, in case they run out when proxying. For example, “1025 65000” could be a good value, more than doubling the available ports. If this is not enough, you need to use multiple local IP addresses and list them in the login_source_ips setting.
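To make the wider port range persist across reboots, it can be set via sysctl (the file name below is a conventional, illustrative choice):

```
# e.g. in /etc/sysctl.d/99-dovecot-proxy.conf
net.ipv4.ip_local_port_range = 1025 65000
```

Run sysctl --system (or reboot) to apply.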