Dovecot Pro Version 3.x ("Pro") is a highly available, distributed, standards-compliant email platform designed to handle both message storage and remote message access, and it scales to tens of millions of active users.
The platform is highly available, can work across multiple physical and/or virtual sites, and provides SLA support options.
Pro’s design allows any platform component to be lost, taken offline for maintenance, or upgraded without affecting the overall service availability for an individual end-user.
Pro supports distributed storage technologies to securely and reliably provide cost-effective message and mailbox metadata storage.
Pro implements open email network protocols such as IMAP, SMTP Submission, and LMTP.
See Dovecot RFC Support for a non-exhaustive list of additional standards that Dovecot implements.
This page describes Pro in detail, and defines how the software must be used in order to be eligible for SLA support.
This document incorporates all content from this website for support descriptions of any features or concepts that do not specifically appear on this page. If there is a conflict between the information on this page and the documentation on the rest of the site, this document controls.
Pro's exclusive mail platform architecture, the Dovecot Pro Palomar Architecture ("Palomar"), contains the various components (both Dovecot software and external services) necessary to operate the platform.
The customer MUST operate Palomar in the design outlined in this product definition. Dovecot/OX cannot support operation of Pro software via custom configuration and designs.
The Palomar architecture comprises Dovecot Pro Proxies ("Proxy") and Dovecot Pro Backends ("Backend"), Palomar Cluster Controller ("Controller"), GeoDB, a highly-available shared storage with access to a shared Dictmap server, OX Abuse Shield (optional), and integration with a customer's identity and authentication databases (passdb/userdb) and external load balancers.
Palomar's stateless design enables a highly-available service, in which any of the individual components can be lost, taken offline for maintenance, or upgraded (up to the high availability capacity limitations determined by a customer) without affecting the overall service availability. This high-availability design maximizes both uptime and operational flexibility, allowing for dynamic scalability and zero-downtime maintenance.
At any given point in time, a user is routed to a single Backend node in the system, from any connection point at the external boundary of the platform, for performance and efficiency reasons. However, the user does not "live" on this Backend and may be serviced from any other Backend as time passes.
A user's mails should not be accessed simultaneously from multiple Backends in this architecture; the GeoDB and cluster services prevent this from happening. However, even if simultaneous access on multiple Backends did occur, the obox plugin mailbox format was designed not to lose any mails or metadata changes in such a situation. This self-healing feature does introduce a performance loss when it occurs, due to the need to continually merge index changes between the multiple Backend accesses, which is why Palomar directs a user to a single Backend no matter how many incoming connections that user spawns.
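The single-Backend routing described above can be sketched as a lookup table that pins every user to one live node. This is purely illustrative: in Palomar the routing state lives in the GeoDB and is managed by the cluster service, and all class and method names below are hypothetical.

```python
import hashlib

class BackendRouter:
    """Illustrative sketch: pin each user to exactly one live Backend.

    Real Palomar routing is driven by the GeoDB and cluster service;
    the names and selection logic here are assumptions, not Pro APIs.
    """

    def __init__(self, backends):
        self.backends = list(backends)
        self.assignments = {}  # stands in for GeoDB routing state

    def route(self, user):
        # Reuse an existing assignment so that every connection from
        # this user lands on the same Backend node.
        backend = self.assignments.get(user)
        if backend in self.backends:
            return backend
        # No live assignment: pick a node deterministically from the
        # user name and record it, so parallel connections converge.
        digest = hashlib.sha256(user.encode()).digest()
        backend = self.backends[digest[0] % len(self.backends)]
        self.assignments[user] = backend
        return backend

router = BackendRouter(["backend1", "backend2", "backend3"])
first = router.route("alice@example.com")
second = router.route("alice@example.com")  # same node as `first`
```

If the assigned node disappears from the pool, the lookup falls through and a new node is chosen, mirroring the idea that a user may be serviced by any other Backend over time.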
In Palomar, users are packaged into "groups". Generally, Palomar actions (such as site rebalancing) are performed on these user groups as opposed to individual users.
The Palomar architecture supports multiple sites. "Sites" can be virtual sites hosted within the same data center. Multi-site operations are supported ONLY IF using a multi-site capable storage solution (currently, only Scality sproxyd). For purposes of Palomar, a "cluster" is a synonym for a "site".
The Palomar architecture enables both horizontal and vertical scaling. The required network topology is:
In Palomar, Proxy and Backend nodes run the same core Pro software; local configuration defines the role each system takes.
It is HIGHLY recommended to configure the entire platform as a zero-trust environment; however, the Palomar architecture does not currently require this.
End users are defined by provisioning their details into userdb/passdb.
End users do not live on Palomar machines; they are exclusively virtual users. End users connect to Palomar via mail APIs; there is no shell access to the systems and users are not provisioned on Palomar machines.
Incoming connections to publicly available services exposed on Proxies should be distributed among all available nodes. Each incoming connection is independent at this level, so there is no requirement to route connections from the same user/external source to a single proxy.
Palomar does not require a specific solution at this level, so the choice is up to the customer. Both hardware (e.g. F5) and software (e.g., haproxy) load balancers have been used successfully in customer installations.
The load balancers must either be transparent or they must implement HAProxy Proxy Protocol v2. This is so Pro can correctly log and track the connection telemetry of the end-users (as opposed to the load balancer).
For software load balancers, these processes MUST live on a separate system from the Proxies. Palomar does not support running load balancers and Proxies on the same system.
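For non-transparent load balancers, the HAProxy PROXY Protocol v2 header mentioned above is a small binary prefix carrying the original client address. The sketch below builds such a header for TCP over IPv4; in practice the load balancer (e.g., haproxy or F5) emits this for you when the protocol is enabled, so this is only to show what Pro receives. The addresses are example values.

```python
import socket
import struct

# Fixed 12-byte signature defined by the PROXY protocol v2 spec
PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"

def proxy_v2_header(src_ip, src_port, dst_ip, dst_port):
    """Build a PROXY protocol v2 header for TCP over IPv4.

    Illustrative only: a production load balancer generates this
    prefix itself before forwarding the client connection.
    """
    addrs = (
        socket.inet_aton(src_ip)            # original client address
        + socket.inet_aton(dst_ip)          # original destination
        + struct.pack("!HH", src_port, dst_port)
    )
    # 0x21 = version 2 + PROXY command; 0x11 = TCP over IPv4;
    # then the big-endian length of the address block (12 bytes here).
    return PP2_SIGNATURE + struct.pack("!BBH", 0x21, 0x11, len(addrs)) + addrs

hdr = proxy_v2_header("203.0.113.5", 50000, "192.0.2.10", 993)
```

Because this prefix carries the real client IP and port, Pro can log and rate-limit the end user rather than the load balancer's own address.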
The Proxy's main function is initial authentication/authorization and user identity normalization.
Proxies directly expose publicly available services and handle initial client connections; they perform a user database lookup against the customer's identity management/auth system (e.g., LDAP) to authenticate/authorize the user and to look up user-specific routing parameters.
Authentication MUST be done at this layer as well as user identity normalization. Palomar assumes the user has been authorized in all layers below the Proxy.
After authentication, sessions are routed to the site where the user is currently assigned.
The cluster service, which runs on the Proxies, makes sure that a user’s data is not concurrently accessed by multiple Backends. This is required to both optimize performance and avoid user data corruption.
Proxies are stateless, allowing any of them to be removed or become unavailable without end-user impact. (A user's session may be terminated on the client, but this is an expected event in mail access protocols and transparent client reconnection will re-enable the session.)
Proxies are connected to:
Pro/Palomar provides an optional submission proxy service that acts as a frontend for any full-featured Mail Transfer Agent (MTA), adding all the necessary functionality for an SMTP Submission service, also known as a Mail Submission Agent (MSA).
The submission proxy can fully handle the required SMTP authentication, which allows a single authentication system to be defined in Dovecot for use with all mail storage and delivery protocols.
Submission proxying for the submission service works by proxying to a Dovecot Pro Backend, which then relays to an MTA (which is not provided as part of Pro). It is not supported to proxy directly from the Proxy to an MTA system.
Pro supports the following systems for authentication/user information:
Operation, administration, and system performance of the passdb/userdb is the responsibility of the customer; Pro only supports maintaining stable API access to the external systems.
LDAP is recommended, as experience has shown it has the necessary performance to scale to millions of connections.
SQL is supported functionally but is NOT recommended. At scale, SQL servers cannot handle login loads and will cause noticeable authentication latencies. Dovecot/OX cannot provide support or optimizations for the slow performance.
Customer-specific authentication solutions should be developed using Pro's Lua auth framework. Dovecot/OX Professional Services can be engaged to assist in developing custom solutions. If Lua auth is used to connect to an external HTTP-based service, it must use Pro's Lua http functionality.
Static (local) files are mainly useful for injecting local configuration into the Pro session. They should not be used for customer-specific data. If possible, it is recommended to inject all variables into configuration via Lua (if used) instead of using separate static files.
OX Abuse Shield is an optional component of Palomar that applies authentication policy during user login, via the Proxies.
OX Abuse Shield is recommended to be operated in a high-availability setup; it is not required for login, however, and authentication policy will simply be skipped if the service is unavailable.
OX Abuse Shield is not provided as part of the base Pro license; separate licensing is needed for the product.
The Backend does the primary work of reading and writing mails to storage and handling the bulk of the mail protocol interaction with the client.
The Backends are organized as a pool of independent nodes. A user is not permanently assigned to a specific Backend; for performance and load reasons, the platform is designed to allow users to move between Backends.
Backends are connected to:
The Controller is a component, not required to be highly available, that performs administrative and automated tasks on cluster state. These tasks include load balancing, health monitoring, and statistics gathering.
One Controller node is needed per site. The Controller does not need to be highly available, but platform-level actions will not occur unless the Controller is active.
The Controller provides a graphical user interface to the admin API. It also exposes an API to other Palomar nodes, for maintenance tasks and user administration.
The Controller is connected to the GeoDB to track user routing inside the Palomar platform.
The Controller is installed/instantiated via Kubernetes/helm chart or docker-compose.
The GeoDB contains the metadata used by Palomar to route users/groups, maintain the platform health and performance, and allow autodiscovery/bootstrapping of nodes in the platform.
Proxies, Backends, and Controllers access GeoDB to perform these actions.
GeoDB requires a server that supports CQL version 4 (or higher).
Storage is generally the most expensive and technically difficult component of an email system, due to the volume and storage capacities needed. As such, it is critically important that the underlying technology is proven to be stable and secure through extensive development, testing, and QA. Therefore, Palomar/Pro only supports a small, defined list of storage technologies.
Palomar requires a highly available, distributed storage solution, independent of the Backends, as users must be able to move between nodes in the platform.
Data replication MUST be handled by the storage system; Pro does not do data replication itself.
Premium support indicates storage systems that are actively tested and developed against by Dovecot/OX.
Each Pro release is certified to work on at least one version/installation of the system (identified in Release Notes for that specific Pro release).
Storage options in this level are guaranteed to have compatibility with the following:
Execution/Operation/Configuration of the storage is NOT directly supported by Dovecot/OX. However, these solutions are considered Premium options because:
See Scality (sproxyd).
See AWS S3.
Basic support indicates that Pro maintains a specified API interface that can be used to interact with the storage, but it does not certify actual storage vendors and does not test directly against these systems prior to any release version.
Dovecot/OX will not support any additional features other than basic API compatibility defined in the Product Definition. Dovecot/OX cannot guarantee or troubleshoot storage performance or behavior.
Execution/Operation/Configuration of the storages is NOT supported by Dovecot/OX. Vendor support is storage dependent.
Customer assumes the risk if a storage vendor's API changes in the future.
WARNING
NFS SLA Support is not provided as part of the base Dovecot Pro License.
A separate SLA Support agreement needs to be negotiated for customers that need NFS support.
Pro's obox mailbox format is the ONLY mailbox format supported for production use.
No other mailbox is supported for production use. Software support for non-obox mailbox formats is limited to migration, backups, and/or archiving use-cases.
Customers using sdbox/mdbox on Pro 2.3 MUST migrate users to obox on 3.0. There is no support for direct physical mailbox conversion of sdbox/mdbox to obox.
For all object storage installations, fs-dictmap is REQUIRED. fs-dictmap is not required for NFS.
Dovecot stores and retrieves fs-dictmap information using CQL (Cassandra Query Language). Dovecot/OX recommends and tests Pro releases with CQL protocol version 4.
The fs-dictmap database must be configured to be multi-node and highly available.
Dovecot/OX tests at least one version of Apache Cassandra for every Pro release.
Dovecot/OX does not support customer configuration or operation of Cassandra. Dovecot/OX may provide Cassandra recommendations, but these are not binding. Support for Cassandra can be pursued through 3rd parties.
As part of Release Notes, it will be identified which specific version of Cassandra was tested and confirmed working.
Customer can use other solutions that claim compatibility with CQL, but the customer is responsible for determining this compatibility.
Dovecot/OX can not provide support for customer configuration or operation of these databases.
Dovecot/OX can only support direct mail data access if customer uses Pro provided APIs (i.e. doveadm commands; scripts shipped with Pro packages).
Dovecot/OX CANNOT support direct modification of mail storage, for either NFS or object storage solutions.
doveadm-fs(1) commands can be used for debugging and fixing as instructed by OX/Dovecot, but not as a production method to access mails.
Palomar only supports mail delivery via LMTP.
Dovecot does not provide or directly support any AV/AS solution. Such a solution must deliver mails via LMTP when processing is finished.
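The LMTP requirement above can be illustrated by the command sequence an MTA or AV/AS gateway sends to deliver one message. The function and host/address values below are hypothetical; what is accurate to LMTP (RFC 2033) is the LHLO greeting and the fact that the server answers the message data with one status line per accepted recipient.

```python
def lmtp_delivery_commands(sender, recipients, hostname="mx.example.com"):
    """Hypothetical sketch of the LMTP client dialogue used to hand a
    message to Pro's LMTP service. LHLO (instead of HELO/EHLO) and
    per-recipient replies after DATA distinguish LMTP from SMTP."""
    cmds = ["LHLO " + hostname, "MAIL FROM:<%s>" % sender]
    cmds += ["RCPT TO:<%s>" % r for r in recipients]
    # After the final "." the LMTP server returns one status line per
    # accepted recipient, so a single message can succeed for some
    # recipients and fail for others.
    cmds += ["DATA", "."]
    return cmds

cmds = lmtp_delivery_commands(
    "av-gw@example.com", ["alice@example.com", "bob@example.com"]
)
```

Python's standard library also ships `smtplib.LMTP` for performing real deliveries against a running LMTP service.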
Palomar only supports searching messages, via the IMAP protocol, if the fts-dovecot plugin is used.
"obox" is Pro's exclusively supported mailbox format for Palomar.
For obox, a user's mail data is retrieved to a Backend, index and metadata ("metacache") changes occur on that Backend, and this changed data is uploaded back to object storage as needed. In case of a Backend failure, another Backend can continue servicing the user mailbox by downloading the metacache locally onto its server.
Although the system is designed to allow users to move between multiple Backends, and there is code to support accidental access of a mailbox from two servers at once, this behavior comes with a performance penalty. Thus, Palomar requires that a user be accessed from a single server, and the platform is designed to enable and support this behavior.
obox is optimized for cloud technologies by minimizing I/O with the storage. obox tracks which index files have been altered or are needed locally and uploads/downloads them to and from object storage only as necessary. This usage pattern most efficiently leverages the object storage paradigm, as opposed to a more traditional block storage strategy.
While using object storage, a user’s mail indexes are fetched from storage and cached locally. The mail indexes are periodically updated to object storage while the session is active. Once the session either expires or the user logs off, any updated indexes are uploaded to storage. By working with local, cached indexes, Pro provides fast access to the user’s mailboxes while leveraging the advantages that object storage provides for long-term storage needs.
obox consists of three major components. The first component is a block-storage native mailbox format. Each message is stored in its own “file” (a discrete object). Mailbox indexes, and other Dovecot user data files, are bundled into separate discrete objects. The second component is a collection of drivers that implement support for various storages, such as S3 and sproxyd. There is additionally a "fscache" driver that implements a local filesystem cache for mail objects. The third component is metadata storage for index files and other metadata, such as Sieve scripts. It synchronizes these files between a local cache and the object storage.
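The fscache driver named above can be pictured as a read-through local cache in front of the object storage. The sketch below is an assumption-laden toy (class name, hashing scheme, and the dict standing in for object storage are all invented), shown only to convey the access pattern: a hit costs no storage I/O, a miss costs exactly one GET.

```python
import hashlib
import pathlib
import tempfile

class FsCache:
    """Toy read-through cache for mail objects, in the spirit of
    obox's fscache driver. All names here are hypothetical."""

    def __init__(self, root, fetch):
        self.root = pathlib.Path(root)
        self.fetch = fetch  # callable: object key -> bytes (storage GET)

    def _path_for(self, key):
        # Hash the object key into a flat local filename.
        return self.root / hashlib.sha256(key.encode()).hexdigest()

    def get(self, key):
        path = self._path_for(key)
        if path.exists():
            return path.read_bytes()  # cache hit: no storage I/O
        data = self.fetch(key)        # cache miss: one storage GET
        path.write_bytes(data)        # populate the local cache
        return data

# A dict stands in for the object storage backend in this sketch.
store = {"alice/m1": b"Subject: hi\r\n\r\nhello"}
cache = FsCache(tempfile.mkdtemp(), store.__getitem__)
first = cache.get("alice/m1")   # fetched from "object storage"
second = cache.get("alice/m1")  # served from the local cache
```

The same read-through idea underlies metacache handling: indexes are worked on locally and only exchanged with the object storage when needed.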
When messages are not pre-indexed, IMAP searches fall back on slow sequential searches through all message headers or text. This strategy is slow on block storage and becomes prohibitively expensive in a distributed object storage architecture.
At the same time, mobile clients are the fastest growing segment for mailbox access. These clients have additional bandwidth limitations, both in monetary cost and network latency, that further emphasize the need for an efficient, feature-rich server-based search solution.
To assist in addressing these concerns, a Pro-exclusive indexing and search architecture has been developed. This design is built into the Pro software, provides better performance and scaling for large mail volumes, and uses the same storage pool as mail objects.
Full text search has the following features:
Pro's standard IMAP SEARCH TEXT/BODY (RFC 3501) parameters use the FTS indexes. Searches of message headers already benefit from Pro's fast message index cache implementation but can optionally be served from the FTS indexes as well.
Dovecot Pro supports multiple users connecting to the same underlying mailbox. This is done through configuration of the software.
All users with access to a defined mailbox connect to a distinct mailbox on a single Backend, so there may be performance penalties if an excessive number of users concurrently access the same mailbox.
Pro does not provide support for administration of user access rights. This is the responsibility of the Customer’s identity management system.
The ImapTest tool is provided to Pro customers as a courtesy. This package/software has absolutely NO support, warranty, or SLA.
Pro, with the exception of the Cluster Controller component, is currently provided as packages built for a specified list of operating systems.
Pro is only supported on Linux distributions on x86 hardware.
The list of distributions supported is listed in the Release Notes for a given Pro release.
The product rules for when Operating System support is added and dropped can be found at OS Distribution Support.
The Cluster Controller is distributed either via helm charts (for use with Kubernetes) or via execution through docker-compose.
Other than Cluster Controller, Pro is not currently supported as a Kubernetes deployment.