An annotated configuration example:

```
# Load the obox plugin
mail_plugins {
  obox = yes
}

# How many mails to download in parallel from object storage.
#
# A higher number improves the performance, but also increases the local disk
# usage and number of used file descriptors.
mail_prefetch_count = 10

# Overrides for writes, copies, and deletes. They default to 0, which
# expands to mail_prefetch_count + 1.
obox_max_parallel_writes = 0   # mail_prefetch_count + 1
obox_max_parallel_copies = 0   # mail_prefetch_count + 1
obox_max_parallel_deletes = 0  # mail_prefetch_count + 1

# How much disk space metacache can use before old data is cleaned up.
# Generally, this should be set to ~90% of the available disk space.
metacache_max_space = 200G

# How much disk space on top of metacache_max_space can be used before
# Dovecot stops allowing more users to log in.
metacache_max_grace = 10G

# How often to upload important index changes to object storage. This mainly
# means that if a backend crashes, message flag changes made within this time
# may be lost. A longer time can however reduce the number of index bundle
# uploads.
metacache_upload_interval = 5min

# If the user was accessed this recently, assume the user's indexes are
# up-to-date. If not, list index bundles in object storage (or Cassandra) to
# see if they have changed. This typically matters only when the user is being
# moved to another backend and soon back again, or if the user is
# simultaneously being accessed by multiple backends. Default is 2 seconds.
metacache_close_delay = 2secs

# Location of the local mail cache directory. This contains Dovecot index
# files and needs to be high performance (e.g. SSD storage). Alternatively, if
# there is enough memory available to hold all concurrent users' data at once,
# a tmpfs would work as well. The "%{user | sha1 % 256 | hex(2)}" part shards
# the username so everything isn't in one directory.
mail_home = /var/vmail/%{user | sha1 % 256 | hex(2)}/%{user}

# UNIX UID & GID which are used to access the local cache mail files.
mail_uid = vmail
mail_gid = vmail

# We can disable fsync()ing for better performance. It's not a problem if
# locally cached index file modifications are lost.
mail_fsync = never

# Directory where downloaded/uploaded mails are temporarily stored. Ideally
# all of these would stay in memory and never hit the disk, but in some
# situations the mails may have to be kept for a somewhat longer time and they
# end up on disk. So there should be enough disk space available in the
# temporary filesystem.
mail_temp_dir = /tmp

# Enable mailbox list indexes. This is required with the obox format.
mailbox_list_index = yes

# If mailbox_list_index_prefix resides in tmpfs, INBOX status should be
# included in the list index.
mailbox_list_index_include_inbox = yes
```

TIP
/tmp should be a good choice on any recent OS, as it normally points to /dev/shm, so this temporary data is stored in memory and will never be written to disk. However, this should be checked on a per-installation basis to ensure that it is true.
fs_auth_cache
Value | Named Filter |
---|---|
Named filter for fs-auth service. Used for configuring dictionary for authentication cache. This allows sharing the cache between multiple servers.
fs_auth_request_max_retries
Default | 1 |
---|---|
Value | unsigned integer |
If fs-auth fails to perform authentication lookup, retry the HTTP request this many times.
fs_auth_request_timeout
Default | 10s |
---|---|
Value | time (milliseconds) |
Absolute HTTP request timeout for authentication lookups.
fs_aws_s3
Value | Named Filter |
---|---|
See Also |
Filter for AWS S3-specific settings. fs_s3
filter is also used.
fs_azure
Value | Named Filter |
---|---|
Filter for Azure-specific settings.
fs_azure_account_name
Default | [None] |
---|---|
Value | string |
See Also |
Azure account name for all authentication types.
fs_azure_auth_type
Default | user-sas |
---|---|
Value | string |
Allowed Values | user-sas service-sas legacy |
Azure authentication type to use.
Options:
Value | Description |
---|---|
user-sas |
See Azure User SAS |
service-sas |
See Azure Service SAS |
legacy |
See Azure Legacy Authentication |
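Putting these together, a minimal Azure configuration might look like the following sketch. The account, container, and secret values are placeholders, and the unprefixed setting names inside the filter follow the same convention as the fs dictmap examples elsewhere on this page:

```
fs azure {
  # Placeholder account and container names:
  account_name = exampleaccount
  container_name = mails
  auth_type = service-sas
  # Placeholder base64-encoded shared key secret:
  service_sas_secret = BASE64SECRET
}
```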
fs_azure_bulk_delete_limit
Default | 256 |
---|---|
Value | unsigned integer |
Number of deletes supported within the same bulk delete request. 0 disables bulk deletes.
fs_azure_container_name
Default | [None] |
---|---|
Value | string |
Azure container name.
fs_azure_legacy_auth_secret
Default | [None] |
---|---|
Value | string |
See Also |
Base64-encoded authentication shared key secret when using
fs_azure_auth_type = legacy
.
fs_azure_service_sas_secret
Default | [None] |
---|---|
Value | string |
See Also |
Base64-encoded authentication shared key secret when using
fs_azure_auth_type = service-sas
.
fs_azure_url
Default | https://blob.core.windows.net |
---|---|
Value | string |
Advanced Setting; this should not normally be changed. |
URL for accessing the Azure storage. It is not intended to be changed, unless testing some other Azure-compatible storage.
fs_azure_user_sas_client_id
Default | [None] |
---|---|
Value | string |
See Also |
ClientId to be used for authentication against Entra IDM. This is needed only
with fs_azure_auth_type = user-sas
.
fs_azure_user_sas_client_secret
Default | [None] |
---|---|
Value | string |
See Also |
Client secret to be used for authentication against Entra IDM (base64 encoded).
This is needed only with fs_azure_auth_type = user-sas
.
fs_azure_user_sas_tenant_id
Default | [None] |
---|---|
Value | string |
See Also |
TenantId to be used for authentication against Entra IDM. This is needed only
with fs_azure_auth_type = user-sas
.
fs_dictmap_bucket_cache_path
Default | [None]; obox filter: "%{home}/buckets.cache" |
---|---|
Value | string |
See Also |
Required when fs_dictmap_bucket_size
is set. Bucket counters are
cached in this file. This path should be located under the obox indexes
directory (on the SSD backed cache mount point; e.g.
%{home}/buckets.cache
).
fs_dictmap_bucket_deleted_days
Default | 0; obox filter: 11 |
---|---|
Value | unsigned integer |
See Also |
Track Cassandra's tombstones in buckets.cache
file to avoid creating
excessively large buckets when a lot of mails are saved and deleted in a
folder. The value should be one day longer than gc_grace_seconds
for the
user_mailbox_objects
table. By default this is 10 days, so in that case
fs_dictmap_bucket_deleted_days = 11
should be used. When determining
whether fs_dictmap_bucket_size
is reached and a new one needs to be
created, with this setting the tombstones are also taken into account. This
tracking is preserved only as long as the buckets.cache
exists.
fs_dictmap_bucket_size
Default | 0; obox filter: 10000 |
---|---|
Value | unsigned integer |
Dependencies | |
See Also |
Separate email objects into buckets, where each bucket can have a maximum of
this many emails. This should be set to 10000
with Cassandra to avoid
partitions becoming too large when there are a lot of emails.
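Taken together with the two settings above, a Cassandra-backed bucket configuration might look like this sketch (the filter syntax follows the fs dictmap examples later on this page; the values come from the recommendations in these descriptions):

```
fs dictmap {
  # Max 10000 mails per bucket, recommended with Cassandra:
  bucket_size = 10000
  # Required when bucket_size is set; keep on the SSD-backed cache mount:
  bucket_cache_path = %{home}/buckets.cache
  # One day longer than gc_grace_seconds (10 days by default):
  bucket_deleted_days = 11
}
```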
fs_dictmap_cleanup_uncertain
Default | yes |
---|---|
Value | boolean |
See Also |
If a write to Cassandra fails with uncertainty and this setting is enabled, Dovecot attempts to clean it up.
fs_dictmap_delete_dangling_links
Default | no |
---|---|
Value | boolean |
See Also |
If an object exists in dict, but not in storage, delete it automatically from dict when it's noticed.
WARNING
This setting isn't safe to use by default, because storage may return "object doesn't exist" errors only temporarily during split brain.
fs_dictmap_delete_timestamp
Default | 10s |
---|---|
Value | time (milliseconds) |
Increase Cassandra's DELETE
timestamp by this value. This is useful to make
sure the DELETE
isn't ignored because Dovecot backends' times are slightly
different.
WARNING
If the same key is intentionally written again soon afterwards, the write is ignored. Dovecot doesn't normally do this, but this can
happen if the user is deleted with doveadm obox user delete
and the same
user is recreated. This can also happen with doveadm backup
that reverts
changes by deleting a mailbox; running the doveadm backup
again will
recreate the mailbox with the same GUID.
fs_dictmap_dict_prefix
Default | [None] |
---|---|
Value | string |
Prefix that is added to all dict keys.
fs_dictmap_diff_table
Default | no; metacache filter: yes |
---|---|
Value | boolean |
See Also |
Store diff and self index bundle objects to a separate table. This is a Cassandra-backend optimization.
fs_dictmap_lock_path
Default | [None] |
---|---|
Value | string |
See Also |
If fs_dictmap_refcounting_table
is enabled, use this dictionary for
creating lock files to objects while they're being copied or deleted. This
attempts to prevent race conditions where an object copy and delete runs
simultaneously and both succeed, but the copied object no longer exists. This
can't be fully prevented if different servers do this concurrently. If
lazy-expunge plugin is used this setting isn't really needed, because such
race conditions are practically nonexistent. Not using the setting will
improve performance by avoiding a Cassandra SELECT
when copying mails.
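For example, when refcounting is used without the lazy-expunge plugin, a lock path can be added inside the dictmap filter (a sketch; the unprefixed names follow this page's other fs dictmap examples):

```
fs dictmap {
  refcounting_table = yes
  # Lock files guard against concurrent copy/delete races:
  lock_path = /tmp
}
```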
fs_dictmap_max_parallel_iter
Default | 10 |
---|---|
Value | unsigned integer |
Describes how many parallel dict iterations can be created internally. The
default value is 10
. Parallel iterations can especially help speed up
reading huge folders.
fs_dictmap_nlinks_limit
Default | 0; obox filter: 3 |
---|---|
Value | unsigned integer |
Defines the maximum number of results returned from a dictionary iteration
lookup (i.e. Cassandra CQL query) when checking the number of links to an
object. Limiting this may improve performance. Currently Dovecot only cares
whether the link count is 0
, 1
or "more than 1
" so for a bit of
extra safety we recommend setting it to 3
.
fs_dictmap_refcounting_index
Default | no |
---|---|
Value | boolean |
See Also |
Similar to the fs_dictmap_refcounting_table
setting, but instead of
using a reverse table to track the references, assume that the database has a
reverse index set up.
fs_dictmap_refcounting_table
Default | no; obox filter: yes |
---|---|
Value | boolean |
See Also |
Enable reference counted objects. Reference counting allows a single mail object to be stored in multiple mailboxes, without the need to create a new copy of the message data in object storage.
fs_dictmap_storage_objectid_migrate
Default | no |
---|---|
Value | boolean |
This is expected to be used with storage-objectid-prefix when adding fs-dictmap
for an existing installation. The newly created object IDs have
<storage-objectid-prefix>/<object-id>
path while the migrated object IDs
have <user>/mailboxes/<mailbox-guid>/<oid>
path. The newly created object
IDs can be detected from the 0x80
bit in the object ID's extra-data
.
Migrated object IDs can't be copied directly within dict - they'll be first
copied to a new object ID using the parent fs.
fs_dictmap_storage_objectid_prefix
Default | [None] |
---|---|
Value | string |
See Also |
Use fake object IDs with object storage that internally uses paths. This makes
their performance much better, since it allows caching object IDs in Dovecot
index files and copying them via dict. This works by storing objects in
<prefix>/<objectid>
. This setting should be used inside obox plugin
named filter for storing mails under <prefix>
(but not for
metacache
or fts
).
For example:
fs_dictmap_storage_objectid_prefix = %{user}/mails/
fs_dictmap_storage_passthrough_paths
Default | none |
---|---|
Value | string |
Allowed Values | none full read-only |
See Also |
Use fake object IDs with object storage that internally uses paths. Assume that
object ID is the same as the path. Objects can't be copied within the dict.
This setting should be used inside metacache
and
fts_dovecot
named filters, because they don't need to support
copying objects. For mails, use fs_dictmap_storage_objectid_prefix
instead.
Value | Description |
---|---|
none |
Don't use fake object IDs. |
full |
The object ID is written to dict as an empty value, because it's not used. |
read-only |
Useful for backwards compatibility. The path is written to the dict as the object ID even though it is not used (except potentially by an older Dovecot version). |
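For example, as used for index bundles in the full configuration example later on this page:

```
metacache {
  fs dictmap {
    storage_passthrough_paths = full
  }
}
```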
fs_fscache_path
Default | [None] |
---|---|
Value | string |
Path to the fscache.
fs_fscache_size
Default | [None] |
---|---|
Value | size |
Size of the fscache.
fs_http_add_headers
Default | [None] |
---|---|
Value | String List |
Headers to add to HTTP requests.
fs_http_log_headers
Default | [None] |
---|---|
Value | Boolean List |
Headers with the given name in HTTP responses are logged as part of any error,
debug or warning messages related to the HTTP request. These headers are also
included in the http_request_finished
event as fields prefixed with
http_hdr_
.
fs_http_log_trace_headers
Default | yes |
---|---|
Value | boolean |
If yes, add X-Dovecot-User: and X-Dovecot-Session: headers to the HTTP request. The session header is useful to correlate object storage requests to AppSuite/Dovecot sessions.
fs_http_reason_header_max_length
Default | [None] |
---|---|
Value | unsigned integer |
If non-zero, add an X-Dovecot-Reason: header to the HTTP request. The value contains a human-readable string describing why the request is being sent.
fs_http_slow_warning
Default | 5s |
---|---|
Value | time (milliseconds) |
Log a warning about any HTTP request that takes longer than this time.
fs_s3
Value | Named Filter |
---|---|
Filter for S3-specific settings.
fs_s3_access_key
Default | [None] |
---|---|
Value | string |
S3 access key. Not needed when AWS IAM is used.
fs_s3_auth_host
Default | 169.254.169.254 |
---|---|
Value | string |
Advanced Setting; this should not normally be changed. |
AWS IAM hostname. Normally there is no reason to change this. This is mainly intended for testing.
fs_s3_auth_port
Default | 80 |
---|---|
Value | Port Number |
Advanced Setting; this should not normally be changed. |
AWS IAM port. Normally there is no reason to change this. This is mainly intended for testing.
fs_s3_auth_role
Default | [None] |
---|---|
Value | string |
See Also |
If not empty, perform AWS IAM lookup using this role.
fs_s3_bucket
Default | [None] |
---|---|
Value | string |
S3 bucket name added to the request path.
fs_s3_bulk_delete_limit
Default | 1000 |
---|---|
Value | unsigned integer |
Number of deletes supported within the same bulk delete request. 0 disables bulk deletes.
fs_s3_region
Default | [None] |
---|---|
Value | string |
See Also |
Specify region name for AWS S3 bucket. Only needed when using v4 signing.
fs_s3_secret
Default | [None] |
---|---|
Value | string |
S3 secret. Not needed when AWS IAM is used.
fs_s3_signing
Default | v4 |
---|---|
Value | string |
Allowed Values | v4 v2 |
See Also |
AWS S3 signing version to use. It is recommended to keep the default v4 signing, which also requires fs_s3_region to be set. AWS v2 signing is deprecated.
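For example, with the default v4 signing (the URL and region are placeholders):

```
fs_s3_url = https://s3.example.com/
fs_s3_region = eu-west-1
fs_s3_signing = v4
```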
fs_s3_url
Default | [None] |
---|---|
Value | string |
URL for accessing the S3 storage. For example:
https://BUCKETNAME.s3.example.com
fs_server_backend
Default | [None] |
---|---|
Value | string |
See Also |
Specifies how to store files with fs-server.
fs_sproxyd
Value | Named Filter |
---|---|
See Also |
Filter for sproxyd-specific settings.
fs_sproxyd_access_by_path
Default | no |
---|---|
Value | boolean |
Advanced Setting; this should not normally be changed. |
Objects are accessed by the path instead of by the object ID. Scality sproxyd internally converts the paths into object IDs.
fs_sproxyd_avoid_423_timeout
Default | [None] |
---|---|
Value | unsigned integer |
Advanced Setting; this should not normally be changed. |
Delay DELETE requests if the same object ID has been accessed with GET/HEAD/PUT by the same process within this many milliseconds. This is intended to reduce 423 Locked responses sent by Scality.
When 0, no delay is added. Only use this setting if it can be seen to bring a benefit. Careful investigation of current error rates and consideration of the overall throughput of the platform are recommended before using it.
fs_sproxyd_class
Default | 2 |
---|---|
Value | unsigned integer |
Scality Class of Service. 2
means that the objects are written to the
Scality RING 3 times in total. This is generally the minimum allowable
redundancy for mail and index objects.
FTS data is more easily reproducible, so losing those indexes is not as
critical; Class of Service 1
may be appropriate based on customer
requirements.
fs_sproxyd_url
Default | [None] |
---|---|
Value | string |
URL for accessing the sproxyd storage.
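A minimal sketch (the URL is a placeholder; the actual sproxyd endpoint path is installation-specific):

```
fs_sproxyd_url = http://scality.example.com:81/proxy/arc/
fs_sproxyd_class = 2
```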
metacache
Value | Named Filter |
---|---|
See Also |
Named filter for initializing FS Driver for obox index bundles.
metacache_bg_root_uploads
Default | no |
---|---|
Value | boolean |
Advanced Setting; this should not normally be changed. |
By default, changes to folders (e.g. creating or renaming) are uploaded immediately to object storage. If this setting is enabled, the upload happens some time later (within metacache_upload_interval).
metacache_bundle_list_cache
Default | yes |
---|---|
Value | boolean |
Advanced Setting; this should not normally be changed. |
Enable caching bundle list.
metacache_close_delay
Default | 2secs |
---|---|
Value | time |
If user was accessed this recently, assume the user's indexes are up-to-date. If not, list index bundles in object storage (or Cassandra) to see if they have changed. This typically matters only when user is being moved to another backend and soon back again, or if the user is simultaneously being accessed by multiple backends.
metacache_forced_refresh_interval
Default | 8 hours |
---|---|
Value | time |
If the user's indexes haven't been refreshed for this long, force a refresh. This is done by ignoring the metacache_close_delay setting (i.e. the same as if it were 0).
This setting allows highly active users' indexes to still be refreshed once in a while. (Although if the user has an active session 100% of the time, the refresh cannot be done.)
metacache_index_merging
Default | v2 |
---|---|
Value | string |
Advanced Setting; this should not normally be changed. |
Specifies the algorithm to use when merging folder indexes.
Algorithm | Description |
---|---|
none |
Disable the merging algorithm |
v2 |
The new algorithm, designed specifically for merging two indexes. This is the recommended setting. |
metacache_last_host_dict
Default | [None] |
---|---|
Value | string |
If this setting is configured to a valid Dictionary URI, obox looks up
metacache_last_host
key from dict. This is meant to be used with
Palomar.
The metacache_last_host
value is kept in Palomar GeoDB.
If the lookup is successful and metacache_last_host
is different
from the current host (cluster_backend_name
), metacache is pulled
from the metacache_last_host
backend. Obox also updates
metacache_last_host
to the given dict.
metacache_max_grace
Default | 1G |
---|---|
Value | size |
How much disk space on top of metacache_max_space
can be used
before Dovecot stops allowing more users to login.
metacache_max_parallel_requests
Default | 10 |
---|---|
Value | unsigned integer |
Advanced Setting; this should not normally be changed. |
Maximum number of metacache read/write operations to do in parallel.
metacache_max_space
Default | unlimited |
---|---|
Value | size |
How much disk space metacache can use before old data is cleaned up.
Generally, this should be set at ~90% of the available disk space.
metacache_merge_max_uid_renumbers
Default | 100 |
---|---|
Value | unsigned integer |
Advanced Setting; this should not normally be changed. |
This is used only with metacache_index_merging = v2
.
If the merging detects that there are more than this many UIDs that are conflicting and would have to be renumbered, don't renumber any of them. This situation isn't expected to happen normally, and renumbering too many UIDs can cause unnecessary extra disk I/O.
The downside is that a caching IMAP client might become confused if it had previously seen different UIDs.
metacache_priority_weights
Default | [None] |
---|---|
Value | string |
Advanced Setting; this should not normally be changed. |
metacache_refresh_index_once_after
Default | [None] |
---|---|
Value | unsigned integer |
This forces the next mailbox open after the specified UNIX timestamp to refresh locally cached indexes to see if other backends have modified the user's indexes simultaneously.
metacache_rescan_interval
Default | 1 day |
---|---|
Value | time |
See Also | |
Advanced Setting; this should not normally be changed. |
How often to run a background metacache rescan, which makes sure that the disk space usage tracked by metacache process matches what really exists on filesystem.
The desync may happen, for example, because the metacache process (or the whole backend) crashes.
The rescanning helps with two issues:
Setting this to 0 disables the rescan.
It's also possible to trigger a rescan manually by running the doveadm metacache rescan command.
metacache_rescan_mails_once_after
Default | [None] |
---|---|
Value | unsigned integer |
This forces the next mailbox open after the specified UNIX timestamp to rescan the mails to make sure there aren't any unindexed mails.
metacache_roots
Default | mail_home and mail_chroot |
---|---|
Value | string |
See Also | |
Advanced Setting; this should not normally be changed. |
List of metacache root directories, separated with :
.
Usually this is automatically parsed directly from mail_home
and
mail_chroot
settings.
Accessing a metacache directory outside these roots will result in a warning: "Index directory is outside metacache_roots".
It's possible to disable this check entirely by setting the value to :
.
TIP
This setting is required for metacache_rescan_interval
.
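For example (the explicit path is a placeholder matching the mail_home example earlier on this page):

```
# Explicit roots, normally derived from mail_home and mail_chroot:
metacache_roots = /var/vmail
# Or disable the check entirely:
#metacache_roots = :
```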
metacache_secondary_indexes
Default | yes |
---|---|
Value | boolean |
Advanced Setting; this should not normally be changed. |
Enable including secondary indexes into the user root bundle when using the virtual or virtual-attachments plugin.
This setting can be used to exclude the virtual and virtual-attachments folders from the user root bundle in case any problems are encountered.
metacache_size_weights
Default | [None] |
---|---|
Value | string |
Advanced Setting; this should not normally be changed. |
Whenever metacache notices that metacache_max_space
has been
reached, it needs to delete some older index files to make space for new ones.
This is done by calculating cleanup weights.
The simplest cleanup weight is to just use the user's last access UNIX timestamp as the weight. The lowest weight gets deleted first.
It's possible to enable using only simple weights by explicitly setting
metacache_priority_weights
and this setting
to empty values. However, by default priorities are taken into account when
calculating the weight.
The metacache_priority_weights
setting can be used to fine tune how
metacache adjusts the cleanup weights for different index priorities. There
are 4 major priorities (these are also visible in e.g.
doveadm metacache list
output):
Priority | Description |
---|---|
0 |
User root indexes (highest priority) |
1 |
FTS indexes |
2 |
INBOX and Junk folder indexes ("special" folders) |
3 |
Non-special folder indexes (lowest priority) |
The metacache_priority_weights
contains <percentage> <weight adjustment>
pairs for each of these priorities. So, for example, the
first 10% +1d
applies to the user root priority and the last 100% 0
applies to other folders' priority.
The weight calculation is then done as follows:

- The user's last access UNIX timestamp is used as the base weight.
- metacache_priority_weights is then looked up for the given priority's indexes: if the disk space used by index files of this priority type takes <= <percentage> of metacache_max_space, <weight adjustment> is added to the weight. For example, with 10% +1d, if these index files take <= 10% of metacache_max_space, the weight is increased by 1d = 60*60*24 = 86400.
- +1d typically gives 1 extra day for the index files to exist compared to index files that don't have the weight boost.
- The <percentage> exists so that the weight boost doesn't cause some index files to dominate too much. For example, if root indexes' weights weren't limited, it could be possible that the system would be full of only root indexes and active users' other indexes would be cleaned almost immediately.
- Finally, this setting (metacache_size_weights) is used to do final adjustments depending on the disk space used by this user's indexes of the specific priority. The setting is in the format <low size> <low weight adjustment> <max size> <high weight adjustment>.

The weight adjustment calculation is:

- If the disk space used is below <low size>, increase the weight by (<low size> - <disk space>) * <low weight adjustment> / <low size>.
- Otherwise, scale <disk space> between <low size> and <max size>, and increase the weight by (<disk space> - <low size>) * <high weight adjustment> / (<max size> - <low size>).

For example, with a 2M +30 1G +120 value the adjustment decreases from +30 at 0 bytes down to 0 at 2M (e.g. +30, +23, +15, +8, 0) and then grows from 0 at 2M up to +120 at 1G (e.g. +1, +6, +12, +30, +60, +120). With 1M used, the first rule gives (2M - 1M) * 30 / 2M = +15.
Example:
metacache_priority_weights = 10% +1d 10% +1d 50% +1h 100% 0
metacache_size_weights = 2M +30 1G +120
metacache_upload_interval
Default | 5min |
---|---|
Value | time |
How often to upload important index changes to object storage.
This mainly means that if a backend crashes during this time, message flag changes within this time may be lost. A longer time can however reduce the number of index bundle uploads.
metacache_userdb
Default | metacache/metacache-users.db |
---|---|
Value | string |
Advanced Setting; this should not normally be changed. |
Path to a database which metacache process periodically writes to.
This database is read by metacache at startup to get the latest state.
The path is relative to state_dir
.
obox
Value | Named Filter |
---|---|
See Also |
Named filter for initializing FS Driver for obox mails.
INFO
See the storage provider pages for specific parameters that can be used.
obox_allow_nonreproducible_uids
Default | no |
---|---|
Value | boolean |
Advanced Setting; this should not normally be changed. |
Normally Dovecot attempts to make sure that IMAP UIDs aren't lost even if a backend crashes (or if user is moved to another backend without indexes first being uploaded). This requires uploading index bundles whenever expunging recently saved mails. Setting this to "yes" avoids this extra index bundle upload at the cost of potentially changing IMAP UIDs. This could cause caching IMAP clients to become confused, possibly even causing it to delete wrong mails. Also FTS indexes may become inconsistent since they also rely on UIDs.
obox_autofix_storage
Default | no |
---|---|
Value | boolean |
Advanced Setting; this should not normally be changed. |
If activated, when an unexpected 404 is found when retrieving a message from object storage, Dovecot will rescan the mailbox by listing its objects. If the 404-object is still listed in this query, Dovecot issues a HEAD to determine if the message actually exists. If this HEAD request returns a 404, the message is dropped from the index. The message object is not removed from the object storage.
obox_avoid_cached_vsize
Default | no |
---|---|
Value | boolean |
Advanced Setting; this should not normally be changed. |
Avoid getting the email's size from the cache whenever the email body is opened anyway. This avoids unnecessary errors if a lot of the vsizes are wrong. The vsize in dovecot.index is also automatically updated to the fixed value with or without this setting.
This setting was mainly useful due to earlier bugs that caused the vsize to be wrong in many cases.
obox_disable_fast_copy
Default | no |
---|---|
Value | boolean |
Advanced Setting; this should not normally be changed. |
Workaround for object storages with a broken copy operation. Instead perform copying by reading and writing the full object.
obox_fetch_lost_mails_as_empty
Default | no |
---|---|
Value | boolean |
See Also | |
Advanced Setting; this should not normally be changed. |
Cassandra: "Object exists in dict, but not in storage" errors will be
handled by returning empty emails to the IMAP client. The tagged FETCH
response will be OK
instead of NO
.
obox_lost_mailbox_prefix
Default | recovered-lost-folder- |
---|---|
Value | string |
Advanced Setting; this should not normally be changed. |
If folder name is lost entirely due to lost index files, generate a name for the folder using this prefix.
obox_max_parallel_copies
Default | mail_prefetch_count |
---|---|
Value | unsigned integer |
Maximum number of email HTTP copy/link operations to do in parallel.
If the storage driver supports bulk-copy/link operation, this controls how many individual copy operations can be packed into a single bulk-copy/link HTTP request.
obox_max_parallel_deletes
Default | mail_prefetch_count |
---|---|
Value | unsigned integer |
Maximum number of email HTTP delete operations to do in parallel.
If the storage driver supports bulk-delete operation, this controls how many individual delete operations can be packed into a single bulk-delete HTTP request.
obox_max_parallel_writes
Default | mail_prefetch_count |
---|---|
Value | unsigned integer |
Maximum number of email write HTTP operations to do in parallel.
obox_max_rescan_mail_count
Default | 10 |
---|---|
Value | unsigned integer |
Advanced Setting; this should not normally be changed. |
Keep a maximum of this many newly saved mails in local metacache indexes
before metacache is flushed to object storage. For example with a value of
10, every 11th mail triggers a metacache flush. Note that the flush isn't
immediate - it will happen in the background some time within the next
metacache_upload_interval
.
A higher value reduces the number of index bundle uploads, but increases the number of mail downloads to fill the caches after a backend crash.
obox_pop3_backend_uidls
Default | yes |
---|---|
Value | boolean |
Advanced Setting; this should not normally be changed. |
If set to no
, don't try to lookup migrated POP3 UIDLs from email metadata
in any situation. This used to bring performance improvements, but now the
existence of migrated UIDLs is tracked more efficiently and there should be no
need to change this setting.
obox_size_missing_action
Default | warn-read |
---|---|
Value | string |
Allowed Values | warn-read read stat |
Advanced Setting; this should not normally be changed. |
This setting controls what should be done when the mail object is missing the size metadata.
Options:
Value | Description |
---|---|
read |
Same as warn-read , but doesn't log a warning. |
stat |
Use fs_stat() to get the size, which is the fastest but doesn't work if mails are compressed or encrypted. |
warn-read |
Log a warning and fallback to reading the email to calculate its size. |
obox_track_copy_flags
Default | no |
---|---|
Value | boolean |
Try to avoid Cassandra SELECTs when expunging mails.
Enable only if dictmap/Cassandra and lazy-expunge plugin are used.
obox_use_object_ids
Default | yes |
---|---|
Value | boolean |
Advanced Setting; this should not normally be changed. |
If enabled, access objects directly via their IDs instead of by path, if possible.
There is no need to add date.save to the various cache settings, as this data is always stored in the dovecot.index file by obox.
mail_fsync
With obox, this setting is recommended to be set to never
.
In obox installations, this setting only affects the local metacache operations. If a server crashes, the existing metacache is treated as potentially corrupted and isn't used, so never provides the best performance.
mail_prefetch_count
For obox, this setting affects reading multiple mails in parallel from object storage to local disk without waiting for previous reads to finish.
The downside is that each mail uses a file descriptor and disk space in mail_temp_dir
.
For obox, a good value is likely between 10 and 100.
mail_sort_max_read_count
As a special case with obox, when doing a SORT (ARRIVAL) the SORT will always return OK.
When it reaches the limit, it starts using the timestamp from the time the object was saved. This is commonly the same as the received-timestamp, but not always.
Often this produces mostly the same result, especially in the INBOX.
obox installations using quota_over_status must also have quota_over_status_lazy_check enabled.
Otherwise, the quota_over_status_current checking may cause a race condition with metacache cleaning, which may end up losing folder names or mail flags within folders.
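To avoid this race condition, enable the lazy check alongside the over-status configuration (a sketch; the quota_over_status settings themselves are configured separately):

```
quota_over_status_lazy_check = yes
```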
fs-compress
For an overview of compression features, see the fs-compress plugin documentation.
fs_s3_url = https://s3.example.com/
fs_s3_access_key = ACCESSKEY
fs_s3_secret = SECRET
fs_s3_bucket = mails
fs_compress_write_method = zstd
obox {
fs fscache {
size = 512M
path = /var/cache/mails/%{user | sha1 % 4}
}
fs compress {
}
fs dictmap {
dict proxy {
name = cassandra
socket_path = dict-async
}
storage_objectid_prefix = %{user}/mails/
#lock_path = /tmp # Set only without lazy_expunge plugin
}
fs s3 {
}
}
metacache {
fs compress {
}
fs dictmap {
dict proxy {
name = cassandra
socket_path = dict-async
}
storage_passthrough_paths = full
}
fs s3 {
}
}
fts dovecot {
fs fts-cache {
}
fs fscache {
size = 512M
path = /var/cache/fts/%{user | sha1 % 4}
}
fs compress {
}
fs dictmap {
dict proxy {
name = cassandra
socket_path = dict-async
}
storage_passthrough_paths = full
}
fs s3 {
}
}
Note that both of the following orderings work and have no practical difference, because dictmap doesn't modify the object contents in any way:
# compress before dictmap
metacache {
fs compress {
}
fs dictmap {
}
fs sproxyd {
}
}
# compress after dictmap
metacache {
fs dictmap {
}
fs compress {
}
fs sproxyd {
}
}
With encryption enabled:
obox {
fs fscache {
size = 512M
path = /var/cache/mails/%{user | sha1 % 4}
}
fs compress {
}
fs crypt {
}
fs dictmap {
dict proxy {
name = cassandra
socket_path = dict-async
}
storage_objectid_prefix = %{user}/mails/
#lock_path = /tmp # Set only without lazy_expunge plugin
}
fs s3 {
}
}
# Similarly add for metacache { .. } and fts dovecot { .. }
To encrypt mails at rest you can use the fs-crypt plugin. Using the mail-crypt plugin is not recommended with obox.
For performance reasons, it is not recommended to put the fs-crypt plugin before the fscache plugin. The plugin is intended for encrypting data at rest in remote storage.
First, you need to generate a keypair. This can be done with OpenSSL. Dovecot 3.0 supports EdDSA with X25519, ECDSA, and RSA. RSA is not recommended for performance and size reasons.
To generate a key, you can use
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:prime256v1 -out private.pem
or
openssl genpkey -algorithm X25519 -out private.pem
Once the private key is generated, you can generate the public key with
openssl pkey -in private.pem -out public.pem -pubout
WARNING: If you ever lose the private key, all data encrypted with it will be IRREVOCABLY lost. There is no way to recover mails without the private key.
To use the keys, configure:
crypt_global_public_key_file = /etc/dovecot/public.pem
crypt_global_private_key main {
crypt_private_key_file = /etc/dovecot/private.pem
}
It is highly recommended to use the lazy_expunge plugin.
TIP
If autoexpunging is done on the lazy_expunge folder, the autoexpunge time must be longer than any potentially slow object storage operation. For example, 15 minutes should be a rather safe minimum.
mail_plugins {
lazy_expunge = yes
}
lazy_expunge_mailbox = .EXPUNGED
namespace inbox {
mailbox .EXPUNGED {
autoexpunge = 7 days
}
}
lazy_expunge_only_last_instance
This is recommended to be enabled for all installations.
Obox does reference counting in Cassandra (fs-dictmap), and this setting takes advantage of that setup.
obox_track_copy_flags
Lazy expunge allows reducing Cassandra dictmap lookups by removing the fs_dictmap_lock_path setting and enabling the obox_track_copy_flags setting.
obox_track_copy_flags = yes
Dovecot mailbox indexes are required when using obox.
When using the virtual plugin with obox, the virtual INDEX location must point to a directory named "virtual" in the user home directory. This way the virtual indexes are added to the obox root index bundles and will be preserved when user moves between backends or when metacache is cleaned.
mail_driver = virtual
mail_path = /etc/dovecot/virtual
mail_index_path = ~/virtual
The virtual indexes will be stored in the user root bundle.
It is possible to disable storing virtual indexes in the user root bundle using metacache_disable_secondary_indexes.
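If storing the virtual indexes in the user root bundle is not desired, the setting mentioned above can be enabled (a sketch, assuming the usual boolean yes/no form used by other metacache settings):

```
metacache_disable_secondary_indexes = yes
```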
Particularly with obox, Dovecot nodes need to do frequent DNS lookups. It is recommended that the underlying platform provides either a performant DNS service or deploys a local DNS cache on the Dovecot nodes.
Software that is known to work in this regard is PowerDNS as a service and nscd for local caching.
In environments where reaching a particular packets-per-second (PPS) rate, for DNS or for all packets combined, can lead to harsh throttling, it is recommended to select a local caching option such as nscd. The same applies to certain virtualized environments, where the layer between the virtual machine and the hypervisor can drop packets under high load, leading to DNS timeouts. Additionally, Amazon AWS instances have been known to react adversely when an undocumented PPS rate is reached.
In order to reduce I/O on the backends, it is recommended to disable the ext4 journal:
tune2fs -O ^has_journal /dev/vdb
e2fsck -f /dev/vdb
Dovecot doesn't require atimes, so you can mount the filesystem with noatime:
mount -o defaults,discard,noatime /dev/vdb /metacache
$ umount /metacache
$ tune2fs -O ^has_journal /dev/sdc1
tune2fs 1.42.9 (28-Dec-2013)
$ fsck.ext4 -f /dev/sdc1
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sdc1: 11/16777216 files (0.0% non-contiguous), 1068533/67108608 blocks
$ tune2fs -o discard /dev/sdc1
tune2fs 1.42.9 (28-Dec-2013)
$ dumpe2fs /dev/sdc1 | grep discard
dumpe2fs 1.42.9 (28-Dec-2013)
Default mount options: user_xattr acl discard
$ blkid /dev/sdc1
/dev/sdc1: UUID="5d20d432-3152-4ccf-98e3-94e7500cfd40" TYPE="ext4"
$ vi /etc/fstab
UUID=5d20d432-3152-4ccf-98e3-94e7500cfd40 /metacache ext4 defaults,noatime,nodiratime 0 0
$ mount /metacache
$ mount | grep metacache
/dev/sdc1 on /metacache type ext4 (rw,noatime,nodiratime,seclabel)
To further reduce IOPs on the metacache volume when using the mail compression or mail crypt plugins, set the dovecot temp directory to a tmpfs volume:
mail_temp_dir = /dev/shm/
It can be useful to flush unimportant changes in metacache every night when the system has idle capacity. This way if users are moved between backends, there's somewhat less work to do on the new backends since caches are more up-to-date. This can be done by running doveadm metacache flushall in a cronjob.
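A possible crontab entry (the 03:00 schedule is an assumption; pick a period when your system is idle):

```
# Flush unimportant metacache changes nightly during idle hours
0 3 * * * doveadm metacache flushall
```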
metacache {
fs compress {
}
...
}
All of the object storage backends should be set up to compress index bundle objects. This commonly shrinks the indexes down to 20-30% of the original size (with zstd compression).
See this documentation for supported algorithms and their settings: fs-compress plugin.
Email object (a/k/a message blob data) compression is recommended to be done with Compression Plugin: fs-compress. Example:
fs_compress_write_method = zstd
obox {
fs fscache {
}
fs compress {
}
#...
}
By using compress fs after the fscache fs, the mails are stored uncompressed in the fscache and reading is more efficient.
If fs_compress_read_plain_fallback = yes, the compression status of email object data is auto-detected. Therefore, fs_compress_write_method may safely be added to a currently existing system; existing non-compressed mail objects will be identified correctly.
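Enabling compression on an existing system with uncompressed mail objects could then combine both settings:

```
fs_compress_write_method = zstd
fs_compress_read_plain_fallback = yes
```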
An alternative to Compression Plugin: fs-compress for mails is to use the mail-compress plugin. However, the problem with this approach with obox is that the mail files are written compressed to fscache, which can be inefficient.
Using obox with Cassandra is done via the fs-dictmap wrapper.
Configuration information can be found at Dictmap.
It's possible to split fscaches over multiple independent directories by including %variables in the path. This is typically done based on username hashing, e.g. /var/fscache/%{user | sha1 % 4}
would use 4 fscache directories. This is especially recommended with larger fscaches (>10 GB). The main benefit of split fscaches is that any cache trashing caused by a few users will be limited only to those users' fscaches.
For example, if Dovecot is internally rebuilding caches for a single user, a 1 GB fscache could quickly be filled with only that one user's emails. But if the fscache is split over multiple directories, the other directories won't be affected and may still contain useful cache for other users.
The fscache plugin relies on filesystem usage information to be consistent. For example ZFS provides different information on block usage depending on when the information is queried, making fscache not work.
WARNING
ZFS support is currently explicitly disabled.
obox {
fs fscache {
size = 2G
path = /var/cache/mails
}
}
# Or split users to multiple directories (4 * 512MB = 2GB total):
obox {
fs fscache {
size = 512M
path = /var/cache/mails/%{user | sha1 % 4}
}
}