An annotated configuration example:
# Load the obox plugin
mail_plugins = $mail_plugins obox
# How many mails to download in parallel from object storage.
#
# A higher number improves the performance, but also increases the local disk
# usage and number of used file descriptors.
mail_prefetch_count = 10
# Override settings for writes, copies, and deletes. They default to
# mail_prefetch_count.
plugin {
obox_max_parallel_writes = $mail_prefetch_count
obox_max_parallel_copies = $mail_prefetch_count
obox_max_parallel_deletes = $mail_prefetch_count
}

# How much disk space metacache can use before old data is cleaned up.
# Generally, this should be set at ~90% of the available disk space.
plugin {
metacache_max_space = 200G
}

# How much disk space on top of metacache_max_space can be used before
# Dovecot stops allowing more users to login.
plugin {
metacache_max_grace = 10G
}

# How often to upload important index changes to object storage. This mainly
# means that if a backend crashes during this time, message flag changes
# within this time may be lost. A longer time can however reduce the number
# of index bundle uploads.
plugin {
metacache_upload_interval = 5min
}

# If the user was accessed this recently, assume the user's indexes are
# up-to-date. If not, list index bundles in object storage (or Cassandra) to
# see if they have changed. This typically matters only when a user is moved
# to another backend and soon back again, or if the user is simultaneously
# accessed by multiple backends. Default is 2 seconds.
plugin {
metacache_close_delay = 2secs
}

# Specifies the location of the local mail cache directory. This will contain
# Dovecot index files and needs to be high performance (e.g. SSD storage).
# Alternatively, if there is enough memory available to hold all concurrent
# users' data at once, a tmpfs would work as well. The "%2Mu" takes the first
# 2 chars of the MD5 hash of the username so everything isn't in one directory.
mail_home = /var/vmail/%2Mu/%u
# UNIX UID & GID which are used to access the local cache mail files.
mail_uid = vmail
mail_gid = vmail
# We can disable fsync()ing for better performance. It's not a problem if
# locally cached index file modifications are lost.
mail_fsync = never
# Directory where downloaded/uploaded mails are temporarily stored. Ideally
# all of these would stay in memory and never hit the disk, but in some
# situations the mails may have to be kept for somewhat longer and then end
# up on disk. So there should be enough disk space available in the
# temporary filesystem.
mail_temp_dir = /tmp
TIP: /tmp should be a good choice on any recent OS, as it normally points to /dev/shm, so this temporary data is stored in memory and will never be written to disk. However, this should be checked on a per-installation basis to ensure that it is true.
# Enable mailbox list indexes. This is required with the obox format.
mailbox_list_index = yes
# If LISTINDEX resides in tmpfs, INBOX status should be included in the list
# index.
mailbox_list_index_include_inbox = yes
fs_auth_cache_dict
| Default | [None] |
|---|---|
| Value | string |
Dictionary URI where fs-auth process keeps authentication cache. This allows sharing the cache between multiple servers.
fs_auth_request_max_retries
| Default | 1 |
|---|---|
| Value | unsigned integer |
If fs-auth fails to perform authentication lookup, retry the HTTP request this many times.
fs_auth_request_timeout
| Default | 10s |
|---|---|
| Value | time (milliseconds) |
Absolute HTTP request timeout for authentication lookups.
fs_server_backend
| Default | |
|---|---|
| Value | string |
Specifies how to store files with fs-server.
metacache_bg_root_uploads
| Default | no |
|---|---|
| Value | boolean |
| Advanced Setting; this should not normally be changed. | |
By default, changes to folders (e.g. creating or renaming) are uploaded
immediately to object storage. If this setting is enabled, the upload
happens later (within metacache_upload_interval).
metacache_close_delay
| Default | 2secs |
|---|---|
| Value | time |
If user was accessed this recently, assume the user's indexes are up-to-date. If not, list index bundles in object storage (or Cassandra) to see if they have changed. This typically matters only when user is being moved to another backend and soon back again, or if the user is simultaneously being accessed by multiple backends.
metacache_disable_bundle_list_cache
| Default | no |
|---|---|
| Value | boolean |
| Advanced Setting; this should not normally be changed. | |
Disable caching bundle list.
metacache_disable_secondary_indexes
| Default | no |
|---|---|
| Value | boolean |
| Advanced Setting; this should not normally be changed. | |
Disable including secondary indexes into the user root bundle when using the virtual or virtual-attachments plugin.
This setting can be used to exclude the virtual and virtual-attachments folders from the user root bundle in case any problems are encountered.
metacache_forced_refresh_interval
| Default | 8 hours |
|---|---|
| Value | time |
If the user's indexes haven't been refreshed for this long, force a
refresh. This is done by ignoring the
metacache_close_delay setting (i.e. the same as if it were 0).
This setting allows even highly active users' indexes to be refreshed once in a while. (Although if the user has an active session 100% of the time, the refresh cannot be done.)
metacache_index_merging
| Default | v2 |
|---|---|
| Value | string |
| Advanced Setting; this should not normally be changed. | |
Specifies the algorithm to use when merging folder indexes.
| Algorithm | Description |
|---|---|
| none | Disable the merging algorithm. |
| v2 | The new algorithm designed specifically for merging two indexes. This is the recommended setting. |
metacache_last_host_dict
| Default | [None] |
|---|---|
| Value | string |
If this setting is configured to a valid Dictionary URI, obox looks up
the metacache_last_host key from the dict. This is meant to be used with
Palomar.
The metacache_last_host value is kept in Palomar GeoDB.
If the lookup is successful and metacache_last_host is different
from the current host (cluster_backend_name), metacache is pulled
from the metacache_last_host backend. Obox also updates
metacache_last_host in the given dict.
metacache_max_grace
| Default | 1G |
|---|---|
| Value | size |
How much disk space on top of metacache_max_space can be used
before Dovecot stops allowing more users to login.
metacache_max_parallel_requests
| Default | 10 |
|---|---|
| Value | unsigned integer |
| Advanced Setting; this should not normally be changed. | |
Maximum number of metacache read/write operations to do in parallel.
metacache_max_space
| Default | 0 |
|---|---|
| Value | size |
How much disk space metacache can use before old data is cleaned up.
Generally, this should be set at ~90% of the available disk space.
metacache_merge_max_uid_renumbers
| Default | 100 |
|---|---|
| Value | unsigned integer |
| Advanced Setting; this should not normally be changed. | |
This is used only with metacache_index_merging = v2.
If the merging detects that there are more than this many UIDs that are conflicting and would have to be renumbered, don't renumber any of them. This situation isn't expected to happen normally, and renumbering too many UIDs can cause unnecessary extra disk I/O.
The downside is that a caching IMAP client might become confused if it had previously seen different UIDs.
metacache_priority_weights
| Default | [None] |
|---|---|
| Value | string |
| Advanced Setting; this should not normally be changed. | |
Used to fine-tune how metacache adjusts cleanup weights for different index priorities; see the description under metacache_size_weights.
metacache_rescan_interval
| Default | 1 day |
|---|---|
| Value | time |
| Advanced Setting; this should not normally be changed. | |
How often to run a background metacache rescan, which makes sure that the disk space usage tracked by metacache process matches what really exists on filesystem.
The desync may happen, for example, because the metacache process (or the whole backend) crashes.
Rescanning corrects any such desync.
Setting this to 0 disables the rescan.
It's also possible to do this manually by running the
doveadm metacache rescan command.
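As an illustrative sketch (following the plugin-block convention used elsewhere on this page), the background rescan can be disabled like this:

```
plugin {
  # 0 disables the background metacache rescan entirely.
  metacache_rescan_interval = 0
}
```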
metacache_roots
| Default | mail_home and mail_chroot |
|---|---|
| Value | string |
| Advanced Setting; this should not normally be changed. | |
List of metacache root directories, separated with :.
Usually this is automatically parsed directly from mail_home and
mail_chroot settings.
Accessing a metacache directory outside these roots will result in a warning: "Index directory is outside metacache_roots".
It's possible to disable this check entirely by setting the value to :.
TIP
This setting is required for metacache_rescan_interval.
metacache_size_weights
| Default | [None] |
|---|---|
| Value | string |
| Advanced Setting; this should not normally be changed. | |
Whenever metacache notices that metacache_max_space has been
reached, it needs to delete some older index files to make space for new ones.
This is done by calculating cleanup weights.
The simplest cleanup weight is to just use the user's last access UNIX timestamp as the weight. The lowest weight gets deleted first.
It's possible to enable using only simple weights by explicitly setting
metacache_priority_weights and this setting
to empty values. However, by default priorities are taken into account when
calculating the weight.
The metacache_priority_weights setting can be used to fine tune how
metacache adjusts the cleanup weights for different index priorities. There
are 4 major priorities (these are also visible in e.g.
doveadm metacache list output):
| Priority | Description |
|---|---|
| 0 | User root indexes (highest priority) |
| 1 | FTS indexes |
| 2 | INBOX and Junk folder indexes ("special" folders) |
| 3 | Non-special folder indexes (lowest priority) |
The metacache_priority_weights setting contains <percentage> <weight adjustment> pairs for each of these priorities. So, for example, the first 10% +1d applies to the user root priority and the last 100% 0 applies to the non-special folders' priority.
The weight calculation is then done by:
- Starting with the user's last access UNIX timestamp as the base weight.
- metacache_priority_weights is next looked up for the given priority: if the disk space used by index files of this priority type takes <= <percentage> of metacache_max_space, add <weight adjustment> to the weight. So, for example, with 10% +1d, if the index files of this priority type take <= 10% of metacache_max_space, the weight is increased by 1d = 60*60*24 = 86400.
  - +1d typically gives 1 extra day for the index files to exist compared to index files that don't have the weight boost.
  - <percentage> exists so that the weight boost doesn't cause some index files to dominate too much. For example, if root indexes' weights weren't limited, the system could end up full of only root indexes, while active users' other indexes would be cleaned almost immediately.
- The metacache_size_weights setting is used to do final adjustments depending on the disk space used by this user's indexes of the specific priority. The setting is in the format <low size> <low weight adjustment> <max size> <high weight adjustment>. The weight adjustment calculation is:
  - If the disk space used is below <low size>, increase the weight by (<low size> - <disk space>) * <low weight adjustment> / <low size>.
  - Otherwise, cap the disk space at <max size> and increase the weight by (<disk space> - <low size>) * <high weight adjustment> / (<max size> - <low size>).
Example:
plugin {
metacache_priority_weights = 10% +1d 10% +1d 50% +1h 100% 0
metacache_size_weights = 2M +30 1G +120
}
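The size-weight adjustment described above can be sketched in a few lines of Python. This is an illustrative sketch of the formula only (not Dovecot's actual implementation), using the example value 2M +30 1G +120:

```python
# Illustrative sketch of the metacache_size_weights adjustment, with the
# example value "2M +30 1G +120": low size 2M, low adjustment +30,
# max size 1G, high adjustment +120.
def size_weight_adjustment(disk_space, low=2 * 1024**2, low_adj=30,
                           high=1024**3, high_adj=120):
    """Return the cleanup-weight adjustment for a user's index disk usage."""
    if disk_space < low:
        # Below <low size>: boost shrinks linearly from +low_adj at 0 bytes
        # down to 0 at <low size>.
        return (low - disk_space) * low_adj / low
    # At or above <low size>: cap disk space at <max size>, boost grows
    # linearly towards +high_adj at <max size>.
    disk_space = min(disk_space, high)
    return (disk_space - low) * high_adj / (high - low)
```

With these example values, a user with empty indexes gets +30, the boost falls to 0 at 2M, then grows again up to +120 at 1G and stays there.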
metacache_socket_path
| Default | metacache |
|---|---|
| Value | string |
| Advanced Setting; this should not normally be changed. | |
Path to communicate with metacache process.
metacache_upload_interval
| Default | 5min |
|---|---|
| Value | time |
How often to upload important index changes to object storage.
This mainly means that if a backend crashes during this time, message flag changes within this time may be lost. A longer time can however reduce the number of index bundle uploads.
metacache_userdb
| Default | metacache/metacache-users.db |
|---|---|
| Value | string |
| Advanced Setting; this should not normally be changed. | |
Path to a database which metacache process periodically writes to.
This database is read by metacache at startup to get the latest state.
The path is relative to state_dir.
obox_allow_nonreproducible_uids
| Default | no |
|---|---|
| Value | boolean |
| Advanced Setting; this should not normally be changed. | |
Normally Dovecot attempts to make sure that IMAP UIDs aren't lost, even if a backend crashes (or if a user is moved to another backend without the indexes being uploaded first). This requires uploading index bundles whenever expunging recently saved mails. Setting this to "yes" avoids this extra index bundle upload at the cost of potentially changing IMAP UIDs. This could cause caching IMAP clients to become confused, possibly even causing them to delete the wrong mails. Also, FTS indexes may become inconsistent, since they also rely on UIDs.
obox_autofix_storage
| Default | no |
|---|---|
| Value | boolean |
| Advanced Setting; this should not normally be changed. | |
If activated, when an unexpected 404 is encountered while retrieving a message from object storage, Dovecot rescans the mailbox by listing its objects. If the 404 object is still listed in this query, Dovecot issues a HEAD request to determine whether the message actually exists. If the HEAD request also returns 404, the message is dropped from the index. The message object itself is not removed from the object storage.
obox_avoid_cached_vsize
| Default | no |
|---|---|
| Value | boolean |
| Advanced Setting; this should not normally be changed. | |
Avoid getting the email's size from the cache whenever the email body is opened anyway. This avoids unnecessary errors if many of the vsizes are wrong. The vsize in dovecot.index is also automatically updated to the fixed value with or without this setting.
This setting was mainly useful due to earlier bugs that caused the vsize to be wrong in many cases.
obox_disable_fast_copy
| Default | no |
|---|---|
| Value | boolean |
| Advanced Setting; this should not normally be changed. | |
Workaround for object storages with a broken copy operation. Instead perform copying by reading and writing the full object.
obox_dont_use_object_ids
| Default | no |
|---|---|
| Value | boolean |
| Advanced Setting; this should not normally be changed. | |
Access objects directly via their paths instead of IDs, if possible.
obox_fetch_lost_mails_as_empty
| Default | no |
|---|---|
| Value | boolean |
| Advanced Setting; this should not normally be changed. | |
Cassandra: "Object exists in dict, but not in storage" errors will be
handled by returning empty emails to the IMAP client. The tagged FETCH
response will be OK instead of NO.
obox_fs
| Default | [None] |
|---|---|
| Value | string |
This setting handles the basic Object Storage configuration.
INFO
See the storage provider pages for specific parameters that can be used.
obox_index_fs
| Default | obox_fs |
|---|---|
| Value | string |
This setting handles the object storage configuration for index bundles.
obox_lost_mailbox_prefix
| Default | recovered-lost-folder- |
|---|---|
| Value | string |
| Advanced Setting; this should not normally be changed. | |
If folder name is lost entirely due to lost index files, generate a name for the folder using this prefix.
obox_max_parallel_copies
| Default | mail_prefetch_count |
|---|---|
| Value | unsigned integer |
Maximum number of email HTTP copy/link operations to do in parallel.
If the storage driver supports bulk-copy/link operation, this controls how many individual copy operations can be packed into a single bulk-copy/link HTTP request.
obox_max_parallel_deletes
| Default | mail_prefetch_count |
|---|---|
| Value | unsigned integer |
Maximum number of email HTTP delete operations to do in parallel.
If the storage driver supports bulk-delete operation, this controls how many individual delete operations can be packed into a single bulk-delete HTTP request.
obox_max_parallel_writes
| Default | mail_prefetch_count |
|---|---|
| Value | unsigned integer |
Maximum number of email write HTTP operations to do in parallel.
obox_max_rescan_mail_count
| Default | 10 |
|---|---|
| Value | unsigned integer |
| Advanced Setting; this should not normally be changed. | |
Keep a maximum of this many newly saved mails in local metacache indexes
before metacache is flushed to object storage. For example with a value of
10, every 11th mail triggers a metacache flush. Note that the flush isn't
immediate - it will happen in the background some time within the next
metacache_upload_interval.
A higher value reduces the number of index bundle uploads, but increases the number of mail downloads to fill the caches after a backend crash.
obox_no_pop3_backend_uidls
| Default | no |
|---|---|
| Value | boolean |
| Advanced Setting; this should not normally be changed. | |
Enable if there are no migrated POP3 UIDLs. If enabled, don't try to look up UIDLs in any situation.
obox_refresh_index_once_after
| Default | 0 |
|---|---|
| Value | unsigned integer |
This forces the next mailbox open after the specified UNIX timestamp to refresh locally cached indexes to see if other backends have modified the user's indexes simultaneously.
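For illustration, a sketch of triggering such a one-time refresh (the timestamp here is an arbitrary example value):

```
plugin {
  # Mailbox opens after this UNIX timestamp refresh the locally cached
  # indexes once. 1704067200 = 2024-01-01 00:00:00 UTC (example value).
  obox_refresh_index_once_after = 1704067200
}
```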
obox_rescan_mails_once_after
| Default | 0 |
|---|---|
| Value | unsigned integer |
This forces the next mailbox open after the specified UNIX timestamp to rescan the mails to make sure there aren't any unindexed mails.
obox_size_missing_action
| Default | warn-read |
|---|---|
| Value | string |
| Advanced Setting; this should not normally be changed. | |
This setting controls what should be done when the mail object is missing the size metadata.
Options:
| Value | Description |
|---|---|
| read | Same as warn-read, but doesn't log a warning. |
| stat | Use fs_stat() to get the size, which is the fastest but doesn't work if mails are compressed or encrypted. |
| warn-read | Log a warning and fall back to reading the email to calculate its size. |
obox_track_copy_flags
| Default | no |
|---|---|
| Value | boolean |
Try to avoid Cassandra SELECTs when expunging mails.
Enable only if dictmap/Cassandra and lazy-expunge plugin are used.
obox_username
| Default | mail_location |
|---|---|
| Value | string |
| Advanced Setting; this should not normally be changed. | |
Overrides the obox username in storage.
There is no need to add date.save to the various cache settings, as this data is always stored in dovecot.index file by obox.
mail_fsync
With obox, this setting is recommended to be set to never.
In obox installations, this option only affects the local metacache operations. If a server crashes, the existing metacache is treated as potentially corrupted and isn't used, so never provides the best performance.
mail_prefetch_count
For obox, this setting affects reading multiple mails in parallel from object storage to local disk without waiting for previous reads to finish.
The downside is that each mail uses a file descriptor and disk space in mail_temp_dir.
For obox, a good value is likely between 10 to 100.
mail_sort_max_read_count
As a special case with obox when doing a SORT (ARRIVAL), the SORT will always return OK.
When it reaches the limit, it starts using timestamps taken from the time the object was saved. This is commonly the same as the received-timestamp, but not always.
Often this produces mostly the same result, especially in the INBOX.
quota_over_script
obox installations using quota_over_script must also have quota_over_flag_lazy_check enabled.
Otherwise the quota_over_flag checking may cause a race condition with metacache cleaning, which may end up losing folder names or mail flags within folders.
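A minimal sketch of the recommended combination described above:

```
plugin {
  # Required together with quota_over_script in obox installations to
  # avoid racing with metacache cleaning.
  quota_over_flag_lazy_check = yes
}
```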
In general, the maybe- versions of the compression algorithms should normally be used. For example:
obox_fs = fscache:512M:/var/cache/mails/%4Nu:compress:maybe-zstd:3:dictmap:proxy:dict-async:cassandra ; s3:https://ACCESSKEY:SECRET@s3.example.com/?bucket=mails&reason_header_max_length=200 ; refcounting-table:bucket-size=10000:bucket-cache=%h/buckets.cache:nlinks-limit=3:delete-timestamp=+10s:bucket-deleted-days=11:storage-objectid-prefix=%u/mails/
This decompresses mails if they were stored using zstd compression and falls back to reading the mails as plaintext.
Note
These both work and don't have any practical difference, because fs-dictmap doesn't modify the object contents in any way:
obox_index_fs = compress:zstd:3:dictmap:proxy:dict-async:cassandra ; s3:https://ACCESSKEY:SECRET@s3.example.com/?bucket=mails&reason_header_max_length=200 ; diff-table:storage-passthrough-paths=full
obox_index_fs = dictmap:proxy:dict-async:cassandra ; compress:zstd:3:s3:https://ACCESSKEY:SECRET@s3.example.com/?bucket=mails&reason_header_max_length=200 ; diff-table:storage-passthrough-paths=full
To encrypt mails at rest you can use the fs-crypt plugin. Using the mail-crypt plugin is not recommended with obox.
For performance reasons, the fs-crypt plugin should not be placed before the fs-cache plugin. The plugin is intended for encrypting data at rest in remote storage.
First, one needs to generate a keypair. This can be done with OpenSSL. Dovecot 3.0 supports EDDSA with X25519, ECDSA and RSA. RSA is not recommended for performance and size reasons.
To generate a key, you can use
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:prime256v1 -out private.pem
or
openssl genpkey -algorithm X25519 -out private.pem
Once the private key is generated, you can generate the public key with:
openssl pkey -in private.pem -out public.pem -pubout
WARNING: If you ever lose the private key, all data encrypted with it will be IRREVOCABLY lost. There is no way to recover the mails without the private key.
To use the private key, configure:
obox_fs = fscache:512M:/var/cache/mails/%4Nu:crypt:public_key_path=/etc/dovecot/public.pem:private_key_path=/etc/dovecot/private.pem:dictmap:proxy:dict-async:cassandra ; s3:https://ACCESSKEY:SECRET@s3.example.com/?bucket=mails&reason_header_max_length=200 ; refcounting-table:bucket-size=10000:bucket-cache=%h/buckets.cache:nlinks-limit=3:delete-timestamp=+10s:bucket-deleted-days=11:storage-objectid-prefix=%u/mails/
It is highly recommended to use the lazy_expunge plugin.
Note
If autoexpunging is done on the lazy_expunge folder, the autoexpunge delay must be longer than any potentially slow object storage operation. For example, 15 minutes should be a rather safe minimum.
mail_plugins = $mail_plugins lazy_expunge
plugin {
lazy_expunge = .EXPUNGED
}
namespace inbox {
mailbox .EXPUNGED {
autoexpunge = 7 days
}
}

lazy_expunge_only_last_instance
This is recommended to be enabled for all installations.
Obox does reference counting in Cassandra (fs-dictmap), and this setting takes advantage of that setup.
obox_track_copy_flags
Lazy expunge allows reducing Cassandra dictmap lookups by removing the lockdir setting and enabling the obox_track_copy_flags setting.
plugin {
obox_track_copy_flags = yes
}

Dovecot mailbox indexes are required when using obox.
When using the virtual plugin with obox, the virtual INDEX location must point to a directory named "virtual" in the user home directory. This way the virtual indexes are added to the obox root index bundles and will be preserved when user moves between backends or when metacache is cleaned.
location = virtual:/etc/dovecot/virtual:INDEX=~/virtual
The virtual indexes will be stored in the user root bundle.
It is possible to disable storing virtual indexes in the user root bundle using metacache_disable_secondary_indexes.
Particularly with obox, Dovecot nodes need to do frequent DNS lookups. It is recommended that the underlying platform provides either a performant DNS service or deploys a local DNS cache on the Dovecot nodes.
Software that is known to work in this regard is PowerDNS as a service and nscd for local caching.
In environments where reaching a particular packets-per-second (PPS) rate, for DNS alone or for all packets combined, can lead to harsh throttling, it is recommended to select a local caching option such as nscd. The same applies to certain virtualized environments, where the layer between the virtual machine and the hypervisor can drop packets under high load, leading to DNS timeouts. Additionally, Amazon AWS instances have been known to react adversely when an undocumented PPS rate is reached.
In order to reduce I/O on the backends, it is recommended to disable the ext4 journal:
tune2fs -O ^has_journal /dev/vdb
e2fsck -f /dev/vdb
Dovecot doesn't require atimes, so you can mount the filesystem with noatime:
mount -o defaults,discard,noatime /dev/vdb /metacache

Example:
$ umount /metacache
$ tune2fs -O ^has_journal /dev/sdc1
tune2fs 1.42.9 (28-Dec-2013)
$ fsck.ext4 -f /dev/sdc1
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sdc1: 11/16777216 files (0.0% non-contiguous), 1068533/67108608 blocks
$ tune2fs -o discard /dev/sdc1
tune2fs 1.42.9 (28-Dec-2013)
$ dumpe2fs /dev/sdc1 | grep discard
dumpe2fs 1.42.9 (28-Dec-2013)
Default mount options: user_xattr acl discard
$ blkid /dev/sdc1
/dev/sdc1: UUID="5d20d432-3152-4ccf-98e3-94e7500cfd40" TYPE="ext4"
$ vi /etc/fstab
UUID=5d20d432-3152-4ccf-98e3-94e7500cfd40 /metacache ext4 defaults,noatime,nodiratime 0 0
$ mount /metacache
$ mount | grep metacache
/dev/sdc1 on /metacache type ext4 (rw,noatime,nodiratime,seclabel)
To further reduce IOPS on the metacache volume when using the mail compression or mail crypt plugins, set the Dovecot temp directory to a tmpfs volume:
mail_temp_dir = /dev/shm/
It can be useful to flush unimportant changes in metacache every night when the system has idle capacity. This way, if users are moved between backends, there's somewhat less work to do on the new backends, since caches are more up-to-date. This can be done by running doveadm metacache flushall in a cronjob.
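A sketch of such a cronjob (the schedule and file path are arbitrary examples):

```
# /etc/cron.d/dovecot-metacache (illustrative): flush unimportant metacache
# changes nightly at 03:00, when the system is likely to have idle capacity.
0 3 * * * root doveadm metacache flushall
```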
plugin {
obox_index_fs = compress:maybe-<algorithm>:<level>:<...>
}

All of the object storage backends should be set up to compress index bundle objects. This commonly shrinks the indexes down to 20-30% of the original size (with zstd compression).
See this documentation for supported algorithms and the meaning of the level parameter: fs-compress plugin.
Email object (a/k/a message blob data) compression has generally been done with mail-compress plugin instead of via the fs-compress wrapper.
Example:
# NOTE: Using this has some trade-offs with obox installations, see below.
mail_plugins = $mail_plugins mail_compress
plugin {
mail_compress_save = <algorithm>
mail_compress_save_level = <level>
}

However, the problem with this approach under obox is that the mail files are written compressed to fscache. On one hand this increases fscache's storage capacity, but on the other hand it requires Dovecot to always decompress the files before accessing them.
This decompression uses a temporary file that is written to mail_temp_dir. By using the compress fs wrapper after fscache in obox_fs line, the mails are stored uncompressed in fscache, and reading the mails from there doesn't require writing to mail_temp_dir.
Compression status of email object data is auto-detected. Therefore, mail_compress_save may safely be added to a currently existing system; existing non-compressed mail objects will be identified correctly.
Using obox with Cassandra is done via the fs-dictmap wrapper.
Configuration information can be found at Dictmap.
It's possible to split fscaches over multiple independent directories by including %variables in the path. This is typically done based on username hashing, e.g. /var/fscache/%8Nu would use 8 fscache directories. This is especially recommended with larger fscaches (>10 GB). The main benefit of split fscaches is that any cache trashing caused by a few users will be limited only to those users' fscaches.
For example, if Dovecot is internally rebuilding caches for a single user, a 1 GB fscache could quickly be filled with only that one user's emails. But if the fscache is split over multiple directories, the other directories won't be affected and may still contain useful cache for other users.
The fscache plugin relies on filesystem usage information being consistent. For example, ZFS reports different block usage depending on when the information is queried, which makes fscache not work.
WARNING
ZFS support is currently explicitly disabled.