plugin {
# basic configuration:
obox_fs = azure:https://ACCOUNT@CONTAINER.blob.core.windows.net/?parameters
}
NOTE
The Dovecot format for configuring an Azure URI is different from the format used in Microsoft documentation. This is especially relevant for the position of ACCOUNT and CONTAINER.
There are different ways to authenticate with Azure blob storage.
User Shared Access Signature (SAS) is the preferred way to authenticate with Azure blob storage; it uses Microsoft Entra credentials. It uses tenant-id, client-id and client_secret to authenticate against Microsoft Entra IDM via oauth2. With the Bearer token retrieved from https://login.microsoftonline.com, a user-delegation-key with a limited lifetime is requested from ACCOUNT.blob.core.windows.net. With these credentials, User SAS tokens are generated and used to authenticate requests against the Azure blob storage.
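These two requests can be sketched with curl for manual verification (Dovecot performs them internally; the x-ms-version value and the Start/Expiry times below are illustrative assumptions, and CLIENT_SECRET is the raw, non-base64 secret):
# 1. Retrieve a Bearer token from Microsoft Entra IDM via oauth2 (client credentials)
curl -s -X POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=${CLIENT_ID}" \
  --data-urlencode "client_secret=${CLIENT_SECRET}" \
  -d "resource=https://storage.azure.com/"
# 2. Request a time-limited user-delegation-key from the storage account
curl -s -X POST "https://${ACCOUNT}.blob.core.windows.net/?restype=service&comp=userdelegationkey" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "x-ms-version: 2020-10-02" \
  -H "Content-Type: application/xml" \
  -d '<?xml version="1.0" encoding="utf-8"?><KeyInfo><Start>2025-01-01T00:00:00Z</Start><Expiry>2025-01-01T01:00:00Z</Expiry></KeyInfo>'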
User SAS is enabled by passing auth_tenant_id, auth_client_id and client_secret alongside accountname and containername to Dovecot Pro.
When using any SAS you must ensure that the fs-auth service has proper permissions/owner. Configure the user for the fs-auth listener to be the same as for mail_uid.
plugin {
obox_fs = azure:https://ACCOUNT@CONTAINER.blob.core.windows.net/?auth_tenant_id=TENANT_ID&auth_client_id=CLIENT_ID&client_secret=BASE64_CLIENT_SECRET
}
service fs-auth {
unix_listener fs-auth {
user = vmail
}
}
This section gives an example of how to set up the credentials needed for User SAS. It assumes that a blob storage as well as a Resource Group containing it already exist (resourceGroup1 and resourceGroup2 in the command below). There are different ways to configure the aforementioned prerequisites; the Azure Portal and the Azure CLI are probably the most well known. The following example uses the az command, the Azure Command-Line Interface (CLI), which can be used to execute administrative commands on Azure resources; see the Azure CLI documentation for more details on how to run this command. Replace the subscription ID placeholder {{guid}} with your subscription ID; see how the placeholder value is used below.
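The subscription ID used for the {{guid}} placeholder can be printed with:
# Show the subscription ID (a GUID) of the currently selected Azure subscription
az account show --query id --output tsv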
az ad sp create-for-rbac --name dovecot-pro-azure-auth \
--role "Storage Blob Data Contributor" \
--scopes /subscriptions/{{guid}}/resourceGroups/resourceGroup1 /subscriptions/{{guid}}/resourceGroups/resourceGroup2
If executed successfully, the command will reply with the following fields:
"appId": "39c6f374-78f2-43e6-a7a0-376586891af0", // Generated client-id
"displayName": "dovecot-pro-azure-auth", // The role name you choose
"password": "dmVyeV9sb25nX3NlY3VyZV9wYXNzd29yZAo=", // Generated password (base64 encoded)
"tenant": "f0a1bf88-062f-4751-bf32-df2e4daf8ded" // Generated tenant-id
These fields are used in the Dovecot Pro configuration:
Azure CLI reply field | Dovecot Pro Azure configuration parameter |
---|---|
appId | auth_client_id |
password | client_secret |
tenant | auth_tenant_id |
displayName | <unused> |
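As an illustration only (a shell sketch, not a Dovecot feature; it assumes the generated password still needs base64 encoding; if your secret is already base64, use it as-is), the reply fields map into the obox_fs URL parameters like this:
# Compose the Dovecot Pro URL parameters from the az reply fields
APP_ID='39c6f374-78f2-43e6-a7a0-376586891af0'      # appId    -> auth_client_id
TENANT='f0a1bf88-062f-4751-bf32-df2e4daf8ded'      # tenant   -> auth_tenant_id
SECRET_B64=$(printf '%s' 'GENERATED_PASSWORD' | base64 -w0)   # password -> client_secret
echo "azure:https://ACCOUNT@CONTAINER.blob.core.windows.net/?auth_tenant_id=${TENANT}&auth_client_id=${APP_ID}&client_secret=${SECRET_B64}"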
Service SAS uses the accountname (e.g. ACCOUNT) and shared key (e.g. SHARED_KEY) directly to create SAS tokens with a limited scope and validity. This can be used for testing with local Azure blob storage emulation.
When using any SAS you must ensure that the fs-auth service has proper permissions/owner. Configure the user for the fs-auth listener to be the same as for mail_uid.
plugin {
obox_fs = azure:https://ACCOUNT:BASE64_SHARED_KEY@CONTAINER.blob.core.windows.net/?parameters
}
service fs-auth {
unix_listener fs-auth {
user = vmail
}
}
The SHARED_KEY should be passed base64 encoded as shown above (BASE64_SHARED_KEY). Additionally it needs to be %-hex encoded (URL escaped).
Note
dovecot.conf handles variable expansion internally as well, so % needs to be escaped as %%; ':' for example ends up as %%3A.
For example, if the SHARED_KEY is "foo:bar", this would be encoded as https://ACCOUNT:foo%%3Abar@CONTAINER.blob.core.windows.net/.
This double-%% escaping is needed only when the string is read from dovecot.conf - it doesn't apply, for example, if the string comes from a userdb lookup.
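As an illustration (a shell sketch, not part of Dovecot; the sed expressions only cover the characters that can occur in a base64 string), the value to place in dovecot.conf can be produced like this:
# URL-escape a base64 shared key and double every '%' for use in dovecot.conf
KEY='BASE64_SHARED_KEY'   # the base64 account key as shown in the Azure Portal
printf '%s' "$KEY" | sed -e 's/+/%2B/g' -e 's|/|%2F|g' -e 's/=/%3D/g' -e 's/%/%%/g'
If the value comes from a userdb lookup instead of dovecot.conf, drop the final 's/%/%%/g' expression.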
To use accountname and shared key directly to authenticate requests, the no_sas_auth URL parameter can be passed.
plugin {
obox_fs = azure:https://ACCOUNT:BASE64_SHARED_KEY@CONTAINER.blob.core.windows.net/?no_sas_auth=yes
}
WARNING
For optimal security, using User SAS with Entra IDM is recommended.
An HTTP URL to specify how the object storage is accessed. The parameters are specified as URL-style parameters, such as http://url/?param1=value1&param2=value2.
URL escaping is used, so if the password is foo/bar the URL is http://user:foo%2fbar@example.com/. Additionally, because Dovecot expands %variables inside the plugin section, the % needs to be escaped. So the final string would be e.g.:
plugin {
obox_fs = s3:https://user:foo%%2fbar@example.com/ # password is foo/bar
}
Parameter | Description | Default |
---|---|---|
absolute_timeout=<time_msecs> | Maximum total time for an HTTP request to finish. Overrides all timeout configuration. | None |
addhdr=<name>:<value> | Add the specified header to all HTTP requests. | None |
addhdrvar=<name>:<variable> | Add the specified header to all HTTP requests and set the value to the expanded variables value. | None |
bulk_delete_limit=<n> | Number of deletes supported within the same bulk delete request. 0 disables bulk deletes. | 256 |
connect_timeout=<time_msecs> | Timeout for establishing a TCP connection. | <timeout> parameter |
delete_max_retries=<n> | Max number of HTTP request retries for delete actions. | <max_retries> parameter |
delete_timeout=<time_msecs> | Timeout for receiving a delete HTTP response. | <timeout> parameter |
loghdr=<name> | Headers with the given name in HTTP responses are logged as part of any error, debug or warning messages related to the HTTP request. These headers are also included in the http_request_finished event as fields prefixed with http_hdr_ . Can be specified multiple times. | None |
max_connect_retries=<n> | Number of connect retries. | 2 |
max_retries=<n> | Max number of HTTP request retries. Retries happen for 5xx errors as well as for 423 (locked) with sproxyd. There is a wait before each retry: the initial retry is done after 50ms, and each following retry waits ten times as long as the previous one, capped at 10 seconds per attempt (50ms -> 500ms -> 5s -> 10s). Note that if the overall request time exceeds the configured absolute_timeout, it takes precedence, emits an error and prevents further retries, while the configured timeout value determines how long individual HTTP responses are allowed to take before an error is emitted. | 4 |
no_trace_headers=1 | Set to 1 to not add X-Dovecot-User or X-Dovecot-Session headers to HTTP request. These headers are useful to correlate object storage requests to App Suite/Dovecot sessions. If not doing correlations via log aggregation, this is safe to disable. | 0 |
read_max_retries=<n> | Max number of HTTP request retries for read actions. | <max_retries> parameter |
read_timeout=<time_msecs> | Timeout for receiving a read HTTP response. | <timeout> parameter |
reason_header_max_length=<n> | Maximum length for X-Dovecot-Reason HTTP header. If header is present, it contains information why obox operation is being done. | 0 |
slow_warn=<time_msecs> | Log a warning about any HTTP request that takes longer than this time. | 5s |
timeout=<time_msecs> | Default timeout for HTTP responses, unless overwritten by other parameters. | 10s |
write_max_retries=<n> | Max number of HTTP request retries for write actions. | <max_retries> parameter |
write_timeout=<time_msecs> | Timeout for a write HTTP response. | <timeout> parameter |
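For example, several of the parameters above can be combined in the obox_fs URL (a sketch; the timeout, slow_warn and max_retries values are illustrative, not tuning recommendations):
plugin {
  obox_fs = azure:https://ACCOUNT@CONTAINER.blob.core.windows.net/?auth_tenant_id=TENANT_ID&auth_client_id=CLIENT_ID&client_secret=BASE64_CLIENT_SECRET&timeout=15000&slow_warn=10000&max_retries=6
}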
Azure bulk / batch deletes are supported.
The bulk-delete option is enabled by default and will perform a maximum of 256 deletes per request. The exact amount can be adjusted with bulk_delete_limit but will never be more than 256, as this is a requirement set by Azure. Using the bulk_delete_limit option also requires setting the obox_max_parallel_deletes option:
obox_max_parallel_deletes = 256
This value should be the same as bulk_delete_limit or lower.
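For example, to lower the batch size (a sketch; the value 100 is illustrative):
plugin {
  obox_fs = azure:https://ACCOUNT@CONTAINER.blob.core.windows.net/?auth_tenant_id=TENANT_ID&auth_client_id=CLIENT_ID&client_secret=BASE64_CLIENT_SECRET&bulk_delete_limit=100
  obox_max_parallel_deletes = 100
}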
NOTE
The bulk-delete option is not supported with Legacy Authentication.
WARNING
All text indicated by {{VARIABLE NAME}} in the examples below MUST be replaced with your local configuration value(s).
Note
Dictmap must also be configured to use this storage driver.
mail_location = obox:%u:INDEX=~/:CONTROL=~/
plugin {
# ACCOUNT: Azure account name for storage access.
# CONTAINER: Storage container name to use.
# TENANT_ID: TenantId to be used for authentication against Entra IDM
# CLIENT_ID: ClientId to be used for authentication against Entra IDM
# CLIENT_SECRET: Client secret to be used for authentication against Entra IDM (base64 encoded)
# Examples use data compression with zstd (level 3)
# Without lazy_expunge plugin:
obox_fs = fscache:2G:/var/cache/mails/%4Nu:compress:zstd:3:dictmap:proxy:dict-async:cassandra ; azure:https://{{ACCOUNT}}@{{CONTAINER}}.blob.core.windows.net/?reason_header_max_length=200&auth_tenant_id={{TENANT_ID}}&auth_client_id={{CLIENT_ID}}&client_secret={{CLIENT_SECRET}} ; refcounting-table:lockdir=/tmp:bucket-size=10000:bucket-cache=%h/buckets.cache:nlinks-limit=3:delete-timestamp=+10s:bucket-deleted-days=11:storage-objectid-prefix=%u/mail-storage/
# With lazy_expunge plugin:
#obox_fs = fscache:2G:/var/cache/mails/%4Nu:compress:zstd:3:dictmap:proxy:dict-async:cassandra ; azure:https://{{ACCOUNT}}@{{CONTAINER}}.blob.core.windows.net/?reason_header_max_length=200&auth_tenant_id={{TENANT_ID}}&auth_client_id={{CLIENT_ID}}&client_secret={{CLIENT_SECRET}} ; refcounting-table:bucket-size=10000:bucket-cache=%h/buckets.cache:nlinks-limit=3:delete-timestamp=+10s:bucket-deleted-days=11:storage-objectid-prefix=%u/mail-storage/
obox_index_fs = compress:zstd:3:dictmap:proxy:dict-async:cassandra ; azure:https://{{ACCOUNT}}@{{CONTAINER}}.blob.core.windows.net/?reason_header_max_length=200&auth_tenant_id={{TENANT_ID}}&auth_client_id={{CLIENT_ID}}&client_secret={{CLIENT_SECRET}} ; diff-table:storage-passthrough-paths=full
fts_dovecot_fs = fts-cache:fscache:2G:/var/cache/fts/%4Nu:compress:zstd:3:dictmap:proxy:dict-async:cassandra ; azure:https://{{ACCOUNT}}@{{CONTAINER}}.blob.core.windows.net/?reason_header_max_length=200&auth_tenant_id={{TENANT_ID}}&auth_client_id={{CLIENT_ID}}&client_secret={{CLIENT_SECRET}} ; dict-prefix=%u/fts/:storage-passthrough-paths=full
}
URL | Notes |
---|---|
GET /CONTAINERNAME/<PATH> | Read operations to Azure blob storage |
HEAD /CONTAINERNAME/<PATH> | Read metadata operations to Azure blob storage |
PUT /CONTAINERNAME/<PATH>?blockid=<BLOCK>&comp=block | Writing objects to Azure blob storage; objects bigger than 4 MB are written in blocks |
DELETE /CONTAINERNAME/<PATH> | Deleting objects from Azure blob storage |
POST https://login.microsoftonline.com/TENANT_ID/oauth2/token | Retrieve a Bearer token for Authentication with Entra IDM (User SAS only) |
POST /?restype=service&comp=userdelegationkey | Retrieve a user delegation key from the Azure blob storage (User SAS only) |
URI Path we write to:
/<key prefix>/<CONTAINER>/<dispersion prefix>/<dovecot internal path>
Key | Description |
---|---|
<key prefix> | From URL config (optional; empty if not specified) |
<CONTAINER> | Extracted from the azure scheme URL |
<dispersion prefix> | From mail_location setting. Recommended value (see XXX) gives two levels of dispersion of the format: [0-9a-f]{2}/[0-9a-f]{3} |
<dovecot internal path> | Dovecot internal path to file. Example: $user/mailboxes/$mailboxguid/$messageguid |
Internal Path Variables:
Variable | Description |
---|---|
$user | Dovecot unique username (installation defined) |
$mailboxguid | 32 byte randomly generated GUID defining a mailbox |
$messageguid | 32 byte randomly generated GUID defining a message blob |
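Putting these together, a message blob written without a key prefix and with the recommended dispersion could end up under a blob name such as CONTAINER/1f/3ab/user@example.com/mailboxes/<mailboxguid>/<messageguid> (an illustrative example; 1f/3ab is the dispersion prefix and everything after it is the Dovecot internal path).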
To be able to easily track requests outgoing from Dovecot and incoming from the Azure storage, the following headers should be added as loghdr:
plugin {
# Debugging configuration:
obox_fs = azure:https://ACCOUNT:SHARED_KEY_BASE64@CONTAINER.blob.core.windows.net/?loghdr=x-ms-client-request-id&loghdr=x-ms-request-id
}
This configuration makes sure that the x-ms-client-request-id header is added to the requests by Dovecot. It will be set to the current session id, which allows correlating Dovecot activities with requests that the server receives. Additionally, the x-ms-request-id header should be added as loghdr as well. This header is added by the Azure storage to identify individual requests.
Dovecot sends the following HTTP headers towards storage. They should be logged for troubleshooting purposes:
X-Dovecot-Username
X-Dovecot-Session-Id
X-Dovecot-Reason
When saving data to object storage, Dovecot stores metadata associated with each blob for data recovery purposes.
This data is written to the HTTP endpoint by adding Dovecot metadata headers to the request. When retrieving a message from object storage, this data is returned in the received headers (only parsed by Dovecot if needed).
For Azure, the header names are: x-ms-meta-<key>.
Key | Description | Max Length (in bytes) | Other Info |
---|---|---|---|
fname | Dovecot filename | N/A (installation dependent; username component to naming) | |
guid | Message GUID | 32 | |
origbox | Folder GUID of first folder where stored | 32 | Copying does not update |
pop3order | POP3 message order | 10 | Only if needed by migration |
pop3uidl | POP3 UIDL | N/A (depends on source installation) | Only if message was migrated |
received | Received date | 20 (in theory; rarely more than 10) | UNIX timestamp format |
saved | Saved date | 20 (in theory; rarely more than 10) | UNIX timestamp format |
size | Message size | 20 (in theory; rarely more than 10) | Size in bytes |
username | Dovecot unique username | N/A (installation dependent) |
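On the wire these keys appear as x-ms-meta-* request headers, for example (illustrative values only):
x-ms-meta-username: user@example.com
x-ms-meta-guid: 0123456789abcdef0123456789abcdef
x-ms-meta-size: 4823
x-ms-meta-received: 1717200000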
Key | Description | Max Length (in bytes) | Other Info |
---|---|---|---|
fname | Dovecot filename | N/A (installation dependent; username component to naming) | |
mailbox-guid | Mailbox GUID the index refers to | 32 | |
size | Message size | 20 (in theory; rarely more than 10) | Size in bytes |
username | Dovecot unique username | N/A (installation dependent) |
Key | Description | Max Length (in bytes) | Other Info |
---|---|---|---|
fname | Dovecot filename | N/A (installation dependent; username component to naming) | |
username | Dovecot unique username | N/A (installation dependent) |
As with all other object-storage backends, Azure Blob storage requires dictmap to map object IDs to Dovecot lib-fs paths.
Cosmos DB is a service provided by Azure that advertises full Cassandra/CQL API compatibility.
WARNING
Cosmos DB is not supported or tested by Open-Xchange, and Open-Xchange cannot provide recommendations or operational advice for it. See Managed CQL Services.
The documentation provided here is based on feedback from customers that are using Azure Blob storage with Cosmos DB, and is for informational purposes only.
Cosmos DB requires page_size to be explicitly configured in the Cassandra connect setting.
Cosmos DB has paging enabled by default, and the Cassandra driver doesn't realize it without the explicit page_size configuration, leading to data loss.
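A sketch of the corresponding Cassandra dict connect setting (the file name, host and page_size value are illustrative assumptions; the requirement stated above is only that page_size is set explicitly):
# dovecot-dict-cql.conf.ext
connect = host=COSMOSDB_CONTACT_POINT keyspace=mails page_size=500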
Shrink fs-dictmap's bucket-size from 10000 to 1000, which distributes data across more Cosmos DB partitions. This is intended to reduce costs.
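Applied to the obox_fs example above, only the bucket-size value in the fs-dictmap part changes (a sketch showing just the relevant fragment):
# ... ; refcounting-table:lockdir=/tmp:bucket-size=1000:bucket-cache=%h/buckets.cache:nlinks-limit=3:delete-timestamp=+10s:bucket-deleted-days=11:storage-objectid-prefix=%u/mail-storage/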