fs_azure
Value | Named Filter |
---|---|
Filter for Azure-specific settings.
There are different ways to authenticate with Azure blob storage:
This is the preferred way to authenticate with Azure blob storage. It uses Microsoft Entra credentials to authenticate.
User Shared Access Signature (SAS) uses a tenant ID, client ID and client secret to authenticate to Microsoft Entra IDM via OAuth2. With the Bearer token retrieved from https://login.microsoftonline.com, a user delegation key with a limited lifetime is requested at ACCOUNT.blob.core.windows.net. With these credentials, User SAS tokens are generated and used to authenticate requests against the Azure blob storage.
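The flow above can be sketched with plain HTTP calls. This is an illustrative outline only, not what Dovecot literally executes; TENANT_ID and ACCOUNT are placeholders, and the actual requests are shown as comments.

```shell
# Illustrative sketch of the User SAS flow; TENANT_ID/ACCOUNT are placeholders.
TENANT_ID='TENANT_ID'
ACCOUNT='ACCOUNT'

# Step 1: retrieve a Bearer token from Microsoft Entra via OAuth2
# (client credentials grant using tenant ID, client ID and client secret).
TOKEN_URL="https://login.microsoftonline.com/${TENANT_ID}/oauth2/token"
# curl -s -X POST "$TOKEN_URL" \
#   -d grant_type=client_credentials \
#   -d client_id="$CLIENT_ID" \
#   -d client_secret="$CLIENT_SECRET" \
#   -d resource="https://${ACCOUNT}.blob.core.windows.net"

# Step 2: use the Bearer token to request a time-limited user delegation key.
KEY_URL="https://${ACCOUNT}.blob.core.windows.net/?restype=service&comp=userdelegationkey"
# curl -s -X POST "$KEY_URL" -H "Authorization: Bearer $TOKEN"

echo "$TOKEN_URL"
echo "$KEY_URL"
```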
User SAS is enabled by setting fs_azure_auth_type = user-sas and providing the fs_azure_account_name, fs_azure_user_sas_tenant_id, fs_azure_user_sas_client_id and fs_azure_user_sas_client_secret settings alongside fs_azure_container_name = CONTAINER.
When using any SAS, you must ensure that the fs-auth service has the proper permissions/owner. Configure the user for the fs-auth listener to be the same as mail_uid.
fs_azure_container_name = CONTAINER
#fs_azure_auth_type = user-sas # default
fs_azure_account_name = ACCOUNT
fs_azure_user_sas_tenant_id = TENANT_ID
fs_azure_user_sas_client_id = CLIENT_ID
fs_azure_user_sas_client_secret = BASE64_CLIENT_SECRET
service fs-auth {
unix_listener fs-auth {
user = vmail
}
}
There are different ways to configure the aforementioned prerequisites; using the Azure Portal or the Azure CLI are probably the most well known.
This section gives an example of how to set up the credentials needed for User SAS. It assumes that a blob storage, as well as a Resource Group containing it, already exists. The following example uses the az command, the Azure Command-Line Interface (CLI), which can be used to execute administrative commands on Azure resources.
See the Azure CLI documentation for more details on how to run this command. Replace the {{guid}} placeholder below with your subscription ID (which can be retrieved with az account show --query id --output tsv), and replace resourceGroup1 and resourceGroup2 with the names of your resource groups.
az ad sp create-for-rbac --name dovecot-pro-azure-auth \
--role "Storage Blob Data Contributor" \
--scopes /subscriptions/{{guid}}/resourceGroups/resourceGroup1 /subscriptions/{{guid}}/resourceGroups/resourceGroup2
If executed successfully, the command will reply with the following fields:
"appId": "39c6f374-78f2-43e6-a7a0-376586891af0", // Generated client-id
"displayName": "dovecot-pro-azure-auth", // The service principal name you chose
"password": "dmVyeV9sb25nX3NlY3VyZV9wYXNzd29yZAo=", // Generated password (base64 encoded)
"tenant": "f0a1bf88-062f-4751-bf32-df2e4daf8ded" // Generated tenant-id
These fields are used in the Dovecot Pro configuration:
Azure CLI reply field | Dovecot Pro Setting |
---|---|
appId | fs_azure_user_sas_client_id |
password | fs_azure_user_sas_client_secret |
tenant | fs_azure_user_sas_tenant_id |
displayName | <unused> |
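Mapped through the table above, the sample reply would translate into the following Dovecot Pro settings. The values are the illustrative ones from the sample reply, not real credentials:

```
fs_azure_user_sas_client_id = 39c6f374-78f2-43e6-a7a0-376586891af0
fs_azure_user_sas_client_secret = dmVyeV9sb25nX3NlY3VyZV9wYXNzd29yZAo=
fs_azure_user_sas_tenant_id = f0a1bf88-062f-4751-bf32-df2e4daf8ded
```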
Service SAS is enabled by setting fs_azure_auth_type = service-sas and providing fs_azure_account_name and fs_azure_service_sas_secret to create SAS tokens with a limited scope and validity. This can be used for testing with local Azure blob storage emulation.
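When testing against a local emulator such as Azurite, the storage endpoint can be pointed at it via fs_azure_url. The URL below is an assumption based on Azurite's default blob port and may need adjusting to your emulator setup:

```
# Assumed local Azurite endpoint; adjust to your emulator configuration.
fs_azure_url = http://127.0.0.1:10000
```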
When using any SAS, you must ensure that the fs-auth service has the proper permissions/owner. Configure the user for the fs-auth listener to be the same as mail_uid.
fs_azure_container_name = CONTAINER
fs_azure_auth_type = service-sas
fs_azure_account_name = ACCOUNT
fs_azure_service_sas_secret = BASE64_SHARED_KEY
service fs-auth {
unix_listener fs-auth {
user = vmail
}
}
The SHARED_KEY should be passed base64-encoded as shown above (BASE64_SHARED_KEY). Additionally, it needs to be %hex-encoded.
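As a sketch of producing the base64 value, assuming SHARED_KEY holds the raw key bytes (the key string here is a made-up placeholder):

```shell
# Hypothetical raw shared key; replace with the real account key bytes.
SHARED_KEY='example-raw-key-bytes'
# Base64-encode it for use as fs_azure_service_sas_secret.
BASE64_SHARED_KEY=$(printf '%s' "$SHARED_KEY" | base64)
echo "$BASE64_SHARED_KEY"
```

Note that Azure typically already provides account keys base64-encoded, in which case they can be used directly.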
Set fs_azure_auth_type = legacy to use fs_azure_account_name and fs_azure_legacy_auth_secret for legacy authentication.
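A minimal legacy configuration sketch, mirroring the SAS examples above (CONTAINER, ACCOUNT and BASE64_SHARED_KEY are placeholders):

```
fs_azure_container_name = CONTAINER
fs_azure_auth_type = legacy
fs_azure_account_name = ACCOUNT
fs_azure_legacy_auth_secret = BASE64_SHARED_KEY
```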
WARNING
For optimal security, using User SAS with Entra IDM is recommended.
fs_azure
Value | Named Filter |
---|---|
Filter for Azure-specific settings.
fs_azure_account_name
Default | [None] |
---|---|
Value | string |
See Also |
Azure account name for all authentication types.
fs_azure_auth_type
Default | user-sas |
---|---|
Value | string |
Allowed Values | user-sas service-sas legacy |
Azure authentication type to use.
Options:
Value | Description |
---|---|
user-sas |
See Azure User SAS |
service-sas |
See Azure Service SAS |
legacy |
See Azure Legacy Authentication |
fs_azure_bulk_delete_limit
Default | 256 |
---|---|
Value | unsigned integer |
Number of deletes supported within the same bulk delete request. 0 disables bulk deletes.
fs_azure_container_name
Default | [None] |
---|---|
Value | string |
Azure container name.
fs_azure_legacy_auth_secret
Default | [None] |
---|---|
Value | string |
See Also |
Base64-encoded authentication shared key secret when using
fs_azure_auth_type = legacy
.
fs_azure_service_sas_secret
Default | [None] |
---|---|
Value | string |
See Also |
Base64-encoded authentication shared key secret when using
fs_azure_auth_type = service-sas
.
fs_azure_url
Default | https://blob.core.windows.net |
---|---|
Value | string |
Advanced Setting; this should not normally be changed. |
URL for accessing the Azure storage. It is not intended to be changed, unless testing some other Azure-compatible storage.
fs_azure_user_sas_client_id
Default | [None] |
---|---|
Value | string |
See Also |
ClientId to be used for authentication against Entra IDM. This is needed only
with fs_azure_auth_type = user-sas
.
fs_azure_user_sas_client_secret
Default | [None] |
---|---|
Value | string |
See Also |
Client secret to be used for authentication against Entra IDM (base64 encoded).
This is needed only with fs_azure_auth_type = user-sas
.
fs_azure_user_sas_tenant_id
Default | [None] |
---|---|
Value | string |
See Also |
TenantId to be used for authentication against Entra IDM. This is needed only
with fs_azure_auth_type = user-sas
.
fs_http_add_headers
Default | [None] |
---|---|
Value | String List |
Headers to add to HTTP requests.
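A sketch of adding a custom header, assuming the same block syntax used for fs_http_log_headers elsewhere on this page; the header name and value are placeholders:

```
fs_http_add_headers {
  X-Example-Header = example-value
}
```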
fs_http_log_headers
Default | [None] |
---|---|
Value | Boolean List |
Headers with the given name in HTTP responses are logged as part of any error,
debug or warning messages related to the HTTP request. These headers are also
included in the http_request_finished
event as fields prefixed with
http_hdr_
.
fs_http_log_trace_headers
Default | yes |
---|---|
Value | boolean |
If yes, add X-Dovecot-User: and X-Dovecot-Session: headers to the HTTP request. The session header is useful to correlate object storage requests with AppSuite/Dovecot sessions.
fs_http_reason_header_max_length
Default | [None] |
---|---|
Value | unsigned integer |
If non-zero, add an X-Dovecot-Reason: header to the HTTP request. The value contains a human-readable string explaining why the request is being sent.
fs_http_slow_warning
Default | 5s |
---|---|
Value | time (milliseconds) |
Log a warning about any HTTP request that takes longer than this time.
Azure bulk/batch deletes are supported.
Bulk deletion is enabled by default and performs a maximum of 256 deletes per request. The exact number can be adjusted with fs_azure_bulk_delete_limit, but will never be more than 256, as this is a limit set by Azure. Using the fs_azure_bulk_delete_limit setting also requires setting obox_max_parallel_deletes:
obox_max_parallel_deletes = 256
This value should be the same as fs_azure_bulk_delete_limit or lower.
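Taken together, a sketch of a bulk delete configuration; the values simply restate the maximums from the text:

```
fs_azure_bulk_delete_limit = 256
# Must be the same as fs_azure_bulk_delete_limit or lower.
obox_max_parallel_deletes = 256
```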
fs-azure overrides some of the default HTTP client settings:
http_client_max_idle_time = 1s
http_client_max_parallel_connections = 10
http_client_max_connect_attempts = 3
http_client_request_max_redirects = 2
http_client_request_max_attempts = 5
http_client_connect_backoff_max_time = 1s
http_client_user_agent = FS_HTTP_USER_AGENT
http_client_connect_timeout = 5s
http_client_request_timeout = 10s
You can override these and any other HTTP client or SSL settings by placing them inside the fs_azure named filter.
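For instance, a request timeout override might be placed inside the filter like this (the 30s value is an arbitrary illustration):

```
fs_azure {
  http_client_request_timeout = 30s
}
```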
WARNING
All text indicated by {{VARIABLE NAME}}
in the examples below MUST be replaced with your local configuration value(s).
TIP
Dictmap must also be configured to use this storage driver.
mail_driver = obox
mail_path = %{user}
# Storage container name to use.
fs_azure_container_name = {{CONTAINER}}
# Azure account name for storage access.
fs_azure_account_name = {{ACCOUNT}}
# TenantId to be used for authentication against Entra IDM
fs_azure_user_sas_tenant_id = {{TENANT_ID}}
# ClientId to be used for authentication against Entra IDM
fs_azure_user_sas_client_id = {{CLIENT_ID}}
# Client secret to be used for authentication against Entra IDM (base64 encoded)
fs_azure_user_sas_client_secret = {{CLIENT_SECRET}}
fs_http_reason_header_max_length = 200
fs_compress_write_method = zstd
obox {
fs fscache {
size = 512M
path = /var/cache/mails/%{user | sha1 % 4}
}
fs compress {
}
fs dictmap {
dict proxy {
name = cassandra
socket_path = dict-async
}
storage_objectid_prefix = %{user}/mails/
#lock_path = /tmp # Set only without lazy_expunge plugin
}
fs azure {
}
}
metacache {
fs compress {
}
fs dictmap {
dict proxy {
name = cassandra
socket_path = dict-async
}
storage_passthrough_paths = full
}
fs azure {
}
}
fts dovecot {
fs fts-cache {
}
fs fscache {
size = 512M
path = /var/cache/fts/%{user | sha1 % 4}
}
fs compress {
}
fs dictmap {
dict proxy {
name = cassandra
socket_path = dict-async
}
storage_passthrough_paths = full
}
fs azure {
}
}
URL | Notes |
---|---|
GET /CONTAINERNAME/<PATH> | Read operations to Azure blob storage |
HEAD /CONTAINERNAME/<PATH> | Read metadata operations to Azure blob storage |
PUT /CONTAINERNAME/<PATH>?blockid=<BLOCK>&comp=block | Writing objects to Azure blob storage; objects bigger than 4 MB are written in blocks |
DELETE /CONTAINERNAME/<PATH> | Deleting objects from Azure blob storage |
POST https://login.microsoftonline.com/TENANT_ID/oauth2/token | Retrieve a Bearer token for Authentication with Entra IDM (User SAS only) |
POST /?restype=service&comp=userdelegationkey | Retrieve a user delegation key from the Azure blob storage (User SAS only) |
URI Path we write to:
/<key prefix>/<CONTAINER>/<dispersion prefix>/<dovecot internal path>
Key | Description |
---|---|
<key prefix> | From URL config (optional; empty if not specified) |
<container> | Extracted from azure scheme URL |
<dispersion prefix> | From the mail_path setting. Recommended value (see XXX) gives two levels of dispersion of the format: [0-9a-f]{2}/[0-9a-f]{3} |
<dovecot internal path> | Dovecot internal path to file. Example: $user/mailboxes/$mailboxguid/$messageguid |
Internal Path Variables:
Variable | Description |
---|---|
$user | Dovecot unique username (installation defined) |
$mailboxguid | 32 byte randomly generated UID defining a mailbox |
$messageguid | 32 byte randomly generated UID defining a message blob |
To be able to easily track requests outgoing from Dovecot and incoming from the Azure storage, the default configuration contains:
fs_http_log_headers {
x-ms-request-id = yes
x-ms-client-request-id = yes
}
If the x-ms-client-request-id header is logged, this additionally enables sending the x-ms-client-request-id header in HTTP requests to Azure. It uses the current session ID, which allows correlating Dovecot activities with the requests that the server receives.
The x-ms-request-id
header is added by the Azure storage to identify individual requests.
Dovecot sends the following HTTP headers towards storage. They should be logged for troubleshooting purposes:
X-Dovecot-Username
X-Dovecot-Session-Id
X-Dovecot-Reason
When saving data to object storage, Dovecot stores metadata associated with each blob for data recovery purposes.
This data is written to the HTTP endpoint by adding Dovecot metadata headers to the request. When retrieving a message from object storage, this data is returned in the received headers (only parsed by Dovecot if needed).
For Azure, the header names are: x-ms-meta-<key>.
Key | Description | Max Length (in bytes) | Other Info |
---|---|---|---|
fname | Dovecot filename | N/A (installation dependent; username component to naming) | |
guid | Message GUID | 32 | |
origbox | Folder GUID of first folder where stored | 32 | Copying does not update |
pop3order | POP3 message order | 10 | Only if needed by migration |
pop3uidl | POP3 UIDL | N/A (depends on source installation) | Only if message was migrated |
received | Received date | 20 (in theory; rarely more than 10) | UNIX timestamp format |
saved | Saved date | 20 (in theory; rarely more than 10) | UNIX timestamp format |
size | Message size | 20 (in theory; rarely more than 10) | Size in bytes |
username | Dovecot unique username | N/A (installation dependent) |
Key | Description | Max Length (in bytes) | Other Info |
---|---|---|---|
fname | Dovecot filename | N/A (installation dependent; username component to naming) | |
mailbox-guid | Mailbox GUID the index refers to | 32 | |
size | Message size | 20 (in theory; rarely more than 10) | Size in bytes |
username | Dovecot unique username | N/A (installation dependent) |
Key | Description | Max Length (in bytes) | Other Info |
---|---|---|---|
fname | Dovecot filename | N/A (installation dependent; username component to naming) | |
username | Dovecot unique username | N/A (installation dependent) |
As with all other object-storage drivers, Azure Blob storage requires dictmap to map object IDs to Dovecot lib-fs paths.
Cosmos DB is a service provided by Azure that advertises full Cassandra/CQL API compatibility.
WARNING
Cosmos DB is not supported or tested by Open-Xchange, and Open-Xchange cannot provide recommendations or operational advice for it. See Managed CQL Services.
The documentation provided here is based on feedback from customers that are using Azure Blob storage with Cosmos DB, and is for informational purposes only.
Cosmos DB has paging enabled by default, but the Cassandra driver doesn't realize this without the cassandra_page_size setting, leading to data loss. Thus, Cosmos DB requires the cassandra_page_size setting to be configured.
Shrink fs-dictmap's fs_dictmap_bucket_size from 10000 to 1000, which distributes data across more Cosmos DB partitions. This is intended to reduce costs.
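A minimal sketch of these two Cosmos DB adjustments; the cassandra_page_size value is an illustrative assumption, only the need to set it is mandated above:

```
# Required with Cosmos DB: enable explicit paging in the Cassandra driver.
# The value 1000 is an illustrative assumption; tune it for your data.
cassandra_page_size = 1000
# Spread data across more Cosmos DB partitions (default is 10000).
fs_dictmap_bucket_size = 1000
```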