I ran into an issue with the size of the Log folder on the first node of a Service Fabric cluster. There doesn't seem to be any upper limit on the disk space it can use, and it will eventually eat up the entire local disk.
The cluster was created via an ARM template, and I set up two storage accounts associated with the cluster. The variable names for the storage accounts are supportLogStorageAccountName and applicationDiagnosticsStorageAccountName.
However, the etl files are written only to the local disks of the cluster nodes and not to the storage accounts (where I could only find dtr files).
Is there any way to redirect the etl files to external storage, or at least to limit the size of the Log folder? I wonder whether the overallQuotaInMB parameter in the ARM template could be related to that.
You can override the MaxDiskQuotaInMb setting from its default of 10240 to reduce the disk usage by the etl files. (The local logs are kept so that traces are still available in cases where Azure storage isn't reachable for some reason.)
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-fabric-settings
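For reference, a minimal sketch of how the override could look in the fabricSettings array of the cluster ARM template. The section name "Diagnostics" and the value shown (5120, i.e. 5 GB) are illustrative assumptions; check the linked settings page for the exact section and parameter names for your cluster version:

```json
{
  "fabricSettings": [
    {
      "name": "Diagnostics",
      "parameters": [
        {
          "name": "MaxDiskQuotaInMB",
          "value": "5120"
        }
      ]
    }
  ]
}
```

After updating the template, redeploy it to the cluster's resource group; the setting is applied as a cluster configuration upgrade.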