I'm working on a C# application that sends logs to a remote Seq sink via Serilog (Serilog.Sinks.Seq). It worked fine until I added some more log statements; since then, only some of the logs reach Seq. I tested the File sink and it works without any problem, writing all logs to the file, but Seq still has the same issue. I even tested against a Seq instance on my local computer, to no avail.
So I installed Serilog.Sinks.PeriodicBatching hoping to solve the problem, but unfortunately I could not find any documentation or examples showing how to configure and enable it in my project. The only code I found is https://github.com/serilog/serilog-sinks-periodicbatching, which I don't understand. Does anyone know how to use it to solve this problem with Seq? I need a simple example.
I'm using the latest versions of Serilog and PeriodicBatching.
Update 1:
Here is the File log, which is produced correctly. In Seq, the last three events are dropped.
2021-02-06 09:45:36.164 +03:30 [INF] ⠀⠀⠀⠀⠀|
2021-02-06 09:45:36.180 +03:30 [INF] Application Start.
2021-02-06 09:45:36.180 +03:30 [ERR] It is a fresh OS
System.ArgumentNullException: Value cannot be null.
Parameter name: value
at Newtonsoft.Json.JsonConvert.DeserializeObject(String value, Type type, JsonSerializerSettings settings)
at Newtonsoft.Json.JsonConvert.DeserializeObject[T](String value, JsonSerializerSettings settings)
at SenderConsole.Program.scanClientAndSendIfDifferent() in D:\Programing\SysWatch\SenderConsole\Program.cs:line 434
2021-02-06 09:45:36.195 +03:30 [DBG] getCurrentClientConfig() Started.
2021-02-06 09:45:36.289 +03:30 [DBG] getCurrentClientConfig()==> CpuChanges
2021-02-06 09:45:36.305 +03:30 [DBG] getCurrentClientConfig()==> StorageChanges
2021-02-06 09:45:36.383 +03:30 [DBG] getCurrentClientConfig()==> RamChanges
2021-02-06 09:45:36.398 +03:30 [DBG] getCurrentClientConfig()==> MotherboardChanges
2021-02-06 09:45:36.414 +03:30 [DBG] getCurrentClientConfig()==> OsChanges
2021-02-06 09:45:36.492 +03:30 [DBG] getCurrentClientConfig()==> NicChanges
2021-02-06 09:45:36.679 +03:30 [DBG] getCurrentClientConfig()==> PrinterChanges
2021-02-06 09:45:36.695 +03:30 [DBG] getCurrentClientConfig()==> DomainChanges
2021-02-06 09:45:36.883 +03:30 [DBG] getCurrentClientConfig()==> iSMBIOSChanges
2021-02-06 09:45:36.914 +03:30 [DBG] getCurrentClientConfig()==> AppChanges
2021-02-06 09:45:36.929 +03:30 [DBG] getCurrentClientConfig()==> AntivirusChanges
2021-02-06 09:45:36.929 +03:30 [DBG] getCurrentClientConfig() Ended.
2021-02-06 09:45:36.929 +03:30 [DBG] configsAreDifferent() Started.
2021-02-06 09:45:36.929 +03:30 [INF] Configs are different.
2021-02-06 09:45:36.929 +03:30 [DBG] serializeByJson() Started.
2021-02-06 09:45:37.148 +03:30 [DBG] serializeByJson() Ended.
2021-02-06 09:45:39.210 +03:30 [FTL] Unable to connect to the remote server
2021-02-06 09:45:39.210 +03:30 [INF] Sent to DB.
2021-02-06 09:45:39.210 +03:30 [INF] Application End.
Update based on your edit:
It appears the last few messages before your program ends are the ones being dropped. You need to flush the logger so that Serilog forces out, and waits for, any pending log events before the application exits:
public static void Main()
{
    try
    {
        // Program logic here
    }
    finally
    {
        Serilog.Log.CloseAndFlush();
    }
}
Original answer:
Structured logging properties can become quite large, which leads to larger HTTP requests and increased CPU requirements to process them. The reason your events are getting "lost" is that there are limitations imposed by both Seq (the server) and the Serilog sink that attempt to alleviate these concerns. Messages (single events or entire batches) can be dropped by either the client or the server, depending on the specific configuration and the message/batch size. Notice I said "batch": calling the `Seq` extension method chained against `WriteTo` adds a `PeriodicBatchingSink` (an implementation of `IBatchedLogEventSink`) which wraps the `SeqSink`. So you don't need to try to wrap it yourself; it's already done.
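As a minimal sketch (assuming a local Seq instance at the default ingestion address), the standard configuration already gives you batching:

using Serilog;

Log.Logger = new LoggerConfiguration()
    .MinimumLevel.Debug()
    // Internally wrapped in a PeriodicBatchingSink -- no extra PeriodicBatching setup needed.
    .WriteTo.Seq("http://localhost:5341")
    .CreateLogger();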
So how do you alleviate your problems? First, you can update the settings on the server. Seq recommends against this, but there are definite use cases, especially if you are sending a lot of properties. Under Settings -> System -> Ingestion you can modify the "Raw ingestion payload limit" and/or the "Raw event body limit" settings. Personally, I leave the former at its default value and increased the latter based on some calculations we made around how we configure the sink itself.
Take heed of the warnings on that settings page! Make sure you understand the implications of modifying these settings.
Now on the client side you have a few options. The `Seq` extension method has a few parameters that can be used to tweak the logger's behavior.
| Parameter | Default Value | Description |
|---|---|---|
| `eventBodyLimitBytes` | 262,144 | The maximum size, in bytes, that the JSON representation of an event may take before it is dropped rather than being sent to the Seq server. Specify `null` for no limit. |
| `batchPostingLimit` | 1,000 | The maximum number of events to post in a single batch. |
If the batched events do not fit within `eventBodyLimitBytes` after they have been serialized into "compacted JSON", then the entire batch will be dropped. The `batchPostingLimit` is used by the `PeriodicBatchingSink` to determine how many messages to queue up before it sends the events to the inner sink (which performs the serialization/dropping). You might also consider raising the minimum log level, as it defaults to `Verbose`, or passing in a `LoggingLevelSwitch` (see the sketch after the next example):
var maxSizeBytes = 512 * 1024; // 512 KB, double the default
var batchLimit = 100;          // 1/10 the default

logger.WriteTo.Seq(
    "http://seq.server.com",
    eventBodyLimitBytes: maxSizeBytes,
    batchPostingLimit: batchLimit,
    restrictedToMinimumLevel: LogEventLevel.Information
);
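And a minimal sketch of the `LoggingLevelSwitch` route; `controlLevelSwitch` is a real parameter of the Seq sink, but the URL and levels here are assumptions:

using Serilog;
using Serilog.Core;
using Serilog.Events;

// Start at Information; the switch can be flipped at runtime without rebuilding the logger.
var levelSwitch = new LoggingLevelSwitch(LogEventLevel.Information);

Log.Logger = new LoggerConfiguration()
    .WriteTo.Seq("http://seq.server.com", controlLevelSwitch: levelSwitch)
    .CreateLogger();

// Later, e.g. while diagnosing an issue:
levelSwitch.MinimumLevel = LogEventLevel.Debug;

With the Seq sink specifically, passing the switch via `controlLevelSwitch` also allows the Seq server to adjust the effective level remotely.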
Configuring a smaller batch size can decrease the chance that you exceed the byte limit, but it will cause more HTTP requests to be sent, which could become a problem of its own. And remember, you also need to adhere to the server's configuration: the combined size of a batch needs to be less than the "Raw ingestion payload limit" to ensure the entire batch isn't dropped, and a single event must be smaller than the "Raw event body limit".
Since I don't know anything about your actual events, I can't advise on the appropriate settings here; I can only point you in the right direction. You should perform some calculations based on the server settings, knowledge of your events (and their attached properties), and your business requirements.
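For example, a rough back-of-the-envelope check (all numbers here are assumptions you'd replace with your own measurements):

// Assumed server setting and a measured typical event size -- substitute your own.
const int rawIngestionPayloadLimit = 10 * 1024 * 1024; // 10 MB, hypothetical
const int typicalEventBytes = 4 * 1024;                // ~4 KB per serialized event
const int batchPostingLimit = 100;

// Worst-case batch payload if every event hits the typical size.
int worstCaseBatchBytes = typicalEventBytes * batchPostingLimit; // 400 KB

Console.WriteLine(worstCaseBatchBytes < rawIngestionPayloadLimit
    ? "Batches should fit within the ingestion limit."
    : "Batches may be rejected by the server.");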
Your other option is to use an "audit" sink. In this mode, Serilog will throw an exception if a log message cannot be transmitted to the target medium. When the `Seq` extension method is used against `AuditTo`, a `DurableSeqSink` is used instead; logs are persisted to disk first (as temporary storage) and then shipped to Seq one at a time. If you really must guarantee that log messages aren't dropped (and that you are notified if they are), this is the way to go. There's overhead here: messages must be written to a file, you need a new HTTP request per event, and you need to guard all of your logging statements with `try`/`catch`. The latter point is, IMO, the biggest argument against this approach, particularly if you are using the `Microsoft.Extensions.Logging.ILogger` abstraction around Serilog, as consumers typically expect logging to be exception-free.
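A minimal sketch of the audit route (the URL is assumed; the `try`/`catch` illustrates why every call site needs guarding):

using System;
using Serilog;

var auditLogger = new LoggerConfiguration()
    .AuditTo.Seq("http://seq.server.com")
    .CreateLogger();

try
{
    auditLogger.Information("Critical business event");
}
catch (Exception ex)
{
    // In audit mode a failed write throws instead of silently dropping the event.
    Console.Error.WriteLine($"Log delivery failed: {ex}");
}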
You can also make use of log filters/`LoggingLevelSwitch` and Serilog's `Filter.ByExcluding` to control the messages that will be sent (see the sketch after the next code block). Furthermore, you might consider tweaking the destructuring depth and size to ensure that nested properties and collections are only serialized up to a maximum size:
logger.Destructure.ToMaximumDepth(5)            // default is 10
    .Destructure.ToMaximumStringLength(1000)    // made-up value; default is int.MaxValue
    .Destructure.ToMaximumCollectionCount(10);  // made-up value; default is int.MaxValue
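And, as referenced above, a sketch of filtering (the namespace and message text are hypothetical examples):

using Serilog;
using Serilog.Filters;

Log.Logger = new LoggerConfiguration()
    // Drop everything emitted from a hypothetical chatty namespace.
    .Filter.ByExcluding(Matching.FromSource("Noisy.Namespace"))
    // Drop a hypothetical high-volume message by its template text.
    .Filter.ByExcluding(e => e.MessageTemplate.Text.Contains("Heartbeat"))
    .WriteTo.Seq("http://seq.server.com")
    .CreateLogger();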