
Monitor journald for further processing

Ask Time: 2020-06-11T06:17:18    Author: ygoe


I'm writing a service application that will read log entries from several sources, parse them into a simpler format, and allow analytics and statistics on them. The most important of these sources will be journald.

I understand that a command like journalctl -o json -f will give me all the data I need. I can open that as a subprocess and read from its stdout stream. An additional --since "..." argument allows me to catch up on entries missed since the last one received, for example after a reboot or service restart.
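To illustrate what I mean, here is a rough sketch (in Python for brevity; my real service will be .NET) of reading journalctl's JSON output line by line. The field names used (__CURSOR, __REALTIME_TIMESTAMP, _SYSTEMD_UNIT, MESSAGE) are standard journal fields; everything else is just illustrative naming:

```python
import json
import subprocess
from datetime import datetime, timezone

def parse_journal_line(line):
    """Parse one line of `journalctl -o json` output into a simpler dict.

    __REALTIME_TIMESTAMP is microseconds since the Unix epoch, encoded
    as a string; __CURSOR is an opaque token identifying the entry's
    position in the journal.
    """
    entry = json.loads(line)
    return {
        "cursor": entry["__CURSOR"],
        "time": datetime.fromtimestamp(
            int(entry["__REALTIME_TIMESTAMP"]) / 1e6, tz=timezone.utc),
        "unit": entry.get("_SYSTEMD_UNIT"),
        "message": entry.get("MESSAGE"),
    }

def follow_journal(after_cursor=None):
    """Spawn journalctl as a subprocess and yield parsed entries forever."""
    cmd = ["journalctl", "-o", "json", "-f"]
    if after_cursor:
        # Resume exactly after the last entry we processed.
        cmd += ["--after-cursor", after_cursor]
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        yield parse_journal_line(line)
```

One refinement over --since: persisting the last seen __CURSOR and restarting with --after-cursor resumes at the exact entry, avoiding the duplicates or gaps a timestamp-based --since can produce.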

But what about the performance of this approach? Is it advisable to use the journalctl command for such permanent background monitoring? Will keeping a journalctl instance running "forever" consume too much memory, disk I/O, or CPU time? Or should I rather find a more direct API into journald? I've also read about systemd-journal-upload, but I'm not entirely sure whether it can catch up after a reboot. It also sends all data over HTTP, which adds its own protocol overhead, although my service will be written with ASP.NET Core, so it can already accept HTTP requests.

I'm targeting Ubuntu Server 20.04 and calling from .NET Core 3.1, if that's relevant. Journald is configured to use persistent storage. Everything runs locally; for now I'm not collecting logs from remote machines.

Author: ygoe. Reproduced under the CC BY-SA 4.0 license with a link to the original source and this disclaimer.
Link to original article:https://stackoverflow.com/questions/62314031/monitor-journald-for-further-processing