I currently have the following logging setup in Kubernetes:
Fluent Bit deployed as a DaemonSet that collects logs from the nodes.
After collecting the logs, Fluent Bit forwards the data to two destinations:
-> Fluentd (aggregator), which performs log archival.
-> Loki, which stores the logs so they can be viewed in Grafana.
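For reference, the Fluent Bit side of that fan-out looks roughly like the sketch below, using the forward output towards Fluentd and the loki output plugin; the hostnames are placeholders, not my exact values:

    [OUTPUT]
        Name   forward
        Match  *
        # Assumed in-cluster service address for the Fluentd aggregator
        Host   fluentd.logging.svc.cluster.local
        Port   24224

    [OUTPUT]
        Name   loki
        Match  *
        # Assumed in-cluster service address for Loki
        Host   loki.logging.svc.cluster.local
        Port   3100
        Labels job=fluent-bit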
In this setup, consider a scenario where a specific application malfunctions and starts emitting the same data, or otherwise unwanted data, repeatedly. Since we have log retention configured on Fluentd and Loki, the old logs may get deleted to make room for the new ones.
Can we apply throttling (in each component) so that we avoid losing our valuable old data, which is currently being deleted to make room for the unwanted logs coming from the malfunctioning application?
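For example, at the sending end I came across Fluent Bit's throttle filter; a minimal sketch (the rate and window values below are arbitrary, not recommendations):

    [FILTER]
        Name      throttle
        Match     *
        # Allow on average 800 records per second, measured over a
        # sliding window of 3 intervals of 1 second each
        Rate      800
        Window    3
        Interval  1s

On the Loki side, I understand limits_config can cap ingestion per tenant and per stream, so that one noisy stream cannot crowd out the rest; again, the numbers are placeholders:

    limits_config:
      # Global per-tenant ingestion limits (MB per second)
      ingestion_rate_mb: 4
      ingestion_burst_size_mb: 6
      # Per-stream limits, so a single noisy app is rejected first
      per_stream_rate_limit: 3MB
      per_stream_rate_limit_burst: 15MB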
Is there any way we can set criteria to drop the unwanted logs coming from a node, either at the sending end (Fluent Bit) or at the receiving end (Loki or Fluentd)?
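At the sending end, the closest thing I have found is Fluent Bit's grep filter, which can drop records matching a pattern; the tag and the noisy pattern below are assumptions for illustration:

    [FILTER]
        Name     grep
        # Match only the offending workload's tag so other apps are untouched
        Match    kube.var.log.containers.noisy-app*
        # Drop any record whose log field matches the repeated payload
        Exclude  log ^.*duplicate payload.*$

At the receiving end, I have seen the community fluent-plugin-throttle plugin for Fluentd, which rate-limits per group key and drops the excess; a sketch, assuming Kubernetes metadata is attached to the records:

    <filter kubernetes.**>
      @type throttle
      # Bucket records per container and drop anything beyond
      # 6000 records in a 60-second window
      group_key kubernetes.container_name
      group_bucket_period_s 60
      group_bucket_limit 6000
    </filter>

Is one of these the recommended place to enforce such a rule, or is there a better option on the Loki side?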