Kosmos Community

Story: Set up log collection, centralization and monitoring on Kosmos servers

Defining this as a ‘story’ ticket because separate topics will need to be created for the items below. I proposed to basti that we set up log collection.

To get started (since this is a big task), we begin with one or two servers.

Log Collection

  • Define the sources to log: servers, databases, any network detection rules in place (e.g. Snort), authentication events, proxy logs, mail server logs (if relevant), DNS servers, etc. This requires obtaining a list of the assets, then drilling down. This defines the coverage scope of events.
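As a concrete starting point on the first one or two servers, a quick shell check of which candidate sources actually exist can seed that asset drill-down. The path list below is an assumption for a Debian-style host (e.g. RHEL would use /var/log/secure and /var/log/maillog instead):

```shell
#!/bin/sh
# Sketch: report which candidate log sources exist on this host.
# The path list is an assumption -- extend it from the real asset list.
for src in /var/log/auth.log /var/log/syslog /var/log/kern.log \
           /var/log/mail.log /var/log/snort/alert; do
    if [ -r "$src" ]; then
        echo "collect: $src"
    else
        echo "absent:  $src"
    fi
done
```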

  • Define the auditing sources to log: what the auditing sources are (e.g. configuration files) and which audit rules to create.
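For the audit-rule part, a minimal sketch of an auditd rules file, assuming auditd is the auditing source on these servers (file path and key names below are illustrative):

```
# /etc/audit/rules.d/50-baseline.rules (illustrative path and keys)
# Watch identity and privilege configuration files for writes/attribute changes
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/sudoers -p wa -k privilege
# Record executions of the privilege-escalation binary
-w /usr/bin/sudo -p x -k privilege
```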

  • Define the events to log and the parsing rules. The priority would be security-related events; we might also want to define others, such as events for troubleshooting/operational reasons.
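As one example of a security-focused parsing rule, a Grok pattern (usable as a Graylog extractor) for failed SSH logins could look like the sketch below; the extracted field names are assumptions:

```
# Matches lines such as: "Failed password for root from 203.0.113.5 port 51234 ssh2"
Failed password for %{USERNAME:user} from %{IP:src_ip} port %{NUMBER:src_port}
```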

  • Define which event logs may contain PII.

Log Centralization

  • Define the vendors to use. Note: I’ve so far used Splunk, ELK, and a third-party log agent supported by Graylog. Also, since we are running Linux/Unix-like servers, there are other agents we could use, such as syslog-ng or the built-in rsyslog.
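If we lean on the built-in rsyslog, forwarding everything to a central collector is a small drop-in config; the hostname and port below are placeholders:

```
# /etc/rsyslog.d/50-forward.conf (placeholder collector address)
# Forward all facilities/severities over TCP with a disk-assisted queue,
# retrying forever if the collector is unreachable.
*.* action(type="omfwd" target="logs.kosmos.example" port="514" protocol="tcp"
           queue.type="linkedList" queue.filename="fwd"
           action.resumeRetryCount="-1")
```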

  • AFAIK Graylog is already in place; this would inform the type of log agent to use (Graylog Sidecar), which extensions to use, etc. We need to see whether the plan is to stay with Graylog or move elsewhere.
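If Graylog stays, the Sidecar typically manages a backend collector such as Filebeat; a minimal sketch of the collector configuration it would push is below, with the Graylog side being a Beats input (host is a placeholder, 5044 is the conventional Beats port):

```yaml
# Filebeat configuration as managed by Graylog Sidecar (sketch)
filebeat.inputs:
  - type: filestream
    id: auth-logs
    paths:
      - /var/log/auth.log
output.logstash:
  # Points at a Graylog Beats input (placeholder host)
  hosts: ["graylog.kosmos.example:5044"]
```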

  • Create/find the initial baseline of events. Wider logging means a bigger storage requirement to hold the events, and we need to watch out for potential performance issues if the logging becomes too verbose.
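To keep the storage requirement concrete while widening the baseline, a back-of-envelope estimate helps; every figure below is an illustrative assumption, not a measurement:

```shell
#!/bin/sh
# Back-of-envelope retention storage estimate -- all numbers are assumptions.
EVENTS_PER_DAY=500000   # assumed average across the first one or two servers
BYTES_PER_EVENT=500     # assumed average event size, incl. index overhead
RETENTION_DAYS=90       # assumed retention policy
echo "$(( EVENTS_PER_DAY * BYTES_PER_EVENT * RETENTION_DAYS / 1024 / 1024 / 1024 )) GiB"
```

With these assumed inputs the sketch prints `20 GiB`; doubling verbosity or retention scales the number linearly, which is the trade-off this bullet is about.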

Log Monitoring

  • Configure alert rules

  • Add a visualization dashboard, with rules (e.g. uncoder.io). This would be another ticket when the time comes to define the visualization rules.
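Since uncoder.io translates Sigma rules into backend-specific queries (including Elastic/Kibana), one portable way to author the alert and visualization rules is to write them as Sigma first. A hedged sketch for the failed-SSH-login case, with illustrative title and level:

```yaml
# Sigma rule sketch -- metadata values are illustrative
title: SSH Failed Password
status: experimental
logsource:
  product: linux
  service: sshd
detection:
  keywords:
    - 'Failed password for'
  condition: keywords
level: medium
```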

  • As Graylog is more of a log management solution, ideally we should build something with Graylog and find components that add more SIEM-like features, or use Graylog for log management and forward to another instance, such as Splunk or Elastic with relevant Kibana dashboards (see the uncoder.io Kibana rules).
