Syslog, Syslog-ng, and Splunk Forwarders

Important Update as of 6/5/2020: Splunk has released Splunk Connect for Syslog (SC4S), a solution for syslog data sources. More information can be found in our blog post.

I often get asked which is better for log management: syslog, syslog-ng, or Splunk forwarders.

The answer is nearly always the same. “What are you currently running in your infrastructure? Do you have a log archive? What are you comfortable configuring?”

Most, if not all, systems come with syslog built in, and setting Splunk up to handle syslog inputs is trivial. If you only deal with single-line events, syslog is fine: configure Splunk to use the monitor input and point it at the directory where you store your syslog log files, often /var/log or /var/adm depending on whether it is a Linux or Solaris installation.

If you have a medium scale deployment where you have lots of servers, you can configure syslog to listen to remote syslog hosts. Run Splunk on your receiver and you’re done.

As an example, let's say we have a Linux deployment.

  • Step one, configure syslog to "listen" for incoming messages. On most systems these days the syslogd flags are configured in the /etc/sysconfig/syslog file. Append -r to the options: SYSLOGD_OPTIONS="-m 0 -r"
  • On the sender hosts, append "*.* @LOGHOST" to the end of /etc/syslog.conf
  • Add an entry to your /etc/hosts file mapping "LOGHOST" to the receiver's IP address
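Putting the steps above together, the relevant lines on each side would look roughly like this (the 192.168.1.10 address is just a placeholder for your receiver's IP):

```
# On the receiver: /etc/sysconfig/syslog (-r enables remote reception)
SYSLOGD_OPTIONS="-m 0 -r"

# On each sender: last line of /etc/syslog.conf (forward all facilities/priorities)
*.* @LOGHOST

# On each sender: /etc/hosts entry pointing LOGHOST at the receiver
192.168.1.10    LOGHOST
```

Remember to restart syslogd on both sides after making these changes.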

Assuming your receiver has the /var/log directory set up, create an inputs.conf in your $SPLUNK_HOME/etc/system/local/ directory with the following stanza:

[monitor:///var/log]
sourcetype = syslog
disabled = false
host = host_name

I like to recommend syslog-ng for both large-scale deployments and deployments with significant traffic. Syslog-ng allows you to use TCP rather than UDP to send your log messages. As we all know, UDP is lossy: if there are too many messages for the network, interface, or host running syslog, you will drop data. Syslog-ng also allows you to pre-filter messages on arrival into "buckets," giving you better control over your logs. Splunk can still easily be configured to monitor the target path and handle the naming of incoming systems, events, and dates.
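As a sketch (the port and archive path are assumptions, not a drop-in config), a minimal syslog-ng setup that receives over TCP and files messages into per-host buckets might look like:

```
# Receive messages over TCP instead of lossy UDP
source s_net { tcp(ip(0.0.0.0) port(514)); };

# Bucket messages by sending host and by date using syslog-ng macros
destination d_archive {
    file("/var/log/archive/hosts/$HOST/$YEAR-$MONTH-$DAY.log");
};

log { source(s_net); destination(d_archive); };
```

The $HOST macro in the destination path is what makes the per-host directory layout, described below, possible.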

To have Splunk pick up the correct hostname from a log archive built with syslog-ng, you would have to make sure syslog-ng includes the hostname in the path. For example, /var/log/archive/hosts/hostname/…/

The Splunk monitor stanza would look like this:

[monitor:///var/log/archive/hosts]
host_segment = 5
sourcetype = syslog
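To see why host_segment = 5 picks out the hostname, count the slash-separated segments of the archive path (the hostname and filename below are just examples):

```shell
path="/var/log/archive/hosts/web01/2020-06-05.log"
# host_segment counts path segments starting at 1: var=1, log=2,
# archive=3, hosts=4, hostname=5. cut uses field 6 because the
# leading slash makes field 1 empty.
host=$(echo "$path" | cut -d/ -f6)
echo "$host"
```

This prints web01, which is exactly what Splunk assigns as the host field for events from that directory.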

Where do Splunk forwarders come into play here? (I knew you would ask.)

Splunk forwarders handle multi-line log events, which makes troubleshooting Java apps, PHP apps, and practically anything else that uses this format trivial. Typically I recommend using a mixture of inputs. I like using syslog/syslog-ng for collecting the log data to a central repository; this guarantees that you will always have the original data around. I then recommend configuring a Splunk instance to monitor the target directory of the syslog messages as well as pointing Splunk at the directories that contain the multi-line events. Best of both worlds.
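On an application host, the forwarder side of that mix is just another monitor stanza. The path below is illustrative, and log4j is one of Splunk's pretrained source types that handles multi-line Java events:

```
[monitor:///opt/myapp/logs]
sourcetype = log4j
disabled = false
```

With this in place the forwarder ships the application's multi-line events to your indexer, while syslog-ng keeps feeding the central archive.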

What are the drawbacks of forwarders? Just like configuring Splunk as a syslog receiver, if your Splunk instance is down, you get no data.

So, often the best solution is to run Splunk Forwarders on those hosts that have multiline logs and use syslog/syslog-ng on your central server. Collect syslog with syslog-ng and collect app logs with Splunk. Best of both worlds.
