The Splunk App for Infrastructure (SAI) has changed the game when it comes to IT Operations monitoring and alerting of metrics and logs. The App provides a uniform and dynamic overview dashboard, and an analysis workspace for a simple method to work with metrics.
On Linux machines, Splunk leverages the collectd project to push metrics to upstream indexers, typically via the HTTP Event Collector (HEC). For existing Splunk customers, Universal Forwarders will already be configured to send data upstream through the default Splunk-to-Splunk port 9997. As the HTTP Event Collector runs on port 8088 by default, a security team may need to open another firewall rule, and this could delay data ingestion for your future projects or use cases.
What if there was another way to collect metrics without making any firewall changes?
This blog describes a simple workaround to route all metrics traffic locally through a Universal Forwarder configured to listen on a UDP port. The metrics data is then forwarded out via the Splunk-to-Splunk transport on port 9997, instead of the collectd defaults sending data through the HTTP Event Collector on port 8088.
The Splunk online documentation describes how to manually configure metrics collection on *nix hosts for the Splunk App for Infrastructure. This solution automates that process by modifying the parameters of the default install script provided within the Splunk App for Infrastructure.
The instructions are based on RedHat Linux Operating Systems and assume the Universal Forwarder is pre-configured and forwarding data to an upstream Splunk indexer.
On the Universal Forwarder, create a local inputs stanza to listen on UDP port 1998:

[udp://1998]
index = em_metrics
sourcetype = em_metrics_udp
no_appending_timestamp = true

Restart Splunk on the Universal Forwarder.
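As a quick sanity check, you can fire a test datagram at the new input from the same host using bash's built-in /dev/udp pseudo-device. This only confirms the datagram can be sent to the port; whether a hand-written line indexes cleanly depends on the em_metrics_udp sourcetype's parsing, so treat this as a plumbing check, not a metrics test.

```shell
# Send a test datagram to the local UDP input (bash /dev/udp redirection).
echo "test.metric 1 $(date +%s)" > /dev/udp/127.0.0.1/1998 && echo "datagram sent"
```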
Next, log in to the host or remotely execute this command to install and configure the collectd agent, which will forward metrics to the UDP port created above.
- Replace “YOUR_SPLUNK_FOR_INFRASTRUCTURE_HOST” with the hostname running Splunk App for Infrastructure.
- The script must be run by a user with sudo permissions
- Add any dimensions in the format DIMENSIONS="env:prod","owner:dev"
- For Docker monitoring, set SAI_ENABLE_DOCKER=YES
export SPLUNK_URL=localhost && export METRIC_USE_UDP=YES && export UDP_PORT=1998 && export METRIC_BUFFER_SIZE=9000 && export INSTALL_LOCATION=/opt/ && export SAI_ENABLE_DOCKER= && export DIMENSIONS= METRIC_TYPES=cpu,uptime,df,disk,interface,load,memory,processmon METRIC_OPTS=cpu.by_cpu LOG_SOURCES= AUTHENTICATED_INSTALL=Yes && curl -L -O http://YOUR_SPLUNK_FOR_INFRASTRUCTURE_HOST:8000/static/app/splunk_app_infrastructure/unix_agent/unix-agent.tgz && tar -xzf unix-agent.tgz || gunzip -c unix-agent.tgz | tar xvf - && cd unix-agent && bash install_agent.sh --force-continue && cd .. && rm -rf unix-agent && rm -rf unix-agent.tgz

Note that SPLUNK_URL is intentionally set to localhost: collectd sends metrics to the Universal Forwarder's UDP input on the same host, not to an upstream indexer.
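Once the script completes, it helps to confirm that collectd really was pointed at the local UDP input rather than HEC. The sketch below is an assumption-laden helper (the function name is ours; /etc/collectd.conf is the default path the install script writes, and "useudp" is the write_splunk plugin option the UDP mode uses).

```shell
# Sketch: verify collectd's write_splunk config targets UDP port 1998.
# check_collectd_udp is a hypothetical helper, not part of the install script.
check_collectd_udp() {
  local conf=${1:-/etc/collectd.conf}   # default path used by the installer
  grep -q 'useudp' "$conf" && grep -q '1998' "$conf" \
    && echo "collectd is configured for UDP port 1998"
}
```

Pair it with a service check, e.g. `check_collectd_udp && systemctl is-active collectd`.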
The Splunk App for Infrastructure script requires internet access to download the relevant packages. If your hosts are denied access, you can download and install the RPM packages manually, through a configuration management tool, or via a remote scripted command using ssh. Re-run the above script once the packages are installed.
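For the ssh route, a simple loop can stage and install the pre-downloaded RPMs on each host. This is only a sketch: the function name, host names, and the /tmp staging path are illustrative assumptions, and the --dry-run mode just prints the commands so you can review them first.

```shell
# Sketch: push pre-downloaded collectd RPMs to hosts and install them via ssh.
# remote_install is a hypothetical helper; paths and hosts are examples.
remote_install() {
  local mode=$1; shift
  for host in "$@"; do
    for cmd in \
      "scp /tmp/collectd-rpms/*.rpm ${host}:/tmp/" \
      "ssh ${host} sudo yum localinstall -y '/tmp/*.rpm'"; do
      if [ "$mode" = "--dry-run" ]; then echo "$cmd"; else eval "$cmd"; fi
    done
  done
}
```

For example, `remote_install --dry-run web01 web02` prints the commands without running them; `remote_install --run web01 web02` executes them.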
Reference RPM packages for RedHat versions 7 and 8 are linked below:
- RHEL 8
- RHEL 7
- Refer to this page for other distributions.
Pro Tip: To adjust the collectd push interval, run the following sed command:
sed -i 's/Interval 60/Interval 30/g' /etc/collectd.conf
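Remember that collectd must be restarted for the new interval to take effect. The helper below wraps the same edit in a reusable form and echoes the resulting line so you can confirm the change; the function name is ours, and it assumes the Interval directive sits at the start of a line, as in the default /etc/collectd.conf.

```shell
# Sketch: change the collectd push interval in place and show the new value.
# set_interval is a hypothetical helper; pass the conf path and new seconds.
set_interval() {
  local conf=$1 seconds=$2
  sed -i "s/^Interval [0-9]*/Interval ${seconds}/" "$conf"
  grep '^Interval' "$conf"
}
```

Usage: `set_interval /etc/collectd.conf 30`, followed by `sudo systemctl restart collectd`.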
Navigate back to the Splunk App for Infrastructure UI and the new hosts will appear as new entities. Here’s an example.
To get started, download the Splunk App for Infrastructure, and let's get the HEC out of here to start collecting metrics!