Tracing your TCP IPv4 connections with eBPF and BCC from the Linux kernel JIT-VM to Splunk

Starting with Linux kernel 4.1, an interesting feature got merged: eBPF. For anyone playing with networks, BPF should sound familiar: it is the filtering system used by user-space tools such as tcpdump or Wireshark to capture and display only the wanted (filtered) packets. The "e" in eBPF means extended: it takes BPF beyond network traffic and allows tracing various things from the kernel, such as syscalls, kprobes, tracepoints, etc.


eBPF runs a piece of C code compiled to bytecode, which the kernel either interprets or translates to native code with the Just-In-Time compiler. In short, eBPF is a virtual machine interpreting code inside the Linux kernel. In the current git tree, BPF offers 89 instructions from which eBPF bytecode programs are built.
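Whether the kernel interprets the bytecode or JIT-compiles it is controlled by the net.core.bpf_jit_enable sysctl. Here is a small sketch that reports the JIT status; the /proc path is the standard procfs location for that sysctl, and the fallback covers kernels built without BPF JIT support:

```python
# Report whether the kernel's BPF JIT compiler is enabled.
# net.core.bpf_jit_enable: 0 = interpreter only, 1 = JIT on, 2 = JIT on with debug output
def describe_jit(value):
    return {0: "interpreter only",
            1: "JIT enabled",
            2: "JIT enabled (debug)"}.get(value, "unknown")

def read_jit_status(path="/proc/sys/net/core/bpf_jit_enable"):
    try:
        with open(path) as f:
            return describe_jit(int(f.read().strip()))
    except IOError:  # kernel built without CONFIG_BPF_JIT, or path missing
        return "no BPF JIT support"

if __name__ == "__main__":
    print(read_jit_status())
```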


It is an amazing tool for tracing, but in this post I would like to share how we can list TCP IPv4 connections and send them to Splunk using the HTTP Event Collector (HEC), all from the kernel side!

We will cover the Linux kernel configuration that you need, as well as the Splunk dashboard which monitors those events.

Step 1: Getting the latest Linux Kernel

Those steps are done on a Debian distribution and should also work on Ubuntu. If you run another distribution, adjust accordingly or find another way to get a kernel > 4.1.
We first grab the freshest Linux source code from the Linus tree by running the git clone command:
$ git clone git://
Because we are on a Debian distribution, we would like to use the standardized tools provided by the Debian Kernel Package.
We need to install the following packages to automate the building and packaging creation of this kernel:
$ sudo apt-get install kernel-package build-essential libncurses5-dev fakeroot

Now we can configure options we need for our kernel by running the ncurses frontend, menuconfig:
$ make ARCH=x86_64 menuconfig

If you want to play with the new bpf() syscall, activate the item “Enable bpf() system call” under “General Setup”:
Linux enable bpf() system call
We save to the “.config” file, and we make sure the kernel configuration builds BPF:
$ grep BPF .config
# CONFIG_NET_ACT_BPF is not set

Now we can use the Debian kernel package builder, make-kpkg:
$ make-kpkg --initrd --rootcmd fakeroot kernel_image
exec make kpkg_version=12.036+nmu3 -f /usr/share/kernel-package/ruleset/ debian ROOT_CMD=fakeroot
====== making target debian/stamp/conf/minimal_debian [new prereqs: ]======
dpkg --build                   ~/git/linux/debian/linux-image-4.6.0-rc6+ ..
dpkg-deb: building package `linux-image-4.6.0-rc6+' in `../linux-image-4.6.0-rc6+_4.6.0-rc6+-10.00.Custom_amd64.deb'.
make[1]: Leaving directory '~/git/linux'

It builds the kernel bzImage, as well as the modules.
We install the package like this:
$ sudo dpkg -i ../linux-image-4.6.0-rc6+_4.6.0-rc6+-10.00.Custom_amd64.deb

Now it is time to reboot into your new kernel. You can then check the version by typing:
$ uname -a | grep 4.6.0-rc6
$ echo $?

If the echo command returns 1, grep did not match and you have booted the wrong kernel; in that case, check from GRUB that the right kernel is selected.
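The same check can be scripted. Here is a minimal sketch that parses a kernel release string and tells you whether it is recent enough for eBPF (the helper name and the version strings are just examples):

```python
import platform

def ebpf_capable(release):
    """Return True if a kernel release string is >= 4.1 (first eBPF-enabled release)."""
    major, minor = release.split(".")[:2]
    # strip any suffix such as "-rc6" or "+" from the minor component
    minor = minor.split("-")[0].rstrip("+")
    return (int(major), int(minor)) >= (4, 1)

if __name__ == "__main__":
    print(platform.release(), "->", ebpf_capable(platform.release()))
```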
This is all good from the Linux kernel point of view; we can now move on to the userspace tools, with BCC.

Step 2: Building BCC

Once our kernel is set up, we are now going to install and use BCC (BPF Compiler Collection), which offers a Python API in which you embed the C code that will be compiled to BPF bytecode, and get results directly from the Linux kernel… in Python!
You can get BCC from the latest git repository:
$ git clone

Simply follow the BCC building instructions:
We also install the tools iperf and netperf:
$ sudo apt-get install iperf netperf

To test that BCC built fine, you can run the provided example:
$ sudo python /usr/share/bcc/examples/
          tpvmlp-1636  [000] d...  2633.342396: : Hello, World!
          tpvmlp-1636  [000] d...  2648.547213: : Hello, World!

And also a slightly longer example (four more lines of code):
$ sudo python /usr/share/bcc/examples/tracing/
1636 Hello, World!
1636 Hello, World!
3182 Hello, World!
3182 Hello, World!
1636 Hello, World!

Working? Now let’s go to the next step, setting up the Splunk HTTP Event Collector!

Step 3: Splunk HTTP Event Collector

Recently, Splunk introduced the HTTP Event Collector, which allows us to craft any type of event to be ingested by Splunk. The event must be formatted in JSON and sent to the listening socket on the Splunk side.
I recommend you go and read “Set up and use HTTP Event Collector” before continuing.
We create a new HEC service: go into Settings > Data inputs:
Now select on the left side the HTTP Event Collector:
On the upper-right corner, click on Global Settings:
This pops up the following window. We click on “Enabled” for All Tokens and we deactivate SSL, since we want to avoid adding SSL-handling code and keep things easy for this blog article (if you are doing anything beyond playing, deactivating it is obviously strongly discouraged!). We leave the port number at the default and click Save.
Now back on the previous page, click on “New Token” in the upper-right corner:
We give the name “bcc” to this token and a brief description, and we can click on “Next”:
We leave the input settings at the defaults, and we can click on “Review”:
We can now Submit:
Upon completion, our token is created successfully:
Copy the value, you will need this in your Python code!
We test if events can be sent using the program curl:
$ curl -k  http://localhost:8088/services/collector/event -H "Authorization: Splunk 652AE968-58E4-4304-A1FE-C4AB7A5CF327" -d '{"event": "hello world"}'

And we can check in Splunk that the event was received:
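The same request can be issued from Python, which is what we will automate in step 4. The sketch below only builds the pieces of the request (the token is the example value from above, and the payload follows the HEC event envelope):

```python
import json

HEC_TOKEN = "652AE968-58E4-4304-A1FE-C4AB7A5CF327"  # example token from this article

def hec_request(event, token=HEC_TOKEN):
    """Build the (path, headers, body) triple for a HEC event POST."""
    headers = {"Authorization": "Splunk " + token,
               "Content-Type": "application/json"}
    body = json.dumps({"event": event})
    return "/services/collector/event", headers, body

path, headers, body = hec_request("hello world")
print(path)
print(body)
```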

Step 4: BCC + HEC = \m/

We are going to modify an example provided by the BCC project team which simply lists the connected TCP IPv4 sockets:
$ wget

We can test the tool, by running it:
$ sudo python
PID    COMM         SADDR            DADDR            DPORT

And on the other side, create an active connection using wget:
$ wget

Now back to where we started the program:
$ sudo python
PID    COMM         SADDR            DADDR            DPORT
4367   wget    80
4367   wget    80
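The SADDR and DADDR columns come from 32-bit addresses read out of the kernel; on a little-endian machine the trace prints the address as a hex string with the first octet in the low byte. The example converts them with a small inet_ntoa helper along these lines (a sketch of that conversion):

```python
def inet_ntoa(addr):
    """Turn a 32-bit integer (least significant byte = first octet) into dotted-quad form."""
    octets = []
    for _ in range(4):
        octets.append(str(addr & 0xff))
        addr >>= 8
    return ".".join(octets)

# the tool reads each address as a hex string from the trace output
print(inet_ntoa(int("0100007F", 16)))  # 127.0.0.1
```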

We can now send a Splunk event every time there is a new connection. We need to modify the code a little bit: there is no need to touch the C part, just the Python one.
Copy the example to a new file:
$ cp

Now edit it with your favorite editor (emacs!) and go to line 20 to add imports for the httplib, os and json libraries:
from bcc import BPF
import os
import httplib
import json
# define BPF program

Now go to line 92 and initialize everything before the while loop starts:
headers = {"Authorization": "Splunk 652AE968-58E4-4304-A1FE-C4AB7A5CF327", "Content-Type": "application/json"}
conn = httplib.HTTPConnection("")
# filter and format output
while 1:

And finally, in the loop, we post the received data to Splunk. We also add a pid check to make sure we do not send to Splunk the connections this very process creates, otherwise we would end up in a nice infinite loop!
        # Ignore messages from other tracers
        if _tag != "trace_tcp4connect":
                continue
        # Skip our own connections to Splunk, or we loop forever
        if os.getpid() != pid:
                message = {"event": {"pid": pid, "task": task, "saddr": inet_ntoa(int(saddr_hs, 16)),
                                     "daddr": inet_ntoa(int(daddr_hs, 16)), "dport": dport_s}}
                conn.request("POST", "/services/collector/event", json.dumps(message), headers)
                res = conn.getresponse()

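Putting it together, every new connection now reaches Splunk as a HEC JSON envelope. The sketch below only shows the payload shape; the field values here are made-up examples, and the real code fills them in from the trace output:

```python
import json

# example values, as the tracer might produce for a wget connection
message = {"event": {"pid": 4367, "task": "wget",
                     "saddr": "10.0.2.15", "daddr": "93.184.216.34",
                     "dport": "80"}}
payload = json.dumps(message, sort_keys=True)
print(payload)
```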
We can now enjoy seeing our wget connections, as well as other Python processes, in Splunk:
Splunk search sourcetype httpevent wget python


As you have seen, the latest features of the Linux kernel let us send to Splunk anything the kernel sees, all from the kernel side, using the glue offered by BCC so we can simply write and prototype the code in Python. I hope you will find creative ways to use the new eBPF feature, and I would be more than happy to hear about the amazing stuff you are doing with it and Splunk!

Sebastien Tricaud
