Splunk 4.1.6 updates OpenSSL to 0.9.8p to address CVE-2010-3864 - December 1st, 2010

Overview

Splunk 4.1.6, released on November 29th, 2010, updates OpenSSL to version 0.9.8p to address the race condition vulnerability described in CVE-2010-3864 (cve.mitre.org) (openssl.org).

What is OpenSSL?

OpenSSL is an Open Source toolkit for implementing the Secure Sockets Layer (SSL v2/v3) and Transport Layer Security (TLS v1) protocols as well as a full-strength general purpose cryptography library.

How does Splunk use OpenSSL?

Splunk uses OpenSSL to provide transport layer security.

Who is affected?

This notification applies to you if you are using any version of Splunk (2.x, 3.x, or 4.x) prior to version 4.1.6.

What should I do if I am affected?

Splunk recommends that customers upgrade to version 4.1.6 at their earliest opportunity.

What else can I do to help remediate this issue?

Splunk recommends that customers implement as many aspects of the Splunk Hardening Standards as possible to reduce risk.

Is Splunk aware of any exploits related to CVE-2010-3864?

At the time of this announcement, Splunk is not aware of any exploits for vulnerabilities related to CVE-2010-3864.

Why has Splunk included this update to OpenSSL?

OpenSSL's advisory states:

Who is affected?
=================

All versions of OpenSSL supporting TLS extensions contain this vulnerability
including OpenSSL 0.9.8f through 0.9.8o, 1.0.0, 1.0.0a releases.

Any OpenSSL based TLS server is vulnerable if it is multi-threaded and uses
OpenSSL's internal caching mechanism. Servers that are multi-process and/or
disable internal session caching are NOT affected.

Splunk 4.1.6 includes OpenSSL 0.9.8p because all versions of Splunk prior to 4.1.6:

  • utilize the TLS extensions in OpenSSL
  • are multi-threaded
  • do not explicitly disable OpenSSL's internal caching mechanism

What if I have additional questions?

If you have any questions about the information above, please contact Support.