
Scaling Blockchains at .conf19

Our goal on the Splunk Blockchain team is to help our customers ensure their blockchain deployments are scalable, secure, and stable. This blog post will discuss a few approaches to blockchain scalability and how Splunk fits into the picture.

When considering scaling blockchains, it’s important to keep in mind that blockchains are replicated systems. Replicated systems don’t scale quite the same way that horizontally scalable systems like Splunk, Hadoop, or load-balanced web servers do. In horizontally scalable systems, data is split between multiple servers, and each server takes responsibility for storing and processing its “shard” of the overall data. Horizontally scalable architectures like these have been immensely successful in reducing the cost and increasing the reliability of web-scale infrastructure. To secure these types of systems, we add encryption, logging, firewalls, and other services on top.

In a replicated system like blockchain, all servers (aka nodes) have to keep a full copy of the data and handle processing. This replication makes the blockchain secure against data corruption, tampering, and other malicious activity. Adding more nodes to a blockchain makes it more secure, but it does not increase throughput: each server still needs to process a full copy of all the data, unlike in a sharded, horizontally scalable architecture.
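To make the contrast concrete, here’s a toy Python sketch (purely illustrative, not Splunk or blockchain code) comparing per-node load in a hash-sharded system with a fully replicated one as servers are added:

```python
import hashlib
from collections import Counter

def shard_for(key: str, num_nodes: int) -> int:
    """Hash-based routing: each record is stored and processed by one node."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % num_nodes

records = [f"event-{i}" for i in range(10_000)]

for num_nodes in (4, 8, 16):
    # Sharded: records are split across nodes, so per-node load shrinks.
    per_node = Counter(shard_for(r, num_nodes) for r in records)
    # Replicated: every node keeps and processes a full copy, so adding
    # nodes adds security but no extra throughput.
    print(f"{num_nodes:>2} nodes | sharded ~{max(per_node.values())} records/node"
          f" | replicated {len(records)} records/node")
```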

Splunk scales with more servers and we add firewalls and encryption on top to make it more secure. Blockchains get more secure with more servers and we have to add “layer 2” systems on top in order to scale.

Scaling blockchain throughput requires new systems that sit on top of the blockchain. In essence, blockchain scaling involves moving data and compute from “on-chain” to “off-chain”. These “off-chain” systems can be another blockchain (commonly referred to as a sidechain), secure messaging networks called “state channels” or “payment channels”, or simply a traditional, centralized system.
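As an example of the state-channel idea, here is a deliberately simplified sketch (illustrative only; real channels use on-chain deposits and public-key signatures rather than a shared secret): two parties exchange signed balance updates off-chain, and only the final state touches the chain.

```python
import hashlib
import hmac
import json

SECRET = b"shared-demo-key"  # stand-in for real public-key signatures

def sign(state: dict) -> str:
    payload = json.dumps(state, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def pay(state: dict, amount: int) -> dict:
    """One off-chain payment from alice to bob: a new signed state, no on-chain tx."""
    new_state = {"nonce": state["nonce"] + 1,
                 "alice": state["alice"] - amount,
                 "bob": state["bob"] + amount}
    new_state["sig"] = sign(new_state)
    return new_state

# The channel opens on-chain with deposits; everything after is off-chain.
state = {"nonce": 0, "alice": 50, "bob": 50}

for _ in range(3):               # three payments, zero on-chain transactions
    state = pay(state, 5)

# Only the latest signed state is submitted when the channel closes.
print("settle on-chain:", state)
```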

At .conf19, we demonstrated two of these approaches to scaling blockchain networks. The first was a sidechain (the xDai network) used for Buttercup Bucks. The second was a talk we did with Samsung SDS on its Nexledger Accelerator, which is more akin to a traditional, centralized system acting as a caching and batching layer.

The xDai Sidechain and Buttercup Bucks

The xDai sidechain is an Ethereum-based blockchain network that relies on known entities called validators, instead of pseudonymous miners, for its security. The advantage of this approach is greatly reduced transaction costs, because the blockchain does not require large amounts of energy to secure. The trade-off (there’s no free lunch!) is the trust you place in the validators to behave honestly. While you might not want to move a billion dollars on the xDai network, it is certainly secure enough for small-denomination transactions like the ones we had for Buttercup Bucks.

We also liked xDai for a couple of other reasons. As a sidechain, it has much less transaction volume than Ethereum mainnet, so we could ensure that all our Buttercup Bucks transactions would settle quickly during the conference. Also, Dai, the token we used to pay transaction fees, is a USD-pegged stablecoin. For 19,559 transactions over 4 days at .conf, we paid only $1.55 in fees! And I just love geeking out about Dai! If you want to learn more about the xDai sidechain, check out "xDai: The birth of the stable chain."
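That works out to a tiny fraction of a cent per transaction:

```python
# Back-of-the-envelope: average fee per Buttercup Bucks transaction.
total_fees_usd = 1.55
transactions = 19_559
print(f"${total_fees_usd / transactions:.6f} per transaction")  # ~$0.000079
```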

As Splunk customers know, monitoring your IT infrastructure is critical to success and blockchain is no different. For Buttercup Bucks, we built a series of dashboards using our new dashboards framework to monitor the overall performance of the xDai sidechain.
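As a hedged sketch of what feeding sidechain metrics into Splunk can look like (the RPC endpoint, HEC URL, and token below are placeholders, not the actual Buttercup Bucks pipeline), you could poll the chain’s JSON-RPC API and forward events to a Splunk HTTP Event Collector:

```python
import json
import time
import requests

RPC_URL = "https://rpc.example.org"          # hypothetical xDai RPC endpoint
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder HEC token

def latest_block_number() -> int:
    """Ask the node for its current block height via standard JSON-RPC."""
    resp = requests.post(RPC_URL, json={
        "jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1,
    })
    return int(resp.json()["result"], 16)

def send_to_splunk(event: dict) -> None:
    """Forward one event to the Splunk HTTP Event Collector."""
    requests.post(HEC_URL,
                  headers={"Authorization": f"Splunk {HEC_TOKEN}"},
                  data=json.dumps({"event": event, "sourcetype": "xdai:metrics"}))

while True:
    send_to_splunk({"metric": "block_height", "value": latest_block_number()})
    time.sleep(10)  # poll every 10 seconds
```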

The Buttercup Bucks Ops Center is where we linked to all the dashboards that we created.

The architecture dashboard demonstrates how you can gain visibility into an entire blockchain network in one view.

Samsung SDS Nexledger Accelerator

Sidechains, however, are still blockchains and have limits to their throughput. One of those limits in Hyperledger Fabric (a permissioned, enterprise blockchain) is per-transaction latency, incurred as each transaction is propagated through the network for endorsement, ordering, and validation. These steps are required to ensure the validity of each transaction.

Samsung SDS has built a system for batching many individual transactions into one big transaction that it submits to Hyperledger Fabric. This has the effect of increasing the overall transaction throughput of the network. You can learn more about their product in their whitepaper or check out the code on GitHub.

One of Splunk’s strengths is our ability to monitor, alert, and react to complex, data-driven events. One of the challenges with the Nexledger Accelerator is write conflicts that occur in certain types of stateful smart contracts. While these write conflicts don’t result in corruption on the blockchain, they do require users to resubmit their transactions, which is obviously undesirable. The Nexledger Accelerator has two parameters that can be used to tune how aggressively it batches transactions: max batch items and max wait time.
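To make the batching idea concrete, here’s a minimal sketch (our own illustration, not the actual Nexledger Accelerator code) showing how those two parameters interact:

```python
import time

class Batcher:
    def __init__(self, max_batch_items: int, max_wait_time: float, submit):
        self.max_batch_items = max_batch_items
        self.max_wait_time = max_wait_time
        self.submit = submit            # callable that writes one batch on-chain
        self.pending = []
        self.oldest = None              # arrival time of the oldest pending item

    def add(self, tx) -> None:
        if not self.pending:
            self.oldest = time.monotonic()
        self.pending.append(tx)
        self._maybe_flush()

    def _maybe_flush(self) -> None:
        # A real implementation would also flush on a background timer;
        # this sketch only checks staleness when a new item arrives.
        full = len(self.pending) >= self.max_batch_items
        stale = self.oldest is not None and \
                time.monotonic() - self.oldest >= self.max_wait_time
        if full or stale:
            self.submit(self.pending)   # one on-chain tx for many user txs
            self.pending, self.oldest = [], None

batcher = Batcher(max_batch_items=100, max_wait_time=0.5,
                  submit=lambda batch: print(f"submitting {len(batch)} txs"))
for i in range(250):
    batcher.add({"id": i})
```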

In our collaboration with the team from Samsung SDS, we demonstrated how the Splunk Machine Learning Toolkit can learn which settings optimize for high throughput and minimal write conflicts. We also wrote an alert action that monitors blockchain logs and adaptively updates the Nexledger Accelerator in response to transaction throughput. Watch the recording of our talk for more details.
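The gist of the tuning loop, as a standalone sketch (the real demo used MLTK clustering inside Splunk; the measure function and weights below are toy stand-ins for replaying real traffic):

```python
def measure(max_batch_items: int, max_wait_time: float) -> tuple[float, float]:
    # Toy model: bigger batches raise throughput, but also the chance that
    # two transactions in the same batch touch the same key (write conflict).
    throughput = max_batch_items / (max_wait_time + 0.05)
    conflicts = max(0.0, max_batch_items / 100 - 0.5)
    return throughput, conflicts

candidates = [(items, wait)
              for items in (10, 50, 100, 500)
              for wait in (0.1, 0.5, 1.0)]

# Favor throughput, penalize write conflicts (weights are illustrative).
def score(params):
    throughput, conflicts = measure(*params)
    return throughput - 500 * conflicts

best = max(candidates, key=score)
print("best (max_batch_items, max_wait_time):", best)
```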

Clustering in MLTK reveals the optimal settings for Nexledger Accelerator.


Teal bars represent the number of write conflicts, before and after tuning.



Thanks for taking the time to read this post! We hope it was informative.

If you have more questions about our blockchain and DLT efforts, please reach out to us at blockchain@splunk.com!

Posted by

Jeff Wu

Jeff Wu is a product manager focused on blockchains at Splunk. He’s passionate about using data to solve big problems and he believes there are big problems that blockchain can solve. Prior to Splunk, he worked at Atlassian leading a team of data engineers working on data warehouse and data integration projects.
