
Big data and financial services – an EMEA perspective

I was lucky enough to attend the first day of the “Big Data in Financial Services” event in London a few days ago. I know some people might not think of that as lucky, but I say it on the back of a surprisingly varied agenda, entertaining speakers and a lot of good debate on what big data means to financial services (FS) companies and how they are using it.

The key point I took away was that right now, FS companies are using big data to focus on operational issues – risk, efficiency, compliance, security and making better decisions. However, there is a growing trend of FS companies looking at how big data can be used for more innovative, growth-enabling programs – building brand, using social data, customer insight, new products and services, etc. I took a lot of notes (thanks to Evernote) and I always like to run them through Wordle to see what comes out – the trends and themes from the event:

The discussion ranged from how best to use Hadoop, to how to create the right data architecture across a bank, to big data as an approach to application management, to why security and compliance are big data issues. It highlighted that big data means different things to different people, and that banks are taking a wide variety of angles to benefit from it.

A few ideas from two of the sessions stuck with me; both got to grips with the reality of making big data work in a bank.

Firstly – one of the world’s top five banks asked a great question of the audience: if I make big data “good” and keep it in Hadoop, who’s responsible for it as it proliferates across an organization? A term I’d not heard before (beyond peppermint tea) was “data infusion” – understanding the data coming in to Hadoop so you can keep your data “good”, clean and accurate.
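
To make that concrete, here’s a minimal sketch (in Python) of what a “data infusion” gate might look like: records are validated on the way into Hadoop, and anything suspect is quarantined rather than silently merged into the “good” dataset. The schema, field names and currency list are my own illustrative assumptions, not anything the bank described.

    # Minimal "data infusion" gate: validate records before they land in
    # the curated Hadoop dataset. All field names are hypothetical.
    import json
    from datetime import datetime

    REQUIRED_FIELDS = {"trade_id", "timestamp", "amount", "currency"}

    def is_clean(record: dict) -> bool:
        """Reject records that would pollute the curated dataset."""
        if not REQUIRED_FIELDS <= record.keys():
            return False
        try:
            datetime.fromisoformat(record["timestamp"])  # parseable timestamp
            float(record["amount"])                      # numeric amount
        except (ValueError, TypeError):
            return False
        return record["currency"] in {"GBP", "EUR", "USD"}

    def infuse(lines, good_out, quarantine_out):
        """Split an incoming feed into clean records and a quarantine file."""
        for line in lines:
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                quarantine_out.write(line)  # unparseable – quarantine it
                continue
            (good_out if is_clean(record) else quarantine_out).write(line)

The point is less the code than the principle: ownership and validation happen at the point of ingestion, so the “good” data stays good as it proliferates.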

A top five UK bank then gave an interesting presentation on the value of getting your data architecture right and how it can jump-start new projects, reduce reliance on subject matter experts, and improve knowledge sharing and business analysis. There were some compelling metrics:

  • Every pound spent documenting data architecture saves five pounds in the future
  • Every pound spent doing it right first time avoids three pounds in future remediation
  • Every pound spent creating centralized data saves twenty pounds in future investment
  • Every pound spent avoiding replication saves a pound building reconciliation

Creating good data architecture should be a no-brainer, so why don’t people do it? Short-term focus and pressure to deliver quickly. Get your data architecture right, though, and there is a real impact on the bottom line.

After lunch, Splunk’s very own Steve Gailey (@StephenGailey) presented on “Operational Risk and Big Data”. Steve joined Splunk from Barclays, where he was in charge of group security. After a brief introduction to Splunk, real-time machine data and Hunk – our new product for Hadoop – Steve went through a number of real Splunk customer examples in FS.

These included:

  • A leading global equities and commodities exchange giving insight across large amounts of data from all of their applications and engines
  • End-to-end visibility of real-time machine data giving an FS organization $6m in annual ROI, with Splunk paying for itself in the first month
  • Customers using Splunk for Dodd-Frank compliance in the areas of auditing, reporting, surveillance and proactive alerting
  • Detecting unauthorised trading and proving the effectiveness of controls
  • Joiners, Movers and Leavers (JML) checks, demonstrating that account privileges comply with regulatory mandates (a toy sketch follows below)
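
That last JML point is essentially a set-difference check, and it’s a nice example of how simple the logic can be once the data is in one place. A toy illustration in Python (usernames and data sources invented for the example):

    # Toy JML (Joiners, Movers, Leavers) control: any account still active
    # after its owner has left the HR "current staff" list is a potential
    # compliance finding. All names here are made up.
    def jml_violations(active_accounts: set, current_staff: set) -> set:
        """Accounts whose owners are no longer employed."""
        return active_accounts - current_staff

    # 'cjones' has left the bank but the account is still enabled:
    print(jml_violations({"asmith", "cjones"}, {"asmith", "bpatel"}))
    # -> {'cjones'}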

Steve ended with an overview of how he used Splunk at Barclays. Two great Splunk use cases came out. Firstly – Barclays built a mobile payments app called Pingit. Splunk allowed customer queries to be answered instantly based on real-time machine data from the micropayments.
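
For a flavour of what “answered instantly from machine data” can look like, here’s a minimal sketch using Splunk’s Python SDK (splunklib). The host, credentials, index and field names are invented for illustration and bear no relation to Barclays’ actual setup.

    # Sketch: answering "did my payment go through?" from real-time
    # machine data via the Splunk Python SDK. The connection details,
    # the 'payments' index and its fields are illustrative assumptions.
    import splunklib.client as client
    import splunklib.results as results

    service = client.connect(
        host="splunk.example.com", port=8089,
        username="support_app", password="changeme",
    )

    # Oneshot search for the latest events for one payment reference
    query = ('search index=payments payment_ref="PAY-12345" '
             '| sort -_time | head 5 '
             '| table _time status amount recipient')
    for event in results.ResultsReader(service.jobs.oneshot(query)):
        if isinstance(event, dict):  # skip diagnostic messages
            print(event.get("_time"), event.get("status"), event.get("amount"))

In reality a lookup like this would sit behind a support tool or dashboard, but the principle is the same: the answer comes straight from the machine data.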

Secondly – Barclays collected 1.2 billion events a day from across the organization, none of which were owned by compliance, and weren’t making the most of this data. Using Splunk, Barclays were able to give this big data back to the business units as operational intelligence, with valuable insight and analytics.

It was a good event and one I’d recommend next year if you get the chance. I had some interesting conversations and I got a chance to stretch my legs with a demo of Splunk for FS (many thanks to my pre-sales colleague, Hash). You can see an example of the stock trade dashboard being generated by machine data below:

If you’re interested in big data or FS (or both), the event offered real examples of financial benefit and real use cases, and I’ve got a feeling that next year these stories will have gone from operational wins to innovative new ideas.

In my last post, I promised you more about the Splunk London TARDIS meeting room. You can see the interior of the meeting room below, with a working, flashing central console together with dials and levers for controlling the screens. I’m yet to have a meeting in there, but the time travel is going well as I’m writing this blog from the future…

Thanks for reading – see you next time.

Posted by

Matt Davies

Matt is Splunk's VP, Customer Marketing (and part-time Chief Colouring-In Officer). He's working closely with Splunk customers to help them understand the value that new insights from machine data can deliver to their business. Matt is also one of Splunk's technical evangelists and communicates Splunk's go-to-market strategy in the region. Previously Matt has worked at Cordys, Oracle/BEA, Elata, Broadquay Consulting, iPlanet/Sun, Netscape and IBM. With nearly 25 years in the software industry, Matt has extensive knowledge of enterprise IT systems.
