Artificial Intelligence and Public Trust

Digital Summit 2019 in London

As people who know me are aware, I am NOT a fan of the term Artificial Intelligence (AI). So I thought I’d kick off my very first Splunk blog with that exact phrase.

Why would I betray my own beliefs so readily? Well, mostly because Splunk was privileged to be invited along to the 2019 Digital Summit in London last month to speak about data and public trust, where AI was a big topic on the day.

The Digital Summit is intended to give digital leaders in government from across the world a chance to collectively discuss some of their biggest challenges. It was a fascinating day, and if you don’t know about Estonia’s digital ID system I’d suggest you check it out (starting with this article) – they are genuinely leading the way in providing government digital services.

While Estonia has done an amazing job maintaining public trust through a period of massive transformation, conversations around AI and public trust are much more challenging.

AI = Augmented Intelligence

I prefer to think of AI as Augmented Intelligence (as surely no machine can ever replace a brain), but trust is particularly elusive when AI gets mentioned. That trust probably hasn’t been helped by any number of sci-fi films and books (has anyone ever read anything by Philip K. Dick?) or videos like this one of two Google Home Hubs having a bizarre – and at times disturbing – conversation with each other…

Clearly, there is a long list of reasons why trust is challenging when it comes to AI, but ultimately it comes down to the transparency and explainability of any type of analytic that involves advanced statistics. Even those who really understand the detailed statistics used in machine learning and deep learning would mostly admit that statistics is pretty boring… Though studying stats certainly offers better career prospects to a budding mathematician than studying pure maths ever could.

The Impact of Dark Data

Two things we hear a lot about at Splunk that have a big impact on the application of AI techniques are dark data and the half-life of data. These were some of the topics we spoke about at the Digital Summit.

When it comes to dark data, we are talking about all the unknown and untapped data across an organisation, generated by systems, devices and interactions. The big questions here are:

  • How can you draw accurate conclusions if you don’t have full visibility of your data assets?
  • If the full context of your data is not available, can you be sure that what you are making available to the public or your customers isn’t introducing bias or harm?

Added to this is the fact that many organisations hold multiple copies of the same data, which are usually out of sync. Keeping track of which data source is the most accurate and up to date is a challenge in almost every organisation I have worked in.

Data Half-Life

Talking of keeping data up to date, what about the half-life of data? This is the idea that the value of data decreases the older it gets. At a company like Splunk, where almost all of our customers use us for operational analytics, half-life is extremely important. Consider a malware outbreak: the first alert that fires is extremely valuable until the malware spreads, at which point that piece of information has much less use.
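As a toy illustration (not a Splunk feature, and the numbers are entirely hypothetical), the half-life idea can be sketched as exponential decay: a data point’s analytical value halves every fixed interval after it is generated.

```python
def data_value(initial_value: float, age_hours: float, half_life_hours: float) -> float:
    """Value of a piece of data after it has aged, assuming exponential decay
    with a fixed half-life (a simplifying assumption for illustration only)."""
    return initial_value * 0.5 ** (age_hours / half_life_hours)

# Suppose a malware alert scored 100 when it first fired, and its value
# halves every 2 hours as the outbreak spreads:
print(data_value(100, 0, 2))  # fresh alert: 100.0
print(data_value(100, 2, 2))  # one half-life later: 50.0
print(data_value(100, 8, 2))  # four half-lives later: 6.25
```

The shape of the curve (and whether decay is exponential at all) will differ by use case; the point is simply that analytics run hours after the fact act on data whose value has already decayed.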

The main takeaways here are:

  • When it comes to applying AI, if you don’t have the full context of the data (i.e. you have a lot of dark data) then you may well be introducing bias. 
  • Additionally, if you aren’t running analytics over data in a timely fashion then the conclusions that are drawn may not be as useful as they could have been.

To help raise awareness of these issues, Splunk is currently working with the World Economic Forum (WEF) on a number of initiatives around AI. One of these is to help produce guidance on how governments should procure AI platforms. This guidance is intended to address key questions such as intended use, accuracy of data, fairness and transparency of algorithm-based decision flows, data security, and effectiveness of the AI solutions.

Over the next month or so we will be helping the WEF in the UK by getting involved in a number of workshops to pilot the guidance that we helped draft, and I am hoping they will be as enjoyable and thought-provoking as the Digital Summit.

Stand by for updates on these sessions, along with some interesting news about new platform capabilities (see here for a cryptic clue).

Until next time, 


Greg is a recovering mathematician and part of the technical advisory team at Splunk, specialising in how to get value from machine learning and advanced analytics. Previously the product manager for Splunk’s Machine Learning Toolkit (MLTK) he helped set the strategy for machine learning in the core Splunk platform. A particular career highlight was partnering with the World Economic Forum to provide subject matter expertise on the AI Procurement in a Box project.

Before working at Splunk he spent a number of years with Deloitte and prior to that BAE Systems Detica working as a data scientist. Ahead of getting a proper job he spent way too long at university collecting degrees in maths including a PhD on “Mathematical Analysis of PWM Processes”.

When he is not at work he is usually herding his three young lads around while thinking that work is significantly more relaxing than being at home…
