AI is a hot topic. ChatGPT, the popular AI chatbot, broke records for user adoption, reaching one million users in just five days. AI adoption has doubled in the last five years, giving rise to more calls for AI ethics and governance.
There’s no denying that AI tools emulate human behavior exceptionally well on tasks that can be represented systematically: repetitive, predictable tasks that follow a fixed workflow. The risk rises sharply when machine intelligence that lacks human cognitive abilities, such as emotional nuance and a risk-averse inclination, is used to perform complex tasks.
Let’s take a look at what happens when business and AI meet. It’s not all bad news, but adopting AI in an ethical and sustainable way takes a lot of foresight and consideration.
Consider the learning approach used by modern machine (deep) learning algorithms. The trained model is effectively a black-box system: it learns trends and patterns in data, characterizing the relationships between the given inputs and the corresponding system behavior. Once trained, the model can approximate the system’s behavior for new data inputs.
As a simple example, if you train a computer vision model on correctly labeled images of cats, it will learn to classify new cat images with high accuracy. But how can you explain this behavior? Black-box models are inherently unexplainable.
While an AI model can classify data patterns correctly, the process may not be interpretable or understandable. After all, an AI model is, in simple terms, a set of mathematical equations that approximately represents the relationship or behavior of a system.
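To make the black-box idea concrete, here is a minimal sketch in plain Python: a tiny logistic-regression classifier trained on synthetic two-feature data. The data, learning rate and loop counts are illustrative assumptions, not a real workload. The fitted model predicts well, yet its learned weights are just numbers that carry no human-readable explanation.

```python
import math
import random

# Hypothetical sketch: a tiny logistic-regression "model" trained on
# synthetic two-feature data. Data and hyperparameters are made up for
# illustration. The trained weights approximate the input/label
# relationship well, yet the numbers themselves explain nothing.
random.seed(0)

# Synthetic labels: 1 when the two features sum above 1.0
points = [(random.random(), random.random()) for _ in range(200)]
labeled = [((x1, x2), 1 if x1 + x2 > 1.0 else 0) for x1, x2 in points]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w1 = w2 = b = 0.0
lr = 0.5
for _ in range(500):                                # plain stochastic gradient descent
    for (x1, x2), y in labeled:
        grad = sigmoid(w1 * x1 + w2 * x2 + b) - y   # gradient of log loss w.r.t. the logit
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

accuracy = sum(
    (sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == (y == 1)
    for (x1, x2), y in labeled
) / len(labeled)
print(f"accuracy={accuracy:.2f}  weights=({w1:.1f}, {w2:.1f}, {b:.1f})")
```

The model reaches high accuracy, but asking "why did it label this point 1?" only gets you back three raw numbers — the same opacity, at a vastly larger scale, is what makes deep learning systems hard to explain.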
Using artificial intelligence in a business setting is a lot more complicated than accurately classifying cat images. If you’re relying on black-box outputs and outcomes for your business operations, you can’t explain, defend or justify those operations.
Another key element of AI ethics and governance goes beyond the technology itself: it focuses on business leaders and the workforce.
For any organization aiming to replace or augment the human workforce in solving complex business problems, AI ethics and governance must be operationalized.
How to operationalize AI ethics is an important question facing business leaders who are inspired by the recent progress of AI technologies but skeptical about the risks of an AI going rogue, or of simply not being sufficiently human-like.
Most businesses start with overarching PR statements, ranging from “we will never sell your data” to “user safety is our priority” and “our tools are designed to serve all customers equally, free of discrimination”. But to a black-box AI system making the decisions, the concepts of safety and ethics may not hold the same value unless the system is specifically trained for them.
To address these limitations, you can develop an operationalized and sustainable AI ethics and governance program built on these principles.
Start by measuring your AI progress. Put a quantifiable number on the scale of impact of transitioning to an AI-first strategy, and perform a qualitative analysis of how that transition affects your regulatory compliance and ethical responsibility toward society.
Model and forecast AI progress as you scale your business, grow your user base and adopt AI tools for operational tasks previously conducted by a human workforce.
Is it safe to simply replace your workforce with an AI tool? Consider the applicable compliance regulations—will you still meet existing industry compliance requirements if a human is not involved?
Think about augmenting a human workforce with AI tools, gathering real-world data on safety metrics and gradually expanding the scope of your AI adoption.
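As a concrete illustration of that gradual approach, the sketch below computes two simple safety metrics from a hypothetical decision log: the share of decisions made by AI, and the share of AI decisions a human reviewer overrode. The field names, the log and the metrics themselves are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass

# Hypothetical sketch: tracking simple safety metrics while gradually
# expanding AI scope. Field names and example records are illustrative
# assumptions, not a standard.
@dataclass
class Decision:
    made_by_ai: bool
    overridden_by_human: bool   # a human reviewer reversed the outcome

def safety_metrics(decisions):
    ai = [d for d in decisions if d.made_by_ai]
    automation_rate = len(ai) / len(decisions)
    override_rate = (
        sum(d.overridden_by_human for d in ai) / len(ai) if ai else 0.0
    )
    return automation_rate, override_rate

# Example log: 4 decisions, 3 automated, 1 of them reversed on review
log = [
    Decision(made_by_ai=True,  overridden_by_human=False),
    Decision(made_by_ai=True,  overridden_by_human=True),
    Decision(made_by_ai=False, overridden_by_human=False),
    Decision(made_by_ai=True,  overridden_by_human=False),
]
auto_rate, override_rate = safety_metrics(log)
print(f"automation={auto_rate:.0%}  human overrides={override_rate:.0%}")
```

A rising override rate as automation expands is real-world evidence to pause the rollout rather than replace more of the workforce.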
Explore the ethical aspects of AI adoption for your organization.
Define what AI ethics entails for your organization, and establish a process that specifically vets for these limitations and the unexplainable outputs of AI systems.
The healthcare industry is a prime example: it has driven automation across ethically sensitive aspects of its operations while governing the use of data and automation from an end-user privacy perspective. Create an ethical framework that articulates these standards and measures the ongoing effectiveness of your quality assurance and risk mitigation programs.
Every organization faces different challenges when it comes to AI ethics. Identify the KPIs and metrics most relevant to your industry, organizational culture and user base. A robust framework clearly outlines how your data pipeline – from data acquisition, to integration with third-party AI tools, to the outputs your AI algorithms produce – should account for deviations and anomalies that constitute an ethical risk.
Creating the right vision requires a systematic approach to dealing with the problem of AI ethics and governance, ongoing training and education, and executive support to govern the scope of AI adoption.
This posting does not necessarily represent Splunk's position, strategies or opinion.