AI is a hot topic! ChatGPT, the popular AI chatbot, broke records for user adoption, reaching one million users in just five days. AI adoption has doubled in the last five years, giving rise to growing calls for AI ethics and governance.
There’s no denying that AI tools emulate human behavior exceptionally well on tasks that can be represented systematically: the repetitive, predictable tasks that follow a fixed workflow. The risk rises sharply when machine intelligence without human cognitive ability — emotional nuance and a risk-averse inclination — is used to perform complex tasks.
Let’s take a look at what happens when business and AI meet. It’s not all bad news, but it does take a lot of foresight and consideration to adopt AI in an ethical and sustainable way.
AI in business
Consider the learning approach used by modern machine learning (and deep learning) algorithms. The trained model is a black-box system that learns trends and patterns in data: it characterizes the relationship between the given input data and the corresponding system behavior. Once trained, the model can approximate the system’s behavior for new data inputs.
As a simple example, if you train a computer vision model on correctly labeled images of cats, it will learn to classify new cat images with high accuracy. But how can you explain this behavior? Black-box models are inherently unexplainable.
While an AI model may classify data patterns correctly, the process is not necessarily interpretable or understandable. After all, an AI model is, in simple terms, a set of mathematical equations that can approximately represent the relationship or behavior of a system.
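To make the "set of mathematical equations" point concrete, here is a minimal, hypothetical sketch: a tiny logistic-regression classifier trained on invented toy data. It classifies unseen points accurately, yet its entire "explanation" is three learned numbers — which is the black-box problem in miniature. All data and parameters here are made up for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: points above the line y = x are class 1, below are class 0.
data = []
for _ in range(200):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    data.append((x, y, 1 if y > x else 0))

# Train a tiny "black box" with stochastic gradient descent.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5
for _ in range(500):
    for x, y, label in data:
        p = sigmoid(w1 * x + w2 * y + b)
        err = p - label
        w1 -= lr * err * x
        w2 -= lr * err * y
        b -= lr * err

def predict(x, y):
    return 1 if sigmoid(w1 * x + w2 * y + b) > 0.5 else 0

# The model classifies unseen points accurately...
test = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]
accuracy = sum(predict(x, y) == (1 if y > x else 0) for x, y in test) / len(test)
print(f"accuracy: {accuracy:.2f}")

# ...yet its "explanation" is just three opaque learned numbers.
print(f"learned parameters: w1={w1:.2f}, w2={w2:.2f}, b={b:.2f}")
```

A real production model has millions or billions of such parameters, which is why inspecting the weights tells you almost nothing about why a particular decision was made.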
Using artificial intelligence in a business setting is a lot more complicated than accurately classifying cat images. If you’re relying on black-box outputs and outcomes for your business operations, you can’t explain, defend or justify those operations.
Another key element of AI ethics and governance goes beyond the technology itself. It is focused on business leaders and the workforce, particularly:
- How they envision the use of AI in solving sensitive business problems.
- How their use of advanced AI technologies could violate the ethical standards that uphold their brand’s reputation and their customers’ loyalty.
For any organization aiming to replace or augment the human workforce in solving complex business problems, AI ethics and governance must be operationalized.
(Imagine what generative AI, like ChatGPT, means for cybersecurity: it's risk and reward.)
How do you operationalize AI ethics and governance?
This is an important question facing business leaders who are inspired by the recent progress of AI technologies but also skeptical about the risk implications of an AI going rogue — or not being sufficiently human-like.
Most businesses start with overarching PR statements that range from “we will never sell your data” to “user safety is our priority” and “our tools are designed to serve all customers equally, free of discrimination”. But to a black-box AI system making the decisions, the concepts of safety and ethics may not hold the same value unless the system is specifically trained for them.
Building an operational, sustainable, ethical AI model
To address these limitations, you can develop an operationalized and sustainable AI ethics and governance program built on these principles.
1. Measure and interpret your AI transformation
Start by measuring your AI progress. Put a quantifiable number on the scale of impact of transitioning to an AI-first strategy. Perform a qualitative analysis of how that transition affects your regulatory compliance and your ethical responsibility toward society.
Model and forecast AI progress as you scale your business, grow your user base and adopt AI tools for operational tasks previously conducted by a human workforce.
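As a hypothetical sketch of what "put a quantifiable number on it" and "model and forecast AI progress" could look like in practice, the snippet below tracks one invented metric — the share of operational tasks handled by AI each quarter — and projects it forward with a naive least-squares line. The metric, the figures and the function names are all assumptions for illustration, not a prescribed methodology.

```python
# Hypothetical adoption history: quarter index -> fraction of
# operational tasks performed by AI tools (invented numbers).
adoption = {0: 0.05, 1: 0.09, 2: 0.14, 3: 0.18}

def linear_forecast(history, future_step):
    """Fit a least-squares line through (quarter, share) points
    and evaluate it at a future quarter."""
    xs, ys = list(history.keys()), list(history.values())
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope * future_step + intercept

# Forecast adoption two quarters out; cap at 100% of tasks.
projected = min(linear_forecast(adoption, 5), 1.0)
print(f"projected AI task share in quarter 5: {projected:.0%}")
```

Even a crude trend line like this gives leadership a concrete number to attach compliance and ethics reviews to, rather than a vague sense that "AI use is growing."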
2. Understand the problem of AI safety
Is it safe to simply replace your workforce with an AI tool? Consider the applicable compliance regulations—will you still meet existing industry compliance requirements if a human is not involved?
Think about augmenting your human workforce with AI tools, gather real-world data on safety metrics and gradually expand the scope of your AI adoption.
3. Understand the problem of AI ethics
Explore the ethical aspects of AI adoption, including end-user privacy.
Define what AI ethics entails for your organization, and establish a process that specifically vets for these limitations and for the unexplainable outputs of the AI system.
4. Build on existing programs
The healthcare industry is a prime example of driving automation across ethically sensitive aspects of its operations; it has long focused on governing the use of data and automation from an end-user privacy perspective. Create an ethical framework that articulates these standards and measures the ongoing effectiveness of your quality assurance and risk mitigation programs.
5. Create a governance framework tailored to your needs
Every organization faces different challenges when it comes to AI ethics. Identify the KPIs and metrics most relevant to your industry, organizational culture and user base. A robust framework clearly outlines how your data pipeline – from data acquisition to integration with third-party AI tools and the output of your AI algorithms – should account for deviations and anomalies that constitute an ethical risk.
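One simple, hypothetical way a pipeline can "account for deviations and anomalies" is a statistical drift check on a monitored KPI. The sketch below flags a day when an invented metric — the approval rate of an AI-assisted decision — moves more than three standard deviations from its historical mean. The metric, numbers and threshold are illustrative assumptions; a real program would choose KPIs and thresholds per the framework above.

```python
import statistics

# Hypothetical history: daily approval rates of an AI-assisted decision.
historical_approval_rates = [0.71, 0.69, 0.72, 0.70, 0.68, 0.71, 0.70]

def is_anomalous(todays_rate, history, threshold=3.0):
    """Flag a rate that drifts more than `threshold` standard
    deviations from the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(todays_rate - mean) > threshold * stdev

print(is_anomalous(0.70, historical_approval_rates))  # a typical day
print(is_anomalous(0.45, historical_approval_rates))  # sharp drop: escalate for human review
```

A flagged anomaly would not prove an ethical failure by itself, but it gives the governance process a concrete trigger for human review instead of relying on someone noticing a problem after the fact.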
AI is a cultural shift, too
Creating the right vision requires a systematic approach to dealing with the problem of AI ethics and governance, ongoing training and education, and executive support to govern the scope of AI adoption.
This posting does not necessarily represent Splunk's position, strategies or opinion.