Currently, only a handful of U.S. states have signed consumer data privacy bills into law. There is still no overarching federal regulation that guarantees all U.S. citizens consistent protection and control over their personal data. That will soon change.
My colleagues and I recently predicted that AI will be the burning platform that triggers an eruption of regulatory change around the world, particularly where data privacy is concerned. Because of this, many established companies will be unable to (or will choose not to) provide their services in certain regions. How could that impact your organization’s business strategy next year?
It’s no secret that there’s a laundry list of concerns when it comes to generative AI. Perhaps the most pervasive is how a user’s personal data, and even an organization’s proprietary data, is collected and used to train the large language models (LLMs) that power generative AI platforms. Before implementing AI-driven tools, organizations must establish clear practices for what data they feed into an LLM. It’s also up to governments to ensure the safety of their citizens' data through effective regulation. But it won’t be easy.
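As one concrete illustration of such a practice, here is a minimal Python sketch that scrubs a few obvious categories of PII from a prompt before it is sent to any external LLM. The redact_pii helper and the regex patterns are illustrative assumptions, not part of any Splunk product and not a complete solution; real deployments would rely on purpose-built detection and governance tooling.

```python
import re

# Illustrative patterns for common PII categories; a production system
# would use dedicated detection tooling and cover far more cases.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII with placeholder tokens before the text
    leaves the organization for an external LLM endpoint."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarize this ticket: Jane Doe (jane.doe@example.com, "
              "555-867-5309) reports a login failure.")
    print(redact_pii(prompt))
    # -> Summarize this ticket: Jane Doe ([EMAIL REDACTED],
    #    [PHONE REDACTED]) reports a login failure.
```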
Recently, the Biden Administration issued a milestone Executive Order establishing new AI safety, security and privacy standards, calling on Congress “to pass bipartisan data privacy legislation to protect all Americans” in response to global concerns.
However, Europe will be the first to enact overarching legislation, following the precedent it set with the EU’s General Data Protection Regulation (GDPR), which went into effect in 2018. In fact, specific legislation around AI and the use of personal data, such as the EU AI Act, is already being developed in Europe, and it has caused some companies to consider not launching certain services there.
Meanwhile, the Australian government is also prioritizing its citizens’ data rights in the wake of the 2022 Optus data breach, which compromised the personally identifiable information (PII) of more than a third of the country’s population. In Japan, too, data privacy has become a national concern. Around the world, the urgency to regulate AI and the use of personal data has only intensified, leading to a confusing patchwork of proposed laws.
The AI boom has highlighted that many governments don’t yet have the right foundations in place to enact meaningful regulation. The fear of AI, coupled with this lack of foundational regulation, has caused some governments to ban certain generative AI platforms. We expect this trend to continue, creating headaches and confusion for companies worldwide.
The solution? At this early stage, it’s all about education. While regulators will be forced to act swiftly to ensure the protection of their citizens' data, responsibility also rests on technology companies to guarantee proper controls over how user data is collected and used, and to continue advocating for the overall good their services will ultimately provide.
To find out what other predictions my colleagues and I made for next year, read Splunk’s 2024 Executive Predictions.