Mike Drew, Partner and Global Practice Lead, Technology & IT Services Practice, explains why CEOs should welcome regulation for generative AI with open arms.
In a recent interview with Fox News, Elon Musk argued strongly for AI regulation: “It’s not fun to be regulated. But we should take this seriously and have a regulatory agency; a group that initially seeks insight into AI and then solicits opinion from industry and then has proposed rulemaking.”
His remarks follow the explosion of generative AI, in particular ChatGPT, which was made public late last year, and what may become a business gold rush to adopt the technology.
Already, the release of ChatGPT is spurring companies and start-ups to build chatbots and add other AI-powered features to their applications.
It’s not hard to see why: in an age of disruption, no one wants to be caught behind the curve.
There is plenty of potential, and there are real use cases for generative AI in business, both broadly and within the C-suite: brainstorming ideas, developing new market segments, targeting new customers, switching business models, and making use of existing enterprise data.
The technology even has the potential to augment the C-suite itself, with generative AI informing decision-making and strategic choices.
However, even in the short amount of time generative AI has been on the market, two clear problems have arisen: bias and a lack of trust.
Discrimination and bias are almost always baked into algorithms. ChatGPT, for example, has been found to reflect political and cultural sentiment rather than offer neutral analysis. The datasets behind these large language models are often filled with conflicting evidence and volatile sentiment. For leaders to use such technology in high-level decision-making, it would therefore need to be vetted and scrubbed of discriminatory patterns.
This is where businesses face the second challenge. Seeing what the competition is doing, some leaders may feel they should follow suit and implement generative AI. But without properly investigating the technology and understanding its limitations, they risk a misstep that could create a significant trust problem, both internally among employees and externally among customers.
What’s more, there are serious concerns around data privacy and copyright. At the time of writing, the three largest generative AI image generators are facing a class-action lawsuit in the US for copyright infringement. If these companies are found to be in breach of the law, public sentiment and trust around business use of AI is likely to decline. The problem would only be exacerbated if leaders rushed into using ‘black box’ technologies: AI-powered algorithms whose methodology is hidden.
To say the current AI environment is chaotic would be an understatement. The need for regulation is paramount if businesses are to use the technology safely, ethically, and without risking their reputations. Regulators in the EU are already beginning to recognize this: the European Commission has set out proposals for companies deploying generative AI tools to disclose any copyrighted material used in developing their AI systems.
The initial steps seem to be focused on transparency around generative AI use, but further guardrails are needed to address ethical dilemmas and deeply ingrained biases.
It seems, though, that the technology is moving faster than most regulators can keep up with, and for now such laws are being passed only in the EU.
In the meantime, there are steps leaders can take to mitigate the severe risks in applying generative AI. First, it is crucial that business leaders understand the fundamentals of generative AI and how the technology can inform decision-making processes. Ensuring your Chief Data Officer, Chief Information Officer, or Chief Technology Officer has the necessary AI competence to recognize both the opportunities and threats will be essential. On the front line of this technology, and with the most experience of technology regulation, these individuals can help guide CEOs and boards on generative AI use and best practice.
From here, leaders can develop their own policies and guidance; like other mandates, these should be filtered down through the organization.
As Ray Eitel-Porter, Managing Director – Applied Intelligence and Global Lead for Responsible AI at Accenture, comments: “We have seen that organizations which already had a responsible AI foundation in place have been able to enhance it to account for the new risks of generative AI, instead of struggling to quickly build guardrails from scratch”.
The safest approach appears to be incorporating generative AI one system at a time, at small scale.
This means starting with simple tasks, automating them for productivity gains, and using that experience to build an understanding of the technology’s capabilities and limitations.
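To make that concrete, here is a minimal sketch of what such a first, small-scale automation might look like: a short script that asks a hosted model to summarize customer feedback, with a human reviewing the output. The choice of the OpenAI Python SDK, the model name, and the prompt are assumptions made purely for illustration, not a recommendation of any particular vendor or implementation.

```python
# A minimal sketch of a first, low-risk automation: summarizing customer
# feedback with a hosted generative AI model, keeping a human in the loop.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

feedback = [
    "The onboarding flow was confusing, but support resolved it quickly.",
    "Great product overall; the pricing tiers could be clearer.",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model would do
    messages=[
        {"role": "system",
         "content": "Summarize the customer feedback below in three bullet points."},
        {"role": "user", "content": "\n".join(feedback)},
    ],
)

# Review before acting: the output may be biased, incomplete, or wrong.
print(response.choices[0].message.content)
```

Confining the technology to a contained task like this, where every output is reviewed before anyone acts on it, surfaces its limitations (bias, inaccuracy, confident errors) long before it is allowed anywhere near high-stakes decisions.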
Generative AI adoption will almost certainly skyrocket, but for it to be used safely and effectively, regulation is essential. In the meantime, leaders will need to learn, adapt, and understand the risks.
With thanks to Ray Eitel-Porter, Managing Director – Applied Intelligence and Global Lead for Responsible AI at Accenture. Read Ray's research report 'From AI compliance to competitive advantage' here.
To learn more about generative AI and how it can impact your organization, please contact our Technology & IT Services Practice directly or get in touch with us here. You can also find your local Odgers Berndtson contact here.
Stay up to date: Sign up here for our global newsletter OBSERVE, and receive the latest news in leadership and top talent, industry insights, and events directly to your inbox.