Throughout the technology revolution of the last 20 years, we have repeatedly seen technological change outpacing legal and regulatory frameworks. AI will be even faster, while also presenting unprecedented ethical leadership considerations.
Odgers Interim’s recent AI Launch event highlighted the revolutionary impact AI will have, the risks it poses for business, and the psychological and societal implications of using the technology. The legislation needed to keep pace with the technology was also a critical talking point.
The EU AI Act, expected to enter into force by May and be phased in over 24 months, will set the global standard by which AI ethics are collectively measured. The UK and US are developing their own local standards, but for now the EU's law is the furthest-reaching mechanism – and the one business leaders should pay serious attention to.
The specific structure and allocation of responsibility for AI ethics is likely to vary depending on an organization’s size, industry, regulatory environment, and internal policies. Regardless of these organizational differences, however, there are a number of ethical considerations all business leaders will need to address in AI development and deployment.
At this stage in the technology's development, responsibility for AI ethics falls to a range of leaders beyond the technology function alone. AI is becoming pervasive: its applications run through almost every aspect of a business and are increasingly critical to long-term strategy. Those currently responsible for AI include the Chief Executive Officer, Chief Financial Officer, Chief Risk Officer, General Counsel, Chief Compliance Officer, and Data Protection Officer.
Over the next 24 months, the legislative landscape will become more clearly defined. Below, we examine how the EU AI Act is likely to take shape, and its implications for how boards and C-suite leaders manage corporate governance.
Increased Accountability
Companies deploying AI systems will need to ensure accountability for their AI technologies. This includes transparency in decision-making processes and responsibility for any potential harm caused by AI systems.
These systems cannot be "black boxes" where decisions are made opaquely without clear explanations. For leaders, this will mean tracking how decisions are made, the data used by AI systems, and the potential biases that could impact fairness and privacy.
Risk Management
Senior leaders will need to incorporate AI risk management into their governance frameworks in order to assess and mitigate potential risks associated with AI systems, such as bias, discrimination, and privacy violations.
This means conducting thorough evaluations of AI algorithms, data sets, and decision-making processes to identify potential biases or ethical concerns before these systems are embedded. Moreover, it will require leaders to implement ongoing monitoring to ensure AI systems continue to operate within ethical boundaries as they learn and evolve.
Compliance Requirements
The EU AI Act will introduce new compliance requirements for organizations. These include mandatory impact assessments for high-risk AI systems, registration with authorities, and adherence to specific technical standards.
Leaders will need to establish robust governance structures to oversee compliance efforts. This might mean appointing dedicated teams or officers responsible for ensuring AI systems meet all legal and regulatory standards.
Board Oversight
We expect boards to play a more significant role in overseeing AI strategies within their organizations. They will be responsible for ensuring alignment with corporate values, ethical standards, and legal requirements.
Legal Liability
Companies may face increased legal liability for AI-related incidents. As a result, it may be necessary for leaders to establish mechanisms for addressing legal challenges arising from AI use, including liability insurance and dispute resolution procedures.
Navigating AI insurance involves understanding the unique risks associated with AI applications, from data breaches and privacy violations to faulty decision-making that could cause financial loss.
Demand for AI-Responsible Leadership
Over the next two years, leaders responsible for AI will need to expand their knowledge of AI corporate governance. We anticipate growing demand for leaders who have implemented policies addressing AI risk, compliance, and liability, and who possess the agility to adapt to new AI legal requirements. There is likely to be noticeable demand for board and C-suite leaders who can establish a moral framework through which to navigate AI – a critical skill, given no precedent exists.
While AI is expected to streamline tasks, reduce workloads, and remove the need for some roles entirely, it will require some leaders to expand their remit, and create new positions to manage AI implementation. We’ll be working closely with organizations globally, as AI redefines leadership and expands C-suite roles – helping them to stay ahead of this rapidly evolving technology.