Across the Odgers Berndtson group, we recently held a series of webinars examining the impact of AI. Our discussions with industry leaders explored what AI means for security, regulation and HR.
Our Technology leadership experts were joined by Ray Eitel-Porter, MD at Accenture, Mike Beck, Global CISO at Darktrace, Mikey Hoare, Director at Kekst CNC, Mark Brown, Global MD of the BSI, and Keith McNulty, Global Director of Talent Science and Analytics at McKinsey, to explore the opportunities and challenges of this fast-developing technology.
Thanks to its ease of use and accessibility, generative AI is affecting almost every company, across almost every sector. But while the business benefits are developing rapidly, particularly for the HR function, generative AI poses significant security and regulatory challenges. Inaccuracies in responses, privacy infringements, copyright conflicts, biases in training data, and turbocharged cyber security threats make generative AI a minefield to navigate, even as it becomes a business asset.
Cyber security
Every security team is thinking about generative AI and how to use and develop it in a security context. Much of the concern centers on inaccuracies in the information it provides, known as hallucinations, and on the data fed into learning models.
For organizations more broadly, the concern stems from employees using generative AI on their personal computers. Many organizations prohibit its use for work on data security grounds, yet know their employees use it at home to work faster and produce more. In fact, over 50% of senior executives don’t understand generative AI and discourage its use, but they know it’s being used in their organizations.
This forces a decision on leaders. Do they embrace the complexity of generative AI and develop their own guardrails for staff to work by, or remain blinkered and ignore what could become serious cyber security threats? While challenging, generative AI at its current stage of development offers leaders the opportunity to embrace it, make mistakes, and learn from them to derive long-term benefit.
Regulation
Regulating generative AI is a highly complex challenge. Any legislation must contend with multiple variables, including international jurisdictions, sector-specific regulation, and the speed at which the technology evolves. Even what to regulate is up for debate, given how broadly generative AI can be applied and how varied its outputs are.
For leaders, there is sense in setting some initial parameters for how their organizations use generative AI, and in communicating that use to employees, partners and clients. 62% of people expect their industries to be using AI every day by 2030, but most do not use it enough to trust it. Formal workplace rules about use and application can therefore help bridge the gap in understanding.
While the absence of any overarching generative AI standard or legislation is challenging for leaders, the open letter calling for a “pause on development” is promising. A genuine pause is unlikely, but the petition is a statement of intent from the tech industry - a communal agreement that guardrails are necessary to ensure AI progresses at a pace society is comfortable with.
HR
Generative AI, and AI in general, presents significant opportunities and challenges for HR professionals. Potential early use cases lie in employee engagement, workforce analysis and talent demand prediction.
Using AI, employers could track employee sentiment, analyze thousands of feedback comments, and use both to predict when teams might be struggling. By cross-referencing job descriptions with resumes, AI has the potential to identify and predict skills gaps and reskilling needs. What’s more, AI chatbots trained on a company’s HR policies will be able to triage employee HR inquiries and provide policy information, freeing HR professionals from these time-consuming tasks.
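To make the sentiment use case concrete, the sketch below shows how feedback comments might be scored with an off-the-shelf model. It is purely illustrative and not drawn from the webinars: the sample comments are invented, and it uses the open-source Hugging Face transformers library’s default sentiment pipeline as just one possible approach.

    # Minimal, illustrative sketch: scoring employee feedback for sentiment.
    # Assumes the open-source transformers library (pip install transformers).
    from transformers import pipeline

    # Hypothetical feedback comments; in practice these would come from
    # engagement surveys or internal tools.
    comments = [
        "I feel supported by my manager and the wider team.",
        "Workload has been unsustainable for the past two months.",
        "The new flexible-working policy has made a real difference.",
    ]

    # The default sentiment-analysis pipeline returns a label
    # (POSITIVE/NEGATIVE) and a confidence score for each comment.
    classifier = pipeline("sentiment-analysis")
    for comment, result in zip(comments, classifier(comments)):
        print(f"{result['label']:>8} ({result['score']:.2f})  {comment}")

Aggregated per team over time, scores like these could provide the early-warning signal described above, though accuracy, privacy and bias would all need careful governance.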
However, most organizations are a long way from this AI analytics capability. Poor data quality and governance, and a lack of centralization, mean most CHROs cannot quickly and readily take advantage of new AI developments. But for those with mature data management, AI will enable the HR function to move from an advisory position to business partner, and then to strategic talent partner. At the same time, as AI increasingly changes workforce composition, talent strategy and anticipating skills needs will become critical to business strategy. As a result, CHROs are likely to become far more common on boards.
Without guardrails or precedent, many leaders will need to take responsibility for the direction of AI in their organizations. Implementing policies and ensuring robust data governance will mean AI is more likely to become a friend, rather than a foe.
______________________________________________________
This article was authored in partnership with Andy Wright, Partner in the Technology Practice at Odgers Interim.
For more information or to discuss the impact of generative AI on your senior talent requirements, get in touch with our authors and follow the links below to discover more about our expertise.