Shara Roy, chief legal counsel at Ernst & Young LLP, is well aware of the challenges of integrating AI into business practices while balancing environmental, social, and governance (ESG) concerns. And she doesn’t hold back when discussing the limitations of grouping these issues together.
“In my mind, I always think of E, S, and G as being unfairly grouped together,” she tells Lexpert. “They’re the feel-good letters and have very little to do with each other. [However], if we tease them apart as business imperatives, we probably get more traction in economically difficult times.”
While some may perceive ESG as an easy target for critics, Roy makes it clear that the governance aspect, especially as it relates to AI, demands serious attention.
“In terms of governance, [businesses] are going to have to look at how they use AI and how they use trusted AI,” Roy says, underscoring the need for businesses to demonstrate responsibility. Using customer information to drive AI is not a trivial matter, and organizations must be able to assure stakeholders that AI is being integrated in a way that is both ethical and trustworthy.
Since joining EY Canada two years ago, Roy has been laser-focused on fostering an innovation mindset within the legal function, shifting the team from a reactive to a proactive stance.
“When I joined EY Canada, I inherited a very trusted group of lawyers... where we really decided to take the [team] on an innovation journey,” she explains. “The biggest innovation was mindset.” It took six months to establish this foundation, which then set the stage for adopting new tools and processes to make the team’s work more efficient and forward-thinking.
Tech and AI play a central role in this evolution. AI is a tool for eliminating tedium and allowing legal teams to engage more proactively with the business. Roy recalls futurist Edie Weiner’s comments at a recent event focused on the future of business: “It’s not that AI will replace humans, but humans who know, understand and work with AI will replace humans who don’t.”
This sentiment resonated with Roy, who sees AI as having reached a tipping point, with its adoption becoming inevitable. Yet, the risks are as significant as the potential, and it's clear that the focus must be on responsible and trusted AI.
At EY Canada, building trust in AI is an ongoing effort. Roy highlights the firm’s investment in AI, including a “confidence index” aimed at giving businesses assurance that their AI systems are functioning as intended.
“We’ve invested very heavily as a global firm in AI, including around a confidence index. So, if a business is using AI, how confident can they be that the AI tool is doing what it's supposed to do? That it doesn’t contain biases that are unhelpful or unknown?”
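EY’s confidence index is proprietary, and Roy doesn’t describe its internals, but one ingredient such an index would plausibly aggregate is a check for the kind of unhelpful or unknown bias she mentions. The minimal sketch below is purely illustrative (the function and data are hypothetical, not EY’s methodology): it measures a demographic parity gap, the difference in favourable-decision rates between groups, for a model’s outputs.

```python
# Hypothetical sketch: one ingredient a "confidence index" for an AI tool
# might aggregate -- a demographic parity gap between groups.
# Illustrative only; this is not EY's methodology.

from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return the largest difference in favourable-decision rates between groups.

    decisions: list of 0/1 model outcomes (1 = favourable, e.g. "advance candidate")
    groups:    list of group labels, same length as decisions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: a screening model that favours one group slightly more often.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(decisions, groups)
print(rates)               # per-group favourable-decision rates
print(f"gap = {gap:.2f}")  # the larger the gap, the lower the confidence
```

A real assurance exercise would combine many such signals across accuracy, robustness, and explainability, but even a single metric like this makes the question “is the tool doing what it’s supposed to do?” testable rather than rhetorical.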
In terms of industry adoption, Roy sees AI as universal, “across all industries,” though how it’s used varies. Financial services, healthcare, tech, and telecoms are obvious beneficiaries, but EY Canada’s work also extends to auditing clients who use AI. Roy says the challenge is twofold: auditing companies that have embedded AI in their processes, while also using AI internally to improve the audit itself.
“How can we audit that? How can we make sure that the processes are doing what they're supposed to do?” These are critical questions for any organization using AI, and EY Canada is tackling them head-on.
Bias in AI, however, is a persistent issue, and Roy doesn’t shy away from discussing its potential dangers.
“Bias... amplifies over time as generative AI cycles very quickly, and those biases become exponentially greater,” she warns. This is a serious concern, especially in relation to diversity, equity, and inclusion (DEI) efforts.
Drawing on past examples, Roy illustrates how even a slight imbalance in hiring can snowball into significant disparities over time, a process that AI could easily accelerate if not carefully governed. “That’s why we need to have that governance, that responsible AI at the ground level, as you’re going in,” she adds.
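The mechanics of that snowball are easy to show with a toy model. The sketch below uses invented numbers (it is not drawn from Roy’s examples or any EY data) and assumes a hiring model is retrained each cycle on its own previous hires, so any deviation from parity is multiplied by a fixed factor each cycle.

```python
# Toy simulation with invented parameters: a small skew in hiring, fed back
# into each retraining cycle, compounds into a large disparity over time.

def simulate_biased_hiring(cycles=10, initial_share_a=0.52, amplification=1.3):
    """Each cycle, the model is retrained on the previous cycle's hires,
    so the deviation from a 50/50 split grows by a fixed factor."""
    share_a = initial_share_a
    for cycle in range(1, cycles + 1):
        skew = (share_a - 0.5) * amplification  # bias grows with each retraining
        share_a = min(0.5 + skew, 1.0)
        print(f"cycle {cycle:2d}: share of hires going to group A = {share_a:.1%}")

simulate_biased_hiring()
```

Even starting from a 52/48 split, ten retraining cycles in this toy model push the favoured group’s share of hires past three quarters, which is the compounding dynamic Roy warns governance has to catch at the ground level.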
Roy does not underestimate the complexity of AI governance. She points out the challenges posed by varying global regulations on data protection and privacy, from the EU to the US and Canada.
“We're not here to slow down the progress of AI. We want to enable it. We want to accelerate it, but it needs to be done in the right way,” she says. “We need AI to enable us in our working world, as opposed to having AI be something that we can't control. The dangers are... data protection. What do we do with individuals' most sensitive data?”
For Roy and EY Canada, the future of AI is one of responsibility, trust, and careful governance. The firm’s leadership is committed to understanding AI at a deep level, with monthly executive committee meetings focused on AI learning and updates.
“[At] our executive committee meetings, we have at least monthly AI learning and updates that help us, as the executives of the firm, understand the broader implications of AI.”