In September 2024, Dentons hosted a client webinar on the use of artificial intelligence (AI) in Canadian capital markets, featuring panelists from the Alberta Securities Commission and Computershare. This article summarizes the key topics covered in the session.
Key applications of AI in capital markets
AI is broadly defined as systems capable of performing tasks that typically require human intelligence. The application of AI in capital markets has the potential to assist and transform various aspects of the industry, including:
- Risk analysis and management: AI tools analyze historical data and current events to assess market volatility, creditworthiness, and potential downturns.
- Sentiment analysis: Issuers use AI to gauge market sentiment by analyzing public opinions from social media and news sources, helping them understand investor behaviour (a simplified illustration appears at the end of this list).
- Price forecasting: AI attempts to predict future asset prices by analyzing large datasets, aiding issuers in pricing and structuring offerings.
- Portfolio management: For investors, AI automates portfolio management, taking individual risk tolerances and investment goals into account with the aim of optimizing returns.
- Algorithmic trading: Deep learning models enhance AI's ability to process data on stock movements and customer feedback, allowing for quicker, more informed trading decisions.
- Fraud detection and compliance: AI tools help detect market manipulation and support more effective compliance monitoring.
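To make the sentiment analysis application above more concrete, the following Python sketch scores headlines against fixed word lists. It is illustrative only: the word lists and headlines are hypothetical, and production systems typically rely on trained language models rather than simple keyword matching.

```python
# A minimal, illustrative lexicon-based sentiment scorer. The word lists and
# headlines below are hypothetical; real systems use trained language models.

POSITIVE = {"beat", "beats", "growth", "upgrade", "strong", "record", "outperform"}
NEGATIVE = {"miss", "downgrade", "weak", "lawsuit", "recall", "underperform"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]; positive values suggest a bullish tone."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

headlines = [
    "Issuer posts record quarterly growth and beats estimates",
    "Analysts downgrade stock after weak guidance and new lawsuit",
]
for headline in headlines:
    print(f"{sentiment_score(headline):+.2f}  {headline}")
```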
Applications of AI in customer experience and shareholder services
AI summarization tools and chatbots improve operational processes and elevate customer service quality, helping financial institutions deliver personalized support and maintain compliance. The greatest impact comes from integrating AI chatbots with other AI systems, creating a cohesive ecosystem that enhances operational efficiency and decision-making in the financial sector. These technologies improve efficiency in several ways:
- Enhanced response efficiency: Faster and more accurate responses to client inquiries.
- Personalized client interactions: Tailored communication based on insights gleaned from interactions.
- Trend analysis and insight generation: Identifying patterns and insights from client communications.
- Training and quality assurance: Providing a basis for evaluating and improving service quality.
- Reduced cognitive load: Summarizing communications eases the burden on relationship managers, allowing them to focus on strategic tasks.
AI advancing efficiency and driving revenue growth in capital markets
From a regulatory perspective, AI has significant potential to enhance efficiency and drive revenue growth in capital markets. Key use cases include:
- Compliance automation: AI can streamline regulatory processes like transaction monitoring and reporting, reducing errors and improving efficiency.
- Underwriting and risk assessment: AI models analyze large datasets, enhancing the accuracy and speed of underwriting, especially in insurance and credit.
- Predictive analytics in trading: AI is employed in algorithmic trading to analyze data, potentially improving trade timing and accuracy, although effectiveness relies on data quality and market conditions.
- Robo-advisors: AI-powered robo-advisors provide personalized investment advice, broadening access to financial services at lower costs for retail investors (see the simplified rebalancing sketch at the end of this section).
- Fraud detection: AI monitors transactions for anomalies, bolstering fraud detection and prevention by identifying suspicious activities in real time.
AI's ability to reduce operational costs and create personalized financial products can drive revenue growth. However, success depends on implementation quality and market adoption.
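As a concrete, simplified illustration of the robo-advisor use case above, the sketch below shows the basic rebalancing arithmetic such a platform might perform once a client's risk tolerance is known. The tickers, prices, and risk mapping are hypothetical; real platforms also account for taxes, fees, drift bands, and client-specific constraints.

```python
# A minimal sketch of robo-advisor-style rebalancing arithmetic.
# Tickers, prices, and the risk mapping are hypothetical placeholders.

def target_weights(risk_tolerance: float) -> dict:
    """Map a 0-1 risk tolerance to a hypothetical equity/bond split."""
    equity = round(0.3 + 0.6 * risk_tolerance, 2)  # 30%-90% in equities
    return {"EQUITY_ETF": equity, "BOND_ETF": round(1 - equity, 2)}

def rebalance_orders(holdings: dict, prices: dict, targets: dict) -> dict:
    """Return share quantities to buy (+) or sell (-) to reach target weights."""
    portfolio_value = sum(holdings[a] * prices[a] for a in holdings)
    return {a: round(targets[a] * portfolio_value / prices[a] - holdings[a], 2)
            for a in holdings}

holdings = {"EQUITY_ETF": 100, "BOND_ETF": 400}  # current share counts
prices = {"EQUITY_ETF": 50.0, "BOND_ETF": 20.0}  # hypothetical prices
targets = target_weights(risk_tolerance=0.7)     # a growth-oriented client
print(targets)
print(rebalance_orders(holdings, prices, targets))
```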
Regulatory considerations
- Bias and fairness: AI systems may unintentionally produce biased outcomes, necessitating ongoing monitoring and mitigation efforts.
- Transparency and accountability: Ensuring that AI systems, particularly complex deep learning models, are transparent and auditable is crucial, especially for decisions impacting clients.
- Systemic risk: High-frequency trading using AI may amplify market volatility, requiring firms to implement safeguards and monitoring systems to maintain stability.
Regulatory frameworks are evolving to address these challenges, with a focus on explainability and responsible AI use.
Current regulatory requirements
The regulation of AI in capital markets is at a pivotal moment, with rules struggling to keep pace with technological advancement.
Bill C-27, however, has been proposed and would enact the Artificial Intelligence and Data Act (AIDA), the first Canadian legislation specifically focused on AI. The bill seeks to ensure that AI systems are safe and non-discriminatory, that personal information is handled lawfully, and that businesses are held accountable for their use of AI.
On December 5, 2024, the Canadian Securities Administrators published Staff Notice and Consultation 11-348 Applicability of Canadian Securities Laws and the Use of Artificial Intelligence Systems in Capital Markets, intended to provide clarity and guidance on how securities legislation applies to the use of AI systems by market participants. It addresses key considerations for registrants, reporting issuers, marketplaces and other market participants leveraging AI systems, focusing on transparency, accountability and risk mitigation to foster a fair and efficient market environment. The notice is open for feedback on consultation questions until March 31, 2025.
Adoption and regulation of AI in Canada compared to other jurisdictions globally
The regulation and adoption of AI in Canada is developing alongside global trends, and firms may need to comply with both local and international regimes depending on where they operate.
As noted, the AIDA, part of Bill C-27, would provide a framework for the responsible use of AI in commercial activities, especially where AI can significantly impact individuals, such as in credit decisions or financial advice. Global regulatory comparisons include:
- European Union (EU): The EU AI Act takes a risk-based approach, categorizing AI systems by their potential impact and imposing specific requirements on high-risk applications. It may serve as a model for other jurisdictions, much as the GDPR has been for privacy regulation.
- The United States: Regulation at the federal level is driven by an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, which encourages secure and ethical AI practices. In addition, the Securities and Exchange Commission has proposed rules on the use of predictive data analytics by broker-dealers and investment advisers, with a view to eliminating certain conflicts of interest.
- Japan: Encourages AI innovation while managing risks related to data and privacy, taking a more flexible approach to support responsible development.
Unique challenges faced by traditional governance methods
As AI becomes integral to market operations, traditional governance methods are facing unique challenges:
- Lack of transparency: AI systems, especially those using deep learning, make decisions in ways that are difficult to trace or understand, complicating governance frameworks that rely on human-readable processes.
- Privacy and data governance: AI's dependence on large datasets raises concerns about managing sensitive data, as traditional governance may not effectively address risks such as data breaches or privacy violations.
- Unconscious bias: AI systems can unintentionally perpetuate biases found in their training data, leading to unfair practices and discrimination in market operations.
- Accountability and liability: Determining responsibility for decisions made by AI can be challenging, as multiple stakeholders (developers, operators, users) may be involved, complicating traditional accountability structures.
To adapt to these challenges, organizations should:
- Enhance explainability and transparency: Governance frameworks must prioritize clear, traceable explanations for AI decisions, potentially incorporating AI auditing tools.
- Strengthen data governance: Establish stronger regulations on data collection, use, and protection, ensuring compliance with standards like GDPR.
- Promote algorithmic fairness: Implement policies for regular testing and validation of AI systems to identify and mitigate biases, using diverse data sources (a simplified example of one such check appears below).
- Clarify accountability: Update legal frameworks to clearly define responsibility for AI systems, ensuring developers, operators, and firms are accountable for ethical AI design and implementation.
By addressing these areas, governance can better align with the evolving demands of AI in market operations.
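As a simplified illustration of the algorithmic fairness recommendation above, the sketch below computes one common check: the gap in approval rates between groups (demographic parity). The groups and decisions are hypothetical; a real validation program would apply several fairness metrics to production data and document the results.

```python
# A minimal sketch of a demographic parity check. Group labels and decisions
# are hypothetical; real programs test multiple fairness metrics.

from collections import defaultdict

def approval_rates(records):
    """records: iterable of (group, approved) pairs -> approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {group: approved / total for group, (approved, total) in counts.items()}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap: {gap:.2f}")
# A large gap flags the model for review; the acceptable threshold is a policy choice.
```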
Use of AI in managing risks in shareholder services
AI is transforming risk management in shareholder services, particularly in fraud detection and compliance. It offers advanced, real-time analysis of transactions, allowing for the rapid identification of patterns and anomalies indicative of fraud. This capability includes detecting unusual transaction behaviours, discrepancies in documentation, and suspicious communications. AI's adaptive learning enhances its effectiveness over time, improving accuracy and reducing response times, which is crucial for maintaining customer trust and protecting assets.
Companies like Computershare are implementing new analytics and data warehousing tools to support this technology adoption. However, as AI tools improve fraud detection, fraudsters also use AI to analyze data and uncover personal information (e.g., mother’s maiden name, high school name) available online, making it easier to impersonate individuals and bypass security measures. To combat these challenges, individuals need to be aware of their personal information's online presence, while service providers must invest in secure systems and enhanced security protocols to protect against fraud.
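For illustration, the sketch below shows one way the transaction screening described above could be prototyped with an unsupervised anomaly detector (scikit-learn's IsolationForest). The feature columns and data are hypothetical, and production fraud systems combine many models, rules, and human review.

```python
# A minimal sketch of transaction anomaly screening with an isolation forest.
# Features and data are hypothetical; this is not a production design.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical features per transaction: [transfer amount, hours since last login]
normal = rng.normal(loc=[200.0, 48.0], scale=[50.0, 12.0], size=(500, 2))
suspicious = np.array([[5000.0, 0.5], [4200.0, 1.0]])  # large, near-immediate transfers
transactions = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

flags = model.predict(transactions)  # -1 = flagged as anomalous, 1 = normal
flagged = transactions[flags == -1]
print(f"Flagged {len(flagged)} of {len(transactions)} transactions for review")
```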
Advice for market participants navigating the evolving AI landscape in capital markets
- Stay informed on regulations: Organizations must keep up with the ongoing adoption and upcoming regulations regarding AI. As new laws emerge, it’s essential to ensure AI use is responsible, ethical and compliant.
- Educate the board of directors: Boards should prioritize AI education to understand its applications, risks and rewards. AI requires enterprise-wide oversight, not just IT department management.
- Seek professional guidance: Consult knowledgeable professionals when unsure about compliance with regulations. Engaging with regulators proactively can lead to collaborative solutions and insights.
- Be bold and optimistic: Embrace the potential of AI and be willing to take calculated risks.
By following these recommendations, market participants can better navigate the complexities of AI in the capital markets.
The webinar was moderated by Kate Stevens (Partner at Dentons) and included Riley Dearden (Partner at Dentons), Mohamed Zohiri (Legal Counsel and FinTech Advisor at the Alberta Securities Commission) and Tara Israelson (General Manager at Computershare). The information provided in this summary is accurate as of December 2024. Watch the full webinar recording here.
***
Kate Stevens is a Partner in Dentons Canada’s Corporate, Securities and Corporate Finance, and Mergers and Acquisitions groups. She has a broad practice representing public and private companies in financings, mergers and acquisitions, recapitalizations and complex corporate reorganizations.
Riley Dearden is a Partner in Dentons Canada’s Corporate, Securities and Corporate Finance, and Mergers and Acquisitions groups. His practice focuses on corporate finance, with an emphasis on mergers and acquisitions, securities and corporate reorganizations.