Navigating the intricate world of international contract negotiations demands more than just legal expertise; it requires a deep understanding of cultural nuances, time zone gymnastics, and a robust framework for compliance across various jurisdictions.
And if there’s anyone who understands this, it’s Jonathan Strong. As the associate general counsel at Geotab, Strong is adept at addressing the logistical hurdles that often precede negotiations.
"There are multiple complexities that we have to deal with from even just time differences,” he tells Lexpert. “Sometimes it’s a challenge to get people from North America and Australia and Europe on a call at the same time. And then there are cultural differences that often take place – you need to be mindful of those when engaging in negotiations.”
It’s this aspect of negotiation that demands a comprehensive understanding of the parties’ backgrounds, ensuring that communications and proposals are framed in ways that are respectful and considerate of cultural norms.
“[At Geotab], we use a global law firm that's able to provide us with targeted strategic advice if there are areas that we feel we need some backup or some extra expertise on. However, what we often do is look at the high watermark for various legal regimes. With privacy, for example, we use GDPR and California CCPA to [assess] what the legal and regulatory requirements are. How do we meet those in a way that can be applied across jurisdictions?”
And GDPR compliance can never afford to be an afterthought – nor can the CCPA or Canada’s anti-spam legislation, CASL. Once again emphasizing Geotab’s strategic approach to compliance, Strong refers back to those “high watermarks.”
“We really [use] those high watermarks [to] drive the product development and implementation process itself. It's not the cart before the horse. We're trying to make sure that those compliance requirements are feeding into the product and service design itself before they get rolled out to customers. Those are really important things for any tech company or software company providing services across multiple jurisdictions.”
Another aspect that’s becoming even more prevalent for global organizations? Generative AI.
“From the perspective of our legal department, it's not whether or not to use AI, it's how should we use AI?” says Strong. “That's really the question. And so, it's really important to have internal policies in place so every employee knows the requirements, boundaries and limitations for using AI tools and methods. The use of AI tools requires legal departments and employees to understand their own responsibilities in terms of their position – what they're accountable for both internally and externally to customers. There should be a realistic understanding of what the purpose of those tools is and what the ethical implications of their use might be.”
And Strong’s perspective certainly seems to chime with his industry peers. A recent report from Thomson Reuters, which surveyed 440 lawyers on the topic of AI, found that 82 percent believe lawyers can use ChatGPT in their legal work, while 51 percent think it should be used. However, 62 percent also voiced concerns around generative AI – mainly around security and accuracy. One example Strong cites is large language models, which may not be “designed to produce truth or even accuracy.”
“They're designed for coherence,” he says. “And so if you have an understanding of how the AI tool works and what it’s designed to produce, it will affect how you're going to use those tools and what your expectations are. It's an important distinction when you're using a large language model, for example, and you want it to give you the answer. You shouldn't expect truth – but you should expect something that's coherent.”
And this extends to repurposing and reprocessing data – especially where customers and employees are concerned.
“When we're using an AI tool for our customers, we need to make sure that customer data is not being used to train the AI tool or for purposes that are not clear to us – we need to ensure that our customers understand and agree on how their data is processed in our solution. That's really important from an AI governance perspective.
“How that data is being used – can you ring-fence it to make sure that it's only used for this specific purpose and nothing else? And then with the AI tool provider, they probably have subprocessors or other service providers that they're using to feed into that very AI tool itself. It’s like a chain of connection and processing that goes along with that tool when you're processing data. So, you can really go down the rabbit hole on those things.
“Because the technology and the law are quickly evolving on AI, it’s vital to stay as up to date as possible. Privacy laws are there to protect people, and we shouldn’t lose sight of that. Legal teams, and all employees, need to be vigilant to protect against bias and discrimination when using AI, and regular and constant testing should be carried out to prevent harms that can affect the rights and freedoms of individuals.”