Canadian Companies’ AI Policies Aim to Balance Risk and Benefits

TORONTO – When talent search platform Plum noticed ChatGPT making waves in the tech world and beyond, it decided to go straight to the source to break down how employees can and can’t use the generative artificial intelligence chatbot.

ChatGPT, which can turn simple text instructions into poems, essays, emails and more, produced a draft last summer that got the Kitchener, Ont.-based company about 70 per cent of the way to its final guidelines.

“There was nothing wrong with it; there was nothing crazy about it,” recalls Plum CEO Caitlin MacGregor. “But there was an opportunity to be a little more specific or to tailor it a little more to our company.”

Plum’s final policy, a four-page document put together last summer that builds on ChatGPT’s draft with advice from other startups, advises employees to keep customer and proprietary information out of AI systems, to check everything the technology spits out for accuracy, and to attribute any content it generates.

This makes Plum one of several Canadian organizations firming up their stances on AI as people increasingly rely on the technology to boost their productivity at work.

Many were inspired by the federal government, which released a set of AI guidelines for the public sector last fall. Numerous startups and larger organizations have since adapted those guidelines to their own needs or are developing their own versions.

These companies say their goal is not to limit the use of generative AI, but to ensure that workers feel empowered to use it responsibly.

“It would be wrong not to take advantage of the power of this technology. It offers so many opportunities for productivity and functionality,” said Niraj Bhargava, founder of Nuenergy.ai, an Ottawa-based AI management software company.

“On the other hand, using it without guardrails in place carries many risks. There are the existential risks to our planet, but there are also the practical risks around bias, fairness and privacy.”

Finding a balance between the two is crucial, but Bhargava said there is “no one-size-fits-all policy” that works for every organization.

“If you are a hospital, you may have a very different answer to what is acceptable than a private sector technology company,” he said.

However, there are some principles that frequently appear in guidelines.

One is to avoid feeding customer or proprietary data into AI tools, because companies cannot ensure that the information will remain private. It could even be used to train the models that power AI systems.

Another is to treat everything the AI spits out as potentially false.

AI systems are still not foolproof. Technology startup Vectara estimates that AI chatbots invent information at least three per cent of the time and, in some cases, as much as 27 per cent of the time.

A British Columbia lawyer was forced to admit in court in February that she had cited two cases fabricated by ChatGPT in a family dispute.

A California lawyer also uncovered accuracy issues when he asked the chatbot in April 2023 to compile a list of legal scholars who had sexually harassed someone. It incorrectly named an academic and cited a Washington Post article that didn’t exist.

Organizations creating AI policies also often touch on transparency issues.

“If you wouldn’t call something someone else wrote your own work, why would you call something ChatGPT wrote your own work?” asked Elissa Strome, executive director of the Pan-Canadian Artificial Intelligence Strategy at the Canadian Institute for Advanced Research (CIFAR).

Many say people should be informed when AI is used to analyze data, write text, or create images, videos or audio, but other cases are less clear-cut.

“We can use ChatGPT 17 times a day, but do we have to write an email disclosing it every time? Probably not if you’re just planning an itinerary and deciding whether to travel by plane or by car, something like that,” Bhargava said.

“There are many innocuous cases where I feel there is no need to disclose that I have used ChatGPT.”

It’s unclear how many companies have explored all the ways employees could use AI and told them what is and isn’t acceptable.

An April 2023 study of 4,515 Canadians by consulting firm KPMG found that 70 per cent of Canadians who use generative AI say their employer has a policy regarding the technology.

However, an October 2023 study by software company Salesforce and YouGov found that 41 per cent of 1,020 Canadians surveyed said their company did not have a policy on using generative AI for work. About 13 per cent had only “loosely defined” policies.

At Sun Life Financial Inc., employees are prohibited from using external AI tools for work because the company cannot guarantee that customer, financial or health information will remain confidential when these systems are used.

However, the insurer is allowing its employees to use internal versions of Anthropic’s AI chatbot Claude and GitHub Copilot, an AI-based programming assistant, because the company has been able to ensure both comply with its privacy policies, said Chief Information Officer Laura Money.

So far, she’s seen employees use the tools to write code and create memos and scripts for videos.

To spur more experimentation, the insurer has encouraged its employees to enroll in a free, self-paced online course from CIFAR that teaches the principles of AI and its impact.

Of this move, Money said: “You want your employees to be comfortable with these technologies because it will make them more productive, make their work lives better and make work a little more fun.”

About 400 workers have signed up since the course was offered to them a few weeks ago.

Although Sun Life offers the course, it knows its approach to the technology must evolve because AI is advancing so quickly.

Plum and CIFAR, for example, each launched their policies before generative AI tools that go beyond text to create images, audio or video were widely available.

“There wasn’t the same level of image generation as there is now,” MacGregor said of the summer of 2023, when Plum launched its AI policy with a hackathon that asked employees to use ChatGPT to write poems about the company or to experiment with how it could solve some of the company’s problems.

“At a minimum, an annual review is probably necessary.”

Bhargava agrees, but says many organizations still have some catching up to do because they don’t yet have policies in place.

“Now is the time to do it,” he said.

“Once the genie is out of the bottle, we can’t think, ‘Maybe we’ll do this next year.’”

This report by The Canadian Press was first published May 6, 2024.

Companies in this story: (TSX:SLF)
