Artificial intelligence: opportunities, risks and instructions for use | Part 2 – Risks


Interview by Renato Cudicio, MBA, President of TechNuCom.

AI and Quebec companies: three unique perspectives

In November, members of the Gung Ho! business club organized a panel in Laval, moderated by Renato Cudicio, to discuss the impact of artificial intelligence within organizations.

A Boston Consulting Group study found that 30% to 40% of Canadian companies were testing AI applications or considering integrating the technology. A Gartner survey indicates that 25% to 30% of employees use AI without their managers' knowledge, which points to a far higher AI penetration rate than is generally perceived.

Three experts were invited to share their perspectives with some sixty business leaders, sparking lively exchanges.

Renato Cudicio: Today, cyberattacks represent one of the greatest risks for businesses. Will artificial intelligence become the main new threat, not least through its use by hackers and through internal misuse?

Charles S. Morgan: Discussions around artificial intelligence (AI) often stir up fears, much like legal issues or disaster scenarios linked to cyberattacks. It’s true that AI can increase the risk of cyberattacks, but it won’t replace traditional hacking methods.

Today’s cybercriminals are extremely sophisticated, making any industry vulnerable. You’ve probably all experienced this type of threat. AI, with its ability to automate and personalize attacks, amplifies the level of risk.

For example, it is now possible to simulate a Teams call in which the participants are deepfakes, convincing enough to fool a CFO into authorizing a money transfer. Attacks can also be more subtle, often relying on social engineering through phishing emails that, thanks to AI, are written without a single mistake. This makes detection even more difficult.
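To make that last point concrete, consider a caricature of an old-style filter that flags messages by their spelling mistakes. This is a deliberately naive sketch in Python; the token list and messages are invented, and real filters rely on far richer signals.

```python
# Naive legacy heuristic: flag messages containing misspellings
# that were once typical of hastily written scam emails.
SUSPICIOUS_TOKENS = {"acount", "verifcation", "paswword", "urgente"}

def looks_like_phishing(message: str) -> bool:
    """Return True if the message contains a known scam misspelling."""
    words = set(message.lower().split())
    return bool(words & SUSPICIOUS_TOKENS)

clumsy = "please confirm your acount paswword immediately"
polished = "please confirm your account password immediately"  # AI-polished

print(looks_like_phishing(clumsy))    # True: caught by the misspelling cue
print(looks_like_phishing(polished))  # False: flawless text sails through
```

An AI-generated email removes exactly the cues this kind of heuristic depends on, which is why defenders must lean on sender reputation, context, and behavior instead.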

In addition, hackers use AI to create and modify malicious code in real time, making identification more complex. Antivirus systems do eventually detect such software, but AI lets attackers continually adjust their code to slip past defenses in the meantime.
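The mechanism is easy to see with signature-based detection, where a file is flagged only if its fingerprint is already known. The following Python sketch is illustrative only: the payloads are placeholders, and real antivirus engines combine signatures with heuristic and behavioral analysis.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 hash used as a malware signature."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical signature database, built from a previously captured sample.
known_sample = b"...captured malicious payload..."
KNOWN_BAD_SIGNATURES = {fingerprint(known_sample)}

def is_known_malware(payload: bytes) -> bool:
    """Signature check: flags a payload only if its exact hash was seen before."""
    return fingerprint(payload) in KNOWN_BAD_SIGNATURES

# The attacker's AI tooling makes one trivial change to the payload...
mutated_sample = known_sample + b"\x00"

print(is_known_malware(known_sample))    # True: matches the recorded signature
print(is_known_malware(mutated_sample))  # False: one byte changed the hash
```

Every mutation yields a new hash, so a purely signature-based defense only catches a variant after it has been observed, which is exactly the lag that AI-driven code generation exploits.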

The risks are not limited to cybercriminals. Take, for example, the case of the chairman of a large company who, during a meeting, used AI software without authorization to transcribe and summarize the discussions. The software, which was open source, had terms of use allowing the transcripts to be used for commercial purposes, resulting in a leak of confidential information. This demonstrates a lack of awareness of technological risks within the company.

Today, AI makes it easy to create texts and modify images. How can a company protect itself against intellectual property infringement, whether of its own rights or those of others?

There are several facets to this question.

  • Risk 1: Copyright disputes
    The New York Times has taken legal action against OpenAI, claiming that ChatGPT was trained on millions of the newspaper’s articles without authorization. This raises the legal question of whether OpenAI could be forced to discontinue certain features due to the use of unauthorized data.

  • Risk 2: Uncertainty over property rights
    The terms and conditions of many AI tools specify that ownership of AI-generated works cannot be guaranteed. Text and images created solely by AI do not benefit from the usual copyright protections. This uncertainty changes the rules of the game.

To protect themselves, companies can opt for two approaches:

  1. Use open versions of AI tools, available free online, although they offer fewer guarantees on the provenance of data and the quality of results.

  2. Adopt an enterprise version of these tools, trained on their own data, with strict data control and governance.

Some major companies, such as Microsoft, offer indemnity clauses for their customers in the event of copyright disputes, provided that the rules of use are strictly followed. It is therefore crucial to assess the risks and establish rigorous governance before committing.

When it comes to cyberattacks, it’s not a question of “if” they’ll happen, but “when”. Are you seeing an increase in AI-related incidents at McCarthy Tétrault? What needs to change in business practices to reduce these risks?

As a cybersecurity expert, I’ve observed a steady increase in cyberattacks, often exacerbated by AI. Cybercrime remains highly profitable, and unfortunately many companies lack a clear picture of how their employees are using technology.

In some cases, sensitive information, such as employee data or financial statements, is shared on publicly accessible AI platforms, exposing the company to significant risks. Better governance and increased employee training are needed to prevent these situations.
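One technical safeguard that governance programs often pair with training is an automated redaction gate in front of any external AI service. The Python sketch below is a minimal illustration under stated assumptions: the regex patterns are deliberately simplistic, and `call_public_ai_service` is a hypothetical stand-in for whatever API a company actually uses.

```python
import re

# Hypothetical, deliberately simple patterns; a production gateway would use
# a dedicated PII-detection library and cover many more categories.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SIN":   re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),   # Canadian SIN format
    "PHONE": re.compile(r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern before it leaves the company."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize: Jean Tremblay (jean.tremblay@example.com, SIN 123-456-789)..."
safe_prompt = redact(prompt)
print(safe_prompt)
# Summarize: Jean Tremblay ([EMAIL REDACTED], [SIN REDACTED])...

# response = call_public_ai_service(safe_prompt)  # hypothetical external call
```

Routing every AI request through such a gateway also gives the company the visibility Morgan describes: the logs show who is sending what to which service.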

If you had just one piece of advice to give to reduce the risks associated with the use of AI, what would it be?

The answer is simple: governance.

It’s essential not to let fear hold back the adoption of technology, because AI has immense potential. However, we need to move forward with governance adapted to the size of the company and the specific risks involved. This means identifying the key risks and determining which uses of AI will bring the most value to the business.

Thanks to the 14 Gung Ho! members who sponsored this event, and to Karine Bélisle for organizing it.

Karine Bélisle – RBC Dominion

Christian Brassard – Hub International

Charles Brassard – Xerox

Renato Cudicio – TechNuCom

Simon Davidson – Groupe Carbonic inc.

Étienne Demeules – DSMA

Domenic Di Franco – Banque Toronto-Dominion TD

Jean-Philip Robitaille – Syscomax

René Roy – BDC

Jérémie St-Germain – Premier Consultants

Eric Taillon – ACT actuaires
