Most businesses have enthusiastically welcomed the opportunity to increase productivity and accessibility—and it’s true that there are plenty of perks to AI. But what are the risks of uncritical adoption?
The current proliferation of Generative Artificial Intelligence (AI) solutions has been compared to the Klondike Gold Rush of the 1890s, when some 100,000 prospectors headed for British Columbia and the Yukon in an attempt to get rich quick. Some were looking for gold, while others, like Friedrich Trump—paternal grandfather to today’s American president—saw an opportunity to “mine the miners” by providing entertainment and other services. The whole thing was driven by an intense sense of FOMO.
Yukon gold and AI are obviously very different in nature, but the media- and marketing-driven fervor isn’t all that different. “It’s a second stage of modernization,” said Stuart Watt, an independent technical consultant living in Halifax. “Previously, modernization was working with scale… you’d try to build as many cars as you could, while keeping the parts as identical as possible so you could streamline efficiency.”
Watt, who also holds a PhD in Cognitive Science, said this second stage is more about individualization. “This newer tech includes things like robo-trading,” he said. “So now, instead of scaling out by [making] everybody identical, what you do is…micro-classify. Which is where our current version of AI is really excelling because there’s just so much opportunity for customization.”
“(AI) is a way to expand the number of people who can engage in appropriate business conversation. That means more people are able to get involved in business, which I do think is a good thing.”
—Susan Sharpe, a Nova Scotia educator developing youth-focused digital education programming
Watt attributes Generative AI’s current—and relatively sudden—popularity to an “availability cascade,” a term coined by economist Timur Kuran and American legal scholar Cass Sunstein. “It was a special kind of going-viral, largely through the media, where [people] were using it to… get attention and get their products out there,” said Watt. “In a way, it was inevitable in this second stage of modernization. There are things AI can do that nothing else can really do.”
He’s right. AI drives innovations in climate awareness, assistive tech, healthcare and more. But companies are also using AI to do things that most humans can do perfectly well—like writing emails, conducting basic research and answering customer inquiries. While AI is valuable for people with disabilities that make such tasks challenging, relying on it for them carries risks that need to be weighed—especially since large language models (LLMs) are known for “hallucinations,” which cause them to generate incorrect or misleading information.
Watt points to the Air Canada chatbot gaffe decided in early 2024, when the company’s chatbot wrongly told a customer he could claim a bereavement discount after booking. Air Canada refused to honour the fare, but the customer challenged the airline before a B.C. tribunal and won. “Basically, the decision was, sorry, your technology provided this deal,” said Watt. “You have to stand by it as a commercial operator. You can’t wash your hands of it just because it was done by an AI.”
In North America, companies that build third-party apps on OpenAI’s models will likely continue to be held liable for the AI’s mistakes, at least for the foreseeable future. “[OpenAI] will say, ‘legally, we are limited to a hundred dollars of risk’,” said Watt. “And as an application developer, you are on the hook if it all goes terribly, terribly wrong.”
Susan Sharpe is a Nova Scotia educator with years of experience developing youth-focused digital education programming for a Montreal-based non-profit called Digital Moment. She’s also an advocate for teaching students how AI works, using tools like Machine Learning for Kids and Google’s Teachable Machine.
“You… choose the data you’re going to use to train your AI model,” she said. “So that means you gain an understanding of what a model actually is, and how it can change depending on the data you choose.” This helps students understand how biased training data can impact the output they receive from AI, so they can make informed decisions about how to use and fact-check it.
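Sharpe’s classroom point can be shown in a few lines of code. The sketch below is a minimal illustration (it uses scikit-learn rather than the classroom tools she mentions, and the example sentences and labels are invented): the same classifier, trained on two differently skewed datasets, gives opposite answers to the same question.

```python
# A minimal sketch of how training data shapes a model's output.
# All sentences and labels here are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def train(sentences, labels):
    """Fit a tiny bag-of-words text classifier."""
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(sentences, labels)
    return model

# Dataset A: sentences about dogs are mostly labelled positive.
data_a = (["dogs are loyal", "dogs are friendly", "cats scratch"],
          ["positive", "positive", "negative"])

# Dataset B: similar vocabulary, but the dog sentences skew negative.
data_b = (["dogs bark all night", "dogs are loud", "cats are calm"],
          ["negative", "negative", "positive"])

query = ["dogs are outside"]
print(train(*data_a).predict(query))  # likely ['positive']
print(train(*data_b).predict(query))  # likely ['negative']
```

Neither model is “wrong”; each faithfully reflects the data it was given, which is exactly the lesson Sharpe wants students to carry into their own use of AI.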
“The typical Google search vs. an AI-enabled search? The difference in energy consumption is eightfold.”
—Dr. Tushar Sharma, assistant professor in Dalhousie University’s Faculty of Computer Science
But Sharpe is enthusiastic about AI’s potential, too. “There’s a great example… of someone who was running their own landscaping business but wasn’t good at business communication,” she said. “They knew what they wanted to say, but they’d say it too casually.”
The landscaper was able to use ChatGPT to change the tone of their email. “That’s a way to expand the number of people who can engage in appropriate business conversation,” said Sharpe. “That means more people are able to get involved in business, which I do think is a good thing.”
The key is to learn to recognize when AI is truly useful—and when there are better ways to increase productivity. Sharpe shares a hypothetical example from the podcast Factually! with Adam Conover, where someone uses AI to expand a few bullet points into a lengthy document, only for someone else to use AI to summarize the document later on. “I feel like it can be used that way [a lot],” she said. “Hopefully that’s something that companies can become aware of and change.”
For Sharpe, the solution is about being more mindful about usage. “If they’re helping you get through whatever part of [the work] is the hardest for you, and you’re always being critical of the output and asking, ‘how could this improve,’ then that could be a very useful way of doing it,” she said.
It’s also important to understand how AI use is impacting the climate—especially considering the significant steps many businesses have taken to increase sustainability and reduce emissions. Dr. Tushar Sharma is an assistant professor in Dalhousie’s Faculty of Computer Science. While he is a proponent of AI, he’s deeply concerned about energy consumption.
He references a Wells Fargo report released at the end of 2024, which shows that while AI is already using massive amounts of energy, it’s projected to use 10 times more in five years. “That’s too much,” said Sharma.
We’re also using too much water. Sharma said that currently, the amount of water being used to cool AI data centres every year equals 50 per cent of the UK’s total annual usage. “We are not really realizing it, but it is happening,” he said. “Very soon we’ll be in very big trouble if we’re not addressing it.”
Sharma and his team are exploring ways to reduce the environmental impact of training AI models without compromising performance, and they’re developing tools to help measure energy usage. “If you want to measure what my whole computer is using, it is relatively easy,” he said. “But if I want to know what a specific statement in some source code is using, that is not very easy. So we did some work on that.”
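Sharma’s distinction between coarse and fine-grained measurement is easy to see in practice. As a rough sketch (using the open-source codecarbon library, not the tooling his team built), estimating energy and emissions for a whole block of work takes only a few lines, while attributing consumption to a single statement remains a research problem:

```python
# A rough sketch of coarse-grained energy measurement using the
# open-source codecarbon library (not Sharma's team's tooling).
from codecarbon import EmissionsTracker

tracker = EmissionsTracker(project_name="demo", log_level="error")
tracker.start()
try:
    # The workload under measurement: a deliberately wasteful loop.
    total = sum(i * i for i in range(10_000_000))
finally:
    emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent

print(f"Estimated emissions for the whole block: {emissions_kg:.6f} kg CO2eq")
# Note: this measures the whole process over the interval. It cannot
# say how much was the sum() line versus interpreter overhead, which
# is the statement-level problem Sharma describes.
```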
“I think that’s just unethical to have copyright distorted to that extent, because then you’re basically creating an environment where people are just going to stop creating.”
—Stuart Watt, PhD in Cognitive Science and technical consultant living in Halifax
Until a more sustainable solution is developed and tested, Sharma echoes Watt and Sharpe by encouraging a thoughtful approach. “A lot of times, leaner models work,” said Sharma. “Even traditional methods work. The typical Google search vs. an AI-enabled search? The difference in energy consumption is eightfold.”
Companies concerned with ethics should also be aware that AI models like ChatGPT have been trained, without permission, on massive amounts of human-generated content. Just last month, The Atlantic released a searchable database of millions of the books Meta used to train its AI. Meanwhile, comic artist Sarah Andersen (creator of Sarah’s Scribbles) is leading a class-action copyright infringement lawsuit against a number of AI image-generating companies.
“We should be regulating that kind of thing out of existence as far as we can,” said Watt. “I think that’s just unethical to have copyright distorted to that extent, because then you’re basically creating an environment where people are just going to stop creating.”
As we move ahead into this brave new AI-assisted world, we simply need to act with care. As Watt said, “AI is part of a larger shift that is reshaping business structures.” But if we want those structures to remain sound, we need to put ethics and accountability at the forefront.