Dr. Jodie Lobana wearing a pink blazer and a pearl necklace

Building a Ferrari with Brakes: Dr. Jodie Lobana on Canada's AI Strategy and the Future of CPAs

With Evan Solomon, Canada’s first Minister of Artificial Intelligence and Digital Innovation, expected to release an updated national strategy on AI, how can Canada minimize the risks of the technology while maximizing the economic opportunities? As AI transforms the accounting profession, CPA Ontario asked Dr. Jodie Lobana, FCPA, FCA, CEO and Founder of AIGE Global Advisors, a leading AI governance firm, what she wants to see in future legislation.

March 12, 2026

“You don’t build a Ferrari and overlook the brakes. The faster the car, the better the brakes need to be.”

That’s how Dr. Jodie Lobana, FCPA, FCA, describes the path Canada should take as the federal government prepares to unveil a new AI strategy in 2026. She calls it “Innovigilance™,” and it captures the challenge facing Evan Solomon, Canada’s first Minister of Artificial Intelligence and Digital Innovation: protecting Canadians without stifling the potential of a revolutionary technology that is transforming both the profession and the economy.

In practice, that means adaptive legislation flexible enough to evolve with the technology, not static rules that become obsolete within months. It means building safeguards into AI development from the start rather than reacting after harm occurs. And it means Canada will have to accept some risk to stay competitive.

Dr. Lobana has watched Canada get caught between two competing visions of AI legislation: the U.S. approach of full-throttle innovation with minimal federal intervention, versus the European Union’s stricter, risk-based legislative framework.

In a wide-ranging interview, Dr. Lobana laid out what she wants to see in Canada’s legislative approach to AI and what AI will mean for the profession.

1. Adaptive Legislation

Dr. Lobana’s first recommendation: any AI legislation must be adaptive enough to keep pace with technology that’s advancing rapidly.

Today’s AI tools can perform specific tasks exceptionally well, like identifying tumours in medical scans or mastering complex games, but some AI researchers predict we could see artificial general intelligence (AGI) as early as 2029. Artificial general intelligence refers to AI that matches or exceeds average human cognitive capability. Lobana describes it as “better than the average human on almost all tasks.”

This potential for exponential growth in intelligence makes it especially important for any AI legislation to take an “adaptive” approach, Dr. Lobana says. “If we write static rules for dynamic systems, we fall into the ‘Velocity Trap™’ where technology evolves exponentially but regulation moves linearly.”

2. From Fail and Hide to Fail and Learn

Managing risks effectively requires a cultural shift. Dr. Lobana believes that Canada can learn from the airline industry’s non-punitive approach to safety reporting. Under this approach, employees are encouraged to report honest mistakes and safety hazards without fear of reprisal. The aim is to ensure that critical data is captured to prevent future accidents.

Doing so means moving from a “fail and hide” culture to a “fail and learn” culture, says Dr. Lobana. The key is creating a business and regulatory culture of what Nassim Taleb calls “anti-fragility”: systems that don’t just withstand stress but improve because of it.

3. AI Deployers and AI Builders

Beyond how organizations handle failures, Dr. Lobana believes regulation must address accountability: who is responsible when things go wrong. Any regulatory approach to AI should cover the full life cycle, from development to deployment.

“[AI] legislation must hold the organization deploying AI accountable, ensuring that they have governance structures to manage the tools that they buy. Most AI risk shows up in deployment, not in the laboratory.”

A company building an AI model in a lab environment is different from a bank deploying that same model to make lending decisions, or a hospital using it to prioritize patient care. Risks change when AI moves from development into use. Regulating deployment as well as development means that accountability sits with the entity making decisions about how and where to use AI, not just the company that built it.

Canada already has precedents to build on. Dr. Lobana highlighted that the Office of the Superintendent of Financial Institutions (OSFI) requires banks to do risk assessments and manage IT controls. She believes that these requirements and other existing financial and technological regulations can be adapted for AI.

4. Data Sovereignty

On data sovereignty, Dr. Lobana recommends a hybrid model. Critical sectors like health, finance, government and defence should be required to have compute and data hosted in Canada. But not everything needs domestic control.

“It is essential to fine-tune important AI systems deployed in Canada with Canadian data,” she explains. “Canadian values are different than U.S. values.” She points to Indigenous matters and language preservation as key examples. Fine-tuning AI systems with Canadian data means Canadian values will be embedded in how those systems behave.

What AI will mean for CPAs

While Dr. Lobana sees genuine opportunity for CPAs, she’s also frank about the pace of change. Organizations that assume they have a decade to adapt to AI may find themselves caught off guard: “[Mass disruption] may happen in as few as three to five years,” she says, “depending on how fast organizations move.”

1. Human in the Loop

The first and most significant transformation, according to Dr. Lobana, is “the move from being a preparer to a reviewer.” As AI takes on the work of drafting financial statements and reports, CPAs will become reviewers who provide supervisory checkpoints, drawing on professional skepticism, critical judgement, reasonability analysis and, most importantly, the ability to apply the “sniff test” to whether something looks right. Even when AI systems help draft financial documents, the ultimate accountability for those documents remains with the human CPA. Because AI still hallucinates and makes errors, her guidance is explicit: “As in real estate they say, the key is location, location, location; for use of AI, the key is review, review, review.”

2. Auditing AI Systems

Dr. Lobana also sees a role for CPAs in auditing AI systems embedded into financial processes, but not in the way the profession has traditionally approached audit. Because AI systems are constantly learning and evolving, Dr. Lobana argues that the annual snapshot audit is no longer sufficient. “You cannot audit a learning system just once a year,” she explains. “The model you verify in January will be different by June.”
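To make the contrast with an annual snapshot concrete, here is a minimal sketch, in Python, of one way a reviewer might check a deployed model between formal audit dates: score it against a fixed, previously verified benchmark and flag any movement beyond an agreed tolerance. The model interface, benchmark structure and two per cent tolerance are illustrative assumptions, not part of Dr. Lobana’s methodology.

```python
# Illustrative sketch: flag drift in a deployed model between audit checkpoints.
# The model interface, benchmark cases and 2% tolerance are hypothetical
# assumptions for illustration, not a prescribed audit standard.

from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class BenchmarkCase:
    features: dict   # inputs the model scores (e.g., a loan application)
    expected: int    # the outcome a reviewer previously verified


def checkpoint_accuracy(model: Callable[[dict], int],
                        benchmark: Sequence[BenchmarkCase]) -> float:
    """Score the current model against a fixed, previously verified benchmark."""
    hits = sum(1 for case in benchmark if model(case.features) == case.expected)
    return hits / len(benchmark)


def drift_alert(baseline_accuracy: float,
                current_accuracy: float,
                tolerance: float = 0.02) -> bool:
    """True when behaviour has moved more than the agreed tolerance since the baseline."""
    return abs(baseline_accuracy - current_accuracy) > tolerance


# Run monthly or quarterly rather than once a year, e.g.:
# if drift_alert(january_accuracy, checkpoint_accuracy(deployed_model, benchmark)):
#     escalate_for_human_review()
```

The point of a check like this is cadence, not sophistication: the January model and the June model are scored against the same verified benchmark, so any change in behaviour between formal audits becomes visible.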

3. Verifying Training Data

CPAs will also be called on to verify the integrity of the data that AI systems learn from, in much the same way they would verify a critical supply chain. Poisoned training data, whether introduced through a simple labelling error, a disgruntled insider, or a deliberate external attack, can silently corrupt an AI system’s outputs. Dr. Lobana gives two concrete examples. In a medical context, a label switched in a cancer detection training set could cause the model to systematically misclassify malignant tumours as benign, or vice versa. In finance, she describes a technique called a “backdoor attack,” where hidden patterns are placed in training data so that certain fraudulent transactions slip through undetected.
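To illustrate the supply-chain analogy, here is a minimal sketch, in Python, of one way training records could be fingerprinted when data is received and re-checked before training, so that a silently switched label surfaces as a discrepancy. The record structure and manifest format are hypothetical assumptions, not a procedure Dr. Lobana prescribes.

```python
# Illustrative sketch of a training-data integrity check, in the spirit of
# supply-chain verification. The record structure and manifest format are
# hypothetical assumptions, not a specific firm's procedure.

import hashlib
import json
from typing import Iterable


def record_fingerprint(record: dict) -> str:
    """Hash a training record (features + label) so later tampering is detectable."""
    canonical = json.dumps(record, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


def build_manifest(records: Iterable[dict]) -> dict:
    """Taken at data intake: record ID -> fingerprint, kept outside the data pipeline."""
    return {rec["id"]: record_fingerprint(rec) for rec in records}


def tampered_records(records: Iterable[dict], manifest: dict) -> list:
    """Re-checked just before training: a flipped label changes the fingerprint."""
    return [rec["id"] for rec in records
            if manifest.get(rec["id"]) != record_fingerprint(rec)]


# Example: a single switched label is enough to surface a discrepancy.
intake = [{"id": "scan-001", "features": [0.42, 0.17], "label": "malignant"}]
manifest = build_manifest(intake)

later = [{"id": "scan-001", "features": [0.42, 0.17], "label": "benign"}]  # silently flipped
assert tampered_records(later, manifest) == ["scan-001"]
```

A check like this only catches changes made after the data reaches the organization; data poisoned before intake still has to be verified upstream, which is where the supply-chain analogy carries furthest.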

4. CPAs as Chief AI Risk Officers

For CPAs this will represent an entirely new kind of risk management. Auditors will have to go upstream from the financial outputs to examine where the training data came from, who handled it, and whether it was compromised at any point along the way. Dr. Lobana sees this as a significant new area of practice, and a natural extension of the profession’s existing expertise and role in protecting the public.

Dr. Lobana also sees a new C-suite role emerging that CPAs are well positioned to fill: Chief AI Risk Officer. This role would require blending technical literacy with enterprise risk management, a combination of skills that plays directly to the CPA skillset.

Her advice to CPAs is to learn how AI systems work, including the strengths and weaknesses of different foundation models. For example, ChatGPT might create more persuasive proposals because it’s trained to please users, while Claude and Gemini tend toward greater accuracy. In her work with AIGE Global Advisors, her consulting firm, one AI system drafts, another reviews, and humans provide oversight throughout the process. “We need to keep humans in the loop.”
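As an illustration only, the pattern she describes might look something like the following sketch, in which one system drafts, a second reviews, and nothing is issued without a human sign-off. The function names are placeholders standing in for whatever tools a firm uses; they are not AIGE Global Advisors’ actual tooling or any real vendor’s API.

```python
# Minimal sketch of a "draft, review, human sign-off" pipeline.
# draft_with_model_a / review_with_model_b are placeholders for calls to
# whatever AI systems a firm uses; they are not real APIs.

from dataclasses import dataclass
from typing import Optional


@dataclass
class ReviewResult:
    approved: bool
    notes: str


def draft_with_model_a(instructions: str) -> str:
    """Placeholder: the first AI system produces a draft."""
    return f"DRAFT based on: {instructions}"


def review_with_model_b(draft: str) -> ReviewResult:
    """Placeholder: a second AI system checks the draft for obvious issues."""
    return ReviewResult(approved="DRAFT" in draft, notes="Automated checks passed.")


def human_sign_off(draft: str, review: ReviewResult) -> bool:
    """Accountability stays here: a CPA reads the draft and the review notes."""
    print(draft)
    print(review.notes)
    return input("Approve for issue? [y/N] ").strip().lower() == "y"


def prepare_document(instructions: str) -> Optional[str]:
    draft = draft_with_model_a(instructions)
    review = review_with_model_b(draft)
    if review.approved and human_sign_off(draft, review):
        return draft
    return None  # nothing goes out without both machine review and human approval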

5. Safeguarding Human Intelligence

CPAs are also well positioned to assess what Lobana calls the “Cognitive Capacity Index™”: whether firms have outsourced so much judgment to AI that they've lost the human capability to operate during digital failures. As businesses increasingly rely on AI, this may be one of the most valuable services the profession can offer.

Canada’s Regulatory Choice

When asked to sum up her vision for Canadian AI regulation, Dr. Lobana returns to her Ferrari metaphor: build for speed, but build with excellent brakes. Canada, she argues, should take the middle path between U.S. permissiveness and EU strictness, moving quickly while keeping control.

Will this put Canada at odds with the U.S.? “That risk is there,” Lobana acknowledges. But “that should not stop us from coming up with the legislation. We should just cross that hurdle when it comes rather than diluting our legislation right now.”

The goal, she says, is straightforward even if the path isn't: AI systems that are not only globally competitive and trustworthy, but anti-fragile.