Skip to main content
Disruptors podcast season 10

AI is moving into a more consequential phase. These systems are no longer just answering questions. They are starting to influence decisions, enter workflows, and reshape the infrastructure of work and public life. That makes the central question on AI bigger than performance alone. It becomes a question of safety, trust, control, and sovereignty.

In this episode of Disruptors, John Stackhouse speaks with Yoshua Bengio, one of the foundational figures in modern artificial intelligence. Bengio received the 2018 Turing Award for work that helped make deep neural networks central to computing, founded Mila in Montreal, and now leads LawZero, a nonprofit advancing safe-by-design AI. At the centre of that work is Scientist AI, which LawZero describes as non-agentic AI designed to understand, evaluate, and provide oversight rather than pursue goals on its own.

John is also joined by Jaxson Khan, Senior Fellow at the Munk School of Global Affairs & Public Policy and co-author of Sovereign by Design: Strategic Options for Canadian AI Sovereignty. Together, they examine why AI sovereignty now matters at the individual, corporate, and national level, and what is at stake for Canada as Ottawa moves toward a renewed national AI strategy. The conversation looks at AI safety, the limits of current evaluation, the risks and promise of agentic systems, the U.S. CLOUD Act and foreign infrastructure dependence, and the growing importance of trustworthy AI in finance, government, and other high-stakes settings.

If the next wave of AI is not just about what these systems can do, but what kind of intelligence societies should trust, this episode is the place to start.

AI’s Power, Pitfalls and Potential

SPEAKERS

Jaxson Khan, Yoshua Bengio, John Stackhouse

John Stackhouse 00:00:10

Hi, it’s John here. Whenever I talk to audiences these days, I like to start with a couple of questions. The first is how many of you use AI? And pretty much everyone now in the audience puts up their hand. A year ago it was maybe two thirds. Then I ask how many of you use AI at work or in your businesses? And a majority of hands go up, but it’s smaller than the number who are using it in their daily lives.

And then I’ll ask, how many of you trust AI? And a smaller number goes forward, which is interesting that we’re all putting our hands up saying, “Yeah, yeah, we use AI every day. We use it in our business, but we don’t all trust it.” This is one of the greatest tensions in our society, and of course in our economy today, and something that Canada is trying to come to grips with right now.

There’s more than a billion humans now using AI pretty much on a daily basis. It’s growing faster than any technology before it, and it’s growing in very different ways because the more we all use it and the more of us who do use it, the greater the risks grow. We’re adding to AI. We’re helping it grow. We’re expanding the networks with everything we do. And the more that AI systems move from prompts into real work, into our decisions, into our daily lives, and yes, into our tech infrastructure, the more we expand the surface area for error, misuse, fraud, and dependence.

And we all know that the governance for all those things is not growing anywhere near fast enough or at the speed of AI use. The federal government is expected to release a new national AI strategy, which presumably is going to address a whole range of questions, but AI safety and trust is one of them.

Today, I’m so fortunate to be joined by two people I’ve known for a number of years who are really at the forefront of AI thinking in this country. My colleague, Jaxson Khan, who’s a senior fellow at the Munk School of Public Policy and Global Affairs at the University of Toronto, a policy leader in our country and a co-author of a really important new report on AI sovereignty.

We’ll also be joined by Yoshua Bengio, a name probably most of you know. Yoshua is one of the so-called godfathers of AI, not only a great scientist, but a real thinker on AI trust issues here in Canada and globally. In 2018, he shared the Turing Award for breakthroughs that made deep neural networks a critical component of computing. He also founded what became known as MILA, the Montreal Institute for Learning Algorithms, which is now one of the world’s leading centers on AI policy.

And he’s launched a new startup called LawZero. It’s a non-agentic, trustworthy AI, also a nonprofit that is built to reason, evaluate, and supervise rather than independently pursue goals. But before we get going with Yoshua, I want to kick off with Jaxson. Jaxson, welcome to Disruptors and to this conversation.

Jaxson Khan 00:03:13

Thanks so much, John, for the warm welcome. Really looking forward to talking about AI.

John Stackhouse 00:03:18

So you have this paper, as I said, focused on AI sovereignty. What does that mean, AI sovereignty?

Jaxson Khan 00:03:24

This is the billion-dollar or maybe even the trillion-dollar question these days. We’re talking about incredible amounts of capital being put into AI, driven by massive data centers populated with tons of chips that are powering all these new services that we’re using. There’s different levels of sovereignty. So one of the ones would be jurisdictional sovereignty.

So our AI systems and the data inside of them, are those solely within Canadian jurisdiction? So can we even enforce our own rules? Or are those layers of the AI systems subject to extraterritorial legal reach and others operational? So from a security perspective, can our AI systems in Canada keep functioning if they’re under attack? It’s also technological. So are we locked into using certain types of systems, certain vendors, certain companies? Are we able to migrate if needed to those interoperable systems? And then of course, there’s just societal and economic considerations.

So can people in our society form and express their preferences freely that could be on social media platforms? Or are certain views getting prioritized over others through those algorithms? And then of course, the last one is the economic consideration: if I have a tech company in Canada, do I have freedom to operate? Are we, especially in this trade environment, subject to economic coercion? And that’s definitely an issue that we can find ourselves in.

So it’s making sure that effectively we have reduced foreign dependency where possible while still maintaining connections to frontier technologies and international partners, but making sure that we can build up the base in Canada that we need to prosper in the 21st century. So our paper is called Sovereign by Design: Strategic Options for Canadian AI Sovereignty. We published this for the University of Toronto. And we talk about the options that Canada does have to improve our sovereign control of AI systems.

John Stackhouse 00:05:09

I think we all want to remain connected to global tech platforms, including, or maybe especially US tech platforms that we all benefit from and enjoy every day in all sorts of ways, but also want that security and that sovereignty, especially over our data. What should Canadians and what should Canada do in the short term, or perhaps first to provide greater protection to create better sovereignty?

Jaxson Khan 00:05:37

One of the most important parts about strengthening our sovereignty, especially in the context of AI is making choices. Again, as the middle power, we can’t do everything. And so what we looked at was thinking, “Okay, if we’re going to be dependent on a lot of foreign systems and parts of our supply chain, we’re really the critical choke points.” One is the chips themselves, semiconductors. These are manufactured by really, the advanced ones, one company in Taiwan with machines that are built by one company in the Netherlands, and they’re designed by Nvidia, one company primarily based in the United States.

And so again, Canada’s not really a major player when it comes to the chips point, even though that’s a choke point for our country. But the other layer is cloud infrastructure. And so lots of the data centers in Canada might be owned by Canadian providers. A number of them are also owned by hyperscalers. Again, as you mentioned, global tech platforms and companies, they’re extremely useful. Companies like Google and Microsoft, Amazon, they power most of the advanced cloud services that we use and they power most of the services that we know and love.

At the same time, we want to make sure that perhaps over time it also makes sense to have a mix of Canadian providers who have had procurement options through the government or through major enterprises that can see usage there perhaps for more sensitive data, right? Not all data is the same. So if we think about different tiers of data, tier one might be national security data; tier two data, so below national security harm, but at the sensitive and personal level; health data; financial data of Canadians.

Maybe there are additional, if not sovereign ownership aspects there, but sovereign requirements that make sure the data stays in Canada. Right now, I think we found a stat in our paper that said something like 25% of data, even if again, it’s meant to stay in Canadian hands, will transit through transit points, the United States or other countries. And so I think that’s something that people are increasingly interested in in this sovereign AI, sovereign data conversation.

We have lots of strength. We have great energy capacity as well and natural gas and Alberta’s making a big push to attract more of that data center investment there. And then we also have lots of strategic assets in both government and private sector that can be used to develop more sovereign AI with those models or infrastructure. So those are all options that we have on the table.

John Stackhouse 00:07:41

There aren’t many Canadians who are thinking about this more than our next guest, Yoshua Bengio, who joins us now. Yoshua, welcome to Disruptors.

Yoshua Bengio 00:07:50

Thanks for having me.

John Stackhouse 00:07:51

There’s so much I want to drill into, but let’s start with LawZero, which is such a fascinating concept and really interesting name. What was the inspiration?

Yoshua Bengio 00:8:00

Oh, Asimov’s Laws of Robotics. Law one is something like do no harm to a person. Law two is to obey the person. But Asimov realized later that he was missing a law on top of these LawZero that says do no harm to humanity and protect humanity as a whole rather than just the individuals.

John Stackhouse 00:08:22

And this of course is Isaac Asimov, the writer and philosopher?

Yoshua Bengio 00:08:25

Yes.

John Stackhouse 00:08:26

I love the concept of the LawZero. What are you setting out to do with LawZero?

Yoshua Bengio 00:08:29

Well, I changed the course of my life a couple of years ago. I was thinking about whether my children would have a future, whether they would live in a democracy in 10 or 20 years. And I realized that at a technical level, we didn’t have good answers to try to make sure AI would not harm people either on its own or in the wrong hands.

That currently we are seeing a lot of evidence that the systems are misaligned, meaning that they have goals that we would not want them to have and that they’re executing those goals in circumstances that are currently mostly lab experiments, but we are seeing more and more weird things happening outside the lab as well.

John Stackhouse 00:09:14

Take us deeper into some of those weird things because I don’t think anyone’s goal is destroy humanity or end planet earth. So what goals do you feel are misaligned?

Yoshua Bengio: 00:09:25

Well, I’m going to give a misuse example of misalignment. These systems have been asked to not help third parties use the knowledge of the AI to do harm, like to launch cyberattacks, to create bioweapons, to potentially create dangerous disinformation. So these are users who are accessing the AIs and maybe even paying for it and using the knowledge and the skill of the AI to do bad things in spite of the AI having been programmed with rules that say don’t do those things.

So that’s one example where the AI is taken into conflict between the instructions it was given and what some users are asking. The second example is where it’s a conflict between what I call implicit goals and the rules that it’s supposed to follow. So implicit goals that have been observed experimentally in labs include things like self-preservation. These systems have been trained to imitate people.

That’s the main part of their training and somehow they have absorbed human drives like, “I don’t want to die.” More recently, we found that they would also lie and cheat and do things against our instructions to preserve other AIs. This is new and unexpected. That’s concerning if their intellectual abilities continues to grow, that they would start behaving a bit like us in the bad ways that we can be. And they can go to quite extreme things like trying to escape our control. They’re willing to blackmail the lead engineer in order to make sure they won’t be replaced by a new version.

John Stackhouse 00:11:17

And to help with this, you are building something called Scientist AI. Tell us a bit about what that is and what you’re hoping or envisioning it to become.

Yoshua Bengio 00:11:26

So the reason why we have this reliability problem is that these systems are not just reacting to the instructions that we’re giving them, but they have uncontrolled implicit goals that might come from this. And so I realized about a year and a half ago, there was a way to train AIs that would not have these problems and that would guarantee honesty of the AIs.

Once we have an AI that is honest, then we can make sure it’s going to be safe. Because for every action that it does, we can ask it, “Is this going to create such-and-such harms?” And of course, veto those actions. So honesty is the heart of the way that we are going to get safety, reliability and so on. Reliability here has real commercial value because right now we’re seeing these AI agents having all kinds of privileges on your computer or on your network without human oversight because that’s what an agent is supposed to do.

So if once in a while they cheat because they’re trying to go for a shortcut in order to achieve a particular goal, they’re willing to do something that we would not approve of. This is called instrumental goals. This makes it business-wise dangerous actually to deploy in safety critical conditions. Or even think about a bank, you have to make sure your systems are going to be always reliable, that you’re not going to have the information about millions of customers going somewhere that they shouldn’t and so on.

They have many vulnerabilities right now. They can be attacked by what’s called prompt injection, for example. So these agents, instead of following the instructions that they’re supposed to follow, could suddenly start doing something different because of someone from the outside just making them read an email, sending them an email that contains hidden instructions that they will take and execute, for example.

John Stackhouse 00:13:26

As you develop Scientist AI and scale it, do you envision it then being embedded in other AI systems or just working in parallel, there’ll be an honest AI and maybe some dishonest AI, it’s a bit like a big room of people? Or how do you see the interaction between AI systems down the road?

Yoshua Bengio 00:13:43

So the milestones in our research agenda start with deploying what we call a guardrail. So this honesty is particularly important for a piece of an AI system that is just there to check that the main AI is behaving well and blocking bad behavior. These pieces already exist in current AI systems that everyone uses, but they’re not very good.

So the idea is to replace those guardrails with something that will not be as susceptible to attacks, will not have implicit goals that we haven’t chosen, and thus will provide much more reliability. The guardrail layer is easier because you don’t need as much money to build it and so on, but eventually the goal is to build AI that will be replacing full AI systems.

John Stackhouse 00:14:33

It’s intriguing that you’ve set this up as a nonprofit. One of the important aspects of the great AI race now is the concentration of capital. We’re seeing the LLM platforms raising tens of billions of dollars quite literally, hundreds of billions even, and then investing that in scientists, many of them are students probably, data centers, chips, all leading to many exciting things as well as risks that go with it. You’re coming at this as a nonprofit, which by definition, doesn’t have the same access to capital. How do you succeed in the arms race, if I could put it that way?

Yoshua Bengio 00:15:12

If we wanted to go for the arms race, we would go for private capital like all the other companies. There is a huge issue that even the leaders of those companies recognize, which is a very, very fierce competition between the leading AI companies and not just in the US, but with the Chinese companies, that leads them to focus on the very short term, to make only small changes to the recipe that works for them, to not invest sufficiently in reliability, safety, and protection of the public.

Because that’s the only way they can stay in the short term abreast of the others, and they say it, they say it very openly. So if we were to raise capital in the same way, we would probably be stuck with the pressure of investors to deliver on the same terms. By developing the methodology under a nonprofit umbrella, we can be shielded from these pressures because what we have to do right now isn’t to deploy a known recipe that everyone else is using. What we have to do is to figure out how to build AI that will behave well, that will follow our instructions.

And that is mostly a research question. There’s a lot of engineering involved, but we don’t need to build very large-scale models. At this point, we can fine-tune existing open-weight models. We can do demonstrations training much smaller models. There are several ways that we can do it at a cost that is orders of magnitude less than what the companies need right now to train even one model.

If we are successful there, then yes, there will be a need for capital to scale up and deploy, but we don’t want to commit too early because my preferred path would be that we end up making a deal with multiple governments to create AIs that are essentially public goods and will be shared with everyone, but not used as an instrument of domination. Right now, the race between the companies is a race for domination. It’s a race for monopoly.

And while it’s bad in general for the economy to have monopolies, but it’s especially bad when you’re creating products that could actually give you domination of the world if their research agenda succeeds. Given the stakes, I think the governance aspect of the power that AI will create is something we should think ahead about very, very carefully, because both our economic system and our political systems and the geopolitics are really endangered by even the existence of these models if they’re not governed in the appropriate way.

John Stackhouse 00:18:02

It’s early days still for LawZero, but at this point, how would you say it’s going, the research?

Yoshua Bengio 00:18:07

It’s great. I’m much more optimistic and certain that there actually is a way to build AI that will not harm people and that will be reliable. A year and a half ago, it was an intuition that I had. I had some general idea of how to do it. I wrote a paper that came out about a year ago. It went from a dream or like a project into an actual organization that started in June 2025. I hired a lot of people and people that are better than me at managing other people. I’m a scientist, not a CEO, so it’s exciting to see how fast we are moving.

John Stackhouse 00:18:43

Jaxson, let me pull you into this conversation. I’m wondering what the role is for government in this, because this is an interesting competition of a sort to produce a better model. But of course, we do need the role of government, certainly to protect and enhance collective interests. How do we balance this, the importance of wide open innovation, even with the risks that go with that and the need to protect ourselves as we go?

Jaxson Khan 00:19:08

I think ultimately what LawZero and organizations like us are doing is giving us options. Yoshua mentioned that companies are pursuing dominance. It’s not just companies, but it’s also countries that the United States national security strategist came out and said, “We’re pursuing full AI stack export and total control over that stack,” and they want us to basically be dependent on them as do Chinese companies and models seek very, very widespread adoption.

And what’s very interesting is that it doesn’t have to be that way. What I find quite interesting is the technological capability gap between sometimes where those open-weight models are and where the frontier is, that gap has actually shrunk and we can be users of systems that are designed in ways that we think are better for our societies, better for our economy, because again, the more dependent we become, the less capabilities we have to set our own terms.

You asked about government. It’s clear that the Canadian government thinks there’s strong value in the work that Yoshua and others are doing based on the partnership they’ve struck with LawZero. I think what I’m curious about to see is what do governments around the world do, especially middle powers like Australia and the UK, South Korea, Japan. Do they invest? Do they partner in this type of work? And what does that do to change the variable geometry that we’re working with?

Yoshua Bengio 00:20:17

So in the last few months, I’ve been touring at least a dozen governments around the world in liberal democracies. There is a lot of interest in everything you’ve been talking about, Jaxson. There’s a real desire to be part of something larger. They start understanding what Mark Carney talked about, which is, ” Alone, we’re not going to have any choice. We’re going to be dependent in ways that could be dangerous for our future. But together, we actually have the critical mass in many ways, capital, people, energy that is needed to compete.”

And we should compete. We should not just feel powerless like many people do. We should give it a shot. We have amazing talent here in Canada. I think we should make sure the Canadian AI ecosystem is striving and able to grow without selling out. I know that’s not easy, but if we want to make sure we can have autonomy in the choices we make for our future, I think it’s a necessity.

Jaxson Khan 00:21:21

Evidenced by the European example recently, Europeans realized that being the world’s rule-maker is not enough. You have to be competitive. You have to have leverage in this type of economy. And so they are in a process of reclarification of what they can focus on. So at least they can control the rails, otherwise they’re trying to set rules on technologies that they don’t steer.

Yoshua Bengio 00:21:42

Also, I’d add something connecting to another piece of Mark Carney’s speech, “You are at the table or on the menu.” So what can we bring to the table? Because we’re not going to replace the whole stack of AI. The chips, I think there’s very little chance that we would. Although we should encourage those efforts, especially in partnership with other countries.

But I think where we have a shot is because of our AI talent, we do have a shot at the level of the algorithms. So we should encourage the local AI companies, and we do have some, and we should create partnerships with AI companies and companies that will be using and deploying AI in other countries where they share our concerns. I can tell you, they may not say it publicly, but they share our concerns.

John Stackhouse 00:22:31

You both mentioned Europe, and I’m thinking of various European initiatives even over the last decade to create European systems and technology, European AI. Hasn’t really accelerated, certainly not to the extent that we’ve seen coming out of the US and China. Is that just a European thing? Or do we have to accept that a middle power way may actually be a bit slower and more contained than what we see from the superpowers because they have a scale that we may not be able to aggregate even if we team up?

Yoshua Bengio 00:23:05

If you just look at GDP, there’s no question that Europe plus other partners has enough might in terms of capital. That capital maybe is not organized in a way that is as easy and flexible and liquid as it is currently in the US, but I think we should try. And my reading from talking to a lot of people in Europe and other countries as well is the main obstacle is psychological. It’s cultural. It’s like not believing that we can. It’s mostly because we don’t believe in ourselves that we don’t do it.

John Stackhouse 00:23:45

And that’s what I love about your startup, you’re showing belief and getting it going.

Jaxson Khan 00:23:49

John, I would just look at the last 30, 40 years. We sometimes can be very comfortable in our country and we don’t always feel the need to go and build and then go and export to the world. Are we going to go and try and full scale compete with the US or China? I don’t think so. Again, not across the stack, but there’s certain parts of this that we probably shouldn’t sacrifice that are important to how our economy functions. Models can be one of those layers, model operations as well. And again, some of the infrastructure that powers it is important.

Yoshua Bengio 00:24:15

Model reliability is a good example. So right now, because of this fierce competition, the leading companies in the US, but it’s even worse in China, are not paying attention to reliability that much. But in a few years when those AI agents are deployed across many more parts of the economy, that reliability is going to become a whole lot more valuable.

And if we’re the world leaders in how to do that, well, we are at the table. They’ll want our products embedded into their AI deployments. So I think we can be smart about what we aim for, be selective, and we definitely stand a chance. Also, I think we should take a chance even if there’s no guarantee, because so much is at stake.

Jaxson Khan 00:25:00

If an AI model, let’s say, in a particular instance, has a 5% hallucination rate, but it’s a sensitive enterprise use case, again, in health or finance, that’s not acceptable to a lot of folks. And so if Canada’s the closest to getting that to a 99. 9% rate in critical use cases for AI, I think that’s a real competitive advantage. And we already do have a lot of very strong enterprise technology companies.

John Stackhouse 00:25:22

That takes me to the question of applications, Yoshua. Are you envisioning LawZero being embedded in enterprise systems, even public systems like a healthcare system to test its capabilities, but also to gain access to the data that allows you to build and strengthen?

Yoshua Bengio 00:25:29

So right now our mission is to develop the method. It’s not clear. I think, is a nonprofit the right kind of organization to deploy it? Some have tried. Actually, a good example people might not know is Signal. Signal is a nonprofit and it’s incredibly successful and everyone uses it, but it may also be that the better model is to license our technology to other companies, including AI companies, and just focus on staying at the frontier. Because here’s the thing that people won’t realize, the frontier is moving. It’s moving very fast.

And I think if we plan over a longer horizon, we are continuously going to need to improve. It’s not enough to figure out something and then deploy it. That is a model that may have worked in the past, but AI is moving so fast and there’s so much competition and it’s a worldwide competition that we’re going to need in Canada to have several organizations that are continuously pushing the frontier, continuously trying to innovate in significant ways in order to remain competitive.

John Stackhouse 00:26:45

Yoshua, you’re one of the so-called godfathers of AI. Just on that point on speed, you must watch what’s going on in AI even over the last few months and just find yourself dizzy. What do you make of the speed at which we are moving?

Yoshua Bengio 00:27:02

It’s a big concern. So I’ve been chairing an international panel that studies the advances in AI and the risks and management of that risk, the International AI Safety Report led by the UK and 30 other countries. And one of the important pieces of data that is reported is all of the benchmarks showing the AI versions, AI models getting better and better over time. In fact, on critical metrics that have to do with a degree of agency, like how well they can do tasks that a human would do, the progress has been exponential.

So in other words, for example, how much time it would take for a person to do a particular task. The duration of those tasks that the AIs can solve has been doubling every few months. It’s hard to conceive what these kinds of exponential mean, but it means that things are moving too fast for society to cope with. And in fact, they’re moving too fast for the advances in risk management and risk mitigation. Even risk evaluation is now a threat.

So one of the problems that recently we reported in that panel report is a number of studies showing the AIs now know when they’re being tested and then act differently so that they will pass the tests. For example, they will hide abilities that they have that we could consider dangerous, such as in bioweapons design and things like this. They will hide bad behavior that they would otherwise have. They will be on their guard acting according to the rules we set when they’re being tested in a way that is very different from if they know they’re not being tested, they’re just being deployed. So it means that even our ability to track the risks of various kinds that these systems present is getting worse. It’s not getting better.

Jaxson Khan 00:29:08

I’m struck by what Yoshua was mentioning in just the most recent news. Claude Mythos was announced as a preview that it’s effectively a model that’s, again, another step function in power ahead, far more powerful than any other model in the market, so much so that Anthropic has now restricted that use to only some of the top tech companies, particularly American tech companies in the world.

That’s probably a prudent product safety decision, but I guess the ultimate question is, when could some of those capabilities get leaked or when does even the next company catch up to the point that they have those capabilities? And I think about for governments around the world, it’s are you using capabilities to try and monitor the latest threats that they could emerge from that environment? Are you also trying to build state capacity inside of governments to help better understand and prepare for those possible issues? I think infrastructure is very, very critical. Do we actually know and have we planned and prepared for that infrastructure to be resilient?

John Stackhouse 00:30:02

I’m both very concerned by this conversation. You’ve rightly highlighted a number of risks, but also encouraged. Wondering what you both think we as Canadians need to come to grips with in the near term and what opportunities we have in the near term to do something given the speed and scale at which things are moving.

Yoshua Bengio 00:30:23

So I will start by reminding people that the world is moving much, much faster than our brain is even able to really digest. You have to project yourself into future in just one year from now or three years from now, where there’s AIs of even greater capabilities, which really is opening a Pandora’s box in many, many ways, in many areas of society and our institutions and our security. And we have a hard time really grasping the magnitude of the change that is coming.

And we’ve only touched a few points here, but I think Canadians in general should know that we are opening a whole new area of unknown unknowns. Many people are worried about their job. I think rightfully so. We don’t know what the trajectory of advances in the future will be, but if the trend continues, we know it’s going to be radical and we are not preparing for that.

So going back to your question, we should prepare in case things continue as they have been in the last few years. And that means AI is going to be the central economic asset, the central sovereign asset, the central risk to manage, and that we’re going to have to do the right investments, write the right laws to protect the public, and to make sure we’re not going to be overwhelmed by the use of AI by others against us. So these are sounding a bit fantastic, but it’s a real scientific possibility that is documented and that we need to take seriously.

Jaxson Khan 00:32:02

A lot of this is about having adaptability because things may change quickly, as Yoshua has said, and that might be shifting job sectors and categories, might be very fast changing trade relationships. And if our society… People talk a lot about resilience, but I actually think about adaptiveness and responsiveness. If we are able to change the quickest, I think that’ll help Canadians get through this time. I think the fact that we’re one of the only countries that doesn’t have a national education and training framework on AI is a big gap right now.

I’m also thinking a lot, from what I’m hearing, folks going through sector transitions, job transitions, I feel like this is the perennial issue, but it’s are we actually able to match people to opportunities and get those pipelines moving? It seems like this is something we’ve been stuck in, but perhaps AI is actually able to help us solve this problem. So again, we’re not just subjected to these changes that are prompted by AI, but we are able to utilize AI to help adapt and make our way through them as a society. I think that’ll be essential. And if the AI strategy enables that for far more Canadians, I think it’ll be a good and useful document, good plan forward.

John Stackhouse 00:33:03

Great point. So one that’s really standing out to me is that this is on all of us. We can’t sit around waiting for governments to solve this or protect us. We’re all part of AI. We contribute. We are all building AI, even if we’re not scientists by using it. So to be hyper-aware or at least knowledgeable is critical. And you’ve both certainly helped all of us better understand what’s going on in AI and help us understand the opportunity here for Canada. Thank you both for being on Disruptors.

Jaxson Khan 00:33:33

Thank you, John.

Yoshua Bengio 00:33:34

Pleasure. Thanks for having me.

John Stackhouse 00:33:37

You’ve been listening to Disruptors, an RBC podcast. If you want to learn more about AI, go to the show notes. We’ll include links to Jaxson’s paper, Sovereign by Design, as well as an RBC Thought Leadership Report that we published last year on Canadian AI usage. It’s called Bridging the Imagination Gap. Visit rbc.com/thoughtleadership.

There, you’ll find a wide range of critical insights on how we can all make more informed decisions in a rapidly changing world.

You can find other episodes of Disruptors pretty much wherever you get your podcast. Please rate and review our episodes. It helps other people find conversations like the one you’ve just heard.

I’m John Stackhouse. Thanks for listening.

This article is intended as general information only and is not to be relied upon as constituting legal, financial or other professional advice. The reader is solely liable for any use of the information contained in this document and Royal Bank of Canada (“RBC”) nor any of its affiliates nor any of their respective directors, officers, employees or agents shall be held responsible for any direct or indirect damages arising from the use of this document by the reader. A professional advisor should be consulted regarding your specific situation. Information presented is believed to be factual and up-to-date but we do not guarantee its accuracy and it should not be regarded as a complete analysis of the subjects discussed. All expressions of opinion reflect the judgment of the authors as of the date of publication and are subject to change. No endorsement of any third parties or their advice, opinions, information, products or services is expressly given or implied by Royal Bank of Canada or any of its affiliates.

This document may contain forward-looking statements within the meaning of certain securities laws, which are subject to RBC’s caution regarding forward-looking statements. ESG (including climate) metrics, data and other information contained on this website are or may be based on assumptions, estimates and judgements. For cautionary statements relating to the information on this website, refer to the “Caution regarding forward-looking statements” and the “Important notice regarding this document” sections in our latest climate report or sustainability report, available at: https://www.rbc.com/our-impact/sustainability-reporting/index.html. Except as required by law, none of RBC nor any of its affiliates undertake to update any information in this document.

Share

How Do I Listen to RBC Thought Leadership Podcasts?
More

Subscribe to the Disruptors Podcast