Agentic AI – The Next Frontier

AI is moving beyond passive outputs toward autonomous action. In this episode, John Stackhouse and Sonia Sennik explore Agentic AI, a new class of AI systems that can reason, plan, and take initiative with limited human oversight. These systems represent a major evolution beyond traditional and generative AI, capable of real-time adaptation and complex decision-making.

They’re joined by Adel El Hallak, Senior Director of Product Management at NVIDIA AI Enterprise, and Jacomo Corbo, CEO and Co-Founder of PhysicsX. Adel shares insights from his work delivering secure, scalable AI platforms for enterprise, while Jacomo draws on deep experience deploying AI in high-performance engineering contexts, including Formula 1 and advanced manufacturing.

Together, they unpack how agentic AI is already being deployed, the economic opportunities at stake, and the roadmaps and ethical considerations businesses need to navigate as AI agents become a force in real-world operations.

John Stackhouse: [00:00:00] Hi, it’s John.

Sonia Sennik: And I’m Sonia Sennik, CEO at Creative Destruction Lab.

John Stackhouse: This is Disruptors x CDL: The Innovation Era.

Sonia, today we’re talking about one of our favorite subjects, which is computer chips, semiconductors, as some like to call them, and the explosion in demand that we’re seeing, and frankly, we’re all part of, through generative AI. We’ve got a great guest from Nvidia. It is the global champion right now in the chips race.

But we’ll also hear more about what we can all do in this age of Gen AI to be more efficient and effective in whatever it is we do. And the conversation could hardly be more timely. Just in the last few days, Donald Trump has taken a plane load of tech executives to the Middle East to sell American technology to the Saudis as well as to other Arab nations.

In fact, there were $600 billion in [00:01:00] commitments to American AI companies, including Nvidia, which is getting ready to sell hundreds of thousands of AI chips to Saudi Arabia. And this may just be the beginning. We got another signal of that from Mark Carney’s new cabinet in which he named Evan Solomon as the first ever minister for AI and a bunch of other things. But what’s really cool is that Canada now has a minister of AI.

Sonia Sennik: And John, you’ll remember last year, Canada announced a $2 billion investment into AI compute and setting up a new safety institute. The Kingdom of Saudi Arabia’s economy is about 50% of Canada’s, so about a trillion dollars in GDP versus our $2 trillion in GDP.

So you might expect a comparable, billion-dollar investment, but like you just said, they’ve just committed $600 billion, and in November of last year, another hundred billion dollars. So the time is now for us to get engaged with this new wave of AI, generative AI and agentic AI. It [00:02:00] is not slowing down. The more we can experiment, the better.

John Stackhouse: We’ll hear a lot more about that term agentic, but it feels like chips are the new oil, and where better to see that than in the kingdom of oil, which is now a purchaser of chips. Not just for the sake of a trade balance, but because the Saudis are really determined to remake and reorient their economy, and to do it through the power of semiconductors and AI.

And that now is the challenge for Canada as well. How do we, as we think about reorienting our own economy, use these incredible technologies to rethink and reimagine industries and sectors, but also all of our organizations, which can become much more efficient, more effective, and frankly more global than we might have been in the past?

Sonia Sennik: We’ve chatted about this before, but to have compute, you require chips, and to power them, of course, you need energy. So that connection, energy equals compute equals intelligence, is [00:03:00] one to really pay attention to: where are the regions in the world where you have the intersection of these two things and leadership in both areas?

John Stackhouse: Well, let’s get at it. Our first guest is Adel El Hallak. He runs software product management for enterprise AI at Nvidia. He’s also a proud Canadian from Montreal originally, and we’ll hear more about that. In his role, Adel focuses on delivering microservices and blueprints that enable organizations to build production grade agentic AI systems.

Adel, welcome to the podcast.

Adel El Hallak: Thanks, John. It’s great to be here.

John Stackhouse: I’m so excited to talk about agentic AI and GPUs, and a bit about Nvidia, but I wanna start with you, ’cause you’ve had a really interesting journey. Like a lot of Canadians, a lot of Canadian techies, you started on one side of the border and you ended up in the Valley.

Take us back a bit in time and tell us about your own journey.

Adel El Hallak: Yeah, John. So I did grow up in Montreal. It’s what I continue to call home. Whilst at [00:04:00] McGill, I did my computer science undergrad degree. The very first internship I had was at the Business Development Bank of Canada, but like every Canadian grad, I had a stint at Nortel Networks.

It was a time when the economy as a whole was going through a downturn, but nonetheless, it was great to be part of a company that meant so much to Canada as a whole. But shortly after graduating, I wanted to venture away from Canada. I’d spent, you know, my entire undergrad still living at home. Dubai was a hot topic on campus, and I ended up landing a job with IBM and spent, you know, a good 10 years in that region.

And around 2007 or 2008, I was working in tech sales, architecting an opportunity for a large oil company, Saudi Aramco. It was in 2007, for one of their clusters, that they required something called a GPU. They send all these waves into the ground that come back up, and you have to visualize the waves to be able to identify [00:05:00] where oil resides.

And I had to go source GPUs. I had to go introduce myself to a company called Nvidia. And, lucky enough, we ended up winning that opportunity, and so I spent 10 years in that region. It was great. But then corporate came calling, right? And I came back to the US, and in 2015, IBM, Google, a company called Nvidia, and a company called Mellanox

all collaborated to start what we then called the OpenPOWER Consortium. And through that collaboration with Nvidia, it was a marriage meant to be. We collaborated with them closely for a few years, launched our first deep learning software business over at IBM, and then, just through the collaboration with Nvidia, the lure of living in California was too much to resist.

So in 2018, after living five years in Manhattan and working for IBM’s corporate office there, my family and I made the trip out to the Bay Area, and it’s been our home ever since.

John Stackhouse: The power [00:06:00] of Canadians going abroad. I wrote a book called Planet Canada about people like you, and anyone who’s listening and feeling a little hesitant about going out in the world:

stop with the hesitation, ’cause you’re a great story of someone going out, coming back, going out, staying very connected to Canada. Tell us a bit about the GPU business that Nvidia has become the emblem of. It is beyond a powerhouse; it is the global force in GPUs.

Adel El Hallak: There’s kind of two factors that go into that, right?

There’s certainly the technological dimension that we’ll speak to, for sure. But for the GPU and acceleration as a whole, our founders were great at recognizing the opportunity decades ago, and at sticking with it: accelerated computing, right, is going to fundamentally change the world.

And we were looking for the longest time for that killer app. It started in 2012, and it accelerated significantly in 2022. So 2012 [00:07:00] was the first time deep learning really made a dent in driving accuracy percentages significantly. But then you fast-forward a decade, and it was that ChatGPT moment in November 2022, when the world woke up to the power of large language models.

You saw a lot of creativity come about. It could write poems; it could generate imagery. You can get it to summarize long documents for you. You can get it to analyze, rewrite, or draft specific emails for you. And so that was a big moment, 2022, beginning of 2023. And since then, the pace has only accelerated, right? From the birth of large language models, to taking large language models and grounding them with your enterprise data, what we call retrieval augmented generation, to the hot topic today, which is agentic AI: building systems that can autonomously make decisions.

John Stackhouse: Let’s jump into agentic AI, because it’s [00:08:00] become a bit of a buzzy word. I’m sure most people have heard it in one form or another. Take us deeper into what it means. What’s this agentic thing?

Adel El Hallak: So large language models create generations in different modalities: write a poem, create a limerick, draft my email, use a text prompt to generate an image, et cetera.

Those large language models are trained on the world’s corpus. As an enterprise, I need to give those large language models access to my corpus of data, my own knowledge base. So this notion of retrieval augmented generation came to be. And what that really does is it takes your knowledge base and turns it into vectors that can be searched semantically.
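Adel’s description of retrieval augmented generation can be sketched in a few lines. This is an illustrative toy, not NVIDIA’s implementation: the `embed()` function below is a stand-in bag-of-words “embedding” where a real system would call an embedding model, and the documents are invented.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in "embedding": a bag-of-words count vector. A real system
    # would call an embedding model here.
    tokens = text.lower().replace("?", " ").replace(".", " ").split()
    return Counter(tokens)

def cosine(a: Counter, b: Counter) -> float:
    # Similarity between two vectors; this is the "semantic search" step.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank the knowledge base by similarity to the query and keep the top k.
    qv = embed(query)
    return sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model: retrieved chunks are prepended to the question.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The cafeteria opens at 8 a.m. on weekdays.",
]
prompt = build_prompt("What is the refund policy?", docs)
```

The grounded prompt, not the raw question, is what goes to the language model, which is how “a conversation about your data” works.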

And so you can have a conversation about your data. Agents build on that. And there’s a few things that happened in the last year or so that have enabled this. The first thing is this notion of reasoning models, and reasoning models are more advanced than large language models. Large language models will generate, and more often than not will not take action unless prompted.

Now you [00:09:00] can have a sequence of prompts, but more often than not, they’re unable to think through and rationalize more complex problems. Reasoning models came to be that are able to address multiple tasks and handle ambiguity. They’re able to go back, self-reflect, and check their work. In fact, if you use any of those reasoning models, NVIDIA’s got one, the Llama Nemotron family.

You see these models talking to themselves, like it’s talking to itself: hey, have I considered all these options over here? Maybe I should consider this. Oh, wait a second, that was a better path. Okay, let me stick to that. And so now you have these reasoning models that are able to rationalize through complex tasks.

John Stackhouse: Give us a couple of examples of how this is playing out in the real world.

Adel El Hallak: One of the fundamental changes that I’m seeing nowadays is, you know, in the past we treated AI as a tool. Now we’re seeing AI become more of a companion, and I kid you not, I’m seeing this happen on a personal front as well. My wife, my partner, has been using ChatGPT to help draft emails, [00:10:00] do fun, creative stuff, but increasingly I’m seeing her talk to ChatGPT. Voice AI has enabled a new mode to engage with these models. Now she’s not just typing prompts; I see her practicing, roleplaying with the AI, right? So nowadays you can give the AI a role to play, and the example there was a kerfuffle at school. Parents got involved, and the reason she was roleplaying is because AI can be objective, and you can have it coach you: hey, what’s a different perspective that I’m not considering as part of this?

It almost preps her for that conversation. In my personal day, I’ll give you examples. The first thing is my ability to do deep research. This is no longer just doing search-matching queries. It’s being able to understand an entire knowledge base, being able to rationalize and apply reason to it. And so on any given day, I do this at least three to half a dozen times, where I need to get analysis on a given topic or a given subject, or on a given dataset via my APIs, and tell the AI, hey, can you [00:11:00] identify anomalies and patterns for me in this dataset? It will come back and find some things that I’ve never thought to look into. There are some arduous tasks that some of my engineers hate doing.

As an example, we deliver what we call microservices. You give it an input, an output comes out, and it’s delivered as a container. Inside the container, there’s hundreds of libraries that make up that given microservice, and any one of those libraries can have a vulnerability that can be exploited.

And this is standard practice: you have to scan your microservices for vulnerabilities all the time. And so for a process that used to take engineers four, five, six hours, they have an AI companion, a security analyst that’s always on, that’s saying, hey, I believe this to be vulnerable for these reasons, here are the links for you to get to these websites, and here’s how I came up with the rationale.

The humans are in the loop; they’re ultimately the decision makers as to whether we patch it or we don’t, but the AI companion is helping them. And you can extend this ability to [00:12:00] go and query all sorts of different knowledge bases, websites, internal tools, right? Do a synthesis and present it to you with clear citations. That just makes me that much more productive.
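The shape of that always-on security-analyst companion can be sketched as below: scan a container’s installed libraries against an advisory feed and produce findings, with a rationale, for a human to review. The library names and advisory IDs here are made up for illustration; NVIDIA’s actual tooling is not shown.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    library: str
    version: str
    advisory: str
    rationale: str

# Hypothetical advisory feed: library -> (affected version, advisory id).
KNOWN_VULNS = {
    "examplelib": ("1.2.0", "ADV-0001"),
    "otherlib": ("0.9.1", "ADV-0002"),
}

def scan(installed: dict[str, str]) -> list[Finding]:
    # Flag every installed library whose version matches a known advisory.
    # The output is a recommendation, not an action: a human decides
    # whether to patch.
    findings = []
    for lib, version in installed.items():
        if lib in KNOWN_VULNS and KNOWN_VULNS[lib][0] == version:
            advisory = KNOWN_VULNS[lib][1]
            findings.append(Finding(
                lib, version, advisory,
                f"{lib} {version} matches advisory {advisory}; "
                "recommend patching, pending human review.",
            ))
    return findings

container = {"examplelib": "1.2.0", "safelib": "2.0.0"}
report = scan(container)
```

The human-in-the-loop property lives in the return type: the scan emits findings with rationales rather than patching anything itself.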

John Stackhouse: How should we as consumers be thinking about this as we interact more and more with agents and with agentic AI? And that includes the obvious concerns about safety and privacy.

Adel El Hallak: Yeah, it’s a fair and valid question. We’re in the dawn of a new era, but I go back to the AI companion, my AI teammate, analogy, because just like onboarding any employee, there are certain data sources that you’re gonna give your employee access to.

If you don’t trust that employee, or it’s not within their discipline or their job, you do not give them access to that data source. When somebody joins a company, when I join Nvidia, I gotta learn the cultural norms of Nvidia, right? I gotta understand its values, et cetera. We do the same thing with these AI companions, these AI teammates.

We train them, we ground [00:13:00] them in our values and our datasets, and at the same time, you gotta implement guardrails, the same way an employee is told: you cannot speak to these points, these are things that you shouldn’t be saying externally, right? The same guardrails are applied to the AI. Topical guardrails, right?

Like you see some of these chats that will tell you, oh, sorry, I can’t conversate about this topic, because I’m only supposed to stay within these lanes, right? So you can have topical guardrails, you can have safety guardrails, et cetera. So think of a human, think of an employee you’re onboarding. Be very careful what you’re giving them access to, because access control is super important.

And then implement the guardrails such that they remain within your values.

John Stackhouse: I like how you said we’re at the beginning, or the dawn, of a new era here. Where do you see it going over the next few years?

Adel El Hallak: Where is it headed in the next few years? Number one, all agentic systems require an interface. Today, a lot of those interfaces are written, chat-type interfaces.

Increasingly, you’re seeing these interfaces become [00:14:00] voice-enabled interfaces, because that communication is quite natural. And in a not-so-distant future, a lot of those interfaces are gonna be digital humans, digital avatars, your own avatar, and I think those are super powerful, right? Our ability to conversate opens up the aperture for a lot more folks to be able to engage with these AI systems.

The second thing is you’re gonna see us be able to tap into, understand, and reason through different modalities of these knowledge bases at higher accuracy rates. These agents are gonna be able to understand videos and different modalities, all happening at once. And I think the third piece is you need a flywheel, which is, you know, those thumbs up and thumbs down that we’re seeing increasingly in any engagements that you have.

Those are super, super valuable, ’cause every one of those clicks is a reinforcement, a hey, you’re doing the right thing, or you’re doing the wrong thing. I believe that in the future, our interaction, our ability to interface with these agents through natural language, the same way you and I are talking right now, is gonna let us tap into all sorts of different knowledge [00:15:00] bases across different modalities. Research synthesis, building training courses, managing my calendar, booking flights. We’re just scratching the surface, and it’s quite exciting what’s coming about.
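The flywheel Adel mentions is, at its simplest, an event log: every thumbs-up or thumbs-down becomes a reward signal attached to an interaction, and the positively rated examples can later feed fine-tuning or preference training. The field names below are illustrative, not any product’s schema.

```python
import time

FEEDBACK_LOG = []

def record_feedback(interaction_id: str, prompt: str, response: str,
                    thumbs_up: bool) -> dict:
    # Each click becomes a logged reward signal tied to the interaction.
    event = {
        "interaction_id": interaction_id,
        "prompt": prompt,
        "response": response,
        "reward": 1 if thumbs_up else -1,
        "ts": time.time(),
    }
    FEEDBACK_LOG.append(event)
    return event

record_feedback("abc-1", "summarize this doc", "Here is a summary...", True)
record_feedback("abc-2", "book me a flight", "I wasn't able to do that.", False)

# Positively rated interactions can seed a preference-training dataset;
# negatives flag where the agent needs guardrails or rework.
preferred = [e for e in FEEDBACK_LOG if e["reward"] > 0]
```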

John Stackhouse: So it all sounds quite wonderful. But of course, nothing comes for free. And one of the costs of agentic AI, as well as all those GPUs behind it, is just the enormous compute requirements, and that includes the energy requirements. How is this gonna play out so that all these GPUs that are doing all this work on our behalf don’t devour the entire energy capacity of the world?

Adel El Hallak: Yeah, I mean, it’s great that you bring that up. And I always talk about full-stack acceleration.

Yes, a lot of the world out there knows us for GPUs; that’s ultimately what we sell. But a large portion of engineers at Nvidia are working on software. The whole point of working on software is an [00:16:00] economics and efficiencies gain, which basically says full-stack acceleration translates to the best economics.

We wanna drive the highest tokens per second for the factory, but we want to do this at the most economical, lowest wattage possible, right? Because that’s what impacts your bottom line. Ultimately, what’s top of mind is how we are able to generate tokens as efficiently as possible.

Are you able to get the same type of accuracy with a much smaller model footprint? Full-stack acceleration, which means hardware plus software, will drive up efficiencies and drive down costs. And the second piece is using post-training techniques, fine-tuning, LoRA adapters, et cetera, to customize smaller models to beat the accuracy of larger models that require, you know, more compute in the process.
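The LoRA (low-rank adapter) technique Adel alludes to has a simple core: instead of updating a full weight matrix W during fine-tuning, you train a low-rank update W' = W + B·A, where B is (d, r) and A is (r, d) with r much smaller than d. A sketch with NumPy, using illustrative sizes, shows the parameter savings:

```python
import numpy as np

d, r = 1024, 8                      # hidden size and adapter rank (r << d)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))     # frozen pretrained weights (not trained)
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))                # B starts at zero, so W' == W initially

W_adapted = W + B @ A               # effective fine-tuned weight matrix

full_params = d * d                 # parameters to train in full fine-tuning
lora_params = d * r + r * d         # parameters to train with the adapter
savings = lora_params / full_params # fraction of parameters actually trained
```

With these sizes the adapter trains 16,384 parameters instead of over a million per matrix, which is why post-training a smaller model this way is so much cheaper in compute and energy.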

John Stackhouse: We’ve covered a lot of ground here and could keep going, but I wonder if you can sum up for our listeners: what are two or three of the most important things they should keep in mind when they think about [00:17:00] agentic AI?

Adel El Hallak: Number one is don’t think of this as a tool; think of this as a companion. A companion that specializes in a very specific use case, right?

So help me write my code, help me sort my calendar, help me address vulnerabilities that come up in our software. Number two is go deep with one use case before you scale to others. Think about the access controls you give it. Think about the guardrails you have to implement.

And the third piece is continuously look for efficiencies, right? Because those are gonna scale. Just because reasoning models are the thing to do, it doesn’t mean reasoning models are great for every use case. So always evaluate, make sure that you are using the minimum viable model, and don’t just use it ’cause it’s a hot buzzword.

John Stackhouse: I love that advice. If it’s not adding value, don’t consider it. Yeah, it may be fun to play with, but it’s gotta add value at the end of the day. Adel, thank you. Wonderful conversation. Really enjoyed it.

Adel El Hallak: Appreciate you, John. Thank you.[00:18:00]

Sonia Sennik: We are joined now by Jacomo Corbo, CEO and co-founder of PhysicsX, a company building powerful AI models for complex engineering and industrial applications. Jacomo was previously chief scientist at QuantumBlack and a partner at McKinsey, with deep experience in deploying AI across industries. He holds a PhD in computer science and has applied his AI expertise as the chief race strategist for the Renault F1 team.

Jacomo’s research helped Renault win double World Championships in 2005 and 2006. Jacomo, welcome to the podcast.

Jacomo Corbo: Well, thanks very much Sonia, John, for having me.

Sonia Sennik: So from Harvard, to founding QuantumBlack, to McKinsey, to now starting and scaling PhysicsX: what compelled you to become an entrepreneur and solve some of the world’s most challenging problems, like the energy transition?

Jacomo Corbo: It’s a very meandering path, so I still think of myself as an engineer. I had a passion for engineering from a very young age. I did my [00:19:00] undergraduate in electrical engineering, really in control theory, and at the tail end of my undergraduate I spent some time building steer-by-wire systems in Germany at Bosch, and then went off to do my PhD, and that was at Harvard.

And that’s how I got into the world of computer science. A lot of the things that I was doing were very much on the theoretical side. What pulled me back into the real world, into empirical things, was finding my way into Formula One at the tail end of my PhD, through an engineering competition that took place that Renault was hosting.

I got to know some of the team, and they said, look, we think that a lot of what you’re doing is incredibly relevant to problems around race strategy. I ended up becoming the chief race strategist of the Renault F1 team, and it set me off on a bit of a journey. I came into an incredibly sophisticated engineering world, which is an F1 team, right?

You have people who know and understand aerodynamics incredibly deeply, who understand vehicle dynamics, who [00:20:00] understand materials incredibly deeply. But in all of these different areas of expertise, they only really understood how to model things and handle data in very traditional ways. So it certainly wasn’t taking advantage of techniques that computer scientists take for granted.

And that was very much the thesis for starting QuantumBlack, a machine learning engineering services company. We worked with huge, you know, anchor clients, including Formula One teams and Boeing, and our desire at QuantumBlack was to stay very horizontal and cross-industry. The reality is that things are so much more advanced now, but also, I think the story of how the technology has evolved is that a lot of development has moved up the stack. It’s become a whole lot more democratized, a whole lot more consumable by software engineers. It’s become a form of software engineering, in ways that offset the need to have people who are very [00:21:00] deep experts in very specific AI methodologies.

John Stackhouse: Jacomo, what advice do you give people when they’re thinking about how to land these ambitious technologies in their own backyard?

Jacomo Corbo: I think the very first thing is to start implementing, to start doing these things at some kind of scale, to really think about deploying this technology in ways that can drive internal productivity.

Right? Like, the easiest example right now for a lot of organizations is getting leverage from generative code-productivity tools, things that can make your software developers more productive. Just using these things out of the box can buy you productivity, and that productivity should be something that you are able to measure.

I see a whole lot of organizations that are really throttling consumption, trying to get it to a very small cohort of developers that have access to these tools, because ultimately what they’re [00:22:00] thinking about is this is a new line item. This is something which is going to increase cost. And there are cost controls on software in any large mature organization.

That makes a whole lot of sense. But with this technology, you really have to move into implementation. I think you have to force yourself to do things at a scale where you can really start measuring the outcome. The productivity is there, but again, you’re gonna have to become a little bit more sophisticated around how you measure performance.

I think there’s a certain bias to action required in terms of implementation, as well as a discipline towards measurement that’s also, you know, an important prerequisite.

John Stackhouse: Love that bias to action and have the discipline to measure what that action leads to. Where are you seeing the most progress or most success across the economy?

Jacomo Corbo: There is a lot of relatively complex knowledge work that can be accelerated, right? So I gave the example of software engineers, absolutely. But there’s a [00:23:00] lot of other very horizontal functions, whether it’s in procurement or accounts receivable and accounts payable processes, where a lot of these tools are incredibly helpful. Trying to find opportunities around spend reduction is something that a whole lot of organizations have invested a huge amount of infrastructure in. But one of the areas that I’m most excited about, obviously, given everything that I’m doing with PhysicsX, is industrial applications. I’m really talking about what engineers do, and what the work of engineering, of designing things, involves: making them, testing them, before you can manufacture things. Whether it’s a utility, whether it’s making steel or aluminum, the systems underpinning how that work gets done are incredibly ripe for a transformation.

Sonia Sennik: The point that you’re [00:24:00] making so well is that it’s ripe for transformation, in that generative AI, or agentic AI as well, can come in and be a dynamic contributor, making decisions, adjusting, and aggregating learnings at a much faster rate. Can you speak a little bit to some specific examples of seeing that in action right now?

Jacomo Corbo: Absolutely. So take a certain design, let’s say we’re talking about the exterior shape of a vehicle. We wanna understand how it performs at higher speeds, and the efficiency of that vehicle in cutting through the air, the drag coefficients. We want to be able to assess how that structure, made of a certain material, will deform under loads in a crash test.

These things are different simulations. They’re incredibly compute-intensive, but they also mean that the design runs through engineers who are deeply knowledgeable about those different performance criteria, and who are ultimately trying to select down to the most important, the most informative, the [00:25:00] small number of experiments that they will run, that will allow them to do those iterations and get to a better design.

On the other hand, you have testing, which in many cases runs through building physical prototypes and then crashing them into a wall, for example, or building an airfoil and trialing it in the wind tunnel for 50,000 hours. You need a lot of infrastructure. Things take time. It’s incredibly expensive. And those iterations, insofar as they run through tooling to make physical things, are incredibly slow.

Part of the revolution that’s happening right now around AI for physics, for chemistry, is that these models can be trained on a corpus of data which is mostly numerical simulation, but they can also be trained on real-world data, on experimental data, on test-bench data, on wind-tunnel data, in ways that now allow you to get the best of both worlds.

You get models that are incredibly fast, which allows you to do automation and optimization that is altogether [00:26:00] impossible if you were only doing this with numerical simulation. And at the same time, they’re more accurate than our first-principles understanding of the phenomena involved.

They’re a better, higher-fidelity representation of what actually happens in the crash.
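The surrogate-model idea Jacomo is describing can be sketched very simply: sample an expensive simulator a handful of times, fit a cheap model to those samples, and then run design optimization against the fast surrogate instead of the solver. The analytic "simulator" and the quadratic fit below are toy stand-ins; PhysicsX-style systems use deep networks trained on simulation plus test data.

```python
import numpy as np

def expensive_simulation(x: float) -> float:
    # Stand-in for a CFD or crash solver: drag as a function of one design
    # parameter. Real solvers take hours per evaluation.
    return (x - 0.3) ** 2 + 0.05

# Sample the simulator sparsely: these few runs are the costly part.
xs = np.linspace(0.0, 1.0, 9)
ys = np.array([expensive_simulation(x) for x in xs])

# Cheap surrogate fit to the samples (a quadratic here, for illustration).
coeffs = np.polyfit(xs, ys, deg=2)
surrogate = np.poly1d(coeffs)

# Optimize the design against the surrogate: thousands of evaluations cost
# almost nothing compared with one more solver run.
grid = np.linspace(0.0, 1.0, 10001)
best_x = grid[np.argmin(surrogate(grid))]
```

The speedup is the whole point: once trained, the surrogate answers in microseconds, which makes the automated design iteration Jacomo describes feasible at all.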

Sonia Sennik: Jacomo, my last question would be around what you see specifically for the role of agentic AI in these systems of strategy, decision making, and design. Where do you see the biggest potential impact?

Jacomo Corbo: It’s a great question.

I would say that the frontier is moving so quickly that I wouldn’t put a limit on where exactly to apply this kind of modality, right? I wouldn’t put anything out of bounds. I think there’s an imperative for organizations to really start doing things, to start experimenting with this, to really understand what is working and how well things are working, because that will allow you to understand where things are falling over, not meeting requirements.

It [00:27:00] will tell you whether or not you need to do things like better prompt engineering, or whether you need to do fine-tuning; and where things are working incredibly well, you want to be able to do more of that. I’d say we’re going to get to a place where it’s relevant for all categories of knowledge work: people who are operating at desks, but also people who are sitting in plants, manufacturing operators, process engineers, drivers, right? This is relevant in the built environment. It is going to change the way that we work. I think the opportunity is incredibly exciting, and it can ultimately make work more interesting, more compelling.

So many of the organizations that I am in contact with are resource constrained in such important ways, and this is a mechanism through which you can relax a lot of those constraints and do more.

John Stackhouse: Those are great points: make work more interesting, more compelling, and, I love the way you phrased it, relax constraints. I can’t think of an organization that wouldn’t want to pursue that. Jacomo, thank you for the [00:28:00] conversation. This has been really inspiring.

Jacomo Corbo: Thanks very much, Sonia. John.

John Stackhouse: Sonia. As we were discussing in the introduction, this feels like the beginning of a new economic era. I shake my head thinking about the president of the United States going to Saudi Arabia to sell computer chips and American technology to help the Saudis transform their economy.

That’s just one of many things underway on the planet that are shaping the economy of the 2030s. And beyond.

Sonia Sennik: Well, John compute is just one piece of the puzzle. As we all know, adopting AI and managing change within enterprises is a really challenging problem. So thinking carefully about where AI can be implemented to prove productivity gains is an essential piece of the puzzle, and what I’m looking forward to seeing is the way in which this is harnessed and how people can adjust their processes to actually speed up that adoption cycle. [00:29:00] Because as you mentioned, this is a transformational opportunity, but there’s many, many aspects that need to change in order for it to really make impact.

I’d like to look back and tie that investment in compute and in chips directly to productivity gains and GDP growth. If we’re able to see that very clear line, then we’ll really understand this intelligence revolution that we’re in,

John Stackhouse: and it is an intelligence revolution. It’s not just artificial intelligence.

In fact, artificial intelligence works best when paired with human intelligence. So lots more to talk about in terms of what we can all do a little bit differently, more creatively and more ambitiously. This is Disruptors, an RBC podcast. I’m John Stackhouse.

Sonia Sennik: And I’m Sonia Sennik.

John Stackhouse: Talk to you soon.

This article is intended as general information only and is not to be relied upon as constituting legal, financial or other professional advice. The reader is solely liable for any use of the information contained in this document and Royal Bank of Canada (“RBC”) nor any of its affiliates nor any of their respective directors, officers, employees or agents shall be held responsible for any direct or indirect damages arising from the use of this document by the reader. A professional advisor should be consulted regarding your specific situation. Information presented is believed to be factual and up-to-date but we do not guarantee its accuracy and it should not be regarded as a complete analysis of the subjects discussed. All expressions of opinion reflect the judgment of the authors as of the date of publication and are subject to change. No endorsement of any third parties or their advice, opinions, information, products or services is expressly given or implied by Royal Bank of Canada or any of its affiliates.

This document may contain forward-looking statements within the meaning of certain securities laws, which are subject to RBC’s caution regarding forward-looking statements. ESG (including climate) metrics, data and other information contained on this website are or may be based on assumptions, estimates and judgements. For cautionary statements relating to the information on this website, refer to the “Caution regarding forward-looking statements” and the “Important notice regarding this document” sections in our latest climate report or sustainability report, available at: https://www.rbc.com/community-social-impact/reporting-performance/index.html. Except as required by law, none of RBC nor any of its affiliates undertake to update any information in this document.
