What learning AI as a leader really looks like
Author: Melissa O’Rourke
Posted on Apr 20, 2026
Category: News

There’s no shortage of advice telling leaders to adopt AI and transform their organizations. It sounds straightforward. It isn’t.
When you’re already operating at full capacity, learning something entirely new, especially something technical, can feel unrealistic. So, if you’ve seen the headlines that make AI adoption look simple, let me be honest: I’m still figuring it out as I go.
I'm in my mid-thirties and part of the age group that's seen as having a natural comfort with technology, but I don’t come from a technical background. I'm an economic development specialist; my expertise is in strategy, operations, and leading teams, not in writing code or configuring systems.
I've come to believe that technology literacy is no longer optional for leaders, but a core part of the job. If leaders don’t engage with it themselves, why would anyone else?
Research backs this up: studies show that employees learn tech adoption behaviours from their leaders and peers, and that leaders who demonstrate their own learning, as opposed to mandating training, are far more effective at driving real change within their organizations.
So, I've been trying to engage with it. Imperfectly, but consistently.
Right now, that looks like blocking a small window of time on Friday afternoons for AI learning. I treat it like going to the gym. You don't always want to go, and there are a dozen other things competing for that time, but you need to do it to stay healthy.
It doesn’t always happen. Meetings creep in and deadlines press. When that time disappears, I try to find an alternative path to learning: listening to a podcast in the car, or having a conversation with an AI tool over my morning coffee.
I'm not pretending this is easy. There are days when it's tiring, days when learning feels like too much on top of everything else. But I keep coming back to it because I genuinely believe it will help me work more effectively in the long run.
My first interaction with AI was about three years ago. I was making chili and wondered if ChatGPT could suggest some unique ingredients; it recommended cocoa powder. It was a hit. At that point, it simply replaced Google as my go-to search tool.
From there, I started using it for small, personal questions. Which air conditioners are people using in their RVs? What would my bedroom look like with a new dresser? Over time, as the responses proved helpful, I began bringing it into my work.
Now I use AI to support research, structure ideas, and build simple internal tools. Recently, I set up a basic chatbot to help navigate internal policies and answer common staff questions. Nothing too complex, but it worked and I understood how it worked.
It's been years since that first question about a recipe, and only recently that I’ve started carving out dedicated learning time. The learning is slow and progress isn’t linear. Sometimes it comes easily, but other times I feel completely out of my depth.
But progress, while a little messy, is happening, and I'm starting to feel like my understanding and comfort are picking up speed.
I've been testing different tools and have learned they each have different strengths. When I wanted to visualize a design idea, Gemini was better for image generation. When I built the policy navigation chatbot, a mix of Copilot Studio and Claude handled it. That chatbot took about two hours to set up and mostly involved copying and pasting links. It breaks sometimes when it's not in test mode, and I'm still figuring out how to fix that.
I’ve also started experimenting with advanced ideas, like AI systems that can take action on your behalf. I haven’t fully crossed into this world of “agentic AI” yet, because the fear is real.
What if I download the wrong thing and give something nefarious access to my computer? What if I accidentally expose my passwords? These aren't irrational concerns. Data protection and privacy matter, and institutional policies exist for good reason. I'm not trying to bypass those concerns; I'm trying to understand them well enough to move forward responsibly.
Balancing curiosity with caution has been one of the hardest parts.
I'm not sharing this because I have it figured out. I don’t. And I’m not sharing this to suggest there’s a right way to do this.
This is hard. The learning curve is steep when technology isn't your core area of expertise, and this kind of learning requires a different mindset than most of us are used to. To be successful, you have to give yourself grace: grace to learn slowly, to prompt poorly, to build things that break, to be inefficient, and to try things that don’t work.
That can be difficult and uncomfortable, especially in leadership roles where you’re expected to have all the answers. But I also don’t believe you can sit this out forever.
Not because the headlines say so, but because the nature of work is changing, and leaders who don't understand these tools will increasingly struggle to make good decisions about them for their organizations and for their people.
I don't expect to arrive in a place where I'm 100% technologically literate. I will never be “done” with this. There will always be something new to learn, something else evolving. It’s not realistic to keep up with everything.
But this feels like an area worth investing in, because over time, I think it will make me and my team more effective in the work we do. It’s also shaping how we think about the next chapter of work at the McKenna Institute: meeting people where they are in their own learning, with honesty.
For now, that investment looks small. A bit of time carved out where it can fit. A willingness to keep coming back to it, even when it’s frustrating or slow.
If you’re feeling behind and unsure of where to start, you’re not alone in that. Most of us are learning this in real time.
And for me, at least, that’s been enough of a reason to keep going.
Melissa O’Rourke is the Associate Executive Director of the McKenna Institute at the University of New Brunswick.