
I’ve been a Wealthsimple customer for over 10 years. I recommend them constantly. I’ve watched them go from a scrappy robo-advisor the old guard dismissed as a toy for millennials to a full-service financial platform managing $100 billion in assets for 3 million Canadians, three years ahead of their own targets. I think they’re one of the most important companies Canada has produced in a generation.
So when they publish something interesting, I pay attention.
Wealthsimple replaced the resume with a one-week challenge: build a working AI prototype. Of 1,152 applicants, they interviewed 20 and made 5 offers. The results raised fundamental questions about what traditional hiring actually measures, and whether the resume is the right tool for identifying people who can think and build in an AI-first environment.
Last week, Wealthsimple’s Chief People Officer Diana McLachlan dropped a post-mortem on their AI Builders hiring experiment. It’s a good read. Honest, specific, and a little uncomfortable in the right places.
I did notice it landed on a Friday. Companies bury things on Fridays. Bad earnings. Quiet layoffs. Stories they want to fade. So the question worth asking is: was this genuinely a transparency play from a company with a track record of doing things differently, or were there parts of this experiment that stung enough to require some careful timing? Reading the piece, I think it’s mostly the former. McLachlan names the mistakes directly and doesn’t spin them. But the Friday drop is worth keeping in mind as you read.
Here’s what they did, what they found, and why I think it matters.
The experiment
Wealthsimple gave people one week to build a working AI prototype instead of submitting a resume. The brief was open. Design something where AI does real work, and show where you’d draw the line between what the machine handles and what a person has to own.
1,152 people applied.
Let that land. Over a thousand Canadians spent meaningful time building a working system just to be considered. That’s not a statement about a job posting. That’s a statement about what Wealthsimple has become as a company. People want on this rocket ride badly enough to put in days of real work up front. You don’t get that kind of response unless you’ve earned serious gravity.
They reviewed all 1,152 submissions. Interviewed 20. Made 5 offers.
What people built
McLachlan writes that they never expected the range of what came back. People built tools for healthcare, education, legal workflows, civic infrastructure. Problems that had nothing to do with fintech, built by people who clearly cared about what they were trying to fix. Not demos. Working systems, with real thought behind where automation belongs and where it doesn’t.
How they evaluated
Every interview was 15 minutes and four questions. Break down your problem from first principles. How did you know your system was working as intended, not just running but actually producing reliable outputs? What tools did you use, and why? What’s the most interesting thing you’ve read about where AI is going?
The candidates who stood out could explain their problem from the root cause up, not the surface down. They knew their system’s edges. They’d made real choices about what not to build. And they had a clear answer for where AI stops and a human takes over.
That last part is important. This wasn’t a screen for AI enthusiasm. It was a screen for AI judgment. Very different hire.
What they got wrong
This is the part that makes the post worth reading. McLachlan doesn’t gloss over the failures.
The first rejection emails to candidates who didn’t make it to the interview phase weren’t good enough. When someone spends days building something real, they deserve better than a form letter. They course-corrected, but she’s direct that, from the start, the bar for how you treat candidates has to match the bar you set for the process itself.
The open brief was both the feature and the flaw. Giving people complete freedom to build whatever they wanted produced extraordinary range. But evaluating wildly different submissions across wildly different domains is genuinely hard, and she acknowledges they’re still working out whether a narrower prompt might serve the goal better next time.
The 15-minute interview window forced candidates to prioritize, which is a real skill. But she’s honest that some people had deeper thinking than the format surfaced. They lost things.
Scaling is an unsolved problem. Reviewing 1,152 submissions was manageable. Reviewing 5,000 would be a different question entirely.
This didn’t come out of nowhere
The AI Builders program makes more sense when you see it alongside Launchpad, Wealthsimple’s year-long program that hires high school graduates, no resume required, into paid roles on real teams doing real work.
The results there were striking. Managers kept saying the same thing: these interns operate with a level of technical agency that surprised them. They don’t wait for instructions. They identify problems and build solutions, sometimes using AI tools their managers haven’t fully explored yet. One intern built a tool to reduce hallucinations in an AI chatbot. Another was contributing production code within his first week. A third built a fully functional internal bot during an eight-hour hackathon.
Wealthsimple was honest about what didn’t work in Launchpad too. Some rotations were rushed. Managers needed more lead time. Structure mattered more than they initially assumed. They documented all of it and built those fixes into Launchpad 2.0. That’s a pattern worth noting: they run experiments, they publish the honest version of what happened, and they iterate.
Which brings me back to that Friday post.
Maybe the timing was just logistics. But if there’s a lesson here for other organizations, it’s this: the willingness to say publicly what went wrong is actually the most interesting part of what Wealthsimple is doing. Not the clever format. Not the numbers. The fact that a Chief People Officer wrote “our first rejection email wasn’t good enough” and put her name on it.
The bigger question
What are you actually learning from a resume? Where someone worked. What they say they did. Nothing about how they think, how fast they move, or whether they can ship something real when the problem is ambiguous and the constraints are tight.
Wealthsimple decided to just ask for the thing they actually wanted to know. 1,152 people answered.
Most hiring processes would have screened half of them out before a human ever looked at their work.
I’ve hired hundreds of people across 25 years in technology leadership. I’ve seen brilliant candidates get filtered out by keyword screens, and I’ve watched polished resumes walk through the door and deliver nothing. Wealthsimple’s approach wouldn’t work everywhere, and they’d be the first to say so. But the instinct behind it is right: stop asking people to describe what they can do and start asking them to show you. If I were building a hiring process today, that principle would be at the centre of it.