

Canva Now Requires Use of LLMs During Coding Interviews
An anonymous reader quotes a report from The Register: Australian SaaS-y graphic design service Canva now requires candidates for developer jobs to use AI coding assistants during the interview process. [...] Canva's hiring process previously included an interview focused on computer science fundamentals, during which it required candidates to write code using only their actual human brains. The company now expects candidates for frontend, backend, and machine learning engineering roles to demonstrate skill with tools like Copilot, Cursor, and Claude during technical interviews, Canva head of platforms Simon Newton wrote in a Tuesday blog post.
His rationale for the change is that nearly half of Canva's frontend and backend engineers use AI coding assistants daily, that it's now expected behavior, and that the tools are "essential for staying productive and competitive in modern software development." Yet Canva's old interview process "asked candidates to solve coding problems without the very tools they'd use on the job," Newton admitted. "This dismissal of AI tools during the interview process meant we weren't truly evaluating how candidates would perform in their actual role," he added. Candidates were already starting to use AI assistants during interview tasks -- and sometimes used subterfuge to hide it. "Rather than fighting this reality and trying to police AI usage, we made the decision to embrace transparency and work with this new reality," Newton wrote. "This approach gives us a clearer signal about how they'll actually perform when they join our team." The initial reaction among engineers "was worry that we were simply replacing rigorous computer science fundamentals with what one engineer called 'vibe-coding sessions,'" Newton said.
The company addressed these concerns with a recruitment process that sees candidates expected to use their preferred AI tools, to solve what Newton described as "the kind of challenges that require genuine engineering judgment even with AI assistance." Newton added: "These problems can't be solved with a single prompt; they require iterative thinking, requirement clarification, and good decision-making."
Seems reasonable (Score:2, Interesting)
AI won't replace you, but people who use AI will.
Re: Seems reasonable (Score:3)
Re: (Score:3)
Knowing how to phrase things w/ sufficient detail (Score:3)
Hi person that uses AI, I have been asked an interview question. I happen to know that the answer is 42. What question do I ask my LLM to make it spit out the correct answer?
p.s. Please answer quick, I only have 10 minutes left in the interview!
That's the skill set they are testing for: do you know how to phrase and rephrase the question so that the AI provides the answer you are expecting? Expecting because the topic/answer is also in your data structures and algorithms textbook, which you skimmed through as part of your interview prep. But just as using a table of contents or an index is a skill for using a book, phrasing questions is a skill for AI use. It takes a little practice to get your phrasing effective.
As an added bonus, know how to phra
Re: (Score:2)
Hi person that uses AI, I have been asked an interview question. I happen to know that the answer is 42. What question do I ask my LLM to make it spit out the correct answer?
p.s. Please answer quick, I only have 10 minutes left in the interview!
That's the skill set they are testing for: do you know how to phrase and rephrase the question so that the AI provides the answer you are expecting? Expecting because the topic/answer is also in your data structures and algorithms textbook, which you skimmed through as part of your interview prep. But just as using a table of contents or an index is a skill for using a book, phrasing questions is a skill for AI use. It takes a little practice to get your phrasing effective. As an added bonus, knowing how to phrase things with enough detail to get the AI heading in the right direction is a skill similar to writing the documentation/requirements that guide developers towards building what users actually need.
As I experiment with AI I agree with your assessment. But unless I'm catching selective dementia, the AI is doing something really strange. What I'm doing isn't committed to memory. In normal activities, I can remember the process, the theory, and mechanics of the activity and the answers, just like I always have.
With AI, the next day, I've forgotten all of it. Good thing I can type all the prompts in again.
So knowledge, skill, and efficiency is no longer needed. You only need know enough to write a p
Re: (Score:2)
But unless I'm catching selective dementia, the AI is doing something really strange. What I'm doing isn't committed to memory. In normal activities, I can remember the process, the theory, and mechanics of the activity and the answers, just like I always have. With AI, the next day, I've forgotten all of it. Good thing I can type all the prompts in again.
So knowledge, skill, and efficiency are no longer needed. You only need to know enough to write a prompt.
If I look up something in my old textbook, that textbook is designed to teach. If I ask an AI about something, it is designed to answer. I wonder if that has something to do with your observation?
Re: (Score:2)
But unless I'm catching selective dementia, the AI is doing something really strange. What I'm doing isn't committed to memory. In normal activities, I can remember the process, the theory, and mechanics of the activity and the answers, just like I always have. With AI, the next day, I've forgotten all of it. Good thing I can type all the prompts in again.
So knowledge, skill, and efficiency are no longer needed. You only need to know enough to write a prompt.
If I look up something in my old textbook, that textbook is designed to teach. If I ask an AI about something, it is designed to answer. I wonder if that has something to do with your observation?
I think you could be right on that, as when I read a textbook, I digest the material pretty well. And speaking of that, I'm a copious note taker, but I don't look at them after taking them, because the act of writing them down plants them in my memory.
Re: (Score:2)
I happen to know that the answer is 42. What question do I ask my LLM to make it spit out the correct answer?
Sure, I'm going to need a time machine and a planet materializer. Also a lien on the space around Earth so it doesn't get blown up to make room for a hyperspace freeway.
Re: (Score:2)
What is six by nine?
Re: (Score:2, Interesting)
Whether you are in that 20 to 30% is a coin flip largely based on whether he likes you personally and not the quality of your work.
Re: (Score:1)
Actual number from people selling insurance: 2.8%, i.e. about one hour per week. That is so low it does not matter.
Re: (Score:2)
That requires investigation skills and understanding the context.
Something that so-called AI cannot do.
Re: (Score:1)
So I've dealt with badly written code for most of my professional SW career (15 years), and these days I work at a company that makes test instruments... we have many legacy embedded devices, and LLMs have transformed how I work. It's like having a new bit driver that augments your old screwdriver set.
LLMs are chat bots. They get wound up by whatever you get juicing through them. If you're trying to fi
Re: (Score:2)
No but your productivity will increase by 20 to 30% allowing your boss to lay off 20 to 30% of his programmers.
So far, AI is helping my company ship more features with the same headcount. If that leads to more business, they might even hire more people. It will be interesting to see how this goes.
Re: Seems reasonable (Score:2)
You've heard of two men enter, one man leaves?
Re: (Score:2)
Is it a gardening joke?
Re: (Score:1)
No. You are irreplaceable, and we love you.
Re: (Score:1)
Re: (Score:2)
Hahahaha, no. But some companies that think that will be prematurely going out of business at some time in the future.
Re: Seems reasonable (Score:2)
Frankly, having played with some of the supposedly better reasoning/coding models... no they won't. Good god, the crap that stuff produces is terrible.
Re: (Score:1)
Right, because technology never improves
Re: Seems reasonable (Score:4, Interesting)
Had a perfect example recently of why AI will never fully replace developers, at least not until there is AGI that can function equivalently to a human being. Had a locking issue in my code, and while talking with a coworker he ran it through Cursor to see what it came up with to "fix" the problem. All it did was rewrite my code in a way that was slightly more readable, yet eliminated subtle things the code was doing, which in essence made it completely non-functional. On top of that, it didn't solve the problem either, because the ACTUAL issue was a consequence of how an underlying service functioned, which undermined how the locking was intended to work. I fixed the issue, but an LLM would NEVER have figured that out, not without so much prompting that you would have effectively figured out the solution yourself.
Seems suddenly common (Score:5, Informative)
My limited sample: I have 3 long-time engineering friends currently looking for jobs, and they've all been asked to use AI tools of some sort in all of their recent interviews.
Weirdly, most of those cases have sprung the requirement on the person by surprise midway into the interview.
Re:Seems suddenly common (Score:5, Interesting)
I retired two years ago from software development, and I am so glad that I did. I hear from friends still in the industry that this AI bullshit is being pushed on them everywhere.
Retiring from IT in general seems like a good idea (Score:5, Insightful)
I can't say there's a single thing I enjoy about any of this any more. I think I understand why so many IT people retire early and take up farming.
Re: Retiring from IT in general seems like a good (Score:3)
Farming is next to impossible; John Deere will prevent you from fixing your machines, because only they can reprogram your tractor after a repair.
Re:Retiring from IT in general seems like a good i (Score:5, Informative)
I suspect it's going to be important but I wonder if these companies that are demanding it are being premature.
I remember when syntax highlighting came in. For a good while I disabled it as I found it distracting but over time I got used to it and now I prefer to have it on (although I'm still not greatly inconvenienced if I don't have colour)
I remember when LSPs became available. For a while I didn't bother but now I find it hard to work without one configured (in vim) although I do still resort to grep/sed etc, especially where the LSP doesn't work in some C++ where type deduction is hard.
And now LLMs. I use them via a web prompt and they can save considerable time where I'm looking something up. But they've also cost me considerable time where I've relied on the answer which, once I actually spent the five minutes reading the (long) manpage, was so obviously wrong that I'd be concerned about any programmers long term success and I'd certainly be trying to limit their access to any code that I'm responsible for!
I think more than anything it's this utterly absurd wrongness that they deliver with such verisimilitude that makes them currently dangerous.
But:
Re: (Score:3)
They will invent all kinds of completely wrong shit.
That being said, I do use them in code assistance capacity.
I find them pretty useful.
I've had some pretty clever refactors produced by them. Of course- I wouldn't trust them in anything important without going over every single line myself.
Right now, I find their most useful aspect being their ability to produce a throwaway "app" in moments.
I wante
Re: (Score:3)
And now LLMs. I use them via a web prompt and they can save considerable time where I'm looking something up. But they've also cost me considerable time where I've relied on the answer which, once I actually spent the five minutes reading the (long) manpage, was so obviously wrong that I'd be concerned about any programmers long term success and I'd certainly be trying to limit their access to any code that I'm responsible for!
I think more than anything it's this utterly absurd wrongness that they deliver with such verisimilitude that makes them currently dangerous.
The great seeming confidence that AI has when giving a wrong answer reminds me of how people get in trouble blindly following their GPS devices.
And there is the horrible problem. As actual knowledge and understanding of what the person is trying to do becomes irrelevant, and whatever the LLM spits out becomes the "truth" (especially once AI starts referencing itself), it's gonna get ugly until people who actually know what they are doing come along and pick up the pieces.
Re: Retiring from IT in general seems like a good (Score:2)
Yep. They go farming and start automating their farms.
Re: (Score:2)
Yep. That is why I am back to Ant Farming these days. ;)
Re:Seems suddenly common (Score:5, Interesting)
You know where it isn't? Anywhere that has safety regulation, or is sensitive about proprietary information.
They know the results of "vibe coding" and aren't interested at all.
Re: (Score:2)
Indeed. Such a surprise.
Re: (Score:3)
Places that produce code that must conform to safety regulations already treat each and every author as if they were an LLM: completely untrusted.
Also, corporations send off proprietary information to all kinds of places outside our walls. We rely on contracts and law to do that safely.
You're quite literally talking out your ass. [airbus.com]
Why do you do it?
Re: (Score:2)
Oh, so the job where I work on actual systems that go into actual aircraft, with Airbus as a customer, doesn't have all kinds of safety certification requirements to ship, because you say so.
Thanks for letting us know by linking to some marketing horseshit that doesn't actually say in any concrete way what they use it for today, but instead what they "might" use it for.
Here's a hint: what Airbus puts on their website is materially different than what they put into their product and systems requirement
Re: (Score:2)
Oh, so the job where I work on actual systems that go into actual aircraft, with Airbus as a customer, doesn't have all kinds of safety certification requirements to ship, because you say so.
lolllll
How fucking sad is your existence that you have to make shit like that up on the internet?
Why would you think you know better than someone who actually works in avionics, what does and doesn't "fly" in avionics development?
Every great engineer needs someone to empty their garbage can. I'm glad that somebody has you.
Re: (Score:2)
Again, I'm glad you're here to tell me what my job is.
Fucking idiot.
Re: (Score:2)
Re: (Score:2)
I've been programming for decades, and have run my own software business for 25 of those.
25 decades and still going strong! Good genes, or a pact with the devil? Also, computers have been around for a LOT longer than I ever realized... ;-)
Re: (Score:2)
I hear from friends still in the industry that this AI bullshit is being pushed on them everywhere.
It strikes me that all programmers using AI to help with coding are also training the AI to ultimately replace them. So I'm not surprised that the "AI bullshit" is being pushed so hard.
Re: (Score:2)
Weirdly, most of those cases have sprung the requirement on the person by surprise midway into the interview.
Which would cause me to terminate the interview at that moment.
Re: (Score:2)
GIGO... (Score:5, Insightful)
Yes, AI can give you code, drawings, text, and other stuff... but it will confidently spit out stuff that isn't intended or just plain wrong. I've had it attribute nonexistent songs to real bands, write code with function arguments that are not implemented, and so on.
How about an interview where one is asked to code something to fulfill a task, and who cares how it is done, but the interviewers want to see the process? Or is the addiction to cheap body-shop contracting firms too great to ask for this?
Re: (Score:2)
Re: (Score:2)
I've had it attribute nonexistent songs to real bands, write code with function arguments that are not implemented, and so on.
It will confidently spit out functions that don't exist and give you a detailed explanation of how they supposedly work. Anyone who trusts it is mentally deficient. No shortage there, though.
Re: (Score:2)
How about an interview where one is asked to code something to fulfill a task, and who cares how it is done, but the interviewers want to see the process? Or is the addiction to cheap body-shop contracting firms too great to ask for this?
Did you read past the headline? (Of course not—this is Slashdot.)
Because what you’re describing—an interview where the candidate solves a real task and the interviewers observe how they solve it—is exactly what Canva has implemented. They retired the “invert a binary tree on the whiteboard and then find the RREF of this matrix” charade. Now they present open-ended, ambiguous problems that can’t be solved by copy-pasting a prompt into ChatGPT. The goal isn’t t
Re: (Score:2)
My tone just reflects what I see. People being sold on flashy stuff. Ten years ago, all problems were solved by blockchain solutions, now it is AI. A body shop using LLMs is often sold to a company... and it doesn't fit their needs. It is a hammer used to go after all nails right now, instead of actual solutions.
Being able to "master" a chatbot only gets you so far. You might be able to get it to do some code... but if one doesn't know what the code does, it might be a security issue waiting to happen.
You must have 25 years experience with Cursor AI (Score:3)
Sure, it has only been around for less than a year, but we know the ideal candidate exists.
What no testing? (Score:3, Funny)
It's Canva so testing isn't a priority.
Re: (Score:2)
On the bright side, their new "programmers" (Score:2)
...are probably good at flipping burgers and pulling the fries when the bell goes "ding".
Easy (Score:1)
Do not apply there and do not depend on their products continuing to work.
Code AI is stupid... (Score:2)
As someone who has been programming since sometime in the 80s and works on fairly large C++ codebases, I wouldn't trust code AIs like Copilot at all and certainly wouldn't use one to write code for me.
Re: (Score:2)
Code AI is still basic. First, I would not use Copilot. Then, I would not do vibe coding. After that, you can let AI help you. You'll learn what your model can and cannot do, and then just don't ask it for the things it gets wrong, just like you wouldn't ask your junior colleague the complicated questions or expect them to know the codebase by heart.
I think whoever says AI will never do programming well is naive. Programming contains a lot of patterns, good programming usually more than bad programmi
have interviews ever tested the right thing? (Score:3, Interesting)
Re:have interviews ever tested the right thing? (Score:4, Interesting)
Re: (Score:3)
Re:have interviews ever tested the right thing? (Score:4, Interesting)
Competent developers are getting really difficult to find.
We've made a trial hire who is a very LLM-focused developer. It's been a pretty weird experience.
Dude literally can't fucking talk to you without consulting ChatGPT in the middle of a conversation to produce arguments for why he thinks he's right. The final evolution in instant echo chamber.
That being said, he's a good kid, and he has produced some shit.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Actually, when it comes to AI and the claims made about it, people haven't been as rigorous as they could have been. We have the scientific method; start using it to determine just how effective AI is. Lay this issue to rest.
I recall a comment... (Score:5, Funny)
I recall a comment by a programmer saying: "I asked AI to refactor my code. Now it's clean, elegant, and nothing works."
Why not? (Score:2)
They are tools; asking a candidate to show how they use tools is fine, on the face of it.
You can demonstrate whether or not you know how to use them intelligently and carefully.
Re: (Score:3)
Re: (Score:2)
very appropriate (Score:4, Insightful)
I fail to understand why so many people here have negative opinions about AI coding assistance. I chalk it up to a basic resistance to change.
"essential for staying productive and competitive", that is so definitely true. Working with AI is a skill you have to learn, and I would much rather hire someone who knows and likes it than some curmudgeon who will gripe about it every step of the way.
Meanwhile, yesterday I spent a few hours directing AI to evaluate several specific sections of existing code for me and suggest improvements. I didn't like everything it offered but several were right on the money. Some minor security holes I had missed, improvements in elegance and readability and flexibility. It made the changes quickly, I ran my unit tests before accepting them, and there was minimal hassle or effort on my part.
Re: (Score:2)
I fail to understand why so many people here have negative opinions about AI coding assistance.
You have to be smart enough to use it correctly and understand its limitations.
Re: (Score:2)
Said about every tool ever. So what has changed then?
Re: (Score:1)
I fail to understand why so many people here have negative opinions about AI coding assistance. I chalk it off as a basic resistance to change.
Resistance to change is certainly a part of it. But also, some of these people are so old it's actually hard for them to learn something new. I work with people that can't figure out their iPhone. There's no hope for them to understand and utilize something this different.
Re: (Score:2)
> I work with people that can't figure out their iPhone
I've seen this multi-generational thing too. It was too easy to dismiss this as (middle-age or older == doesn't want to learn) until one of them mentioned that remembering how to do one thing on an iPhone about once a year was not worth the effort compared to knowing the inner details of a complex business process.
In other words (Score:2)
frontend, backend, and machine learning engineering roles
Bullshit web two-oh and AI jobs.
Of course they need proficiency in slop generators.
Hiring is broken (Score:3)
I predict it will have no measurable impact on their engineering capability.
I also predict their HR will hail this as a great success.
Bravo (Score:2)
Regardless of your thoughts on using coding assistants, this is a positive move forward. Coding interviews have always been unrealistic and discriminatory processes that filter out many good candidates.
Prompts (Score:1)
Hopefully someone will come up with an AI that will generate AI prompts for me. Think of all the savings!
Canva gets it (Score:2)
I’m glad to see Canva flipping the script on technical interviews. If half your engineering team already uses LLMs like Copilot or Claude daily, why pretend otherwise when hiring? Their new approach—evaluating how candidates collaborate with AI, not just how they code in isolation—isn’t just a gimmick. It’s a course correction.
This is what it looks like when a company stops treating AI like a cheat code and starts treating it like a tool. I remember the first time I invoked EM
really hard to evaluate a candidate this way (Score:2)
However, as somebody who has interviewed a moderate number of people (including one last week), I don't know how I would incorporate AI use in a live interview. There's so much randomness when you use AI to assist you in solving a complex problem: sometimes it works right away, som