
Canva Now Requires Use of LLMs During Coding Interviews

An anonymous reader quotes a report from The Register: Australian SaaS-y graphic design service Canva now requires candidates for developer jobs to use AI coding assistants during the interview process. [...] Canva's hiring process previously included an interview focused on computer science fundamentals, during which it required candidates to write code using only their actual human brains. The company now expects candidates for frontend, backend, and machine learning engineering roles to demonstrate skill with tools like Copilot, Cursor, and Claude during technical interviews, Canva head of platforms Simon Newton wrote in a Tuesday blog post.

His rationale for the change is that nearly half of Canva's frontend and backend engineers use AI coding assistants daily, that it's now expected behavior, and that the tools are "essential for staying productive and competitive in modern software development." Yet Canva's old interview process "asked candidates to solve coding problems without the very tools they'd use on the job," Newton admitted. "This dismissal of AI tools during the interview process meant we weren't truly evaluating how candidates would perform in their actual role," he added. Candidates were already starting to use AI assistants during interview tasks -- and sometimes used subterfuge to hide it. "Rather than fighting this reality and trying to police AI usage, we made the decision to embrace transparency and work with this new reality," Newton wrote. "This approach gives us a clearer signal about how they'll actually perform when they join our team."
The initial reaction among engineers "was worry that we were simply replacing rigorous computer science fundamentals with what one engineer called 'vibe-coding sessions,'" Newton said.

The company addressed these concerns with a recruitment process that sees candidates expected to use their preferred AI tools, to solve what Newton described as "the kind of challenges that require genuine engineering judgment even with AI assistance." Newton added: "These problems can't be solved with a single prompt; they require iterative thinking, requirement clarification, and good decision-making."


Comments Filter:
  • Seems reasonable (Score:2, Interesting)

    by jkechel ( 1101181 )

    AI won't replace you, but people who use AI will.

    • I am a person that uses AI, will I replace me?
      • Hi person that uses AI, I have been asked an interview question. I happen to know that the answer is 42. What question do I ask my LLM to make it spit out the correct answer?

        p.s. Please answer quick, I only have 10 minutes left in the interview!

        • Hi person that uses AI, I have been asked an interview question. I happen to know that the answer is 42. What question do I ask my LLM to make it spit out the correct answer?

          p.s. Please answer quick, I only have 10 minutes left in the interview!

          That's the skill set they are testing for. Do you know how to phrase and rephrase the question so that the AI provides the answer you are expecting? Expecting, because the topic/answer is also in your data structures and algorithms textbook, which you skimmed through as part of your interview prep. But just as using a table of contents or an index is a skill for using a book, phrasing questions is a skill for AI use. It takes a little practice to make your phrasing effective.

          As an added bonus, knowing how to phrase things with enough detail to get the AI heading in the right direction is a skill similar to writing the documentation/requirements that guide developers towards building what users actually need.

          • Hi person that uses AI, I have been asked an interview question. I happen to know that the answer is 42. What question do I ask my LLM to make it spit out the correct answer?

            p.s. Please answer quick, I only have 10 minutes left in the interview!

            That's the skill set they are testing for. Do you know how to phrase and rephrase the question so that the AI provides the answer you are expecting? Expecting, because the topic/answer is also in your data structures and algorithms textbook, which you skimmed through as part of your interview prep. But just as using a table of contents or an index is a skill for using a book, phrasing questions is a skill for AI use. It takes a little practice to make your phrasing effective. As an added bonus, knowing how to phrase things with enough detail to get the AI heading in the right direction is a skill similar to writing the documentation/requirements that guide developers towards building what users actually need.

            As I experiment with AI, I agree with your assessment. But unless I'm catching selective dementia, the AI is doing something really strange: what I'm doing isn't committed to memory. In normal activities, I can remember the process, the theory, and the mechanics of the activity, and the answers, just like I always have.

            With AI, the next day, I've forgotten all of it. Good thing I can type all the prompts in again.

            So knowledge, skill, and efficiency are no longer needed. You only need to know enough to write a prompt.

            • by drnb ( 2434720 )

              But unless I'm catching selective dementia, the AI is doing something really strange: what I'm doing isn't committed to memory. In normal activities, I can remember the process, the theory, and the mechanics of the activity, and the answers, just like I always have. With AI, the next day, I've forgotten all of it. Good thing I can type all the prompts in again.

              So knowledge, skill, and efficiency are no longer needed. You only need to know enough to write a prompt.

              If I look up something in my old textbook, that textbook is designed to teach. If I ask an AI about something, it is designed to answer. I wonder if that has something to do with your observation?

                But unless I'm catching selective dementia, the AI is doing something really strange: what I'm doing isn't committed to memory. In normal activities, I can remember the process, the theory, and the mechanics of the activity, and the answers, just like I always have. With AI, the next day, I've forgotten all of it. Good thing I can type all the prompts in again.

                So knowledge, skill, and efficiency are no longer needed. You only need to know enough to write a prompt.

                If I look up something in my old textbook, that textbook is designed to teach. If I ask an AI about something, it is designed to answer. I wonder if that has something to do with your observation?

                I think you could be right on that, as when I read a textbook, I digest the material pretty well. And speaking of that, I'm a copious note taker, but I don't look at them after taking them, because the act of writing them down plants them in my memory.

        • by djinn6 ( 1868030 )

          I happen to know that the answer is 42. What question do I ask my LLM to make it spit out the correct answer?

          Sure, I'm going to need a time machine and a planet materializer. Also a lien on the space around Earth so it doesn't get blown up to make room for a hyperspace freeway.

        • by allo ( 1728082 )

          What is six by nine?

      • Re: (Score:2, Interesting)

        by rsilvergun ( 571051 )
        No but your productivity will increase by 20 to 30% allowing your boss to lay off 20 to 30% of his programmers.

        Whether you are in that 20 to 30% is a coin flip largely based on whether he likes you personally and not the quality of your work.
        • by gweihir ( 88907 )

          Actual number from people selling insurance: 2.8%, i.e. about one hour per week. That is so low it does not matter.

        It will not. Because 99% of my work is figuring out what some badly written piece of legacy code actually does.
          That requires investigation skills and understanding the context.
          Something that so-called AI cannot do.
          • This is exactly what I was saying... up until I invested more time using the agent-based ones.

            So I've dealt with badly written code for most of my professional SW career (15 years), and these days I work at a company that makes test instruments... we have many legacy embedded devices, and LLMs have transformed how I work. It's like having a new bit driver that augments your old screwdriver set.

            LLMs are chat bots. They get wound up by whatever you get juicing through them. If you're trying to fi
        • No but your productivity will increase by 20 to 30% allowing your boss to lay off 20 to 30% of his programmers.

          So far, AI is helping my company ship more features with the same headcount. If that leads to more business, they might even hire more people. It will be interesting to see how this goes.

      • You've heard of two men enter, one man leaves?

      • No. You are irreplaceable, and we love you.

      • I replace me, except programming c.
    • by gweihir ( 88907 )

      Hahahaha, no. But some companies that think that will be prematurely going out of business at some time in the future.

    Frankly, having played with some of the supposedly better reasoning/coding models… no they won't. Good god, the crap that stuff produces is terrible.

    • Re: Seems reasonable (Score:4, Interesting)

      by Xardion ( 215668 ) on Thursday June 12, 2025 @10:24AM (#65444705)

      Had a perfect example recently of why AI will never fully replace developers, at least not until there is AGI that can function equivalently to a human being. I had a locking issue in my code, and while I was talking with a coworker about it, he ran it through Cursor to see what it came up with to "fix" the problem. All it did was rewrite my code in a way that was slightly more readable, yet eliminated subtle things the code was doing, which in essence made it completely non-functional. On top of that, it didn't solve the problem either, but that's because the ACTUAL issue was a consequence of how an underlying service functioned, which undermined the intention of how the locking was supposed to work. I fixed the issue, but an LLM would NEVER have figured that out, not without enough prompting that you would have figured out the solution yourself just from writing the prompt.
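
      To make the "subtle things" concrete, here's a minimal hypothetical Python sketch (not the code from the story above) of the kind of detail a mechanical refactor tends to throw away: a double-checked cache fill, where the re-check after acquiring the lock looks redundant but is exactly what prevents the race:

          import threading

          _cache = {}
          _lock = threading.Lock()

          def expensive_build(key):
              # Stand-in for slow work (I/O, computation, etc.).
              return {"key": key}

          def get_or_create(key):
              # Fast path: unlocked read. Another thread may create the entry
              # between this check and our acquiring the lock.
              if key in _cache:
                  return _cache[key]
              with _lock:
                  # This second check looks redundant, and a "readability"
                  # rewrite may delete it. Without it, two threads that both
                  # miss the fast path will each build the value, and one
                  # silently overwrites the other's entry.
                  if key not in _cache:
                      _cache[key] = expensive_build(key)
                  return _cache[key]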

  • by dcollins ( 135727 ) on Wednesday June 11, 2025 @11:35PM (#65443967) Homepage

    My limited sample: I have 3 long-time engineering friends currently looking for jobs, and they've all been asked to use AI tools of some sort in all of their recent interviews.

    Weirdly, in most of those cases the requirement was sprung on the candidate by surprise midway through the interview.

    • by dskoll ( 99328 ) on Wednesday June 11, 2025 @11:49PM (#65443987) Homepage

      I retired two years ago from software development, and I am so glad that I did. I hear from friends still in the industry that this AI bullshit is being pushed on them everywhere.

      • by ebunga ( 95613 ) on Thursday June 12, 2025 @12:09AM (#65444019)

        I can't say there's a single thing I enjoy about any of this any more. I think I understand why so many IT people retire early and take up farming.

        Farming is next to impossible: John Deere will prevent you from fixing your machines, because only they can reprogram your tractor after a repair.

        • by locofungus ( 179280 ) on Thursday June 12, 2025 @04:15AM (#65444197)

          I can't say there's a single thing I enjoy about any of this any more. I think I understand why so many IT people retire early and take up farming.

          I suspect it's going to be important but I wonder if these companies that are demanding it are being premature.

          I remember when syntax highlighting came in. For a good while I disabled it, as I found it distracting, but over time I got used to it and now I prefer to have it on (although I'm still not greatly inconvenienced if I don't have colour).

          I remember when LSPs became available. For a while I didn't bother, but now I find it hard to work without one configured (in vim), although I do still resort to grep/sed etc., especially where the LSP doesn't work, such as in some C++ where type deduction is hard.

          And now LLMs. I use them via a web prompt and they can save considerable time when I'm looking something up. But they've also cost me considerable time when I've relied on an answer which, once I actually spent the five minutes reading the (long) manpage, was so obviously wrong that I'd be concerned about any such programmer's long-term success, and I'd certainly be trying to limit their access to any code that I'm responsible for!

          I think more than anything it's this utterly absurd wrongness, delivered with such verisimilitude, that makes them dangerous currently.

          does unlink in debugfs update the free block list?

          Yes, when you use unlink on a file in debugfs (or via any means in the Linux filesystem stack), it does eventually update the free block list -- but not immediately at the debugfs tool level unless explicitly committed or updated.

          In debugfs specifically:

                  debugfs is a low-level filesystem debugger. It does not mount the filesystem or execute actions in real time the way the kernel does during normal operations.

                  If you run the unlink command in debugfs, it removes the directory entry, and depending on the inode's link count, it may also release the inode.

                  But to update the free block list, debugfs must explicitly free the inode and associated blocks.

          In recent versions of debugfs:

                  unlink can update the block and inode bitmaps if used correctly and if the file has no other hard links.

                  But sometimes manual intervention (e.g., using clri to clear inodes or freeb to free blocks) is necessary in corrupted filesystems or after a crash.

          Summary:

                  Yes, unlink in debugfs can update the free block list.

                  But this happens only if the file is fully unlinked (no hard links remain).

                  Remember that debugfs works outside the kernel, so changes are made directly to disk structures -- use with caution.

          But:

          man debugfs

                        unlink pathname
                                      Remove the link specified by pathname to an inode. Note this does not adjust the inode reference counts.
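
          One nice thing about this particular hallucination is that it's cheap to check empirically, rather than trusting either the model or one's memory: run the operation against a scratch ext4 image and compare the superblock's free-block count before and after. A rough Python sketch, assuming Linux with e2fsprogs installed; the image and file names here are made up for illustration:

              import re
              import subprocess

              IMAGE = "scratch.img"  # hypothetical ext4 image containing a file /victim

              def free_blocks(image):
                  # dumpe2fs -h prints the superblock summary, including "Free blocks: N".
                  out = subprocess.run(["dumpe2fs", "-h", image],
                                       capture_output=True, text=True, check=True).stdout
                  return int(re.search(r"Free blocks:\s+(\d+)", out).group(1))

              before = free_blocks(IMAGE)
              # Run a single debugfs request in read-write mode.
              subprocess.run(["debugfs", "-w", "-R", "unlink /victim", IMAGE], check=True)
              after = free_blocks(IMAGE)

              # Per the man page, unlink only removes the directory entry,
              # so the free-block count should be unchanged.
              print(before, after)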

          • Ya- getting information like that from their pretraining data is straight up fucking Russian Roulette.
            They will invent all kinds of completely wrong shit.

            That being said, I do use them in code assistance capacity.
            I find them pretty useful.

            I've had some pretty clever refactors produced by them. Of course- I wouldn't trust them in anything important without going over every single line myself.
            Right now, I find their most useful aspect being their ability to produce a throwaway "app" in moments.
            I wante
            And now LLMs. I use them via a web prompt and they can save considerable time when I'm looking something up. But they've also cost me considerable time when I've relied on an answer which, once I actually spent the five minutes reading the (long) manpage, was so obviously wrong that I'd be concerned about any such programmer's long-term success, and I'd certainly be trying to limit their access to any code that I'm responsible for!

            I think more than anything it's this utterly absurd wrongness, delivered with such verisimilitude, that makes them dangerous currently.

            The great seeming confidence that AI has when giving a wrong answer reminds me of how people get in trouble blindly following their GPS devices.

            And there is the horrible problem. As actual knowledge and understanding of what the person is trying to do become irrelevant, and whatever the LLM spits out becomes the "truth", especially once AI starts referencing itself, it's gonna get ugly until people who actually know what they are doing come along and pick up the pieces.

        • Yep. They go farming and start automating their farms.

        • by antdude ( 79039 )

          Yep. That is why I am back to Ant Farming these days. ;)

      • by MachineShedFred ( 621896 ) on Thursday June 12, 2025 @12:32AM (#65444055) Journal

        You know where it isn't? Anywhere that has safety regulation, or is sensitive about proprietary information.

        They know the results of "vibe coding" and aren't interested at all.

        • by gweihir ( 88907 )

          Indeed. Such a surprise.

        • Ya, bullshit.

          Places that produce code that must conform to safety regulations already treat each and every author as if they were an LLM: completely untrusted.
          Also, corporations send off proprietary information to all kinds of places outside our walls. We rely on contracts and law to do that safely.

          You're quite literally talking out your ass. [airbus.com]
          Why do you do it?
          Oh, so the job where I work on actual systems that go into actual aircraft, with Airbus as a customer, doesn't have all kinds of safety certification requirements to ship, because you say so.

            Thanks for letting us know by linking to some marketing horseshit that doesn't actually say in any concrete way what they use it for today, but instead what they "might" use it for.

            Here's a hint: what Airbus puts on their website is materially different than what they put into their product and systems requirement

            Oh, so the job where I work on actual systems that go into actual aircraft, with Airbus as a customer, doesn't have all kinds of safety certification requirements to ship, because you say so.

              lolllll

              How fucking sad is your existence that you have to make shit like that up on the internet?

              Why would you think you know better than someone who actually works in avionics, what does and doesn't "fly" in avionics development?

              Every great engineer needs someone to empty their garbage can. I'm glad that somebody has you.

      • I've been programming for decades, and have run my own software business for 25 of those. A couple of years ago I started spending more time making money from one of my other passions ... carpentry and woodwork. At least those won't get screwed over any time soon.
        • I've been programming for decades, and have run my own software business for 25 of those.

          25 decades and still going strong! Good genes, or a pact with the devil? Also, computers have been around for a LOT longer than I ever realized... ;-)

      • I hear from friends still in the industry that this AI bullshit is being pushed on them everywhere.

        It strikes me that all programmers using AI to help with coding are also training the AI to ultimately replace them. So I'm not surprised that the "AI bullshit" is being pushed so hard.

    • by gweihir ( 88907 )

      Weirdly, in most of those cases the requirement was sprung on the candidate by surprise midway through the interview.

      Which would cause me to terminate the interview at that moment.

  • GIGO... (Score:5, Insightful)

    by ctilsie242 ( 4841247 ) on Thursday June 12, 2025 @12:07AM (#65444011)

    Yes, AI can give you code, drawings, text, and other stuff... but it will confidently spit out stuff that isn't intended or is just plain wrong. I've had it spit out songs that don't exist attributed to real bands, write code with function arguments that aren't implemented, and so on.

    How about an interview where one is asked to code something to fulfill a task, and who cares how it is done, but the interviewers want to see the process? Or is the addiction to cheap body shop contracting firms too great to ask for this?

    • by N1AK ( 864906 )
      I suspect at least some of the news about this is people conflating being allowed to (or practically speaking needing to) use AI tools with being tested on or required to use them. I still remember an IT leadership role I interviewed for about a decade ago where they had a technical test and expected it to be done completely offline, because of course you'd expect a non-developer role to be able to define and implement anything from a programming language without any need to check reference material, though so I'
      I've had it spit out songs that don't exist attributed to real bands, write code with function arguments that aren't implemented, and so on.

      It will confidently spit out functions that don't exist and give you a detailed explanation of how they supposedly work. Anyone who trusts it is mentally deficient. No shortage there, though.
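
      A tiny made-up illustration of that failure mode (the method here is deliberately fictional; pandas has no such API): the call reads plausibly, comes with a confident explanation, and fails only when you actually run it:

          import pandas as pd

          df = pd.DataFrame({"a": [1, 2]})

          # The kind of call an LLM might invent and document in loving detail.
          # Raises AttributeError: 'DataFrame' object has no attribute 'auto_clean'.
          df.auto_clean()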

    How about an interview where one is asked to code something to fulfill a task, and who cares how it is done, but the interviewers want to see the process? Or is the addiction to cheap body shop contracting firms too great to ask for this?

      Did you read past the headline? (Of course not—this is Slashdot.)

      Because what you’re describing—an interview where the candidate solves a real task and the interviewers observe how they solve it—is exactly what Canva has implemented. They retired the “invert a binary tree on the whiteboard and then find the RREF of this matrix” charade. Now they present open-ended, ambiguous problems that can’t be solved by copy-pasting a prompt into ChatGPT. The goal isn’t t

      My tone just reflects what I see: people being sold on flashy stuff. Ten years ago, all problems were solved by blockchain; now it is AI. A body shop using LLMs is often sold to a company... and it doesn't fit their needs. It is a hammer used to go after all nails right now, instead of actual solutions.

        Being able to "master" a chatbot only gets you so far. You might be able to get it to do some code... but if one doesn't know what the code does, it might be a security issue waiting to happen.

  • by ebunga ( 95613 ) on Thursday June 12, 2025 @12:08AM (#65444015)

    Sure, it has only been around for less than a year, but we know the ideal candidate exists.

  • by dubist ( 2893961 ) on Thursday June 12, 2025 @12:11AM (#65444027)

    It's Canva so testing isn't a priority.

    • by J-1000 ( 869558 )
      Where are my mod points when I need them? In the age of LLMs, the need for developer testing (or maybe developer scrutiny of tests written by the LLM) is going to outweigh the need for LLM use.
  • ...are probably good at flipping burgers and pulling the fries when the bell goes "ding".

  • by gweihir ( 88907 )

    Do not apply there and do not depend on their products continuing to work.

  • As someone who has been programming since sometime in the 80s and works on fairly large C++ codebases, I wouldn't trust code AIs like Copilot at all and certainly wouldn't use one to write code for me.

    • by allo ( 1728082 )

      Code AI is still basic. First, I would not use Copilot. And then I would not do vibe coding. After that, you can let AI help you. You'll learn what your model can and cannot do, and then just don't ask it for the things it gets wrong, just like you wouldn't ask your junior colleague the complicated questions or expect them to know the codebase by heart.
      I think anyone who says AI will never do programming well is naive. Programming contains a lot of patterns, good programming usually more than bad programmi

  • by PatKa ( 1043990 ) on Thursday June 12, 2025 @02:22AM (#65444147)
    "This dismissal of AI tools during the interview process meant we weren't truly evaluating how candidates would perform in their actual role," This is hilarious. So, then why focus so much on basic data structures and algorithms in the interview? Yes, you should know basic algorithms and yes, if you work on certain types of applications you need that knowledge but the vast majority of companies writes code to support their business or to build a product they sell. Most of the time you will use the algorithms provided by your SDK because usually they have been optimized over the years to work really well. Yet, a lot of companies focus in their interviews to just check how well somebody can implement merge sort, linked lists and other kind of data structures and algorithms just to say afterwards "here is our gigantic code base consisting of web controllers, database access code and business logic. Have fun." None of the interviews I ever had tested how I would perform in the actual role because I do not need to implement a linked list myself in order to calculate a VAT to display the end price of a product to a customer.
    • by JaredOfEuropa ( 526365 ) on Thursday June 12, 2025 @04:05AM (#65444193) Journal
      Data structures and algorithms are used in interviews, not because you'll be expected to implement sorts and linked lists, but because they demonstrate that the candidate can implement and troubleshoot software from specs, using cases they should be familiar with if they have a CS / coding background. Testing for skills rather than knowledge. By the same token, it makes sense to test candidates on the use of AI assistants, if they'll be using those in their work. Though rather than let them produce textbook algorithms with prompts, I'd give them a prompt that produces known faulty output, then ask them to test the result, figure out what is wrong with it, and fix the code and/or the prompt.
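
      For a concrete idea of what that exercise could look like (a made-up example, not anyone's actual interview): hand the candidate a function with a classic subtle bug of the kind a one-shot prompt plausibly produces, and ask them to test it, find the fault, and fix it.

          def binary_search(items, target):
              """Return an index of target in the sorted list items, or -1."""
              lo, hi = 0, len(items) - 1
              while lo < hi:
                  mid = (lo + hi) // 2
                  if items[mid] < target:
                      lo = mid  # BUG: when hi == lo + 1, mid == lo, so this makes
                                # no progress and the loop never terminates.
                                # The fix is lo = mid + 1.
                  else:
                      hi = mid
              return lo if items and items[lo] == target else -1

          # binary_search([1, 3], 3) hangs as written; with lo = mid + 1 it returns 1.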
      • by N1AK ( 864906 )
        This. My general experience is that a lot of these questions are really there to filter out candidates who are obviously inappropriate or lying about their capabilities, rather than to differentiate OK from great. I've had candidates who claimed to be experienced with a programming language and literally couldn't explain the basics of the syntax, let alone do anything useful.
      • by PatKa ( 1043990 )
        Well, I am not saying it is completely useless, and maybe we really do not have anything better, but I can see that we frequently hire people who are great at implementing all these algorithms yet unable to produce even remotely nice results when it comes to object-oriented programming, which in the end is what we actually do. You end up having the classes that are required by the framework, plus a bunch of Helper, Util, and Constants classes where the actual magic happens. No need to mention that this is
      • Personally, I've almost never asked candidates algorithm questions in interviews. I feel like the typical algorithms in these questions are hard enough that if you don't know or remember them, you'll struggle to come up with them in a few minutes during an interview, while also being easy enough that if you do remember them, it won't be too hard to produce an implementation. E.g. Dijkstra's algorithm is really simple once you know it, but Dijkstra was a pretty clever fellow to come up with it in the first p
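
        To illustrate the point about it being simple once you know it, here's the textbook heap-based version in Python (graph as a dict of node -> list of (neighbor, weight) pairs):

            import heapq

            def dijkstra(graph, source):
                """Shortest-path distances from source, given non-negative edge weights."""
                dist = {source: 0}
                heap = [(0, source)]
                while heap:
                    d, node = heapq.heappop(heap)
                    if d > dist.get(node, float("inf")):
                        continue  # stale entry; a shorter path was already found
                    for neighbor, weight in graph.get(node, []):
                        nd = d + weight
                        if nd < dist.get(neighbor, float("inf")):
                            dist[neighbor] = nd
                            heapq.heappush(heap, (nd, neighbor))
                return dist

            # dijkstra({"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}, "a")
            # -> {"a": 0, "b": 1, "c": 3}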
    • Actually, when it comes to AI and the claims made for it, people haven't been as rigorous as they could have been. We have the scientific method; start using it to determine just how effective AI is. Lay this issue to rest.

  • by Stormwatch ( 703920 ) <rodrigogirao@nOSPaM.hotmail.com> on Thursday June 12, 2025 @02:29AM (#65444149) Homepage

    I recall a comment by a programmer saying: "I asked AI to refactor my code. Now it's clean, elegant, and nothing works."

  • They are tools; asking a candidate to show how they use tools is fine, on the face of it.

    You can demonstrate whether or not you know how to use them intelligently and carefully.

    • I see a huge difference between asking, "How would you solve this problem?" and asking, "How would you solve this problem using a specific tool?" For the first question, someone might consider using an LLM if they thought it was the right tool for the job. For the second, they seem to be making the assumption that the LLM is the right tool for the job, and from the article they seem to be assuming that this is translating into long-term benefits. I question if this is based on actual productivity gains, or if
    • Yeah, the HR guys probably view it as "Can you use basic Office-type applications? Yes/No", which sounds silly if you're a millennial/Gen X, but a lot of younger folks can be flummoxed by non-mobile OSs at times, so it can be good to list as a 'nice to have'. But this AI stuff has gone so quickly from 'useless but fun' to 'useful if you're careful' that anything specific for an interview will be obsolete by the time they're hired.
  • very appropriate (Score:4, Insightful)

    by ZipNada ( 10152669 ) on Thursday June 12, 2025 @07:59AM (#65444361)

    I fail to understand why so many people here have negative opinions about AI coding assistance. I chalk it up to a basic resistance to change.

    "essential for staying productive and competitive", that is so definitely true. Working with AI is a skill you have to learn, and I would much rather hire someone who knows and likes it than some curmudgeon who will gripe about it every step of the way.

    Meanwhile, yesterday I spent a few hours directing AI to evaluate several specific sections of existing code and suggest improvements. I didn't like everything it offered, but several suggestions were right on the money: some minor security holes I had missed, plus improvements in elegance, readability, and flexibility. It made the changes quickly, I ran my unit tests before accepting them, and there was minimal hassle or effort on my part.
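
    For a generic illustration of the "minor security hole" category such a review pass tends to flag (not the poster's actual code): SQL built by string interpolation, where the fix is a bound parameter:

        import sqlite3

        def find_user_unsafe(conn, name):
            # Flagged: building SQL by interpolation allows injection,
            # e.g. name = "x' OR '1'='1" returns every row.
            return conn.execute(
                f"SELECT id, name FROM users WHERE name = '{name}'"
            ).fetchall()

        def find_user_safe(conn, name):
            # Fix: let the driver bind the parameter instead.
            return conn.execute(
                "SELECT id, name FROM users WHERE name = ?", (name,)
            ).fetchall()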

    • I fail to understand why so many people here have negative opinions about AI coding assistance.

      You have to be smart enough to use it correctly and understand its limitations.

      I fail to understand why so many people here have negative opinions about AI coding assistance. I chalk it up to a basic resistance to change.

      Resistance to change is certainly a part of it. But also, some of these people are so old it's actually hard for them to learn something new. I work with people that can't figure out their iPhone. There's no hope for them to understand and utilize something this different.

      • by will4 ( 7250692 )

        > I work with people that can't figure out their iPhone

        I've seen this multi-generational thing too. It was too easy to dismiss it as (middle-aged or older == doesn't want to learn), until one of them mentioned that remembering how to do one thing on an iPhone about once a year wasn't worth it compared to knowing the inner details of a complex business process.

  • frontend, backend, and machine learning engineering roles

    Bullshit web two-oh and AI jobs.

    Of course they need proficiency in slop generators.

  • by abulafia ( 7826 ) on Thursday June 12, 2025 @09:44AM (#65444589)
    They are doing a complete 180 on a core aspect of their interview.

    I predict it will have no measurable impact on their engineering capability.

    I also predict their HR will hail this as a great success.

  • Regardless of your thoughts on using coding assistants, this is a positive move forward. Coding interviews have always been unrealistic and discriminatory processes that filter out many good candidates.

  • Hopefully someone will come up with an AI that will generate AI prompts for me. Think of all the savings!

  • I’m glad to see Canva flipping the script on technical interviews. If half your engineering team already uses LLMs like Copilot or Claude daily, why pretend otherwise when hiring? Their new approach—evaluating how candidates collaborate with AI, not just how they code in isolation—isn’t just a gimmick. It’s a course correction.

    This is what it looks like when a company stops treating AI like a cheat code and starts treating it like a tool. I remember the first time I invoked EM

  • I appreciate the idea behind letting people use whatever tools they use in the real world in the interview: it's a bit like open book exams. If you can quickly look up something, it's as good as knowing it yourself in practice.

    However, as somebody who has interviewed a moderate number of people (including one last week), I don't know how I would incorporate AI use in a live interview. There's so much randomness when you use AI to assist you in solving a complex problem: sometimes it works right away, som

