Businesses

Startup Tells New Hires They Need To Know ChatGPT For a Job (bloomberg.com) 86

As businesses grapple with how artificial intelligence tools like ChatGPT will affect working practices, one Japanese fintech firm is making it compulsory for new recruits to use the technology and even testing them on it. From a report: With concerns growing over data protection and the technology's potential to make jobs obsolete, Tokyo-based LayerX is bucking the trend: a recent job ad for new graduates makes it mandatory for recruits to be tested on their use of the chatbot made by OpenAI, and of another tool called Notion AI. The startup, which focuses on digitizing business transactions, is confident it's on the right side of a growing divide over the use of the technology.

Many Wall Street banks have restricted its use, while schools in places like New York City have banned it. Major Japanese firms have done likewise, with SoftBank Group Corp. and banks including Mizuho Financial Group and Mitsubishi UFJ Financial Group clamping down in recent months. "We recognize that ChatGPT is not perfect," said Takaya Ishiguro, chief human resources officer at LayerX, in an interview. "However, it is also dangerous to be too afraid to utilize new technology." Recruits are asked during their entry assessments to give prompts to ChatGPT. Assessors review whether they initiate the process well, rather than the actual answers. Candidates are also asked to conduct research to identify the limitations of the technology.

  • by rsilvergun ( 571051 ) on Wednesday March 15, 2023 @10:05AM (#63372719)
    this sounds like the company wants to use the "chatgpt" buzzword to raise capital. I remember when crypto blew up briefly and companies whose names sounded like they were related would get these huge bumps on the stock market. Same thing here.
    • by backslashdot ( 95548 ) on Wednesday March 15, 2023 @10:41AM (#63372815)

      AI is the future though. Many professions will be transformed by augmentation with it. In fact, I can't think of one that won't be. Any profession that relies on you remembering how to do something will have its job made 10x easier by augmentation with AI.

      • by rsilvergun ( 571051 ) on Wednesday March 15, 2023 @10:56AM (#63372845)
        there isn't a lot of training in how to use it, by its very nature. Nobody's going to be hiring AI operators like we used to hire mainframe operators. The entire point of using these tools is to replace labor, not increase it.
        • Nobody's going to be hiring AI operators

          Companies are literally already hiring AI operators, because you need humans to get good results out of AI. It doesn't know when it's produced a clusterfuck.

          • Oh boy.... (Score:5, Interesting)

            by rsilvergun ( 571051 ) on Wednesday March 15, 2023 @11:25AM (#63372923)
            no, they're not "getting good results out of AI". They're replacing existing workers with an AI. Often much more skilled workers. There's no magic to AI, it's just combining existing data sets based on the most popular/likely combination. It only seems like magic because the math is really, really complicated, like Parity Archives but more so.

            Companies aren't "hiring AI Operators" like they did when they hired mainframe operators. This isn't new work that was created, this is existing work that used to be done by people being done by a machine. It's like when shoemakers were replaced by the assembly line. And yeah, it's good to have more shoes, but only if there's enough other work to drive an economy so that people can buy those shoes, and if that work pays enough to keep the economy going.

            We're replacing skilled craftsmen with guys typing text at a computer. This is a huge problem. We solved it last time with Unions when low skill labor formed massive collective bargaining organizations. But the guys running the factories are wise to that and are taking steps against it. And beyond Unions I don't know of a solution to the problem where skilled craftsmen get replaced, wages go down, and we end up in a deflationary spiral due to an economy with nothing but low wage work.
            • We're replacing skilled craftsmen with guys typing text at a computer. This is a huge problem. We solved it last time with Unions when low skill labor formed massive collective bargaining organizations. But the guys running the factories are wise to that and are taking steps against it. And beyond Unions I don't know of a solution to the problem where skilled craftsmen get replaced, wages go down, and we end up in a deflationary spiral due to an economy with nothing but low wage work.

              I don't know, I think this problem will resolve itself when those companies can't produce anything worthwhile.

              • We haven't enforced antitrust laws in over 40 years. They don't have to produce anything worthwhile they just have to make sure nobody else produces anything worthwhile.
                • We haven't enforced antitrust laws in over 40 years. They don't have to produce anything worthwhile they just have to make sure nobody else produces anything worthwhile.

                  Maybe I should rephrase that to producing something anyone is willing to purchase. Of course anything AI will get venture capitalist funding, so meh?

            • Re:Oh boy.... (Score:5, Insightful)

              by ChatHuant ( 801522 ) on Wednesday March 15, 2023 @01:32PM (#63373345)

              We're replacing skilled craftsmen with guys typing text at a computer.

              That has been the main mechanism for progress for the whole existence of humanity. New tools or processes are discovered or invented, and tasks that only small groups of specialized folks could perform suddenly become available to everybody. The democratization of knowledge and capabilities has improved everybody's lives immeasurably. Would you object to the printing press because people who may not be very educated can operate it by sticking lead shapes in a frame (or, more recently, by typing text on a Linotype) and they can create books without being skilled calligraphers?

              This is a huge problem.

              No, it isn't. The problem is poverty, and the solution isn't to be a Luddite and destroy the machines, the AIs, the robots, because they take away jobs. On the contrary, I believe better automation, better machines, better robots are the way to improve lives - the change needs to be in distribution of the wealth produced by machines. I think the right direction is something like UBI.

              • Following the industrial revolutions there was massive social unrest due to massive numbers of people being put out of work by machines. It took decades for technology to catch up and create new lines of work. So much so that we had two world wars, and the only reason our economy started to get moving again was that we blew everything up and everyone got jobs putting it back together. I'm not personally a fan of broken window economics though.

                Everyone throws about the word luddite without considering that the
                • Everyone throws about the word luddite without considering that the Luddites were real people who had lost the means to make a living.

                  True, but the solution you suggest is completely wrong. Stopping technological progress because some folks will lose their jobs is not only a bad idea, but it isn't even possible. What Luddites want is to stop progress, give up not only on the present invention, but also on all future ones depending on it, all for the sake of maintaining their current status. It's important for them, of course, but it's harmful to the rest of humanity.

                  When push comes to shove, thousands (or even hundreds of thousands) of pe

              • I still can't imagine what job could be replaced by ChatGPT in its current form. Or are there lots of companies that hire people whose job is to give bullshit answers to questions? If there was a human giving the same answers as ChatGPT, the human would be fired for being stupid and providing bad answers and advice.

                Now this technology can improve. GPT-3 is a great start on the _input_ part of AI. But there are a lot of advances still needed before the _output_ is usable.

                • Or are there lots of companies that hire people whose job is to give bullshit answers to questions?

                  Politicians?
                  (Some) journalists?
                  PR folks?
                  (Some) salespeople?
                  Spokespersons?
                  Religious leaders?
                  (Some) life coaches, motivational speakers, self-help gurus?

                  (that was only off the top of my head, I'm sure there are more)

                • by pacinpm ( 631330 )

                  Replaced? No. Made more efficient so you need fewer people - easily.

              • BUT who is going to fix the stuff that enables the stupid and ignorant to nonetheless accomplish real work?

                And who is going to sanity-check the results? ChatGPT lies. It makes up facts. Ask about yourself, and give it some information so it finds YOU instead of some/one/all of the other three million people with the same name.

                If it's not a toy it's a trap.

            • no, they're not "getting good results out of AI"

              You represent yourself as knowledgeable about AI, and don't know that people are getting high-quality results out of it in fewer hours than it would take to do the work without it?

              They're replacing existing workers with an AI.

              That may be happening, but it's not what I'm talking about. AI can't replace most existing workers yet, but it can let one worker with AI replace multiple workers without it in many more cases.

              There's no magic to AI

              No one invoked magic but you.

              • The person I was replying to was implying that there would be jobs for people who are good at getting those good results out of AI. My point is that being able to get good results out of AI is significantly less skilled work than actually being able to produce those results without AI. In fact it's not very skilled work at all.

                I'm guessing someone else misunderstood my comment too and decided to mod you up... I guess sarcasm is dead and you have to say everything bluntly around here now
              • Do you have any examples of "high quality results" from Chat GPT?

            • Comment removed based on user account deletion
            • Um, ChatGPT replaces 'skilled craftsmen'?

              Explain please. And ignore the limited pronoun I implied, the concept is sufficient to embrace all, no matter their own personalities etc.

          • Even with good operators you still don't get good results from ChatGPT. Maybe in the future, but right now there's no use for this outside of chat, and even with chat you still have to be accepting of not getting accurate or reliable answers.

            ChatGPT is not a knowledge machine, it is a chat bot, plain and simple. Its strength is in processing natural language; that is what it was built to do. It was NOT built to give you accurate answers to your questions. It very clearly fails badly at giving quality a

        • Susan Calvin made her living knowing exactly how to phrase things to get the right results out of robots, and today in real life there are people with the job title "prompt engineer" getting paid over $300,000 to develop just the right inputs to large language models.

          • The more I read about "Prompt Engineering," the more this feels like a marketing term designed to dress up a menial occupation as something more complex. Not unlike sanitation engineer, but at least in that case it's a physically demanding job with a ton of health risks involved so I can respect it.

            Whereas this, I don't get why it needs its own specialty. Was it just too menial to incorporate it into the ML/AI engineering space? Writing prompts is something that a high schooler

      • by kschendel ( 644489 ) on Wednesday March 15, 2023 @11:06AM (#63372859) Homepage

        10x easier until the AI fscks it up. And since you can't predict when that will happen, it's not really 10x easier, is it?

        I can see AI augmentation dealing with a certain amount of boilerplate and busywork. I can also see it leading to some epic disasters when it's depended on without proper vetting.

        • it's much cheaper to pay somebody to check the work than to do the work. They don't need to know how it all works, they just need to know what the inputs and outputs are. So "AI will fuck it up" isn't really an issue. Your QA will catch it at the same rates they caught programmer mistakes before. If there are a few more of those mistakes, that's OK; compared to putting 10, 20, maybe even 30% of programmers out of work (and flooding the market with cheap programmers), that's a small price to pay.

          Again, we're
          • It's even cheaper to not bother checking the results.
            As long as those results only affect *other people*.
            • by syn3rg ( 530741 )

              As long as those results only affect *other people*.

              Microsoft Windows Quality Assurance Team, is that you?

          • by J-1000 ( 869558 )

            it's much cheaper to pay somebody to check the work than to do the work

            False.

            Sure, if you compare only the salaries, you're paying less for a QA role. But if you're talking overall time investment in fixing a bug late in the game, not to mention the added programming investment required to work with such an inefficient workflow, QA finding all the bugs is far, far more expensive.

          • by kschendel ( 644489 ) on Wednesday March 15, 2023 @02:44PM (#63373579) Homepage

            it's much cheaper to pay somebody to check the work than to do the work. They don't need to know how it all works, they just need to know what the inputs and outputs are. ...

            THAT fallacy has led many a manager down the primrose path of saving money, only to discover that the customer has become the QA person. And the customer isn't happy. And once again, the manager will re-discover the fact that it almost always costs more to fix a fuckup than it does to get it right in the first place.

            Maybe you can just compare inputs and outputs if it's a trivial program, like computing Fibonacci numbers or something. Or, maybe if it's a fully stateless, non-concurrent operation. Something like that, I might trust an AI to get right nearly all the time. But then, something like that isn't what you are paying your expensive programmers to do in the first place.

            I'm bemused by the number of people who are wishing as hard as they can that THIS time, we'll see a qualitative difference. Fat chance, folks.

        • Just like every tool that humans have invented, there's a right and wrong way to use it. And of course they sometimes fail. That's why we invented education, reputation, certification.

        • It's still worth using them for jobs.

          The output from the machine translation for heavy equipment repair that my wife worked on needed hand-checking. Mistakes could get a technician hurt or killed. But it was still worthwhile to the heavy equipment maker who funded it.

        • by J-1000 ( 869558 )

          And since you can't predict when that will happen, it's not really 10x easier, is it?

          This is correct. More specifically, any company that thinks it can hire less intelligent or less accomplished developers is going to have a bad time. AI will make existing roles more productive, but it will not remove the need for a programmer's most valuable skills, like troubleshooting, reasoning, and judgment.

          Imagine tracking down and fixing an urgent production bug using the very AI that introduced the bug! And imagine doing it without the requisite skillset.

        • ChatGPT is awesome at building small blocks of logic that aren't performance constrained.

          It makes some interesting mistakes, but it makes some clean code, and puts in some reasonable comments and uses good variable names.

          But for anyone that makes their living as a "short order coder" this can get through their week of coding in an afternoon.

          And comparing ChatGPT-4 to the AI from last month, it was making smarter code; it was optimizing the performance with a better algorithm.

          It won't be long before
        • What concerns me is when people starting looking to AI to make moral / value judgements for them.

          Like many organizations, my current employer is exploring how best to leverage AI. One of the things polls show people are most interested in is having a model like ChatGPT generate summaries of documents and conversations. This strikes me as potentially dangerous. It is asking the AI model to make a value judgment as to what information is essential vs what is superfluous.

          Imagine the boss breathing down your ne

        • This is basically the self-driving car all over again. It actually requires more work to make sure you're getting proper outputs. Using it as an idea generating tool is much more useful than to use it for final output.

      • AI is the future though.

        According to idiots who don't know how to do stuff, sure.

      • Comment removed based on user account deletion
      • AI is the future though. Many professions will be transformed by augmentation with it. In fact, I can't think of one that won't be. Any profession that relies on you remembering how to do something will have its job made 10x easier by augmentation with AI.

        Sounds like all that blockchain buzz all over again.

        I guess blockchain tech is now so 2010s?

    • Not necessarily. While the latest buzzword is often associated with startups and capital raising, it would be naive to assume that every case is like that.

      Especially a technology like ChatGPT which is actually having a material impact. This isn't like the blockchain, a solution looking for a problem, but very much something that can help (or hinder) many fields of work.

  • Just say... (Score:5, Funny)

    by jddj ( 1085169 ) on Wednesday March 15, 2023 @10:09AM (#63372729) Journal

    Funny: I had it write the resume that got me this interview...

  • Huh. (Score:5, Interesting)

    by flippy ( 62353 ) on Wednesday March 15, 2023 @10:21AM (#63372759) Homepage
    "Candidates are also asked to conduct research to identify the limitations of the technology" sounds a LOT like "We were going to do this research ourselves, but we found a way to get people to do it for us for free!"
    • "We were going to do this research ourselves, but we found a way to get people to do it for us for free!"

      You've literally just described why we hire people for any job that isn't manual labour. I'm not employed to fill a desk. I am hired as an expert to do something for a company.

      Yeah it's quite likely the company doesn't have the time / skills to do it themselves if they are looking to hire someone and listing required knowledge on the job advertisement.

      • His joke is that the research they provide in their cover letter is value that the company is going to retain without actually hiring anyone.
        • by flippy ( 62353 )
          Yes, that is exactly what I was saying - that the company gets the value of the research without having to hire or pay an employee. Get the applicant to do the research as part of their application process and then DON'T hire them.
          • Yes, that is exactly what I was saying - that the company gets the value of the research without having to hire or pay an employee. Get the applicant to do the research as part of their application process and then DON'T hire them.

            Then I hope you were only joking and not serious. Hiring (or even faking a hiring process) is incredibly expensive.

            • by flippy ( 62353 )

              I am completely serious.

              Let's assume, for a moment, that the company actually intends to hire 1 new employee through the process. Let's also assume that they get 5 applicants.

              "Candidates are also asked to conduct research to identify the limitations of the technology" reads as if the company actually wants real research and analysis on "the limitations of the technology" (in this case, ChatGPT).

              They just got 5 people to do research, and are only ever going to pay 1 of them for that. I have a BIG problem wit

  • It's not a bad idea, IMHO. It feels like AI is overhyped right now, particularly if you've spent the last 40 years being disappointed by "amazing breakthroughs". It's not. The hype is real, and there is something here that in a decade will be ubiquitous.

    I'm expecting the same kind of impact from this that we saw with the "information superhighway" from ~1995-2005.

    • by aqui ( 472334 ) on Wednesday March 15, 2023 @11:03AM (#63372857)

      We also need people to know if ChatGPT is giving us correct answers or just stating authoritative BS.

      The reality is we're going to start seeing more and more AI-produced BS that sounds right. I expect "working with AI" will mean knowing when and how to tell that the AI response is full of errors.

      Combine this with humans' laziness and inability to gauge risk and we'll be seeing "death by AI answer" soon.

      I'm also curious what will happen to the internet once special interest groups, scammers, and spammers start using AI to scam people... I think we'll have an even bigger problem where many people will have trouble telling "truth" from "BS".

      I think there's a very real chance the internet will get overwhelmed with AI-generated content... the question will be whether it's correct or not...
      It may very well be a big change; the question is whether it will be for the better.

      Anyone working in automation or with AI+automation knows that if you can succeed "at scale" you can "fail at scale"... and knowing whether you've succeeded or failed may be hard in some cases, in which case laziness will likely trump caution.

      • At risk of showing my age, the skills you describe, inferring what's BS and knowing when to question the answer you're given, are very similar to the skills we all learned back when we learned to ask the google. It's a tool you have to learn how to use, and if you use it wrong you can hurt yourself.

        For AI that'll mean learning non-intuitive stuff like
        * I have to tell it that it's okay to tell me it doesn't know.
        * Its knowledge is out of date, since it trained on the internet from a while ago. I can show i

        • Just last week I was listening to bullshit in a meeting when my boss sent a message on the side asking "is this bullshit?" Bullshit detectors are valuable. Maybe we need more research into AI that detects bullshit? I think GPT could be used for this purpose: something that has the good natural language processing that GPT has, only instead of being trained to output bullshit it would be trained to detect bullshit.
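
          A rough sketch of that idea (my illustration, not the poster's: it assumes the pre-1.0 openai Python package with an API key already set, and a plain prompt-based classifier rather than a specially trained model; the model name and prompt are placeholders):

          import openai  # assumes openai.api_key has been set elsewhere

          def looks_like_bullshit(statement):
              """Ask the model to flag vague or unsupported claims. The detector
              can itself be wrong, so treat the answer as a hint, not a verdict."""
              resp = openai.ChatCompletion.create(
                  model="gpt-3.5-turbo",  # placeholder model name
                  messages=[
                      {"role": "system",
                       "content": "You are a bullshit detector. Reply with exactly one word: "
                                  "BULLSHIT if the statement is vague, evasive, or unsupported, "
                                  "otherwise OK."},
                      {"role": "user", "content": statement},
                  ],
              )
              return "BULLSHIT" in resp["choices"][0]["message"]["content"].upper()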

      • The reality is we're going to start seeing more and more AI-produced BS that sounds right

        This is the big problem. We have huge numbers of people who believe any bullshit statement as long as it's on the internet, or as long as it sounds authoritative, or as long as it matches what they think the answer should be. In other words, we have an extremely gullible population out there, especially on the internet. BullshitGPT is going to eat them for lunch.

    • by gillbates ( 106458 ) on Wednesday March 15, 2023 @12:50PM (#63373211) Homepage Journal

      No it isn't.

      I work with ChatGPT on a regular basis. It can do simple things amazingly well. But it does complicated things in an abysmal manner. Quick example: it is pretty good at backporting a Python code snippet from Python 3.8 to 2.7. But give it a program, even in function-sized chunks, and it quickly reaches a point where its replies are incomplete or just plain wrong.
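
      For instance, the kind of small, mechanical rewrite it handles well looks like the following (my own hypothetical illustration, not taken from any actual session):

      # Python 3.8-style original: f-strings and type hints are not valid in 2.7.
      def describe(name: str, count: int) -> str:
          return f"{name} has {count} items"

      # The 2.7-compatible rewrite ChatGPT tends to produce correctly for snippets this small
      # (it replaces the definition above in the ported file):
      def describe(name, count):
          return "{0} has {1} items".format(name, count)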

      ChatGPT is a good way to come up to speed on a new language, but it won't replace the understanding of computer science fundamentals necessary to make sense of its answers. You already have to be an expert to use it well. And even then, it sometimes gives you wrong answers - the discovery of which actually undoes whatever productivity boost it would have otherwise provided.

      Yes, it can write code snippets for you. It can replace your typing of the program you would have written with a copy and paste. But it can't replace a software engineer, because an engineer knows which questions to ask, and ChatGPT doesn't. It can only give you an answer to the question you asked, not the question you should have asked.

      In the 80's, compilers which could create assembly code from C must have seemed like magic. Yet the algorithm for creating language grammars is today a solved problem. ChatGPT is similar - it's more or less a high level compiler that works with human languages rather than computer languages.

    • The Information Superhighway was invented in 1969 ...so it might be a few years yet ...

  • So, if the candidates profess fluency with ChatGPT, they don't get called back for a second interview. Right?

  • Good. (Score:2, Informative)

    by backslashdot ( 95548 )

    ChatGPT will be the baseline very soon. Telling someone they can't use ChatGPT would be like telling an engineer they can't use NX or AutoCAD. If you need to produce any work product, first ask chatGPT or you're likely to miss something. Many professions will start using ChatGPT instead of, or alongside, a checklist or a template. If you're a doctor or health professional and need to ask screening questions, ChatGPT will be the way to go. I mean, it's passing the US medical licensing exams. Not just the mul

    • by vux984 ( 928602 )

      Really? A coworker asked it how to write a windows installer the other day...

      It suggested using 'built in wix' and then told us to download it. (So it's not built in, I guess?) It provided a download link for version 4, but provided code for version 3 (that wasn't compatible with v4). The XML templates were pretty basic copypasta that really did nothing more than copy a single file (no creating a desktop shortcut, no UI), no explanation of what anything did or why. The XML contained extraneous stuff that

      • I'll see your example and raise you one. I asked ChatGPT if it could be used without a network connection. It responded in the affirmative. I then asked how I would do this, and it responded that I would use the API, including example code. I then asked how I would download the source code for ChatGPT, and it responded that I would use the API instead. It told me I needed an API key. I asked why I would need an API key, and it responded that this was how it contacted the model in the cloud. I asked
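
        For reference, the "use the API" advice boils down to something like the sketch below (my illustration, assuming the pre-1.0 openai package; the key and model name are placeholders). Every call still goes over the network to OpenAI's hosted model, so it is no help offline:

        import openai

        openai.api_key = "YOUR_API_KEY"  # placeholder; every request is authenticated against OpenAI's service

        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[{"role": "user", "content": "Can I run you without a network connection?"}],
        )
        print(resp["choices"][0]["message"]["content"])
        # With no network connection this call simply raises an error;
        # there is no local model to fall back to.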

    • "Baseline"? For what?

  • by necro81 ( 917438 ) on Wednesday March 15, 2023 @10:32AM (#63372799) Journal
    In the job requirements did it say "5 years' experience with ChatGPT a must" ?
    • In the job requirements did it say "5 years' experience with ChatGPT a must" ?

      Come now. You know if it's listed in the job requirements the bare minimum would be ten years experience, fifteen preferred.

      • But 15 years means you are soon going to be too old to do tech work and must be fired for age (but HR will make up another excuse.)
        You are not young enough to say yes to everything and unable to make convincing arguments that may show the ineptitude of management to staff, who will figure that out anyway (for extreme examples see Musk.)
        You are not young enough to dispose of your personal time by letting them STEAL your uncompensated time.
        You have children who are getting too old to shovel off onto somebody

        • None of what you said affects the HR drones that create job requirements. How it usually works is:

          A competent person puts together a list of job requirements. On this list will be several items stating, "Experience utilizing $some_new_tech."

          HR Drone sees $some_new_tech and assumes all knowledge is forever and looks up their handy-dandy experience chart for all jobs and goes, "Yup. 10 years experience or degree. Fifteen to twenty years experience for no degree."

          That's how you end up with nonsense job requi

    • Just arrive in a DeLorean in case they ask how.

  • It would be significant if, say, 60% of all new startups required ChatGPT for a job
  • Experience needed as well as knowledge. Ten years of verified ChatGPT expertise must be shown on job application.
  • by Hasaf ( 3744357 ) on Wednesday March 15, 2023 @11:16AM (#63372887)
    I have played with it to "Improve my code." I was working with Arduino sketches (C++). In nearly every case, the code did not work. It would compile, but the hardware did not work as intended, even in cases where the original code worked fine.
    • by leptons ( 891340 ) on Wednesday March 15, 2023 @01:20PM (#63373311)
      ChatGPT is pure hype, even the company that makes it said "people are going to be very disappointed" because the people talking it up think it's an "artificial general intelligence", but it isn't. It's nothing close to AGI. It's closer to a magic 8 ball than it is to actual "intelligence".
      • > It's closer to a magic 8 ball than it is to actual "intelligence".

        Damn right. However, that won't stop people treating it as if it were some faultless oracle, likely because it spent some time flattering the idiot.
        That was even the case a half century back with Eliza.

  • So, in other words, they're looking to hire people who managed to fire up chatGPT and type in a few questions?

    I'm sure they can find a few Gen Zers who have that highly sought-after skill set.

    Ok no more sarcasm. More likely, next month, we'll be seeing jobs requiring "5 or more years as an advanced chatGPT developer". And hundreds of thousands of resumes will be claiming exactly that....

    Yes, I'm old and cynical.
  • Recruits are asked during their entry assessments to give prompts to ChatGPT. Assessors review whether they initiate the process well, rather than the actual answers.

    So is the entire 'test' just having someone type questions into a box? What the hell does 'initiate the process well' mean? Don't hit the 'enter' key too hard or too soft?

  • Japanese businessman: "However, it is also dangerous to be too afraid to utilize new technology."
    Amish: "However, it is dangerous to adopt a new technology too quickly without understanding the implications and effects on society and our way of life."
  • by Anonymous Coward
    Eliza experience accepted at 1/2 value of chatgpt.
    How does it feel to ask, "Why do you always ask about my mother? I thought this was a job interview"?
  • that says (to those with enough experience to read it) "do not invest here" and "do not expect a job here to last longer than any initial investment funds"

    Companies that get started based on buzzwords and/or the current hype-thing are nearly always scams; those that aren't are usually run by folks who did not realize how scammy the buzzword they latched onto was (and THAT is also not a good sign of competence)

"When the going gets tough, the tough get empirical." -- Jon Carroll

Working...