More Than 40% of Japanese Companies Have No Plan To Make Use of AI

An anonymous reader quotes a report from Reuters: Nearly a quarter of Japanese companies have adopted artificial intelligence (AI) in their businesses, while more than 40% have no plan to make use of the cutting-edge technology, a Reuters survey showed on Thursday. The survey, conducted for Reuters by Nikkei Research, pitched a range of questions to 506 companies over July 3-12 with roughly 250 firms responding, on condition of anonymity. About 24% of respondents said they have already introduced AI in their businesses and 35% are planning to do so, while the remaining 41% have no such plans, illustrating varying degrees of embracing the technological innovation in corporate Japan.

Asked for objectives when adopting AI in a question allowing multiple answers, 60% of respondents said they were trying to cope with a shortage of workers, while 53% aimed to cut labour costs and 36% cited acceleration in research and development. As for hurdles to introduction, a manager at a transportation company cited "anxiety among employees over possible headcount reduction." Other obstacles include a lack of technological expertise, substantial capital expenditure and concern about reliability, the survey showed.
  • Good. (Score:5, Insightful)

    by serviscope_minor ( 664417 ) on Thursday July 18, 2024 @08:07AM (#64634801) Journal

    More than 40% of companies are not planning on investing time into a very experimental system which may be able to assist a bit, or may be utterly incapable until brand new research is done.

    AI is not a panacea. It can do some things well, other things very very badly. And it's hard to control to make it do what you actually want.

    What we will inevitably see is that 99.9% of companies who try AI will fail horribly and give up in frustration, then maybe later buy a product which does what they need and which may use some AI techniques under the hood.

    • Re:Good. (Score:5, Interesting)

      by Comboman ( 895500 ) on Thursday July 18, 2024 @08:20AM (#64634819)

      Given that Japanese companies were among the first to embrace robotic automation in manufacturing, their wariness around AI is particularly telling.

      • Given that Japanese companies were among the first to embrace robotic automation in manufacturing, their wariness around AI is particularly telling.

        They're simply taking their time to see how they can integrate AI into their Gundam [gunjap.net].
      • "Given that Japanese companies were among the first to embrace robotic automation in manufacturing, their wariness around AI is particularly telling."

        "Embrace" would be a misnomer: They were still building out their post-war industrial base when they made those decisions, so there was literally no downside. The Allied reconstruction effort handed them an economy, and they made the most of it. But Japanese business since that energy waned in the '90s has been largely conservative and averse to change. Th

        • They are (or were till recently) commonly using tape recorders, fax machines, and other 'old school' office tech well after the rest of the world moved on from those things. For all that they were seen as beyond cutting edge and ahead of the world, they were doing a lot of things in an old-fashioned and regressive/stagnant manner in day-to-day business.
      • by dvice ( 6309704 )

        It is said that in 1980 Japan was already living in year 2000. And in 2020 it was still living in year 2000. It is the country of fax machines and stamps.

      • Because this particular AI is new, but not the only form of automation that exists. AI has been around for ages, it's just not called that. Once you understand the magic trick it stops being magic and starts being mundane technology. And Japan has been using advanced technology for decades. The technology that drives robotics in manufacturing is a result of AI research, just not the chat-bot style of AI that has rich people drooling.

        There's no reason for them to invest billions just to reap a profit of a

    • Re:Good. (Score:4, Insightful)

      by Visarga ( 1071662 ) on Thursday July 18, 2024 @08:58AM (#64634897)
      Japanese businesses are still trying to give up the fax machine. I am not surprised 40% haven't heard of LLMs.
      • Businesses have largely abandoned fax; it's the government that still uses it for official purposes
        • by HBI ( 10338492 )

          Mostly because courts have ruled that fax transmissions have the effect of original documents. Absent that, they'd be dead.

          • Also they used to design and manufacture those things, probably still do. Need to get as much mileage out of them as possible.
      • The article doesn't say they haven't heard of AI, it says they have no plans to make use of it.

        AI has some uses, but it's hard to know how it will fit in, if at all, to most businesses.

        You can make shitty customer service chatbots that give bad customer service and can be jailbroken into going off on a nazi rant with your company branding. You can use it to auto-unsummarize text, making much more for people to read, thereby removing an important barrier of human effort when it comes to wasting other people's

    • Re:Good. (Score:5, Interesting)

      by Vlad_the_Inhaler ( 32958 ) on Thursday July 18, 2024 @09:03AM (#64634915)

      How large are these companies?
      Hashimoto's Dry Cleaning Emporium Ltd has no plans to make use of AI? I'm shocked, shocked!

      • Personally, I have the impression that AI has the same problem we humans do. Even worse. It will get better, but in essence it will remain a pattern searcher. It will see stuff that isn't there, make mistakes, get influenced badly. Would not be surprised if it would need "sleep" to adjust its coefficients after a day's work. Constantly will need to retrain for new circumstances... It will have to get a will of its own to resist or ameliorate badly formed commands... Probably will threaten to leave the comp
      • by znrt ( 2424692 )

        not only that, if polls in general are already something not to be taken very seriously, one about such a sensitive matter much more so. no wonder that half of them declined to participate to begin with.

        then again, if this result were true it wouldn't really surprise me either, japan has a very strong work and duty culture. they have job descriptions that exist nowhere else in the world, mainly because everyone able has to have a job.

      • How will they be competitive if they don't have a chatbot to mostly give OK but basic advice and occasionally tell customers or staff to wash the clothes in lighter fluid then set them on fire to dry them fast?

      • by cstacy ( 534252 )

        How large are these companies?
        Hashimoto's Dry Cleaning Emporium Ltd has no plans to make use of AI? I'm shocked, shocked!

        It is highly likely that the cleaners are already using "AI".
        Not an LLM, of course.

    • At first, I was thinking that Japan is maybe just not as sensitive to hype. But maybe it is more that the US, and in particular California/Silicon Valley, is extremely sensitive to it, and actually most of the world is more rational.

      I would also recommend reading this blog post, by someone who actually knows something about data science: https://ludic.mataroa.blog/blo... [mataroa.blog]

    • by gweihir ( 88907 )

      Indeed. LLMs may well turn out to be harmful in many contexts. Remember that one mistake can outweigh thousands or millions of acceptable decisions. Hence LLMs, used in the wrong areas (which are basically all areas where a human actually has to do a tiny bit of thinking), probably will turn out to be a complete disaster. In particular, because spotting the mistakes an LLM makes can be harder than doing it right yourself. Hence the fact-checkers that an LLM requires will need to be more qualified. You canno

    • I think it's a sincere FP, but definitely NOT "Insightful", so I regard that as another moderation failure. The basic problem with your analysis is that maybe only that last 0.1% of companies will survive and the rest will be bought out or vulturized.

      Ergo, my feeble attempt at a joke Subject.

      I should have something to say about the local aspects of the topic... But the most relevant and potentially interesting examples that seem to come to mind would cause breaches of privacy. I do think there has been a ki

      • To borrow from another poster, what about Hashimoto's Dry Cleaning Emporium Ltd? There are millions of those.

        • by shanen ( 462549 )

          Too late to justify a substantive answer, but that answer would have called for clarification of how "companies" are defined in Japan. Short answer: Family businesses are in a separate category. I'm basically certain they were not considered as part of the population of "companies" for that report. There are two or three main categories of companies at the top of the pyramid.

          If I had been seriously engaged by the story, then I would have had to chase down the Japanese sources. Sorry, but no thanks.

    • by dvice ( 6309704 )

      > More than 40% of companies are not planning on investing time into a very experimental system

      AI has many products. Classifiers are old and very well understood AI systems, not experimental. You pretty much get what you expect from those. Very good solution if you have a lot of sorting you need to do and you are fine with 90% accuracy that comes with a cheap price.
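      The "old, well-understood" classifier the parent describes can be sketched in a few lines. This is a minimal nearest-centroid classifier, illustrative only; the two-feature "sorting bins" data and the names `train_centroids`/`classify` are made up for the example, not from the article:

      ```python
      # Minimal nearest-centroid classifier sketch (pure Python, illustrative).
      # The feature vectors and bin labels below are hypothetical.
      import math

      def train_centroids(samples):
          """Average each class's feature vectors into one centroid."""
          sums, counts = {}, {}
          for features, label in samples:
              acc = sums.setdefault(label, [0.0] * len(features))
              for i, v in enumerate(features):
                  acc[i] += v
              counts[label] = counts.get(label, 0) + 1
          return {label: [v / counts[label] for v in acc]
                  for label, acc in sums.items()}

      def classify(centroids, features):
          """Assign the label of the nearest centroid (Euclidean distance)."""
          def dist(c):
              return math.sqrt(sum((a - b) ** 2 for a, b in zip(c, features)))
          return min(centroids, key=lambda label: dist(centroids[label]))

      # Hypothetical sorting task: route items by two measured features.
      training = [([1.0, 0.2], "bin_a"), ([0.9, 0.1], "bin_a"),
                  ([0.1, 0.9], "bin_b"), ([0.2, 1.0], "bin_b")]
      centroids = train_centroids(training)
      print(classify(centroids, [0.95, 0.15]))  # → bin_a
      ```

      The point of the sketch: such a classifier is entirely predictable and inspectable, which is why "you pretty much get what you expect" from it, unlike the experimental systems discussed above.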

      > AI is not a panacea.

      AlphaFold3 will most likely be pretty close to that, considering that it is a tool for creating new drugs for everything that can be cured with some molecule. But I assume you didn't mean that literally.

      • AlphaFold3 will most likely be pretty close to that, considering that it is a tool for creating new drugs for everything that can be cured with some molecule. But I assume you didn't mean that literally.

        I did mean that and no it is not. It predicts the shape of some molecules and their interactions. That's useful. It doesn't create drugs. It can, maybe, predict a molecule which will affect a certain protein (or other molecule) in a certain way.

        Someone needs to then synthesize that, with sufficient purity th

  • by Revek ( 133289 ) on Thursday July 18, 2024 @08:18AM (#64634815)
    The truth is a conservative approach means that these companies can wait until a clear useful adaptation of LLM's is proven. Then they will implement it without all the waste taking place right now in the pursuit of following the latest gold rush.
  • I can't tell from the fine summary or the article, but pedantry matters here: having no plan to use AI is not the same as planning to not use AI.

    • also, as others have said, having no plan to use AI NOW doesn't mean that they may not develop a plan to use it once it a) becomes more viable/reliable and b) they can determine the business use for it that would positively affect their company's bottom line. Seems like whoever wrote the title for this article was perhaps being just a tad 'clickbaity'
    • By the time AI improves, it will not be *this* AI. So in a pedantic sense one could say it is planning not to use AI.
    • by godrik ( 1287354 )

      In business speak it is about the same thing. It is extremely rare that companies tell you that they will never do X and actually commit to it. There could be a new board in the company next month and all the plans have changed.

      In practice "having no plan to use X" and "planning to not use X" is about the same.

      • In business speak it is about the same thing. It is extremely rare that companies tell you that they will never do X and actually commit to it. There could be a new board in the company next month and all the plans have changed.

        In practice "having no plan to use X" and "planning to not use X" is about the same.

        Fair point, but it also reinforces the age-old meme "lies, damn lies, and statistics..."

  • AI is the latest investor and marketing buzzword, but in many cases it is bullshit, not solving a real problem. For example, I was talking to someone I knew about automated trains, and they remarked “oh, they use AI?”, to which I had to counter “no, it is just algorithms and logic”.

    For many people AI is some sort of magic beast that solves everything. While it has its place, many things are still solved by algorithms, logic and a human thinking how to solve a problem.

    • I was telling my company that "functions" (input = output, reliably) are scalable and get you the results you want, but AI is making it easier to implement those functions. Not because AI is in the resulting function, but because AI makes it easier to figure out how to do it.

      • by narcc ( 412956 )

        I'm not sure if you're on track or not.

        People have some odd ideas about what AI is and what it can do. Part of the problem is that we don't do a very good job of distinguishing models from model making. Models are just functions that map inputs to outputs. The power of AI isn't in the resulting model, but the various processes we use to make models. (It's like evolution that way.) We use the term 'AI' for both things, despite the confusion that causes.

        When what a function should do isn't clear or too dif
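        The model/model-making distinction above can be made concrete with a toy sketch: the "model" that comes out is just an input-to-output function, while the interesting part is the search process that produced it. This is an illustrative pure-Python gradient descent on made-up data sampled from y = 2x (the name `make_model` and the numbers are assumptions for the example):

        ```python
        # Illustrative sketch: "model making" vs. the resulting "model".
        def make_model(data, steps=2000, lr=0.01):
            """Model-making: search for a weight w that fits the data."""
            w = 0.0
            for _ in range(steps):
                # Gradient of mean squared error with respect to w.
                grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
                w -= lr * grad
            return lambda x: w * x  # The model: a plain input -> output function.

        # Hypothetical training data sampled from y = 2x.
        model = make_model([(1, 2), (2, 4), (3, 6)])
        print(round(model(5), 2))  # → 10.0
        ```

        The returned function is dumb and frozen; all the "intelligence" was in the fitting loop, which is the distinction the comment is drawing.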

        • You could have also meant that LLMs help you write code, which I'd dispute

          That's what I was mainly saying. I'm not in line with the "10x engineer" believers, but 1.2x is completely feasible. I have personal experience with copilot autofilling things when using the Amazon CDK or other not-super-popular frameworks to provide examples or such. AI is great at providing examples.

          That said, I agree that once you reach some point in the novel scale, it starts to give you false information that can slow you down.

    • Even if AI can only imitate us, it can learn from millions. OpenAI has 180M users, they solve 1B tasks per month. Each session is an opportunity to learn about what works and what doesn't. If they retrain often, they can assimilate insights from everyone and then apply them contextually as needed. An experience flywheel. This time the models don't learn just how to generate human-sounding text, but how to approach problems. They learn in problem space not just in language space.
      • Even if AI can only imitate us, it can learn from millions.

        No. They don't "learn" things. Even "training" is a misnomer.

        If they retrain often, they can assimilate insights from everyone and then apply them contextually as needed. An experience flywheel. This time the models don't learn just how to generate human-sounding text, but how to approach problems.

        No, that's a fundamental misunderstanding. You're not going to make LLMs intelligent by stuffing them with more information. A different approach has to be used. It might well be something else COMBINED WITH the current approach. But LLMs only handle the hallucination and regurgitation parts. There's no cogitation. They don't understand anything they've output. That is no different from many people's standard MO, but those people aren't actually t

        • by chthon ( 580889 ) on Thursday July 18, 2024 @09:34AM (#64634997) Journal
          AI does not hallucinate, it bullshits [scientificamerican.com]
        • by gweihir ( 88907 )

          Exactly. Sure, LLMs can simulate what a very dumb human with an excellent memory can do while on "autopilot". To a degree. Maybe. But that is about it. LLMs have zero understanding, zero reasoning capability and zero fact-checking capability. In addition, they cannot do abstraction, they only can do statistical clustering. For some very simple problems that can replace abstraction capabilities, but for anything a tiny bit more advanced, it cannot. Hence LLMs do not and cannot "learn how to approach problems

          • by JustNiz ( 692889 )

            Disclaimer: I know nothing about how LLMs actually work, but what you wrote used to coincide with my understanding too: that an LLM is really just cluelessly regurgitating symbols that it doesn't even understand, based on statistical relationships to symbols extracted from the user's input. It also can't learn beyond its "training" phase. As a result the emergent behaviour pretty much just coincidentally happens to sound enough like an intelligent agent that people incorrectly believe it must be intelligent.

            Ho

            • by gweihir ( 88907 )

              Thanks. Nice to see there are people that still see what is actually going on. My comment about mathematics is not that relevant and any smart person that actually looks can see how utterly limited LLMs are. What I really do not get is that people assume LLMs are intelligent from very limited indicators. They might as well assume a lexicon is "intelligent", that would make about as much sense.

              The "remembering" is a temporary state: The LLM does not answer only to your last question, but the questions and st

              • by JustNiz ( 692889 )

                ahh got it thanks. I haven't tried what happens after a bunch of queries. I guess that's what the "context" parameter actually is.

            • by narcc ( 412956 )

              can anyone explain what is actually going on?

              First, the model isn't changing as you use it. That's simply not possible. What does change, however, is the input you're giving it. That includes not just what you've written, but the output from the model as well. If you've heard the term "context window" this is what that means.

              It's also important to remember that these things do not operate on facts and concepts, but on 'learned' relationships between tokens. If you imagine the model as a complex shape, input stretches and squashes that shape for th
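              The context-window behaviour described above (frozen model, growing input that includes the model's own prior output, oldest turns falling out of the window) can be sketched like this. `echo_model`, `chat`, and the window size of 6 are hypothetical stand-ins, not a real LLM API:

              ```python
              # Sketch of the "context window" idea: the model never changes during
              # a chat; only its input does, and that input includes the model's
              # own earlier replies until the window limit pushes old turns out.
              MAX_CONTEXT = 6  # hypothetical window size, measured in turns

              def echo_model(context):
                  """Frozen stand-in model: output depends only on its input."""
                  return f"reply#{len(context)}"

              def chat(user_turns):
                  context = []
                  for turn in user_turns:
                      context.append(("user", turn))
                      reply = echo_model(context)       # model sees the whole window
                      context.append(("model", reply))  # its output re-enters the input
                      context = context[-MAX_CONTEXT:]  # oldest turns fall away
                  return context

              history = chat(["hi", "what is AI?", "thanks", "bye"])
              print(len(history))  # → 6 (capped: the earliest turns were dropped)
              ```

              This is why a session can *appear* to "remember" earlier exchanges without the model itself learning anything: the memory lives entirely in the re-fed input.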

    • by gweihir ( 88907 )

      Indeed. The main limiting factor of LLMs is that they have zero reasoning ability and cannot check the output they generate. It may, if simple, be perfectly fine. It may be partially right. It may miss important detail. And it may be completely disconnected hallucinations. An LLM cannot tell. And that limits its use to low-risk extremely fault-tolerant uses and there are actually not that many of those around. The only real application I see is replacing low-level, no-decision-power desk workers. That means

  • by Ksevio ( 865461 ) on Thursday July 18, 2024 @08:47AM (#64634867) Homepage

    To put that another way, the majority of Japanese companies have plans to make use of AI. That's pretty outstanding for such a new technology, though I guess that could include minor uses like reviewing letters with ChatGPT

    • It is well known that current AI language technology works "best" in English. Intuitively, you can understand that statement as an acknowledgement that training/copy materials are predominantly in English, or as an acknowledgement that Japanese language features are less simple than a conventional Latin-script stream of words, or as a statement about market size, priorities and allocated resources, or as an observation about the cultural backgrounds of the LLM developers.

      In any case, the current LLM craze

  • ... which is still crap. Good to say that these companies see things clearly.

  • by Megane ( 129182 ) on Thursday July 18, 2024 @10:02AM (#64635065)
    They've been burned by that kind of hype before: Fifth Generation Computer Systems [wikipedia.org]
  • by Misagon ( 1135 ) on Thursday July 18, 2024 @11:06AM (#64635211)

    I am sad to hear that the number is not higher.

  • Again proving that AI is not entirely necessary in the preparation of sushi, or in fine calligraphy.

  • Japan and AI (Score:4, Informative)

    by cstacy ( 534252 ) on Thursday July 18, 2024 @11:44AM (#64635329)

    Japan led the way on the previous AI Bubble in the 1980s.
    This was a huge push by government and industry,
    called "The Fifth Generation Project".
    Here's a random article about it:
    https://www.sjsu.edu/faculty/w... [sjsu.edu]

    They know all about AI over there.

    Now we have the LLM Generation.
    I understand it refreshingly brings back your dead ancestors.
    Or so an LLM might tell you.

  • So those Sony phones, tablets, and laptops...they're really setting the world on fire, aren't they? I LOVE my Canon camera, but none of the Japanese cameras on the market have cloud integration (TMK). None do much with computational photography. Only recently did their very expensive professional flashes STOP using AA batteries and move to rechargeable batteries. Only relatively recently could you even charge a camera via USB.

    I'm an XBox user, so can't comment on the PlayStation world. However, the on
  • Can't deal with 2 major game changers at the same time, right?
