Tech Leaders Hold Back on AI Agents Despite Vendor Push, Survey Shows
Most corporate tech leaders are hesitant to deploy AI agents despite vendors' push for rapid adoption, according to a Wall Street Journal CIO Network Summit poll on Tuesday. While 61% of attendees at the Menlo Park summit said they are experimenting with AI agents, which perform automated tasks, 21% reported no usage at all.
Reliability concerns and cybersecurity risks remain key barriers, with 29% citing data privacy as their primary concern. OpenAI, Microsoft and Sierra are urging businesses not to wait for the technology to be perfected. "Accept that it is imperfect," said Bret Taylor, Sierra CEO and OpenAI chairman. "Rather than say, 'Will AI do something wrong', say, 'When it does something wrong, what are the operational mitigations that we've put in place?'" Three-quarters of the polled executives said AI currently delivers minimal value for their investments. Some companies are "having hammers looking for nails," said Jim Siders, Palantir's chief information officer, describing firms that purchase AI solutions before identifying clear use cases.
Decision trees are not equal to fiction generators (Score:4, Interesting)
The main reason AI is unreliable is that it isn't intelligent. While R1-style reasoning routines have added at least some basic logic testing, logic is not fact checking, and these machines still don't know the difference between fact and fiction.
What is needed for AI adoption is a 0% hallucination rate; it has to be better than an American political conspiracy theorist.
Re:Decision trees are not equal to fiction generat (Score:4, Interesting)
AI has been fully adopted by management for everyday tasks. I worked with a guy who drafts all of the team's tickets and has ChatGPT polish them. Tickets are now very pretty and fluffy, but they have no substance. There is NOTHING in them that wasn't already in the original 2 or 3 bullet points.
AI slop is a waste of time. I have to scan through screenfuls of useless text. The ticket descriptions have "acceptance criteria" such as "make sure the client follows the API specification". Yeah and what else is it gonna do if not?
Today we sent a notification to a client letting them know of the payload we were going to use with their system. The response was fluffy AI slop with 3 bullet points saying how great my payload was, that it's awesome it uses JSON for maximum flexibility, and that the fact that I called the "push" function was "ideal" for their integration. "In short, it's a great job from our side."
At this point I'm feeling insulted. A "great, looks good from our end" email is more than enough.
AI was supposed to increase productivity, but sifting through screen after screen of AI slop just adds more time.
Oh I know, we need an AI Agent to summarize emails!!!
Re: (Score:2)
You can't train out the "hallucinations" unless you bias the model. Uh, oh, did I say a bad word?
Basically, a 0% hallucination rate isn't happening.
Haven't had real value from LLM yet... (Score:2)
So far, if I ask it something that is neither obvious off the top of my head nor pretty much verbatim in the top two or three results in Google, it comes back confidently incorrect. It mashes plausible-sounding words together in ways that tend to be on theme, but useless.
For some folks it may be useful, but at least for me it hasn't provided value. I could imagine it augmenting my development with somewhat fancier autocomplete for obvious but tedious snippets that come up, but the crapshoot is current
The bubble is getting closer to popping. (Score:4, Interesting)
It reminds me of computers: definitely overhyped when they were introduced into the workplace in, what, the '60s? But they could do basic math perfectly. They solved some limited use cases perfectly from the start. They enabled banking and accounting, and we saw COBOL/Fortran programs evolve into basic calculators and spreadsheets until we reached the information age. The technology had to crawl before it could run. Right now, with generative AI, there's no "crawling." Once I see the crawl step, I'll be a believer. But I think this is just evidence that the hype bubble really isn't selling and will hopefully pop soon.
Re: (Score:2)
The funny part is that the biggest threat isn't the difficulty of landing an unambiguous use case; it's that competitors have shown "fast following" can be done super quickly and at a fraction of the cost.
Investors seemed relatively content pouring money into unclear results, until they found out that competitors can replicate the behavior and, in the process, point out to the casual investor that self-hosting models is a possibility, which really destroys the "everything must be rented" dream.
Re: (Score:2)
AI has helped me in tremendous ways.
As a front-end developer, I use it regularly to solve coding problems that would have taken days for me to solve in the past. It provides a fantastic starting point, and then I work through the details with iterations of the code to get it to where I want.
It's not perfect, which is why I need to be there to give it a helping hand, and I am fine with that. It's a tool.
More importantly, it provides fantastic ideas about how to solve problems I never would have thought of on my own.
Re: (Score:2)
So you're a terrible programmer. Please let me know what projects you work on so I can avoid them.
Re: The bubble is getting closer to popping. (Score:2)
To be honest, LLMs can be really good at suggesting ideas, not only in programming, but in other disciplines as well. The catch is that some human needs to have produced the idea first, and have written it somewhere. This is one of the few useful things LLMs can do.
But to spend trillions on this? Nah.
"We move fast and break things," (Score:3)
"and we want YOU to move fast and break things too. And our AI tech can certainly help you break things fast - that's a promise!"
"We've spent a shit-ton of money on this AI thing, and we're not making it back fast enough. DeepSeek, and the similarly open LLMs that it will encourage, are threatening our ROI. It looks like we're about to take a bath, but we'd rather avoid that. So we're appealing to you for help; but since we're fundamentally unable to be up-front and honest, we're going to try running another con on you."
"You have always been our best beta testers; you pay us to test our products rather than us paying you for the service you provide, and you keep coming back for more. Thanks for being our marks for so long - won't you please join us for yet another round? We're counting on you to help us out, because we think you're too stupid to know what we're doing and/or are too rich to care".
I think that about sums up this news story. I'm torn. On one hand, I hope said 'tech leaders' hear "don't call us we'll call you" from the companies they're attempting to fleece. On the other hand, there are a lot of companies that I'd love to see get shafted. So pass the popcorn please!
A strange game. The only winning move ... (Score:2)
ChatGPT is like smartphones. If there are bugs or inaccuracy, no big deal. It's an annoyance, but we just ignore it or reboot. Self-driving cars and autonomous vehicles, on the other hand, are agentic systems. Allowing a computer system to move an actuator can have safety or significant financial or life consequences.
This is how Skynet is born. If you don't believe in this conspiracy, just ask ChatGPT.
More seriously, agentic systems that are allowed to do things that expert systems and other programs have a
It's good to be hesitant to deploy... (Score:3)
...very early prototypes
I have no doubt that future AI will be very useful, but today's tech is suitable only for testing and evaluation, not widespread deployment.
Please train AI on /. AI story comments (Score:2)
Sensible (Score:2)
If you are just doing 'classical' LLM chatbot there's certainly a risk of getting confident nonsense, or awkward cases where