

IDC: For 1 In 4 Companies, Half of All AI Projects Fail (venturebeat.com)
A new study from International Data Corporation (IDC) found that of the organizations already using AI, only 25% have developed an "enterprise-wide" AI strategy, and that among those in the process of deploying AI, a substantial number of projects are doomed to fail. VentureBeat reports: IDC's Artificial Intelligence Global Adoption Trends & Strategies report, which was published today, summarizes the results of a May 2019 survey of 2,473 organizations using AI solutions in their operations. It chiefly focused on respondents' AI strategy, culture, and implementation challenges, as well as their AI data-readiness initiatives and the production deployment trends expected to grow over the next two years. Firms blamed the cost of AI solutions, a lack of qualified workers, and biased data as the principal blockers to AI adoption internally. In fact, respondents identified skills shortages and unrealistic expectations as the top two reasons for failure, with a full quarter reporting failure rates of up to 50%.
However, that's not to suggest success stories are few and far between. More than 60% of companies reported changes to their business model in connection with their AI adoption, and nearly 50% said they'd established a formal framework to encourage the ethical use of AI and to manage its potential bias risks and trust implications, according to IDC. Moreover, 25% report having established a senior management position to ensure adherence.
Re:Retard. BeauHD. You are a TOTAL MORON. (Score:1)
Give the kid a break. He doesn't know he's redefining well-established terms to play buzzword bingo. Dunning-Kruger applies aptly here.
Re: (Score:2)
This is what happens when you have idiots writing the news.
Can we replace them with an AI?
Re: (Score:2)
I wrote papers on fuzzy logic in the '90s. We did NOT call it AI.
"I wrote papers on transistors but we didn't call it computers."
Re:Retard. BeauHD. You are a RETARD. (Score:4, Interesting)
There is no universal definition of "AI". And you can't use the human mind as the standard, because we don't know how it works. If the definition is "at least as good as the average human" on a variety of task tests, that would mean AI cannot exist at all until it reaches that level. That's not practical, because we still need something to describe results that are part-way there.
If you can offer a clear-cut and practical definition, please do. (I've been in many similar debates, and I'm not optimistic you can pull it off.)
Re: (Score:2)
How about "Data Processing"?
Re: (Score:2)
Re: (Score:1)
There are different kinds of "data processing," and we need to distinguish between them. Perhaps "statistical or weight-based algorithms" for some of the techniques currently being called "AI".
Re: (Score:3)
There is no universal definition of "AI".
General AI (or strong AI) is "a machine that has the capacity to understand or learn any intellectual task that a human being can." If we don't have that, then it's not strong AI.
Weak AI is "algorithms and solutions that were discovered while trying to create general AI."
If you can define a clear-cut and practical definition
It's not a single definition, it's two, because people use the term "AI" to refer to two different things. (You can probably add a third definition related to fantasy/science fiction.)
Re: (Score:2)
Re: (Score:1)
Okay, I was rude, I admit. Bad me.
The problem with that definition is that it's based on how an algorithm is developed rather than on what it does or its inherent structure. That's like classifying animals based on who discovered them. (We do sometimes name them after their discoverers, but that's different from their placement in the ontology "tree".)
Re: And yet, that is how the term is used (Score:2)
that definition is that it's based on how an algorithm is developed
Yes, exactly. And yet, that is how the term is used. That is why I insist on calling them "weak AI": to emphasize that they aren't really AI; their inventors just wish they were.
Re: (Score:1)
First, I don't agree that's how it's mostly being used. It's used a variety of ways, for good or bad.
Second, that still diminishes the practical usefulness of the term "weak AI," because it says almost nothing about the nature of the thing being defined. That's usually not a good characteristic of a definition. There are exceptions, but I doubt we've encountered one here.
Re: And yet, that is how the term is used (Score:2)
Re: (Score:2)
Re: (Score:1)
Do you mean "how the brain works"? Do you know anybody of merit who makes such a claim? Even if one does, the only real proof is to actually build (emulate) one. Human egos create exaggerations consciously and subconsciously.
Re: (Score:1)
Under this definition, if I discover an interesting way to plot complex data visually, it would be "AI," even though most wouldn't consider it to have anything to do with AI.
A data scientist is an important key... (Score:2)
You can just throw together some AI stuff, but to have a really successful enterprise AI project, you need someone who knows what they are doing.
Yes, there are real AI systems in production today at both large and small companies, and there have been for some time now.
Re: (Score:2)
WHAT!?
You mean I can't add "AI" to my product offering inside one quarter and do it on the cheap? Just think of the stock price ramifications! I might get fired. No, far better to make some half-arsed attempt at AI, over-hype it before delivery, and then quietly drop it a year later. That way I'll keep my job and can keep playing golf and smoking expensive cigars. Hell, I might even get a promotion.
Puh! Data scientists indeed. We'll do just fine without them, thanks.
However, in Soviet Russia . . . (Score:2)
IDC: For 1 In 4 Companies, Half of All AI Projects Fail
. . . AI Projects Fail You!
Works every time (Score:3)
"60% of the time, it works every time"
Sex panther (Score:2)
Your kung fu is strong. No, that's my scent.
Re: (Score:2)
Isn't the bigger news (Score:3)
Re: (Score:1)
at least 12.5% of projects succeed
If the bottom 25% reports a failure rate "up to 50%", the conclusion is that the overall success rate is between 50% and 100%; there is nothing in that datum to indicate that *any* failures occur. I suspect that they actually meant that the bottom quartile experiences failure rates of 50% or higher, in which case we could only conclude that the overall success rate is somewhere between 0% and 87.5%, since that quartile alone accounts for at least 25% x 50% = 12.5% of all projects failing. In either case, no useful information is contained in the cherry-picked statistics in the summary.
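A minimal sketch of the two readings and the success-rate bounds each implies, under the simplifying (hypothetical) assumption that every company runs the same number of projects:

    # Bounds on overall AI project success rate, per the two readings above.
    # Hypothetical simplification: every company runs the same number of projects.

    # Reading 1: the bottom quartile of companies fails AT MOST 50% of projects.
    # Then no company fails more than 50%, and nothing rules out zero failures.
    reading1_bounds = (0.50, 1.00)  # overall success between 50% and 100%

    # Reading 2: a quarter of companies fails AT LEAST 50% of projects.
    # Worst case: every project at every company fails (0% success).
    # Best case: only that quartile fails, at exactly 50%, so failures make up
    # at least 0.25 * 0.50 = 12.5% of all projects.
    reading2_bounds = (0.00, 1 - 0.25 * 0.50)  # overall success between 0% and 87.5%

    print(reading1_bounds)  # (0.5, 1.0)
    print(reading2_bounds)  # (0.0, 0.875)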
Re: (Score:2)
Engineer: The glass is twice as big as it needs to be.
No blockchain so why expect it would succeed? (Score:1)
Wanted: Instant Magic, Experience Preferred (Score:4, Informative)
Translation: "There is no mortal capable of delivering on our PHB's nutty promises."
Re: (Score:1)
Humans suck in all fields. You can't avoid silly humans.
Is this better or worse than regular projects? (Score:3)
So 1 in 4 companies report up to 50% of projects fail, meaning ... that 3 in 4 companies didn't report? Or didn't have metrics solid enough to distinguish failure from success? And how do their regular projects do by the same metrics?
Worse, what is an "AI project" in the first place? AI tools are things you can use in pursuit of other goals, not ends unto themselves. If someone said "Make a project using TensorFlow", it's very likely to fail because the goals are ill-defined. The deliverables might work; they just might not solve any useful problem. The same is true if you say "Make a project using object-oriented programming" or "Make a project using Rust" or "Make a project using the blockchain".
[Oh, sorry, my fail, I didn't notice that this was from the IDC. Never mind, carry on, nothing to see here.]
That's a ridiculously high success rate (Score:2)
If true, that's a ridiculously high success rate. The vast majority of non-AI software projects fail.
Of course they fail (Score:2)
Because you didn't use AI to actualize your cloud based blockchain.