AI is deadly or helpful. Humans decide which.
AI can be either very deadly or very helpful; two groups of humans decide which: the solution designers and the end users.
All research must have clear sources and must be verified before it can be trusted. Bad things happen when humans assume instead of verifying AI recommendations.
Good AI solution implementations have designers who make AI intervention controllable, optional, visible and trainable by end users.
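To make that concrete, here's a rough sketch of what "controllable, optional, visible and trainable" could look like as per-user settings an AI feature checks before acting. Every name here is made up for illustration; this isn't any real product's API.

    # Hypothetical sketch: per-user controls an AI feature should respect.
    # None of these names come from a real product.
    from dataclasses import dataclass

    @dataclass
    class AIFeatureSettings:
        enabled: bool = False                # optional: off until the user opts in
        intervention_level: str = "suggest"  # controllable: "off", "suggest", or "act"
        show_activity_log: bool = True       # visible: the user can see what the AI did
        accept_corrections: bool = True      # trainable: user feedback adjusts behaviour

    def ai_may_act(settings: AIFeatureSettings) -> bool:
        # The feature intervenes only when the user has explicitly allowed it.
        return settings.enabled and settings.intervention_level == "act"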
Putting broken AI into phone systems so you can't reach a human to get anywhere, or leveraging useless AI chatbots to avoid paying staff to answer SMS or support chats, with no ability to escalate to a human: those two paths are a cancer that really needs to be illegal. Some organizations have failed so badly at this that call centre personnel are getting stressed out from people screaming at them just to talk to a human.
The very deadly part comes in when AI is used on input streams without the end user's consent, constantly makes dangerous errors, and there is no way to turn it off. Voice input while driving in Android is a classic example of a poor, non-optional AI implementation turning deadly. To safely use a cell phone in a car you must have working voice controls. Sadly, the company that controls Android as an ecosystem broke voice controls three years ago, keeps breaking them in different ways every few weeks, blocks end users from using safe alternatives for "safety reasons", and will not engage on fixing them.

It's one thing when it insults family members, calling them "fruit" or "rubber tardy". It's quite another thing when you say "call 911" and the idiotic AI you don't want "helping" with your critical voice input hallucinates and replaces that with "Thanks for listening!", repeatedly. Another example: after that system gets "improved", saying "call police", a known contact in your phone you have called before, starts dialing a police supply store you have never dealt with on a different continent, without asking for confirmation. Stupidity like that can be very deadly.

When AI misbehaviour you can't turn off almost kills you (and it has done that repeatedly with my family, but I don't want to get into details here), that needs urgent attention. After literally thousands of attempts here, good luck getting it. It is impossible to get through the AI reading the "feedback" to reach the designers who have implemented systems that are a public safety threat. The real problem isn't the AI but the out-of-control solution designers who think constant change is good and answer to no one, and monopolistic companies implementing AI in critical systems in very dangerous ways.
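That last failure is exactly the kind of thing a trivial guard prevents. A minimal sketch of what I mean, in Python; the Contact type and every name here are made up for illustration, not Android APIs. The rule: never silently dial a number the user has never called.

    # Hypothetical confirm-before-dialing guard. Nothing here is a real
    # Android API; the types and names are illustration only.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Contact:
        name: str
        number: str
        previously_called: bool

    def resolve_voice_dial(query: str, contacts: list[Contact],
                           confirm: Callable[[str], bool]) -> Optional[str]:
        """Return a number to dial, or None if dialing was declined."""
        matches = [c for c in contacts if c.name.lower() == query.lower()]
        known = [c for c in matches if c.previously_called]
        if known:
            # Unambiguous, previously called contact: safe to dial directly.
            return known[0].number
        if matches:
            # Contact exists but was never called: ask before dialing.
            c = matches[0]
            return c.number if confirm(f"Call {c.name} at {c.number}?") else None
        # No contact match at all (e.g. a random search result): never dial silently.
        return None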
Research-level AIs can be incredibly useful, but it really depends on the topic. AI hallucinations are still very common, and it isn't always obvious when they are happening. Even when I correct errors, I've seen AIs return to the same errors over and over again; after correcting one a half dozen times, I gave up. AI also has a really hard time keeping a complex solution, especially code, the same while fixing one minor issue in it. For example: "this code is awesome, but I need you to add a call to ring a bell at the end so we know it's done." Completely different code comes back in the response.
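To be clear how small that request is: assuming the existing code has some entry point (long_running_job() below is a stand-in I made up), the whole change is one line, which the model then buries under a rewrite of everything else.

    # Everything above the last two lines is the existing, working code the
    # AI was told to leave alone; long_running_job() is a made-up stand-in.
    def long_running_job():
        ...  # the existing solution

    long_running_job()
    print("\a")  # the requested change: ring the terminal bell when done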
Research-level AIs can be good for engineering and medical solutions to problems, and for finding the particular items you need, but they can really suck at actual sourcing: working links to where you can buy something. Sometimes this works; other times it's a bunch of 404s or references to companies that don't even carry the product line. Something as simple as finding who actually carries a particular toilet anywhere in an entire country, never mind who has it in stock: total fail.
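That failure is at least cheap to catch. A minimal sketch of the sanity check I mean, in plain Python standard library, nothing vendor-specific: before trusting an AI-supplied list of retailer links, confirm they resolve at all. This only catches 404s and dead hosts; whether the store really stocks the item still needs a human to verify.

    # Weed out dead links from an AI-sourced list of URLs. Some sites reject
    # HEAD requests, so expect a few false alarms.
    import urllib.request
    import urllib.error

    def link_is_alive(url: str, timeout: float = 10.0) -> bool:
        req = urllib.request.Request(url, method="HEAD",
                                     headers={"User-Agent": "link-check/1.0"})
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400
        except (urllib.error.URLError, ValueError):
            return False

    urls = ["https://example.com/some-toilet-model"]  # AI-supplied links go here
    for url in urls:
        print(url, "OK" if link_is_alive(url) else "DEAD")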