It should be: The USA is now just like the MPAA, RIAA, and DMCA, controlled by corporations.
Just remember the triumphal success of Elizabeth Holmes of Theranos or Sam Bankman-Fried at FTX. You could be as rich and successful as those founders are now if you invest soon enough, or stake your business on AI/LLM systems that guarantee success. The only way to lose is to not do AI/LLM. Trust me.
Thai authorities have cut off internet, power and fuel along the Thailand-Myanmar border in a bid to thwart the operation of scam centres there. The move on February 5, 2025, came amid growing pressure for Thailand to do more to help clamp down on the illegal compounds that have ensnared vast numbers of people from many jurisdictions. The Thai Provincial Electricity Authority disconnected power supplies at five points along the border, including Myawaddy in Myanmar's Kayin state.
The scam centers have massive power backup and may also be able to tap power from Laos. Hundreds of thousands of people have been trafficked to work at the centers.
So imagine some moron who thinks that AI is "better than 99% of software developers" makes an AI-generated product that harms a large number of people's property or persons. That whole courts/laws/trial thing exists to punish that bad actor, compensate the victims and hopefully keep these kinds of events from recurring. It can get real expensive, and sometimes the perpetrator even goes to jail.
Saying human beings are worse is a non-starter in this environment. With actual programmers there are processes, procedures and some kind of document trail that can show due diligence. With code magically appearing with a wave of an AI/LLM magic wand, how can it be argued the system was correctly built? A claim of AI infallibility will go nowhere. No amount of planning and testing can replace the fact that humans operate in known ways and can be shown to be trustworthy or to have failed.
Naming an AI/LLM after a revered computer pioneer doesn't make it work better. Making claims of near perfection about software where the internal workings are completely opaque is utter stupidity.
The requirement is moot. No one in the current White House is ethical, honest or moral. Everyone who supports them is just as dirty and personally corrupt as they are. The swine are all in the same hog wallow.
By the way, don't forget to "grab em by the pussy".
These days when someone designs a circuit or builds a structure there are practices and software to ensure with reasonable confidence that the result will be acceptable. There is no equivalent ecosystem for AI/LLM software. It will fail in unpredictable ways at unpredictable times and when it breaks the remedy is to literally rebuild it from scratch after tweaking the training sets. That is not in any way engineering or scientific practice. It's more akin to alchemy or sacrificing chickens.
There is no objectivity among anyone involved in the AI/LLM mania. Not the technical types, the managers, the investors, or even external authorities such as the Securities and Exchange Commission. Everyone expects to become rich, influential, and important because they are the vanguard of the brave new future. There is very clearly no doubt allowed. If you are not a true believer you are not allowed to participate.
Here's a real-world example from academia, as of Feb 11th: "Sandwich-making robot is just one of the many Arizona State University projects advancing AI." The proof-of-concept use case is "a sandwich-making robot (to) help seniors age in their own homes."
The exciting next milestone: "It can do five different kinds of sandwiches right now," Professor Nakul Gopalan said. "We want to scale it up to do 50 cold meals."
In what universe is that a viable business idea? Of course the meat-heads running Subway are orgasmic at the thought of selling their crap, which cannot be identified by DNA, without minimum-wage workers. That will work about as reliably as McDonald's ice cream machines.
The terrifying truth is it's being used as an infallible magic wand by both the technically ignorant and those who should know better. They've swallowed the bait hook, line and sinker. Meanwhile bad actors are running amok, polluting both the internet and the real world with misinformation, lies and fraud at terabyte-per-second rates.
It's not in any way equivalent to human fallibility. The scale is radically different and it's being used thoughtlessly. AI/LLM technology is a cross between grey goo and the zombie apocalypse.
Since there seems to be no way to meaningfully evaluate how things go wrong, except for a human looking at the results, pointing out that it can be deliberately fooled is kind of a waste of time.
But he's a white racist kid from conservative Lancaster north of Los Angeles so that wasn't going to happen.
Unless it's a scam and you know it doesn't work like you claim it does, and the company would crash and burn if it depended on its own over-hyped bullshit product.
Just asking.
So complete censorship of online access is a reasonable option, while reducing out-of-control gun violence is an assault (pun intended) on our sacred rights.