Comment Re: What a win for xAI (Score 1) 51
I was actually in college in the 1990s, but yes, a middle schooler today with Python on a Raspberry Pi and a pretty simple GPS module could do this.
I didn't say it wasn't abhorrent or alarming. I'm making the point that this task of "defend this three-dimensional coordinate box" doesn't require AI.
Yes, it did. The beacon signals weren't that good back then, and neither were the sensors. I had the same problem in the fake robot battles I was involved in.
The answer turned out to come not from the defense industry, but from Genie Garage Door Openers.
The robot doesn't care. The robot's job isn't foreign policy. The robot's job is "here's a box defined by this coordinate cloud, defend it."
Like I said, I programmed it for a fighting robot back in the 1990s. It ain't that complex, and with today's drone factory ships, the Navy can now output this level of AI in killbots at a rate of 10,000 a day.
This will not end well.
Why is this a problem?
I do not want my software censoring anything I make.
(+1, Truth)
Of all the major streaming platforms, Paramount+ stands alone in how often it just doesn't work. It doesn't work reliably on state-of-the-art streaming boxes. It doesn't work reliably on desktop PCs. In fact, of all the devices we have in our household, it works reliably on a total of zero of them.
We have several of the other commercial streaming platforms, plus the apps or online services for several of our main national TV channels, and almost all of them work almost all of the time. It's bizarre how bad Paramount+ manages to be compared to literally everyone else. It must be hurting their bottom line to some degree, or surely will soon if they don't get a handle on it, because why pay for something you literally can't watch?
Kill decisions are simple in comparison: Stay within your predefined geofence, kill anything that moves that isn't transmitting a Friend beacon. We don't need AI for that; I coded a form of it in both Basic and Forth back in the 1990s.
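Just to make the point concrete, here's roughly what that rule looks like in a few lines of Python. This is only a sketch: the box corners and the read_gps_fix, detect_contacts, friend_beacon_detected, and engage helpers are made-up stand-ins for whatever hardware you bolt on, not anything from the actual system.

    # Sketch of the "stay in the box, engage anything moving without a Friend beacon" rule.
    # All names and coordinates below are hypothetical placeholders.

    GEOFENCE = {"lat_min": 36.80, "lat_max": 36.85,    # hypothetical coordinate box
                "lon_min": -76.30, "lon_max": -76.25,
                "alt_min": 0.0, "alt_max": 500.0}

    def inside_box(lat, lon, alt, box):
        # True if the position falls inside the three-dimensional geofence.
        return (box["lat_min"] <= lat <= box["lat_max"]
                and box["lon_min"] <= lon <= box["lon_max"]
                and box["alt_min"] <= alt <= box["alt_max"])

    def patrol_step(read_gps_fix, detect_contacts, friend_beacon_detected,
                    engage, return_to_box):
        lat, lon, alt = read_gps_fix()              # own position from the GPS module
        if not inside_box(lat, lon, alt, GEOFENCE):
            return_to_box()                         # rule 1: stay inside the geofence
            return
        for contact in detect_contacts():           # rule 2: anything moving in the box...
            if contact.is_moving and not friend_beacon_detected(contact):
                engage(contact)                     # ...without a Friend beacon gets engaged

That's the whole decision loop; everything hard about it is in the sensors and the beacon, not in any "intelligence."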
And if they don't, some other startup will.
Neoliberalism and liberalism are two totally different things.
AI will just shine a light on the class of workers whose make-work jobs exist solely to push them to vote harder.
When we can let markets replace them, it'll be a tragedy for a few generations and then it will be forgotten.
It is the Microsoft way.
There's a difference between not using AI tools at all and not using code generated by AIs.
The latter involves a lot of risks that aren't well understood yet -- some technical, some legal, some ethical -- and it's entirely possible that some of those risks are going to blow up in the face of the gung-ho adopters with existential consequences for their businesses.
I mostly work with clients in industries where quality matters. Think engineering applications where equipment going wrong destroys things or kills people and where security vulnerabilities are a proxy for equipment going wrong.
I know plenty of smart, capable people working in this part of the industry who are totally fine with blanket banning the use of AI-generated code on these jobs. A lot of that code simply isn't up to the required standards anyway, but even if it does produce something you could actually use, there are still all the same costs for review and certification that any other code incurs. That includes the need for at least one human reviewer to work out why the AI wrote what it did, which may or may not have any better answer than "statistically, it seemed like a good idea at the time".
The claims also seem a bit sus. "Eighty percent of new developers on GitHub use Copilot within their first week." Is this the same statistic someone was debunking recently where anyone who had done something really basic (it might have been using the search facility?) was counted as "using Copilot"? A lot of organisations seem to be cautious about using code generated by AIs, or even to impose a blanket ban, so things must be very different in other parts of the industry if that 80% is also representative of professional developers using Copilot significantly for real work.
Where are the calculations that go with a calculated risk?