Comment Re:steam deck? (Score 2) 23
Give it time, he also doesn't watch TV, but only just recently stopped announcing it.
It's actually pretty understandable.
Despite the meme power of a broken login, the bug affects a fallback feature you might well go years without using.
It requires that you have PIN/Touch sign-in enabled; and if you've enabled that, it means that's how you normally log in.
And that works just fine. Nothing is broken there.
What is missing is a "password" icon in the 'fallback' options to "sign in a different way" (using a password instead of a PIN or fingerprint, for example).
So despite being on the login screen, it's not actually something you're going to interact with regularly, unless you've forgotten your PIN or something. And it's hardly something human beta testers are going to think to explicitly test for on every single build. And since the bug is a missing element as opposed to a visibly broken element, well, it's easy to fail to notice that something you almost never use isn't there.
Meanwhile, clicking where it's supposed to be still actually works, so it's entirely plausible that automated test scripts continue to pass: if they've been scripted to click at coordinate (X, Y), or to select the password button programmatically by an identifier, and then 'expect' something to happen in response, everything checks out, because the button is there and it works just fine; it's just missing its texture or something. This would slip past a lot of test frameworks: the button is "in the model", it's "active/enabled", "selectable", "clickable", it "fires a click event if you click", "whatever it is supposed to do happens", and it's probably even "visible" (though you can't see it). Most likely the icon or texture is missing, unassigned, or referencing a transparency by mistake, leaving a "transparent button". So unless you specifically add checks that screen-capture and compare a pixel block against a reference image bitmap or something, you aren't even going to catch it with an automated test.
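To make the point concrete, here's a minimal sketch (all names hypothetical, not Windows internals) of why every programmatic check a test framework typically runs can pass even though nothing gets drawn:

```python
# Hypothetical minimal UI model: the button exists, is enabled, "visible",
# and clickable -- the only defect is an unassigned icon, which no typical
# programmatic assertion ever looks at.

class Button:
    def __init__(self, identifier, icon=None):
        self.identifier = identifier
        self.icon = icon          # None -> nothing is actually drawn
        self.enabled = True
        self.visible = True       # "visible" in the model, not on screen
        self.clicked = False

    def click(self):
        if self.enabled:
            self.clicked = True   # the click handler still fires normally

password_button = Button("sign_in_password")   # icon never assigned

# A typical automated test: find by identifier, check state, click, expect
# a response. All of this passes with flying colors.
assert password_button.enabled
assert password_button.visible
password_button.click()
assert password_button.clicked       # "whatever it's supposed to do happens"

# The one symptom, which almost nobody asserts on:
assert password_button.icon is None  # transparent button
```

Every assertion a coordinate- or identifier-driven script would make succeeds; only a check on the rendered pixels would fail.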
Tests like THAT do exist and can be written, but they're not usually very useful, and the cost to write and maintain such tests with reference images is huge: change an icon or font or background color and a zillion tests need to be updated. It's a difficult balancing act to decide what to test, even for a highly competent QA team.
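For illustration, a reference-image check is roughly this (a hedged sketch: the bitmaps here are plain lists of RGB tuples, where a real harness would use a screenshot API, and the tolerance value is arbitrary):

```python
# Sketch of a pixel-block comparison test: capture a region of the screen
# and compare it, channel by channel, against a stored reference bitmap.

def region_matches(capture, reference, tolerance=8):
    """True if every pixel is within `tolerance` per RGB channel."""
    for cap_row, ref_row in zip(capture, reference):
        for (cr, cg, cb), (rr, rg, rb) in zip(cap_row, ref_row):
            if max(abs(cr - rr), abs(cg - rg), abs(cb - rb)) > tolerance:
                return False
    return True

# What the rendered icon is supposed to look like (reference image) ...
reference = [[(200, 200, 200)] * 4 for _ in range(4)]
# ... versus what a transparent button actually renders (the background).
transparent_capture = [[(0, 0, 0)] * 4 for _ in range(4)]

assert not region_matches(transparent_capture, reference)  # would catch the bug
assert region_matches(reference, reference)                # sanity check
```

This is exactly the kind of test that catches the missing icon but breaks the moment anyone restyles the login screen, which is the maintenance cost in question.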
It's possible it's just outright incompetence too... but in this case, for this bug... it's pretty understandable.
"even one time"
Unless you never use a password, in which case, you log in via all the other available options BUT password. You don't notice it missing. Passwords are so 1980s, get with the program.
I don't use biometrics because
Dude, learn about punctuation if you want to be understood.
That wasn't *all* I said, but it is apparently as far as you read. But let's stay there for now. You apparently disagree with this, which means that you think that LLMs are the only kind of AI that there is, and that language models can be trained to do things like design rocket engines.
This isn't true. Transformer-based language models can be trained for specialized tasks having nothing to do with chatbots.
That's what I just said.
Here's where the summary goes wrong:
Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs.
Artificial Intelligence is in fact many kinds of technologies. People conflate LLMs with the whole field because it's the first kind of AI that an average person with no technical knowledge could use, after a fashion.
But nobody is going to design a new rocket engine in ChatGPT. They're going to use some other kind of AI that works on problems and processes that the average person can't even conceive of -- like design optimization where there are potentially hundreds of parameters to tweak. Some of the underlying technology may have similarities -- like "neural nets", which are just collections of mathematical matrices that encode likelihoods underneath, not realistic models of biological neural systems. It shouldn't be surprising that a collection of matrices containing parameters describing weighted relations between features should have a wide variety of applications. That's just math; it's just sexier to call it "AI".
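The "just math" point can be shown in a few lines (a toy sketch with made-up numbers, not any real model): a single layer of a "neural net" is nothing more than a weight matrix applied to a feature vector.

```python
# One "neural net" layer, stripped of the branding: a matrix of learned
# weights times a vector of input features, giving weighted sums.

def layer(weights, features):
    """One output per row of the weight matrix: sum of weight * feature."""
    return [sum(w * x for w, x in zip(row, features)) for row in weights]

weights = [[0.5, -1.0, 2.0],   # 2 outputs computed from 3 input features;
           [1.0,  0.0, 0.5]]   # values are arbitrary for illustration
features = [1.0, 2.0, 3.0]

print(layer(weights, features))  # [4.5, 2.5]
```

Whether those weights encode pixel edges, token likelihoods, or rocket-nozzle parameters is entirely up to what the matrix was trained on, which is why the same machinery shows up in so many applications.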
Changing economic systems does not make it less totalitarian.
Its apologists should be purged from the West without apology.
Like I've said before, this is just yet another financial system being created to have a minority of people manage the majority of the wealth, to their own advantage. This is just a new competing system with less regulation created by the crypto bros to wrestle the current system away from the Wall St. bros.
I think this view gives the crypto bros too much credit. They might now be thinking about taking advantage of the opportunity to wrestle the system away from the Wall Street bros, but there was no such plan.
too much hassle. build a shadow fleet of well-armed fast interceptors with untraceable munitions and sink the saboteurs.
To intercept them you still have to identify them, which you can't do until after they perform the sabotage. Given that, what's the benefit in sinking them rather than seizing them? Sinking them gains you nothing, seizing them gains you the sabotage vessel. It probably won't be worth much, but more than nothing. I guess sinking them saves the cost of imprisoning the crew, but I'd rather imprison them for a few years than murder them.
FORTRAN is the language of Powerful Computers. -- Steven Feiner