Comment Re:"inactive during gameplay" (Score 2) 81
I wager it also requires an endorsement certificate associated with a 'trusted' vendor, and thus swtpm would probably fail to pass their criteria...
But I don't know for sure.
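To make that concrete, here's a rough, purely illustrative sketch of what "check the endorsement certificate against trusted vendors" could look like, using the Python 'cryptography' package. Nothing here comes from any actual anti-cheat: the vendor allowlist and file path are made up, and a real verifier would validate the certificate chain against vendor CA certificates rather than just string-matching the issuer.

# Hypothetical sketch: does this TPM endorsement key (EK) certificate claim
# to come from a known hardware TPM vendor? Purely illustrative; a real
# check would verify the full signature chain against vendor CA certs.
from cryptography import x509
from cryptography.x509.oid import NameOID

# Made-up allowlist of issuer organization names for hardware TPM vendors.
TRUSTED_TPM_VENDORS = {"Infineon Technologies AG", "STMicroelectronics", "Nuvoton Technology"}

def ek_cert_from_trusted_vendor(pem_path: str) -> bool:
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    # Compare the issuer's Organization (O=) attributes to the allowlist.
    orgs = {attr.value for attr in cert.issuer.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)}
    return bool(orgs & TRUSTED_TPM_VENDORS)

print(ek_cert_from_trusted_vendor("ek_cert.pem"))  # path is hypothetical

A swtpm EK certificate, if one is generated at all, is typically signed by a locally created CA, so a check along these lines would reject it.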
The issue is the expectations.
People expect these things to be thinking entities, providing an independent perspective on whatever you submit to them. A great deal of care must be taken to make it clear and culturally understood that these things are more like very, very fancy parrots than an independent human. That's an uphill battle, because we want to anthropomorphize *anything* at the slightest hint, and a puree of training material blended with your prompt and whatever else gets stuffed into the context (RAG and the like), delivered in rather convincing natural language, is very likely to make people think it's more than it is.
The Scrabble analogy is not that great, as anyone can plainly see those are just letters; but to see how an LLM resembles that, you have to go beyond how it *looks* and dig into the nuance of how it actually works, and even then some people have fallen into the trap of "well maybe humanity is nothing more than this anyway".
If applications were automation friendly, then sure.
Problem is, the paradigm of application development has been monolithic applications, which makes it hard to handle a workload 'piecewise'.
So the industry has been coming to the realization that agentic LLM use is damn near impossible with these sorts of applications, and has pushed the 'MCP' concept, which, if you get into it, is roughly like defining CLI interfaces for your application so a text-oriented orchestrator can reach into it to do some work and potentially mix your functionality with other applications. Whatever value the 'agentic' part might offer, the ecosystem has to shift to accommodate that sort of interface, and thus the 'OS' becomes the place to go, since the platform dictates these sorts of design guidelines.
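To make that "CLI-ish interface" comparison concrete, here's a minimal sketch of what exposing one piece of an application to an orchestrator looks like, assuming the MCP Python SDK's FastMCP helper (the 'mcp' package); the invoice-lookup tool itself is entirely made up.

# Minimal, illustrative MCP server: exposes one made-up "tool" an LLM
# orchestrator can call, much like a CLI subcommand with typed arguments.
# Assumes the `mcp` Python SDK; the invoice data is fake.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("invoice-app")  # hypothetical application name

@mcp.tool()
def get_invoice_total(invoice_id: str) -> float:
    """Return the total for an invoice (stubbed with fake data)."""
    fake_db = {"INV-001": 129.95, "INV-002": 42.00}
    return fake_db.get(invoice_id, 0.0)

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so an orchestrator can attach

The point being: the orchestrator only ever sees small, typed, documented entry points like this, which is exactly the kind of piecewise surface a monolithic GUI application never offered.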
Though I do wonder what MS does as a UI paradigm; they already tend to be pretty bad about UI design, and this could be a pretty severe worsening beyond that.
AI doesn't listen, though; it regurgitates.
There's as much engagement as writing in a journal no one will ever read. A conversation with yourself is every bit as useful in this context as throwing your text at an LLM.
A conversation seeks another active perspective; an LLM has no perspective, only the ability to dispense a puree of content launching off of whatever prompt fed it. There are applications for this, but psychotherapy is absolutely not one of them, and substituting an echo chamber for actual human engagement is a recipe for very bad stuff. Online communities are bad enough at unhealthy echo chambers as it is; it's a terrible idea to completely close the loop so a person just hears what they said repeated back to them in a different way.
To view an LLM as a valuable outlet for mental health is begging for trouble. It doesn't have perspective, but it *looks* like it gives perspective. You have only what you brought with you, but you start thinking that someone is agreeing with you no matter what you are saying.
Just like a random dude on the street cannot just say "I can provide psychotherapy services", it absolutely makes sense to apply those sorts of guardrails to AI as well; currently AI is not even vaguely geared toward psychotherapy, and it resembles it just closely enough to be pretty dangerous.
But it only gives the 'illusion' of a response, with an emphasis on reinforcing *whatever* the prompt directs. This can be catastrophic for mental health scenarios, where the provider needs to challenge the patient as appropriate.
Sure, have your chats, but no one should ever call it a substitute for therapy from a provider. Nor should a provider just foist a customer off onto an AI chat to rack up more billable hours; that would be irresponsible behavior.
I'm not saying the AR glasses will never exist, I'm saying that's a separate point from AI.
Did you read these comments, or did you have the phone dictate them all to you out loud? Why not have it dictate?
The point is that the "AI replaces phone" is a pretty silly take, because it would need something like the phone to operate, and whatever replaces phones would be able to deal with non AI usage in just a compelling way as AI usage.
The only way AI replaces phones is if it eliminates the demand for visual feedback completely. For "headless" usage, a phone can do that from a pocket just as well as some "only AI" device. The couple of attempts at such a device were utter failures because they were a strict subset of what a phone could do.
So of course AI won't replace handheld computers; some wearable device(s) will probably do it one day, but not because of AI.
Don't even have to argue about the quality of AI, just recognize that people will want to use a screen to interact with AI. It *might* displace a lot of 'virtual keyboard' interaction or complex UI interaction with natural language on the input side, but people will want the screen output even if AI is driving the visuals.
Now I'm imagining trying to play mobile games by talking back and forth audibly with some DM...
Except they were kind of right about laptops: most people have a full-fledged laptop for 'big interaction', because the phone is fantastic and all, but when the interaction is too complicated, it's a nightmare.
In terms of 'AI' somehow displacing phones, it would only do so with some as-yet unseen AR glasses that could do the job without being hundreds of grams of gadgetry on your face, combined with maybe a smart ring to provide some sort of tactile feedback to 'virtual interfaces'.
This is all orthogonal to AI; AI isn't going to make a screen less desirable, whether on a phone or in glasses. If anything, AI makes some things demand screens even more. People don't want to *listen* to voicemail, they want to read a transcription because it is so much faster. Trying to 'skim' is only possible visually. People take voice feedback as a consolation prize, if they are driving or cannot actually look, or *maybe* for an audiobook, to enjoy the speaker's voice and the casual pace of a recreational story, but usually people want text to read for speed's sake. And that's ignoring visuals, which obviously demand screens.
I think it's an excellent case study in a company thinking of what sounds good for *them* rather than for the customer and the sort of failure that can happen after smelling your own farts that much.
Yeah, Windows Server Core was ridiculous. They championed how they had a GUI-free experience, and then you boot it up and... GUI.
It was such a pointless exercise, and it missed the point of why so many Linux systems don't run a GUI. They thought server admins just didn't want a start menu/taskbar. But it still needed to be GUI-capable, because applications still needed a GUI to do some things. Linux servers not running a GUI is mostly because the ecosystem doesn't really need it, and that sort of ecosystem lends itself to a certain orchestration style. Microsoft failed to make that orchestration happen, and just removed the taskbar/start menu as more of a token gesture. They have *an* orchestration strategy, but it's just very different, with no consistency between first party and third party, or hell, not even much consistency among Microsoft's own first party offerings.
Ironically, ChromeOS is succeeding in select niches precisely because it is built around that "only web apps" use case: an utterly disposable client device, because all applications and data are internet hosted. Windows 11 SE fails in those niches because it leans too far into local apps, and the device itself ends up mattering a bit more.
Of course, ChromeOS is a platform that institutions like schools love inflicting on people, not really one people choose for themselves, so there's not a lot of growth beyond that. The result is people "growing out of ChromeOS" as they get out of school. Google hopes to change this by just tucking it all into Android, so it has at least some platform with residual relevance to a "grown up" computing experience.
But Windows 11 SE has always been in a super weird, awkward in-between: more 'capable' than ChromeOS in common usage, yet you could just get "real Windows" and run anything you like. The biggest problem is Microsoft didn't understand that lock-in to the Microsoft Store is not what would let them compete with ChromeOS; they just convinced themselves of it because that was the kind of customer that would have been most profitable to them, if such customers existed.
I meant to say that most of those 'safe' jobs specifically are at high risk of 'AI' replacement, even if not generative.
Now I know it isn't *generative* AI specifically, but most of those jobs are at pretty high risk from some related form of 'AI'. I was in a store and the floor polisher was operating autonomously among the shoppers.
On the impacted side, the passenger attendant one strikes me as odd. The airlines don't actually care that much about providing the service, but since they are mandated by law to have that many staff on hand for potential emergency situations, they put them to work doing attendant duties for the 99% of the time when their legal obligation doesn't come into play.
Sure, much of the list makes sense but there are certainly some oddities.