Comment Why is casting a feature of the app not the OS? (Score 1) 57
Shouldn't the OS be able to cast the screen, rather than this being a function of the application?
I propose a law requiring companies to continue to provide old versions of software. They can remove a feature from the new version, but I can still get the old one. In the past, if Microsoft removed a feature from Word 2005, for example, one could refuse to upgrade; I could save the installer. Yes, eventually it won't work any longer, and I am not saying they must support every version in perpetuity. But if Netflix removes a feature, I can't download the old version, so they should be barred from putting barriers in place that prevent access to or use of it.
I find it interesting that anyone would offload error detection and correction to application software. Not only are you needlessly increasing local complexity by doing that; any machinery you implement to accomplish this in software is itself subject to failure from the same sources of arbitrary corruption.
Why would someone do this instead of using hardware with some sort of RAS support: memory mirroring, patrol scrubbing, multi-bit error correction, etc.? If you are extra paranoid, just add more memory and/or CPU cycles to meet your desired level of reliability.
It makes sense for high-reliability systems to guard against hardware or software failures by having multiple discrete systems perform redundant operations and vote, yet here the cause is the legendary cosmic ray. I don't understand why anyone would design a system like this.
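The redundancy-and-voting scheme mentioned above can be sketched in a few lines. This is a minimal illustration, not code from the article: the function name and sample values are hypothetical, and real triple-modular-redundancy systems do the vote in hardware or lockstep firmware, not application Python.

```python
# Minimal sketch of majority voting across redundant replicas: several
# independent computations of the same value, with the vote masking a
# single corrupted result (e.g. a cosmic-ray bit flip in one replica).
from collections import Counter

def majority_vote(results):
    """Return the value reported by a strict majority of replicas."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: replicas disagree beyond a single fault")
    return value

# One replica returns a corrupted value; the vote masks it.
readings = [42, 42, 46]  # 46 is 42 with bit 2 flipped
assert majority_vote(readings) == 42
```

A three-way vote masks any single-replica fault; detecting two simultaneous faults requires more replicas, which is the memory/CPU-for-reliability trade-off the parent comment alludes to.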
Dangit, stupid HTML filter! That was supposed to be "Oh no, open source doesn't support [INSERT ENSHITTIFICATION HERE] like customers want!"
I think it was intended to be drippingly sarcastic. Sloppy is pretending that the failure to screen cast is a feature that customers wanted, then constructing a sentence like "Oh no, open source doesn't support like customers want!"
Jesus wept, can you imagine the unholy abomination that a Microsoft/Oracle hybrid would be? It makes my brain hurt to imagine such a hammerfuck of failure and technological despair.
Well said! I know you didn't mean to be humorous, but I'm still laughing anyway. And now I'm wracking my brain trying to figure out how to casually work "hammerfuck of failure and technological despair" into a conversation.
Do you or your associates have to work with Salesforce very often? That may offer you an opportunity or three.
It's a recall when a software update is important enough to ground planes until it's been done...
Here in Australia if your phone is stolen you can have the IMEI blacklisted and it will no longer be able to connect to any network. And both iOS and Android already have tracking features that let you find the device remotely and even lock or wipe it. Not to mention the cloud lock many phones have to prevent factory resets.
The operative part of your 2nd sentence is the 'can be'.
It is premature to handwave this, particularly when so much of the market is made up of grifters who have already made impossible claims.
No, the things I was referring to, using LLMs to predict protein shapes and the evolution of plasmas, have already been demonstrated.
This isn't true. Transformer-based language models can be trained for specialized tasks that have nothing to do with chatbots.
That's what I just said.
No, what you said was the following "Artificial Intelligence is in fact many kinds of technologies. People conflate LLMs with the whole thing because its the first kind of AI that an average person with no technical knowledge could use after a fashion." Your statement is incorrect.
You cannot bypass solving the Navier-Stokes equations with transformers. You will, of course, get some predicted flows with a black box model, and you can, if you choose, claim that prediction accuracy is close enough for 85% of the random samples from your test data, but that will not get you new propulsion physics.
Who is talking about "new propulsion physics" and what does this even mean? What I mentioned has already been demonstrated using LLMs.
I've never seen polling to this effect. The best I've been able to find was a Pew poll from early last year, which found 58% saying government regulation of AI does not go far enough. The lack of useful specificity about what they mean by "common sense" is concerning. People should know what they are supporting beyond nebulous slogans, especially in this space.
If AI ever really starts getting real, it won't just be AI companies pushing for regulation to protect their own market share; it will be industry generally, protecting corporate interests from erosion by those who might leverage the technology to disrupt and displace them.
The billion-dollar global scaremongering AI lobbying blitz, after all, was pushing FOR regulation, not against it. What the AI companies are angry about is not a lack of regulation but regulation that cuts against their interests, and a patchwork in which every state has its own disjointed set of rules.
The most immediate, salient public threat from AI is those with power viewing AI as a source of more power. The insane money pouring into AI isn't just misplaced investment; it is driven by the fever dreams of the powerful. None of what they are dreaming about has a nexus to the public good.
This is one case where details especially matter given both sides want regulation and there is no such thing as "common sense" when it comes to AI.
It doesn't really matter whether this is predictable. We are essentially letting these companies experiment on human beings with no guard rails at all. They need to be forced to prove it's safe and then get informed consent from the people they are experimenting on. That really means informed, not a terms-of-service link on their web sites.

We are letting sociopaths run amok. The only real measure of success they recognize is the bottom line of their profit and loss statement. Dead and damaged people are irrelevant unless they have a good lawyer; then their lawsuits are just another business cost that gets built into the pricing model.
As a society, we decided a long time ago that profit is the only metric that matters. Sociopaths excel at generating profit because they see no moral or ethical barriers standing in the way of generating it. We put the sociopaths in charge because they are the best way to keep profit accelerating.
Do we really have to pretend to be outraged by seeing the fruits of our society's direction grow in exactly the way you would expect them to?
Oracle was doomed until they bet everything on AI. Knowing Ellison, there’s a chance he might pull it off. Worst-case scenario, Microsoft will acquire them.
They present themselves as "scientists", as if they were some transcendent, pure, elite personalities. In reality 90%+ of them are crooks.
I didn't used to think so, but the longer I live, the more convinced I get that 90%+ of all of humanity are crooks. Why should scientists be immune from that trend?
Quantum Mechanics is God's version of "Trust me."