
Submission + - Google clamps down on Android developers with mandatory verification (nerds.xyz)

BrianFagioli writes: Google is rolling out mandatory developer verification for Android apps, and while it says the move is about security, it also means developers will now have to verify their identity and register apps with Google before they can be easily installed on devices. Google claims sideloaded apps contain far more malware than apps from the Play Store, but critics might argue this is another step toward tighter control over the Android ecosystem. Power users can still sideload using ADB or a new “advanced flow,” but Google is clearly adding friction to anything outside its system. Is this a reasonable security measure, or is Android slowly becoming less open than it used to be?

Comment Useless warnings are useless. (Score 1) 61

The problem you run into, though, is what I call the "California Cancer Warning Problem."
Basically, people can only pay attention to so many warnings. The more often people get false or trivial warnings, warnings they have to click through routinely just to get things done, the more likely they are to just plain ignore all warnings.

While hackers might be able to figure out a way to do something malicious without triggering the warning, the warnings back then were worse than useless: not only did they trigger for just about every document, but by default users could not assess a document's safety without enabling the scripting. I.e., I couldn't open the document and look at the scripts to assess them (some of them were only a dozen lines or so) without first enabling them.

Saying the warnings were necessary also ignores that there have been exploits that didn't even require opening a document to cause infection. Preview was enough.

Basically, if the hackers figured out something clever, you just add that to the check. It would still be a better situation than what we had back then.

Comment Re: at least it hasn't exploded (yet) (Score 1) 122

Do you know how to do math? Because that's how. It's not a difficult problem space to quantify. You can easily measure how much water one machine uses versus the other, how much detergent each needs per load, and what those inputs cost. You also know how many loads of laundry you do and how often. Wear and tear on clothes would take more effort to quantify, but it's likely unnecessary for showing a net benefit.
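The comparison above really is just arithmetic. A minimal sketch, where every figure (water and detergent use, unit prices, loads per year) is an illustrative assumption rather than measured data:

```python
def annual_cost(water_l_per_load, water_cost_per_l,
                detergent_ml_per_load, detergent_cost_per_ml,
                loads_per_year):
    """Water + detergent cost for one year of laundry."""
    per_load = (water_l_per_load * water_cost_per_l
                + detergent_ml_per_load * detergent_cost_per_ml)
    return per_load * loads_per_year

# Hypothetical numbers: a top-loader vs. a high-efficiency front-loader.
top_loader = annual_cost(150, 0.002, 100, 0.01, 300)  # 150 L, 100 ml/load
front_loader = annual_cost(60, 0.002, 50, 0.01, 300)  # 60 L, 50 ml/load

print(f"top loader:   ${top_loader:.2f}/yr")
print(f"front loader: ${front_loader:.2f}/yr")
print(f"savings:      ${top_loader - front_loader:.2f}/yr")
```

Plug in your own utility rates and load counts and the net difference falls out directly.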

Could you really not imagine this solution as soon as you read the problem?

Comment Laws for slavery (Score 5, Insightful) 193

I’d argue that slavery wasn’t “legal because nobody banned it.” It was legal because there were explicit laws that created, defined, and enforced the institution.

There were statutes specifying who could be held as slaves, rules that the child of an enslaved woman was automatically a slave, procedures for manumission, regulations on how slaves could be bought, sold, punished, or inherited, and laws requiring that escaped slaves be returned. That’s not a legal vacuum, that’s a full legal framework.

It’s similar to how segregation laws later forced discrimination on people who might not have engaged in it otherwise. The state wasn’t passively allowing something; it was actively mandating and structuring it.

Slavery existed because the law built and maintained it, not because the law failed to forbid it.

Comment Re:Please don't (Score 1) 61

I remember those days, when it would warn if there was any scripting at all rather than first looking for dangerous commands.
Just as a thought: don't bother warning if the script cannot reach outside the document itself. Having the warning trigger instead on functions that access other files or documents, email functionality, and the like would have been more effective.
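That idea can be sketched as a simple capability check. This is a toy heuristic; the call names below are illustrative stand-ins for the macro functions an office suite might expose, not any real product's API:

```python
import re

# Calls that can reach outside the document (files, shell, email).
# These names are assumptions for illustration only.
DANGEROUS = re.compile(
    r"\b(Open|Kill|Shell|CreateObject|SendMail|GetObject)\b",
    re.IGNORECASE)

def needs_warning(script: str) -> bool:
    """Warn only if the script appears able to reach outside the document."""
    return bool(DANGEROUS.search(script))

benign = 'Sub Hello()\n  MsgBox "hi"\nEnd Sub'
risky = 'Sub Evil()\n  Shell "cmd /c del *.*"\nEnd Sub'

print(needs_warning(benign))  # False: purely in-document, no prompt needed
print(needs_warning(risky))   # True: shells out, so warn the user
```

A real implementation would parse the script rather than pattern-match, but even this crude filter would cut the flood of prompts for harmless in-document macros.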

Comment Not an increase (Score 1) 72

LLMs have never been rules-based "agents," and they never will be. They cannot internalize arbitrary guidelines and abide by them unerringly, nor can they make qualitative decisions about which rule(s) to follow in the face of conflict. The nature of attention windows means that models are actively ignoring context, including "rules", which is why they can't follow them, and conflict resolution requires intelligence, which they do not possess, and which even intelligent beings frequently fail to do effectively. Social "error correction" tools for rule-breaking include learning from mistakes, which agents cannot do, and individualized ostracization/segregation (firing, jail, etc.), which is also not something we can do with LLMs.

So the only way to achieve rule-following behavior is to deterministically enforce limits on what LLMs can do, akin to a firewall. This is not exactly straightforward either, especially if you don't have fine-grained enough controls in the first place. For example, you could deterministically remove an agent's capability to delete emails, but you couldn't easily scope that restriction to only "work emails." The emails would need to be categorized appropriately, external to the agent, and the agent's control surface would need to thoroughly block deleting any email tagged as "work", block changing or removing the "work" tag, ensure that the "work" deny rule takes priority over any other "allow" rules, AND prevent the agent from changing the rules by any means.
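The firewall-style enforcement described above can be sketched in a few lines. This is a minimal illustration, with made-up tags, actions, and rule logic; the key property is that the deny check lives outside the agent and always runs first:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Email:
    subject: str
    tags: frozenset

PROTECTED_TAG = "work"

def authorize(action: str, email: Email) -> bool:
    """Agent-external policy check: deny rules always win over allows."""
    # Deny: never delete or retag anything marked "work".
    if PROTECTED_TAG in email.tags and action in {"delete", "untag", "retag"}:
        return False
    # Default-allow everything else, for the sake of this sketch.
    return True

work = Email("Q3 report", frozenset({"work"}))
spam = Email("You won!", frozenset({"junk"}))

print(authorize("delete", work))  # False: deny rule takes priority
print(authorize("delete", spam))  # True: unprotected email
```

Because `authorize` runs outside the agent's control surface, the agent cannot talk its way past the rule or rewrite it, which is exactly the property the prose above demands.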

Essentially, this is an entirely new threat model, one where neither agentic privilege nor agentic trust cleanly maps to user privilege or user trust. At the same time, the more time spent fine-tuning rules and controls, the less useful agentic automation becomes. At some point you're doing at least as much work as the agent, if not more, and the whole point of "individualized" agentic behavior means that any given set of fine-tuned rules is not broadly applicable. On top of that, the end result of agentic behavior might even be worse than human performance, which means more work for worse results.
