Comment How is the lack of govt information relevant? (Score 3, Insightful) 75

Assuming it's remotely true (and there's good reason to think it isn't), it still means the FBI director was negligent in his choice of personal email provider, that the email provider had incompetent security, and that the government's failure either to have an Internet Czar (the post exists) or to enforce high standards on Internet services is a threat to the security of the nation. We already know malware can cross airgaps through negligence; the DoD has been hit that way a few times. Through lax standards, the FBI director could have copied unknown quantities of malware onto government machines, any of which could have delivered classified information over the Internet (we know this because it has also happened to the DoD).

In short, the existence of the hack is a minor concern relative to every single implication that hack has.

Comment Re:Coming soon off the back of this (Score 1) 112

It doesn't have to be a credit card. A Class III personal digital certificate requires a verification firm to confirm a person's identity through multiple proofs. If an age verification service issued such a certificate but replaced the holder's real name with their chosen screen name, you would have a digital ID that proves your age and can optionally be used for encryption, ensuring your account is only reachable from devices you authorise.
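As a minimal sketch of the idea (the service name, claim fields, and shared key below are all hypothetical, and a real deployment would use public-key certificates rather than a shared secret), the verification service could sign an age claim bound to the screen name instead of the legal name:

```python
import hmac
import hashlib
import json

# Hypothetical sketch: an issuer signs an age claim tied to a screen
# name rather than a legal name. The key, field names, and token
# format are illustrative, not any real service's protocol.
SERVICE_KEY = b"verification-service-secret"  # held only by the issuer

def issue_attestation(screen_name: str, over_16: bool) -> dict:
    claim = json.dumps({"sub": screen_name, "over_16": over_16},
                       sort_keys=True).encode()
    sig = hmac.new(SERVICE_KEY, claim, hashlib.sha256).hexdigest()
    return {"claim": claim.decode(), "sig": sig}

def verify_attestation(token: dict) -> bool:
    expected = hmac.new(SERVICE_KEY, token["claim"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"])

token = issue_attestation("coolname42", over_16=True)
print(verify_attestation(token))  # True: signature matches the claim
```

With real Class III certificates, any relying site could verify the signature without sharing a secret with the issuer, which is what makes the scheme workable at scale.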

Comment Re:Dumb precedent. Addiction is on the user. (Score 3, Insightful) 112

And those come with warnings, legal penalties for vendors who sell to known addicts or children, legal and financial penalties for abusers, and so on. There are even cars fitted with their own breathalysers.

So, no, society has said that the responsibility is distributed. Which is correct.

Comment Re:Exploitation of children is inevitable??? (Score 1) 45

It is legitimate for any service that constitutes a "common carrier" to be free of consequences for what it carries. But Meta does not claim to be a "common carrier", and that changes the nature of the playing field substantially. As soon as a service can inspect and moderate messages, it is no longer eligible to claim that it is not responsible for what it carries.

Your counter-argument holds some merit, but runs into two problems.

First, society deems any service that monitors content to be liable for it. That may well be unreasonable at the volumes involved, but that's irrelevant: Meta chose to monitor, knowing that this made it liable in the eyes of society. There are, of course, good reasons to monitor - mostly that society is sick and twisted, and criminality is encouraged as a "good thing" and as "sticking it to the man". But Meta chose to have an obscenely large customer base (it didn't need to), Meta chose to monitor (it is quite capable of parking itself in a country where this isn't an obligation), and Meta chose to make the service addictive (which is a good way of attracting criminals, since addicts are easy prey).

Second, Meta has known there's been a problem for a very long time (depression and suicide among its human moderators have been serious problems for many years at this point). Meta elected to sweep the problem under the rug and create the illusion of doing something by using AI. If a service knows there's a problem but does nothing, and in particular a very cheap form of nothing, then one must consider the possibility that said service is not solving the problem because there's more money to be made by keeping the abusers than by removing them.

Can one block every criminal action? Probably not, which means that's the wrong problem to solve. Intelligent, rational people do not try to solve actually impossible problems. Rather, they change the problems into ones that are quite easy. This is very standard lateral thinking, and anyone over the age of 10 who has not been trained in lateral thinking should sue their school for incompetence.

Submission + - FCC Bans Nearly All Wireless Routers Sold in the U.S. (reason.com)

fjo3 writes: This week, the Federal Communications Commission (FCC) effectively banned the sale of nearly all wireless routers in the U.S., in yet another example of the government making Americans' consumer decisions for them.

Ninety-six percent of American adults use the internet, and 80 percent of them use wireless routers—devices that transmit a signal throughout your home via radio waves and allow you to get online without plugging into the wall.

In a Monday announcement, the FCC deemed "all consumer-grade routers produced in foreign countries" potentially unsafe. This followed a national security determination last week, in which members of executive branch agencies concluded that "routers produced in a foreign country, regardless of the nationality of the producer, pose an unacceptable risk to the national security of the United States and to the safety and security of U.S. persons."

Comment Re: YUP! (Score 1) 118

I'm in favor of fixing this properly before the politicians mandate something stupid.
My proposal is a sysctl value set by a PAM module (or by systemd, on systems infected with that). The browser then performs something like language negotiation, much like the HTTP Accept-Language header. Those values can be intercepted, checked, or forced in environments that have to provide web access to kids, such as schools. A website should be able to ask for something like Australia's under-16 status, and the browser can return AGE_AU_VIC_Under_16=True if and only if it is configured to do so. This allows, say, an online newspaper to let under-16s read the news but not the discussion forums. The proposal still needs work, but it lets parents set things as they wish while keeping the politicians out of it, yet still letting them claim they fixed it. In the past, local ISPs were required to give out software to lock down kids' computers, and the take-up was smaller than the number of people who supported the law.
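The server-side half of the proposal could look something like this sketch (the header name and its values are the proposal's own invention, not any existing standard, and the path layout is made up for illustration):

```python
# Hypothetical sketch of the proposal: the browser sends an age flag
# (modelled here as a request header) that was set system-wide via
# sysctl/PAM. Only restricted areas consult it; everything else stays
# open, so an under-16 reader still gets the news.
RESTRICTED_PATHS = ("/forums",)  # areas gated for under-16 visitors

def allowed(path: str, headers: dict) -> bool:
    under_16 = headers.get("AGE_AU_VIC_Under_16", "False") == "True"
    return not (under_16 and path.startswith(RESTRICTED_PATHS))

print(allowed("/news", {"AGE_AU_VIC_Under_16": "True"}))    # True
print(allowed("/forums", {"AGE_AU_VIC_Under_16": "True"}))  # False
```

Defaulting to "not under 16" when the header is absent keeps the open web working unchanged; enforcement of the flag itself happens on the managed device or network, not at the site.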

Submission + - Federal Cyber Experts Thought Microsoft's Cloud Was "a Pile of Shit." (propublica.org)

madbrain writes: Federal Cyber Experts Thought Microsoft’s Cloud Was “a Pile of Shit.” They approved it anyway.

To move federal agencies to the cloud, the government created a program known as FedRAMP, whose job was to ensure the security of new technology.

FedRAMP first raised questions about the security of Microsoft's Government Community Cloud High in 2020 and asked Microsoft to provide detailed diagrams explaining its encryption practices. But when the company produced what FedRAMP considered to be only partial information, in fits and starts, program officials did not reject Microsoft's application. Instead, they repeatedly pulled punches and allowed the review to drag out for the better part of five years. And because federal agencies were allowed to deploy the product during the review, GCC High spread across the government as well as the defense industry. By late 2024, FedRAMP reviewers concluded that they had little choice but to authorize the technology — not because their questions had been answered or their review was complete, but largely on the grounds that Microsoft's product was already being used across Washington.

Comment Re:Working with other people's code (Score 0) 150

Yes. So far, the LLM tools seem much more useful for general research purposes, analysing existing code, or producing example/prototype code to illustrate a specific point. I haven't found them very useful yet for much of my serious work writing production code. At best, they are hit and miss with the easy stuff, and by the time you've reviewed everything with sufficient care to have confidence in it, the potential productivity benefits have been reduced considerably. Meanwhile, even the current state-of-the-art models are worse than useless for the more research-level stuff we do. We try them out fairly regularly, but they make many bad assumptions and then completely fail to generate acceptable-quality code when told that those assumptions are wrong and that they really do need to produce a complete, robust solution to the original problem that is suitable for professional use.

Comment Re:We should be very very careful here. (Score 1) 110

First up, before anything else, I am extremely glad you got the hope and encouragement you needed. Grief is rough, especially when you're going through it alone.

You are correct, so I'm somewhat careful with the AI dance. I will rarely discuss inner feelings with it, because that pushes the risk higher, precisely because it is a mirror. Like the Mirror of Erised in the Harry Potter novels, it will show your innermost desires. If you'd rather a different analogy, it's an amplifier: talk with it for too long and the positive feedback loop does some interesting, not terribly printable things to your mind. And that's not always the greatest idea.

So I use it for wild speculation in science fiction/fantasy. Enough that it helps with the intellectual boredom, but not so much that I venture into believing it's real.
