
Comment It is going to happen so propose a useful solution (Score 1) 92

The laws in several countries are going to require it. My preferred way is for the OS to offer a flag of "This user is of legal age in this region based on information provided to the administrator of this computer." I'll leave it up to the people with compilers to comply or not with their local laws.

My proposal is to stuff the flag into a sysctl user.$UID.age variable, then let the browser send that info off to sites just as it does with language selection. That way a PAM module (or systemd) can set an over/under age-of-majority flag for the region, and the browser sends only a yes/no value. The PAM module or systemd can calculate that from a birthday or a +18 flag, so you may have to log in again to reset it, but the birthdate is never sent to the browser, let alone to the end web sites.
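A minimal sketch of the two halves of that idea, in Python. Everything here is illustrative: the sysctl variable, the header name, and the function names are hypothetical, not an existing API. The point is that the birthdate is reduced to a boolean at login time and only the boolean ever reaches the network.

```python
from datetime import date

# The "PAM module" step: compute a boolean at login from the birthdate,
# so the birthdate itself never leaves the machine. All names here are
# illustrative, not a real PAM or sysctl interface.
def over_age_flag(birthdate: date, age_of_majority: int, today: date) -> bool:
    """True if the user has reached the region's age of majority."""
    cutoff = birthdate.replace(year=birthdate.year + age_of_majority)
    return today >= cutoff

# The "browser" step: all a web site ever sees is a yes/no header,
# analogous to Accept-Language. The header name is hypothetical.
def age_header(flag: bool) -> tuple[str, str]:
    return ("X-Age-Of-Majority", "yes" if flag else "no")

flag = over_age_flag(date(2010, 6, 1), 18, today=date(2026, 1, 1))
print(age_header(flag))  # ('X-Age-Of-Majority', 'no')
```

The design choice being sketched: the sensitive datum (birthdate) stays with the OS; the browser forwards only a one-bit derived value, which is the least information a site needs.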

This gives schools a way to control content. It allows parents to control content. It allows home router vendors to claim to control content. It allows web sites to stop nagging users about being above 16, 18 or 21, depending on what they are pushing. The politicians will look at it and say the industry is working with them while patting themselves on the back.

The other solution is to let the politicians' owners come up with one, and that will be an expensive ID scheme that tracks everyone across the web with no way to opt out.

Comment How is the lack of govt information relevant? (Score 3, Insightful) 80

Assuming it's remotely true (and there's good reason for thinking it isn't), it still means the FBI director was negligent in his choice of personal email provider, that the email provider had incompetent security, and that the government's failure either to have an Internet Czar (the post exists) or to enforce high standards on Internet services is a threat to the security of the nation (we already know malware can cross airgaps through negligence; the DoD has been hit that way a few times). The FBI director could have copied unknown quantities of malware onto government machines through lax standards, any of which could have delivered classified information over the Internet (we know this because it has also happened to the DoD).

In short, the existence of the hack is a minor concern relative to every single implication that hack has.

Comment Re:Coming soon off the back of this (Score 1) 112

Doesn't have to be a credit card. A Class III user digital certificate requires a verification firm to be certain of a person's identity through multiple proofs. If an age verification service issued such a certificate, but replaced the name the certificate was issued to with the user's chosen screen name, you would have a digital ID that proves your age and can optionally be used for encryption, ensuring your account is reachable only from devices you authorise.
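The shape of such a token can be sketched in a few lines. This is only a toy: a real scheme would use X.509 certificates and asymmetric signatures as described above, whereas this sketch uses a symmetric HMAC held by the verification service, and the token format, key, and screen name are all made up.

```python
import hashlib
import hmac
import json

# Hypothetical issuer key. In the real scheme this would be the
# verification service's private signing key, not a shared secret.
SERVICE_KEY = b"demo-key-held-by-the-verification-service"

def issue_token(screen_name: str, over_18: bool) -> str:
    """Sign only the screen name and an over-18 bit; the legal name
    verified offline is never placed in the token."""
    payload = json.dumps({"sn": screen_name, "over18": over_18}, sort_keys=True)
    sig = hmac.new(SERVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def verify_token(token: str):
    """Return the claims if the signature checks out, else None."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SERVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(payload)
    return None

tok = issue_token("anon_badger", True)
print(verify_token(tok))  # {'over18': True, 'sn': 'anon_badger'}
```

The relying site learns that *someone* was verified as over 18 under the name "anon_badger", and nothing else, which is the anonymisation property the comment describes.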

Comment Re:Dumb precedent. Addiction is on the user. (Score 3, Insightful) 112

And those come with warnings, legal penalties for vendors who sell to known addicts or children, legal penalties for abusers, financial penalties for abusers, etc. There are even cars fitted with their own breathalysers.

So, no, society has said that the responsibility is distributed. Which is correct.

Comment Re:Exploitation of children is inevitable??? (Score 1) 45

It is legitimate for any service that constitutes a "common carrier" to be free of consequences for what it carries. But Meta do not claim to be a "common carrier", and that changes the nature of the playing field substantially. As soon as a service can inspect messages and moderate, it is no longer eligible to claim that it is not responsible for what it carries.

Your counter-argument holds some merit, but runs into two problems.

First, society deems any service that monitors to be liable. That may well be unreasonable at the volumes involved, but that's irrelevant. Meta chose to monitor, knowing that this made it liable in the eyes of society. There are, of course, good reasons for that - mostly, society is sick and twisted, and criminality is encouraged as a "good thing" and "sticking it to the man". This is a very good reason to monitor. But Meta chose to have an obscenely large customer base (it didn't need to), Meta chose to monitor (it is quite capable of parking itself in a country where this isn't an obligation), and Meta chose to make the service addictive (which is a good way of encouraging criminals onto the scene, as addicts are easy prey).

Second, Meta has known there's been a problem for a very long time (depression and suicide among human moderators has been a serious problem for Meta for many years at this point). Meta elected to sweep the problem under the rug and create the illusion of doing something by using AI. If a service knows there's a problem but does nothing, and in particular a very cheap form of nothing, then one must consider the possibility that said service is not solving said problem because there's more money to be made by keeping the abusers around than by removing them.

Can one block every criminal action? Probably not, which means that that's the wrong problem to solve. Intelligent, rational, people do not try to solve actually impossible problems. Rather, they change the problems into ones that are quite easy. This is very standard lateral thinking and anyone over the age of 10 who has not been trained in lateral thinking should sue their school for incompetence.

Submission + - FCC Bans Nearly All Wireless Routers Sold in the U.S. (reason.com)

fjo3 writes: This week, the Federal Communications Commission (FCC) effectively banned the sale of nearly all wireless routers in the U.S., in yet another example of the government making Americans' consumer decisions for them.

Ninety-six percent of American adults use the internet, and 80 percent of them use wireless routers—devices that transmit a signal throughout your home via radio waves and allow you to get online without plugging into the wall.

In a Monday announcement, the FCC deemed "all consumer-grade routers produced in foreign countries" potentially unsafe. This followed a national security determination last week, in which members of executive branch agencies concluded that "routers produced in a foreign country, regardless of the nationality of the producer, pose an unacceptable risk to the national security of the United States and to the safety and security of U.S. persons."

Comment Re: YUP! (Score 1) 118

I'm in favor of fixing this properly before the politicians mandate something stupid.
My proposal is a sysctl value set by a PAM module (or by systemd, on systems infected with that). The browser then does something like language negotiation, much as it does with the HTTP Accept-Language header. Those values can be intercepted, checked or forced in environments that have to provide web access to kids, like schools. A web site should be able to ask for something like Australia's under-16 rule and get back an AGE_AU_VIC_Under_16=True, if and only if the system is configured to send it. This allows things like online newspapers to let under-16s read the news but not the discussion forums.

The proposal still needs work, but it lets parents set things as they wish and keeps the politicians out of it while letting them claim they fixed it. In the past, local ISPs were required to give out software to lock down kids' computers, and the take-up was smaller than the number of people who supported the law.
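The site-side half of that proposal can be sketched as a simple request-handler check. The header name AGE_AU_VIC_Under_16 comes from the comment above; nothing like it is a current standard, and the handler shape here is purely illustrative.

```python
# Sketch of a site-side check for the hypothetical AGE_AU_VIC_Under_16
# flag. A newspaper's article pages would skip this check entirely;
# only the discussion-forum routes would call it.
def forum_access_allowed(headers: dict) -> bool:
    """Allow forum access only when the browser explicitly asserts the
    user is not under 16 for the AU/VIC rule. Fails closed: a missing
    or unparseable value is treated as under 16."""
    return headers.get("AGE_AU_VIC_Under_16", "True") == "False"

print(forum_access_allowed({"AGE_AU_VIC_Under_16": "False"}))  # True
print(forum_access_allowed({}))                                # False
```

Failing closed when the header is absent is a judgment call: it means unconfigured systems lose forum access rather than exposing kids, which matches the "if and only if configured" intent above.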

Submission + - Federal Cyber Experts Thought Microsoft's Cloud Was "a Pile of Shit." (propublica.org)

madbrain writes: Federal Cyber Experts Thought Microsoft’s Cloud Was “a Pile of Shit.” They approved it anyway.

To move federal agencies to the cloud, the government created a program known as FedRAMP, whose job was to ensure the security of new technology.

FedRAMP first raised questions about the security of Microsoft's Government Community Cloud High in 2020 and asked Microsoft to provide detailed diagrams explaining its encryption practices. But when the company produced what FedRAMP considered to be only partial information in fits and starts, program officials did not reject Microsoft's application. Instead, they repeatedly pulled punches and allowed the review to drag out for the better part of five years. And because federal agencies were allowed to deploy the product during the review, GCC High spread across the government as well as the defense industry. By late 2024, FedRAMP reviewers concluded that they had little choice but to authorize the technology — not because their questions had been answered or their review was complete, but largely on the grounds that Microsoft's product was already being used across Washington.
