Comment Re:The intent is universal surveillance and silenc (Score 1) 147
For family, it's Settings->Family->Apple Account->'Age range for apps'
I started lurking in 4K enthusiast groups to see if they were all they were cracked up to be. The arguments about the relative quality of various BD/4K releases aren't even the most interesting part.
It turns out that there are a lot of issues with set top boxes playing particular disks. The disks themselves also seem terribly fussy.
The problem here is that developers can take responsibility for the action while AI can not. Humans do make mistakes and that's ok; best practice is not to just can employees for messing up. Once is a mistake. Twice is an HR event. When someone does something dumb we forgive but we also insist that meaningful steps are taken to prevent that problem in the future. AI can't really take those steps because AI can't be accountable for "don't do it again." Taking down production because you dropped a table once is forgivable. Taking it down twice for the same reason is a different matter.
The developer can be accountable. And if HR fails to hold them to account for it, HR is accountable. And if HR isn't held accountable, leadership is. And if leadership isn't held accountable, the board is. And if the board isn't held accountable, the stockholders have some hard decisions to make. And if they choose not to make them, then it wasn't really that big a deal, was it?
But with an AI the option is "we stop using AI" or "we live with the result."
Everyone is so excited about not having to pay software engineers to write code that they've forgotten what engineers actually do. It's less common in the software world but go find a civil engineer or an electrical engineer or an aerospace engineer and follow them around for a week.
At some point, there's going to be a document in front of them laying out how something is going to be built and they're going to be asked to approve it. And when they do that they're taking responsibility for the design. If it falls down, if it catches on fire, or if it crashes into the mountains and kills people, they're the name on the form saying that won't happen. They're responsible.
Claude Opus 4.5 is very impressive, but if it writes a software application that kills people it can't take responsibility. It can't be punished. It can't even really be sued.
I just don't see how we, as a society, can trust fundamentally unaccountable entities to build systems that can do real harm if they go wrong. I suppose the alternative is that Anthropic accepts full legal liability for everything its models do. Their unwillingness to make that move tells you all you probably need to know about their own internal confidence in those models.
---
Anthropic is prepared to loosen its current terms of use, but wants to ensure its tools aren't used to spy on Americans en masse, or to develop weapons that fire with no human involvement.
The Pentagon claims that's unduly restrictive, and that there are all sorts of gray areas that would make it unworkable to operate on such terms. Pentagon officials are insisting in negotiations with Anthropic and three other big AI labs -- OpenAI, Google and xAI -- that the military be able to use their tools for "all lawful purposes."
This outrageous level of paranoia over "alleged" drone sightings will cost human lives soon.
Here we have the US military mis-identifying a party balloon as a drone and firing a powerful laser at it -- while members of the public get prosecuted every year for flicking their laser-pointers at helicopters and airliners.
In Germany, police will be allowed to shoot at "alleged" drones even though it has been clearly proven that most (if not all) of the recent drone sightings were simply mis-identified aircraft lights.
Can anyone see the potential for disaster here?
The mis-identification of aircraft flying at night as "drones" has become rife, dating back beyond the NY/NJ "drone" incidents that caused such concern in the USA a year or two ago. Almost without exception, these "drones" are real aircraft (often passenger flights) carrying people through the skies. How long before one of them is shot down by paranoid trigger-happy idiots?
Paranoia is a mental health issue and it's infecting governments and authorities around the world.
Before someone says "but... Ukraine..." I ask you: how many people have died as the result of actions by bad actors using drones in the USA or outside the war zones in Europe?
That's a big fat ZERO!
Yes, it "could" happen but right now it's far more likely that innocent people will die from friendly fire produced by paranoid idiots on the ground with guns and lasers.
One thing the science does tell us is that we all have a very hard time separating the world that existed when we were children from our perception of that world through the eyes of a child.
Ask nearly any population in the United States when this country was best and you'll get a majority who'll swear to you it was when they were teenagers. The age of the group doesn't matter. You get the same result from 20 year olds as 40 year olds as 60 year olds as 80 year olds. And what you're seeing is people looking back to a time when they had lots of free time, lots of freedom, and most of their income was disposable and thinking "that was pretty great." And it was.... except they were living under a roof someone else paid for and still experiencing the risks and complexities of the world through the filter and safety net provided by their parents.
And since we're being scientific about this: yes, obviously not everyone. I'm sure someone reading this right now is thinking "I had a tough childhood." And I'm sure they did but anecdotes are not data.
The 1980s were -- and I say this as both a historian and someone who lived through them -- fucked. Reagan torched the New Deal consensus. The AIDS crisis was literally laughed out of the White House press room. Our government perpetuated a long string of dirty intelligence/foreign-policy interventions. The wealthy and powerful were juiced to the gills on cocaine.
There was a sense of decorum which has since evaporated from American politics, but that's about it.
https://livingwage.mit.edu/met...
Typical annual salary, according to MIT's Living Wage Calculator for the NYC Metro, is $84,860.
Poverty wage is $7.52/hr (no kids) and minimum wage is $15.50 which, according to the calculator, should cover 1 adult with 3 kids.
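As a rough sanity check on how those hourly and annual figures relate (assuming a standard 2,080-hour full-time year; the variable names are my own, not the calculator's):

```python
# Rough annualization of the wage figures quoted above,
# assuming a standard full-time year of 40 hours x 52 weeks.
HOURS_PER_YEAR = 40 * 52  # 2,080 hours

poverty_hourly = 7.52     # poverty wage, 1 adult, no kids
minimum_hourly = 15.50    # NY minimum wage
living_annual = 84_860    # MIT Living Wage Calculator, NYC Metro typical salary

print(f"poverty wage annualized: ${poverty_hourly * HOURS_PER_YEAR:,.0f}")
print(f"minimum wage annualized: ${minimum_hourly * HOURS_PER_YEAR:,.0f}")
print(f"living wage as hourly:   ${living_annual / HOURS_PER_YEAR:.2f}/hr")
```

So the quoted minimum wage works out to roughly $32K a year, well under half the $84,860 typical-salary figure, which annualizes to about $40.80/hr.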
The Brearley School, regarded as the best private school for girls in the nation and charging around $70K, is a non-profit.
What shareholders?
Vitamin C deficiency is apauling.