Comment NASA (Score 1) 28
1) Give NASA a proper budget
2) Congress should stop interfering
Did Congress or the president micromanage them during the 1950s through the 1970s?
From the very beginning, this has been about one fundamental principle: the military being able to use technology for all lawful purposes.
There is no such principle; it's completely made up. Worse, there are basically no laws restricting AI use, which means the Pentagon is asserting that it must be able to use AI for literally anything it wants to.
Well, to be fair, there are legal restrictions on what the DoD can do. For example, they can't blow up random boats off of the South American coast, they can't occupy American cities, they can't unilaterally invade a foreign country and kidnap its head of state, and they can't just start bombing shit without any congressional authorization.
So, you know, they're restricted to using the AI only for things that they are legally allowed to do.
The government will lose in court... eventually. But the punishment will have already taken place.
Capitulate or suffer the consequences. Resistance is painful.
Painful, but how painful, really? The publicity has boosted Anthropic's subscriptions significantly, and the summary overstates the impact of the label. It is not true that all companies that do business with the DoD will have to cut ties with Anthropic; the label just means that companies that do business with the DoD can't use Anthropic's AI on their DoD contracts. They're still free to use it for any non-DoD work they do, or to run their own business operations.
So, yes, it'll inflict some pain, but I think they can handle it. Anthropic is far healthier than OpenAI, financially.
Hold on, the news channel that runs 24/7 on my TV is telling me this is the fault of illegal immigrants and liberals.
What happens if you ask them to be quiet?
An elderly man shoots you dead. https://abc7chicago.com/post/c...
Announcing and doing are different. What if he hasn’t? There are no repercussions or consequences.
King of the grift. He’s raking in cash at our expense. https://www.forbes.com/sites/d...
I mean he created his crypto shit coins so you don’t need USD to buy a pardon or favor.
How are the no new wars promise and the oil prices doing?
If you know there’s a school nearby why would you bomb what amounts to an office building for their navy?
Concerning this story, do your other sources say something different?
I created an applicable meme. https://imgflip.com/i/aluaht
BLE does enable precise location of your device, just as much as GPS does, at least unless you're out in the boonies far from any Bluetooth devices with known locations. That's not Google being obnoxious, that's Google being careful with your privacy.
And, FWIW, it's not that I "know good people working at Google" (I do), it's that until a few months ago I worked at Google, on Android, on the security team... and I played a role in that exact decision you're complaining about. And it's the right decision for user privacy. If you don't want your location to be tracked, you really can't use Bluetooth. If your phone gave you the option to turn Bluetooth on but GPS off, it would be giving you a false sense of privacy. It would be lying to you. The Android Security team thinks lying to users is bad. And if the security team didn't, the legal team would.
Tell them to get you Claude. Claude will tell you that you're an idiot, then fix your crap for you.
In all seriousness, pitting LLMs against each other is a very effective way to decrease slop and increase output quality. You don't even need to use different models. Just have one agent critique the code and write a report, then another one read the report and fix the code. They need to be different "conversations" (or one can be a subagent of the other). Telling an LLM to critique *and* fix the code will frequently result in it justifying not fixing the code (sometimes the justifications are entertaining, but usually I want to get work done). It also helps to use multiple critic agents, each one laser focused on one particular type of badness you want identified and weeded out.
I've built up a stable of critic agents and I tell my LLM to run all of them and fix everything they point out, iteratively, until there are either no more things to fix or else the agent has good reasons for not fixing all of the things, at which point it should present me with a report. Only after that do I look at the code... at which point I make it fix some more things and then refine (or add) relevant critic(s).
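The critic-then-fixer workflow above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual tooling: `run_agent` stands in for whatever LLM API you use (here it's a stub so the control flow is visible), and the critic prompts are made-up examples of the "laser focused" critics described.

```python
def run_agent(system_prompt, user_input):
    # Stand-in for a real LLM call; a real version would hit an API.
    # The stub just tags its input so the data flow can be traced.
    return f"[{system_prompt}] {user_input}"

def critique_then_fix(code, critic_prompts):
    """Run each critic in its own fresh 'conversation', then hand the
    combined report to a separate fixer agent."""
    reports = []
    for prompt in critic_prompts:
        # Each critic is narrowly focused on one kind of badness.
        reports.append(run_agent(prompt, code))
    combined_report = "\n".join(reports)
    # The fixer never wrote the critique, so it has no stake in
    # justifying why the code shouldn't change.
    return run_agent("Fix every issue in this report.",
                     f"{combined_report}\n---\n{code}")

# Hypothetical critic prompts, each with a single narrow focus.
critics = [
    "Report only error-handling gaps.",
    "Report only naming and readability problems.",
]
result = critique_then_fix("def f(x): return x/0", critics)
```

The key design point is the separation: critics emit reports in isolated conversations, and a distinct fixer conversation consumes those reports, which avoids the single-agent failure mode of critiquing and then talking itself out of the fix.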
I don't want to start an analogy war, but I can't help but point out that cars and kitchen tools don't converse at length with their users. If they did, and they started encouraging their users to harm themselves or others, then a lawsuit against the manufacturer would be in order.
I've read fantasy novels where an evil sword tried to talk its wielder into mass murder. Nobody sued the evil wizard that made the sword!
OTOH, somebody usually murdered the wizard (if they weren't already long dead), so... yeah.
Programmers used to batch environments may find it hard to live without giant listings; we would find it hard to use them. -- D.M. Ritchie