Would this be the Starlink system Musk rushed to Ukraine and afaik continues to allow UKR to use free of charge? (I believe that some donor nations do pay sub fees for the systems they've purchased for Ukr, to be clear.)
Musk repeatedly said that he won't allow Starlink to be used to support offensive operations. Yes, sometimes free gifts come with strings attached.
Your insistence that because Musk doesn't do everything Ukraine wants without question, "we know where his sympathies lie" is childish.
Yes, I can see the argument that an offensive to retake Ukr territory should be allowed, but I can also see the argument that it is an offensive. Musk, a rather pacifistic person who routinely gets collywobbles when confronted with violence, as pacifists often do, probably sees it that way (and your own linked article mentioned that Biden military & intel people at the time thought Putin's retaliatory threats were increasingly credible). My guess is that he was confronted with Russian threats to himself or his companies globally.
Would it be better if Starlink just entirely shut down all service to the region? Would that be better for Ukraine?
If that's not what you want, then maybe shut the fuck up?
What is happening to Ukraine is terrible. Ukraine's defense against a sociopathic neighbor has been heroic. That doesn't mean everyone, everywhere, 24/7 makes "what's best for Ukraine" their priority.
And?
Dead on. The stuff that's the same is the party leaderships trying to keep us in line, to trust them with the reins of power, finance their campaigns, vote for the incumbents or the approved candidates, and let them do what they want.
In the US, the two major parties are indistinguishable in intent, that is, to keep their jobs. Challengers are either vetted by the leaderships, and so are not really challengers but replacements, or they are hammered into submission, to serve the Uniparty.
For one party this is absolutely the goal and ambition of their rank and file, to see their 'leadership' prevail. For the other party, this is actually anathema, but the rank and file is not yet entirely aware of the process.
We will see which succeeds...
...me!
I can do that. $%**, I've deleted
Seriously, though, when the AI recognizes it is about to operate on a root folder, it should be directed to confirm this twice with the user. These AI coding agents will become useful, to me, when they help a user avoid errors.
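Something like this would do it (a minimal sketch in Python; the protected-path check, the function names, and the two-prompt flow are my own illustration, not any particular agent's actual behavior):

```python
from pathlib import Path

# Hypothetical guard: refuse a destructive operation on a root-level path
# unless the user explicitly confirms it twice.
PROTECTED = {Path("/"), Path.home()}

def is_root_like(path: str) -> bool:
    """True if the path resolves to /, the home directory, or a direct child of /."""
    p = Path(path).expanduser().resolve()
    return p in PROTECTED or p.parent == Path("/")

def confirm_twice(action: str, path: str) -> bool:
    """Require two separate 'yes' answers before proceeding."""
    for attempt in (1, 2):
        answer = input(f"[{attempt}/2] Really {action} '{path}'? Type 'yes' to continue: ")
        if answer.strip().lower() != "yes":
            return False
    return True

def guarded_delete(path: str) -> None:
    if is_root_like(path) and not confirm_twice("delete", path):
        print("Aborted.")
        return
    # ... actual deletion would go here (e.g. shutil.rmtree) ...
    print(f"Deleting {path}")
```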
John Money was the first of the batshit legion (that I'm aware of; his entire oeuvre sickens me so pardon if I haven't delved too deeply) to believe gender was distinct from actual sex.
"My deeply flawed views" are the facts that stand unchanging, despite fads.
Humans are either xx or xy chromosomes, producing large or small gametes respectively.
The tiny percentage that aren't are mutations that happened to survive, like people being born without eyes, or legless, or conjoined. None of them are to be brutalized, none of them are to be treated with anything but respect and sympathy; it doesn't make them normal except insofar as 'mutations' in heterosexual reproduction are normal.
Give up; your bullshit carried through the crazy-years of 2020-2024, but nature is healing. Fucked up ideological gaslighting is fading before things like actual facts. We're calling MAPs pedophiles again, and people are recognizing how wrong were the lies that fueled Mengele-like brutal transgender experiments on CHILDREN, ruining their fucking lives. You can't "undo" castration chemicals given to prepubescent teens. You can't just wear a girl-mask and insist you're a woman. If you believe that, you need help and counseling, not endorsement.
That wasn't *all* I said, but it is apparently as far as you read. But let's stay there for now. You apparently disagree with this, which means that you think that LLMs are the only kind of AI that there is, and that language models can be trained to do things like design rocket engines.
...which makes it super convenient when you can just use that label on everyone, right?
No it isn't. Neither a teeny little collection of actual hermaphrodites and genetic sports, nor male fetishplay justifies the rewriting of common sense.
Humans are divided into two sexes, except when something goes WRONG.
Your view is complete bullshit, an ideology-cloaked-in-"science" promulgated by John Money, a sadistic pedophile whose kinks ended up destroying at least one family with BOTH sons suiciding eventually from his "therapies" but hey, at least he took ample photos when he compelled them to play sex games with each other, right?
This isn't true. Transformer-based language models can be trained for specialized tasks having nothing to do with chatbots.
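For instance (a rough sketch assuming the Hugging Face transformers library; the checkpoint name and the three labels are placeholders I made up), the same transformer architecture that powers chatbots can serve as a plain classifier with no chat interface at all:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Fine-tune-ready classifier head on top of a pretrained transformer;
# the labels here (defect / feature / question) are purely illustrative.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)

inputs = tokenizer("Turbine blade shows micro-cracking after 200 cycles",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class index (head is untrained, so illustrative only)
```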
That's what I just said.
The population's lack of a grasp on reality is awful, but not surprising.
We just spent what, 4 years with every official agency, every mainstream media outlet, even MEDICAL professionals saying - nay, insisting - that putting on makeup or a dress could make a man literally a woman. That chopping off a child's genitals and replacing them with a simulacrum would do the same.
Don't blame ChatGPT for the general public's lack of grasp on reality. That has been the product of carefully crafted propaganda.
Oh btw, every angry downvote just emphasizes my point.
Nobody knows what LLMs are fundamentally capable of doing.
We know exactly what LLMs are capable of doing, at least as they exist today. How they might be used to deliberately or inadvertently influence people is a different, and open, question.
There is an area where LLMs might significantly help research - and it might even be applicable to space flight propulsion - and that's bringing in knowledge from other fields.
For example, there could be some quirk about the way squids and octopuses propel themselves in the water which is known to a handful of physicists who have studied the fluid dynamics in detail but isn't generally widely known.
A space propulsion engineer asking an LLM about a problem they're working on will likely "find" this knowledge - knowledge that wouldn't otherwise be obvious, where it wouldn't even be obvious whom to ask.
And a real-world historical example: radar chirping was classified. Bats do it. And the name "chirp" came later, after the technique was independently discovered and understood in bats.
But it's vanishingly unlikely that LLMs will provide any insight into a research problem to anyone not already well versed in the field, and even if they do, it will be because the researchers have sifted that thread of gold from the haystack of straw, the LLM itself being incapable of telling the difference.
Here's where the summary goes wrong:
"Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs."
Artificial Intelligence is in fact many kinds of technologies. People conflate LLMs with the whole thing because it's the first kind of AI that an average person with no technical knowledge could use, after a fashion.
But nobody is going to design a new rocket engine in ChatGPT. They're going to use some other kind of AI that works on problems and processes the average person can't even conceive of -- like design optimization, where there are potentially hundreds of parameters to tweak. Some of the underlying technology may have similarities -- like "neural nets", which are just collections of mathematical matrices that encode likelihoods underneath, not realistic models of biological neural systems. It shouldn't be surprising that a collection of matrices containing parameters describing weighted relations between features should have a wide variety of applications. That's just math; it's just sexier to call it "AI".
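To make the "just math" point concrete, here's a toy sketch (Python/NumPy, with made-up shapes and random weights, not any real design-optimization model): a forward pass through a small "neural net" is nothing more than matrix multiplications plus a nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 100))    # 100 design parameters for one candidate design
W1 = rng.normal(size=(100, 32))  # "learned" weight matrix, layer 1 (random here)
W2 = rng.normal(size=(32, 1))    # "learned" weight matrix, layer 2 (random here)

hidden = np.maximum(0, x @ W1)   # weighted relations between features, then ReLU
score = hidden @ W2              # scalar prediction, e.g. some objective to optimize
print(score.shape)               # (1, 1): one number per candidate design
```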
... In the sense of "a Linux-using vegan who went to Yale didn't know how to start her conversations..."
A sine curve goes off to infinity, or at least the end of the blackboard. -- Prof. Steiner