Comment Re:Country based names (Score 1) 18
There's a long history of things like
This is an easy example of how badly the TLD policies were drafted and managed. Tech people making decisions like this is a continual experiment in governance, except nobody ever learns from the mistakes because they don't realise it is governance.
The misuse of
In the past I've occasionally sent mostly-dead disks to data recovery places when I really, really needed something back.
Is this security from outsiders really worth giving up on the potential for that?
What, did they kill your cat or something? Were you an OS/2 user? Apple fan?
Corporations are not your friend, but they also don't have personalities stable enough over the long term that it makes sense to attach blame (or praise) to them. The Microsoft of the bad old days probably shares few employees with the Microsoft of today. Meaning you're probably mostly just angry at a name.
Not joking. In situations like this, when someone else makes nice, you make nice.
It'd be nice if the people writing about these things weren't trying so hard to be ungracious. If you shit on people when they throw you a bone, don't expect more bones, or anything better.
I don't want CTOs designing social policy; when they try to write our laws, or simply ignore them, that's a big problem (Musk being an obvious example). But I likewise want there to be a really good reason before we vote on which technologies are allowed to exist; it shouldn't be a normal thing.
There's nothing wrong with wanting to put pressure on China over concrete things (potential invasion of Taiwan, efforts to control lives of people of Chinese ethnicity in other countries, poor civil rights, etc). This is not an intelligent way to do it; export controls won't work with a country like China.
Biology would like a word.
If we ever get near AGI, the kind of LLMs we have now will seem like toys. LLMs largely compensate for the fact that real language doesn't work without context and without access to a lot of other parts of a mind. In an AGI where those parts actually exist, we'd want to ditch every LLM and develop language subsystems more naturally.
The idea that the technology could be both open and have guardrails glued on was a fantasy, and it's more important that the technology be open. It's not like these LLMs are producing anything a reasonably intelligent college-age student couldn't put together with a bit of research. Seeing this as a danger is stupid.
I think it's just going after some incorrect claims. It's still an achievement, it's just important to talk about it accurately.
If you're only interested in the worst case rather than the average case, and you ignore that cases like the above both create outrage and, as extreme outliers, get corrected, you're missing the point.
Real support structures are indeed highly integrated, but they're not always voluntary. There are times when involuntary treatment is justified (not always), and even then, force should be rare. When people won't come in off the street voluntarily, leaving them there is neglect, and it's worse for everyone.
I agree that autonomy is important, and that power corrupts, but that shouldn't be an excuse for the widespread neglect that comes from going hardline against paternalism.
There's nothing wrong with a nanny state.
Beyond that, there's a general understanding that if someone's about to die in some bit of the wilderness, some institution will try to rescue them. Even if they entered into it with eyes open, accepted the danger, signed the paperwork, whatever. Preventing people from doing really stupid stunts that will almost certainly require later rescue is pretty reasonable.
In seeking the unattainable, simplicity only gets in the way. -- Epigrams in Programming, ACM SIGPLAN Sept. 1982