Comment Re:I know people who use Twitter (Score 1) 58
Bluesky's only selling point is that it's supposed to be a better place than Xitter. You're so busy bashing Xitter that it comes across as conceding that Bluesky isn't any better.
That works out to one hell of a slogan: "Choose Bluesky, for Xitter levels of toxicity combined with Old Twitter levels of censorship."
Your comment is a brilliant example of Mark Cuban's criticism of Bluesky: "agree with me or you are a nazi fascist." Bluesky has always been an echo chamber, but it's gotten progressively more toxic in tone as its user base scaled up.
Just one point: "free love" actually has a historical definition. It means that sexual activity should not be regulated by governments. That would also, technically, include the right to practice celibacy.
I meant what I wrote. See https://arstechnica.com/featur..., the citation there to the NYT lawsuit, the "Substantial Similarity" section of https://guides.lib.umich.edu/c..., and the reliance on fair use to allow the copying in Google v. Oracle.
I wonder at what rate they'll need to increase the pricing in order to maintain it. Ironically, improved traffic flow may make driving more desirable.
They will have to increase the price eventually as demand for transport overall rises. The point of the pricing is to deter driving enough that the street network operates within its capacity limits; if driving becomes more desirable than the status quo ante, they aren't charging enough and will have to raise prices to keep demand manageable.
Think of it this way: either way, traffic will reach some equilibrium. The question is, what is the limiting factor? If using the road is free, then the limiting factor is traffic congestion. If you widen some congested streets, the limiting factor is *still* congestion, so eventually a new equilibrium is found which features traffic jams with even more cars.
The only way to build your way out of this limit is to add *so* much capacity to the street network that it far outstrips any conceivable demand. This works in a number of US cities, but they're small and have an extensive grid-based street network with few natural barriers like rivers. There is simply no way to retrofit such a street architecture into a city of 8.5 million people where land costs six million dollars an acre.
So imposing use fees really is the only way to alleviate traffic for a major city like New York or London. This raises economic fairness issues, for sure, but if you want fairness, you can have everyone suffer, or you can provide everyone with better transportation alternatives, though not necessarily the same ones. Yes, the wealthy will be subsidizing the poor, but they themselves will also get rewards well worth the price.
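If it helps to see the equilibrium logic in miniature, here is a toy model (every number below is invented for illustration; it is not a calibrated model of New York traffic): drivers keep entering until the all-in cost of a trip, travel time plus any toll, stops being worth it.

CAPACITY = 10_000          # vehicles per hour the street network handles smoothly
FREE_FLOW_MINUTES = 20     # trip time with no congestion
VALUE_OF_TIME = 0.50       # dollars a driver implicitly assigns to one minute

def travel_time(volume):
    # Trip time grows steeply once volume passes capacity (a BPR-style curve).
    return FREE_FLOW_MINUTES * (1 + 0.15 * (volume / CAPACITY) ** 4)

def demand(cost_per_trip):
    # How many drivers still want to drive at a given all-in cost per trip.
    return max(0, 25_000 - 800 * cost_per_trip)

def equilibrium_volume(toll, iterations=200):
    # Iterate until demand and congestion are consistent with each other.
    volume = CAPACITY
    for _ in range(iterations):
        cost = toll + VALUE_OF_TIME * travel_time(volume)
        volume = 0.9 * volume + 0.1 * demand(cost)   # damped update so it converges
    return volume

for toll in (0, 9, 15):
    v = equilibrium_volume(toll)
    print(f"toll=${toll:>2}: {v:,.0f} veh/h, trip takes {travel_time(v):.0f} min")

With a toll of zero the model settles well above capacity, because the jam itself is the price; a modest toll brings volume back to roughly what the network can carry; push it higher and the streets sit underused, which is the signal to stop raising it.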
What the hell is "17.1.106"? If you mean section 106 of Title 17 of the U.S. Code, that's usually written as something like "17 USC section 106".
AI companies usually infringe on the first of the rights listed in 17 USC 106: "reproduc[ing] the copyrighted work in copies". This happens repeatedly during training: the model creator copies the training material to nonvolatile storage for reference, then again when loading it into RAM to train the model. In some cases, the AI model will generate further copies in how it responds to prompts, which also infringes the third reserved right, public distribution of those copies.
See also 17 USC 117, which authorizes certain acts of copying a computer program by the owner of a copy. That section was amended in response to MAI Systems Corp. v. Peak Computer, Inc., 991 F.2d 511 (9th Cir. 1993), which applied sections 106 and 101 of Title 17 to copies made within a computer.
Or alternatively, and stop me if you think this is crazy, whether someone you don't know chooses to die or not is none of your goddamned business; and if they are unable to carry it out in a way that causes as little suffering as possible, and they seek out professional medical assistance, then again, provided they are of sound mind, it's none of your goddamned business.
Well, yes -- the lies and the exaggerations are a problem. But even if you *discount* the lies and exaggerations, they're not *all of the problem*.
I have no reason to believe this particular individual is a liar, so I'm inclined to entertain his argument as being offered in good faith. That doesn't mean I necessarily have to buy into it. I'm also allowed to have *degrees* of belief; while the gentleman has *a* point, that doesn't mean there aren't other points to make.
That's where I am on his point. I think he's absolutely right, that LLMs don't have to be a stepping stone to AGI to be useful. Nor do I doubt they *are* useful. But I don't think we fully understand the consequences of embracing them and replacing so many people with them. The dangers of thoughtless AI adoption arise in that very gap between what LLMs do and what a sound step toward AGI ought to do.
LLMs, as I understand them, generate plausible-sounding responses to prompts; in fact, with the enormous datasets they have been trained on, they sound plausible to a *superhuman* degree. The gap between "accurately reasoned" and "looks really plausible" is a big, serious one. To be fair, *humans* do this too -- satisfy their bosses with plausible-sounding but not reasoned responses -- but the fact that these systems are better at bullshitting than humans isn't a good thing.
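To make "plausible" concrete, here is a cartoon of the idea (a tiny bigram counter, nothing like a real transformer, but the objective of predicting the likely continuation is the same in spirit):

import random
from collections import defaultdict, Counter

corpus = (
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "     # repeated: plausible is not the same as true
    "the sun is made of plasma . "
).split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        candidates = follows[word]
        if not candidates:
            break
        # Sample in proportion to how often a continuation was seen,
        # i.e. by plausibility, with no notion of factual accuracy.
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))   # most often: "the moon is made of cheese ."

Nothing in that loop ever asks whether the continuation is true; it only asks whether it has been seen often enough to sound right, which is exactly the gap I'm describing.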
On top of this, the organizations developing these things aren't in the business of making the world a better place -- or if they are in that business, they'd rather not be. They're making a product, and to make that product attractive their models *clearly* strive to give the user an answer that he will find acceptable, which is also dangerous in a system that generates plausible but not-properly-reasoned responses. Most of them rather transparently flatter their users, which sets my teeth on edge, precisely because it is designed to manipulate my faith in responses which aren't necessarily defensible.
In the hands of people increasingly working in isolation from other humans with differing points of view, systems which don't actually reason but are superhumanly believable are extremely dangerous, in my opinion. LLMs may be the most potent agent of confirmation bias ever devised. Now, I do think these dangers can be addressed and mitigated to some degree, but the question is, will they be, in a race to capture a new and incalculably valuable market where decision-makers, both vendors and consumers, aren't necessarily focused on the welfare of humanity?
That's not why they are called externalities. They are called externalities because they are the external consequences of actions. They are very much in our control, and if you are going to price any commodity fairly, to make it reflect true market and societal costs, then you have to price those in. Otherwise what you're really doing is subsidizing an industry.
This is how I've come to understand it. I welcome any and all corrections.
A Passkey is a cryptographic key pair whose private half is stored in a Secure Element, a small cryptographic engine. During login, the website sends a random challenge; you feed that challenge, along with the key ID, to the Secure Element, which signs it with the private key. The site then verifies the signature using the public key it stored when you registered the Passkey. If the signature checks out, that proves you're holding the matching private key, and authentication proceeds.
The private key can be written into the Secure Element and erased from it, but never read back out. All the element can do is perform operations with that private key to prove it is indeed holding the correct one.
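For the curious, here's a minimal sketch of that challenge/response flow in Python using the "cryptography" package (my own toy names; real WebAuthn wraps this in a larger protocol, and the private key never sits in ordinary memory the way it does here):

import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Enrollment: the authenticator generates a key pair and releases only the public half.
private_key = ec.generate_private_key(ec.SECP256R1())   # would stay inside the Secure Element
public_key = private_key.public_key()                    # registered with the website

# Login: the website sends a random challenge...
challenge = os.urandom(32)

# ...the Secure Element signs it with the private key...
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ...and the website verifies the signature with the public key it stored at registration.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("valid: the client holds the matching private key")
except InvalidSignature:
    print("invalid: authentication fails")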
On phones, the Secure Element is in the hardware of your handset. On PCs, this is most often the TPM (Trusted Platform Module) chip. In both cases, the platform will ask for your PC's/phone's password/fingerprint/whatever before forwarding the request to the Secure Element.
Yubikeys can also serve as the Secure Element for Passkeys; the private key is stored in the Yubikey itself. Further, the Yubikey's stored credentials may be protected with a PIN, so even if someone steals your Yubikey, they'll still need to know the PIN before it will perform authentication. You get eight tries at the PIN; after that, the key's FIDO application locks itself and must be reset, which wipes the stored credentials.
The latest series 5 Yubikeys can store up to 100 Passkeys, which may be individually deleted when no longer needed. Older series 5 Yubikeys can store only 25 Passkeys, which can only be removed by erasing all of them at once.
Theoretically, you can have multiple Passkeys for a given account (one for everyday access; others as emergency backups). Not all sites support creating these, however.
Family, at dinner: "Thank you, Jesus, for this food."
Jesus, in a farm field: "De nada."
The argument only makes sense if you completely discount externalities. Once you factor them in, the costs of fossil fuels are absolutely monumental.
I mean, a bullet to the head cures cancer, but I'm not exactly seeing this being proposed as a medical solution.
Ah yes, reductio ad absurdum.
Twelve Angry Men must have driven you insane
You can't have everything... where would you put it? -- Steven Wright