artificially constructed to maximize its persuasive power.
Imagine having such an immense vocabulary, and using it that weakly.
A government with the resources of a large jurisdiction like California, or more to the point the US, could bankrupt someone like Musk, so I say: bring it on. Within a decade Musk would have abandoned all efforts, or, even better, be stone cold broke (frankly, billionaires shouldn't exist at all, and we should tax the living fuck out of them down to their last $200 million).
We're too afraid of these modern-day Bond villains when we should be aiming every financial, and probably every real, cannon straight at them and putting them in a sense of mortal danger every minute of their waking lives, so that they literally piss themselves in terror at the thought that "we the people" might decide to wipe them out for good.
Lincoln was a Free Soiler. He may have had a moral aversion to slavery, but it was secondary to his economic concerns. He believed that slavery could continue in the South but should not be extended into the western territories, primarily because it limited economic opportunities for white laborers, who would otherwise have to compete with enslaved workers.
From an economic perspective, he was right. The Southern slave system enriched a small aristocratic elite—roughly 5% of whites—while offering poor whites very limited upward mobility.
The politics of the era were far more complicated than the simplified narrative of a uniformly radical abolitionist North confronting a uniformly pro-secession South. This oversimplification is largely an artifact of neo-Confederate historical revisionism. In reality, the North was deeply racist by modern standards, support for Southern secession was far from universal, and many secession conventions were marked by severe democratic irregularities, including voter intimidation.
The current coalescence of anti-science attitudes and neo-Confederate interpretations of the Civil War is not accidental. Both reflect a willingness to supplant scholarship with narratives that are more “correct” ideologically. This tendency is universal—everyone does it to some degree—but in these cases, it is profoundly anti-intellectual: inconvenient evidence is simply ignored or dismissed. As in the antebellum South, this lack of critical thought is being exploited to entrench an economic elite. It keeps people focused on fears over vaccinations or immigrant labor while policies serving elite interests are quietly enacted.
It's different from humans in that human opinions, expertise, and intelligence are rooted in experience. Good or bad, and inconsistent as it is, that is far, far more stable than AI. If you've ever tried to work on a long-running task with generative AI, the crash in performance as the context rots is very, very noticeable, and it's intrinsic to the technology. Work with a human long enough and you will see the faults in their reasoning, sure, but it's just as good or bad as it was at the beginning.
Correct. This is why I don't like the term "hallucinate". AIs don't experience hallucinations, because they don't experience anything. The problem they have would more correctly be called, in psychological terms, "confabulation" -- they patch up holes in their knowledge by making up plausible-sounding facts.
I have experimented with AI assistance for certain tasks, and I find that generative AI absolutely passes the Turing test for short sessions -- if anything it's too good: too fast, too well-informed. But the longer the session goes, the more the illusion of intelligence evaporates.
This is because, under the hood, what the AI is doing is a bunch of linear algebra. The "model" is a set of matrices, and the "context" is a set of vectors representing your session up to the current point, augmented during each prompt response by results from Internet searches. The problem is that the "context" takes up a lot of expensive high-performance video RAM, and every user only gets so much of it. When you run out of space, the older stuff drops out of the context. This is why credibility drops the longer a session runs: you start with a nice empty context, bring in some Internet search results, run them through the model, and it all makes sense. Once you start throwing out parts of the context, it turns into inconsistent mush.
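To make the eviction effect concrete, here is a minimal, purely illustrative Python sketch. The names and size limit are invented for the example, and real systems budget tokens in a KV cache rather than strings in a queue, but the failure mode is the same:

    # Toy model of a fixed-size context window: once it fills up,
    # the oldest entries silently fall off the front.
    from collections import deque

    CONTEXT_LIMIT = 8  # stand-in for the real token / KV-cache budget

    # A deque with maxlen evicts its oldest item on each append past the limit.
    context = deque(maxlen=CONTEXT_LIMIT)

    # Simulate a long session: facts established in early turns get evicted.
    for turn in range(1, 13):
        context.append(f"turn {turn}: a fact established at this point")

    print(f"{len(context)} of 12 turns survive:")
    for chunk in context:
        print(" ", chunk)
    # Turns 1-4 are gone. Any later answer that depends on them gets
    # patched together from plausible-sounding filler -- the
    # "inconsistent mush" described above.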
It shouldn't really be surprising that the compulsive liar was lying.
My workplace allows browsing outside of the Intranet with Chrome, Firefox, Safari, maybe others... basically anything but Edge.
That is how it's been. Those AI tools were trained on open-source/public-domain content, so any contribution by AI tools must be considered released into the public domain. It does not get simpler than that, and current US copyright law has already indicated that AI-created works are not eligible for copyright.
That's not the question.
The question is whether the AI-produced code is a derivative work of existing code, and that question is still unresolved.
In some cases, the answer is a clear YES, because the code is a direct copy of something written by someone else. If something like that ends up in the kernel, it will have to be removed when someone notices.
What is mind? No matter. What is matter? Never mind. -- Thomas Hewitt Key, 1799-1875