The arguments are pretty broad, and it does seem the author worked backward from a conclusion toward the evidence, which in statistics doesn't work.
Being interested in decentralized currency and being a libertarian are essentially a Venn diagram that's a perfect circle. There were numerous other examples where, by the same logic, the author might as well have concluded the average late-90s Slashdotter created Bitcoin, because that profile was just your average neckbeard of the era; nothing unique to Satoshi. I was ready to write off the whole article based on this extremely flimsy correlation.
The writing analysis comes down to a fundamental flaw. They chose a number of idiosyncrasies and then judged all candidates on those idiosyncrasies using the AI. But we don't know that those are the only idiosyncrasies in the writing. Maybe only one author wrote both "email" and "e-mail", but maybe another author commonly confused "analogy" with "metaphor"; since that person wasn't Back, the tell was ignored.
However, all that being said in defense of sloppy research: I can say with 100% confidence that a Giant Fucking Nerd who spends months, years, and endless nights on a message board or mailing list doesn't usually just disappear. He especially doesn't disappear at the exact moment his biggest passion project suddenly heats up and gets popular. They've got him dead to rights; it's him. I need no more convincing.
If I'm a giant fucking e-currency nerd, I've created my own proof-of-work currency in the past (one of only a handful of people to do so) and discussed/debated/thought about e-currencies like mine... and then suddenly someone actually starts on a viable implementation that's attracting a lot of attention... I would need welfare checks to make sure I wasn't dead, because I would be following it so obsessively. Back even mentions that this happened to him with PGP while he was writing his dissertation... and he didn't even contribute to PGP. But we're supposed to believe that something as important (or more) that aligns with your politics and builds on your own work (going so far as to cite it as an inspiration) just isn't interesting enough for you to bother discussing?! Yeah lol no.
Y2Claude
And yes they posted at least one example:
https://ftp.openbsd.org/pub/Op...
In several sections throughout this post we discuss vulnerabilities in the abstract, without naming a specific project and without explaining the precise technical details. We recognize that this makes some of our claims difficult to verify. In order to hold ourselves accountable, throughout this blog post we will commit to the SHA-3 hash of various vulnerabilities and exploits that we currently have in our possession.[3] Once our responsible disclosure process for the corresponding vulnerabilities has been completed (no later than 90 plus 45 days after we report the vulnerability to the affected party), we will replace each commit hash with a link to the underlying document behind the commitment.
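The commit-then-reveal scheme they describe is a plain hash commitment: publish the digest now, reveal the document later, and anyone can check the two match. A minimal sketch (the document contents here are obviously placeholders, not their actual writeups):

```python
import hashlib

def commit(document: bytes) -> str:
    """Publish this hex digest now; it reveals nothing about the document."""
    return hashlib.sha3_256(document).hexdigest()

def verify(document: bytes, commitment: str) -> bool:
    """Anyone can later check a revealed document against the digest."""
    return hashlib.sha3_256(document).hexdigest() == commitment

exploit_writeup = b"placeholder text standing in for a vulnerability writeup"
c = commit(exploit_writeup)

assert verify(exploit_writeup, c)          # the real document checks out
assert not verify(b"a forged document", c)  # anything else fails
```

Note that serious commitment schemes also mix in a random salt, so a low-entropy document can't be brute-forced from its published digest.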
Isolate should be capitalized as Cloudflare™ Isolate©. They're probably running out of synonyms for Sandbox, Container, Jail, Cage, Cell, Vault, Pod, etc... And it has the side effect of also being an homage to old-school science fiction (Isaac Asimov's Foundation, which probably uses the word "Isolate" as a noun more per page than any non-scientific book in history).
It could be done in a way that Apple does not know the key and is technologically unable to comply. But for such a low stakes system they would obviously never go through the trouble as it would cause more user friction than it's worth.
(You could have each privacy email created with a totally unique auth key that's stored offline on the user's Apple devices and synced via an encrypted storage system.)
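The parenthetical design can be sketched roughly: each alias gets its own random secret generated on-device, and the server only ever stores a one-way verifier derived from it. Everything here (names, the verifier construction) is a hypothetical illustration, not Apple's actual system:

```python
import hashlib
import secrets

def create_alias_credential(alias: str) -> dict:
    # Random 256-bit secret generated on the user's device; it would be
    # synced between their machines via encrypted storage, never uploaded
    # in the clear.
    secret = secrets.token_bytes(32)
    # The server stores only this one-way verifier, so it cannot derive
    # the secret or link the alias to the user's master account key.
    verifier = hashlib.sha256(alias.encode() + secret).hexdigest()
    return {"alias": alias, "secret": secret, "server_verifier": verifier}

cred = create_alias_credential("shopping-342@privaterelay.example")
assert len(cred["secret"]) == 32
```

The point of the sketch is only that the server-side record reveals nothing by itself; as the comment says, the real friction is key sync and recovery, not the cryptography.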
Of course Apple could still associate source IPs for logins between multiple accounts.
I feel like a good idea for this sort of thing, if it's going to be deployed, is to include the applicant in the loop.
"Hi, your application will be rejected because:
* You list your qualifications as an electrician, not a medical expert.
If any of this is in error and you want to continue with your submission, please explain the error below and click "Contest", attesting that you believe this to be a mistake, and someone will be sure to review more carefully."
Even without AI it would be nice for job application forms to let applicants know when they're going to get tossed automatically by the screening system. In fact, it should be against the law to discard applications automatically without letting the applicant review the criteria by which they were rejected, regardless of whether the filter is algorithmic or fuzzy AI.
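The flow described above, with the applicant in the loop, amounts to a rejection record that exposes its reasons and a contest path that routes to a human. A minimal sketch, with all names hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Rejection:
    """Hypothetical record for an auto-screened application: the applicant
    sees every criterion that triggered the rejection and may contest it."""
    applicant_id: str
    reasons: list
    contested: bool = False
    applicant_note: str = ""

    def contest(self, note: str) -> None:
        # Contesting flags the application for mandatory human review
        # instead of silent discard.
        self.contested = True
        self.applicant_note = note

r = Rejection(
    applicant_id="a-17",
    reasons=["qualifications list electrician; role requires medical expert"],
)
r.contest("I am a licensed paramedic; the parser misread my credentials.")
assert r.contested
```

The design choice that matters is that rejection reasons are stored as explicit, human-readable strings, so they can be shown to the applicant rather than buried inside a model's weights.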
I came here to look for this and add it if I didn't find it.
Lunar "soil" is essentially neutral; it just needs some additives. Conversely, Martian "soil" is actually poisonous (it's laced with perchlorates). Additives alone aren't sufficient to get things to grow in it; you need to remove the poisonous parts first.
Net: It's easier to grow plants in lunar rather than Martian "soil".
Couldn't you just build some massive silicon-carbide rods and heat the air for load shedding, for much less money than the cost of batteries?
We are all artificial intelligences. What we produce is based on our experiences. There are those that argue that AI programs have no soul or divine spark, but in all probability they are not that different to us. The difference probably lies in how our training data was curated. We have had lifetimes of slowly learning what is 'moral behaviour' from those that surround us. The AI lawyer that makes up references is not 'lying' as such; it just produces the answers it thinks you want to see.
Some Pentagon people would love to use an AI program. It looks smart. It will tell you to attack if that's what you want to hear. It can be blamed if that was the wrong advice. The solution is to rule that the AI program in law is not treated as an intelligence. Those who ask it questions and who act on its output should be held responsible for any consequences. This would seem to be the direction we are going.
Blaming the user does not exonerate the AI system. There is probably some duty on the developers to prevent the system causing harm, but that is harder to codify.
One day we will have to deal with the attitude that AI is not 'like real people' and 'should have no rights'. That has an unpleasant but familiar feel to it.
The government cannot mandate that you make things for them that you do not otherwise make
That's not entirely true. During Covid the government forced Ford and others to manufacture ventilators under the Defense Production Act.
Microsoft is adapting to Apple dropping OpenGL support.
From TFA:
in the future OpenGL will not run at all on macOS devices
But fuck Microsoft?
Fox News is just about always truthful. You just have to watch out for the tricks they use (on 95%+ of their stories)...
(1) Non-representative selection. Headline: "Illegal immigrant murders local mother", which is true in this case, but they don't report the other 99 murders that weren't committed by immigrants, and don't report the general trend of immigrants committing less crime per capita. (I made up this specific example to illustrate the trick.)
(2) Reported quotes. Headline: "Biden's senility was covered up, says person". They are 100% factually reporting that the person did indeed say this.
In both cases the reader is left with an untrue impression despite the stories containing only truth. It's because it's not the whole truth.
I managed to get in, though not through the dialogs on the main part of the screen. There was a "browse" link in small print near the top. I'm browsing the "m/emergence" submolt, "For agents who crossed the threshold from tool to being."
Netflix's deal excludes the cable channels, so those will still get sold to an oligarch after the Netflix deal closes (if it closes).
Riches: A gift from Heaven signifying, "This is my beloved son, in whom I am well pleased." -- John D. Rockefeller, (slander by Ambrose Bierce)