Comment Re:If only they didn't burn so much fossil fuels (Score 1) 17
> It did happened before, but not on this scale and speed.
Check out Meltwater Pulses 1a and 1b.
In other news, the Catholic Church is suing OpenAI because it had the idea of simply making suicide illegal a thousand years ago and has been using it ever since.
Not sure if it's a trade secret or a copyright case; news reports often don't mention the fine details.
That's a good point. Here on
> It used to be my go-to site for all things computer related.
Me too.
They were slightly cheaper than Amazon for the same products. Then I did a big project that got slightly downsized, and I wound up with $400 in "restocking fees" for a couple of pieces of factory-hologram-tape-sealed network gear, after paying $100 in return shipping.
Learned my lesson real fast.
I expect several consequences of this, including:
1. Model collapse. Training LLMs on the output of other LLMs has been shown to lower the model's quality, and it gets worse with each iteration. So, the Internet has become less valuable as an LLM training data source, and this trend will continue, making it more difficult to train new models or improve existing ones.
2. Increased demand for guaranteed human-generated content. This is both from competition between LLM training businesses who need original sources to use as training data, AND from humans who want or need something that is not hallucination-polluted slop.
3. Increased incidence of humans submitting LLM-generated slop AS human-generated content. We have already seen this happening in every place you might expect, often with comical effect when people are caught red-handed lying about it.
4. The bursting of the LLM bubble. Recently, experts in the field have said that current training methods have already hit "peak AI" even with good data sources. The landscape continues to change rapidly, so I don't know if that is true, but it is at least plausible given what is known. An overall decrease in the availability of high-quality training data will only make this worse. The ensuing stagnation in LLM improvement will flatten out the demand curve for LLM services in general.
5. Profit! Especially for everyone who managed to eliminate a lot of human employees thanks to LLMs.
The movie analogy is old and outdated.
I'd compare it to a computer game. In any open-world game, it seems that there are people living their lives - going to work, doing chores, going home, etc. - but it's a carefully crafted illusion. "Carefully crafted" insofar as the developers put exactly as much into the game as is needed to suspend your disbelief and let you think, at least while playing, that these are real people. But behind the facade, they are not. They simply disappear when entering their homes; they have no actual desires, just a few numbers and conditional statements that switch between pre-programmed behaviour patterns.
If done well, it can be a very, very convincing illusion. I'm sure that someone who hasn't seen a computer game before might think that they are actual people, but anyone with a bit of background knowledge knows they are not.
For AI, most people simply don't (yet?) have that bit of background knowledge.
And yet, when asked if the world is flat, they correctly say that it's not.
Despite hundreds of flat-earthers who are quite active online.
And it doesn't even budge on the point if you argue with it. So for whatever it's worth, it has learned more from scraping the Internet than at least some humans.
It's almost as if we shouldn't have included "intelligence" in the actual fucking name.
We didn't. The media and the PR departments did. In the tech and academia worlds that seriously work with it, the terms are LLMs, machine learning, etc. - the actual terms describing what the thing does. "AI" is the marketing term used by marketing people. You know, the people who professionally lie about everything in order to sell things.
> professions that most certainly require a lot of critical thinking. While I would say that that is ludicrous
It is not just ludicrous, it is irrationally dangerous.
For any (current) LLM, whenever you interact with one, remember this rule of thumb (not my invention; I read it somewhere and agree): the LLM was trained to generate "expected output". So assume that every prompt implicitly starts with "give me the answer you think I want to read to the following question".
Giving an EXPECTED answer instead of the most likely to be true answer is literally life-threatening in a medical context.
Them: Why don't you act your age?
Me: Well, I've never been this age before.
Like buying booze, renting a car, purchasing a handgun, buying a lottery ticket, getting a tattoo?
(some of these vary by state)
I don't see how you're too immature to order a Chianti with your steak dinner but you're mature enough to go $200K in debt based on a sales pitch of returns after investment.
These aren't even reasonable equivalents from a neuroscience perspective.
A read is supposed to be fine. At read time the firmware *should* rewrite the cell if the read is weak.
The firmware also *should* go out and patrol the cells when idle and it has power.
you can dd if=/dev/sdX of=/dev/null bs=2M once a year if your firmware behaves.
If your drive is offline you could
dd if=/dev/sdX of=/dev/sdX bs=2M iflag=fullblock conv=sync,noerror status=progress
to be sure, though write endurance is finite.
If you're running zfs you can 'zpool scrub poolname' to force validation of all the written data. This is most helpful when you can't trust the firmware to not be buggy crap. Which only applies to 90% of drive firmware out there.
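Put together, a hands-off yearly refresh pass might look like this (a sketch, not a tested script; the device names and the pool name `tank` are assumptions for illustration, and it needs root):

```shell
#!/bin/sh
# Read every block of each drive so the firmware gets a chance
# to notice weak cells and rewrite them (assumes the firmware behaves).
for dev in /dev/sda /dev/sdb; do
    dd if="$dev" of=/dev/null bs=2M status=progress
done

# If the data lives on zfs, also scrub: this verifies every written
# block against its checksum, independent of the drive firmware.
zpool scrub tank
```

The dd pass only exercises reads; the scrub is the part that actually catches silent corruption, since zfs compares data against stored checksums rather than trusting the drive.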
Well 0-clicks and OTA attacks but yeah, to your point, device compromise lets you use apps on the device.
News at 11!
> I have found that streaming directly to my Plex home server over TLS is generally smoother without going through Wireguard. Not quite sure why.
I recently had to solve this.
Wireguard should work over a regular 1500-byte-MTU connection at 1440 or 1420 bytes (the default) --- however --- if your ISP routes your IPv4 as 4-in-6 internally (like my major cable company), everything goes to hell.
Try dropping your wg MTU to 1360, MSS at 1320, and set up a mangle table to clamp MSS to PMTU (e.g. iptables rule).
I got a 10x bump in TLS over wireguard throughput.
Total pain in the ass and lightly documented.
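Concretely, the knobs above translate to something like this (a sketch; the interface name `wg0` and the exact values come from my setup and your ISP's encapsulation overhead may differ):

```shell
# Shrink the tunnel MTU to leave headroom for the 4-in-6 encapsulation.
ip link set dev wg0 mtu 1360

# Clamp TCP MSS on SYN packets leaving through the tunnel so both
# endpoints negotiate segments that fit the real path MTU.
iptables -t mangle -A FORWARD -o wg0 -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --clamp-mss-to-pmtu
```

If clamp-to-PMTU doesn't behave (some paths hide the true MTU), replace `--clamp-mss-to-pmtu` with an explicit `--set-mss 1320`.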
And no repayment to all the renters who have been gouged for years due to this price fixing, either.
This is clearly another case of too many mad scientists, and not enough hunchbacks.