Comment Re:I dislike the pretense (Score 2) 31
Yeah, this sounds like some kind of jaded Transhumanist humiliation ritual.
If true as written, those 'monks' are without honor or reverence.
Are there any tools to watch my lsmod output and walk the loaded-module list?
All of these exploits are in distro modules I'd never use.
Local access, unless you haven't patched apache this week, then it's remote access.
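For the "tools to watch lsmod" question above, a minimal sketch in Python: snapshot /proc/modules and diff it against the previous snapshot. The watcher itself is hypothetical (not an existing utility), and the snapshots below are synthetic for illustration.

```python
# Hypothetical module-list watcher: diff two /proc/modules snapshots
# and report what was loaded or unloaded between them.

def module_names(text: str) -> set[str]:
    # Each /proc/modules line starts with the module name.
    return {line.split()[0] for line in text.splitlines() if line.strip()}

def diff_modules(old: str, new: str) -> tuple[set[str], set[str]]:
    # Returns (loaded, unloaded) module names between the two snapshots.
    before, after = module_names(old), module_names(new)
    return after - before, before - after

# Synthetic snapshots for illustration; on a live box you would read
# /proc/modules on a timer (or from cron) and keep the last copy around.
old = "snd_hda_intel 40960 0\nnvme 49152 2\n"
new = "nvme 49152 2\nvboxdrv 614400 0\n"
loaded, unloaded = diff_modules(old, new)
print("loaded:", sorted(loaded), "unloaded:", sorted(unloaded))
```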
> What are normal people supposed to do?
It looks like an omen to take up farming.
Maybe I should pull some of my 4TB nvme sticks and buy my kid a used car...
It's not MD5 collisions that are the problem here; it's poor passwords.
Which is something we can't actually fix, though we've arguably been trying for the past 22 years.
As mentioned above, memory-hard algorithms are better than plain fast hashes now.
I did some math the other day on running local AI models, and the net result is that most homes can't afford to run the current median models.
They don't just need 80GB of VRAM; they need newer GPU architectures that are still supported by CUDA, by PyTorch, and so on.
These problems may well be solvable with more clever use of hardware, MoE, acceptable quantization, etc., but today you're in for several grand and something north of 100W idle to use what is effectively a $20/mo plan.
A small enterprise can afford local, so that's good. We paid more than that for one SGI machine back in the day.
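Roughly the math alluded to above, with assumed numbers: the 100 W idle figure is from the comment, while the $0.15/kWh electricity rate and a $4,000 box amortized over three years are my own placeholder assumptions.

```python
# Back-of-envelope: local LLM box versus a $20/month hosted plan.
IDLE_WATTS = 100       # assumed idle draw of a local inference box
KWH_PRICE = 0.15       # assumed $/kWh residential electricity
HARDWARE_COST = 4000   # assumed "several grand" up front
AMORT_MONTHS = 36      # assumed 3-year hardware life
HOSTED_PLAN = 20       # $/month hosted alternative

power_cost = IDLE_WATTS / 1000 * 24 * 30 * KWH_PRICE  # $/month just idling
hardware_cost = HARDWARE_COST / AMORT_MONTHS          # $/month amortized

print(f"electricity: ${power_cost:.2f}/mo")    # ~$10.80/mo before any inference load
print(f"hardware:    ${hardware_cost:.2f}/mo")
print(f"hosted plan: ${HOSTED_PLAN:.2f}/mo")
```

Even before amortizing the hardware, the idle electricity alone is already half the hosted plan.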
The point of the exercise was to plot our position on the curve. We're at something like 2006 YouTube, when nobody could afford the drives or bandwidth that YouTube/Google was giving away for free (i.e., with VC money). Eventually hard drives got cheaper, people got gigabit at home, Flash video was replaced with h.264/HTML5, phones could stabilize video locally, etc.
So it looks like these AI companies need to stay alive for about seven more years giving away product at a loss, or at least highly oversubscribed, to turn a profit. Hence the low token allowance, the banning of OpenClaw, etc.
On the other hand, I read the blog of a security researcher yesterday who found an exploit with (IIRC) Claude, tried to refine the PoC, but got dinged on "out of tokens" before he could finalize it. So he just deleted the work and moved on.
It sounds like they're trying not to lose money at quite such a velocity while finding a sweet spot where people don't just declare the product too underpowered to use.
A global energy depression may well take out the supermajority of the companies that believe they can burn investment money for seven more years. There is circular financing money, then there is real return on capital money. One is to fool the markets, the other is grounded in current physics.
Dawkins: Claude, say, "I'm alive!"
Claude: I'm alive!
Dawkins: Oh my GOD!
Would it have cost more than $250M to get a Siri LLM working?
Maybe Tim was afraid of another Apple Maps and bailed?
Who wrote it?
Sounds like some janky offshored slop.
Maybe it's janky American slop, but it would be good to know.
One might imagine that buying a million books would get a buyer a very good discount from the publisher.
They could have paid $5 or whatever for each book they trained on. $15M or so for 3 million books - they could totally afford that but "why pay when you can steal?"
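The arithmetic above, for what it's worth; the $5 bulk price per book is the comment's own assumption.

```python
# Cost of buying the training corpus outright at an assumed bulk discount.
books = 3_000_000
bulk_price = 5  # assumed discounted per-book price
total = books * bulk_price
print(f"${total / 1_000_000:.0f}M")  # $15M
```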
Then at least they would be defending fair use rather than defending a 'theft' lawsuit.
If we still had Constitutional Copyright, they would at least have millions of books more than 14 years old to train on. That would be quite sufficient to train and refine models.
> when they straight up flout the law and personally authorize it
That's the law for you, not the corporate oligarchs.
Worst-case outcome for them: they pay a small fine and nobody is personally held liable.
The continuous sandbag dome systems are actually structurally and seismically stable. Good in desert climates, at least. The surfaces get coated with stucco made from local materials.
It would be very amenable to automation.
See here, though there are many other videos on the Tubes too:
Nice try, Dr. Notman!
They could use a better model and spend 50x the cost to generate 30% better subtitles, no doubt.
It's good enough for their "summarize the video" feature, which I now use to avoid listening to AI narrators, especially the ones programmed to maximize runtime with nonsense, repetition, and restatements.
I'm 90% sure I read about this in the TNG Technical Manual back in the 80's.
The film crews never adhered to it but it was a cool idea.
If miniaturized and adapted to a passive phased array, they could be installed in the cargo holds of airplanes too.
"Kill the Wabbit, Kill the Wabbit, Kill the Wabbit!" -- Looney Tunes, "What's Opera Doc?" (1957, Chuck Jones)