
Comment Re:500 miles? (Score 1) 136

Heh, back when I was a wee college freshman in the before times, one of my first jobs was helping a delivery company implement a new trial computer logging system for their trucks. Cadec. It had a steel computer terminal mounted on the dash with an LED display, a numeric keypad, and a slot for a large steel cartridge that contained the memory for the log. The truckers took the cartridges in the morning and handed them back in at night. I then downloaded the data to the, ONE, PC the delivery company had for logging and printing the nightly reports. That lasted just under 10 months, until 80% of the systems had broken the speedometers/tachometers in the trucks. Because they spliced the analog sensors into the cables... by design.

Comment FlashAttention (Score 2) 43

I did some math the other day on running local AI models and the net result is most homes can't afford to run the current median models.

They don't just need 80GB of VRAM, they need newer GPU architectures - ones still supported by CUDA, still supported by PyTorch, etc.

These problems may well be solvable with more clever use of hardware, MoE, acceptable quantization, etc., but today you're in for several grand and something north of 100W idle to use what is effectively a $20/mo plan.
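The "several grand plus 100W idle vs. a $20/mo plan" claim can be sketched as back-of-envelope arithmetic. The electricity rate and three-year amortization below are my own illustrative assumptions, not numbers from the comment:

```python
# Rough local-AI-box cost sketch. Only the $3000 hardware figure
# ("several grand"), the 100W idle draw, and the $20/mo subscription
# come from the comment; the rest are assumed for illustration.
hardware_cost = 3000.0       # up-front, "several grand"
idle_watts = 100.0           # ">100W idle"
electricity_rate = 0.15      # assumed $/kWh
amortization_years = 3       # assumed useful life

idle_kwh_per_month = idle_watts / 1000 * 24 * 30   # ~72 kWh/mo at idle
power_per_month = idle_kwh_per_month * electricity_rate
hardware_per_month = hardware_cost / (amortization_years * 12)

print(f"power:    ${power_per_month:.2f}/mo")      # ~$10.80/mo
print(f"hardware: ${hardware_per_month:.2f}/mo")   # ~$83.33/mo
print(f"total:    ${power_per_month + hardware_per_month:.2f}/mo vs. $20/mo plan")
```

Even ignoring load power entirely, the amortized hardware alone runs several times the subscription price under these assumptions, which is the point of the exercise.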

A small enterprise can afford local, so that's good. We paid more than that for one SGI machine back in the day.

The point of the exercise was to plot the position on the curve. We're at something like 2006 YouTube, where nobody could afford the drives or bandwidth that YouTube/Google was giving away for free (aka with VC money). Eventually hard drives got cheaper, people got gigabit at home, Flash video was replaced with h.264/HTML5, phones could stabilize video locally, etc.

So it looks like these AI companies need to stay alive for about seven more years giving away product at a loss (or at least heavily oversubscribed) before they can turn a profit. Hence the low token allowance, the banning of OpenClaw, etc.

On the other hand, I read the blog of a security researcher yesterday who found an exploit with (IIRC) Claude and tried to refine the PoC, but got dinged with "out of tokens" before he could finalize it. So he just deleted the work and moved on.

It sounds like they're trying to not lose money at such a velocity and are trying to find a sweet spot where people don't just declare it too underpowered to use.

A global energy depression may well take out the supermajority of the companies that believe they can burn investment money for seven more years. There is circular financing money, then there is real return on capital money. One is to fool the markets, the other is grounded in current physics.

Comment If only it were _for_ the neighborhood (Score 1) 126

If the data center is primarily intended for use by the people in the neighborhood (exclusively or nearly so), sure, it could make sense. I know this is quaint and out-of-date, but one can imagine a neighborhood Squid cache, NNTP server, modern Netflix cache, etc. Make it reachable over a high-speed neighborhood LAN, to share the 'hood's WAN.

Just a classic neighborhood network coop, but with some added caching services, which is what would cause it to be called a "datacenter" instead of a "router." ;-)

As if that would really happen. And that's sure not what this is.

Comment Re:training may be legal (Score 3, Insightful) 74

One might imagine that buying a million books would get a buyer a very good discount from the publisher.

They could have paid $5 or whatever per book they trained on - $15M or so for 3 million books. They could totally afford that, but "why pay when you can steal?"

Then at least they would be defending fair use rather than defending a 'theft' lawsuit.

If we had Constitutional Copyright, they would at least have millions of 14-year-old books to train on. That would be quite sufficient to train and refine models.

Comment The usual question: what did they do? (Score 1) 45

Once again, I'm not shocked by the percentage laid off, but I'm shocked by the number of individuals. If 700 people was 14% of their workforce, then this company had about a hundred times as many employees as I would have guessed. Not that my guesses are particularly well-informed, but when I look at what this company's product appears to be and compare it to my own experiences, I can't help but make guesses that are apparently 99% off! (I'm that dumb!?)

What do employees at these large companies do all day? Why were they hired in the first place, or why weren't they laid off many years ago? I just don't get it.

I don't mean it as a put-down of their products, but on the surface it just doesn't look like their thousands of employees do anything bigger or more complicated than my dozen-developers-sized team (which is, itself, much larger than the teams I've been on in previous decades). Is everyone's productivity just... eaten up by labor-not-scaling problems? Do I need to really read The Mythical Man-Month instead of treating it as distant folklore that I'll some day get to?

Or is the answer in some other direction? Part of me thinks I should just drop it, and accept that I really don't know jack shit about the profession I've had for the last 40 years.

Comment Before I condemn it... (Score 1) 182

I can't really say it's bad for it to be doing these seemingly-bad things, until I know the answer to this: what is the app's intended purpose? Why would/should a person use it?

If it's intended to inconvenience/expose/punish users for trying to find out things about the White House, then maybe the application is doing the right thing.
