Comment Re:Voters are dumb. (Score 1) 120

The only thing that article says about it is that the local government did a study and concluded that it wasn't an environmental hazard.

Think about it this way: why would a gas turbine mounted on a trailer emit more pollution than a fixed facility? It's burning the same fuel, with the same emissions controls.

Comment Re:Voters are dumb. (Score 0, Flamebait) 120

Mobile generators don't produce any more pollution than regular power plants. It is common for businesses to install mobile generators so that they can operate them right away (I believe the law allows them to operate for 1 year) before they get permanent air permits for them. By the way, xAI does have stationary permits for them now, so even that deeply flawed information is out of date.

It is hilarious to see morons like you vacillate between claiming these companies are irresponsible for using the grid without paying for new generation, and complaining when they do add the grid capacity to cover their use. It couldn't be more obvious that this has triggered a knee-jerk anti-development instinct in your lizard brain and that you have no capacity whatsoever to consider these developments rationally.

All this anti-datacenter nonsense is entirely unfounded in reality, so you are just another one of the stupid voters I am talking about.

Comment Re:Voters are dumb. (Score -1, Troll) 120

Actually, the concerns are not real either. These don't really use all that much power, and they don't use any water or pollute the environment at all to speak of. Adding capacity to the grid to power these data centers should be entirely trivial. It's not, because of other dumb rules that other dumb voters have supported in the past. Any attempt to solve the problem by limiting new development is completely nonsensical.

Comment And complexity (Score 3, Informative) 87

the selection of a 40 year old 6502 application is interesting,

Not even the application, just a 120-byte-long binary patch.

It may, however, help if someone identifies a small, digestible chunk as security-relevant and sets the model about the task of dealing with it.

And that chunk doesn't have any weirdness that requires a seasoned and actually human reverse-engineer.
(Think segmented memory model on anything pre "_64" of the x86 family - the kind of madness that can kill Ghidra).

Also, if it's not from the 8-bit era or the very early 16-bit era, chances are high that this bit of machine code didn't start as hand-written assembler but as some higher-level compiled language (C, most likely). It might be better to run Ghidra on it and have some future chatbot trained on making sense of that decompiled code.

In short, there are thousands of blockers that were carefully avoided by going with that 40-year-old, 120-byte patch of 6502 binary.

Comment Good example of why it's wrong (Score 4, Insightful) 87

But what if you had a similarly loose platform but it's running a kiosk and that kiosk software is purportedly designed to keep the user on acceptable rails.

There is a lot of work being done by that "similarly".

Apple's early computers ran on the 6502.
This was an insanely popular architecture. It was used in metric shit-tons of other hardware from roughly that era, and there are insane amounts of resources about it. It was usually programmed in assembly, and there was a lot of patching of binaries back then. These CPUs have also been used in courses and training for a very long time, most of which are easy to come by. So there's an insane amount of material about 6502 instructions, their binary encoding, and general debugging of software on that platform that could be gobbled up during the training of the model. The architecture is also extremely simple and straightforward, with very little weirdness. It could be possible for something that boils down to a "next word predictor" not to fumble too much.
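To illustrate just how simple and well-documented that encoding is, here's a toy sketch of a disassembler for a handful of real 6502 opcodes (the opcode values and lengths below are the genuine encodings; the helper itself is purely illustrative and handles only these four instructions):

```python
# Toy 6502 disassembler sketch: fixed one-byte opcodes, little-endian
# operands. Only four instructions are handled, enough to show how
# regular and well-documented the encoding is.
OPCODES = {
    0xA9: ("LDA #$%02X", 2),   # LDA immediate: opcode + 1 operand byte
    0x8D: ("STA $%04X", 3),    # STA absolute: opcode + 16-bit address
    0xEA: ("NOP", 1),
    0x60: ("RTS", 1),
}

def disassemble(code):
    out, i = [], 0
    while i < len(code):
        fmt, length = OPCODES[code[i]]
        if length == 1:
            out.append(fmt)
        elif length == 2:
            out.append(fmt % code[i + 1])
        else:  # 3-byte instruction: operand is a little-endian 16-bit address
            out.append(fmt % (code[i + 1] | (code[i + 2] << 8)))
        i += length
    return out

print(disassemble(bytes([0xA9, 0x01, 0x8D, 0x00, 0x02, 0x60])))
# → ['LDA #$01', 'STA $0200', 'RTS']
```

A model trained on the mountains of 6502 tutorials and opcode tables out there has seen this exact mapping thousands of times; there is no such corpus for an obscure proprietary microcontroller.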

Anything developed in the modern online era, where you would actually be interested in finding vulnerabilities, is going to be multiple orders of magnitude more complex (think multiple megabytes of firmware, not a 120-byte patch), rely on much weirder architectures (a kiosk running on some x86 derivative? one of the later embedded architectures with multiple weird addressing modes?), and be very poorly documented.

Also combine this with the fact that we're very far into the "diminishing returns" part of AI development, where each minute improvement requires vastly more resources (insanely large datacenters, the power requirements of entire cities) and more training material than is available (hence "Habsburg AI"?), and it's not going to get better easily.

The fact that a chat bot can find and fix a couple of grammar mistakes in a short paragraph of English doesn't mean it could generate an entire epic poem in some dead language like Etruscan (not Indo-European, not many examples have survived, and even fewer Etruscan-Latin or -Greek bilingual texts survive to assist understanding).
The fact that a chat bot successfully reverse-engineered and debugged a 120-byte snippet for one of the most well-studied architectures doesn't mean it will easily debug the multi-megabyte firmware of some obscure proprietary microcontroller.

Comment HDD lifecycle (Score 2) 97

harddrives have a finite service life

Correct.
For a typical enterprise-grade drive, warranties are in the 5-to-10-year range.
So you can expect them to last at least that many years.

So if the bubble pops in 18 months as was suggested in the post above, those drives will still be under warranty, and will definitely have quite some life left in them.

and those used in cloud providers are recycled not resold or reused.

...by a normally operating company, where there's somebody who will be held accountable for whatever happens with these drives: yes.
(They'll most likely destroy the drives to avoid any hassle regarding confidentiality, then recycle)

BUT, when a company goes bust and everybody gets laid off, nobody will be around to make sure that proper procedures are followed and each drive has a drill shatter its platters. There's no employee left who could be held accountable if the drives "end up in the wrong hands".
Instead, the company's assets will be acquired by some debt collector (who isn't bound to the same data-handling procedures) and liquidated in any way that lets the investors claw some money back.
That's how you end up with proprietary game dev kits on eBay after some studio goes bankrupt.

At best there will be a proper bankruptcy auction, and the company handling the liquidation might even put some nominal effort into erasing the drives before putting them on sale.
At worst, everything will be sold by the kilogram to some scrapper, who'll single out anything valuable, like the drives, to put on eBay. These will probably be sold as-is, with their data left untouched.

The big buyers are AWS, Google and Azure

The big ones, yes.

But currently there's a whole zoo of newer companies that specialize exclusively in building AI datacenters. Those will almost certainly go belly up once the bubble bursts.
Of course the big three will try to salvage whatever is trivial to acquire and reuse in the subsequent fire sale (buy a whole warehouse, including the convenient pallets of easy-to-reuse hardware abandoned in there). But whatever is too cumbersome to buy will go to the scrappers and end up on eBay.

if you expect any of them to go bankrupt anytime in the next decade I have a bridge to sell you.

Side note:
I don't know, but Microsoft seems to be on a spree to enshittify and run into the ground pretty much everything they produce. (See the controversies around Windows 11's unwanted features (Copilot, Recall, etc.), Europe's unease with Microsoft's ability to spy on you and/or cut you off from Office 365 (see: ICC), and similar dumpster fires.)
I don't promise anything, but there's a slight chance they run out of profitable businesses to subsidise all their failing ones at some point in the coming decades.

Comment Also, applications on Linux on ARM.... (Score 1, Insightful) 157

Also, from the article:

Linux just doesn't feel fully ready for ARM yet. A lot of applications still aren't compiled for ARM, so software support ends up being very hit or miss

ROFLMFAOzors.

There have been Linux distros on ARM hardware with a vast selection of software available for ages. At no point in the past half decade of running Linux on ARM-based hardware have I run into "doesn't work on ARM".
Actually, quite the opposite in my experience: the open-source world has been very fast at adopting new arches (e.g. ARM, or more recently RISC-V) and newer extensions: x86_64 was available on Linux distributions almost immediately (thanks, among other things, to experience accumulated in previous ports to UltraSPARC, MIPS64, etc.) at a time when "Windows XP 64-bit" was a very crashy and useless joke, with barely any drivers.

I suppose this person was disappointed that they can't run a bunch of proprietary binaries downloaded off corporate websites? Or that they rely heavily on some containerized software only provided for x86_64? (e.g. some flatpak-ed proprietary software)

Or do they basically just not know their way around Linux in general? (A quick look at the titles on their channel: yup, the presenter openly admits being a Linux newb and only recently started experimenting with Fedora.)
