
Comment And complexity (Score 3, Informative) 87

the selection of a 40 year old 6502 application is interesting,

Not even the application, just a 120-byte-long binary patch.

It may however help if someone identifies a small, digestible chunk as security-relevant and sets it about the task of dealing with it.

And that chunk doesn't have any weirdness that requires a seasoned and actually human reverse-engineer.
(Think segmented memory model on anything pre-"_64" in the x86 family - the kind of madness that can kill Ghidra.)

Also, if it's not from the 8-bit era or the very early 16-bit era, chances are high that this bit of machine code didn't start as hand-written assembly but as some higher-level compiled language (C, most likely). It might be better to run Ghidra on it and have some future chatbot trained on making sense of the decompiled code.

In short, there are thousands of blockers that have been carefully avoided by going with that 40-year-old, 120-byte-long patch of 6502 binary.

Comment Good example of why it's wrong (Score 4, Insightful) 87

But what if you had a similarly loose platform but it's running a kiosk and that kiosk software is purportedly designed to keep the user on acceptable rails.

There's a lot of heavy lifting done by that "similarly".

Apple's early computers ran on the 6502.
This was an insanely popular architecture. It was used in metric shit-tons of other hardware from roughly that era. There are insane amounts of resources about this architecture. It was usually programmed in assembly. There was a lot of patching of binaries back then. These CPUs have also been used in courses and training for a very long time, most of which are easy to come by. So there's an insane amount of material about 6502 instructions, their binary encoding, and general debugging of software on that platform that could be gobbled up by the training of the model. The architecture is also extremely simple and straightforward, with very little weirdness. It could be possible for something that boils down to a "next word predictor" to not fumble too much.
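To give an idea of how regular that encoding is, here's a toy sketch (a hypothetical illustration, nothing from the story) of a 6502 disassembler covering a handful of real opcodes. Each instruction is one opcode byte plus zero to two operand bytes, addresses are little-endian, and there are no prefixes or variable-length weirdness:

```python
# Toy 6502 disassembler sketch. The opcode table is a tiny subset; the real
# 6502 has ~150 documented opcodes, but all follow this same simple pattern.
OPCODES = {
    0xA9: ("LDA", "imm", 1),  # load accumulator, immediate operand
    0x8D: ("STA", "abs", 2),  # store accumulator, absolute address
    0x4C: ("JMP", "abs", 2),  # jump, absolute address
    0xEA: ("NOP", "imp", 0),  # no operation, implied
    0x60: ("RTS", "imp", 0),  # return from subroutine, implied
}

def disassemble(code: bytes) -> list[str]:
    out, pc = [], 0
    while pc < len(code):
        mnem, mode, nops = OPCODES.get(code[pc], ("???", "imp", 0))
        operand = code[pc + 1 : pc + 1 + nops]
        if mode == "imm":
            out.append(f"{mnem} #${operand[0]:02X}")
        elif mode == "abs":
            # 16-bit little-endian address: low byte first
            addr = operand[0] | (operand[1] << 8)
            out.append(f"{mnem} ${addr:04X}")
        else:
            out.append(mnem)
        pc += 1 + nops
    return out

print(disassemble(bytes([0xA9, 0x01, 0x8D, 0x00, 0x02, 0x60])))
# ['LDA #$01', 'STA $0200', 'RTS']
```

That table-lookup simplicity is exactly the kind of pattern a "next word predictor" has a fighting chance with.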

Anything developed in the modern online era, where you would be interested in finding vulnerabilities, is going to be multiple orders of magnitude more complex (think multiple megabytes of firmware, not a 120-byte patch), rely on very weird architectures (a kiosk running on some x86 derivative? one of the later embedded architectures that use multiple weird addressing modes?), and be very poorly documented.

Combine this with the fact that we're very far into the "diminishing returns" part of AI development, where each minute improvement requires vastly more resources (insanely large datacenters, the power requirements of entire cities) and more training material than is available (so, "Habsburg AI"?), and it's not going to get better easily.

The fact that a chatbot can find and fix a couple of grammar mistakes in a short paragraph of English doesn't mean it could generate an entire epic poem in some dead language like Etruscan (not Indo-European, not that many examples have survived, and even fewer Etruscan-Latin or -Greek bilingual texts have survived to assist understanding).
The fact that a chatbot successfully reverse-engineered and debugged a 120-byte snippet for one of the most well-studied architectures ever doesn't mean it will easily debug the multi-megabyte firmware of some obscure proprietary microcontroller.

Comment Re:Confusion (Score 1) 209

..was considered the epitome of processed food.

That could well be the result of a FUD campaign by the trillion-dollar meat industry, which may feel threatened by plausible vegan/vegetarian alternatives to their products, which tend to be the result of practices that are cruel and destructive to the environment.

Also, hot dogs, bacon, jerky and deli meats are now considered carcinogenic. Why aren't those considered to be "the epitome of processed food"?

Comment HDD lifecycle (Score 2) 97

harddrives have a finite service life

Correct.
For typical enterprise-grade drives, warranties are in the 5-to-10-year range.
So you can expect them to last at least that many years.

So if the bubble pops in 18 months, as was suggested in the post above, those drives will still be under warranty, and definitely have quite some life left in them.

and those used in cloud providers are recycled not resold or reused.

...by a normally operating company, where there's somebody who will be held accountable for whatever happens with these drives: yes.
(They'll most likely destroy the drives to avoid any hassle regarding confidentiality, then recycle the remains.)

BUT, when a company goes bust and everybody gets laid off, nobody will be around to make sure that proper procedures are followed and that each drive has a drill shatter its platters. There's no employee left who could be held accountable if the drives "end up in the wrong hands".
Instead, the company's assets will be acquired by some debt collector (who isn't bound by the same data-handling procedures) and liquidated in whatever way allows the investors to claw some money back.
That's how you end up with proprietary game dev kits on e-bay after some studio goes bankrupt.

At best, there will be a proper bankruptcy auction, and the company handling the liquidation might even put some nominal effort into erasing the drives before putting them on sale.
At worst, everything will be sold by the kilogram to some scrapper, who'll single out anything valuable, like the drives, to put on eBay. These will probably be sold as-is, with their data content left untouched.

The big buyers are AWS, Google and Azure

The big ones, yes.

But currently there's a whole zoo of newer companies that specialize exclusively in building AI datacenters. Those will almost certainly go belly-up once the bubble bursts.
Of course the big three will try to salvage whatever is trivial to acquire and re-use in the subsequent firesale (buy a whole warehouse, including the convenient pallets of easy-to-reuse hardware abandoned in there). But whatever is too cumbersome to buy will go to the scrappers and end up on eBay.

if you expect any of them to go bankrupt anytime in the next decade I have a bridge to sell you.

Side note:
I don't know, but Microsoft seems to be on a spree to enshittify and run into the ground pretty much everything they produce. (See the controversies around Windows 11's unwanted features (Copilot, Recall, etc.), Europe's unease with Microsoft's ability to spy on you and/or cut you off from Office 365 (see: ICC), and similar dumpster fires.)
I don't promise anything, but there's a slight chance they run out of profitable businesses to subsidise all their failing ones at some point in the coming decades.

Comment Also, applications on Linux on ARM.... (Score 1, Insightful) 157

Also, from the article:

Linux just doesn't feel fully ready for ARM yet. A lot of applications still aren't compiled for ARM, so software support ends up being very hit or miss

ROFLMFAOzors.

There have been Linux distros on ARM hardware with a vast selection of software available for ages. At no point in the past half-decade of using Linux on ARM-based hardware have I run into "doesn't work on ARM".
Actually, quite the opposite in my experience: the open-source world has been very fast at adopting new arches (e.g. ARM, or more recently RISC-V) and newer extensions: x86_64 was available on Linux distributions almost immediately (thanks, among other things, to experience accumulated in previous ports to UltraSPARC, MIPS64, etc.) at a time when "Windows XP 64-bit" was a very crashy and useless joke with barely any drivers.

I suppose this person was disappointed that they can't grab a bunch of proprietary binaries off corporate websites? Or relies heavily on some containerized software only provided for x86_64? (e.g. some flatpak-ed proprietary software)

Or basically just doesn't know their way around Linux in general? (A quick look at the titles on their channel: yup, the presenter openly admits to being a Linux newb and only recently started experimenting with Fedora.)

Comment Process, Not silicon (AI will make this worse) (Score 2, Interesting) 157

That's because Apple Silicon is really efficient, especially if you take energy consumption into account.

Mostly because Apple used to hog the finest process(*) available at TSMC (e.g. producing their M chips on the "3nm" process while AMD used "5nm" for their flagships), not so much due to some magic design skills.

And BTW, with the current AI bubble, this advantage of Apple's is going to evaporate, as the silicon fabs are now going to prioritize buyers with even more (investors', not income) money to burn on the latest and bestest processes. Within a year or so, you can expect the "really efficient, especially if you take energy consumption into account" title to go to some custom AI chipsets by Nvidia and co (server CPUs, datacenter GPUs and NPUs, racks' ultra-high-bandwidth interconnects, etc.) mass-produced on a crazy scale to fill the "promised" continent-spanning datacenters that the big AI companies announce in their arms race to outcompete each other exaflops-wise; and Apple will slide back to "yet another CPU manufacturer".

---

(*) note: that still partially(**) explains the high price...
(**) note: but Apple still has ginormous margins compared to the cost of parts.

Comment No lying around (Score 1) 67

You're right to be sarcastic:

I'm sure they had 40 billion worth of bitcoin lying around and managed to transfer it to actual other bitcoin accounts without anyone noticing.

Nope, they didn't have 620'000 BTC (more like 50'000 BTC, as mentioned elsewhere in the discussion).

They didn't make an actual transaction on the blockchain giving out 620'000 BTC to some random bitcoin account.

They just accidentally wrote "+620'000 BTC" in the database that manages the exchange (which tracks the internal state of who is selling how much to whom).
So suddenly some user was supposedly in possession of 620'000 BTC on the exchange according to the web interface, even though the number of BTC the exchange holds according to the blockchain ledger never magically went up by that amount.
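In pseudo-code terms (a deliberately simplified, hypothetical sketch, not Bithumb's actual stack), the mistake amounts to something like this:

```python
# Hypothetical, simplified sketch of an exchange's off-chain bookkeeping.
# Crediting a user only touches a database row; the blockchain never sees it.
internal_balances = {"alice": 2.0, "bob": 1.5}  # BTC "owned" per user, off-chain
on_chain_holdings = 50_000.0                    # BTC the exchange's real wallets hold

def credit(user, amount):
    # Note: nothing here cross-checks against on_chain_holdings. That's the bug.
    internal_balances[user] = internal_balances.get(user, 0.0) + amount

credit("some_user", 620_000.0)  # the fat-fingered airdrop
total_owed = sum(internal_balances.values())
print(total_owed > on_chain_holdings)  # True: the exchange now "owes" more than it holds
```

The database happily records a liability the exchange could never pay out on-chain.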

Comment Not on the blockchain (Score 1) 67

That's a lot of bitcoin to possess.

That's merely a big number in some database.

Wait, how does the blockchain even allow you to spend what you don't have?

Because this thing is not technically on the blockchain.

The transactions happening on the blockchain are between your own self-managed wallet and the exchange's infrastructure.
(A banking metaphor: Think you getting cash from your pocket and inserting it in the ATM input slot)

The transactions on the exchange are just internal bookkeeping by the exchange's software stack to keep track of who has how much and owes how much to whom.
(A banking metaphor: when you send money between accounts, e.g., when you use e-banking to pay somebody at the same bank, nobody is moving actual wads of dollar bills and coins between vaults; the bank just updates some numbers in its database, and now it knows you have less and somebody else has more.)

Now this is where the banking metaphor breaks: actual real-world banks are extremely regulated and have to meet high standards to stay licensed as banks, and because of that, great effort is put into making sure that the database is coherent, that the numbers correspond to what is metaphorically in their vault.
Nobody would just magically get "+40 billion bucks" on their account due to a mistake.

Meanwhile, I wouldn't be surprised if some of the code involved here was vibe-coded.
What happened is that the exchange mistakenly wrote "+620'000 BTC" in their database even though they never controlled that much in their actual wallets; there was never that much BTC according to the blockchain ledger.

Enough recipients sought to sell or withdraw bitcoin that the market sank 17%, before Bithumb halted transactions after roughly 30 minutes.

(emphasis mine).

So some people decided to be clever and run away as fast as possible with the money (have it transferred out of the exchange).
Except that even if the exchange's database says these users "possess" 620'000 BTC, the exchange only actually has 50'000 BTC according to the ledger, so this very likely set off some warning about dubious or impossibly high sums being requested for withdrawal, leading the exchange to freeze everything before their actual 50'000 BTC got fleeced.
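A hedged sketch of the kind of sanity check that presumably tripped (all names and thresholds hypothetical): compare cumulative withdrawal requests against the actual on-chain reserves, and halt everything once they would be exceeded:

```python
# Hypothetical withdrawal guard: internal balances may say anything,
# but payouts are capped by what the exchange's wallets really hold.
ON_CHAIN_RESERVES = 50_000.0  # BTC actually controlled, per the ledger

def process_withdrawals(requests):
    """Pay out requests in order; freeze as soon as reserves would be exceeded."""
    paid, frozen = [], False
    outstanding = 0.0
    for user, amount in requests:
        if outstanding + amount > ON_CHAIN_RESERVES:
            frozen = True  # halt everything before the real reserves are fleeced
            break
        outstanding += amount
        paid.append(user)
    return paid, frozen

print(process_withdrawals([("u1", 30_000.0), ("u2", 25_000.0)]))
# (['u1'], True) -- the second request would overdraw the reserves
```

Enough people rushing the exit at once is exactly what makes a check like this fire within minutes.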

Comment GPU cooler (Score 1) 40

The big issues with the GPU is many don't have active cooling, instead expecting a front to back airflow to exhaust fresh air across it.

But out there in the real world, coolers are probably the most often modded thing on graphics cards, so...

So you need some fairly forceful directed air movement. You can rig up things, but it won't be just plug this desktop GPU into the motherboard and connect PCIe power and you're all set.

...so yeah: replacing the cooler with something more fitting for a workstation tower or a compact gaming case is probably going to be extremely common.

At worst, just some new 3D-printed shroud with a couple of Noctuas on it would probably work in a pinch,
but you can bet that AliExpress is going to be filled with cheap custom coolers for the 2-3 different most common PCIe GPU boards found in data centers.

(And of course, the shops that are likely to perform GPU transplants onto new PCIe carrier board will obviously slap some adequate cooler on it).

Comment Memory (Score 2) 40

The memory isn't in the correct form factor for desktop use.

The latest crop of servers indeed uses LPDDR5X, which is incompatible with DDR5 DIMMs and SODIMMs (it's not possible to de-solder the LPDDR chips off recycled mainboards, re-solder them onto new DDR carrier PCBs, and plug those into conventional motherboards' DIMM slots).
So yeah, you won't see the market flooded with rebuilt DIMM and SODIMM sticks.

BUT it's a format also used (soldered) by SBCs, mobile devices (tablets and smartphones), and some laptops.

So if the hardware is cleared out of datacenters "for pennies on the dollar", you could see the market flooded with Raspberry Pi clones, Android devices, gaming consoles, and ultra-thin notebooks that have 32GB or 64GB of (re-)soldered RAM and still come cheap.

--

The latest crop of GPUs in those servers uses HBM3. Yes, this is completely different from the GDDR7 used on gamer hardware (it's PoP on top of the GPU itself instead of being soldered nearby on the card)... but these GPUs are perfectly functional for gaming; they just have a much *H*igher *B*andwidth to the memory.

At minimum, it should be possible to keep the PCIe accelerator board as-is, as long as you plug it into a motherboard with on-board graphics and the corresponding software. (Could be as simple as doing the same Optimus trick that laptops with switchable discrete GPUs have been doing for ages: the repurposed datacenter GPU renders the graphics of the game, and the mainboard runs the compositor that assembles the final picture, which is then output through the mainboard's DisplayPort.)
(DISCLAIMER: I actually do use a second-hand GPU with HBM in my current build, except that mine still has a functioning DisplayPort, so I don't even need to fumble with Linux "PRIME" drivers.)

At worst, boards that use esoteric server connections could be de-soldered and re-soldered onto more conventional PCIe carrier PCBs (there's a whole market in China rebuilding new PCIe cards, with quantities of RAM adapted for machine-learning research, out of non-banned regular gaming donor cards; transplanting GPUs is a well-mastered skill).
And if the GPUs are really weird (not exposing PCIe pins on the BGA pinout, and/or PCIe support being fused off in the GPU), NVLink-to-PCIe-6 bridge chips are very likely to pop up (see the current market for dead-cheap PCIe switches), making GPUs transplanted onto classic PCIe cards still viable.
