
Comment And complexity (Score 2) 81

the selection of a 40 year old 6502 application is interesting,

Not even the application, just a 120 byte-long binary patch.

It may however help if someone identifies a small, digestible chunk as security-relevant and sets it about the task of dealing with it.

And that chunk doesn't have any weirdness that requires a seasoned and actually human reverse-engineer.
(Think segmented memory model on anything pre-"_64" in the x86 family - the kind of madness that can kill Ghidra.)

Also, if it's not from the 8-bit era or the very early 16-bit era, chances are high that this bit of machine code didn't start as hand-written assembler but as some higher-level compiled language (C, most likely). It might be better to run Ghidra on it and have some future chatbot trained on making sense of that decompiled code.

In short, there are many thousands of blockers that have been carefully avoided by going with that 40-year-old, 120-byte-long patch of 6502 binary.

Comment Good example of why it's wrong (Score 3, Insightful) 81

But what if you had a similarly loose platform but it's running a kiosk and that kiosk software is purportedly designed to keep the user on acceptable rails.

There is a lot of heavy lifting done by that "similarly".

Apple's early computers ran on the 6502.
This was an insanely popular architecture. It's been used in metric shit-tons of other hardware from roughly that era. There are insane amounts of resources about this architecture. It was usually programmed in assembly. There was a lot of patching of binaries back then. These CPUs have also been used in courses and training material for a very long time, most of which is easy to come by. So there's an insane amount of material about 6502 instructions, their binary encoding, and general debugging of software on that platform that could be gobbled up by the training of the model. The architecture is also extremely simple and straightforward, with very little weirdness. It could be possible for something that boils down to a "next word predictor" to not fumble too much.
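As an illustration of how simple and regular that binary encoding is, here's a minimal sketch of a disassembler for a handful of common opcodes (my own toy example; the opcode values come from the standard, widely published 6502 instruction tables):

```python
# Toy 6502 disassembler for a few opcodes, to illustrate how simple and
# regular the encoding is. Multi-byte operands are little-endian, as on
# the real chip.

OPCODES = {
    0xA9: ("LDA #${:02X}", 1),   # LDA immediate
    0x8D: ("STA ${:04X}", 2),    # STA absolute
    0x4C: ("JMP ${:04X}", 2),    # JMP absolute
    0xEA: ("NOP", 0),
    0x60: ("RTS", 0),
}

def disassemble(code: bytes) -> list:
    out, i = [], 0
    while i < len(code):
        fmt, nops = OPCODES[code[i]]
        if nops == 0:
            out.append(fmt)
        elif nops == 1:
            out.append(fmt.format(code[i + 1]))
        else:  # two-byte operand, low byte first
            out.append(fmt.format(code[i + 1] | (code[i + 2] << 8)))
        i += 1 + nops
    return out
```

A 120-byte patch is at most a few dozen such instructions - the kind of thing a model has seen documented thousands of times over.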

Anything developed in the modern online era, where you would be interested in finding vulnerabilities, is going to be multiple orders of magnitude more complex (think multiple megabytes of firmware, not a 120-byte patch), rely on very weird architectures (a kiosk running on some x86 derivative? one of the later embedded architectures that use multiple weird addressing modes?), and be very poorly documented.

Combine this with the fact that we're very far into the "diminishing returns" part of AI development, where each minute improvement requires vastly more resources (insanely large datacenters, the power requirements of entire cities) and more training material than is available (so "Habsburg AI"?), and it's not going to get better easily.

The fact that a chat bot can find and fix a couple of grammar mistakes in a short paragraph of English doesn't mean it could generate an entire epic poem in some dead language like Etruscan (not Indo-European, not that many examples have survived, and even fewer Etruscan-Latin or -Greek bilingual texts have survived to assist understanding).
The fact that a chat bot successfully reverse-engineered and debugged a 120-byte snippet for one of the most well-studied architectures doesn't mean it will easily debug the multi-megabyte firmware of some obscure proprietary microcontroller.

Comment HDD lifecycle (Score 2) 97

harddrives have a finite service life

Correct.
For a typical enterprise-grade drive, warranties are in the 5-to-10-year range.
So you can expect them to last at least that many years.

So if the bubble pops in 18 months, as was suggested in the post above, those drives will still be under warranty, and definitely have quite some life left in them.

and those used in cloud providers are recycled not resold or reused.

...by a normally operating company, where there's somebody who will be held accountable for whatever happens to these drives: yes.
(They'll most likely destroy the drives to avoid any hassle regarding confidentiality, then recycle)

BUT when a company goes bust and everybody gets laid off, nobody will be around to make sure proper procedures are followed and each drive has a drill shatter its platters. There's no employee left who could be held accountable if the drives "end up in the wrong hands".
Instead, the company's assets will be acquired by some debt collector (who isn't bound by the same data-handling procedures) and liquidated in any way that allows the investors to claw some money back.
That's how you end up with proprietary game dev kits on eBay after some studio goes bankrupt.

At best there will be a proper bankruptcy auction, and the company handling the liquidation might even put some nominal effort into erasing the drives before putting them up for sale.
At worst, everything will be sold by the kilogram to some scrapper, who'll find and single out any valuables, like the drives, to put on eBay. These will probably be sold as-is, with their data content left untouched.

The big buyers are AWS, Google and Azure

The big ones, yes.

But currently there's a whole zoo of newer companies that specialize exclusively in building AI datacenters. Those will almost certainly go belly-up once the bubble bursts.
Of course the big three will try to salvage whatever is trivial to acquire and reuse in the subsequent fire sale (buy a whole warehouse, including the convenient pallets of easy-to-reuse hardware abandoned in there). But whatever is too cumbersome to buy will go to the scrappers and end up on eBay.

if you expect any of them to go bankrupt anytime in the next decade I have a bridge to sell you.

Side note:
I don't know, but Microsoft seems to be on a spree to enshittify and run into the ground pretty much everything they produce. (See the controversies around Windows 11's unwanted features (Copilot, Recall, etc.), Europe's unease with Microsoft's ability to spy on you and/or cut you off from Office 365 (see: the ICC), and similar dumpster fires.)
I don't promise anything, but there's a slight chance they run out of profitable businesses to subsidise all their failing ones at some point in the coming decades.

Comment Also, applications on Linux on ARM.... (Score 1, Insightful) 157

Also, from the article:

Linux just doesn't feel fully ready for ARM yet. A lot of applications still aren't compiled for ARM, so software support ends up being very hit or miss

ROFLMFAOzors.

There have been Linux distros on ARM hardware with a vast selection of software available for ages. At no point in the past half decade of using Linux on ARM-based hardware have I run into "doesn't work on ARM".
Actually, quite the opposite in my experience: the open-source world has been very fast at adopting new arches (e.g., ARM, or more recently RISC-V) and newer extensions. x86_64 was available on Linux distributions almost immediately (among other things, thanks to the experience accumulated in previous ports to UltraSPARC, MIPS64, etc.), at a time when "Windows XP 64-bit" was a very crashy and useless joke with barely any drivers.

I suppose this person was disappointed that they couldn't run a bunch of proprietary binaries downloaded off corporate websites? Or relies heavily on some containerized software only provided for x86_64? (e.g., some flatpak-ed proprietary software)

Or basically just doesn't know their way around Linux in general? (A quick look at the titles on their channel: yup, the presenter openly admits to being a Linux newb who only recently started experimenting with Fedora.)

Comment Process, Not silicon (AI will make this worse) (Score 2, Interesting) 157

That's because Apple Silicon is really efficient, especially if you take energy consumption into account.

Mostly because Apple used to hog the finest process(*) available at TSMC (e.g., producing their M chips on a "3nm" process while AMD uses "5nm" for their flagship), not so much due to some magic design skills.

And BTW, with the current AI bubble, this advantage of Apple's is going to evaporate, as the silicon fabs are now going to prioritize buyers with even more (investors', not income) money to burn on the latest and bestest processes. Within a year or so, you can expect the "really efficient, especially if you take energy consumption into account" title to go to some custom AI chipsets by Nvidia and co (server CPUs, datacenter GPUs and NPUs, racks' ultra-high-bandwidth interconnects, etc.) mass-produced at crazy scale to fill the "promised" continent-spanning datacenters that the big AI companies announce in their arms race to outcompete each other exaflops-wise; and Apple will slide back to "yet another CPU manufacturer".

---

(*) note: that still partially(**) explains the high price...
(**) note: but Apple still has ginormous margins compared to the cost of parts.

Comment No lying around (Score 1) 67

You're right to be sarcastic:

I'm sure they had 40 billion worth of bitcoin lying around and managed to transfer it to actual other bitcoin accounts without anyone noticing.

Nope, they didn't have 620'000 BTC (more like 50'000 BTC, as mentioned elsewhere in the discussion).

They didn't make an actual transaction on the blockchain giving out 620'000 BTC to some random bitcoin account.

They just accidentally wrote "+620'000 BTC" in the database that manages the exchange (the one that tracks the internal state of who is selling how much to whom).
So suddenly some user was supposedly in possession of 620'000 BTC on the exchange according to the web interface, even though the exchange never saw the number of BTC it holds according to the blockchain ledger magically go up by that amount.
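To make that concrete, here's a toy sketch (hypothetical names, obviously not Bithumb's actual code) of why crediting the internal database doesn't touch the blockchain at all:

```python
# Toy model of an exchange: the internal ledger is just a table of IOUs,
# completely separate from the on-chain balance of the exchange's wallet.

class Exchange:
    def __init__(self, on_chain_btc: float):
        self.on_chain_btc = on_chain_btc   # what the blockchain ledger says
        self.ledger = {}                   # internal IOUs: user -> BTC

    def credit(self, user: str, amount: float):
        # A bug here can write any number into the database; the
        # blockchain is never consulted and never changes.
        self.ledger[user] = self.ledger.get(user, 0.0) + amount

ex = Exchange(on_chain_btc=50_000)
ex.credit("some_user", 620_000)   # the fat-fingered "+620'000 BTC"

# The web interface would now show the user holding 620'000 BTC...
assert ex.ledger["some_user"] == 620_000
# ...while the actual on-chain holdings never moved.
assert ex.on_chain_btc == 50_000
```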

Comment Not on the blockchain (Score 1) 67

That's a lot of bitcoin to possess.

That's merely a big number in some database.

Wait, how does the blockchain even allow you to spend what you don't have?

Because this thing is not technically on the blockchain.

The transactions happening on the blockchain are between your own self-managed wallet and the exchange's infrastructure.
(A banking metaphor: Think you getting cash from your pocket and inserting it in the ATM input slot)

The transactions on the exchange are just internal bookkeeping by the exchange's software stack to keep track of who has how much and owes how much to whom.
(A banking metaphor: when you send money between accounts, e.g., when you use e-banking to pay somebody at the same bank, nobody moves actual wads of dollar bills and coins between vaults; instead, the bank just updates some numbers in its database, and now it knows you have less and somebody else has more.)

Now this is where the banking metaphor breaks down: actual real-world banks are extremely regulated and have to meet high standards to stay licensed as banks, and because of that, great effort is put into making sure that the database is coherent, that the numbers correspond to what is metaphorically in their vault.
Nobody would just magically get "+40 billion bucks" on their account due to a mistake.

Meanwhile, I wouldn't be surprised if some of the code involved here was vibe-coded.
What happened is that the exchange mistakenly wrote "+620'000 BTC" in their database, even though they never controlled that much in their actual wallet / there was never that much BTC according to the blockchain ledger.

Enough recipients sought to sell or withdraw bitcoin that the market sank 17%, before Bithumb halted transactions after roughly 30 minutes.

(emphasis mine).

So some people decided to be clever and run away as fast as possible with the money (have it transferred out of the exchange).
Except that even if the exchange's database says these users "possess" 620'000 BTC, the exchange only actually holds 50'000 BTC according to the ledger, so this very likely set off some warning about dubious or impossibly high sums being requested for withdrawal, leading the exchange to freeze everything before its actual 50'000 got fleeced.
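The kind of tripwire that likely fired here can be as simple as checking total claimed balances against actual on-chain holdings. A hypothetical sketch, not Bithumb's real code:

```python
# Hypothetical withdrawal gate: halt everything if the internal ledger
# claims more BTC than the exchange actually controls on-chain.

def withdrawals_allowed(ledger: dict, on_chain_btc: float) -> bool:
    total_owed = sum(ledger.values())
    return total_owed <= on_chain_btc

# Sane state: users are owed less than the exchange holds.
assert withdrawals_allowed({"alice": 10.0, "bob": 5.0}, on_chain_btc=50_000)

# After the erroneous credit, the ledger claims 620'000 BTC against
# 50'000 actually held: time to halt transactions.
assert not withdrawals_allowed({"lucky": 620_000.0}, on_chain_btc=50_000)
```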

Comment GPU cooler (Score 1) 40

The big issues with the GPU is many don't have active cooling, instead expecting a front to back airflow to exhaust fresh air across it.

But out there in the real world: coolers are probably the most frequently modded part of graphics cards, so...

So you need some fairly forceful directed air movement. You can rig up things, but it won't be just plug this desktop GPU into the motherboard and connect PCIe power and you're all set.

...so yeah: replacing the cooler with something more fitting for a workstation tower or a compact gaming case is probably going to be extremely common.

At worst, just a new 3D-printed shroud with a couple of Noctuas on it would probably work in a pinch,
but you can bet that AliExpress is going to be filled with cheap custom coolers for the two or three most common PCIe GPU boards found in datacenters.

(And of course, the shops that are likely to perform GPU transplants onto new PCIe carrier boards will obviously slap some adequate cooler on them.)

Comment Memory (Score 2) 40

The memory isn't in the correct form factor for desktop use.

The latest crop of servers indeed use LPDDR5X, which is incompatible with DDR5 DIMMs and SODIMMs (it's not possible to de-solder the LPDDR chips off recycled mainboards, re-solder them onto new DDR carrier PCBs, and plug those into conventional motherboards' DIMM slots).
So yeah, you won't see the market flooded with rebuilt DIMM and SODIMM sticks.

BUT it's a format also used (soldered) by SBCs, mobile devices (tablets and smartphones), and some laptops.

So if the hardware is cleared "for pennies on the dollar" from datacenters, it means you could see the market flooded with Raspberry Pi clones, Android devices, gaming consoles, and ultra-thin notebooks that have 32 GB or 64 GB of (re-)soldered RAM and are still cheap.

--

The latest crop of GPUs in those servers use HBM3. Yes, this is completely different from the GDDR7 used on gamer hardware (it's stacked in the same package as the GPU itself instead of being soldered nearby on the card)... but these GPUs are perfectly functional for gaming, they just have a much *H*igher *B*andwidth to their memory.

At minimum it should be possible to keep the PCIe accelerator boards as-is, as long as you plug them into a motherboard with on-board graphics and run the corresponding software. (It could be as simple as doing the same Optimus trick that laptops with switchable discrete GPUs have been doing for ages: the repurposed datacenter GPU renders the game's graphics, and the mainboard runs the compositor that assembles the final picture, which is then output through the mainboard's DisplayPort.)
(DISCLAIMER: I actually do use a second-hand GPU with HBM in my current build, except that mine still has a functioning DisplayPort, so I don't even need to fumble with Linux "prime" drivers.)
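For what it's worth, on a Mesa-based Linux setup that offload trick boils down to an environment variable (assuming the discrete GPU is enumerated as the second render device; exact behavior varies per driver):

```shell
# Render on the discrete (here: repurposed datacenter) GPU while the
# iGPU drives the display. DRI_PRIME selects the offload device in Mesa.
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"

# Launch a game the same way:
DRI_PRIME=1 ./some_game
```

This is a configuration sketch: `./some_game` is a placeholder, and a headless datacenter board needs working drivers before any of this applies.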

At worse, boards that use esoteric server connections could be de-soldered and re-soldered onto more conventional PCIe carrier PCBs (there's a whole market in China rebuilding new PCIe cards, with quantities of RAM adapted for machine-learning research, out of non-banned regular gaming donor cards; transplanting GPUs is a well-mastered skill).
At worst, if the GPUs are very weird (not exposing PCIe pins on the BGA pinout and/or with PCIe support fused off in the GPU), NVLink-to-PCIe bridge chips are very likely to pop up (see the current market for dirt-cheap PCIe switches), making GPUs transplanted onto classic PCIe cards still viable.

Comment Various uses (Score 1) 86

A critical part of that behavior is that YouTube is largely not a long-form video platform. I don't go to watch an hour-long TV show. I go to watch a short clip of a piece of music to figure out if it is the right one, or a short clip that someone sent me. This means I'm not playing YouTube content for hours on end.

That's a *you* thing.

Music is a very prevalent use of YouTube: "Since Lady Gaga's "Bad Romance" in 2009, every video that has reached the top of the "most-viewed YouTube videos" list has been a music video." It's a use that could be expensive to Google (do some labels require some global agreement with Google?), and will typically be "click a link to a playlist or a mix, then let it play in the background" - exactly as in this news (i.e., they use YouTube as a glorified Spotify competitor). By tying the most frequent use of YouTube to Premium, Google is attempting to increase subscriptions, hoping this allows them to show "line goes up" to investors.

Podcasts are another typical use where you hit play and leave it in the background, listening while doing something else.
Other examples would be video essays where you don't care about the visuals and are more interested in the subject.

There are also parents who use the "kids" mode of YouTube (restricted to kids-only channels) as the new-gen "TV nanny" to leave the kids parked in front of.
(Though that's a different type of lock: there the video keeps playing on the screen while the phone/tablet "locks in the background" - i.e., the video still plays, but switching to any other app requires logging in again. This might not be blocked by the current restrictions.)

being unable to play in the background causes me to stop watching whatever video I'm watching, and usually I don't come back to it. This reduces that number even further.

I suspect Google has run the numbers and concluded that they might just get enough subscribers to display a "line goes up" report to investors.
Again, your (and my) personal use of YouTube isn't the most typical. So it's possible that overall it would still be useful to Google.
E.g.: they will very certainly lose users like you, but a small fraction of the much more numerous "music listeners" might decide to fork over the money for Premium to keep being able to listen to music playlists in the background.
But maybe they bet wrong and those will instead abandon YouTube for Spotify, Deezer, SoundCloud, Bandcamp or whatever the cool kids use nowadays.

Comment Serious Gamer (Score 1) 13

I never really understood all the hate for Game Cube from 'serious gamers' .

Basically: Its position among the consoles of the same generation.
And the "Lateral Thinking with Withered Technology" approach typical at Nintendo (don't go for bells and whistles and custom chips, go for well-established tech that's easy to mass-produce).

On paper, the GameCube had much lower specs than the competition (Sony's PlayStation 2 and Microsoft's Xbox), as Nintendo didn't want to join the arms race for the beefiest specs.

Contrast with the previous two iterations:
- Nintendo 64: had a very cool chipset developed in partnership with SGI, which hoped to revolutionize 3D graphics and aimed to compete against the 3D available on Sony's PlayStation 1 (and to some extent SEGA's Saturn, though that machine wasn't primarily 3D).
- Super Famicom / SNES: despite a relatively crappy main CPU (a 16-bit extension of the same 6502 family as before, still running on an 8-bit bus), it had advanced visual capabilities (e.g., the tilemap can be roto-zoomed in Mode 7, multiple scrolling and effects planes, etc.), support for cool raster-effect tricks (e.g., doing 3D using line-by-line changes of Mode 7's roto-zoom), and quite a menagerie of extra in-cartridge coprocessors (e.g., the SuperFX used for 3D polygonal games like Star Fox), competing very well against the contemporary SEGA MegaDrive/Genesis and NEC PC Engine (to the point that it caused SEGA to panic and release a series of not-so-successful expansions: the CD add-on (with its own roto-zoom), then later the 32X). The SNES' graphics could even look decent at a fraction of the price of what the "actually a consolized arcade board" NeoGeo was doing, allowing a lot of good arcade ports (e.g., Street Fighter 2's SNES port is not put to shame when compared to the original CPS arcade board).
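For the curious, the Mode 7 "roto-zoom" mentioned above is just an affine transform mapping each screen pixel to tilemap coordinates (the hardware lets you change the matrix per scanline, which is how the fake-3D perspective tricks work). A minimal sketch of the math, my own illustration rather than actual SNES register code:

```python
import math

def mode7_sample(x: float, y: float, angle: float, scale: float,
                 cx: float = 0.0, cy: float = 0.0) -> tuple:
    """Map a screen pixel (x, y) to tilemap coordinates (u, v) with the
    same kind of 2x2 affine matrix SNES Mode 7 applies: rotation plus
    zoom around a pivot (cx, cy)."""
    a = math.cos(angle) / scale
    b = -math.sin(angle) / scale
    c = math.sin(angle) / scale
    d = math.cos(angle) / scale
    u = a * (x - cx) + b * (y - cy) + cx
    v = c * (x - cx) + d * (y - cy) + cy
    return (u, v)

# Identity: no rotation, no zoom -> the pixel maps to itself.
assert mode7_sample(10.0, 20.0, 0.0, 1.0) == (10.0, 20.0)
# Zoom 2x: the sampled tilemap window shrinks by half around the pivot.
assert mode7_sample(10.0, 0.0, 0.0, 2.0) == (5.0, 0.0)
```

Varying `angle` and `scale` for each scanline is what turns this flat roto-zoom into the pseudo-3D ground planes of games like F-Zero.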

By the time the GameCube came out, gamers were used to Nintendo trying to make hardware that could seriously compete with the rest of the market on raw performance. Then suddenly the GameCube arrives, which doesn't aim to beat either of its contemporary competitors, just aims to be cheap to produce and thus sellable at a profit (instead of subsidized by game sales, like Sony does), and to be simpler to program for than its own N64 predecessor or the competing PS2.

The performance was fine, but I guess it was memory staved because it seems like the GC version of some 3rd party titles got cut down a bit.

In practice, devs managed to make a lot of cool games for it.
(Because at the end of the day, the enjoyment you experience comes from how interesting the games are, not what numbers are on some spec sheet).

But on paper, the GameCube has, e.g., a fraction of the pixel rate of a PlayStation 2 and less than the Xbox's, a smaller amount of RAM than those two competitors, etc.
Even the storage is smaller (mini-DVD vs. regular full-sized DVD).

"Hardcore gamers" chasing the shiniest, newest visual gizmos were disappointed.

Comment Verification (Score 3, Interesting) 115

Whatsapp doesn't let you in person verify

Tap a user to open their profile.
Tap "Encryption".
You then have the option to scan a QR code or compare key fingerprints.
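Conceptually, such a comparison screen boils down to a digest over both parties' identity keys, rendered as digit groups that both phones can display. This is a hypothetical sketch, not WhatsApp's actual algorithm (which uses the Signal protocol's "safety numbers"):

```python
import hashlib

def fingerprint(pub_a: bytes, pub_b: bytes) -> str:
    # Hypothetical: hash both public keys in a canonical (sorted) order,
    # so both phones compute the same code regardless of who checks.
    material = b"".join(sorted([pub_a, pub_b]))
    digest = hashlib.sha256(material).hexdigest()
    # Render as groups of digits, roughly like messenger safety numbers.
    digits = "".join(str(int(digest[i:i + 2], 16) % 10)
                     for i in range(0, 24, 2))
    return " ".join(digits[i:i + 5] for i in range(0, len(digits), 5))
```

Because the code depends on both keys, a man-in-the-middle swapping one key changes the digits on both screens, which is exactly what the in-person comparison catches.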

or notify you when they changed,

"Your security code with {nickname} changed. Tap to learn more."

Meta can trivially MitM

Why MitM when they already have plenty of side channels? (Cloud-based AI; cloud-based backups; and it's closed-source, so they could probably just inject a backdoor in the next update and nobody would notice; etc.)
