
Comment Re:You're Absolutely Right! (Score 2) 116

This debate has been going on for at least a couple of decades. I remember back in the Usenet days, when AOL and other early ISP users first started showing up in droves with whacked-out, untraceable bang paths, and people were trying to sort out technical solutions, usually involving some servers tarpitting some domains, with the inevitable consequence that valid users (by whatever definition any given Usenet group had) were blocked.

In a way, AI bots aren't any different than the spam problem on fax machines and email: universal low-barrier delivery meets large-scale programmatic swill. AI allows complexity that earlier spambots couldn't dream of, back when the most sophisticated way of defeating filters was spelling "porn" as "pr0n" and a bit of header fuckery. In the end there are only two ways to go: either do what filtering you can and accept some degree of false positives, or go to identification systems that will, one way or the other, compromise anonymity. Because make no mistake, once you start storing any kind of data linking an account to an actual human being (biometric, picture ID, phone number, mailing address, or whatever), it won't take long for the court order to show up demanding you hand over all the de-anonymized account data to find the person distributing child porn, selling drugs, or calling their local political representative dirty names.

Comment Re:Will believe it when it happens (Score 2) 166

New Android-based Chromebooks, and "good-enough" Office alternatives like Google Docs and, I would argue, even LibreOffice (I use it almost exclusively these days), mean Microsoft is suffering a differentiation crisis. They'll likely have the corporate lock-in for some time to come, though they've managed to fuck up Outlook so badly that I have to wonder if the only thing really keeping the big guys locked in at this point is Teams.

MS's ability to leverage Windows as the platform is decaying, and the "bells and whistles" approach has managed to alienate a lot of users. People are at the point where they use Windows because they have to, but there's enough platform-agnostic functionality out there that the old lock-ins they relied on to keep Windows dominant are becoming more like prisons for their own development teams.

Comment Re:Will believe it when it happens (Score 1) 166

I know macOS has its critics, and in its own way it has its UI lock-in, but after using it now for four years, with my use of Windows reduced to an RDP session at work, I have to say the experience overall has been pretty pleasant and productive: the lack of update nagging, the sheer horsepower of Apple Silicon, an actual *nix prompt instead of WSL, and printing that isn't an absolute shitshow (and this is saying something, because Windows used to be the reigning heavyweight champion of plug-and-play printer handling).

Windows 11 is its own type of hell, and every time I'm forced to use it I find it a slow, bloated, unintuitive mess. It feels like Windows 7 after you let your 12-year-old kid download a whole bunch of dubious software, and now the desktop and taskbar do strange things while spam spontaneously appears. If someone had shown me Windows 11 fifteen years ago, I would have gone "Holy shit man, your Windows 7 machine has been rootkitted!"

Comment Re:8Gb RAM? (Score 2) 56

I can still run Eclipse without any issues on my old M1 MacBook Pro with 8 GB, about five years old now. Frankly, no matter how much people try to hand-wave it away, these Apple Silicon machines with the macOS optimizations are beasts. I still take my M1 out on the road because if it gets damaged, it will have long since paid for itself. I honestly cannot imagine going back to Intel/AMD and Windows, and every time I'm forced to use Windows, even on decent hardware, I come away realizing just how inferior the whole ecosystem is. Its sole advantage, and it's a big one, is that the software library for x64/Windows is massive.

Comment Re:Seriously ...? (Score 1) 255

I would argue that when sweeps are largely indiscriminate, and being in proximity to a raid is enough to end up in custody, the odds of "wrong place, wrong time" greatly increase, and that becomes a much stronger argument for not being in the US as a foreign national at all.

Mistakes may happen, but the nature of ICE detainments rises far above mere mistakes. The intent of the current system is to make the US sufficiently inhospitable to foreign nationals that they don't come. So, I take that point, and won't come. And judging by the depressed visit numbers from my fellow countrymen since last year, many of us are choosing that route.

Comment Re: Seriously ...? (Score 1) 255

I'm unlikely to ever visit the US again. My daughter and I had planned on going to Comic-Con at some point in the next year or two, and we both agree now that while the risk of detainment is rather low, it is non-zero. There are other places we can go, and being Canadian, there are plenty of places in our own country that we haven't seen.

Comment And complexity (Score 3, Informative) 87

the selection of a 40 year old 6502 application is interesting,

Not even the application, just a 120-byte binary patch.

It may, however, help if someone identifies a small, digestible chunk as security-relevant and sets it about the task of dealing with it.

And that chunk can't have any weirdness that requires a seasoned, and actually human, reverse engineer.
(Think the segmented memory model on anything pre-"_64" in the x86 family: the kind of madness that can kill Ghidra.)

Also, if it's not from the 8-bit era or the very early 16-bit era, chances are high that this bit of machine code didn't start as hand-written assembler but as some higher-level compiled language (C, most likely). It might be better to run Ghidra on it and have some future chatbot, trained on making sense of decompiled code, work from that output.

In short, there are thousands of blockers that have been carefully avoided by going to that 40-year-old, 120-byte patch of 6502 binary.

Comment Good example of why it's wrong (Score 4, Insightful) 87

But what if you had a similarly loose platform but it's running a kiosk and that kiosk software is purportedly designed to keep the user on acceptable rails.

That "similarly" is doing a lot of heavy lifting.

Apple's early computers ran on the 6502.
This was an insanely popular architecture, used in metric shit-tons of other hardware from roughly that era, and there are insane amounts of resources about it. It was usually programmed in assembly, and there was a lot of patching of binaries back then. These CPUs have also been used in courses and training for a very long time, most of which are easy to come by. So there's an insane amount of material about 6502 instructions, their binary encoding, and general debugging of software on that platform that could be gobbled up during the training of the model. The architecture is also extremely simple and straightforward, with very little weirdness. It could be possible for something that boils down to a "next word predictor" to not fumble too much.
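To see just how regular that encoding is: nearly every 6502 instruction is one opcode byte plus zero, one, or two operand bytes, little-endian. Here's a minimal sketch of a toy disassembler (the four-entry opcode table is just an illustrative subset of the real instruction set, and the example byte sequence is made up for the demo):

```python
# Tiny illustrative subset of the 6502 opcode map:
# opcode byte -> (mnemonic, addressing mode, total instruction size in bytes)
OPCODES = {
    0xA9: ("LDA", "immediate", 2),
    0x8D: ("STA", "absolute", 3),
    0x4C: ("JMP", "absolute", 3),
    0x60: ("RTS", "implied", 1),
}

def disassemble(code: bytes) -> list[str]:
    out, pc = [], 0
    while pc < len(code):
        name, mode, size = OPCODES[code[pc]]
        operand = code[pc + 1 : pc + size]
        if mode == "immediate":
            out.append(f"{name} #${operand[0]:02X}")
        elif mode == "absolute":
            # 16-bit operands are stored low byte first (little-endian)
            addr = operand[0] | (operand[1] << 8)
            out.append(f"{name} ${addr:04X}")
        else:
            out.append(name)
        pc += size
    return out

# LDA #$01 ; STA $0200 ; RTS
print(disassemble(bytes([0xA9, 0x01, 0x8D, 0x00, 0x02, 0x60])))
```

No instruction prefixes, no variable-length mode bytes, no segmentation: the whole decode loop is a table lookup. That flatness is exactly why both humans and statistical models have an easy time with this architecture.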

Anything developed in the modern online era, where you would actually be interested in finding vulnerabilities, is going to be multiple orders of magnitude more complex (think multiple megabytes of firmware, not a 120-byte patch), rely on much weirder architectures (a kiosk running on some x86 derivative? one of the later embedded architectures that use multiple weird addressing modes?), and be very poorly documented.

Combine this with the fact that we're very far into the "diminishing returns" part of AI development, where each minute improvement requires vastly more resources (insanely large datacenters, the power requirements of entire cities) and more training material than is available (so, "Habsburg AI"?), and it's not going to get better easily.

The fact that a chatbot can find and fix a couple of grammar mistakes in a short paragraph of English doesn't mean it could generate an entire epic poem in some dead language like Etruscan (not Indo-European, not many examples have survived, and even fewer Etruscan-Latin or -Greek bilingual texts have survived to assist understanding).
The fact that a chatbot successfully reverse engineered and debugged a 120-byte snippet for one of the most well-studied architectures doesn't mean it will easily debug the multi-megabyte firmware of some obscure proprietary microcontroller.
