
Comment Why we don't polygraph people anymore (Score 2) 116

I can think of a few things leading to Voight-Kampff-style polygraph tests being phased out in this timeline

1. Several U.S. states have banned reliance on polygraph test results by employers. "Polygraph" on Wikipedia lists Rhode Island, Massachusetts, Maryland, New Jersey, Oregon, Delaware, and Iowa. In addition, the federal Employee Polygraph Protection Act of 1988 generally bans polygraphing by employers outside the rent-a-cop industry.
2. Autism advocacy organizations raised a stink about false positive results on autistic or otherwise neurodivergent human beings.
3. The LLM training set probably picked up answers from someone's cheat sheet, such as "The turtle was dragging its hind leg, and I was waiting for it to stop squirming so I could see if it needed to go to the vet."

Comment Free apps are more likely to use protocols (Score 1) 68

you have your itinerary saved in a note taking app that isn't on the appstore

If an app meets F-Droid's licensing policy, then it is more likely to follow the principle that protocols are better than platforms. This means there are probably other apps, including some on the Google Play Store, that can reach the document repository where you saved your itinerary.

Comment Apple was beaten to Tivoization by decades (Score 1) 68

insane market (started by Apple) of personal devices that you buy that you literally don't have admin access on

That was 1985 with the Nintendo Entertainment System and the Atari 7800 ProSystem, the first popular home computing devices to use cryptography to lock out unauthorized software. Between that and the iPhone was the TiVo DVR, the first popular home computing device to use cryptography to lock out unauthorized derivatives of copylefted software.

Comment Re:Working with other people's code (Score 0) 150

Yes. So far, the LLM tools seem to be much more useful for general research purposes, analysing existing code, or producing example/prototype code to illustrate a specific point. I haven't found them very useful for much of my serious work writing production code yet. At best, they are hit and miss with the easy stuff, and by the time you've reviewed everything with sufficient care to have confidence in it, the potential productivity benefits have been reduced considerably. Meanwhile, even the current state-of-the-art models are worse than useless for the more research-level stuff we do. We try them out fairly regularly, but they make many bad assumptions and then completely fail to generate acceptable-quality code when told that no, those assumptions are not acceptable and they really do need to produce a complete, robust solution to the original problem that is suitable for professional use.

Comment Re: sure (Score 2) 150

But one of the common distinctions between senior and junior developers -- almost a litmus test by now -- is their attitude to new, shiny tools. The juniors are all over them. The seniors tend to value demonstrable results and as such they tend to prefer tried and tested workhorses to new shiny things with unproven potential.

That means if and when the AI code generators actually start producing professional-standard code reliably, I expect most senior developers will be on board. But except for relatively simple and common scenarios ("Build the scaffolding for a user interface and database for this trivial CRUD application that's been done 74,000 times before!") we don't seem to be anywhere near that level of competence yet. It's not irrational for seniors to be risk-averse when someone claims to have a silver bullet, and both the seniors' own experience and increasing amounts of more formal study suggest that Brooks remains undefeated.

Comment Re: Anyone can sue... (Score 1) 137

Contracts, or portions of contracts that license existing IP for government use do not typically gain any rights whatsoever to the products beyond those normally granted by law or license. The case you reference has little, if anything, to do with the typical weapon system acquisition process.

I work in this area. It would help you if you actually read the FAR that you're citing, since it says the exact opposite of what you claim.

The standard license rights that a licensor grants to the Government are unlimited rights, government purpose rights, or limited rights. Those rights are defined in the clause at 252.227-7013, Rights in Technical Data–Other Than Commercial Products and Commercial Services. In unusual situations, the standard rights may not satisfy the Government's needs or the Government may be willing to accept lesser rights in data in return for other consideration. In those cases, a special license may be negotiated. However, the licensor is not obligated to provide the Government greater rights and the contracting officer is not required to accept lesser rights than the rights provided in the standard grant of license. The situations under which a particular grant of license applies are enumerated in paragraphs (a) through (d) of this section.

A license is a right to exploit (make, have made, sell, offer to sell, import, reproduce, prepare derivative works, distribute, publicly display and/or perform, and/or generally use, depending upon the IP and the transaction).

What it is not is an assignment of ownership of the IP. The private entity developing the IP retains ownership of the IP and can exploit it itself for commercial purposes, excepting ITAR issues and other technology restrictions that would apply to similar technologies generally.

The original claim was "the government owns most it (sic) not all of its IP in its supply chain." That's false. It has a license to it, and rarely anything more.

Comment Apple used x86 in 2005-2020 (Score 1) 329

In 2005, Apple announced the Intel transition and shipped x86 developer kits; the first Core Duo (x86) Macs arrived in early 2006. From 2006 through 2020, Mac computers used Intel x86-64 processors, starting with the Core 2 Duo. macOS on x86-64 could still run 32-bit x86 applications until macOS 10.15 "Catalina Wine Killer", released in October 2019.

What CPU architecture were you using on the desktop from 2008 through 2020, if not x86 or x86-64?

Comment And complexity (Score 3, Informative) 87

the selection of a 40 year old 6502 application is interesting,

Not even the application, just a 120-byte binary patch.
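For readers unfamiliar with the form: a binary patch of this size is essentially a set of (offset, old byte, new byte) edits applied to a ROM image. A minimal, entirely hypothetical sketch (the offsets and bytes here are made up for illustration, not taken from the actual patch):

```python
# Hypothetical sketch of applying a small binary patch as
# (offset, expected old byte, new byte) triples to a ROM image.
def apply_patch(rom: bytes, patches) -> bytes:
    data = bytearray(rom)
    for offset, old, new in patches:
        # Refuse to patch if the byte on disk isn't what we expect:
        # a classic sanity check when patching binaries blind.
        if data[offset] != old:
            raise ValueError(f"unexpected byte {data[offset]:#04x} at offset {offset:#06x}")
        data[offset] = new
    return bytes(data)

# Made-up example: flip the immediate operand of an LDA instruction.
rom = bytes([0xA9, 0x00, 0x8D, 0x00, 0x02])
patched = apply_patch(rom, [(1, 0x00, 0x01)])
```

The old-byte check is what makes even tiny patches like this robust against being applied to the wrong ROM revision.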

It may, however, help if someone identifies a small, digestible chunk as security-relevant and sets it about the task of dealing with it.

And that chunk doesn't have any weirdness that requires a seasoned, actually human reverse-engineer.
(Think the segmented memory model on anything in the x86 family before the "_64" parts: the kind of madness that can kill Ghidra.)

Also, if it's not from the 8-bit era or the very early 16-bit era, chances are high that this bit of machine code didn't start as hand-written assembly but as some higher-level compiled language (most likely C). It might be better to run Ghidra on it and have some future chatbot trained on making sense of that decompiled code.

In short, there are thousands of blockers that have been carefully avoided by going to that 40-year-old, 120-byte patch of 6502 binary.

Comment Good example of why it's wrong (Score 4, Insightful) 87

But what if you had a similarly loose platform but it's running a kiosk and that kiosk software is purportedly designed to keep the user on acceptable rails.

The word "similarly" is doing a lot of heavy lifting there.

Apple's early computers ran on the 6502.
This was an insanely popular architecture. It was used in metric shit-tons of other hardware from roughly that era, and there are insane amounts of resources about it. It was usually programmed in assembly, and there was a lot of patching of binaries back then. These CPUs have also been used in courses and training for a very long time, most of which are easy to come by. So there's an insane amount of material about 6502 instructions, their binary encoding, and general debugging of software on that platform that could have been gobbled up in the training of the model. The architecture is also extremely simple and straightforward, with very little weirdness. It could be possible for something that boils down to a "next word predictor" to not fumble too much.
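To illustrate just how regular the 6502's encoding is: a toy disassembler for a handful of opcodes fits in a few lines, because each instruction is one opcode byte plus zero to two little-endian operand bytes. This is a deliberately abridged sketch (a real opcode table has roughly 150 entries), not a full decoder:

```python
# Toy 6502 disassembler covering a few common opcodes, to show how
# simple and regular the instruction encoding is.
# Table: opcode -> (mnemonic, total instruction length, operand format).
OPCODES = {
    0xA9: ("LDA", 2, "#${:02X}"),   # LDA immediate
    0x8D: ("STA", 3, "${:04X}"),    # STA absolute
    0x4C: ("JMP", 3, "${:04X}"),    # JMP absolute
    0xEA: ("NOP", 1, ""),           # NOP implied
    0x60: ("RTS", 1, ""),           # RTS implied
}

def disassemble(code: bytes, origin: int = 0x8000):
    out = []
    pc = 0
    while pc < len(code):
        op = code[pc]
        mnem, length, fmt = OPCODES.get(op, ("???", 1, ""))
        operand = code[pc + 1 : pc + length]
        if length == 2:
            arg = fmt.format(operand[0])
        elif length == 3:
            # The 6502 is little-endian: low byte first.
            arg = fmt.format(operand[0] | (operand[1] << 8))
        else:
            arg = ""
        out.append(f"{origin + pc:04X}  {mnem} {arg}".rstrip())
        pc += length
    return out

# LDA #$01; STA $0200; RTS
print(disassemble(bytes([0xA9, 0x01, 0x8D, 0x00, 0x02, 0x60])))
# → ['8000  LDA #$01', '8002  STA $0200', '8005  RTS']
```

Compare that with decoding even a modest x86 instruction stream (prefixes, ModRM, SIB, variable lengths) and the asymmetry in the training problem becomes obvious.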

Anything developed in the modern online era, where you would actually be interested in finding vulnerabilities, is going to be multiple orders of magnitude more complex (think multiple megabytes of firmware, not a 120-byte patch), rely on much weirder architectures (a kiosk running on some x86 derivative? one of the later embedded architectures with multiple weird addressing modes?), and be very poorly documented.

Also, combine this with the fact that we're very far into the "diminishing returns" part of AI development, where each minute improvement requires vastly more resources (insanely large datacenters, the power requirements of entire cities) and more training material than is available (hence "Habsburg AI"?), and it's not going to get better easily.

The fact that a chatbot can find and fix a couple of grammar mistakes in a short paragraph of English doesn't mean it could generate an entire epic poem in some dead language like Etruscan (not Indo-European, not that many examples have survived, and even fewer Etruscan-Latin or -Greek bilingual texts have survived to assist understanding).
The fact that a chatbot successfully reverse-engineered and debugged a 120-byte snippet for one of the most well-studied architectures doesn't mean it will easily debug the multi-megabyte firmware of some obscure proprietary microcontroller.
