Comment Re:As a repair tech... (Score 1) 61
Oh look, working as designed, I can't fix my own thing, but I shouldn't have to care as long as I give Apple money to pay for any work.
Actually a relatively high risk to Atlassian....
The big thing is that somehow, they got every 'Agile' consultant in the world to declare that you are properly Agile if and only if you use Jira.
Now the trendy thing is to say you are AI aligned, not Agile aligned. So replace your guy implementing your oh-so-special and unique workflow tortuously in Jira with someone to vibe code up your own truly bespoke issue management solution.
Sure, if it gets severe enough, the workers will nope on out because going to work costs more than it's worth.
However, people will put up with it much longer than a business will. If there were a "commute budget" the company could keep for itself by figuring out how, it would push for 'work from home' more aggressively than any employee is willing to push their luck in pushing *against* work in person.
While the US might not be hit as *hard*, prices have shot up quite a bit, higher than they were when they were blasting Biden for high gas prices in 2024, because even if we can produce enough for ourselves, companies want to sell to whoever is most desperate.
As an aside, I'd never heard of the guy you linked to, but he became insufferable as soon as he cited Grok as a reason to trust a video and claimed that, thanks to Trump, the US doesn't have to worry about this, when it was Trump who made this mess in the first place.
I think this suggestion would be in the spirit of 'in addition to' rather than 'instead of'.
All the hard measures listed are about government-controlled offices and institutions, with a less compelling "please do work from home" request to businesses. Businesses externalize the commute cost, so they don't have a particularly strong motivation to be accommodating.
If you made the businesses bear the commute costs, then they at least would have real skin in the game. Not just for the current situation, but ongoing motivation to consider whether the personnel *really* have value to be directly there in general.
I was swayed by another comment in this discussion pointing out that, for whatever reason, his example is an LLM analysis of a single routine amounting to 120 bytes of machine code. The choice of something so utterly short is enough to perhaps recalibrate expectations for practical use. It did spot a couple of real issues, but mostly buried the user in a list of things he already knew about how the general environment is not exactly credibly secure: 75% of the 'findings' were just "Hey, the Apple II doesn't have anything that looks like 'security'." LLMs tend to become more incoherent as the volume of material thrown at them grows, so it's unclear whether this is practically scalable at the moment to anything more realistic.
A human could conceivably hand review 120 bytes of Apple II machine code, but doing the same for even a modest library becomes largely untenable. LLMs are likely in the same boat in this respect, just much faster at getting wherever that boat might be able to go.
A lot of these stories include the final "look at the magical thing the LLM output" while conveniently skipping the "boring lead up" where they basically manually have to tell it what *not* to say before they get it to generate the thing they intended to. And if even that fails, they just skip writing the post.
It's gotten really hard for SUSE shops to be happy running what is now an 8-year-old base. Yes, they update some things and deprecate others, but a lot of the bones are the same as the 2018 release, whereas before they refreshed every 4-5 years.
At this rate, slashdot will just be nothing but GenAI ads top to bottom...
You have a fair point that the selection of a 40 year old 6502 application is interesting, and likely driven by the reality that the LLMs fall apart with vaguely modern application complexity.
It may, however, help if someone identifies a small, digestible chunk as security relevant and sets the LLM about the task of dealing with it.
This was some little program a guy wrote at 20 years old that had no *real* reason to test for security (if you could run his code, you could just run whatever code you wanted anyway; it was a single-user platform without any authentication or anything), and that's supposed to say something one way or another about his capabilities as a 60-year-old?
No, it's busy making more of a mess of Windows 11.
So, for an open-ended, general-purpose platform without the concept of privilege separation, you are right, and that's realistically where the Apple II sits.
But what if you had a similarly loose platform but it's running a kiosk and that kiosk software is purportedly designed to keep the user on acceptable rails. Then finding a way to break that kiosk software might be significant.
So I'll grant that the concept *could* map to real-world concerns, given how wild west a lot of embedded applications have been and how so many of these applications are ancient because they worked well enough to not need a replacement.
Well the latter point may have more relevance, that a lot of embedded scenarios are like the Apple II scenario, never subjected to rigorous security review and largely banking on no one bothering to reverse engineer the closed source runtimes.
So this can shift the cost/benefit ratio to go look at some of those embedded applications and find ways to induce misbehavior. Depending on the scenario, the vendor is long gone or the design was never made to be field upgradeable. So you end up with known vulnerabilities in some applications that cost real money to replace.
There has been a push for EU 'tech sovereignty' on various fronts, and I could see SUSE benefiting from that. So far I haven't seen SUSE capitalize on it, mainly because Nvidia has pushed Ubuntu hard.
All warranty and guarantee clauses become null and void upon payment of invoice.