Comment Re:Meaningless metric, why not use GW? (Score 1) 147

So roughly 3.9% of the world's population installed roughly 6.4% of the world's new PV in 2023? Yeah, what jerks.

No, it is one of the richest countries in the world, emits 13% of the world's CO2, but is only adding 6.4% of the world's new PV, so it is basically pulling only half of its weight in the fight against climate change compared to the rest of the world.

https://ourworldindata.org/co2...

Comment Re:Duh (Score 1) 62

Not really. The issue is that _any_ patch can break functionality, and that includes any security patch. You can never be sure you have tested everything. This is not a problem that can be fixed on the technological side. It is a risk management problem. And that risk management needs to be done by people who know the details of the application scenario.

I do agree that giving more resources to the kernel team would be a good thing. They already do what you describe with the "longterm" kernels. You may remember that they retire longterm kernels earlier than they would like to due to limited resources.

Comment So 2039, is it. So there is no hurry then. (Score 1) 48

We can delay doing anything for another decade. And then we can shift that because "that goal is too ambitious".

Well. The only good thing is that this skill gate (which climate change clearly is) will not let a bumbling, incompetent and greedy humanity through. The mix of people we have here is just not viable long-term.

Comment Re: "can't separate an LLM's data from its command (Score 1) 37

Oh? And how do you deal with interaction-created state? Because for an LLM, that is "system commands", and it is needed to make them a bit more useful. A purely static LLM is so utterly dumb it cannot do anything. In particular, it is impossible to customize for a regular user.

Comment Re:Nice! Very, very nice! (Score 1) 37

Hahaha, no. The _limits_ of what intentional things an LLM can do are reasonably well understood. The limits of what an LLM can be tricked into doing are somewhat understood on the theoretical side, but in practice they depend very strongly on the training data and on implementation details, such as temporary or persistent per-user state and the interactions with it. Hence the concrete possibilities are _not_ understood at all for any of the LLMs currently deployed.

You are thinking like a developer or user, i.e. you are looking at intended functionality. A security mindset is different: It looks at how you can break things and find additional, unintended functionality. You clearly lack that. Incidentally, Schneier did not call for "fine tuning" of LLMs or anything like that at all. That is just you hallucinating. The problem he points out is an architectural one and he is well aware that reliable and resilient approaches to fix the issue are currently unknown.

Comment Re:EU Cyber Resilience Act - requires code audits (Score 1) 62

Not really. That audit requirement is a red herring. What is actually required is _resilience_. You know, as the title says. For example, there is no issue with using unaudited or cutting-edge code as long as you have additional safeguards in place (which you need to have anyway) and as long as you have a tested recovery procedure. Yes, you need to be careful. Yes, you need to be able to keep things running or get them running again. No, that does not mean a glacial process where everything needs an auditor's stamp of approval. In fact, it means the very opposite.

Comment Re:Blame / Insurance (Score 1) 62

Indeed. A broken error culture, where the aim is not getting better and staying flexible but always having somebody to blame. Such an organization will never be good at anything, and it will be too inflexible and incompetent to act fast when needed. And that need to act fast can arise at any time in a modern IT landscape. Essentially, people are too afraid to touch anything because that touch may get them fired. It does not get much more dysfunctional than that, on the engineering and the management level both. A competent approach looks different: people are allowed to touch things (with an adequate level of care), and everybody understands that this can sometimes break something. Everybody should also understand that this is the only way to stay able to move, and that you need to prepare for unexpected problems by being able to recover from them.

Comment Re:Synology .. (Score 1) 62

If it is properly hardened and minimal, that is not an issue. If incompetently done, it is an issue. But as they _know_ what their hardware is, they can just have a few competent people follow the kernel release logs and add any patches needed themselves. Whether they do that or not is another question.

Comment Re:Frozen distros are a must in some areas... (Score 1) 62

I have worked in organizations where a frozen distribution is a must, because they have paperwork and audits that require this.

That is generally not true. It is one approach to doing the paperwork and passing the audits, but not the only one. As an IT and IT-security auditor, I have run into this approach when the IT organization did not have careful risk management and simply did not want to touch anything because of limited capabilities and insight. As soon as you update _anything_, things can break. Kernels are not special in that regard. That does not mean you should never update. It means you need a good reason, careful risk management, a resilient set-up, and sound, tested recovery procedures. And of course an organization with a good error culture. Obviously, an organization with a fundamentally broken error culture, where whoever touched the system gets blamed when it breaks, will just fossilize and never develop any real skill in keeping a system alive. These organizations will then also routinely get completely overwhelmed when an event happens where they must move. Not smart, not professional, but unfortunately very common.

Comment Re:If all you have is a hammer (Score 1) 62

Everything looks like a nail. If you ask security experts, they will always say: security first, apply all patches.

Not quite. If you ask _bad_ security experts, that will be their answer. Good ones will first ask what your organizational IT risk analysis says, and then try to make an informed decision among options like "patch now", "patch later", "more information needed" and "do not patch". Somebody that just has that proverbial hammer in their tool-chest is not an actual security expert. They are a fake "expert".

My personal first cut analysis for security experts is:
1. Do they know security stuff and are they reasonably current on what is happening in the field?
2. Do they have system administration experience? (Does not need to be extensive, but enough that they have a clue how it works.)
3. Can they code? (Same limit as 2.)
4. Do they understand that risk management is the only valid approach to security?

Anybody that fails any of these 4 is not a general security expert. My guess would be that about 90% of "security experts" only qualify in a limited sense. That is one of the effects of anybody being able to call themselves a "security expert" these days, with no proven qualifications or qualification profile needed. Compare that to, say, a general surgeon or a qualified engineer in any domain.

Comment Re:Stability (Score 1) 62

New versions have a nasty habit of introducing new bugs and changed behaviour that will break things.

One of the areas where Linux is far superior. Unless you have custom drivers or the like (and they are not well-engineered), Linux kernel updates are exceptionally unlikely to break your system as long as you compile based on the old configuration. I can understand that somebody used to the bumbling, half-assed updates Microsoft forces on people will be paranoid here. No need to be paranoid with Linux. In most cases it just works, and where it does not, it usually breaks fast and obviously. There is generally no need to re-run your full regression tests on a kernel update unless you change something major in the kernel config and have a sensitive set-up, e.g. one using real-time components or unreliable vendor drivers. A smoke test is generally quite enough.
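A minimal sketch of what "compile based on the old configuration" looks like in practice, assuming a Debian-style /boot layout and a freshly unpacked kernel source tree (paths are illustrative):

```shell
# Inside the new kernel source tree: reuse the running kernel's configuration.
cp /boot/config-"$(uname -r)" .config
# Keep all existing answers; silently take the default for any newly added option.
make olddefconfig
```

After booting the result, a quick smoke test (system boots, dmesg is clean, key services come up) is usually all the verification needed.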

Linux is not a system targeted at home computers. Windows, sadly, still is, judging by the mind-set displayed by Microsoft, and they do not seem to be getting any better. After 50 years, MS still cannot even do updates reliably. Contrast that with fully automated updates on Linux. I have run those on Debian for about 25 years now, every 3 days, on several systems. One issue so far, and that required a minor change that the update process emailed to me. _That_ is what good systems engineering looks like.
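The comment does not say how that automation is set up; one minimal way to do it is a cron fragment like the following sketch (the schedule, mail address and use of plain apt-get are assumptions, not from the comment):

```shell
# Hypothetical /etc/cron.d/auto-upgrade: unattended upgrade every 3 days at 04:00,
# with all output mailed by cron (assumes a working local MTA).
MAILTO=admin@example.org
0 4 */3 * * root apt-get update -q && DEBIAN_FRONTEND=noninteractive apt-get -y -q dist-upgrade
```

Debian's unattended-upgrades package is a more featureful alternative to a hand-rolled cron job.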

Comment Duh (Score 1) 62

The question implied is stupid. You generally do not want the most secure choice. You want one that fits your risk profile. Security is just one factor. It is an important one, but so are reliability and cost (and others). A system that is perfectly secure but frequently breaks your software or hardware is no good, and neither is one that is too expensive for you to maintain. A professional approach evaluates all risks and then strikes a balance. Depending on that risk analysis, it also determines how frequently the analysis needs to be revisited: can be 6 months, can be 2 years, can be 5 years. Additionally, the analysis needs to define triggers that mandate immediate or fast action, so that when a CVE that impacts you gets filed, or some specific part of the kernel has a vulnerability or gets patched, you always know whether you need to act, need to find out more, or can safely file it under "no action required".
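The trigger logic described above can be sketched as a tiny decision function; the inputs and thresholds here are purely illustrative, since a real decision table comes from your own risk analysis:

```shell
# Hypothetical triage helper: given whether a CVE affects you and its severity,
# emit one of the action classes from the comment above.
triage() {
  affected=$1   # yes | no | unknown
  severity=$2   # critical | high | medium | low
  if [ "$affected" = unknown ]; then
    echo "more information needed"
  elif [ "$affected" = no ]; then
    echo "no action required"
  elif [ "$severity" = critical ]; then
    echo "patch now"
  else
    echo "patch later"
  fi
}

triage yes critical   # prints "patch now"
```

The point is not the four lines of logic but that the mapping is written down in advance, so nobody has to improvise when the CVE lands.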

In other words, always know where you stand and what threatens that position. You know, competent risk management. I do understand that many organizations are not large enough to have a person, or even a team, to do this. It requires keeping current with a lot of stuff, and I know how much time I invest in that, out of sheer interest in it. You can get the risk analysis as consulting and the maintenance as managed security. If you are lucky enough to have a guy or gal that likes doing this and is good at it, keep them happy with their job. Professional IT risk management is a survival factor for your organization. Just do not expect you can do this on the cheap. That will come back to bite you and may kill you.

That said, one of the really good things about Linux is that if you use a sane distro, you can replace or patch the kernel yourself with minimal risk. I have been running Debian and Devuan with custom kernels from kernel.org forever. Obviously, that is something that needs competent system administration, but these days that is non-optional anyway.
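An end-to-end sketch of that workflow on a Debian-style system, assuming the kernel build dependencies and Debian packaging tools are installed; the version number and -j count are illustrative:

```shell
# Hypothetical build of a vanilla kernel.org kernel as Debian packages.
VER=6.6.30
wget "https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-$VER.tar.xz"
tar xf "linux-$VER.tar.xz"
cd "linux-$VER"
cp /boot/config-"$(uname -r)" .config   # start from the running kernel's config
make olddefconfig                        # defaults for any newly added options
make -j"$(nproc)" bindeb-pkg             # produces ../linux-image-*.deb etc.
sudo dpkg -i ../linux-image-"$VER"*.deb  # old kernel stays installed as fallback
```

Keeping the previous kernel installed and selectable in the boot loader is what makes this low-risk: if the new one misbehaves, you boot the old one.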
