Comment It's not an invalid situation... (Score 1) 128

If you deploy new software where it does not improve the user experience, then it's valid for the userbase to punish that move to a reasonable extent.

Not to the end result in this article, of course, but sharing the pain inflicted by 'change for change's sake' with those who inflict it makes a lot of sense. Sometimes there are requirements that genuinely matter, make life harder for users, and justify inflicting some pain on the userbase; but in my experience the vast majority of change comes from a false sense that something must evolve or else it is dead. That sentiment should be punished.

Comment Is there anything new here? (Score 1) 143

So, in one die, it's a little interesting, though GPU stream processors and Intel's Xeon Phi would seem to suggest this is not that novel. The latter even lets you ssh in and see the core count for yourself in a very familiar way (it's not exactly the easiest of devices to manage, but it's still a very real-world example of how this isn't new to the world).
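For the curious, it really does look like any other Linux box from the inside; once ssh'd into the card's embedded Linux, counting cores is the usual /proc/cpuinfo exercise. A generic Python sketch, nothing Phi-specific about it:

    # count logical processors the way you would on any Linux SMP box
    with open("/proc/cpuinfo") as f:
        cores = sum(1 for line in f if line.startswith("processor"))
    print(cores, "logical processors visible")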

The 'not all cores are connected' part is even older. In the commodity space, HyperTransport and QPI can be used to construct topologies that are not a full mesh. So not only is it not all cores on a bus, it is also not all cores mesh-connected, which are the two attributes claimed as novel here.

Basically, as of AMD64 people had relatively affordable access to an implementation of the concept, and as of Nehalem both major x86 vendors had it in place. Each die included all the logic needed to implement a fabric, with the board providing essentially passive traces.

Comment Theory versus practice. (Score 2) 305

A filtering firewall in theory can be made just as secure as a NAT gateway. It isn't even particularly hard to do so, but it takes marginally more work than being wide open. Doing NAT is more complex than a secure firewall ruleset, but the circumstances are widely different.

The issue comes down to failure mode. If a NAT fails to work correctly because it wasn't configured quite right, then you get no forwarding, but you are secure. If a forwarding device fails to work correctly, then you can get wide open forwarding, and be less secure.

A common consumer will notice the former but will not notice the latter. Considering the typical case of a customer buying the cheapest equipment on the shelf and never touching it again, a crappy vendor still has to get the NAT right or else fail completely in the market, but they don't really have to get the forwarding rule restrictions right to be big in the marketplace. Anyone well versed in the area would be baffled that a vendor can produce a NAT-capable device easily enough but flub the much easier filtering rule case, but unfortunately the laziest effort must always be assumed.
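To make the 'same posture, different failure mode' point concrete, here's a toy Python sketch (an illustration, not a real firewall): a stateful default-deny filter and a NAT both end up passing only inbound traffic that matches state created from the inside.

    # toy connection-state table keyed by (src, dst, dport); purely illustrative
    state = set()

    def outbound(src, dst, dport):
        # a connection initiated from inside creates state for the replies
        state.add((dst, src, dport))

    def inbound_filter(src, dst, dport):
        # stateful default-deny filter: drop unless the flow was initiated inside
        return (src, dst, dport) in state

    def inbound_nat(src, dst, dport):
        # NAT: without a translation entry there is nowhere to forward the packet
        return (src, dst, dport) in state

    outbound("192.0.2.10", "198.51.100.5", 443)
    print(inbound_filter("198.51.100.5", "192.0.2.10", 443))  # True: solicited reply
    print(inbound_filter("203.0.113.9", "192.0.2.10", 22))    # False: unsolicited

The difference is what a misconfiguration leaves you with: the NAT has nowhere to forward and fails closed, while a botched filter ruleset can quietly default to forwarding everything.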

So unfortunately, those who would be nothing but empowered by freedom of addressability will be burdened by being in the same ecosystem as people who don't care and vendors that don't really care either.

Comment My assumption... (Score 1) 236

I assumed that was the sentiment of the engineers themselves.

I don't have a lot of contact with people in *this* field, but I do in other similarly niche fields with very concrete yet limited demand (i.e. not aligned with the buzzwords: deep in the muck where things *must* happen, but 99.99% of the ecosystem takes it for granted and doesn't want to actually touch it). In those fields, the work was once upon a time not a 'given', and so young blood was actively pushed in. Now it is a 'given' (despite requiring continuous evolution to keep things up), so the only people left in the field are mostly retirement-eligible people who have not retired. Overwhelmingly, the impression I get is that they are so *passionate* about the work that they can't bear to see it go untended.

Of course, some of them would declare that, but even when a fluke young person interested in the work does come along, they still can't bring themselves to retire, since the work is simply enjoyable to them.

Personally, the moment I realistically could retire, I'm out. I love my work and all, but I love not working even more.

Comment Re:Graded on a curve... (Score 1) 231

For my personal workstation, I favor the Ubuntu cycle, though I know the cycle is too short for a lot of other scenarios; RHEL more closely hits those sensibilities in schedule. Unfortunately I've become less and less enamored of the content of the Ubuntu cycle as they seem to steer things toward the likes of Unity and Mir.

However, as you say, RHEL will go crazy backporting features to old codebases without relaying an appropriate sense of the risk. An update from 6.4 to 6.5 is a lot more drastic than a service pack from most competitors, and then RH will refuse to support 6.4, claiming that '6.5 is a safe update', even if some third-party stack cannot work correctly with 6.5. I do believe RH has the expertise to backport features better than anyone else, but it is a bad idea for *anyone* to do it. I wish Fedora and RH weren't so disparate, and that Fedora would behave more like a typical Ubuntu release in terms of how conservative and change-averse it is post-release.

Comment Thanks... (Score 1) 231

This is precisely the point I keep in mind but always forget to explicitly say...

Where open source has great strength is where people have had to scratch their own itches. Now that more and more people's livelihoods amount to 'developer supporting *other* people's needs that I don't actually live with day to day', a lot of stuff is evolving in an unfortunate direction. Additionally, when someone *lives* inside a problem too much, they lose touch with its relative importance and the degree to which others actually deal with it, and they are willing to learn a complex workaround for what is, to everyone else, a minor nuisance.

Comment Re:Guess it depends on situation... (Score 1) 231

I don't know if they ever required WMI. I think WMI is still supposed to return the right answers, but some calls just go direct. I rarely dabble in the Windows world, but it was things like enumerating network devices and disks. Once upon a time, if the WMI provider was messed up, my stuff did not work. Now the WMI calls can hang and my stuff still works. I might have also changed calls; I'm a bit vague in my recollection. I tended to review the MS documentation and change things around when it looked like the documentation was preferring an alternative strategy. Mind you, when the WMI calls did work, they were just as accurate; it's just that in my scenario WMI could hang completely. I am in a position to see WMI hang more often than I think most others are (i.e. before a device is actually shipped).

Which is mostly the nature of my complaint: it takes extra development time and extra layers to enable something like WMI above and beyond assuring the functionality of the device and, for lack of a better word, its 'native' instrumentation. It adds complexity, particularly a CIM broker, which I have never seen be very resilient no matter who implemented it; that makes me suspect a broker has a harder job than I would guess it should.
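To give a flavor of the distinction (a Python sketch rather than my actual .NET code, and assuming the third-party wmi and psutil packages on a Windows box): the first path depends on the WMI/CIM broker being healthy, while the second enumerates disks without it.

    import wmi      # third-party package; goes through the WMI/CIM broker
    import psutil   # third-party package; queries the OS more directly

    # WMI path: hangs or fails if the broker or a provider is wedged
    for disk in wmi.WMI().Win32_LogicalDisk():
        print("WMI:", disk.DeviceID, disk.Size)

    # non-WMI path: keeps working even when the broker is unhappy
    for part in psutil.disk_partitions():
        print("direct:", part.device, part.mountpoint)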

Comment Guess it depends on situation... (Score 1) 231

I will say I didn't mess with DSC, but a lot of the .NET calls no longer rely upon WMI working. WMI, like sfcbd and Pegasus, will completely cock up if a provider so much as looks at it funny. Previously, only utilities like ipconfig would keep working, and anything making WMI calls was just SOL. I noticed, in one of the various scenarios where WMI had gone belly up, that the calls I had moved to in .NET didn't mind at all for a lot of the stuff. Someone at MS suggested that WMI/CIM was being stepped around by design more and more over time, but it could be one portion of MS versus another.

Comment Re:... and with systemd. (Score 1) 231

instead of developing an alternative to systemd

The stance amongst those opposed to systemd was that what wasn't broken didn't need fixing. Some people disagree, think it does need to be fixed, and consider systemd the fix. People objecting to systemd largely don't have to create an alternative; they are content with the Linux distributions as they were.

Instead of helping KDE and Gnome supporting non-systemd systems,

KDE, at least, I thought purported to continue supporting non-systemd systems already. The Gnome 3 developers are very linked in with the systemd developers, and as a whole they prioritize the purity of their vision over any criticisms. Perhaps that's appropriate: electing to focus on bringing their vision to life for those who would follow it, and letting the rest move on to KDE or Xfce or MATE or whatever. I personally don't care about Gnome Shell as it doesn't serve my needs anymore either, but I can accept that they are looking after their user experience. I also wouldn't mind systemd so much except that it is becoming unavoidable for anyone who wants to retain compatibility with ongoing projects in Linux.

Comment Graded on a curve... (Score 2) 231

RHEL is about as change-averse as a *Linux* company gets. They have an unfortunate balance to strike between fulfilling the mission of a solid, predictable experience and not appearing to lag too far behind the upstream that people are well aware of. At times, I will say, RHEL is in denial about ABI-breaking changes (e.g. swearing up and down that a kernel driver should compile and work against their rather dramatically backported base just as if it were really the kernel version advertised in uname output).

If you want 'stuff that never changes while still giving new hardware support', you are pretty much stuck with AIX or mainframe at this point.

Comment Highly subjective... (Score 1) 231

For a large chunk of users, no difference.

For people who dig deep in, huge difference, with very polarizing attributes. Some people like the goodies it brings, but it changes a whole lot of stuff in the process without much care for appeasing those who appreciated how things worked.

Basically, systemd is building something different. Some say better, some say worse. I happen to be in the latter camp even after using it at significant length.

Comment Re:That's why IPMI should only live on intranets. (Score 1) 62

They have encryption, but it is not mandatory

The same can be said of http and https; nothing specific to IPMI there.

it is shared secret rather than DH or similar.

Well, that may be a better way to settle the symmetric key value, but then you have to discuss authentication as a separate item, since Kuid currently serves both to establish keys and to authenticate the parties to one another. SNMPv3 USM seems to be a pretty appropriate model for this scenario (where certificate systems are likely to be ignored). It is pretty similar in kind to IPMI, except that the client goes first and the key is localized based on a server identifier, meaning the secret need not be stored in the clear on the management target.
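The localization step I have in mind is roughly the RFC 3414 recipe; a quick Python sketch of the idea (illustrative only, not a drop-in implementation):

    import hashlib

    def usm_ku(password: bytes, hashfn=hashlib.sha1) -> bytes:
        # RFC 3414: hash the password repeated out to 1 MiB
        stretched = (password * (1048576 // len(password) + 1))[:1048576]
        return hashfn(stretched).digest()

    def usm_localize(ku: bytes, engine_id: bytes, hashfn=hashlib.sha1) -> bytes:
        # Kul = H(Ku || engineID || Ku): the agent stores a key bound to its own
        # identity rather than the user's actual secret
        return hashfn(ku + engine_id + ku).digest()

The nice property is that a compromised management target only leaks a key usable against that one engine ID, not the password itself.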

Anything involving MD5 needs to go.

Well, for one, IPMI does SHA256 or SHA1. For another, I'm unaware of any attack, even against MD5, that would compromise the security when it is used in an HMAC scheme, which is how the hash function is used in IPMI.
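For the HMAC point, the construction is just the standard-library pattern below (the pattern only, not IPMI's actual RAKP message layout); known collision attacks on the underlying hash don't let an attacker forge one of these without the key.

    import hmac, hashlib

    # hypothetical shared secret and session challenge, purely for illustration
    kuid = b"not-a-real-bmc-password"
    challenge = bytes(range(16))

    auth_code = hmac.new(kuid, challenge, hashlib.sha256).hexdigest()

    # the verifying side recomputes and compares in constant time
    expected = hmac.new(kuid, challenge, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(auth_code, expected))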

Comment Good and bad... (Score 1) 231

XFS and PCP are good things to include.

systemd and OpenLMI I find worrisome, systemd being the one that is impossible to ignore; OpenLMI at least gets something of a pass for the ability to totally ignore it.

systemd has been hashed out time and time again, but OpenLMI is something rarely discussed. DMTF has championed CIM for eons, and the architecture shows its age: it defines things the way a buzzword-compliant enterprise would have defined an architecture amidst the dotcom boom of the late 90s (complete with XML over SOAP and all sorts of other nastiness). It represents drinking the Kool-Aid after much of the ecosystem has moved on (Microsoft has de-emphasized CIM, and many of the enterprise vendors that once always provided and demanded CIM providers have come around to the view that CIM-style instrumentation isn't perhaps the best idea).
