
Comment: Re:That probably won't change... (Score 1) 412

by Junta (#47412283) Attached to: Python Bumps Off Java As Top Learning Language

Those early days are over and 3.x is intentionally designed to be more rational and consistent.

The issue is that this is *always* the case. In the early Python 2 days, they also thought the 'early' days were over. I haven't dealt with Python 3 in sufficient depth to be keenly aware of any real gotchas, but the fact that they decided to add the explicit unicode syntax back in is a sign that they have at least continued to indulge in flux to fix bad design decisions. In that specific case I don't see a downside, since it increases the ability to write python2/3-agnostic code, so I won't call it an example of breaking 3.x series code. Still, it seems clear to me that the Python project can't quite exercise enough restraint in its enthusiasm for improved syntax and features for anyone to walk away confident that code won't break within a couple of 3.x generations.
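For reference, the unicode syntax change mentioned above is PEP 414, which restored the u'' literal prefix in Python 3.3 (it had been removed in 3.0) precisely so one source file could parse on both major versions. A minimal illustration:

```python
# PEP 414 (Python 3.3) re-added the u'' prefix that 3.0 removed,
# so this line parses on Python 2.x and on 3.3+ alike.
# On Python 3.0-3.2 it was a SyntaxError.
s = u"caf\u00e9"

# On Python 3, the u prefix is a no-op: u"" and "" are the same str type.
assert s == "caf\u00e9"
assert type(s) is type("")
```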

It's almost like a curse, the more popular and energetic a language implementation is, the more likely it is to experience some incompatible evolution.

Comment: Seems a terrible practice.. (Score 1) 412

by Junta (#47411505) Attached to: Python Bumps Off Java As Top Learning Language

As the hip kids would say, 'un-pythonic'. It's sort of like how perl can be perfectly readable until people go and start using all the language features in 'clever' ways. Making a dict on the fly and indexing it in the same statement is the sort of thing I could see rendering python code hard to read and follow...
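As a sketch of the idiom being criticized (the variable names here are made up for illustration):

```python
n = 7

# 'Clever': build a dict literal and index it in the same statement.
label = {0: "even", 1: "odd"}[n % 2]

# Clearer: name the mapping first, then look it up.
parity_names = {0: "even", 1: "odd"}
label = parity_names[n % 2]
assert label == "odd"
```

Both forms do the same thing; the second just gives the reader a name to hang the intent on.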

Comment: That probably won't change... (Score 1) 412

by Junta (#47411495) Attached to: Python Bumps Off Java As Top Learning Language

Python is a language with a fascinating tendency to break existing code on version upgrades. Yes, there is very clearly the Python 2 to Python 3 break, but even going from Python 2.3 to 2.6 can create worlds of headaches.
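One concrete example of that kind of minor-version break (my own example, not necessarily what bit the parent): PEP 343 made 'with' and 'as' full keywords in Python 2.6, so 2.3-era code that used them as ordinary names stopped parsing. This can be demonstrated from any modern Python, since the words stayed reserved:

```python
# 'as' was a legal identifier through Python 2.5 but became a reserved
# word in 2.6 (and remains one in all of Python 3), so old code like
# "as = 1" no longer parses at all.
try:
    compile("as = 1", "<example>", "exec")
    became_keyword = False
except SyntaxError:
    became_keyword = True

assert became_keyword
```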

But then again, no language is perfect. Old C code is frequently hard to build on modern compilers, and while perl had a very long history of not needing anything to be touched, some of the disillusionment around perl 6 has caused even perl 5 to get a bit fidgety as of late.

Comment: Re:Not happy about the concept, however... (Score 2) 160

by Junta (#47375725) Attached to: Facebook Fallout, Facts and Frenzy

I fail to see how it's that different from the manipulation that mass media does, which also does not get informed consent. There is the facet of it being more targeted, but the internet is already about targeted material (hopefully done with the best interest of the audience in mind; practically speaking, with the best interests of the advertiser). They just stop short of calling it an 'experiment' (in practice, they are continually experimenting on their audience), and somehow, by not trying to apply scientific rigor, they get off the hook.

I'm not saying that Facebook is undeserving of outrage, I'm saying that a great deal of the media behavior is similarly deserving and somehow we are complacent with that situation.

Comment: Not happy about the concept, however... (Score 2) 160

by Junta (#47375307) Attached to: Facebook Fallout, Facts and Frenzy

My question is why there is particular outrage when they do it as part of a science experiment, when it is widely accepted for mass media to do the exact same thing for revenue.

National and local news programs basically live and breathe this sort of thing constantly. They schedule their reporting and editorialize in ways to boost viewership: stirring up anger, soothing with feelgood stories, teasing with ominous advertisements, all according to presumptions about the right way to maximize viewer attention and dedication. 'What everyday item in your house could be killing you right now? Find out at 11.'

I don't have a Facebook account precisely because I don't like this sort of thing, but I think it's only fair to acknowledge that this dubious manipulative behavior is ubiquitous in our media, not just in Facebook's science experiments.

Comment: It's not an invalid situation... (Score 1) 128

by Junta (#47297379) Attached to: Prisoners Freed After Cops Struggle With New Records Software

If you deploy new software that does not improve the user experience, then it's valid for the userbase to punish that move to a reasonable extent.

Not to the end result in this article, of course, but sharing the pain inflicted by 'change for change's sake' with those who inflict it makes a lot of sense. Sometimes there are legitimate requirements that make life harder for users and justify inflicting some pain on the userbase, but in my experience the vast majority of change stems from a false sense that something must evolve or else it is dead. That sentiment should be punished.

Comment: Is there anything new here? (Score 1) 143

by Junta (#47297345) Attached to: Researchers Unveil Experimental 36-Core Chip

So, in one die, it's a little interesting, though GPU stream processors and Intel's Phi would seem to suggest this is not that novel. The latter even lets you ssh in and see the core count for yourself in a very familiar way (and though it's not exactly the easiest of devices to manage, it's still a very real-world example of how this isn't new).

The 'not all cores are connected' aspect is even older. In the commodity space, HyperTransport and QPI can be used to construct topologies that are not a full mesh. So not only is it not all cores on a bus, it is also not all cores mesh-connected, which are the two attributes claimed as novel here.

Basically, as of AMD64 people had relatively affordable access to an implementation of the concept, and as of Nehalem both major x86 vendors had it in place. Each die included all the logic needed to implement a fabric, with the board providing essentially passive traces.

Comment: Theory versus practice. (Score 2) 305

by Junta (#47231931) Attached to: When will large-scale IPv6 deployment happen?

A filtering firewall can in theory be made just as secure as a NAT gateway. It isn't even particularly hard to do so, though it takes marginally more work than being wide open. Doing NAT is more complex than a secure firewall ruleset, but the circumstances are widely different.

The issue comes down to failure mode. If a NAT fails to work correctly because it wasn't configured quite right, then you get no forwarding, but you are secure. If a forwarding device fails to work correctly, then you can get wide open forwarding, and be less secure.
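That asymmetry can be sketched with a toy model (purely illustrative; the function names and rule shapes are mine, not any real device's logic): a NAT needs an explicit translation entry before anything is forwarded, so misconfiguration fails closed, while a filter with a permissive default fails open when its rules are missing or misapplied.

```python
def nat_forward(translation_table, packet):
    # NAT: traffic is only forwarded via an explicit translation entry.
    # An empty or botched table means nothing gets through (fail closed).
    return translation_table.get(packet["dst"])

def filter_forward(rules, packet):
    # Filter with an implicit ACCEPT default: if no rule matches
    # (e.g. the ruleset failed to load), everything is forwarded (fail open).
    for rule in rules:
        if rule["dst"] == packet["dst"]:
            return rule["action"]
    return "ACCEPT"  # the dangerous default

pkt = {"dst": "192.0.2.10"}
assert nat_forward({}, pkt) is None          # broken NAT: traffic blocked
assert filter_forward([], pkt) == "ACCEPT"   # broken filter: traffic allowed
```

A real firewall avoids this by making the default policy DROP, which is exactly the "marginally more work" a lazy vendor may skip.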

A common consumer will notice the former, but will not notice the latter. Considering the typical use case of a customer buying the cheapest equipment on the shelf and just letting it go, a crappy vendor still has to get NAT right or else fail completely in the market, but they don't really have to get the forwarding rule restrictions right to be big in the marketplace. Someone well versed in the area would be baffled that a vendor can produce a NAT-capable device easily enough but flub the much easier filtering rule case, but unfortunately the laziest effort must always be assumed.

So unfortunately, those who would be nothing but empowered by freedom of addressability will be burdened by sharing an ecosystem with people who don't care and vendors that don't really care either.

Comment: My assumption... (Score 1) 236

by Junta (#47229565) Attached to: Are the Glory Days of Analog Engineering Over?

I assumed that was the sentiment of the engineers themselves.

I don't have a lot of contact with people in *this* field, but I do in other similarly niche fields with very concrete yet limited demand (i.e. not aligned with the buzzwords; deep in the muck where things *must* happen, but 99.99% of the ecosystem takes it for granted and doesn't want to actually touch it). In those fields, it was once upon a time not a 'given', and thus young blood was actively pushed in. Now it's a 'given' (despite requiring continuous evolution to keep things up), so the only people left in the field are mostly retirement-eligible people who have not retired. Overwhelmingly, the impression I get is that they are so *passionate* about the work that they can't bear to see it go untended.

Of course, some of them would declare that, but even when a fluke young person interested in the work does come along, they still can't bring themselves to retire, since the work is simply enjoyable to them.

Personally, the moment I realistically could retire, I'm out. I love my work and all, but I love not working even more.

Comment: Re:Graded on a curve... (Score 1) 231

by Junta (#47213555) Attached to: Red Hat Enterprise Linux 7 Released

For my personal workstation, I favor the Ubuntu cycle, though I know that cycle is too short for a lot of other scenarios, so RHEL more closely hits those sensibilities in schedule. Unfortunately, I've become less and less enamored of the content of the Ubuntu cycle as they try to steer things toward the likes of Unity and Mir.

However, as you say, RHEL will go crazy backporting features to old codebases without appropriately conveying the risk. An update from 6.4 to 6.5 is a lot more drastic than a service pack from most competitors, and then RH will refuse to support 6.4, claiming that '6.5 is a safe update', even if some third-party stack cannot work correctly with 6.5. I do believe RH has more expertise at backporting features than anyone else, but it is a bad idea for *anyone* to do it. I wish Fedora and RHEL weren't so disparate, and that Fedora would behave more like a typical Ubuntu release in terms of how conservative and change-averse it is post-release.

Comment: Thanks... (Score 1) 231

by Junta (#47211739) Attached to: Red Hat Enterprise Linux 7 Released

This is precisely the point I keep in mind but always forget to explicitly say...

Where open source has great strength is where people have had to scratch their own itches. Now, as more and more people's livelihoods become 'developer supporting *other* people's needs that I don't actually live with day to day', a lot of software is evolving in an unfortunate direction. Additionally, when someone *lives* inside a problem too much, they lose touch with how important it really is to others, how often others actually hit it, and whether others would understand a complex way to avoid the nuisance.

Comment: Re:Guess it depends on situation... (Score 1) 231

by Junta (#47211327) Attached to: Red Hat Enterprise Linux 7 Released

I don't know if they ever required WMI. I think WMI is still supposed to give the right answers, but some calls just go direct now. I rarely dabble in the Windows world, but it was things like enumerating network devices and disks. Once upon a time, if the WMI provider was messed up, my stuff did not work. Now the WMI calls can hang and things still work. I might also have changed calls; I'm a bit vague in my recollection. I tended to review the MS documentation and change things around when the documentation appeared to prefer an alternative strategy. Mind you, when the WMI calls did work, they were just as accurate; it's just that in my scenario WMI could hang completely. I am in a position to see WMI hang more often than most people are (i.e. before a device is actually shipped).

Which is mostly the nature of my complaint: it takes extra development time and extra layers to enable something like WMI, above and beyond assuring the functionality of the device and its, for lack of a better word, 'native' instrumentation. It adds complexity, and in particular a CIM broker, which I have never seen be very resilient no matter who implemented it, which makes me suspect a broker has a harder job than I would guess.

Comment: Guess it depends on situation... (Score 1) 231

by Junta (#47206229) Attached to: Red Hat Enterprise Linux 7 Released

I will say I didn't mess with DSC, but a lot of the .NET calls no longer rely upon WMI working. WMI, like sfcbd and Pegasus, will completely cock up if a provider so much as looks at it funny. Previously, only utilities like ipconfig would keep working, and anything making WMI calls was just SOL. I noticed in one of the various scenarios where WMI had gone belly up that the .NET calls I had moved to didn't mind at all for a lot of the stuff. Someone at MS suggested that WMI/CIM was being stepped around by design more and more over time, but it could be one part of MS versus another.

"Engineering without management is art." -- Jeff Johnson

Working...