Comment Re:Courage... (Score 1) 207

The funny thing in your story is that the word that narrows it down the most is the pronoun 'she'. I would guess you work for either HP or IBM. Massive stock buybacks and continual layoffs are the modus operandi of most of these companies, but female CEOs are a little more rare. Of course they all do and say the exact same things, so they could probably replace their CEOs with chatbots that just always say 'buy back some more stock and lay off more people' and no one would notice.

Comment Courage... (Score 2) 207

If we don't have the courage to change

It can be debated whether this is a necessary thing or a prudent thing or whatever, but regardless of those debates, this is a pretty stupid thing to say. I don't think a CEO should ever characterize their decision to terminate other people's jobs as 'courageous'. There isn't anything remotely courageous about any of the strategy he laid out. It's not even particularly bold or daring; it's basically the exact thing every executive of every tech company has been saying about their respective companies for a while now.

Not having much of a horse in this race (I don't work for Cisco or even a Cisco client), I can't comment on whether it's the right choice or not, but it really rubbed me the wrong way to see him refer to layoffs as an act of courage.

Comment Re:Rust (Score 3, Insightful) 57

One issue is that generally such projects are actually pretty niche and get developed with only that niche in mind. There simply isn't a pool of eager developers to tackle only your specific issue.

If you can think about modularity and develop some key components that are more generally useful as distinct projects, you may have better luck.

But overall, for any given single project, open source building a large development community is the exception rather than the rule, even if you do your best. Even very ubiquitous projects that play a role in nearly every Linux system often have maybe one or two developers who are really familiar with them.

Comment Re:Better ways to do it. (Score 1) 57

If you do use the GPL *and* have copyright assignment, there's actually a case to be made for dual licensing: GPL for those who play the open source game, and a proprietary commercial license for everyone else. This is the 'get free coders to do work for you' business model, which seems pretty disingenuous, but at least there is a logic to a corporate-sponsored project going for the GPL.

What surprises me is that in most scenarios where corporations pick the license, they pick a BSD-style license. I can understand them wanting that property in *other* people's code, but it's surprising they wouldn't want more assurance that their own work won't come back to compete with them commercially when they have the choice.

Comment Re:CLA (Score 1) 57

CLA with copyright assignment opens the door to having your contributions abused by the copyright holders.

CLA without copyright assignment is usually just the 'project' covering their ass in case of problematic contributions that infringe copyright or patents.

Comment Re:CLA (Score 1) 57

The general intent of many CLAs is to make the contributor attest that they aren't doing something like injecting patented capability or violating someone's copyright. The key distinction between an open source product being redistributed by someone who adds problematic capability, versus having that capability injected directly, is that in the latter case the curator of the project is the one who gets sued. So if stuff is bolted on but not coming back, the weaker assurance of a GPL or BSD style license is acceptable, because the risk is not the project owner's. The attestation is certainly not sufficient to be confident that nothing is wrong, but it's a stronger basis for passing some culpability on to the contributor in the event of issues.

The sort of CLA you are talking about is the kind with copyright assignment. The most prominent example of this is actually the FSF requiring copyright assignment for any accepted contribution. These can be employed when a company or organization wants to reserve the right to modify licensing. In the FSF's case, this is why they can change license terms from GPLv2 to GPLv3, whereas in a project like the kernel, they cannot change the license because there are too many copyright holders. I actually don't know of a corporate-sponsored CLA involving copyright assignment.

I had previously assumed CLA implied copyright assignment until I was forced to actually cope with a couple of CLAs and looked more carefully.

Comment Re:A rather simplistic hardware-centric view (Score 1) 145

Software reliability over the past few decades has shot right up.

I think this is a questionable premise.

1) Accurate, though it has been accurate for over a decade now.
2) Things have improved security-wise, but reliability, I think, could be another matter. When things go off the rails, it's now less likely to let an adversary take advantage of that circumstance.

3) Try/catch is a potent tool (depending on the implementation it can come at a cost), but the same things that caused segmentation faults with a serviceable stack trace in a core file cause uncaught exceptions with a serviceable stack trace now. It does make it easier to write code that tolerates some unexpected circumstances, but ultimately you still have to plan application state carefully or else be unable to meaningfully continue after the code has bombed (see the first sketch after this list). This is something that continues to elude a lot of development.
4) Actually, the pendulum has swung back again in the handheld space to 'apps'. In the browser world, you've traded 'DLL hell' for browser hell. DLL hell is a sin of Microsoft's for not having a reasonable packaging infrastructure to help manage that circumstance better. In any event, now a server application crash, a client crash, *or* a communication interruption can screw up the application experience instead of just one of them.

5) Virtualized systems I don't think have improved software reliability much. They have in some ways made certain administration tasks easier and enabled better hardware consolidation, but it comes at a cost. I've seen more and more application vendors get lazy and just furnish a 'virtual appliance' rather than an application. When the bundled OS requires updates for security, the update process is frequently hellish or outright forbidden. You need to update OpenSSL in their Linux image, but other than that, things are good? Tough, you need to go to version N+1 of their application and deal with API breakage and such just because you dared to want a security update for a relatively tiny portion of their platform.

6) I think there's some truth in it, but 32- vs. 64-bit does still rear its head in these languages, particularly since there are a lot of performance-related libraries written in C for many of those runtimes (see the second sketch after this list).

7) This seems to contradict the point above. Python fits that description pretty well.

8) This has also had a downside: people jumping to SQL when it doesn't make much sense. Things with extraordinarily simple data to manage get the 'put it in SQL' treatment pretty quickly (see the third sketch after this list). Some of the 'NoSQL' sensibilities have brought some sanity in some cases, but in others they have replaced one overused tool with another equally high-maintenance beast.

9) True enough. There is some signal/noise issue, but it's better than nothing at all.
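
On point 3, a minimal sketch (Python, with hypothetical names) of the state problem: catching the exception is the easy part; resuming meaningfully still requires the application state to have been planned for partial failure.

    import traceback

    def process_record(record):
        # Imagine this is deep in the application; a malformed record raises here.
        return record["value"] * 2

    def run(records):
        results = []
        for record in records:
            try:
                results.append(process_record(record))
            except (KeyError, TypeError):
                # Same serviceable stack trace a core file used to give us...
                traceback.print_exc()
                # ...but what now? Unless state was planned for partial failure,
                # silently skipping the record can leave 'results' inconsistent.
        return results

    print(run([{"value": 1}, {"wrong_key": 2}, {"value": 3}]))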
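
On point 6, a quick Python check of which flavor of runtime you're on; any C extension module has to match this pointer width, which is where 32- vs. 64-bit issues still surface.

    import struct
    import sys

    pointer_bits = struct.calcsize("P") * 8  # size of a C pointer in this build
    print(f"This interpreter is {pointer_bits}-bit")
    print(f"sys.maxsize = {sys.maxsize}")    # 2**31-1 on 32-bit builds, 2**63-1 on 64-bit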
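
And on point 8, a minimal sketch of the alternative for extraordinarily simple data: a flat file round-trips with no schema, no server, and no SQL at all (the file name is hypothetical).

    import json
    from pathlib import Path

    config_path = Path("settings.json")  # hypothetical file name

    settings = {"theme": "dark", "refresh_seconds": 30}
    config_path.write_text(json.dumps(settings, indent=2))

    loaded = json.loads(config_path.read_text())
    assert loaded == settings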

I think a big issue is that at the application layer, there has been more and more pressure for rapid delivery and iteration, with a false sense of security coming from unit tests (which are good, but not *as* good as some people feel). Stable branches that take bugfixes only are rarer now, and more and more users are expected to ride the wave of interface and functional changes if they want bugs fixed at all. 'Good enough' is the mantra of a lot of application development; if a user has to restart or delete all their configuration before restarting, oh well, they can cope.

Comment Re:A rather simplistic hardware-centric view (Score 2) 145

www.scalemp.com does what you request.

It's not exactly all warm and fuzzy. Things are much improved from the Mosix days in terms of having the right data available and the right kernel scheduling behaviors (largely thanks to the rise of NUMA as the usual system design). However, there is a simple reality: the server-to-server interconnect is still massively higher latency and lower bandwidth than QPI or HyperTransport. So if you run a 'single system' application designed around the assumption of no-worse-than-QPI inter-processor connectivity, it still won't be that nice, and an application that manages the messaging more explicitly will fare better.
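
A back-of-envelope sketch of why that matters; the latency numbers below are rough assumptions for illustration, not measurements:

    qpi_latency_ns = 100       # assumed remote-socket memory access over QPI
    cluster_latency_ns = 2000  # assumed node-to-node access over a fast interconnect

    penalty = cluster_latency_ns / qpi_latency_ns
    print(f"Each cross-node access is ~{penalty:.0f}x the code's design assumption")

    # Even a small fraction of remote accesses dominates:
    remote_fraction = 0.05
    effective_slowdown = (1 - remote_fraction) + remote_fraction * penalty
    print(f"At 5% remote accesses, runtime is roughly {effective_slowdown:.2f}x")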

But if you have to use an application that can do multi-core but not multi-node and need to force it to scale *somewhat*, ScaleMP can help things out significantly.

Comment Re:Correlation not Causation (Score 1) 227

In this specific case we can split hairs, but in the end they are singling out genetics, in a relatively large set of uncontrolled variables, as the facet to focus on. Yes, like any good scientist they make the distinction, but pretending that aside from genetics a pair of fraternal twins and a pair of identical twins have *no other* fundamentally different life experiences is a long shot, and it strongly suggests belief in a causative hypothesis and that they conducted the research with that assumption in mind. Identical twins raised together, I suspect, generally differ from fraternal twins in more interesting ways than merely their identical genes.

I personally suspect the hypothesis is true, that genetics plays a major role. However, *this* particular study is almost certainly full of non-genetic correlations that line up with the genetic correlations, making it difficult to say anything for sure on the genetic front versus another variation on the environmental front.

Comment Re:Correlation not Causation (Score 2) 227

"The correlation between reading and mathematics ability at age twelve has a substantial genetic component

The problem is "all siblings presumably experience similar degrees of parental attentiveness, economic opportunity and so on" which is of course very unlikely to be a

I think the issue at hand is that it isn't quite controlled well enough to trumpet the genetic component as *the* correlation of interest. Other factors are handwaved away by saying "all siblings presumably experience similar degrees of parental attentiveness, economic opportunity and so on". Anyone who has grown up alongside twins (there were actually a few sets of twins in my town growing up, two sets identical, one set mixed-gender) knows this is too much to presume.

When people look identical, there is a much stronger expectation that they *are* fundamentally identical. The identical twin sets both had rhyming names, but the other twins did not. Parents and teachers and fellow kids naturally treat fraternal twins like any other set of siblings, but identical twins do not receive the same experience. People assume they like the same things, that they should hang out together, that they *should* be good at the same things. Many believe there is some mystical/telepathic link between identical twins.

Fraternal twins are 'just siblings', to the extent that until it's explicitly mentioned no one may even realize they are *twins*. Identical twins are blatantly obvious from the moment you see them, and they trigger a large amount of preconception before anyone so much as utters a word. All these societal expectations undoubtedly have *some* impact on their development that shouldn't be so casually dismissed.

Basically, there is no reason to believe identical and fraternal twins receive a comparable life experience in aggregate when raised together. With that in mind, the study should be saying there is a correlation for identical versus fraternal twins rather than 'there is a correlation with genetics'.
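
For the record, the standard twin-study heritability estimate (Falconer's formula) shows exactly where that assumption bites; the correlations below are made up for illustration:

    r_identical = 0.70  # hypothetical trait correlation for identical (MZ) twins
    r_fraternal = 0.45  # hypothetical trait correlation for fraternal (DZ) twins

    # h^2 = 2 * (r_MZ - r_DZ) attributes the *entire* MZ/DZ gap to genetics,
    # which only holds if both kinds of twins share environments equally.
    heritability = 2 * (r_identical - r_fraternal)
    print(f"Estimated heritability: {heritability:.2f}")

    # If identical twins are also treated more alike, part of that gap is
    # environmental and the estimate is inflated.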

Comment Re:Switch away from Skype and Windows (Score 1) 74

But at the whole, UEFI Secure Boot along with Windows 8 signed boot-loader and OS is *very* hard to circumvent.

Only if you are paying attention during boot, and only when the attack comes from within the OS. Of course, MS could have provided the within-the-OS protection themselves by being very careful in how they treated the system partition, without requiring firmware to verify it. If you have full control of the console and/or the device, you can do exactly what you describe: boot a valid OS with a malicious configuration designed to rootkit the OS that's there, or impersonate the OS that was supposed to be there to gain information useful for accessing the presumably cloned disk.

Because it is actually pretty ineffectual against an adversary that physically controls your entire system or your disk contents, I think a different design would have been better. Secure Boot is too open-ended to afford sufficient protection, and yet too much of a pain by being not quite open-ended enough to allow OS vendors without Microsoft's blessing. I think Secure Boot should have worked by having the key installed to firmware at initial OS install time: the first OS installed gets to 'take ownership' of the platform, and its key becomes *the* key to trust. This would have allowed Microsoft to put in a Microsoft key and say 'screw trying to certify things like GRUB'. Installing a different OS afterward would have required going into the firmware to unclaim the platform so the new bootloader could claim it during that system's install.
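
A toy model of the 'first install takes ownership' flow I'm describing (entirely hypothetical; real firmware would use a proper key store and physical-presence checks, not Python):

    class Firmware:
        def __init__(self):
            self.owner_key = None  # no key enrolled until the first OS install

        def take_ownership(self, public_key):
            if self.owner_key is not None:
                raise PermissionError("Platform already owned; unclaim it in firmware first")
            self.owner_key = public_key

        def unclaim(self, user_physically_present):
            if not user_physically_present:  # only someone at the console can reset
                raise PermissionError("Physical presence required")
            self.owner_key = None

        def verify_bootloader(self, signing_key):
            return signing_key == self.owner_key

    fw = Firmware()
    fw.take_ownership("first-os-vendor-key")            # first install enrolls its key
    assert fw.verify_bootloader("first-os-vendor-key")  # later boots check against it
    assert not fw.verify_bootloader("other-key")        # any other loader is rejected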

I'm actually OK with the TPM and how things like BitLocker leverage it. The Secure Boot scheme reeks of too much inconvenience for inadequate security compared to what *could* have been done.

Comment Re:Switch away from Skype and Windows (Score 1) 74

There are a few things that seem off in that statement...

For one, IIRC, Secure Boot doesn't actually hook into the TPM.

Another thing: I'm not sure what you mean by 'modify the TPM'. You could perhaps have the TPM bind some material that the legitimate user wouldn't want bound, but you couldn't defeat sealing to a sufficient set of PCRs by having OS-level control of the TPM facilities, AFAIK.
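
A minimal sketch of why that is: a PCR can only be *extended* (hashed forward), never set directly, so earlier measurements can't be forged after the fact. TPM 1.2 uses SHA-1, shown here purely for illustration:

    import hashlib

    def extend(pcr, measurement):
        # PCR_new = SHA1(PCR_old || SHA1(measurement))
        return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

    pcr = b"\x00" * 20                    # PCRs reset to zeros at platform reset
    pcr = extend(pcr, b"firmware image")  # each boot stage measures the next stage
    pcr = extend(pcr, b"bootloader")

    # Malicious OS-level code can extend further, but cannot remove or reorder the
    # earlier measurements, so anything sealed to the clean value no longer unseals.
    pcr = extend(pcr, b"malicious tweak")
    print(pcr.hex())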

Comment Re:Switch away from Skype and Windows (Score 0) 74

Windows 8 Secure Boot is a pretty flimsy facility that says 'yep, this code was blessed by Microsoft'. It does nothing to vouch for whether the configuration leading up to the payload, or the configuration of the payload itself, is what you actually want (e.g. a user expects they have put in Windows 8, but instead Red Hat loads with a malicious configuration; that is a sort of misbehavior Secure Boot does nothing about).

Of course, the proposed scheme isn't exactly nice either. Notably, there's handwaving about a file being 'known safe'. In an open, diverse ecosystem this is highly impractical (see the sketch below). SELinux errs on the side of letting some stuff slide and still generates enough false positives to frustrate a user trying to run some legitimate applications. These schemes start from a premise of 'if you know everything the system is ever supposed to do, then...', which is unlikely. Doing this from firmware to kernel may be feasible, and a way to declare a 'known good state' from which to start some instrumentation in the common case, but go further into the wide-open user space with overly specific restrictions and there will be difficulties. Maybe it works in some very specific special-purpose applications, but in a general-purpose system the universe of legitimate things to do is just not well-defined enough.
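
A sketch of the 'known safe' premise (the hashes and file contents are hypothetical): a whitelist only works if the set of legitimate binaries is closed, which a general-purpose system never is.

    import hashlib

    known_good = {hashlib.sha256(b"vendor-approved binary").hexdigest()}

    def is_known_safe(file_bytes):
        return hashlib.sha256(file_bytes).hexdigest() in known_good

    print(is_known_safe(b"vendor-approved binary"))      # True
    print(is_known_safe(b"user's legitimate new tool"))  # False: blocked anyway,
    # because the universe of legitimate things was never fully enumerable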

Comment Straightforward guessing where he wants to go.. (Score 4, Insightful) 151

Too early to try to measure 'success'.

shows strong strategic leadership, particularly around the cloud

So far there isn't anything particularly different about his time there in terms of degree of success in the 'cloud' market. Azure is a tricky proposition for a company that is ostensibly a high-margin company: it means going toe to toe with Amazon, a company that has repeatedly shown it is not shy about operating on margins so thin they are at high risk of actually operating at a loss in a given quarter (I would say the same thing about IBM's foray into the space).

I suspect Windows is here to stay for the foreseeable future (it is about the only product they have with proven market acceptance that is also consistently profitable). Devices, I think, will go away, as they should. They let Google and Apple get ahead in the broad-ecosystem strategy and the vertically-integrated strategy respectively, leaving no real room for MS. MS has to figure out how to somehow undercut Android's cost for partners or give up on owning the underlying platform. Either way, making devices in house will not win them any favors; Apple has shown the most success and the most loyalty, and yet their share is still going down in the face of the huge ecosystem of Android vendors.

Xbox would make more money as something sold off to a third party, who would probably do better with it than Microsoft has.
