Comment Re:Let's see how that sounds in 5-10 years time .. (Score 1) 449

The issue is that when processor vendors went to dual and then quad core, people started extrapolating and saying 'oh, in a decade we'll be using hundreds of cores on a random desktop'. Instead, core counts tapered off at about four for the most part, with the focus shifting to reducing the power envelope while minimizing performance loss.

I would say the discussion presuming massive core counts is based on an extrapolation of older trends of increasing core count, and it's perfectly reasonable to step back and recognize the change in the trend. Sure, tomorrow we could suddenly be back on the path to 256-core desktop solutions for unforeseen reasons, but as it stands, there are no signs of that being a priority for the industry.

Comment Re:Public Key, not SSH. (Score 1) 203

the public key cryptography part.

The thing is, that is already available in any protocol employing TLS. Client-side certificates serve the same role as ssh keypairs. There's some capability in x509 certificates that ssh keypairs lack, but all of that can be ignored.

But if you dig into the currently in-vogue two-factor standard (U2F), they actually are implementing what you describe: "During registration with an online service, the user's client device creates a new key pair. It retains the private key and registers the public key with the online service"
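To make that concrete, here's a rough sketch of the register-then-sign pattern that quote describes. This is Python using the third-party 'cryptography' package, not any real U2F wire format or library API; the challenge value and variable names are purely illustrative.

```python
# Sketch of the keypair-per-service pattern described above (not real U2F):
# the client keeps the private key, the service only ever sees the public
# key and verifies signed challenges with it.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes, serialization

# Registration: client generates a fresh keypair for this one service.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key_pem = private_key.public_key().public_bytes(
    serialization.Encoding.PEM,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
# ...public_key_pem is what gets registered with the online service...

# Authentication: service sends a random challenge, client signs it.
challenge = b"random-challenge-from-service"
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Service side: verify with the stored public key (raises on failure).
service_copy = serialization.load_pem_public_key(public_key_pem)
service_copy.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("challenge verified")
```

Which is exactly the same shape as an ssh keypair or a TLS client certificate: the secret never leaves the device, and the service only stores something that is useless to an attacker who steals it.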

Comment Re:Answering Linus' "Where the hell..." question.. (Score 1) 449

That sounds more like a distributed computing problem rather than applications running on a single 'system'. Even if it were centrally controlled, the computational load being time-shared might mean the best solution is still just a handful of cores. Such nanites would presumably be independent or idle enough that continuous CPU load would likely not even be in the picture. This is very much science fiction, but it still strikes me that the computational load would be negligible compared to the medical/engineering problems overcome. It takes 30-40 years to start feeling the effects of aging, so it's not as though cells require continual repair to achieve your hypothetical situation; you just have to manage to repair everything within 25 years.

Comment Re:Let's see how that sounds in 5-10 years time .. (Score 1) 449

To be fair, the trend seems to be hitting a ceiling.

Desktop processors got to quad core and have pretty much sat there. The mobile space reached quad core a little more recently, and octo-core implementations are more common there than on the desktop, but it still seems quad core is about where most devices settle. There are more efforts to make GPU-style execution cores available for non-graphics use, but in practice a relatively small portion of the market has been able to get meaningful gains from exploiting them. As vectorized instructions in CPU cores become more capable, many of those problems actually start coming back to the traditional CPU cores, since the CPU works as well as the GPU but with an easier programming model. In short, the market seems to indicate that end-user devices might settle around quad core.
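As a toy illustration of the 'easier programming model' point (the array size and operation here are arbitrary, just an assumption for the example): with something like NumPy the work lands on the CPU's SIMD/vector units without writing any kernel code or shipping data to a device.

```python
# Toy illustration: one vectorized expression runs on the CPU's vector units
# with no GPU kernel, no host-to-device copy, and no separate toolchain.
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Element-wise work handled by the vectorized backend in one line.
result = a * b + 0.5 * a

# The scalar-loop equivalent in pure Python would be far slower, and on a GPU
# the same operation would first have to pay the transfer cost both ways.
print(result[:3])
```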

Servers have been going up, with 18 cores per socket now available for two-socket systems. This shows that the desktop parts have room to grow in that dimension; it just isn't being bothered with.

Comment Shallow, not psychic (Score 1) 255

In the cases that were software defects, the defect was rapidly fixed upon discovery. That's really the meaning of all bugs being shallow, not that they won't ever exist.

That said, POODLE is not a code defect but a defect in the standard (well, except for the bit where some implementations skip validation of the padding). Shellshock was indeed a grave defect, but I think the correct takeaway there is to avoid, as much as possible, having a shell language in any path where untrusted data could be injected (as well as fixing it). A fair amount of noise has been made in some circles along the lines of 'told you so, open source is riskier than proprietary', though in the same time frame we've seen Apple's implementation have 'goto fail' and Microsoft's SChannel have at least one recent issue of its own.
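For the 'keep untrusted data away from the shell' takeaway, a minimal sketch of the general practice (this is not Shellshock's actual mechanism, and the filename is a made-up stand-in for attacker-controlled input):

```python
# Sketch of the takeaway above: keep untrusted input out of any shell parsing
# path. The string here stands in for attacker-controlled data.
import subprocess

untrusted = "report; rm -rf ~"   # attacker-controlled string

# Risky: handing a formatted string to /bin/sh lets metacharacters be
# interpreted, e.g. subprocess.run(f"ls -l {untrusted}", shell=True)

# Safer: arguments are passed as a list directly to exec; no shell parses them.
subprocess.run(["ls", "-l", untrusted], check=False)
```

The same principle applies to CGI-style environments: the less often a shell sits between untrusted input and execution, the smaller the blast radius when the shell itself turns out to be buggy.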

The short of it is that code is not magic. If code is taken for granted, it will be neglected (whether it's OpenSSL or some boring proprietary function well out of view of marketing concerns), and neglect in security-centric libraries is particularly dangerous. In this case, I think the Heartbleed panic was the proverbial rising tide that lifted all boats, prompting investment across the industry in scrutinizing all TLS stacks.

Comment A victim of applications and history (Score 2, Informative) 129

This seems to come out of the peculiar Microsoft feature of being an administrator user but running without administrator privilege most of the time, escalating only when needed, with a lot of work put into making that escalation either non-intrusive or faked depending on context. It's a really complicated beast that no other platform tries to build.

MS up to and including XP (excluding the DOS-based family) basically had the same model as everyone else: you either were an administrator or you weren't, with facilities like 'runas' to run something as an elevated user when needed. The problem was that tons of software from the DOS-based systems failed to use the right sections of the registry and filesystem, forcing people to go through pains to run a lot of applications as administrator. This meant that most XP users just logged in as administrator.

To mitigate it, they embarked upon crafting this fairly complex thing to make running as an administrator user safer most of the time. It's funny because at the same time they started doing more and more to allow even poorly designed DOS-era software to run without administrator rights. They created union mounts to make an application think it can write to its application directory even when it cannot (and do sillier things like make 'system32' a different directory depending on whether a 32- or 64-bit application is looking). I do the atypical thing of running as a non-administrator user full time, with UAC prompting me for a password when needed, and nowadays it doesn't nag any more than sudo does on a modern Linux desktop. If I understand this behavior correctly, this usage model might be immune to this risk factor.
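If you want to see which side of the fence a given process is on, a minimal Windows-only sketch (the function name is mine; the underlying shell32 call is the real, if dated, IsUserAnAdmin API):

```python
# Minimal sketch (Windows only): check whether the current process holds an
# elevated token, which is the question UAC prompts resolve on your behalf.
import ctypes

def is_elevated() -> bool:
    try:
        # shell32.IsUserAnAdmin returns nonzero when running with an
        # administrator (elevated) token.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        # Not on Windows: ctypes.windll does not exist on other platforms.
        return False

if __name__ == "__main__":
    print("elevated" if is_elevated() else "running without administrator privilege")
```

Running as a plain user, that check stays false all day, which is the whole point of the usage model described above.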

Comment Re:10 Years Can Be A Long Time (Score 1) 332

I also think IBM will shrink, but they are in a lot more unnoticed market segments than mainframe.

I think unless something dramatic happens, Oracle will also shrink. Again, they will of course have a viable market, but the trend is clearly more of the market deciding they don't need Oracle after all.

I agree about Apple and Microsoft. Microsoft, I will say, seems to be recognizing its situation and is trying some things, which means Microsoft in 10 years might look quite different.

Uber/Lyft may or may not be around. There will be a recognition that it makes no sense for them to be exempt from taxi regulations. They might have to compete with taxis more fairly, but they may have some success in evolving taxi regulations to be more reasonable. Even if the regulations remain unreasonable, Uber or Lyft may be able to offer their services to taxi companies.

Betting on any Google brand is tricky. The brand could persist on a new technology, or the technology could get rebranded. I suspect Google Plus has too prominent a competitor for Google to back off and admit 'defeat'. It might be a decent hedge against the utter failure of Facebook: a logical fallback social experience that probably isn't that expensive to maintain in terms of incremental cost.

If Twitter goes away, it won't be due to the irrelevance of written dialog. Written dialog is frequently the preference. Do you want to trudge through voicemail if you can instead read the same content? While you are at work, can you unobtrusively watch Vine/YouTube? For this very discussion, would you have wanted to click 'record a reply', record your reply, and possibly redo it since there's no backspace key? As text messaging became a fundamental option of telephony, how much did phone conversations get supplanted by text messages? I could believe people might get over Twitter, or that their business model might not be sustainable, but not that video/voice takes over.

I have an Oculus and love it, but it isn't going to reshape the landscape of person-to-person interaction. It'll evolve gaming experiences, and it will offer a lot more low-key experiences than people previously bothered with for realtime 3D rendering, but I don't see it replacing phone calls, text messages, or video calls. It's harder than a phone call, far more intrusive than text messages, and you'd have to use an avatar instead of your own face for video calls since the headset obstructs the view.

I don't think VR or AR is on track to replace traditional displays for most people. I would be very happy to do so myself, but the vast majority finds the concept unappealing. Of course the PC living or dying isn't really related to this. It might be a distinct form factor, but it could still be a 'PC' in the ways that count.

I suspect the cloud bubble could burst for a number of reasons: a non-state entity severely compromises the security of a big name and scares everyone, or growth hits a saturation point and investors panic, because they *always* panic when exponential growth doesn't continue forever.

I agree that HP is screwed and Lenovo is on stronger footing. HP's recent decision to split the x86 server and desktop business is a good example of poor strategic thinking to appease boneheaded investors. When IBM sold off its PC business, its server costs went significantly higher because it lost a lot of bargaining power. One of the key ingredients in the expected revival of the x86 server business is improved bargaining power, and HP went the other way and forfeited that very significant capability. Lenovo also seems more likely to adapt to market shifts than HP, meaning its ability to respond to Xiaomi is probably better than other companies'. I view Dell as a potential wildcard now that it has gone private. I haven't seen any signs of strategic thinking that would have warranted going private, so it might still be coming.

Comment Re:The future (Score 1) 332

IBM - These guys are back...big time.

I doubt it. I suspect IBM will be about where they are now: profitable, but well out of the mind of most of the public. They might decline a bit further. I don't think they have anything coming organically from their own development that's going to change their fortunes dramatically. If they do make a comeback, it'll be through some lucky acquisition, or else through being rooted in very boring but stable markets as other market bubbles burst and cause investors to gain an appreciation for slow and steady.

Comment Re:The future (Score 2) 332

they will continue to loose money for 2015

While I agree that IBM in general is not all they claim to be, they continue to be profitable. Just not as profitable as they historically were, and not as much as investors and executives demand from the 'IBM' brand. They aren't losing money by any means, though they act in many ways like a company that is.

those familiar with biginsights knows IBM is struggling big time

That is another interesting facet of IBM. They tend to have huge big-name initiatives that they expect to change and dominate the industry. In the last decade those things have pretty much all flopped and been money losers, buoyed by profit from all sorts of boring places that are seemingly not worthy of IBM executive gushing.

IBM in general is an odd institution. On fronts that can be profitable but must settle for a more modest margin, their strategy is to offload. Lenovo is mostly built upon IBM's old, failing PC business and is now the global leader and modestly profitable. I expect the same of x86 servers. This seems highly inconsistent with their purchase of SoftLayer: they are trying to get into the ring with Amazon, a company notorious for operating on razor-thin to negative margins for the sake of market share. Surely they must realize that the IaaS business cannot bring about IBM-level margins so long as EC2 lives.

Comment Re:Joyent unfit to lead them? (Score 1) 254

Changing a pronoun is not worth of developer resources. I would have reversed it too -- we don't need everyone's principled opinions infiltrating the codebase and starting problems between people's values and beliefs.

The thing is, the change was done by some third party. Rejecting it and justifying the rejection actually took *more* work than just accepting it. The change was only inside comments. Now, if the change had been to function names or something, that would be different.

If I were faced with a commit that just changed he to they, or he to she, or she to he, or they to he, I'd probably accept it because I don't care what a comment says. The exception would be if it became apparent that two committers cared in different ways about such an asinine thing; then I'd have to think. As it stands, someone rejected a practically trivial patch that some people cared about, when they should have just accepted the damn patch.

Comment Re:difference? (Score 4, Informative) 254

People don't fork 'just because they can'. They fork because they are failing to get what they want out of the project. It remains to be seen if they are wasting their time.

It could be like ethereal to wireshark, where the holder of the copyright has precisely *zero* development skin in the game.

It could be like XFree86 to Xorg where both had some nominal capability to continue, but it becomes quickly apparent that the fork is where the development effort went.

It could be like the Wayland fork, where the fork pretty much died (though the main project isn't seeing massive adoption either).

Worst case would be something like the ffmpeg/libav fiasco, where both forks carry on, and which one is readily available for a given distribution is almost more a matter of politics than technical merit, and yet they have significantly diverged.

Comment Incorrect... (Score 1) 254

A 'project' is a vague concept. What 'sponsorship' means can be vague too. Are they providing hosting services? Are they managing the authentication configuration? Did they impose some CI where they get final say? Did they provide employment to some or all participants? Did they pay as part of a contract arrangement for the time of some developers?

In short, knowing how corporate sponsorship historically happens in open source, the corporation maybe provides some contribution, and does take control of the project hosting and copyright such that the 'authoritative' source follows its will, but it does not actually offer many of the developers financial benefit, nor does it bind their hands against forking.

This happens not infrequently to very prominent software in open source land, sometimes without the commercial facet: MySQL and MariaDB, Ethereal and Wireshark, gPXE and iPXE, XFree86 and Xorg, ffmpeg and libav, OpenOffice and LibreOffice. Usually it becomes clear where the *real* meat of development was, and only one fork remains technically viable.

Comment Re:Empty article.. (Score 1) 438

That's why I said 'for most'. High-end database applications with poor data locality in datacenters are not really something 'most' have to contend with. If I tried to be comprehensive in my statement, the people who understand the nuances would recognize it and I'd be preaching to the choir, while most would glaze over and skip it, because precisely and accurately describing the entire reality is too complicated for most to want to read.

Comment Empty article.. (Score 5, Informative) 438

I don't know why Intel and Micron get any special consideration, given that the summary itself notes that Samsung has already announced the same move.

The assertion that drives don't go faster than 7200 RPM is also incorrect (there are 15k RPM drives; they're just pointless for most people given the SSD caching strategies available).
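The back-of-the-envelope arithmetic behind that point: average rotational latency is half a revolution, so the extra spindle speed buys you only a couple of milliseconds, while a cache hit on flash is roughly an order of magnitude faster still (the SSD figure in the comment is an assumed ballpark, not a measurement).

```python
# Average rotational latency is half a revolution: 60,000 ms per minute
# divided by RPM gives the time per revolution.
for rpm in (7200, 15000):
    avg_latency_ms = (60_000 / rpm) / 2   # half a revolution, in milliseconds
    print(f"{rpm:>5} RPM: ~{avg_latency_ms:.2f} ms average rotational latency")

# ~4.17 ms at 7200 RPM vs ~2.00 ms at 15k RPM; a typical SSD cache hit is on
# the order of 0.1 ms, which is why 15k drives are a hard sell these days.
```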
