
Comment Re:What's different from 20 years ago? (Score 4, Informative) 61

Displays that can be refreshed 75 times a second? Displays that can turn off between refreshes? A GPU and software stack that can provide reasonable detail at high resolutions and high FOV? When VR was last in the spotlight, 3dfx hadn't even released the Voodoo 1, and the N64 and PS1 didn't exist yet. Given that GPU technology utterly primitive by today's standards was still beyond their grasp, it's pretty obvious a great deal has changed.

Also, the proliferation of high-quality mobile devices with accelerometers means the core pieces can now be built from evolved mass-market components. 20 years ago, it was all specialty equipment from the ground up.

Comment Re:No thanks. (Score 2) 61

It's about more than stereoscopic vision. The fact that my head moves and the environment perfectly tracks that movement is the really significant chunk. If you don't have stereoscopic vision in real life, then no, you won't have it inside a VR headset either. Either way, you also judge 3D by head movement, and that is very much helped in a VR context.

Comment Re:Let's see how that sounds in 5-10 years time .. (Score 1) 449

The issue is that when processor vendors went to dual and then quad core, people started extrapolating and saying 'oh, in a decade we'll be using hundreds of cores on a random desktop'. Instead it topped out at about 4 for the most part, with the focus shifting to reducing the power envelope while minimizing performance loss.

I would say the discussion presuming massive core counts is based on extrapolating the older trend of increasing core counts, and it's perfectly reasonable to step back and recognize that the trend has changed. Sure, tomorrow we could suddenly be back on the path to 256-core desktop solutions for unforeseen reasons, but as it stands, there are no signs of that being the priority of the industry.

Comment Re:Public Key, not SSH. (Score 1) 203

the public key cryptography part.

The thing is, that is already available in any protocol employing TLS. Client-side certificates are the same idea as SSH keypairs. There's some capability in X.509 certificates that isn't in SSH keypairs, but all of that can be ignored.

But if you dig into the currently in-vogue two-factor standard (U2F), it actually implements what you describe: "During registration with an online service, the user's client device creates a new key pair. It retains the private key and registers the public key with the online service"
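As a sketch of that register-then-authenticate flow, here's a toy Python version. It uses textbook RSA with tiny fixed parameters purely to illustrate the shape of it; real U2F generates a fresh EC keypair per registration inside a hardware token, and none of the names or numbers below come from the spec:

```python
import hashlib

# Toy RSA parameters (textbook-sized, NOT secure; purely illustrative).
N = 61 * 53          # modulus, 3233
E = 17               # public exponent
D = 2753             # private exponent: (E * D) % lcm(60, 52) == 1

def keygen():
    # A real token generates a fresh keypair per registration; this toy
    # returns the same fixed parameters every time for brevity.
    return {"n": N, "e": E}, {"n": N, "d": D}

def sign(priv, message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % priv["n"]
    return pow(digest, priv["d"], priv["n"])

def verify(pub, message: bytes, sig: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % pub["n"]
    return pow(sig, pub["e"], pub["n"]) == digest

# Registration: client keeps the private key, the service stores only the public key.
pub, priv = keygen()
service_registry = {"alice": pub}

# Authentication: the service sends a challenge, the client signs it,
# and the service verifies against the stored public key.
challenge = b"random-challenge-123"
signature = sign(priv, challenge)
print(verify(service_registry["alice"], challenge, signature))
```

The point is that the service never holds a secret worth stealing: a breach of `service_registry` leaks only public keys.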

Comment Re:Answering Linus' "Where the hell..." question.. (Score 1) 449

That sounds more like a distributed computing problem than applications running on a single 'system'. Even if it were centrally controlled, the computational load being time-shared might mean the best solution is still just a handful of cores. Such nanites would presumably be independent or idle enough that continuous CPU load would likely not even be in the picture. This is very much science fiction, but it still strikes me that the computational load would be negligible compared to the medical/engineering problems being overcome. It takes 30-40 years to start feeling the effects of aging, so it's not as if cells require continual repair to achieve your hypothetical situation; the nanites would just have to manage to repair everything within a 25-year window.

Comment Re:Let's see how that sounds in 5-10 years time .. (Score 1) 449

To be fair, the trend seems to be hitting a ceiling.

Desktop processors got to quad core and have pretty much sat there. The mobile space has been at quad core for a little less long, and there are octo-core implementations more so than on the desktop, but quad core still seems to be about where most devices settle. There are more efforts to make GPU-style execution cores available for non-graphics use, but in practice a relatively small portion of the market has been able to get meaningful gains from exploiting them. As vectorized instructions in CPU cores become more capable, many of those problems actually start coming back to the traditional cores, which work as well as the GPU but with an easier programming model. In short, the market seems to indicate that end-user devices might settle around quad core.

Servers have been going up, with 18 cores per socket now available for 2-socket systems. This shows the desktop parts have room to grow in that dimension; it just isn't being bothered with.

Comment Shallow, not psychic (Score 1) 255

In the cases that were software defects, the defect was rapidly fixed upon discovery. That's really the meaning of 'all bugs are shallow': not that they won't ever exist.

That said, POODLE is not a code defect but a defect in the standard (well, except for the part where some implementations skip validation of the padding). Shellshock was indeed a grave defect, but I think the correct takeaway there is to avoid having a shell language anywhere in the path where untrusted data could be injected, as much as possible (as well as fixing the bug itself). A fair amount of noise has been made in some circles along the lines of 'told you so, open source is riskier than proprietary', though in the same time frame we've seen Apple's implementation have 'goto fail', and Microsoft's SChannel has also had at least one recent issue.
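The 'keep the shell out of the path' takeaway is easy to show concretely. A minimal Python sketch (the command and the attacker string are my own illustrative choices, not from any real incident):

```python
import subprocess

user_input = "; echo pwned"  # attacker-controlled string

# Risky pattern: interpolating the untrusted string into a shell command
# lets the shell interpret metacharacters (;, $(), backticks):
#   subprocess.run(f"printf %s {user_input}", shell=True)

# Safer pattern: no shell in the path at all. The string is passed as a
# single literal argv element, so metacharacters have no special meaning.
result = subprocess.run(
    ["printf", "%s", user_input],
    capture_output=True, text=True,
)
print(result.stdout)  # the raw string itself, not the output of an injected command
```

With the list form, `subprocess` execs the program directly; there is simply no shell present to exploit, which is the structural fix the Shellshock episode argues for.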

The short of it is that code is not magic. If code is taken for granted, it will be neglected (whether OpenSSL or some boring proprietary function well out of view of marketing concerns). Neglect in security-centric libraries is particularly dangerous. In this case, I think the Heartbleed panic was the proverbial rising tide that lifts all boats, causing investment across the industry in scrutinizing all TLS stacks.

Comment Re:Shouldn't this be a civil case? (Score 1) 86

Yeah, I doubt it would really cause too many cancellations, but it's still within the realm of possibility. I've seen people cancel service over a couple hours of downtime, for things that cost pennies a day: one hour down on a $25/yr subscription to a porn site.

On the network/systems part, I really meant it as the department, not the individual. It's not unheard of to call in contractors, especially for an ongoing problem. Not all network and systems people are salaried, either, and some companies don't have any on staff. I have had contract gigs trying to figure out why a network was misbehaving. Even when it turns out to be the upstream provider, they still have to pay me for showing up.

Comment Re:100 times this!!! (Score 1) 141

That's you and me. I have something somewhat legitimate in there. I'm also sure you're aware that domain registrant information is rarely changed in a timely fashion, even though registrars send out the yearly reminders to make sure your information is right.

The whole registrar proxy thing is easy money for them. As I recall, they have verbiage on the page that strongly recommends using it, implying it's for the safety of your domain.

I've frequently seen the registration itself handled by an accounting department, not by the IT department. When I was looking at their pages, it looked like they don't have an in-house IT department. It's probably a contracted web designer who maintains the page, and someone in-house (like accounting) who manages the domain. By 'manages', I mean 'pays the bill when it comes up for renewal'.

Since it's frequently managed by someone who will never check it, it's actually better to let the proxy service handle the contacts. They will (hopefully) update their billing info if there is a change, so the service knows who to forward contacts to.

Comment Re:Shouldn't this be a civil case? (Score 4, Insightful) 86

If you blocked McDonald's by flooding all the highways with a 12-inch-deep layer of molasses, it would probably be considered equally damaging.

There is a discernible monetary loss. How much was lost in revenue when customers could not pay for services? How much was lost from cancellations of service because of the outage? How much was spent on network and systems administrators working on it, beyond their normal workload?

And then ... How much was lost by other companies impacted by degraded network capacity due to the network traffic?

I'm sure those numbers were easily in the millions. Those won't be the all-inclusive questions, either. I'm afraid to even ponder how big the final figure will become. It could involve seemingly unrelated companies that lost sales because their VoIP traffic was on one of the over-utilized circuits.
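The questions above really just amount to summing loss categories. A trivial sketch, with every figure a made-up placeholder purely to show the shape of the damages calculation:

```python
# All figures are hypothetical placeholders, not estimates of any real outage.
losses = {
    "lost_revenue_during_outage": 250_000,  # customers unable to pay for services
    "cancelled_subscriptions": 80_000,      # churn attributed to the outage
    "extra_staff_and_contractors": 45_000,  # admin work beyond normal workload
    "third_party_degradation": 120_000,     # other companies on congested circuits
}
total_claim = sum(losses.values())
print(f"Estimated damages: ${total_claim:,}")
```

The hard legal part isn't the arithmetic, of course; it's substantiating each line item.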

Comment A victim of applications and history (Score 2, Informative) 129

This seems to come out of the peculiar Microsoft feature of running as an administrator user but without administrator privilege most of the time, except when needed, plus a lot of work to make that escalation happen in a non-intrusive fashion or be faked depending on context. It's a really complicated beast that no other platform tries to build.

MS up to and including XP (excluding the DOS-based family) basically had the same model as everyone else: either you were an administrator or you weren't, with facilities like 'runas' to elevate as needed. The problem was that they had tons of software from the DOS-based era failing to use the right sections of the registry and filesystem, requiring people to go through pains to run a lot of applications as administrator. This meant that most XP users just logged in as administrator.

To mitigate this, they embarked upon crafting a fairly complex mechanism to make running as an administrator user safer most of the time. It's funny, because at the same time they started doing more and more to let even poorly designed DOS-era software run without administrator rights. They create union mounts to make an application think it can write to its application directory even when it cannot (and do sillier things, like making 'system32' a different directory depending on whether a 32- or 64-bit application is looking). I do the atypical thing of running as a non-administrator user full time, with UAC prompting me for a password when needed, and nowadays it doesn't nag any more than sudo does on a modern Linux desktop. If I understand this behavior correctly, that usage model might be immune to this risk factor.

Comment Re:10 Years Can Be A Long Time (Score 1) 332

I also think IBM will shrink, but they are in a lot more unnoticed market segments than mainframe.

I think unless something dramatic happens, Oracle will also shrink. Again, they will of course retain a viable market, but the trend is clearly more of the market deciding they don't need Oracle after all.

I agree about Apple and Microsoft. Microsoft, I will say, seems to recognize its situation and is trying some things, which means Microsoft in 10 years might look quite different.

Uber/Lyft may or may not be around. There will be a recognition that it makes no sense for them to be exempt from taxi regulations. They might have to more fairly compete with taxis, but they may have some success in evolving taxi regulations to be more reasonable. Even if still unreasonable, Uber or Lyft may be able to offer their services to taxi companies.

Betting on any Google brand is tricky. The brand could persist on a new technology, or the technology could get rebranded. I suspect Google Plus has too prominent a competitor for Google to back off and admit 'defeat'. It might be a decent hedge bet against the utter failure of Facebook: a logical fallback social experience that probably isn't that expensive to maintain in terms of incremental cost.

If Twitter goes away, it won't be due to the irrelevance of written dialog. Written dialog is frequently the preference. Do you want to trudge through voicemail when you could instead read the same content? While you are at work, can you unobtrusively watch Vine/YouTube? For this very discussion, would you have wanted to click 'record a reply' and record your response, possibly redoing it since there's no backspace key? As text messaging became a fundamental option of telephony, how much did phone conversations get supplanted by text messages? I could believe people might get over Twitter, or that its business model can't sustain itself, but not that video/voice takes over.

I have an Oculus and love it, but it isn't going to reshape the landscape of person-to-person interaction. It'll evolve gaming experiences, and it will enable a lot more low-key experiences than people previously bothered building in realtime 3D, but I don't see it replacing phone calls, text messages, or video calls. It's harder than a phone call, far more intrusive than text messages, and you'd have to use an avatar instead of your own face for video calls, since the headset obstructs the view.

I don't think VR or AR is on track to replace traditional displays for most people. I would be very happy to make that switch, but the vast majority find the concept unappealing. Of course, the PC living or dying isn't really related to this. It might be a distinct form factor, but it could still be a 'PC' in the ways that count.

I suspect the cloud bubble could burst for a number of reasons: a non-state entity severely compromises the security of a big name, scaring everyone; or growth hits a saturation point and investors panic, because they *always* panic when exponential growth doesn't continue forever.

I agree that HP is screwed and Lenovo is on stronger footing. HP's recent decision to split the x86 server and desktop businesses is a good example of poor strategic thinking to appease boneheaded investors. When IBM sold off its PC business, its server costs went significantly higher because it lost a lot of bargaining power. One of the key ingredients in the expected revival of the x86 server business is improved bargaining power, and HP went the other way and forfeited that very significant capability. Lenovo also seems more likely to adapt to market shifts than HP, meaning its ability to respond to Xiaomi is probably better than other companies'. I view Dell as a potential wildcard now that it has gone private. I haven't seen any signs of strategic thinking that would have warranted going private, so it might still be coming.
