Comment Re:the hard way (Score 1) 87

The attack is more precise than that: you need to know exactly when to pop up the input form of interest. Sure, this information could allow them to disambiguate the context so that a random memory change in a random app doesn't trigger a false positive. Of course, the whole point was also to demonstrate how well they could do without any remotely suspicious permissions.

Comment Mitigation... (Score 1) 87

So on the OS side, at the very least, it seems that an obvious indication of application focus change would go a long way toward making this attack look suspicious.

On the application side, I think applications that are likely to receive sensitive information should always display a consistent randomized watermark. Say they add an always-on-top bar with two randomized words: input forms that an attacker tries to phish will then look wrong, because the watermark suddenly changes.
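A minimal sketch of that watermark idea (the word list and function name here are hypothetical; a real app would ship a much larger dictionary and render the words in its always-on-top bar):

```python
import secrets

# Hypothetical word list; a real app would ship a larger dictionary.
WORDS = ["maple", "quartz", "falcon", "ember", "tundra", "cobalt",
         "willow", "onyx", "harbor", "juniper", "saffron", "glacier"]

def new_session_watermark():
    """Pick two random words at app launch, to be shown in an
    always-on-top bar.

    A phishing overlay drawn by another app cannot know these words,
    so a sensitive input form that is missing them (or shows different
    ones) signals that the form is not the real application's.
    """
    return " ".join(secrets.choice(WORDS) for _ in range(2))

watermark = new_session_watermark()
print(watermark)  # e.g. "falcon ember"
```

The point is that the secret lives in the user's head for the session: any overlay that can't reproduce it looks visibly wrong.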

Comment Re:what are you smoking? (Score 1) 129

As for I/O, you can pass PCI devices through to the guest for pretty much native networking performance.

Of course, that comes with its own headaches and negates some of the benefits of a VM architecture. Paravirtualized networking is however pretty adequate for most workloads.

It's not like you have to do VM *or* baremetal across the board anyway. Use what makes sense for the circumstance.

Comment Re:Of Course They Do! (Score 1) 129

CPU throughput impact is nearly undetectable nowadays. Memory *capacity* can suffer (you have the overhead of the hypervisor footprint), though memory *performance* can also be pretty much on par with bare metal.

On hard disks and networking, things get a bit more complicated. In the most naive setup, what you describe is true: a huge loss from emulating devices. However, paravirtualized network and disk are pretty common, which brings performance into the same ballpark as not being in a VM. But that ballpark is relatively large; you still suffer significantly in the I/O department in x86 virtualization, despite a lot of work to make that less the case.

Of course, a VM doesn't always make sense. I have seen people build a hypervisor that ran a single VM which required pretty much all the resources of the host, so no other VM could run. It was architected such that live migration was impossible. This sort of stupidity makes no sense, pissing away efficiency for no gain.

Comment A horrible nightmare... (Score 2) 129

So to the extent this conversation makes sense (it is pretty nonsensical in a lot of areas), it refers to a phenomenon I find annoying as hell: application vendors bundling all their OS bits.

Before, if you wanted to run vendor X's software stack, you might have had to mate it with a supported OS, but at least vendor X was *only* responsible for the code they produced. Now, increasingly, vendor X *only* releases an 'appliance' and is in practice responsible for the full OS stack despite having no competency to be in that position. Let's look at the anatomy of a recent critical update: OpenSSL.

For the systems where the OS has applications installed on top, patches were ready to deploy pretty much immediately, within days of the problem. It was a relatively no-muss affair. Certificate regeneration was an unfortunate hoop to go through, but it's about as painless as it could have been given the circumstances.

For the 'appliances', some *still* do not even have an update for *Heartbleed* (and many more didn't bother with the other OpenSSL updates). Some have updates, but only in versions that also carry unwanted functional changes to the application, and the vendor refuses to backport the relatively simple library change. In many cases, applying an 'update' actually resembles a reinstall: downloading a full copy of the new image and doing some 'migration' work to keep data continuity.

Vendors have traded generally low amounts of effort in initial deployment for unmaintainable messes with respect to updates.

Comment IIRC... (Score 1) 64

nVidia actually did sell it pretty well, though. It wasn't in any way a better experience, but the brand name did carry the product, as I recall.

It was one of the reasons the relationship between Intel and nVidia went so far south: Intel made it impossible to offer third-party chipsets, and nVidia lost a revenue opportunity. People rightly critical of the technical aspects were not the downfall of the product line; Intel locking down its platform was.

In short, this stuff *could* in theory fly. In practice, I don't think AMD has the brand strength. People still seem to look to nVidia as 'the go-to' brand more often than AMD in the PC component world.

Comment Re:Courage... (Score 1) 207

The funny thing in your story is that the word that narrows it down the most is the pronoun 'she'. I would guess you work for either HP or IBM. Massive stock buybacks and continual layoffs are the modus operandi of most of these companies, but female CEOs are a little more rare. Of course, they all do and say the exact same things, so they could probably replace their CEOs with chatbots that just always say 'buy back some more stock and lay off more people' and no one would notice.

Comment Courage... (Score 2) 207

If we don't have the courage to change

It can be debated whether this is a necessary thing or a prudent thing or whatever, but regardless of those debates, this is a pretty stupid thing to say. I don't think a CEO should ever characterize their decision to terminate other people's jobs as 'courageous'. There really isn't anything remotely courageous about any of the strategy he laid out. It's not even particularly bold or daring; it's basically the exact thing every executive of every tech company has been saying about their respective companies for a while now.

Not having much of a horse in the race (not working for Cisco or even a Cisco client), I can't comment on whether it's the right choice or not, but it really rubbed me the wrong way to see him refer to layoffs as an act of courage.

Comment Re:Rust (Score 3, Insightful) 57

One issue is that generally such projects are actually pretty niche and get developed with only that niche in mind. There simply isn't a pool of eager developers to tackle only your specific issue.

If you can think about modularity and develop some key components that are more generally useful as distinct projects, you may have better luck.

But overall, building a large development community around any given open source project is the exception rather than the rule, even if you do your best. Even very ubiquitous projects that play a role in nearly every Linux system often have maybe one or two developers who are really familiar with them.

Comment Re:Better ways to do it. (Score 1) 57

If you do use the GPL *and* have copyright assignment, there actually could be a case for dual-licensing: GPL for those who play open source, and a proprietary commercial license for everyone else. This is the 'get free coders to do work for you' business model, which seems pretty disingenuous, but at least there is a logic to a corporate-sponsored project going GPL.

What surprises me is that in most scenarios where corporations pick the license, they pick a BSD-style license. I can understand them wanting that property in *other* people's code, but it surprises me that they wouldn't want more assurance that their own work won't come back to compete with them commercially, when they have the choice.

Comment Re:CLA (Score 1) 57

CLA with copyright assignment opens the door to have your contributions abused by the copyright holders.

CLA without copyright assignment is usually just the 'project' covering their ass in case of problematic contributions that infringe copyright or patents.

Comment Re:CLA (Score 1) 57

The general intent of many CLAs is to make the contributor attest that he isn't doing something like injecting patented capability or violating someone's copyright. The key distinction between an open source product being redistributed by someone who adds problematic capability, versus having that capability injected directly, is that the curator of the project is the one who gets sued in the latter case. So if stuff is bolted on but not contributed back, the weaker assurance of a GPL or BSD style license is acceptable, because the risk is not the project owner's. The attestation is certainly not sufficient to be confident that nothing is wrong, but it's a stronger basis for passing some culpability on to the contributor in the event of issues.

The sort of CLA you are talking about is the kind with copyright assignment. The most prominent example of this is actually the FSF, which requires copyright assignment for any accepted contribution. These can be employed when a company or organization wants to reserve the right to change the licensing. In the FSF's case, this is why they can move license terms from GPLv2 to GPLv3, whereas a project like the kernel cannot change its license because there are too many copyright holders. I actually don't know of a corporate-sponsored CLA involving copyright assignment.

I had previously assumed CLA implied copyright assignment until I was forced to actually cope with a couple of CLAs and looked more carefully.

Comment Re:A rather simplistic hardware-centric view (Score 1) 145

Software reliability over the past few decades has shot right up.

I think this is a questionable premise.

1. Accurate, though it has been accurate for over a decade now.

2. Things have improved security-wise, but reliability I think could be another matter. When things go off the rails, it's now less likely to let an adversary take advantage of the circumstance.

3. Try/catch is a potent tool (depending on the implementation it can come at a cost), but the same things that caused segmentation faults with a serviceable stack trace in a core file cause uncaught exceptions with a serviceable stack trace now. It does make it easier to write code that tolerates some unexpected circumstances, but ultimately you still have to plan application state carefully or be unable to meaningfully continue after the code has bombed. This is something that continues to elude a lot of development.

4. Actually, the pendulum has swung back in the handheld space to 'apps'. In the browser world, you've traded 'DLL hell' for browser hell. DLL hell was a sin of Microsoft's, for not having a reasonable packaging infrastructure to manage that circumstance better. In any event, a server application crash, a client crash, *or* a communication interruption can now screw the application experience, instead of just one of those.

5. I don't think virtualized systems have improved software reliability much. They have in some ways made certain administration tasks easier and enabled better hardware consolidation, but that comes at a cost. I've seen more and more application vendors get lazy and just furnish a 'virtual appliance' rather than an application. When the bundled OS requires security updates, the update process is frequently hellish or outright forbidden. You need to update OpenSSL in their Linux image, but other than that things are good? Tough: you need to go to version N+1 of their application and deal with API breakage and such, just because you dared to want a security update for a relatively tiny portion of their platform.

6. I think there's some truth in it, but 32- vs. 64-bit does still rear its head in these languages, particularly since a lot of performance-related libraries for those runtimes are written in C.

7. This seems to contradict the point above. Python fits that description pretty well.

8. This has also had a downside: people jumping to SQL when it doesn't make much sense. Projects with extraordinarily simple data to manage reach for 'put it in SQL' pretty quickly. Some of the 'NoSQL' sensibilities have brought some sanity in some cases, but in others have replaced one overused tool with another equally high-maintenance beast.

9. True enough. There is a signal-to-noise issue, but it's better than nothing at all.
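The caveat in point 3, that try/catch only helps if you plan application state, can be sketched quickly (the function and data here are hypothetical):

```python
# Sketch of point 3: catching an exception is only useful if application
# state is restored to something coherent afterward. Without the rollback,
# the process "survives" the exception but with corrupted state, which is
# no better than the old core dump.
def transfer(accounts, src, dst, amount):
    snapshot = dict(accounts)          # plan for failure up front
    try:
        accounts[src] -= amount
        accounts[dst] += amount        # raises KeyError if dst is missing
    except KeyError:
        accounts.clear()
        accounts.update(snapshot)      # roll back: state stays meaningful
        raise

accounts = {"alice": 100}
try:
    transfer(accounts, "alice", "bob", 30)   # "bob" doesn't exist
except KeyError:
    pass

print(accounts)  # {'alice': 100} -- not the half-applied {'alice': 70}
```

Without the snapshot-and-rollback, the caught exception would leave alice debited with nobody credited; continuing to run from that state is exactly the "tolerates the error but can't meaningfully continue" failure mode.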

I think a big issue is that at the application layer there has been more and more pressure for rapid delivery and iteration, with a false sense of security coming from unit tests (which are good, but not *as* good as some people feel). Stable branches that take bugfixes only are rarer now, and more and more users are expected to ride the wave of interface and functional changes if they want bugs fixed at all. 'Good enough' is the mantra of a lot of application development: if a user has to restart or delete all their configuration before restarting, oh well, they can cope.
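To illustrate point 8 above: for extraordinarily simple data, a flat JSON file round-trips the same information with no schema, server, or migration machinery to maintain (the settings and filename here are made up):

```python
import json
import os
import tempfile

# Point 8 in practice: a handful of key/value settings doesn't need SQL.
settings = {"theme": "dark", "refresh_seconds": 30}

path = os.path.join(tempfile.mkdtemp(), "settings.json")

# Persist and reload; this is the whole "database layer".
with open(path, "w") as f:
    json.dump(settings, f)

with open(path) as f:
    loaded = json.load(f)

print(loaded == settings)  # True
```

Once the data outgrows this (concurrent writers, relational queries), *then* a real database earns its keep; reaching for one before that point is the overuse being described.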
