Comment Re:Does it really matter? (Score 1) 203

You can be 'weightless' at an altitude of 10 feet (for a very brief period of time). You don't have to be in space to be weightless, just in an environment that is accelerating at the same rate as you in the same direction.
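
To put a number on the 'very brief period' (a back-of-the-envelope Newtonian sketch, nothing vehicle-specific): apparent weight is what a scale under you would read, and it vanishes whenever you and your surroundings accelerate downward together.

$N = m(g - a)$, so free fall ($a = g$) gives $N = 0$, and $t_{\mathrm{fall}} = \sqrt{2h/g} \approx \sqrt{2 \cdot 3\,\mathrm{m} / 9.8\,\mathrm{m/s^2}} \approx 0.8\,\mathrm{s}$ from a 10 foot drop.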

But I take the point that physiologically there wouldn't be much difference between 50 and 64 miles up inside a vessel.

Comment In this *particular* case... (Score 1) 215

So in this case, the person is part of a class action suit. His frustration is that the lawyers who effectively control the case from the plaintiffs' side have a significant conflict of interest, and his voice is likely to go unheard. The lawyers want the easier money, which will be a large amount for them and a moderate amount for the members of the class. Most of the class would be happy to get a moderate amount and fully expected to get screwed anyway. He would rather go through the effort of getting his day in court, and risk the guaranteed money, to go after the defendants more effectively.

In a union, he again consigns his fate to leadership that is just as likely to pay little attention to his particular grievances, for the sake of their own welfare and sometimes the welfare of the whole.

What he really wants is not to consign his fate to the class action lawyers, and a union in this case would put him in much the same situation. Whether you believe a union is a good or bad idea overall, in this particular scenario I wouldn't expect it to produce a significantly better outcome for this person's sensibilities.

Comment No... (Score 4, Informative) 533

There are significant numbers of people who understand it perfectly well and have valid criticisms that are not bugs.

http://ewontfix.com/14/

The systemd team has pissed off Torvalds:
https://lwn.net/Articles/59368...

Additionally, they repeatedly deny that anyone should have a text log for any reason, dismissing criticisms with 'just hook in syslog *too* as an *optional* thing'. Basically systemd discards decades of ecosystem sensibilities in order to 'do it better', throwing out the baby with the bathwater (ditching modularity, portable log data, and such).

It's not just a case of 'if you don't like it, fix it'. People don't like very fundamental aspects of the design, decisions that the systemd team made *on purpose*.

Comment Re:Translation (Score 1) 589

I think those fall into a class where you need in-house expertise or you are simply screwed no matter what. When you have high quality in-house expertise anyway, the open source world empowers those experts to do things they could not pull off with a proprietary solution.

However, the general market is very large and there are also plenty of places where in-house expertise isn't so critical. A lot of those can get by with open source too 99.5% of the time, but when something goes really off the rails, those guys need help. The difference between RHEL and Windows isn't so big in up front costs. Ongoing costs mostly depend on the subjective preferences of the staff you can secure. RHEL and Windows appeal to very distinct sensibilities overall, so it should be no surprise that both viewpoints can be truthfully found in the world.

Comment Re:But FOSS deverlopers don’t focus on usabi (Score 1) 589

FOSS developers usually focus on usability for themselves and people with similar sensibilities. This is one reason why it is so polarizing from a user experience standpoint. If your sensibilities are aligned with a critical mass of FOSS developers, then you are grateful to finally find a group of people who think like you making applications. However, if you are not in that boat, you find the sensibilities bizarre. The commercial players base their stuff on usability studies and explicitly make calls that favor the majority over the niche.

This is of course not universal and an oversimplification, but it is a pretty common distinction.

Comment Re:How much do the frequent reboots for "updates" (Score 1) 589

Why does Windows require rebooting almost every time it does an update?

Because their filehandle model is extremely stupid: you cannot unlink an in-use file and replace it with a new file, leaving the old filehandle available to old processes and the new content to new processes. However, in a way, it could be considered a blessing....

Why does linux only rarely need to reboot after an update?

On the one hand, this is a nice feature of the system: open filehandles are disassociated from the filenames they were opened under, allowing content to be replaced without disrupting running processes. The downside is that you must track down all the processes using a 'bad library' and restart them if you want to be sure that whatever security fix was in the update is actually applied. In practice, this is more and more becoming 'just reboot the system to be safe'. So Linux can scale up to multi-application environments better than Windows, given competent management; but if not carefully managed, not rebooting after an update can just mean lingering risk. Reboots have gotten cheaper and cheaper over time for most environments (reboots are faster and important services are no longer tied to a single OS instance), so pride in the uptime of a specific host is becoming less and less reasonable.
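
As a concrete illustration of the filehandle behavior (a minimal Python sketch of standard Unix semantics; the file path and contents are made up for the demo):

import os

path = "/tmp/demo_lib.txt"        # stand-in for a library file being updated

with open(path, "w") as f:        # the "old" version of the file
    f.write("old version\n")

old_handle = open(path)           # a running process still holding the old file open

os.unlink(path)                   # the updater removes the name...
with open(path, "w") as f:        # ...and installs new content under the same name
    f.write("new version\n")

print(old_handle.read(), end="")  # prints "old version": the running process keeps the old content
print(open(path).read(), end="")  # prints "new version": new processes see the update
old_handle.close()                # once the last handle closes, the old inode is finally freed

This is also why you have to hunt down the stragglers: processes still mapping a replaced library show up with '(deleted)' entries in /proc/<pid>/maps, which is essentially what tools like lsof report.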

Comment Re: Serious Question (Score 1) 181

Unfortunately I do not have a link. I do however know some system designers.

They designed a 4 socket Opteron system, and did not make a dual socket. That seemed peculiar to me, so I asked why not a dual socket, and they said there was no point because there was no performance advantage.

They also designed both a 4 socket EP system and a 2 socket EP system. I asked why and they said that they could gang up the two QPI links between two sockets for better performance.

I admittedly did not ask point blank whether the two socket Opteron couldn't do the same trick or they just didn't bother. It might have just been a matter of pushing the core-count marketing bullet as far as they could.

I think the Bulldozer scheme is actually pretty analogous to NetBurst. NetBurst happened at a time when a processor was almost exclusively measured by its clock frequency. It had theoretical benefits if workloads behaved a certain way; in practice, it pissed away energy and got terrible performance. Bulldozer happened when 'cores on a package' had become a major factor in marketing a processor. It got to 'more cores' by replicating only certain facets of a core, with shared resources, to reach a much higher core count than Intel. It was similarly a risky move that *could* have panned out if workloads had acted as they guessed, but an Intel core frequently keeps up with two AMD 'cores', just like a 1 GHz Athlon could outperform a 2 GHz Intel P4.

Comment Re:I don't like the control it takes away from you (Score 1) 865

If you're still having to crank, the car has truly serious issues if it can't compensate by adjusting something to correct the AFR, timing, etc.

It's funny, because I've had the car do exactly the wrong thing. My car would start normally almost every time, except maybe once every couple of months I could crank and crank and it wouldn't start until I stopped and tried again after a few seconds. It ultimately turned out to be a recall on the PCM, where things were so bad that it could in fact stall the engine on the interstate in certain scenarios (never happened to me). Ever since the recall, there have been no problems starting it.

Comment Re:Serious Answer (Score 2) 181

Well, in *desktops*, Core marked an end to AMD dominance in most practical terms, but architecturally they still were not very good for scalability. Basically, they turned back the clock to the Pentium III on a modern process, and that was enough to recover the desktop space.

Nehalem is the point at which Intel basically overtook AMD again, and AMD has not come back since. So Intel has had the ball for 3 of their 'tocks'. AMD prior to K7 was pretty weak for a lot longer than that, and I don't think anyone familiar with the AMD of the K6 era and earlier would have guessed they would become anything more than a budget alternative. So AMD could conceivably come out of this with something awesome despite the recent misfortune.

Comment Re: Serious Question (Score 4, Insightful) 181

Well, something of an oversimplification/exaggeration.

64 'cores' is 32 Piledriver modules. That was a gamble that by and large did not pan out as hoped; for a lot of applications, you really have to count it as 32 cores. Intel is currently at 12 cores per package versus AMD's 8 per package. Intel's EP line is less frequently found in a 4 socket configuration because a dual socket setup can perform much better over Intel's QPI than a 4 socket one. AMD can't do that topology, so you might as well go 4 socket. Additionally, Intel's memory architecture tends to put more DIMM slots on a board. AMD's thermals are actually a bit worse than Intel's, so it's not that AMD can be reasonably crammed in where Intel cannot. The pricing disparity is something Intel chooses at their discretion (their margin is obscene), so if Intel ever feels pressure, they could halve their margin and still be healthy margin-wise.
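
To make the counting explicit (a back-of-the-envelope Python sketch; the 4-socket/8-module Opteron and 2-socket/12-core Xeon EP figures are assumptions pulled from the numbers above, not any specific SKU):

# Marketed core counts vs. shared resources for the hypothetical boxes above.
amd_sockets, amd_modules_per_socket = 4, 8
amd_marketed_cores = amd_sockets * amd_modules_per_socket * 2   # 64 "cores" (2 integer cores per module)
amd_modules_total = amd_sockets * amd_modules_per_socket        # but only 32 shared FPUs/front-ends

intel_sockets, intel_cores_per_socket = 2, 12
intel_cores = intel_sockets * intel_cores_per_socket            # 24 full cores, with ganged QPI between the two sockets

print(amd_marketed_cores, amd_modules_total, intel_cores)       # 64 32 24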

I'm hoping this lives up to the legacy of the K7 architecture. K7 left Intel horribly embarrassed, and it took them years to finally catch up, which they did when they launched Nehalem. Bulldozer was a decent experiment, and software tooling has improved utilization, but it's still rough. With Intel ahead in both microarchitecture and manufacturing process, AMD is currently left with 'budget' pricing out of desperation as their strategy. That is by no means something to dismiss, but it's certainly less exciting, and perhaps not sustainable, since their per-unit costs are in fact higher than Intel's (though Intel's gigantic R&D budget is what fuels that low per-unit cost, so the gap in gross margin between Intel and AMD is huge while the gap in net margin isn't as drastic). If the Bulldozer scheme had worked out well, it could have meant another era of AMD dominance, but it sadly didn't work as well in practice.

Comment Actually, heartbleed was pretty affirming.. (Score 1) 175

Timing is pretty convenient. We have a tale of two exploits:
-Heartbleed. Open source project. Huge catastrophic bug, present since the beginning of 2012. Fix available pretty much immediately upon discovery. As a result, significant resources are pouring in to proactively examine OpenSSL, some fixing it and some forking it. One way or another, the fix was immediate, concerned parties are empowered to do whatever they think is needed, the open source world will have its risks mitigated, and closed source vendors can make their own call since it is BSD licensed.

-MSIE vulnerability. Closed source. Analogously large bug (albeit client side instead of server side, by sheer luck), present since 2008 at the very latest, but probably since 2001. The fix was over a week in coming after disclosure. If you were an organization standardized on IE, you were largely SOL with respect to a fix (though mitigation through tedious security settings was possible). Maybe MS ramps up an internal effort to root out more of these, maybe they don't. They already seemed to be in a more vigilant stance as a matter of course, and that wasn't enough to stop it.

So in other words, very important projects with huge responsibilities can cock up. They can be open source, they can be closed source. The practical lower bound on the resources devoted to an issue is small in both cases when no one knows something is wrong, but the upper bound once there is concern is much higher in open source.

Some have argued that 'any bug is shallow with enough eyes' was proven wrong by Heartbleed. Discovering security bugs is always trickier than finding the kind of bug that philosophy had in mind, but even then, once discovered, this bug was very, very shallow.
