Comment Re:"...benefits of performance" - not so much (Score 1) 228

I was playing with "Ubuntu for Windows" on Windows 10 last week. While it is an interesting effort (and I was able to upgrade from the default Ubuntu 14.04 (Trusty Tahr) userland to the 16.04 (Xenial Xerus) userland with minimal difficulty), performance does not match that of a Lubuntu 16.04 VM on the same hardware running in VMware Player. I was running repeated Monte Carlo-type simulations, and the same scripts and executables took twice as long to run under Ubuntu for Windows as in the VM.

So, while interesting, it is still beta (it is labeled as such, and requires "developer mode" to be enabled in Windows 10), and it's not quite ready for prime time yet.

If you're getting that kind of performance difference, it sounds like you're really bound by I/O or (faulty?) thread synchronization. But, yeah, I would never use WSL for performance. For easy, transparent prototyping it's great, and I've basically stopped using Cygwin.

Comment Re:How good is the masking? (Score 2) 499

If the voice masking wasn't well done, couldn't you end up with an uncanny valley sort of situation with respect to how the applicants sounded? I can imagine a scenario in which the voice sounds "wrong" at a gut level, and that makes some interviewers uncomfortable.

And, overall, what do you think the tech community would be most open to? A person who sounds like a woman but comes off as a bit of a tomboy (on account of actually being a man), or a man who seems oddly feminine or "weak", "fuzzy", or whatever attributes you would assign to socially acceptable female behavior? And how did men whose voices were masked to sound female fare, compared to non-masked men?

Comment Re:I hope it is almost time (Score 1) 149

Take a MS Excel macro made on Windows, and run it in MS Excel on another Windows machine...

Excel is horribly broken, to the point where macros stop working if you try to use them on a machine with a different language.

No more than saying that a bash script breaks if you run it under a non-C locale. It MIGHT be true, if you actively rely on other stuff behaving in certain ways. And, if you're completely blind to the issues involved, it might very well happen. I've seen Java and C# code generate invalid SVG files and the like by using a decimal comma (taken from the current locale) rather than a decimal point. But, again, that doesn't mean that the software itself is broken. The author of an Excel macro might, on the other hand, be far more likely than either of those groups to just cobble something together based on what seems to work.
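A minimal sketch of that decimal-separator pitfall, here in Python for illustration (the helper names are made up; the comment above saw the bug in Java and C#):

```python
import locale

def svg_coord_unsafe(x):
    # locale.str() honors LC_NUMERIC, so under e.g. a German locale it
    # would render 3.5 as "3,5" - invalid inside an SVG path.
    return locale.str(x)

def svg_coord_safe(x):
    # format() in Python is locale-independent and always uses a
    # decimal point, which is what the SVG grammar requires.
    return format(x, ".2f")

print(svg_coord_safe(3.5))  # "3.50" regardless of locale
```

The general fix is the same in any language: serialize machine-readable formats with a locale-independent routine, and keep locale-aware formatting for human-facing output only.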

Comment Re:A total non story .. (Score 1) 109

Yeah, that's precisely what you would expect for a vulnerability in user space code. "Just" unzip a 7-zip file and suddenly any file in your home directory can be compromised... or gone. Run a virus scanner on your e-mail server (with insufficient sandboxing), or on your web server's file-upload handler, and things get... worse.

Comment Re:Big Data is not a substitute for Critical Think (Score 1) 69

"Those conflicting forces reduce skill applied in jobs filled by the mostly unqualified." Did you draw this conclusion by collecting a lot of data and running the data through your own analysis? Or do you have some other way to prove such a broad accusation? The biggest problem we face today is from those who use statistics to support their cause and opinions. We are constantly bombarded with poll results and nobody ever questions how these results are derived. What statistical methods allow the pollsters to take a very small sample size and project those results onto very large populations? How do you ask 500 people their opinion on something and then apply those results to 400 million people?

By assuming random sampling, that's how. Whether that assumption is correct is a critical issue, but that is the case for any population significantly larger than your sample. 500,000 or 400 million really does not matter - if you are ignorant of the demographics and how they interact with your sampling strategy, you're not gonna get a correct result.

If, on the other hand, you somehow manage to do random sampling of the true population, 400 people would be enough to nail preferences down to within a few percent, (almost) no matter the total population size. And I guess this is the danger of statistics and big data. Intuition says one thing, simple statistical assumptions say another, and a more thorough treatment is rare.
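A quick back-of-the-envelope check of that claim, assuming a simple random sample:

```python
import math

# For a simple random sample, the 95% margin of error for a proportion
# is roughly 1.96 * sqrt(p * (1 - p) / n) - a function of the sample
# size n, not of the population size (for large populations).
def margin_of_error(n, p=0.5):
    return 1.96 * math.sqrt(p * (1 - p) / n)

moe_400 = margin_of_error(400)  # ~0.049, i.e. about +/- 5 points
moe_500 = margin_of_error(500)  # ~0.044
```

So a few hundred respondents suffice for a few-percent margin, whether the population is 500,000 or 400 million - provided the sample really is random, which is exactly the assumption that usually fails.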

Comment Re:Multiple heads (Score 1) 202

There was one drive maker which actually did this. They had two head actuators at opposite ends of the drive, each independent of the other; either could fail, and the other would completely take over. I've wondered why this isn't more commonplace - perhaps a drive form factor with four head assemblies, all active/active, that could handle a head array failing (perhaps lighting up SMART). This wouldn't just allow four times the I/O; it would let four different threads write at the same time, which is useful for virtualization - although these days, virtualization should just go to SSD or a large I/O buffer due to all the random reads/writes.

There was a drive that had two actuators that could each access the entire platter. That was the design, but I think in the end the complexity of multiple heads accessing the same sector was problematic because of the ordering of operations (i.e., one head could write data to a sector the other head was reading - if you didn't catch this, you would corrupt the data).
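The ordering hazard can be sketched with a toy model (entirely hypothetical; real firmware operates on whole sectors with ECC):

```python
# Two independent actuators over one platter: head A is mid-write when
# head B reads the same sector, and without coordination the reader
# sees a torn mix of old and new data.
sector = bytearray(b"AAAAAAAA")

def head_a_write(data, bytes_done):
    # A mechanical write is not instantaneous; model a partially
    # completed write of 'data' into the sector.
    sector[:bytes_done] = data[:bytes_done]

def head_b_read():
    # The other actuator just reads whatever is on the platter now.
    return bytes(sector)

head_a_write(b"BBBBBBBB", 3)  # head A is only 3 bytes in...
torn = head_b_read()          # ...when head B reads the sector
# torn == b"BBBAAAAA": neither the old nor the new contents.
```

Preventing this requires per-sector locking or request ordering across actuators - exactly the coordination complexity the comment above points at.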

Plus, double the heads doubles the chance of a head crash.

It seems like NCQ, write and read caches (sometimes with flash hybrid modes), etc., in current drives already bring enough complexity that additional physical heads would also be reasonable to implement. The abstraction in the drive firmware is much thicker these days.

Comment Re:Minimal impact (Score 1) 139

It has to get around stack overflow protection canaries (-fstack-protector-strong or -fstack-protector-all), address space layout randomization, and a non-executable stack and heap. Ubuntu has built with -fstack-protector-strong (which covers functions calling alloca(), among others) since the gcc 4.9 release after 2015-05, according to bug #1317307. Kees Cook added the -strong feature to gcc and is part of Ubuntu's compiler team, so it went straight into Ubuntu.

Good luck exploiting this bug.
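The canary mechanism those flags insert can be modeled in a few lines (a Python sketch of the idea only - the real guard is placed by the compiler in the stack frame):

```python
import os

# A random guard value sits between a buffer and what it protects; an
# overflow that runs past the buffer clobbers the guard, and a check
# on function exit catches the corruption before it can be abused.
GUARD = os.urandom(4)

def copy_with_canary(frame, payload):
    buf_size = 8
    frame[:] = bytes(buf_size) + GUARD            # [buffer][canary]
    frame[:len(payload)] = payload                # unchecked "memcpy"
    return frame[buf_size:buf_size + 4] == GUARD  # canary intact?

frame = bytearray(12)
ok = copy_with_canary(frame, b"short")        # fits: canary survives
smashed = copy_with_canary(frame, b"A" * 12)  # overflow: canary clobbered
```

In the real scheme a failed check aborts the process, so an attacker gets a crash (denial of service) rather than code execution - which connects to the point below.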

Denial of service by crashing the process is of course not as nasty as remote code execution, but it can easily be nasty enough, especially if the properties of DNS allow it to penetrate deep inside networks and services generally believed to be protected. My personal favorite vector here would be XML exposed to parsers that auto-load whatever DTD or other schema is specified.
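A sketch of that vector (the hostname is made up): an attacker-controlled document names an external DTD, and any parser configured to fetch it must first resolve the host - exactly the resolver code path such a DNS bug lives in.

```python
import xml.etree.ElementTree as ET

# Attacker-supplied document pointing at an external DTD on a host the
# attacker controls (hostname hypothetical).
DOC = '<!DOCTYPE note SYSTEM "http://attacker.invalid/note.dtd"><note/>'

# Python's expat-based parser does not fetch external DTDs by default,
# so this parse stays local; a parser that auto-loads DTDs would
# trigger a DNS lookup for attacker.invalid instead.
root = ET.fromstring(DOC)
```

The safe default shown here is the mitigation: never let a parser dereference attacker-controlled DTD or schema URLs.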

Comment Re:Defense (Score 1) 139

The bug can hit you over DNS via TCP as well. While that is somewhat of an oddity, I am not yet confident enough to say that you can rule it out, especially if a MITM might be able to trigger fallbacks. Since the TCP response could be fragmented over several packets, things rapidly grow beyond iptables' capabilities there. (But a TCP DNS response fragmented over several packets would thankfully not propagate through layers of caching internal DNS servers.)

Comment Re:Sad (Score 2, Insightful) 107

If you use C/C++ right, you do not end up writing a JIT compiler for a language never intended for it. This is a bug in V8. We don't know where yet, but that is the kind of code that does things no one sane should ever do: it is supposed to take shortcuts and patch things on the fly. It's of course fully possible that this exploit is not in a performance-critical path, and then your comment is rather well placed. But I do think that anyone expecting C/C++ to be used "right" in this context is fooling himself. It is for all practical purposes impossible to use C without doing bare pointer arithmetic. It is entirely possible to use C++ without it, even though such use is not terribly widespread.

Comment Re:replicate earth air purification (Score 3, Informative) 112

It's not like putting a sliced tomato on the kitchen sink in a humid climate will prevent other parts of your kitchen from attracting the mold spores around. Bacteria and fungal spores are mostly incapable of directed macroscopic movement (especially in air). They are also able to rapidly expand their populations. Therefore, a "colonist" doesn't choose to move to the best spot, foregoing a worse one. They will try everywhere. If they gain a foothold, that foothold is likely to just unleash further colonists into the less hospitable, but still slightly viable, habitats.

Comment Re: Cut to the chase (Score 1) 134

Well, for this "frame rate" theory to be relevant, the question is not only whether anything happens at or close to the frame rate, but what the frame stepping function is. And, throwing relativity into the mix, in what reference frame?

A discretized spacetime would mean that the continuous solutions to the Schrödinger/Dirac equations are actually approximations that are better expressed by some discrete time stepping scheme. That could have macroscopic consequences - especially so if, for some weird reason, Nature has a rather simple first-order scheme at its frame-rate core. But it also means that we would get slightly different results from different objects in free fall, depending on their overall speed relative to the reference frame, since that speed would control the factor between the "local passage of time" and the actual number of "Planck time frames" used by the process. In addition, a discretization of time almost necessitates a discretization of space. That not only means that space has some small grid (not likely either, based on current theory); it also means that there are some absolute directions in space, and that some physical processes would behave slightly differently (even when aggregated over macroscopic distances) depending on whether they are aligned to those directions or not.
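The mathematical point - that a first-order stepping scheme only approximates the continuous solution, with an error set by the frame size - can be illustrated numerically (a toy sketch, not physics):

```python
import math

# First-order (Euler) stepping of dx/dt = -x, whose continuous
# solution is e^(-t). If time advanced in discrete "frames" of size
# dt, this is the kind of discrepancy that would accumulate.
def euler_decay(dt, t_end=1.0):
    x = 1.0
    for _ in range(round(t_end / dt)):
        x += dt * (-x)
    return x

exact = math.exp(-1.0)                  # continuous answer, ~0.3679
coarse = abs(euler_decay(0.1) - exact)  # large frames: visible error
fine = abs(euler_decay(0.001) - exact)  # tiny frames: error shrinks
```

For a first-order scheme the error shrinks linearly with the frame size, which is why a Planck-scale frame rate would leave only extraordinarily small (but in principle cumulative) deviations.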

Comment Re:And then we know ... what exactly? (Score 5, Interesting) 134

Well, electron states being quantized has helped us to (truly) understand chemistry and create transistors as well as LEDs. By realizing that things are only allowed to make certain transitions under certain conditions, you can "cheat" and build up high-energy states that are far more stable than they really should be. I am not saying we would get macroscopic anti-gravity or a "Faraday cage for gravity", but this is kind of the space where we would get more specific explanations for how you might be able to accomplish those things in theory. For very delicate experiments (similar to the one described!) and possibly sub-nanoscale manufacturing procedures, an understanding of the quantized nature of gravitational influence might be useful, if only for better understanding the noise in measurements and tolerances.

Comment Re:It would have to be. (Score 2) 134

Even if mass were quantized, the Newtonian force is G·m1·m2/r^2. Even with discrete mass quanta (and mass quantization is false anyway; see other replies), you would get a continuous spectrum of resulting forces, since r varies continuously. Inserting relativity here changes the expressions, but it would really just muddle things. So, no, there is no specific reason to believe gravity to be quantized - outside of an actual theory of quantum gravity.
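A numeric sketch of that argument (the mass quantum is made up for illustration):

```python
# Even with a hypothetical smallest mass quantum, the inverse-square
# law yields a continuum of forces, because the separation r varies
# continuously.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def newton_force(m1, m2, r):
    return G * m1 * m2 / r**2

quantum = 1e-27                    # made-up mass quantum, kg
m1, m2 = 5 * quantum, 7 * quantum  # discrete masses...
forces = [newton_force(m1, m2, r) for r in (1.0, 1.1, 1.234)]
# ...but every positive force value is attainable by choosing r, so
# the force spectrum is continuous despite the quantized masses.
```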

Comment Re: Stupid (Score 1) 153

If it is just a bug, then we should expect a quick fix and firmware release from VW. If, however, it was a conspiracy, and there is no way that VW's EGR technology can ever be made to pass the NOx requirements (without additional hardware - AdBlue tanks), then VW is screwed.

My point is that a "too good to be true" bug could easily have quite devastating consequences if it's just fixed. If they remove the 'false' in the putative "if (isInTest() || (isNOxReductionNeeded() && false)) enableEGR();" line and this increases fuel consumption or reduces maximum torque a lot, they cannot simply release that fix.
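The scenario can be spelled out with that putative line (helper names entirely made up, as in the pseudocode above):

```python
# The "too good to be true" bug: EGR never runs outside of test mode,
# because the second branch is dead code.
def egr_enabled_buggy(in_test, nox_reduction_needed):
    return in_test or (nox_reduction_needed and False)

# The one-token fix - which changes real-world behavior dramatically.
def egr_enabled_fixed(in_test, nox_reduction_needed):
    return in_test or nox_reduction_needed

# On the road (not in test), with NOx reduction needed:
on_road_buggy = egr_enabled_buggy(False, True)  # EGR stays off
on_road_fixed = egr_enabled_fixed(False, True)  # EGR now runs,
# costing fuel economy and/or torque - which is why the trivial fix
# cannot simply be shipped as a firmware update.
```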

Embedded automotive control systems and scientific research are quite different domains, but in science I've repeatedly been close to thinking I had solved a problem, only to realize that my benchmark was off and the code was not really working at all. I have not published any of those results (AFAIK), but I've reviewed and seen publications with blatant errors. When you have reached the kind of result you hoped for and believed likely, you are not on guard anymore. Fixing the blatant error might very well mean that the whole work is pointless. The error is trivial; the consequences are not.
