Comment: Re:Just a decade ago. (Score 1) 170

by Dizzer (#46999587) Attached to: WebKit Unifies JavaScript Compilation With LLVM Optimizer

Ahhhh, the obligatory FORTRAN circle jerk. A bunch of performance assertions without substance dashed with a healthy ignorance of the value of developer time vs. machine time.

Just a little example though:

50-million-step N-body simulation benchmark (http://benchmarksgame.alioth.debian.org/)
Intel Fortran: 20.34s
G++ C++: 20.25s

Oh my gosh, what is that? The sound of dozens of bearded-old-man-Fortran-programmer jaws dropping?

http://benchmarksgame.alioth.d...
http://benchmarksgame.alioth.d...

Comment: Re: We're Not (Score 1) 634

by Dizzer (#46970873) Attached to: Why Scientists Are Still Using FORTRAN in 2014

If one sets a random seed from a reproducible generator, then starts a swarm of trajectories sampled from a Maxwell distribution of velocities, then one should be able to get the exact same computer renditions of those trajectories on any computer that implements IEEE arithmetic.

No, god damn it!!! How the hell do you do computational research? On a single CPU?! You are either totally thick, or the troll in this discussion.

Or maybe you are invoking quantum mechanical uncertainty for a classical mechanics simulation.

I'm invoking order of operations. Why is that so hard to understand?!
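To spell out what order of operations means here, a minimal sketch (Python is my choice of language; the effect is a property of IEEE 754 doubles, not of any one language):

```python
# Floating-point addition is not associative: regrouping the same
# three IEEE 754 doubles changes which intermediate result gets rounded.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # rounds a+b first
right = a + (b + c)  # rounds b+c first

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

Both groupings are fully IEEE 754 compliant; the standard specifies how each individual addition rounds, not which grouping the program uses.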

Comment: Re: We're Not (Score 1) 634

by Dizzer (#46970615) Attached to: Why Scientists Are Still Using FORTRAN in 2014

IEEE arithmetic has ~nothing~ to do with Fortran per se (see my comment above) -- the Fortran standard demands its implementation.

Correct. And it is also correct that the C++ standard does not demand IEEE 754 compliance. There are hardware requirements (i.e. CPU/FPU support) for this. Requiring these would exclude a whole bunch of embedded systems, for example.

Fact is that the hardware you will run on is very likely to support IEEE 754.

a type is the union of its storage with its operators. In Fortran, (to the best of my knowledge) using IEEE arithmetic alters these operators,

Huh? Fortran rewires my CPU?!

Different CPUs have different performance characteristics for various operations. Where there is a preferred/faster order of operations, then the compiler can reorganize so that things run faster.

IEEE 754 defines a set of basic operations. What it does not define are rules for how the compiler must or must not order these operations!
Bear in mind that this goes beyond simple reordering of terms in one calculation. The order can be impacted by MPI communications for example. IEEE 754 says nothing about this, and neither does the Fortran standard.
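A minimal sketch of that point (Python is my choice here; the arithmetic is the same in Fortran or C++): summing the same ten doubles in one serial loop vs. as two partial sums -- the pattern a two-rank reduction produces -- gives bit-different results, with every single operation IEEE 754 compliant in both runs.

```python
# Same ten IEEE 754 doubles, two summation orders.
data = [0.1] * 10

serial = 0.0
for x in data:      # one running total, as on a single rank
    serial += x

half = len(data) // 2
partials = [sum(data[:half]), sum(data[half:])]  # two "ranks"
reduced = partials[0] + partials[1]              # final reduction

print(serial)             # 0.9999999999999999
print(reduced)            # 1.0
print(serial == reduced)  # False
```

The difference lives in the last bit, which is exactly the truncation error the parent post is talking about: harmless for the physics, fatal for bitwise reproducibility across parallelization topologies.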

do you expect bitwise arithmetic to be different, provided that the same sequence of instructions is executed with the same starting values?

Of course not, because the sequence of operations is not the same on different systems and parallelization topologies.

You also think/thought that scientists don't post their trajectories or snapshots.

Nonsense. Maybe you will learn through repetition: Exact trajectories are almost never a relevant physical observable. Read the stuff about uncertainties in initial conditions again please.

The problem you believe Fortran can help you with is not a problem, and furthermore Fortran cannot help you with it anyways. Sorry, man.

Comment: Re: We're Not (Score 1) 634

by Dizzer (#46968717) Attached to: Why Scientists Are Still Using FORTRAN in 2014

A specific trajectory can serve as an example, but the exact trajectory itself is not a meaningful observable. I suggest you go back and take a look at previous posts that mention uncertainties in the initial conditions being overwhelmingly larger than truncation errors. Add to this a thermostat if you will.

Even with a fully IEEE compliant floating point implementation (which by the way is the CPU's business - why are we even talking about this here?) operation order will determine how the truncation error is propagated. Results are still deterministic on the same hardware with the same optimization and the same parallelization. But this is mainly of importance for code debugging. There is no physics based reason for insisting that a truncation error should always be the same.

Furthermore, I don't buy it. I'm calling bullshit on you. You have yet to divulge what these magic "lower performance IEEE data types" are. Are you talking about longer types? That will not protect you from truncation errors and will do NOTHING to make them more "consistent" either. To me it frankly sounds like you are set in your old Fortran ways and are just trying to rationalize sticking to an outdated language.

Comment: Re: We're Not (Score 1) 634

by Dizzer (#46967083) Attached to: Why Scientists Are Still Using FORTRAN in 2014

Apart from what has been said before about this subject by the other commenters, I'd like to stress that you seem to lack the physics understanding to appreciate the significance of rounding errors. If your system is unstable enough for rounding errors to affect the outcome, you need to do uncertainty quantification anyways! In particular, a sensitivity analysis of the input data (which is rarely known with precision anywhere comparable to the machine precision) is necessary. Reproducibility does _not_ mean reproducibility of exact trajectories; it means reproducibility of relevant observables.
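To make the sensitivity point concrete without a full MD code, here is a sketch with the logistic map at r = 4 (my choice of toy chaotic system, not from the original discussion): two runs whose initial conditions differ by 1e-12 -- far below any realistic experimental uncertainty -- decorrelate completely within a few dozen steps.

```python
# Logistic map x -> 4x(1-x): chaotic, positive Lyapunov exponent.
# Track how far two nearly identical trajectories drift apart.
x, y = 0.3, 0.3 + 1e-12
max_gap = 0.0
for step in range(100):
    x = 4.0 * x * (1.0 - x)
    y = 4.0 * y * (1.0 - y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # O(1): the 1e-12 perturbation grows to the size of the attractor
```

Input uncertainty swamps machine epsilon by orders of magnitude, so demanding bitwise identical trajectories buys you nothing physically.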

My colleague/reviewer could not care less about exact trajectories. Do you think trajectories get published? Do you think full trajectories are even stored to disk most of the time?! If the conclusion of your paper hinges on an exact trajectory it should not be published at all. Hell, you shouldn't be doing science if you think that is how it works.

Comment: Re:Q: Why Are Scientists Still Using FORTRAN in 20 (Score 2) 634

by Dizzer (#46964381) Attached to: Why Scientists Are Still Using FORTRAN in 2014

This. I have many friends in the physics dept and the reason they're doing Fortran at all is that they're basing their own stuff off of existing Fortran stuff.

The types of people who haven't heard about collaborative development or, dare I say, version control.
I've been there. You end up with a zillion diverging (and never merging) forks, people reinventing various wheels over and over, and of course adding their own bugs.
This is a terribly unproductive and sad way of developing code. Unfortunately most scientists I know (knew) don't give a crap because they are _completely_ oblivious to what they are missing out on.

Comment: Re:Because C and C++ multidimensional arrays suck (Score 1) 634

by Dizzer (#46964343) Attached to: Why Scientists Are Still Using FORTRAN in 2014

All the built-in array people are essentially obsessing over a micro-optimization. First of all, I would argue that in a scientific research environment development time is a far more important factor than execution time. And having a framework with a clean outward-facing interface for reusers makes a huge difference. Clean, well-designed object-oriented code also encourages contributions and allows your code to flourish, which reduces the pressure for people to invent their own wheels (again saving developer time).

Secondly, the more substantial optimizations come from choosing the appropriate algorithms. Why worry about a 5% speed-up when choosing the right preconditioner can give you a 10-fold speed-up? As an aside, why worry about even a 20% slowdown if you have a scalable parallel implementation that you can just throw a few more cores at? Profile before you optimize, and profile _economically_, too!

Comment: Re:We're Not (Score 2) 634

by Dizzer (#46964259) Attached to: Why Scientists Are Still Using FORTRAN in 2014

If you get hung up on floating point truncation errors, then I have bad news for you: Fortran won't protect you from that. You seem to be under the delusional impression that this invalidates the results for some reason. This is utter bullshit. One example is molecular dynamics simulations. An MD simulation is a chaotic system. The _exact_ trajectory is not the relevant result. The phase space that is sampled is. Trajectories of systems with identical initial conditions are bound to diverge on different machines due to a change in floating point operation order and the resulting truncation errors. But the phase space that is sampled is _equivalent_ in each run. If for some reason machine precision is important to you, you'd be much better off using a library such as GMP (https://gmplib.org/).
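To sketch the extended-precision suggestion without leaving the standard library, here is the same idea with Python's decimal module standing in for GMP/MPFR (my substitution -- the comment above names GMP for C/C++ use):

```python
from decimal import Decimal, getcontext

# Arbitrary working precision: 50 significant digits instead of
# the ~16 an IEEE 754 double gives you.
getcontext().prec = 50

third_double = 1.0 / 3.0
third_exact = Decimal(1) / Decimal(3)

print(third_double)  # 0.3333333333333333
print(third_exact)   # 0.33333333333333333333333333333333333333333333333333
```

Note that this only pushes truncation error down; it does not eliminate it, and it does nothing to make results reproducible across different operation orders.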

Comment: Sigh, G+ hate is fashionable, isn't it? (Score 1) 339

by Dizzer (#45918057) Attached to: Google Begins To Merge Google+, Gmail Contacts

I seriously don't know what's with all the G+ hate. I primarily use the unlimited storage for photos on G+ to share my pics with friends and family (their APIs make batch uploading easy, and I migrated several thousand pics from my own Gallery2 installation to G+, including description texts, with a small Python script). I am subscribed to a bunch of "communities" which deliver quite a bit of interesting content. And I also *enjoy* the link to YouTube comments, as it floats interesting videos to my stream that people from my circles commented on.
