Comment: Technical Subjects need Correct Answers (Score 3, Interesting) 293

I'm afraid that this article touches on what I perceive as a growing problem: the notion that "everyone's answers and opinions are right and have value."

This might be fine in areas where many things are subjective, in which case the axiom "there's no disputing taste" applies. In these cases, I agree that one should probably hold one's criticism.

But especially in technical areas, such as computer programming and the physical sciences, the laws of physics and logic oftentimes point to a more correct answer. In my own work, I find that I am constantly wading through massive amounts of literature, wondering: what the hell happened to the peer review that used to weed much of the crap out? Eventually, wrong answers and half-baked opinions stack up and warp reality, such that it is difficult to find or promote the few that are rigorous and correct.

I think it's a similar situation on peer-rated sites like Stack Exchange. Oftentimes, the posted solutions to a problem run the freaking gamut. I am glad that the good answers (based on sound reasoning and experience) are boosted up, while the dreck (based on fuzzy thinking, old wives' tales, and antipatterns) is ranked downward, thus giving some help to an interested third party (such as me) who really doesn't have time to be patient and P.C.

Disclaimer: the right answer can be the minority opinion -- one that may have been knocked hard by other reviewers. Here I am speaking about the 99% of the time when the best answer is the most highly rated.

Comment: How about Parallel Query Execution? (Score 4, Interesting) 162

by wispoftow (#47015229) Attached to: New PostgreSQL Guns For NoSQL Market

NB: I love PostgreSQL with all my heart. I always upgrade to the most recent version, because they keep implementing features that I really need. Added to Postgres's existing features, it's totally awesome.

But as I have moved toward "Big Data" and the market segment that these new-fangled (non-relational) databases target, I find myself wishing that Postgres would be able to run my vanilla query (*singular*) using all processors. As it is now, I have to either write some awful functions that query manually-partitioned subtables, or simply wait while it plods through all billion or so rows.

Comment: Re:Ten Reasons to use Modern Fortran (Score 1) 634

by wispoftow (#46971637) Attached to: Why Scientists Are Still Using FORTRAN in 2014

This is a very interesting example. Would you believe that gfortran 4.2.1 gives z=2 (two!) and gfortran 4.8 won't even compile, due to a bizarre REAL(4) vs REAL(8) error? There's something very wrong with this, and your point is taken. (The intent(inout) attribute of f90 would not have helped here, either.)

I would point out that, since this has side effects, I would probably have done it as a SUBROUTINE instead of a FUNCTION. Then things would have materialized in a predictable way. I try to write all of my functions as "pure" functions that have no side effects.
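For illustration, here is a minimal sketch (my own, not from the thread) of the "pure" style I mean; the PURE attribute makes the compiler reject side effects outright:

pure function square(x) result(y)
   implicit none
   real, intent(in) :: x   ! pure functions must declare their arguments intent(in)
   real :: y
   y = x*x                 ! no I/O, no global state, no modification of x allowed
end function square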

Comment: Re: We're Not (Score 1) 634

by wispoftow (#46970707) Attached to: Why Scientists Are Still Using FORTRAN in 2014

You keep trying to convince me that Fortran is not a panacea, as though I might actually believe that it is. I can only conclude that you are trolling.

If one sets a random seed from a reproducible generator, then starts a swarm of trajectories sampled from a Maxwell distribution of velocities, one should be able to get the exact same computer renditions of those trajectories on any computer that implements IEEE arithmetic. These are computationally deterministic simulations. This is reproducible research.
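As a minimal sketch of what I mean (the seed value is arbitrary; one caveat is that the standard leaves the intrinsic generator implementation-defined, so bit-identical draws across compilers require shipping your own generator):

program seeded_draws
   implicit none
   integer, allocatable :: seed(:)
   integer :: n
   real :: v(3)

   call random_seed(size=n)   ! query the length of the seed array
   allocate(seed(n))
   seed = 12345               ! fixed seed: same draws on every run of this binary
   call random_seed(put=seed)
   call random_number(v)      ! uniform draws; Maxwell sampling would transform these
   print *, v
end program seeded_draws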

Or maybe you are invoking quantum mechanical uncertainty for a classical mechanics simulation. Even quantum mechanical simulations that start with the same wave packet (x and p) can be reproduced faithfully if the initial conditions are known and the order of operations is the same. This discussion has nothing to do with ensemble averages or quantum mechanical uncertainty.

Compilers most certainly can dictate the order of operations of a CPU -- that's the whole point. Whether this is most efficient for a given architecture is another matter, and one reason why performance can drop if the CPU is ordered to perform its instructions in a suboptimal manner.

Finally, I agree with you that MPI has nothing to do with IEEE arithmetic, and I really don't know why you have brought this up.

(I regret that I must refrain from further discussion on this topic. Ummm... you win.)

Comment: Re:Ten Reasons to use Modern Fortran (Score 1) 634

by wispoftow (#46970159) Attached to: Why Scientists Are Still Using FORTRAN in 2014

Also, I think that my quick post was fairly incomplete; it's a pretty big topic. People generally don't want to make copies every time data enters a function, for performance reasons.

Modern Fortran has "intent" attributes that prevent arguments from being overwritten by functions; these go quite a way toward improving safety and guaranteeing immutability for parallelization.

subroutine dosomething( A )
   integer, intent(in) :: A(:,:,:)   ! assumed-shape array; intent(in) forbids writes
   ! any assignment to A in here is rejected at compile time
end subroutine dosomething

Altering A inside this subroutine generates a compile-time error. In Fortran, if you wish to "pass something by value," thus keeping the original data intact, you can always make the copy explicitly. The copy itself will then be passed by reference. (To the best of my knowledge -- I hope a Fortran maven will correct me if I am wrong.)
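A minimal sketch of that idiom (array names and sizes are hypothetical):

program copy_demo
   implicit none
   integer, allocatable :: big(:,:,:), tmp(:,:,:)

   allocate(big(10,10,10))
   big = 1
   call dosomething(big)   ! default: no copy is made; the array itself is referenced
   tmp = big               ! explicit deep copy preserves the original data
   call dosomething(tmp)   ! the copy, too, is passed by reference
contains
   subroutine dosomething(A)
      integer, intent(in) :: A(:,:,:)
      print *, sum(A)
   end subroutine dosomething
end program copy_demo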

Comment: Re:Ten Reasons to use Modern Fortran (Score 1) 634

by wispoftow (#46970117) Attached to: Why Scientists Are Still Using FORTRAN in 2014

In my mind, "passing by reference" means passing the memory location of the first element to a function, while passing by value means making a copy and then passing that to the function/subroutine.

If the array being passed consumes almost all of the memory of the machine (very common in scientific computing), then making a copy first would leave you dead in the water.

Comment: Re: We're Not (Score 1) 634

by wispoftow (#46969855) Attached to: Why Scientists Are Still Using FORTRAN in 2014

Please, go back and look at my previous comments. I said "consistent," not "exact." Now, you have called "bullshit" on me, so it's time to go to the source to back up what I have said. I choose "Modern Fortran Explained" by Metcalf, Reid, and Cohen, specifically Chapter 11, which covers the Fortran implementation of IEEE 754-1985 and IEC (ISO) 559. These guys are associated with the likes of CERN and NAG.

IEEE arithmetic has ~nothing~ to do with Fortran per se (see my comment above) -- the Fortran standard demands its implementation. Accusing Fortran users of "being stuck in their ways" is blatantly stupid, as any language that implements IEEE is guilty of the same crime. A "type" is more than its storage -- a type is the union of its storage with its operators. In Fortran (to the best of my knowledge), using IEEE arithmetic alters these operators, including essentials like +, -, *, /, and sqrt. (Aside: note that GROMACS started life as a fast-enough, precise-enough sqrt project, owing to sqrt's role in computing Euclidean distances. In my experience, it did not work out so well unless a thermostat kept bleeding out the aberrant velocity build-up.)

Since the FP types use finite width, no doubt there are still FP errors, as this is not exact arithmetic. Fortran does not fix this, and I never said it did. Again, I said IEEE, which Fortran implements, makes it "consistent."

Different CPUs have different performance characteristics for various operations. Where there is a preferred/faster order of operations, then the compiler can reorganize so that things run faster. This can be good and/or bad. From Metcalf, "Some computers perform division by inverting the denominator and then multiplying by the numerator. The additional round-off that this involves means that such an implementation does not conform with the IEEE standard."

My thoughts above were about ~different types~ of machines doing billions of operations in whichever way they please, thus leading to different results in long running simulations. It is possible to demand IEEE arithmetic and exception handling, in which case "Execution may be slowed on some processors by the support of some features."
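For the curious, the standard's hook for this is the intrinsic ieee_arithmetic module; a minimal sketch that merely inspects support and the rounding mode (no claims beyond what the standard names):

program ieee_demo
   use, intrinsic :: ieee_arithmetic
   implicit none
   type(ieee_round_type) :: mode
   real :: x

   x = 1.0
   print *, 'IEEE conformance for default real:', ieee_support_datatype(x)
   call ieee_get_rounding_mode(mode)        ! query the current rounding mode
   print *, 'round-to-nearest in effect:', (mode == ieee_nearest)
end program ieee_demo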

Now, let's talk about your fuzzy thinking. You said "There is no physics based reason for insisting that a truncation error should always be the same": do you expect bitwise results to differ, provided the same sequence of instructions is executed with the same starting values? You also think/thought that scientists don't post their trajectories or snapshots. I'm actually starting to wonder about you.

This subthread had little to do with Fortran, except as an innocent bystander that has IEEE support. It had to do with the suggestion that consistent orders of operations can lead to consistent results across different types of machines.

Comment: Re: We're Not (Score 1) 634

by wispoftow (#46967513) Attached to: Why Scientists Are Still Using FORTRAN in 2014

What you say is inconsistent with decades of my colleagues' work on the subject, and with my own observations. (I have a few dozen papers in JACS and J Phys Chem.)

a) The problem is that seemingly "stable" systems develop instability over time due to integration and floating point error. You mention chaos, and this is it -- sensitivity to minute changes in initial conditions and to accumulated tiny effects. But it's not randomness -- this chaos could be reproduced using well-defined arithmetic.
b) People do publish trajectories in supporting information or on their web sites, perhaps only snapshots. But how would you ever get from snapshot to snapshot to prove that you had implemented their methodology correctly, or to demonstrate the reproduction of a phenomenon -- perhaps an interesting one, perhaps evolution to a corner case that demonstrates a theoretical/methodological error? If you are working on different processors/compilers, it's almost impossible to reproduce.

I agree that people tend not to do IEEE arithmetic with classical MD. Please go back a comment or two and read it this time to confirm it for yourself. I was trying to give an example of a case where exactly reproducible results could be useful in the field of MD, particularly in development.

I remain convinced that IEEE arithmetic is useful in many important (but perhaps comparatively rare) circumstances, and that FP error will always be a lingering issue that rears its ugly head. Fortran implements ways of dealing with it consistently.

Comment: Re: We're Not (Score 1) 634

by wispoftow (#46967151) Attached to: Why Scientists Are Still Using FORTRAN in 2014

First of all, please note that I said ~exact~ reproduction. You keep going back to some ensemble average as being "good enough." Secondly, what you are saying about the scientific topic is inconsistent with the numerical analysis of MD simulations using, e.g., the velocity Verlet algorithm.
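For concreteness, here is the velocity Verlet update I mean, as a minimal one-particle sketch (harmonic force; all parameters arbitrary). The point is that the instruction sequence is fixed, so under IEEE arithmetic the trajectory is bit-reproducible:

program verlet_demo
   implicit none
   integer, parameter :: dp = kind(1.0d0)
   real(dp), parameter :: dt = 1.0e-3_dp, k = 1.0_dp, m = 1.0_dp
   real(dp) :: x, v, a, a_new
   integer :: step

   x = 1.0_dp
   v = 0.0_dp
   a = -k*x/m                              ! initial acceleration
   do step = 1, 100000
      x     = x + v*dt + 0.5_dp*a*dt*dt    ! position update
      a_new = -k*x/m                       ! force evaluated at the new position
      v     = v + 0.5_dp*(a + a_new)*dt    ! velocity from averaged accelerations
      a     = a_new
   end do
   print *, 'x =', x, ' v =', v
end program verlet_demo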

Please see: http://chris.sweet.name/docs/s... for an example of floating point error in action (pay special attention to the single vs. double precision differences that appear once the simulation has run for a long time). One of the early criticisms of GROMACS (the fastest MD!) was that it ran really great in single precision -- but everyone else (AMBER, CHARMM, and homegrown codes) criticized it for floating point round-off error leading to trajectories that flew apart because they developed too much momentum.

These problems don't "appear" if you have slapped a thermostat on the simulation (the NVT ensemble). This is why you should always run an equilibration phase and then switch to constant energy (the NVE ensemble) when generating results that you wish to report. Otherwise, you are sweeping all sorts of problems due to floating point arithmetic and integration error under the rug.

But I suppose that you are doing classical MD -- trying to draw some inference on e.g. protein dynamics based on interactions between van der Waals spheres. These simulations tend to be "right" half of the time -- they are wrong until you've cooked the force field enough to match the definitive experiment. Constraints go a long way in keeping the system together.

So, in this environment, where the primary interest is generating "ooh's and ahh's" from movies of dancing proteins as quickly as possible, you are probably safe using single precision arithmetic and hiding all of your sins with a nice, strongly coupled thermostat and barostat.

Comment: Re:Q: Why Are Scientists Still Using FORTRAN in 20 (Score 2) 634

by wispoftow (#46964689) Attached to: Why Scientists Are Still Using FORTRAN in 2014

Most people learned Fortran in a class intended to teach scientific programming. I have never, ever, ever seen a course catalog that lists CS 201 FORTRAN PROGRAMMING. It has always been about getting the scientific result -- it just so happens that Fortran has been pretty good at that, and actually pretty simple, and so it has oftentimes been used.

That being said, C almost killed Fortran (77) because the committee waited so damned long to bring out Fortran 90. People were sick and tired of waiting for dynamic memory allocation and free-form input, and so many people who were past entry-level programming started jumping to C (...which I would never relish teaching to a beginner with no solid interest in computer programming).

Comment: Re:Q: Why Are Scientists Still Using FORTRAN in 20 (Score 2) 634

by wispoftow (#46964639) Attached to: Why Scientists Are Still Using FORTRAN in 2014

Missing out on what, the wonders and simplicity of using git? (/sarcasm)

Many Fortran programmers are academics, and academics are oftentimes (not always) concerned with one-off programs that demonstrate a computer-based solution to a novel phenomenon. Often, the investigator works alone, and once the work is done, the code is sometimes never visited again. In these cases, anything more than VMS-style versioning is total overkill.

I agree that version control is important and often ignored. But this is not specific to Fortran; rather, version control adds overhead both in learning and in use, and academics are often putting all of their effort into the science.

"Confound these ancestors.... They've stolen our best ideas!" - Ben Jonson

Working...