The short answer is no. The long answer is no ... and a very long list of reasons why.
Start by reading Goldberg's classic paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic". Sun's floating-point group made some improvements to the paper and paid for the rights to redistribute it; Oracle continues to do so: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
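A quick way to see the kind of thing Goldberg is talking about: most decimal fractions aren't exactly representable in binary, so the literal 0.1 is already a rounded value before you do any arithmetic with it. Python shown here, but IEEE 754 doubles behave the same way in any language:

```python
from decimal import Decimal

# Decimal(float) shows the exact binary value the literal 0.1 was rounded to.
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Consequently, identities that hold on paper fail in floating point.
print(0.1 + 0.2 == 0.3)  # False
```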
If that isn't depressing enough, and you use trig functions, read Ng's "Argument Reduction for Huge Arguments: Good to the Last Bit": http://www.scribd.com/doc/64949170/Ng-Argument-Reduction-for-Huge-Arguments-Good-to-the-Last-Bit You can get the source for "fdlibm" from netlib; it is under a BSD-flavor license.
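To see why Ng's paper matters, compare a library sin of a huge argument against a naive reduction that first folds the argument into [0, 2π) with fmod. The 53-bit double closest to 2π is off by roughly 1e-16, and that error gets multiplied by about x/2π during the reduction, so for x near 1e22 the naively reduced phase is off by hundreds of thousands of radians (Python sketch; this assumes the platform libm behind math.sin does its own argument reduction carefully, as modern ones do):

```python
import math

x = 1e22

careful = math.sin(x)  # libm performs full-precision argument reduction
naive = math.sin(math.fmod(x, 2 * math.pi))  # reduce with the 53-bit 2*pi

# The rounding error in the double 2*pi is amplified by ~x/(2*pi) = 1.6e21,
# so the naive phase is essentially arbitrary and the two results disagree.
print(careful, naive)
```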
If the purely software issues haven't made you realize that you haven't got much of a prayer, note that different revisions of the same Intel chips sometimes produce slightly different results (sometimes intentionally, sometimes as a side effect of tweaking the order of execution in the out-of-order execution engine). Older x87 arithmetic was 80-bit internally, while newer x64 arithmetic is pure 64-bit, providing no end of fun, and using the SSE instructions adds yet more variation.
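The order-of-execution point is easy to demonstrate even without looking at the hardware: floating-point addition isn't associative, so anything that reorders a sum, whether a compiler, a vectorizer, or a different reduction order on the chip, can change the bits of the result:

```python
# The mathematical identity (a + b) + c == a + (b + c) fails for doubles.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False
```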
If the (in principle) "simple" and potentially deterministic software issues aren't enough, consider the reality of the hardware. Chessin has a very good, and amusing, explanation of the key problems: http://queue.acm.org/detail.cfm?id=1839574
Lest you think these problems only apply to a particular generation of boutique processor, note that most HPC ensembles are now built out of standard server motherboards and chips.
The issue of undetected soft errors is big and growing, as can be seen from the activity in the literature: http://www.csm.ornl.gov/srt/conferences/ResilienceSummit/2010/pdf/michalak.pdf See also the SC13 paper "ACR: Automatic Checkpoint/Restart for Soft and Hard Error Protection", which has lots of good citations of earlier work, including field data such as 27 soft errors per week leading to fatal node failures (that is, results wrong enough that, while the hardware didn't detect any problem, the error caused the node to crash) on just one ensemble (ASC Q). It's going mainstream, too: HPCwire caught wind and on 31 Oct 2013 ran a nice tabloidesque writeup entitled "Addressing the Threat of Silent Data Corruption".
Neutrons don't only disrupt memory elements; they can hit logic as well. See the upcoming issue (already available via IEEE Xplore for members/subscribers) of the JOURNAL OF SOLID-STATE CIRCUITS, VOL. 49, NO. 1, JANUARY 2014, "The 10th Generation 16-Core SPARC64 Processor for Mission Critical UNIX Server", which details the lengths some (but not many) go to in order to ensure that there are no undetected errors, using a wide range of techniques: careful placement of wires on the chip, ECC, parity, residue arithmetic, automatic retry, and so on. No doubt there are some good (similar) papers in the IBM Technical Journal.
No doubt a good literature search would turn up dozens of other papers, and circuit design textbooks cover some of the territory.
In principle, interval arithmetic could provide a solution: you might not get the same interval on every system, but if the intervals nest you have consistent results, and if they are disjoint you have a bug ... and when they nest, the narrower one is "sharper", which is better. In practice, most algorithms haven't been reworked for a good interval implementation, languages don't provide very good support, and neither does most hardware. All fixable in principle, but unlikely to be the solution you seek for today's off-the-shelf virtual systems available cheaply.
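To make the interval idea concrete, here is a toy sketch (the class, names, and the widen-by-one-ulp trick are my own illustration; a real library would switch the hardware rounding mode instead, which is tighter and faster). Each operation computes in ordinary double precision and then pushes the endpoints outward with math.nextafter (Python 3.9+) so the true result is guaranteed to lie inside; two runs can then be compared with the nesting test described above:

```python
import math

class Interval:
    """Toy interval [lo, hi] guaranteed to contain the true real result."""

    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = hi if hi is not None else lo

    def _widen(self, lo, hi):
        # Outward rounding by one ulp on each side. Over-widens slightly
        # compared to setting the FPU rounding direction, but is portable.
        return Interval(math.nextafter(lo, -math.inf),
                        math.nextafter(hi, math.inf))

    def __add__(self, other):
        return self._widen(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Endpoint products cover all sign combinations.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return self._widen(min(p), max(p))

    def nests_in(self, other):
        return other.lo <= self.lo and self.hi <= other.hi

    def disjoint(self, other):
        return self.hi < other.lo or other.hi < self.lo

# Two runs of the same computation are consistent if one result nests in
# the other, and buggy if the intervals are disjoint.
result = Interval(0.1) * Interval(0.2) + Interval(0.3)
print(result.lo, result.hi)
```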