Is "mag" unacceptable?
Only a year ago, AR-15 lowers were $79, so $150 is nearly a 100% increase in price.
Freeze the RAM, remove it, and reinsert it into another device to dump its contents. It's been done before: http://zedomax.com/blog/2008/09/29/memory-hack-how-to-hack-encryption-keys-by-freezing-memory/
Another vote for the G2
Don't get me started on the "absence of evidence" line. It's a BS saying by those who want to support a particular claim without having any way to show others they're not just making stuff up. This isn't even absence of evidence, though. It's poorly-stated evidence, or possibly even misleadingly-stated evidence.
When a company that is "an exceptionally competent developer of high performance, low-level graphics software" fails to communicate test results in a fashion that can be meaningfully compared by people who would be interested in such a developer's blog, it calls into question either their competence or the content of their communication. A competent group that realizes "hey, that's really cool, we got a higher average FPS" and also sees "that's weird, there are a lot of sections of benchmarking where the actual FPS drops to 15-20" might choose to communicate the first result and hide the second in order to drum up support. Simply reporting the average FPS does nothing to assure us that the performance is better for an end user (other considerations include micro-stutter, periods of low playability, and so on).
Additionally, such "obvious test practices" do really need to be spelled out, and conformity results reported, for a reader to even infer a meaningful comparison. With the system they described, if they were pumping this out to a 1920x1080 display, the test results mean something completely different than if they were driving 1, 2, or 3 2560x1600 displays. I (and others in the field or even just interested in the field), would like to know this information so that it is meaningful, and not merely an "ePeen number".
Perhaps it's due to [H]ard|OCP that I've come to expect better benchmark reporting, but what is shown here disappoints. A test case can be designed to test a number of factors, so simply stating it's for "testing" indicates you're unaware of testing in the graphics area. Those factors include, but are not limited to, conformance, pixel throughput, vertex throughput, fill rate, average fps, memory utilization, GPU utilization, CPU utilization, inter-frame jitter, and average dropped fps. Not all of these always need to be reported, but some of them are linked, and should be reported together.
So yeah, one factor is marginally faster and was useful for finding a previously-missed bit of overhead. Cool, great work, etc. But don't report a single number which may or may not be misleading (and will obviously be touted by some as proof that platform or API set x is better than y), without giving meaningful context to the results. As you said, Wraithlyn, they're experts. How about they demonstrate that in a meaningful fashion for the other experts who might gain some insight from their blog?
Clearly not. They give a bare number that doesn't indicate whether it is a maximum FPS or an average FPS. They provide neither the test setup (screen resolution, detail settings, etc.) nor meaningful analysis of overall performance. For example, if the average FPS is lower on one platform, but the variability is also lower, the actual user-perceived performance will be better. No meaningful details are provided, just some ePeen number divorced from context. The statistician in me weeps.
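To make that point concrete, here's a toy sketch (the per-frame times below are invented for illustration, not from their benchmark): a run with a lower average FPS but low jitter can feel smoother than a run with a higher average FPS and occasional long hitches.

```python
import statistics

# Hypothetical per-frame render times in milliseconds (invented data).
run_a = [16] * 8                        # perfectly steady pacing
run_b = [5, 5, 5, 5, 5, 5, 5, 80]       # mostly fast frames, one big hitch

for name, frames in (("A", run_a), ("B", run_b)):
    avg_fps = 1000 / statistics.mean(frames)   # what a bare "average FPS" reports
    jitter = statistics.stdev(frames)          # frame-time variability the number hides
    print(f"run {name}: {avg_fps:.1f} avg FPS, {jitter:.1f} ms frame-time stdev")
```

Run B wins on average FPS, yet it contains an 80 ms frame (12.5 FPS instantaneous) that a user perceives as a stutter, while run A never deviates from 62.5 FPS. Reporting only the average hides exactly the thing that matters.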
Multithreading for C/C++ and 64-bit builds are both available in VS Express.
For 64-bit, you have to install the Windows SDK, but it works.
I'm pretty sure that the people doing the first post prepared in advance are just trolls stepping up their game.
True, but the condition is "a lion in your fridge" not "in your current fridge". Giving away your fridge doesn't preclude you from obtaining another one, which gets infested with lion(s).
Also, he's the entire reason my username is Tawnos.
Jeff Grubb was one of my favorite "dabbling" fantasy authors. He wrote some of the first canonical M:TG books, e.g. The Brothers' War. Totally OT, but it reminded me to check out what he's written since then, since I haven't read anything by him in a long time.
Note what he said: "patients who cannot be vaccinated for whatever reason." Allowing in a patient who is a likely carrier (one who is perfectly able to get vaccines but whose parents refuse them) makes the doctor liable to be sued by the parent of another kid who legitimately couldn't be immunized.
Flu isn't dangerous? That's news to the CDC.
Give the fridge to somebody else, then kill yourself. Then it's not your fridge, and you cannot ever own another fridge because you're dead.
"Gotcha" questions are not effective at determining problem solving ability. Questions that have more than one means of approach are much more effective. If a question is superficially easy if you know the trick, but impossibly hard without it, then it doesn't offer any benefit to assessing how a person might resolve specification ambiguity, approach the problem's possible pitfalls, and ultimately resolve the issue. Examples of these types of questions include the "detect a loop in a linked list" (tortoise and hare algorithm), "swap two variables without using a third" (XOR or use pointer math), "three light bulbs in a room, three switches outside, you can only enter once" (two on, wait, one off, feel the off bulbs for the warm one).