
MP3 codecs are only implemented in hardware on really cheap MP3 players - the sort that don't have the CPU power to do it in software. Those devices can't do anything other than MP3.
On anything more expensive - iPods, Sansa Clips, etc. - it's all software, and the device can support lots of different formats.
There isn't really much of a battery power win from doing MP3 in hardware, and dedicated MP3 hardware is no good for the other formats that the average user will want to play. The average user may think he has an "MP3 collection", but he is probably not even aware that some of the files are in other formats, because every music player just plays them all.
But SP mode is compressed, isn't it?
"Standard ATRAC ("SP") is 292kbps, LP2 is ~132kbps, LP4 is ~66kbps"
Not that it matters all that much. I certainly couldn't tell the difference. It was one of the nice things about MiniDisc, in the days before MP3 players with decent storage capacity.
I've owned two Model Ms, supposedly the best, and they have put me off owning mechanical keyboards. They are tiring to use and they are noisy, and if your job requires you to type for most of the day, you don't want either of those things. I didn't see a good tradeoff in terms of improved typing speed or accuracy.
This sort of test may not detect all fakes. Really you need to write a test pattern and read it back, as the writes may appear to succeed.
Programs exist to do this for you, e.g. https://sites.google.com/a/int...
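For anyone curious what that actually involves, here's a minimal sketch in C of the write-then-verify idea. Everything in it is illustrative - the device path is a placeholder, and it will happily destroy whatever is on the card:

    /* Hedged sketch of write-then-verify fake-card detection.
     * WARNING: destroys all data on the target. "/dev/sdX" is a
     * placeholder; real tools are safer and handle the corner cases. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    #define BLOCK_SIZE 4096

    static void fill_pattern(uint8_t *buf, uint64_t block)
    {
        /* Simple LCG seeded by the block number, so every block is
         * unique and a remapped/wrapped block is detectable. */
        uint64_t x = block;
        for (size_t i = 0; i < BLOCK_SIZE; i++) {
            x = x * 6364136223846793005ULL + 1442695040888963407ULL;
            buf[i] = (uint8_t)(x >> 56);
        }
    }

    int main(void)
    {
        const char *dev = "/dev/sdX";    /* placeholder: card under test */
        uint8_t out[BLOCK_SIZE], in[BLOCK_SIZE];
        uint64_t block, blocks;
        FILE *f = fopen(dev, "r+b");
        if (!f) { perror(dev); return 1; }

        /* Write phase: keep going until the device reports it is full. */
        for (block = 0; ; block++) {
            fill_pattern(out, block);
            if (fwrite(out, 1, BLOCK_SIZE, f) != BLOCK_SIZE) break;
        }
        blocks = block;
        fflush(f);

        /* Verify phase: a fake card "succeeds" at writing but returns
         * garbage (or another block's data) when read back. */
        rewind(f);
        for (block = 0; block < blocks; block++) {
            fill_pattern(out, block);
            if (fread(in, 1, BLOCK_SIZE, f) != BLOCK_SIZE ||
                memcmp(in, out, BLOCK_SIZE) != 0) {
                printf("mismatch at block %llu: card is smaller than it claims\n",
                       (unsigned long long)block);
                fclose(f);
                return 1;
            }
        }
        printf("all %llu blocks verified\n", (unsigned long long)blocks);
        fclose(f);
        return 0;
    }

One caveat: reads served from the OS page cache will "verify" even on a fake card, so a real tool opens the device with O_DIRECT (or unmounts and remounts it) so the data actually comes back from the flash. That's one of several corner cases the programs linked above handle for you.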
When I had one of these fake cards, it seemed to me that the firmware had been designed to allow a filesystem to be created on the device, by remapping the blocks that the filesystem would use for its metadata.
Formatting utilities should really check for bad SD cards...
If you really were working on systems where a failure would have catastrophic consequences, I would hope you had a QA process a lot more sophisticated than running a test suite and this kind of coverage tool to check for problems!
Oh, certainly! The good news here is that the avionics industry knows this, and in any case, the FAA won't let them cut corners. I don't know exactly how the industry uses our tools, but it's typically in conjunction with lots of manual testing, with the coverage tool capturing data as human testers run through test scripts.
And you're right, non-safety critical projects can benefit from it. For any large project, it really isn't an expensive part of the development process, and it can be very revealing. The techniques we use have a low overhead in terms of memory and CPU time, so they're good for both embedded systems and high-performance desktop/server software. An "instrumented" build for coverage is not that different to a regular debug build: a bit slower, a bit larger, but with lots of helpful stuff included. But perhaps I am wandering into "infomercial" territory again...
Thing is, you need both your own test suite and a coverage test tool. The two work together. The coverage tool tells you if your tests are incomplete, helping you to fix them.
If I were actually testing Tetris I would definitely do it the way you suggest: a pre-arranged sequence of blocks and a pre-programmed series of moves. I'd run the game with that sequence, then look at the coverage data to see if I needed to add anything. Some of the process can be automated, but the test cases themselves have to be made by hand.
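To make that concrete, here's a hedged sketch of what such a harness might look like. Every game_* identifier is a hypothetical stand-in, not a real API; the point is only that the piece sequence and the moves are both fixed, so each run is exactly reproducible and the coverage data it produces is meaningful:

    /* Hedged sketch of a scripted-replay test; all game_* functions
     * are hypothetical stand-ins, not a real Tetris API. */
    #include <stdio.h>

    typedef enum { PIECE_I, PIECE_O, PIECE_T } Piece;   /* trimmed set */
    typedef enum { MOVE_LEFT, MOVE_RIGHT, MOVE_ROTATE, MOVE_DROP } Move;

    /* Trivial stubs so the sketch compiles standalone; a real harness
     * would link against the actual game code instead. */
    static int g_cleared;
    static void game_reset(void)         { g_cleared = 0; }
    static void game_next_piece(Piece p) { (void)p; }
    static void game_apply(Move m)       { (void)m; }
    static int  game_lines_cleared(void) { return g_cleared; }

    struct step {
        Piece piece;
        Move  moves[4];
        int   nmoves;
    };

    int main(void)
    {
        /* The pre-arranged script: each piece arrives with a
         * pre-programmed series of moves, chosen by hand to reach a
         * specific situation (here, hypothetically, one line clear). */
        static const struct step script[] = {
            { PIECE_I, { MOVE_LEFT,  MOVE_LEFT, MOVE_DROP }, 3 },
            { PIECE_I, { MOVE_RIGHT, MOVE_DROP            }, 2 },
            { PIECE_O, { MOVE_ROTATE, MOVE_DROP           }, 2 },
        };

        game_reset();
        for (size_t s = 0; s < sizeof script / sizeof script[0]; s++) {
            game_next_piece(script[s].piece);
            for (int m = 0; m < script[s].nmoves; m++)
                game_apply(script[s].moves[m]);
        }

        /* Check the expected outcome, then consult the coverage report
         * to see which branches this particular script never reached.
         * (With the stubs above it reports 0, of course.) */
        printf("lines cleared: %d (expected 1 against the real game)\n",
               game_lines_cleared());
        return 0;
    }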
You're right, this sort of testing should really be about covering the range of possible inputs. But that is typically impossible. There are too many possible scenarios. You need a practical substitute.
I agree that statement coverage is quite crude: it tells you very little about the data being processed. There is more detailed information being produced here - "MC/DC coverage" - which does tell you whether conditional statements have been thoroughly exercised, because each possible reason for taking the "true" or "false" branch of the conditional has been seen during the test. But even with that, it is no silver bullet, and you can certainly write programs that get 100% coverage on all the metrics and are still full of bugs.
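To illustrate what MC/DC actually demands, here's a small hand-made example (my own illustration, not any particular tool's output). For a decision with three conditions, four test vectors suffice, because each condition must be shown to flip the outcome on its own while the others stay fixed:

    /* Hand-built MC/DC illustration for the decision a && (b || c). */
    #include <stdio.h>
    #include <stdbool.h>

    static bool decision(bool a, bool b, bool c)
    {
        return a && (b || c);   /* the condition under test */
    }

    int main(void)
    {
        /* Four vectors achieve MC/DC for three conditions (n+1 in general):
         *   (T,T,F) -> T  vs  (F,T,F) -> F : a independently flips the result
         *   (T,T,F) -> T  vs  (T,F,F) -> F : b independently flips the result
         *   (T,F,T) -> T  vs  (T,F,F) -> F : c independently flips the result */
        struct { bool a, b, c, expect; } tests[] = {
            { true,  true,  false, true  },
            { false, true,  false, false },
            { true,  false, false, false },
            { true,  false, true,  true  },
        };
        for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++) {
            bool got = decision(tests[i].a, tests[i].b, tests[i].c);
            printf("a=%d b=%d c=%d -> %d (expected %d)\n",
                   tests[i].a, tests[i].b, tests[i].c, got, tests[i].expect);
        }
        return 0;
    }

A plain branch-coverage tool would be satisfied by just two of those vectors (one true, one false); MC/DC forces you to demonstrate that every operand in the condition actually matters.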
It is, however, better to have this information than not have it at all. And coverage tools are very practical in real-world situations, particularly those involving testing safety-critical code. They provide evidence that the tests have tested everything that they claim to have tested.
Submitter here. It's "marketing spam" in the sense that it's based on something I did at work. I don't see why this is a problem. Many articles linked from this site involve something that someone did at work.
I thought it was interesting that, though this is a really simple game, you can't test it effectively just by playing it. You have to deliberately seek out all of the special cases. That's a fact about virtually all software, but it's not an intuitive one, and that's what the article is about.
Do you know, you're the first person in this topic to actually answer the question? Most others missed the VPN part.
OpenVPN already knows how to discard duplicates and retransmit lost packets. It's a lovely way to build a semi-reliable network on top of an unreliable one, and very hackable.
The questioner only needs to modify OpenVPN (on his PC) to send its UDP packets via two different routes. He should configure his VPS to have two public IP addresses, with OpenVPN (server-side) bound to both of them, and then manually adjust the routing table on his PC to force the use of a specific route for each of those two IP addresses. The hard bit (and it's not really that hard) is making OpenVPN (on the PC) send each packet twice to two different IP addresses, which would require modifications to the source code and some familiarity with the sockets API.
I think it would work, not just for Battlefield but for anything. And it sounds like fun.
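For anyone who wants to try it, here's a minimal sketch of the core change at the sockets level. To be clear, this is not OpenVPN's actual code and the addresses are placeholders; it just shows the "send every datagram twice" idea:

    /* Hedged sketch of duplicating each UDP datagram to two server
     * addresses; NOT OpenVPN's real code. With routing table entries
     * forcing each destination out of a different interface, the two
     * copies take different physical paths, and the server's replay
     * protection discards whichever copy arrives second. */
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        const char *server_ips[2] = { "203.0.113.1", "203.0.113.2" };  /* placeholders */
        struct sockaddr_in dst[2];
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        for (int i = 0; i < 2; i++) {
            memset(&dst[i], 0, sizeof dst[i]);
            dst[i].sin_family = AF_INET;
            dst[i].sin_port = htons(1194);           /* OpenVPN's default port */
            inet_pton(AF_INET, server_ips[i], &dst[i].sin_addr);
        }

        const char payload[] = "tunnelled packet";   /* stand-in for a VPN datagram */

        /* The one change at the heart of the scheme: every outgoing
         * datagram goes to both addresses instead of one. */
        for (int i = 0; i < 2; i++) {
            if (sendto(sock, payload, sizeof payload, 0,
                       (struct sockaddr *)&dst[i], sizeof dst[i]) < 0)
                perror("sendto");
        }

        close(sock);
        return 0;
    }

The nice part of the design is that nothing new is needed on the receive side: the duplicate arrives with the same sequence number, so the existing replay protection throws it away for free.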
I don't believe this either. There's no corroborating evidence, not even a screenshot (though that could be trivially faked).
This is a conspiracy theory and it's as nonsensical as thinking that Bush's "people" accidentally leaked "the truth" while they were supposed to be covering up "the facts" about 9/11.
Take note of the names of the Slashdotters who automatically believe this sort of thing, and give their opinions an appropriate level of credit.
Hmm. Experience suggests the intelligent beings would stare at the 0.0001% and either deny the evidence for it, deny its relevance, or try to destroy it. Inconvenient facts are inconvenient.
You want a piece of toast with the face of Jesus? You already had a man with the face of Jesus, and look what happened to him.... What chance does some toast stand?
Where there's a will, there's a relative.